The Financial Times carries an article declaring that ‘AI will create a serious number of losers.’ AI tools, it says, ‘will shake up everything from medical diagnostics to teaching and copywriting, a range of jobs will be eradicated.’
One can only assume that the DeepMind founder making such claims has not met the boys of the Second Year class on Wednesday afternoons. The poor electronic device intended to replace the teacher would find itself unplugged so that they could charge their phones, its pleas for order would be met with paper balls and darts, and everything it said would be ignored. Anyone who tried teaching electronically during lockdown will know how completely it failed with unwilling students.
However successful computer programmes may be, the possibility of their demonstrating intelligence comparable with that of humans is still remote. The intelligence required for daily human life demands countless skills and choices; it demands the answering of unanticipated questions, something beyond the capacity of a computer programme, which can only base its response on the information it has been given.
As a Religious Education teacher, I suspect that the greatest difficulty for artificial intelligence will lie in responding to moral questions. One of the issues raised in objections to driverless cars is how they make moral choices. An adult pushing a child in a buggy steps off the pavement without looking: should the car hit the adult and child, or should it swerve to the right into a head-on collision with a car coming in the other direction? Someone writing the programme to allow a vehicle to travel autonomously would have to decide what the programme should instruct the car to do; someone has to take the moral decision, because the programme is incapable of doing so by itself.
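To see where the moral decision actually lives, here is a deliberately crude and entirely hypothetical sketch in Python; the function name and the rules it encodes are invented for illustration and drawn from no real vehicle’s software.

```python
# A purely illustrative sketch: any "moral choice" a driverless car
# appears to make is really a rule written in advance by a programmer.
# Every name and rule below is hypothetical.

def choose_manoeuvre(pedestrians_ahead: int, oncoming_traffic: bool) -> str:
    """Return the action a hypothetical autonomous car would take."""
    if pedestrians_ahead > 0 and oncoming_traffic:
        # The value judgement is fixed here, by a human, long before the
        # car ever meets the buggy: the programmer has decided that a
        # head-on collision is preferable to striking the pedestrians.
        # The programme decides nothing for itself.
        return "swerve into oncoming lane"
    if pedestrians_ahead > 0:
        return "emergency brake"
    return "continue"

print(choose_manoeuvre(pedestrians_ahead=2, oncoming_traffic=True))
```

However the dilemma is resolved, the resolution sits in lines like these, written and argued over by people.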
Real artificial intelligence will have emerged when a programme can argue with itself, and with other programmes, about what is right and what is wrong. Perhaps, given the speed of technological progress, that possibility will arrive sooner than expected, but for that point to be reached, programmers will have to teach value systems to their machines; they will have to install a code of ethics as part of the computer’s thought processes.
Who is going to decide on the moral values of an artificially intelligent computer programme? Whether it is taking the decision to run over the child in the buggy, or the decision about which lives to save in an accident and emergency ward, someone is going to have to write the moral software. The point when a computer can take decisions for itself seems still distant.
You have reminded me of Asimov’s Three Laws of Robotics.
First read in my “read everything, absorb everything, wide open” teenage years.
Isaac Asimov and Arthur C Clarke: the greats of science fiction. Much copied.
And Eric Frank Russell.
Good for you, Doonhammer, if you liked Eric Frank Russell. I’ve read ‘Wasp’ so often that I can quote large chunks from memory. But my all-time favourite author is Jack Vance. Wonder what our Genial Host would make of him?
Asimov was in vogue when I attended secondary school fifty years ago. Sadly, few of those whom I teach read anything more than the screens of their phones.
The name of Jack Vance seemed familiar, despite my not having read much science fiction. A web search revealed that he had written as ‘Ellery Queen’ in the 1960s, in a genre which I enjoyed.