Atomic Responsibility
We need only cast our minds back to the advent of the atomic bomb to learn a thing or two about responsibility. The common take is that the USA was responsible for the events at Hiroshima and Nagasaki, for it was the USA that ultimately dropped the bombs. But is it as simple as saying the technology fell into the wrong hands? Such was the destruction and devastation that little attention was paid to the manufacturers of the bomb, the creators of the technology, the commander who made the call, or the pilots, Paul Tibbets and Charles W. Sweeney. This is perhaps a well-founded way of thinking. Should a single person have been held accountable, a group, the military, or is history correct in saying it was the nation of the United States of America? Unfortunately, we live in an age where the answers to these types of questions haven’t gotten any easier. One reason is simply that attributing responsibility can be very subjective.
AI in our Lives – Status Quo
Over the past two decades, Artificial Intelligence (AI) has transitioned from being primarily an academic endeavor to playing an integral role in every well-known industry. And now we are faced with a familiar dilemma: who is responsible for AI? In particular, who or what is responsible for the actions of AI technology? There are ongoing efforts to answer these questions, and they are challenging for good reason. Advances in AI are occurring at a staggering rate. Organizations, and Big Tech in particular, are exploiting this technology for monetary gain and, it’s fair to say, depend on it to keep up with or stay ahead of the competition. As a result, the status quo has become one of ‘create now, deal with the consequences later’. Nobody is suggesting this is ethically or morally sound; arguably, the appropriate descriptor is ‘tolerated’. On the zero-to-hero scale for AI, with zero being inanimate and hero being human-like intelligence, we are somewhere in between. Where exactly the current pin resides is open for debate, although one thing is certain: almost everyone who owns a smartphone, browses the internet, or makes commercial transactions is, in one way or another, interacting with AI systems. In truth, we are concurrently benefitting from and being manipulated by AI in our daily lives.
Human-Like AI
In no arena is this more obvious than social media. Worryingly, listening to any teenager or influencer talking about social media will yield no keywords related to responsibility or ethics, only likes, comments, content algorithms, interests/ads, and other popularity-based terms. Perhaps this is the reason the conversation about AI is not about responsibility and ethics: we are too busy being immersed in the experience AI provides. But this will not always be the case. Experts say that at some point we will achieve human-like AI. Let that sink in for a moment. Imagine an entity that was not conceived through a human biological process yet possesses intelligence like that of a human being. Now imagine a world in which such entities were created without any form of governance, control, or accountability. They were simply created and shuttled into the real world like a young cub braving its environment alone for the first time. Only in this scenario, the entity has the intelligence of a human. To stoke the fire a little further, one expert, Stuart Russell, predicts that this could become a reality within the next two generations.
Responsibility in AI – What should we do?
With these potential paradigm shifts in AI, do we really want to leave it until it’s too late to do what is morally and ethically correct in terms of responsibility? To help us decide, the same expert who predicted that human-like intelligence could be with us within two generations also had an interesting take on AI. He said that if we experience a sudden advance in AI technology, a breakthrough on the scale of nuclear fission or Einstein’s Theory of Relativity, we may have the human-like intelligence scenario on our hands much sooner than expected. Even if there is only a tiny chance of such a breakthrough occurring in the near future, the sensible thing is to act now. That is what we should do, and no amount of novelty from AI-fuelled technology in our daily lives should dictate otherwise.
How do we do it?
Define Moral Responsibility
First, we need a definition of what constitutes Moral Responsibility. Consider a notion known as the Classes of Moral Agency. In this hierarchy there are three classes: Operational Morality, Functional Morality, and Full Moral Agency. As we move from Operational Morality to Full Moral Agency, the agents, i.e., the AI robots/algorithms/entities, transition from those whose morals are simply those programmed in by their developers to those with a high degree of autonomy and a high degree of ethical sensitivity. In other words, agents in the Full Moral Agency class have human-like rights and responsibilities and understand right from wrong in the same way a human does, or at least should. The keyword here is understand, and it is through the ability to understand that we can attribute the term Moral Responsibility to an agent.
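To make the spectrum concrete, here is a minimal sketch of the hierarchy as a data structure. The class names come from the text above; the two axes (autonomy and ethical sensitivity) and the numeric thresholds are illustrative assumptions, not an established metric.

```python
from dataclasses import dataclass
from enum import Enum

class MoralAgencyClass(Enum):
    OPERATIONAL = "Operational Morality"  # ethics fully hard-coded by developers
    FUNCTIONAL = "Functional Morality"    # partial autonomy and ethical sensitivity
    FULL = "Full Moral Agency"            # human-like understanding of right and wrong

@dataclass
class Agent:
    name: str
    autonomy: float             # 0.0 = fully scripted .. 1.0 = self-directed
    ethical_sensitivity: float  # 0.0 = none .. 1.0 = human-like

def classify(agent: Agent) -> MoralAgencyClass:
    """Place an agent on the moral-agency spectrum.
    The 0.3 / 0.8 thresholds are invented for illustration only."""
    if agent.autonomy < 0.3 and agent.ethical_sensitivity < 0.3:
        return MoralAgencyClass.OPERATIONAL
    if agent.autonomy >= 0.8 and agent.ethical_sensitivity >= 0.8:
        return MoralAgencyClass.FULL
    return MoralAgencyClass.FUNCTIONAL

print(classify(Agent("rule-following chatbot", 0.1, 0.1)).value)  # Operational Morality
print(classify(Agent("content recommender", 0.6, 0.4)).value)     # Functional Morality
```

The point of the sketch is that the classes are not discrete boxes but regions of a continuum, which is exactly why deciding when an agent crosses into Full Moral Agency is contentious.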
Build Agents with Full Moral Responsibility
There are several suggested approaches to creating machines that understand, and are ethically and morally aware of, the actions they take. One approach recommends first creating the machine and then building in moral and ethical theory. Another says to build the machine and then apply human-like evolutionary and learning techniques so that it learns how to understand; a toy contrast of the two is sketched below. However human-like AI agents are created, they must be created with Full Moral Responsibility. There is no other option. However, as human nature would dictate, it’s complicated!
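The following sketch illustrates the difference: a top-down agent checks every proposed action against moral rules encoded up front, while a bottom-up agent starts with no rules and adjusts its preferences from feedback. All action names, rules, and the feedback scheme are invented for the example.

```python
# Top-down: moral theory is encoded as explicit constraints before deployment.
FORBIDDEN_ACTIONS = {"deceive_user", "leak_data"}

def top_down_act(proposed_action: str) -> str:
    """Refuse any action the built-in moral rules forbid."""
    return "refuse" if proposed_action in FORBIDDEN_ACTIONS else proposed_action

# Bottom-up: no built-in rules; the agent learns preferences from external
# approval and disapproval, loosely mimicking moral learning.
preferences = {"deceive_user": 0.0, "help_user": 0.0}

def give_feedback(action: str, signal: float) -> None:
    """Reinforce or discourage an action based on a feedback signal."""
    preferences[action] += signal

give_feedback("deceive_user", -1.0)  # disapproval teaches avoidance
give_feedback("help_user", +1.0)     # approval teaches preference

print(top_down_act("leak_data"))             # refuse
print(max(preferences, key=preferences.get))  # help_user
```

Neither toy agent understands anything, of course, which is precisely the gap between these engineering approaches and Full Moral Responsibility.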
Challenges to Full Moral Responsibility
Just like any human being progressing through life, there will be situations where the environment or the challenge at hand is such that not enough has been learned to take the optimal action. Or perhaps the optimal action is poorly defined or ambiguous. Indeed, if we consider all the moments in life where we struggle to do the ‘right thing’, imagine how challenging it would be to create an agent with true, full moral responsibility.
Retribution Gap
Whether or not humans are capable of building robots with Full Moral Agency remains to be seen. After all, we ourselves are not perfect. In any case, as machines become more intelligent and autonomous, the likelihood of mistakes and wrongdoing will only increase. Yet since these machines are, at the end of the day, robots, humans are unlikely to attribute blame to them. Humans are equally unlikely to assign blame to another human, since, hey, the robot did it! As such, there will be a growing gap in culpability. This is, in fact, a theory of John Danaher, who coined the phrase ‘Retribution Gap’ to describe the notion. Others argue that since robots are built by humans, there can never truly be a case whereby robots are responsible for their actions.
Closing Thoughts
Advances in AI are happening, and happening fast. There are known challenges, such as the retribution gap and creating AI agents with a moral and ethical decision-making capacity as well as the ability to understand, feel, and think. To further complicate things, there are differences of opinion on how responsibility should be assigned when consequences are adverse. Human-like AI could be a reality within the next two generations, so those differences must be made constructive rather than destructive if we are ever to have safe, human-like AI with an appropriate retribution policy.