REWISOR + MUROEXE
[ MADRID, 2017 ]
[WRITTEN PIECE ON AI AND ETHICS FOR REWISOR + MUROEXE FOR THEIR COLLABORATIVE PROJECT "DO THE FUTURE" ]
Robots are here. Or that’s what we’ve been hearing for a few years now thanks to the recent developments in the field of Artificial Intelligence (AI) and Robotics. As the tech world pushes boundaries with innovation, we are slowly moving into a society where most spheres of our lives will be soon interconnected with technology. The introduction of AI into areas of our everyday life raises important ethical and moral questions not only regarding the relationship we hold with AI but how we choose to design it.
As early as 1942, science fiction writer Isaac Asimov defined the Three Laws of Robotics in his short story "Runaround". Although these laws belonged to the realm of fiction, in 2016 the British Standards Institution issued a set of guidelines, echoing Asimov's laws, directed at the future designers of AI. These guidelines include statements such as "humans are the responsible agents" and "it should be possible to find out who is responsible for any robot and its behaviour". According to Alan Winfield, professor of robotics at the University of the West of England, the standard "basically sets out how to do an ethical risk assessment of a robot". However, designing a robot or AI to be ethically or morally correct raises a critical question - what is ethically or morally correct?
Technology moves at a high-speed pace, therefore it is also important that, as a society, we move just as quickly and make sure we understand what our own moral codes are beforehand. Morality is often muddy territory on which we cannot fully agree. According to professor Ronald Arkin from the Georgia Institute of Technology, "human moral reasoning is not well understood. Nor, in general, is it fully agreed upon". Currently, the next step for AI is what is known as General AI: AI systems that can learn a common-sense style of reasoning similar to how the human mind functions. If this is the future of AI, can we trust AI to make morally conflicting decisions for us?
Although the “mental” design of AI is a murky space loaded with ethical dilemmas about who decides what is right and wrong, the physical design of Artificial Intelligence doesn’t fall short of dilemmas of its own. Developments within the sex industry have already produced hyper-realistic sex dolls (such as Real Doll, California) that offer users customisable sexual experiences. Not only does this raise questions about boundaries and consent, but it has also raised concern regarding the apparent sexism behind this multi-billion-dollar industry. Furthermore, there is also much debate about the design of domestic robots. Currently, the visions we see from pioneering countries in robotics such as Japan show human-like AIs that speak and behave (or that is the intention) as we do. This is because one of the main challenges with AI is generating trust between the human and the machine, but some are already forecasting how this might become an opportunity for manipulation in the domestic domain.
Advancements in technology often bring a wave of fear and demonisation. Although the idea of coexisting with robots might still seem far away to many of us and (often) quite frightening, AI is already present in most of our digital interactions. As much as it is important to push the boundaries of what AI can or can’t do, it’s equally important to accept that there will be many issues in this area that question who we are as humans and what we believe is “wrong or right”.
To begin with, one of the fundamental questions here is whether or not we should trust AI to make morally challenging decisions for us. The truth is, probably not. There are certain things that are part of human character and emotional value that cannot be coded into an algorithm. It is part of our humanity to have to make complicated choices that depend on a certain emotional state or a circumstance that we can only judge through our personal experience. Yes, this does lead us to make mistakes and exposes the flaws in our characters, but that is an essential part of our development and maturity. Without these flaws and moral conflicts, it would be hard to develop the emotional maturity and personal integrity on which we base many of our decisions.
Another important question is who will have the power to design AI. Every society has its own cultural and moral standards. However, not every society has the economic freedom and technical abilities to develop advanced forms of AI. This will probably lead to inherently biased AIs that can only represent small sectors of our society or specific cultures. It is crucial to take this into consideration, as we are already seeing forms of AI and robots that promote cultural and social stereotypes.
Accepting and confronting now that embracing an AI-dominated future will come with complications and many moral dilemmas will lead to a more ethical and responsible development of a society where humans and AI coexist. It’s not about demonising its arrival, but rather understanding that, just like with any technological development, there will be positive and negative outcomes that can be foreseen, and that it’s our responsibility to embrace these while moving forward.