Eliezer Yudkowsky suggests that AI researchers need to focus
on producing humane AI, which he calls Friendly AI. He says that if AI
researchers create intelligence without morality, super-intelligent AI could
threaten humanity. Advances in computing power and nanotechnology mean that
the creation of inhumane AI is a real possibility. The difficulty with calling AI
humane or inhumane is that machines themselves cannot be considered moral or
immoral; humaneness or inhumaneness would not be an inherent characteristic of
the AI itself. AI could be humane or inhumane only in the sense that its impact
on humankind is positive or negative.
Noel Sharkey argues there is no evidence that machines can
attain human intelligence, and he is concerned that people want so much to
believe in AI that they will start to think that robots can replace humans in
some jobs. This can create serious ethical problems if robots are used in elder
care or military capacities. Shoichi Hamada of the
Japan Robot Association counts at least twenty companies working to create
robotic caregivers for the elderly. An important question to ask is whether it
is good to let machines serve as caregivers; many say that people should be
looking after other people. Hamada reminds us, however, that there will be more
people who need care and fewer people to provide it.
Regarding the question of AI being considered a moral agent,
John P. Sullins III, a member of the philosophy department of Sonoma State
University in California, argues that robots could, in some situations, be
considered moral actors. He says that a robot would not have to reach the level
of human consciousness to be considered a moral agent. As long as a robot is
autonomous, behaves in a way that suggests intentionality, and fulfills a social
role that includes responsibilities, it should be thought of as having some
moral agency. He concludes that even robots existing today may be considered
moral agents in a limited way, and should be treated as such. A moral agent
typically has responsibilities to others, can be held accountable for failing
to fulfill them, and holds rights that others can be held responsible for
violating. A robot that hurts a human being, however, cannot be put in prison;
responsibility falls instead on its programmer or creator.
Joanna J. Bryson, a professor in the department of computer
science at the University of Bath in the United Kingdom, argues that humans
have no more ethical responsibilities to robots than they do to any other tool.
She suggests that humans have a tendency to grant agency and consciousness to
robots for psychological and social reasons, and that doing so wastes time and
energy, which are precious commodities for human beings. She concludes that
more effort should be made to inform people that robots are not people or even
animals, and therefore have no ethical standing.
AI military robots are already in use, and some argue that they could be
programmed to act more ethically than people. In certain situations robots can
outperform humans, so the focus, on this view, should be on creating robots
that will act ethically. Others think that AI may dangerously change warfare,
since robots used in war will not have feelings. The U.S. has already deployed
several thousand robots, chiefly UAVs, in Iraq and Afghanistan.
AI-enabled smart cars are already being tested and used and could have a
significant impact on society and the automobile industry. Self-driving cars
can increase passenger safety, but they raise new challenges regarding
liability and decision-making in situations that cannot easily be programmed
for or that may not have arisen before.
AI has tremendous
power to enhance spying on people, and both authoritarian governments and democracies
are adopting technology as a tool of political and social control. In Chinese
cities, facial recognition is used to catch criminals in surveillance footage,
and to publicly shame those who commit minor offenses. AI-powered sex robots
are becoming more advanced, and companies are pushing people to buy AI-powered
devices. People use AI-based applications and devices regularly: personal
assistants such as Alexa and Google Home are now common in many homes, and
people talk to and address them as they would other humans.
Artificial Intelligence, Opposing Viewpoints Series, 126.
Artificial Intelligence, Opposing Viewpoints Series, 135.
Artificial Intelligence, Opposing Viewpoints Series, 139.
Artificial Intelligence, Opposing Viewpoints Series, 141.
Artificial Intelligence, Opposing Viewpoints Series, 155.
Artificial Intelligence, Opposing Viewpoints Series, 188.
Will Knight, “Artificial Intelligence Is Watching Us and Judging Us,” Wired,
December 1, 2019, accessed January 10, 2020, https://www.wired.com/story/artificial-intelligence-watching-us-judging-us/.