
Impact of AI on Society


Narrow or weak AI has already changed how humans interact with each other, share information, and access information. AI powers home automation, including security systems, and is now being used in health care and other industries, where it has proved beneficial to society. At the same time, AI raises many ethical issues.

Eliezer Yudkowsky suggests that AI researchers need to focus on producing humane AI, which he calls Friendly AI. He argues that if researchers create intelligence without morality, super-intelligent AI could threaten humanity, and that advances in computing power and nanotechnology make the creation of inhumane AI a real possibility.[1] The difficulty with labeling AI as humane or inhumane is that machines cannot themselves be considered moral or immoral; humaneness or inhumaneness would not be an inherent characteristic of the AI. AI could be humane or inhumane only in the sense that it produces a positive or negative impact on humankind.

Noel Sharkey argues there is no evidence that machines can attain human intelligence, and he is concerned that people want so much to believe in AI that they will start to think robots can replace humans in some jobs. This could create serious ethical problems if robots are used in elder care or in military roles.[2] Shoichi Hamada of the Japan Robot Association counts at least twenty companies working in the elderly-care robot field to create inanimate caregivers. An important question is whether it is good to let machines be caregivers; many say that people should be cared for by other people. Hamada reminds us, however, that there will be more people who need care and fewer people to provide it.[3]

Regarding the question of whether AI can be considered a moral agent, John P. Sullins III, a member of the philosophy department at Sonoma State University in California, argues that robots could, in some situations, be considered moral actors. He says that a robot would not have to reach the level of human consciousness to be considered a moral agent. As long as a robot is autonomous, behaves in a way that suggests intentionality, and fulfills a social role that includes responsibilities, it should be thought of as having some moral agency. He concludes that even robots existing today may be considered moral agents in a limited way and should be treated as such.[4] A "moral agent," however, typically has both responsibilities to others, for which it is accountable when it fails to fulfill them, and rights that others could be held responsible for failing to respect. A robot that hurts a human being cannot be put in prison; the programmer or creator of the AI will be held responsible.

Joanna J. Bryson, a professor in the department of computer science at the University of Bath in the United Kingdom, argues that humans have no more ethical responsibilities to robots than they do to any other tool. She suggests that humans tend to grant agency and consciousness to robots for psychological and social reasons, and that doing so wastes time and energy, which are precious commodities for human beings. She concludes that more effort should be made to inform people that robots are not people or even animals, and therefore have no ethical standing.[5]

AI military robots are already in use, and some argue that they could be programmed to act more ethically than people; in certain situations robots can perform better than humans, so the focus, they contend, should be on creating robots that will act ethically. Others think that AI may dangerously change warfare, since robots used in war will not have feelings. The U.S. has already deployed a few thousand robots, chiefly UAVs, in Iraq and Afghanistan.[6]

AI-enabled smart cars are already being tested and used and could have a significant impact on society and the automobile industry. Self-driving cars can increase passenger safety, but they raise other challenges regarding liability and decision-making in situations that cannot easily be programmed for or that may not have occurred before.

AI has tremendous power to enhance spying on people, and both authoritarian governments and democracies are adopting the technology as a tool of political and social control. In Chinese cities, facial recognition is used to catch criminals in surveillance footage and to publicly shame those who commit minor offenses.[7] AI-powered sex robots are becoming more advanced, and companies are pushing people to buy AI-powered devices. People use AI-based applications and devices regularly; personal assistants like Alexa and Google Home are now common in many houses, and people talk to and address them as they would other humans.



[1] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 126.

[2] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 135.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 139.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 141.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 155.

[6] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 188.

[7] Will Knight, “Artificial Intelligence Is Watching Us and Judging Us,” Wired, December 1, 2019, accessed January 10, 2020, https://www.wired.com/story/artificial-intelligence-watching-us-judging-us/.
