Wednesday, September 8, 2021

Bias in AI

Humans have many biases, and any tool created by humans can have bias built in. AI is inherently subject to bias because algorithms are trained on data generated by humans. These biases must be accounted for, minimized, or removed through continual human oversight and discretion. The challenge is to design and use AI in such a way that it treats all human beings as having equal worth and dignity.

AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making. AI systems are only as good as the data they are provided. Bad data can contain implicit racial, gender, or ideological biases, and many AI systems will continue to be trained on such data, which makes this an ongoing problem. Bias in AI does not come from the algorithm being used but from people.

Back in 2015, Jacky Alciné, a software engineer, pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google said it was “appalled” at the mistake, apologized to Alciné, and promised to fix the problem.[1] AI should not be designed or used in ways that violate the fundamental principle of human dignity for all people.
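The point that bias lives in the data rather than in the arithmetic of the algorithm can be made concrete with a small sketch. The Python snippet below uses an invented audit log and invented group names (“A” and “B”), not any real dataset; it simply shows how an audit might surface a classifier whose error rate differs sharply across demographic groups:

```python
# Hypothetical illustration: measuring how a classifier's error rate can
# differ across demographic groups when the underlying data are skewed.
# The group names and records below are invented for the example.

def error_rate_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# A toy audit log: the model errs far more often for group "B".
log = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = error_rate_by_group(log)
print(rates)  # group B's error rate is 0.5, group A's is 0.0
```

Real fairness audits compare metrics such as false-positive and false-negative rates per group, but the lesson is the same: the disparity comes from the data and the predictions, not from the counting.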



[1] James Vincent, “Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech,” The Verge, January 12, 2018, accessed October 28, 2019, https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.

Tuesday, September 7, 2021

Created Humankind as Creators

Technological advancements are happening at a fast pace, and humans are achieving what could not be imagined a few decades ago. In Genesis chapter 1, God spoke, and it came into being. Now people can talk to their smart devices, and the devices will perform the actions requested or commanded. Children are growing up with a different worldview as humans have become Techno sapiens. Today most people rely on Google or similar search engines to find answers and expect these search engines to know everything. Psalm 115 speaks of idols made by human hands which have mouths, but cannot speak; eyes, but cannot see; ears, but cannot hear; noses, but cannot smell; hands, but cannot feel; feet, but cannot walk; nor can they utter a sound with their throats. Now humans are creating robots that can speak, see, hear, smell, walk, touch, and talk.

God created humankind in His image and gave them different abilities, including intelligence, knowledge, and wisdom. Humans can develop the skills they have, and this has resulted in many scientific and technological developments. The Bible mentions specific instances when people were given a unique ability or skill by God to accomplish His purpose.

The Lord said to Moses, “See, I have called by name Bezalel the son of Uri, son of Hur, of the tribe of Judah, and I have filled him with the Spirit of God, with ability and intelligence, with knowledge and all craftsmanship, to devise artistic designs, to work in gold, silver, and bronze, in cutting stones for setting, and in carving wood, to work in every craft. And behold, I have appointed with him Oholiab, the son of Ahisamach, of the tribe of Dan. And I have given to all able men ability, that they may make all that I have commanded you:” (Exod. 31:1-7)

Humans can create many things, but those abilities cannot be compared with the abilities of the creator God. God spoke, and it came into existence (Gen. 1:1-24). Humans create using materials that are already available. Humans are not equal to God but work with and for God in the world. Being the image-bearers of God, humans can co-create, but at a level lower than that of God.

AI can be used to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, often mimicking or even exceeding human ability. Yet while AI excels at data-based computation, the technology is incapable of possessing the capacity for moral agency or responsibility.

“You shall have no other gods before me. “You shall not make for yourself a carved image, or any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth. You shall not bow down to them or serve them, for I the Lord your God am a jealous God, visiting the iniquity of the fathers on the children to the third and the fourth generation of those who hate me, (Exod. 20:3-5)

AI machines should not be seen as gods when they become powerful from a task execution perspective. AI is not worthy of humanity’s hope, worship, or love. Humans should not be equated with the Creator God when they create something that may appear to exceed human intelligence. Humans should not cede moral accountability or responsibility to any form of AI that will ever be created. The Bible is clear that only humans will be judged by God based on their actions (Heb. 9:27). No tool ever developed will be subject to judgment. While technology can be created with good use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision-making.

He will render to each one according to his works: to those who by patience in well-doing seek for glory and honor and immortality, he will give eternal life; but for those who are self-seeking and do not obey the truth, but obey unrighteousness, there will be wrath and fury. (Rom. 2:6-8)

The development of AI is a demonstration of the unique creative abilities of human beings, among many other things they have developed. When AI is employed in accordance with God’s moral will, it is an example of human obedience to the divine command to steward creation and to honor Him. Any technological innovation for the glory of God, the sake of human flourishing, and the love of neighbor is good and should be pursued in ways that lead to greater flourishing and the alleviation of human suffering. Human beings are God's created co-creators whose purpose is to be the agency, acting in freedom, to birth the future that is most wholesome for the nature that has birthed us—the nature that is not only our genetic heritage but also the entire human community and the evolutionary and ecological reality in which and to which humans belong. Exercising this agency is said to be God's will for humans.[1]



[1] Victoria Lorrimar, “The Scientific Character of Philip Hefner’s ‘Created Co-Creator’,” Zygon 52, no. 3 (September 2017): 726-746, accessed October 25, 2019, EBSCOhost Academic Search Premier.

Monday, September 6, 2021

Impact of AI on Society


Narrow or weak AI has already changed how humans interact with each other, how they share information, and how they access information. Home automation, including security, has been achieved using AI, and AI is now used in health care and other industries, where it has proved very beneficial to society. At the same time, AI raises many ethical issues.

Eliezer Yudkowsky suggests that AI researchers need to focus on producing humane AI, which he calls Friendly AI. He says that if AI researchers create intelligence without morality, super-intelligent AI could threaten humanity. The advances in computing power and nanotechnology mean that the creation of inhumane AI is a real possibility.[1] The issue with viewing AI as humane or inhumane is that machines cannot be considered moral or immoral. Humane or inhumane would not be an inherent characteristic of the AI itself. AI could be humane or inhumane only in the sense that it produces a positive or negative impact on humankind.

Noel Sharkey argues there is no evidence that machines can attain human intelligence, and he is concerned that people want so much to believe in AI that they will start to think that robots can replace humans in some jobs. This can create serious ethical problems if robots are used in elder care or military capacities.[2] Shoichi Hamada of the Japan Robot Association counts at least twenty companies working in the elderly-care robot field to create inanimate caregivers. An important question is whether it is good to let machines be caregivers; many say that people should be looking after other people. He reminds us, however, that there will be more people who need care and fewer people to provide it.[3]

Regarding the question of AI being considered a moral agent, John P. Sullins III, a member of the philosophy department of Sonoma State University in California, argues that robots could, in some situations, be considered moral actors. He says that a robot would not have to reach the level of human consciousness to be considered a moral agent. As long as a robot is autonomous, behaves in a way that suggests intentionality, and fulfills a social role that includes responsibilities, it should be thought of as having some moral agency. He concludes that even robots existing today may be considered moral agents in a limited way, and should be treated as such.[4] A “moral agent,” however, typically has responsibilities to others, is accountable for failing to fulfill them, and holds rights that others can be held responsible for failing to respect. A robot that hurts a human being cannot be put in prison; the programmer or creator of the AI will be held responsible.

Joanna J. Bryson, a professor in the department of computer science at the University of Bath in the United Kingdom, argues that humans have no more ethical responsibilities to robots than they do to any other tool. She suggests that humans have a tendency to grant agency and consciousness to robots for psychological and social reasons, and it is a waste of time and energy, which are precious commodities for human beings. She concludes that more effort should be made to inform people that robots are not people or even animals, and therefore have no ethical standing.[5]

AI military robots are already in use, and some argue that they could be programmed to act more ethically than people. In certain situations, robots can perform better than humans, and hence, the focus should be on creating robots that will act ethically. Others think that AI may dangerously change warfare, as robots used in warfare will not have feelings. The U.S. has already deployed a few thousand robots, chiefly UAVs, in Iraq and Afghanistan.[6]

AI-enabled smart cars are already being tested and used and could have a significant impact on society and the automobile industry. These self-driving cars can increase the safety of passengers but raise other challenges regarding liability and decision-making in situations that cannot be programmed easily or that may not have occurred in the past.

AI has tremendous power to enhance spying on people, and both authoritarian governments and democracies are adopting the technology as a tool of political and social control. In Chinese cities, facial recognition is used to catch criminals in surveillance footage and to publicly shame those who commit minor offenses.[7] AI-powered sex robots are becoming more advanced, and companies are pushing people to buy AI-powered devices. People use AI-based applications and devices regularly. Personal assistants like Alexa and Google Home are now common in many houses, and people talk to and address them as they would other humans.



[1] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 126.

[2] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 135.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 139.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 141.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 155.

[6] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 188.

[7] Will Knight, “Artificial Intelligence Is Watching Us and Judging Us,” Wired, December 1, 2019, accessed January 10, 2020, https://www.wired.com/story/artificial-intelligence-watching-us-judging-us/.

Saturday, September 4, 2021

AI and Christianity

From the Christian perspective, there are different arguments about the possibility of AI and its implications when compared to the creation of humans by God. In April 2019, sixty evangelical leaders released a statement addressing AI. The Ethics and Religious Liberty Commission of the Southern Baptist Convention spent nine months working on “Artificial Intelligence: An Evangelical Statement of Principles,” a document designed to equip the church with an ethical framework for thinking about this emergent technology.[1] The goal of this document was to help the church to think about AI from a biblical viewpoint. Leaders of many Christian institutions signed this document.

Russell C. Bjork, a professor of computer science at Gordon College in Wenham, Massachusetts, argues theologically that the soul may emerge from bodily processes.[2] He also argues that in Christian teaching, human specialness need not be based on what humans are, but rather on what God intends for them. As a result, an AI having a soul would not diminish the theological basis of human worth. Christianity's insights into intelligence may help to suggest how to achieve AI and what its limits might be. He further argues that “as is true throughout the sciences, work in AI can be wrongly motivated, but it can also represent a very legitimate part of humanity's fulfillment of the cultural mandate (Gen. 1:28) through enhanced understanding of the greatest marvel of God's creation: human beings. There is no inherent theological conflict between a biblical view of personhood and work in AI, nor would successes in this field undermine human value or the doctrine of the image of God.”[3]

Harry Plantinga, a professor in the computer science department at Calvin College in Michigan, argues that faith affects how Christian computer scientists approach their work.[4] Faith can lead Christian computer scientists to the recognition that the soul, rather than material computational abilities, separates human beings from machines. Their faith also affects the ethical choices Christian scientists make. Computer science is a discipline with two aspects. On the one hand, it is an engineering discipline: it involves the planning, design, construction, and maintenance of computer systems. Its subject matter is a corpus of techniques for analyzing problems, constructing solutions that will not collapse, and guaranteeing and measuring the robustness of programs.

On the other hand, it is also a science in the sense that mathematics is a science. It is the study of computation and computability, the study of the algorithm. Christian computer scientists and engineers should approach AI in an attitude of doxology and service. They must be careful to honor God in what they do and find that in loving God, they love others, and in serving others, they serve God. All problem-solving, including AI, must be addressed through the motivation of service, and the systems should be reliable, easy to use, and helpful to honor God. Social and ethical implications of the work should be considered, and beneficial aspects of computing must be pursued.[5]

AI cannot attain the image of humanity seen in the Bible. There may be robots similar to humans in looks or speech, but to treat AI as human is to undermine what it means to be human.


[1] “Artificial Intelligence: An Evangelical Statement of Principles,” ERLC.

[2] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 30.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 41.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 42.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 48.

Friday, September 3, 2021

Turing Test

A Turing Test is a method of inquiry in AI for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, an English computer scientist, cryptanalyst, mathematician, and theoretical biologist who proposed it. Turing proposed that if a computer can mimic human responses under certain conditions, then it can be said to possess AI. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans. During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents.[1] The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is asked to decide which respondent was human and which was a computer. The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have AI because the questioner regards it as “just as human” as the human respondent.[2]

Figure: Turing Test
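The pass/fail criterion described above can be sketched in a few lines of Python. This is a sketch of the decision rule only, not of the interrogation itself, and the verdicts below are invented for illustration:

```python
# Sketch of the Turing Test decision rule: the computer "passes" when the
# questioner correctly identifies it in at most half of the repeated runs,
# i.e., does no better than chance. The verdict lists are invented examples.

def passes_turing_test(verdicts):
    """verdicts: list of booleans, True when the questioner correctly
    identified which respondent was the computer in that run."""
    correct = sum(verdicts)
    return correct <= len(verdicts) / 2

# Ten simulated runs: the questioner was right only 4 times out of 10.
print(passes_turing_test([True] * 4 + [False] * 6))  # True: passes
# Ten runs where the questioner was right 8 times out of 10.
print(passes_turing_test([True] * 8 + [False] * 2))  # False: fails
```

The rule captures Turing's idea that the machine succeeds when the questioner cannot reliably tell it apart from the human respondent.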

Virginia Savova and Leonid argue that the Turing Test is a valid test of AI. They contend that a machine could not fake its way through the Turing Test in a manner that violates our intuitions about intelligence, and that no look-up table could be composed that would allow a machine to pass the Turing Test.[3] Mark Halpern, a computer software expert who has worked at IBM, argues that the Turing Test is fundamentally flawed. During a Turing Test conducted in 1991, judging was flagrantly inadequate: some computers providing responses were judged human, while some humans were judged to be computers. He concluded that even if a computer were to pass the Turing Test, it would not show that it had achieved AI.[4]

Yaakov Menken, an Orthodox rabbi, argues that despite significant advances in computer science, no computer has been created that comes even close to legitimately passing the Turing Test. Based on his observations and Jewish religious teaching, he concludes that human beings will never create a computer that can communicate with human intelligence.[5]



[1] “Turing Test,” TechTarget, accessed December 29, 2019, https://searchenterpriseai.techtarget.com/definition/Turing-test.

[2] Margaret Rouse, “Turing Test,” Search Enterprise AI, June, 2019, accessed October 29, 2019, https://searchenterpriseai.techtarget.com/definition/Turing-test.

[3] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 68.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 78.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 99.

Saturday, August 28, 2021

Potential of United Human Intelligence


With the coming of the internet, people have in some respects come closer to each other than ever before. At the same time, it has alienated people from one another, as virtual relationships and conversations increasingly replace those with real people. Many projects are now undertaken jointly by people from different parts of the world, at a scale that was not possible earlier. Ideas and thoughts can be shared with the world within seconds. The first chapter of the Bible records the account of creation, which includes humankind. God created humankind in his image. Genesis 1:27 says, “So God created humans in his own image, in the image of God he created him; male and female he created them.”

Humans were created with many abilities, including physical, mental, and spiritual ones. The Bible reveals that humans were highly intelligent from the beginning. Adam was able to give names to all the livestock, the birds in the sky, and all the wild animals (Genesis 2:19-20). Later, the invention of musical instruments and metallurgy is seen (Genesis 4:19-22). The ark Noah built according to the specifications provided by God required some sophisticated engineering. The Bible is very clear that God gives knowledge, wisdom, and understanding to people. Proverbs 2:6 states, “For the Lord gives wisdom; from his mouth come knowledge and understanding.”

When the construction of the Ark of the Covenant was taking place, God specifically chose Bezalel son of Uri, the son of Hur, of the tribe of Judah, and filled him with the Spirit of God, with wisdom, with understanding, with knowledge, and with all kinds of skills to make artistic designs for work in gold, silver, and bronze, to cut and set stones, to work in wood, and to engage in all kinds of crafts (Exod. 31:2-5). Many instances are seen when God specifically gave abilities to people to accomplish His plan and purpose. The human intellect can be improved by gaining experience and education. History shows that humans have strived for improvement, innovation, and efficiency to make life better. There have been many technological advancements that did not exist earlier, and the human intellect has played a significant role in these developments. Humanity has experienced exponential growth in technology in the last few decades, and it continues to grow as humans achieve advancements in science and technology that were not imagined in the past.

Genesis chapter 11 contains the account of the human effort to build the tower of Babel. The goal was to build a city, with a tower that reaches to the heavens, so that the builders might make a name for themselves and not be scattered over the face of the whole earth (Gen. 11:4). The Lord came down to see the city and the tower under construction.

And the Lord came down to see the city and the tower, which the children of humans had built. And the Lord said, “Behold, they are one people, and they have all one language, and this is only the beginning of what they will do. And nothing that they propose to do will now be impossible for them. Come, let us go down and there confuse their language, so that they may not understand one another's speech.” (Gen. 11:5-7)

God wanted to stop the work and hence confused their language so they could not understand each other, and they were scattered over all the earth. There is an important point to be noted in this incident. God says that if people join together as one and speak the same language, nothing they plan to do will be impossible for them. This is an admission that if people unite and plan to do something, whatever they propose will be possible for them. This ability may refer to the human potential to take actions that may not be the will of God. Humankind’s united intelligence has enormous potential, and some technological breakthroughs are the result of many people contributing to a goal or project. Breakthroughs due to united human effort should not be seen as omnipotence, since the Bible teaches that human intellect is limited: “No human mind has conceived the things God has prepared for those who love him” (1 Cor. 2:9). The greatest intellects in the world will not be able to grasp the magnitude of God’s work. At the same time, this can be seen as an indication of the enormous potential of the united intelligence of humans to create things that are not possible individually.

Globalization is a phenomenon referred to mostly in economics. It is the process by which economies, cultures, and societies have come together to bring the world to oneness through a global network of trade and communication, and it has helped the advancement of society as a whole. Globalization is not a new idea; when used in its economic connotation, it refers to the removal of trade barriers among nations to improve and increase the flow of goods across the world. Today it applies to almost all human endeavors, as almost all aspects of human life are globalized.

Many globalization efforts have sought to turn the peoples of the world into one corporate entity, incorporating the whole of humanity into a single world society. The invention of the internet has provided humans with another opportunity to be united, and with the latest developments, language and geography can no longer prevent people from working together towards a common goal. The internet has also been used as a vehicle to spread negative messages, like racist or nationalist propaganda. There are many open-source and crowdsourced projects where people from around the globe come together to work on initiatives that cannot be accomplished by a limited number of people. In a way, this can be seen as a situation similar to the time when the tower of Babel was under construction.

The advancements in AI have been the result of the combined effort of people from many countries who speak different languages. Many tools are available that can translate speech in real time into many languages. This level of unity among humans was never seen in the past to this extent. At Babel, the people were motivated by a spirit of pride and a compelling desire to make a name for themselves. The unity of humankind may portend a false sense of power that could lead to a greater rebellion against God. If it does lead to rebellion, then God will impose a limit, as happened at Babel.

The united intellect of humans can accomplish a lot, but it should be used in a way that does not violate the commands of God. If it is used rebelliously, or in a way that violates the principles given by the creator God, then humans will face the consequences, and divine intervention should be expected.





Wednesday, August 25, 2021

More Efficient or Human Society?

Do we need a more efficient society or a more human one? Everyone is racing to make things efficient, and that is largely a good thing. But are we losing anything in this race for more efficiency?

The united intellect of humans can accomplish many things, and it can be used to have machines perform various tasks that may not be suitable for humans to do. The impact of AI replacing human workers is already being felt in many industries, and the loss of jobs is a real concern. Work is part of God’s plan for human beings, who participate in the cultivation and stewardship of creation. The divine pattern is one of labor and rest in a healthy proportion to each other. AI can be used in ways that aid human work or allow humans to make fuller use of their gifts. There is a possibility that humanity will use AI and other technological innovations as a reason to move toward a life of pure leisure.

AI replacing human work will mean fewer people working together. As a society, this could mean moving towards a more efficient society rather than a more human one. If people start interacting more with robots, and even marrying them, it will affect the fabric of human society.


Saturday, August 21, 2021

Published Thesis Link

This blog is based on a published thesis. I want to share the link for anyone interested in reading the full report. I will continue to post about relevant issues related to theology and AI.


The link to the thesis report: https://cdm16120.contentdm.oclc.org/digital/collection/p16120coll4/id/1483/rec/5

Wednesday, May 19, 2021

AI Possibilities

One of the biggest challenges with AI is that strong AI does not exist today. Much depends on how different people define and understand intelligence, and philosophers and scientists disagree about whether the development of strong AI is possible. Doug Merritt, the CEO of Splunk, recently stated that “AI does not exist today.”[1] AI encompasses many types of technologies, like machine learning (ML), deep learning, and natural language processing (NLP). All of these are narrow forms of AI and do not work with each other.

The original vision of AI, which goes back to the 1950s, is about systems that can truly learn about anything across any domain. Merritt said that it could take 50 to 100 years to get to AI, and there are many issues and challenges to work out, such as computational power and energy. The human brain runs on only about 50 watts, and it is also a very complex distributed system that has a high filter for intuition.

The creation of machines that can think like humans has proved to be more difficult than initially anticipated. Vernor Vinge, a pioneer in AI, argues that sometime in the future AI will surpass human intelligence, allowing for unimaginable advances.[2] He acknowledges that there are dangers in this scenario, because robots may be immoral, but concludes that overall, advances in technology are much more likely to benefit humans than to destroy them. Artificial brains are not imminent, since current brain simulations do not come close to imitating actual brain functions. According to John Horgan, scientists have little sense of how brains work, and claims that computers will soon mimic human brain function are wishful thinking.[3] AI has several definitions, and the possibilities of AI depend on how intelligence is defined. Stuart J. Russell and Peter Norvig argue that computers can be considered to have achieved AI when they act like humans, when they think like humans, when they think rationally, or when they act rationally.[4] They note that “Most AI researchers take the weak AI hypothesis for granted, and do not care about the strong AI hypothesis—as long as their program works, they do not care whether you call it a simulation of intelligence or real intelligence.”[5]

[1] Tom Taulli, “Splunk CEO: Artificial Intelligence Does Not Exist Today,” Forbes, October 25, 2019, accessed October 29, 2019, https://www.forbes.com/sites/tomtaulli/2019/10/25/splunk-ceo--artificial-intelligence-does-not-exist-today/#1790af3de6a5.

[2] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 20.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 25.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 49.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 18.

Wednesday, May 12, 2021

Principles or Laws of Robotics

AI is used extensively in robotics, and hence it is essential to review the principles or laws proposed for robotics. Different principles have been proposed for robotics with AI, but they have not been officially adopted or implemented by researchers and companies working on AI.

US AI Strategic Plan

On May 3, 2016, the US Administration announced the formation of a new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and AI to help coordinate federal activity in AI. An AI Research and Development Strategic Plan was released, which identifies the following priorities for federally funded AI research:[1]

  1. Strategy 1: Make long-term investments in AI research. Prioritize investments in the next generation of AI that will drive discovery and insight and enable the United States to remain a world leader in AI.
  2. Strategy 2: Develop effective methods for human-AI collaboration. Rather than replace humans, most AI systems will collaborate with humans to achieve optimal performance. Research is needed to create effective interactions between humans and AI systems.
  3. Strategy 3: Understand and address the ethical, legal, and societal implications of AI. We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
  4. Strategy 4: Ensure the safety and security of AI systems. Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy.
  5. Strategy 5: Develop shared public datasets and environments for AI training and testing. The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high-quality datasets and environments and enable responsible access to high-quality datasets as well as testing and training resources.
  6. Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Essential to advancements in AI are standards, benchmarks, testbeds, and community engagement that guide and evaluate progress in AI. Additional research is needed to develop a broad spectrum of evaluative techniques.
  7. Strategy 7: Better understand the national AI R&D workforce needs. Advances in AI will require a strong community of AI researchers. An improved understanding of current and future R&D workforce demands in AI is needed to help ensure that sufficient AI experts are available to address the strategic R&D areas outlined in this plan.

On February 11, 2019, United States President Donald Trump signed Executive Order 13859, announcing the American AI Initiative, the United States’ national strategy on AI.[2] This shows that governments are taking the potential of AI seriously and recognizing the need for policies to govern AI and for initiatives to advance its use.

European Commission Ethics Guidelines for Trustworthy AI

The European Commission’s high-level expert group presented ethics guidelines for trustworthy AI. According to the guidelines, trustworthy AI should be (a) lawful, respecting all applicable laws and regulations; (b) ethical, respecting ethical principles and values; and (c) robust, both from a technical perspective and taking into account its social environment.[3]

Asimov’s Three Laws of Robotics

Isaac Asimov was a famous and influential writer of robot stories. He devised an idealized set of rules to prevent robots from attacking humans, though these rules are not used by actual roboticists. These are Asimov’s “Three Laws of Robotics,” which govern the behavior of robots in his fictional world.[4] They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
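The Three Laws form an ordered constraint system: a lower law applies only when the higher laws are silent. The following minimal sketch shows that priority ordering in code; the `Action` fields (`harms_human`, `ordered_by_human`, and so on) are hypothetical labels invented for illustration, since real robots have no such cleanly separable predicates, which is one reason roboticists do not use the laws in practice.

```python
# A sketch of Asimov's Three Laws as a priority-ordered rule check.
# All field names are illustrative assumptions, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False            # would the action injure a human?
    allows_harm_by_inaction: bool = False  # would inaction let a human come to harm?
    ordered_by_human: bool = False       # was the action ordered by a human?
    endangers_self: bool = False         # does the action endanger the robot?

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws in priority order."""
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence when the higher laws are silent.
    return not action.endangers_self
```

For example, `permitted(Action(harms_human=True, ordered_by_human=True))` is `False`: the First Law overrides a human order, exactly the kind of conflict Asimov's stories explore.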

Principles for Designers, Builders, and Users of Robots

In 2011, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council of Great Britain jointly published a set of five ethical “principles for designers, builders, and users of robots” in the real world based on a September 2010 research workshop:[5]

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

Nadella’s six principles for AI

Satya Nadella, CEO of Microsoft, set out six principles and goals he believes AI research must follow to keep society safe.[6] They are not a direct analog of Asimov’s laws. Nadella’s principles are:

  1. AI must be designed to assist humanity. Machines that work alongside humans should do dangerous work like mining but still “respect human autonomy.”
  2. AI must be transparent. People should have an understanding of how technology sees and analyzes the world.
  3. AI must maximize efficiencies without destroying the dignity of people. We need broader, deeper, and more diverse engagement of populations in the design of these systems.
  4. AI must be designed for intelligent privacy. There must be sophisticated protections that secure personal and group information.
  5. AI must have algorithmic accountability so that humans can undo unintended harm.
  6. AI must guard against bias. Proper and representative research should be used to make sure AI does not discriminate against people as humans do.

[1] “National Artificial Intelligence R&D Strategic Plan,” NITRD, accessed October 25, 2019, https://www.nitrd.gov/news/national_ai_rd_strategic_plan.aspx.

[2] “Executive Order on AI,” accessed October 25, 2019, https://www.whitehouse.gov/ai/executive-order-ai.

[3] “Ethics guidelines for trustworthy AI,” accessed December 28, 2019, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

[4] Susan Schneider, Science Fiction and Philosophy: From Time Travel to Superintelligence (Hoboken, NJ: John Wiley & Sons, 2016), 297.

[5] “Ethical principles for Designers, Builders and Users of Robots,” accessed December 28, 2019, http://www.historyofinformation.com/detail.php?id=3653.

[6] James Vincent, “Satya Nadella's rules for AI are more boring (and relevant) than Asimov's Three Laws,” The Verge, June 29, 2016, accessed October 28, 2019, https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws.

Thursday, February 11, 2021

History and Types of AI

Technology is advancing fast, and wide adoption is taking less time. It took thousands of years to go from writing to the printing press, but only about five hundred more for email to become popular among the general public. According to Noah Berlatsky, the idea of AI has fascinated people for hundreds of years; the first science fiction novel, Mary Shelley's Frankenstein (1818), centered on a scientist who builds an artificial, intelligent creature.[1]

The term AI was coined in 1956 by the American computer scientist John McCarthy, who defined it as "getting a computer to do things which, when done by people, are said to involve intelligence." AI can be defined as a broad area of computer science that makes machines seem like they have human intelligence. There is no standard definition of what constitutes AI, though, because there is a lack of agreement on what constitutes intelligence and how it relates to machines. According to McCarthy, "Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals, and some machines."[2] Human intelligence includes capabilities such as logic, reasoning, conceptualization, self-awareness, learning, emotional knowledge, planning, creativity, abstract thinking, and problem-solving. A machine is generally considered to use AI if it can perform in a way that matches these human abilities.

Over the years, many movies have captured people's imagination about the possibility of super-intelligent robots that can perform tasks more efficiently and accurately than humans, and many have depicted the relationship between humans and robots. Some films have portrayed AI robots with limitations, while others have portrayed them as having emotional feelings and relating to humans as other humans do.

AI is categorized into three types: (a) Narrow or Weak AI, (b) General or Strong AI, and (c) Super AI.

Narrow or Weak AI

According to Joe Carter, "Narrow AI (or "weak AI") is the capability of a machine to perform a more limited number and range of intellectual tasks a human can do." Narrow AI can be programmed to "learn" in a limited sense but cannot understand context. While different AI functions can be combined to perform a range of varied and complex tasks, such machines remain in the category of narrow AI. Today there are many applications of narrow AI. This type of AI is not conscious, sentient, or driven by emotion the way humans are. Narrow AI operates within a pre-determined, pre-defined range, even if it appears much more sophisticated than that. Google Assistant, Google Translate, facial recognition, speech recognition, Alexa, Cortana, Siri, and other natural language and image processing tools are examples of narrow AI. These systems are called "weak" AI because they are nowhere close to having human-like intelligence. They lack the self-awareness, consciousness, and genuine intelligence to match human intelligence, and they cannot think for themselves. They perform the tasks they are designed to do and nothing beyond what they are programmed for. A narrow AI system can provide weather updates but cannot answer questions beyond the scope it was designed for and the dataset available to it. Sometimes many narrow AI components are combined to perform more complex operations, such as driving a car. There are many benefits to this type of AI, as it is used to improve efficiency and accuracy.
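The point that narrow AI only operates within a pre-defined range can be illustrated with a toy assistant. In this sketch the intents and canned replies are invented for illustration; real assistants like Siri or Alexa use statistical models, but the boundary behaves the same way: in-scope queries get answers, and everything else falls through.

```python
# A toy "narrow AI" assistant: it handles exactly the intents it was
# programmed for and fails gracefully on everything else.
# The intents and replies below are hypothetical examples.
def narrow_assistant(query: str) -> str:
    q = query.lower()
    if "weather" in q:
        # In-scope: a canned answer from the system's weather capability.
        return "Today's forecast: sunny, 72F."
    if "time" in q:
        # In-scope: a canned answer from the system's clock capability.
        return "It is 10:00 AM."
    # Out of scope: the system cannot reason beyond its programmed range.
    return "Sorry, I can't help with that."
```

Asking about the weather succeeds, but asking "What is the meaning of life?" returns the fallback: the assistant has no way to step outside the range it was built for.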

Researchers develop theories of human and animal intelligence and test them by building working models in software programs or robots. For weak AI, these models are tools for understanding the mind.

General or Strong AI

According to Joe Carter, "General AI (or "strong AI") is the capability of a machine to perform many or all of the intellectual tasks a human can do, including the ability to understand the context and make judgments based on it. This type of AI currently does not exist outside the realm of science fiction, though it is the ultimate goal of many AI researchers."[3] Whether it is even possible to achieve general AI is currently unknown, though some researchers claim it will be. Even if it is achieved, such machines would likely not possess sentience (i.e., the ability to perceive one's environment and experience sensations such as pain and suffering, or pleasure and comfort). Currently, machines can process data faster than humans, but they cannot think abstractly, strategize, or tap thoughts and memories to make informed decisions or come up with creative ideas. This limitation makes machine intelligence inferior to the abilities humans possess. General AI is expected to be able to reason, solve problems, make judgments under uncertainty, plan, learn, integrate prior knowledge into decision-making, and be innovative, imaginative, and creative. For machines to achieve real human-like intelligence, they would need to be capable of experiencing consciousness. For strong AI, the model has to be a mind.

Super AI

Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."[4] This type of AI is supposed to surpass human intelligence in all aspects, from creativity to general wisdom to problem-solving. Such machines would be capable of exhibiting intelligence not seen in any human. It is the type of AI that many people worry about, and the type that people like Elon Musk and Stephen Hawking think could lead to the extinction of the human race.[5] This type of AI does not exist today, but researchers predict it is possible in the future.

 



[1] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 14.

[2] Joe Carter, “The FAQs: What Christians Should Know About Artificial Intelligence,” The Gospel Coalition, April 18, 2019, accessed October 25, 2019, https://www.thegospelcoalition.org/article/the-faqs-what-christians-should-know-about-artificial-intelligence.

[3] Carter, “The FAQs: What Christians Should Know About Artificial Intelligence.”

[4] Tannya D. Jajal, “Distinguishing between Narrow AI, General AI and Super AI,” Medium, May 21, 2018, accessed October 25, 2019, https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22.

[5] Kharpal, “Stephen Hawking Says A.I. Could Be the Worst Event in the History of Our Civilization.”

Tuesday, February 9, 2021

Action Command Outcome (ACO) Theological Framework

One of the goals of this research was to develop a framework to address the challenges raised by AI. I decided to focus on God's attributes, since addressing the challenges of AI requires a strong foundation built on an understanding of the creator God from a Christian perspective. After providing all the information and raising awareness about AI, it was essential to have a framework that could be used beyond this project. There is a lack of awareness among Christian ministers about AI, and the broad adoption of new technology raises many questions among people. These questions have to be answered from a biblical viewpoint.

Therefore, I developed a framework, named the "Action Command Outcome" (ACO) Theological Framework, to evaluate different issues. It was not adapted from any other available framework. I had heard about the concept of the different types of actions that humans can take, and when I discussed this research project with my thesis advisor, we evaluated the concept of good and evil outcomes. The ACO framework is the result of my evaluation of different AI challenges and of the data collected during the research interviews.




 

This framework starts with the action in question. The action is validated against the commandments found in the Bible, which may be explicit or implicit. The action is then evaluated to determine whether it is essential, desirable, tolerable, or forbidden. Based on where it falls, the framework indicates the outcome of the action according to what the Bible commands. The following is an example of the ACO Theological Framework:

Table 1: ACO Theological Framework Example

| Actions   | Command: Explicit                           | Command: Implicit                                              | Outcome   |
|-----------|---------------------------------------------|----------------------------------------------------------------|-----------|
| Essential | Love (1 John 3:11)                          | Help an online friend (1 John 3:11)                            | Very Good |
| Desirable | Support missionaries (1 Cor. 16:1-3)        | Sharing words of encouragement on social media (1 Thess. 5:11) | Good      |
| Tolerable | Wasting time not doing anything (Eph. 5:16) | Spending much time watching television (Eph. 5:16)             | Not Good  |
| Forbidden | Adultery (Matt. 5:27-28)                    | Watching porn (Matt. 5:27-28)                                  | Evil      |

The above table maps actions against the action type under each type of command, with Bible references added alongside each action to be validated. The actions under explicit commands are those directly found in the Bible. Some commandments are essential, others are forbidden, and the rest fall in the middle. When an action is not mentioned directly in the Bible, it is listed under implicit commands. Helping an online friend, posting encouragement online, watching television, and watching porn are not directly referenced in the Bible, but there are Bible verses that implicitly address these actions. Biblical exegetical and hermeneutical skills are required to use this framework correctly.
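The framework's lookup logic can be sketched in a few lines of code: each action is tagged with a category, a command type, and a supporting verse, and the category determines the outcome. This is only an illustrative sketch of the mapping in Table 1; the data structure and function names are my own, and actually classifying an action requires the exegetical judgment described above, not a dictionary lookup.

```python
# A minimal sketch of the ACO lookup, mirroring Table 1.
# The category of an action determines its outcome.
OUTCOMES = {
    "essential": "Very Good",
    "desirable": "Good",
    "tolerable": "Not Good",
    "forbidden": "Evil",
}

# Each action maps to (category, command type, Bible reference).
ACTIONS = {
    "Love": ("essential", "explicit", "1 John 3:11"),
    "Help an online friend": ("essential", "implicit", "1 John 3:11"),
    "Support missionaries": ("desirable", "explicit", "1 Cor. 16:1-3"),
    "Adultery": ("forbidden", "explicit", "Matt. 5:27-28"),
}

def evaluate(action: str) -> str:
    """Return the outcome of a classified action under the ACO framework."""
    category, _command_type, _verse = ACTIONS[action]
    return OUTCOMES[category]
```

For example, `evaluate("Love")` returns "Very Good" and `evaluate("Adultery")` returns "Evil", reproducing the first and last rows of Table 1.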

ACO Theological Framework - Issues Raised by AI

AI is not referenced directly in the Bible. During the research interviews, one respondent shared that he thought AI robots are referenced directly in the Bible; however, I did not find any explicit references to AI in the Bible. Below is an example of how some issues related to AI map into the ACO Theological Framework.

Table 31: ACO Theological Framework - AI Issues

| Actions   | Command: Explicit | Command: Implicit                                                                                                                                                         | Outcome   |
|-----------|-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
| Essential |                   | Use of AI in a child rescue operation (Ps. 82:3-4)                                                                                                                          | Very Good |
| Desirable |                   | AI in cancer detection (1 Tim. 5:23)                                                                                                                                        | Good      |
| Tolerable |                   | AI-generated sermons (2 Tim. 4:1-2); AI in warfare (Matt. 5:44)                                                                                                             | Not Good  |
| Forbidden |                   | AI sex robots (Gen. 2:18-25; Matt. 5:27-30); AI granted status similar to humans (Gen. 1:26-28); Worship of AI (Ex. 20:3-5); AI used for deceiving people (Prov. 6:16-19)   | Evil      |

I selected eight actions or topics related to AI and mapped them in the ACO Theological Framework. All the actions were mapped under implicit commands, with supporting Bible verses provided. Some issues could be mapped against more than one action type, depending on how a Bible verse is interpreted. The data from the research interviews were used to complete the ACO framework for issues related to AI. The responses from ministry leaders and pastors, along with my own study of these topics, were the basis for the conclusions. The good use of AI was seen as beneficial by many respondents. The use of AI sex robots, AI granted a status similar to humans, and any worship of AI were described as evil by many respondents, especially those who profess faith in God.

The ACO framework can be used as a tool when dealing with ethical and moral issues related to AI. It can be presented when teaching college students and adults about faith, religion, morality, ethics, and technology. I am confident that this framework can be applied to other topics as well. It can also serve as an evangelism tool for more in-depth conversations, starting with the general moral standards of different nations and cultures. I envision using this framework to draw a parallel between the moral laws followed by humanity and the moral laws found in the Bible. Using the ACO framework can lead to meaningful conversations and provide witnessing opportunities.

The ACO framework does not provide specific guidance on individual AI concerns and is not designed to go into the details of one issue and give advice. For example, interviewees raised many concerns related to self-driving cars. Self-driving cars themselves may fall in the desirable and good category, but the framework does not address the specific concerns about them. One topic discussed during the interviews was self-driving cars killing humans. That is a serious concern, but it does not in itself make self-driving cars evil unless data show that these cars cause many accidents. A further research project could address the specific issues connected with each AI concern and develop guidance to supplement the outcomes provided by the ACO framework.

 

Generative AI and Impacts

I have started doing some work with Generative AI. It is both promising and dangerous in many ways.  It is critical to understand what Gener...