Thursday, February 11, 2021

History and Types of AI

Technology is advancing fast, and wide adoption is taking less time. It took roughly ten thousand years to go from writing to the printing press, but only about five hundred more for email to become popular among the general public. According to Noah Berlatsky, the idea of AI has fascinated people for hundreds of years: the first science fiction novel, Mary Shelley's Frankenstein (1818), centered on a scientist who builds an artificial, intelligent creature.[1] The term AI was coined in 1956 by the American computer scientist John McCarthy, who defined it as "getting a computer to do things which, when done by people, are said to involve intelligence." AI can be described broadly as the area of computer science that makes machines seem as if they have human intelligence. There is no standard definition of what constitutes AI, however, because there is no agreement on what constitutes intelligence or how it relates to machines. Over the years, many movies have captured people's imagination about super-intelligent robots that can perform tasks more efficiently and accurately than humans, and many have depicted the relationship between humans and robots. Some films have portrayed AI robots with clear limitations, while others have portrayed them as having emotions and relating to humans as other humans do. According to McCarthy, "Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals, and some machines."[2] Human intelligence includes capabilities such as logic, reasoning, conceptualization, self-awareness, learning, emotional knowledge, planning, creativity, abstract thinking, and problem-solving. A machine is generally considered to use AI if it can perform in a way that matches these human abilities.
AI is categorized into three types: (a) Narrow or Weak AI, (b) General or Strong AI, and (c) Super AI.

Narrow or Weak AI

According to Joe Carter, "Narrow AI (or "weak AI) is the capability of a machine to perform a more limited number and range of intellectual tasks a human can do." Narrow AI can be programmed to "learn" in a limited sense but cannot understand the context. While different forms of AI functions can be put together to perform a range of varied and complex tasks, such machines remain in the category of narrow AI. Today there are many applications of narrow AI. This type of AI is not conscious, sentient, or driven by emotion the way that humans are. Narrow AI operates within a pre-determined, pre-defined range, even if it appears to be much more sophisticated than that. Google Assistant, Google Translate, facial recognition, speech recognition, Alexa, Cortona, Siri, and other natural language and image processing tools are examples of Narrow AI. These are called "Weak" AI because these machines are nowhere close to having human-like intelligence. They lack self-awareness, consciousness, and genuine intelligence to match human intelligence, and they cannot think for themselves. They perform the task they are designed to do and cannot perform anything beyond what they are programmed to do. AI can provide weather updates but cannot answer a question that is beyond the intelligence it is designed to operate and the dataset it has available. Sometimes machines can be made of many Narrow AI to perform more complex operations like driving a car. There are many benefits to this type of AI, as it is used to improve efficiency and accuracy.

Theories of human and animal intelligence are developed and then tested by building working models in software programs or robots. For Weak AI, these models are tools for understanding the mind.

General or Strong AI

According to Joe Carter, "General AI (or "strong AI") is the capability of a machine to perform many or all of the intellectual tasks a human can do, including the ability to understand the context and make judgments based on it. This type of AI currently does not exist outside the realm of science fiction, though it is the ultimate goal of many AI researchers."[3] Whether it is even possible to achieve general AI is currently unknown, and some researchers claim that it will be possible to have this type of AI. If it is achieved, such machines would likely not possess sentience (i.e., the ability to perceive one's environment and experience sensations such as pain and suffering, or pleasure and comfort). Currently, machines can process data faster than humans, but they cannot think abstractly, strategize, and tap thoughts and memories to make informed decisions or come up with creative ideas. This limitation makes machine intelligence inferior to the abilities humans possess. General AI is expected to be able to reason, solve problems, make judgments under uncertainty, plan, learn, integrate prior knowledge in decision-making, and be innovative, imaginative, and creative. For machines to achieve real human-like intelligence, they will need to be capable of experiencing consciousness. For Strong AI, the model has to be a mind.

Super AI

Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."[4] This type of AI is expected to surpass human intelligence in all aspects, from creativity to general wisdom to problem-solving. Such machines would be capable of exhibiting intelligence not seen in any human. It is the type of AI that many people worry about, and the type that figures like Elon Musk and Stephen Hawking believe could lead to the extinction of the human race.[5] This type of AI does not exist today, but researchers predict it may be possible in the future.

 



[1] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 14.

[2] Joe Carter, “The FAQs: What Christians Should Know About Artificial Intelligence,” The Gospel Coalition, April 18, 2019, accessed October 25, 2019, https://www.thegospelcoalition.org/article/the-faqs-what-christians-should-know-about-artificial-intelligence.

[3] Carter, “The FAQs: What Christians Should Know About Artificial Intelligence.”

[4] Tannya D. Jajal, “Distinguishing between Narrow AI, General AI and Super AI,” Medium, May 21, 2018, accessed October 25, 2019, https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22.

[5] Kharpal, “Stephen Hawking Says A.I. Could Be Worst Event in the History of Our Civilization.”

Tuesday, February 9, 2021

Action Command Outcome (ACO) Theological Framework

One of the goals of this research was to develop a framework to address the challenges raised by AI. I decided to focus on God's attributes because addressing the challenges of AI requires a strong foundation in a Christian understanding of the creator God. After providing information and raising awareness about AI, it was essential to have a framework that could be used beyond this project in the future. There is a lack of awareness among Christian ministers about AI, and the broad adoption of new technology raises many questions from people. These questions have to be answered from a biblical viewpoint.

Therefore, I developed a framework, named the "Action Command Outcome" (ACO) Theological Framework, to evaluate different issues. It was not adapted from any existing framework. I had heard about the concept of different types of actions that humans can take, and while discussing this research project with my thesis advisor, we evaluated the concept of good and evil outcomes. The ACO framework results from my evaluation of different AI challenges and from the data collected during the research interviews.


This framework starts with the action in question. The action is validated against the commandments found in the Bible, which may be explicit or implicit. The action is then evaluated to determine whether it is essential, desirable, tolerable, or forbidden. Where the action falls determines its outcome according to what the Bible commands. The following is an example of the ACO Theological Framework:

Table 1: ACO Theological Framework Example

| Action Classification | Explicit Command                            | Implicit Command                                               | Outcome   |
|-----------------------|---------------------------------------------|----------------------------------------------------------------|-----------|
| Essential             | Love (1 John 3:11)                          | Help an online friend (1 John 3:11)                            | Very Good |
| Desirable             | Support missionaries (1 Cor. 16:1-3)        | Sharing words of encouragement on social media (1 Thess. 5:11) | Good      |
| Tolerable             | Wasting time not doing anything (Eph. 5:16) | Spending much time watching television (Eph. 5:16)             | Not Good  |
| Forbidden             | Adultery (Matt. 5:27-28)                    | Watching porn (Matt. 5:27-28)                                  | Evil      |

The table above maps actions against the action classification under each type of command, with Bible references added alongside each action to be validated. Actions under explicit commands are those found directly in the Bible. Some commandments are essential, others fall in the forbidden area, and still others fall in the middle. When an action is not mentioned directly in the Bible, it is listed under implicit commands. Helping an online friend, posting encouragement online, watching television, and watching porn are not directly referenced in the Bible, but there are Bible verses that implicitly address these actions. Biblical exegetical and hermeneutical skills are required to use this framework correctly.
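Once the exegetical work of classifying an action is done, the final step of the framework is a simple lookup from classification to outcome. As a minimal illustration only (the names ACO_OUTCOMES and aco_outcome are my own, not terms defined by the framework), this step can be sketched in Python:

```python
# Minimal sketch of the ACO lookup described above: an action, once
# classified against a biblical command (explicit or implicit), maps
# to a fixed outcome. The dictionary and function names here are
# illustrative assumptions, not part of the framework itself.

ACO_OUTCOMES = {
    "essential": "Very Good",
    "desirable": "Good",
    "tolerable": "Not Good",
    "forbidden": "Evil",
}

def aco_outcome(classification: str) -> str:
    """Return the outcome for an action already classified as
    essential, desirable, tolerable, or forbidden."""
    return ACO_OUTCOMES[classification.lower()]
```

For example, an action classified as desirable (such as sharing words of encouragement online) maps to the outcome "Good". The hard work lies in the prior exegetical step of classifying the action, which no lookup can automate.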

ACO Theological Framework - Issues Raised by AI

AI is not referenced directly in the Bible. During the research interviews, one respondent shared that he thought AI robots are referenced directly in the Bible, but I did not find any explicit references to AI. Below is an example of how some of the issues related to AI are mapped in the ACO Theological Framework.

Table 31: ACO Theological Framework - AI Issues

| Action Classification | Explicit Command | Implicit Command                                                                                                                                                              | Outcome   |
|-----------------------|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------|
| Essential             |                  | Use of AI in a child rescue operation (Ps. 82:3-4)                                                                                                                              | Very Good |
| Desirable             |                  | AI in cancer detection (1 Tim. 5:23)                                                                                                                                            | Good      |
| Tolerable             |                  | AI-generated sermons (2 Tim. 4:1-2); AI in warfare (Matt. 5:44)                                                                                                                 | Not Good  |
| Forbidden             |                  | AI sex robots (Gen. 2:18-25; Matt. 5:27-30); AI granted status similar to humans (Gen. 1:26-28); Worship of AI (Ex. 20:3-5); AI used for deceiving people (Prov. 6:16-19)       | Evil      |

I selected eight actions or topics related to AI and mapped them in the ACO Theological Framework. All the actions were mapped under implicit commands, and supporting Bible verses are provided. Some issues could be mapped to more than one action depending on how a Bible verse is interpreted. The data from the research interviews were used to complete the ACO framework for issues related to AI; the responses from ministry leaders and pastors, along with my own study of these topics, formed the basis for the conclusions. The good use of AI was seen as beneficial by many respondents. The use of AI sex robots, granting AI a status similar to humans, and any worship of AI were stated to be evil by many respondents, especially those who profess faith in God.

The ACO framework can be used as a tool when dealing with ethical and moral issues related to AI. It can be presented when teaching college students and adults about faith, religion, morality, ethics, and technology, and I am confident it can be applied to other topics as well. The framework can also serve as an evangelism tool for more in-depth conversations, beginning with the general moral standards of different nations and cultures. I envision using it to draw a parallel between the moral laws followed by humanity and the moral laws found in the Bible. Using the ACO framework can lead to meaningful conversations and provide witnessing opportunities.

The ACO framework does not provide specific guidance on any particular AI concern; it is not designed to go into the details of a single issue and offer advice. Interviewees raised many concerns related to self-driving cars. Self-driving cars themselves may fall in the desirable and good category, yet the framework does not address the concerns surrounding them. One topic discussed during the interviews was self-driving cars killing humans. That is a serious concern, but it does not by itself make self-driving cars evil unless data show that these cars cause many accidents. A future research project could address the different issues connected to a particular AI concern and develop guidance alongside the outcomes provided by the ACO framework.

 
