Wednesday, September 8, 2021

Bias in AI

Humans have many biases, and any tool created by humans can have bias built in. AI is inherently subject to bias because algorithms are trained on data generated by humans. These biases must be accounted for, minimized, or removed through continual human oversight and discretion. The challenge is to design and use AI in such a way that it treats all human beings as having equal worth and dignity. 

AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making. AI systems are only as good as the data they are provided. Bad data contain implicit racial, gender, or ideological biases, and many AI systems will continue to be trained on such data, which makes this an ongoing problem. Bias in AI comes not from the algorithm being used but from people.
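The point that bias lives in the data rather than the algorithm can be illustrated with a deliberately simple sketch. The function and data below are entirely hypothetical, invented for illustration: a trivial "classifier" that merely repeats the most common label in its training history will reproduce whatever skew that history contains, even though the algorithm itself has no opinion about anyone.

```python
from collections import Counter

def majority_label_classifier(training_data):
    """Toy 'classifier' that always predicts the most common label it was
    trained on. It has no logic of its own; any skew in its predictions
    comes entirely from the data it was given."""
    counts = Counter(label for _, label in training_data)
    majority_label = counts.most_common(1)[0][0]
    return lambda example: majority_label

# Hypothetical hiring history in which past human decisions were skewed:
biased_history = [("candidate", "reject")] * 90 + [("candidate", "hire")] * 10
model = majority_label_classifier(biased_history)
prediction = model("new candidate")  # "reject": the bias came from the data
```

Real machine-learning systems are far more sophisticated, but the principle is the same: training on records of biased human decisions teaches the system to reproduce those decisions.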

Back in 2015, Jacky Alciné, a software engineer, pointed out that the image recognition algorithms in Google Photos were classifying his black friends as “gorillas.” Google said it was “appalled” at the mistake, apologized to Alciné, and promised to fix the problem.[1] AI should not be designed or used in ways that violate the fundamental principle of human dignity for all people.

[1] James Vincent, “Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech,” The Verge, January 12, 2018, accessed October 28, 2019,

Tuesday, September 7, 2021

Created Humankind as Creators

Technological advancements are happening at a fast pace, and humans are achieving what could not be imagined a few decades ago. In Genesis chapter 1, God spoke, and it came into being. Now people can talk to their smart devices, and the devices perform the actions requested or commanded. Children are growing up with a different worldview as humans have become techno sapiens. Today most people rely on Google or similar search engines to find answers and expect these search engines to know everything. Psalm 115 speaks of idols made by human hands that have mouths but cannot speak, eyes but cannot see, ears but cannot hear, noses but cannot smell, hands but cannot feel, and feet but cannot walk, nor can they utter a sound with their throats. Now humans are creating robots that can speak, see, hear, smell, walk, and touch.

God created humankind in His image and gave them different abilities, including intelligence, knowledge, and wisdom. Humans can develop the skills they have, and this has resulted in many scientific and technological developments. The Bible mentions specific instances when people were given a unique ability or skill by God to accomplish His purpose.

The Lord said to Moses, “See, I have called by name Bezalel the son of Uri, son of Hur, of the tribe of Judah, and I have filled him with the Spirit of God, with ability and intelligence, with knowledge and all craftsmanship, to devise artistic designs, to work in gold, silver, and bronze, in cutting stones for setting, and in carving wood, to work in every craft. And behold, I have appointed with him Oholiab, the son of Ahisamach, of the tribe of Dan. And I have given to all able men ability, that they may make all that I have commanded you:” (Exod. 31:1-7)

Humans can create many things, but those abilities cannot be compared with the abilities of the creator God. God spoke, and it came into existence (Gen. 1:1-24). Humans create using materials that are already available. Humans are not equal to God but work with and for God in the world. Humans are not at an equal level with God when it comes to creating anything new. Being the image-bearers of God, humans can co-create but at a level that is lower than that of God.

AI can be used to inform and aid human reasoning and moral decision-making because it is a tool that excels at processing data and making determinations, often mimicking or even exceeding human ability. But while AI excels at data-based computation, the technology is incapable of possessing the capacity for moral agency or responsibility.

“You shall have no other gods before me. “You shall not make for yourself a carved image, or any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth. You shall not bow down to them or serve them, for I the Lord your God am a jealous God, visiting the iniquity of the fathers on the children to the third and the fourth generation of those who hate me, (Exod. 20:3-5)

AI machines should not be seen as gods when they become powerful from a task-execution perspective. AI is not worthy of humanity’s hope, worship, or love. Humans should not be equated with the Creator God when they create something that may appear to exceed human intelligence. Humans should not cede moral accountability or responsibility to any form of AI that will ever be created. The Bible is clear that only humans will be judged by God based on their actions (Heb. 9:27). No tool developed will be subject to judgment. While technology can be created with good use in view, it is not a moral agent. Humans alone bear the responsibility for moral decision-making.

He will render to each one according to his works: to those who by patience in well-doing seek for glory and honor and immortality, he will give eternal life; but for those who are self-seeking and do not obey the truth, but obey unrighteousness, there will be wrath and fury. (Rom. 2:6-8)

The development of AI is a demonstration of the unique creative abilities of human beings, among the many other things they have developed. When AI is employed in accordance with God’s moral will, it is an example of human obedience to the divine command to steward creation and to honor Him. Any technological innovation for the glory of God, the sake of human flourishing, and the love of neighbor is good and should be pursued in ways that lead to greater flourishing and the alleviation of human suffering. Human beings are God's created co‐creators whose purpose is to be the agency, acting in freedom, to birth the future that is most wholesome for the nature that has birthed us—the nature that is not only our genetic heritage but also the entire human community and the evolutionary and ecological reality in which and to which humans belong. Exercising this agency is said to be God's will for humans.[1] 

[1] Victoria Lorrimar, “The Scientific Character of Philip Hefner’s ‘Created Co-Creator’,” Zygon 52, no. 3 (September 2017): 726-746, accessed October 25, 2019, EBSCOhost Academic Search Premier.

Monday, September 6, 2021

Impact of AI on Society

Narrow or weak AI has already changed how humans interact with each other and how they share and access information. Home automation, including security, has been achieved using AI, and AI is now being used in health care and other industries, where it has proved very beneficial to society. Still, AI raises many ethical issues.

Eliezer Yudkowsky suggests that AI researchers need to focus on producing humane AI, which he calls Friendly AI. He says that if AI researchers create intelligence without morality, super-intelligent AI could threaten humanity. Advances in computing power and nanotechnology mean that the creation of inhumane AI is a real possibility.[1] The problem with viewing AI as humane or inhumane is that machines cannot be considered moral or immoral. Humane or inhumane would not be an inherent characteristic of the AI itself; AI could be humane or inhumane only in the sense that it produces a positive or negative impact on humankind.

Noel Sharkey argues there is no evidence that machines can attain human intelligence, and he is concerned that people want so much to believe in AI that they will start to think that robots can replace humans in some jobs. This can create serious ethical problems if robots are used in elder care or military capacities.[2] Shoichi Hamada of the Japan Robot Association counts at least twenty companies working in the elderly-care robot field to create inanimate caregivers. An important question to ask is whether it is good to let machines be caregivers, and many say that people should be looking after other people. He reminds us, however, that there will be more people who need care and fewer people to provide it.[3]

Regarding the question of AI being considered a moral agent, John P. Sullins III, a member of the philosophy department of Sonoma State University in California, argues that robots could, in some situations, be considered moral actors. He says that a robot would not have to reach the level of human consciousness to be considered a moral agent. As long as a robot is autonomous, behaves in a way that suggests intentionality, and fulfills a social role that includes responsibilities, it should be thought of as having some moral agency. He concludes that even robots existing today may be considered moral agents in a limited way, and should be treated as such.[4] A “moral agent,” however, typically has responsibilities to others, is accountable when it fails to fulfill them, and holds rights against others, who could in turn be held responsible for failing to treat it morally. A robot that hurts a human being cannot be put in prison; the programmer or creator of the AI will be held responsible.

Joanna J. Bryson, a professor in the department of computer science at the University of Bath in the United Kingdom, argues that humans have no more ethical responsibilities to robots than they do to any other tool. She suggests that humans have a tendency to grant agency and consciousness to robots for psychological and social reasons, and it is a waste of time and energy, which are precious commodities for human beings. She concludes that more effort should be made to inform people that robots are not people or even animals, and therefore have no ethical standing.[5]

AI military robots are already in use, and some argue that they could be programmed to act more ethically than people. In certain situations, robots can perform better than humans, and hence the focus should be on creating robots that will act ethically. Others think that AI may dangerously change warfare because robots used in war will not have feelings. The U.S. has already deployed a few thousand robots, chiefly UAVs, in Iraq and Afghanistan.[6]

AI-enabled smart cars are already being tested and used and could have a significant impact on society and the automobile industry. Self-driving cars can increase passenger safety, but they raise new challenges regarding liability and decision-making in situations that cannot easily be programmed for or that have not occurred before.

AI has tremendous power to enhance spying on people, and both authoritarian governments and democracies are adopting the technology as a tool of political and social control. In Chinese cities, facial recognition is used to catch criminals in surveillance footage and to publicly shame those who commit minor offenses.[7] AI-powered sex robots are becoming more advanced, and companies are pushing people to buy AI-powered devices. People use AI-based applications and devices regularly. Personal assistants like Alexa and Google Home are now common in many homes, and people talk to and address them as they would other humans.

[1] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 126.

[2] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 135.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 139.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 141.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 155.

[6] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 188.

[7] Will Knight, “Artificial Intelligence Is Watching Us and Judging Us,” Wired, December 1, 2019, accessed January 10, 2020,

Saturday, September 4, 2021

AI and Christianity

From the Christian perspective, there are different arguments about the possibility of AI and its implications when compared to the creation of humans by God. In April 2019, sixty evangelical leaders released a statement addressing AI. The Ethics and Religious Liberty Commission of the Southern Baptist Convention spent nine months working on “Artificial Intelligence: An Evangelical Statement of Principles,” a document designed to equip the church with an ethical framework for thinking about this emergent technology.[1] The goal of this document was to help the church to think about AI from a biblical viewpoint. Leaders of many Christian institutions signed this document.

Russell C. Bjork, a professor of computer science at Gordon College in Wenham, Massachusetts, argues theologically that the soul may emerge from bodily processes.[2] He also argues that in Christian teaching, human specialness need not be based on what humans are but rather on what God intends for them. As a result, an AI having a soul would not diminish the theological basis of human worth. Christianity's insights into intelligence may help to suggest how to achieve AI and what its limits might be. He further argues that “as is true throughout the sciences, work in AI can be wrongly motivated, but it can also represent a very legitimate part of humanity's fulfillment of the cultural mandate (Gen. 1:28) through enhanced understanding of the greatest marvel of God's creation: human beings. There is no inherent theological conflict between a biblical view of personhood and work in AI, nor would successes in this field undermine human value or the doctrine of the image of God.”[3]

Harry Plantinga, a professor in the computer science department at Calvin College in Michigan, argues that faith affects how Christian computer scientists approach their work.[4] Faith can lead Christian computer scientists to recognize that the soul, rather than material computational abilities, separates human beings from machines. Faith also shapes the ethical choices Christian scientists make. Computer science is a discipline with two aspects. On one side, it is an engineering discipline: it involves the planning, design, construction, and maintenance of computer systems. Its subject matter is a corpus of techniques for analyzing problems, constructing solutions that will not collapse, and guaranteeing and measuring the robustness of programs.

On the other hand, it is also a science in the sense that mathematics is a science. It is the study of computation and computability, the study of the algorithm. Christian computer scientists and engineers should approach AI in an attitude of doxology and service. They must be careful to honor God in what they do and find that in loving God, they love others, and in serving others, they serve God. All problem-solving, including AI, must be addressed through the motivation of service, and the systems should be reliable, easy to use, and helpful to honor God. Social and ethical implications of the work should be considered, and beneficial aspects of computing must be pursued.[5]

AI cannot attain the image of humanity seen in the Bible. There may be robots similar to humans in appearance or speech, but to treat AI as human is to undermine what it means to be human.

[1] “Artificial Intelligence: An Evangelical Statement of Principles,” ERLC.

[2] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 30.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 41.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 42.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 48.

Friday, September 3, 2021

Turing Test

A Turing Test is a method of inquiry in AI for determining whether or not a computer is capable of thinking like a human being. The test is named after its originator, Alan Turing, an English computer scientist, cryptanalyst, mathematician, and theoretical biologist. Turing proposed that if a computer can mimic human responses under certain conditions, it can be said to possess AI. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans. During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents.[1] The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is asked to decide which respondent was human and which was a computer. The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have AI because the questioner regards it as “just as human” as the human respondent.[2]

Figure: Turing Test
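The pass criterion described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation of any real test; the function names and the trivially simple "judges" are invented for the example. The machine passes when the questioner identifies the human correctly in at most half of the runs.

```python
import random

def turing_test(judge, human, machine, rounds=100):
    """Simulate repeated Turing-test runs.

    `judge` is given a shuffled pair of respondents and returns the index
    of the one it believes is human. Per the criterion described above,
    the machine passes if the judge is right in at most half the runs."""
    correct = 0
    for _ in range(rounds):
        respondents = [human, machine]
        random.shuffle(respondents)   # hide which "terminal" is which
        guess = judge(respondents)    # judge picks the supposed human
        if respondents[guess] == human:
            correct += 1
    return correct / rounds <= 0.5    # pass criterion from the text

# A judge who cannot tell the respondents apart must guess at random,
# so a convincing machine is identified correctly only about half the time.
coin_flip_judge = lambda respondents: random.randint(0, 1)
```

Note that the criterion is statistical: a single lucky or unlucky round proves nothing, which is why the text says the test is repeated many times.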

Virginia Savova and Leonid Peshkin argue that the Turing Test is a valid test of AI. They contend that a machine could not fake its way through the Turing Test in a manner that violates our intuitions about intelligence. They also contend that no look-up table could be composed that would allow a machine to pass the Turing Test.[3] Mark Halpern, a computer software expert who has worked at IBM, argues that the Turing Test is fundamentally flawed. During a Turing Test conducted in 1991, judging was flagrantly inadequate: some computers providing responses were judged human, while some humans were judged to be computers. He concluded that even if a computer were to pass the Turing Test, it would not show that it had achieved AI.[4]

Yaakov Menken, an Orthodox rabbi, argues that despite significant advances in computer science, no computer has been created that even comes close to legitimately passing the Turing Test. Based on his observations and Jewish religious teaching, he concludes that human beings will never create a computer that can communicate with human intelligence.[5]

[1] “Turing Test,” TechTarget, accessed December 29, 2019,

[2] Margaret Rouse, “Turing Test,” Search Enterprise AI, June, 2019, accessed October 29, 2019,

[3] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 68.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 78.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 99.

Sermon: "AI in the End Times: Unraveling Revelation" - from ChatGPT

I was inspired by a video I saw recently to ask ChatGPT about its role in the end times. My question was - Prepare me a sermon on the role of AI...