Wednesday, May 19, 2021

AI Possibilities

One of the biggest challenges with AI is that strong AI does not exist today. Whether it ever can depends on how different people define and understand intelligence, and philosophers and scientists disagree about whether the development of strong AI is even possible. Doug Merritt, the CEO of Splunk, recently stated that “AI does not exist today.”[1] What is called AI encompasses many technologies, such as machine learning (ML), deep learning, and natural language processing (NLP). All of these are narrow forms of AI, and they do not work with one another. 

The original vision of AI, dating back to the 1950s, was of systems that could truly learn anything across any domain. Merritt said that it could take 50 to 100 years to get to AI, and that many issues and challenges remain to be worked out, such as computational power and energy. The human brain uses only about 50 watts; it is also a highly complex distributed system that relies heavily on intuition to filter information.

The creation of machines that can think like humans has proved more difficult than initially anticipated. Vernor Vinge, an early proponent of the idea of a technological singularity, argues that at some point in the future AI will surpass human intelligence, allowing for unimaginable advances.[2] He acknowledges that there are dangers in this scenario, because robots may be immoral, but concludes that overall, advances in technology are much more likely to benefit humans than to destroy them. Artificial brains are not imminent, since current brain simulations do not come close to imitating actual brain function. According to John Horgan, scientists have little sense of how brains work, and claims that computers will soon mimic human brain function are wishful thinking.[3] AI has several definitions, and the possibilities of AI depend on how intelligence is defined. Stuart J. Russell and Peter Norvig argue that computers can be considered to have achieved AI when they act like humans, think like humans, think rationally, or act rationally.[4] They note that “Most AI researchers take the weak AI hypothesis for granted, and do not care about the strong AI hypothesis—as long as their program works, they do not care whether you call it a simulation of intelligence or real intelligence.”[5]

[1] Tom Taulli, “Splunk CEO: Artificial Intelligence Does Not Exist Today,” Forbes, October 25, 2019, accessed October 29, 2019, https://www.forbes.com/sites/tomtaulli/2019/10/25/splunk-ceo--artificial-intelligence-does-not-exist-today/#1790af3de6a5.

[2] Noah Berlatsky, ed., Artificial Intelligence, Opposing Viewpoints Series (Detroit: Greenhaven Press, 2011), 20.

[3] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 25.

[4] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 49.

[5] Berlatsky, Artificial Intelligence, Opposing Viewpoints Series, 18.

Wednesday, May 12, 2021

Principles or Laws of Robotics

AI is used extensively in robotics, so it is worth reviewing the principles, or laws, that have been proposed for robots. Several sets of principles have been put forward for robotics with AI, though none has been officially adopted or implemented by the researchers and companies working on AI.

US AI Strategic Plan

On May 3, 2016, the US Administration announced the formation of a new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and AI to help coordinate federal activity in AI. An AI Research and Development Strategic Plan was then released, identifying the following priorities for federally funded AI research:[1]

  1. Strategy 1: Make long-term investments in AI research. Prioritize investments in the next generation of AI that will drive discovery and insight and enable the United States to remain a world leader in AI.
  2. Strategy 2: Develop effective methods for human-AI collaboration. Rather than replace humans, most AI systems will collaborate with humans to achieve optimal performance. Research is needed to create effective interactions between humans and AI systems.
  3. Strategy 3: Understand and address the ethical, legal, and societal implications of AI. We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
  4. Strategy 4: Ensure the safety and security of AI systems. Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy.
  5. Strategy 5: Develop shared public datasets and environments for AI training and testing. The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high-quality datasets and environments and enable responsible access to high-quality datasets as well as testing and training resources.
  6. Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Essential to advancements in AI are standards, benchmarks, testbeds, and community engagement that guide and evaluate progress in AI. Additional research is needed to develop a broad spectrum of evaluative techniques.
  7. Strategy 7: Better understand the national AI R&D workforce needs. Advances in AI will require a strong community of AI researchers. An improved understanding of current and future R&D workforce demands in AI is needed to help ensure that sufficient AI experts are available to address the strategic R&D areas outlined in this plan.

On February 11, 2019, United States President Donald Trump signed Executive Order 13859, announcing the American AI Initiative, the United States’ national strategy on AI.[2] This shows that governments are taking the potential of AI seriously and recognizing the need for policies to govern, and initiatives to advance, its use.

European Commission Ethics Guidelines for Trustworthy AI

The European Commission’s high-level expert group on AI presented ethics guidelines for trustworthy AI. According to the guidelines, trustworthy AI should be (a) lawful, respecting all applicable laws and regulations; (b) ethical, respecting ethical principles and values; and (c) robust, both from a technical perspective and with respect to its social environment.[3]

Asimov’s Three Laws of Robotics

Isaac Asimov was a famous and influential writer of robot stories. He devised an idealized set of rules to prevent robots from attacking humans, though actual roboticists do not use them. These rules, Asimov’s “Three Laws of Robotics,” govern the behavior of robots in his fictional world.[4] They are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
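A notable feature of the three laws is that they form a strict precedence hierarchy: each law yields to the ones before it. As a purely illustrative sketch (the `Action` fields and `choose_action` helper below are hypothetical, not taken from Asimov or any real robotics system), that precedence can be expressed as a lexicographic ordering:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, labeled by the three concerns the laws name.
    These fields are hypothetical, for illustration only."""
    harms_human: bool      # would injure a human, or let one come to harm
    obeys_order: bool      # complies with an order given by a human
    preserves_self: bool   # protects the robot's own existence

def choose_action(actions):
    """Pick the action that best satisfies the Three Laws.

    Sorting lexicographically encodes the strict precedence: avoiding
    harm to humans dominates obedience to orders, which in turn
    dominates self-preservation. Since False sorts before True, min()
    prefers harmless, then obedient, then self-preserving actions.
    """
    return min(actions, key=lambda a: (a.harms_human,
                                       not a.obeys_order,
                                       not a.preserves_self))
```

For example, an action that disobeys an order but harms no one outranks one that obeys an order by harming a human, because the First Law dominates the Second.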

Principles for Designers, Builders, and Users of Robots

In 2011, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council of Great Britain jointly published a set of five ethical “principles for designers, builders, and users of robots” in the real world based on a September 2010 research workshop:[5]

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

Nadella’s six principles for AI

Satya Nadella, CEO of Microsoft, set out six principles and goals that he believes AI research must follow to keep society safe.[6] They are not a direct analog of Asimov’s laws. Nadella’s principles are:

  1. AI must be designed to assist humanity. Machines that work alongside humans should do dangerous work like mining but still “respect human autonomy.”
  2. AI must be transparent. People should have an understanding of how technology sees and analyzes the world.
  3. AI must maximize efficiencies without destroying the dignity of people. We need broader, deeper, and more diverse engagement of populations in the design of these systems.
  4. AI must be designed for intelligent privacy. There must be sophisticated protections that secure personal and group information.
  5. AI must have algorithmic accountability so that humans can undo unintended harm.
  6. AI must guard against bias. Proper and representative research should be used to make sure AI does not discriminate against people as humans do.

[1] “National Artificial Intelligence R&D Strategic Plan,” NITRD, accessed October 25, 2019, https://www.nitrd.gov/news/national_ai_rd_strategic_plan.aspx.

[2] “Executive Order on AI,” accessed October 25, 2019, https://www.whitehouse.gov/ai/executive-order-ai.

[3] “Ethics guidelines for trustworthy AI,” accessed December 28, 2019, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

[4] Susan Schneider, ed., Science Fiction and Philosophy: From Time Travel to Superintelligence (Hoboken, NJ: John Wiley & Sons, 2016), 297.

[5] “Ethical principles for Designers, Builders and Users of Robots,” accessed December 28, 2019, http://www.historyofinformation.com/detail.php?id=3653.

[6] James Vincent, “Satya Nadella’s rules for AI are more boring (and relevant) than Asimov’s Three Laws,” The Verge, June 29, 2016, accessed October 28, 2019, https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws.

Misuse of AI

One of the major concerns that people have expressed over the years with AI is the potential for misuse. There are many areas in which AI is...