Wednesday, May 12, 2021

Principles or Laws of Robotics

AI is used extensively in robotics, and hence it is essential to review the principles or laws of robotics. Various sets of principles have been proposed for robotics and AI, but none have been officially adopted or implemented by the researchers and companies working on AI.

US AI Strategic Plan

On May 3, 2016, the US Administration announced the formation of a new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and AI to help coordinate federal activity in AI. The subcommittee subsequently released the National Artificial Intelligence Research and Development Strategic Plan, which identifies the following priorities for federally funded AI research:[1]

  1. Strategy 1: Make long-term investments in AI research. Prioritize investments in the next generation of AI that will drive discovery and insight and enable the United States to remain a world leader in AI.
  2. Strategy 2: Develop effective methods for human-AI collaboration. Rather than replace humans, most AI systems will collaborate with humans to achieve optimal performance. Research is needed to create effective interactions between humans and AI systems.
  3. Strategy 3: Understand and address the ethical, legal, and societal implications of AI. We expect AI technologies to behave according to the formal and informal norms to which we hold our fellow humans. Research is needed to understand the ethical, legal, and social implications of AI, and to develop methods for designing AI systems that align with ethical, legal, and societal goals.
  4. Strategy 4: Ensure the safety and security of AI systems. Before AI systems are in widespread use, assurance is needed that the systems will operate safely and securely, in a controlled, well-defined, and well-understood manner. Further progress in research is needed to address this challenge of creating AI systems that are reliable, dependable, and trustworthy.
  5. Strategy 5: Develop shared public datasets and environments for AI training and testing. The depth, quality, and accuracy of training datasets and resources significantly affect AI performance. Researchers need to develop high-quality datasets and environments and enable responsible access to them, as well as to testing and training resources.
  6. Strategy 6: Measure and evaluate AI technologies through standards and benchmarks. Essential to advancements in AI are standards, benchmarks, testbeds, and community engagement that guide and evaluate progress in AI. Additional research is needed to develop a broad spectrum of evaluative techniques.
  7. Strategy 7: Better understand the national AI R&D workforce needs. Advances in AI will require a strong community of AI researchers. An improved understanding of current and future R&D workforce demands in AI is needed to help ensure that sufficient AI experts are available to address the strategic R&D areas outlined in this plan.

On February 11, 2019, United States President Donald Trump signed Executive Order 13859, announcing the American AI Initiative, the United States’ national strategy on AI.[2] These efforts show that governments are taking the potential of AI seriously and recognizing the need both for policies to govern AI and for initiatives to advance its use.

European Commission Ethics Guidelines for Trustworthy AI

The European Commission convened a High-Level Expert Group on Artificial Intelligence to draft ethics guidelines for trustworthy AI. According to the guidelines, trustworthy AI should be (a) lawful, respecting all applicable laws and regulations; (b) ethical, respecting ethical principles and values; and (c) robust, both from a technical perspective and with regard to its social environment.[3]

Asimov’s Three Laws of Robotics

Isaac Asimov was a famous and influential writer of robot stories. He devised an idealized set of rules to prevent robots from harming humans, although these rules are not used by actual roboticists. They are Asimov’s “Three Laws of Robotics,” which govern the behavior of robots in his fictional world.[4] The laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Principles for Designers, Builders, and Users of Robots

In 2011, the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council of Great Britain jointly published a set of five ethical “principles for designers, builders, and users of robots” in the real world based on a September 2010 research workshop:[5]

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

Nadella’s Six Principles for AI

Satya Nadella, CEO of Microsoft, laid out six principles and goals he believes AI research must follow to keep society safe.[6] Nadella’s principles are not a direct analog of Asimov’s laws. They are:

  1. AI must be designed to assist humanity. Machines that work alongside humans should do dangerous work like mining but still “respect human autonomy.”
  2. AI must be transparent. People should have an understanding of how technology sees and analyzes the world.
  3. AI must maximize efficiencies without destroying the dignity of people. We need broader, deeper, and more diverse engagement of populations in the design of these systems.
  4. AI must be designed for intelligent privacy. There must be sophisticated protections that secure personal and group information.
  5. AI must have algorithmic accountability so that humans can undo unintended harm.
  6. AI must guard against bias. Proper and representative research should be used to make sure AI does not discriminate against people as humans do.

[1] “National Artificial Intelligence R&D Strategic Plan,” NITRD, accessed October 25, 2019, https://www.nitrd.gov/news/national_ai_rd_strategic_plan.aspx.

[2] “Executive Order on AI,” The White House, accessed October 25, 2019, https://www.whitehouse.gov/ai/executive-order-ai.

[3] “Ethics Guidelines for Trustworthy AI,” European Commission, accessed December 28, 2019, https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

[4] Susan Schneider, *Science Fiction and Philosophy: From Time Travel to Superintelligence* (Hoboken, NJ: John Wiley & Sons, 2016), 297.

[5] “Ethical Principles for Designers, Builders and Users of Robots,” accessed December 28, 2019, http://www.historyofinformation.com/detail.php?id=3653.

[6] James Vincent, “Satya Nadella's rules for AI are more boring (and relevant) than Asimov's Three Laws,” The Verge, June 29, 2016, accessed October 28, 2019, https://www.theverge.com/2016/6/29/12057516/satya-nadella-ai-robot-laws.
