In the quest to achieve the fundamental goal of Artificial Intelligence - creating a computer that is indiscernible from a human brain - there are, and will continue to be, many concerns, some of which are categorized in this section. But the ultimate concern remains: is humanity prepared to exist in a world where machines are smarter than humans? Science Fiction has long treated this enigma as a quintessential theme, sometimes going so far as to ask: "if this goal is reached, will humans even be necessary?" Now Science Reality is endeavoring to achieve that goal and, at the same time, to alleviate the suspicions and answer the questions that AI raises.


Machine Learning requires a large, clean, problem-relevant training data set. Cleaning (removing duplicates, correcting bad data, etc.) is an onerous, human-intensive data preparation process. Moreover, just creating a large data set relevant to the problem for which the machine learning is intended is often, in and of itself, problematic. Several companies are doing research and building tools that apply AI to make future AI systems better at these tasks. The goal is to autonomously clean, augment, normalize, aggregate, and label raw data based on previously learned data patterns and successful ML projects. Additionally, many ML models are trained on crowdsourced data sets, and AI tools run largely on open-source software. Both the tools and the training data sets can be susceptible to "Trojan attacks" that embed "triggers" to manipulate training data and cause AI tools to misidentify patterns or objects such as faces or stop signs, make erroneous predictions or decisions, etc. As one counter to this problem, IARPA and private-sector participants in the TrojAI program are focused on building tools that predict whether Trojans are present in AI image classification tools.
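The basic cleaning steps described above - deduplicating records, dropping corrupt values, and normalizing fields - can be illustrated with a minimal sketch. The record structure and field names ("id", "label", "value") below are hypothetical, chosen purely for illustration:

```python
# Minimal data-cleaning sketch: deduplicate, drop bad rows, normalize labels.
# Field names ("id", "label", "value") are hypothetical examples.

def clean(records):
    seen = set()
    cleaned = []
    for rec in records:
        key = rec.get("id")
        if key is None or key in seen:
            continue  # drop duplicates and rows missing an identifier
        value = rec.get("value")
        if not isinstance(value, (int, float)):
            continue  # drop rows with corrupt / non-numeric values
        seen.add(key)
        cleaned.append({
            "id": key,
            "label": rec.get("label", "").strip().lower(),  # normalize text
            "value": float(value),
        })
    return cleaned

raw = [
    {"id": 1, "label": " Oak ", "value": 4.2},
    {"id": 1, "label": "Oak", "value": 4.2},     # duplicate id - removed
    {"id": 2, "label": "Pine", "value": "bad"},  # corrupt value - removed
    {"id": 3, "label": "Birch", "value": 7},
]
print(clean(raw))
```

Even this toy version shows why the task is onerous at scale: every rule (what counts as a duplicate, what counts as corrupt) encodes a human judgment, which is exactly what the AI-assisted cleaning tools above aim to learn automatically.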


Machine Learning may also produce answers that are wrong or at best inaccurate, because the model has learned explicit patterns that existed only in the training data set and not in reality. The result can be critical in areas such as medical research or patient care. Knowing this is a possibility, the questions that arise are: can the findings be trusted, and can valid predictions be made from the results? The challenge then becomes how to reproduce the results from another data set or to check them in another manner, i.e. how can AI-produced results be trusted?


Humans can't always take the time or effort to explain, in detail, the thought processes that lead to their actions or decisions. We sometimes intuitively trust our instincts or judgements in what appears to be a leap of faith. We are already building machines that "think" and make decisions in ways different from a human's, and in ways the creators of those machines don't completely understand. AI is reliable and trustworthy in areas where the results can be checked by humans - e.g. the visual recognition and tally of the number of different species of trees in a forest. Exploratory algorithms doing research against very large data sets for currently unknown relationships or patterns are difficult to verify. The types of AI that result in scientific discoveries will remain risky until the computer programs are better at assessing and communicating their degree of uncertainty, e.g. "I don't know", "it's not clear", or "this grouping looks good, but these are less certain". Researchers are beginning to address this problem by designing uncertainty-measuring schemes and protocols to estimate the accuracy and reproducibility of their new observations.
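One simple way a program can communicate "I don't know" is an abstention rule: report a prediction only when the model's confidence clears a threshold. The sketch below assumes the model has already produced class probabilities; the class names and the 0.8 threshold are illustrative choices, not a standard:

```python
# Abstaining-classifier sketch: report "I don't know" when the top class
# probability falls below a confidence threshold (0.8 is illustrative).

def predict_with_abstention(probs, labels, threshold=0.8):
    """probs: list of class probabilities; labels: matching class names."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return "I don't know"  # uncertainty is communicated, not hidden
    return labels[best]

labels = ["oak", "pine", "birch"]
print(predict_with_abstention([0.95, 0.03, 0.02], labels))  # confident case
print(predict_with_abstention([0.40, 0.35, 0.25], labels))  # uncertain case
```

The design point is that the uncertain case returns an explicit abstention rather than its best guess, which is the behavior the paragraph above argues scientific-discovery AI will need before its results can be trusted.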


Prometheus liberated mankind by introducing the flame of knowledge; DARPA is working to assure that Machine Learning and other AI technologies are accountable and containable so that they do not spread like wildfire. Accountability means knowing what conditions allow the machine to do its job unsupervised and what conditions require supervision. DARPA's Competency-Aware Machine Learning (CAML) Program is attempting to determine the strategies AI uses to reach decisions and how to introduce control by making the technology accountable and trustworthy. In much broader terms, CAML is focused on ML systems that can self-assess their task competency, strategy, algorithms, and decisions and present the results in human-understandable explanations.

As discussed under Trust, public accountability, documentation, and complete transparency need to be the rule and are necessary for legal purposes within a system of laws, judges, and courts established for humans. Machine autonomy must come with greater responsibility and the capacity to explain decisions. To accomplish this goal, AI researchers and developers must focus not only on performance and profit, but on putting human values at the core of AI systems. While this is a difficult adjustment, ultimately software and hardware manufacturers will understand that the growth of AI relies on trust, accountability, and the creation of a new norm that includes established human values.


We cannot just assume that AI will be benign, used only for good, and of benefit to all. Artificial Intelligence by definition does not have a conscience, and the technology is open for anyone to use, can be used in any manner conceived (including some that have not yet been), and may produce unintended consequences (see the Internet). The result is that AI makes humans vulnerable in ways not yet considered, and AI itself is vulnerable to AI. As previously noted, IARPA is pursuing a solution to the Trojan attack issue in Machine Learning training sets and deep neural networks. Biometric data (fingerprints, facial recognition) is not as safe as we would like to believe and is also susceptible to similarly sophisticated Trojan attacks. "Weaponizing" AI is already well underway on all sides at the nation-state level; dealing with this is more of an ethical and political issue. With currently available AI technologies and the access and knowledge to use them, security professionals believe they have new weapons to counter many IT security threats (cyberattacks, hacking, phishing, information theft). Bad actors and criminals, however, have the same access and knowledge, and that may eventually just elevate the playing field to a new level. Social "Deepfakes" are a dangerous problem because they can change the idea that seeing is believing, using AI techniques to manipulate pictures and videos in multiple ways, including: adding new or misleading context around the event; editing and splicing the event to make it appear different; using AI to produce biased or computer-generated images/text; adding backgrounds; and doctoring audio. Research on identifying and countering this capability is underway, but it lags dramatically behind the ease with which AI-driven "Deepfakes" can be produced, especially in the ubiquitous social media ecosystem and especially in relation to elections.


Science Fiction has a long history of questioning AI Ethics; Asimov's Three Laws of Robotics are a prime example. AI today has reached the first stage in the journey toward solving the many ethical complexities posed by AI - recognizing that there is an AI Ethics problem. Some of the ethical solutions may come from Standards, Regulations, Trust, Accountability, Legislation, and Leadership as they are applied. But AI Ethics run deeper, are multifaceted, and are not only mechanical but, more importantly, societal. Since our future society will be deeply entwined with and dependent on AI, the second stage of the journey, in which the conversations are just beginning, is to determine what questions should be asked, which are the right questions to ask, and who should answer those questions and how. World-wide, various individuals and corporate, academic, government, and public organizations are priming the conversations and striving to find not just an answer, but the right answer - for example, Michael Sandel at Harvard, numerous TED talks, the Dalai Lama Center for Ethics, and many more. Some of the questions being pursued, but certainly not all, can be roughly characterized as Universal and Societal.

Universal questions: What is ethical AI action? What will guide our AI programs to act for our greater good? How can AI personnel become sensitive to ethical judgments and values and bake them into the fabric of the projects they're working on? How can we ensure that these values take precedence over those of profit or immediacy? Do the creators of AI understand the full realm of technologic, economic, and social impacts, and how are they responding?

Societal questions: Companies controlling AI have influence over how society thinks and acts - how can/should the development and operation of AI applications be regulated? What fundamental rights do humans have to question the AI data sets, algorithms, and processes that affect us on a daily basis? How do we define privacy in an AI world dependent on and profiting from vast data sets in innumerable ways, and how do we place the consumer in control of his/her data? In the long term AI, like the industrial revolution, may put millions out of the jobs they currently hold (and may also create millions of new and higher-skilled jobs) - what plans are being made now? When (not if) AI becomes smarter than humans, we'll want to be partners - how do we ensure that AI is not our enemy? Will there be a point when AI is capable of evolving without human control or guidance - and if so, is that acceptable? How do humans communicate the golden rule to AI?


The singularity is the point at which current AI technology becomes Artificial General Intelligence (AGI) - in other words, systems that are smarter than humans. Hypothetically, the singularity would create an autonomous AGI self-improvement cycle that would rapidly far surpass human intelligence. There is no particular reason, other than the human inclination to anthropomorphize, that the resulting AGI would have a humanistic intelligence structure. Any singularity scenario would generate an unparalleled and challenging change in human society. At this point in the evolution of AI, the singularity is well into the future, but it is not too soon to speculate on potential if/when scenarios. Simply put, a benign singularity scenario would create a human-AGI partnership, while a dystopian singularity scenario would replace humanity with super-human-intelligence humanoid robots. It is not perfectly clear at this point what needs to be done to steer the course toward a benign singularity, but working hard to resolve the issues listed above is a beginning. These concerns are being recognized, and strategies to address them and, hopefully, prevent a dystopian singularity are forming.



Addressing the concerns listed above with a four-pronged effort - Standards, Regulations, Legislation, and Leadership - is of immediate and paramount importance.


One of the fundamental factors needed to foster the development of trust in, confidence in, and broad adoption of AI is the construction of a basic, understandable agreement on a set of standards, definitions, performance metrics, and tools. Several efforts in different areas are underway to accomplish this goal, and real progress is being made. For example: 1) Executive Order (EO) 13859, dated 11 Feb 2019, assigns NIST the arduous task of creating within 180 days "a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI Technologies". On May 1st, 2019, NIST issued a Request for Information (RFI) asking federal agencies, the private sector, academic institutions, nongovernmental organizations, and other stakeholders with an interest and expertise in AI and related standards to respond with ideas, thoughts, and comments toward achieving U.S. leadership in AI standards. 2) DHS, a government agency with a high interest in AI technologies, is using and developing AI standards to mitigate risks, facilitate interoperability, identify bias or deviation from a norm or baseline, help measure progress toward a goal, and set uniform requirements.
3) Representing the private sector, the Institute of Electrical and Electronics Engineers (IEEE) has issued a definition of AI standards as "documents that establish specifications and procedures designed to ensure the reliability of the materials, products, methods, and/or services people use every day [to] ensure product functionality and compatibility, facilitate interoperability and support consumer safety and public health". 4) In early 2017 an International Standards committee for the AI ecosystem was formed, with working groups assigned to: a) Foundational Standards - development of standards for AI concepts and terminology and a framework for ML systems; b) Trustworthiness - investigating approaches to establish trust in AI systems through transparency, verifiability, "explainability", and controllability; and c) Use Cases and Applications - categorizing AI domains (e.g. social networks, embedded systems) and contexts (e.g. financial, healthcare).


A second fundamental factor required to trust, develop, and grow AI is regulation. Regulation development is also covered in the Executive Order and applied to NIST efforts. The Office of Management and Budget (OMB) has been charged with issuing guidance to regulatory agencies, and those agencies (e.g. the Federal Reserve System, Occupational Safety and Health Administration, Consumer Product Safety Commission, etc.) are to develop plans for AI usage within their purview. Since the future of manufacturing is heavily dependent on AI, the many regulations created by these agencies will impact factories, supply chains, trade, farmers, etc. The result is that the private and government sectors will have to work closely together to ensure that appropriate and workable regulations are put in place.

But a higher level of AI regulation requires deeper, broader, and exceptionally careful thought to avoid impeding AI growth. Several recent discussions are taking the conversation to this higher level and have based the over-arching concept for regulating AI on the question: what constitutes harm when AI is involved? To guide the regulation of AI, practical framework proposals were added to this concept. Paraphrased, these frameworks would include some (but by no means all) of those currently in discussion: 1) do not weaponize AI; 2) since AI is increasingly and convincingly interacting vocally with humans (e.g. Siri and hotel concierges), and is capable of impersonating people with tweets, fake news (e.g. OpenAI's GPT-3), and Deepfakes, AI must identify itself at all times as not human; 3) explicit permission must be obtained before an AI system can store, use, and/or disclose what a human would consider personal or confidential information (e.g. Siri conversations, Roomba home maps, etc.); 4) AI must not amplify prejudices or biases already existing in our systems and data sets, especially when making predictions of future actions or decisions. Mathematical models exist to check that algorithms do not introduce bias, and regulations must ensure that they are used.
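One of the simplest checks of the kind such mathematical models provide is demographic parity: comparing the rate of positive predictions across groups. The sketch below is illustrative, not a regulatory standard; the group data is invented, and the 0.8 cutoff merely echoes the common "four-fifths" rule of thumb:

```python
# Demographic-parity sketch: compare positive-prediction rates across groups.
# The group data is invented; the 0.8 cutoff echoes the common "four-fifths"
# rule of thumb and is used here purely for illustration.

def selection_rate(predictions):
    """Fraction of positive (1) predictions in a group."""
    return sum(predictions) / len(predictions)

def parity_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = sorted((ra, rb))
    return lo / hi if hi else 1.0

group_a = [1, 1, 0, 1, 0]  # 60% positive predictions
group_b = [1, 0, 0, 0, 1]  # 40% positive predictions
ratio = parity_ratio(group_a, group_b)
print(f"parity ratio = {ratio:.2f}, flagged = {ratio < 0.8}")
```

A regulator-facing audit would of course need far more than a single ratio (multiple metrics, statistical significance, context of use), but even this toy check shows that bias testing can be made concrete and repeatable, which is what a regulation could reasonably mandate.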


The third factor needed to frame AI is legislation. Laws outline the context and boundaries for behavior in much the same way as standards and regulations, but they also codify and provide the threat of investigation, arrest, trial, and penalty for misbehavior. The effort to place legal boundaries around AI is just beginning to take shape. This effort must carefully delineate the line between too much constriction of the growth of AI and too little. Laws that currently apply to private citizens, corporate entities, and government agencies must also, where applicable, be written to ensure that AI systems are subject to the same laws as humans, so that accountability, responsibility, etc. are associated with the appropriate entity. This includes changing laws to ensure that not understanding or anticipating something an AI system did cannot be used as an excuse. At the Federal level, legislation tends to address broad issues, and many laws were passed as sections of the 2020 National Defense Authorization Act (NDAA).
For example: a) Creating a National Cloud Computing System for AI Research (National AI Research Resource Task Force Act of 2020) to investigate the feasibility of establishing a national cloud computing system for AI research; b) Strengthening the Military's AI Strategy (The Artificial Intelligence Standards and National Security Act), which requires DOD to report to Congress on its role in the development of AI standards, including assessing how AI standards can improve national security; c) Assessing the Role of China in International Standards Setting Organizations (Ensuring American Leadership over International Standards Act), which directs NIST to examine Chinese government policies and standards development for emerging technologies including AI; d) Assessing the Risk Deepfakes Pose to National Security (the Deepfake Report Act), which directs the Department of Homeland Security (DHS) to conduct an annual study of deepfakes, the technology available to generate them, and the available countermeasures. States and cities are appointing committees to study AI in many areas and addressing the need for AI laws. For example: a) a California law (Bot Disclosure) requires companies to disclose the use of AI bots to communicate online with humans; b) Washington State is working on legislation requiring any government agency using automated decisions (algorithms) to notify individuals whose information is used and to be able to explain the basis for those decisions; this law would ensure that discrimination does not impact the constitutional or legal rights of residents; c) Washington State also passed a law regulating government use of facial recognition software, requiring agencies to report on its use and obtain a warrant for investigative use unless there is an emergency. International lawmaking is also undertaking the complexity of AI guardrails.
For example: a) the European Union's Commission on AI has been working steadily on an Ethics Guidelines for Artificial Intelligence paper defining AI oversight of high-risk applications. This poses difficult questions, such as defining what makes an AI system high-risk, determining what specific requirements that would necessitate, deciding how to enforce a new law, and deciding who is accountable for AI damage. This legislative effort risks creating a digital boundary between borders: restrictions stricter than others are willing to adhere to might dampen trade and reduce the effectiveness of the law. In conclusion, 2021 is a possible turning point for legislation of all types: curbing the use of facial recognition, expanding FTC authority to curtail harmful AI practices, participating in the working groups of the Global Partnership on AI moving toward more principled use of AI, etc.


The fourth factor needed to provide structure to the development of AI is leadership. It can be exhibited in strategic plans, reports, white papers, road maps, committee hearings, working groups, guidelines, policy statements, and Presidential Orders. For example: a) the Artificial Intelligence Interagency Working Group (AI IWG) was formed in 2018 to coordinate Federal AI R&D across 32 participating agencies and to support activities tasked by both the NSTC Select Committee on AI and the Subcommittee on Machine Learning and Artificial Intelligence (MLAI); b) on February 11, 2019, Executive Order 13859 was issued launching the American AI Initiative, explaining how the Federal Government would play a role in facilitating AI R&D, promoting trust in AI, training the changing workforce, and collaborating with allies; c) on March 19, 2019, the US federal government launched a website to provide quick access to government AI initiatives; d) on Jun 27, 2019, the White House released an updated strategic plan for AI R&D to guide congressional AI legislation. Included in the guiding framework were: 1) make long-term investments in AI research, 2) develop effective methods for human-AI collaboration, 3) understand and address the ethical, legal, and societal implications of AI, 4) develop shared public datasets and environments for AI training and testing, and 5) expand public-private partnerships to accelerate advances in AI; e) in Sep 2019 the General Services Administration launched the AI Center of Excellence and the AI Community of Practice, providing support, coordination, and facilitation of the use of AI in Federal agencies; f) in December 2020 Executive Order 13960 was issued promoting the use of trustworthy AI in the Federal Government, establishing a common policy for implementing principles for adopting trustworthy AI to effectively deliver services to the American people; g) on Dec 4, 2020, an Executive Order to guide federal agencies' adoption of AI was signed.
The order provides nine principles to use when designing, developing, acquiring, or using AI. These guiding principles state that the AI under consideration must be: lawful; purposeful and performance-driven; accurate, reliable, and effective; safe, secure, and resilient; understandable; responsible and traceable; regularly monitored; transparent; and accountable.
