Global Data Science Forum

What Makes AI for the Government and Public Sector Different?

  • 1.  What Makes AI for the Government and Public Sector Different?

    Posted Tue July 21, 2020 10:19 AM

    What Makes AI for the Government and Public Sector Different?

    The Government and Public Sector (including healthcare and education) deal with matters of utmost importance to our lives, such as keeping us healthy and safe, our environment viable, and our economy sound.  This has certainly been highlighted by the coronavirus outbreak.  And we, as citizens, expect the government and public sector to use whatever capabilities can be brought to bear to accomplish these important tasks.  That includes AI, which is viewed as having great promise for everything from providing strategic advantage for our national defense to speeding the creation of coronavirus vaccines and better treatments for COVID-19 symptoms.

    However, as Spiderman says, "With great power comes great responsibility."[i]

    The Government and Public Sector (hereafter referred to as the Public Sector) operate in the public interest.  Their use of AI will be held to a higher standard than that of commercial organizations, since what the Public Sector does must support the public good.  Contrast that with the private sector, which performs many valuable functions but operates in the best interest of its shareholders and owners.  The private sector chooses to use AI to accomplish its business goals and to maximize profit while staying in compliance with applicable laws and regulations.

    What are those higher standards?

    The standards will depend on which public sector organization or agency is involved.  Agencies are governed by laws, regulations, and procedures that determine what is acceptable or unacceptable, whether decisions are made by humans or by AI running on machines.  For example, the Administrative Procedure Act (APA) governs many federal agencies and requires an agency that makes a decision affecting a citizen's rights to be able to explain why the decision was made.  The problem with AI is that many algorithms, such as neural networks, are opaque, although IBM and other companies are working on "Explainable AI"[ii] that can help.  In addition to explaining the algorithm, explaining the decision requires transparency about what data was used for the decision.  This too can be handled by an "Explainable AI" system.
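    As a toy illustration of the kind of transparency an agency needs (a hypothetical sketch, not the API of AIX360 or any product; all feature names and weights are invented), an inherently interpretable linear scoring model lets a decision be attributed to individual input features:

```python
# Toy sketch: per-feature contributions for a linear eligibility score.
# All names, weights, and the threshold are hypothetical illustrations.

def explain_linear_decision(weights, features, threshold):
    """Return the decision, the total score, and each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "deny"
    return decision, score, contributions

weights = {"income": 0.5, "years_employed": 1.0, "prior_defaults": -2.0}
applicant = {"income": 4.0, "years_employed": 3.0, "prior_defaults": 1.0}

decision, score, contributions = explain_linear_decision(weights, applicant, threshold=2.0)
print(decision)        # which way the decision went
print(contributions)   # why: each feature's share of the score
```

    An auditor (or an affected citizen) can then see that, say, a prior default pulled the score down by a stated amount, which is exactly the kind of explanation the APA contemplates.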

    Laws and regulations also require the public sector not to discriminate or be biased in decision making.  Mathematically, it is possible to test that the training sets used to create machine learning algorithms are balanced and that the resulting decisions are not biased against a particular class.  Several AI vendors implement these functions (IBM's Watson OpenScale is an example).  However, there are complexities in public sector decision making, such as changing interpretations of regulations, that must be considered over time.  The ACUS AI report by the Stanford and NYU Law Schools[iii] suggests always requiring a certain set of decisions to be made by humans, which can then be compared to the decisions made by machines as a form of benchmarking for anomalies.
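    One simple form such a bias test can take is a disparate-impact check: compare approval rates across groups and flag large gaps for human review.  The sketch below is a hypothetical, plain-Python illustration (not the Watson OpenScale API); the records and the 0.8 review threshold, echoing the well-known "four-fifths rule," are invented for the example:

```python
# Toy sketch: checking a set of decisions for disparate impact across groups.

def approval_rates(records):
    """records: list of (group, approved) pairs -> approval rate per group."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(records, group_a, group_b):
    """Ratio of the lower approval rate to the higher; 1.0 means parity."""
    rates = approval_rates(records)
    lo, hi = sorted([rates[group_a], rates[group_b]])
    return lo / hi

# Hypothetical decisions: group A approved 3 of 4, group B approved 2 of 4.
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", True)]

ratio = disparate_impact(records, "A", "B")
print(round(ratio, 2))   # flag for human review if below ~0.8
```

    Running the same check over the human-made benchmark decisions the ACUS report recommends would give an agency a baseline against which the machine's decisions can be compared.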

    The U.S. Department of Defense has established for itself a set of ethical principles[iv].  While these may not apply to all public sector organizations, they provide a good framework for areas to consider:

    1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
    2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
    3. Traceable. The Department's AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
    4. Reliable. The Department's AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
    5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

    To answer the question in the title: AI in the public sector must not only strive to achieve the advantages of AI in finding subtleties in the data, massively scaling out decision-making, and reducing the time to decision; it must also ensure that the above standards and principles are upheld.  Even in stressful times like today, which call for the use of our most powerful leading-edge technologies, the public sector must continue to earn the public's trust by employing AI responsibly.

    Note:  If you are working on an AI project in the government or public sector, consider submitting a paper to the AAAI Fall Symposium on AI in Government and Public Sector Applications.  The Call for Papers is at https://sites.google.com/view/aaaifss19aigov/home  

    fstein@us.ibm.com

     

    [i] According to Wikipedia, this expression may have originated in 1793 at the French National Convention https://en.wikipedia.org/wiki/With_great_power_comes_great_responsibility

    [ii] https://aix360.mybluemix.net/

    [iii] Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, Report Submitted to the Administrative Conference of the United States

    [iv] https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/ Feb 24, 2020



    ------------------------------
    Frank Stein
    Director, A3 Center
    IBM Public & Federal Sector
    ------------------------------


  • 2.  RE: What Makes AI for the Government and Public Sector Different?

    Posted Wed July 22, 2020 04:27 AM
    Edited by ALEX FLEISCHER Wed July 22, 2020 04:28 AM
    Thanks for the sentence about Spiderman.

    Up to now I used to give credit to Spiderman, as can be read in

    With great power comes great responsibility (Spiderman)

    I'll be happy to mention the French Revolution from now on ...

    NB:

    I am French

    ------------------------------
    ALEX FLEISCHER
    ------------------------------



  • 3.  RE: What Makes AI for the Government and Public Sector Different?

    Posted Thu July 30, 2020 04:34 PM
    I didn't realize that wasn't from Spiderman until I looked it up for the attribution.

    ------------------------------
    Frank Stein
    ------------------------------