
AI Now Report 2018

date Apr 15, 2019
authors AI Now Institute
reading time 2 mins
category report

Recommendations:

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest.
  3. The AI industry urgently needs new approaches to governance. As this report demonstrates, internal governance structures at most technology companies are failing to ensure accountability for AI systems.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.”
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues.

The report addresses the following key issues:

  1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
  2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
  3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
  4. Unregulated and unmonitored forms of AI experimentation on human populations
  5. The limits of technological solutions to problems of fairness, bias, and discrimination

Black boxes

As numerous scholars have noted, one significant barrier to accountability is the culture of industrial and legal secrecy that dominates AI development. Just as many AI technologies are “black boxes”, so are the industrial cultures that create them. Many of the fundamental building blocks required to understand AI systems and to ensure certain forms of accountability – from training data, to data models, to the code dictating algorithmic functions, to implementation guidelines and software, to the business decisions that directed design and development – are rarely accessible to review, hidden by corporate secrecy laws.

What is needed next?

  1. From Fairness to Justice
  2. Infrastructural Thinking
  3. Accounting for Hidden Labor in AI Systems
  4. Deeper Interdisciplinarity
  5. Race, Gender and Power in AI
  6. Strategic Litigation and Policy Interventions
  7. Research and Organizing: An Emergent Coalition

Regulations

Regulation can only be effective, however, if the legal and technological barriers that prevent auditing, understanding, and intervening in these systems are removed.