By Jessica Fjeld

When it comes to AI, Big Tech wants a hand in developing regulation. In a January 20 piece for the Financial Times calling for the regulation of the technology, Google CEO Sundar Pichai argued that his company’s artificial intelligence principles could be used as a template for future laws. Brad Smith of Microsoft said the same in a talk at the World Economic Forum earlier this week.

Google and Microsoft are right that it’s time for government to step in and provide safeguards, and that regulation should build on the important thinking that’s already been done. However, looking only to the perspectives of large tech companies, which have already established themselves as dominant players, is asking the fox for guidance on henhouse security procedures. We need to take a broader view.

Rapid advances in machine learning technology, which falls under the general umbrella of AI, have lent urgency to questions about how to build and use AI systems responsibly, safely, and ethically. Criminal justice algorithms are racially biased, autonomous vehicles have been involved in fatal crashes, and algorithmic content moderation has contributed to a wave of disinformation efforts. The task of ensuring AI actually supports human rights and well-being has at times felt overwhelming, the questions unanswerable.

That hasn’t stopped a lot of people from trying to answer them. Alongside Google’s and Microsoft’s, principles for ethical AI have come from national governments and intergovernmental organizations, advocacy organizations, expert groups, and more.

Over the past year, I worked with a team of researchers to analyze AI principles from around the world, trying to see what they might have in common. We coded each principle in the 36 documents we ended up focusing on and uncovered eight key themes:

  • Fairness and nondiscrimination: AI systems shouldn’t reinforce social inequality—instead, they should promote inclusivity.
  • Accountability: Developers should plan for their technology’s impacts. Monitoring and auditing mechanisms need to be in place, and impacted individuals and populations should have access to adequate remedies.
  • Privacy: AI should respect privacy, both in sourcing the data that is used for development and in giving people agency over when and how their personal info is used to make decisions about them.
  • Transparency and explainability: We should know where AI systems are being used and how they reach the decisions they do.
  • Safety and security: AI systems should be tested to ensure they perform as intended and resist interference from unauthorized parties.
  • Professional responsibility: The people involved in the development and deployment of AI systems have an obligation to prioritize integrity, collaboration, professionalism, and foresight.
  • Human control of technology: To promote trust and respect autonomy, there should be human checks on AI, from review of important decisions to fail-safe mechanisms that kick in under exceptional circumstances.
  • Promotion of human values: We should be guided by our core values and the well-being of all humanity when we design and deploy AI systems.

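The coding step described above lends itself to a simple tally once documents have been annotated against the themes. The sketch below is purely illustrative, with made-up document names and theme assignments; it is not the study’s actual method or data, just a minimal example, in Python, of counting how many documents touch each theme.

```python
from collections import Counter

# The eight themes identified in the analysis.
THEMES = [
    "fairness and nondiscrimination",
    "accountability",
    "privacy",
    "transparency and explainability",
    "safety and security",
    "professional responsibility",
    "human control of technology",
    "promotion of human values",
]

# Hypothetical stand-in data: document name -> themes coded in that document.
# (The real study coded 36 documents; these entries are invented for illustration.)
coded_documents = {
    "Company A AI Principles": {"privacy", "accountability", "safety and security"},
    "Government B AI Strategy": {
        "fairness and nondiscrimination",
        "privacy",
        "transparency and explainability",
    },
}

def theme_frequencies(docs):
    """Count how many documents mention each of the eight themes."""
    counts = Counter()
    for themes in docs.values():
        counts.update(t for t in themes if t in THEMES)
    return counts

if __name__ == "__main__":
    total = len(coded_documents)
    for theme, count in theme_frequencies(coded_documents).most_common():
        print(f"{theme}: {count}/{total} documents ({count / total:.0%})")
```
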
The coherence of these various principles documents—from different regions and interest groups—suggests that social norms for AI are emerging.

Law and regulation originate in social norms, which makes Microsoft and Google correct to posit that these near-universal themes among AI principles are a good starting point for regulation. However, as we note in our paper, there’s a wide and thorny gap between being able to articulate goals for AI such as fairness, transparency, and safety, and writing rules that would govern the thousands of decisions, big and small, that result in any given technology being built and used responsibly.

One way to register that gap is to recognize the very divergent visions different organizations advance within these themes. For example, every document we looked at included some version of a fairness or nondiscrimination principle. But they call for different implementations. Some focus, for example, on forbidding the use of biased datasets—even though arguments that truly unbiased data don’t exist are pretty persuasive. Others call for greater diversity on development teams to ensure that a broader range of perspectives is baked into technologies from the start. Still others want to see AI used to uncover and remedy existing instances of discrimination. Regulators would need to parse these options carefully and decide which were appropriate.

In all, if advances in AI technology have landed us in unfamiliar territory, an analysis of AI principles might be the map we need to make sense of it all. But that’s only true if we look outside U.S.-based tech companies’ visions for the regulations that would best serve them. Principles from a broader range of stakeholders provide visibility into everything from the greatest risks that AI poses to vulnerable and marginalized populations to the key human values, such as self-determination, equality, and sustainability, that we should be seeking to protect.

AI principles are a map that should be on the table as regulators around the world draw up their next steps. However, even a perfect map doesn’t make the journey for you. At some point—and soon—policymakers need to set out the real-world implementations that will ensure that the power of AI technology reinforces the best, and not the worst, in humanity.


Jessica Fjeld is the assistant director of the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society and a member of the board of the Global Network Initiative. Lead author of the recent report “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI,” she focuses her research and legal practice on emerging technology, digital rights, and creativity.