Synopsis

Both the US Federal Trade Commission (FTC) and the European Union (EU) made headlines this week with moves to curb misuse of Artificial Intelligence (AI). Which is more important? It depends.

The EU released its long-awaited set of AI regulations. The regulations are wide-ranging, with restrictions on mass surveillance and on the use of AI to manipulate people. The FTC also issued guidance on the use of AI. Which one deserves more of your attention? Focus on the FTC guidance now and keep a close watch on the EU for later.

The EU regulations start out simply enough:

“Faced with the rapid technological development of AI and a global policy context where more and more countries are investing heavily in AI, the EU must act as one to harness the many opportunities and address challenges of AI in a future-proof manner.”

Then they go on for 108 pages.

The FTC starts out with similar themes:

“Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media, and more. But research has highlighted how apparently “neutral” technology can produce troubling outcomes – including discrimination by race or other legally protected classes.”

The FTC article is a mere three pages, and it contains more substance. For US organizations, it demands immediate attention. The FTC cites three pieces of law – Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act. Then it brings the smack: “The result may be deception, discrimination – and an FTC law enforcement action.” I’ll highlight the practical steps here:

  • Start with the right foundation.
  • Watch out for discriminatory outcomes.
  • Embrace transparency and independence.
  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results.
  • Tell the truth about how you use data.
  • Do more good than harm.
  • Hold yourself accountable – or be ready for the FTC to do it for you.

Meanwhile, the EU has also been active on the AI front. You say you’re not subject to EU law? Here’s the rub: most laws are local – except in the digital realm. When the EU comes up with a new tech regulation, it can quickly spread around the world. Global companies adopt its typically strict rules for all their products and markets to avoid complying with multiple regimes. Other governments echo the EU’s rule book to help local firms compete. The textbook example is the EU’s General Data Protection Regulation (GDPR), which came into force in 2018 and swiftly became the global standard.

At least 175 countries, firms and other organizations have drawn up lists of ethical principles for AI. But few of these describe:

  1. how such things as “robustness” or “transparency” can be objectively measured,
  2. how they can be achieved in practice, let alone
  3. how they can be backed up by enforceable laws.

Rather than regulating all applications of AI, the EU’s rules are meant to focus on the riskiest ones. Some will be banned outright, including services that use “subliminal techniques” to manipulate people. Others, such as facial recognition and credit scoring, are considered “high-risk” and so subject to strict rules on transparency and data quality. As with GDPR, penalties for violations are stiff: up to €30m ($36m) or 6% of global revenues, whichever is higher. For a firm as big as Facebook, whose 2020 revenues were roughly $86bn, that would come to more than $5bn.

What’s next? Hopefully a bit of trans-Atlantic cooperation to harmonize both approaches into a common regulatory framework.

Michael Conlin
Chief Technology Officer
Phone: (703) 216-5856
michael.conlin@definitivelogic.com