WSJ Opinion: The Economics of Regulating AI by Roland Fryer, March 20:

One of the key challenges of AI regulation is asymmetric information: regulators can't observe firms' true risk profiles or day-to-day behavior.

Current AI rules (IL's disclosure mandate, NY's RAISE Act, the EU AI Act) collapse into uniform compliance burdens that generate paperwork but reveal no information. Worse, they create perverse incentives: as Fryer notes, after IL banned discriminatory AI in hiring, firms began scrapping hiring algorithms that outperformed human judgment on meritocratic outcomes, because the legal exposure under vague, overbroad statutes isn't worth it. A regulation meant to reduce discrimination is increasing it.

The proposed solution is mechanism design: instead of surveillance-heavy, one-size-fits-all mandates, offer a menu of regulatory tracks that induces self-selection through incentive compatibility.

For example, consider a two-track system that lets an AI company reveal its true type (high-risk or low-risk). Track A gives lighter compliance in exchange for full transparency to a certified auditor: cheap if you're genuinely safe, ruinous if you're hiding something. Track B requires no transparency but imposes strict liability, with penalties calibrated to actual social cost. A risky firm mimicking "safe" gets destroyed under A's audit, so it rationally self-selects into B. Information is revealed through choices, not demanded through audits firms can game.
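The self-selection logic above can be sketched numerically. This is a toy model with hypothetical payoffs (the specific costs are my assumptions, not figures from the op-ed): the menu is incentive compatible when each firm type, maximizing its own payoff, picks the track the regulator intended for it.

```python
# Toy screening model: two firm types ("safe" / "risky") choose between
# Track A (audit) and Track B (strict liability). All numbers are
# hypothetical and chosen only to illustrate self-selection.

def payoff(firm_type: str, track: str) -> float:
    """Net payoff to a firm of the given type under each regulatory track."""
    BASE_PROFIT = 100.0
    if track == "A":
        # Track A: light compliance plus a certified audit. A safe firm
        # passes cheaply; a risky firm caught by the audit faces a
        # ruinous penalty.
        cost = 5.0 if firm_type == "safe" else 80.0
    else:
        # Track B: no transparency, but strict liability with penalties
        # calibrated to the expected social cost of harm.
        cost = 20.0 if firm_type == "safe" else 40.0
    return BASE_PROFIT - cost

def chosen_track(firm_type: str) -> str:
    """Each firm picks the track that maximizes its own payoff."""
    return max(["A", "B"], key=lambda t: payoff(firm_type, t))

print(chosen_track("safe"))   # safe firm prefers the audited track A
print(chosen_track("risky"))  # risky firm prefers strict-liability track B
```

With these numbers the safe firm nets 95 under A versus 80 under B, while the risky firm nets only 20 under A versus 60 under B, so each type separates and the regulator learns the type from the choice alone.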

The problem gets more complicated when hidden actions are layered on top of hidden types, so getting the screening menu designed right first is crucial. Another complication: sometimes companies don't even know their own type, e.g. startups still learning their own systems. These remain open problems for regulators and the public to think about.
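The "firm doesn't know its own type" complication can be illustrated by having the firm maximize an expected payoff over its belief about its own riskiness. Again a toy sketch with hypothetical costs (my assumptions: audit costs of 5/80 for safe/risky under Track A, liability of 20/40 under Track B): once choices depend on beliefs rather than types, the track choice reveals the firm's belief, not its actual type, and the clean separation can break down.

```python
# Toy model of a firm uncertain of its own type: it assigns probability
# p_risky to being the risky type and picks the track with the higher
# expected payoff. All cost numbers are hypothetical illustrations.

def expected_payoff(p_risky: float, track: str) -> float:
    """Expected net payoff given the firm's belief about its own type."""
    BASE_PROFIT = 100.0
    if track == "A":
        # Audit track: cheap if safe (5), ruinous if risky (80).
        expected_cost = 5.0 * (1 - p_risky) + 80.0 * p_risky
    else:
        # Strict-liability track: moderate either way (20 or 40).
        expected_cost = 20.0 * (1 - p_risky) + 40.0 * p_risky
    return BASE_PROFIT - expected_cost

def chosen_track(p_risky: float) -> str:
    return max(["A", "B"], key=lambda t: expected_payoff(p_risky, t))

print(chosen_track(0.1))  # fairly confident it's safe: picks the audit track
print(chosen_track(0.5))  # genuinely unsure: picks strict liability
```

With these numbers the firm prefers Track A only when p_risky is below roughly 0.27, so a startup that is actually safe but unsure of itself rationally chooses Track B, and the regulator can no longer read the type directly off the choice.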

Regulators are always in the dark, at least a little. The question is whether they build systems that reveal information or bury it deeper. If we want to regulate complex, fast-moving domains like AI—or even persistent ones like policing—we have to start with humility. We don’t know what’s happening inside the black box. But with the right tools, we can coax it to open just enough to align incentives, discourage harm and reward truth. In regulation, the greatest danger isn’t bad actors. It’s being arrogant enough to design policy as if we have perfect information when we don’t. (Professor Fryer)

Reference:

Fryer, R. (2026, March 20). The economics of regulating AI. The Wall Street Journal. https://www.wsj.com/opinion/the-economics-of-regulating-ai-7773e722