
AI Regulation and Ethics: What Research Shows About Governance of Emerging Technology

Artificial intelligence has moved from a research domain to an embedded feature of consequential decision-making in healthcare, finance, criminal justice, hiring, and public benefits in a remarkably short period. That shift has created governance challenges that existing regulatory frameworks are poorly equipped to address. Research on the risks of AI systems, on bias and unfairness in deployed systems, and on the effectiveness of different regulatory approaches is developing rapidly, though it has not kept pace with deployment in high-stakes contexts. Understanding what this research shows is essential for policymakers, civil society, and the public navigating these debates.

Algorithmic bias in systems used for consequential decisions is one of the most extensively studied AI ethics problems. Audit research, which tests AI system outputs for systematic differences across demographic groups, has documented bias in a wide range of deployed systems. Studies of recidivism prediction algorithms used in criminal sentencing find that Black defendants are systematically assigned higher risk scores than white defendants with equivalent future recidivism outcomes. Facial recognition systems show substantially higher error rates for darker-skinned faces and for women than for lighter-skinned faces and men. Natural language processing models encode, and sometimes amplify, the biases present in their training data.

The causes of algorithmic bias are multiple and interact in complex ways. Training data that reflects historical patterns of discrimination produces models that replicate those patterns. Design choices about which variables to include and how to define success encode value judgments about which kinds of errors are acceptable. Optimization for performance metrics measured on majority populations can produce systems that work poorly for minority populations.
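The audit methods described above typically reduce to a simple comparison: compute an error metric separately for each demographic group and look for systematic gaps. A minimal sketch of such an audit, using invented toy data (the groups, predictions, and outcomes below are illustrative, not drawn from any real system):

```python
# Sketch of a simple algorithmic audit: compare false positive rates
# across demographic groups for a binary classifier's outputs.
# All data below is invented for illustration.

def false_positive_rate(predictions, outcomes):
    """FPR = fraction of true negatives incorrectly flagged positive."""
    negatives = [p for p, y in zip(predictions, outcomes) if y == 0]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

# Toy audit records: (model prediction, true outcome) per individual.
group_a = [(1, 0), (1, 0), (0, 0), (1, 1), (0, 1), (1, 0), (0, 0), (1, 1)]
group_b = [(0, 0), (0, 0), (1, 0), (0, 1), (1, 1), (0, 0), (0, 0), (1, 1)]

for name, records in [("group_a", group_a), ("group_b", group_b)]:
    preds = [p for p, _ in records]
    outcomes = [y for _, y in records]
    print(f"{name}: FPR = {false_positive_rate(preds, outcomes):.2f}")
# group_a: FPR = 0.60
# group_b: FPR = 0.20
```

A gap of this size, if it persisted across large samples, is the kind of disparity audit studies report; real audits also test other metrics (false negative rates, calibration) and control for base-rate differences.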
Research on bias mitigation finds that addressing one form of bias often trades off against other fairness criteria: no single technical fix satisfies all reasonable definitions of algorithmic fairness simultaneously.

Labor market effects of AI and automation are another area of active research. Occupation-level studies find that AI is automating a growing range of cognitive tasks previously thought to require human judgment, including legal document review, medical image analysis, and customer service. Research on the labor market effects of AI deployment finds them concentrated in specific occupations and income levels, with some evidence of wage stagnation or job displacement in automation-susceptible roles. The distributional effects of this disruption are a significant policy concern, though projections of the magnitude of displacement vary substantially across studies.

Regulatory approaches to AI have been studied comparatively across jurisdictions. The European Union's AI Act, a risk-based framework that imposes stricter requirements on high-risk applications, is the most comprehensive AI-specific regulation currently enacted. Because it was enacted so recently, research on its likely effects rests primarily on modeling and legal analysis rather than empirical evaluation. Studies of sector-specific AI regulation in healthcare, financial services, and consumer protection find that existing sectoral regulators can address some AI-specific harms through existing authorities, though gaps remain.

Transparency and explainability requirements for AI systems are frequently proposed as a regulatory response to algorithmic opacity.
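The conflict among fairness criteria noted earlier can be shown with a few lines of arithmetic. In this invented toy example, two groups are selected at identical rates (satisfying demographic parity), yet because their underlying base rates differ, their false positive rates diverge (violating equalized odds):

```python
# Illustration on invented toy data: a classifier can satisfy demographic
# parity (equal selection rates across groups) while violating equalized
# odds (equal false positive rates). Fairness criteria can conflict.

def selection_rate(preds):
    """Fraction of individuals the model selects (predicts positive)."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, outcomes):
    """Fraction of true negatives incorrectly selected."""
    negatives = [p for p, y in zip(preds, outcomes) if y == 0]
    return sum(negatives) / len(negatives)

# (prediction, outcome) pairs; the groups have different base rates.
group_a = [(1, 1), (1, 1), (1, 0), (0, 0)]
group_b = [(1, 1), (1, 0), (1, 0), (0, 0)]

pa, ya = [p for p, _ in group_a], [y for _, y in group_a]
pb, yb = [p for p, _ in group_b], [y for _, y in group_b]

print("selection rates:", selection_rate(pa), selection_rate(pb))
print("false positive rates:",
      false_positive_rate(pa, ya), false_positive_rate(pb, yb))
# selection rates: 0.75 0.75   (demographic parity holds)
# false positive rates: 0.5 0.667 (equalized odds fails)
```

Forcing the false positive rates to match instead would require different selection thresholds per group, which breaks demographic parity; this is the tradeoff that formal impossibility results in the fairness literature make precise.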
Research on the technical feasibility and practical utility of explainability techniques finds that current approaches can provide meaningful information about model behavior in some cases but are limited in their ability to deliver genuine understanding of complex neural network models. Research on whether explainability information actually helps affected individuals or regulators evaluate AI decisions is limited, but it raises concerns that technically accurate explanations may not be practically useful for challenging AI-based decisions.

Research on the institutional governance of AI within organizations finds that internal ethics review processes, AI ethics boards, and impact assessment requirements produce variable results depending on the authority, resources, and independence given to these oversight functions. Studies of tech company AI ethics practices find that internal governance often lacks the independence and authority to meaningfully constrain deployment decisions driven by commercial pressures. This pattern has prompted arguments for independent external oversight rather than reliance on self-regulation.

Political scientists and legal scholars have examined the intersection of AI governance and democratic accountability alongside these technical and organizational questions. Consequential decisions made by AI systems may be insulated from the accountability mechanisms that apply to equivalent decisions made by human government officials. Research on AI in public-sector decision-making asks whether existing administrative law requirements, including due process, non-discrimination, and reasoned explanation, can be applied to AI-assisted decisions, and what new frameworks might be needed when they cannot.