AI Systems Research

Our AI Systems research examines machine learning transparency, algorithmic bias, and AI safety mechanisms to develop governance approaches that promote responsible AI development and deployment.

Research Overview

Our AI Systems research program investigates the technical, ethical, and governance dimensions of artificial intelligence systems, with a particular focus on transparency, accountability, bias mitigation, and safety. We combine technical expertise in machine learning with perspectives from social sciences, ethics, and policy to develop comprehensive approaches to AI governance.

This research area connects directly to our Human Autonomy & Rights focus area, examining how AI systems can either enhance or undermine human agency, dignity, and rights. We work closely with policymakers, industry partners, and civil society organizations to translate research insights into practical governance frameworks and technical standards.

Current Research Projects

Algorithmic Impact Assessment Framework

Developing a comprehensive methodology for assessing the societal impacts of algorithmic systems before and during deployment, with particular attention to human rights implications and differential impacts across demographic groups.
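
To give a concrete flavor of the quantitative checks such an assessment can include, the minimal Python sketch below computes per-group selection rates and the "four-fifths rule" disparate impact ratio. The group labels and decision log are hypothetical placeholders, not project data.

```python
# Minimal sketch of one differential-impact check: compare favorable-decision
# rates across demographic groups. All records below are hypothetical.
from collections import defaultdict


def selection_rates(records):
    """Favorable-decision rate per group from (group, decision) pairs,
    where decision is 1 for a favorable outcome and 0 otherwise."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(records):
    """Lowest group selection rate divided by the highest. Ratios below 0.8
    are commonly flagged for review (the "four-fifths rule" heuristic)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())


# Hypothetical decision log: (demographic group, favorable decision?)
log = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60
print(selection_rates(log))         # {'A': 0.6, 'B': 0.4}
print(disparate_impact_ratio(log))  # ~0.67, below the 0.8 review threshold
```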

Explainable AI for High-Risk Applications

Investigating technical approaches to explainability in complex AI systems used in high-stakes domains such as healthcare, criminal justice, and financial services, with a focus on making explanations meaningful and actionable for affected individuals.
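
As one concrete example of the techniques in scope, the sketch below applies permutation feature importance, a model-agnostic post-hoc method available in scikit-learn, to a synthetic classification task. The dataset and feature names are illustrative assumptions, not the project's actual systems; in high-stakes use, global scores like these would be paired with per-decision explanations for affected individuals.

```python
# Minimal sketch: score each input feature by how much randomly shuffling it
# degrades held-out accuracy (permutation feature importance).
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical tabular task with four features, two of them informative.
X, y = make_classification(n_samples=1000, n_features=4, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Shuffle each feature 30 times and record the mean drop in test accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=30,
                                random_state=0)

for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: {mean:.3f} +/- {std:.3f}")
```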

AI Safety and Alignment Governance

Examining governance approaches for ensuring the safety and alignment of increasingly capable AI systems, including institutional mechanisms, technical standards, and international coordination frameworks for managing risks from advanced AI.

Participatory AI Development

Exploring methodologies for meaningful stakeholder participation in AI development processes, with a focus on including marginalized communities and ensuring diverse perspectives shape AI systems that affect them.

Research Methodology

Our AI Systems research employs a multidisciplinary approach that combines technical analysis, empirical studies, normative inquiry, and policy development:

  • Technical Analysis: Examining the capabilities, limitations, and potential impacts of AI systems through technical audits, benchmarking, and formal verification methods (see the sketch following this list)
  • Empirical Studies: Conducting case studies of AI deployment in various contexts, including interviews with stakeholders and analysis of outcomes across different populations
  • Normative Inquiry: Investigating ethical frameworks and principles for responsible AI development, drawing on philosophy, human rights law, and democratic theory
  • Policy Development: Translating research insights into concrete governance proposals, technical standards, and institutional mechanisms for responsible AI
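
As promised in the first bullet, here is a minimal sketch of a technical audit in its simplest form: breaking a benchmark metric down by subgroup rather than reporting only the aggregate, which can hide large gaps. The evaluation log is a hypothetical placeholder.

```python
# Minimal sketch: per-subgroup accuracy from (group, y_true, y_pred) triples.
from collections import defaultdict


def subgroup_accuracy(records):
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {g: correct[g] / total[g] for g in total}


# Hypothetical evaluation log: a healthy aggregate score masks a 30-point gap.
log = ([("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 +
       [("B", 1, 1)] * 60 + [("B", 1, 0)] * 40)
overall, by_group = subgroup_accuracy(log)
print(f"overall accuracy: {overall:.2f}")  # 0.75
print(by_group)                            # {'A': 0.9, 'B': 0.6}
```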

We prioritize collaborative research that engages diverse stakeholders, including technical experts, policymakers, civil society organizations, and communities affected by AI systems. This approach ensures our research addresses real-world governance challenges and produces actionable insights.

Featured Publications

Technical Report · April 2025

Algorithmic Bias in Criminal Risk Assessment: Patterns and Interventions

This technical report analyzes patterns of bias in widely used criminal risk assessment algorithms, documenting disparate impacts across demographic groups and proposing technical and governance interventions to mitigate these biases.
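
For readers new to the metrics involved, the sketch below computes one disparity measure central to this literature: the gap in false positive rates between groups, i.e. how often people who did not reoffend were nonetheless flagged as high risk. The decision log is a hypothetical placeholder, not data from the report.

```python
# Minimal sketch: false positive rate per group from
# (group, reoffended, flagged_high_risk) triples. Records are hypothetical.
from collections import defaultdict


def false_positive_rates(records):
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, reoffended, flagged in records:
        if not reoffended:             # only people who did not reoffend
            negatives[group] += 1
            fp[group] += int(flagged)  # ...but were flagged high risk
    return {g: fp[g] / negatives[g] for g in negatives}


log = ([("A", 0, 1)] * 20 + [("A", 0, 0)] * 80 +
       [("B", 0, 1)] * 45 + [("B", 0, 0)] * 55)
rates = false_positive_rates(log)
print(rates)                                      # {'A': 0.2, 'B': 0.45}
print(max(rates.values()) - min(rates.values()))  # 0.25 disparity
```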

White Paper · March 2025

Explainable AI in Practice: Implementation Guide for Public Sector Organizations

This white paper provides practical guidance for public sector organizations implementing explainable AI systems, including technical approaches, organizational processes, and legal considerations for meaningful transparency.
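
To illustrate the kind of structured transparency artifact such guidance points toward, the sketch below defines a simple "model card"-style record a public sector body might publish alongside a deployed system. All field names and values are illustrative assumptions, not a schema prescribed by the white paper.

```python
# Minimal sketch of a "model card"-style transparency record. Every field and
# value here is a hypothetical example, not a prescribed schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    system_name: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data_summary: str = ""
    known_limitations: list = field(default_factory=list)
    human_oversight: str = ""
    contact_for_appeals: str = ""


card = ModelCard(
    system_name="benefits-triage-v2",  # hypothetical system
    intended_use="Prioritize caseworker review of benefit applications.",
    out_of_scope_uses=["Automated denial of benefits without human review"],
    training_data_summary="2019-2023 application records, anonymized.",
    known_limitations=["Lower accuracy when income data is missing"],
    human_oversight="A caseworker reviews every flagged application.",
    contact_for_appeals="appeals@agency.example",
)

print(json.dumps(asdict(card), indent=2))
```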