Artificial Intelligence & Technology Law
Building on the idea of “foundational” issues, the ILA Committee on Artificial Intelligence (AI) and Technology Law aims to move beyond the traditional, reactive approach of addressing specific consequences of AI (e.g., algorithmic bias or privacy violations) and to examine the broader ways in which AI systems produce knowledge, make decisions, and influence governance. This approach recognises that AI technologies often function as epistemic agents, structuring and filtering information in ways that shape societal understandings and norms. Such systems do not merely automate tasks; they fundamentally alter how knowledge is constituted and decisions are made. Through processes such as data curation, algorithmic optimisation, and autonomous learning, they often serve as gatekeepers and arbiters of truth in domains ranging from scientific research to public policy. This shift raises critical questions for international law: What role should international legal frameworks play in scrutinising the methodologies and assumptions embedded in AI? How can the law ensure that such systems align with principles of transparency, accountability, and inclusivity? By focusing on these deeper epistemic issues, the Committee hopes to foster a forward-looking dialogue that ensures the progressive development of international law remains relevant in addressing the evolving influence of AI systems.
The Committee will engage with critical questions of justice, such as how AI systems can be designed and governed to uphold human rights, ensure fair access to technology, and prevent discrimination or exploitation. It will also examine how international legal mechanisms can hold actors accountable for harms caused by AI, including algorithmic bias, misuse in conflict settings, and cross-border issues such as jurisdictional disputes over AI-generated intellectual property or liabilities arising from autonomous systems operating across multiple jurisdictions. Furthermore, the Committee will explore how international legal norms can operationalise the principles of ethical AI governance, including fairness, inclusivity, and respect for cultural diversity. It will also consider mechanisms for fostering global cooperation to prevent AI-driven inequalities and to promote equitable access to technological benefits.
The Committee will focus on five key areas of study:
1. Global Norms, Governance, and Comparative Regulation
2. AI and Human Rights
3. Accountability and Cross-Border Governance
4. AI and Sectoral Legal Frameworks
5. Sustainable and Ethical AI Development