KU Leuven Concludes 5-Year AI Law & Ethics Summer School as EU Act Enters Full Force

Image Source: KU Leuven

KU Leuven has wrapped up the fifth and final edition of its Summer School on the Law, Ethics and Policy of Artificial Intelligence, capping a five-year effort to arm professionals with insights into AI's regulatory and moral minefields. The 10-day hybrid programme, running from 30 June to 9 July 2025 in Leuven, Belgium, drew participants from across Europe and further afield for in-person gatherings at the Faculty of Law alongside online streams, building on a cumulative total of more than 250 attendees over its run.

Organisers positioned the event against the backdrop of the EU AI Act, in force since August 2024, which sorts AI systems into risk tiers to curb harms while spurring safe innovation. With lectures from scholars, EU officials and field experts, the school spotlighted how philosophical roots inform today's policy battles, from algorithmic bias to cross-border governance.

Core Sessions Bridge Philosophy to Policy Realities

The curriculum kicked off with building blocks on AI's human side. On day one, Wannes Meert from KU Leuven's Faculty of Engineering and Leuven.AI delivered a crash course on artificial intelligence fundamentals, laying the technical groundwork. Roger Vergauwen of the Institute of Philosophy then took up the philosophy of AI, probing core debates such as machine consciousness and moral agency in automated choices. Nathalie Smuha, the academic director and a professor of law and criminology, followed with an ethics overview, unpacking dilemmas in AI deployment.

Later days turned to targeted challenges. Laurens Naudts from the University of Amsterdam led a module on AI and fairness, addressing bias mitigation across applications. Gloria González Fuster from Vrije Universiteit Brussel explored AI, race and gender, highlighting equity gaps in social algorithms. Sectoral lenses sharpened the focus: Rosamunde van Brakel from Vrije Universiteit Brussel covered AI and law enforcement on day seven, examining such tools' role in surveillance and decision-making. Day eight featured Smuha on AI and public services, dissecting administrative uses from welfare to urban planning.

A centrepiece came midway, on 4 July, with open public roundtables. From 14:30 CEST, Smuha chaired a global governance discussion with Carolina Aguerre from Universidad Católica del Uruguay on Latin American norms, Aziz Huq from the University of Chicago on US fragmentation, Zhiyu Li from Durham University on China's guideline-driven approach, and Jake Okechukwu Effoduh from Toronto Metropolitan University on African adaptations. Speakers offered snapshots of regional efforts, followed by moderated exchanges on comparative lessons for harmonising rules in a fast-shifting technological landscape. At 16:30, Lucilla Sioli, director of the EU AI Office, delivered a keynote on Europe's AI future under the Act, stressing adaptive regulation for human-centred outcomes, with an audience Q&A rounding out the session.

Participants who completed the programme received a certificate equivalent to three European Credit Transfer System (ECTS) credits, suited to postgraduates, lawyers and developers eyeing compliance roles.

Origins in AI's Early Ethical Reckoning

The school sprang from concerns in 2021 over AI's unchecked spread and was launched under Smuha's lead amid EU negotiations on the then-draft AI Act. It addressed a void: technical advances were outpacing ethical and legal literacy, as seen in cases like the Dutch childcare benefits scandal that came to a head in 2021, in which biased risk scoring wrongly accused thousands of families of fraud, triggering forced repayments and policy overhauls. KU Leuven's cross-faculty teams in law, philosophy and engineering filled this gap by blending theory with practice, with demand filling the available places each year.

No public rationale emerged for ending the series after 2025, though the decision aligned with the field's shift towards hands-on training in Act implementation.

Echoes in Policy and Practice

Viewed from outside, the programme mirrored Europe's bid to tether AI's speed to safeguards, influencing how firms and governments tackle thorny issues. A January 2025 blog post by Joanita Nagaba and colleagues, tied to the school, scrutinised the AI Act's handling of high-risk systems, noting fuzzy lines in value-chain duties, from providers to deployers, that could snag accountability. Such gaps, the analysis argued, might tilt burdens towards resource-strapped smaller developers, even as tech giants navigate the rules via lobbying clout.

Environmental angles, often sidelined in AI hype, gained airtime via a February post by Sara Garsia, a KU Leuven doctoral researcher. It flagged AI's footprint, with ICT consuming seven per cent of global electricity in 2022 and projected to hit 13 per cent by 2030, as clashing with the EU's twin green and digital transition, in which data centres evade full carbon scrutiny despite their water and rare-earth demands.

These threads amplified the school's impact: alumni now shape briefs in Brussels and beyond, with sessions such as the law enforcement module informing codes of practice under the Act due in 2026.

Horizons: Patchwork Rules and Persistent Puzzles

Looking ahead, the school's close hints at a landscape where ethics embeds deeper into standard curricula, yet global divides linger. The 4 July talks underscored this: Huq praised the robustness of US sectoral rules in the absence of a federal law, while Li detailed China's guideline-based ethics provisions, both contrasting with the EU's rights-first model. Enforcement snags loom in transnational data flows, analysts say, potentially driving regulatory sandboxes for testing and hybrid regimes blending the Act with sectoral laws such as medical device standards.

Ultimately, KU Leuven's run spotlights AI's dual edge — tool for progress, test for principles. As Sioli noted in her address, success turns on rules that match machines' march, keeping human oversight at the helm.

