Welcome to

SIG‑AI-ACT

Special Interest Group on Translating the EU AI Act into Technical Requirements

Bridging regulation and technical design for trustworthy AI.

Delft, The Netherlands

About SIG‑AI-ACT

Special Interest Group (SIG) AI-ACT is a transdisciplinary initiative aimed at translating the principles of the EU AI Act—such as fairness, transparency, privacy, and robustness—into technical requirements and design practices for AI systems.

The SIG is structured into modular subgroups on key topics—risk classification, transparency, human oversight, privacy, and data governance—working toward concrete outputs like guidelines, evaluation methods, and tooling.

Founded by researchers at TU Delft and funded by the Delft Design for Values Institute, SIG‑AI-ACT seeks to operationalize abstract legal and ethical principles through actionable, value‑driven specifications, targeting sensitive and high‑impact domains. Please note that SIG-AI-ACT is an independent academic initiative and is not affiliated with any national-level AI Special Interest Groups in the Netherlands.

Mission: To bridge the gap between legal obligations and technical practice in high‑risk AI by building tools, frameworks, and design methodologies grounded in the EU AI Act.

Objectives

  • Interpret and categorize regulatory language based on system type and risk levels.
  • Translate legal principles into actionable technical specifications, reusable patterns, and best practices.
  • Foster alignment across academia, industry, and regulators for trustworthy AI.

What We Do

SIG-AI-ACT operates as a modular, collaborative network involving academic researchers, practitioners, legal experts, policy experts, and industry partners. Core activities include:

  • Working groups bringing together experts on fairness, transparency, robustness, and privacy.
  • Regular sessions, reading groups, and interdisciplinary workshops.
  • Case studies with institutions and industry partners (e.g., in healthcare).
  • A living technical specification that evolves with legal and technological shifts.
  • Evaluation frameworks to measure values like fairness and transparency in practice.
  • Governance tools for developers, auditors, and regulators.
  • Public consultations to prototype solutions and gather real-world needs.

Scope and Topics of Interest

SIG‑AI-ACT operates through focused subgroups that span theory, implementation, and evaluation.

Risk Classification

Mapping system categories and obligations to risk levels; scoping high‑risk systems.

Transparency

Documentation, model & data cards, evidence logs, and communication duties.

Human Oversight

Designing effective oversight, intervention points, and fallback modes.

Human-AI Collaboration

Optimizing teamwork between professionals and AI systems.

Privacy

Data minimization, privacy-enhancing technologies (PETs), purpose limitation, access control, and logging.

Data Governance

Datasets, lineage, quality controls, bias assessments, and retention policies.

People

Coordinators

Dr. Megha Khosla

TU Delft, EEMCS

Dr. Masoud Mansoury

TU Delft, EEMCS

Dr. Helma Torkamaan

TU Delft, TPM

Members

Xanan Xin
tba
tba

We value diverse expertise and backgrounds.

News and Events

Milestones
2025 — Initiation & Foundations

Countdown to project end and upcoming activities:

  • Next collaborative workshop (tba)
    Planned quarterly
  • Project end (Dec 31, 2027)
    Target completion

Get Involved (Join Us!)

Are you a researcher, policy maker, developer, or PhD student working at the intersection of AI, law, and ethics? Join our community!

The sign-up form opens in a new tab and takes ~3 minutes.

Contact Us

For sponsorship, collaborations, media inquiries, or questions about SIG‑AI-ACT, reach out to us:

  • Dr. Megha Khosla
  • Dr. Masoud Mansoury
  • Dr. Helma Torkamaan
Email the Organizers