Workshop: Reproducibility and AI: Responsibility qua Open Science?
This interactive, two-hour workshop (held in English), organised in collaboration with the TIER2 project, explores how reproducibility and Open Science shape responsible AI research. Aimed at researchers, policymakers, and infrastructure providers, it examines the balance between innovation and transparency and asks whether AI itself can enhance reproducibility.
- Duration: 2 hours (13:30–15:30)
- Participants: ~25 (target groups: anyone with an interest in AI, reproducibility, and Open Science, including researchers, policymakers, infrastructure providers, and others). To attend, please send an email to core@berlin-university-alliance.de (first come, first served; a waiting list will be opened).
- Facilitators: Tony Ross-Hellauer (Know Center Research GmbH), Dominik Kowald (Know Center Research GmbH), Alexandra Bannach-Brown (Berlin Institute of Health @ Charité), Vince Madai (Berlin Institute of Health @ Charité)
Workshop outline
Artificial Intelligence (AI) methods are increasingly integrated into research across disciplines. Indeed, Machine Learning (ML) researchers were among the recipients of the 2024 Nobel Prizes in both Chemistry and Physics, underlining the field’s potential for groundbreaking research. Yet many research fields are currently reckoning with poor reproducibility – the ability to independently verify research findings. Some label it a “crisis”, and research employing or building ML models is no exception. Issues including lack of transparency, lack of open data or code, poor adherence to standards, and the sensitivity of ML training conditions mean that many papers are not reproducible even in principle. Unreliable results risk hindering scientific progress by wasting resources, reducing trust, slowing discovery, and undermining the foundation for future research. Lack of model transparency also has implications for ethical issues such as bias, fairness, privacy, societal relevance, and sustainability.
In this workshop, we explore what responsible use of AI in research entails. Responsible AI integrates ethical principles into AI systems to minimise risk and maximise positive outcomes. Through interactive discussion of reproducibility, we explore the extent to which reproducibility, and broader principles of Open Science, can and should be treated as sine qua non principles for responsible AI-enabled research. For example: to what extent are issues like innovation, data ownership, sensitivity, or even computing resources valid reasons for a lack of transparency? To what extent can the reproducibility of current models be ascertained, given their intensive compute requirements? Which benchmarks may be applied? And how might AI systems themselves aid in ensuring reproducibility, for example by enhancing documentation or automating error-checking?
Workshop format
1. Introduction (30 minutes)
Objective: Set the stage for the workshop by defining key concepts and building participant engagement.
- Welcome and overview (5 minutes): Introduce facilitators and workshop objectives; briefly outline the structure and goals of the session.
- Icebreaker exercise (10 minutes): To foster connections and prompt reflection on the session’s themes, participants pair up and share a short anecdote in which transparency or reproducibility (or the lack thereof) influenced a project or decision.
- Introduction to key concepts (15 minutes): (a) What is reproducibility? Definition and significance in research and AI contexts; examples of reproducibility successes and challenges in AI. (b) What is responsible AI? Introduce dimensions of trustworthiness in AI (e.g., transparency, fairness, accountability, robustness, and privacy) and explain how these dimensions intersect with Open Science principles.
2. Interactive Exercise 1 (30 minutes)
Objective: Explore key questions about reproducibility and responsible AI through small-group collaboration.
- Instructions and Grouping (5 minutes): Divide participants into 4–5 small groups; introduce the core questions for discussion.
- Discussion Prompts (20 minutes): Each group discusses the following questions: What are the practical challenges of achieving reproducibility in AI research? To what extent do transparency and reproducibility contribute to trust in AI systems? How do we balance openness with risks such as misuse or privacy concerns?
- Group Reporting (5 minutes): Groups share their key takeaways with the larger group via one-minute elevator talks.
3. Interactive Exercise 2: “3 x 3” (40 minutes)
Objective: Building on the first exercise, participants use exercise sheets to develop three responses to each of three questions covering core problems, solutions, and policy recommendations.
- Instructions and Grouping (5 minutes): Regroup participants, potentially mixing them up to diversify perspectives; introduce the new questions and allow groups to self-define an overarching group theme (e.g., technical, social, ethical, legal).
- Discussion (20 minutes): Participants use exercise sheets to collaboratively draft responses to the following questions: (1) What core issues related to your topic limit the reproducibility of AI-enabled research? (2) What are the most promising potential solutions to these issues? (3) What policy recommendations would you suggest to address these issues and solutions?
- Group Reporting (15 minutes): Groups present their findings and recommendations to the room via three-minute elevator summaries.
4. Guided Plenary Discussion (20 minutes)
Objective: Engage the full group in an open dialogue to synthesise ideas, answer questions, and develop actionable insights.
- Audience-Led Q&A (15 minutes): Participants pose questions or comments inspired by the exercises; facilitators guide the discussion and encourage diverse participation, introducing back-up prompt questions if needed.
- Conclusion and Next Steps (5 minutes): Summarise key takeaways from the workshop; share resources for further exploration (e.g., articles, toolkits, relevant organisations); encourage participants to continue the dialogue beyond the session.