UCD will ensure that AI systems empower rather than replace human decision-making. Oversight mechanisms will be integrated into AI applications across teaching, research, and operations/support services to safeguard against errors, bias, and unintended consequences, while supporting academic freedom and informed decision-making.

UCD AI Governance Principles
The University is committed to responsible and ethical AI governance, rooted in our core values of Inclusion, Excellence, Engagement, Integrity, Collegiality, and Creativity. These principles, approved by the University Management Team, guide our use of artificial intelligence, ensuring we safeguard academic integrity, promote equitable learning access, maintain high ethical research standards, and provide transparency in administrative processes.
Developed and contextualised specifically for the higher education environment, the foundational principles below serve as a clear framework for how we approach, implement, and manage AI across the institution.
Fairness is critical in the deployment of AI to support University processes, including decision-making in areas such as admissions, grading, and research funding. UCD will ensure that AI solutions do not perpetuate bias and that access to education and opportunities remains equitable for all groups, including underrepresented communities. This aligns with UCD’s strategic commitment to widening participation and to putting supports in place that enable each student to thrive.
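One common way to operationalise this kind of fairness check is the "four-fifths rule": no group's selection rate should fall below 80% of the highest group's rate. The sketch below is purely illustrative, assuming hypothetical group labels and decision data; it is not a UCD-mandated metric.

```python
# Hypothetical sketch of a disparate-impact (four-fifths rule) check on a
# batch of automated decisions. Group labels, the 0.8 threshold, and the
# sample data are illustrative assumptions, not UCD policy.

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> selection rate per group."""
    totals, accepted = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        accepted[group] = accepted.get(group, 0) + (1 if ok else 0)
    return {g: accepted[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """True if every group's selection rate is at least `threshold`
    times the highest group's rate (the classic four-fifths rule)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Group B is accepted far less often than group A, so the check fails:
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(passes_four_fifths(sample))  # False
```

A single aggregate metric like this cannot establish fairness on its own, but it offers a cheap, auditable screen that can flag a decision pipeline for closer human review.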
Transparency ensures that students and staff understand how AI decisions are made, whether in grading, resource allocation, research recommendations, or operational decisions. Explainability is especially critical in teaching and research environments where all parties need to understand and trust the use of AI tools and their impact. For students, this includes ensuring that AI-driven grading, admissions processes, and learning support systems are explainable and auditable, preventing unfair bias or unintended consequences. For staff, this extends to AI's role in areas such as performance evaluations, workload distribution, and recruitment, ensuring that AI does not create opaque decision-making processes or introduce unintended inequities. Additionally, robust data stewardship practices will be integral to ensuring transparency, providing clear oversight of how data is collected, stored, and used within AI applications to build trust and accountability. Compliance with legal, regulatory, and ethical standards, including the EU AI Act and GDPR, will be central to AI governance, ensuring that AI systems operate within clearly defined institutional and external obligations.
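One concrete way to make AI-assisted decisions auditable is to record, alongside each output, what system produced it, what data it used, and who is accountable for it. The sketch below is an illustrative assumption about what such a record might contain; the field names and example values are hypothetical, not a UCD schema.

```python
# Hypothetical audit record for an AI-assisted decision, supporting the
# explainability and data-stewardship aims above. The AuditRecord fields
# and example values are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    system: str            # which AI tool produced the output
    decision: str          # the outcome (e.g. a grade band)
    inputs_summary: str    # what data was used, not the raw data itself
    rationale: str         # human-readable explanation of the outcome
    reviewed_by: str       # the staff member accountable for the decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AuditRecord(
    system="essay-feedback-assistant",
    decision="B+",
    inputs_summary="anonymised essay text, rubric v3",
    rationale="strong structure; citations incomplete",
    reviewed_by="module coordinator")
print(asdict(record)["decision"])  # B+
```

Storing a summary of the inputs rather than the raw data keeps the audit trail useful for oversight while limiting the spread of personal data, consistent with the data-stewardship aim above.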
AI systems must be reliable and secure, particularly when handling sensitive data such as student records, research intellectual property, and financial information. Compliance with institutional policies, data protection regulations, and sectoral best practices will be a fundamental requirement for AI deployment, ensuring adherence to ethical and legal obligations. Safety in experimentation with AI (e.g., in labs or student projects) must also be emphasised. With large amounts of personal data collected for research and learning, the University will implement and adhere to strong privacy, data governance, and data stewardship standards.
The integration and use of AI in UCD will prioritise ethical considerations and sustainability, ensuring systems are aligned with long-term societal and environmental benefits. This reflects the University’s strategic commitment to being a source of insight and objectivity on issues of vital importance to society.
The University will educate students, staff, alumni, and the wider higher education community about AI's capabilities, limitations, and ethical implications, fostering critical thinking and preparing graduates for AI-rich workplaces. In alignment with Article 4 of the EU AI Act, UCD will implement measures to ensure that all personnel involved in the operation and use of AI systems possess a sufficient level of AI literacy. This includes tailored training that accounts for the technical knowledge, experience, and specific contexts in which AI systems are employed. UCD will leverage its expertise to engage with broader society, supporting public discourse and contributing to the development of responsible AI policies, in alignment with its strategic commitment to equip students with transversal skills, including digital literacy.
The University will seek out collaborative approaches, including those that involve partnerships with other institutions, industry, and students. Stakeholder involvement ensures that AI systems align with the values and needs of the broader community.
Before deploying AI solutions, tools, and systems, the University will evaluate risks related to bias, data misuse, and safety, applying mitigation strategies as needed. This is particularly important in high-impact applications affecting students and institutional reporting and decision-making. Where risks to fairness, equity, or accountability are deemed unacceptably high, the University will choose not to implement AI solutions (for example, in areas such as student admissions or progression), prioritising ethical integrity.
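The evaluate-then-decide process above can be sketched as a simple pre-deployment gate: score a proposed use case on a few risk dimensions after mitigation, and block deployment when any dimension remains unacceptably high. The dimensions, scale, and threshold below are illustrative assumptions, not a UCD-approved rubric.

```python
# Hypothetical pre-deployment risk gate. The risk dimensions, the 0-5
# scale, and the blocking threshold are illustrative assumptions only.

UNACCEPTABLE = 4  # on an assumed 0 (negligible) .. 5 (severe) scale

def deployment_decision(risk_scores):
    """risk_scores: dict of dimension -> residual risk after mitigation.
    Returns ('block', reasons) if any dimension is unacceptably high,
    otherwise ('proceed', [])."""
    reasons = [dim for dim, score in risk_scores.items()
               if score >= UNACCEPTABLE]
    return ("block", reasons) if reasons else ("proceed", [])

# An AI admissions-screening proposal with high residual bias risk
# is blocked, mirroring the "choose not to implement" stance above:
verdict, why = deployment_decision(
    {"bias": 5, "data_misuse": 2, "safety": 1})
print(verdict, why)  # block ['bias']
```

The key design choice is that a single unacceptable dimension vetoes deployment outright: low scores elsewhere cannot average away a severe fairness or safety risk.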