Ulla Coester on Ethical Design and the Role of the EU AI Act
Unclear threats and unpredictable behavior complicate global trust in AI. Building a shared understanding through adaptable governance helps create consistent expectations for responsible development across societies, said Ulla Coester, project director at Fresenius University of Applied Sciences.
“Governance must therefore be adaptable to behavior. The question that arises in this context is, ‘What is at risk of being undermined – trust in AI or the trust in the strength and the self-understanding of society?’ It is essential that we use governance measures such as the EU AI Act to establish a shared understanding of the direction in which development should or must go,” Coester said.
Cross-disciplinary collaboration is critical for aligning AI with global values. Coester said the ethical evaluation of AI must begin with defining the solution’s criticality level, and teams must create roadmaps that account for cultural and regulatory differences from the start.
In this video interview with Information Security Media Group at RSAC Conference 2025, Coester also discussed:
- Why trust in AI depends on societal expectations, not just tech;
- The EU AI Act’s role in shaping common governance principles;
- How interdisciplinary teams ensure ethical and legal alignment.
Coester develops corporate cultures that align with both a company’s ethical values and societal demands in the context of digital transformation. She provides targeted support for implementing digitalization projects, with a special focus on AI-based applications and ethical considerations to boost user acceptance.