
AI You Can Trust: Standards, Ethics, and Innovation

This webinar, initiated and hosted by the Talon Project, explores approaches and standards for building trustworthy AI, a challenge that grows more complex as AI models, applications, and infrastructure management become increasingly sophisticated.

John Soldatos from Netcompany-Intrasoft welcomed attendees to the webinar, “AI You Can Trust: Standards, Ethics, and Innovation,” hosted by the Talon Project. He explained that the webinar aims to illuminate approaches and standards for building trustworthy AI, a task made harder by the growing sophistication of AI models, applications, and infrastructure management. He then introduced the four speakers: Christos Emanuilidis (University of Groningen), Lior Limonad (IBM Research), Sofia Karagiorgou (UBITECH), and Renetta Polemi (Trastilio), emphasizing their expertise in industrial applications, standardization, and the ethical considerations of trustworthy AI. Soldatos then handed over directly to the first speaker.

Trust vs. Trustworthiness in AI

Christos Emanuilidis from the University of Groningen highlighted the distinction between trust and trustworthiness in AI. While explainable AI can increase trust, it doesn’t guarantee trustworthiness. 

→ Trustworthiness encompasses reliability, resilience, robustness, safety, transparency, usability, and controllability. 

→ Trustworthiness is the verifiable ability of a system to meet expectations. 

→ He emphasized the importance of human-AI teaming in decision-making, where agency can shift between humans and AI. 

→ Standards play a crucial role in achieving trustworthiness, offering benefits like improved service quality, growth, competitive edge, and regulatory compliance. 

→ The Humane project, which employs various learning paradigms like active learning and swarm learning, aims to demonstrate trustworthy AI in diverse use cases.
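
As an illustration of the active-learning paradigm mentioned above (a generic sketch, not the Humane project’s actual implementation), the following minimal Python example trains a classifier on a small labelled seed set and repeatedly queries the pool example the model is least certain about:

# Illustrative pool-based active learning loop with uncertainty sampling.
# Generic sketch only; not the Humane project's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data pool plus a small labelled seed set with both classes present.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labelled]

model = LogisticRegression(max_iter=1000)
for round_no in range(5):
    model.fit(X[labelled], y[labelled])
    # Uncertainty sampling: query the pool instance the model is least sure about.
    probabilities = model.predict_proba(X[pool])
    uncertainty = 1.0 - probabilities.max(axis=1)
    query = pool[int(np.argmax(uncertainty))]
    labelled.append(query)   # in practice, a human annotator supplies this label
    pool.remove(query)
    print(f"round {round_no}: queried sample {query}, "
          f"accuracy on labelled set {model.score(X[labelled], y[labelled]):.2f}")

In a real deployment the queried samples would be sent to a human annotator rather than read from a pre-labelled array.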

Situation-Aware Explainability (SAX)

Lior Limonad from IBM Research presented SAX, a framework for generating meaningful explanations about business processes. 

→ SAX analyzes process event logs to generate process, causal, and explainable AI (XAI) views. 

→ These views, combined with user inquiries, are used by a large language model (LLM) to create tailored explanations (see the sketch after this list). 

→ A dedicated evaluation scale, incorporating fidelity, interpretability, trust, and curiosity, assesses the quality of these explanations. 

→ A user study revealed that adding knowledge improves fidelity but can reduce interpretability. 

→ Another study demonstrated the scale’s usefulness in comparing different LLMs for generating explanations. 

→ The SAX library is open-source, encouraging community feedback and improvement.
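
To make the pipeline above concrete, here is a hypothetical Python sketch of how the three views and a user inquiry could be assembled into a single LLM prompt, together with a record type for the four evaluation dimensions; the names and prompt wording are illustrative assumptions, not the SAX library’s actual API:

# Hypothetical sketch: combine the three views and a user inquiry into one LLM
# prompt, and record ratings on the four evaluation dimensions. All names and
# the prompt wording are illustrative assumptions, not the SAX library's API.
from dataclasses import dataclass

@dataclass
class ExplanationScores:
    fidelity: float          # does the explanation match the underlying evidence?
    interpretability: float  # how easily can the user follow it?
    trust: float             # how much confidence does it inspire?
    curiosity: float         # does it invite useful follow-up questions?

def build_explanation_prompt(process_view: str, causal_view: str,
                             xai_view: str, user_inquiry: str) -> str:
    """Ground the LLM in all three views before it answers the user's question."""
    return (
        "You are explaining a business process to an analyst.\n"
        f"Process view (mined from the event log):\n{process_view}\n\n"
        f"Causal view (discovered dependencies):\n{causal_view}\n\n"
        f"XAI view (model attributions):\n{xai_view}\n\n"
        f"Question: {user_inquiry}\n"
        "Answer using only the evidence above."
    )

# Toy usage; the resulting prompt would be sent to whichever LLM is being compared.
prompt = build_explanation_prompt(
    process_view="order received -> credit check -> shipment",
    causal_view="credit-check duration strongly influences shipment delay",
    xai_view="feature 'credit_score' has the highest attribution for delays",
    user_inquiry="Why was order 1042 delayed?",
)
print(prompt)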

Talon’s Approach to Trustworthy AI

Sofia Karagiorgou from UBITECH discussed Talon’s approach to trustworthy AI. 

→ The project focuses on an AI orchestrator, security constraints via blockchain, and digital twins for explainability. 

→ Challenges include the dynamic nature of AI ethics, the black box nature of AI models, and the context-dependence of ethical considerations. 

→ Talon’s breakthroughs include an AI theoretical model for dimensioning hardware, a multimodal DataOps and MLOps pipeline, and a zero-touch smart orchestrator for AI model serving (a back-of-envelope dimensioning sketch follows this list). 

→ The project emphasizes a continuous journey towards trust in AI, with ethical considerations as a guiding compass.
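
As a purely illustrative example of what dimensioning hardware for model serving can involve (not Talon’s actual theoretical model or orchestrator), the sketch below estimates a model’s memory footprint from its parameter count and numeric precision and picks the smallest device that fits:

# Back-of-envelope sketch of dimensioning hardware for model serving: estimate
# the memory footprint from parameter count and precision, then pick the
# smallest device that fits. Illustrative only; not Talon's theoretical model.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def estimate_memory_gb(n_params: float, precision: str, overhead: float = 1.2) -> float:
    # Weights plus a flat overhead factor for activations and runtime buffers.
    return n_params * BYTES_PER_PARAM[precision] * overhead / 1e9

def pick_device(required_gb: float, devices: dict) -> str:
    # Return the smallest device whose memory budget covers the requirement.
    fitting = {name: mem for name, mem in devices.items() if mem >= required_gb}
    return min(fitting, key=fitting.get) if fitting else "no device fits"

# Example: a 7B-parameter model in fp16 needs roughly 7e9 * 2 * 1.2 / 1e9 = 16.8 GB.
need = estimate_memory_gb(7e9, "fp16")
target = pick_device(need, {"edge-gpu-16GB": 16, "gpu-24GB": 24, "gpu-80GB": 80})
print(f"needs ~{need:.1f} GB -> deploy on {target}")

The 1.2 overhead factor is an arbitrary placeholder for activations and runtime buffers; a real orchestrator would also weigh latency, throughput, and security constraints.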

Challenges and Efforts in AI Trustworthiness

Renetta Polemi from Trastilio addressed challenges in AI trustworthiness, particularly concerning the EU AI Act. 

→ She highlighted the need for harmonized standards, robust risk management frameworks that consider social and ethical threats, and standardized trustworthiness schemes for auditors. 

→ The lack of AI certification authorities and the need for training on AI trustworthiness were also emphasized. 

→ Projects like FAITH are developing trustworthiness risk management frameworks and tools, incorporating human factors and iterative processes.
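
A generic sketch of such a risk register, scoring technical, social, and ethical threats as likelihood times impact and re-ranking them at each review iteration, might look like the following (illustrative only; not the FAITH framework’s actual tooling):

# Generic sketch of a trustworthiness risk register that scores technical,
# social, and ethical threats as likelihood x impact and is re-ranked at each
# review iteration. Illustrative only; not the FAITH framework's tooling.
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str        # e.g. "biased training data"
    category: str      # "technical", "social", or "ethical"
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("biased training data", "ethical", likelihood=4, impact=5),
    Risk("model drift in production", "technical", likelihood=3, impact=4),
    Risk("operators over-relying on AI output", "social", likelihood=3, impact=3),
]

# One review iteration: rank risks and flag those above a treatment threshold.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "treat" if risk.score >= 12 else "monitor"
    print(f"[{risk.category}] {risk.threat}: score {risk.score} -> {action}")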

Conclusion

The webinar underscored the importance of moving beyond trust to trustworthiness in AI. Key takeaways include the need for harmonized standards, robust risk management frameworks, explainability solutions, and ongoing research into AI trustworthiness assessment. The discussion also highlighted the challenges of explaining complex AI models such as LLMs and the need for multidisciplinary collaboration to address the ethical and societal implications of AI. The open-source nature of projects like SAX and the ongoing development of frameworks like FAITH offer promising paths towards building AI systems that are not only powerful but also trustworthy.


This project has received funding from the European Union’s Horizon Europe Research and Innovation programme under grant agreement No. 101070181.