As artificial intelligence continues to reshape industries and society, the question of trustworthiness has become more critical than ever. The recent webinar “AI You Can Trust: Standards, Ethics, and Innovation,” hosted by the TALON Project, brought together leading researchers and practitioners to explore the complex landscape of building reliable, ethical AI systems. Here are the key lessons that emerged from this important discussion.

The Distinction Between Trust and Trustworthiness
One of the most fundamental insights from the webinar was the crucial distinction between trust and trustworthiness in AI systems. While these terms are often used interchangeably, they represent very different concepts that have profound implications for AI development and deployment.
Trust is subjective—it’s the confidence that users place in an AI system based on their perception and experience. Trustworthiness, however, is objective and measurable. It represents an AI system’s verifiable ability to meet expectations consistently across multiple dimensions including reliability, resilience, robustness, safety, transparency, usability, and controllability.
This distinction is particularly important because explainable AI can increase user trust without necessarily improving the system’s actual trustworthiness. A system might provide explanations that feel satisfying to users while still being fundamentally unreliable or unsafe.
The Evolution of Human-AI Collaboration
The webinar highlighted a significant shift in how we think about AI’s role in decision-making. Rather than viewing AI as a tool that humans use, we’re moving toward genuine human-AI teaming, where humans and AI agents work together as team members and agency can shift between them as the situation demands.
This collaborative approach expands the capabilities of both humans and AI systems, but it also introduces new challenges around responsibility, accountability, and trust. When decisions emerge from human-AI collaboration, traditional frameworks for assigning responsibility may no longer be adequate.
The Critical Role of Standards
Standards emerged as a cornerstone of trustworthy AI development. They provide multiple benefits including improved service quality, sustainable growth, competitive advantages, and regulatory compliance. However, the webinar emphasized that we need more than just technical standards—we need harmonized approaches that can work across different domains and jurisdictions.
The challenge is particularly acute given the rapid pace of AI development. Standards must be dynamic and adaptable to keep pace with technological advancements while providing the stability that organizations need for long-term planning and investment.
Innovative Approaches to AI Explainability
The webinar showcased several innovative approaches to making AI systems more explainable and trustworthy:
Situation-Aware Explainability (SACS)
This framework generates meaningful explanations about business processes by analyzing process event logs and creating tailored explanations using large language models. What makes SACS particularly valuable is its comprehensive evaluation approach, which considers multiple factors including fidelity, interpretability, trust, and curiosity.
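To make the idea more concrete, here is a minimal sketch of what a situation-aware explanation pipeline along these lines might look like. It is purely illustrative: the event-log fields, the prompt wording, and the `llm` callable are assumptions made for this post, not SACS’s actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    case_id: str       # which process instance the event belongs to (assumed field)
    activity: str      # e.g. "Approve invoice"
    timestamp: str     # ISO 8601
    duration_s: float  # how long the activity took

def summarize_case(events: list[ProcessEvent]) -> str:
    """Turn one case's event trace into a compact, human-readable summary."""
    steps = " -> ".join(e.activity for e in events)
    total = sum(e.duration_s for e in events)
    return f"Trace: {steps}. Total duration: {total:.0f}s."

def build_prompt(case_summary: str, question: str) -> str:
    """Assemble an LLM prompt tailored to the business-process situation."""
    return (
        "You are explaining a business process to a domain expert.\n"
        f"Observed behaviour: {case_summary}\n"
        f"Question: {question}\n"
        "Explain the outcome faithfully; do not speculate beyond the trace."
    )

def explain(events: list[ProcessEvent], question: str, llm) -> str:
    """Generate a tailored explanation; `llm` is any callable wrapping a
    language model. A framework like SACS would additionally evaluate the
    result on fidelity, interpretability, trust, and curiosity."""
    prompt = build_prompt(summarize_case(events), question)
    return llm(prompt)
```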
The TALON Project’s Comprehensive Approach
TALON introduces an AI orchestrator that combines blockchain-based security constraints with digital twins to enhance explainability. This multi-faceted approach recognizes that trustworthiness cannot be achieved through any single technical solution.
Multi-Project Collaboration
Projects like HUMANE, FAME, and FAITH are developing complementary approaches to trustworthy AI across domains ranging from manufacturing and smart cities to healthcare and finance. This diversity of applications helps ensure that trustworthiness frameworks generalize across different domains and use cases.
Managing Trustworthiness Risks
The webinar emphasized that trustworthiness risk management requires a broader perspective than traditional cybersecurity approaches. Organizations must consider social, psychological, and ethical threats alongside technical vulnerabilities.
The FAITH project’s development of a trustworthiness risk management framework based on ISO/IEC 27005 provides a practical approach to this challenge, incorporating human factors and iterative processes to address the full spectrum of AI-related risks.
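As a rough illustration of how an ISO/IEC 27005-style loop can be extended beyond purely technical threats, consider the sketch below. The threat categories, the 1-to-5 scoring scale, and the treatment threshold are illustrative assumptions, not the FAITH framework itself.

```python
from dataclasses import dataclass
from enum import Enum

class ThreatCategory(Enum):
    TECHNICAL = "technical"          # e.g. model evasion, data poisoning
    SOCIAL = "social"                # e.g. automation bias among operators
    PSYCHOLOGICAL = "psychological"  # e.g. over-trust in fluent output
    ETHICAL = "ethical"              # e.g. discriminatory outcomes

@dataclass
class Risk:
    description: str
    category: ThreatCategory
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def level(self) -> int:
        return self.likelihood * self.impact

def assess(risks: list[Risk], treat_threshold: int = 12) -> list[Risk]:
    """One iteration of the loop: evaluate risks and select those needing
    treatment. In practice this repeats as the system and context evolve."""
    return sorted(
        (r for r in risks if r.level >= treat_threshold),
        key=lambda r: r.level,
        reverse=True,
    )

register = [
    Risk("Prompt injection bypasses guardrails", ThreatCategory.TECHNICAL, 4, 4),
    Risk("Operators defer to AI despite doubts", ThreatCategory.PSYCHOLOGICAL, 3, 5),
]
for risk in assess(register):
    print(f"TREAT [{risk.category.value}] {risk.description} (level {risk.level})")
```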
Practical Challenges and Solutions
Several practical challenges were addressed during the webinar:
Quantifying Trustworthiness
There’s no single metric for trustworthiness. Instead, organizations must consider multiple factors including explainability, reliability, safety, and context. Verification, documentation, and robust testing are essential components of any trustworthiness assessment.
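One way to operationalize this multi-factor view is to keep trustworthiness as a vector of per-dimension scores, each backed by evidence, rather than collapsing it into a single number. The dimensions, scores, and evidence sources in the sketch below are illustrative assumptions, not a standardized metric.

```python
# Illustrative scorecard: scores in [0, 1] per dimension, each backed by
# evidence such as test reports, documentation, and verification results.
scores = {
    "reliability": 0.92,     # e.g. pass rate on a regression test suite
    "robustness": 0.74,      # e.g. accuracy under input perturbations
    "safety": 0.88,          # e.g. fraction of red-team probes blocked
    "explainability": 0.61,  # e.g. human-rated explanation quality
}

# Since no single aggregate captures trustworthiness, report the full
# vector and let the weakest dimension drive attention.
weakest = min(scores, key=scores.get)
print("Trustworthiness profile:")
for dim, score in sorted(scores.items(), key=lambda kv: kv[1]):
    flag = "  <-- needs work" if dim == weakest else ""
    print(f"  {dim:15s} {score:.2f}{flag}")
```

Surfacing the weakest dimension first reflects the webinar’s point that trustworthiness has no single metric: a strong average can hide a disqualifying weakness.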
Explaining Complex Models
For complex systems like large language models, the focus should be on explaining outcomes rather than internal workings. Verification and scrutiny of outputs become more important than understanding every aspect of the model’s internal decision-making process.
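The sketch below illustrates this outcome-focused stance in code: every check inspects only the model’s output, never its weights or internal activations. The specific verifiers are hypothetical examples chosen for illustration.

```python
import re
from typing import Callable

# A verifier examines only the model's *output*, mirroring the
# outcome-over-internals stance for complex models.
Verifier = Callable[[str], bool]

def cites_sources(answer: str) -> bool:
    """Require at least one bracketed citation like [1] or [Smith 2021]."""
    return re.search(r"\[[^\]]+\]", answer) is not None

def within_scope(answer: str) -> bool:
    """Reject answers that admit to guessing instead of abstaining."""
    return "i am guessing" not in answer.lower()

def verify(answer: str, checks: list[Verifier]) -> list[str]:
    """Return the names of all failed checks; an empty list means accepted."""
    return [check.__name__ for check in checks if not check(answer)]

failures = verify(
    "Risk levels combine likelihood and impact [ISO/IEC 27005].",
    [cites_sources, within_scope],
)
if failures:
    print("Failed checks:", failures)
else:
    print("Output accepted for downstream use.")
```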
Interdisciplinary Collaboration
Different disciplines often use different vocabularies when discussing AI ethics and trustworthiness. A standardized vocabulary is essential, and existing frameworks such as the ISO/IEC 27000 series for information security and IEEE’s AI standards can provide a foundation until more universal standards are adopted.
The Path Forward
The webinar concluded with several key recommendations for the future of trustworthy AI:
- Harmonized Standards: We need globally consistent standards that can work across different jurisdictions and applications.
- Robust Risk Management: Organizations must adopt comprehensive risk management frameworks that go beyond technical considerations to include social and ethical factors.
- Continuous Assessment: Trustworthiness is not a one-time achievement but an ongoing process that requires continuous monitoring and improvement.
- Open Source Collaboration: Frameworks like SACS demonstrate the value of open-source approaches to trustworthiness, enabling community feedback and collaborative improvement.
- Multidisciplinary Engagement: Building trustworthy AI requires collaboration across technical, ethical, legal, and social domains.
Conclusion
The “AI You Can Trust” webinar underscored that trustworthiness is not just a technical challenge but a fundamental requirement for AI’s successful integration into society. As AI systems become more sophisticated and pervasive, the approaches and frameworks discussed in this webinar provide a roadmap for building AI that is not only powerful but also deserving of our trust.
The work being done by projects like TALON, FAITH, HUMANE, and FAME represents important steps toward this goal, but achieving truly trustworthy AI will require sustained collaboration across disciplines, organizations, and borders. The stakes are too high, and the opportunities too significant, for anything less than our best efforts to build AI systems that serve humanity’s best interests.
The TALON Project has received funding from the European Union’s Horizon Europe Research and Innovation programme under grant agreement No. 101070181.