The minds behind the magic
As AI systems become more central to products and services, technical teams are under pressure to ensure they are not just functional, but also fair, robust, and transparent. In this one-hour session, we'll walk through how you can assess AI systems using open-source tools designed for practical, code-based risk evaluation.
We’ll explore how dynamic assessments work, run a sample use case on a classification or language model, and show how to interpret outputs to support product reviews, audits, or internal documentation. This session is ideal for product leads, system architects, technical auditors, or data science managers beginning to formalise AI risk checks in development.
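To give a flavour of what a code-based risk check can look like, here is a minimal sketch of a group-level performance assessment on a trained classifier. It uses fairlearn as one possible open-source tool; the library, metrics, and synthetic data are illustrative assumptions rather than the specific tooling covered in the session.

```python
# A minimal sketch of a group-level fairness check on a classifier,
# using fairlearn (an assumed choice of open-source assessment tool).
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Synthetic data with a hypothetical sensitive attribute ("group").
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
groups = pd.Series(X[:, 0] > 0).map({True: "A", False: "B"})

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, groups, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Compare accuracy and selection rate across groups.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=g_test,
)
print(frame.by_group)      # per-group metrics, useful for review notes or audit records
print(frame.difference())  # largest between-group gap for each metric
```

Outputs like these per-group tables and gap summaries are the kind of evidence that can feed product reviews, audits, or internal documentation.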