    Hands-On AI Risk Assessment with Open Tools

    Online


    About this event

    As AI systems become more central to products and services, technical teams are under pressure to ensure they are not just functional but also fair, robust, and transparent. In this one-hour session, we’ll walk through how to assess AI systems using open-source tools designed for practical, code-based risk evaluation.

    We’ll explore how dynamic assessments work, run a sample use case on a classification or language model, and show how to interpret outputs to support product reviews, audits, or internal documentation. This session is ideal for product leads, system architects, technical auditors, or data science managers beginning to formalise AI risk checks in development.
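    As a taste of what a dynamic assessment looks like in code, here is a minimal, tool-agnostic sketch of a robustness probe: perturb a trained classifier’s inputs slightly and measure how often its predictions flip. The model, data, and noise level below are illustrative assumptions, not session materials.

    # A generic illustration (not tied to any specific toolkit) of a "dynamic"
    # robustness check: probe a trained classifier with small input
    # perturbations and measure how often its predictions change.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in model and data for the system under assessment.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Dynamic test: add small Gaussian noise to each input and check whether
    # the predicted label stays the same.
    baseline = model.predict(X)
    perturbed = model.predict(X + rng.normal(scale=0.1, size=X.shape))

    flip_rate = np.mean(baseline != perturbed)
    print(f"Prediction flip rate under small perturbations: {flip_rate:.1%}")

    A high flip rate under perturbations this small is the kind of signal you would record in a product review or audit note; the real toolkits automate and standardise checks like this across many dimensions.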

    Takeaways

    • Understand the different types of open-source AI risk assessment tools
    • Learn how dynamic testing can identify fairness, robustness, and transparency issues
    • Run a sample use case to evaluate a classification or LLM system, using any one of the tools such as AI Verify Toolkit, Moonshot, AIF360, or Microsoft’s Responsible AI Dashboard (see the fairness sketch after this list)
    • See how to interpret scores and generate reports for internal or external use
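    To make the sample use case concrete, here is a minimal sketch of a dataset-level fairness check using AIF360; any of the tools above could play the same role. The column names ("sex", "hired"), the toy data, and the group definitions are illustrative assumptions, not session materials.

    # A minimal fairness check with AIF360 on a toy hiring dataset.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Toy data standing in for real model inputs and labels.
    df = pd.DataFrame({
        "sex":   [1, 1, 1, 1, 0, 0, 0, 0],   # protected attribute (1 = privileged group)
        "score": [0.9, 0.8, 0.7, 0.4, 0.9, 0.6, 0.5, 0.3],
        "hired": [1, 1, 1, 0, 1, 0, 0, 0],   # binary outcome label
    })

    # Wrap the DataFrame so AIF360 knows which column is the label
    # and which is the protected attribute.
    dataset = BinaryLabelDataset(
        df=df,
        label_names=["hired"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    # Compare outcomes for the unprivileged group (sex=0) against the privileged group (sex=1).
    metric = BinaryLabelDatasetMetric(
        dataset,
        unprivileged_groups=[{"sex": 0}],
        privileged_groups=[{"sex": 1}],
    )

    # Statistical parity difference: P(hired=1 | unprivileged) - P(hired=1 | privileged);
    # values near 0 indicate parity. Disparate impact is the analogous ratio, ideally near 1.
    print("Statistical parity difference:", metric.statistical_parity_difference())
    print("Disparate impact:", metric.disparate_impact())

    In the session we’ll look at how scores like these feed into reports suitable for product reviews, audits, or internal documentation.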
