AI
Free
As AI systems become more central to products and services, technical teams are under pressure to ensure they’re not just functional, but also fair, robust, and transparent. In this one-hour session, we’ll walk through how you can assess AI systems using open-source tools designed for practical, code-based risk evaluation.
We’ll explore how dynamic assessments work, run a sample use case on a classification or language model, and show how to interpret outputs to support product reviews, audits, or internal documentation. This session is ideal for product leads, system architects, technical auditors, or data science managers beginning to formalise AI risk checks in development.
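To give a feel for what a code-based fairness check looks like, here is a minimal sketch using AIF360, one of the open-source toolkits covered in the session. It assumes AIF360 and pandas are installed; the tiny dataframe, column names, and group encodings are illustrative assumptions, not material from the session itself.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Hypothetical toy data: outcomes from a binary classifier, with "sex"
    # as the protected attribute (1 = privileged group, 0 = unprivileged).
    df = pd.DataFrame({
        "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
        "label": [1, 1, 0, 1, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["label"],
        protected_attribute_names=["sex"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"sex": 1}],
        unprivileged_groups=[{"sex": 0}],
    )

    # Disparate impact: ratio of favourable-outcome rates (unprivileged / privileged).
    # Values well below 1.0 suggest the unprivileged group is disadvantaged.
    print("Disparate impact:", metric.disparate_impact())

    # Statistical parity difference: difference in favourable-outcome rates.
    print("Statistical parity difference:", metric.statistical_parity_difference())

In the session we will run a fuller version of this kind of check and discuss how the resulting numbers feed into product reviews and audit notes.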
Understand the different types of open-source AI risk assessment tools
Learn how dynamic testing can identify fairness, robustness, and transparency issues
Run a sample use case to evaluate a classification or LLM system (using one of the tools such as the AI Verify Toolkit, Moonshot, AIF360, or Microsoft’s Responsible AI Dashboard)
See how to interpret scores and generate reports for internal or external use
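As a hint of what "interpreting scores and generating reports" can mean in practice, here is a small, hypothetical sketch that turns two fairness scores into a pass/fail summary suitable for an internal review note. The threshold values, field names, and output file are assumptions for illustration only; real review gates should come from your own organisation's policy.

    import json

    # Illustrative thresholds (assumptions, not official guidance).
    THRESHOLDS = {
        "disparate_impact_min": 0.8,          # common "four-fifths" rule of thumb
        "statistical_parity_max_abs": 0.1,
    }

    def summarise(disparate_impact: float, statistical_parity_diff: float) -> dict:
        """Turn raw fairness scores into a small report structure for audits or docs."""
        return {
            "disparate_impact": {
                "value": disparate_impact,
                "passes": disparate_impact >= THRESHOLDS["disparate_impact_min"],
            },
            "statistical_parity_difference": {
                "value": statistical_parity_diff,
                "passes": abs(statistical_parity_diff) <= THRESHOLDS["statistical_parity_max_abs"],
            },
        }

    # Example scores plugged in by hand; in practice these would come from the
    # assessment tool's output.
    report = summarise(disparate_impact=0.72, statistical_parity_diff=-0.14)

    with open("fairness_report.json", "w") as f:
        json.dump(report, f, indent=2)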
The minds behind the magic
Vidyaraman Sankaranarayanan, Chief Product Officer, DeepDive Labs
Vidyaraman Sankaranarayanan is the Chief Product Officer at DeepDive Labs, a Singapore-based AI and data consultancy. He has led product strategy and execution across the company’s GenAI training, regulatory risk management, and compliance automation platforms.
Before DeepDive Labs, Vidyaraman spent over a decade at Microsoft in both Redmond and Singapore, where he held cross-functional roles spanning product design, compliance architecture, and security innovation. He was the Risk Architect for Office 365 in APAC, managing audits for financial services clients. He also spearheaded several product initiatives, including compliant user notifications, Office Mobile user acquisition experiments, and the Social Share plug-in for PowerPoint. Earlier, he was the PM for UX and classification efforts on Data Loss Prevention (DLP) in Outlook and Exchange, implementing features like "Policy Tips" and sensitive content detection using regex, probabilistic models, and fingerprinting. His contributions blended engineering rigor with user-centered design, especially in regulated environments.
He holds a Ph.D. in Computer Science from the University at Buffalo, where he authored a dissertation on game-theoretic approaches to security design, and an M.S. in Computer Engineering from the University of Kansas. His 15+ year career spans enterprise product development, AI-driven regulatory tooling, and risk-aware UX design.
DeepDive Labs is a consultancy that designs and develops custom SaaS tools and LLM-powered workflows for use cases such as responsible AI, cloud cost engineering, and AI risk management. In addition to tooling, it offers mid-career educational programs as a core part of its services—delivering focused, interactive courses and workshops tailored to teachers, students, and working professionals. Together, these offerings support the practical adoption of emerging technologies and evolving regulatory standards.