Friday November 15, 2024 11:30am - 11:45am IST
If there’s any one thing that I took away from watching the 2008 children’s film Wall-E, it’s that the fictional megacorporation Buy’n’Large (BnL) had woeful threat modelling processes. In this fusion of fiction and real-world experience, I will work through practical examples of threat modelling generative AI systems and large language models. I will draw on my experience threat modelling AI within the fintech and media industries, and illustrate these lessons practically through one of Buy’n’Large’s own AI systems: Autopilot. We will discuss a number of AI-specific threats and how they should be tackled in BnL’s threat modelling sessions. These include data tampering, information leakage, elevation of privilege, adversarial inputs, model inversion, and reprogramming; some relate directly to Wall-E plot points, while others explore what else could have gone wrong. Throughout, we will emphasise that threat modelling must rely on fast, agile methods to keep pace with data science teams working on AI. We will also discuss how AI can affect the trust dynamics of a threat model, and make an argument for a zero-trust approach to application architectures that leverage AI. Welcome to the newly formed BnL appsec team; we look forward to working with you.
Speakers
Chris Cooper

Director of Product Security, News Corp