If there’s one thing I took away from watching the 2008 children’s film Wall-E, it’s that the fictional megacorporation Buy’n’Large (BnL) had woeful threat modelling processes. In this fusion of fiction and real-world experience, I will walk through practical examples of threat modelling generative AI systems and large language models. Drawing on my experience threat modelling AI in the fintech and media industries, I will illustrate those lessons through one of Buy’n’Large’s own AI systems: Autopilot. We will discuss a number of AI-specific threats and how they should be tackled in BnL’s threat modelling sessions, including data tampering, information leakage, elevation of privilege, adversarial inputs, model inversion, and reprogramming; some of these tie into Wall-E plot points, while others explore what else could have gone wrong. Throughout, we will emphasise that threat modelling must use fast, agile methods to keep pace with the data science teams building AI. We will also discuss how AI changes the trust dynamics of a threat model, and make the case for a zero-trust approach to application architectures that incorporate AI. Welcome to the newly formed BnL appsec team; we look forward to working with you.