Friday, November 15
 

10:45am IST

The Dark Side of AI: Exploring Adversarial Threats
Friday November 15, 2024 10:45am - 11:30am IST
As artificial intelligence (AI) becomes an integral part of our digital landscape, the looming threat of adversarial attacks casts a shadow over its immense potential. This presentation takes a technical deep dive into the evolving landscape of AI security and the relentless tactics employed by adversaries to exploit vulnerabilities. Attendees will gain insights into various attacker strategies, including the OWASP Top 10 for LLM Applications, and into exploitable security flaws in LLM frameworks. Moreover, there will be demos of adversarial AI attacks on POC applications, including the Fast Gradient Sign Method (FGSM), Prompt Injection to Code Execution, Poisoning Training Data, Model Serialization Attacks, and SQL Injection in LLM applications. The session aims to equip attendees with a comprehensive understanding of the adversarial tactics prevalent in AI security and empower them to guard against the shadows that threaten AI systems.
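For orientation, below is a minimal sketch of the Fast Gradient Sign Method named in the demo list. It assumes a trained PyTorch classifier `model` and a batched image/label tensor pair with pixel values in [0, 1]; it is illustrative only, not the speaker's demo code.

```python
# Minimal FGSM sketch (assumes a trained PyTorch classifier and inputs in [0, 1]).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image`, perturbed by epsilon in the
    direction of the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the gradient sign to increase the loss, then stay in the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A larger epsilon makes the perturbation more effective at flipping the prediction but also more visible in the image.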
                                   
    
Speakers

Alex Devassy

AppViewX

11:30am IST

Threat Modelling is, Buy’n’Large, the best way to secure generative AI
Friday November 15, 2024 11:30am - 11:45am IST
If there's any one thing that I took away from watching the 2008 children's film Wall-E, it's that the fictional megacorporation Buy'n'Large (BnL) had woeful threat modelling processes. In this fusion of fiction and real-world experience, I will demonstrate some practical examples of threat modelling generative AI systems and large-language models. I will draw on my experience threat modelling AI within the fintech and media industries, and illustrate these lessons practically through one of Buy'n'Large's own AI systems: Autopilot. We will discuss a number of AI-specific threats and how they should be tackled practically in BnL's threat modelling sessions. These include: data tampering, information leakage, elevation of privilege, adversarial inputs, model inversion, and reprogramming; some of these relate to Wall-E plot points, and some consider what else could have gone wrong. Throughout this we will emphasise that threat modelling must leverage fast and agile methods to keep pace with data science teams working on AI. We will also discuss how AI can affect the trust dynamics of a threat model, and make an argument for a zero-trust approach to application architectures that leverage AI. Welcome to the newly formed BnL appsec team; we look forward to working with you.
                                   
    
Speakers

Chris Cooper

Director of Product Security, News Corp

11:45am IST

My Threat Modeling Journey
Friday November 15, 2024 11:45am - 12:30pm IST
My Threat Modeling Journey is a personal account of Takaharu Ogasa's experiences in learning, applying, and teaching threat modeling. In this talk, he shares how he first encountered threat modeling and how his understanding of security practices has evolved over time. He discusses the challenges faced in implementing threat modeling in various environments, from workshops to real-world applications, and how these experiences have shaped his approach to improving security design.
Through a combination of hands-on application and community-driven initiatives, including the establishment of an open threat modeling community in Japan, he emphasizes the importance of continuous learning and collaboration. His journey also highlights the value of diversity in threat modeling, showcasing how different perspectives contribute to identifying and mitigating security risks more effectively.
   
    
Speakers

Takaharu Ogasa

CEO, Security Initiative

12:30pm IST

The Inmates Are Running the Asylum: Why Developers Drive Security Professionals Crazy and How to Restore Sanity
Friday November 15, 2024 12:30pm - 1:15pm IST
In the evolving landscape of software development, the integration of DevSecOps has emerged as a critical paradigm, promising a harmonious blend of development, security, and operations to streamline feature delivery while ensuring security. However, the path to achieving this seamless integration is fraught with hurdles, ranging from the lack of security training among developers to the complexity of security tools, the scarcity of dedicated security personnel, and the generation of non-actionable security alerts. Historically, there has been a palpable tension between development teams, who prioritize rapid feature deployment, and security professionals, who focus on risk mitigation. This discrepancy often results in an "Inmates Are Running the Asylum" scenario, where developers, driven by delivery deadlines, may inadvertently sideline security, leading to frustration among security teams. However, the essence of DevSecOps lies in reconciling these differences by embedding security into the development lifecycle, thereby enabling faster, more secure releases without compromising productivity. This talk explores strategies for embedding security into the development process in a harmonious manner, enhancing productivity without compromising on security.
                                   
    
Speakers

Debrup Ghosh

Principal Product Manager, F5

1:15pm IST

Shift-left Security with Open Policy Agent (OPA) controls
Friday November 15, 2024 1:15pm - 2:00pm IST
  As organizations adopt Infrastructure-as-Code (IaC) practices, ensuring compliance and enforcing best practices across multiple Terraform configurations becomes a critical challenge. Open Policy Agent (OPA) provides a powerful solution by enabling policy-as-code, decoupling decision-making from policy enforcement. In this session, we will explore how to leverage OPA policies to define guardrails for Terraform runs, preventing or enforcing specific actions based on predetermined rules. We will demonstrate how to group these policies in a Version Control System (VCS) like GitHub and enforce them using GitHub Actions workflows. Attendees will learn how to write OPA policies tailored to their organization's requirements, integrate them with Terraform, and automate policy evaluation using GitHub Actions.
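As a rough illustration of the kind of guardrail described above, here is a minimal sketch. It assumes the `opa` binary is on the PATH, a Rego policy directory `policy/` defining a hypothetical `data.terraform.deny` rule, and a `plan.json` produced by `terraform show -json`; the rule name and file layout are assumptions for illustration, not the speaker's setup.

```python
# Minimal policy-gate sketch: evaluate OPA policies against a Terraform plan
# and fail the CI job when any deny messages are returned.
import json
import subprocess
import sys

def evaluate_plan(plan_path="plan.json", policy_dir="policy"):
    result = subprocess.run(
        ["opa", "eval", "--format", "json",
         "--data", policy_dir, "--input", plan_path,
         "data.terraform.deny"],
        capture_output=True, text=True, check=True,
    )
    output = json.loads(result.stdout)
    # Collect any deny messages OPA produced for this plan.
    violations = []
    for res in output.get("result", []):
        for expr in res.get("expressions", []):
            violations.extend(expr.get("value") or [])
    return violations

if __name__ == "__main__":
    denials = evaluate_plan()
    for message in denials:
        print(f"DENIED: {message}")
    sys.exit(1 if denials else 0)  # a non-zero exit fails the pipeline job
```

A script like this can run as a step in a GitHub Actions workflow so that every Terraform run is checked against the policy set before apply.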
                                   
    
Speakers

Anshu Bathla

Lead Consultant - Security, AWS

2:00pm IST

Zero Day Apocalypse: The Rising Cyber Threat Landscape
Friday November 15, 2024 2:00pm - 2:45pm IST
Imagine a hidden flaw in the application software that hackers can exploit before anyone even knows it exists. These are called zero-day vulnerabilities, and they're becoming a bigger problem. Hackers are finding these flaws faster than ever before, and when they use them to attack, the damage can be huge.
Think about the times you've heard about big companies getting hacked. Many of these attacks use zero-day vulnerabilities. Hackers can steal personal information, disrupt businesses, and even cause financial losses. It's like a ticking time bomb in your application.
To protect ourselves from these dangers, we need to be smarter about how we use and maintain our application software. This means regularly checking for updates, writing code carefully, and having a plan in place to deal with attacks when they happen.
In short, the threat of zero-day vulnerabilities is real and growing. By being proactive and taking the right steps, we can reduce our risk and keep our digital world safe.
                                   

Speakers

Somen Das

Technology Head, Attack Surface Management, TCS

2:45pm IST

Leveraging DevSecOps and AI to Triage and Resolve OWASP Top 10 Vulnerabilities in a project(s)
Friday November 15, 2024 2:45pm - 3:15pm IST
 In this session, we'll explore how to effectively triage and resolve vulnerabilities within the OWASP Top 10 categories using a DevSecOps platform like GitLab.
Attendees will learn:
1. How to configure security scanners within project pipelines to generate comprehensive vulnerability reports.
2. How to use the data from these reports to triage vulnerabilities according to the OWASP Top 10 categories (a minimal triage sketch follows this list).
3. How AI-assisted tools can suggest resolutions and automate the remediation of the identified critical vulnerabilities.
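A sketch of step 2, assuming a GitLab-style gl-sast-report.json artifact with a top-level "vulnerabilities" list whose entries carry "severity" and CWE "identifiers"; the field names and the small CWE-to-OWASP mapping are illustrative assumptions, not GitLab's or the speaker's code.

```python
# Illustrative triage sketch: bucket SAST findings into OWASP Top 10 (2021) categories.
import json
from collections import defaultdict

# Partial, illustrative CWE-to-OWASP mapping; a real triage job would use a fuller table.
CWE_TO_OWASP = {
    "CWE-89": "A03: Injection",
    "CWE-79": "A03: Injection",
    "CWE-287": "A07: Identification and Authentication Failures",
    "CWE-502": "A08: Software and Data Integrity Failures",
}

def triage(report_path="gl-sast-report.json"):
    with open(report_path) as fh:
        report = json.load(fh)
    buckets = defaultdict(list)
    for vuln in report.get("vulnerabilities", []):
        cwes = [ident.get("name", "") for ident in vuln.get("identifiers", [])
                if ident.get("type", "").lower() == "cwe"]
        category = next((CWE_TO_OWASP[c] for c in cwes if c in CWE_TO_OWASP), "Unmapped")
        buckets[category].append((vuln.get("severity"), vuln.get("name") or vuln.get("message")))
    return buckets

if __name__ == "__main__":
    for category, findings in triage().items():
        print(f"{category}: {len(findings)} finding(s)")
```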
                                   
    
Speakers

Bala Kumar Subramani

Senior Backend Engineer, Gitlab

3:15pm IST

Thinking Like An Attacker - Quick Wins For Securing Your Application
Friday November 15, 2024 3:15pm - 3:30pm IST
It's tough being a developer. Between an endless backlog of tasks, unrealistic deadlines, shifting priorities, and staying up-to-date with the latest trends, adding security knowledge to the mix can feel overwhelming. But security doesn't have to be painful!
                   This talk will focus on adopting the "attacker's mindset" — understanding how malicious actors think to identify vulnerabilities before they do. By anticipating threats and applying a few simple strategies, developers can make quick, impactful improvements to their application's security. And the best part is, you won’t need to master complex tools or frameworks — just a shift in perspective and a few practical steps.

Whether you’re new to security or looking for ways to streamline your approach, this talk will provide you with practical tips to adopt an attacker's mindset and learn how to preemptively counter common threats.
                                   
    
Speakers

Hotanya Ragtah

Information Security Consultant, Kiwibank

3:30pm IST

Building Secure AI Systems: A Comprehensive Approach to AI Security
Friday November 15, 2024 3:30pm - 4:15pm IST
AI security is an emerging field, and it is necessary for organizations and AI practitioners to incorporate secure ways of developing and adopting AI systems. This means that at each stage of the SDLC, AI-specific controls and best practices need to be followed alongside traditional application security practices (Secure SDLC).
SDLC Stage vs AI Security Controls:
Design - Risk assessments / threat modelling AI systems
Development - Asset inventory / tracking, protecting sensitive data, and supply chain security (one such control is sketched after this list)
Deployment - Secure MLOps, incident management, AI security assessments, secure infrastructure
Operate and Maintain - Secure logging and monitoring, patch management
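One hedged example of the asset-inventory and supply-chain items above: verifying a model artifact's digest against an approved inventory before it is loaded. The models.json inventory file and function name below are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative control: refuse to load model artifacts whose SHA-256 digest is not
# recorded in an approved inventory (assumed file: models.json mapping name -> digest).
import hashlib
import json
import pathlib

def verify_model_artifact(artifact_path, inventory_path="models.json"):
    with open(inventory_path) as fh:
        approved = json.load(fh)  # e.g. {"classifier-v3.safetensors": "ab12..."}
    artifact = pathlib.Path(artifact_path)
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if approved.get(artifact.name) != digest:
        raise RuntimeError(f"Refusing to load {artifact.name}: unknown artifact or digest mismatch")
    return artifact  # safe to hand to the model loader
```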
References and Resources will be shared.
By the end of this talk, the audience will have a clear understanding of how to approach AI security and will be equipped to develop guidelines that ensure the security of AI systems throughout their lifecycle.
                                   
    
Speakers

Gowtham Sundar

Head - AppSec, SPH Media

4:15pm IST

Maturing Your Application Security Program
Friday November 15, 2024 4:15pm - 5:00pm IST
After working with over 300 companies on their application security programs, the most common question I receive is "what's next?". They want to know how to mature their programs, and when they look at the maturity models available, they find them intimidating and so far beyond their current maturity level that they seem impossible. In this talk I will take you through 3 common AppSec program maturity levels I have encountered over the years, with practical and actionable next steps you could take immediately to improve your security posture.
Outline:
• Intro to myself, the format of this talk, and how I conducted this research (by working with over 300 companies over 5 years with IANS Research, meeting with 1-5 companies per week). All of the companies are considered enterprise, so this talk will focus on enterprise, not SMB.
• Explain to the audience, gently, that the current maturity models available (OWASP SAMM, BSIMM and NIST) scare the pants off of most clients, and seem utterly impossible. It's not that the maturity frameworks are bad, it's that they are too "pie in the sky" compared to where most organizations currently are. I *WISH* our industry could be at that level. We are not.
• Model 1: we just PenTest the important app(s) once a year, little to no governance
This is, unfortunately, extremely common. Even in 2024 I am working with enterprises that have ZERO AppSec programs, starting from scratch. That said, we all need to start somewhere, and if they are meeting with me (or another AppSec consultant) then they are serious!
Often no formal SDLC at all. Often a very mixed tech stack. Often code is located in several places, everyone doing their own thing.
Why this model is bad: it's quite expensive for what you are getting (pentesting as your only AppSec is $$$$), terrible coverage, developers do not feel supported, there's no opportunity for them to learn, etc.
How to mature this model, *realistic* next steps:
Secure coding training, followed up with quarterly lunch-and-learn style events to teach developers about whatever the most pressing issue is for the AppSec team (a lesson on a common vulnerability, incident issues, whatever).
If little to no budget:
Use a free tool like Zap or Burp for DAST (in a sandbox), and Semgrep OSS.
Try to identify one or more devs who might be willing to do some security work, and assign the tools to them.
Get a secure coding guideline and try to get the developers to follow it, even a super basic one.
Create a list of security requirements for all new web app and API projects, and try to get projects to use it.
Try to get all your code into the same place, set SAST against your code repo, and try not to cry after your first scan.
Talk to the head of developers about plans for IT modernization; you all want a real SDLC, project management, centralization and standardization.
Threat model your super important app(s), then socialize the results with the developers so they understand the threats to your org and your mission. This will raise awareness a lot.
Share information on past incidents, if appropriate, so the devs understand where you are coming from.
Scan your codebase for secrets, rotate them, and start checking them in somewhere that is not your code base (see the sketch after this outline).
Put a WAF in front of any app that is *terrible*.
You still PenTest the important apps, but now sometimes you pass the tests.
If budget, add this:
Hire a full-time AppSec person as the #1 step - best possible spend.
Buy a paid second-generation SAST (no one will tolerate tons of false positives at this point); still use free/almost-free DAST.
• Model 2: SAST + DAST and perhaps one other tool, some governance
This is the most common AppSec program I see at enterprises.
They pay a lot of money for 2-3 tools, and have partially rolled them out. They send lots of bug reports, few things get fixed. "Why won't the developers just fix things?"
Little-to-no documentation, little-to-no interaction with developer teams, may have done a failed champions program.
There is an AppSec program with 1 or 2 people, but it's not consistent or well directed, always responding to fires.
Why this model is bad: coverage is bad. Not much is fixed. We're not here to find lots of vulns and make reports, we are here to reduce organizational risk, and these programs usually make only a tiny dent. They also tend to cause friction.
How to mature this model, *realistic* next steps:
Do all the stuff from the previous maturity model.
You will likely need additional AppSec resources for this; ensure they are not all just PenTesters who used to be sysadmins. You need at least one resource with lots of coding experience in their background who understands the developer brain and processes.
Assess whether your tools are working for you and your devs. If you've had your SAST and DAST for 2+ years and they are not adopted or completely rolled out, WHY NOT? Likely you have the wrong toolset. Time to upgrade/modernize.
Ideally you would have tooling that looks at your running apps (and APIs), your written code, and third-party code. This can be anywhere from 2 tools to 5, depending on how thorough you want to be. Focus on getting as many true positives as possible, and try to add as much auto-triage as you can, so you show developers the lowest possible number of things to fix. Examples: validate your secret findings, only show reachable third-party code vulns, use AI to tr
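As a hedged illustration of the "scan your codebase for secrets" step above, here is a minimal Python sketch; the two regexes and the file walk are assumptions for illustration only, and a real program would rely on a dedicated scanner such as gitleaks or trufflehog.

```python
# Minimal secrets-scanning sketch with two illustrative patterns (not exhaustive).
import pathlib
import re

PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hard-coded password assignment": re.compile(r"(?i)password\s*=\s*['\"][^'\"]{6,}['\"]"),
}

def scan(root="."):
    findings = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, label))
    return findings

if __name__ == "__main__":
    for file_path, lineno, label in scan():
        print(f"{file_path}:{lineno}: possible {label}")
```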
Speakers

Tanya Janca

Head of Community and Education, SEMGREP

5:30pm IST

Closing Note and Updates
Friday November 15, 2024 5:30pm - 6:00pm IST
Speakers

Vandana Verma Sehgal

Security, OWASP
Vandana Verma Sehgal is Security Leader at Snyk. She is a member of the OWASP Global Board of Directors. She has experience ranging from Application Security to Infrastructure and now dealing with Product Security. She also works in various communities towards diversity initiatives...