Thursday, November 14
 

10:45am IST

Scaling Smart Contract Security with AI Automation
Thursday November 14, 2024 10:45am - 11:10am IST
Smart contracts have revolutionized the way we execute agreements, enabling trustless transactions and automating processes across industries. However, their complexity and potential vulnerabilities demand a meticulous approach to security. This talk delves into smart contract security, addressing common pitfalls, best practices, and emerging trends. From code audits to decentralized finance (DeFi) exploits, we'll navigate real-world examples to equip you with the knowledge needed to safeguard smart contracts effectively, including the use of automation and AI to detect vulnerabilities.
1. Understanding Smart Contract Vulnerabilities:
Detailed exploration of common vulnerabilities in smart contracts (e.g., reentrancy, integer overflow/underflow, broken access control). Real-world examples of security breaches and their consequences.
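To make the reentrancy pitfall concrete, here is a minimal Python model (not Solidity; all names are illustrative) of a vault that pays out before updating its ledger, the same state-ordering bug behind classic reentrancy exploits:

```python
class VulnerableVault:
    """Toy model of a contract that sends funds BEFORE updating state."""

    def __init__(self, balances):
        self.balances = dict(balances)        # depositor -> deposited amount
        self.pool = sum(balances.values())    # total funds held by the vault

    def withdraw(self, caller):
        amount = self.balances[caller]
        if amount == 0:
            return
        self.pool -= amount
        # BUG: the external call happens before the balance is zeroed,
        # so a malicious caller can re-enter withdraw() right here.
        caller.receive(self, amount)
        self.balances[caller] = 0             # state update comes too late


class Attacker:
    def __init__(self):
        self.stolen = 0

    def receive(self, vault, amount):
        self.stolen += amount
        # Re-enter while our recorded balance is still non-zero.
        if vault.pool >= amount:
            vault.withdraw(self)


attacker = Attacker()
vault = VulnerableVault({attacker: 100, "honest_user": 200})
vault.withdraw(attacker)
# attacker.stolen is now 300: the entire pool, despite a 100 deposit
```

The fix mirrors the checks-effects-interactions pattern: zero the balance before making the external call.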
2. Best Practices for Smart Contract Security:
Coding standards and practices to minimize vulnerabilities.
The role of formal verification in ensuring smart contract reliability.
Importance of continuous education and staying updated with the latest security trends.
3. Code Audits and Their Role:
Importance of manual and automated code audits.
Step-by-step process of conducting a smart contract audit.
Tools and frameworks commonly used in the industry for auditing smart contracts.
4. Automation and AI in Smart Contract Security:
Introduction to automation and AI tools designed for smart contract security.
Demonstration of how automation can streamline the process of vulnerability detection.
Advantages and limitations of using AI in smart contract audits.
5. Real-World Applications and Future Trends:
Discussion on how industries can leverage automation for smart contract security.
Emerging trends and future directions in smart contract security.
Predictions on how AI and automation will shape the future of smart contract development and security.
Speakers
Shashank
CEO, CredShields

11:10am IST

Policy Driven Vetting of OSS Components
Thursday November 14, 2024 11:10am - 11:45am IST
Open Source component vulnerabilities and other risks are often hard to fix, especially for unsupported or unmaintained projects. Establishing secure guardrails that prevent risky components from entering a software project, especially as transitive dependencies, is critical. In this talk, we will look at the policy-as-code features of vet and how to leverage them to set up OSS component vetting in pull request flows. We will also touch upon some of vet's advanced features, such as dependency graph reconstruction, upgrade advice, and code analysis. https://github.com/safedep/vet
Speakers
Abhisek Datta
Co-Founder, SafeDep

11:45am IST

HARMONIZE APPSEC STANDARDS: A MULTIFACETED APPROACH TO COMPLIANCE SPECTRUM MASTERY
Thursday November 14, 2024 11:45am - 12:30pm IST
 The presentation, "HARMONIZE APPSEC STANDARDS: A MULTIFACETED APPROACH TO COMPLIANCE SPECTRUM MASTERY," ventures into the vast expanse of application security (AppSec) regulations, transcending singular frameworks to embrace a holistic compliance landscape. With an ever-increasing array of standards such as NIST, PCI DSS, HIPAA, SOC2, and OWASP, the complexity of maintaining compliance is monumental. This session is a clarion call to harness Artificial Intelligence (AI) as a unifying force to navigate and uphold these diverse regulations efficiently.
This narrative weaves through the intricate tapestry of compliance challenges, spotlighting the need for a dynamic, AI-enhanced approach that not only streamlines adherence to current standards but also agilely adapts to emergent ones. The talk will unveil how AI can be the linchpin in developing secure systems, automating compliance processes, and undertaking preemptive threat mitigation, with a nod to the compliance engines that are paving the future of cybersecurity. Attending participants will be privy to an in-depth analysis of generative AI and its potential to revolutionize banking compliance, an area historically burdened by meticulous regulatory requirements and substantial financial implications. The discussion will pivot to demonstrate how generative AI transcends the banking domain to apply its transformative capabilities across multiple compliance frameworks, offering a beacon of innovation for AppSec professionals.
The presentation will guide the audience through a series of actionable insights, demonstrating the utilization of AI in enhancing security postures while meeting the stringent demands of varied compliance standards. AI in action across standards will illustrate the tangible benefits of AI integration, offering a blueprint for a robust, compliance-centric security strategy.
Speakers
BHARADWAJ D J
SENIOR ARCHITECT - CYBER SECURITY, SYNECHRON TECHNOLOGIES

12:30pm IST

Attack Surface Management - Safeguarding Digital Landscapes via AI
Thursday November 14, 2024 12:30pm - 1:15pm IST
In an increasingly digital world, organizations face the challenge of securing expansive and dynamic digital landscapes. Attack Surface Management (ASM) is crucial in identifying and mitigating vulnerabilities across these environments. The integration of Artificial Intelligence (AI) into ASM offers a transformative approach, enabling proactive and adaptive defense mechanisms. This paper explores how AI enhances ASM by automating the detection and analysis of potential attack vectors, continuously monitoring for new threats, and providing actionable insights for preemptive security measures. By leveraging AI, organizations can significantly reduce their exposure to cyber risks, ensuring a more resilient and secure digital infrastructure.
I. Introduction
A. Overview of Digital Transformation
B. The Importance of Attack Surface Management (ASM)
C. The Role of AI in Cybersecurity
II. Understanding Attack Surface Management
A. Definition and Key Components
B. Challenges in Traditional ASM
C. The Need for Automation and Intelligence
III. Leveraging AI in Attack Surface Management
A. AI-Driven Asset Discovery
B. AI for Vulnerability Detection and Prioritization
C. Continuous Threat Monitoring and Intelligence
D. Adaptive Response Mechanisms Based on AI Insights
IV. Case Studies and Real-world Applications
A. Case Study 1: AI-Enhanced ASM in a Large Enterprise
B. Case Study 2: AI in ASM for Critical Infrastructure
C. Comparative Analysis of AI vs. Traditional ASM
V. Challenges and Considerations
A. Ethical and Privacy Concerns
B. Integration with Existing Security Frameworks
C. Limitations of AI in ASM
VI. Future Trends in AI-Driven ASM
A. Evolution of AI Technologies in Cybersecurity
B. The Convergence of AI with Other Technologies
C. The Road Ahead for AI in ASM
VII. Conclusion
A. Recap of AI’s Impact on ASM
B. The Imperative of AI Adoption in Cybersecurity
C. Final Thoughts on Safeguarding Digital Landscapes
The future of secure digital environments through AI-driven ASM
Speakers
Antara Mane
Vertical Lead, SecureNeexus

1:15pm IST

Building git hooks for your organization
Thursday November 14, 2024 1:15pm - 2:00pm IST
Have you been in a situation where you had to help developers write secure code and spent hours in training or code reviews? What if there were a way to make such reviews proactive?

This talk will discuss using git hooks to perform security scans. We will cover using git hooks the usual way, and how one team, such as application security, can distribute hooks across the organisation to help developers write clean code. We will specifically talk about using pre-commit for code scanning.

We will look at how we distribute the hooks to every developer in the company, the problems we faced, and how we collect metrics to understand usage patterns and efficiency. Issues such as time consumption will also be discussed.
This implementation involved many challenges and lessons learned: why existing tools and frameworks fell short, and what can go wrong when installing the hooks. We will discuss all of those.
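As a flavour of the approach, a shared `.pre-commit-config.yaml` along these lines runs a secret scanner and a static analyzer on every commit. The scanner choices, pinned versions, and arguments here are illustrative, not the ones the talk prescribes:

```yaml
# .pre-commit-config.yaml -- distributed to every repo in the org;
# developers activate it once with `pre-commit install`
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0          # example pin; use a current release
    hooks:
      - id: gitleaks      # block commits that contain secrets
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.45.0          # example pin; use a current release
    hooks:
      - id: semgrep
        args: ["--config", "auto", "--error"]   # fail the commit on findings
```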
Speakers
Naveen S
Lead Security Engineer, Freshworks

2:00pm IST

ATT&CK LENS (An AI based Visualization for Mitre ATT&CK Exploration)
Thursday November 14, 2024 2:00pm - 2:45pm IST
The ATT&CK LENS framework introduces an innovative approach to cybersecurity analysis by leveraging advanced artificial intelligence (AI) for dynamic visualization and exploration of the MITRE ATT&CK framework. Designed to address the growing complexity of cyber threat data, ATT&CK LENS enables cybersecurity professionals to analyze and visualize relationships between threat actors, malware, campaigns, and their associated techniques and tactics with unprecedented ease and precision. At its core, ATT&CK LENS is built on a robust AI-driven architecture that utilizes Natural Language Processing (NLP) and Retrieval-Augmented Generation (RAG) to interpret and process user queries expressed in natural language. This capability allows users to generate complex visualizations without the need for manual input or technical expertise. By transforming plain English queries into actionable data visualizations, the framework empowers users to explore the MITRE ATT&CK matrix in a more intuitive and efficient manner.

The backend of ATT&CK LENS is implemented using Flask, a lightweight web framework that facilitates rapid development and integration. The visualization engine is powered by PyVis and NetworkX, which enable the creation of interactive network graphs that depict the intricate relationships within the ATT&CK dataset. These visualizations are not only dynamic but also customizable, allowing users to apply filters such as source_ref, relationship_type, and target_ref to refine the data displayed. One of the key strengths of ATT&CK LENS is its modular design, which ensures scalability and adaptability. The framework is designed to seamlessly integrate with the latest MITRE ATT&CK dataset, ensuring that users always have access to up-to-date information. Additionally, the modularity of the framework allows for future enhancements, such as the incorporation of additional AI models, data sources, or integration with other cybersecurity tools.
ATT&CK LENS also includes a zero-touch capability, meaning it can automatically adapt to updates in the MITRE ATT&CK framework without requiring manual intervention. This feature significantly reduces the maintenance burden on users, ensuring that the framework remains a reliable and up-to-date resource for threat analysis. The framework’s AI component is particularly powerful in correlating tactics, techniques, and procedures (TTPs) to identify potential threat patterns and vulnerabilities. By using advanced AI models, ATT&CK LENS can process vast amounts of data and generate relevant visualizations that highlight critical threat relationships. This capability is crucial for cybersecurity professionals who need to make informed decisions in a rapidly evolving threat landscape. The framework has been rigorously evaluated through internal user testing, where it demonstrated significant improvements in both the speed and accuracy of threat analysis. Users reported a substantial reduction in the time required to analyze complex threat data and a marked improvement in their ability to identify and understand key threat relationships.
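The kind of filtering the abstract describes can be pictured in a few lines of Python over STIX-style relationship triples. The data below is a simplified, hand-written stand-in for the real ATT&CK dataset (the field names mirror the source_ref/relationship_type/target_ref filters mentioned above; the specific objects are illustrative):

```python
# Simplified stand-in for ATT&CK STIX relationship objects.
relationships = [
    {"source_ref": "intrusion-set--APT28", "relationship_type": "uses",
     "target_ref": "attack-pattern--T1566"},   # Phishing
    {"source_ref": "intrusion-set--APT28", "relationship_type": "uses",
     "target_ref": "malware--X-Agent"},
    {"source_ref": "malware--X-Agent", "relationship_type": "uses",
     "target_ref": "attack-pattern--T1055"},   # Process Injection
]

def filter_relationships(rels, **criteria):
    """Keep only relationships matching every given field,
    e.g. filter_relationships(rels, relationship_type='uses')."""
    return [r for r in rels
            if all(r.get(k) == v for k, v in criteria.items())]

# Edges for a NetworkX/PyVis-style graph: what does APT28 use?
edges = [(r["source_ref"], r["target_ref"])
         for r in filter_relationships(relationships,
                                       source_ref="intrusion-set--APT28")]
# edges -> [('intrusion-set--APT28', 'attack-pattern--T1566'),
#           ('intrusion-set--APT28', 'malware--X-Agent')]
```

In the described architecture, the NLP layer would translate a plain-English query into such filter criteria, and the resulting edge list would feed the interactive graph.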
Speakers
Shankar Murali
AVP, Standard Chartered Bank
Krishna Chaitanya Yeleswarapu
Senior Manager, Standard Chartered Bank

2:45pm IST

MPT! Pentest In Action
Thursday November 14, 2024 2:45pm - 3:30pm IST
In the ever-evolving software development world, security is becoming fast paced as well. Each product going through the pentest cycle has to be managed effectively and efficiently. Managing multiple pentests and testers is important, and a single-pane-of-glass view of all of them, with risk posture, is helpful. Security penetration testing is becoming as necessary and as routine a practice as software testing. Most, if not all, organizations either have their own penetration testing team or utilize third-party pentesters.

Imagine any fast-paced organization developing multiple product lines and planning to release each of them from time to time. It becomes challenging for the organization's security team to manage all of these running pentest activities efficiently, and to produce and track security assessment reports effectively. Because of this volume of work, the number of pentesters in organizations is increasing to keep up. Each pentester is doing multiple pentests, and the next cycle of a previous pentest can get assigned to another pentester. Each pentesting cycle has issues and recurring issues. Above all, managing all of this in Excel worksheets is a nightmare. A pentesting activity knowledge base is all but a must, and a single-pane-of-glass view of all running pentests and the issues identified is a necessity for everyone involved in the security review cycle. To solve these challenges, I have developed a solution called Managing Pentest (MPT): Pentest in Action.

MPT helps us solve various problems:
- An asset DB listing all organisation assets in the pentest process. You can’t secure what you are not aware of!
- Tracking each pentest
- A pentesting knowledge base capturing what a particular application does, or the purpose of the hardware being tested
- When the next pentester takes over, all they have to do is view the asset and its associated information, which is already there
- Time taken for each pentest
- Real-time tracking of activity
- Issue status
- Common issues that are observed

Why MPT? MPT also provides security pentest analytics, which helps us not only track and view everything in a single pane of glass but also:
- Find improvement areas to boost pentester productivity
- Understand the current risk posture
- Understand recurring issues
Speakers
Jyoti Raval
Senior Staff Product Security Leader, Baker Hughes

3:30pm IST

Transforming Threat Modeling: Harnessing ChatGPT for Automated Security Analysis
Thursday November 14, 2024 3:30pm - 4:15pm IST
Introduction to ChatGPT: Uncover the versatility of advanced language models in cybersecurity and explore their applications in threat modeling.
Automating Threat Modeling: Learn how ChatGPT streamlines threat modeling, increasing efficiency by reducing manual effort.
Generating Dynamic Threat Models: See how ChatGPT assists in crafting threat models from system architecture, user stories, and design documents.
Elevating Accuracy and Scope: Discuss techniques for refining the accuracy and coverage of automated threat models generated by ChatGPT.
Integration and Synergy: Explore seamless integration of automated threat modeling into existing security processes, and understand how collaboration between AI systems and security professionals can be encouraged.
Speakers
Gustavo Arreaza
CTO, AppSecCl

4:15pm IST

Practical Session on AI Red Teaming
Thursday November 14, 2024 4:15pm - 5:00pm IST
This session delves into AI red teaming with a focus on identifying and mitigating vulnerabilities in large language models (LLMs). Participants will engage with an intentionally vulnerable LLM, exploring real-world adversarial scenarios where AI systems can be exploited. Using cutting-edge open-source and proprietary tools such as Protect AI’s Model Scan, Rebuff, LLMGuard, NeMo Guardrails, and Garak, we'll demonstrate how to detect and exploit vulnerabilities in LLMs. Attendees will gain practical experience with these tools, learning how to assess model security, apply guardrails, and defend against adversarial attacks.
By the end of this session, participants will:
Understand how to perform AI red teaming on LLMs.
Use Model Scan, Rebuff, LLMGuard, NeMo Guardrails, and Garak for detecting and mitigating vulnerabilities.
Gain hands-on skills to apply these techniques in their own environments, improving the robustness and security of AI systems.
Speakers
R Nagarjun
Security Researcher, AKTO
 
Friday, November 15
 

10:45am IST

The Dark Side of AI: Exploring Adversarial Threats
Friday November 15, 2024 10:45am - 11:30am IST
As artificial intelligence (AI) becomes an integral part of our digital landscape, the looming threat of adversarial attacks casts a shadow over its immense potential. This presentation takes a technical deep dive into the evolving landscape of AI security and the relentless tactics employed by adversaries to exploit vulnerabilities. Attendees will gain insights into various attacker strategies, including the OWASP LLM Top 10, and exploitable security flaws in LLM frameworks. Moreover, there will be demos of adversarial AI attacks on POC applications, covering the Fast Gradient Sign Method (FGSM), prompt injection to code execution, poisoning training data, model serialization attacks, and SQL injection in LLM applications. The session aims to equip attendees with a comprehensive understanding of the adversarial tactics prevalent in AI security and empower them to guard against the shadows that threaten AI systems.
Speakers
Alex Devassy
AppViewX

11:30am IST

Threat Modelling is, Buy’n’Large, the best way to secure generative AI
Friday November 15, 2024 11:30am - 11:45am IST
If there’s any one thing that I took away from watching the 2008 children’s film Wall-E, it’s that the fictional megacorporation Buy’n’Large (BnL) had woeful threat modelling processes. In this fusion of fiction and real-world experience, I will demonstrate some practical examples of threat modelling generative AI systems and large language models. I will draw on my experience threat modelling AI within the fintech and media industries, and illustrate these learnings practically through one of Buy’n’Large’s own AI systems: Autopilot. We will discuss a number of AI-specific threats and how they should be tackled practically in BnL’s threat modelling sessions. These include data tampering, information leakage, elevation of privilege, adversarial inputs, model inversion, and reprogramming; some of these relate to Wall-E plot points, and some consider what else could potentially have gone wrong. Throughout, we will emphasise that threat modelling must leverage fast and agile methods to keep pace with data science teams working on AI. We will also discuss how AI can affect the trust dynamics of a threat model, and make an argument for a zero-trust approach to application architectures that leverage AI. Welcome to the newly formed BnL appsec team; we look forward to working with you.
Speakers
Chris Cooper
Director of Product Security, News Corp

11:45am IST

My Threat Modeling Journey
Friday November 15, 2024 11:45am - 12:30pm IST
My Threat Modeling Journey is a personal account of Takaharu Ogasa's experiences in learning, applying, and teaching threat modeling. In this talk, he recounts how he first encountered threat modeling and how his understanding of security practices has evolved over time. He discusses the challenges faced in implementing threat modeling in various environments, from workshops to real-world applications, and how these experiences have shaped his approach to improving security design.
Through a combination of hands-on application and community-driven initiatives, including the establishment of an open threat modeling community in Japan, he emphasizes the importance of continuous learning and collaboration. His journey also highlights the value of diversity in threat modeling, showcasing how different perspectives contribute to identifying and mitigating security risks more effectively.
Speakers
Takaharu Ogasa
CEO, Security Initiative

12:30pm IST

The Inmates Are Running the Asylum: Why Developers Drive Security Professionals Crazy and How to Restore Sanity
Friday November 15, 2024 12:30pm - 1:15pm IST
In the evolving landscape of software development, the integration of DevSecOps has emerged as a critical paradigm, promising a harmonious blend of development, security, and operations to streamline feature delivery while ensuring security. However, the path to achieving this seamless integration is fraught with hurdles, ranging from the lack of security training among developers to the complexity of security tools, the scarcity of dedicated security personnel, and the generation of non-actionable security alerts. Historically, there has been a palpable tension between development teams, who prioritize rapid feature deployment, and security professionals, who focus on risk mitigation. This discrepancy often results in a "The Inmates Are Running the Asylum" scenario, where developers, driven by delivery deadlines, may inadvertently sideline security, leading to frustration among security teams. However, the essence of DevSecOps lies in reconciling these differences by embedding security into the development lifecycle, thereby enabling faster, more secure releases without compromising productivity. This talk explores strategies for embedding security into the development process in a harmonious manner, enhancing productivity without compromising on security.
Speakers
Debrup Ghosh
Principal Product Manager, F5

1:15pm IST

Shift-left Security with Open Policy Agent (OPA) controls
Friday November 15, 2024 1:15pm - 2:00pm IST
As organizations adopt Infrastructure-as-Code (IaC) practices, ensuring compliance and enforcing best practices across multiple Terraform configurations becomes a critical challenge. Open Policy Agent (OPA) provides a powerful solution by enabling policy-as-code, decoupling decision-making from policy enforcement. In this session, we will explore how to leverage OPA policies to define guardrails for Terraform runs, preventing or enforcing specific actions based on predetermined rules. We will demonstrate how to group these policies in a Version Control System (VCS) like GitHub and enforce them using GitHub Actions workflows. Attendees will learn how to write OPA policies tailored to their organization's requirements, integrate them with Terraform, and automate policy evaluation using GitHub Actions.
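As a flavour of the policy-as-code approach, a Rego policy evaluated against a Terraform plan in JSON form might deny unencrypted S3 buckets. This is a sketch: the package name is arbitrary, and the plan structure and attribute names shown are illustrative rather than exhaustive:

```rego
package terraform.guardrails

# Deny any planned S3 bucket that does not configure server-side encryption.
# `input` is assumed to be the JSON output of `terraform show -json plan.out`.
deny[msg] {
    rc := input.resource_changes[_]
    rc.type == "aws_s3_bucket"
    not rc.change.after.server_side_encryption_configuration
    msg := sprintf("S3 bucket %q must enable server-side encryption", [rc.address])
}
```

In CI, a tool such as `opa eval` or Conftest can evaluate the plan against this policy and fail the pull request whenever the `deny` set is non-empty.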
Speakers
Anshu Bathla
Lead Consultant - Security, AWS

2:00pm IST

Zero Day Apocalypse: The Rising Cyber Threat Landscape
Friday November 15, 2024 2:00pm - 2:45pm IST
Imagine a hidden flaw in the application software that hackers can exploit before anyone even knows it exists. These are called zero-day vulnerabilities, and they're becoming a bigger problem. Hackers are finding these flaws faster than ever before, and when they use them to attack, the damage can be huge.
Think about the times you've heard about big companies getting hacked. Many of these attacks use zero-day vulnerabilities. Hackers can steal personal information, disrupt businesses, and even cause financial losses. It's like a ticking time bomb in your application.
To protect ourselves from these dangers, we need to be smarter about how we use and maintain our application software. This means regularly checking for updates, writing code carefully, and having a plan in place to deal with attacks when they happen.
In short, the threat of zero-day vulnerabilities is real and growing. By being proactive and taking the right steps, we can reduce our risk and keep our digital world safe.
Speakers
Somen Das
Technology Head, Attack Surface Management, TCS

2:45pm IST

Leveraging DevSecOps and AI to Triage and Resolve OWASP Top 10 Vulnerabilities in a project(s)
Friday November 15, 2024 2:45pm - 3:15pm IST
 In this session, we'll explore how to effectively triage and resolve vulnerabilities within the OWASP Top 10 categories using a DevSecOps platform like GitLab.
Attendees will learn:
1. How to configure security scanners within project pipelines to generate comprehensive vulnerability reports.
2. How to use the data from these reports to triage vulnerabilities according to the OWASP Top 10 categories.
3. How AI-assisted tools can suggest resolutions and automate the remediation of identified critical vulnerabilities.
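For the first step, enabling GitLab's built-in scanners is typically a matter of including the managed CI templates. A minimal sketch (template names follow GitLab's documented security templates; which scanners you include depends on your tier and project):

```yaml
# .gitlab-ci.yml -- enable SAST, dependency scanning, and secret detection
include:
  - template: Jobs/SAST.gitlab-ci.yml
  - template: Jobs/Dependency-Scanning.gitlab-ci.yml
  - template: Jobs/Secret-Detection.gitlab-ci.yml

stages:
  - test   # the scanner jobs attach to the test stage by default
```

The resulting vulnerability reports then feed the triage and AI-assisted remediation steps described above.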
Speakers
Bala Kumar Subramani
Senior Backend Engineer, GitLab

3:15pm IST

Thinking Like An Attacker - Quick Wins For Securing Your Application
Friday November 15, 2024 3:15pm - 3:30pm IST
It's tough being a developer. Between an endless backlog of tasks, unrealistic deadlines, shifting priorities, and staying up-to-date with the latest trends, adding security knowledge to the mix can feel overwhelming. But security doesn't have to be painful!
This talk will focus on adopting the "attacker's mindset" — understanding how malicious actors think to identify vulnerabilities before they do. By anticipating threats and applying a few simple strategies, developers can make quick, impactful improvements to their application's security. And the best part is, you won’t need to master complex tools or frameworks — just a shift in perspective and a few practical steps.

Whether you’re new to security or looking for ways to streamline your approach, this talk will provide you with practical tips to adopt an attacker's mindset and learn how to preemptively counter common threats.
Speakers
Hotanya Ragtah
Information Security Consultant, Kiwibank

3:30pm IST

Building Secure AI Systems: A Comprehensive Approach to AI Security
Friday November 15, 2024 3:30pm - 4:15pm IST
AI security is an emerging field, and it is necessary for organizations and AI practitioners to adopt a secure way of developing and adopting AI systems. This means that at each stage of the SDLC, AI-specific controls and best practices need to be followed, in addition to traditional application security practices (secure SDLC).
SDLC Stage vs AI Security Controls:
Design - Risk assessments / threat modelling of AI systems
Development - Asset inventory and tracking, protecting sensitive data, supply chain security
Deployment - Secure MLOps, incident management, AI security assessments, secure infrastructure
Operate and Maintain - Secure logging and monitoring, patch management
References and Resources will be shared.
By the end of this talk, the audience will have a clear understanding of how to approach AI security and will be equipped to develop guidelines that ensure the security of AI systems throughout their lifecycle.
Speakers
Gowtham Sundar
Head - AppSec, SPH Media

4:15pm IST

Maturing Your Application Security Program
Friday November 15, 2024 4:15pm - 5:00pm IST
After working with over 300 companies on their application security programs, the most common question I receive is "what's next?". They want to know how to mature their programs, and when they look at the maturity models available, they find them intimidating and so far beyond their current maturity level that progress feels impossible. In this talk I will take you through 3 common AppSec program maturity levels I have encountered over the years, with practical and actionable next steps you could take immediately to improve your security posture.
                   Outline:
• Intro to myself, the format of this talk, and how I conducted this research (working with over 300 companies across 5 years of IANS research, meeting with 1-5 companies per week). All of the companies are considered enterprise, so this talk will focus on enterprise, not SMB.
• Explain to the audience, gently, that the current maturity models available (OWASP SAMM, BSIMM and NIST) scare the pants off of most clients, and seem utterly impossible. It's not that the maturity frameworks are bad, it's that they are too "pie in the sky" compared to where most organizations currently are. I *WISH* our industry could be at that level. We are not.
• Model 1: we just PenTest the important app(s) once a year, little to no governance
This is, unfortunately, extremely common. Even in 2024 I am working with enterprises that have ZERO AppSec programs, starting from scratch. That said, we all need to start somewhere, and if they are meeting with me (or another AppSec consultant) then they are serious!
- Often no formal SDLC at all
- Often very mixed tech stack
- Often code is located in several places, everyone doing their own thing
Why this model is bad: it’s quite expensive for what you are getting (pentesting as your only AppSec is $$$$), terrible coverage, developers do not feel supported, there’s no opportunity for them to learn, etc.
How to mature this model, *realistic* next steps:
- Secure coding training, followed up with quarterly lunch-and-learn style events to teach developers about whatever the most pressing issue is for the AppSec team (a lesson on a common vulnerability, incident issues, whatever)
If little-to-no budget:
- Use a free tool like ZAP or Burp for DAST (in a sandbox), and Semgrep OSS
- Try to identify one or more devs who might be willing to do some security work, and assign the tools to them
- Get a secure coding guideline and try to get the developers to follow it, even a super basic one
- Create a list of security requirements for all new web app and API projects, and try to get projects to use it
- Try to get all your code into the same place, set SAST against your code repo, and try not to cry after your first scan
- Talk to the head of developers about plans for IT modernization; you all want a real SDLC, project management, centralization and standardization.
- Threat model your super important app(s), then socialize the results with the developers so they understand the threats to your org and your mission. This will raise awareness a lot.
- Share information on past incidents, if appropriate, so the devs understand where you are coming from
- Scan your codebase for secrets, rotate them, and start checking them in somewhere that is not your code base
- Put a WAF in front of any app that is *terrible*
- You still PenTest the important apps, but now sometimes you pass the tests.
If budget, add this:
- Hire a full-time AppSec person as the #1 step - best possible spend
- Buy a paid second-generation SAST (no one will tolerate tons of false positives at this point), still use free/almost-free DAST
• Model 2: SAST + DAST and perhaps one other tool, some governance
This is the most common AppSec program I see at enterprises.
They pay a lot of money for 2-3 tools, and have partially rolled them out. They send lots of bug reports; few things get fixed. “Why won’t the developers just fix things?”
Little-to-no documentation, little-to-no interaction with developer teams, may have done a failed champions program.
There is an AppSec program with 1 or 2 people, but it’s not consistent or well directed, always responding to fires.
Why this model is bad: coverage is bad. Not much is fixed. We’re not here to find lots of vulns and make reports, we are here to reduce organizational risk, and these programs usually make only a tiny dent. They also tend to cause friction.
How to mature this model, *realistic* next steps:
- Do all the stuff from the previous maturity model.
- You will likely need additional AppSec resources for this; ensure they are not all just PenTesters who used to be sysadmins. You need at least one resource with lots of coding experience in their background who understands the developer brain and processes.
- Assess whether your tools are working for you and your devs. If you’ve had your SAST and DAST for 2+ years and they are not adopted or completely rolled out, WHY NOT? Likely you have the wrong toolset. Time to upgrade/modernize.
- Ideally you would have tooling that looks at your running apps (and APIs), your written code, and third-party code. This can be anywhere from 2 tools to 5, depending on how thorough you want to be. Focus on getting as many true positives as possible, and try to add as much auto-triage as you can, so you show developers the lowest possible number of things to fix. Examples: validate your secret findings, only show reachable third party code vulns, use AI to tr
Speakers
Tanya Janca
Head of Community and Education, SEMGREP

5:30pm IST

Closing Note and Updates
Friday November 15, 2024 5:30pm - 6:00pm IST
Speakers
Vandana Verma Sehgal
Security, OWASP
Vandana Verma Sehgal is a Security Leader at Snyk. She is a member of the OWASP Global Board of Directors. Her experience ranges from application security to infrastructure, and she now works in product security. She also works with various communities on diversity initiatives.