Thursday, November 14
 

11:45am IST

HARMONIZE APPSEC STANDARDS: A MULTIFACETED APPROACH TO COMPLIANCE SPECTRUM MASTERY
Thursday November 14, 2024 11:45am - 12:30pm IST
 The presentation, "HARMONIZE APPSEC STANDARDS: A MULTIFACETED APPROACH TO COMPLIANCE SPECTRUM MASTERY," ventures into the vast expanse of application security (AppSec) regulations, transcending singular frameworks to embrace a holistic compliance landscape. With an ever-increasing array of standards such as NIST, PCI DSS, HIPAA, SOC2, and OWASP, the complexity of maintaining compliance is monumental. This session is a clarion call to harness Artificial Intelligence (AI) as a unifying force to navigate and uphold these diverse regulations efficiently.
This narrative weaves through the intricate tapestry of compliance challenges, spotlighting the need for a dynamic, AI-enhanced approach that not only streamlines adherence to current standards but also agilely adapts to emergent ones. The talk will unveil how AI can be the linchpin in developing secure systems, automating compliance processes, and undertaking preemptive threat mitigation, with a nod to the compliance engines that are paving the future of cybersecurity. Attending participants will be privy to an in-depth analysis of generative AI and its potential to revolutionize banking compliance, an area historically burdened by meticulous regulatory requirements and substantial financial implications. The discussion will pivot to demonstrate how generative AI transcends the banking domain to apply its transformative capabilities across multiple compliance frameworks, offering a beacon of innovation for AppSec professionals.
The presentation will guide the audience through a series of actionable insights, demonstrating the utilization of AI in enhancing security postures while meeting the stringent demands of varied compliance standards. AI in action across standards will illustrate the tangible benefits of AI integration, offering a blueprint for a robust, compliance-centric security strategy.
                                   
    
Speakers

BHARADWAJ D J

SENIOR ARCHITECT - CYBER SECURITY, SYNECHRON TECHNOLOGIES
Thursday November 14, 2024 11:45am - 12:30pm IST

12:30pm IST

Attack Surface Management - Safeguarding Digital Landscapes via AI
Thursday November 14, 2024 12:30pm - 1:15pm IST
                                                In an increasingly digital world, organizations face the challenge of securing expansive and dynamic digital landscapes. Attack Surface Management (ASM) is crucial in identifying and mitigating vulnerabilities across these environments. The integration of Artificial Intelligence (AI) into ASM offers a transformative approach, enabling proactive and adaptive defense mechanisms. This paper explores how AI enhances ASM by automating the detection and analysis of potential attack vectors, continuously monitoring for new threats, and providing actionable insights for preemptive security measures. By leveraging AI, organizations can significantly reduce their exposure to cyber risks, ensuring a more resilient and secure digital infrastructure.
I. Introduction
A. Overview of Digital Transformation
B. The Importance of Attack Surface Management (ASM)
C. The Role of AI in Cybersecurity
II. Understanding Attack Surface Management
A. Definition and Key Components
B. Challenges in Traditional ASM
C. The Need for Automation and Intelligence
III. Leveraging AI in Attack Surface Management
A. AI-Driven Asset Discovery
B. AI for Vulnerability Detection and Prioritization
C. Continuous Threat Monitoring and Intelligence
D. Adaptive Response Mechanisms Based on AI Insights
IV. Case Studies and Real-world Applications
A. Case Study 1: AI-Enhanced ASM in a Large Enterprise
B. Case Study 2: AI in ASM for Critical Infrastructure
C. Comparative Analysis of AI vs. Traditional ASM
V. Challenges and Considerations
A. Ethical and Privacy Concerns
B. Integration with Existing Security Frameworks
C. Limitations of AI in ASM
VI. Future Trends in AI-Driven ASM
A. Evolution of AI Technologies in Cybersecurity
B. The Convergence of AI with Other Technologies
C. The Road Ahead for AI in ASM
VII. Conclusion
A. Recap of AI's Impact on ASM
B. The Imperative of AI Adoption in Cybersecurity
C. Final Thoughts on Safeguarding Digital Landscapes
The future of secure digital environments through AI-driven ASM
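Section III.B of the outline covers AI-assisted vulnerability detection and prioritization. As a rough illustration of the prioritization idea only (not the speaker's implementation), the sketch below combines CVSS, internet exposure, and exploit availability into a single priority score; the finding fields and weights are invented for the example.

```python
# Minimal sketch of risk-based vulnerability prioritization for ASM.
# Field names, weights, and findings are illustrative, not from the talk.
from dataclasses import dataclass
import math

@dataclass
class Finding:
    asset: str
    cvss: float              # CVSS base score, 0-10
    internet_exposed: bool   # asset reachable from the internet
    exploit_available: bool  # public exploit or known active exploitation

def risk_score(f: Finding) -> float:
    """Combine signals into a 0-1 priority via a logistic squash."""
    z = 0.6 * (f.cvss / 10) + 1.5 * f.internet_exposed + 1.2 * f.exploit_available - 1.0
    return 1 / (1 + math.exp(-z))

findings = [
    Finding("internal-wiki", cvss=9.8, internet_exposed=False, exploit_available=False),
    Finding("payments-api", cvss=7.5, internet_exposed=True, exploit_available=True),
]

# Highest-risk assets first, so remediation effort goes where exposure is worst.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.asset}: {risk_score(f):.2f}")
```

In practice such a score would feed from the asset inventory and threat intelligence sources described in sections III.A and III.C rather than hand-entered fields.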
                                   
    
Speakers

Antara mane

Vertical Lead, SecureNeexus
Thursday November 14, 2024 12:30pm - 1:15pm IST
 
Friday, November 15
 

1:15pm IST

Shift-left Security with Open Policy Agent (OPA) controls
Friday November 15, 2024 1:15pm - 2:00pm IST
  As organizations adopt Infrastructure-as-Code (IaC) practices, ensuring compliance and enforcing best practices across multiple Terraform configurations becomes a critical challenge. Open Policy Agent (OPA) provides a powerful solution by enabling policy-as-code, decoupling decision-making from policy enforcement. In this session, we will explore how to leverage OPA policies to define guardrails for Terraform runs, preventing or enforcing specific actions based on predetermined rules. We will demonstrate how to group these policies in a Version Control System (VCS) like GitHub and enforce them using GitHub Actions workflows. Attendees will learn how to write OPA policies tailored to their organization's requirements, integrate them with Terraform, and automate policy evaluation using GitHub Actions.
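The session covers expressing these guardrails as OPA policies (Rego) evaluated from a GitHub Actions workflow. As a language-neutral sketch of what such a guardrail checks, the snippet below walks the JSON produced by `terraform show -json plan.out` and flags S3 buckets with a public ACL; the specific rule is a hypothetical example, not one from the talk.

```python
# Illustrative guardrail over a Terraform plan, mirroring what an OPA/Rego
# deny rule would express. Run against: terraform show -json plan.out > plan.json
import json
import sys

DENIED_ACLS = {"public-read", "public-read-write"}  # example rule, not from the talk

def violations(plan: dict) -> list[str]:
    out = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        if rc.get("type") == "aws_s3_bucket" and after.get("acl") in DENIED_ACLS:
            out.append(f"{rc['address']}: public ACL {after['acl']!r} is not allowed")
    return out

if __name__ == "__main__":
    plan = json.load(open(sys.argv[1]))
    found = violations(plan)
    for msg in found:
        print("DENY:", msg)
    sys.exit(1 if found else 0)
```

In CI, the equivalent Rego policy would typically be evaluated by OPA (or a wrapper such as conftest) in a GitHub Actions step, failing the run whenever a deny rule fires.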
                                   
    
Speakers

Anshu Bathla

Lead Consultant - Security, AWS
Friday November 15, 2024 1:15pm - 2:00pm IST

3:15pm IST

Thinking Like An Attacker - Quick Wins For Securing Your Application
Friday November 15, 2024 3:15pm - 3:30pm IST
It's tough being a developer. Between an endless backlog of tasks, unrealistic deadlines, shifting priorities, and staying up-to-date with the latest trends, adding security knowledge to the mix can feel overwhelming. But security doesn't have to be painful!
                   This talk will focus on adopting the "attacker's mindset" — understanding how malicious actors think to identify vulnerabilities before they do. By anticipating threats and applying a few simple strategies, developers can make quick, impactful improvements to their application's security. And the best part is, you won’t need to master complex tools or frameworks — just a shift in perspective and a few practical steps.

Whether you’re new to security or looking for ways to streamline your approach, this talk will provide you with practical tips to adopt an attacker's mindset and learn how to preemptively counter common threats.
                                   
    
Speakers

Hotanya Ragtah

Information Security Consultant, Kiwibank
Friday November 15, 2024 3:15pm - 3:30pm IST

3:30pm IST

Building Secure AI Systems: A Comprehensive Approach to AI Security
Friday November 15, 2024 3:30pm - 4:15pm IST
AI security is an emerging field, and it is necessary for organizations and AI practitioners to adopt a secure way of developing and adopting AI systems. This means that at each stage of the SDLC, controls and best practices specific to AI need to be followed, in addition to traditional application security practices (Secure SDLC).
SDLC Stage vs AI Security Controls:
Design - Risk Assessments / Threat Modelling of AI systems
Development - Asset Inventory / tracking, protection of sensitive data, and supply chain security (a brief sketch follows below)
Deployment - Secure MLOps, Incident Management, AI Security assessments, Secure Infrastructure
Operate and Maintain - Secure Logging and Monitoring, Patch Management.
References and Resources will be shared.
By the end of this talk, the audience will have a clear understanding of how to approach AI security and will be equipped to develop guidelines that ensure the security of AI systems throughout their lifecycle.
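As a small illustration of the development-stage controls listed above (asset inventory/tracking and supply chain security), the sketch below records a model artifact's SHA-256 digest in a simple JSON inventory so a later deployment can verify the artifact has not been swapped. The file paths and inventory format are invented for the example and are not from the talk.

```python
# Minimal sketch of an ML asset inventory entry with an integrity digest.
# Paths and the inventory layout are illustrative only.
import datetime
import hashlib
import json
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register(model_path: str, inventory_path: str = "model_inventory.json") -> None:
    """Append the model artifact's digest and metadata to the inventory."""
    p = pathlib.Path(model_path)
    entry = {
        "artifact": p.name,
        "sha256": sha256_of(p),
        "registered_at": datetime.datetime.utcnow().isoformat() + "Z",
    }
    inv = pathlib.Path(inventory_path)
    records = json.loads(inv.read_text()) if inv.exists() else []
    records.append(entry)
    inv.write_text(json.dumps(records, indent=2))

# At deployment time, recompute the digest and refuse to serve any model
# whose hash is not present in the inventory.
```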
                                   
    
Speakers

Gowtham Sundar

Head - AppSec, SPH Media
Friday November 15, 2024 3:30pm - 4:15pm IST

4:15pm IST

Maturing Your Application Security Program
Friday November 15, 2024 4:15pm - 5:00pm IST
After working with over 300 companies on their application security programs, the most common question I receive is "what's next?". They want to know how to mature their programs, and when they look at the maturity models available, they find them intimidating and so far beyond their current maturity level that they seem impossible. In this talk I will take you through 3 common AppSec program maturity levels I have encountered over the years, with practical and actionable next steps you could take immediately to improve your security posture.
                   Outline:
• Intro to myself, the format of this talk, and how I conducted this research (working with over 300 companies over 5 years with IANS Research, meeting with 1-5 companies per week). All of the companies are considered enterprise, so this talk will focus on enterprise, not SMB.
• Explain to the audience, gently, that the current maturity models available (OWASP SAMM, BSIMM and NIST) scare the pants off of most clients, and seem utterly impossible. It's not that the maturity frameworks are bad, it's that they are too "pie in the sky" compared to where most organizations currently are. I *WISH* our industry could be at that level. We are not.
• Model 1: we just PenTest the important app(s) once a year, little to no governance
This is, unfortunately, extremely common. Even in 2024 I am working with enterprises that have ZERO AppSec programs, starting from scratch. That said, we all need to start somewhere, and if they are meeting with me (or another AppSec consultant) then they are serious!
Often no formal SDLC at all
Often a very mixed tech stack
Often code is located in several places, everyone doing their own thing
Why this model is bad: it's quite expensive for what you are getting (pentesting as your only AppSec is $$$$), terrible coverage, developers do not feel supported, there's no opportunity for them to learn, etc.
How to mature this model, *realistic* next steps:
Secure coding training, followed up with quarterly lunch-and-learn style events to teach them about whatever the most pressing issue is for the AppSec team (a lesson on a common vulnerability, incident issues, whatever)
If little-to-no budget:
use a free tool like ZAP or Burp for DAST (in a sandbox), and Semgrep OSS
try to identify one or more devs who might be willing to do some security work, and assign the tools to them
Get a secure coding guideline and try to get the developers to follow it, even a super basic one
Create a list of security requirements for all new web app and API projects, try to get projects to use it
Try to get all your code into the same place, set SAST against your code repo, try not to cry after your first scan
Talk to the head of developers about plans for IT modernization; you all want a real SDLC, you all want project management, you all want centralization and standardization.
Threat model your super important app(s), then socialize the results with the developers so they understand the threats to your org and your mission. This will raise awareness a lot.
Share information on past incidents, if appropriate, so the devs understand where you are coming from
Scan your codebase for secrets, rotate them, and start checking them in somewhere that is not your code base (a rough sketch of this kind of scan follows this list)
Put a WAF in front of any app that is *terrible*
You still PenTest the important apps, but now sometimes you pass the tests.
If budget, add this:
hire a full-time AppSec person as the #1 step - best possible spend
buy a paid second-generation SAST (no one will tolerate tons of false positives at this point), still use free/almost-free DAST
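For the "scan your codebase for secrets" step above, dedicated scanners (for example gitleaks or trufflehog) are the realistic choice; the sketch below only illustrates the underlying idea with two example token patterns and is not a substitute for a real tool.

```python
# Toy secrets scan: walks a repo and flags strings matching a couple of
# well-known token shapes. Real scanners cover far more patterns plus entropy checks.
import pathlib
import re

PATTERNS = {
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub personal access token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
}

def scan(repo_root: str = ".") -> None:
    for path in pathlib.Path(repo_root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Print only a prefix so the secret itself is not echoed in full.
                print(f"{path}: {name}: {match.group()[:8]}...")

if __name__ == "__main__":
    scan()
```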
• Model 2: SAST + DAST and perhaps one other tool, some governance
This is the most common AppSec program I see at Enterprises.
They pay a lot of money for 2-3 tools, and have partially rolled them out. They send lots of bug reports, few things get fixed. "Why won't the developers just fix things?"
Little-to-no documentation, little-to-no interaction with developer teams, may have done a failed champs program
There is an AppSec program with 1 or 2 people, but it's not consistent or well directed, always responding to fires
Why this model is bad: coverage is bad. Not much is fixed. We're not here to find lots of vulns and make reports, we are here to reduce organizational risk, and these programs usually make only a tiny dent. They also tend to cause friction.
How to mature this model, *realistic* next steps:
Do all the stuff from the previous maturity model.
You will likely need additional AppSec resources for this; ensure they are not all just PenTesters who used to be sysadmins, you need to have at least one resource with lots of coding experience in their background that understands the developer brain and processes
Assess whether your tools are working for you and your devs. If you've had your SAST and DAST for 2+ years and they are not adopted or completely rolled out, WHY NOT? Likely you have the wrong toolset. Time to upgrade/modernize
Ideally you would have tooling that looks at your running apps (and APIs), your written code, and third party code. This can be anywhere from 2 tools to 5, depending on how thorough you want to be. Focus on getting as many true positives as possible, and try to add as much auto-triage as you can (a rough sketch of this idea follows below), so you show developers the lowest possible number of things to fix. Examples: validate your secret findings, only show reachable third party code vulns, use AI to tr
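As a rough sketch of the auto-triage idea described above (showing developers fewer, higher-confidence findings), the snippet below keeps only third-party findings whose package actually appears in the project's import statements, a crude reachability proxy; the finding format and package names are invented for the example.

```python
# Crude auto-triage: drop SCA findings for packages the code never imports.
# Finding structure, package names, and CVE IDs are illustrative placeholders.
import pathlib
import re

def imported_packages(repo_root: str = ".") -> set[str]:
    """Collect top-level module names imported anywhere in the repo."""
    names: set[str] = set()
    for path in pathlib.Path(repo_root).rglob("*.py"):
        for line in path.read_text(errors="ignore").splitlines():
            m = re.match(r"\s*(?:from|import)\s+([A-Za-z_]\w*)", line)
            if m:
                names.add(m.group(1))
    return names

def triage(findings: list[dict], repo_root: str = ".") -> list[dict]:
    """Keep only findings for packages that are actually imported."""
    used = imported_packages(repo_root)
    return [f for f in findings if f["package"] in used]

findings = [
    {"package": "requests", "cve": "CVE-0000-0001"},   # shown only if imported somewhere
    {"package": "leftpadpy", "cve": "CVE-0000-0002"},
]
print(triage(findings))
```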
Speakers

Tanya Janca

Head of Community and Education, SEMGREP
Friday November 15, 2024 4:15pm - 5:00pm IST