Loading…
or to bookmark your favorites and sync them to your phone or calendar.
strong>Speaker [clear filter]
arrow_back View All Dates
Thursday, November 14
 

10:45am IST

Scaling Smart Contract Security with AI Automation
Thursday November 14, 2024 10:45am - 11:10am IST
Smart contracts have revolutionized the way we execute agreements, enabling trustless transactions and automating processes across various industries. However, their complexity and potential vulnerabilities demand a meticulous approach to security. This talk delves into the vulnerabilities of smart contract security, addressing common pitfalls, best practices, and emerging trends. From code audits to decentralized finance (DeFi) exploits, we'll navigate through real-world examples to equip you with the knowledge needed to safeguard smart contracts effectively and the use of automation and AI to detect vulnerabilities.
1. Understanding Smart Contract Vulnerabilities:
Detailed exploration of common vulnerabilities in smart contracts (e.g., reentrancy, integer overflow/underflow, access control, etc.). Real-world examples of security breaches and their consequences.
2. Best Practices for Smart Contract Security:
Coding standards and practices to minimize vulnerabilities.
The role of formal verification in ensuring smart contract reliability. Importance of continuous education and staying updated with the latest security trends.
3. Code Audits and Their Role:
Importance of manual and automated code audits.
Step-by-step process of conducting a smart contract audit.
Tools and frameworks commonly used in the industry for auditing smart contracts.
4. Automation and AI in Smart Contract Security:
Introduction to automation and AI tools designed for smart contract security.
Demonstration of how automation can streamline the process of vulnerability detection.
Advantages and limitations of using AI in smart contract audits.
5. Real-World Applications and Future Trends:
Discussion on how industries can leverage automation for smart contract security.
Emerging trends and future directions in smart contract security. Predictions on how AI and automation will shape the future of smart contract development and security.
                                   
    
Speakers
avatar for Shashank

Shashank

CEO, CredShields
Thursday November 14, 2024 10:45am - 11:10am IST

11:10am IST

Policy Driven Vetting of OSS Components
Thursday November 14, 2024 11:10am - 11:45am IST
Open Source component vulnerabilities and other risks are often hard to fix, especially for unsupported or unmaintained projects. Establishing secure guardrails to prevent risky components, especially introduced as transitive dependencies, from getting into a software project is critical. In this talk, we will look at the policy as code features of vet and how to leverage it to set up OSS component vetting in pull request flows. We will also touch upon some of the advanced features of vet such as dependency graph reconstruction, upgrade advice, code analysis etc. https://github.com/safedep/vet 
Speakers
avatar for Abhisek Datta

Abhisek Datta

Co-Founder, SafeDep
Thursday November 14, 2024 11:10am - 11:45am IST

11:45am IST

HARMONIZE APPSEC STANDARDS: A MULTIFACETED APPROACH TO COMPLIANCE SPECTRUM MASTERY
Thursday November 14, 2024 11:45am - 12:30pm IST
 The presentation, "HARMONIZE APPSEC STANDARDS: A MULTIFACETED APPROACH TO COMPLIANCE SPECTRUM MASTERY," ventures into the vast expanse of application security (AppSec) regulations, transcending singular frameworks to embrace a holistic compliance landscape. With an ever-increasing array of standards such as NIST, PCI DSS, HIPAA, SOC2, and OWASP, the complexity of maintaining compliance is monumental. This session is a clarion call to harness Artificial Intelligence (AI) as a unifying force to navigate and uphold these diverse regulations efficiently.
This narrative weaves through the intricate tapestry of compliance challenges, spotlighting the need for a dynamic, AI-enhanced approach that not only streamlines adherence to current standards but also agilely adapts to emergent ones. The talk will unveil how AI can be the linchpin in developing secure systems, automating compliance processes, and undertaking preemptive threat mitigation, with a nod to the compliance engines that are paving the future of cybersecurity. Attending participants will be privy to an in-depth analysis of generative AI and its potential to revolutionize banking compliance, an area historically burdened by meticulous regulatory requirements and substantial financial implications. The discussion will pivot to demonstrate how generative AI transcends the banking domain to apply its transformative capabilities across multiple compliance frameworks, offering a beacon of innovation for AppSec professionals.
The presentation will guide the audience through a series of actionable insights, demonstrating the utilization of AI in enhancing security postures while meeting the stringent demands of varied compliance standards. AI in action across standards will illustrate the tangible benefits of AI integration, offering a blueprint for a robust, compliance-centric security strategy.
                                   
    
Speakers
avatar for BHARADWAJ D J

BHARADWAJ D J

SENIOR ARCHITECT - CYBER SECURITY, SYNECHRON TECHNOLOGIES
SENIOR ARCHITECT - CYBER SECURITY @SYNECHRON TECHNOLOGIES
Thursday November 14, 2024 11:45am - 12:30pm IST

12:30pm IST

Attack Surface Management - Safeguarding Digital Landscapes via AI
Thursday November 14, 2024 12:30pm - 1:15pm IST
                                                In an increasingly digital world, organizations face the challenge of securing expansive and dynamic digital landscapes. Attack Surface Management (ASM) is crucial in identifying and mitigating vulnerabilities across these environments. The integration of Artificial Intelligence (AI) into ASM offers a transformative approach, enabling proactive and adaptive defense mechanisms. This paper explores how AI enhances ASM by automating the detection and analysis of potential attack vectors, continuously monitoring for new threats, and providing actionable insights for preemptive security measures. By leveraging AI, organizations can significantly reduce their exposure to cyber risks, ensuring a more resilient and secure digital infrastructure.
                   I. Introduction
A. Overview of Digital Transformation
B. The Importance of Attack Surface Management (ASM) C. The Role of AI in Cybersecurity
                   II. Understanding Attack Surface Management A. Definition and Key Components
B. Challenges in Traditional ASM
C. The Need for Automation and Intelligence
                   III. Leveraging AI in Attack Surface Management
A. AI-Driven Asset Discovery
B. AI for Vulnerability Detection and Prioritization
C. Continuous Threat Monitoring and Intelligence
D. Adaptive response mechanisms based on AI insights
                   IV. Case Studies and Real-world Applications
A. Case Study 1: AI-Enhanced ASM in a Large Enterprise B. Case Study 2: AI in ASM for Critical Infrastructure
C. Comparative Analysis of AI vs. Traditional ASM
                   V. Challenges and Considerations
A. Ethical and Privacy Concerns
B. Integration with Existing Security Frameworks C. Limitations of AI in ASM
                   VI. Future Trends in AI-Driven ASM
A. Evolution of AI Technologies in Cybersecurity B. The Convergence of AI with Other Technologies C. The Road Ahead for AI in ASM
                   VII. Conclusion
A. Recap of AI’s Impact on ASM
B. The Imperative of AI Adoption in Cybersecurity
C. Final Thoughts on Safeguarding Digital Landscapes
The future of secure digital environments through AI-driven ASM
                                   
    
Speakers
avatar for Antara mane

Antara mane

Vertical Lead, SecureNeexus
Thursday November 14, 2024 12:30pm - 1:15pm IST

1:15pm IST

Building git hooks for your organization
Thursday November 14, 2024 1:15pm - 2:00pm IST
  Have you been in a situation where you had to help developers write secure code and you spent hours in training or code reviews? What if there was a way to make such reviews proactive?

This talk will discuss about using git-hooks for performing security scans. We will discuss using git hooks the usual way and how one team like application security can distribute the hooks across the organisation to enable developers write clean code. We will specifically talk about using pre-commit for code scanning.

We will look at ways we distribute the hooks to every developer in the company, problems we faced, how we collect metrics to understand usage pattern and efficiency. Problems like time consumption and other issues will also be discussed.
During this implementation there were a lot of challenges and lessons learned. Why current tools or frameworks won't work out. What could possibly go wrong when installing the hooks. We will discuss all of those.
                                   
    
Speakers
avatar for Naveen S

Naveen S

Lead Security Engineer, Freshworks
Thursday November 14, 2024 1:15pm - 2:00pm IST

2:00pm IST

ATT&CK LENS (An AI based Visualization for Mitre ATT&CK Exploration)
Thursday November 14, 2024 2:00pm - 2:45pm IST
      The ATT&CK LENS framework introduces an innovative approach to cybersecurity analysis by leveraging advanced artificial intelligence (AI) for dynamic visualization and exploration of the MITRE ATT&CK framework. Designed to address the growing complexity of cyber threat data, ATT&CK LENS enables cybersecurity professionals to analyze and visualize relationships between threat actors, malware, campaigns, and their associated techniques and tactics with unprecedented ease and precision. At its core, ATT&CK LENS is built on a robust AI-driven architecture that utilizes Natural Language Processing (NLP) and Retrieval-Augmented Generation (RAG) to interpret and process user queries expressed in natural language. This capability allows users to generate complex visualizations without the need for manual input or technical expertise. By transforming plain English queries into actionable data visualizations, the framework empowers users to explore the MITRE ATT&CK matrix in a more intuitive and efficient manner. The backend of ATT&CK LENS is implemented using Flask, a lightweight web framework that facilitates rapid development and integration. The visualization engine is powered by PyVis and NetworkX, which enable the creation of interactive network graphs that depict the intricate relationships within the ATT&CK dataset. These visualizations are not only dynamic but also customizable, allowing users to apply filters such as source_ref, relationship_type, and target_ref to refine the data displayed. One of the key strengths of ATT&CK LENS is its modular design, which ensures scalability and adaptability. The framework is designed to seamlessly integrate with the latest MITRE ATT&CK dataset, ensuring that users always have access to up-to-date information. Additionally, the modularity of the framework allows for future enhancements, such as the incorporation of additional AI models, data sources, or integration with other cybersecurity tools. ATT&CK LENS also includes a zero-touch capability, meaning it can automatically adapt to updates in the MITRE ATT&CK framework without requiring manual intervention. This feature significantly reduces the maintenance burden on users, ensuring that the framework remains a reliable and up-to-date resource for threat analysis. The framework’s AI component is particularly powerful in correlating tactics, techniques, and procedures (TTPs) to identify potential threat patterns and vulnerabilities. By using advanced AI models, ATT&CK LENS can process vast amounts of data and generate relevant visualizations that highlight critical threat relationships. This capability is crucial for cybersecurity professionals who need to make informed decisions in a rapidly evolving threat landscape. The framework has been rigorously evaluated through internal user testing, where it demonstrated significant improvements in both the speed and accuracy of threat analysis. Users reported a substantial reduction in the time required to analyze complex threat data and a marked improvement in their ability to identify and understand key threat relationships.
Speakers
avatar for Shankar Murali

Shankar Murali

AVP, Standard Charted Bank
avatar for Krishna Chaitanya Yeleswarapu

Krishna Chaitanya Yeleswarapu

Senior Manager, Standard Chartered Bank
Thursday November 14, 2024 2:00pm - 2:45pm IST

2:45pm IST

MPT! Pentest In Action
Thursday November 14, 2024 2:45pm - 3:30pm IST
In ever evolving software development world, security is also becoming fast paced. Hence, each product going through the pentest cycle has to be managed effectively and efficiently. Managing multiple pentests and testers is important. A single pane of glass view for all of these with risk posture is helpful. Security penetration testing is becoming as necessary and as usual a practice as software testing. Most, if not all, organizations either have their own penetration testing team or they utilize third-party pentesters.

Imagine any fast-paced organization developing multiple product lines and planning to release each of them from time to time. It becomes challenging for the organization's security team to efficiently manage all of these pentest activities running and effectively produce security assessment reports and track them. Because of such volume of work, the numbers of pentesters in organizations are increasing to keep up. Each pentester is doing multiple pentests. The next cycle of a previous pentest can get assigned to another pentester. Each pentesting cycle has issues and recurring issues. And above all, managing all these using Excel worksheets is nightmare. A pentesting activity knowledge base is kind of must. A single-pane-of- glass view to all pentests running, and the issues identified, is a necessity for everyone involved in the security review cycle. To solve these challenges, I have developed a solution called Managing Pentest (MPT): Pentest in Action.

MPT helps us solve various problems:
- Asset DB to know all organisation assets that are in pentest process. You can’t secure what you are not aware of! - Tracking each pentest
- Pentesting activity knowledge which comprises of what particular let say application does, or the purpose of hardware that we are testing
- When the next pentester takes over the testing all they have to do is view the asset and associated information, which is already there
- Time taken for each pentest
- Real time tracking of activity
- Issue status
- Common issues that are observed

Why MPT? MPT also provides security pentest analytics, which helps us not only track and view everything in single pane of glass but also: Finding improvement areas to boost pen tester productivity Understand the current risk posture Understand recurring issues
Speakers
avatar for Jyoti Raval

Jyoti Raval

Baker Hughes, Senior Staff Product Security Leader
Senior Staff Product Security Leader @ Baker Hughes
Thursday November 14, 2024 2:45pm - 3:30pm IST

3:30pm IST

Transforming Threat Modeling: Harnessing ChatGPT for Automated Security Analysis
Thursday November 14, 2024 3:30pm - 4:15pm IST
Introduction to ChatGPT: Uncover the versatility of advanced language models in cybersecurity. Explore their applications in threat modeling. Automating Threat Modeling: Learn how ChatGPT streamlines threat modeling, increasing efficiency by reducing manual efforts. Generating Dynamic Threat Models: See how ChatGPT assists in crafting threat models from system architecture, user stories, and design documents. Elevating Accuracy and Scope: Discuss techniques for refining accuracy and coverage of automated threat models generated by ChatGPT. Integration and Synergy: Explore seamless integration of automated threat modeling into existing security processes. Understand how collaboration between AI systems and security professionals can be encouraged.
Speakers
avatar for Gustavo Arreaza

Gustavo Arreaza

CTO, AppSecCl
Thursday November 14, 2024 3:30pm - 4:15pm IST

4:15pm IST

Practical Session on AI Red Teaming
Thursday November 14, 2024 4:15pm - 5:00pm IST
This  session delves into AI red teaming with a focus on identifying and mitigating vulnerabilities in large language models (LLMs). Participants will engage with an intentionally vulnerable LLM, exploring real- world adversarial scenarios where AI systems can be exploited. Using cutting-edge open-source and proprietary tools such as Protect AI’s Model Scan, Rebuff, LLMGuard, NeMo Guardrails, and Garak, i'll will demonstrate how to detect and exploit vulnerabilities in LLMs. Attendees will gain practical experience with these tools, learning how to assess model security, apply guardrails, and defend against adversarial attacks.
By the end of this session, participants will:
Understand how to perform AI red teaming on LLMs.
Use Model Scan, Rebuff, LLMGuard, NeMo Guardrails, and Garak for detecting and mitigating vulnerabilities.
Gain hands-on skills to apply these techniques in their own environments, improving the robustness and security of AI systems.
                                   
    
Speakers
avatar for R Nagarjun

R Nagarjun

Security Researcher, AKTO
Thursday November 14, 2024 4:15pm - 5:00pm IST