The 9 most popular AI cybersecurity tools at Black Hat 2024

2024.08.10

At the Black Hat 2024 conference, AI-driven cybersecurity tools and technologies became the focus, leading the new trend in the cybersecurity industry. Many security vendors and startups showcased their latest achievements, using generative AI to manage risks, detect and fight cybercrime, and protect corporate security. Here are some of the most anticipated AI-driven cybersecurity products and services at the conference:

Apiiro: Intelligent risk detection in the software development design phase

Apiiro has launched an AI-driven feature called "Risk Detection at Design Stage" that is designed to analyze feature requests to identify risks and initiate security reviews or threat models in the early stages of application development. This feature, based on Apiiro's proprietary LLM, enables application security practitioners to mitigate security and compliance risks at the design stage before coding, saving time, reducing rework, and accelerating secure software delivery. The main risk analysis areas include architecture design, sensitive data handling, user permissions, generative AI technology, and third-party integration.

SentinelOne: Purple AI, CIEM, xSPM and SIEM

SentinelOne has added a series of new features to its Singularity platform, designed to enhance the security of endpoints, identities, and cloud environments using generative AI technology. New features include Purple AI, which provides natural language alert summaries and query support to help analysts simplify alerts. Cloud Infrastructure Rights Management (CIEM) helps control access to cloud resources. In addition, SentinelOne has launched extended security posture management (xSPM) and AI-driven SIEM to provide real-time insights and scalable security solutions.

Cymulate: AI Copilot

Cymulate announced the launch of its AI Copilot, a generative AI solution designed to enable security controls against real-time threats. AI Copilot introduces a dynamic attack planner that allows users to perform custom threat assessments by copying and pasting URLs or content from threat advisories, news articles, and security research findings. The feature is designed to quickly identify and remediate security vulnerabilities, reducing the time and expertise required for threat assessments. AI Copilot also generates customized product documentation and simplifies troubleshooting, optimizing the security validation process and freeing up IT resources.

Cequence: Generative AI-driven Unified API Protection (UAP)

Cequence has made several updates to its unified API protection platform, focusing on the safe use of generative AI applications and large language models. Key enhancements include a test suite for the OWASP LLM Top 10 threats, automatic detection and blocking of AI bot activity, flow graphs for visualizing API flows, and new integrations for comprehensive API discovery. The platform also processes API traffic locally, improving efficiency and privacy, and provides attack surface detection for API gateways and infrastructure.

RAD Security: AI-driven incident investigation

RAD Security has launched its AI-driven incident investigation capabilities, designed to improve cloud security through behavioral detection and response. The approach reduces false positives and improves the accuracy of incident assessments by combining LLM-driven investigations and behavioral detection. RAD Security's Cloud Detection and Response (CDR) solution creates behavioral baselines to detect zero-day attacks and enriches detections with real-time identity and infrastructure context. New features include an Amazon EKS plugin, automated AI-driven investigations, a discovery center for incident navigation, and an updated RAD open source catalog to improve detection capabilities.

Code42: Incydr supports generative AI for data leak prevention

Code42, a Mimecast company, has launched an upgraded version of its Incydr solution to prevent data leakage to generative AI tools. Incydr's new data visualization and PRISM system can help security teams locate and respond to the movement of data to generative AI tools such as ChatGPT and Google Gemini. The solution includes detection and blocking of risky activities, educational videos for employees, and will soon support the ChatGPT desktop application.

Legit Security: AI Security Command Center

Legit Security has launched the AI ​​Security Command Center, which aims to provide security teams with a console to achieve AI visibility and protection in development environments. The dashboard will help mitigate the risks of using AI models in application code, provide centralized visibility of AI model inventories, perform risk correlation and prioritization, and expand ASPM controls to include AI security posture management. Legit Security also announced that it has joined the Alliance for Secure AI (CoSAI) to promote comprehensive AI security measures in software development.

Balbix: Conversational AI Security Assistant

At Black Hat USA 2024, Balbix launched BIX, a conversational AI assistant designed to simplify cyber risk and exposure management. BIX aims to simplify risk management by providing personalized, contextual recommendations based on user roles and past interactions. With mobile access, real-time updates, and integration with existing cybersecurity and IT systems, BIX helps security teams make decisions and communicate across channels. Leveraging a multi-agent architecture based on large RAG language models and Nvidia hardware, BIX is designed to break down complex tasks into manageable subtasks to improve operational efficiency and reduce response times.

Orca Research: AI Goat

Orca Research has launched AI Goat, an open source AI security learning environment designed to address the OWASP Top 10 ML risks. Available in Orca Research's GitHub repository, the tool uses Terraform to build a vulnerable AI environment with a range of threats and vulnerabilities for security training and education scenarios. The tool is designed to help security professionals and penetration testers understand and test AI-specific vulnerabilities and improve their ability to defend against such attacks. At Black Hat USA 2024, Shain Singh, head of the OWASP ML Security Top 10 project, emphasized that AI Goat enhances the understanding of AI risks by simulating real-world vulnerabilities and misconfigurations, and helps organizations better guard against potential AI attacks.