David Kellerman, CTO -CYMUTE -CYMUTE -FEMOMPRENCH Series

David Kellerman is the field CTO for Cymulate and is a senior technical client in the information and cybersecurity field. David leads clients to success and high safety standards.
Cymulate is a cybersecurity company that provides continuous security verification through automatic attack simulation. Its platform enables organizations to actively test, evaluate and optimize their security posture by simulating real-world cyber threats, including ransomware, phishing, and lateral mobile attacks. By providing Violation and Attack Simulation (BAS), exposure management, and security posture management, Cymulate can help businesses identify vulnerabilities and improve their defense capabilities in real time.
What do you think is the main driver of the rise of AI-related cybersecurity threats in 2025?
AI-related cybersecurity threats are on the rise due to the increased accessibility of AI. Threat participants now have access to AI tools that can help them iterate over malware, make more trusted phishing emails, and increase their attacks to increase their impact. These strategies are not “new”, but the speed and accuracy of deployment have greatly increased the verbose of cyber threats that security teams need to address. Organizations are eager to implement AI technology while not fully aware that security controls need to be around it to ensure threat actors are not easily exploited.
Are there any specific industry or sector that is more susceptible to these AI-related threats and why?
Industry that always shares cross-channel data among employees, customers, or customers is vulnerable to AI-related threats, as AI makes threat actors more likely to participate in compelling social engineering programs Phishing scams are effectively a digital game, and if attackers can now send more positive emails to recipients, their success rate will greatly increase their success rate. Organizations that showcase their AI-driven services to the public potentially invite attackers to try to exploit it. While this is the inheritance risk of public services, it is crucial to do this correctly.
What are the key vulnerabilities in an organization when using public LLM for business functions?
Data leakage may be the number one concern. When using a public large language model (LLM), it is difficult to determine where that data is going – the last thing you want to do is accidentally upload sensitive information to publicly accessed AI tools. If you need to analyze confidential data, keep it internally. No help is required to potentially turn around and leak data to public LLMs on the wider Internet.
How can enterprises effectively protect sensitive data when testing or implementing AI systems in production?
When testing AI systems in production, organizations should adopt an offensive mindset (rather than a defensive mindset). I mean, security teams should proactively test and verify the security of their AI systems, rather than responding to incoming threats. Consistently monitoring attacks and verifying security systems can help ensure that sensitive data is protected and works as expected.
How can organizations proactively defend against evolving AI-driven attacks?
Although threat actors are using AI to develop threats, security teams can also use AI to update their Violation and Attack Simulation (BAS) tools to ensure they protect emerging threats. Tools, such as Cymutal’s daily threat feed, load the latest emerging threats into Cymate’s violation and attack simulation software to ensure security teams are verifying their organization’s cybersecurity against the latest threats. AI can help automate such processes, keeping organizations agile and ready to face the latest threats.
What role does automatic security verification platforms such as Cymulate play in mitigating the risks posed by AI-powered cyber threats?
An automated security verification platform can help organizations maintain emerging AI-driven cyber threats through tools designed to identify, verify and prioritize threats. With AI acting as a power multiplier for attackers, it is important not only to detect potential vulnerabilities in the network and system, but also to verify who poses a real threat to the organization. Only in this way can exposure be effectively prioritized, allowing the organization to mitigate the most dangerous threat first and then carry out less stressful projects. Attackers use AI to explore potential weaknesses in digital environments before launching highly tailored attacks, meaning that the ability to address dangerous vulnerabilities in an automated and effective way has never been more critical.
How can enterprises prepare for AI-powered attacks in combination with violations and attack simulation tools?
BAS software is an important part of exposure management, enabling organizations to create realistic attack solutions that can be used to validate security controls of today’s most pressing threats. The latest threats from the Cymulate Threat Research Group and Intel’s main research (combining information about emerging threats and new simulations) are applied to Cymute’s BAS tools every day, reminding security leaders if their existing security controls do not block or detect new threats. With BAS, organizations can also tailor AI-Driven simulations for their unique environment and security policies through an open framework to create and automate custom campaigns and advanced attack solutions.
The three major suggestions you will make to the security team are leading the way in these emerging threats?
Threats become more and more complex every day. Organizations without an effective exposure management plan are dangerously behind, so my first suggestion is to implement a solution that allows organizations to effectively prioritize their exposure. Next, make sure exposure management solutions include BAS capabilities, enabling security teams to simulate emerging threats (AI and other ways) to evaluate how the organization’s security controls are performed. Finally, I recommend leveraging automation to ensure that verification and testing can be performed continuously, not just during periodic reviews. With a small minute threatening the landscape, the latest information is crucial. The threat data from last quarter are hopeless.
What developments in AI technology do you foresee may intensify or mitigate cybersecurity risks in the next five years?
It depends largely on how AI accesses. Today, low-level attackers can use AI capabilities to improve and upscale attacks, but they are not creating new, unprecedented strategies – they just make existing strategies more effective. Now, we can make up for this (mainly). However, if AI continues to grow more advanced and remains highly accessible, that may change. Regulation will work here – the EU (to a lesser extent, the United States) has taken steps to manage how AI is developed and used, so it would be interesting to see if this has an impact on AI development.
Do you expect organizations to prioritize AI-related cybersecurity threats to change when compared to traditional cybersecurity challenges?
We have seen organizations recognize the value of solutions like BAS and exposure management. AI allows threat actors to quickly launch advanced, targeted campaigns, and security teams need to gain any advantage to help them stay ahead of the curve. Organizations using verification tools will make it easier to keep their heads over time by prioritizing and mitigating the most urgent and dangerous threats first. Remember that most attackers are looking for simple scores. You may not be able to stop every attack, but you can avoid making yourself an easy target.
Thank you for your excellent interview, and readers who hope to learn more should visit Cymulate.