Rick Caccia, CEO and Co-founder of Diverseai – Interview Series

Rick Caccia, CEO and co-founder of Divelseai, has extensive experience in launching security and compliance products. He has held leadership roles in products and markets for Palo Alto Networks, Google and Symantec. Caccia previously led Arcsight’s product marketing through an IPO and subsequently served as the business of a listed company, and served as Exabeam’s first chief marketing officer. He holds several degrees from the University of California, Berkeley.
Dicestaire is developing a secure platform designed to ensure the safe and secure use of AI in the enterprise. With new security challenges emerging from every major technology shift, such as Web, Mobile, and Cloud Computing, offering opportunities for industry leaders. AI represents the next frontier in this evolution.
The company aims to establish itself as a leader in AI security by combining expertise in machine learning, cybersecurity and large-scale cloud operations. Its team brings deep experience in AI development, reverse engineering and multi-cloud Kubernetes deployment to address the critical challenges of ensuring AI-driven technologies.
What inspired the witnesses you co-founded, and what are the key challenges you aim to address in AI governance and security?
When we first started a company, we thought security teams would be worried about attacks on their internal AI models. Instead, the top 15 CISOs we spoke with said that the contrary is that widespread company LLM launches still have a long way to go, but the urgent issue is protecting employees’ use of other people’s AI applications. We took a step back and found that the problem is not resistant to terrible cyber attacks, but rather that it can safely enable companies to use AI effectively. While governance may be less sexy than cyberattack, it’s what security and privacy teams actually need. They need work on using third-party AI for employees, a way to implement acceptable usage strategies, and a way to protect data without blocking the use of that data. This is what we built.
Given your extensive experience with Google Cloud, Palo Alto Networks, and other cybersecurity companies, how do these roles affect your approach to building witnesses?
Over the years, I have talked with many Ciso. One of the most common things I hear from Cisos today is: “About AI, I don’t want to be a ‘doc’; I want to help our employees use it to get better.” As a long time with cybersecurity The suppliers work with people, this is a very different statement. When the Internet is a new and transformative technology, it reminds the Internet era more. When we establish witnesses, we start specifically with product features that can safely adopt AI. Our message is that these things are like magic, and of course everyone wants to experience magic. I think security companies are too fast to play the fear card and we want to be different.
What sets witnesses apart from other AI governance and security platforms in the market today?
Well, on the one hand, most other vendors in the field are focused primarily on the safety part, not the governance part. To me, governance is like the brakes on a car. If you really want to get somewhere quickly, you need effective brakes in addition to a powerful engine. Without the brakes, no one would drive a Ferrari very quickly. In this case, the company you use AI is Ferrari, and the witnesses are the brakes and steering wheel.
By contrast, most of our competitors focus on theoretical terrorist attacks on organizational AI models. This is a real question, but it’s a different question from getting visibility and controlling how my employees use any of the 5,000+ AI applications already used on the Internet. It is much easier for us to add an AI firewall (what we have) than to add effective governance and risk management to AI firewall vendors.
How can witnesses balance the need for AI innovation with enterprise security and compliance?
As I wrote before, we believe AI should be like magic – it can help you do amazing things. With this in mind, we believe that AI innovation and security are related. If your employees can use AI safely, they will use it frequently and you will move forward ahead of time. If you apply the typical security mindset and lock it up, your competitors won’t do that and they will keep moving forward. Everything we do is to adopt AI safely. As one customer told me: “It’s magic, but most suppliers see it as something dark magic, horror and fearful.” In the Witness, we’re helping to achieve magic.
Can you talk about the company’s core concepts on AI governance? Do you see AI security as an enabler rather than a limit?
We often come to us at events we raise, Cisos comes to us and they tell us: “Your competitors are all about the terrible level of AI, and you are the only vendor that tells us how to use it effectively.” Google’s Sundar “AI may be deeper than fire,” Pichai said, which is an interesting metaphor. As we have seen recently, fire can cause great damage. But controlled fires can make steel, thus accelerating innovation. Sometimes, during witnesses, we talk about creating innovations that enable our customers to safely guide AI to “fire” to create products that are equivalent to steel. Also, if you think AI is similar to magic, then maybe our goal is to give you a wand, guide and control it.
In either case, we absolutely believe that securely enabling AI is the goal. To give you an example, there are many tools to prevent data loss (DLP) which is a technology that will always exist. People are trying to apply DLP to AI usage, maybe the DLP browser plugin has entered your work in a long prompt, please help in it, and it is timely and quickly to have a customer ID number in it. what happens? DLP products block the prompts and you will never get the answer. That’s the limit. Instead, with witnesses, we can identify the same number and edit it silently and surgically, and then unedit in the AI response so that you get useful answers while also ensuring data is secure. That’s support.
What is the biggest risk when an enterprise deploys AI generation? How can witnesses mitigate them?
The first is visibility. Many were surprised to find that the AI application universe is not just Chatgpt, but now DeepSeek. In fact, there are thousands of AI applications on the internet that businesses use to absorb the risks of their employees, so the first step is to gain visibility: what AI applications my employees use, and they are using them , is this risky?
The second is control. Your legal team has developed a comprehensive acceptable use policy for AI that ensures the security of customer data, citizen data, intellectual property rights, and employee safety. How will you implement this policy? Is it your endpoint security product? In your firewall? In your VPN? In your cloud? What if they were all from different vendors? So you need a way to define and execute acceptable usage policies that are consistent across AI models, applications, cloud and security products.
The third is to protect your own applications. In 2025, we will see faster adoption of LLMs in the enterprise and then launch more quickly chat applications powered by these LLMs. So, businesses need not only to ensure that the application is protected, but also to ensure that the application does not say “stupid” words, such as recommending competitors.
We solve these three. We provide visibility into which apps people are accessing, how they use them, strategies based on who you are and what you are going to do, and very effective at preventing attacks like jailbreaking or unwanted behavior in a bot tool.
How does the witness’s AI observability feature help companies track the use of employee AI and prevent “shadow AI” risks?
Dicteressai connects your network easily and silently, building a directory of every AI application (in fact there are thousands of applications on the internet) that your employees can access. We tell you where these apps are, they host data, and more so that you can understand the risks of these apps. You can turn on conversation visibility where we can use deep packet inspection to observe prompts and responses. We can classify prompts by risk and intention. The intention may be “write code” or “write company contract”. This is important because then we let you write intent-based policy controls.
What role does AI policy enforcement play in ensuring company AI compliance and how can witnesses simplify this process?
Compliance means ensuring your company complies with regulations or policies, and there are two parts to ensure compliance. The first is that you have to be able to identify the activity in question. For example, I need to know that employees are using customer data in ways that may be related to data protection laws. We do this using an observability platform. The second part is to describe and implement policies for the activity. You don’t want to simply know that customer data is leaking, you want to prevent it from leaking. So we have built a unique AI-specific policy engine, witness/control, that allows you to easily build identity and intent-based policies to protect data, prevent harmful or illegal responses, etc. For example, you could build a policy that says, “Only our legal department can use Chatgpt to write a company contract, and if you do, any PII will be automatically modified.” Easy to say, and easy to implement with witnesses.
How can witnesses address concerns surrounding LLM jailbreak and rapid injection attacks?
We have a hardcore AI research team, very keen. Early on, they built a system to create synthetic attack data, in addition to attracting a wide range of training data sets. As a result, we have quickly injected everything, we are over 99% efficient, and we regularly capture attacks that the model itself misses.
In fact, most of the companies we talk to want to start with employee application governance and then later they launch AI customer applications based on their internal data. So they use witnesses to protect their people and then open the quick injection firewall. A system, a consistent approach to building policies, is easy to scale.
What are your witness’ long-term goals? Where will you see AI governance develop in the next five years?
So far, we have only talked about one person-to-person application model here. Our next phase will be processing the application to the application, i.e. proxy AI. We have designed APIs in the platform to work as well as agents and humans. Beyond that, we believe we have established a new way to gain network-level visibility and policy control in the AI era, and we will keep the company’s development in mind.
Thank you for your excellent interview and hopefully learn more about the readers who should visit the witness.