Nick Kathmann, CISO at LogicGate – Interview Series

Nicholas Kathmann is LogicGate’s Chief Information Security Officer (CISO), where he leads the company’s information security program, oversees platform security innovations, and engages with customers on managing cybersecurity risk. With more than 20 years of experience in IT and 18+ years in cybersecurity, Kathmann has built and led security programs at organizations ranging from small businesses to Fortune 100 enterprises.
LogicGate is a risk and compliance platform that helps organizations automate and scale their governance, risk, and compliance (GRC) programs. With its flagship product, Risk Cloud®, LogicGate enables teams to identify, assess, and manage risk across the enterprise through customizable workflows, real-time insights, and integrations. The platform supports a wide range of use cases, including third-party risk, cybersecurity compliance, and internal audit management, helping companies build more agile and resilient risk strategies.
You serve as both CISO and CIO at LogicGate – how do you see AI changing these roles over the next 2-3 years?
AI has already changed both roles, but over the next 2-3 years, I think we will see significant growth in agentic AI, along with the opportunity to reimagine how we handle everyday business processes. Anything that would normally go to the IT help desk (like resetting passwords, installing applications, etc.) can be handled by an AI agent. Another key use case is leveraging AI agents to handle tedious audit assessments, freeing CISOs and CIOs to focus on more strategic requirements.
Given the trend of federal cyber layoffs and deregulation, how can companies approach AI deployment while maintaining a strong security posture?
What we are seeing is that deregulation in the U.S. actually strengthens the hand of the EU, so if you are a multinational company, you are still expected to comply with global regulatory requirements for the responsible use of AI. For companies that operate only in the United States, I think there will be a learning period when it comes to adopting AI. It’s important for these companies to develop strong AI governance policies and maintain human oversight of deployments, ensuring nothing goes rogue.
What is the biggest blind spot you see today when integrating AI into existing cybersecurity frameworks?
While I can think of several areas, the most impactful blind spot is knowing where your data lives and where it travels. The introduction of AI will only make monitoring this more challenging. Vendors are enabling AI capabilities in their products, but that data does not always flow directly to the AI model/vendor, which leaves traditional security tools such as DLP and web monitoring effectively blind.
You’ve said that most AI governance strategies are “paper tigers.” What are the core elements of a governance framework that actually works?
When I say “paper tiger,” I mean a governance strategy that only a small team knows about and that is not enforced, or is simply ignored, across the rest of the organization. AI is pervasive, which means it touches every group and every team, so a one-size-fits-all strategy doesn’t work. A finance team implementing AI capabilities in its ERP is different from a product team implementing AI capabilities in a specific product, and the list goes on. The core elements of a strong governance framework vary, but IAPP, OWASP, NIST, and other advisory bodies offer fairly good frameworks for defining what to assess. The hardest part is figuring out which requirements apply to each use case.
How can companies guard against AI model drift and ensure responsible use over time without over-engineering their policies?
Drift and degradation are simply part of using technology, but AI can greatly accelerate the process. When drift becomes too large, corrective action is required. What’s needed over time is a comprehensive testing strategy that looks for and measures accuracy, bias, and other red flags. If companies want to avoid bias and drift, they first need to make sure they have the tools to identify and measure them.
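As a minimal illustration of what “tools to identify and measure” drift might look like in practice, the Python sketch below computes the Population Stability Index (PSI) between a model’s baseline score distribution and its recent production scores. The function, sample data, and thresholds are illustrative assumptions, not anything prescribed in the interview.

```python
# Hypothetical sketch: measuring distribution drift between a model's
# baseline (training-time) scores and recent production scores using
# the Population Stability Index (PSI). Thresholds below are commonly
# cited rules of thumb, not values from the interview.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a recent production sample."""
    # Bin edges come from the baseline so both samples are compared
    # on the same grid.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)

    # Convert to proportions; a small epsilon avoids division by zero
    # and log(0) in sparsely populated bins.
    eps = 1e-6
    exp_pct = exp_counts / max(exp_counts.sum(), 1) + eps
    act_pct = act_counts / max(act_counts.sum(), 1) + eps

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Example: compare this month's model scores against the baseline.
rng = np.random.default_rng(seed=7)
baseline_scores = rng.normal(0.60, 0.10, 10_000)  # scores at deployment
recent_scores = rng.normal(0.66, 0.12, 10_000)    # scores this month

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift -- trigger corrective review")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: moderate drift -- monitor closely")
else:
    print(f"PSI={psi:.3f}: stable")
```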
What role should policy updates and real-time feedback loops play in maintaining agile AI governance?
While they play a role today in reducing risk and shifting responsibility to providers, real-time feedback loops can also hinder customers’ and users’ ability to perform AI governance, especially if the communication mechanisms change too frequently.
What concerns do you have about AI bias and discrimination in coverage decisions or credit scoring, particularly with Buy Now, Pay Later (BNPL) services?
Last year, I spoke with AI/ML researchers at a large multinational bank who had been experimenting with AI/LLMs in their risk models. Even models trained on large, accurate data sets would make surprising, unsupported decisions to approve or deny coverage. For example, if “great credit” was mentioned in a chat transcript or other communication with a client, the models would deny the loan by default, regardless of whether the client actually said it or a bank employee did. If AI is going to be relied on, banks need better oversight and accountability to minimize these “surprises.”
How do you think we should audit or evaluate algorithms that make high-stakes decisions, and who should be held accountable for them?
This goes back to comprehensive testing: algorithms and models need to be continuously tested and benchmarked in as close to real time as possible. That can be difficult, because model output can look like a desirable result, and it takes humans to spot the outliers. As a banking example, a model that simply denies every loan would earn an excellent risk rating, since zero underwritten loans means zero defaults. In that case, the organization implementing the model or algorithm should be held responsible for its outcomes, just as if a human were making the decisions.
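As a concrete sketch of that continuous benchmarking, the hedged example below tracks outcome-level metrics per monitoring window so that a degenerate model, such as the deny-every-loan case above, gets flagged for human review instead of being rewarded for a perfect default rate. The schema, metric names, and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of continuous outcome benchmarking: alongside
# accuracy, track metrics that expose degenerate behavior (e.g., a
# model that "wins" on default rate by denying every loan).
from dataclasses import dataclass

@dataclass
class WindowStats:
    decisions: int   # loan decisions in this monitoring window
    approvals: int   # how many were approved
    defaults: int    # approved loans that later defaulted

def review_window(stats: WindowStats,
                  min_approval_rate: float = 0.20,
                  max_default_rate: float = 0.05) -> list[str]:
    """Return human-review flags for one monitoring window."""
    flags = []
    approval_rate = stats.approvals / max(stats.decisions, 1)
    # Default rate is only meaningful over approved loans; a model that
    # approves nothing has a vacuously perfect default rate.
    default_rate = stats.defaults / max(stats.approvals, 1)

    if approval_rate < min_approval_rate:
        flags.append(f"approval rate {approval_rate:.0%} is suspiciously low; "
                     "check for a deny-everything failure mode")
    if default_rate > max_default_rate:
        flags.append(f"default rate {default_rate:.0%} exceeds threshold")
    return flags

# The deny-everything model looks perfect on default rate alone,
# but the approval-rate check surfaces it for human review.
print(review_window(WindowStats(decisions=1_000, approvals=3, defaults=0)))
```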
As more and more companies need cyber insurance, how are AI tools reshaping the risk landscape and the insurance business?
AI tools are very good at sifting through large amounts of data and finding patterns or trends. On the customer side, these tools will help organizations understand their real risk and manage it. On the underwriter’s side, they will help surface inconsistencies and identify organizations whose security maturity falls short over time.
How can companies use AI to proactively reduce cyber risk and negotiate better terms in today’s insurance market?
Today, the best way to leverage AI to reduce risk and negotiate better insurance terms is to use it to filter out the noise and distractions so you can focus on the risks that matter most. If you reduce those risks in a comprehensive way, your cyber insurance rates should drop. It’s far too easy to get overwhelmed by the sheer volume of risks; focus on the most critical issues and don’t get stuck trying to solve every single one.
What tactical steps do you recommend for companies that want to implement AI responsibly but don’t know where to start?
First, you need to understand your use case and document the desired outcomes. Everyone wants to implement AI, but it’s important to think about your goals first and work backward from there; I think that’s where many organizations struggle today. Once you have a good understanding of the use case, you can look at the different AI frameworks and determine which controls apply and are critical to your use case and implementation. Strong AI governance is also crucial, both for reducing risk and for efficiency, because automation is only as useful as its data inputs. Organizations leveraging AI must do so responsibly, as partners and prospects are asking tough questions about AI sprawl and usage. Not knowing the answers can mean losing out on business deals, which directly affects the bottom line.
If you had to predict the biggest AI-related security risk five years from now, what would it be, and how should we prepare for it today?
My prediction is that as agentic AI is built into more business processes and applications, attackers will use fraud and abuse techniques to manipulate those agents into delivering malicious outcomes. We have already seen this in the manipulation of customer service agents, resulting in unauthorized transactions and refunds. Threat actors are using language skills to bypass policies and interfere with agents’ decision-making.
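One way to prepare today, sketched below purely as an assumption-laden illustration (none of these names, limits, or policies come from the interview or from LogicGate), is to enforce deterministic business-rule guardrails outside the agent, so persuasive prompting alone cannot push a manipulated agent past hard policy limits.

```python
# Hypothetical sketch: hard business-rule guardrails applied after the
# agent decides but before anything executes, so a prompt-injected
# agent cannot authorize actions that policy forbids.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str             # e.g. "refund", "password_reset"
    amount: float         # monetary value, if applicable
    customer_id: str
    agent_rationale: str  # the agent's own explanation, logged for audit

REFUND_LIMIT = 100.00             # anything above this needs a human
ALLOWED_KINDS = {"refund", "password_reset"}

def enforce_guardrails(action: ProposedAction) -> str:
    """Deterministic policy check; returns 'execute', 'escalate', or 'deny'."""
    if action.kind not in ALLOWED_KINDS:
        return "deny"  # agent proposed something outside its charter
    if action.kind == "refund" and action.amount > REFUND_LIMIT:
        # No amount of persuasive prompting can raise this limit,
        # because the check never consults the agent's reasoning.
        return "escalate"
    return "execute"

# A manipulated agent proposing an oversized refund is caught here.
risky = ProposedAction("refund", 5_000.00, "cust-42",
                       "Customer says a supervisor approved this.")
print(enforce_guardrails(risky))  # -> "escalate"
```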
Thank you for the great interview; readers who wish to learn more should visit LogicGate.