Cybersecurity Interview

Securing the Future: Navigating AI-Driven Cybersecurity with Nathan Wenzler of Tenable

In an era where AI’s transformative potential intersects with escalating cyber threats, Nathan Wenzler, Chief Security Strategist at Tenable, offers his insights. He highlights the growing risks AI systems face, from data poisoning to unauthorised access, and the dangers of unsanctioned AI tool usage, emphasising the need for robust security measures. To address these concerns, he advocates integrating security into AI system design and underscores the role of exposure management in strengthening cybersecurity postures. Looking ahead, Wenzler forecasts how AI-related cyber threats will evolve and recommends proactive measures. For cybersecurity professionals navigating this complex landscape, he emphasises applying established security frameworks and best practices so that organisations can harness AI’s potential while safeguarding against evolving threats.

Why are AI systems increasingly becoming a prime target for cybercriminals, and what specific vulnerabilities do these systems present compared to traditional IT infrastructure?

AI systems are increasingly being targeted by cybercriminals due to their growing presence in businesses and their use in critical operations. Vulnerabilities in AI systems can lead to data poisoning, evasion, privacy breaches and abuse attacks, which can severely degrade performance, erode trust and open the door to further data breaches and cyberattacks. Attackers can even exploit AI models for unauthorised access and use them as entry points into broader networks. Additionally, any accounts or entitlements used by the AI system to access back-end data stores are ideal targets for attackers looking for service accounts with high-level access to sensitive systems, potentially leading to credential theft, data loss, ransomware and intellectual property theft. Organisations must address AI vulnerabilities to prevent these significant risks.

In the context of cybersecurity, what are the primary concerns associated with the unauthorised or unsanctioned use of AI tools within organisations, particularly in India?

Security and IT leaders have long struggled with shadow IT: technology that employees use without the company’s approval or knowledge. Gartner predicts that 75% of employees will be using shadow IT by 2027, which increases cyber risk because companies cannot protect against risks they do not know about.

Workers in many roles and sectors are experimenting with AI tools to boost productivity, improve efficiency and complete tasks quickly, often without IT security’s knowledge or consent. AI applications often interact with sensitive data and systems, making them attractive targets for threat actors. Without adequate oversight and security protocols, unauthorised AI usage may expose organisations to data breaches, unauthorised access and other cybersecurity threats. To mitigate this risk, businesses must implement robust security measures, including encryption, access controls and regular security audits, to safeguard AI-driven assets and infrastructure.

Could you outline some essential security measures that organisations should prioritise to effectively safeguard their AI systems from potential cyber threats and attacks?

AI systems must be designed with security in mind and built with risk awareness and threat modelling, both to ensure that the data used to feed their models isn’t poisoned and to prevent attackers from compromising the applications themselves. Secure software development requires partnership with DevSecOps teams to incorporate security measures directly into the code, including reviewing third-party libraries, crafting documentation and managing technical debt. Secure coding practices and deployment are only a meaningful security investment if AI models also include functionality to prevent compromise. Additionally, organisations must secure the operational infrastructure and conduct regular maintenance, such as logging, monitoring and update management. Mitigating risk also means implementing preventive measures like exposure management, including data encryption, strong access controls and regular security audits, to safeguard AI-driven assets and infrastructure.
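To make the data-poisoning point concrete, the following is a minimal, hypothetical sketch of a training-data validation step that could sit in a model pipeline. It is not Tenable's method, and the field names, labels and z-score threshold are illustrative assumptions: it rejects records with unexpected labels and flags statistical outliers before they reach a model.

```python
# Illustrative sketch (hypothetical data model, not vendor code):
# basic validation of an incoming training batch to catch records
# that may indicate data poisoning. Thresholds are assumptions.
from statistics import mean, stdev

def flag_outliers(values, z_threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]

def validate_batch(batch, allowed_labels):
    """Reject records with unknown labels; flag numeric outliers."""
    rejected = [r for r in batch if r["label"] not in allowed_labels]
    accepted = [r for r in batch if r["label"] in allowed_labels]
    outliers = flag_outliers([r["value"] for r in accepted])
    return accepted, rejected, outliers
```

A real pipeline would layer richer checks on top (provenance tracking, distribution-drift tests, signed data sources), but even a simple gate like this embodies the "security in mind" design the answer describes.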

Given the extensive involvement of third-party vendors in the AI supply chain, what strategies or best practices would you recommend for Indian organisations to ensure comprehensive security throughout this ecosystem?

Performing a proper risk assessment of third-party vendors as a part of any supply chain can be an incredibly difficult and painstaking process. Many vendors do not allow their customers or partners access into their networks to perform any sort of review, nor do they typically provide access to source code for the applications or tools the customer is using due to intellectual property theft concerns and other due diligence matters. This inability to look into these third-party vendor environments creates a huge challenge for any organisation trying to make sound risk decisions when it comes to determining what level of risk they may be introducing into their environment from their partners and vendors. At a minimum, organisations must perform basic third-party risk reviews that include questionnaires and require the vendors to provide reports from their recent risk assessments that can show they are demonstrating their due diligence to protect their customers. If possible, however, customers should try to perform deeper reviews and even conduct their own risk assessments to more thoroughly understand the potential for harm that could be introduced by the vendors in their supply chain.

What role does exposure management play in enhancing the security posture of AI systems, and how can organisations effectively integrate exposure management into their cybersecurity strategies?

AI systems make IT infrastructure more complex than ever, and gaining visibility across the entire environment, including the vulnerabilities, misconfigurations and other flaws that put an organisation at risk, is the problem exposure management solves. It identifies and assesses all technical assets in the environment, such as cloud, IT, OT and IoT, and allows an organisation to better prioritise and contextualise which risks pose the greatest threat to its financial and reputational well-being.

Identifying vulnerabilities and other security risks starts with being able to identify and understand the target. That’s what exposure management does. It includes data about configuration issues, vulnerabilities and attack paths across a spectrum of assets and technologies — including identity solutions; cloud configurations and deployments; and web applications. With that level of visibility, organisations are better positioned to understand where the greatest risks are within their environment and start taking the necessary steps to meaningfully mitigate risk.
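As a rough illustration of the prioritisation step described above, an exposure score might weigh how severe a finding is against how much the affected asset matters. This is a toy model, not any vendor's scoring; the criticality-times-CVSS formula and the field names are assumptions:

```python
# Toy exposure-prioritisation sketch: rank assets by a simple score
# combining business criticality with the worst finding's CVSS rating.
# The data model and weighting here are illustrative assumptions.
def exposure_score(asset):
    """Criticality (1-5) multiplied by the highest CVSS score found."""
    worst = max((f["cvss"] for f in asset["findings"]), default=0.0)
    return asset["criticality"] * worst

def prioritise(assets):
    """Return assets ordered from highest to lowest exposure score."""
    return sorted(assets, key=exposure_score, reverse=True)
```

Even this crude weighting shows why context matters: a medium-severity flaw on a business-critical database can outrank a critical flaw on a low-value device, which is the kind of prioritisation exposure management aims to provide.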

How do you foresee the landscape of AI-related cyber threats evolving in the near future, and what proactive steps can organisations take to stay ahead of these emerging threats?

AI represents a significant opportunity for business expansion and efficiency, yet it also presents a nuanced challenge. Cybercriminals are leveraging AI tools themselves to swiftly pinpoint vulnerabilities and craft exploit codes, posing threats to critical infrastructure, enterprises, and intellectual property through tactics like ransomware deployment.

Conversely, AI serves as a powerful tool for security professionals, empowering them to discern software vulnerabilities, assess their severity and contextualise the associated risks. Proactive measures include identifying where risks lie and deploying controls to mitigate them. For example, organisations can use generative AI-powered exposure management tools to translate security-related queries into clear, simple and easy-to-understand insights about the nature of risks and how to mitigate them. They gain visibility into complex attack paths, enabling faster understanding through explanations and remediation guidance, and derive actionable insights for the highest-impact exposures, empowering security teams to proactively address and reduce overall risk.

Lastly, what recommendations would you offer to cybersecurity professionals and leaders in India who are navigating the complexities of securing AI-driven environments within their organisations?

Organisations need to cut through the buzz and hype surrounding AI tools and remember that these tools are just like any other piece of software they use in their environment to create efficiency for employees, provide services to customers or constituents and support the overall mission of the business.

Viewed through that lens, the same best practices, frameworks and processes we use to secure other software applications, their supporting infrastructure and the credentials used to access, maintain and operate them are all applicable to securing AI tools and their environments. Performing regular risk assessments of the AI tools, limiting privileged access, continuously monitoring the services for potential attacks or risks and implementing strong data integrity controls are all critical pieces of a security program for these types of AI-driven environments. Doing this will allow organisations to safely leverage these tools for better efficiency and build trust with their users that the information these tools provide is accurate and actionable.
