Protecting Data in the Age of AI: A Guide to Getting Started
Start with the Basics: AI Acceptable Use Policies and User Training
By Liberty Williams, Robert Foster and Laurie Robb · March 10, 2025
It’s not easy being a CISO
CISOs feel overwhelmed by compliance requirements, expanding role expectations, budget constraints, and a lack of alignment with CEO and board-level stakeholders. In the recent Trellix Mind of the CISO report, a survey of executive leaders found that “84% of CISOs believe the role should be split into technical (CISO) and business-focused (BISO) roles.” The leaders surveyed also anticipate long-term consequences from this strain, with “91% of CISOs agreeing that their expanding responsibilities will lead to higher turnover in the role, and 49% who do not see a future as a CISO.”
The rapid emergence of proprietary and publicly accessible AI tools reshaping our technological landscape only adds to that stress. These tools usher in a new era of efficiency and innovation while potentially exposing organizations to greater risks, like data breaches.
It’s not easy protecting data
As cybersecurity professionals, we stand in the gap between the tremendous opportunities and significant risks that AI brings to the table. AI-powered applications, ranging from language models and image generators to code assistants, can streamline our operations and accelerate discoveries. However, many of these tools train on user input, which may inadvertently expose our organizations to data breaches, intellectual property issues, and reputational harm. At the same time, regulators around the world are imposing financial liability for data breaches, in some cases extending personal liability to executives, which increases the strain on our CISOs and executive leadership.
To address this challenge, CISOs, IT Security Managers, and cybersecurity professionals are evolving from gatekeepers of information into crystal-ball-wielding predictors who must assess the potential benefits of AI tools while mitigating the risks to sensitive information. Navigating this shifting terrain demands a multi-layered strategy that combines clear policy frameworks, ongoing employee education, and advanced technological defenses.
Creating clear AI boundaries - Acceptable Use Policies
One of the first steps in a strategy to protect critical data is the development of a robust AI Acceptable Use Policy. This policy isn’t just a set of rules; it’s a strategic document that establishes clear expectations for how both public and corporate-approved AI tools will be used by anyone engaged in activities on behalf of an organization. An acceptable use policy should be developed with the understanding that it is a living document to be updated as the landscape shifts.
An AI Acceptable Use Policy should cover the following topics:
- Data classification and handling: Train employees to identify and properly handle sensitive data, ensuring it isn’t exposed through public AI platforms (one programmatic way to backstop this rule is sketched after this list).
- Intellectual property protection: Reinforce the importance of respecting IP rights and avoiding any activities that could lead to infringement.
- Ethical considerations: Include guidelines for ethical AI use, emphasizing fairness, transparency, and accountability.
- Compliance requirements: Ensure the policy aligns with industry regulations and data protection laws.
- Approved AI use: Clear guidance on which (if any) tools are corporately approved and how employees access them.
- External AI interaction: It may not be a problem to let employees chat with public/retail models to plan their vacations or touch up their photos, but company data should never be shared with AI vendors who train models with data provided by users.
- Supplier and third-party expectations: Guidelines for suppliers and other third parties who may have access to sensitive data, including how information can be used in their own AI models.
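Policies like these are most effective when backed by technical guardrails. Below is a minimal, hypothetical sketch of one such control: a pre-submission screen that checks a prompt against simple data-classification patterns before it ever reaches a public AI tool. The patterns, function names, and blocking behavior are illustrative assumptions rather than a description of any specific product; a real deployment would lean on a dedicated DLP engine and the organization’s own classification rules.

```python
import re

# Illustrative patterns only (assumptions for this sketch); a real deployment
# would use the organization's own data classification rules and DLP tooling.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only|proprietary)\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def submit_to_public_ai(text: str) -> None:
    """Refuse to forward a prompt that trips a classification rule."""
    findings = screen_prompt(text)
    if findings:
        # A real control would also log the event for security review.
        raise ValueError(f"Blocked by AI acceptable use policy: {findings}")
    print("Prompt cleared for submission.")  # stand-in for the real API call

if __name__ == "__main__":
    try:
        submit_to_public_ai("Summarize this CONFIDENTIAL product roadmap: ...")
    except ValueError as err:
        print(err)
```

A simple filter like this will never catch everything, which is exactly why the policy pairs it with training: the guardrail handles the obvious cases, and educated employees handle the judgment calls.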
A solid AI Acceptable Use Policy defines protocols for handling sensitive data, emphasizes the importance of protecting intellectual property, and outlines the ethical considerations necessary for responsible AI use. By aligning these guidelines with relevant industry regulations and data protection laws, organizations can lay a strong foundation for safe AI integration.
Creating an AI-aware culture through training
Equally important as the policy itself is the human element. Even the best policy can fall short without proper training for every stakeholder: employees, contractors, and any third party with access to your data. Continuous education is vital in a landscape where AI technology evolves rapidly. Training programs should cover the AI Acceptable Use Policy itself, along with how to:
- Recognize the risks: Understand potential threats like data exfiltration, IP theft, and malware.
- Make informed decisions: Weigh the risks and benefits when selecting AI tools for any use case.
- Follow best practices: Adhere to established security protocols.
- Report incidents: Quickly flag any security breaches or suspicious activities.
This proactive approach reinforces best practices and fosters a culture where security concerns are promptly reported and addressed.
It only works if it’s a group effort
In this rapidly evolving landscape, staying ahead of threats requires a proactive, adaptive approach. The best policies and training will be created through collaboration. The CISO alone will not determine the risks and rewards that AI brings to an organization. Efforts to establish policies and implement training should be cross-functional and broadly supported by organizational leadership to succeed.
Organizations should also bring their cybersecurity partners to the table. Experts with our Cyber Consulting Services can help with best practices, risk assessments, policy development, and custom training programs. Partners that span geography and industry types can help ensure that policies and training account for a robust set of risks and regulatory considerations.
The bottom line: no single approach to protecting sensitive data in the age of AI fits every organization. But strong policies paired with solid training programs are a great way to get started.
Want to read more expert perspectives about AI? Check out these related blogs. Learn more about Trellix data security products or request a demo to try for yourself today.
Authored in part with the assistance of GenAI tools, and yes, we did follow our own AI-use policy.