Roughly nine in ten marketers are using AI, but only one in three companies tells them how to do it safely
- David Pagliari
- Apr 23
- 2 min read
Updated: May 22
That means roughly two-thirds of marketing teams are winging it!
In the rush to adopt AI tools, organizations often overlook the need for clear AI policies. According to a recent report, 75% of people already use AI at work and more than 70% have brought their own AI tools into the workplace, yet only 38% of organizations have AI policies (Source: 2025 AI Marketing Institute).
Many organizations lack comprehensive or even interim AI policies, and this exposes the business to significant risk. The risks are not theoretical: they can manifest as bias or hallucinations in AI output, copyright or other intellectual-property violations, plagiarism, and regulatory non-compliance. Bear in mind that you may not own what you generate with AI. Copyright protects material that is the product of human creativity, and the law on AI and copyright is still in flux as governments work out how to regulate AI, with the UK, Europe and the US each appearing to take separate regulatory paths.
None of this is a reason to delay deploying AI policies. Several key policy domains can be addressed now, regardless of how copyright regulation eventually settles:
Data Input Policy: Define permissible input data for AI tools, especially third-party ones. Prohibit uploading personally identifiable information (PII) and sensitive company information; there have already been cases where employee salary information was fed into HR models, resulting in privacy breaches. Conduct periodic usage audits to prevent accidental exposure (a minimal sketch of such a pre-submission check follows this list).
AI Content Governance Policy: Define where AI-generated content is permissible and prohibit its use in sensitive contexts such as crisis communications, legal statements, or content where copyright could be disputed.
Ethics & Bias Policy: Conduct regular bias audits of AI models to ensure diverse representation and prevent targeting of vulnerable groups. If, for example, a company is trying to validate target markets for a product, training the model only on existing customers may exclude specific target segments (see the representation check sketched after this list).
Transparency & Explainability Policy: Ensure AI-driven decisions are explainable. Maintain model-level documentation (e.g. model cards) and provide channels for raising concerns about model outputs.
Intellectual Property Policy: Set guidelines for ownership of AI-generated content and address the legal differences across regions. Vet AI outputs for potential IP violations before publication.
Training & Upskilling Policy: Implement mandatory AI literacy training and regular refreshers for all relevant staff. Regularly measure AI knowledge among employees and identify gaps to address.
Performance & Accountability Policy: Assign named owners for each AI-driven campaign and establish forums for reporting AI misuse, errors, or training gaps.
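To make the Data Input Policy concrete, here is a minimal sketch of a pre-submission PII check, assuming prompts are routed through an internal helper before they reach any third-party AI tool. The patterns, category names and functions are illustrative placeholders, not a complete PII detector; a real deployment would rely on a vetted detection library and feed blocked prompts into the periodic usage audit.

```python
import re

# Illustrative patterns only: a production check would use a vetted
# PII-detection library and cover far more categories (names, addresses, IDs).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_ni_like_id": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
    "salary_keyword": re.compile(r"\b(salary|compensation|payroll)\b", re.IGNORECASE),
}

def pii_findings(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt, if any."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def is_safe_to_submit(prompt: str) -> bool:
    """Gate a prompt before it is sent to a third-party AI tool.

    Blocked prompts are reported so periodic usage audits can review near misses.
    """
    findings = pii_findings(prompt)
    if findings:
        print(f"Blocked and logged for audit: possible PII ({', '.join(findings)})")
        return False
    return True

if __name__ == "__main__":
    prompt = "Draft a note to jane.doe@example.com about her salary review."
    if is_safe_to_submit(prompt):
        print("Prompt cleared for the external tool.")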
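```

Likewise, for the Ethics & Bias Policy, this is a rough sketch of one check a bias audit might include: comparing each target segment's share of the training data with its intended share. The segment labels, target shares and threshold are hypothetical, not figures from the report.

```python
from collections import Counter

# Hypothetical segments and target shares -- in practice these come from
# market research, not hard-coded values.
TARGET_SHARE = {"18-24": 0.20, "25-39": 0.35, "40-59": 0.30, "60+": 0.15}
MAX_GAP = 0.10  # flag segments under-represented by more than 10 percentage points

def representation_gaps(training_segments: list[str]) -> dict[str, float]:
    """Compare each segment's share of the training data against its target share."""
    counts = Counter(training_segments)
    total = sum(counts.values())
    return {seg: target - counts.get(seg, 0) / total for seg, target in TARGET_SHARE.items()}

def audit(training_segments: list[str]) -> None:
    for segment, gap in representation_gaps(training_segments).items():
        if gap > MAX_GAP:
            print(f"Flag: segment '{segment}' is under-represented by {gap:.0%} vs. target")

if __name__ == "__main__":
    # Existing-customer data skews older, so the younger target segments get flagged.
    sample = ["40-59"] * 50 + ["60+"] * 30 + ["25-39"] * 15 + ["18-24"] * 5
    audit(sample)
```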
Data sources: 2024 State of AI Marketing Report and Microsoft Work Trend Index Annual Report
