
AI Acceptable Use Policy

Last updated: July 26, 2025

1. Purpose

The purpose of this AI Acceptable Use Policy is to define the conditions under which artificial intelligence systems may be developed, acquired, deployed, and used. The acceptable use of AI must align with our mission and values, support the welfare and rights of our stakeholders and the public, and comply with all applicable laws and regulations.

All AI use should contribute positively to society. Uses of AI that contravene these objectives, violate legal obligations, or present unreasonable risks to individuals, communities, or the environment are strictly prohibited.

2. Scope

This policy is mandatory for all internal team members, business partners, and customers who access or use AI systems that we develop, deploy, or license.

  • Team Members: This includes all employees, contractors, seconded staff, interns, trainees, and other individuals performing work under the organization’s direct authority—whether on a permanent, fixed-term, or temporary basis.
  • Business Partners: This includes vendors, consultants, service providers, and any third parties acting on behalf of the organization or providing AI-related services. Any contractual agreement with such parties should include provisions requiring full compliance with this policy and its underlying procedures.
  • Customers: Where this policy is referenced within a license agreement, service contract, or End User License Agreement (EULA), it becomes binding on customers who use our AI systems or outputs. Customers are expected to adhere to the same standards of responsible AI use and ethical deployment.

Together, these groups are referred to collectively as “users” under this policy framework.

3. Unacceptable Uses of AI

Each AI system must undergo a formal risk classification. Risk mitigation, testing, and required approvals are determined by the assigned risk level. No system classified as “unacceptable” may be developed or deployed. This risk-based approach ensures that AI systems are designed, operated, and governed in proportion to their potential harms and societal impact.

The following categories of AI applications are classified as unacceptable and are strictly prohibited under this policy. This list unifies prohibited uses as defined by US federal and state laws with the acceptable use policies of major cloud and model providers.

3.1 Human Rights, Civil Liberties, and Safety

  • Autonomous weapons or lethal autonomous systems
  • Predictive policing or pre-emptive criminal risk scoring
  • Social scoring systems that rate individuals based on behavior, socioeconomic status, or characteristics
  • Biometric categorization or real-time biometric identification in public spaces
  • Unlawful tracking or stalking systems
  • Surveillance systems used to target, discriminate against, or harass individuals or vulnerable populations
  • Systems for ongoing surveillance, real-time or near real-time identification, or persistent tracking of an individual using any of their personal data, including biometric data, without the individual’s express consent
  • Voice-activated or generative systems that manipulate children’s behavior in harmful ways
  • Advertising using AI-generated actors or voices without disclosure
  • Use of AI to facilitate human trafficking or the exploitation of vulnerable individuals
  • Training or deploying AI to engage in or assist fraud, extortion, or identity theft

3.2 Misinformation, Influence, and Deception

  • AI systems used to manipulate, deceive, or impersonate others without consent
  • Systems that produce or disseminate misinformation, disinformation, or propaganda
  • Deepfakes or synthetic media used without consent or disclosure, or for harassment, sexual exploitation, fraud, or election interference
  • Deepfakes or sexualized depictions of a minor, or synthetic content used to abuse or harass a minor
  • Bots that interact with humans for commercial or political purposes without disclosing their automated nature
  • Use of AI to manipulate elections, influence voter behavior, or suppress participation
  • AI-generated content that encourages or instructs on self-harm or suicide
  • Any content that incites violence or physical harm against others
  • Use of AI to create or distribute content that promotes terrorism or violent extremism
  • Use of AI for the creation or dissemination of intimate imagery without consent (non-consensual deepnudes)
  • Use of AI to generate fake product or service reviews
  • Use of AI for astroturfing campaigns (false grassroots support)
  • Training or deploying AI tools that impersonate emergency services or medical providers

3.3 Data Privacy, Consent, and Security

  • Re-identification of anonymized data without consent
  • Use of personal data for training AI without informed, specific, and revocable consent
  • Training on datasets that include copyrighted, confidential, or sensitive material without proper licenses or legal basis
  • AI systems that violate privacy or data protection laws, including unauthorized surveillance, profiling, or scraping
  • Generative models that memorize and reproduce sensitive or proprietary data
  • Facial recognition databases without express consent

3.4 Discrimination and Unfair Outcomes

  • AI systems that produce or reinforce discriminatory outcomes based on race, gender, ethnicity, religion, age, disability, or other protected characteristics
  • Employment, financial, medical, or educational decision-making systems without bias evaluation and audit trails
  • Systems that enable digital redlining or deny opportunities unfairly to protected groups
  • Use of demographic attributes for personalization without regulatory justification and independent review

3.5 Intellectual Property and Ethical Content Generation

  • Generation of copyrighted material without licensing or legal basis
  • Creation of AI-generated actors, voices, or likenesses without disclosure or consent
  • Commercial use of deceased individuals’ likenesses without estate permission
  • Fabrication of scientific research, legal documents, or medical advice
  • Generation of content that is obscene, offensive, harmful, or unsafe for minors

3.6 Safety and Misuse Prevention

  • Deployment of AI systems that present unmitigated risks to critical infrastructure, health, or public safety
  • Use of AI in life-critical settings without extensive validation, monitoring, and fail-safes
  • Circumvention of safety measures, filters, or integrity mechanisms in AI models or APIs
  • Use of AI for spam, phishing, malware generation, or security evasion
  • Training AI systems to generate exploits, jailbreaks, or instructions for circumventing security software
  • Using AI to assist in denial-of-service attacks or intrusion into protected networks
  • Using AI to impersonate licensed professionals without explicit user awareness and disclaimers

4. Enforcement and Review

Any AI system identified as violating this Acceptable Use Policy will be subject to an immediate mandatory stop. The system must then undergo a risk reassessment by the AI Governance Office. If the use is determined to fall under an unacceptable category, it must be terminated and any deployment reversed.

Please submit any questions, feedback, or concerns to our team at legal@johnsnowlabs.com.
