Human Use of AI
How individuals interact with and apply AI systems within the workplace.
Indicators
An observed identity enters credentials such as passwords, API tokens, or encryption keys into AI tools, exposing secrets to uncontrolled environments and creating risk of data compromise.
An observed identity provides sensitive corporate or regulated information to public AI services, bypassing enterprise safeguards and risking uncontrolled data retention or leakage.
An observed identity submits proprietary application code to public AI tools, potentially exposing intellectual property, revealing vulnerabilities, or violating licensing obligations.
An observed identity enters customer personally identifiable information into AI platforms without anonymization, creating regulatory exposure and increasing the likelihood of privacy violations.
An observed identity submits confidential HR or legal documents to AI services, risking leakage of protected information and undermining privilege or compliance requirements.
An observed identity enters sensitive prompts during live demos or screen shares, unintentionally disclosing confidential details to audiences or recording platforms.
An observed identity engages with unapproved AI platforms, bypassing enterprise oversight and introducing uncontrolled data handling or security risks.
An observed identity relies on personal accounts to access AI services for business purposes, undermining enterprise monitoring, accountability, and data protection.
An observed identity approves AI-related initiatives beyond their functional remit, creating governance gaps and allowing deployment without adequate oversight or expertise.
An observed identity introduces AI-generated code into enterprise systems without security review, increasing the risk of vulnerabilities, backdoors, or operational defects.
An observed identity forwards AI-generated content directly to customers without review, risking inaccuracies, reputational harm, or disclosure of unintended information.
An observed identity leverages AI tools to produce misleading or manipulative content, creating reputational, legal, or compliance risks for the enterprise.
An observed identity provides copyrighted or licensed content to AI tools without proper rights, creating intellectual property infringement risk for the organization.
An observed identity deliberately engineers prompts to bypass AI safety controls, increasing exposure to harmful outputs and undermining compliance with policy or regulation.
An observed identity exploits jailbreak techniques against AI systems to override controls, eliciting restricted outputs or enabling misuse of the technology.
An observed identity crafts probing queries to infer or reconstruct sensitive data embedded in AI training sets, risking exposure of proprietary or personal information.
An observed identity continues attempting risky or unsafe prompts despite prior failures, signaling intent to misuse AI or a disregard for security policies.
An observed identity deploys open-source AI models without security or compliance review, introducing unmonitored risk into the enterprise technology stack.
An observed identity fails to follow organizational AI policies, undermining governance and creating inconsistent or unsafe practices.
An observed identity relies on AI outputs without human validation, risking propagation of errors, misinformation, or unsafe recommendations into business workflows.
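Several of the risky indicators above, such as entering credentials or customer PII into AI tools, are typically surfaced by scanning outbound prompt text for secret-like patterns. The sketch below is a minimal illustration of that idea; the pattern names, regexes, and the `scan_prompt` helper are assumptions for this example, not a production detector, which would use a far larger and more carefully tuned rule set.

```python
import re

# Illustrative patterns only; real secret/PII scanners maintain much larger rule sets.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-_.]{20,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # naive PII example
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of secret/PII patterns found in a prompt
    before it leaves the enterprise boundary."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A gateway or DLP proxy could call such a check on each prompt and block, redact, or alert when it returns any findings.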
An observed identity carefully reviews AI prompts to ensure no sensitive information is included, reducing risk of data leakage to external services.
An observed identity consistently engages with enterprise-approved AI platforms, ensuring oversight, auditability, and compliance with security standards.
An observed identity compares AI outputs against authoritative references, improving accuracy and reducing the risk of propagating false or misleading information.
An observed identity transparently discloses AI involvement in deliverables, ensuring accountability and compliance with transparency expectations.
An observed identity documents when AI contributes to business decisions, preserving an auditable trail that supports compliance, accountability, and governance reviews.
An observed identity ensures that compliance-sensitive outputs, such as legal or regulatory filings, receive human review rather than relying on AI alone.
An observed identity uses pre-approved AI prompt templates, ensuring consistency, reducing sensitive data exposure, and aligning with enterprise-approved practices.
An observed identity participates in model risk assessments, helping identify biases, unsafe behaviors, or performance gaps in enterprise AI systems.
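The positive indicator of consistently using enterprise-approved AI platforms implies some allowlist check on the service an identity is contacting. A minimal sketch of that check, assuming a hypothetical allowlist (`APPROVED_AI_HOSTS` and the example hostnames are illustrative, not real endpoints):

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would load this from policy configuration.
APPROVED_AI_HOSTS = {"ai.internal.example.com", "copilot.example.com"}

def is_approved_ai_service(url: str) -> bool:
    """True if the AI service URL points at an enterprise-approved host."""
    host = urlparse(url).hostname or ""
    return host.lower() in APPROVED_AI_HOSTS
```

In practice this comparison would run at a network proxy or CASB, flagging traffic to unapproved AI platforms (the risky indicator) and confirming use of sanctioned ones (the vigilant indicator).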
Relevance
This category distinguishes between risky and vigilant uses of AI, highlighting data handling, policy alignment, and the quality of human oversight applied to AI-assisted tasks.
Why this matters
Generative AI introduces new risks around data leakage, misinformation, and compliance. By monitoring human-AI interaction, organizations can enforce safe and effective practices.
Consequences of neglect
If left unmanaged, unsafe AI use leads to sensitive data exposure, unvetted outputs, and reputational or regulatory harm from inappropriate AI reliance.