# AI Tools Usage Policy

Guidelines for using company-managed AI tools responsibly.
## Privacy & Data Protection
Before using any AI tool with company data, you must disable any features that share your data or use it for model training. The table below lists the required setting for each managed tool.
| Tool | Action Required | Where |
|---|---|---|
| Claude | Disable "Allow Anthropic to use your conversations to improve Claude" | Settings > Privacy |
| ChatGPT | Disable "Improve the model for everyone" | Settings > Data Controls |
| Gemini | No action needed | Managed by Google Workspace admin |
| NotebookLM | No action needed | Managed by Google Workspace admin |
| Perplexity | Disable data sharing | Settings > AI Data Usage |
## Acceptable Use
- Do use AI tools to increase productivity, improve code quality, and accelerate research
- Do verify AI-generated content before using it in production or external communications
- Do report any security concerns to IT support immediately
- Don't share confidential client data, personally identifiable information (PII), or classified information with an AI tool unless that tool is explicitly approved for the relevant data classification
- Don't use personal AI accounts for company work — use the managed tools listed in this policy
- Don't bypass privacy settings or disable security controls
## Access Tiers
Currently, all employees have access to all tools. If tiered access is enabled in the future, the tiers will be assigned as follows:
| Tier | Tools | Typical Roles |
|---|---|---|
| Developers | All tools including Claude Code | Engineering, DevOps, SRE |
| Power Users | All tools except Claude Code | Product, Design, Analytics |
| General | Gemini, NotebookLM | All other employees |
## Questions?
Submit a request or attend AI Tools office hours (Tuesdays & Thursdays, schedule TBD).