The “Do Not Paste” List
- Brayden Cantzler
- Jan 27
- 3 min read

Simple AI Privacy Rules Every SMB Should Adopt
AI tools are great at drafting emails, cleaning up wording, and turning messy notes into something usable.
The risk shows up when someone tries to “help AI understand” by pasting the real thing: a full client email chain, a contract excerpt, a payroll report, an HR situation, or a screenshot that includes access details.
If you want one rule that prevents most avoidable mistakes, it’s this:
If you wouldn’t forward it in an email, don’t paste it into AI.
That single habit keeps teams moving fast without turning AI into a privacy problem.
Why People Paste Too Much
Most oversharing isn’t malicious. It’s momentum.
Someone’s under pressure, wants the response to sound professional, and assumes “more context = better output.” So they paste everything.
But context often includes details that never needed to be there: names, addresses, account info, medical notes, internal pricing, or credentials.
TEC Tip: Your biggest risk isn’t the AI model. It’s the copy/paste reflex.
A Quick Real-World Scenario
This is a common one:
An HR manager wants help rewriting a sensitive email. They paste:
“Employee John Smith was written up on 12/3 for attendance issues. He has a medical condition…”
That prompt includes identifying information and potentially protected details.
Here’s the safer version that gets the same value:
“Draft an HR email about an attendance issue. Keep the tone professional and neutral. Provide three versions: firm, supportive, and formal. Use placeholders like [Employee] and [Date].”
Same outcome. Less risk.
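The placeholder rewrite above can even be automated before anything reaches a chat window. Here's a minimal sketch in Python, assuming you maintain your own list of identifiers to scrub; the two patterns below are illustrative only, and real redaction needs a maintained name list or an NER tool:

```python
import re

# Illustrative redaction rules: each (pattern, placeholder) pair is an
# example, not an exhaustive rule set.
REDACTIONS = [
    (re.compile(r"\bJohn Smith\b"), "[Employee]"),                  # known name
    (re.compile(r"\b\d{1,2}/\d{1,2}(?:/\d{2,4})?\b"), "[Date]"),    # short dates
]

def redact(text: str) -> str:
    """Swap known identifiers and dates for placeholders before pasting."""
    for rx, placeholder in REDACTIONS:
        text = rx.sub(placeholder, text)
    return text

raw = "Employee John Smith was written up on 12/3 for attendance issues."
print(redact(raw))
# -> Employee [Employee] was written up on [Date] for attendance issues.
```

Even a crude script like this turns the safe habit into the default, because the sensitive version never leaves your clipboard.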
The “Do Not Paste” List
If your prompt includes any of the categories below, stop and rewrite it with placeholders.
1) Personal And Client-Identifying Information (PII)
• Full names tied to services, case details, or sensitive context
• Dates of birth
• Home addresses
• Driver’s license numbers
• Passport numbers
• Any unique IDs that can identify a person
2) Financial Details
• Bank account and routing numbers
• Credit card numbers
• Payroll data
• Tax documents
• Invoices with account numbers, nonpublic pricing terms, or customer-identifying detail
3) Credentials And Access Information
• Passwords
• MFA codes or backup codes
• API keys
• Admin links that contain tokens
• VPN configs or anything that could help someone access your environment
4) Protected Or Regulated Data
• Patient data (PHI)
• Student records
• Privileged legal communications
• Program participant details tied to compliance
5) Confidential Business Information
• Nonpublic contracts and legal clauses
• Pricing sheets, margins, vendor terms
• M&A or partnership discussions
• HR performance notes, investigations, or disciplinary issues
• Background checks
• Internal incident reports (security, legal, HR)
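If your team wants a technical backstop for this list, a prompt draft can be screened for obvious red flags before anyone pastes it. This is a minimal sketch, not a substitute for a real data-loss-prevention tool; every pattern and label below is illustrative and incomplete:

```python
import re

# Illustrative "Do Not Paste" patterns -- a real DLP tool covers far more.
DO_NOT_PASTE_PATTERNS = [
    ("credit card number", re.compile(r"\b(?:\d[ -]?){13,16}\b")),
    ("US SSN", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("email address", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")),
    ("API key / token", re.compile(r"\b(?:sk|pk|api|token)[_-][A-Za-z0-9]{16,}\b", re.I)),
    ("password assignment", re.compile(r"(?i)password\s*[:=]\s*\S+")),
]

def check_prompt(text: str) -> list[str]:
    """Return labels of sensitive patterns found in a prompt draft."""
    return [label for label, rx in DO_NOT_PASTE_PATTERNS if rx.search(text)]

risky = "Reset for john@acme.com, temp password: Hunter2!"
print(check_prompt(risky))
# -> ['email address', 'password assignment']
```

A check like this catches the copy/paste reflex at the moment it happens, which is exactly when people are least likely to stop and think.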
TEC Tip: “Internal” is not a safe category. Internal documents often contain the exact context you’re trying to protect.
“Okay, But I Still Need AI To Help.” Use Paste-Safe Prompts.
This is where most policies fail: they tell people what not to do, but don’t show them what to do instead.
Here are paste-safe rewrites that keep the value:
Instead of pasting a client email chain:
“Client = [Client A]. Issue = delayed deliverable. Goal = maintain trust. Draft a response with 3 tone options.”
Instead of pasting a contract clause:
“I’m reviewing a termination or liability clause. What are common risks and questions to check before approval?”
Instead of pasting an invoice dispute or payroll detail:
“Create a professional invoice reminder template with placeholders for amounts and dates.”
Instead of pasting credentials or config files:
“Give me a checklist for credential storage, key rotation, and least privilege.”
Instead of pasting grant or program participant details:
“Create an outline and compliance checklist based on these requirements. Use placeholders only.”
TEC Tip: Ask AI for structure (templates, checklists, outlines, options). Don’t feed it the sensitive source material.
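Teams that share prompt templates can bake the placeholder habit in so nobody has to remember it. A small sketch, with hypothetical field names chosen for this example:

```python
# Assemble a paste-safe prompt from abstract fields; the field names and
# wording here are illustrative, not a prescribed format.
def paste_safe_prompt(role: str, task: str, placeholders: list[str]) -> str:
    ph = ", ".join(f"[{p}]" for p in placeholders)
    return (
        f"Act as a {role}. {task} "
        f"Use only these placeholders for any specific details: {ph}."
    )

print(paste_safe_prompt(
    "HR manager",
    "Draft an email about an attendance issue in three tones: "
    "firm, supportive, and formal.",
    ["Employee", "Date"],
))
```

Because the template only ever accepts placeholder names, the real employee name or date has no field to land in.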
Why This Matters For Security, Too
AI introduces new ways sensitive information can leak, including prompt injection and accidental disclosure in connected systems. OWASP’s Top 10 for LLM Applications includes categories like sensitive information disclosure and prompt injection.
OWASP LLM Top 10 (PDF): https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-v2025.pdf
For a practical security perspective on generative AI data handling, AWS also emphasizes understanding how the information you enter is stored, processed, and shared when using generative AI services.
AWS Security Blog: https://aws.amazon.com/blogs/security/securing-generative-ai-data-compliance-and-privacy-considerations/
How TEC Can Help Your Team Adopt AI Responsibly
If you want AI productivity without privacy headaches, we can help you set the baseline the right way:
• AI acceptable-use policy (one-page, practical)
• “Do Not Paste” standards and training
• Approved workflow templates (meeting recaps, SOP drafts, client communications)
• Tool selection guidance and rollout planning
• Security alignment with your existing IT environment