Cut interview costs by 90% with Anthropic's Interviewer

AI interviewer now available for testing
Hey AI Enthusiast,
Anthropic just released something different.
They built an AI tool that conducts full interviews. Not surveys. Not forms. Actual conversations.
They used it to interview 1,250 professionals about how they use AI at work. General workforce, scientists, creatives.
The interviews ran 10-15 minutes each. AI asked questions, followed up on answers, adapted based on responses.
86% of professionals said AI saves them time. 65% were satisfied with AI's role in their work. But 69% mentioned social stigma around using AI tools at work.
One fact-checker told the AI: "A colleague recently said they hate AI and I just said nothing. I don't tell anyone my process because I know how a lot of people feel about AI."
This is the first qualitative research on AI usage at this scale. Traditional interviews with 1,250 people would take months and cost a fortune.
But the real Marketing Monday lesson isn't about the research.
It's about what most people get wrong after AI generates their content.
But first, today's prompt (then the editing step everyone skips...)
🔥 Prompt of the Day 🔥
Newsletter Curation System
Act as an AI newsletter specialist. Create one systematized workflow for producing weekly [TOPIC] newsletters using AI tools.
Essential Details:
Newsletter Topic: [SUBJECT FOCUS]
Subscriber Count: [AUDIENCE SIZE]
Content Sources: [WHERE TO CURATE FROM]
AI Tools: [PLATFORMS USED]
Personalization Level: [SEGMENTATION DEPTH]
Production Time Goal: [HOURS TO CREATE]
Create one newsletter system including:
Content discovery AI prompts (5 variations)
Summarization workflow (3 steps)
Introduction/commentary framework
Link curation criteria
Subject line A/B test generator (10 options)
Time-saving automation points
Turn 8 hours of newsletter creation into 2 hours.
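
Want to wire this prompt into a script instead of pasting it by hand? Here's a minimal sketch assuming the official Anthropic Python SDK. The fill-in values, the model ID, and the token cap are placeholders, not recommendations.

```python
# Minimal sketch: fill in the prompt placeholders and send the request.
# Assumes the official Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY in your environment. All values below are illustrative.
import anthropic

PROMPT_TEMPLATE = """Act as an AI newsletter specialist. Create one systematized workflow
for producing weekly {topic} newsletters using AI tools.

Essential Details:
Newsletter Topic: {topic}
Subscriber Count: {subscribers}
Content Sources: {sources}
AI Tools: {tools}
Personalization Level: {personalization}
Production Time Goal: {time_goal}

Create one newsletter system including:
- Content discovery AI prompts (5 variations)
- Summarization workflow (3 steps)
- Introduction/commentary framework
- Link curation criteria
- Subject line A/B test generator (10 options)
- Time-saving automation points
"""

prompt = PROMPT_TEMPLATE.format(
    topic="AI for small business",               # [SUBJECT FOCUS]
    subscribers="12,000",                        # [AUDIENCE SIZE]
    sources="industry blogs, newsletters, X",    # [WHERE TO CURATE FROM]
    tools="Claude, Zapier",                      # [PLATFORMS USED]
    personalization="two segments: owners, marketers",  # [SEGMENTATION DEPTH]
    time_goal="2 hours per issue",               # [HOURS TO CREATE]
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # swap in whichever model you use
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

The same template works pasted straight into Claude or ChatGPT. The script just makes it repeatable week after week.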
✅ Marketing Monday ✅
Raw AI Output Is Garbage Without Editing
Most people treat AI-generated content like it's finished.
They copy the output. Paste it wherever. Ship it.
Then wonder why it sounds like every other AI-written piece on the internet.
Raw AI output is like uncooked ingredients. The magic happens in the editing kitchen.
Polish makes perfect.
The Problem With Shipping Raw AI Content
AI writes in patterns. Predictable phrases. Generic examples. Safe language.
It sounds like AI because everyone's using the same models with similar prompts.
Your competitors are shipping the same bland content. Your audience can tell.
"I hear from colleagues that they can tell when email correspondence is AI generated," one salesperson told Anthropic's interview AI. "They have a slightly negative regard for the sender. They feel slighted that the sender is 'too lazy' to send them a personal note."
That's the cost of raw AI output.
What Editing Actually Does
Editing transforms AI content from generic to yours.
It's not about fixing grammar. It's about injecting humanity.
Step 1: Inject Brand Personality
AI doesn't know your voice. It writes in neutral corporate speak unless you force it not to.
Take every sentence and ask: "Would I actually say this?"
If not, rewrite it.
Your brand voice lives in the details:
How you start sentences
Which words you avoid
Your punctuation choices
Your humor style
Your industry jargon
AI can't replicate this without heavy editing.
Step 2: Add Specific Examples AI Doesn't Know
AI pulls from training data. It doesn't know your business, your clients, your wins, your lessons.
Every piece of AI content needs real examples:
Client results with actual numbers
Personal stories from your experience
Industry-specific situations
Real names and companies (when appropriate)
Generic: "This strategy increased conversions."
Specific: "We tested this with a SaaS client in Q3. Their trial-to-paid conversion went from 12% to 19% in six weeks. The change cost nothing to implement."
Specificity proves expertise. AI can't provide it.
Step 3: Verify Every Claim and Statistic
AI hallucinates. Confidently.
It will cite studies that don't exist. Quote statistics that are wrong. Reference examples that never happened.
Check every fact. Every number. Every claim.
If you can't verify it, remove it or find the real data.
Your credibility depends on accuracy. One fake stat ruins trust.
Step 4: Remove Obvious AI Phrases and Patterns
AI has tells. Phrases it loves. Patterns it repeats.
Watch for:
"In today's digital landscape..."
"It's important to note that..."
"This underscores the importance of..."
"Delve into..."
"Leverage..."
"Unlock..."
Numbered lists that feel arbitrary
Conclusions that summarize everything twice
Delete these immediately. They scream AI.
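
Before the read-aloud pass, a quick script can flag most of these tells automatically. This is a plain standard-library sketch; the phrase list mirrors the one above, and the filename in the usage line is hypothetical.

```python
# Flag common AI-tell phrases in a draft so you can rewrite them by hand.
# Pure standard library; extend AI_TELLS with patterns you notice yourself.
import re
import sys

AI_TELLS = [
    r"in today's digital landscape",
    r"it'?s important to note that",
    r"this underscores the importance of",
    r"\bdelve into\b",
    r"\bleverage\b",
    r"\bunlock\b",
]

def flag_ai_tells(text: str) -> list[tuple[int, str, str]]:
    """Return (line_number, matched_phrase, line_text) for every hit."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in AI_TELLS:
            match = re.search(pattern, line, flags=re.IGNORECASE)
            if match:
                hits.append((lineno, match.group(0), line.strip()))
    return hits

if __name__ == "__main__":
    draft = open(sys.argv[1], encoding="utf-8").read()
    for lineno, phrase, line in flag_ai_tells(draft):
        print(f"line {lineno}: '{phrase}' -> {line}")
```

Run it with `python flag_tells.py draft.txt` and rewrite every hit yourself. The script finds the patterns; it doesn't fix them.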
Step 5: Keep Editing Until It Sounds Human
The test: Read it out loud.
If you wouldn't say it in a conversation, rewrite it.
If it sounds like a press release, rewrite it.
If it could have been written by anyone, rewrite it.
AI starts conversations. Humans finish them.
The Real Cost of Skipping This
I see people shipping AI content unedited every day.
Blog posts that read like every other blog post. Emails that sound corporate. Social posts that get ignored.
They saved 30 minutes by skipping editing.
They lost weeks of audience trust.
Editing is where differentiation happens. It's where your voice emerges. It's where generic becomes valuable.
The Anthropic Research Proves This
In Anthropic's interviews, professionals described their AI use as 65% augmentation and 35% automation.
But when they analyzed actual Claude usage, it was 47% augmentation and 49% automation.
People think they're collaborating more than they actually are.
The gap between perception and reality shows up in the quality of output.
If you're treating AI like automation—copy, paste, ship—your content reflects it.
If you're treating AI like augmentation—generate, edit, refine—your content stands out.
What Actually Works
Use AI to generate first drafts. Fast.
Then spend the time you saved on editing. Ruthlessly.
Add your examples. Verify claims. Remove AI patterns. Inject personality. Read it out loud. Fix what sounds wrong.
The content that wins isn't AI-generated.
It's AI-assisted, human-finished.
Did You Know?
AI discovered that mushroom networks in forests perform calculations similar to computers when distributing nutrients between trees.
🗞️ Breaking AI News 🗞️
Inside Anthropic's 1,250 AI Interviews
Anthropic designed a tool to run large-scale qualitative research on AI usage. They call it Anthropic Interviewer.
It's powered by Claude and conducts detailed interviews automatically at scale. Feeds results back to human researchers for analysis.
For this initial test, they interviewed 1,250 professionals:
1,000 general workforce
125 scientists
125 creatives
All participants provided consent for their interview data to be analyzed and publicly released.
How It Works
Three stages: planning, interviewing, analysis.
Planning: Anthropic Interviewer creates an interview rubric. Focuses on the same research questions across all interviews while remaining flexible for individual variations.
Human researchers collaborate with the AI to finalize the plan.
Interviewing: The AI conducts real-time, adaptive interviews on Claude.ai. 10-15 minutes per participant.
It follows the plan but adjusts based on responses. Asks follow-up questions. Explores tangents when relevant.
Analysis: Human researchers work with Anthropic Interviewer to analyze transcripts. The AI identifies emergent themes and quantifies their prevalence.
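
Anthropic hasn't published the Interviewer's code, so the sketch below is purely illustrative: a rubric-driven interview loop built on the Anthropic Python SDK, in the spirit of the plan → interview → analyze flow described above. The rubric questions, system prompt, turn limit, and model ID are all invented for the example.

```python
# Illustrative only: a rubric-driven, adaptive interview loop.
# This is NOT Anthropic's implementation; rubric and prompts are made up.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # placeholder; use whichever model you prefer

# Stage 1 (planning): a fixed rubric of research questions, shared across interviews.
RUBRIC = [
    "How do you currently use AI in your day-to-day work?",
    "What tasks would you never hand off to AI, and why?",
    "How do colleagues react when they learn you use AI?",
]

SYSTEM = (
    "You are a research interviewer. Ask one question at a time, "
    "ask a short follow-up when an answer is interesting, and keep "
    "the whole interview under 15 minutes."
)

def run_interview(get_answer, max_turns: int = 10) -> list[dict]:
    """Stage 2 (interviewing): alternate model questions with participant answers.

    `get_answer` is any callable that returns the participant's reply, e.g. `input`.
    Returns the transcript for Stage 3 (analysis by human researchers plus AI).
    """
    messages = [{"role": "user", "content": "Interview rubric:\n" + "\n".join(RUBRIC)}]
    transcript = []
    for _ in range(max_turns):
        reply = client.messages.create(
            model=MODEL, max_tokens=300, system=SYSTEM, messages=messages
        )
        question = reply.content[0].text
        answer = get_answer(question)
        transcript.append({"question": question, "answer": answer})
        messages += [
            {"role": "assistant", "content": question},
            {"role": "user", "content": answer},
        ]
    return transcript

# Example: run it at the terminal, answering as the participant.
# transcript = run_interview(lambda q: input(q + "\n> "))
```

The analysis stage (identifying and quantifying themes across 1,250 transcripts) is the part this sketch leaves out; that's where the human researchers and the Interviewer work together.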
What They Found
General workforce:
86% said AI saves them time
65% satisfied with AI's role
69% mentioned social stigma around using AI at work
41% felt secure; 55% expressed anxiety about AI's impact on their future
People want to preserve tasks that define their professional identity while delegating routine work to AI.
One pastor: "If I use AI and up my skills with it, it can save me so much time on the admin side which will free me up to be with the people."
Creatives:
97% said AI saved them time
68% said it increased their work quality
70% mentioned managing peer judgment around AI use
One map artist: "I don't want my brand and my business image to be so heavily tied to AI and the stigma that surrounds it."
Economic anxiety appeared throughout. A voice actor: "Certain sectors of voice acting have essentially died due to the rise of AI, such as industrial voice acting."
All 125 creative participants mentioned wanting to remain in control of their creative outputs. But many acknowledged moments where AI drove creative decisions.
One artist: "The AI is driving a good bit of the concepts; I simply try to guide it… 60% AI, 40% my ideas."
Scientists:
79% mentioned trust and reliability concerns as the primary barrier
27% cited technical limitations
91% expressed desire for more AI assistance despite current limitations
Scientists primarily use AI for literature review, coding, and writing. Not for core research like hypothesis generation and experimentation.
One information security researcher: "If I have to double check and confirm every single detail the agent is giving me to make sure there are no mistakes, that kind of defeats the purpose."
Scientists want AI partnership but can't yet trust it for core research.
Why This Matters
Traditional interviews with 1,250 people would be expensive and time-consuming. Anthropic Interviewer made it feasible.
But the significance extends beyond methodology. It shifts what questions we can ask about AI's role in society.
Previously, Anthropic only had insight into how people used Claude within the chat window. They didn't know how people felt about using AI, what they wanted to change, or how they envisioned AI's future role.
Now they do.
Anthropic is using these insights to inform product development, partnerships with creative institutions, grants for scientists, and teacher training programs.
They're also launching public pilot interviews. Anyone can participate in a 10-15 minute interview to share their perspective on AI's role in their life and work.
The anonymized insights will be analyzed as part of societal impacts research and published.
The Bigger Picture
This is Anthropic's latest step to center human voices in AI development.
It started with Collective Constitutional AI—gathering public perspectives to shape Claude's behavior.
Now it extends to understanding how people actually use AI, what they struggle with, and what they need.
The findings inform Claude's development, Anthropic's policies, and partnerships with specific communities.
AI companies building in public, gathering real feedback, and adjusting based on what people actually experience—this is how it should work.

Over to You...
Would you trust AI to interview potential hires about their skills and cultural fit?
Hit reply and share.
To evolving hiring processes,
Jeff J. Hunter
Founder, AI Persona Method | TheTip.ai
NEW: “AI Money Group” to Learn to Make Money with AI
» NEW: Join the AI Money Group «
🚀 Zero to Product Masterclass - Watch us build a sellable AI product LIVE, then do it yourself
📞 Monthly Group Calls - Live training, Q&A, and strategy sessions with Jeff
