Who wins the AI shopping wars?

Major AI shopping shake-up happening right now

Hey AI Enthusiast,

OpenAI and Perplexity both dropped AI shopping features this week.

You can now ask ChatGPT to find "a gaming laptop under $1000 with a 15+ inch screen." Or show it a picture of expensive shoes and ask for cheaper alternatives.

Perplexity's going even deeper. Their chatbot remembers where you live and what you do. So you can ask for recommendations based on what it already knows about you.

Adobe predicts AI-driven traffic to retail sites will grow 520% this holiday season.

But here's what caught my attention...

The AI shopping startups aren't worried. At all.

But first, let me share today's prompt and optimization news (then we'll get to why the startups are so confident...)

🔥 Prompt of the Day 🔥

Smart A/B Test Hypothesis Generator

Act as a CRO specialist. Create an A/B test hypothesis generator for [WEBSITE/CAMPAIGN ELEMENT]. (Works in ChatGPT or Claude.)

Essential Details:

  • Test Element: [PAGE/EMAIL/AD/FUNNEL]

  • Current Performance: [BASELINE METRIC]

  • Traffic Volume: [MONTHLY VISITORS]

  • Conversion Goal: [WHAT TO IMPROVE]

  • Previous Tests: [LEARNING HISTORY]

  • Testing Velocity: [TESTS PER MONTH]

Create a hypothesis system that includes:

  1. Performance data analysis framework

  2. AI hypothesis generation prompts (20+)

  3. Impact vs effort prioritization

  4. Statistical significance calculator

  5. Test design specifications

  6. Learning documentation template

Never run out of test ideas with AI.
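
And if you want to sanity-check item 4 (statistical significance) yourself, here's a minimal sketch of a standard two-proportion z-test in Python. The visitor and conversion numbers in the example are made up; swap in your own.

```python
# Minimal A/B significance check: two-proportion z-test (illustrative only).
from math import sqrt, erf

def ab_test_p_value(visitors_a, conversions_a, visitors_b, conversions_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Example: 10,000 visitors per variant, 300 vs. 345 conversions
p_a, p_b, z, p = ab_test_p_value(10_000, 300, 10_000, 345)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p:.3f}")
```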

✅ Amazon Makes AI 2.5x Faster ✅

Amazon just brought EAGLE speculative decoding to SageMaker.

It's a technique that speeds up AI inference by up to 2.5x without compromising quality.

Instead of relying on a separate "draft" model to guess upcoming tokens, EAGLE attaches a lightweight draft head to the main model's own hidden states, drafts several tokens ahead, and lets the main model verify them all in a single pass.

Think of it like the model becoming its own assistant.

Faster responses. Lower costs. Same quality output.
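
If you're curious what that loop looks like, here's a conceptual sketch of the speculative-decoding idea EAGLE builds on. This isn't Amazon's code: `propose_draft` and `verify_with_main_model` are stand-ins for the draft head and the main model.

```python
# Conceptual sketch of a speculative-decoding loop (not Amazon's EAGLE code).
# propose_draft(tokens, k)              -> k cheaply drafted next tokens
# verify_with_main_model(tokens, draft) -> main model's own pick at each drafted position
def speculative_generate(propose_draft, verify_with_main_model,
                         prompt_tokens, max_new_tokens=128, k=4):
    tokens = list(prompt_tokens)
    target_len = len(prompt_tokens) + max_new_tokens
    while len(tokens) < target_len:
        # 1. Draft: a lightweight head (in EAGLE, fed by the main model's
        #    hidden states) cheaply proposes the next k tokens.
        draft = propose_draft(tokens, k)
        # 2. Verify: one forward pass of the main model scores all k drafted
        #    positions at once, instead of k sequential passes.
        main_picks = verify_with_main_model(tokens, draft)
        # 3. Accept the longest prefix where draft and main model agree,
        #    then take the main model's token at the first disagreement.
        accepted = []
        for drafted_tok, main_tok in zip(draft, main_picks):
            accepted.append(main_tok)
            if drafted_tok != main_tok:
                break
        tokens.extend(accepted)
    return tokens[:target_len]
```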

The real power is in the customization.

You can train the EAGLE draft head on your own data. Not generic benchmarks. Your actual workload patterns.

Amazon tested this with a Qwen3-32B model. With custom training, output throughput reached up to 412 tokens per second at 8 concurrent requests, versus 214 tokens per second with base EAGLE training and just 156 tokens per second without EAGLE.

The math matters here. If you're running high-volume AI applications, that's real money saved on compute.
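
Here's the back-of-the-envelope version using the throughput numbers above. The monthly token volume and the $40/hour instance price are made-up placeholders, just to show the shape of the savings:

```python
# Back-of-the-envelope compute savings from the throughput numbers above.
# The workload size and $40/hour price are placeholders; swap in your real rates.
MONTHLY_TOKENS = 1_000_000_000      # example workload: 1B output tokens/month
PRICE_PER_HOUR = 40.0               # hypothetical GPU instance cost

for label, tokens_per_sec in [("no EAGLE", 156), ("base EAGLE", 214), ("custom EAGLE", 412)]:
    hours = MONTHLY_TOKENS / tokens_per_sec / 3600
    print(f"{label:>12}: {hours:7.0f} instance-hours ≈ ${hours * PRICE_PER_HOUR:,.0f}/month")
```

Same token volume, roughly 2.6x fewer instance-hours at the top end.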

SageMaker supports six model architectures for this, including Llama, Qwen2, and Qwen3. You can bring your own models or use their pre-trained versions.

The workflow is straightforward. Upload your model to S3. Run the optimization job. Deploy through the same interface you already use.
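
In boto3 terms, the middle step looks roughly like the sketch below. Every name here is a placeholder, and the `OptimizationConfigs` entry is deliberately left for you to fill in from the SageMaker docs, since the exact EAGLE config schema isn't reproduced here.

```python
# Rough sketch of "upload to S3, run the optimization job, deploy".
# Bucket, role, job name, and instance type are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_optimization_job(
    OptimizationJobName="qwen3-32b-eagle-demo",                  # placeholder name
    RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",    # placeholder role
    ModelSource={"S3": {"S3Uri": "s3://my-bucket/models/qwen3-32b/"}},
    DeploymentInstanceType="ml.g5.12xlarge",                     # size to your model
    OptimizationConfigs=[
        # Fill this in per the SageMaker docs for EAGLE speculative decoding;
        # the exact config key and fields are not guessed at here.
    ],
    OutputConfig={"S3OutputLocation": "s3://my-bucket/optimized/"},
    StoppingCondition={"MaxRuntimeInSeconds": 36000},
)
```

When the job finishes, the optimized artifacts land in your output S3 location and deploy to an endpoint the same way your current models do.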

The benchmarks are public. The performance gains are measurable.

If you're paying for AI inference at scale, this is worth testing.

Did You Know?

Some bookstores use AI that predicts which books you'll buy based on how long you spend with the back cover and the first page.

🗞️ Breaking AI News 🗞️

Two weeks before Black Friday, OpenAI and Perplexity rolled out nearly identical features. Upload a photo. Ask for alternatives. Get product recommendations. Check out without leaving the chat.

OpenAI partnered with Shopify. Perplexity went with PayPal.

Both want a piece of the e-commerce action. Makes sense when you're burning millions on compute and still figuring out profitability.

But the niche AI shopping startups like Phia, Cherry, and Onton aren't sweating it.

"Any model is only as good as its data sources," Onton CEO Zach Hudson told TechCrunch. "ChatGPT and Perplexity piggyback off Bing or Google. That makes them only as good as the first few results from those indexes."

Daydream CEO Julie Bornstein agrees. She's been in e-commerce for years. Her take: general search has always sucked for fashion.

"Finding a dress you love is not the same as finding a television," she said. "That level of understanding comes from domain-specific data and merchandising logic that grasps silhouettes, fabrics, occasions, and how people build outfits over time."

The specialized startups built their own datasets. They're training on better data. Cleaner catalogs. Actual merchandising expertise.

Hudson's blunt about it: "If you're using only off-the-shelf LLMs and a conversational interface, it's very hard to see how a startup can compete with larger companies."

Translation: If you're not specializing, you're dead.

The big guys have reach and retail partnerships. But they're still scraping the same messy web data everyone else uses.

Meanwhile, the niche players are curating datasets that actually understand their vertical. That's the moat.

This reminds me of every market where a giant tries to crush a specialist. The giant has distribution. The specialist has expertise.

In 2025, expertise is data. And good data beats big distribution every time.

Over to You...

Are you using AI for shopping research yet? Or building any agents for your business?

Hit reply and let me know.

To smarter automation,

Jeff J. Hunter 
Founder, AI Persona Method | TheTip.ai

NEW: “AI Money Group” to Learn to Make Money with AI

» NEW: Join the AI Money Group «
💰 AI Money Blueprint: Your First $1K with AI - Learn the 7 proven ways to make money with AI right now

🚀 Zero to Product Masterclass - Watch us build a sellable AI product LIVE, then do it yourself

📞 Monthly Group Calls - Live training, Q&A, and strategy sessions with Jeff


Jeff J Hunter, 3220 W Monte Vista Ave #105, Turlock, CA 95380, United States
