Double your creative output with Figma's AI tools

One tool, complete image control

Hey AI Enthusiast,

Figma just launched AI-powered image editing features.

Object removal. Object isolation. Image expansion.

The design tool is catching up to Adobe and Canva, which have had these features for years.

What's different? Figma's keeping everything in one place. No more exporting images to other tools, editing, and importing back.

New lasso tool: select an object, then remove it or isolate it to move it around. When you move the object, the background and colors stay intact. You can also adjust lighting, shadow, color, or focus.

Image expansion: useful when adapting creatives for different formats. Creating a web banner from a 1×1 image? The AI fills in the background and details. Saves you from constantly cropping and adjusting.

Figma is also consolidating all image-editing tools into one toolbar. Select objects, change background color, add annotations or text. Background removal gets a prominent spot; it's one of the most common actions on the platform.

Available now on Figma Design and Draw. Rolling out to other Figma tools next year.

This launch happened the same day Adobe made similar features available within ChatGPT.

But the real Tips & Tricks Thursday lesson isn't about image editing tools.

It's about the AI capability most people are completely ignoring.

But first, today's prompt (then how to stop limiting yourself to text...)

🔥 Prompt of the Day 🔥

AI-Powered Comment Response System

Act as a social media community manager. Using ChatGPT, create a comprehensive comment-response automation system that maintains authentic engagement while scaling to [VOLUME] daily interactions.

Essential Details:

  • Primary Platform: [INSTAGRAM/FACEBOOK/YOUTUBE/LINKEDIN/TIKTOK]

  • Comment Volume: [DAILY/WEEKLY COUNT]

  • Response Rate Goal: [TARGET % + TIMEFRAME]

  • Brand Voice Profile: [TONE/PERSONALITY/VALUES/FORBIDDEN PHRASES]

  • Audience Demographics: [WHO'S COMMENTING]

  • Escalation Triggers: [CRISIS KEYWORDS/SENTIMENT THRESHOLDS]

  • Response Time SLA: [MINUTES/HOURS BY COMMENT TYPE]

  • Language Requirements: [MULTILINGUAL NEEDS IF APPLICABLE]

Create a response system that includes:

  1. Comment classification framework (question/praise/complaint/spam/urgent/opportunity)

  2. ChatGPT response generation prompts (20+ variations by comment type and sentiment)

  3. Personalization requirements (name usage, context references, conversation history)

  4. Brand voice consistency checklist (tone verification, approved phrases, prohibited language)

  5. Emoji and GIF usage guidelines (platform-specific, sentiment-appropriate)

  6. Length optimization by platform (character limits, readability standards)

  7. Sentiment detection triggers (positive/neutral/negative thresholds)

  8. Human review workflow (what requires approval, escalation paths, quality scoring)

  9. Response speed tiers (VIP/urgent/standard/low-priority)

  10. Performance metrics tracking (response time, satisfaction indicators, engagement rate)

Engage authentically at scale while maintaining brand integrity and crisis prevention.

✅ Tips & Tricks Thursday ✅

Stop Limiting Yourself to Text-Only AI Prompts

Most people use ChatGPT and Claude like typewriters.

They type text. Get text back. Never explore anything else.

That's leaving massive capability on the table.

ChatGPT now processes images, audio, and documents together; Claude handles images and documents. Multimodal inputs unlock possibilities text alone can't touch.

But almost nobody uses them.

What Multimodal Actually Means

Multimodal AI means you can combine different input types in one conversation:

  • Images + text

  • Audio + text

  • PDFs + questions

  • Screenshots + analysis requests

  • Photos + descriptions

The AI processes all of it together and gives you better, more contextual responses.

Why This Matters

Text-only prompts force you to describe everything.

"I have a competitor ad with a blue background, large headline saying 'Save 50%', and a product image in the bottom right..."

Multimodal lets you skip that: Upload the screenshot. Ask "How can I improve this ad?"

Faster. More accurate. Better results.

How to Actually Use Multimodal AI

Upload Screenshots of Competitor Ads and Ask for Improvement Ideas

Stop describing ads in text. Upload the screenshot.

"Analyze this competitor ad. What's working? What's weak? Give me 5 ways to make a better version."

The AI sees the layout, colors, copy, imagery. It gives specific feedback you couldn't get from a text description.

Works for:

  • Social media ads

  • Landing pages

  • Email designs

  • Print materials
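If you'd rather do this through the API than the chat UI, here's a minimal sketch of the screenshot-plus-question pattern using the OpenAI Python SDK. The model choice and file name are placeholder assumptions:

    # Minimal sketch: screenshot + question via the OpenAI Python SDK.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
    # "gpt-4o" and the file name are placeholder assumptions.
    import base64
    from openai import OpenAI

    client = OpenAI()

    # Encode the local screenshot as a base64 data URL.
    with open("competitor_ad.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Analyze this competitor ad. What's working? What's weak? "
                         "Give me 5 ways to make a better version."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)

Same prompt, same image, no retyping the layout into words.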

Feed in Product Photos and Generate Multiple Marketing Angles

Upload a product photo. No description needed.

"Generate 10 different marketing angles for this product. Include target audiences, pain points, and headline ideas for each angle."

The AI analyzes the product visually and creates angles based on what it sees.

You can also ask: "Create ad copy variations for different platforms using this image."

Share Voice Memos Instead of Typing Long Context

Have a complex project to explain? Record a voice memo.

Upload it to ChatGPT. Add: "Transcribe this and create a project plan with tasks, timeline, and deliverables."

Talking is faster than typing. Voice memos capture nuance, tone, and detail you'd skip in text.

Use this for:

  • Project briefs

  • Meeting recaps

  • Content ideas

  • Strategy sessions
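If you're scripting this instead of using the chat UI, a rough sketch of the memo-to-plan pipeline with the OpenAI Python SDK (the file name and model choices are placeholder assumptions):

    # Sketch: voice memo -> transcript -> project plan.
    # Assumes OPENAI_API_KEY is set; file name and models are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # 1) Transcribe the voice memo with Whisper.
    with open("project_memo.m4a", "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2) Turn the transcript into a structured project plan.
    plan = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Create a project plan with tasks, timeline, and deliverables "
                       "from this memo:\n\n" + transcript.text,
        }],
    )
    print(plan.choices[0].message.content)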

Combine PDF Reports with Questions for Instant Analysis

Upload a PDF. Ask specific questions.

"This is our Q4 sales report. What are the top 3 insights? Which products are underperforming? What should we focus on in Q1?"

The AI reads the entire document and answers based on the data inside.

No more manually scanning pages. No summarizing in text. Just upload and ask.

Works for:

  • Financial reports

  • Market research

  • Contracts

  • Technical documentation
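For Claude users working programmatically, a minimal sketch with the Anthropic Python SDK, which accepts PDFs as a document content block (the file name and model are placeholder assumptions):

    # Sketch: PDF + questions via the Anthropic Python SDK.
    # Assumes `pip install anthropic` and ANTHROPIC_API_KEY set;
    # the file name and model are placeholder assumptions.
    import base64
    import anthropic

    client = anthropic.Anthropic()

    with open("q4_sales_report.pdf", "rb") as f:
        pdf_b64 = base64.b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                # The PDF rides along as a document block...
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": pdf_b64}},
                # ...and the questions go in a plain text block.
                {"type": "text",
                 "text": "This is our Q4 sales report. What are the top 3 insights? "
                         "Which products are underperforming? "
                         "What should we focus on in Q1?"},
            ],
        }],
    )
    print(message.content[0].text)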

The Patterns That Work

Pattern 1: Visual + Text

Upload image → Ask specific question → Get contextual answer

Example: Screenshot of your website → "What's the biggest UX problem here?"

Pattern 2: Document + Analysis

Upload PDF → Ask for insights or summaries → Get structured analysis

Example: Upload competitor report → "What are their main strategies?"

Pattern 3: Audio + Task

Upload voice memo → Request specific output → Get deliverable

Example: Record project idea → "Turn this into a one-page project brief"

Pattern 4: Multiple Inputs

Upload several images → Ask for comparison or synthesis → Get combined insights

Example: Upload 5 competitor ads → "What patterns do you see? What's missing from these approaches?"
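Pattern 4 is just Pattern 1 with more image parts in the same message. A sketch with the OpenAI Python SDK (file names and model are placeholder assumptions):

    # Sketch of Pattern 4: several images in one request for comparison.
    # Assumes OPENAI_API_KEY is set; paths and model are placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def image_part(path: str) -> dict:
        """Encode a local image as a data-URL content part."""
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("utf-8")
        return {"type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"}}

    ads = [f"competitor_ad_{i}.png" for i in range(1, 6)]  # 5 competitor ads

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [{"type": "text",
                         "text": "What patterns do you see across these 5 ads? "
                                 "What's missing from these approaches?"}]
                       + [image_part(p) for p in ads],
        }],
    )
    print(response.choices[0].message.content)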

What Most People Get Wrong

They treat multimodal as a novelty. "Oh cool, it can see images."

Then they go back to text-only.

Multimodal should be your default. Not your exception.

Every time you start typing a long description of something visual, stop. Upload it instead.

Every time you're about to summarize a document, stop. Upload the PDF.

Every time you're explaining complex context, stop. Record a voice memo.

The Compound Effect

Using multimodal inputs doesn't just save time.

It improves output quality.

The AI has more context. Better understanding. More accurate analysis.

Text descriptions are lossy. You forget details. You describe things incorrectly. You miss nuance.

Visual inputs are precise. Audio captures tone. Documents provide exact data.

Better inputs = better outputs.

Why This Matters Now

Most people are still stuck in text-only mode.

That's your advantage.

While they're typing paragraphs describing their competitor's landing page, you upload a screenshot and get analysis in 10 seconds.

While they're summarizing a 50-page report, you upload the PDF and ask direct questions.

While they're typing out project context, you record a 2-minute voice memo.

Multimodal thinking beats text-only prompting by miles.

Start using it.

Did You Know?

Researchers found that house cats manipulate their owners by mixing their purrs with frequencies that match crying babies, triggering an involuntary human nurturing response.

🗞️ Breaking AI News 🗞️

The Full Story on Figma's New Features

Figma launched AI-powered image editing today.

The company is catching up. Adobe and Canva have had object removal and similar features for years.

What took so long? Figma focused on being a design collaboration tool, not an image editor. But users kept exporting images to Photoshop or Canva for edits, then importing back.

That workflow is broken. Figma is fixing it.

Object Removal and Isolation

Improved lasso tool: select any object in an image.

You can:

  • Remove it (AI fills in the background)

  • Isolate it (move it around independently)

  • Adjust it (lighting, shadow, color, focus)

When you move an object, the image retains background characteristics and colors. The AI understands context.

Use case: You have a product photo with a cluttered background. Select the product, isolate it, adjust the lighting, and move it to a cleaner section of the canvas.

Image Expansion

This feature fills in backgrounds and details when you need to adapt an image for different formats.

Common scenario: You have a 1×1 Instagram image. You need a web banner (wider aspect ratio).

Old way: Crop the image, lose content, manually adjust elements.

New way: Use image expansion. AI generates the additional background content to fill the new format.

Saves constant cropping and element adjustment.

Unified Toolbar

Figma is putting all image-editing tools in one place.

The new toolbar includes:

  • Object selection

  • Background color changes

  • Annotations

  • Text additions

  • Background removal (gets prominent placement; it's one of the most common actions)

Everything accessible without switching tools or panels.

Availability

Live now on Figma Design and Draw.

Rolling out to other Figma tools in 2026.

The Adobe Connection

This launched the same day Adobe made similar features available within ChatGPT.

Figma was a launch partner for ChatGPT's app integration in October. It's unclear if these new AI features will be available to users accessing Figma through ChatGPT.

Why Figma Took So Long

Adobe and Canva shipped AI image editing years ago.

Figma's focus was different: real-time collaboration, design systems, prototyping.

But the market moved. Users expect AI-powered editing in every design tool.

Figma had to add it or risk losing users to competitors.

Now they have feature parity. The question is whether it's enough.

What This Means for Designers

If you're a Figma user, you can now:

  • Edit images without leaving Figma

  • Skip the export-edit-import workflow

  • Adapt creatives for different formats faster

  • Remove backgrounds and objects with AI assistance

For teams, this means fewer tool switches, faster workflows, and less friction in the design process.

For Figma, this is table stakes. They're not innovating. They're catching up.

Over to You...

Between Figma, Adobe, and Canva, which AI image-editing features actually work best for you?

Hit reply and let me know.

To faster design workflows,

NEW: “AI Money Group” to Learn to Make Money with AI

» NEW: Join the AI Money Group «
💰 AI Money Blueprint: Your First $1K with AI - Learn the 7 proven ways to make money with AI right now

🚀 Zero to Product Masterclass - Watch us build a sellable AI product LIVE, then do it yourself

📞 Monthly Group Calls - Live training, Q&A, and strategy sessions with Jeff
