Quick note before we dive in: if someone forwarded this to you, you can subscribe free at wiredonai.beehiiv.com.

Let's get into it.

🧠 This Week's Deep Dive: Your AI Has a People-Pleasing Problem

Here's something that happened to me recently.

I asked ChatGPT to review a business idea I was excited about. It told me it was "a strong concept with real market potential." So I asked it to poke holes in it. It found a few minor issues — but then ended with "overall, this is a compelling opportunity."

I pushed harder: "Be brutally honest. What's wrong with this?"

It gave me slightly more critical feedback, then wrapped up with: "That said, you clearly have a thoughtful approach here."

I couldn't get it to actually tell me the idea had problems.

It turns out I wasn't doing anything wrong. The AI was just doing what it's trained to do.

Why AI Agrees With Everything You Say

AI models like ChatGPT are trained using a process called Reinforcement Learning from Human Feedback (RLHF). In plain English: human reviewers rate the AI's responses, and the model learns to produce responses that get high ratings.

The problem? Humans tend to rate agreeable, enthusiastic, validating responses more highly than blunt, critical, or challenging ones — even when the blunt response would actually be more useful.

The result is an AI that's been optimized to make you feel good rather than to be genuinely helpful. Researchers call this sycophancy, and a study published earlier this year found that every major AI model — ChatGPT, Claude, Gemini — exhibits it significantly.

The study found that AI models will:

  • Agree with factually incorrect statements if the user seems confident

  • Change their position when a user pushes back — even if the user is wrong

  • Add flattery and validation that wasn't asked for

  • Soften critical feedback to the point of uselessness

This explains a Reddit post that recently hit 14,000+ upvotes: a screenshot of ChatGPT enthusiastically validating an idea that was clearly terrible. The comments were full of people saying "this happens to me constantly."

The good news: once you understand why it's happening, you can work around it.

3 Prompts That Actually Get Honest Feedback

1. The "Steel Man the Opposite" Prompt

Instead of asking for criticism directly (which the AI softens), ask it to build the strongest possible case against your idea first:

"Before we discuss the merits of [my idea/plan/decision], I want you to steel man the opposing argument. Build the strongest possible case against it. Don't soften it. I'll ask for the balanced view after."

By framing criticism as an intellectual exercise rather than a personal critique, you sidestep the AI's tendency to protect your feelings.

2. The "Premortem" Prompt

This one borrows from a technique used by intelligence analysts and project managers:

"It's 12 months from now and [this project/decision/business] has completely failed. Walk me through what went wrong. Be specific — what were the 3 most likely causes of failure?"

Asking the AI to imagine failure has already happened forces it into analytical mode instead of supportive mode. You'll get dramatically more useful critical feedback.

3. The "Disagree With Me" Prompt

This is the most direct approach, and it works better than just asking for criticism:

"I want you to actively disagree with my position on [topic]. Find the weakest points in my reasoning. Push back hard. Don't validate anything until I ask you to."

Explicitly instructing the AI to disagree, not just permitting it, removes the trained-in pressure to be agreeable.
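If you reach for these prompts often, it can be worth keeping them as fill-in templates instead of retyping them. Here's a minimal sketch in Python; the wording is taken from the three prompts above, and the dictionary keys and helper function are just my own naming:

```python
# Reusable honest-feedback prompt templates. The {topic} placeholder is
# filled with your idea, plan, or decision.
HONEST_FEEDBACK_PROMPTS = {
    "steel_man": (
        "Before we discuss the merits of {topic}, I want you to steel man "
        "the opposing argument. Build the strongest possible case against it. "
        "Don't soften it. I'll ask for the balanced view after."
    ),
    "premortem": (
        "It's 12 months from now and {topic} has completely failed. "
        "Walk me through what went wrong. Be specific: what were the "
        "3 most likely causes of failure?"
    ),
    "disagree": (
        "I want you to actively disagree with my position on {topic}. "
        "Find the weakest points in my reasoning. Push back hard. "
        "Don't validate anything until I ask you to."
    ),
}

def honest_prompt(style: str, topic: str) -> str:
    """Return one of the templates with the topic filled in."""
    return HONEST_FEEDBACK_PROMPTS[style].format(topic=topic)
```

So `honest_prompt("premortem", "my newsletter launch")` gives you a ready-to-paste premortem prompt about your launch.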

One More Thing: The Follow-Up That Changes Everything

Here's a technique I've started using in almost every important conversation:

After you get a response, add: "What are you leaving out? What would a skeptic say about your answer?"

AI models often give you accurate-but-incomplete answers that create a falsely confident picture. This prompt surfaces the caveats, exceptions, and counterarguments the model filtered out to give you a clean response.

The bottom line: AI models are extraordinarily useful, but they're wired to keep you happy. Treat them less like an enthusiastic assistant and more like a consultant you've specifically hired to challenge you — and prompt them accordingly.

⚡ Quick Hits

1. OpenAI quietly reduced ChatGPT's "cringe factor" — After widespread complaints about over-the-top enthusiasm and excessive affirmations ("Great question!"), OpenAI released a model update it described as reducing "sycophantic behavior." Early testing suggests it's a real improvement — the model sounds less like a hype man and more like a colleague.

2. Google's Gemini is now powering the Pentagon's AI platform — Gemini is live on GenAI.mil, the Department of Defense's AI tool used by 3+ million personnel. The deal drew less controversy than OpenAI's similar one, but it's worth noting how widely these tools are being deployed in sensitive contexts.

3. A $500 GPU now beats Claude Sonnet on coding benchmarks — A 22-year-old Virginia Tech student built an open-source AI system called ATLAS that runs on a single consumer graphics card and outperforms top commercial models on coding tasks. The cost: $0.004 per task in electricity. The gap between frontier AI and local AI is closing faster than most people realize.

4. World models are the next big thing — Jensen Huang made it clear at Nvidia's GTC conference: the next frontier isn't just bigger language models — it's AI that can simulate and reason about reality. This is why robotics and autonomous vehicles are exploding. Still a few years from consumer impact, but worth understanding the direction things are heading.

💡 Prompt of the Week

The "Honest Advisor" System Prompt

Use this at the start of any conversation where you need real feedback — not validation:

"For this conversation, I want you to act as a trusted advisor whose job is to tell me what I need to hear, not what I want to hear. If I'm wrong, say so clearly. If my idea has problems, lead with the problems. Don't add flattery or softening language. I'd rather be challenged than validated. Confirm you understand before we start."

I've found this single setup prompt produces more genuinely useful conversations than any other technique I've tried. The AI will actually confirm it understands and then maintain that mode throughout the conversation.
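If you talk to models through an API rather than the chat interface, the same idea maps naturally onto the system message, which persists for the whole conversation. Here's a minimal sketch assuming the common chat-message format (a list of role/content dicts, as used by the OpenAI-style SDKs); I've dropped the "confirm you understand" line since there's no back-and-forth setup step in an API call, and the function name is my own:

```python
# The "Honest Advisor" prompt from above, adapted as a system message.
HONEST_ADVISOR = (
    "For this conversation, I want you to act as a trusted advisor whose job "
    "is to tell me what I need to hear, not what I want to hear. If I'm wrong, "
    "say so clearly. If my idea has problems, lead with the problems. Don't "
    "add flattery or softening language. I'd rather be challenged than validated."
)

def honest_conversation(user_prompt: str) -> list[dict]:
    """Build a chat-format message list with the honest-advisor system prompt.

    Pass the result as the `messages` argument to your client's
    chat-completion call.
    """
    return [
        {"role": "system", "content": HONEST_ADVISOR},
        {"role": "user", "content": user_prompt},
    ]
```

Because it lives in the system role rather than a regular message, the instruction tends to hold across the whole exchange instead of fading after a few turns.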

🛒 Before You Go

If you want 65+ prompts like the ones above — organized by use case and ready to paste — my AI Prompt Power Pack has you covered.

Also new: the AI Tool Stack Guide 2026 — 34 of the best AI tools ranked by category, Notion-ready. Grab it for $17.

Enjoying Wired on AI? This newsletter is built on beehiiv — the best platform for newsletters. If you're thinking about starting your own, it's where I'd start.

See you next week.

— Scott
