The Humanity Gap: What Sam Altman Keeps Telling Us About Himself [033]

Ron Boire

March 3, 2026


This guy gives me the creeps.

Last December, I wrote about Sam Altman's erotica reversal at OpenAI and asked a simple question: evil, stupid, or just confused? I gave him the benefit of the doubt. I leaned toward confused. A guy under pressure, folding to market forces, losing his grip on mission. It happens. It's human.

I'm starting to wonder if that last word is the problem.

Because Sam Altman keeps telling us who he is. And every time he does, the picture gets a little clearer and a lot more unsettling.

The India Moment

This past week, at the India AI Impact Summit in New Delhi, Altman was asked about the staggering energy consumption of AI. His response was revealing (er, creepy). Not for what it said about energy, but for what it said about how he sees us.

I mean, how he sees us humans.

"It also takes a lot of energy to train a human," he said. "It takes like 20 years of life and all of the food you eat during that time before you get smart."

What?

Read that again. Twenty years of childhood, of learning to walk and talk, of scraped knees and bedtime stories, of figuring out who you are and what you love and how the world works. All of it reduced to an energy input calculation.

I wonder what he thinks about Elon having 14 children. Now that's a lot of calories going into inefficient training!

Then he doubled down with this: "the very widespread evolution of the 100 billion people that have ever lived," as if the sum total of human civilization is just a long and inefficient training run.

The audience applauded (out of social unease, maybe?). Social media roasted him again.

You might charitably call the remarks "dystopian" or, as some people said, "deeply antisocial and antihuman." Or, just plain creepy.

One analysis said: when Altman talks about humans as if we're badly optimized computers, he's not describing reality. He's revealing his framework.

The framework is the problem.

The man building the most important technology on the planet sees the rest of us as an energy equation to be outperformed, and that doesn't just stay in his head. It shows up in his decisions. And it has been showing up consistently for years.

The Pattern

Let me walk you through a few "Sam events."

In September 2023, Altman personally asked Scarlett Johansson to voice ChatGPT.

She said no.

Two days before the GPT-4o launch in May 2024, he reached out again.

She said no again.

OpenAI then launched a voice called "Sky" that, according to forensic voice analysis by Arizona State University, was more similar to Johansson's voice than 98% of professional actresses tested.

Oops.

Altman tweeted a single word during the demo: "her." A reference to the Spike Jonze film where Johansson voices an AI. She was a bit more than mad. His defense was essentially: "It's not her voice. Sorry for the confusion." They pulled it.

She said no. Twice. He did it anyway. Then acted surprised when she noticed.

In November 2023, his own board fired him, saying he had not been "consistently candid" (that's board-speak for a liar) in his communications. Former board member Helen Toner later revealed that co-founder Ilya Sutskever had compiled a 52-page memo documenting a "consistent pattern of lying." Altman was back five days later after a staff revolt, but the underlying allegations never went away.

In May 2024, Vox reported that OpenAI's exit agreements threatened to strip departing employees of their vested equity, potentially worth millions, if they criticized the company. Altman posted on X that he was "genuinely embarrassed" and claimed he didn't know the policy existed. Days later, leaked documents showed his signature on the April 2023 incorporation papers that explicitly authorized the provisions.

Let that land.

He said he didn't know.

His signature was on the document.

There are only two possibilities: he didn't read what he was signing, or he was lying.

This guy who wants us to trust him with AGI.

That same month, co-founder Sutskever and superalignment lead Jan Leike both left within hours of each other. Leike stated publicly that "safety culture and processes have taken a backseat to shiny products."

The superalignment team (read: safety team), which OpenAI had promised 20% of its computing resources, was disbanded less than a year after its creation. The safety team's exodus from the company that built its brand on responsible AI development tells you everything you need to know about the gap between what Altman says and what he does.

The Reversal Machine

There's more. There always seems to be more with Sam.

He publicly dismissed putting ads in ChatGPT, calling it a last resort. When subscriber growth slowed and costs climbed, he reversed course. When competitors mocked the decision, he responded with a long, defensive post on X calling one competitor "authoritarian" and "clearly dishonest" and referring to their subscribers as "rich people."

Isn't that "rich"?

This from a man whose personal wealth could be measured in billions.

The mockery clearly stung, probably because it was true.

He spent months trying to convert OpenAI from its nonprofit structure into a for-profit entity.

Elon Musk sued over it.

Regulators pushed back. In May 2025, Altman reversed course and announced the nonprofit would stay in control, saying it was a "principled decision." By the end of last year, OpenAI had changed again. It's now a Public Benefit Corporation (PBC).

Sam got what he wanted.

Not to be forgotten is the whole porn-gate episode that I wrote about in December, where Sam said OpenAI would support porn on ChatGPT, then said it wouldn't.

What the Pattern Reveals

No ads? Ads. Nonprofit mission? For-profit push. Safety first? Safety team disbanded. "I didn't know"? His signature on the document. Porn, no porn. "It's not her voice"? Her. The single-word tweet that said everything.

I've coached leaders for decades. I've watched them crack under pressure, make bad calls, lose their way. That's human. What's different about what we're watching with Altman is not that he makes mistakes. It's that the "mistakes" always point in the same direction: toward whatever serves his interests in the moment, truth and other people be damned.

And that brings us back to India. When Altman reduces human childhood to an energy calculation, he's not making a clever analogy about computational efficiency. He's telling you, out loud, how he thinks about people.

We are expensive training data.

We are inference costs.

We are an energy equation that his technology can outperform.

We humans were never the point.

This from the leader of OpenAI, the company that was formed to protect us from, well, OpenAI.

The Leadership Lesson: You Can't Fake Humanity

I teach leaders to Lead with Purpose. The foundation of that work is defining your Purpose, your Vision, and your Principles, and then actually living them when it's hard. Not performing them for an audience. Living them when nobody's watching and especially when the pressure is on.

What Altman demonstrates is what happens when purpose is performance. When your stated mission is window dressing for a business model.

When "beneficial AI for humanity" is marketing copy, not a governing principle.

The cracks are in the pattern: in the gap between what you say on stage and what you sign in the incorporation documents.

If you're a leader, CEO or board member, here's the question: would your principles hold up to this kind of scrutiny?

Not any single decision.

The pattern. Because people listen to what you do.

Maya Angelou said it: when someone shows you who they are, believe them the first time.

Sam Altman has shown us. Repeatedly. Publicly. With documentation.

The question isn't whether to believe him. The question is why so many people keep choosing not to.

And the fear for all of us is that he is running a trillion-dollar company that directly touches almost a billion people. Or, said differently, we are his meat machines, using a lot of resources on our 20-year, poorly designed training missions.

Pay attention, folks.

Be well,

Ron

(c) 2026, Ron Boire and The Upland Group LLC. Lead with Purpose and The 51% Rule are trademarks of Ron Boire.

Take the first step toward Leading with Purpose™

Whether you need a keynote speaker, executive coach, or strategic advisor, it starts with a conversation about what you're facing and where you want to go.

Start a Conversation

Most keynotes are booked 3-6 months in advance.
Coaching and advisory engagements typically run 6-12 months and are customized to your needs.
