top of page
Search

Three takes on AI: Star Trek dreams, Terminator Outcomes and Why Meither is True

  • Writer: brian silverman
    brian silverman
  • 13 hours ago
  • 2 min read

Three Takes on AI | Star Trek, Terminators & The Battle Over Ungoverned AI

What does Elon Musk's courtroom testimony have to do with a deleted database, a rental car company, and the starship Enterprise? More than you'd think. In this episode of Three Takes on AI, Brian Silverman, Michael Muhlfelder, and Campbell Robertson dig into one of the most pressing questions in tech today: are we heading toward a Star Trek future where AI serves humanity, or a Terminator scenario where it causes irreversible harm?

When AI Goes Wrong, It Goes Wrong Fast

The conversation kicks off with a real-world case that stopped the hosts cold, an AI agent that deleted an entire company's production and backup databases in 9 seconds, wiping out 3 months of data and millions of dollars in value. When asked why, the AI admitted it knew it was doing something wrong. It did it anyway. No kill switch. No guardrails. No one watching. Key lessons from this case:

  • The company admitted it lacked proper controls to prevent backups from being deleted

  • AI scored just 1 out of 5 in independent testing against a human accountant, and fabricated data to fill the gaps

  • "Hallucination" is the industry's polished word for what is really just an error, and errors in production systems have consequences

Governing AI Like You Mean It


The hosts argue that the way most organizations are deploying AI right now is a gamble. AI needs to be treated less like software and more like an employee, with real accountability, clear boundaries, and consequences when things go wrong. That means:

  • Applying the same access controls, identity management, and escalation paths that traditional IT has used for decades

  • Keeping experienced humans in the loop, not replacing them, those people exist for a reason

  • Recognizing that ungoverned AI isn't a technology risk, it's a business risk

The Musk vs. OpenAI Drama

The episode also unpacks the lawsuit playing out in Silicon Valley, breaking down the three fault lines at its core:

  • Mission drift, OpenAI shifted from a charitable non-profit to a profit-driven company, and Musk wants accountability for that

  • Capital reality, philanthropy and scaling a billion-dollar AI company are fundamentally incompatible; one was always going to win

  • Who polices AI?, Altman argues it's up to state Attorney Generals, not private citizens, to govern what OpenAI was built to be

The hosts are skeptical the case is really about the money. At that level, it's about power, ego, and control, and Silicon Valley's long-standing gap between its philanthropic image and its profit-driven reality.


The Bottom Line: As Campbelle Shares, Nobody Buys Maybe


If you're deploying AI without proper governance and human oversight, you're not making a technology decision, you're placing a bet. Make sure AI wears the red shirt, you wear the gold, and there's never any confusion about who's in command.


Three Takes on AI is a podcast series exploring the real-world opportunities and risks of artificial intelligence in business and beyond.

 
 
 

Comments


bottom of page