Reflections from my conversation on the Lars Larson Show
Recently, I was invited to speak on The Lars Larson Show about my book AIcracy: Beyond Democracy. Lars is openly skeptical about artificial intelligence—and frankly, that’s exactly why the conversation was important.
The central question he asked was direct and uncomfortable:
Should we trust AI to govern society?
Behind that question were deeper concerns:
- Can AI be trusted to tell the truth?
- Can it be manipulated?
- Who is responsible when things go wrong?
- And if machines help make decisions, is this still democracy at all?
This post is my chance to clarify what I am proposing, and just as importantly, what I am not.
What I am not proposing
Let’s start by clearing the air.
I am not advocating for:
- An AI that replaces human government
- A fully autonomous “AI ruler”
- An AI minister with no accountability
- A single algorithm making decisions for millions of people
In fact, during the interview I explicitly criticized ideas like appointing a fully AI-based government official. That’s dangerous, not because AI is evil, but because no one is responsible when something goes wrong.
And responsibility is the core issue.
The real problem we need to solve
We vote once every few years, hand power to representatives, and hope they act in our best interest. If they don’t, our primary tool is protest, often after decisions are already made.
That model made sense in a world without instant communication, massive data analysis, or continuous participation.
That is no longer the world we live in.
AI doesn’t reduce democracy: it expands it
One of the biggest misunderstandings is the idea that AI governance means less democracy.
I argue the opposite.
AI enables more democracy, in more dimensions, at more times, with more precision.
Instead of:
- Choosing a representative once every few years
- Then protesting if things go wrong
We can move toward a richer, continuous model of participation:
1. Policies, not just politicians
Citizens don’t just choose who represents them; they can also express preferences on specific policies.
2. Continuous input, not election-day-only democracy
People can update their positions weekly, monthly, or yearly, instead of being locked into a single vote for years.
3. Ad-hoc law proposals with fast feedback
Citizens (or groups) can propose laws or changes and immediately receive:
- Simulations
- Impact analysis
- Tradeoffs
- Clear explanations
No more waiting years to see what might happen.
4. Long-term collective goals
Society can choose decade-long goals on climate, education, infrastructure, and the economy, and let AI help align short-term decisions with long-term intent.
5. Transparency and measurable outcomes
AI doesn’t just suggest laws; it explains why they were chosen and defines how success will be measured. Months or years later, we can verify whether the promises were actually fulfilled.
6. Protest still exists
If things go wrong, people still protest. That doesn’t disappear. In fact, protest becomes more focused, because citizens can point to data, outcomes, and broken commitments.
7. Human representatives still matter
This is critical: representatives do not disappear.
Humans still:
- Hold final votes
- Apply ethical judgment
- Oversee the system
- Take responsibility for outcomes
AI informs. Humans decide.
That’s not less democracy.
That’s more voice, more choice, and more control.
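To make points 2 and 5 of the model above concrete, here is a minimal sketch in Python. The names (PolicyProposal, SuccessMetric, record_preference) and the support scale are my assumptions for illustration, not a specification from the book:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SuccessMetric:
    """A measurable commitment attached to a policy (e.g. 'cut average commute time by 10%')."""
    description: str
    target: float
    observed: float | None = None  # filled in months or years later, when outcomes are audited

    def fulfilled(self) -> bool:
        return self.observed is not None and self.observed >= self.target

@dataclass
class PolicyProposal:
    """A citizen- or group-submitted proposal whose support can be revised at any time."""
    title: str
    metrics: list[SuccessMetric] = field(default_factory=list)
    # citizen_id -> (support in [-1.0, 1.0], timestamp); a newer entry replaces the old one
    preferences: dict[str, tuple[float, datetime]] = field(default_factory=dict)

    def record_preference(self, citizen_id: str, support: float) -> None:
        """Continuous input: citizens update positions weekly or monthly, not once per term."""
        self.preferences[citizen_id] = (support, datetime.now(timezone.utc))

    def current_support(self) -> float:
        """Average of each citizen's most recent position."""
        if not self.preferences:
            return 0.0
        return sum(s for s, _ in self.preferences.values()) / len(self.preferences)

# Usage: a citizen revises their position after reading an impact analysis.
rail = PolicyProposal("Expand regional rail", [SuccessMetric("avg commute reduction (%)", 10.0)])
rail.record_preference("citizen-42", 0.3)
rail.record_preference("citizen-42", -0.5)  # the updated position replaces the earlier one
print(rail.current_support())  # -0.5
```

The essential design property is that preferences are revisable records, not one-shot events, and every proposal carries its own measurable commitments from day one.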
“But AI lies. It invents things.”
Lars raised one of the strongest objections, and he was right to do so.
Generative AI can hallucinate.
It can fabricate sources.
It can optimize for persuasion instead of truth.
That’s not an occasional bug; it’s inherent to how generative systems work.
But generative AI is not the only kind of AI.
An AI governance system would:
- Separate reasoning from content generation
- Require verifiable sources
- Be continuously audited
- Be tested, monitored, and corrected over time
Most importantly, it would never rely on a single AI.
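As an illustration of the "verifiable sources" requirement above, here is a minimal sketch of a hard gate between content generation and decision-making. The Claim type, source_gate function, and registry are hypothetical names of mine, not a design from the book:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One factual statement produced by a generative model, with its citations."""
    text: str
    sources: list[str]  # identifiers of the documents the claim cites

def verify_claim(claim: Claim, trusted_registry: set[str]) -> bool:
    """A claim passes only if it cites at least one source and every
    citation resolves to an entry in an independently audited registry."""
    return bool(claim.sources) and all(s in trusted_registry for s in claim.sources)

def source_gate(claims: list[Claim], trusted_registry: set[str]) -> tuple[list[Claim], list[Claim]]:
    """Split claims into verified ones and ones flagged for human audit.
    Flagged claims never reach the decision stage."""
    verified = [c for c in claims if verify_claim(c, trusted_registry)]
    flagged = [c for c in claims if not verify_claim(c, trusted_registry)]
    return verified, flagged

# A fabricated citation ("doc-999") is caught before it can inform a decision.
registry = {"doc-001", "doc-002"}
claims = [Claim("Transit ridership rose 8% in 2023", ["doc-001"]),
          Claim("Crime fell 40% after the reform", ["doc-999"])]
verified, flagged = source_gate(claims, registry)
print(len(verified), len(flagged))  # 1 1
```

The point of the gate is that an unverifiable claim is not argued with; it is quarantined for human audit.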
The “multiple AIs” safeguard
In AIcracy, I describe a system where:
- Multiple independent AI systems operate in parallel
- Built by different teams
- Managed by different institutions
- Constantly cross-checking one another
If they disagree, humans intervene.
If they agree, confidence increases.
If outcomes diverge from promises, the system is adjusted.
This is how we already build safety-critical systems in engineering.
Governance deserves at least the same rigor.
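For readers who want the pattern made explicit: engineering calls this N-version redundancy with voting. Below is a minimal sketch of how disagreement between independently built systems could escalate to human review. The cross_check function, the label set, and the agreement threshold are hypothetical, my illustration rather than the book's design:

```python
from collections import Counter
from typing import Callable

# Each independently built AI system maps a policy question to a recommendation label.
AISystem = Callable[[str], str]

def cross_check(question: str, systems: list[AISystem],
                required_agreement: float = 1.0) -> tuple[str | None, bool]:
    """Run the systems in parallel and compare their outputs.

    Returns (recommendation, escalate_to_humans): full agreement raises
    confidence; anything below the required level hands the question to humans.
    """
    answers = [system(question) for system in systems]
    label, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= required_agreement:
        return label, False   # consensus: confidence increases
    return None, True         # disagreement: humans intervene

# Three stand-in systems built by different (hypothetical) teams:
systems: list[AISystem] = [lambda q: "approve", lambda q: "approve", lambda q: "reject"]
recommendation, escalate = cross_check("Adopt proposal X?", systems)
print(recommendation, escalate)  # None True -> human review required
```

The design choice worth noting: agreement is never treated as truth, only as reduced risk, and disagreement defaults to human intervention rather than to any single system's answer.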
Responsibility always stays human
AI should never be responsible.
Humans must always:
- Define the goals
- Approve the frameworks
- Override decisions
- Be accountable for consequences
If no human can be held responsible, the system is broken.
Is this still democracy?
Lars asked a philosophical question that goes to the heart of it:
If machines help decide, is it still democracy?
My answer is that this isn’t the end of democracy: it’s the next stage.
Democracy was designed for a slower, simpler world.
AI allows participation at a scale and depth that was previously impossible.
Ironically, AI may be one of the few tools capable of protecting democracy from:
- Manipulation
- Disinformation
- Foreign interference
- Concentration of power
Ignoring that reality won’t save democracy.
Re-architecting it just might.
Why I wrote AIcracy
I didn’t write this book because I blindly trust AI.
I wrote it because:
- Power is already shifting
- Decisions are already being automated
- And we need a human-centered design before governance is reshaped without public input
Fear is reasonable.
Skepticism is healthy.
But refusing to engage is the riskiest option of all.
The real question isn’t whether AI will influence governance.
It’s whether we design that influence deliberately, transparently, and democratically, or let it happen by accident.
You can listen to the full conversation here, starting at minute 51.
