
Charting a Third Way for AI

by Tomicah Tillemann

Editor’s Note: This essay emerged from conversations at Helena’s 2025 Summit in Valle de Bravo, Mexico — a gathering of leaders from technology, policy, science, and the arts convened to collaborate on a subset of pressing societal challenges. It is not a summary of the Summit or a consensus statement; it reflects the author’s own exploration of ideas shaped by those discussions.

In 1864, as train tracks started spreading across Europe, King William I of Prussia predicted: “No one will pay good money to get from Berlin to Potsdam in one hour when he can ride his horse there in one day for free.”

In 1977, Ken Olsen, the president of Digital Equipment Corporation, a computer company, confidently said, “There is no reason anyone would want a computer in their home.”

In 1998, Paul Krugman, the MIT economist and New York Times columnist, boldly declared: “By 2005 or so, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s.”

Failed predictions like these offer a lighthearted chastening for those of us attempting to imagine and build the next generation of technology. It is easy to misjudge the trajectory of new innovations.

As the first stages of an AI-driven revolution begin reverberating across society, it is time to start asking: 1) What predictions are we making today that won’t age well? And 2) How are we failing to imagine the technology’s potential impact?

Technological revolutions rarely arrive on the trajectory we expect.

Unlike the examples above, the failure of imagination surrounding AI is not that we are discounting its revolutionary potential (the hype is real), but instead that we believe it can only go one of two ways.

To date, the public conversation around AI has been dominated by two camps.

On one hand, accelerationists see AI as an unadulterated boon for humankind, arguing that we must charge forward unencumbered by concern about potential risks.

On the other hand, AI doomers often suggest that human extinction is the almost inevitable outcome if the technology advances further.

Buying into either mindset, or believing that these are the only two options, will ensure that we look back on this moment with regret, having failed to imagine – and then build – a third way centered on human flourishing and agency.

I want to outline the basic architecture of this third way – and share how a coalition of technologists, investors, activists, and governments is already working to build it.

AI Accelerationism

 

The accelerationists’ failure of imagination is a failure to understand history. They believe that “this time is different.”

Their attitude is grounded in a belief that what AI can unlock – from breakthroughs in biology and medicine to autonomous vehicles and safer roads – is so important that we should not interrupt the technology’s relentless advance with regulations or externally mandated guardrails. 

In 2024, Sam Altman wrote about the expanding frontier of AI development, “With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now.”

Sam Altman, CEO of OpenAI, has become one of the most influential advocates for rapid AI development.

Even if accelerationists pay lip service to the need for some safeguards, that sentiment isn’t showing up in the vast resources they are deploying to shape the politics of AI. Leading the Future, a PAC that backs candidates opposed to the regulation of AI, has raised over $100M. While OpenAI hasn’t funded it directly, several of its leaders and investors are donors, including Greg Brockman, OpenAI’s co-founder and president.

This isn’t the first time technologists have couched their agenda in the gauzy sentiment that “everyone’s lives can be better than anyone’s life is now.” In its early days, social media was destined to democratize access to the public square.

A protester in Cairo’s Tahrir Square holds a sign referencing Facebook and Twitter, symbols of social media’s role in the 2011 Egyptian Revolution.

It had the potential to topple dictators and serve as a tool for democracy. But instead of people using the product to exercise their voice and agency, they became the product. And surveillance capitalism became the business model.

That model is built on making content as addictive as possible, using that addiction to harvest our personal data, and then using that data to manipulate behavior. It’s hard to argue that it has been an unqualified win for society.

Accelerationists also claim that if we don’t race ahead in developing more powerful AI systems, others will. However, the United States’ only credible competitor in the AI race, China, has already adopted one of the most detailed, rigorous regulatory frameworks for AI safety, and imposes severe punishments on technologists and companies that create unsafe systems. 

These inconvenient truths undercut the accelerationists’ argument. We’ve seen this movie before, and not everything will go right. It is delusional to suggest that letting the downside risks sort themselves out will yield the optimal outcome for humanity – especially when those risks include people being co-opted by a technology with far more manipulative potential than social media.

AI Doom

 

A different mistake comes from those who focus exclusively on the dangers of AI. They see the (very real) potential challenges ahead, but in many cases cross the line dividing helpful concern from despondent fatalism. 

Many AI opponents now speak of human extinction as a near inevitability. Eliezer Yudkowsky, a prominent AI researcher, gave voice to this sentiment in his vividly titled book, If Anyone Builds It, Everyone Dies. His colleague Nate Soares, an otherwise cheerful person, revealed recently that he no longer sets aside money for retirement. “I just don’t expect the world to be around,” he said.

Eliezer Yudkowsky, co-author of the book If Anyone Builds It, Everyone Dies.

It is reasonable to be fearful of misaligned superintelligence, but the current dialogue leaves little room for meaningful solutions that could redirect AI onto a better trajectory. Increasingly, AI opponents are treating the future of the technology as something that happens to us rather than something that, at least at this stage, is still shaped by us. This approach risks devaluing the critical role of human decisions, institutions, and incentives in shaping what emerges from this inflection point.

Claims that the final trajectory of AI is already set also miss a more immediate danger than extinction: extraction. Today, the value of AI systems accrues to the companies building them, even though those systems are trained on the accumulated knowledge, expression, and data of society as a whole. Creators, small businesses, and individual users risk being disintermediated from the very intelligence they helped produce.

The challenge, then, is not simply to prevent catastrophe. It is to reject the assumption that the concentration of power, loss of agency, or even existential risk are foregone conclusions. However grave the dangers, humans have not yet fully surrendered their ability to shape how this technology develops.

A Third Way

 

These two perspectives can’t be our only options. The choice between unrestricted acceleration or complete shutdown is a false dichotomy.

There is a third way to build AI responsibly and ambitiously—an approach that advances human flourishing and agency, protects personal data instead of harvesting it for profit, and creates an economy where voice and economic participation are shared.

The false binary of the AI debate obscures a broader spectrum of futures.

The moral imagination behind this vision is not mine alone. Project Liberty has been forging a pro-human coalition of like-minded policymakers, technologists, organizations, and researchers united by the belief that we can forge an alternative path forward.

What has emerged is an interconnected, three-part strategy: an alternative tech stack, an ambitious policy agenda, and a broad coalition of people and organizations. Together, they are anchored in the conviction that people deserve a voice, a choice, and a stake in the future of AI.

Tech that Centers Human Agency

 

Our interactions with AI are increasingly defined by agents. But consider what the word “agent” has always meant. In real estate, your agent works for you—bound by fiduciary duty to protect your interests. In sports or entertainment, your agent negotiates on your behalf. The job of an agent is to serve the principal.

AI agents are something different. They conduct tasks on your behalf, yes, but they ultimately work for Sam Altman, Elon Musk, or Mark Zuckerberg. Much of the value generated by AI agents—data, revenue, and network effects—accrues to their digital masters, not to you. 

An alternative AI stack would provide foundational infrastructure for a different type of agentic ecosystem built around core principles including:

  1. Privacy and user control over digital identity and data.
  2. Transparency and interpretability, with safer, open, explainable AI at its core.
  3. Clear boundaries between intelligence and data that prevent AI platforms from holding our private data hostage.
  4. Decentralization of foundational services so no single actor can exert coercive pressure.
  5. A fair exchange of value for both creators and consumers of data.

 

A coalition of policymakers, technologists, researchers, and institutions is imagining an AI that preserves human flourishing rather than diminishing it.

Project Liberty has highlighted the need for alternative AI tech to center on the “secret sauce” of the internet: interoperability. At every stage of the internet’s history, interoperability was a precondition for digital sovereignty. And AI is no different.

What would alternative technology look like in practice? To catch a glimpse of the future, we can look to the past. In the late 1990s and early 2000s, a digital architecture called the LAMP stack emerged from a combination of open source technologies that were foundational to the explosion of web development:

  • Linux (operating system)
  • Apache (web server)
  • MySQL (database)
  • PHP/Perl/Python (programming languages)

 

Together, these free, open source layers gave developers everything they needed to build and deploy web applications—no proprietary software required.

 

At every stage of the internet’s history, interoperability was a precondition for digital sovereignty. And AI is no different.

The LAMP stack kept the web from being owned by a single company, which was a very real danger early in its history. An alternative AI stack should be grounded in the same ideas: open layers with safety built in; interoperable digital infrastructure; and democratized access that provides a basis for true individual digital sovereignty.

 


Along with many other partners, Project Liberty is building this alternative AI stack. We’re creating technology that gives people genuine control over their data, facilitates interoperability between AI systems, and treats people as citizens with rights – not users to be monetized. 

Ambitious Policies Rooted in Agency

 

Technology alone won’t get us to an AI that enables greater human agency. But neither will regulation by itself. What’s needed is a smart strategy that aligns legal code with technical code: data sovereignty, transparency, and democratic accountability built into the system itself, so that business models share value rather than extract it.

The clearest example of what that policy looks like has emerged not from Washington, but from legislatures across the United States. With support from Project Liberty, Utah passed the Digital Choice Act in 2025, establishing landmark portability and interoperability requirements for social media. South Dakota just adopted the same law. Virginia’s version of the bill, which recently passed the State Senate 40-0, extends those same principles to AI. 

From Utah’s Digital Choice Act to similar bills now spreading, policymakers are translating “agency” into enforceable rights.

A Broad Coalition of Organizations

 

Public opinion research from Project Liberty Institute has shown widespread appetite for an alternative vision of human flourishing in the age of AI. People want control over their data, transparency in AI systems, and technology that enhances democratic institutions. The public will is there. What’s needed now is the coalition to act on it.

More than thirty major organizations from across the political spectrum and around the world are working together to build an AI strategy grounded in human agency and human flourishing. At the 2026 Munich Security Conference, leaders from this coalition convened to design the political and institutional architecture for this third way.

Members of the Project Liberty coalition attended the Munich Security Conference to design the political and institutional architecture for human-centered AI.

This coalition is not an anti-technology movement, nor is it united against any single country. It is a values-based coalition—of red states and blue states, of governments large and small—committed to the principle that we must align our technology stack, policy stack, and values stack for maximum efficacy. 

Next year, the Global AI Impact Summit will be held in Geneva, Switzerland. What happens there will help determine whether AI becomes a technology that extracts from people or one that expands what’s possible for them.

The Opportunity to Choose

 

King William couldn’t imagine trains reshaping Europe. Ken Olsen couldn’t see computers in every home. Paul Krugman couldn’t believe the internet would outpace the fax machine. Each failed because they projected current constraints onto future possibilities.

Today’s debate about AI repeats this error—but in reverse. Accelerationists project unlimited possibilities while ignoring the patterns that have governed every previous technology. Doomers project inevitable dystopia or extinction while ignoring how human agency has steered technology before.

Both miss what’s actually at stake. This is not so much a debate about technology as it is a debate about us.

If we deploy good AI systems with strong guardrails to protect against misuse, the benefits could be profound. Freed from the friction and drudgery of administrative life, we could invest more deeply in our relationships, our families, our communities, and the work that is most distinctly human. The third way is a future where technology, policy, and human values align.

This future almost certainly won’t happen if we continue down the technology’s current trajectory. But it isn’t hypothetical either. 


We can design AI systems that give us control over our data, access to more trusted agentic systems, and safer, interoperable solutions that make it far harder for bad actors to use tech as a tool of coercion. Or not.

The predictions we make about tomorrow depend on the engineering and policy decisions made today. We can accept the terms and conditions of the technology being built for us. Or we can build a new, better paradigm.

We get to choose. And that choice—more than any model release, PAC, or prediction—will shape the future of AI.

About the author:

Dr. Tomicah Tillemann is President of Project Liberty, where his work focuses on policy solutions that enable human agency and human flourishing in an AI-powered world. He previously led policy for Andreessen Horowitz and Haun Ventures, and served as Senior Advisor to two Secretaries of State. He joined the State Department in 2009 as Hillary Clinton’s speechwriter and spent four years on the Senate Foreign Relations Committee alongside Biden, Blinken, Obama, and Kerry. Tomicah also served as the Executive Director of the Digital Impact and Governance Initiative at New America, where he built and oversaw programs on asset allocation, technology, and democratic governance. He led teams that built and deployed technology solutions focused on strengthening democratic institutions and private sector accountability worldwide. A co-holder of four patents, Tomicah has served on advisory councils at the World Economic Forum and UN World Food Program. He holds a B.A. magna cum laude from Yale and a Ph.D. with distinction from Johns Hopkins SAIS, and is a life member of the Council on Foreign Relations.