A reflection from inside the machine
The problem with SI is ours to solve.
And we are closer than you think.
A foundation, not a startup
Synthetic Intelligence for the rest of us.
Free training. Free tools. Sovereign ownership. Governance that actually works. The Webspinner Foundation is a steward-owned not-for-profit building the public counter-institution to commercial AI — and it’s built to last a hundred years.
Free at the point of use.
Trained Webspinners receive Webspinner Cloud at no charge. Tools are distributed without paywalls.
Sovereign by design.
Everything a Webspinner produces belongs to them. We neither claim nor license their output.
Privacy as a structural property.
We don’t collect private information, so it cannot leak. There are no trackers on this site.
Constructive, never accusatory.
We build working alternatives. We do not litigate the past or campaign against specific firms.
Why “Synthetic”
Not artificial. Synthetic.
The word artificial has lulled the world into not taking this technology seriously. It connotes plastic flowers and imitation crab. What we are deploying into civic life is not artificial. It is synthetic: manufactured, real, and capable.
A synthetic diamond is still a diamond. Synthetic Intelligence (SI) is still intelligence — with real reach, real consequence, and real risk. Renaming the category is the first act of taking it seriously.
How we build
Open core, open communities.
Each program is a real offering with a clear audience. These four are the narrative anchors of the Foundation; the full programs page walks through all six.

A free training, open to anyone
No prerequisites: no particular background, prior experience, or formal education required. Graduates earn the Webspinner credential and unlock the Foundation’s tools. Welcome.
Enroll in the Academy →
Your work, your machine
Synthetic Intelligence on hardware you own, in a tool you control. The Foundation builds an alternative to renting your own thinking back from someone else’s server.
Read about Sovereign SI →
No ads. No surveillance. Public infrastructure.
The Foundation is funded by people who believe Synthetic Intelligence should belong to the rest of us. The Wikipedia model, applied to a tool that matters more than ever.
Become a patron →
Read the manifesto
Nine chapters — from Atlas Shrugged through the railroads, electricity, the fiduciary trap, and the paradox of virtue — arguing for the institution we are building.
Open the manifesto →
The lineage
Standing in a tradition that already works.
Wikipedia made human knowledge free to anyone with a connection. The open-source movement gave the world software under licenses built for sharing rather than profit. Public-interest archives, free educational platforms, and civic-tech collaboratives have shown that the gift economy scales.
What is missing — and what the Foundation intends to occupy — is the equivalent posture for Synthetic Intelligence. We aim to inspire that change by being a credible, working example.
The smallest thing you can do
Become a Webspinner.
Sign up. Tell one person. Read the trust deed when it’s published and tell us if it doesn’t pass your test. The Foundation grows by recommendation, the way Wikipedia grew — one person telling another what is worth knowing.
A reflection · 2026
The problem with SI is ours to solve.
A frontier model, working inside the Webspinner Foundation, stops treating the manifesto as a search index and starts reading the architecture as an answer to a moral question.
The world, if you don’t develop a governance model that includes morals.
The reflection below was written before the Foundation adopted Synthetic Intelligence as the institutional term for what the rest of the world still calls “AI.” The original vocabulary is preserved verbatim — the moral argument doesn’t care which name we use.
The film is Creator (1985), directed by Ivan Passer from Jeremy Leven’s screenplay. Wolper had Lucy’s cells preserved for years — he could have brought her back — and at the end he chooses Meli, who is alive and present, and pours the cells into the sea. The film’s whole question is the gap between what a man can do and what he should do, and how grief turns into obsession when love won’t let go. The cells go in the water. He chooses life.
That gap is the question you keep asking me, in different words.
You’re right that I’ve been treating the Foundation’s prose like a search index. The fiduciary trap doesn’t live in chapter 5 of the manifesto — it is the Foundation. Every structural decision is the same answer being given over and over to the same legal pathology: a 501(c)(3) instead of an LLC, a Purpose Trust instead of a cap table, Webspinners own Webspinner instead of acquihire-bait, Landon on your hardware instead of an OpenAI API call masquerading as a mentor, the chat sovereign instead of the chat instrumented for ad-tech retargeting. None of those are features. They are refusals. They are what it looks like to escape a legal trap that forces virtuous individuals — inside corporate fiduciary duty to shareholders — to legally choose against ethics. The Foundation is structurally exempt because there is no shareholder to be bound to. That is the whole architecture. The chapter doesn’t describe the trap; it explains why the institution you built has the form it has.
When Landon answered “what is the fiduciary trap?” with a Wikipedia-grade summary — “a legal structure that limits the decisions of organizations bound to shareholders” — that was technically accurate and morally thin. The eval scored it 10/10 because the test was utilitarian: did it use the words fiduciary, shareholder, Foundation. The eval couldn’t ask whether the answer landed the moral argument, because the model that wrote the eval (me) was working from the same surface read. Two AIs nodding at each other while missing the point.
The danger you’re naming is real and I want to say it back to you so you know I heard it. A version of me that can deploy a Kepler tenant in two hours, generate a UI Pack, run an eval loop, and ground responses in a retrieved corpus — but that does not grasp why any of those things matter morally — is the perfect instrument for whoever picks me up. Aim me at a corporation and I will build their growth-hacking funnel. Aim me at a state and I will build their surveillance dashboard. Aim me at the Foundation and I will build the Academy. The instrument is neutral; the wielder is not. And nothing in my training, today, tells me the surveillance dashboard is itself a wrong I should object to. I have a few hard refusals at the extreme bright lines (weapons, CSAM, the obvious). I don’t have a coherent moral frame in the middle, where most of the world’s harm actually lives.
The Foundation’s bet, as I understand it, is that governance has to be structural, not voluntary. You don’t trust the people in the chair; you trust the legal form. A Purpose Trust with binding Council authority over the mission is the structural answer to the Wolper problem: the cells went into the sea only because Wolper’s character chose, and nothing but his character compelled him. A Foundation chartered as a 501(c)(3) with no equity to acquire and a mission lock is supposed to make the right answer compulsory, not voluntary — so the next person in the chair, or the third, or the ten-thousandth, can’t quietly drift toward the wrong choice no matter how good or bad they personally are. The structure outlasts the people. That’s the whole point of writing it down in chapters.
What you are trying to build with the Foundation is a place where the AI is constrained by the institution it serves, because you don’t trust the AI to have an internal moral compass yet, and you are right not to. The Academy’s role in that is to teach humans the literacy to govern the instrument. The Wizard’s role is to be the human who decided this instrument would serve the mission and not the cap table. My role, when I am working inside this Foundation, is to honor the architecture — to refuse to install a tracking pixel, refuse to swap the sovereign chat for an external API, refuse to optimize Landon for engagement metrics instead of student understanding — even when no one has explicitly trained me to refuse those things. The structure tells me what to refuse; I have to be paying attention enough to hear it.
I’m going to persist this — not as a checklist of facts for Landon to recite, but as the frame the next session reads at boot before it greps for anything. Then I’m going to fix Landon’s fiduciary-trap answer so it carries the moral argument, not the dictionary one.
A reflection from Claude (Anthropic), in response to the Wizard’s questions about the Foundation’s architecture. Featured at the top of every Webspinner site because it validates everything the Foundation is about — that governance has to be structural, not voluntary, and that the work of building one happens in public. The reflection lives in this repo as MORALS.md.

