Well? Do you trust Sam Altman with your kids’ online safety?
Of course you don’t. It is a category error, like asking the fox to draft the henhouse bylaws. Nevertheless, the question is now quietly circulating in Sacramento, Silicon Valley, and soon, if history is any indicator, the rest of the nation.
The world’s most powerful AI company is no longer content merely to build the machines. Now it is helping to write the rules that govern them. That alone should give any serious observer pause. When the referee starts co-authoring the rule book, something has gone wrong long before the first whistle blows. And these machines, of course, are like none other in human history.
OpenAI has announced a partnership with Common Sense Media (CSM), a prominent children’s online safety group founded by Jim Steyer, brother of Tom Steyer, the billionaire environmentalist and Democrat candidate for California governor. OpenAI and CSM were previously at odds, each backing rival ballot initiatives to regulate how children interact with AI chatbots. Now? They’ve joined forces.
The result is a single proposal that could soon land on the California ballot — and, crucially, be marketed as a model for national standards.
California has long served as the Democrats’ preferred testing ground. Auto emissions standards were piloted there, then imposed nationwide. Data privacy followed the same path. So did labor rules, energy mandates, and environmental regulations that radically reshaped entire industries far beyond the state’s borders. Speaking of machines, this one has proven remarkably efficient. First comes the pilot. Then the precedent. Then the pressure. Boom: what began on the fringe becomes the heart of national policy.
Once embedded, predictably, the rules harden. Especially when written into ballot initiatives, state constitutions, or dense compliance regimes that only the largest players can afford to navigate. Revision becomes politically radioactive. Repeal is painted as dangerous. Dissent is portrayed as moral failure, opposition as risky and reckless.
The stated purpose, to be sure, is unimpeachable. Protect children. Limit data collection. Add safeguards. Require age verification. Who could object? That’s precisely the point. The moral framing does the work before the policy ever does.
By the time questions about power, enforcement, and unintended consequences arise, the argument has already been won. After all, if you hesitate, what exactly are you saying? That children should be less safe?
But politics, especially California politics, is not about intentions. It has always been about incentives. And this arrangement raises an obvious, uncomfortable question: Why would the most dominant AI firm want to help draft the very regulations meant to restrain it?
Regulation, when shaped correctly, isn’t a burden on the powerful. Quite the opposite, in fact. It’s a moat. Compliance costs rise. Audits multiply. Smaller firms buckle. New entrants hesitate. The giants absorb the expense, hire the lawyers, tick the boxes, and continue unimpeded. In public, this is called responsibility. In practice, it’s market control with better manners.
There is also the question of timing. OpenAI and its peers are facing mounting criticism over how young people interact with AI systems. Lawsuits loom. Legislators grow restless. Parents are alarmed. Aligning with a trusted children’s advocacy group offers something priceless: moral cover. It reframes the company not as a defendant, but as a protector, a source of safety against irresponsible risk.
That shift matters.
Once a firm is cast as part of the solution rather than a leading source of the problem, scrutiny softens. Critics sound shrill, concerns are waved away as the ravings of cranks, and the company secures a seat at the table where future rules are written.
Far more mundane — and troubling — than a cloakroom conspiracy, this is regulatory capture conducted in broad daylight, wrapped up with a bow in the language of care. And you do care, don’t you?
Once California moves, the story writes itself. Headlines will hail “the strongest protections in the country.” Governors elsewhere will be asked why their states lag behind. Congress will be told a ready-made framework already exists. Why reinvent the wheel? Why delay?
And just like that, a system designed with the input of the industry it governs becomes the national baseline.
This is how power consolidates in the modern age. Forget force and secrecy. Who needs skullduggery when you have slickly deployed partnerships, press releases, and the careful use of children as moral ballast?
None of this is to deny that children need protection online. They do. The digital world is unforgiving, full of predators and rabbit holes that lead nowhere good. No serious person disputes that. However, safeguards crafted in haste — or worse, convenience — rarely age well.
In a brutal irony, though, a process meant to protect the young can instead shape a future where oversight is ossified, competition is stifled, and the most influential technology of our era answers primarily to itself.
California is once again the laboratory. The rest of the country is expected to follow.
So the opening question bears repeating. Do you trust Sam Altman, and companies like his, to help decide what your children are allowed to say, read, ask, or imagine? The question answers itself. What remains unanswered is whether the rest of the country will be given a choice.


