Stupid people who can’t think will be vetting software that they believe thinks for them. What could possibly go wrong.
The AI also tells them that they’re awesome and smart. It’s a great match.
AI be like

You’re absolutely right!
good, I’ll add vetted models to my blocklist
this worries me with any tech.
-
if a smaller company develops a competing product (OpenAI and Anthropic used to be small), will it hinder them and provide access only to the mainstream companies?
-
how does this affect non-US models?
-
will they only be approved if they say wonderful things about Trump? have you ever asked Grok about Elon?
I really feel like you’re actually being too generous to this proposal.
Let’s be clear, when this administration says they want to vet new models, what they mean is that they want to turn them into right wing propaganda engines. This is “reprogram ChatGPT to say the 2020 election was stolen, white genocide is real and trans people are all sex predators, or we won’t certify it.”
This is some Cold War regulatory capture BS based on a Myth(os), something that didn’t happen:
Anthropic did, in its Mythos system card, suggest a model had “broken containment and sent a message” when it A) was instructed to do so and B) did not actually break out of any container.
-
“free market”
Start with vetting the Big Arch and other burgers, then let’s talk
He’s been vetting them directly for years. Daily vettings, directly into his gullet. What more do you want?