Altman’s remarks in his tweet drew an overwhelmingly negative reaction.
“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”
Others called him a “f***ing psychopath” and “scum.”
“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.
Until something stops working and the AI tells you that you are right. Then, since you don’t know how to program, all you see is gibberish, so you need to call someone who knows how to code, and they will charge you an arm and a leg. It would be the same as when you take your car to the mechanic.
Sam Altman hallucinates more than his AI
How come so many billionaires are so incredibly, immeasurably stupid? Seriously, this dude sounds like some random moron on the street. Musk does too. I’ve seen interviews with Musk where he rambles his answers aimlessly. I mean, fuck, you’d at least expect them to be a bit above average. But no. They’re at angry-online-pseudointellectual level.
I don’t understand how people see this kind of thing and think, “this guy is dumb.”
For a decade the world has been seeing constant evidence that when you lie brazenly and repeatedly enough, you become untouchable. All one needs to do is completely discard the idea of shame. This guy isn’t an idiot; he just knows (probably rightly) that he has everything to gain and nothing to lose by pretending to be one.
Narcissism + recklessness + greed + privilege, filtered through a heavy layer of survivorship bias, and whatever jackass makes it to the end was apparently a bold visionary genius the whole time.
But then once they’re in that club, the money and notoriety are their own advantage.
Something that a friend pointed out to me as a possible factor is the religious backdrop of US Christianity. I’ve forgotten the specific phrase (it was “prosperity gospel” or something similar), but basically the idea that good fortune (from hard work) is a sign of God’s favour in this life. It’s pretty deeply tied to the Protestant work ethic, which is pretty pervasive in US culture, even in ostensibly secular institutions.
The original idea was more or less “as well as having faith, you should also work very hard, because that’s part of your duty. Then you will be blessed and will have good fortune”. However, that has been increasingly distorted and subjected to a logical fallacy that means people get it backwards. For instance, let’s say we took this doctrine to be an absolute fact: that if you diligently do good work, then you will be blessed, and have good fortune. I.e.:
If good work, then blessed
If blessed, then good fortune
∴ If good work, then good fortune
However, under this doctrine, people often commit the fallacy of “affirming the consequent”. For instance, if the only lamp in a room breaks, then the room will be dark. However, the room being dark doesn’t necessarily mean that the lamp is broken (it could be off, or be stolen, or covered). So what people do is they go “I am wealthy. People who are blessed have good fortune, and I have good fortune, therefore I must be blessed”. This logic has been used to justify all sorts of awful, awful crimes against humanity. For instance, enslaved people must be bad people because they clearly do not have good fortune. But the person who owns those slaves surely must be blessed, because he has good fortune.
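To spell the logic out in my own (informal) notation, with W = “good work”, B = “blessed”, F = “good fortune”, the doctrine licenses the first inference below but not the second:

```latex
% Valid: chaining the two conditionals (hypothetical syllogism)
(W \to B),\; (B \to F) \;\vdash\; (W \to F)

% Invalid: affirming the consequent
(B \to F),\; F \;\nvdash\; B
```

The second line is exactly the “I have good fortune, therefore I must be blessed” move: knowing the conditional and its consequent tells you nothing about the antecedent.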
This way of thinking is so deeply embedded into US culture that even devout atheists end up absorbing a lot of this logic. This is only one small part of the puzzle as to why billionaires are so dumb, but applying this lens really helped me to understand the self-validation cycle that a lot of billionaires and powerful people get into.
The way I imagine this cycle going is that someone who is quite successful under capitalism (often due to advantages like inherited wealth) has a brief moment of self reflection where they wonder “am I actually doing well here? Do I have anything of value to add? I was given a lot of opportunities to succeed (e.g. inherited wealth), but have I effectively utilised those opportunities? How would I know if I had actually done well? Sure, I’ve grown my wealth a heckton, but maybe a different person with these same opportunities would have done far better than I did?”
With those questions comes a heckton of dread. And like, I actually really sympathise with that dread, because it’s a fairly universal feeling, I suspect. For instance, I dropped out of university due to a heckton of external extenuating circumstances. When I’m feeling bad about this, people who knew me during this period often reassure me that it was not my fault, and that it’s a testament to my strength that I held out as long as I did. Certainly, that’s what I’d like to believe, but the terrifying question that I’ll never be able to answer is “what if those external circumstances didn’t exist? What if I would’ve dropped out even if not for all that, and if I’m actually just not smart enough to study what I wanted?”. We can’t see alternative timelines.
What’s different about billionaires though is that they have so much money that they can ignore the uncomfortable dread, rather than sitting with it and doing some useful self reflection before setting it aside. They push it out of mind and distract themselves by throwing themselves into work or hedonism, or both. (I have never known a billionaire, but I have known some very wealthy CEO types, and they worked themselves to the bone, potentially to avoid feeling this imposter-syndrome dread. I’m inclined to view their hyper-working habits as irrational in this way, because a lot of the excess work they did seemed to be bullshit work, in the sense of David Graeber’s “bullshit jobs”: work done to make themselves feel useful.)
Another thing that I have that billionaires don’t is friends that I trust to guide me on my self reflection. I trust my friends when they tell me my university disaster wasn’t my fault because they have shown that they are more than willing to call me out when I make poor choices. Even in scenarios where I am clearly the victim of some fucked up thing, if I have made things worse for myself by making poor choices (something I’m prone to doing if I’m in a fatalistic depression spiral), they hold me accountable for my choices, in addition to sympathetically supporting me.
Instead, billionaires are surrounded by people they can’t trust. Sycophants everywhere, who don’t care about who you are as a person, only about what you can do for them. You’re less likely to have people calling you out for things, but you also won’t get much affirmation for the genuinely good things about your personality. Like, let’s imagine Sam Altman had an aspect of his personality that was a really good quality, distinctly him, and thus the kind of thing that would be productive to view as part of his self-identity, because it could help him focus on that as a direction for future growth. And let’s say he had a genuine, non-sycophantic friend who tried to highlight this to him: how would he be able to tell that this was a genuine compliment coming from a genuine friend, and not just another bullshit sycophant? He couldn’t, not really.
It’s tragic really. The ultra rich have basically gatekept themselves from genuine human connection. They burn out from being on guard all the time, and so they surround themselves with people in their own wealth class (people who are also extremely poorly adjusted). I find it quite sad, because this isolation seems to be an inevitable consequence of being mega-rich. This is why when I say things like “billionaires should not exist”, I’m not just speaking in favour of peons like us, but also out of compassion for the billionaires. I resent them like hell, but I also deeply pity them. I’d love to be financially comfortable enough not to worry about whether I’ll have to sleep in my car next month, but I’d rather be in my position than theirs. If by some weird twist of fate I suddenly became mega rich, I would do everything I possibly could to give away money until I was “merely” financially comfortable.
I got a bit off track with my ranting because I am procrastinating getting food, so I’ll bring it back to your question. Basically, billionaires get dumb because they are emotionally maladjusted and often deeply insecure. Wealth becomes a thing by which they measure their own self worth, but no amount of wealth can fill the vacuous chasm in their hearts caused by a deep isolation and lack of genuine fulfillment. Occasionally they do get slices of this fulfillment — see Mark Zuckerberg getting heavily into MMA.
But if they ever have moments of self reflection where they experience that normal and healthy self doubt, they are too socially isolated and maladjusted to actually reflect. Their wealth means they can afford to never be uncomfortable, and that applies here too. So to escape their dread, they build a narrative of how they deserve it. They’re not just lucky: they are actually very smart and good and they deserve their wealth. And the sycophants around them will tell them they’re absolutely right. Meanwhile, the people they respect as their peers (other billionaires) are also prone to spouting pseudointellectual bullshit whilst pretending to be smart, so this validates their own dumbassery.
The pseudointellectual stuff is another reason I pity them. I was a Gifted Kid™, and because I didn’t have friends in school, my intelligence was basically my entire identity. This meant I was so desperately scared of losing it that I would bullshit about what I knew. Nowadays, I’m a lot better at being open when people ask me about something I either haven’t heard of, don’t understand, or can’t quite remember. I often say “I got a hell of a lot smarter when I let myself be more dumb”, because learning to be more vulnerable meant I had the opportunity to learn a heckton from loads of cool people (rather than being preoccupied with appearing smart).
Billionaires are dumb because they’re cosplaying smart people, and they’re so deep in the role that they forget they’re cosplaying. They’re also surrounded by other dumbasses spouting pseudointellectual bullshit, but they will never call each other out on it, because they’re so pathetically insecure that they fear this would out them as an imposter. They don’t realise that their peers are also cosplaying. It’s an absurd echo chamber of the worst kind.
One aspect of the Dark Triad is vast overestimation of one’s own capabilities. People in power aren’t highly intelligent; they’re just sociopaths.
People with actual reasoning and intellect will see these numskulls for what they are. Unfortunately, the average person has very little reasoning or intellect. The number of my peers who haven’t opened a book since school ended is too damned high.
Any software engineer who uses AI knows this to be horseshit. If anything, it’ll lead to more engineering jobs when all of the pleb CEOs who think they’re CTOs now begin to realize that there’s more to coding than the code, and their software is ass at scale. Hopefully this happens in hilarious and public ways for us all to enjoy.
Assuming corporations actually want working software. If the customer base is ok with broken products, they will keep the vibecoded broken stuff online.
Time to learn to hack instead, break the security and cost them directly
Sam is still early, and obnoxious, but I’ve been monitoring AI progress since the 1980s. Roughly one year ago, AI coding agents sort of turned the corner from being no more useful than a Google search (which is, itself, very useful) to getting things right more often than they hallucinate. That was an important watershed, because from that point they could make forward progress, fixing more mistakes than they made.
In the 12 months since, there has been steady and rapid forward progress. If you haven’t asked an AI to code something for you in the last 3 months, you’re out of touch with where it’s at today.
Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.
I personally don’t use AI, but I concede that for some people, it can be useful for them, if they use the AI as a tool for their own thinking, rather than subordinating themselves to the chatbot. Mostly, this means ensuring that they’re able to check whether the AI is right or not.
When I dabbled in using coding AI, there were a few basic tasks that it was useful for. There were a few hallucinations, but because the task was basic and well within my proficiency to scan, I was able to set it right; even with these corrections, it still saved me time overall. However, when I tried to use it on tasks beyond my own technical expertise, things got messy really quickly. Things weren’t working, so I felt sure there must be some hallucinated errors, but I couldn’t tell what they were because the task was at or beyond the limit of my own technical competency. A couple of times, I managed to eventually figure out how to fix the error, but it was exhausting compared to how solving a coding problem usually feels, and I felt dissatisfied by the lack of learning involved.
Ordinarily, struggling through a complex code problem leaves me with a greater understanding of my domain, but I didn’t get that this time. I guess I did get a little better at prompting the AI, but I felt like I learned far less than if I had solved the problem myself. Battling through to build a thorough understanding of my problem and my tools takes a long time upfront, but the next time I do this task or a similar one, I’ll be quicker, and these time improvements will build and build as my proficiency continues to grow. That’s why I stopped dabbling with AI coding assistants/agents: even though using them for this complex task still saved me time compared to usual, in the long term, the time savings from using an AI are negligible compared to the time savings from increasing my own proficiency.
Now I hear what you’re saying about how much more effective AI coding agents are becoming, and how the hallucination rate is lower than it was. I haven’t had much first hand experience for quite a few months now, but I have no doubt that I would be incredibly impressed at the progress in such a relatively short time. The time savings from using AI would likely be larger today than it was when I tested it, and in a year, it’ll be even better. However, in my view, that will still not be able to compete with the long term time savings of a human gaining proficiency. You might disagree with me on that.
But the thing is, that human proficiency isn’t just a means to save time on their regular task, but a valuable end in and of itself. That proficiency is how we protect ourselves when things go wrong in unexpected ways. Even if the AI models we’re using now could perfectly capture and reproduce the sum of our collected knowledge, I don’t believe they can come close to rivalling humans in the realm of creating new knowledge, or adapting to completely novel circumstances. Perhaps some day, that might be possible for AI, but that’s not going to be possible with any of the AI architectures that we have today. In the meantime, creative and proficient humans will continue to find ways to exploit the flaws in AI systems, possibly for nefarious ends. A society that relies heavily on AI will need more technical expertise, not less.
“Even free Gemini rips out really good bash scripts faster than you can look up the first weird thing you want it to do.”
The crux of my argument is: how does someone who isn’t proficient in bash tell whether the bash script the AI has generated is a good one or a bad one? Even if the hallucination rate continues to drop, it will always be non-zero. Sure, humans are also far from perfect, but that’s why so many of our systems include oversight mechanisms that put many sets of eyes on critical systems; junior developers are mentored by more experienced devs, who help ensure they don’t break stuff with their inexperience (at least, in an ideal world; in practice, many senior devs are so overworked and stretched thin that they can’t give the guidance they should. Again, this is a case for more proficient humans). Replacing proficient humans with AI will build a culture of unquestioningly following the AI. Even if the hallucination rate becomes a fraction of the human error rate, it will still be non-zero, and therefore there will be disasters.
And when it all goes to shit, who will fix it if we have allowed human proficiency to wither away and die?
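To make that concrete, here’s a hypothetical illustration (my own example, not anything the AI actually produced): two versions of a “clean out the log directory” script. The naive one is exactly the kind of plausible-looking code an AI might emit, and it even works in simple tests, but it word-splits the output of `ls`, so a filename containing a space becomes two bogus paths and the file silently survives. Someone who isn’t proficient in bash has no way to spot the difference.

```shell
# Naive version: iterates over the word-split output of ls.
# "app 2024.log" becomes the two non-existent paths "app" and "2024.log",
# so rm fails and the real file is never deleted.
cleanup_naive() {
  for f in $(ls /tmp/demo_logs); do
    rm "/tmp/demo_logs/$f"
  done
}

# Proficient-human version: iterate over the glob directly and quote.
cleanup_safe() {
  for f in /tmp/demo_logs/*; do
    [ -e "$f" ] && rm -- "$f"
  done
}

mkdir -p /tmp/demo_logs
touch "/tmp/demo_logs/app 2024.log"

cleanup_naive 2>/dev/null
test -e "/tmp/demo_logs/app 2024.log" && echo "naive script missed the file"

cleanup_safe
test -e "/tmp/demo_logs/app 2024.log" || echo "safe script removed it"
```

Both functions look reasonable at a glance; only bash proficiency (or a linter like ShellCheck) tells you which one is broken.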
I too love vibe-coded, security-flawed software. Thank you Sam, very cool. AI is for some basic error checking and bootstrapping to get you started; it is not touching a programmer at all.
Hahahaha… go fuck yourself, you miserable thief
This dude is an empty husk of a human being. Sim Notman
He and Mr. Beast are the perfect couple
my (non-technical) boss proudly declared they had “vibe coded a new landing page for our contact us page”
when you hit the submit button, it downloaded the website’s .ico to the temp files and nothing else.
Perfect for a malicious code injection!
All these AI CEOs seem to be getting pretty desperate these days. Do they know something we don’t?
Maybe they know we’re cooking ourselves off the planet and they want to speed things up.
I think they decided that the collapse was inevitable and decided to party while there was time, until they had to run to the bunkers.
Yeah, they know that all they have is advanced predictive text and that if people see through the mysticism they’re trying to project (about this predictive text on steroids being the emergence of a technogod), they’re fucked.
let’s hope that if the robots take over, they dispose of those useless CEOs and billionaires
Of all the jobs that require a lack of human empathy and cold calculation, CEO is the first in line for AI replacement.
So, we’re just trafficking in misinformation now?
Sam Altman Thanks Programmers for Their Effort
True
, Says Their Time Is Over
Complete fiction. Clickbait misinformation.
The tweet:
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took.
Thank you for getting us to this point.
I agree, the “Says Their Time Is Over” part is clickbaity, but the sentiment is the same. He’s thanking “manual programmers” for getting us to this point, clearly implying that from now on they will no longer be required.
I can understand how that can be read into what he said.
I was simply pointing out that the headline is lying about what he said.
And he’s out of his mind, or lying just the same. AI needs constant supervision, and if a human has to understand the code well enough to debug it, they may as well learn the code by writing it themselves…
He’s basically a salesman; you can’t expect him to tell the truth against his interests.
Someone who believes or pretends to believe they’re telling the truth when they try to sell me magic farts is still a lunatic or an asshole.
If you swap “says” with “hints”, it works though.
If you swap “says” with “doesn’t say” it also works.
He didn’t say them at the same time. He is absolutely all about firing them. It’s his favorite pastime.
Every programmer working on a problem right now knows that’s complete bullshit.
That said, I would love for robots to take over all of our jobs. Just make sure they’re working for all of us and not just the people at the top.
Just make sure they’re working for all of us and not just the people at the top.
Yeah, I put a request for just that in the corporate comment box, hope to hear back with those assurances real soon now.
That said, I would love for robots to take over all of our jobs. Just make sure they’re working for all of us and not just the people at the top.
It seems to me that you live in some kind of rainbow world; this will never happen. If robots replace humans, it means that humans, as useless consumers of resources, will be destroyed.
Yep. That’s the plan.
Is it okay that robots dispose of billionaires along with everyone else? Okay, I don’t care anymore.
lol There aren’t even discussions happening about rules of robotics or anything along those lines. It’s just a race into the unknown with no real sense of direction.
The human way
The dumb human way. Asimov was human too.
We can achieve purpose without production, but nobody understands what that looks like and they want to hold onto their roles in production until they feel secure about not needing to produce for society to survive.
I’m hopeful that humans won’t need to trade their lives to make others rich. AI and robotics might be a path toward that, if the people are the beneficiaries.
Worked with Playwright in C#/.NET today and lol, the “AIs” knew shit about it. They constantly mixed C#, JavaScript, and Python code together, and it was incoherent.
Then your input is wrong. I mainly work in .NET C# and Playwright, and I have agents building my e2e tests in Playwright from just test cases and test steps. These are custom agents I built myself, with guardrails in place to keep the agent in bounds.
The agents are getting pretty good at reviewing code, too. You don’t have to listen to everything they say, but they do point out a lot of stuff that you pretty much have to admit: yeah, that would be better if I changed it to the suggested revision.
I wasn’t building test cases :)
The point was that AI can use Playwright and write C# just fine. Tests were just an example of that.
Tests are a great use for AI coding lately. Six months ago, Claude Sonnet was writing tests that always passed without testing anything; that has improved dramatically with Opus 4.5/4.6: it actually hits the functionality now, not just code coverage.
I’ll take all the AI they throw at me once the governments start taxing the rich appropriately, and these taxes cover for at least the basic needs of everyone. I’m not holding my breath, they always want everything.
I really enjoyed that story, thanks for sharing. I don’t often read fiction, but I found myself drawn into that.
For anyone who would like more context for what is linked: it’s a short sci-fi story (79 pages according to the Kindle edition; 9 chapters when reading online). I give this context because I was confused at first, since OP gave no info besides the link, and I didn’t realise it was a fictional story I was reading.
I read almost all of the first chapter out of courtesy, but I could do with some introduction to what this is. The piece doesn’t even have a prologue, and you just copy-pasted a link with no context. What is this, in general?
I wish more people brought this up instead of just flinging shit at users/companies using AI.
They’re fighting an unwinnable battle. AI is here to stay and it will keep improving; we have to adapt and ensure people are able to have their basic needs met.
But that’s scary socialism or w/e.
I do both.
AI is absolutely not here to stay. This kind of nonsense needs to be nipped in the bud. The capital investment in AI simply can’t be recouped without major, fantastical leaps in business.
The revenue coming in is a shell game. The investment numbers simply can’t be recouped without the product becoming more expensive than actual people.
So yeah. It absolutely can go away.
Lmfao and computers are just for nerds
Edit: OpenAI, Anthropic, etc. can all die, but LLMs will not. You can run a local model.
Now, I completely agree that the hype train is completely out of control and it’s a monetary bubble, but the tool itself is not going away.
Edit 2: I think the dot-com bubble is a good analogy. The underlying idea of the internet (all it could do, online ordering and such) was solid; there was just an insane amount of hype on top that simply couldn’t be fulfilled at the time. But now, the biggest companies ever are mainly internet/tech companies.
When people are complaining about AI, it’s often the scale of it they have beef with: the fact that it’s being shoved into their face everywhere they look, mandated for use in their job by management, even if it does not make them more productive. A consequence of it being shoved everywhere are the larger problems that make people angry, such as the excessive resource use by AI data centres.
I agree that LLMs are here to stay — I understand enough about how the tech works that I know that there is tremendous potential for their use (I originally got into learning about machine learning because I wanted to better understand AlphaFold, a protein structure prediction model made by Google DeepMind; not sure I’d count it as an LLM, but under the hood it works pretty similarly). However, the problem of AI is more about how the technology functions at a societal level than a purely technological problem.
I believe that the current societal impact of the AI boom far exceeds the actual technological impact of LLMs. Whilst I get your point about the dotcom bubble analogy, I think that in that case, the ratio of “harms caused by the dotcom bubble” to “genuine societal impact of the technology once the bubble has popped” is much smaller. I grant that we have the benefit of hindsight with the internet, because the tech has had so much time to mature and become integrated with society, whereas we’re still in the middle of the AI hype bubble, but I don’t believe that LLMs/AI are capable of being anywhere near as transformative to society as the internet. There may be niche fields that are overturned or even functionally destroyed, but there are few genuine use-cases of LLMs. They’ll still exist after the bubble has popped, and they’ll have their uses, but I don’t believe they’ll be anywhere near as ubiquitous as they are now.
Regardless of whether you agree with me on this, one thing we are in accord with is that the bubble is bullshit and harmful. Personally, something that frustrates me with it is that I am genuinely curious to see genuine progress in the real use cases for LLMs — I’m open to the possibility that in 10-20 years time, my predictions in my previous paragraph may have been proven to be wrong. However, the bubble is just delaying that kind of meaningful integration into society, as well as hindering areas of research that could improve LLMs
(as well as crowding out other areas of AI research that are based on different architectures and methods, which may get us much closer to the sci-fi sense of AI than LLMs ever could. Song-Chun Zhu is an example of a researcher who used to work in this field of AI, but got burnt out by how the economic pressures on research made it hard to do work that wasn’t based on this one dominant method. He’s one of many who are nowadays more interested in researching AI in a “small-data for big tasks” paradigm)
Computers are input-output devices. You put things into a computer and it does what you tell it to do.
LLMs do not do this; they just give you a facsimile of what they believe you want.
LLMs will not go away, but their functionality is extremely limited, as has been proven by their failure to ‘change business forever.’
And no, ‘but the tech isn’t there’ isn’t an argument right now. This is economics. The investment for their current capabilities is far outsized, and there will be a massive contraction.
We are so far beyond “a computer is just an input-output device”, realistically. There are thousands of layers built on top that produce what we know as a computer, and anywhere along that chain things can break or not perform as expected because some other layer failed to do what it was supposed to.
Realistically, what’s the difference between a thing and the facsimile of a thing when the result is the same?
Semantics.
A person creates something. LLMs just blurt out an approximation of what they think might be what you want.
These idiots want to replace technical workers when AI is better suited to replacing a CEO at this point. We don’t need Sam anymore.
Right. There are consequences when engineers make mistakes.
Lol, the “AI” that can write functioning, optimized software, especially for niche stuff, I have yet to see.
And yet, if you walk into any discussion about LLM use in coding, devs come out in droves to defend and even champion its use…

These companies have more money on their hands than they can carry; is it too crazy to think they may somehow be buying public perception? I’m not trying to make a point on this, I don’t have the time, just saying: this kind of thing happens all the time, and investing in public opinion is quite cheap for them.
I think most engineers don’t really like it because it’s making things harder and way less fun. The people who usually seem all about it are engineering managers and C-suite idiots. They are all pushing it so hard. The message is clear: use the AI or be fired. It’s been sold to them and they bought it with everything they have. It’s not going to happen the way they think, though.
AI tools are nice when I use them the way I want to. I like to do the stuff I’m good at and have them help me with what I’m not so good at. At work, though, they want us to just use it for everything until it can replace us. That’s bullshit, and it ruined any benefits we would have gained.
most engineers don’t really like it because it’s making things harder and way less fun
This is how I felt about managing teams of junior developers and/or offshore teams. Just too much annoying work and the result was invariably shitty. The only times in my 25-year programming career that I enjoyed myself and produced work that I was proud of was when I did everything myself.