

There have been rumors of Cook retiring for at least the past five years, and I’m pretty sure that shareholders are quite happy with Apple’s market cap at almost $4 trillion.
It has nothing to do with AI.


AI is a non-essential tool. Anything that a chatbot produces can and should be achievable by a human with access to the same sources of information. Anyone hired to do a specialist job who cannot perform without access to AI should be summarily fired, because their output would be indistinguishable from that of their LLM of choice.
In contrast, the Internet (as a massive interconnected network), computers, even books, enable humans to deal with information in ways impossible to achieve without them, and help augment us. Reading feeds your brain. Computers are a window to creativity. AI does nothing of the sort; in fact, I believe it does the opposite, pushing us to outsource our thinking processes while making us feel smart, undeservedly.
We aren’t comparing humans to code.
Except for the bit where LLM behavior isn’t deterministic, while that of most compilers, in most situations, is.
And before anyone says that LLVM in version X produced wildly different assembly from version Y, it is not remotely comparable to what LLMs do, not even close.
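The determinism contrast above can be sketched in a few lines. This is a toy illustration, not any real LLM or compiler API: a greedy pick over fixed scores stands in for a compiler (same input, same output every time), while temperature sampling over the same scores stands in for an LLM decoder (same input, possibly different outputs).

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Pick a token index from softmax(logits / temperature) — a toy
    stand-in for temperature-based LLM decoding."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical scores for three candidate tokens.
logits = [2.0, 1.9, 0.5]

# "Compiler-like": always the same choice for the same input.
greedy = max(range(len(logits)), key=lambda i: logits[i])

# "LLM-like": repeated runs with temperature > 0 pick different tokens.
rng = random.Random(0)  # seeded here only so the demo is reproducible
samples = {sample_next_token(logits, 0.8, rng) for _ in range(50)}
```

With an unseeded `rng`, `samples` would vary from run to run as well, whereas `greedy` never does; that gap is the point being made above.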
I would need a citation for that “2x-5x faster with the same quality” claim, because that hasn’t been my experience at all. Most of my colleagues treat LLMs as a “better Google”, and agentic coding in production has been scaled back to the point where it may help only with the least critical paths. And we aren’t particularly AI-skeptic, at all.
Also, I feel like progress has stalled in the past couple of years; e.g., the latest version of Opus doesn’t seem to provide me with any noticeable advantage over the previous one. Are they getting better on paper? I suppose they are, but I couldn’t care less about that if they don’t give me better results.
The thing is, writing code was never the issue; engineering it is. If a machine helps me write code ten times faster, that saves me maybe a couple of hours, which isn’t really meaningful. On the other hand, it increases my workload by forcing me to thoroughly check the work of less experienced devs who rely on it, just to make sure that there aren’t errors that could cause serious harm.
I guess what I’m trying to say is that AI is giving inexperienced people confidence they shouldn’t have in the first place, and that’s not a good thing.
What does that mean? Because just yesterday I saw a guy live-streaming a vibe-coding session, and he sounded exactly like “Bill”.
The link you posted does not support your conclusion at all.