

It is a different substrate for reasoning: emergent, statistical, and language-based, and it can still yield coherent, goal-directed outcomes.
That’s some buzzword bingo there… A very long-winded way of saying it isn’t human-like reasoning, but you want to call it that anyway.
If you accept that the reasoning often fails to show continuity, well, then there’s also the lying.
Consider a reasoning chain around generating code for an embedded control scenario. At one point it says the code may affect how a motor is controlled, and so it will test whether the motor operates.
Now the truth of the matter is that the model has no way to perform such a test. The reasoning chain is a fiction, so it simply described a result, asserting that it ran the test and that it passed, or failed, based not on any test but on text prediction. Sometimes it says the test failed and then carries on as if it passed; sometimes it decides to redo some code to address the “error” but leaves it broken in real life; and of course it can claim the code works when it didn’t at all.

The same example shows how the “reasoning” can still help, though: the code it generated was based on one application, but when applied to a motor control scenario people had issues, and generating the extra text caused it to zero in on some Stack Overflow thread where someone had made a similar mistake.
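To be concrete about what “test if the motor operates” would actually involve, here is a rough C sketch (hypothetical function names, not from any real vendor HAL) of the kind of check you’d need: drive the PWM output and read back an encoder from real hardware. None of this can be evaluated by predicting text.

    /* Minimal sketch of a motor spin test, assuming a vendor HAL or BSP
     * provides the hardware-access functions declared below. */
    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed hardware-access functions; on a real MCU these would touch
     * memory-mapped registers or a vendor HAL (names are hypothetical). */
    extern void    pwm_set_duty(uint8_t percent);  /* drive the motor */
    extern int32_t encoder_read_counts(void);      /* shaft position feedback */
    extern void    delay_ms(uint32_t ms);

    /* Returns true only if the shaft actually moved while being driven. */
    bool motor_spin_test(void)
    {
        int32_t start = encoder_read_counts();

        pwm_set_duty(30);   /* apply a modest duty cycle */
        delay_ms(500);      /* give the motor time to spin up */
        pwm_set_duty(0);    /* stop driving */

        int32_t moved = encoder_read_counts() - start;

        /* Require a minimum count delta in either direction so that
         * encoder noise alone doesn't count as "the motor works". */
        return (moved > 100) || (moved < -100);
    }

The point is that the pass/fail answer only exists after electrons move through an actual motor and encoder, which is exactly the step a model emitting tokens has no access to.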

So it is just as good as the typical CEO.