

I keep picking instances that don’t last. I’m formerly known as:
@[email protected]
@[email protected]
@[email protected]
@[email protected]


We do understand exactly how LLMs work though, and it in no way fits with any theory of consciousness. It’s just a word extruder with a really good pattern matcher.


I like the comparison, but LLMs can’t go insane because they’re just word-pattern engines. It’s why I refuse to go along with the AI industry’s insistence on calling it a “hallucination” when one spits out the wrong words. It literally cannot have a false perception of reality because it does not perceive anything in the first place.


This feels to me like a common folk saying from somewhere translated into English. It’s also a very apt and appropriately vulgar metaphor for the situation.
I can’t see the thumbnail, but I could already hear the Ryan George character in my head just from reading the image text, so when I saw a YouTube link I assumed it had to be one of those skits.
I’m guessing he means the gun is to deter people from physically harming him, or to shoot the ones who aren’t deterred. This in no way helps anyone whose health is threatened by any of the numerous more common threats to health, but that’s not his problem.

One of the reasons I switched to a PieFed server. I’ve been happy with the switch, though; it feels generally smoother in my browser as well. That’s the beauty of the Fediverse. Even on a whole different platform, I’m still able to participate in this discussion hosted on a Lemmy server.

“AI should always be a choice—something people can easily turn off.” “It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.”
How does he not get how contradictory these positions sound? It’s really a missed opportunity to brand themselves as the browser without the AI bullshit and gain users who want to get away from that crap. Sure, they promise it’ll have an off switch, but even if that’s true, they’re still wasting a lot of their very limited budget pursuing it. Really shows where their priorities are.
The problem with the “AI will destroy us all” take is that it attributes a competence to these technologies that they don’t have. I don’t deny it may raise some useful opposition, but it also makes the problem worse on the other end. There will be a good chunk of people who look at the extremes of the claims (useless slop vs. doom) and figure that the truth is somewhere in between. They’ll conclude it must be a pretty good technology. This is why some of the AI evangelists themselves have been pushing the doom possibility. “Oh no, it might be just too good. Well, not just yet, don’t worry too much, but it is already so good we’ve started preparing for it,” they claim.