MentalEdge
E-skating cyclist, gamer, and enjoyer of anime. Probably an artist. Also I code sometimes, pretty much just to mod Titanfall 2 tho.
Introverted, yet I enjoy discussion to a fault.
- 80 Posts
- 41 Comments
I mean… They were both regular customers of a bookstore.
Their guests were probably a bunch of people from their book club, where they probably even met.
MentalEdge@sopuli.xyz to Technology@lemmy.world • Intel announced plans to start making GPUs, challenging NVIDIA's dominance (English)
5 · 6 days ago
Wut?
Alchemist and Battlemage cards were fine.
Edit: oh no. It's a pivot to AI compute 🤦‍♂️
MentalEdge@sopuli.xyz to Technology@lemmy.world • Nvidia CEO pushes back against report that his company's $100B OpenAI investment has stalled (English)
154 · 10 days ago
Looks at numbers.
Subtracts money being spent from money being earned.
Huh.
Are you sure?
MentalEdge@sopuli.xyz to Technology@lemmy.world • OnePlus update blocks downgrades and custom ROMs by blowing a fuse (English)
9 · 16 days ago
The original "One" phone was even supposed to run CyanogenMod out of the box at one point.
MentalEdge@sopuli.xyz to Technology@lemmy.world • AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds (English)
1 · 26 days ago
"Like you said, it might be impossible to avoid ascribing things like intentionality to it"
That’s not what I meant. When you say “it makes stuff up” you are describing how the model statistically predicts the expected output.
You know that. I know that.
That's the asterisk: the more in-depth explanation a lot of people won't bother reading far enough to learn about. Someone who doesn't read that far into it can see that same phrase and assume we're discussing what type of personality LLMs exhibit, that they are "liars". But they'd be wrong. Neither of us is attributing intention to it or discussing what kind of "person" it is; in reality we're referring to the fact that it's "just" a really complex probability engine that can't "know" anything.
No matter what word we use, if it is pre-existing, it will come with pre-existing meanings that are kinda right, but also not quite, requiring that everyone involved in a discussion know things that won’t be explained every time a term or phrase is used.
The language isn’t “inaccurate” between you and me because you and I know the technical definition, and therefore what aspect of LLMs is being discussed.
Terminology that is “accurate” without this context does not and cannot exist, short of coming up with completely new words.
MentalEdge@sopuli.xyz to Technology@lemmy.world • AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds (English)
6 · 26 days ago
Yes.
Who are you trying to convince?
"What AI is doing is making things up."
This language also credits LLMs with an implied ability to think that they don't have.
My point is we literally can't describe their behaviour without using language that makes it seem like they do more than they do.
So we’re just going to have to accept that discussing it will have to come with a bunch of asterisks a lot of people are going to ignore. And which many will actively try to hide in an effort to hype up the possibility that this tech is a stepping stone to AGI.
MentalEdge@sopuli.xyz to Technology@lemmy.world • AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds (English)
7 · 26 days ago
Obviously.
And like hallucinations, it's undesired behavior that proponents of LLMs will need to "fix" (a practical impossibility as far as I'm concerned, like unbaking a cake).
But how would you use words to explain the phenomenon?
“LLMs hallucinate and lie” is probably the shortest description that most people will be able to grasp.
MentalEdge@sopuli.xyz to Technology@lemmy.world • AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds (English)
6 · 26 days ago
Yup. The way the article is titled isn't helping.
MentalEdge@sopuli.xyz to Technology@lemmy.world • AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds (English)
261 · 27 days ago
Seems like it's a technical term, a bit like "hallucination".
It refers to when an LLM will in some way try to deceive or manipulate the user interacting with it.
There’s hallucination, when a model “genuinely” claims something untrue is true.
This is about how a model might lie, even though the “chain of thought” shows it “knows” better.
It's just yet another reason the output of LLMs is suspect and unreliable.
MentalEdge@sopuli.xyz to Programmer Humor@programming.dev • Guys, what's the best Linux distro to install on my PC?
34 · 27 days ago
Same, but among Arch users it's "your config" and "my config".
I’m antitheist, personally.
We have the power of god and anime on our side.
MentalEdge@sopuli.xyz to Programmer Humor@programming.dev • And then everyone stood up and clapped
437 · 1 month ago
If this actually worked, the rich wouldn't be trying to sell AI to the poor. They'd keep it to themselves.
Didn’t there use to be an unlimited tier?
Welcome to premium Khajiit wares.
You have coin, yes?
True passion. For MURDER!