

You put an /s there, but I can think of no other explanation for why an LLM might do this




In a study Sharma published last week, investigating how using AI chatbots could cause users to form a distorted perception of reality, he found that “thousands” of the interactions that may produce these distortions “occur daily.”
This is interesting. I think it suggests the AI psychosis problem is bigger than we realize. All we know of now are a few high-profile cases, without any real stats on the issue. But Sharma seems to know enough to be disturbed by its scale. Anecdotally this matches my own experience, as I think I’ve come across at least one person suffering from it.
I suspect that AI psychosis will become a widely acknowledged problem within the next decade, similar to how it’s now widely acknowledged that Instagram and TikTok can trigger eating disorders.


That’s very interesting. Thanks for the info


That’s good news. Hopefully it will gain traction among individuals as a viable Discord alternative as well


I guess from a security perspective it also helps to obfuscate Vance’s whereabouts. It’s difficult to know which one of the 40 vehicles he’s actually in.


Yeah. In my experience it’s more complicated than Discord but not as complicated as the fediverse. Either way, if we can use this as an opportunity to boost its popularity among techy people, that would be a win in my book. Hopefully one day someone will come along and make a front end for Matrix that’s as frictionless to use as Discord


LLMs now achieve nearly perfect scores on medical licensing exams, but this does not necessarily translate to accurate performance in real-world settings
This is an interesting distinction. Intuitively it feels like something similar is going on with programming. Gemini is apparently passing all these crazy benchmarks but I couldn’t even get it to one-shot a game of snake in C


We need more than a Discord clone. We need something that is structurally capable of avoiding these kinds of top-down attacks. We need something decentralized. I think something using the Matrix protocol would be a good fit


Self-hosting a Matrix instance might be an okay alternative. And because it’s decentralized, it’s not really possible for top-down age verification to be imposed on it, even if it does get popular
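For anyone curious how involved self-hosting actually is, here’s a rough sketch using the official Synapse Docker image (Synapse is the reference Matrix homeserver). The domain `example.com` is a placeholder for your own; you’d still want a reverse proxy with TLS in front of this for real use.

```shell
# Generate an initial homeserver config into a named volume.
# SYNAPSE_SERVER_NAME is the placeholder domain your server will identify as.
docker run -it --rm \
    -v synapse-data:/data \
    -e SYNAPSE_SERVER_NAME=example.com \
    -e SYNAPSE_REPORT_STATS=no \
    matrixdotorg/synapse:latest generate

# Then start the homeserver itself, listening on port 8008.
docker run -d --name synapse \
    -v synapse-data:/data \
    -p 8008:8008 \
    matrixdotorg/synapse:latest
```

Two commands gets you a working (if bare-bones) homeserver, which is honestly not much worse than setting up a self-hosted Discord alternative.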


Are there any lawyers in here that can tell us how likely this thing is to succeed?


Interesting. And yeah, I did see that Clair Obscur: Expedition 33 stuff actually haha.
Thank you for your perspective and your advice! I appreciate it.


Thank you for the detailed response. Most opinions on this topic are very much based on vibes rather than real experience, so it’s interesting to hear an informed opinion from someone on the inside.
I hope to become a software developer one day too (it’s a slow process, because I’m teaching myself in my free time), so I sometimes worry whether all the effort I’m putting in is even worth it if LLMs will be doing all the programming in a few years. Do you think that’s a real concern? Will these tools continue to develop to that point, or are they hitting a wall, like some people are saying?


Wow, lasts 10+ years and has 3D capabilities. That sounds pretty good. What model is it?


Nor do I think they need an AGI to do this.
Yeah, I guess there’s a lot of interesting stuff we can do with AI without necessarily achieving AGI. What about programming? Even if we don’t get AGI soon, do you still think LLMs will be snatching up a sizeable chunk of programming jobs?


Interesting. Stuff like this makes me suspicious of the current LLM hype. I know it’s not necessarily language models per se that these vehicles use, but still. If we were really on the cusp of AGI, then I’d expect us to have at least cracked autonomous driving by now.


That’s interesting. I wonder why it wasn’t working for me


Wow, 97% is a lot. Interesting info though, thanks


Waymo really seems to be winning out over Tesla with the self-driving thing. I wonder how much of that is really just because Waymo cars have a remote human driving them in situations where a Tesla would just crap out


It didn’t work for me either. I wonder if it’s already been fixed. The Google team seems to be really on top of it wherever there’s public criticism of their AI models. I remember a post on Hacker News pointing out a “what year is it” bug in Google’s search summary. They got the problem fixed in like two or three hours


I have hope. There are ways to design the front end that smooth over the technical aspects for non-technical users. The Mastodon sign-up page is a good example of this.