return2ozma@lemmy.world to Technology@lemmy.world · English · 1 month ago
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns (futurism.com)
ageedizzle@piefed.ca · English · edited 10 days ago
deleted by creator
affenlehrer@feddit.org · English · 1 month ago
Also, the LLM is just predicting the next token, not selecting it. It's also not limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
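The point about prediction versus selection can be illustrated with a toy sampler. This is a minimal sketch, not any real inference engine's API: the vocabulary, role-marker tokens, and logit values below are all made up. It shows that role markers like `<|user|>` are ordinary tokens to the sampler, so a loop that doesn't stop on the end-of-turn token will emit whatever token the model scores highest, including a fake user turn.

```python
import math
import random

# Hypothetical toy vocabulary: role markers are ordinary tokens,
# indistinguishable to the sampler from any other token.
VOCAB = ["<|assistant|>", "<|user|>", "<|end|>", "Hello", "world"]

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_next(logits, rng):
    # The model only produces a distribution; "selection" happens
    # here in the sampling loop, and nothing in it restricts the
    # output to assistant-role text.
    probs = softmax(logits)
    r = rng.random()
    acc = 0.0
    for tok, p in zip(VOCAB, probs):
        acc += p
        if r < acc:
            return tok
    return VOCAB[-1]

# Made-up logits that strongly favor "<|user|>": if the inference
# loop forgets to stop on "<|end|>", the sampler happily emits a
# user-turn token -- it is just the most likely next token.
logits = [0.1, 5.0, 0.1, 0.1, 0.1]
print(sample_next(logits, random.Random(0)))  # → <|user|>
```

Real engines avoid this only by convention: they stop generation on end-of-turn tokens and constrain templates, not because the model itself distinguishes roles.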