

You support lying. Good to know.
If you don’t understand the tomato comment, no wonder you’re having so much trouble with the interpretation and lying topics!




Did the douchebag say exactly what the post title said he said?


I recommend a re-read, my good buddy, of all my posts in this thread.
Truth. Not lies. Not conjecture.
The truth here can be that he was dodging the question.
Don’t say people said things they didn’t say. Simple.


Post said he said a thing. He did not say the thing. Not complicated.
Could have worded the post title to be accurate: didn’t. Instead, lied.
Words matter. Truth matters. Interpretation is how you get religious people committing atrocities based on millennia-old writings.
“[Asshole] Squirms Under Questioning, Refuses To Admit 16hrs A Day Is Addictive Behavior.”
Not hard.


What editing? Didn’t edit either of those posts.


I just dislike sensationalism.
If the truth isn’t enough, then I don’t want it.


Post headline deserves a downvote. Quote from article:
Lanier asked Mosseri what he thought of K.G.M’s longest single day of use of Instagram being 16 hours.
“That sounds like problematic use,” the Instagram boss answered. He did not call it an addiction.
He also didn’t say it was a tomato. Like, wtf do you want? I can’t tell if he was asked specifically whether 16 hours a day was an addiction. The prior question was about whether he had known she had a 16hr day, and he had not. (He should have; poor trial prep.)
This is sensationalist BS and I dearly want this platform to be better than that.
Just so we’re clear, Meta can die in a fire and the world would be better off, I’m not defending them in the slightest.


Can run decent-sized models with one of these: https://store.minisforum.com/products/minisforum-ms-s1-max-mini-pc
For $1k more you can get much the same thing from Nvidia in their DGX Spark. You can use its high-speed fabric to connect two of ’em and run 405B-parameter models, or so they claim.
Point being, that’s some pretty big models in the $3–4k range, and massive models for under $10k. The Nvidia one supports ComfyUI, so I assume it supports CUDA.
It ain’t cheap, and AI has soooo many negatives, but… it does have some positives, and local LLMs mitigate some of the minuses, so I hope this helps!
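If you want to sanity-check why boxes like these can handle those model sizes, here’s a rough back-of-envelope sketch (my own simplification, not from either vendor’s specs): weight memory is roughly parameter count times bytes per parameter, ignoring KV cache and runtime overhead, so real usage is higher.

```python
# Rough estimate of memory needed just to hold model weights.
# bytes_per_param: 2.0 for fp16, ~0.5 for 4-bit quantization.
# Ignores KV cache and runtime overhead, so treat it as a floor.
def weight_memory_gb(params_billions, bytes_per_param=0.5):
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 70B model at 4-bit needs ~35 GB of weights -- fits on one
# 128 GB unified-memory box with room to spare.
print(weight_memory_gb(70))    # 35.0

# A 405B model at 4-bit needs ~200+ GB of weights -- hence the
# pitch of linking two machines together over the fabric.
print(weight_memory_gb(405))   # 202.5
```

That’s why one of these boxes handles mid-size models comfortably, and the 405B claim only works with two of them paired up.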


Keep doing it. They all have strengths and suckiness at the same time.


That’s… not what they said.


Don’t forget obligatory data mining the crap out of you!
You really don’t understand and are just driving the point home the more you post. I feel kinda sorry for you.