

I’m really grateful for the introduction to deceptive patterns here.
I was not aware of it, and I think it’s important to have language that can describe specifically how tech companies are trying to coerce people.


There’s a 1920 x 1200 non-touch display option, which will surely get you better battery life than OLED. But what’s most interesting about it is the 1-120 Hz variable refresh rate, which Dell says is a first for this model. That extremely low refresh rate should help save power when static images or text are on the screen.
Ah yeah, I should have read the rest of the article. I didn’t know about that feature though, that’s cool


1 Hz display option: like an e-Ink display?
(it says 120Hz in the article)
For first-timers: pick at random and use it until it annoys you. Then you can make an informed decision the second (third, fourth, …, nth) time around


I think “sludge” would be a good alternative


I could go for some of that sweet, sweet radiation right now
She had to pick what to read too!
I think I’d last a week in that job, I’d end up choosing weird stuff and getting fired
Feels like a variation on this old quote (origin unknown):
“The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.”


I’m working towards something like that. I’m hoping to ultimately drop the smartphone altogether, and I’ve set my current phone’s end of life (2027ish?) as the goal.
I think the other thing that’s necessary to keep the same sense of connectedness is a device to receive notifications, and I have an open source smartwatch I want to program for that. I’ve been working on a notification server too (kind of like Gotify), but at the moment it’s a work in progress


By layers I mean image layers when manipulating an image in an image editor. So I guess what you’re saying is an image would be flattened before being passed to a compression algorithm?
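To illustrate what flattening means here, a minimal pure-Python sketch (hypothetical, not any editor’s actual implementation): compositing the top layer over the bottom one produces a single pixel grid, so anything a fully opaque box covered never reaches the compressor at all.

```python
# Hypothetical two-layer image: each layer maps (x, y) -> (r, g, b, a).

def flatten(bottom, top):
    """Composite top over bottom; the result has no per-layer data left."""
    out = {}
    for pos, (r, g, b, a) in bottom.items():
        tr, tg, tb, ta = top.get(pos, (0, 0, 0, 0))
        alpha = ta / 255  # opacity of the top layer at this pixel
        out[pos] = tuple(round(t * alpha + u * (1 - alpha))
                         for t, u in zip((tr, tg, tb), (r, g, b)))
    return out

# A white "text" layer with a 100% opaque black box over one pixel.
text_layer = {(x, y): (255, 255, 255, 255) for x in range(4) for y in range(4)}
box_layer = {(0, 0): (0, 0, 0, 255)}

flat = flatten(text_layer, box_layer)
# The covered pixel is now solid black; the pixel underneath is unrecoverable,
# so the compressor only ever sees the flattened result.
```

In other words, if the editor flattens before export, the compressed file can’t leak the hidden layer, because the layer data never makes it into the compressor’s input.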


I wonder if, hypothetically, AI could do the same with a box over text, even if it was 100% opaque. For example, if the data from the layer containing text was part of the image data passed to an image compression algorithm, and that data was somehow reflected in the output
Misread as Pelletburo, now sad there’s no pet feeder called that
For up to 480W of fun!


I had only heard of the subs going to the US until now.
It’s kinda crazy they’re crossing the Atlantic in those things, and even crazier that they’re doing round trips


I think they had a RISC-V CPU as an experimental option for a while, but I couldn’t see it on their site recently.
Not sure what happened with that
EDIT: my mistake, it was an emulated RISC-V CPU, running on an FPGA (source)


I was a long-time Linux user at the time of the systemd switchover.
Your memories of the good old times are your own


I see from your other comment in the thread that you’re enthusiastic about systemd, and that’s great.
I’m glad we inhabit a software ecosystem broad enough that we can both be happy


Those people usually see themselves as moral and righteous and expect the world at large to follow their personal creed.
If they don’t like systemd but are forced to use it for some reason, I can understand why they might have some negative feelings
Once I switched to a distro with OpenRC, I stopped feeling the need to argue about systemd


So what you’re saying is, the guy in the last frame should be laughing?
I think it’s interesting that the phrase “ARM-free roadmap” is being used. I had no idea there had been so much market penetration of RISC-V already