The modern web is an insult to the idea of efficiency at practically every level.
You cannot convince me that isolation and sandboxing require a fat 4 GB slice of RAM for a measly 4 tabs.
It is crazy that my Core 2 Duo with 8 GB of RAM struggles to load web pages.
Can’t wait for the new evidence that Epstein is behind that too.
Actshually, it’s bandwidth censorship: if you make something too heavy to be used, then it won’t get used. It is one of the things China is doing to separate their internet from the rest of the world’s, by making their domestic internet so blazingly fast that going to the world wide web feels unbearable.
So yesh, the Epstein class are making the news too slow for typical users to access. /maybe some sarcasm, maybe not, I’m not sure yet
EDIT: I have decided I was not being sarcastic. https://ioda.inetintel.cc.gatech.edu/reports/shining-a-light-on-the-slowdown-ioda-to-track-internet-bandwidth-throttling/
Episodes of network throttling have been reported in countries like Russia, Iran, Egypt, Zimbabwe, and many more, especially during politically sensitive periods such as elections and protests. In some cases, entire regions such as Iran’s Khuzestan province have experienced indiscriminate throttling, regardless of the protocol or specific services in use. Throttling is particularly effective and appealing to authoritarian governments for several reasons: it is simple to implement, difficult to detect or attribute, and hard to circumvent.

China is making webpages that are less than 20 megabytes… but at what cost?
Actually I can, because I use Linux.
Though an install is like 8gb now, instead of 700mb.
Which is not even true. If you want small you can get it.
Unreal Engine is one of the biggest offenders in gaming.
PCs aren’t faster, they have more cores, so they can do more at a time, but it takes effort to optimize for parallel work. Also the form factor keeps getting smaller, more people use laptops now and you can’t cheat thermal efficiency.
My first PC ran at 16MHz on turbo.
PCs today are orders of magnitude faster. Way less fun, but faster.
What’s even more orders of magnitude slower and infinitely more bloated is software. Which is the point of the post.
It’s almost impossible to find any piece of actually optimised software these days (with some exceptions like SQLite), to the point that 99% of the software currently in use can be considered unintentional (or intentional) malware.
Particularly egregious are web browsers, which seem designed to waste the maximum possible amount of resources and run as inefficiently as possible.
And the fact that most supposedly desktop software these days runs on top of one of those pieces of intentional malware (it’s impossible to achieve such levels of inefficiency and bloat unintentionally; it requires active effort) obviously doesn’t help.
Turbo slowed your processor down though
Browsers are not the same as they were. They are basically entire operating systems in themselves now.
I came from C and C++ and had learned that parallelism is hard. Then I tried parallelism on Rust in a project of mine and it was so insanely easy.
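For anyone curious what that ease looks like, here’s a minimal sketch (function name, chunking scheme, and data are mine for illustration, not from the commenter’s project): summing a slice across scoped threads, stable since Rust 1.63. The borrow checker proves the threads can’t outlive the data they share, so there are no locks, no Arc, no copies.

```rust
use std::thread;

// Sum a slice in parallel with scoped threads (Rust 1.63+).
// The borrow checker guarantees the spawned threads cannot outlive
// `data`, so sharing the slice needs no locks, Arc, or copies.
fn parallel_sum(data: &[u64], workers: usize) -> u64 {
    let workers = workers.max(1);
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    println!("{}", parallel_sum(&data, 4)); // prints 500500
}
```

The equivalent in C or C++ means manually reasoning about which thread may touch what, and when; here the compiler rejects the code outright if you get that wrong.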
What do you mean PCs aren’t faster? Yes, they have more cores, but they also clock higher (mostly) and execute more instructions per clock. Computers now perform way better than ever before in every single metric; most tasks, even linear ones, could be way faster.
It’s all about memory latency and bandwidth now, which have improved greatly; PCs are still getting faster. There’s a new RAM standard being pushed right now, CAMM2, which is really exciting: it pushes back the need for soldered memory.
The faster single-core, out-of-order execution performance of newer x86 CPUs lets them work through that higher bandwidth of data too.
For anyone unsure: Jevons’ paradox is that when there’s more of a resource to consume, humans will consume more of the resource rather than use the gains to get by with less.
Case in point: AI models could be written to be more efficient in token use (see DeepSeek), but instead AI companies just buy up all the GPUs and shove more compute in.
On the expanding bloat: same goes for phones. Our phones are orders of magnitude better than they were 10 years ago, and now they’re loaded with bloat because the manufacturer thinks, “Well, there’s more compute and memory. Let’s shove more bloat in there!”
Case in point: AI models could be written to be more efficient in token use
They are being written to be more efficient in inference, but the gains are being offset by trying to wring more capabilities out of the models by ballooning token use.
Which is indeed a form of Jevons’ paradox.
Costs have been dropping by a factor of 3 per year, but token use increased 40x over the same period. So while the efficiency is contributing a bit to the use, the use is exploding even faster.
I think we’re meaning the same thing.
Yes, but have you considered if I just rephrase what you just said but from a slightly different perspective?
“Haeeves?? Ohh, you were saying what I was saying in a different way!”
Jevons’ paradox is that when there’s more of a resource to consume, humans will consume more of the resource rather than use the gains to get by with less.
More specifically, it’s when an improvement in efficiency causes the underlying resource to be used more, because the efficiency reduces cost, and using that resource then becomes even more economically attractive.
So when factories got more efficient at using coal in the 19th century, England saw a huge increase in coal demand, despite using less coal for any given task.
Also Eli Whitney inventing the cotton gin to make extracting cotton less of a tedious and backbreaking process, which led to a massive expansion of slave plantations in the American South due to the increased output and profitability of the crop.
This happens not only with efficiency gains. There is also risk compensation, which feels kind of the same: cars that are safer encourage more reckless driving, which in turn makes accidents happen more often and eats into the safety gains.
I always felt American car companies were a really good example of that back in the 60s-70s when enormously long vehicles with giant engines were the order of the day. Why not bigger? Why not stronger? It also acted as a symbol of American strength, which was being measured by raw power just like today lol.
This also reminds me of the way video game programmers in the late 70s/early 80s had such tight limitations to work within that you had to get creative if you wanted to make something stand out. Some very interesting stories from that era.
I also love to think about the tricks the programmer of Prince of Persia had employed to get the “shadow prince” to work…
My PC is 15 times faster than the one I had 10 years ago. It’s the same old PC but I got rid of Windows.
Everything bad people said about web apps 20+ years ago has proved true.
It’s like, great, now we have consistent cross-platform software. But it’s all bloated, slow, and only “consistent” with itself (if even). The world raced to the bottom, and here we are. Everything is bound to lowest-common-denominator tech. Everything has all the disadvantages of client-server architecture even when it all runs (or should run) locally.
It is completely fucking insane how long I have to wait for lists to populate with data that could already be in memory.
But at least we’re not stuck with Windows-only admin consoles anymore, so that’s nice.
All the advances in hardware performance have been used to make it faster (more to the point, “cheaper”) to develop software, not faster to run it.
Webapps are in general badly written and inefficient.
I’m dreading when poorly optimized vibe coding works its way into mainstream software and creates a glut of technical debt. Performance is gonna plummet over the next 5 years, just wait.
Already happening with Windows. Also supposedly with Nvidia GPU drivers, with some AMD execs pushing for the same now
Let me assure you this is already happening.
And that us poors still on limited bandwidth plans get charged for going over our monthly quotas because everything has to be streamed or loaded from the cloud instead of installed (or at least cached) locally.
If only bad people weren’t the ones who said it, maybe we would have listened 😔
I almost started a little rant about Ignaz Semmelweis before I got the joke. :P
Bloated electron apps are what makes Linux on desktop viable today at all, but you guys aren’t ready for that conversation.
Yes, in that the existence of bloated electron apps tends to cause web apps to be properly maintained, as a side effect.
But thankfully, we don’t actually have to use the Electron version, to benefit.
Unless it’s Teams apparently, that’s the last Electron app I want to install.
I can only think of a couple Electron apps I use, and none that are important or frequently used.
Uhhh like what?
Note, I don’t know how comprehensive this wiki list is, just quick research
https://en.wikipedia.org/wiki/List_of_software_using_Electron
From those, I’m only currently using a handful.
balenaEtcher, Discord, Synergy, and Obsidian
Spotify, Steam
Balena Etcher is a software crime though.
Steam’s UI uses the Chromium Embedded Framework, which saves 50% of the RAM and startup time.
What’s wrong with balena etcher?
It’s hundreds of megabytes for something that unetbootin, image writer, and others do in a couple of MBs.
Oh dang, is that the overhead of the software framework it uses? I think that’s what this thread was about, but I can’t remember.
The viability of linux isn’t dependent on them though
It’s dependent on being able to run everything anyone could possibly need. “I don’t use it, therefore it is not essential” is the kind of approach that’s always made it niche.
Of course Adobe is still missing.
My point was that a list of programs doesn’t inherently determine whether or not Linux is a viable operating system; its viability varies based on each user’s workflow.
Agreed. I wasn’t the one that claimed that
The tech debt problem will keep getting worse as product teams keep promising more in less time. Keep making developers move faster. I’m sure nothing bad will come of it.
Capitalism truly ruins everything good and pure. I used to love writing clean code and now it’s just “prompt this AI to spit out sloppy code that mostly works so you can focus on what really matters… meetings!”
What really matters isn’t meetings, it’s profits.
so you can focus on what really matters…
~~meetings!~~ collecting unemployment!
Thought leaders spent the last couple of decades propagandizing that features-per-week is the only metric to optimize, and that if your software has any bit of efficiency or quality in it, that’s a clear indicator of a lost opportunity to sacrifice it on the altar of code churning.
The result is not “amazing”. I’d be more amazed had it turned out differently.
Fucking “features”. Can’t software just be finished? I bought App. App does exactly what I need it to do. Leave. It. Alone.
No, never! Tech corps (both devs and app stores) brainwashed people into thinking “no updates = bad”.
Recently, I have seen people complain about lack of updates for: OS for a handheld emulation device (not the emulator, the OS, which does not have any glaring issues), and Gemini protocol browser (gemini protocol is simple and has not changed since 2019 or so).
Maybe these people don’t use the calculator app because arithmetic was not updated in a few thousand years.
arithmetic was not updated in a few thousand years.
Oh boy, don’t let a mathematician hear this.
A big part of this issue is mobile OS APIs. You can’t just finish an android app and be done. It gets bit rot so fast. You get maybe 1-2 years with no updates before “this app was built for an older version of android” then “this app is not compatible with your device”.
“More AI features”? Of course we can implement more AI features for you.
It’s kind of funny how eagerly we programmers criticize “premature optimization”, when often optimization is not premature at all but truly necessary. A related problem is that programmers often have top-of-the-line gear, so code that works acceptably well on their equipment is hideously slow when running on normal people’s machines. When I was managing my team, I would encourage people to develop on out-of-date devices (or at least test their code out on them once in a while).
It’s kind of funny how eagerly we programmers criticize “premature optimization”, when often optimization is not premature at all but truly necessary.
I will forever be salty about the time I was accused of premature optimization for pushing to optimize code that was allocating memory faster than the GC could free it, which was causing one of the production servers to keep crashing with OOM errors.
If urgent emails from one of the big clients, who put the entire company into emergency mode during a holiday, still count as “premature”, then no optimization is ever going to be mature.
Premature optimisation often makes things slower rather than faster. E.g. if something’s written to have the theoretically optimal big-O complexity class, that might only break even around a million elements, and be significantly slower for a hundred elements, where everything fits in L1 and the simplest implementation possible is fine. If you don’t know the kind of situations the implementation will be used in yet, you can’t know whether the optimisation is really an optimisation. If it’s only used a few times on a few elements, then it doesn’t matter either way, but if it’s used loads but only ever on a small dataset, it can make things much worse.
Also, it’s common that the things that end up being slow in software are things the developer didn’t expect to be slow (otherwise they’d have been careful to avoid them). Premature optimisation will only ever affect the things a developer expects to be slow.
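As a toy illustration of that break-even point (function names and data are hypothetical, not from the thread): two functionally identical membership counts, one naive and quadratic, one with the “optimal” hash-based complexity. For a handful of elements the naive one often wins in practice, since it skips allocation and hashing entirely; only measuring on realistic input sizes tells you which to keep.

```rust
use std::collections::HashSet;

// Count how many of `queries` occur in `haystack`.

// O(n * q): the simplest implementation possible. No allocation,
// no hashing; everything stays in cache for small inputs.
fn hits_naive(haystack: &[u32], queries: &[u32]) -> usize {
    queries.iter().filter(|&&q| haystack.contains(&q)).count()
}

// O(n + q): the theoretically optimal complexity class, but it pays
// an up-front cost (allocating and hashing the set) that only
// amortizes on large inputs.
fn hits_hashed(haystack: &[u32], queries: &[u32]) -> usize {
    let set: HashSet<u32> = haystack.iter().copied().collect();
    queries.iter().filter(|&&q| set.contains(&q)).count()
}

fn main() {
    let hay = [3, 1, 4, 1, 5];
    let queries = [1, 2, 3];
    // Identical answers; for five elements, the naive version is
    // usually the faster one anyway.
    println!("{}", hits_naive(&hay, &queries)); // prints 2
    println!("{}", hits_hashed(&hay, &queries)); // prints 2
}
```

Which one is "right" depends entirely on the sizes the code actually sees, which is the whole point of the comment above.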
Optimisation often has a cost, whether it’s code complexity, maintenance, or even just salary. So it has to be worth it, and there are many areas where it just isn’t, unfortunately.
And that lazy mentality just passes the cost to the consumer.
How is that mindset lazy? Unhappy customers also have a cost! At my last job the customer just always bought hardware specifically for the software as a matter of process, partly because the price of the hardware compared to the price of the software was negligible. You literally couldn’t make a customer care.
In industrial software, I’m sure performance is a pretty stark line between “good enough” and “costing us money”.
The pattern I’ve seen in customer facing software is a software backend will depend on some external service (e.g. postgres), then blame any slowness (and even stability issues…) on that other service. Each time I’ve been able to dig into a case like this, the developer has been lazy, not understanding how the external service works, or how to use it efficiently. For example, a coworker told me our postgres system was overloaded, because his select queries were taking too long, and he had already created indexes. When I examined his query, it wasn’t able to use any of the indexes he created, and it was querying without appropriate statistics, so it always did a full table scan. All but 2 of the indexes he made were unused, so I deleted those, then added a suitable extended statistics object, and an index his query could use. That made the query run thousands of times faster, sped up writes, and saved disk space.
Most of the optimization I see is in algorithms, and most of the slowness I see is fundamentally misunderstanding what a program does and/or how a computer works.
Slowness makes customers unhappy too, but with no solid line between “I have what I want” and “this product is inadequate”.
How is that mindset lazy?
Are you really asking how it’s lazy to pass unoptimized code to a customer and make their hardware do all the work for you because optimization was too costly?? Like I get that you are in an Enterprise space, but this mentality is very prevalent and is why computers from today don’t feel that much faster software wise than they did 10 years ago. The faster hardware gets, the lazier devs can be because why optimize when they’ve got all those cycles and RAM available?
And this isn’t a dig at you; that’s software development in general, and I don’t see it getting any better.
It’s not just software development, it’s everywhere. Devices are cheap, people are expensive. So it’s not lazy, he’s being asked to put his expensive time into efforts the customer actually wants to pay for. If having him optimize the code further costs way more than buying a better computer, it doesn’t make sense economically for him to waste his time on that.
Is that yet another example of how the economy has strange incentives? For sure, but that doesn’t make him lazy.
I never called them lazy, I stated that the mentality is lazy, which it is. Whether or not that laziness is profit driven, it still comes down to not wanting to put forth the effort to make a product that runs better.
Systemic laziness as profit generation is still laziness. We’re just excusing it with cost and shit, and if everyone is lazy, then no one is.
If cost is a justification for this kind of laziness, it also justifies slop code development. After all, it’s cheaper that way, right?
Exactly the mindset responsible for the state of modern software.
Your spelling is terrible
Bro just denied bro’s lemmy comment pull request
Oops, forgot the AI step
On Linux it really is noticeable
Well, until you open a browser… or five, because these days nobody wants to build native applications anymore and instead they shove webapps into electron containers.
Right now, my laptop doesn’t have to run much. Just a combination of KDE, browser, emails, music player, a couple of messengers and some background services. In total, that uses about 9.5 GB of RAM. 20 years ago we would have run the same workload with less than 1 GB.
Yeah, Discord is eating 1.5 GB of RAM. Actually crazy.
I recently had occasion to run a headless Linux distro and got my socks knocked off when I ran top and saw idle RAM use at fucking 400 MB.
The same? Try worse. Most devices have seen input latency going up. Most applications have a higher latency post input as well.
Switching from an old system with old UI to a new system sometimes feels like molasses.
I work in support for a SaaS product, and every single click on the platform takes a noticeable amount of time. I don’t understand why anyone is paying any amount of money for this product. I have the FOSS equivalent of our software in a test VM and it’s far more responsive.
Except for KDE. At least compared to cinnamon, I find KDE much more responsive.
AI-generated code will make things worse. The models are good at providing solutions that generally give the correct output, but the code they generate tends to be shit by final-product standards.
Though perhaps performance will improve since at least the AI isn’t limited by only knowing JavaScript.
I still have no idea what it is, but over time my computer, which has KDE on it, gets super slow and I HAVE to restart. Even if I close all applications it’s still slow.
It’s one reason I’ve been considering upgrading from 6 cores and 32 GB to 16 and 64.
An upgrade isn’t likely to help. If KDE is struggling on 6 cores and 32 GB, you have something going on that 16 and 64 will only let run twice as long before choking.
Wait till it’s slow, then check your RAM/CPU in top and the disk in iotop; hammering the disk/CPU (or a bad disk/SSD) can make KDE feel slow.

plasmashell --replace # this just dumps plasmashell’s widgets/panels

See if you got a lot of RAM/CPU back or it’s running well; if so, it might be a bad widget or panel.

If it’s still slow:

kwin_x11 --replace

or

kwin_wayland --replace &

This dumps everything and refreshes the graphics driver/compositor/window manager.
If that makes it better, you’re likely looking at a graphics driver issue
I’ve seen some stuff where going to sleep and coming out degrades perf
I’ve seen some stuff where going to sleep and coming out degrades perf
I’ll have to try some of these suggestions myself, as I’ve been dealing with my UI locking up if the monitors turn off and I wake it up too soon. Sometimes I still have ssh access to it, so thanks for the shell commands!
I was doing horrible things the other day and ended up with my KDE login page not working when I came out of sleep.
CTRL+ALT+F2 > text login > loginctl unlock-sessions
I’m aware of the TUI logins (I think f7 is your graphical, but I might be wrong) and sometimes those work too. I’ve started just sshing in because the terminal switching was hit and miss.
But thanks for that loginctl command, I’ll have to give that one a try as well!
F7 is generally right, some distros change it up (nixos is 3)
Hmm, I haven’t noticed high CPU usage, but usually it only leaves me around 500MB actually free RAM, basically the entire rest of it is either in use or cache (often about 15 gigs for cache). Turning on the 64 gig swapfile usually still leaves me with close to no free RAM.
I’ll see if it’s slow already when I get home, I restarted yesterday. Then I’ll try the tricks you suggested. For all I know maybe it’s not even KDE itself.
Root and home are on separate NVMe drives and there’s a SATA SSD for misc non-system stuff.
GPU is nvidia 3060ti with latest proprietary drivers.
The PC does not sleep at all.
To be fair I also want to upgrade to speed up Rust compilation when working on side projects and because I often have to store 40-50 gigs in tmpfs and would prefer it to be entirely in RAM so it’s faster to both write and read.
Don’t let me stop you from upgrading, that’s got loads of upsides. Just suspecting you still have something else to fix before you’ll really get to use it :)
It CAN be OK to have very low free RAM if it’s used up by buffers/cache (which are freeable). If buff/cache gets below about 3 GB on most systems, you’ll start to struggle.
If you have 16GB, it’s running low, and you can’t account for it in top, you have something leaking somewhere.
Lol I sorted top by memory usage and realized I’m using 12 gigs on an LLM I was playing around with to get local code completion in my JetBrains IDE. It didn’t work all that well anyway and I forgot to disable it.
I did have similar issues before this too, but I imagine blowing 12 gigs on an LLM must’ve exacerbated things. I’m wondering how long I can go now before I’m starting to run out of memory again. Though I was still sitting at 7 gigs buffer/cache and it hadn’t slowed down yet.
12/16, That’ll do it. Hopefully that’s all, good luck out there and happy KDE’ing
Have you tried disabling the file indexing service? I think it’s called Baloo?
Usually it doesn’t have too much overhead, but in combination with certain workflows it could be a bottleneck.
Have you gone through settings and disabled unnecessary effects, indexing and such? With default settings it can get quite slow but with some small changes it becomes very snappy.
I have a 2 core, 2 thread, 4gb RAM 3855u Chromebook that I installed Plasma on, and it’s usually pretty responsive.
I have not, but also it’s not slow immediately, it takes time under use to get slow. Fresh boot is quite fast. And then once it’s slow, even if I close my IDE, browsers and everything, it remains slow, even if CPU usage is really low and there’s theoretically plenty of memory that could be freed easily.
Have you tried disabling all local Trojans and seeing if that helps?
I switched to Durex, seems to be faster now, thanks!
I want to avoid building react native apps.
You do really feel this when you’re using old hardware.
I have an iPad that’s maybe a decade old at this point. I’m using it for the exact same things I was a decade ago, except that I can barely use the web browser. I don’t know if it’s the browser or the pages or both, but most web sites are unbearably slow, and some simply don’t work, javascript hangs and some elements simply never load. The device is too old to get OS updates, which means I can’t update some of the apps. But, that’s a good thing because those old apps are still very responsive. The apps I can update are getting slower and slower all the time.
It’s the pages. It’s all the JavaScript. And especially the HTML5 stuff. The amount of code that is executed in a webpage these days is staggering. And JS isn’t exactly a computationally modest language.
Of the 200 kB loaded on a typical Wikipedia page, about 85 kB is JS and CSS.
Another 45 kB is a single SVG, which in complex cases is a computationally nontrivial image format.
I don’t agree. It’s both. I’ve opened basic no JS sites on old tablets to test them out and even those pages BARELY load
What caused the latency in that case?
Probably just the browser itself, considering how bloated they’re getting. It’s not super surprising: if an app runs only about as fast (on a good day) as it did 5-10 years ago on a then-new phone, it’s gonna run like dogshit on a phone from that era.
I can’t update YouTube on my iPad 2, which I got running again for the first time in years. It said it had been ~70,000 hours since the last full charge. I wanted to use it to watch videos when I’m going to bed, but I can’t actually log in to YouTube because the app is so old, and I seemingly can’t update it.
I was using the web browser and yeah I don’t remember it being so damn slow. It’s crazy how that is.
Is your iPad on iOS 9.3.5? It is infamously slow.
It is possible to downgrade it to 8.4.1 (faster, partially more broken) or even 6.1.3 (fast and old school, many apps don’t work, but there are apps in Cydia to fix stuff).
Biggest issue I encountered is sites requiring TLSv1.3 for HTTPS, which those old browsers simply do not support.
I have an old YouTube app on my iPad, and it still works fine. One of the more responsive apps on the device. I get nagged nearly every time I use it to update to the newest YouTube release, but that’s impossible. I’d first have to upgrade my OS, and Apple no longer releases new OSes for this generation of iPads. So, I’m stuck with an old YouTube, which mostly works fine, and an occasional nag message.
I’m sure within a year or two mine will be like yours and YouTube will simply no longer work. But, for now it’s in a relatively good spot where I can use a version of YouTube designed for this particular hardware that doesn’t feel sluggish.
When you become one with the penguin, though … then you can begin to feel how much faster modern hardware is.
Hell, I’ve got a 2016 budget-model chromebook that still feels quick and snappy that way.
I’ve got a 2007 laptop that was shitty even for its time, and it does the job perfectly as a home server with Debian and a few good open source services I want to host
Sadly, it is not how it is for me. I’ve never (in last 20 years) experienced freezes that bad and that frequent as with my new beefy Linux PC.
But… 2016 was a decade ago. If it feels quick and snappy that way that means the post is right.
Which it kinda isn’t but hey.
the point is the software is what’s wrong, not the hardware. it feels snappy because it’s linux, not because it’s old hardware.
Except the Linux userbase has been saying that exact thing for the past ten years. So again: has Linux also degraded in sync, or, hear me out here, is this mostly a nostalgia thing that makes you forget the kludgy performance issues of the software you used when you were younger, while things have mostly gotten snappier over time across the board?
As a current dual booter I’ll say that Windows and Linux don’t feel fundamentally different these days, for good and ill. Windows has a remarkably crappy and entirely self-inflicted issue with their online-search-in-Start-menu feature, which sucks but is toggleable at least. Otherwise I have KDE and Win11 set up the same way and they both work pretty much the same. And both measurably better than their respective iterations 10, let alone 15 or 20 years ago.
Windows and Linux don’t feel fundamentally different these days
Try Windows 11 vs. Linux on a shitty old laptop with a budget 2-core processor and 2GB of RAM. Then tell me Windows and Linux don’t feel any different.
My bf bought me a brand new laptop with Win 10 preinstalled, and even after disabling or uninstalling as much as I could, it was literally like watching a slideshow. Then I installed Linux, and it…worked like you’d expect a brand new computer to work, fast and smooth. Never used Win 11 because I stopped using Windows after that.
Myyyyyeeeeh. A lightweight distro or a contemporaneous distro, sure.
If I’m running GPU accelerated Steam, tons of tabs on Firefox and the same highly customized KDE desktop full of translucent components and extra animations I am willing to bet they’d both chug.
Which is what the conversation is about: new software doesn’t suck, it’s doing more stuff.
For sure, all things being equal Linux does run lighter on RAM and VRAM. So if you’re running something specifically memory-limited, such that Windows and Linux fall on opposite sides of overflowing the available memory, you’ll definitely see better performance on Linux. But that’s not the same thing as poorly made software having an inherent, huge performance overhead.
I’m pretty sure the “unused RAM is wasted RAM” thing has caused its share of damage from shit developers who took it to mean use memory with reckless abandon.
Would be nice if I could force programs to use more RAM, though. I actually have 100 GB of DDR4 in my desktop; I bought it over a year ago when DDR4 was unloved and cheap. But I’ve tried to force programs not to offload as much. Like Firefox: I hate that I have the RAM, but it’s still unloading webpages in the background and won’t ever use more than 6 GB.
I actually have 100GB of DDR4
They’ve got RAM! Get’em!
Will disabling the swap file fix that?
Don’t fully disable swap on Windows, it can break things :-/
I didn’t know that, that used to not be the case.
Maybe it has changed again, but in the past I gave it a try. When 16 GB was a lot. Then when 32 GB was a lot. I always thought “Not filling up the RAM anyway, might as well disable it!”
Yeah, no, Windows is not a fan. Like you get random “running out of memory” errors, even though with 16 GB I still had 3-4 GB free RAM available.
Some apps require the page file, same as crash dumps. So I just set it to a fixed value (like 32 GB min + max) on my 64 GB machine.
If not, just mount your swap file in RAM lmao
RAM disk is your friend.
Programs that care about memory optimization will typically adapt to your setup, up to a point. More RAM isn’t going to make a program run any better if it has no use for it.
Set swappiness to 5 or something similar, or disable swap altogether unless you’re regularly getting close to max usage
With 32 and 64 GB systems I’ve never run out of RAM, so the RAM isn’t the issue at all.
Optimization just sucks.
Have you ever tried running a decent sized LLM locally?
Decent sized for what?
Creative writing and roleplay? Plenty, but I try to fit it into my 16 GB VRAM as otherwise it’s too slow for my liking.
Coding/complex tasks? No, that would need 128 GB and upwards, and it would still be awfully slow, unless you use a Mac with unified memory.
For image and video generation you’d want to fit it into GPU VRAM again, system RAM would be way too slow.
I use a Mac with unified memory, so that distinction slipped my mind.
In most cases, you either optimize the memory, or you optimize the speed of execution.
Having more memory means we can optimize the speed of execution.
Now, the side effect is that we can also afford to be slower to gain other benefits: ease of development (enter JavaScript everywhere, or Python) at the cost of speed, maintainability at the cost of speed, etc.
So, even though you dont always see performance gains as the years go, that doesn’t mean shit devs, it means the priority is somewhere else. We have more complex software today than 20 years ago because we can afford not to focus on ram and speed optimization, and instead focus on maintainable, unoptimized code that does complex stuff.
Optimization is not everything.
unoptimized code that does complex stuff.
You can still have complex code that is optimized for performance. You can spend more resources to do more complex computations and still be optimized so long as you’re not wasting processing power on pointless stuff.
For example, in some of my code I have to get a physics model within 0.001°. I don’t use that step size every loop, because that’d be stupid and wasteful. I start iterating with 1° until it overshoots the target, back off, reduce the step to 1/10, and loop through that logic until I get my result with the desired accuracy.
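That coarse-to-fine loop can be sketched roughly like this (the quadratic test function and target are stand-ins, not the commenter’s actual physics model, and the sketch assumes f increases monotonically over the search range):

```rust
// Find the smallest angle (in degrees) where `f` reaches `target`,
// to within `accuracy` degrees. March forward with a coarse step
// until we overshoot, then back off one step and repeat with a step
// ten times finer. Assumes `f` is monotonically increasing.
fn refine(f: impl Fn(f64) -> f64, target: f64, mut step: f64, accuracy: f64) -> f64 {
    let mut angle = 0.0;
    while step >= accuracy {
        while f(angle) < target {
            angle += step; // march until we overshoot the target
        }
        angle -= step; // back off below the target...
        step /= 10.0; // ...and refine with a 10x smaller step
    }
    angle
}

fn main() {
    // Hypothetical stand-in model: f(x) = x^2, so the crossing for
    // target 2.0 is at sqrt(2) ≈ 1.41421.
    let angle = refine(|x| x * x, 2.0, 1.0, 0.001);
    println!("{angle:.3}"); // prints 1.414
}
```

If f isn’t monotonic, the inner loop can blow past the answer or never terminate, so a real model needs more care; this is just the step-size-refinement skeleton.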
Of course! But sometimes, most often even, the optimization is not worth the development to get it. We’re particularly talking about memory optimization here, and it is so cheap (or at least it was… ha) that it is not worth optimizing like we used to 25 years ago. Instead you use higher level languages with garbage collection or equivalents that are easier to maintain with and faster to implement new stuff with. You use algorithms that consume a fuck ton of memory for speed improvements. And as long as it is fast enough, you shouldn’t over optimize.
Proper optimization these days is more of a hobby.
Now obviously some fields require a lot more optimization - embedded systems, for instance. Or simulations, which get a lot of value from being optimized as much as possible.
Unfortunately, a lot of dev studios tend to just build their games on the highest-end systems they can and don’t bother checking lower-end hardware. For a lot of systems, there are plenty of programs that don’t run “good enough”. And sometimes I’ll even have issues with M$ applications on decent workstation hardware; Notes and Teams are frustratingly slow to work with sometimes.
Windows 11 is the slowest Windows I’ve ever used, by far. Why do I have to wait 15-45 seconds to see my folders when I open explorer? If you have a slow or intermittent Internet connection it’s literally unusable.
Even Windows 10 is literally unusable for me. When pressing the windows key it literally takes about 4 seconds until the search pops up, just for it to be literally garbage.
Found out about this while watching “Halt and Catch Fire” (AMC’s effort to recreate the magic of Mad Men, but on the computer).
In 1982 Walter J. Doherty and Ahrvind J. Thadani published, in the IBM Systems Journal, a research paper that set the requirement for computer response time to be 400 milliseconds, not 2,000 (2 seconds) which had been the previous standard. When a human being’s command was executed and returned an answer in under 400 milliseconds, it was deemed to exceed the Doherty threshold, and use of such applications were deemed to be “addicting” to users.
if it only occurs hours or days after boot, try killing the startmenuexperiencehost process. that’s what I was doing until I switched to linux
I am using windows like once a week at maximum and then it only takes about 10 minutes. So I kind of do not really care and am glad, that I do not need to use it more often.
It takes forever to boot I know that and that’s from fast food which is extra pathetic.
fast food
Too many nuggies
Maybe if Windows quit pigging out on tendies and slimmed down, it wouldn’t be so bad.
The Windows bloat each new generation is way out of control.
Probably that’s the folder explorer or whatever itself crashing.
yeah
and like why does it crash? it worked fine on Windows 10
I’ve given up trying to understand modern PC software. I can barely keep up with the little microcontrollers I work with. They aren’t so little.