

(diarrhea noise)


(diarrhea noise)


Not trying to shit on your idea here, but this is actually going to make your systems much less performant in a number of ways. Let me explain why.
The Linux kernel memory scheduler is extremely good at what it does. It’s probably the reason why Linux crushed adoption into the server market so rapidly 20+ years ago. Not only is it fast, it’s super smart.
The process scheduler is also top-tier, with BSD only sliiightly maybe winning out in some edge cases because it takes more resources into consideration when planning executions. This is why BSD wins out in the network performance category so often (until very recently).
All of that to say, if you have enough memory to hold whatever you need to run in memory, everything runs great. Cut to you not having memory, and needing to swap.
Once swap enters the fray, the performance of both memory and process scheduling degrades sharply as the number of running processes grows. For each extra process needing more memory, the rest of the system takes a hit from the interrupts needed to find and evict cached pages, or allocate swap. This is called Memory Contention.
Memory Contention engages a whole host of different levers in various parts of the kernel. The more of it you have, the more CPU cycles you burn resolving it, and you also increase wait time for everything in the system from I/O waits at the process scheduler.
What you’re doing by enabling BOTH swap and zram the way you are is increasing the workload of the kernel by a factor of two: twice the effort at the CPU, and twice the I/O wait when contention comes into play. It’s just not performant, and it’s wasting energy.
They both deal with swap, just either on disk or in allocated RAM, and you’re making your machine do extra work to figure out where it should be swapping to.
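If someone does keep both anyway, the usual mitigation (which doesn’t remove the double bookkeeping described above, just orders it) is to give zram a higher swap priority so the disk is only touched once zram fills. A hypothetical sketch, assuming a zram device at /dev/zram0 and a swapfile at /swapfile:

```shell
# /etc/fstab — hypothetical entries; the device with the higher pri=
# value is used first, so zram absorbs pressure before the disk does
#
#   /dev/zram0  none  swap  defaults,pri=100  0 0
#   /swapfile   none  swap  defaults,pri=10   0 0

# Verify what the kernel is actually using and in what order:
#   swapon --show
```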


That’s just the difference between an LTS and everything else though. Debian is meant to be slow to release, battle tested, and focused on stability.


They’ve made about a half dozen stupid decisions in just the past decade that have earned the tarnish on their reputation. Trying to rationalize it won’t make the issues go away.
Note that this same hate sure isn’t going towards Debian.


Autoimmune research might be useful.


Good to know!


Lol…yuuup




I think you’re unfamiliar with the general ideas around exactly what a display is in an OS, so don’t be offended if I break it down:
In Windows, there is only THE compositor, meaning no separate distinction from one process to another; it’s all the same display process as far as the OS goes.
In MacOS, there is the compositor (the screen display manager) that loads first, and everything after that is a subprocess that handles different things: login security, window management, launcher, search…etc.
In Linux everything is generally separate. Your first login screen is its own process, which then calls another process to load your DE or whatever, and then everything is handed off after that.
If all you want is a “Kiosk Mode”, you just skip everything else. No display manager, login manager, DE…etc. You just boot the kernel, and have a compositor load. That compositor is then responsible for displaying whatever you launch from there. So you daisy-chain things like that, and skip all the stuff you don’t need.
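As a concrete sketch of that daisy-chain, here’s one hypothetical way to do it with systemd and the cage kiosk compositor (the unit name, the “kiosk” user, and “myapp” are all placeholders, not anything from the thread):

```shell
# Hypothetical sketch: boot straight into a bare compositor running one
# app — no display manager, no login manager, no DE.
cat > /etc/systemd/system/kiosk.service <<'EOF'
[Unit]
Description=Kiosk session (bare compositor, no DM/DE)

[Service]
User=kiosk
ExecStart=/usr/bin/cage -- /usr/bin/myapp
Restart=always

[Install]
WantedBy=graphical.target
EOF

# Then enable it so it starts at boot:
#   systemctl enable kiosk.service
```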


Wayland isn’t going to be your issue if you’re saying you’re building something from scratch. It’s pretty lightweight on its own, because it’s just a framework of libraries and APIs.
The compositor and environment you build around it is what will be taking the majority of resources to run whatever interface you’re going to have.
Sway is fairly minimal, but if you just run a bare compositor layer and figure out a launcher from there, it would be lighter. But you’re talking about megabytes of difference, so it’s not going to be much different.
Edit: also, I was assuming you’re asking about memory, but could be wrong.


In Gentoo, emerge compiles packages from source on practically every machine you set up. matrixOS remedies this by building once and distributing binaries, so you skip the compilation wait entirely.
Okay, soooooo…basically disregarding the entire point and benefit of Gentoo? The entire reason you’d want to build from source on a specific machine or architecture is for the compiler optimizations done on that hardware. Just shipping binaries around is normal, so I’m not getting what the point is here.
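For context, the per-machine tuning being given up here lives in Portage’s make.conf. A hypothetical fragment (the exact flags and job count are placeholders, not a recommendation):

```shell
# /etc/portage/make.conf — hypothetical fragment; -march=native is the
# whole point of building on the target hardware, and it’s exactly what
# a one-size-fits-all binary distribution can’t give you
COMMON_FLAGS="-O2 -march=native -pipe"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
MAKEOPTS="-j8"
```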


Lolwut?
You think that’s why Canonical and RedHat make money, huh? 🤣🤣🤣


Symlinks are just links to the actual file. Unless you’re setting specific flags, you’re copying the actual files along with everything else. I’d run a dedupe script on your copied files and see if you didn’t happen to double up on some things.
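A quick demonstration of why the flag matters (paths are hypothetical): GNU cp’s -a preserves symlinks as symlinks, while -L dereferences them into full duplicate copies, which is one way the doubled-up data can appear.

```shell
# Set up a file and a symlink to it
rm -rf /tmp/symdemo && mkdir -p /tmp/symdemo/src
echo "data" > /tmp/symdemo/src/real.txt
ln -s real.txt /tmp/symdemo/src/link.txt

cp -a  /tmp/symdemo/src /tmp/symdemo/kept       # link.txt stays a symlink
cp -rL /tmp/symdemo/src /tmp/symdemo/flattened  # link.txt becomes a real file

[ -L /tmp/symdemo/kept/link.txt ]         # still a symlink, no extra data
[ ! -L /tmp/symdemo/flattened/link.txt ]  # now a regular file: duplicated data
```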


Depends on how the code is using it. You could look deeper, but that’s not what OP is asking for help with.
It’s not about how big they are really, it’s about how many can be open at a time. Without sane limits, anything is a ticking time bomb.


Reduce the number of active connections, or the total number of active transfers available at once, and that will lower that number.
If you’re POSITIVE your memory situation is in good shape (meaning you’re not running out of memory), then you can increase the max number of open files allowed for your user, or globally: https://www.howtogeek.com/805629/too-many-open-files-linux/
Again: if you do this, you will likely start hitting OOMkill situations, which is going to be worse. The file limits set right now are preventing that from happening.
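If you do decide to raise the cap, check what you’re currently at first, and prefer a per-user bump over a global one. A sketch (“youruser” and the values are placeholders):

```shell
# Inspect the current limits for this shell
ulimit -Sn   # soft limit (what you actually hit)
ulimit -Hn   # hard limit (the ceiling the soft limit can be raised to)

# Persistent per-user entries go in /etc/security/limits.conf,
# e.g. (hypothetical values):
#   youruser  soft  nofile  8192
#   youruser  hard  nofile  16384
```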


You have a process holding open a bunch of FD’s. Instead of just blindly increasing the system limits, try and find the culprit with something like: lsof | awk '{print $1}' | sort | uniq -c | sort -nr
That will give you a list of which processes are holding open descriptors. See which are the worst offenders and try and fix the issue.
You COULD just increase the fd open max, but then you actually will more than likely run into OOMkill issues because you aren’t solving the problematic process.
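An alternative sketch that reads /proc directly (Linux-only assumption), which sidesteps lsof counting the same descriptor once per thread and gives a per-PID open-FD tally:

```shell
# Count open file descriptors per process via /proc; processes you lack
# permission to inspect are silently skipped
for pid in /proc/[0-9]*; do
  n=$(ls "$pid/fd" 2>/dev/null | wc -l)
  if [ "$n" -gt 0 ]; then
    printf '%6d %s\n' "$n" "$(cat "$pid/comm" 2>/dev/null)"
  fi
done | sort -rn | head
```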


May you be the first
$1,000 won’t cover anything for people without other income.