  • Not trying to shit on your idea here, but this is actually going to make your systems much less performant in a number of ways. Let me explain why.

    The Linux kernel's memory management subsystem is extremely good at what it does. It's probably the reason Linux crushed the server market so rapidly 20+ years ago. Not only is it fast, it's super smart.

    The process scheduler is also top-tier, with BSD only sliiightly maybe winning out in some edge cases because it takes more resources into consideration when planning executions. This is why BSD wins out in the network performance category so often (until very recently).

    All of that to say, if you have enough memory to hold whatever you need to run in memory, everything runs great. Cut to you not having memory, and needing to swap.

    Once swap enters into the fray, the performance of both memory and process scheduling drops exponentially depending on the number of running processes. For each extra process needing more memory, the rest of the system takes a hit from the interrupt needed to find and clear cached pages, or allocate swap. This is called Memory Contention.
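    A quick way to actually see that contention (on kernels 4.20+ built with PSI support) is the pressure stall information interface; a small sketch, assuming a Linux system:

```shell
# Memory pressure stall information (PSI):
# "some" = share of time at least one task stalled waiting on memory,
# "full" = share of time all non-idle tasks stalled at once.
# Falls back gracefully if this kernel has PSI disabled.
cat /proc/pressure/memory 2>/dev/null || echo "PSI not available on this kernel"
```

    Rising avg10/avg60 numbers on the "some" line mean reclaim and swap are already eating CPU time.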

    Memory Contention engages a whole host of different levers in various parts of the kernel. The more of it you have, the more CPU cycles you spend solving for it, and the more I/O wait you add for everything else in the system at the process scheduler.

    What you're doing by enabling BOTH disk swap and zram in the way you are is doubling the kernel's workload: twice the amount of effort at the CPU, and twice the amount of I/O wait when contention comes into play. It's just not performant, and it's wasting energy.

    They both deal with swap, just in different places (the disk versus an allocated region of RAM), and you're making your machine do extra work to figure out where it should be swapping to.
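    If you do want zram and a disk swapfile to coexist, the usual fix is to give zram a strictly higher priority, so the disk device is only a spill-over tier and the kernel never has to guess. A configuration sketch, assuming a zram0 device is already set up and your disk swap is at /swapfile (run as root):

```shell
# Higher priority is used first: zram absorbs all swap traffic
# until it is full, then the kernel spills over to the disk swapfile.
swapon -p 100 /dev/zram0
swapon -p 10 /swapfile
swapon --show    # check the PRIO column to confirm the ordering
```

    With distinct priorities there is a single, fixed swap order rather than two tiers competing.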


  • I think you’re unfamiliar with the general ideas around exactly what a display is in an OS, so don’t be offended if I break it down:

    In Windows, there is only THE compositor, meaning no distinction between one process and another; it's all the same display process as far as the OS goes.

    In macOS, there is the compositor (the screen display manager) that loads first, and everything after that is a subprocess that handles different things: login security, window management, launcher, search, etc.

    In Linux everything is generally separate. Your first login screen is its own process, which then calls another process to load your DE or whatever, and then everything is handed off after that.

    If all you want is a "Kiosk Mode", you just skip everything else. No display manager, login manager, DE, etc. You just boot the kernel and have a compositor load. That compositor is then responsible for displaying whatever you launch from there. So you daisy-chain things like that, and skip all the stuff you don't need.
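    To make that chain concrete: a minimal setup can be just the kernel, systemd, and one kiosk compositor such as cage, which runs a single app full-screen. A sketch, assuming cage is installed and a hypothetical app binary at /usr/bin/myapp (the unit name and user here are made up for illustration):

```ini
# /etc/systemd/system/kiosk.service (illustrative)
[Unit]
Description=Bare Wayland kiosk
After=systemd-user-sessions.service

[Service]
User=kiosk
ExecStart=/usr/bin/cage -- /usr/bin/myapp
Restart=always

[Install]
WantedBy=graphical.target
```

    No display manager, no DE; the compositor is the only graphical process, and if the app dies, systemd restarts the chain.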


  • Wayland isn’t going to be your issue if you’re saying you’re building something from scratch. It’s pretty lightweight on its own, because it’s just a framework of libraries and APIs.

    The compositor and environment you build around that is what will take the majority of resources to run whatever interface you're going to have.

    Sway is fairly minimal, but if you run just a bare compositor layer and figure out a launcher from there, it would be lighter. But you're talking about megabytes of difference, so it's not going to be much.

    Edit: also, I was assuming you’re asking about memory, but could be wrong.
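    If memory is the question, you can measure it instead of guessing; each process's resident set size is exposed in /proc on Linux. A small sketch:

```shell
# Print a process's resident memory in kB, read from /proc.
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Example: this shell's own footprint
rss_kb $$
```

    Point it at your compositor (e.g. `rss_kb "$(pidof sway)"`) under each setup to see the actual difference.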