• 2 Posts
  • 10 Comments
Joined 3 months ago
Cake day: February 3rd, 2026


  • Honestly, I think your friend is right; it’s a question of economies of scale. As you scale up, less and less resource is wasted on overhead. Once you reach the scale where you need hundreds or thousands (or hundreds of thousands) of servers to operate your site, you’d likely be able to dimension your fleet so that each server is utilized fairly efficiently. You’d only need to keep enough spare capacity to handle traffic bursts, and those bursts also become smaller relative to the baseline load the larger your site grows.

    Realistically, most self-hosted setups will be mostly idle in terms of CPU capacity, with bursts whenever the few users access the services.

    As for datacenters using optimized machines, there is probably some truth to it. Server CPUs usually constrain the power to each core so that more cores fit on the CPU, whereas consumer CPUs (at least high-end ones) crank up the power to get the most single-core performance. This depends heavily on what kind of hardware you are self-hosting on, though. If you are using a Raspberry Pi you’re of course going to come out ahead, and the same is probably true for mini PCs. However, if you’re using your old gaming computer with an older high-end CPU, your power efficiency is very likely sub-optimal.

    As a “fun” fact/anecdote, I recently calculated that my home server, which pulls ~160 W, comes out to about 115 kWh in a month. That is a bit closer than I would like to the 150–200 kWh I spend on charging my plug-in hybrid each month… To be fair, though, I have not invested much in the power efficiency of this machine: it’s the old-gaming-computer approach, plus a lot of HDDs.
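    For reference, the arithmetic behind that figure is just steady draw times hours (a minimal sketch, assuming a constant 160 W load and a 30-day month):

```python
# Monthly energy from a steady power draw, assuming a 30-day month.
watts = 160                         # measured draw from the comment above
hours_per_month = 24 * 30
kwh_per_month = watts / 1000 * hours_per_month
print(kwh_per_month)  # 115.2
```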

    That said, there are plenty of other advantages to self-hosting, but I’m not sure the environmental angle works out better overall.
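    The burst-smoothing point in the first paragraph is essentially statistical multiplexing: many independent bursty users aggregate into a much flatter load, so the spare capacity needed for peaks shrinks relative to the baseline. A rough simulation sketch (the burst size and probability are hypothetical numbers, not from the comment):

```python
import random

random.seed(42)

def peak_to_mean(n_users, steps=2000):
    """Peak-to-mean ratio of the aggregate load of n_users independent
    users, each idling at 1 unit and bursting to 20 units 5% of the time."""
    totals = [
        sum(20 if random.random() < 0.05 else 1 for _ in range(n_users))
        for _ in range(steps)
    ]
    return max(totals) / (sum(totals) / len(totals))

for n in (1, 10, 100, 1000):
    print(f"{n:>5} users: peak/mean = {peak_to_mean(n):.2f}")
```

    A single user needs to provision for roughly 10x their average load, while a thousand aggregated users need only a few percent of headroom over theirs.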




  • Worth noting that, despite the headline, this has nothing to do with the huge outage at the end of 2025.

    The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”

    Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.

    I would also have felt some level of schadenfreude if it turned out that any of the really big incidents at the end of 2025 was a result of management’s aggressive pushes for AI coding. Perhaps that would cool the heads of executives a bit if there were very real examples of shit properly hitting the fan…