Formerly /u/Zalack on Reddit.

  • 11 Posts
  • 136 Comments
Joined 1 year ago
Cake day: June 13th, 2023






  • Honestly, sometimes just making a show of it not getting to you can get people like that to leave you be. Just start looking her dead in the eye and saying “thanks for the tip. I’ll take it under advisement” every time she starts doing that to you. Every time. Same inflection. Even if you have to do it 20 times in a row. Even if she gets angry. Don’t say anything else to her unless it’s required to do your job.

    Eventually she’ll get annoyed or bored enough to leave you alone and try to bother someone else she can get a reaction out of.






  • It’s worth pointing out that reproducible builds aren’t always guaranteed if software developers aren’t specifically programming with them in mind.

    Imagine a program that inserts randomness at compile time to generate seeds. Each build would produce a different seed even from the same source code, and would fail a diff against the actual release.

    Or maybe the developer inserts information about the build environment for debugging such as the build time and exact OS version. This would cause verification builds to differ.

    Rust (the programming language), for instance, has a long history of working towards reproducible builds for software written in the language.

    It’s one of those things that sounds straightforward and then pesky reality comes and fucks up your year.







  • It’s not that strange. A timeout occurs on several servers overnight, and maybe a bunch of Lemmy instances are all run in the same timezone, so all their admins wake up around the same time and fix it.

    Well it’s a timeout, so by fixing it at the same time the admins have “synchronized” when timeouts across their servers are likely to occur again since it’s tangentially related to time. They’re likely to all fail again around the same moment.

    It’s kind of similar to the thundering herd problem, where a bunch of clients hitting errors will synchronize their retries into a giant herd and strain the server. It’s why good clients add exponential backoff AND jitter (a little bit of randomness in when the retry happens, not just every 2^x seconds). That way, if you have a million clients, it’s less likely that all 1,000,000 of them will attempt a retry at the exact same time because they all got an error from your server at the same moment when it failed.
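    A minimal sketch of that retry schedule (the function name and parameters here are my own, not from any particular client library): "full jitter" draws each delay uniformly between zero and the capped exponential bound, so clients that failed at the same instant spread their retries back out.

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter: a random delay drawn
    uniformly from [0, min(cap, base * 2**attempt)] seconds."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

# Five clients that all failed at the same instant retry at scattered times:
delays = [backoff_delay(attempt=3) for _ in range(5)]
```

    Without the jitter, every client computes the same deterministic delay from the same failure time and the herd stays synchronized; the randomness is what actually breaks the pattern.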

    Edit: looked at the ticket and it’s not exactly the kind of timeout I was thinking of.

    This timeout might be caused by something that’s loosely a function of time or resource usage. If it’s resource usage, then because the servers are federated, those spikes might happen across servers at once as everything pushes events to subscribers. So failure gets synchronized.

    Or it could just be a coincidence. We as humans like to look for patterns in random events.


  • Haven’t seen acollierasto mentioned yet.

    She’s a scientist with a PhD in astrophysics and does deep dives on specific topics, generally from the angle of science communication and how it often fails that topic in some way.

    Her videos are very simple and low production value, but packed with information. She’s a great communicator, and you walk away from each video not just with better knowledge of a topic, but also with a sense of where the holes in that knowledge are, like where the limits of the metaphor used to convey the topic to you lie.