So after we’ve extended the virtual cloud server twice, we’re at the max for the current configuration. And with this crazy growth (almost 12k users!!), even this expanded server is steadily approaching capacity.
Therefore I decided to order a dedicated server. Same one as used for mastodon.world.
So the bad news… we will need some downtime. Hopefully not too much. I will prepare the new server, copy (rsync) stuff over, stop Lemmy, do a last rsync and change the DNS. If all goes well it should take maybe 10 minutes of downtime, 30 at most. (With mastodon.world it took 20 minutes, mainly because of a typo :-) )
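For the curious, the cut-over is roughly the sequence sketched below. This is only an illustrative sketch: the host names, paths, and docker compose deployment are placeholders and assumptions, not the actual lemmy.world layout.

```python
#!/usr/bin/env python3
# Illustrative sketch of the cut-over steps described above.
# Host names, paths, and the docker compose deployment are assumptions,
# not the real lemmy.world setup.
import subprocess

OLD = "old-server"         # placeholder: current virtual cloud server
NEW = "new-server"         # placeholder: new dedicated server
DATA = "/var/lib/lemmy/"   # placeholder data directory

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Warm copy while Lemmy is still running (can be repeated safely).
run("rsync", "-aHz", "--delete", f"{OLD}:{DATA}", f"{NEW}:{DATA}")

# 2. Stop Lemmy so no new writes land on the old server.
run("ssh", OLD, "docker", "compose", "stop")

# 3. Final rsync: only the delta since the warm copy, so it is quick.
run("rsync", "-aHz", "--delete", f"{OLD}:{DATA}", f"{NEW}:{DATA}")

# 4. Start Lemmy on the new server, then point DNS at its IP.
run("ssh", NEW, "docker", "compose", "up", "-d")
print("Now update the DNS A/AAAA records to the new server's IP.")
```

The downtime window is only steps 2–4; the final rsync (plus DNS propagation) is what decides how long it actually takes.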
For those who would like to donate, to cover server costs, you can do so at our OpenCollective or Patreon.
Thanks!
Update: The server was migrated. It took around 4 minutes of downtime. For those who asked, it now runs on a dedicated server with an AMD EPYC 7502P 32-core “Rome” CPU and 128GB RAM. That should be enough for now.
I will be tuning the database a bit, which may cause a few extra seconds of downtime, but just refresh and it’s back. After that I’ll investigate the cause of the slow posting further. Thanks @[email protected] for assisting with that.
Wow that was fast.
this is very Reddit-y of you
Redditors made such memes a thing, we’re taking them with us where we go.
Thanks for the awesome work!
Just donated $10! Appreciate all the work you all are doing to keep up with the growth.
Like many others, I came from Reddit and was initially hesitant to try it out, but I love this place so much! It really feels like the “worse” parts of Reddit have been skimmed off, and that definitely shows with how nice people seem here! Thank you so much!
> how nice people seem here
yes! I love the culture of this place so far
Truth is, for me, as someone who has used Reddit for about the last 16 years, it very much feels like the early days of Reddit again.
Which is a very good thing, because that’s what I originally signed up for, as opposed to a metric fuckton of karma farming spam bots.
I just hope it gains enough traction to be sustainable in the long run, especially considering that it’s relying on donations for funding, I believe?
> metric fuckton of karma farming spam bots.
People are hard at work writing bots for Lemmy, so don’t worry, you’ll be able to enjoy your regular hogwash again really soon.
Personally I think lemmy should go as far out of its way as possible to make bots in any and all forms just about impossible.
Yeah, we can enjoy it while it lasts, because with more users, more questionable content will come.
Found one Russian troll already. Oh well…
Edit: lol, I was not referring to OP. It was a comment on some world news post, from an account with a Chinese username, that spread misinformation about the Russian war in Ukraine. I just added my thoughts on the community.
Lesson learned today: never take anything for granted—if there’s a chance to be massively misunderstood, it will eventually happen lol
you can easily block any user by clicking on the 🚫 sign under their comment, and never have to deal with their bs again
what about that post made you think they were a russian troll?
I think they meant they’ve seen one Russian troll on Lemmy already, not that skidface is a Russian troll.
I… have to assume so, anyway
Can confirm I am not a Russian troll ;)
I’m not too familiar with Lemmy’s codebase, but I am a devops engineer. Is the software written in any way to support horizontal scaling? If so, I’d be happy to consult/help to get the instance onto an autoscaling platform eventually.
From what I’ve read, it doesn’t support HA or horizontal scaling yet. Unsure if kbin does. They’d probably have to add support for horizontal scaling before autoscaling would do anything.
Yeah, that’s what I was afraid of. Understandable though, since horizontal scaling/HA usually isn’t a priority when developing a new application.
The code is open source on GitHub and the backend is written in Rust.
I have no idea how it goes in terms of scaling…
Apparently it’s not ideal at horizontal scaling (that’s what I’ve picked up from reading stuff here, could be wrong)
I think they can horizontally scale the Postgres maybe? Postgres is probably the biggest performance bottleneck.
Have they implemented the Postgres part? Last I read they were still using websockets (I think; I’m not a programmer and don’t know what all that means lmfao)
Came here from Reddit and I already love it so much more! :)
Thank you for everything!
Just curious, what sort of hardware is lemmy.world using/moving to? Wondering if there’s a good way to predict load based on number of users.
Yes. It’s called performance testing. Basically an engineer would need to set up test user transactions to simulate live traffic and load test the system to see how everything scales, where it breaks, etc. Then you can use the results of the tests to figure out how big of an instance you need for your projected number of users.
JMeter and locust.io are two of the biggest open-source performance testing tools.
The alternative is to take a wild guess, see how the system behaves, and make adjustments in real time… like what @[email protected] is currently doing.
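For anyone who wants to try this on their own instance, a minimal locust.io scenario could look something like the sketch below; the endpoints are illustrative guesses and would need to be matched to the real Lemmy API before use.

```python
# loadtest.py - minimal locust.io sketch simulating users browsing an instance.
# The endpoints are illustrative guesses, not a verified map of Lemmy's API.
from locust import HttpUser, task, between

class BrowsingUser(HttpUser):
    wait_time = between(1, 5)  # seconds of "think time" between requests

    @task(5)
    def front_page(self):
        self.client.get("/")

    @task(2)
    def list_posts(self):
        self.client.get("/api/v3/post/list?sort=Hot&limit=20")

    @task(1)
    def list_communities(self):
        self.client.get("/api/v3/community/list?sort=TopDay")
```

Run it with `locust -f loadtest.py --host https://your-test-instance` against a staging copy (never the live server) and ramp up simulated users until response times start to degrade; that knee point gives a rough capacity estimate for the tested hardware.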
Does it work on water now that it has MORE POWA?
So, I just want to make sure I understand this, as I am a new user from Reddit. Instances are server based and cost money. Instances are Lemmy.World, Beehaw, Lemmy.Film, etc. These are all separate hosted instances. Correct?
And donations would help pay for the server, ie lemmy.world?
“Lemmy instances” are analogous to “email servers”: your account is hosted on one of them, but you can communicate with people on other ones, because the servers know how to talk to each other.
Expanding the capacity of the Lemmy service will involve both (1) more instances, and (2) more resources for existing instances.
Nice!
Thanks for everything you’re doing. I signed up for Patreon to contribute!
Thanks for your work on this! What is the planned time for the outage?