Backstory: I had a Debian 11 VPS. A few months ago I installed PostgreSQL from the Debian 11 apt repo; back then, Debian stable shipped PostgreSQL 13.
Fast forward a couple of months: Debian 12 comes out. I do an “apt dist-upgrade” and in-place upgrade my VPS from Debian 11 to Debian 12. Along with this upgrade comes PostgreSQL 15.
Fast forward a couple more months: Lemmy 0.18.3 comes out. I do not upgrade (I am on Lemmy 0.18.2, afaik).
Some time later, Lemmy 0.18.4 comes out, and I decide to upgrade from my existing 0.18.2.
I pull the git repo and compile it locally. It goes well, no errors during compilation. I stop the lemmy systemd service, then “mv” the compiled “lemmy_server” binary into /usr/bin.
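In case it matters, the steps were roughly the following (from memory, so the exact tag and paths may differ on my box):

git checkout 0.18.4            # check out the release tag in my local clone
cargo build --release          # build lemmy_server
sudo systemctl stop lemmy      # stop the running service
sudo mv target/release/lemmy_server /usr/bin/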
I try to restart the now-upgraded lemmy systemd service. However, the systemd service fails.
I check
sudo journalctl -fu lemmy
and I see the following error message:
lemmy_server[17631]: thread 'main' panicked at 'Couldn't run DB Migrations: Failed to run 2023-07-08-101154_fix_soft_delete_aggregates with: syntax error at or near "trigger"', crates/db_schema/src/utils.rs:221:25
I report this issue here: https://github.com/LemmyNet/lemmy/issues/3756#issuecomment-1686439103
However, after a few back-and-forths and some internet searching, I conclude that somewhere between Lemmy 0.18.3 and 0.18.4, Lemmy dropped support for PostgreSQL versions below 15. So my existing DB is not compatible.
Upon investigating my VPS setup, I concluded that PostgreSQL 15 is running, but Lemmy is still using the PostgreSQL 13 tables (I do not know if that is the correct term).
Now my question: is there a way to import the Lemmy data I had in the PostgreSQL 13 tables into a new PostgreSQL 15 table (or database, I don’t know the term)?
To make things hairier: I also run a Dendrite server on the same VPS, and Dendrite uses the same PostgreSQL 15 installation, with its own PostgreSQL 13-era tables, alongside the Lemmy database.
The Dendrite database is owned by a PostgreSQL user named “dendrite” and the Lemmy database by a user named “lemmy”. I hope that makes it possible to tell the two databases apart, so that I do not harm my existing Dendrite database.
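To make it concrete, something like this is roughly what I am hoping is possible (assuming the old cluster still exists on disk as 13/main on port 5433, the new one is 15/main on port 5432, the Lemmy database is literally named “lemmy”, and the “lemmy” role already exists on the 15 cluster; I have not verified any of that yet):

pg_lsclusters                                               # Debian tool: list clusters, versions and ports
sudo -u postgres pg_dump -p 5433 lemmy > lemmy_13.sql       # dump only the lemmy database from the 13 cluster
sudo -u postgres psql -p 5432 -c 'CREATE DATABASE lemmy OWNER lemmy;'   # fresh, empty database on the 15 cluster
sudo -u postgres psql -p 5432 -d lemmy -f lemmy_13.sql      # restore the dump into it

This should leave the Dendrite database alone, since it only touches the database named lemmy.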
Any recommendations about my options here?
Starting a new comment trunk branch…
So I don’t know what went wrong. I think you need to stop the lemmy_server service and capture the system journal log entries for the lemmy unit while it starts again.
Then try an API call with curl? See if it’s just lemmy-ui being confused or if your data is indeed not there. Maybe an API call to list communities?
I am looking at
sudo journalctl -u lemmy
in order to see some error logs.
Ok, so I’m trying to get the ports right for a curl API call that bypasses lemmy-ui and talks to the backend directly from the shell:
curl --request GET --url http://localhost:8536/api/v3/community/list --header 'accept: application/json'
And see if it looks like your list? Not sure on the port 8536.
Yeah, it gives JSON output. I can see only one of my subscribed communities in there, though.
Ok, what probably happened there is that incoming federation triggered it to create that community in your ‘empty’ database. So again, we want to stop the lemmy_server service now to keep further data from going into this empty one.
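Something like this, assuming the systemd unit is named lemmy as in your journalctl command:

sudo systemctl stop lemmy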
So, what can we do now? It seems like the backed-up SQL data is still in there. As I said, I could see one of the communities I subscribed to in the output when I ran your command that talks to the Lemmy backend directly.
But only one, right? See, federation is kind of automagic: if a single post came in for that community, it could very well create the community itself on an empty database.
Now did you upgrade lemmy-ui and maybe run into problems there?
I think the safe thing to do at this point is to work with PostgreSQL, try to make sense of the data in 15, and figure out why Lemmy started as a virgin instance. But it’s going to take some time, and I need to take a couple of breaks… I will be around for the next 5 or 6 hours, but need a 15-minute break and then a 30- or 40-minute break to travel to dinner (I’ll be online once I arrive).
pg_dumpall against 15 will give us EVERYTHING, and we can grep through that and see if we can figure out whether somehow two different databases got created. That’s what I think might have happened. I normally create a half-dozen different databases in 15 for testing federation locally (lemmy-alpha, beta, gamma, etc.).
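Roughly something like this, assuming the 15 cluster is on the default port 5432 (adjust if not):

sudo -u postgres psql -p 5432 -l                      # quick list of all databases in the cluster
sudo -u postgres pg_dumpall -p 5432 -f /tmp/pg15_all.sql
grep -n 'CREATE DATABASE' /tmp/pg15_all.sql           # see which databases the dump contains

If two lemmy-ish databases show up in there, that would explain a lot.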
I’m guessing the “/lemmy” at the end isn’t exactly how the config file worked prior to us shifting over to the URL scheme.
I haven’t updated lemmy-ui yet. I should pull the git repo and compile it.
Yeah, me too. It is quite deep into the night where I live. I will probably have to get some sleep and then go about my daily routine tomorrow. I can be online at the same hour we started today.
ok if you want to end now, I’ll be around tomorrow too.
I think we both suspect that the URL and the config file are using different database names, and that Lemmy didn’t see the data we restored, so it started from scratch.
I stopped the lemmy_server service.
Here are some initial logs from right after I started the Lemmy server:
INFO lemmy_db_schema::utils: Running Database migrations (This may take a long time)...
8:51.169388Z INFO lemmy_db_schema::utils: Database migrations complete.
8:51.197742Z INFO lemmy_server::code_migrations: Running user_updates_2020_04_02
8:51.204908Z INFO lemmy_server::code_migrations: 0 person rows updated.
8:51.205419Z INFO lemmy_server::code_migrations: Running community_updates_2020_04_02
8:51.206334Z INFO lemmy_server::code_migrations: 0 community rows updated.
8:51.206543Z INFO lemmy_server::code_migrations: Running post_updates_2020_04_03
8:51.207511Z INFO lemmy_server::code_migrations: 0 post rows updated.
8:51.207933Z INFO lemmy_server::code_migrations: Running comment_updates_2020_04_03
8:51.209603Z INFO actix_server::builder: Starting 1 workers
8:51.209874Z INFO actix_server::server: Tokio runtime found; starting in existing Tokio runtime
8:51.216293Z INFO lemmy_server::code_migrations: 0 comment rows updated.
8:51.216595Z INFO lemmy_server::code_migrations: Running private_message_updates_2020_05_05
8:51.217107Z INFO lemmy_server::code_migrations: 0 private message rows updated.
8:51.217288Z INFO lemmy_server::code_migrations: Running post_thumbnail_url_updates_2020_07_27
8:51.217695Z INFO lemmy_server::code_migrations: 0 Post thumbnail_url rows updated.
8:51.217891Z INFO lemmy_server::code_migrations: Running apub_columns_2021_02_02
8:51.218499Z INFO lemmy_server::code_migrations: Running instance_actor_2021_09_29
8:51.222140Z INFO lemmy_server::code_migrations: Running regenerate_public_keys_2022_07_05
8:51.222818Z INFO lemmy_server::code_migrations: Running initialize_local_site_2022_10_10
8:51.223214Z INFO lemmy_server::code_migrations: No Local Site found, creating it.
Here, it says “No Local Site found, creating it”. Might be relevant?
Also, I would stop it right now so it doesn’t do any federation. It has probably already confused some server out there by announcing that it is brand new on your domain name.
Yeah, we don’t want to see that message, and that matches how lemmy-ui is behaving: as if you have an empty database.
So it isn’t talking to your PostgreSQL 13 database, as we didn’t remove or otherwise delete anything…
So maybe the database name in the URL is wrong, or the database that PostgreSQL restored into is not the one Lemmy is connecting to?
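One quick way to check, assuming the database Lemmy is supposed to use is named lemmy (adjust the name if yours differs) and the 15 cluster is on port 5432:

sudo -u postgres psql -p 5432 -l                                             # list all databases
sudo -u postgres psql -p 5432 -d lemmy -c 'SELECT count(*) FROM community;'
sudo -u postgres psql -p 5432 -d lemmy -c 'SELECT count(*) FROM person;'

If those counts are basically zero, that database is the freshly created one and your restored data lives somewhere else.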
Your other app using the PostgreSQL 15 database, is it still good?
My Dendrite Matrix server was using the PostgreSQL 13 database, afaik. It is still good.
It looks to me like Lemmy found an empty database and issued all the migrations of a new install…
So either the database URL we gave it was wrong, or the restore we did used the wrong parameters, etc.
And, like I mentioned, there is already some confusion with your federation status, as I think it rushed out to register itself as a new server with the Lemmy network. And some data got in…
So… I’m not sure how to figure this out. We could do a pg_dumpall of your PostgreSQL 15 data, then sift through it and see if we can make sense of how this happened?
I don’t think so, but let me check the lemmy.service file again.
I’ve never switched a system from config.hjson (or whatever the file is) over to the URL scheme, so maybe the /lemmy on the end is wrong?
maybe I should put the LEMMY_DATABASE_URL info NOT in the lemmy.service file but in the lemmy.hjson file?
It would explain how this happened… but we do need to find the syntax for the port in lemmy.hjson.
(And then open a bug that the documentation isn’t exactly clear on that page I linked!)
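For reference, I think the two equivalent forms look roughly like this (untested on my end, so double-check against the config documentation; the trailing /lemmy in the URL is the database name):

LEMMY_DATABASE_URL=postgres://lemmy:YOUR_PASSWORD@localhost:5432/lemmy

database: {
  host: "localhost"
  port: 5432
  user: "lemmy"
  password: "YOUR_PASSWORD"
  database: "lemmy"
}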