I was wondering if the nature of decentralization would negatively affect SEO, since people can access the same post from many different instances.
https://lemmy.ml/robots.txt, https://lemmy.world/robots.txt, etc. don’t seem to disallow posts, so the text-based content should be easy to index, at least for these instances.
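For anyone curious what that looks like, here is an illustrative sketch, not the actual contents of either file: a Lemmy-style robots.txt tends to block account and admin pages while leaving post and community paths crawlable.

```
# Illustrative sketch only, not copied from lemmy.ml or lemmy.world
User-Agent: *
Disallow: /login
Disallow: /settings
Disallow: /admin
Crawl-delay: 60
# No Disallow rule covers /post/ or /c/, so post and community pages stay crawlable
```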
Related news: Google is getting a lot worse because of the Reddit blackouts.
How long does it usually take for Google to index websites? Because I tried the string
`lemmy site:lemmy.ml after:2023-06-15`
and only one post turned up for me, and it was Memes.
… the current state of affairs does not seem promising 😔 And when I tried another instance with the same keywords, `lemmy site:kbin.social after:2023-06-15`, nothing turned up at all.
I wonder though, will search engines adapt to Lemmy and its fediverse system? Or will search engines die? Or will we see dedicated search engines to search through the fediverse?
How long does it usually take for Google to index websites?
Anything from a couple of hours to more than a week. I don’t think having a “real-time feed” through Google is important, though. Other than World Cup scores, their results were never about speed.
An earlier post pointed out that federated sites seem likely to suffer against centralized content in an SEO world, regardless of whether they are technically indexable.
I wonder if Lemmy should have an SEO-friendly federated site… a .com domain, robots.txt, and everything else…
Right, though the SEO game is changing drastically with AI. People are often using GPT-like models in place of searches and, likewise, expecting search results to hit their answers rather than be vague pointers. Following this reasoning, search engines will need to direct users to where the valuable information is, not always, but often enough to not lose users to competitors.
So the thing about SEO is that it’s often an attention game in which advertisers and smaller websites compete with each other. The information in public forums and threads is invaluable to the success of the search engine itself, so search engines are the ones that will eventually have to adapt to the new federated reality, should it become mainstream, and I do hope so.
Haven’t checked if they do this, but you can tell Google which one is supposed to be the “real” post (a canonical URL). So you shouldn’t get duplicate content.
The current version of the frontend is not doing that yet, though.
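For context, the standard mechanism is a canonical link element in the page’s head. A minimal sketch, with a hypothetical post URL, of what a mirrored copy on another instance could emit so crawlers fold duplicates into the home instance’s page (whether Lemmy’s frontend adds this is exactly the open question above):

```html
<!-- Sketch only; the post URL is hypothetical. Served by a mirroring instance,
     this tells crawlers to credit the home instance's copy of the post instead. -->
<link rel="canonical" href="https://lemmy.ml/post/123456" />
```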
I tried searching for the title of this post verbatim and it isn’t in Google results, period.
The second link for me when searching for Lemmy on Google is the link to the “Join Lemmy” website. Surprisingly, Brave Search, which seemingly has no search bubbles or accounts, shows the same.
Sorry, I didn’t make the post very clear. I was referring to individual posts, when people search for a specific issue or discussion on Lemmy.
Oh, good point. Yes, probably? We cannot simply assume search engines know that all of these point to the same content:
- https://slrpnk.net/c/technology
- https://feddit.de/c/[email protected]
- https://sopuli.xyz/c/[email protected]
- https://beehaw.org/c/[email protected]
Or even worse, due to defederation, they may not all point to the exact same content.
Without further investment from either Lemmy’s or the search engines’ side, they are probably seen as distinct sources, not aggregated, which makes each one individually less relevant and less likely to show up.
Also note that none of the addresses above contain “lemmy”. How would users search for content on Lemmy in these cases? They can’t do “technology site:lemmy”, can they?
But I can say that Lemmy content is visible. Haven’t seen it on the first page of Ecosia yet, but it shows up on page 2 or 3.
Maybe you could use site:lemmy.ml? Since they federate with most instances, they’re likely to have most of Lemmy’s content.