cross-posted from: https://lemmit.online/post/5364920
just for good measure
This is an automated archive made by the Lemmit Bot.
The original was posted on /r/memes by /u/brylex1 on 2025-03-10 03:01:06+00:00.
Don’t get me wrong, though… throwing an LLM at it would be a lot easier and faster. It’s just a mind-boggling use of resources for a task that could probably be done more efficiently :D
Setting this up with Apache Solr and a suitable search frontend runs a high risk of becoming an abandoned side project itself^^
Yeah, an LLM seems like the go-to solution, and the best one. And speaking of resources, we could use barely-smart models that can still generate coherent sentences, e.g. 0.5B–3B models offloaded to CPU-only inference.