I’m a dev in recovery. When I looked for addiction recovery apps, I realized the user isn’t the customer; the user is the product.

Most “free” recovery apps literally sell your addiction data. If you are recovering from gambling, they can sell your behavioral profile back to gambling networks. If you are recovering from alcohol, they sell your data to advertisers who then target you with alcohol ads.

So I built LiftMind.

It’s an AI-driven addiction recovery strategist and journaling app, but I architected it to be hostile to surveillance. It is still in beta, and I need a gut check from this community on the setup:

Monero First: I accept Monero (XMR), so the payment layer is as anonymous as the auth layer.

No Personal Info: I don’t ask for an email, name, or phone number. You register with just a username and password. If I look at the DB, I can’t tell who is who.

Blind AI Proxy: I use an external LLM (Gemini) for the intelligence, but I treat it like a calculator, not a database. Your IP, username, and other account data are never sent to Gemini; only the text required for pattern recognition is. Google only sees a request coming from my server IP, so they have no way to link it to “you”.
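
To make the “blind” part concrete, here is roughly what the request path looks like. This is a simplified sketch (Flask, the public Gemini REST generateContent endpoint, an illustrative model name), not the production code:

```python
# Sketch of the blind-proxy idea: the server authenticates the user locally,
# but only the journal text and the server's own API key go upstream.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/gemini-1.5-flash:generateContent")  # model name is illustrative
API_KEY = os.environ["GEMINI_API_KEY"]  # the server's key, shared by all users

@app.post("/analyze")
def analyze():
    # The session (username + password) is checked server-side; none of it
    # is forwarded. Only the text needed for pattern recognition goes out.
    journal_text = request.get_json()["journal_text"]

    upstream = requests.post(
        GEMINI_URL,
        params={"key": API_KEY},
        json={"contents": [{"parts": [{"text": journal_text}]}]},
        timeout=30,
    )
    upstream.raise_for_status()
    reply = upstream.json()["candidates"][0]["content"]["parts"][0]["text"]

    # Google sees: this server's IP, the server's API key, and the journal
    # text itself -- no username, no client IP, no client headers.
    return jsonify({"analysis": reply})
```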

My Question: Since I don’t collect PII to begin with, is this “Blind Proxy + No KYC” model sufficient for people with high-threat models?

  • Special Wall@midwest.social · 15 days ago

    If Gemini truly can’t see PII (no way to add “notes”, for example), then I don’t think that would be too big of a concern for most people, at least for those who don’t have a disdain for LLMs in the first place. Though I do feel that people with “high threat models” (it would be good to be precise about what a “high threat model” means in this instance) would prefer to have a local app that interfaces with a local Ollama API, rather than an internet-connected service.

    What precisely is Gemini “calculating” here, and why can’t its function be replaced by a lightweight local LLM?

    Edit: After reading the information from the website, it sounds like there are a lot of opportunities for users to accidentally identify themselves to AI providers or open up de-anonymization attack vectors. If I were very concerned about my identity being linked to my recovery behavior, I would probably not use this service as it is now.

    • liftmindOP · 15 days ago

      Fair point on the notes. You’re right: if a user explicitly types “I am John Doe” in the journal, that string does get passed to the LLM. I can strip headers and IPs, but I can’t perfectly scrub context without breaking the analysis.
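
      To show what I mean, the scrubbing pass is conceptually something like this (a rough illustration, not the real filter): structured identifiers are easy to match, free-text self-identification is not.

      ```python
      # Structured identifiers (emails, phone numbers, IPs) can be caught
      # with patterns; a name written in plain prose cannot.
      import re

      PATTERNS = {
          "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
          "phone": re.compile(r"\b(?:\+?\d[\s-]?){7,15}\b"),
          "ipv4":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
      }

      def scrub(text: str) -> str:
          for label, pattern in PATTERNS.items():
              text = pattern.sub(f"[{label} removed]", text)
          return text

      print(scrub("Reach me at jdoe@example.com or 555-123-4567"))
      # -> "Reach me at [email removed] or [phone removed]"
      print(scrub("I am John Doe and I relapsed on Friday"))
      # -> unchanged: a name in free text looks like any other sentence
      ```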

      To mitigate that, I use the paid API tier, where, unlike the free version, Google is contractually blocked from training on the data. I realize that is a legal promise rather than a technical guarantee, but it is the same binding agreement used by hospitals and banks.

      As for why not local/Ollama? Two reasons:

      1. Intelligence: For psychological pattern recognition, small local models hallucinated way too much in my testing and missed obvious patterns. I need SOTA reasoning to avoid giving bad recovery advice.
      2. Hardware: Local inference kills battery and requires high-end phones. I want this tool accessible to everyone, not just people with $1k devices.

      I’m planning a “Local Only” toggle for the future, but the tech isn’t quite there yet for the average user.
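
      For a rough idea of what that toggle would do: it would swap the upstream call for a local one against something like Ollama, so nothing leaves the device. Sketch only; the model choice and prompt are placeholders:

      ```python
      # "Local Only" routing sketch: same analysis call, but against a local
      # Ollama instance, so the journal text never leaves the machine.
      import requests

      OLLAMA_URL = "http://localhost:11434/api/generate"

      def analyze_locally(journal_text: str, model: str = "llama3") -> str:
          response = requests.post(
              OLLAMA_URL,
              json={
                  "model": model,
                  "prompt": "Identify relapse-risk patterns in this journal entry:\n"
                            + journal_text,
                  "stream": False,  # one JSON reply instead of a token stream
              },
              timeout=120,
          )
          response.raise_for_status()
          return response.json()["response"]
      ```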

      • Special Wall@midwest.social · 15 days ago

        Okay, that’s a fair reason to use Gemini.

        If you are trying to cater specifically to people with a “high threat model” (who are going to want zero-trust privacy protections), then the journals are an issue you’ll have to address somehow.

        Even if a user never types full details like their name, small things like “I got banana ice cream today” and “I went for a night drive” can build a detailed profile over time; even if each request is ephemeral, the entries could be correlated through the database if it is sent along with every query.

        • liftmindOP · 15 days ago

          I used to daily drive Qubes OS, so I totally get your point on correlation.

          But I had to prioritize utility. LiftMind’s main purpose is (1) to actually work and help people overcome addiction, and (2) to provide a safe harbor for people who don’t want to hand over personal info.

          The main “threat” I’m solving for right now is the paper trail: letting people pay via XMR so their bank statement doesn’t show they are using a recovery service. It might not be bulletproof against a targeted attack yet, but it solves the immediate privacy problem for most people.

  • arseneSpeculoos · 15 days ago

    What will be your customer acquisition channels?

    Have you validated that some customers actually think that it’s important that their data is not resold?

    • liftmindOP · 15 days ago

      Well, I have been shadowbanned or banned on almost all major platforms so far, such as Twitter and Reddit. For now I have given my close friends free access to the AI features to test and improve them before any major marketing, if the time for that comes.

      • arseneSpeculoos · 15 days ago

        Those might not be good channels anyway.

        What about rehab centers and other companies in the industry? Could they talk about your product to their users while you give them a referral fee (a unique discount code, like what you see for YouTube video sponsors)?

        • liftmindOP · 15 days ago

          Very good idea! However, as I am still testing it, I’m not comfortable selling it to bigger companies yet.