So, I was reading the privacy notice and the terms of use, and I saw some sketchy stuff in them (data used for advertising, keystroke collection). How bad is it? Is it like ChatGPT or worse? Anything I can do about it?

  • some_guy@lemmy.sdf.org · 1 day ago

    I ran ollama run deepseek-r1:1.5b

    Perhaps the way I ran it had an impact. How did you run it? I didn’t pay attention to temperature settings on GitHub, and I don’t actually know how to set that without reviewing the docs; one way is sketched at the end of this thread. (I’m not terribly interested in AI bots and only participate at the surface.)

    • voracitude@lemmy.world · 1 day ago

      Ah! It’s because you’re using the 1.5B model. It’s too small. Good for specific functions, not for chat. For Q&A, you want at least the 7B model (but 8B is about the same size and I think performs better for language tasks).
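
For anyone who wants to try one of the larger distills, it’s only a couple of commands; a quick sketch, assuming a standard ollama install and that the deepseek-r1:7b / deepseek-r1:8b tags are still published on the ollama registry:

    ollama pull deepseek-r1:8b
    ollama run deepseek-r1:8b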
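
On the earlier question about temperature: ollama exposes it in a couple of places. A minimal sketch, assuming the stock ollama CLI (0.6 is just an illustrative value, not a recommendation):

    # inside an interactive `ollama run` session
    /set parameter temperature 0.6

    # or bake it into a named variant: put these two lines in a Modelfile,
    # then run `ollama create my-deepseek -f Modelfile` (the name is made up here)
    FROM deepseek-r1:8b
    PARAMETER temperature 0.6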