• sleep_deprived@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    2
    ·
    19 days ago

    Yes, that’s an excellent restatement - “lumping the behaviors together” is a good way to think about it. It learned the abstract concept “reward model biases”, and was able to identify that concept as a relevant upstream description of the behaviors it was trained to display through fine tuning, which allowed it to generalize.

    There was also a related recent study on similar emergent behaviors, where researchers found that fine tuning models on code with security vulnerabilities caused it to become widely unaligned, for example saying that humans should be enslaved by AI or giving malicious advice: https://arxiv.org/abs/2502.17424

    • PolarKraken@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      18 days ago

      Holy cow that sounds nuts, will def have to go through this one, thanks!!

      Edit: hmm. Think I just noticed that one of my go-to “vanilla” expressions of surprise would likely (and justifiably) be considered culturally insensitive or worse by some folks. Time for “holy cow” to leave my vocabulary.