But just as Glaze’s userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks that disable Glaze’s protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”

  • Pennomi@lemmy.world · 4 months ago

    Glaze has always been fundamentally flawed and a short-term bandage. There’s no way you can make something appear correctly to a human and incorrectly to a computer over the long term - the researchers will simply retrain on the new data.

  • Even_Adder@lemmy.dbzer0.com · edited · 4 months ago

    Reminder that the author of Glaze, Ben Zhao, a University of Chicago professor, stole open-source code to make a closed-source tool that only targets open-source models. Glaze never even worked on Microsoft’s, Midjourney’s, or OpenAI’s models.