• dormedas@lemmy.dormedas.com · 4 days ago

    Their company makes an AI shopping assistant, so trying to put AI everywhere, including places it shouldn’t be, is gonna happen.

    I like my build scripts dependable, debuggable, and deterministic. This is wild. When the bot opens a pull request and the user (who may be someone else at some point) doesn’t respond with exactly what the prompt expects, what happens? What happens when Claude Code updates, or has an outage? And don’t rename that GitHub Action at the end without remembering to update the prompt as well.
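
    To make that brittleness concrete, here’s a rough Python sketch of the kind of coupling I mean. The `llm-agent` command and the `publish-release` action name are both made up, standing in for whatever the real pipeline calls; the point is that the downstream action’s name only lives inside a prose prompt, where no tool will ever flag a rename.

    ```python
    import subprocess

    # Hypothetical CI step that hands release work to an LLM agent.
    # The prompt hard-codes the name of a downstream GitHub Action
    # ("publish-release"), so renaming that action silently breaks the
    # instruction without any build error or type check catching it.
    PROMPT = (
        "Bump the version, open a pull request, and once it is approved "
        "trigger the 'publish-release' GitHub Action."  # brittle string coupling
    )

    def run_agent_step() -> int:
        # 'llm-agent' is a placeholder CLI; an outage or a breaking update
        # in that tool now takes the whole pipeline down with it.
        result = subprocess.run(
            ["llm-agent", "--prompt", PROMPT],
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        return result.returncode

    if __name__ == "__main__":
        raise SystemExit(run_agent_step())
    ```

    Nothing in that script is deterministic or debuggable in the normal sense: the “logic” is an English sentence, and the failure mode is the agent quietly doing something else.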

    • kautau@lemmy.world · 3 days ago

      Or worse. A single bad actor (according to the company) poisoned Grok into spouting white supremacy. How many unsupervised, privileged LLM commands could run in a short window if an angry employee at Anthropic poisoned the model into doing malicious damage to the servers, environments, or pipelines it has access to?