Since Anthropic launched it, we've been using it a lot. It's the best programming agent I've seen so far: it gives concise answers, it can run shell tools a...
Their company builds an AI shopping assistant, so trying to put AI everywhere, including places it shouldn’t be, is gonna happen.
I like my build scripts dependable, debuggable, and deterministic. This is wild. When the bot opens a pull request and the user (who may be someone else at some point) doesn’t respond with exactly what the prompt expects, what happens? What happens when Claude Code updates, or has an outage? And don’t rename that GitHub Action at the end of the pipeline without remembering to update the prompt as well.
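To make that last fragility concrete, here's a minimal sketch of the kind of coupling being described. Everything in it is hypothetical (the prompt text, the `publish-release` workflow name, the `some-llm-agent` CLI); the point is only that the downstream action exists inside the prompt as plain prose, so renaming it produces no build error anywhere.

```python
# Hypothetical sketch: an LLM-agent CI step driven by a free-text prompt.
# The downstream GitHub Action is referenced only by name *inside the prompt*,
# so renaming the action in the workflow file silently desynchronizes the two.
import shlex

RELEASE_PROMPT = (
    "Open a pull request bumping the version. "
    "After a maintainer comments 'approved', trigger the workflow "
    "named 'publish-release'."  # just a string -- nothing checks it still exists
)

def agent_step(prompt: str) -> str:
    """Build the shell command for the (hypothetical) agent CI step."""
    return f"some-llm-agent --print --prompt {shlex.quote(prompt)}"

if __name__ == "__main__":
    print(agent_step(RELEASE_PROMPT))
```

No linter, type checker, or workflow validator will catch that drift; the agent just fails (or improvises) at runtime.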
Or worse. A single bad actor (according to the company) poisoned Grok into white supremacy. How many unsupervised, privileged LLM commands could run in a short time if an angry employee at Anthropic poisoned the model to do malicious damage to the servers, environments, or pipelines it has access to?