Microsoft creates secure boot: “we should be able to run whatever we want on our hardware!”
Microsoft lets users install crowdstrike on their computer: “Microsoft shouldn’t let us run this on our hardware!”
20 years? more like 5
if this is your first time doing a big trip together, honestly, forget about it being perfect. it won’t be, and that’s ok. trips don’t need to be perfect to be meaningful; in fact, i’ve found the opposite to be true. the more wild and unexpected the adventure is, the more memorable and important it becomes to me.
so I’d say it’s best to keep an idea of things you’d like to see or do, but also be flexible and willing to adapt. traveling with someone who forces everyone to stick to a rigid itinerary is never fun and is a good way to ruin the trip. all it takes is one lost bag or one missed train to throw all your careful planning out the window. better to roll with the punches than self-destruct when that happens.
i’m curious how you think you know all of this? sounds to me like you’ve created a neat straw man that lives in your head for you to get mad at
if you’re talking about that recent pic of him floating around with a chain and a bread, that was an AI doctored photo
this kinda shit makes me understand the sovcit stuff a little more, “just send an email with this magic subject text and your rights are secured!”
you’re so close. ask yourself: why exactly do you think people are using it for these things it’s not meant for?
because every company, every CEO, every VP, is pushing every sector of their companies to adopt AI no matter what.
most actual people understand the limitations you list, but it’s the capitalists at the table that are making AI show up where it’s not wanted
TLS doesn’t encrypt the hostname of the URLs you’re visiting (it’s sent in plaintext in the SNI field of the handshake), and DNS traffic is insanely easy to sniff even if you aren’t using your ISP’s resolver.
the hostname of a website is explicitly not encrypted when using TLS. the Encrypted Client Hello extension fixes this but requires DNS over HTTPS and is still relatively new.
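you can actually see this for yourself: build a TLS handshake in memory with Python’s ssl module and the hostname shows up as plain ASCII in the ClientHello bytes (the hostname here is a made-up example):

```python
import ssl

# build a TLS client handshake entirely in memory, no network needed
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

incoming = ssl.MemoryBIO()
outgoing = ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="secret-site.example.com")

try:
    tls.do_handshake()  # writes the ClientHello into the outgoing BIO
except ssl.SSLWantReadError:
    pass  # expected: we never feed it a ServerHello back

client_hello = outgoing.read()
# the SNI extension carries the hostname unencrypted
print(b"secret-site.example.com" in client_hello)  # → True
```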
honestly i wouldn’t trust your linux example at all, what happens with run(["echo", "&& rm -rf /"])?
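for what it’s worth, assuming the example in question is Python’s subprocess.run: in list form no shell is involved, so the string is handed to echo as one literal argument rather than interpreted as a command separator. a quick check:

```python
import subprocess

# list form of subprocess.run does not go through a shell, so "&&" has no
# special meaning; it is passed to echo as a single literal argument
result = subprocess.run(["echo", "&& rm -rf /"], capture_output=True, text=True)
print(result.stdout)  # → && rm -rf /
```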
just a guess, but in order for an LLM to generate or draw anything it needs source material in the form of training data. For copyrighted characters this would mean OpenAI would be willingly feeding their LLM copyrighted images which would likely open them up to legal action.
even in your hypothetical of a file name passed in through the args, either the attacker already has enough access to run said tool with whatever args they want, or they have taken over that process and can inject whatever args they want.
either attack vector requires a prior breach of the system. you’re owned either way.
the only way this actually works as an exploit is if there are poorly written services out there that blindly call through to CreateProcess with user-sourced input and no sanitization, and if you’re doing that then no duh you’re gonna have a bad time.
cmd.exe is always going to be invoked if you’re executing a batch script, it’s literally the interpreter for .bat files. the issue is, as usual, code that might be blindly taking user input and not even bothering to sanitize it before using it.
i’m not understanding how this is supposed to be so severe. if an attacker has the ability to change the arguments to a CreateProcess call, aren’t you hosed already? they could just change it to invoke any command or batch file they wanted.
computer science teaches you the theory of computation, which absolutely starts with mechanical computers.
if one didn’t study Turing’s tape machine in their compsci program then they should demand their money back.
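and it’s worth remembering how small the concept actually is: a tape, a head, a state, and a transition table. a toy sketch (this particular machine just flips every bit and halts on blank):

```python
# minimal Turing machine simulator: transitions map
# (state, symbol) -> (next_state, symbol_to_write, head_move)
def run_tm(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# a machine that inverts every bit, then halts when it reads a blank
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("1011", flip))  # → 0100
```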
open source software getting backdoored by a nefarious committer is not an indictment of closed source software in any way. this was discovered by a microsoft employee due to its effect on cpu usage and the faults it introduced in valgrind, neither of which required the source to discover.
the only thing this proves is that you should never fully trust any external dependencies.
yeah silly me for supporting artists with my money but also downloading drm-free copies of things so I can actually exercise a semblance of ownership. but sure, keelhaul me so you can keep your sense of smug superiority.
AI is a tool that is fundamentally based on the concept of theft and plagiarism. The LLM training data comes from artists and creators that did not consent to their work being plagiarized by a hallucinating machine.
it literally explains what they’re for in the product listing:
These labels aid your warehouse operations.
• Categorize inventory, reorder points, product dating or special instructions.
• Apply these labels to pallets, boxes and shelves for easy identification.
• Easy to write on.
a surprisingly disappointing article from ars, i expect better from them.
the author appears to be confusing “relay attacks” with “cloning” and doesn’t really explain the flow of the attack that well.
really this just sounds like a complicated MitM attack, using the victim’s phone as the “middle” component between the victim’s physical card and the attacker’s rooted phone.
the whole “cloning the UID attack” at the end of the article is irrelevant; NFC payment cards don’t work like that. EMV transactions are authorized with a dynamic cryptogram generated per transaction, so copying a static UID gets an attacker nothing.