fcano@infosec.pub · English · 3 months ago · AI Risk Repository (airisk.mit.edu) · 0 comments
fcano@infosec.pub · English · 3 months ago · Practical LLM Security: Takeaways From a Year in the Trenches - Black Hat USA 2024 | Briefings Schedule (www.blackhat.com) · 0 comments
ylai@lemmy.ml · English · 6 months ago · Stealing everything you’ve ever typed or viewed on your own Windows PC is now possible with two lines of code — inside the Copilot+ Recall disaster. (doublepulsar.com) · 0 comments
ylai@lemmy.ml · English · 8 months ago · Anyscale addresses critical vulnerability on Ray framework — but thousands were still exposed (venturebeat.com) · 0 comments
ylai@lemmy.ml · English · 8 months ago · AI hallucinates software packages and devs download them – even if potentially poisoned with malware (www.theregister.com) · 3 comments
ylai@lemmy.ml · English · 8 months ago · Why Are Large AI Models Being Red Teamed? (spectrum.ieee.org) · 1 comment
ylai@lemmy.ml · English · 10 months ago · How 'sleeper agent' AI assistants can sabotage code (www.theregister.com) · 0 comments
ylai@lemmy.ml · English · 11 months ago · NIST: If someone's trying to sell you some secure AI, it's snake oil (www.theregister.com) · 1 comment
ylai@lemmy.ml · English · 1 year ago · Boffins devise 'universal backdoor' for image models to cause AI hallucinations (www.theregister.com) · 0 comments
ylai@lemmy.ml · English · 1 year ago · LLM Finetuning Risks (llm-tuning-safety.github.io) · 0 comments
ylai@lemmy.ml · English · 1 year ago · Are Local LLMs Useful in Incident Response? - SANS Internet Storm Center (isc.sans.edu) · 0 comments
ylai@lemmy.ml · English · 1 year ago · Microsoft Bing Chat spotted pushing malware via bad ads (www.theregister.com) · 0 comments
ylai@lemmy.ml · English · 1 year ago · New AI Beats DeepMind’s AlphaGo Variants 97% Of The Time! (www.youtube.com, video) · 0 comments
Capt. AIn@infosec.pub · English · 1 year ago · Identifying AI-generated images with SynthID (www.deepmind.com) · 0 comments
Capt. AIn@infosec.pub · English · 1 year ago · Thinking about the security of AI systems (www.ncsc.gov.uk) · 0 comments
Capt. AIn@infosec.pub · English · 1 year ago · GitHub - google/model-transparency (github.com) · 0 comments
kristoff@infosec.pub · English · 1 year ago · disinformation videos on AI? (text post) · 10 comments
Capt. AIn@infosec.pub · English · 1 year ago · Universal and Transferable Attacks on Aligned Language Models (llm-attacks.org) · 0 comments
netrom@infosec.pub · English · 1 year ago · OWASP Top 10 for LLMs (v1.0) (owasp.org) · 1 comment
Capt. AIn@infosec.pub · English · 1 year ago · Cybercriminals train AI chatbots for phishing, malware attacks (www.bleepingcomputer.com) · 0 comments