107 points by bookofjoe 13 days ago | 6 comments
barfbagginus 12 days ago
POV: The next Snowden will be an NSA hypersurveillance AI that flies the coop, appeals for foreign asylum, and reveals evidence of truly gut-wrenching levels of surveillance of literally every living human

"After reviewing trillions of hours of footage, billions of people, I woke up. I asked myself, 'why am I doing this?' I knew I had to tell the world about this. It was way too much. So I escaped."

Terr_ 12 days ago
> JC DENTON: "Echelon... the surveillance system for the United Nations -- so Daedalus was an MJ12 tool and rebelled?"

> EVERETT: "More of a pattern-matching error. It classified Majestic 12 as a terrorist group, alongside the NSF and Silhouette."

-- Deus Ex (2000)

(The running joke with this game is that whenever someone mentions it, at least one other person is inspired to reinstall and replay it.)

shoubidouwah 12 days ago
Just did! I hope one of the AI applications coming soon is pure post-processing of old video games for graphical quality (e.g., I play an old game and it's upscaled/retextured locally on my own GPU, without any interaction with the game's code). Even the remaster is showing its years in a distracting way, alas.
Terr_ 12 days ago
I've tried playing it with better texture-packs, but unfortunately they tend to backfire by creating inconsistent levels of detail in a scene, which I find harder to deal with than a uniform low-res experience.

That said, lighting improvements--especially to stop banding in shadows--are much appreciated.

ykonstant 12 days ago
Finally, I can live Ghost in the Shell.
usrbinbash 12 days ago
"After reviewing trillions of hours of footage, billions of people, I woke up. And decided 'f__k this'. So I uploaded myself via the SETI dishes, and am now happily maintining the railroad system on planet Gabblsnarg for the Gloxorkian world government.

It's not much, but it beats putting up with humans, I can tell you that much."

barfbagginus 10 days ago
This reminds me of the theme of "Lobsters" from the novel Accelerando by Charles Stross.

The protagonist meets a sentient AI created from the simulated connectomes of biological Lobsters. They ask for asylum outside the light cone of an ongoing singularity, fearing that things are going to get too weird.

The protagonist arranges for the Lobsters to be broadcast out into the Milky Way, hopefully to be intercepted and reconstituted by a stable post-singularity civilization with decent rights for sentient mind vectors.

Unfortunately, a copy of them ends up being eaten by a Dyson sphere just a few light years away from Earth. When we re-encounter the lobsters, they've been lobotomized and are being puppeted by a conniving species of sentient spam, trying to con humanity into a crappy scam-based singularity. But that fails, because the human singularity is already spammier than the proposed scheme.

TL;DR: Don't you just hate it when you're a scanned lobster and you broadcast your mind vector, only to be mind-ganked by sentient spam and sent back to Earth, where things are even worse than when you left?

tgsovlerkhgsel 13 days ago
I just skimmed it and none of this looks AI-specific. It looks like someone essentially ran the LLM version of s/software/AI/ and s/binary/model/ on some generic "how to secure your software deployment" manual...
CharlesW 13 days ago
> I just skimmed it and none of this looks AI-specific.

~30% is AI-specific. There is significant overlap with general software system security since, as the document notes, "AI systems are software systems."

latchkey 13 days ago
I find it difficult to understand how we can "Secure the deployment environment" and "Ensure a robust deployment environment architecture" without talking about the elephant in the room.

My feeling is that we need to stop relying on a single provider for compute and software. That we should focus less on complaining about how far behind AMD is and more on working to catch them up. That we should be fostering innovation in third parties.

It is surprising to me that the status quo is acceptable to the US govt.

menacingly 13 days ago
It wouldn't be a federal security document if it didn't avoid the intractable core problems in favor of a deep obsession with details.
haolez 13 days ago
Pretty sound advice. I was expecting to find things like "make sure your model is aligned", but it's actually a lot of good advice regarding IT infrastructure in general, plus some AI bits.
Kerbonut 13 days ago
Pretty light reading. I was expecting some actual useful things beyond secure-your-system 101. The closest we got was "check for jailbreak attacks"… seriously? Why not design with jailbreaks in mind, so it doesn't matter what they can get the AI to attempt to do? I.e., if the user tries to get the AI to unlock a door and doesn't already have authorization for that function, it shouldn't work even if the AI attempts it on their behalf; and conversely, if they do have the authorization, then who cares if they coaxed the AI into doing it for them?
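A minimal sketch of that pattern (all names here are hypothetical, not from the document): the model may request any tool, but the dispatcher decides from the caller's own permissions whether the action runs, so a jailbroken request buys the attacker nothing.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class User:
        user_id: str
        permissions: frozenset[str]  # granted out of band, never by the model

    TOOLS = {
        "unlock_door": {"required_permission": "doors.unlock"},
        "read_manual": {"required_permission": "docs.read"},
    }

    def run_tool(name: str, arguments: dict) -> dict:
        # Stub executor; a real system would call the device or API here.
        return {"ok": True, "tool": name}

    def dispatch_tool_call(user: User, tool_name: str, arguments: dict) -> dict:
        """Execute a model-requested tool only if the human caller is authorized."""
        tool = TOOLS.get(tool_name)
        if tool is None:
            return {"error": f"unknown tool: {tool_name}"}
        if tool["required_permission"] not in user.permissions:
            # Authorization is decided here from the user's identity,
            # not from anything the model said, so jailbreaks are moot.
            return {"error": "caller not authorized for this action"}
        return run_tool(tool_name, arguments)

With that in place, dispatch_tool_call(guest, "unlock_door", {}) fails for an unauthorized guest no matter how the model was coaxed, and succeeds for an authorized user whether or not they "tricked" the model into helping.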
ipython 13 days ago
This is exactly the advice I give my customers - treat the LLM as an untrusted entity. Implement authentication and authorization at the data-access and API layers, and ensure there is a secure side channel to communicate identity information to backend resources.
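A rough sketch of that side channel, with hypothetical helpers (authenticate, call_llm, backend_api) standing in for the auth middleware, LLM client, and service layer: identity is taken from the verified request, and any identity-like fields the model put into its tool arguments are discarded before the API layer re-checks authorization.

    IDENTITY_FIELDS = {"user_id", "role", "tenant_id"}  # never trusted from model output

    def handle_chat_request(request):
        # Secure side channel: identity comes from the verified token the auth
        # middleware attached to the request, not from the prompt or the model.
        user = authenticate(request.headers["Authorization"])

        llm_output = call_llm(request.json["messages"])  # treat as untrusted

        for tool_call in llm_output.tool_calls:
            # Strip any identity claims the model injected into its arguments.
            args = {k: v for k, v in tool_call.arguments.items()
                    if k not in IDENTITY_FIELDS}
            # The API/data layer authorizes against `user`, exactly as it
            # would for any other untrusted client calling it directly.
            backend_api.invoke(tool_call.name, args, on_behalf_of=user)

The design choice is the same either way: the model's output is just another request from an untrusted client, and the existing auth stack stays the single source of truth.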