Let's say you create an AGI. (Yeah, I know, we don't know how to define it or test for it. That's not the point.)
And let's say that you worry about containing it so that it doesn't take over the world. (Perhaps reasonably.)
But let's say you're using it the way we use AI right now, to help people code. It generates code for people, which they then run, not in the contained environment. You don't have a contained AI any more, do you? Especially if its training data included things like, say, APT techniques and examples, and code obfuscation techniques.
I mean, we worry about it hacking a human - persuading a human to relax the firewall rules, or whatever. But letting it generate code that we will run is far more of a security hole, isn't it?
Am I all wet here?
Presumably, people aren't including unvetted code regardless of whether or not it was AI generated. If they are, that's a much larger problem than AI, and the problem is the human.