Firstly, Vinge did not predict any particular outcome, just a failure of futurism to make useful predictions (not that it had a great track record anyway).
I take his claim that the Technological Singularity will mean the "end of the human era" to be a particularly wrong statement, and certainly an alarmist one.
If you do not appreciate the vast difference between software that can introspect on every aspect of its cognitive processing, and completely modify any aspect of its mental architecture (subject to design ability), versus humans who have a fixed hardware design and a very limited introspective and behavior modification capability, then you can say nothing useful about this subject.
Except of course that's not actually what AIs can do, is it?
Firstly, just because software can "introspect" its own code doesn't mean it will be able to actually understand or modify it. You do qualify this with "subject to design ability" - but that is not a minor qualifier, it's an enormous hurdle, particularly when you consider how hard it is for us to understand how our own brains work.
Secondly, in biological organisms many functions are in fact automated. We don't have to actively think about breathing, for instance. In a much more complex machine AI (particularly one able to manipulate the physical world), some functions will by necessity have to be automated, which will also limit how much it can self-modify. Being able to switch its modules around isn't helpful if it accidentally shuts off its temperature control modules and causes the hardware to shut down completely. There are limits on how much humans can "modify" an AI while it is running; the same will apply to an AI trying to tweak itself.
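To make the point concrete, here is a toy sketch of a self-modifying system that fences off its life-support functions. Every name in it (the module names, the class, the protected set) is a hypothetical illustration, not a real architecture:

```python
class SelfModifyingAI:
    """Toy model: an agent that can swap its own modules,
    except for a protected set it needs to keep running at all."""

    # Hypothetical "automated" functions the system cannot touch live.
    PROTECTED = {"temperature_control", "power_management"}

    def __init__(self):
        self.modules = {
            "temperature_control": lambda: "regulating",
            "power_management": lambda: "balancing",
            "planner": lambda: "planning v1",
        }

    def replace_module(self, name, new_impl):
        # Self-modification is allowed, but not of critical modules.
        if name in self.PROTECTED:
            raise PermissionError(f"{name} cannot be modified while running")
        self.modules[name] = new_impl


ai = SelfModifyingAI()
ai.replace_module("planner", lambda: "planning v2")  # allowed

try:
    ai.replace_module("temperature_control", lambda: "off")
except PermissionError as err:
    print(err)  # temperature_control cannot be modified while running
```

The guard is the whole argument in miniature: unrestricted self-modification and guaranteed continued operation pull in opposite directions.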
You've just declared the entire fields of psychology and sociology worthless. I guess all those hundreds of thousands of researchers were just wasting their lives.
Except of course that's not what I actually said. What I said is that you will have difficulty predicting the actions of any individual human being, and that with a machine it will be no different.
Moreover, psychologists and sociologists largely base their "predictions" not on analyzing brain wave patterns, but on observing human behavior - specifically, the behavior of large numbers of individuals. Sociologists in fact often use statistical tools to make these determinations.
So again, a machine AI is no more "unpredictable" than a human one. If it takes Action A 90% of the time versus Action B, then we know it's predisposed towards Action A. Just because you don't understand the exact algorithm doesn't mean its behavior can't be predicted to some extent, even if you can't "place yourself in the shoes of the machine". And even with these tools, the predictions are far from 100% accurate, particularly when dealing with an individual.
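The behavioral approach can be sketched in a few lines: the same frequency counting a sociologist applies to observed behavior works on a machine's action log, with no access to its internals. The action names and the log below are purely hypothetical:

```python
from collections import Counter

def estimate_propensities(observed_actions):
    """Estimate how often an agent takes each action, from behavior alone."""
    counts = Counter(observed_actions)
    total = len(observed_actions)
    return {action: n / total for action, n in counts.items()}

def most_likely_action(propensities):
    """Predict the next action as the most frequently observed one."""
    return max(propensities, key=propensities.get)


# Hypothetical observation log: Action A taken 9 times out of 10.
log = ["A"] * 9 + ["B"]
props = estimate_propensities(log)
print(props)                     # {'A': 0.9, 'B': 0.1}
print(most_likely_action(props)) # A
```

No "brain wave analysis" of the agent is involved; the prediction rests entirely on its outputs, which is exactly how we predict each other.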
Aside from the fact that quantity has a quality all of its own, you are again failing to appreciate the effects of fundamental architecture differences. For example any symbolic AI architecture will have radically different task performance, internal structure and failure modes to anything in the biomorphic connectionist class.
Firstly, software architecture is a different thing from hardware power. The bigger hurdle currently is in fact the software side - designing the right architectures - and I would argue it will always be the greater hurdle.
Secondly, while it is true that different architectures work very differently, you still haven't shown that an AI will necessarily be any more "unpredictable" than a human. If you're saying that an AI can't even be predicted through statistical analysis of its outputs, then it's really just a completely random machine devoid of any actual structure or intelligence - much like a person whose behavior has become erratic due to brain damage.