The Basis Of My Futurism

In a previous essay of mine (The Future Of Intelligence), I specifically disclaimed any attempt to provide evidence for my thoughts about the future, and about The Singularity in particular. This essay attempts to present some of my reasoning and evidence.

This mini-essay can be considered a somewhat less-well-written version of Xuenay's "14 objections against AI/Friendly AI/The Singularity answered". You should probably read that too/instead.

Beliefs

My conclusions are based on the following premises. I can offer only the barest evidence for most of them. Better still, several of them are impossible to disprove at this time. Sorry about that.

The silly little tags are for quicker reference in the conclusions section.

SRNPossible
Self-replicating nanotechnological robots are possible. There's actually reasonably solid evidence for this; Google can help you here, but you might want to start with Is Molecular Nanotechnology "Scientific"?
SRNGraspable
Self-replicating nanobots (SRN) are technologically within our grasp in the near term. Thirty years is, I think, a good upper bound. I would be unsurprised to see them in fifteen.
SRNExistential
SRN are an existential risk to the human species. See Bostrom's Existential Risks Paper, amongst many others, for discussion on this point.
SRNWeapons
SRN-based weapons (or, for that matter, any nanotech weapons, self-replicating or not) are a fundamentally unstable military technology with a massive first-strike advantage, which makes them far more dangerous than current WMDs. Ask Google about {nanotechnology "first strike"}; this has been written about extensively. See in particular Nanotechnology and International Security.
StupidHumans
Humans are too stupid to be trusted with a military technology that is an existential risk and has a first-strike advantage. I argue that this is true prima facie, and that no sane, intelligent human would argue against it. I'm sure that once this is posted to the 'net, someone will try to prove me wrong.
AGIPossible
It is possible to have a computer-based general intelligence that is at least as general, and at least as intelligent, as the average human. At this time, there is no direct evidence that this is possible. The only argument for it is anti-anthropocentrism, i.e. that it is arrogant to believe otherwise because humans can't be that special. On the other hand, there's no evidence to the contrary, either. AGI stands for Artificial General Intelligence, by the way, and is used to distinguish an AI that can have a conversation with you from "AI" in the sense of "pathfinding algorithms in video games" and such.
IntIndependent
The stronger version of AGIPossible: there is no aspect of intelligence, in any form, that depends on its physical substrate. Any form of intelligence that a biological being can have, a silicon being can also have (even if that means emulating hormones or whatever), and vice versa. This includes social intelligence, mathematical intelligence, sexual intelligence, aggressive intelligence (aka "cunning"), and so on. Again, no evidence for this at all. Do, however, look at Xuenay's 14 Objections to see someone try to present such evidence anyway.
HumansUnspecial
The universe is capable of having beings fundamentally smarter than humans. What "fundamentally smarter" means is up for grabs; I have a friend who insists that humans are Turing-complete with respect to intelligence, and that there is nothing any intelligence could ever come up with that a fully-functional human could not understand, given an indefinite amount of time to study it. I think he's probably right, but that doesn't change the fact that one Turing-complete machine can compute both faster and better than another, and I think that applies to minds as well. If you want me to talk about this one more, let me know. No evidence for this at all, again, except the anti-anthropocentrism argument. No evidence against, though, either.
BigIntPossible
There is little or no upper bound on the maximum intelligence of a being, except for physical limits themselves. In other words, human-level intelligence isn't the ceiling (see HumansUnspecial), and there's no hidden ceiling just above human level. If there is a ceiling, it's at the point where the being's computational substrate is so huge (planet-sized or larger) that light-speed lag starts causing problems; a quick numerical sketch of that lag follows this list. Matrioshka Brains seem to be the absolutely insane outer limit of this idea. Again, no evidence. Again, no evidence against, though, either.
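
Since the light-lag claim is really just arithmetic, here's a quick Python sketch of it. The substrate radii are my own illustrative assumptions (roughly a human brain, a planet, and a Matrioshka-Brain-scale shell), not numbers taken from any of the sources above.

# One-way signal latency across a computational substrate of radius r
# is, at best, r / c. The radii below are rough illustrative assumptions.
C = 299_792_458.0  # speed of light, metres per second

substrates = {
    "human brain (~0.1 m)": 0.1,
    "planet-sized (~6.4e6 m, roughly Earth's radius)": 6.4e6,
    "Matrioshka Brain shell (~1.5e11 m, roughly 1 AU)": 1.5e11,
}

for name, radius_m in substrates.items():
    print(f"{name}: one-way light lag of at least {radius_m / C:.1e} seconds")

The lag is negligible at brain scale, tens of milliseconds at planet scale, and minutes at Matrioshka-Brain scale, which is where it starts to bite.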

Conclusions

General

Hard Takeoff Conclusions

Hard Takeoff is the theory that once we have a near-human-level AGI, we'll have something far smarter than humans very quickly thereafter.