A clarification:
It's not that the AI (I thought, foolishly, that SIAI stood for super-intelligent AI) would in principle be unable to wrap its mind around these concepts. The problem is in the way it would have to do it. It is a bit of a strong judgment, I'll admit, but my position is that justification &c. are not real things but human behaviours. You couldn't explain them to the AI, because they are, in a certain strange way, part of us. If we had evolved differently, we would tend to create different systems of ethics. An AI could no doubt make herself over in such a way that she would understand these concepts, but the problem would be in the "making over." Note that the phrase "understanding these concepts" is misleading; these are not concepts, nor do we "understand" them. In any case, if an AI "understood" such things, she would have to be more or less "human", and as I said before, that is what the whole project was trying to avoid.
If being more or less human is central to understanding what we mean by right and wrong and achieving it, then clearly we want a more or less human AI. Clearly to me, at least, and that's what this Friendly AI would turn out to become. But how human would it have to be? It seems to me humans are far from optimal by our own standards; we can picture numerous potential improvements. This implies a space of "human-like, but better" minds which an AI could move into. The project is not trying to avoid "human" qualities, but to build upon the best of them and move into that space. There appear to be non-essential qualities, such as having minds implemented on meat, which we (tentatively) decide to leave out. There are directions of improvement along which we (tentatively) move: more rationality (that is, a greater grasp of truth), more altruism, greater self-determination and self-understanding, greater intelligence, and so on. These may turn out to be bad choices, and the AI should have the ability to fix them even if the programmers didn't notice the problem.
How is a human behaviour not a real thing? Morality isn't solely implemented as concepts in a human mind; it runs much deeper than that. For sure, we don't completely understand how it works. This opens up the interesting technical issue of "How can we give the AI something we don't entirely understand?". We don't strictly need to understand how morality (or, more generally, our minds) works, although that'd be nice; we just do it. It's part of our species-wide set of mind thingies, developed as a normal enough human infant is raised in a normal enough environment. It's certainly not clear from the outset that this transfer is impossible, or more difficult to achieve than a successful direct human upgrade (i.e. starting from a human base, rather than indirectly from a blank slate). One of the central ideas is "the effect is the map to the cause" -- by virtue of having a morality, we partially point towards the human universals and personal philosophy that underlie and compose it. We can, roughly, tell the AI "as you become smarter you could understand and embody this *points to human morality* as we would; that's more what we meant".
You could argue that a human morality is the result not of any single human but of a whole group of them interacting, and you'd be right. The final result depends on interactions between many humans. This does not get in the way of the AI picking up human morality patterns. I like to think of the significant aspects of our present reality, the system that attempts to achieve good things, as a bunch of essentially disconnected blobs of mindstuff (individual humans), all pretty much the same shape and size (there are a few variants with significant chunks left out: those with significant brain damage, psychopaths, etc.). An AI is a new pattern of mindstuff, potentially of a different size and shape, which can take on qualities of individual human minds or of interacting collections of them equally well -- it's not right to think of an AI as necessarily more like a human than like a society thereof. This is mostly an aside.
Human morality could have evolved differently; there's no reason to think the particular path evolution took led to an optimal end. It appears that some serious revision is in order, effectively undoing some "decisions" evolution has made (e.g. making us rationalising animals, with an inconsistent and opaque goal/decision/desirability system). This is a problem for both humans and "human-like, but better" AIs.
Also, Nick, I don't think the point you raised really casts any aspersions on my relativism. I'm not sure how "we know what we're getting into" (which was supposed to be a point about how we have some experience with predicting human behaviour) is an appeal to shared ethical principles. That being the case, I offer this clarification in exchange for that one.
We have experience with predicting humans; this doesn't mean they're easy to predict, just that we (in principle) know how well we can predict them. We have no experience with human augments or uploads. AIs could be more or less predictable depending on their designs. The human mind was not designed with forward planning in mind: there's no reason to expect it to be easy to improve, even given a definite goal such as "improve memory ability without destroying anything important", without making some form of mistake. By contrast, a Seed AI will have spent its entire existence around self-improvement and change, indeed around its own creation, and will take a greater and greater part in this as it grows up. One would expect it to have a far greater technical ability to self-modify than humans have. You could then say, roughly, "self-modify in the way an altruistic human upload would if it could" -- the "end result" would be equivalent to that of an altruistic human upload, but with far less chance of technical error.
There are lots of gaps in the above, naturally, and I'm not sure which ones in particular stick out to you. I'll leave that for you to explain.
I took your comment to be a justification for uploading, appealing to our desire to understand things in order to best achieve our goals. Strong relativism could hold that no such shared desire exists, but in hindsight that's silly -- this is a reasonably mild principle you'd expect to pop up with any kind of intelligence. You could nonetheless hold that humans can share ethical principles locally but not globally, or that globally we do share a whole bunch, but in principle we needn't.
To be honest, I'm still trying to work out exactly what you mean by the term "relativism". Do you mean humans, as a species, share no significant patterns in morality? Or that we don't share enough for "improving the world" to have any well-defined meaning? Or that our morality is centrally dependent on properties of humans? How does it lead to particular claims about future plans, such as human uploading being superior to AI?