Ok, wow, I'm seriously confused with the interface at the moment. I feel the frustration of my parents.
Savage: Essentially, I'm saying: *You* shouldn't trust anyone else to undergo intelligence explosion, because they might compromise your utility function. But you should certainly trust your own goal structure (can anyone find a counterexample?). Why would you care about your utility function harming anyone else's, if it's protecting and enhancing yours? If you care about others, you'll care about others. When you don't, that's evolution, baby.
Michael Wilson: Where is that from?
Savage: imminst.org thread
Michael Wilson: Depressing.
Savage: i couldn't really think of an argument
Well, I'm glad you couldn't think of an argument at first, Savage. It reassures me that I'm not completely crazy [depressing?].
Michael Wilson: Actually you probably shouldn't trust your own goal structure when it's inconsistent.
Ok. So we know that our utility function changes over time. Surely, we should attempt to maximize our utility beyond our present utility function. Well, I think if your future utility functions are represented in your current utility function, then go for it, sport. And if not, why do you care in any decision you're making? If you believe our present utility function is inconsistent [arguable, but it is a valid worry], shouldn't you assess that as part of your overall utility function? And if not, isn't it moot for the logic of Friendliness in any transhumanization?
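The "represented in your current utility function" move can be written down explicitly (my notation, nothing from the thread). Suppose present-you evaluates a world $w$ partly through the utility functions you expect to hold later:

```latex
U_{\text{now}}(w) \;=\; \sum_{t \ge 0} \gamma_t \, \mathbb{E}\!\left[\, u_t(w) \,\right]
```

Here $u_t$ is the utility function you expect to have at time $t$ and $\gamma_t \ge 0$ is how much present-you weights it. If $\gamma_t = 0$ for all $t > 0$, then future preferences are simply moot to the current decision — which is exactly the point above: either the future selves are already weighted in, or they carry no force in the argument.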
Michael Wilson: And you should trust other people if you think the results of them attempting to execute their goal system are better than you attempting to execute yours (on competence grounds)
Yeah, it could very well be a crapshoot who will best maximize your utility. Then you'd think you'd value it, and it'd be represented in your utility function. If not, why would we worry about it in our one-round game? There will be Nash equilibria to our actions in these transhumanist debates. I'm all about respecting the marketplace of ideas and biodiversity. So would my AGI, in reality an extension of my self-identity. Why worry about Friendliness? The most friendly person is us!
Michael Wilson: Finally, the author seems to be a Randian.
Oof. I don't very much like *argumentum ad hominem*. If we want to discuss specific issues, I will be happy to do so.
Michael Wilson: It's perfectly possible to directly assign utility to other people's utility functions.
Michael Wilson: (or rather, the notion of other people's utility functions being executed)
Ok, sure. If you care about it, you'd put it into your skynet. If you don't, you don't. Still doesn't affect the answer to the question for the AGI engineer.
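To make Wilson's point concrete (my notation, not his): a utility function can take the execution of other people's utility functions as a direct argument, something like

```latex
U_A(w) \;=\; U_A^{\text{own}}(w) \;+\; \sum_{i \neq A} \lambda_i \, U_i(w)
```

where $\lambda_i \ge 0$ measures how much $A$ values person $i$'s preferences being satisfied. A pure egoist sets every $\lambda_i = 0$; someone who would bother implementing Collective Volition has, implicitly or not, set them well above zero before the AGI ever runs.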
Michael Wilson: Amusingly, Randian philosophy does that implicitly while decrying it explicitly.
We saw the bait, and now here's the hook.
Michael Wilson: Collective Volition, for example, assumes that other people's volitions being executed is something you should value.
Michael Wilson: (if you're a good person ™ )
Michael Wilson: If you didn't, you wouldn't implement CV in the first place, you'd take over the world and either delete everyone else or make them your playthings.
Michael Wilson: (if you were a Randian and in denial about the (limited) value you place on other people's goal systems you'd probably make an eden for yourself and not let anyone else in, saying 'charity only makes people weak, they should build their own seed AIs' or similar)
Ok. Sure, I might not recognize collective volition. And that would suck for everybody. But that would still be the optimal thing to do for *me*. Fortunately, people like Ben Goertzel, who will be the first to truly get the shiny new transhumanist toys, won't be dicks. I have to say I trust them in general to respect me enough to allow me to get my own transhumanist Red Bull. But my trust in Ben shouldn't matter any more to him and his AGI's goal structure than it matters to him.
Michael Wilson: Feel free to cc that if you'd like
Thank you for your input. Feel free to join the ImmInst community, if you like. We're not all as crazy as me, I promise.
Edited by modelcadet, 25 February 2008 - 04:38 PM.