  LongeCity
              Advocacy & Research for Unlimited Lifespans


An Intuitive Explanation of Bayesian Reasoning



#1 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 September 2003 - 10:03 AM


ImmInst Chat
Sept 21, 2003 @ 8pm Eastern
http://www.imminst.org/chat



An Intuitive Explanation of Bayesian Reasoning

Bayes' Theorem
for the curious and bewildered;
an excruciatingly gentle introduction.


By Eliezer Yudkowsky

Your friends and colleagues are talking about something called "Bayes' Theorem" or "Bayes' Rule", or something called Bayesian reasoning. They sound really enthusiastic about it, too, so you google and find a webpage about Bayes' Theorem and...

It's this equation. That's all. Just one equation. The page you found gives a definition of it, but it doesn't say what it is, or why it's useful, or why your friends would be interested in it. It looks like this random statistics thing.

So you came here. Maybe you don't understand what the equation says. Maybe you understand it in theory, but every time you try to apply it in practice you get mixed up trying to remember the difference between p(a|x) and p(x|a), and whether p(a)*p(x|a) belongs in the numerator or the denominator. Maybe you see the theorem, and you understand the theorem, and you can use the theorem, but you can't understand why your friends and/or research colleagues seem to think it's the secret of the universe. Maybe your friends are all wearing Bayes' Theorem T-shirts, and you're feeling left out. Maybe you're a girl looking for a boyfriend, but the boy you're interested in refuses to date anyone who "isn't Bayesian". What matters is that Bayes is cool, and if you don't know Bayes, you aren't cool.

More: http://yudkowsky.net/bayes/bayes.html
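For anyone who wants to check the arithmetic, here is a minimal Python sketch of the theorem applied to the mammography problem from the linked essay (the probabilities are the essay's; the variable names are ours):

    # p(a|x) = p(x|a) * p(a) / p(x), with the essay's numbers:
    # 1% of screened women have breast cancer, 80% of those test
    # positive, and 9.6% of the rest also test positive.
    p_a = 0.01               # p(cancer), the prior
    p_x_given_a = 0.80       # p(positive | cancer)
    p_x_given_not_a = 0.096  # p(positive | no cancer)

    # p(x): total probability of a positive mammography
    p_x = p_x_given_a * p_a + p_x_given_not_a * (1 - p_a)

    # p(a) * p(x|a) belongs in the numerator, p(x) in the denominator
    p_a_given_x = p_x_given_a * p_a / p_x
    print(round(p_a_given_x, 3))  # 0.078: most positive tests are false alarms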

[Image: Reverend Bayes]

#2 celindra

  • Guest
  • 43 posts
  • 0
  • Location:Saint Joseph, TN

Posted 15 September 2003 - 10:19 AM

Why, BJ?

#3 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 15 September 2003 - 10:31 AM

Bayes is important to creating AI, and thus, in my opinion, important to immortalists, as AI is the key to overcoming our limitations. Most important of all is creating Friendly AI.

#4 outlawpoet

  • Guest
  • 140 posts
  • 0

Posted 15 September 2003 - 11:16 AM

I think it's also important to note that Bayesian thinking matters if you're to make progress in meta-rationality. I mean, you could work with the Popperian theory of proof, but it's really a weaker framework that doesn't give as much fruit for free.

And being rational should be a goal for everybody: it multiplies your efforts and helps you make the best use of your limited means. Since everybody here IS limited, right?

#5 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 16 September 2003 - 01:22 AM

Many atheists nowadays follow the outline of Popper's falsificationism (the idea that statements should be falsifiable), and Bayes' Theorem supersedes Popperian rationality, making it important for everyone. The problem with Bayes is that the learning curve is steeper than for many other topics we commonly discuss (I still have a lot to learn), so it actually takes a bit of *reading* to familiarize yourself with the idea and how it fits into the bigger picture of science. The satisfying "oh, I see" feeling doesn't come after the first sentence.

#6 Bruce Klein

  • Topic Starter
  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 22 September 2003 - 05:21 PM

BJK_ Thanks Eliezer for allowing us the opportunity to chat with you..
18:02:42 BJK_ I hope this doesn't tax your patients
18:03:04 Eliezer don't worry, I'm sure it will
18:03:31 BJK_ I'm glad you've prepared yourself for disappointment
18:03:59 haploid patients ?
18:04:27 Eliezer don't worry, no matter how prepared I am, it's still possible to surprise me
18:05:44 NickHay how's this chat going to work?
18:05:53 NickHay just start asking questions?
18:05:57 Eliezer sure
18:06:03 BJK_ yes..

18:06:07 NickHay Truth in a mind isn't a scalar quantity, it includes notions of dependencies upon other truths and usefulness (for instance) (SPDM kind of thing). How does Bayes' theorem handle this? By composing elementary links into a network? By going up a level of organisation somehow? Hmm, and then there's separating the different ways truth can be represented in a mind (eg. beliefs, concepts, brainware).
18:06:43 cyborg01 Hello=)
18:06:51 NickHay I understand this is unlikely to have a simple answer, or to be asking the right question ;)
18:07:03 cyborg01 I'm interested in how this relates to connectionism
18:08:04 Eliezer Nick: the way Bayesian reasoning usually works, it concerns itself with joint distributions
18:08:22 Eliezer the probability of A&B, the probability of ~A&B, ~A&~B, A&~B
18:08:38 Eliezer think of the probability of A and B being independent, as the special case
18:09:08 NickHay if they're not independent, then one can be evidence for the other?
18:09:32 Eliezer if they're not independent, then one must be evidence for the other, and vice versa
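A minimal Python sketch of the joint-distribution bookkeeping described above; the four probabilities are made up for illustration:

    # The four joint probabilities Eliezer lists: A&B, A&~B, ~A&B, ~A&~B.
    joint = {(True, True): 0.30, (True, False): 0.10,
             (False, True): 0.20, (False, False): 0.40}

    p_a = sum(p for (a, b), p in joint.items() if a)  # 0.40
    p_b = sum(p for (a, b), p in joint.items() if b)  # 0.50
    p_a_given_b = joint[(True, True)] / p_b           # 0.60

    # Independence is the special case p(A|B) == p(A).  Here 0.60 != 0.40,
    # so the variables are dependent: observing B raises the probability
    # of A, which makes B evidence for A (and, symmetrically, A for B).
    print(p_a, p_a_given_b)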
18:09:42 haploid Ok my first question is: what does bayes' theorem have to do with the development of AI ?
18:09:46 Eliezer the question is whether any given brain is smart enough to keep track of it
18:09:59 Eliezer or whether you have to lose track, and start being a bounded Bayesian or a Bayesian wannabe
18:10:19 NickHay what does A range over?
18:10:26 NickHay any structure?
18:10:42 Eliezer sure, A and B are arbitrary variables (not sure I understand your question)
18:11:27 NickHay it was how probability, a scalar notion of plausibility say, is a basic form of truth
18:11:30 Eliezer haploid: are you previously familiar with Bayes' Theorem, read the intro, etc. ?
18:11:39 NickHay how do you create the more complex structures more useful for minds?
18:11:54 Eliezer you do a probability distribution over a set of structures
18:12:57 NickHay that still has probability as a scalar quality
18:13:03 Eliezer for example, in Bayesian networks, part of the work of training networks, is in assigning a probability for a particular network structure
18:13:06 NickHay *quantity
18:13:17 Eliezer for example, that A causes B, as opposed to A and B having a mutual third cause C
18:13:53 haploid Yes, familiar by way of college math, which of course did not provide applications to AGI =)
18:14:47 Eliezer haploid: there are two structures that any mind, or anything intended to function as a mind, needs to follow
18:15:09 Eliezer one is Bayes' Theorem, and the other is the expected utility equation
18:15:16 Eliezer together they are known as a Bayesian decision system
18:15:31 Eliezer they don't need to be followed perfectly, but any mind will reflect something of that structure
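A toy Python sketch of the two pieces named above, belief update plus expected utility; every number in it is illustrative:

    # Piece one, Bayes' Theorem: update the belief in hypothesis H
    # after seeing evidence E.
    p_h = 0.5              # prior probability of H
    p_e_given_h = 0.9      # p(E | H)
    p_e_given_not_h = 0.2  # p(E | ~H)
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    p_h_given_e = p_e_given_h * p_h / p_e  # ~0.82

    # Piece two, the expected utility equation: weight each action's
    # payoffs by the updated probabilities and pick the maximum.
    utility = {"act":  {"H": 10.0, "~H": -5.0},
               "wait": {"H": 1.0,  "~H": 1.0}}

    def expected_utility(action, p):
        return p * utility[action]["H"] + (1 - p) * utility[action]["~H"]

    best = max(utility, key=lambda a: expected_utility(a, p_h_given_e))
    print(best)  # "act": EU of ~7.3 beats "wait" at 1.0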
18:15:41 BJK_ Do you think it's possible for all post singularity intelligences to work with total or near total bayesian reasoning.. it would seem to be easy enough to compute..
18:15:55 cyborg01 Eliezer: many concepts cannot be represented by symbols (A, B etc..) so how do you solve that problem
18:16:11 BJK_ is this style of thinking.. the most advanced?
18:16:14 Eliezer BJK: in general, Bayesian reasoning is NP-hard or harder
18:16:36 Eliezer barring breakthroughs to a lower level of physics, we'll be approximating it for all nontrivial cases, forever
18:17:00 BJK_ but to a point that would be very useful.. i would think..
18:17:26 Eliezer cyborg: for some concepts, you would take a complex structure and assign it a probability; other concepts are implicitly Bayesian
18:17:28 Eliezer an example
18:17:43 Eliezer Aristotle tells us: "all men are mortal, Socrates is a man, therefore Socrates is mortal"
18:17:52 haploid Ok, I can see bayes as a possible implementation method, but I don't see how a mind *of necessity* must be bayesian.
18:18:07 Eliezer and that is true *by definition*, it is a piece of logic that is *absolutely* certain and reliable, or so Aristotle claims
18:18:11 haploid It seems that you present that as axiomatic.
18:18:21 Eliezer now, let's suppose that we give Socrates a cup of hemlock to drink
18:18:32 Eliezer and, thanks to some help provided by the Immortality Institute
18:18:35 Eliezer he doesn't die
18:18:40 Eliezer what went wrong with the logic?
18:18:58 Eliezer * Eliezer throws the question to any members of the audience who haven't already heard this one
18:19:02 cyborg01 Eliezer: I think in your example, 'men', 'mortal', 'Socrates' are all complex concepts
18:19:19 localroger The prior is incorrect. Thanks to the immortality institute, the assumption "all men are mortal" is no longer true.
18:19:27 cyborg01 Hehe
18:19:35 Eliezer no, we've *defined* men as being mortal
18:19:42 NickHay can't be wrong

18:19:55 Eliezer let's say that the Aristotelian class 'man' includes mortality, five fingers on each hand, red blood, and the wearing of clothes
18:20:09 Eliezer each aspect of the definition is individually necessary, and together they are sufficient
18:20:29 Eliezer that is how concepts work in logic, or, for that matter, in classical AIs and their semantic nets
18:20:53 cyborg01 So your approach is basically purely symbolic
18:21:01 Eliezer supposedly it is absolutely reliable, which is why logicians and GOFAI researchers use it
18:21:10 Eliezer and yet, in practice, Socrates drinks the hemlock and lives... what went wrong?
18:21:36 Eliezer cyborg: no, I'm trying to explain what's wrong with the symbolic approach - or the classical connectionist approach, for that matter
18:21:45 cyborg01 In reality he didn't live...
18:21:46 BJK_ the observer got it wrong... she was seeing things..
18:21:52 localroger Then nothing went wrong. Ultimate mortality has nothing to do with whether a specific incident kills you.
18:22:08 Eliezer * Eliezer says to localroger: "Heh."
18:22:50 Eliezer for the purposes of this discussion, we shall assume that "Socrates" in this case is a citizen of the 24th century, and "mortality" is defined with respect to the local hazard function and not ultimate longevity
18:23:08 cyborg01 I think connectionism can handle these problems quite well... through learning
18:23:29 Eliezer anyway, I would answer that if "man" is defined to require "mortality" by definition, then we can never know whether Socrates is a man except by feeding him hemlock to see if he is mortal
18:23:42 Eliezer the Aristotelian syllogism can never produce any new information
18:24:06 Eliezer it can only identify Socrates as a man, after we have independently observed every single characteristic we would want to infer
18:24:27 cyborg01 I see..
18:25:04 Eliezer a Bayesian, on the other hand, will talk about p( mortality | five fingers & red blood & wears clothes )
18:25:15 NickHay Eliezer: in the general joint probability distribution, would A range over cognitive structures and B.. all of physical reality? hmm, that doesn't sound right.
18:25:28 Eliezer or p( five fingers | mortality & red blood & wears clothes )
18:25:41 Eliezer a set of clustered characteristics that are usually associated, and so can be used to infer each other
18:26:09 Eliezer but the human mind sums up this complex thing, this empirical, probabilistic clustering, as a single solid substance, man-ness, that is either present or absent in a thing
18:26:29 cyborg01 OK..
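A minimal sketch of that kind of cluster inference in Python, using an invented table of observations rather than definitions:

    # Estimate p(mortal | five fingers & red blood & wears clothes)
    # from (made-up) observed individuals, one tuple per person:
    # (five_fingers, red_blood, wears_clothes, mortal).
    observations = [
        (True, True, True, True),
        (True, True, True, True),
        (True, True, True, True),
        (True, True, True, False),   # one 24th-century exception
        (True, False, True, True),
        (False, True, False, True),
    ]

    matching = [o for o in observations if o[0] and o[1] and o[2]]
    p_mortal = sum(o[3] for o in matching) / len(matching)
    print(p_mortal)  # 0.75: an empirical cluster, not a definition,
                     # so new evidence can move it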
18:26:42 Eliezer classical AI researchers don't see the complexity, and so they build semantic nets with a single token called 'man', and expect it to work, and surprise surprise, it fails
18:27:06 Eliezer actually, for a Bayesian it is possible to prove that Aristotelian logic is useless to settle any empirical issue
18:27:24 Eliezer Aristotelian logical chains are true in all possible worlds
18:27:33 Eliezer therefore they cannot tell you which possible world you are living in
18:28:13 Eliezer in Bayesian terms, the likelihood ratio is unity for P(Aristotle | A) and P(Aristotle | ~A)
18:28:41 Eliezer it leaves the probability of any empirical question A unaltered
18:29:13 Eliezer you can only settle A by observing it in some way - looking at variables that are correlated with A, that have different probabilities of being true depending on whether A is true or false
18:29:44 cyborg01 OK..
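A short Python sketch of that last point; the priors and likelihoods are illustrative:

    # Evidence E moves the probability of A only when the likelihood
    # ratio p(E|A) / p(E|~A) differs from 1.
    def posterior(p_a, p_e_given_a, p_e_given_not_a):
        p_e = p_e_given_a * p_a + p_e_given_not_a * (1 - p_a)
        return p_e_given_a * p_a / p_e

    # A tautology is true in every possible world, so p(E|A) == p(E|~A)
    # == 1 and the posterior equals the prior: nothing is learned.
    print(posterior(0.3, 1.0, 1.0))  # 0.3, unchanged

    # An actual observation, with different probabilities under A and
    # ~A, shifts the estimate.
    print(posterior(0.3, 0.8, 0.1))  # ~0.77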
18:30:07 Eliezer to the extent that connectionist nets work, they will work because their probability of outputting 'mortal' given 'five fingers' and 'wears clothes' reflects the Bayesian logic of the conditional probabilities
18:30:14 Eliezer in other words, because they have Bayes-structure in them
18:30:25 cyborg01 Exactly
18:30:58 cyborg01 That's what I'm trying to say.. connectionism contains Bayesianism
18:31:13 Eliezer no, Bayesianism contains connectionism as a special case
18:31:30 cyborg01 OK=)
18:31:31 Eliezer many kinds of systems can have Bayes-structure besides the ones that we usually regard as neural nets
18:32:31 cyborg01 I guess if your 'clusters' of characteristics get complex enough, then it will approach the structure of connectionist networks
18:32:54 Eliezer if the characteristics themselves are complex in particular ways, connectionist networks may no longer be the best way to handle it
18:33:33 Eliezer for example, if the relation of A to B can be most simply expressed as, say, a Fourier transform or some other thing that neural networks aren't good at (maybe they're very good at Fourier transforms, I wouldn't know)
18:33:49 Eliezer in this case, A and B are so complex that you can no longer brute-force perfect Bayesian reasoning
18:34:02 cyborg01 I don't see how you can 'anchor' the symbols to the real world without using neural nets
18:34:47 Eliezer well, let's say that you want to anchor the AI with respect to the concepts 'red' and 'ball'
18:34:56 Eliezer the AI has a camera and a motor arm
18:34:57 cyborg01 OK..
18:35:18 Eliezer any program that lets the AI pick up a red ball when you type in 'red ball' would, I would say, have grounded the symbol with respect to that problem domain
18:35:31 Eliezer the 'red ball' may have many other properties as well, besides color and shape, that the AI can observe
18:35:35 cyborg01 Right
18:35:52 Eliezer but if the invocation of those two concepts provides the AI with sufficient *distinguishing* information to find the red ball amidst any others
18:36:03 Eliezer then the AI has actually gotten more information out of the empirical universe than you put into it
18:36:33 Eliezer it knows not just that the red ball is a red ball, but also that it makes a jingling noise when picked up, and that it is six inches across
18:36:41 Eliezer this is something that 'all men are mortal' can never do
18:36:43 cyborg01 OK..
18:39:13 cyborg01 So maybe you'll need neural nets to represent symbols.. and let bayesianism work on a higher level..
18:39:35 Eliezer Bayesianism is always doing the work, everywhere, when you look closely
18:39:42 localroger It may be worth noting that even humans aren't perfectly grounded. We are subject to a number of perceptual defects but we muddle through.
18:39:51 haploid I thought we were beyond symbolism.
18:40:11 Eliezer we are... Bayes-structure is what is beyond symbolism
18:40:25 Eliezer symbolic AI fails because it does not contain Bayes-structure, 'all men are mortal' and all that
18:40:39 Eliezer connectionist nets work, sort of, because they have a little Bayes-structure in them
18:40:42 haploid I'm talking to cyborg. He mentioned using neural networks to "represent symbols".
18:40:51 Eliezer ah
18:41:13 cyborg01 Well if you don't have symbols then everything becomes ungrounded
18:41:17 cyborg01 ?
18:41:58 haploid Define symbols. when one says "symbol" in an AI discussion, it generally refers to the GOFAI concept of symbols-as-lisp-tokens.
18:42:21 cyborg01 Yeah I'm thinking of that
18:43:08 cyborg01 But can you build a bayesian AI without using lisp-like symbolic representation
18:43:10 haploid So you're talking about a neural network to map an image of a red ball to _RED_BALL ?
18:43:14 localroger I may be over-simplifying, but it seems to me the point is that in a Bayesian system you might have a "symbol" but it won't be Boolean, it will be a probability estimate.
18:43:53 localroger Even the component concepts that define "RED BALL" will themselves be approximations, and may drift and change over time as observations are made.
18:44:03 cyborg01 I see..
18:44:37 haploid I don't see why a system couldn't be non-connectionist from CCD pixel all the way to bayesian engine. Why do you need a neural network on the inputs ?
18:44:57 cyborg01 So the bayesian AI can automatically learn the environment and creates its own representation? is that what you're saying
18:45:12 localroger Basically, yes.
18:45:29 cyborg01 OK I get it now..
18:45:35 localroger (I think that's how humans do it, too.)
18:45:55 cyborg01 The thing is.. this sounds like connectionism in disguise
18:46:01 NickHay well, all AI are Bayesian in so much as they work. some realise this, others don't.
18:46:05 cyborg01 Or maybe the 2 are equivalent
18:46:40 NickHay you can't really "automattically learn" things with complex structure behind the learning - learning isn't a simple activity, unless you have infinite computing power
18:46:51 localroger The main thing about the Bayesian approach is that you never have a symbol which represents 100% probability of anything. Your symbols are ephemeral; they could even disappear into irrelevance.
18:46:52 Eliezer a Bayesian doesn't care whether or not a lisp token inside the AI is labeled 'Red Ball', or there's an output neuron labeled 'red ball', or some other representation entirely
18:46:58 NickHay same with humans, we have complex brainware to work this
18:47:11 Eliezer a Bayesian just asks about the correlation between internal parts of the AI's state and the red ball out there in the world
18:47:22 NickHay no suggestively labelled components
18:47:28 cyborg01 OK..
18:47:42 Eliezer can I infer the presence of the empirical red ball from this piece of state here? can I infer this piece of state from the red ball?
18:48:01 Eliezer to what extent has the interior of the AI, in itself, become good evidence about the state of the exterior world?
18:49:23 localroger One clear advantage of the Bayesian model is that it can adapt to past learning mistakes. If the only ball the AI has ever seen is red it may associate "redness" with "balldom." But when you give it a blue ball, it forms new probability distributions.
18:49:23 cyborg01 But as soon as you connect the CCD pixel to the bayesian AI, it actually is somewhat like a neural net...
18:49:34 Eliezer there's no need for suggestive labels like 'red ball symbol' or 'red ball output neuron' - the meaning is independent of any labels; the 'meaning' is the probabilistic correlation between the internal state and the external world, and its usability as Bayesian evidence
18:49:40 haploid no
18:49:56 haploid I'd say the neural net is "actually somewhat like a bayesian system"
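A minimal sketch of "meaning as correlation" in Python, with invented co-occurrence counts standing in for the AI's experience:

    # Tally how often an internal state is active alongside an actual
    # red ball: (state_active, ball_present) -> count.  All counts are
    # made up for illustration.
    counts = {(True, True): 45, (True, False): 5,
              (False, True): 10, (False, False): 40}
    total = sum(counts.values())  # 100

    p_ball = (counts[(True, True)] + counts[(False, True)]) / total   # 0.55
    p_ball_given_state = counts[(True, True)] / (
        counts[(True, True)] + counts[(True, False)])                 # 0.90

    # The internal state means "red ball" exactly to the degree that
    # p(ball | state) differs from p(ball), whatever the state is labeled.
    print(p_ball, p_ball_given_state)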
18:50:01 cyborg01 I've read in a NN book that the backpropagation NN implicitly implements Bayesian logic
18:50:02 NickHay a neural net is one particular way of computing one, one possible substrate to build a mind from
18:50:26 Eliezer anything that works must implicitly implement Bayesian logic, or it wouldn't work ;)
18:50:38 Eliezer it's just that there are other things that work too, besides neural nets
18:51:08 haploid Does Novamente use bayesian reasoning ?
18:51:15 localroger The thing is, there are certain situations where the neural net seems to implement incorrect Bayesian logic (just as human brains sometimes do). Those problems that are "hard" for neural nets, for example.
18:51:25 Eliezer * Eliezer says to haploid: "I don't know, because I don't know if it works."
18:51:32 cyborg01 I bet.. so the problem it seems, is whether there is an optimal implementation of Bayesian logic
18:51:41 NickHay some which are easier to convey Friendliness to, some which work much better given fixed resources
18:51:50 Eliezer if you've got infinite computing power, you'd just throw a Solomonoff Induction implementation at the problem
18:52:17 Eliezer though... there are actually serious problems with using SI in goal systems for social agents, as I tried to point out on the AGI list at one point
18:52:40 haploid ok
18:52:47 NickHay can SI be fixed to be reflective?
18:52:52 NickHay in a simple way?
18:53:01 Eliezer if you fix it, it's no longer SI. also it doesn't look like it to me
18:53:12 cyborg01 Solomonoff induction is itself intractable.. so in practice it's not very useful
18:53:19 haploid If the axiom "Bayesian reasoning is required for a mind" holds, then Cyc is dead.
18:53:25 NickHay I was assuming infinite computing power there
18:53:51 Eliezer Cyc was always dead, at least as presented
18:54:12 Eliezer people are not mandated by law to remain in what you regard as the boundaries of their paradigm
18:54:17 Eliezer who knows what they're really doing down there
18:54:28 cyborg01 Heh

18:55:58 haploid SI ?
18:56:11 Eliezer solomonoff induction
18:56:15 Eliezer not superintelligence
18:56:16 NickHay Eliezer: do you have any interesting nonstandard formulations of Bayes' theorem?
18:56:27 Eliezer Bayes has been around for a while
18:56:28 NickHay sharable, but perhaps unshared
18:56:37 Eliezer all the interesting formulations I know of, are already standard
18:56:48 NickHay yah, although the mind-builder perspective is unique
18:56:51 NickHay ok
18:56:59 Eliezer I was speaking mathematically
18:57:16 NickHay I wasn't necessarily
18:58:11 haploid hm
18:58:27 haploid So who was the matching donor on the fellowship challenge? =)
18:58:58 Eliezer * Eliezer declines to answer
18:59:16 NickHay we're running close to the official end of the chat, any last-minute questions for Eliezer, or comments from Eliezer?
19:00:17 cyborg01 Eliezer: maybe you have read Sutton's work on machine learning.. that's relevant
19:00:39 haploid Always curious about siai status; how long will the $10k last? What are the goals of your research for 2004 ?
19:00:54 Eliezer until April
19:01:35 Eliezer other questions... too complex for IRC
19:01:50 Eliezer Sutton's work; no, haven't read it yet
19:02:16 cyborg01 He has a book on machine learning, and some material on the net
19:02:27 Eliezer oh, I know *that*
19:02:35 Eliezer I too am Google
19:02:50 cyborg01 OK=)
19:02:59 NickHay * NickHay declares the semi-official end of chat
19:03:03 NickHay * NickHay is Bruce right now
19:03:15 NickHay thanks for chatting with us, Eliezer



