A rational agent makes the best choices "given its current information". But it seems from the book that the function which measures performance can be omniscient. (For instance, it "knows" the distribution of dirt in both rooms of the vacuum world.) Is that correct?
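
To make the question concrete, here is a rough sketch of what I mean (the names and the loop are mine, not the book's): the performance measure is scored from the full world state, dirt distribution and all, while the agent itself only ever sees its current percept.

    import random

    def performance_measure(world_history):
        # Omniscient: scores one point per clean square per time step,
        # using the full dirt distribution of both rooms.
        return sum(sum(1 for dirty in state.values() if not dirty)
                   for state in world_history)

    def reflex_vacuum_agent(percept):
        # The agent only gets (location, dirty?) -- its percept to date.
        location, dirty = percept
        if dirty:
            return "Suck"
        return "Right" if location == "A" else "Left"

    # Run one short episode and score it with the omniscient measure.
    world = {"A": random.choice([True, False]), "B": random.choice([True, False])}
    location = "A"
    history = []
    for _ in range(10):
        action = reflex_vacuum_agent((location, world[location]))
        if action == "Suck":
            world[location] = False
        elif action == "Right":
            location = "B"
        else:
            location = "A"
        history.append(dict(world))

    print("Score:", performance_measure(history))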

asked 05 Oct '11, 10:27

anorman

Which book are you referring to? Is it AIMA by Norvig?
(10 Oct '11, 18:23) goldenmean

I believe so, since to truly assess the rationality of a choice made by a "rational agent", a function would need to know the current state of everything and how that state would be affected by every possible choice, so that an objective rating can be given to any choice that is made, considering all other possibilities.

All this is theoretical though. In practice, we humans and our rationality/logic are the standard used to assess whether or not a choice made by a "rational agent" was the best/most rational choice. Hope this helps.


answered 05 Oct '11, 11:11

skyon ♦

So my point is that the statement at the top of p. 39, that "rational choice depends only on the percept sequence to date", is wrong on at least two counts: 1. we judge rationality with an omniscient metric, and 2. the agent's percepts are dependent on past decisions (which room to look in, etc.).


answered 05 Oct '11, 11:27

anorman

Actually, you left two important points out of your reasoning. First, by the end of the first paragraph on page 39 the author states that "Doing actions in order to modify future percepts — sometimes called information gathering — is an important part of rationality (...)", so the definition does take into account an agent's ability to influence how its own future percepts will turn out. Second, "rationality" as employed by the AIMA text is defined as "[the selection of] an action that is expected to maximize its performance measure". Therefore not only is "rationality" not measured by an omniscient metric (the text explicitly concedes that an agent acting rationally can still make unfortunate decisions), but whether the definition is right or wrong is beside the point: it is a definition, i.e. this is what the book means when it says that an agent, action etc. is "rational".
(05 Oct '11, 13:39) xperroni ♦
Your first point is well taken and I thank you. On the second, it is not the uncertainty aspect of performance that seems critical (i.e. we can ask to maximize "expected" results); it is that if rationality is "[the selection of] an action that is expected to maximize its performance measure" and the performance measure relies on omniscient knowledge, then we ARE judging rationality by an omniscient metric. (I would be more than happy to take this offline with you. I looked at your website and you are very knowledgeable.)
(05 Oct '11, 14:07) anorman
But that's the point: the performance measure does not rely on omniscient knowledge. "Rationality" is measured in terms of "expected" results, that is, the outcomes deemed possible given what the agent knows. This implies that an agent can be said to be "rational" even when its actions, though correct by the knowledge it possesses, rate poorly in the light of further knowledge to which it has no access. To use the example from the book, an agent tasked with crossing a street, and bearing knowledge about the unfavorable consequences of being run over by a bus, would be deemed "rational" if it looked both ways and waited until no moving vehicles were in sight before proceeding. It can still be foiled by other factors outside the scope of its world model – say, if it was hit by a falling cargo door midway through – but that doesn't mean it's not "rational"; at most it could mean its world model does not sufficiently fit the actual circumstances of the environment, leading to otherwise correct reasoning producing poor results (i.e. the "garbage in, garbage out" problem).
(05 Oct '11, 15:03) xperroni ♦
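
A minimal sketch of the point above, with invented names and probabilities: the agent picks whichever action has the highest expected performance under its own belief distribution, not under the true state of the world.

    def expected_performance(action, belief, outcome_value):
        # belief: {possible_state: probability} -- what the agent currently thinks.
        # outcome_value(action, state): score if `state` turns out to be actual.
        return sum(p * outcome_value(action, state) for state, p in belief.items())

    def rational_choice(actions, belief, outcome_value):
        # "Rational" = best EXPECTED score given current beliefs, even if an
        # unmodelled event (e.g. a falling cargo door) later spoils the outcome.
        return max(actions, key=lambda a: expected_performance(a, belief, outcome_value))

    # Toy street-crossing example with made-up numbers: after looking both ways,
    # the agent believes an oncoming bus is very unlikely.
    belief = {"bus_coming": 0.01, "clear": 0.99}

    def outcome_value(action, state):
        if action == "cross_now":
            return -100 if state == "bus_coming" else 10
        return 0  # "wait" is safe but gains nothing

    print(rational_choice(["cross_now", "wait"], belief, outcome_value))  # -> cross_now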

In principle, yes. In practice, there are fundamental limits to how much an entity can know about its environment (i.e. the Uncertainty Principle), as well as to how accurate predictions on unobservable quantities can be (i.e. the Bayes error). And that's not considering time constraints, which can severely limit an agent's ability to analyse what's going on in any level of detail...
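
As a rough numeric illustration of that last limit (an invented two-class example, not from the book): even a classifier that knows the true class distributions exactly cannot push its error below the Bayes error when those distributions overlap.

    from math import exp, pi, sqrt

    def normal_pdf(x, mu, sigma):
        return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

    # Two equally likely classes whose feature distributions overlap.
    mu_a, mu_b, sigma = 0.0, 2.0, 1.0

    # Bayes error = integral over x of min over classes of p(class) * p(x | class),
    # approximated here with a simple Riemann sum.
    dx = 0.001
    bayes_error = sum(min(0.5 * normal_pdf(-10 + i * dx, mu_a, sigma),
                          0.5 * normal_pdf(-10 + i * dx, mu_b, sigma)) * dx
                      for i in range(int(20 / dx)))

    print("Bayes error ~ %.3f" % bayes_error)  # about 0.159 for these parameters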


answered 05 Oct '11, 12:26

xperroni ♦
