# AI - lesson 03

#### Francesco Arrigoni

###### 14 October 2015

## Goal-oriented agents

Historically called __problem-solving agents__.

Steps involved:
1. __Formulate a problem__

   Usually performed by the designer.

2. __Search for a solution__

   Usually performed by the agent itself.

3. __Execute the actions__ that compose the solution.

   Performed by the agent.
This differs from the usual setting, in which step 2 (formulating the algorithm) is performed by a human.

The algorithms produced by the agent are usually simple: fixed sequences of actions, without IF statements.
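The solve-then-execute scheme above can be sketched as follows. This is a minimal sketch: `formulate_problem` and `search` are hypothetical placeholder functions, not a real API.

```python
# Minimal sketch of a problem-solving agent's control loop.
# `formulate_problem` and `search` are hypothetical placeholders.

def problem_solving_agent(percept, formulate_problem, search):
    """Solve once, then execute the plan open-loop (no IF statements)."""
    problem = formulate_problem(percept)  # step 1: usually done by the designer
    plan = search(problem)                # step 2: done by the agent
    for action in plan:                   # step 3: execute the actions in order
        yield action
```

Once found, the plan is executed blindly, with no branching on new percepts; this is why the environment must be static and deterministic.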
### Formulating the problem

The __environment__ we have to interact with is:

- static
- deterministic
- fully observable
- known
- discrete
The problem can be formulated by defining the following five elements.
(The example problem is the 8-puzzle, a reduced version of the 15-puzzle.)

1. The __initial state__: a description of the initial situation of the problem.

2. __Actions__(s) = {actions that are applicable in s}: returns the legal moves from the state s.

   The state s<sub>0</sub> is

   7|2|3
   ---|---|---
   1|4|8
   6|5|

   Actions(s<sub>0</sub>) = {$\leftarrow, \uparrow$}
3. __Result__(s, a) = s', with $a \in Actions(s)$: takes the chosen move as argument and returns the new state.

The __result__ function is also called the transition function (the model).

We are considering a deterministic model, so the result of a given action is *fixed*.
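__Actions__ and __Result__ might be sketched for the 8-puzzle as below. The state encoding (a 9-tuple read row by row, with 0 marking the blank) and the convention that an action names the direction the blank moves are assumptions of this sketch.

```python
# Sketch of Actions(s) and Result(s, a) for the 8-puzzle.
# States are 9-tuples read row by row; 0 marks the blank.
# Assumed convention: an action moves the blank tile.

MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def actions(s):
    """Legal moves of the blank in state s."""
    i = s.index(0)
    legal = []
    if i >= 3:
        legal.append("up")
    if i < 6:
        legal.append("down")
    if i % 3 != 0:
        legal.append("left")
    if i % 3 != 2:
        legal.append("right")
    return legal

def result(s, a):
    """Deterministic transition: swap the blank with the adjacent tile."""
    assert a in actions(s)
    i = s.index(0)
    j = i + MOVES[a]
    t = list(s)
    t[i], t[j] = t[j], t[i]
    return tuple(t)
```

For the state s<sub>0</sub> above (blank in the bottom-right corner), `actions` returns `up` and `left`, matching {$\leftarrow, \uparrow$} when the arrows are read as movements of the blank.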
The __initial state__, __actions__, and __result__ together define a *graph* called the __state space__.

In our __state space__ graph:

- *vertices* correspond to *states*
- *arcs* correspond to *actions*.
+
|
|
|
+The graph is *implicitly defined*, it's not represented with an incidence matrix.
|
|
|
+For example the state space of chess game is estimated to be $10^{120}$.
|
|
|
+It is almost impossible to represent, but we can build algorithm that can win a game exploring only a very small portion of the graph.
|
|
|
+
|
|
|
+The agents viewed at lesson are in a sense less powerful than Operating Research algoritms, but they don't require knowing the totality of the graph, and this is an advantage.
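One way to see how an implicitly defined graph can be explored without ever being stored: a breadth-first search sketch that generates neighbours on demand through the actions/result functions. The tiny integer graph in the usage example is invented for illustration.

```python
# Sketch: breadth-first search over an implicitly defined state space.
# The graph is never stored; neighbours are generated on demand.
from collections import deque

def bfs(initial, actions, result, goal_test):
    """Return a path (list of states) from initial to a goal, or None."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        s = path[-1]
        if goal_test(s):
            return path
        for a in actions(s):
            s2 = result(s, a)
            if s2 not in visited:
                visited.add(s2)
                frontier.append(path + [s2])
    return None

# Toy implicit graph: states are integers, actions are "+1" and "*2".
path = bfs(1,
           actions=lambda s: ["+1", "*2"],
           result=lambda s, a: s + 1 if a == "+1" else s * 2,
           goal_test=lambda s: s == 10)
```

Breadth-first search visits states level by level, so the path returned is one with the fewest actions; only the states actually reached are ever materialised.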
A __path__ in a graph is a *sequence of states* connected by arcs of the graph.

4. __Goal test__: returns true or false; it checks whether a state is a final (goal) state.

   - __explicit__ goal test: a list of all the final states (e.g., the 8-puzzle)
   - __implicit__ goal test: a condition that characterizes the final states (e.g., chess)
Every path in the state space is associated with a *cost* (a number).

5. __Path cost__: the sum of the costs of the actions composing the path.
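The last two elements might be sketched for the 8-puzzle as follows. The goal configuration and the unit step costs are assumptions of this sketch, not given in the notes.

```python
# Sketch of the goal test and path cost for the 8-puzzle.
# States are 9-tuples read row by row; 0 marks the blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal configuration

def goal_test(s):
    """Explicit goal test: the final states are listed (here, just one)."""
    return s == GOAL

def path_cost(actions_taken, step_cost=lambda a: 1):
    """Sum of the costs of the actions composing the path (unit cost each)."""
    return sum(step_cost(a) for a in actions_taken)
```

Under unit step costs the path cost is simply the number of moves; a different `step_cost` function would model actions with unequal costs.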
|