# AI - lesson 08

#### Francesco Arrigoni

###### 2 December 2015

## First Order Logic

### Inference procedures for first order logic

- __substitution__: $\theta = \{variable/\text{ground term}\}$
- __unification__: $UNIFY(\alpha,\beta)=\theta$ such that $\alpha\theta=\beta\theta$

|$\alpha$|$\beta$|$\theta$|
|---|---|---|
|knows(John,x)|knows(John,Jane)|{x/Jane}|
|knows(John,x)|knows(y,OJ)|{x/OJ, y/John}|
|knows(John,x)|knows(y,Mother(y))|{y/John, x/Mother(John)}|
|knows(John,x)|knows(x,OJ)|*fail*|
|knows(John,x)|knows(x,y)|{x/John, y/John}|
|knows(John,x)|knows(y,x)|{y/John}, {y/John, x/Jane}, ...|

__Unification__ means finding the values that make $\alpha$ and $\beta$ identical in a given row. A simple algorithm does this by comparing the two strings symbol by symbol; the $\theta$ column shows the substitutions resulting from the comparison. For the *fail* row there is no substitution that unifies the two formulas ($x$ would have to be both $John$ and $OJ$ at the same time). The last row represents a general case that admits more than one substitution as a solution: the __MGU__ (Most General Unifier) is the substitution that is most *general*, here $\{y/John\}$.

Given a __knowledge base__, the question we ask is whether it entails a formula:

$$KB \models \alpha$$

In the case of FOL, if a substitution that makes the entailment hold exists, the corresponding $\theta$ value is also returned.

- __forward chaining__

1. $King(John)$
2. $(\forall y)\, Greedy(y)$
3. $(\forall x)\, King(x) \land Greedy(x) \rightarrow Evil(x)$
4. $Evil(John)$

1-2 are called __facts__, while 3 is called a __rule__; 4 is the new fact derived by applying the rule. We can only derive new facts, not new rules. In FOL it is possible to apply the same rule more than once (with different substitutions), while in propositional logic re-applying a rule was useless. If the rules contain nested functions, we can apply them an infinite number of times: this leads to *semi-decidability*.

Every variable must be quantified for a formula to be a __Well Formed Formula__ (WFF).

$\lnot King(Jane)$ is not a definite clause (a definite clause has exactly one positive literal). $King(John) \lor Greedy(John)$ is also not a definite clause, because it contains two positive literals.
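The row-by-row unification above can be sketched in code. Below is a minimal Python sketch (not a library implementation): variables are lowercase strings, constants are capitalized strings, compound terms like $knows(John,x)$ are tuples, and the helper names (`is_variable`, `substitute`, `unify`) are illustrative. The occurs check is omitted for brevity.

```python
# Minimal unification sketch: variables are lowercase strings ("x"),
# constants are capitalized strings ("John"), compound terms are tuples
# ("knows", "John", "x"). The occurs check is omitted for brevity.

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def substitute(t, theta):
    """Apply substitution theta to term t."""
    if is_variable(t):
        return substitute(theta[t], theta) if t in theta else t
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(a, b, theta=None):
    """Return an MGU of a and b extending theta, or None on failure."""
    if theta is None:
        theta = {}
    a, b = substitute(a, theta), substitute(b, theta)
    if a == b:
        return theta
    if is_variable(a):
        return {**theta, a: b}
    if is_variable(b):
        return {**theta, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for ai, bi in zip(a, b):
            theta = unify(ai, bi, theta)
            if theta is None:
                return None
        return theta
    return None  # mismatched constants or predicates: no unifier

# Rows of the table:
print(unify(("knows", "John", "x"), ("knows", "John", "Jane")))
print(unify(("knows", "John", "x"), ("knows", "y", ("Mother", "y"))))
print(unify(("knows", "John", "x"), ("knows", "x", "OJ")))  # fails: None
```

Note that `unify` returns the *most general* unifier for the pairs it can unify, binding variables only when forced to, which matches the MGU discussion above.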
The main *inference rule* in FOL is called __Generalized Modus Ponens__:

$$\frac{p_1',\; p_2',\; \dots,\; p_n' \qquad p_1 \land p_2 \land \dots \land p_n \rightarrow q}{q\theta}$$

where $\theta$ is a substitution such that $p_i'\theta = p_i\theta$ for every $i$.

##### Example

$$\frac{King(John) \qquad Greedy(y) \qquad King(x) \land Greedy(x) \rightarrow Evil(x)}{Evil(John)}$$

with $\theta = \{x/John,\, y/John\}$.

#### Exercise

Prove that Colonel West is guilty.

$\exists x\; Missile(x) \land Owns(Nono,x)$

Replacing the existentially quantified variable with a fresh constant $M$ gives:

$Missile(M) \land Owns(Nono,M)$

It is possible to prove that these two formulas are not *logically equivalent* (they are not the same thing), but they are *inferentially equivalent*, because the same results can be inferred from each of them.

1. $American(x) \land Weapon(y) \land Hostile(z) \land Sells(x,y,z) \rightarrow Criminal(x)$
2. $Missile(M)$
3. $Owns(Nono,M)$
4. $Missile(x) \land Owns(Nono,x) \rightarrow Sells(West,x,Nono)$
5. $American(West)$
6. $Enemy(Nono,America)$
7. $Missile(x) \rightarrow Weapon(x)$
8. $Enemy(x,America) \rightarrow Hostile(x)$

The variable *x* can be renamed in each rule, because it is *universally quantified* and each rule's variables are independent of the others.

Now we would like to prove the following formula:

$\alpha : Criminal(West)$

With forward chaining we apply the rules as many times as possible:

9. $Sells(West,M,Nono)$, from $MP(2,3,4)$ with $\theta = \{x/M\}$
10. $Weapon(M)$, from $MP(2,7)$ with $\theta = \{x/M\}$
11. $Hostile(Nono)$, from $MP(6,8)$ with $\theta = \{x/Nono\}$
12. $Criminal(West)$, from $MP(5,10,11,9,1)$ with $\theta = \{x/West,\, y/M,\, z/Nono\}$

### Algorithm (backward chaining)

Given the goal $\alpha : Criminal(West)$, the algorithm:

- checks whether the goal is already present in the knowledge base;
- otherwise applies Modus Ponens in reverse order:
    - the leaves of the goal node are the premises required by the parent node, and they are considered goals as well;
    - when all the leaves are satisfied, I can say that I have proved the goal.
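The forward-chaining derivation of $Criminal(West)$ can be sketched in code. The following is a minimal Python sketch of the knowledge base above, not a standard library API: facts are ground tuples, rules are (premises, conclusion) pairs with lowercase strings as variables, and the helper names are illustrative. The loop naively tries every combination of known facts against each rule's premises until no new fact appears, which is enough for a KB this small.

```python
# Naive forward chaining (Generalized Modus Ponens to a fixed point)
# over the Colonel West knowledge base. Representation: facts are
# ground tuples; rules pair a premise list with a conclusion; lowercase
# strings are variables. Helper names are illustrative.
from itertools import product

def is_variable(t):
    return isinstance(t, str) and t[0].islower()

def substitute(t, theta):
    if is_variable(t):
        return theta.get(t, t)  # facts are ground, so one lookup suffices
    if isinstance(t, tuple):
        return tuple(substitute(a, theta) for a in t)
    return t

def unify(a, b, theta):
    a, b = substitute(a, theta), substitute(b, theta)
    if a == b:
        return theta
    if is_variable(a):
        return {**theta, a: b}
    if is_variable(b):
        return {**theta, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for ai, bi in zip(a, b):
            theta = unify(ai, bi, theta)
            if theta is None:
                return None
        return theta
    return None

facts = {("Missile", "M"), ("Owns", "Nono", "M"),
         ("American", "West"), ("Enemy", "Nono", "America")}

rules = [
    ([("American", "x"), ("Weapon", "y"), ("Hostile", "z"),
      ("Sells", "x", "y", "z")], ("Criminal", "x")),
    ([("Missile", "x"), ("Owns", "Nono", "x")], ("Sells", "West", "x", "Nono")),
    ([("Missile", "x")], ("Weapon", "x")),
    ([("Enemy", "x", "America")], ("Hostile", "x")),
]

def forward_chain(facts, rules):
    """Apply the rules as many times as possible, as in steps 9-12."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Try every assignment of known facts to the premises.
            for combo in product(facts, repeat=len(premises)):
                theta = {}
                for p, f in zip(premises, combo):
                    theta = unify(p, f, theta)
                    if theta is None:
                        break
                if theta is not None:
                    new_fact = substitute(conclusion, theta)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

derived = forward_chain(set(facts), rules)
print(("Criminal", "West") in derived)  # the goal Criminal(West) is derived
```

The fixed-point loop mirrors steps 9-12: it first derives $Sells(West,M,Nono)$, $Weapon(M)$, and $Hostile(Nono)$, and on the next pass rule 1 fires to produce $Criminal(West)$.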