Saturday, July 24, 2010

AO* Algorithm 2.

The AO* ALGORITHM



The problem reduction algorithm we just described is a simplification of an algorithm described by Martelli and Montanari and by Nilsson. Nilsson calls it the AO* algorithm, and that is the name we adopt.



1. Place the start node s on open.



2. Using the search tree constructed thus far, compute the most promising solution tree T.



3. Select a node n that is both on open and a part of T. Remove n from open and place it on closed.



4. If n is a terminal goal node, label n as solved. If the solution of n results in any of n’s ancestors being solved, label all those ancestors as solved. If the start node s is solved, exit with success, where T is the solution tree. Remove from open all nodes with a solved ancestor.

5. If n is not a solvable node (no operators can be applied to it), label n as unsolvable. If the start node is labeled unsolvable, exit with failure. If any of n’s ancestors become unsolvable because n is, label them unsolvable as well. Remove from open all nodes with unsolvable ancestors.

6. Otherwise, expand node n, generating all of its successors. For each successor that represents more than one subproblem, generate successors corresponding to the individual subproblems. Attach to each newly generated node a back pointer to its predecessor. Compute the cost estimate h* for each newly generated node and place all such nodes that do not yet have descendants on open. Finally, recompute the values of h* at n and at each ancestor of n.



7. Return to step 2.
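To make the bookkeeping above more concrete, here is a minimal Python sketch that evaluates an explicit, acyclic AND-OR graph and returns the cost of the cheapest solution tree. It is not the incremental open/closed procedure of the steps above, but it computes the value that procedure converges to; the graph encoding, node names and unit arc costs are illustrative assumptions, not part of the original algorithm.

# Each node maps to a list of arcs; an arc is a tuple of successor nodes
# (an AND arc when it has more than one successor). Nodes with no arcs are
# terminal (primitive) problems. Node names and unit arc costs are assumptions.
AND_OR_GRAPH = {
    "A": [("B",), ("C", "D")],   # A is solved via B alone, or via C AND D
    "B": [("E", "F")],
    "C": [], "D": [], "E": [], "F": [],
}
ARC_COST = 1                     # cost charged per component of an arc


def cheapest_solution_cost(node, graph=AND_OR_GRAPH):
    """Cost of the cheapest solution tree rooted at `node` (h* of a leaf is 0)."""
    arcs = graph[node]
    if not arcs:                 # terminal node: already solved
        return 0
    # An AND arc costs the sum of its components; an OR choice takes the cheapest arc.
    return min(
        sum(ARC_COST + cheapest_solution_cost(succ, graph) for succ in arc)
        for arc in arcs
    )


print(cheapest_solution_cost("A"))   # 2 -- solve A through the AND arc C-D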



A* Algorithm 2.

A* algorithm: The best-first search algorithm just presented is a simplification of an algorithm called the A* algorithm, which was first presented by Hart, Nilsson, and Raphael.

Algorithm:

Step 1: Put the initial node on a list START.

Step 2: If (START is empty) or (START = GOAL), terminate the search.

Step 3: Remove the first node from START and call this node a.

Step 4: If (a = GOAL), terminate the search with success.

Step 5: Else, if node a has successors, generate all of them. Estimate the fitness number of each successor by adding the evaluation function value (the heuristic estimate) to the cost function value, and sort the resulting list by fitness number.

Step 6: Name the new list START 1.

Step 7: Replace START with START 1.

Step 8: Go to Step 2.
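As a rough illustration of Steps 1–8, here is a small Python sketch that keeps the START list as entries of (fitness number, cost so far, node, path) and re-sorts it after every expansion. The example graph, its edge costs and the heuristic (evaluation function) values are made-up assumptions used only to exercise the loop.

# Hypothetical weighted graph and heuristic estimates (assumptions).
GRAPH = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 5)],
    "B": [("G", 1)],
    "G": [],
}
H = {"S": 4, "A": 3, "B": 1, "G": 0}   # evaluation function values


def simple_a_star(start, goal):
    # Step 1: put the initial node on the list START.
    # Each entry: (fitness, cost so far, node, path taken).
    start_list = [(H[start], 0, start, [start])]
    while start_list:                                # Step 2: stop if START is empty.
        fitness, g, node, path = start_list.pop(0)   # Step 3: take the first node.
        if node == goal:                             # Step 4: success.
            return path, g
        for succ, step_cost in GRAPH[node]:          # Step 5: generate successors,
            new_g = g + step_cost                    # total the cost function value
            new_fitness = new_g + H[succ]            # and the evaluation function value.
            start_list.append((new_fitness, new_g, succ, path + [succ]))
        start_list.sort()                            # Steps 5-7: sort and replace START.
    return None, float("inf")


print(simple_a_star("S", "G"))   # (['S', 'A', 'B', 'G'], 4)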

What do you know about BFS?

Breadth First Search (BFS): This is also a brute-force search procedure, like DFS, but here the search progresses level by level, unlike DFS, which goes deep into the tree. An operator is employed to generate all possible children of a node. Being a brute-force search, BFS generates all the nodes of each level while looking for the goal.

ALGORITHM:

Step 1. Put the initial node on a list “START”

Step 2. If START is empty or START = GOAL, terminate the search.

Step 3. Remove the first node from START and call this node “a”.

Step 4. If a = GOAL, terminate the search with success.

Step 5. Else, if node “a” has successors, generate all of them and add them at the tail of START.

Step 6. Go to step 2.
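A minimal Python sketch of these steps is given below, using a double-ended queue for the START list. The example graph is a made-up assumption; the loop itself mirrors Steps 1–6.

from collections import deque

# Hypothetical unweighted graph used only to demonstrate the steps (assumption).
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": ["G"], "G": [],
}


def bfs(start, goal):
    start_list = deque([[start]])          # Step 1: START holds whole paths.
    visited = {start}
    while start_list:                      # Step 2: stop when START is empty.
        path = start_list.popleft()        # Step 3: remove the first node "a".
        node = path[-1]
        if node == goal:                   # Step 4: success.
            return path
        for succ in GRAPH[node]:           # Step 5: add successors at the tail.
            if succ not in visited:
                visited.add(succ)
                start_list.append(path + [succ])
    return None                            # START became empty: no solution.


print(bfs("A", "G"))   # ['A', 'B', 'E', 'G']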

Advantages:

1. BFS will not get trapped exploring a blind alley.

2. If there is a solution, BFS is guaranteed to find it.

Disadvantages:

1. The amount of time needed to generate all the nodes is considerable because of the time complexity.

2. The memory requirement is also a major problem because of the space complexity.

3. The search process remembers all the unwanted nodes, which are of no practical use for the search.



What do you know about DFS?

Depth First Search (DFS): This is a very simple type of brute-force searching technique. The search begins by expanding the initial node, i.e., by using an operator to generate all successors of the initial node and testing them.

This procedure determines whether the goal can be reached, but it does not specify the path that has to be followed. DFS searches by diving downward into the tree as quickly as possible.

Algorithm:

Step1: Put the initial node on a list START.

Step 2: If START is empty or START = GOAL, terminate the search.

Step 3: Remove the first node from START. Call this node a.

Step 4: If (a = GOAL), terminate the search with success.

Step 5: Else, if node a has successors, generate all of them and add them at the beginning of START.

Step6: Go to Step 2.
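The same steps can be sketched in Python by adding successors at the front of the START list instead of the tail; the cutoff depth mentioned below is included as a parameter. The example graph is again a made-up assumption.

# Hypothetical graph (assumption), used only to show the mechanics of the steps.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": ["G"], "F": ["G"], "G": [],
}


def dfs(start, goal, cutoff_depth=10):
    start_list = [[start]]                     # Step 1: the list START.
    while start_list:                          # Step 2.
        path = start_list.pop(0)               # Step 3: remove the first node a.
        node = path[-1]
        if node == goal:                       # Step 4: success.
            return path
        if len(path) <= cutoff_depth:          # cutoff depth, as discussed below
            successors = [path + [s] for s in GRAPH[node] if s not in path]
            start_list = successors + start_list   # Step 5: add at the *front*.
    return None                                # nothing left: no solution found.


print(dfs("A", "G"))   # ['A', 'B', 'E', 'G']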

The major drawback of DFS is the determination of the depth limit up to which the search has to proceed; this depth is called the cutoff depth.

The value of the cutoff depth is essential, because otherwise the search may go on and on. If the cutoff depth is too small, the solution may not be found; if it is too large, the time complexity will be greater.

Advantages: DFS requires less memory, since only the nodes on the current path are stored.

By chance, DFS may find a solution without examining much of the search space at all.




Friday, July 23, 2010

Branch and Bound Algorithm.

ALGORITHM: Branch - and - Bound search



1. Place the start node of zero path length on the queue.



2. Until the queue is empty or a goal node has been found do the following:



(i) Determine whether the first path in the queue contains a goal node.

(ii) If the first path contains a goal node, exit with success.

(iii) If the first path does not contain a goal node, remove the path from the queue and form new paths by extending the removed path by one step.



(iv) Compute the cost of the new paths and add them to the queue.



(v) Sort the paths on the queue with lowest-cost path in front.



3. Otherwise, exit with failure.



Now let us see how branch-and-bound search could be used to find the shortest solution to the water-jug problem. Unfortunately, although this algorithm is simple and more efficient than blind search, it still requires exponential time in the worst case. The exact amount of time it saves for a particular problem depends on the order in which the paths are explored. In this problem there is also a possibility that the algorithm enters an indefinite loop, for example when a configuration generated while extending a path repeats a configuration generated earlier.
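A hedged Python sketch of the branch-and-bound loop is shown below. A small weighted graph stands in for the water-jug state space (its nodes and costs are assumptions), and repeated states are skipped within a path to avoid the indefinite-loop problem just mentioned.

# A small weighted graph standing in for a problem's state space (assumption);
# the water-jug states and operators from the text would work the same way.
GRAPH = {
    "S": [("A", 3), ("B", 1)],
    "A": [("G", 2)],
    "B": [("A", 1), ("G", 5)],
    "G": [],
}


def branch_and_bound(start, goal):
    queue = [([start], 0)]                    # 1. start path of zero length.
    while queue:                              # 2. until the queue is empty or a goal is found:
        path, cost = queue[0]                 # (i) look at the first path.
        if path[-1] == goal:                  # (ii) it contains a goal node.
            return path, cost
        queue.pop(0)                          # (iii) remove it and extend it by one step,
        for succ, step in GRAPH[path[-1]]:
            if succ not in path:              # skip repeated states to avoid looping
                queue.append((path + [succ], cost + step))   # (iv) add the new paths.
        queue.sort(key=lambda entry: entry[1])               # (v) lowest-cost path first.
    return None, float("inf")                 # 3. otherwise fail.


print(branch_and_bound("S", "G"))   # (['S', 'B', 'A', 'G'], 4)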



Means Ends Analysis.

MEANS - ENDS ANALYSIS:-


Most of the search strategies either reason forward or backward; however, often a mixture of the two directions is appropriate. Such a mixed strategy makes it possible to solve the major parts of a problem first and then go back and solve the smaller problems that arise when combining the parts together. Such a technique is called "Means-Ends Analysis".

The means-ends analysis process centers around finding the difference between the current state and the goal state. The problem space of means-ends analysis has an initial state and one or more goal states, a set of operators with a set of preconditions for their application, and a difference function that computes the difference between two states s(i) and s(j). A problem is solved using means-ends analysis by:


1. Comparing the current state s1 to a goal state s2 and computing their difference D12.

2. Selecting an operator OP that is relevant to reducing the difference D12 and whose preconditions can be satisfied.

3. Applying the operator OP if possible. If not, the current state is saved, a subgoal is created, and means-ends analysis is applied recursively to reduce that subgoal.

4. When the subgoal is solved, the saved state is restored and work is resumed on the original problem.

(The first AI program to use means-ends analysis was GPS, the General Problem Solver.)

Means-ends analysis is useful for many human planning activities. Consider the example of planning for an office worker. Suppose we have a difference table of three rules:

1. If in our current state we are hungry, and in our goal state we are not hungry, then either the "visit hotel" or the "visit canteen" operator is recommended.

2. If in our current state we do not have money, and in our goal state we have money, then either the "visit bank" operator or the "visit secretary" operator is recommended.

3. If in our current state we do not know where something is, and in our goal state we do know, then either the "visit office enquiry", "visit secretary" or "visit co-worker" operator is recommended.
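The difference table above can be turned into a toy means-ends planner. In the sketch below, a state is a set of facts and each operator has a precondition set and an add set; the operator names, preconditions and effects are illustrative assumptions (only one operator is kept per difference), not a faithful encoding of the rules above.

# A toy encoding in the spirit of the office-worker table (assumptions).
OPERATORS = {
    "visit canteen":        {"pre": {"have money"},         "add": {"not hungry"}},
    "visit bank":           {"pre": {"know where bank is"}, "add": {"have money"}},
    "visit office enquiry": {"pre": set(),                  "add": {"know where bank is"}},
}


def means_ends(state, goal, depth=5):
    """Return a list of operators transforming `state` into a superset of `goal`."""
    difference = goal - state                      # 1. compute the difference.
    if not difference or depth == 0:
        return [] if not difference else None
    fact = next(iter(difference))
    for name, op in OPERATORS.items():             # 2. pick a relevant operator.
        if fact in op["add"]:
            # 3. if its preconditions are unmet, create a subgoal and recurse.
            prefix = means_ends(state, op["pre"], depth - 1)
            if prefix is None:
                continue
            new_state = state | op["pre"] | op["add"]
            suffix = means_ends(new_state, goal, depth - 1)   # 4. resume the main goal.
            if suffix is not None:
                return prefix + [name] + suffix
    return None


print(means_ends({"hungry"}, {"not hungry"}))
# ['visit office enquiry', 'visit bank', 'visit canteen']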

Constraint Satisfaction Problem 2.

2. Given Problem :

  CROSS
+ ROADS
-------------
 DANGER
-------------
 
Constraints & Production Rules :-
 
Same rules as used for the previous problem.

Initial State of Problem:

C = ?
R = ?
D = ?
A = ?
O = ?
N = ?
G = ?
S = ?
E = ?
C1 = ?
C2 = ?
C3 = ?
C4 = ?

Goal State :
 
The digits should be assigned to the letters in such a manner that the sum is satisfied.

Solution Process :
 
Again, I am following the depth-first search method to solve the problem.

1. Initial guess: D = 1, because the sum of two single digits can, at most, generate a carry of ‘1’.

2. Now we start guessing the digits and try to solve the problem. The guessing, and its consequent effects, are shown in the form of a tree below.

[Figure: depth-first search tree of digit guesses and their consequences.]
At Step(2) we have assigned a single digit to every letter in accordance with the constraints and production rules.


Now by backtracking, we find the different digits assigned to different letters and hence reach the solution state.

Solution State:-

C = 9
N = 8
G = 7
A = 5
O = 2
E = 4
D = 1
R = 6
S = 3
C1 = 0
C2 = 0
C3 = 0
C4 = 0

        C4(0) C3(0) C2(0) C1(0)
        C(9)  R(6)  O(2)  S(3)  S(3)
 +      R(6)  O(2)  A(5)  D(1)  S(3)
 ------------------------------------
 D(1)   A(5)  N(8)  G(7)  E(4)  R(6)
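A few lines of Python are enough to check that this assignment really satisfies CROSS + ROADS = DANGER:

# Quick check of the assignment above.
assignment = {"C": 9, "R": 6, "O": 2, "S": 3, "A": 5, "D": 1, "N": 8, "G": 7, "E": 4}

def word_value(word, digits):
    return int("".join(str(digits[letter]) for letter in word))

cross, roads, danger = (word_value(w, assignment) for w in ("CROSS", "ROADS", "DANGER"))
print(cross, "+", roads, "=", cross + roads, danger == cross + roads)
# 96233 + 62513 = 158746 True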

Constraint Satisfaction Problem 1.

CONSTRAINT SATISFACTION:-

Many problems in AI can be considered problems of constraint satisfaction, in which the goal state satisfies a given set of constraints. Constraint satisfaction problems can be solved using any of the search strategies. The general form of the constraint satisfaction procedure is as follows:

Until a complete solution is found or all paths have led to dead ends, do:

1. Select an unexpanded node of the search graph.

2. Apply the constraint inference rules to the selected node to generate all possible new constraints.

3. If the set of constraints contains a contradiction, then report that this path is a dead end.

4. If the set of constraints describes a complete solution then report success.

5. If neither a contradiction nor a complete solution has been found, then apply the rules to generate new partial solutions. Insert these partial solutions into the search graph.

Example: Consider the cryptarithmetic problem:

    SEND
+ MORE
----------
MONEY
----------

Assign a decimal digit to each of the letters in such a way that the answer to the problem is correct. If the same letter occurs more than once, it must be assigned the same digit each time. No two different letters may be assigned the same digit.


CONSTRAINTS:-

1. No two letters can be assigned the same digit.

2. Only a single digit can be assigned to a letter.

3. Assumptions can be made at various levels such that they do not contradict each other.

4. The problem can be decomposed into a set of constraints, and a constraint satisfaction approach may be used.

5. Any of the search techniques may be used.

6. Backtracking may be performed as applicable to the applied search technique.

7. The rules of arithmetic must be followed.

Initial state of problem.
D=?
E=?
Y=?
N=?
R=?
O=?
S=?
M=?
C1=?
C2=?
C1, C2, C3 stand for the carry variables, respectively.

Goal State: The digits must be assigned to the letters in such a manner that the sum is satisfied.


Solution Process:

We are following the depth-first method to solve the problem.

1. Initial guess: M = 1, because the sum of two single digits can generate at most a carry of '1'.

2. When M = 1, O must be 0 or 1, because the largest single digit added to M = 1 (plus any carry received from the earlier column) can produce a leading sum digit of either 0 or 1. From this we conclude that O = 0, because M is already 1 and we cannot assign the same digit to another letter (rule 1).

3. We now have M = 1 and O = 0. To get O = 0 we must have S = 8 or 9, again depending on the carry received from the earlier column.

The same process can be repeated further. The problem has to be decomposed into various constraints, and each constraint is satisfied by guessing the possible digits that the letters can take, assuming the earlier guesses have already been made. The rest of the process is shown in the form of a tree, using depth-first search, for clear understanding of the solution process.

[Figure: depth-first search tree of digit guesses; one branch ends with D > 6, a contradiction.]
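For comparison with the hand search above, here is a small brute-force Python sketch that tries digit assignments exhaustively until the sum works out. It ignores the carry-by-carry pruning that makes the depth-first tree above so much smaller, so it is only a sanity check, not the method described in the text.

from itertools import permutations

LETTERS = "SENDMORY"

def solve_send_more_money():
    for digits in permutations(range(10), len(LETTERS)):
        a = dict(zip(LETTERS, digits))
        if a["S"] == 0 or a["M"] == 0:          # no leading zeros
            continue
        send  = a["S"] * 1000 + a["E"] * 100 + a["N"] * 10 + a["D"]
        more  = a["M"] * 1000 + a["O"] * 100 + a["R"] * 10 + a["E"]
        money = (a["M"] * 10000 + a["O"] * 1000 + a["N"] * 100
                 + a["E"] * 10 + a["Y"])
        if send + more == money:
            return a
    return None

print(solve_send_more_money())
# {'S': 9, 'E': 5, 'N': 6, 'D': 7, 'M': 1, 'O': 0, 'R': 8, 'Y': 2}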

Hill Climbing Procedure.

HILL CLIMBING PROCEDURE


This is a variety of depth-first (generate-and-test) search. Feedback is used here to decide on the direction of motion in the search space. In depth-first (generate-and-test) search, the test function merely accepts or rejects a solution; in hill climbing, the test function is provided with a heuristic function which gives an estimate of how close a given state is to the goal state. The hill climbing procedure is as follows:

 

1. Generate the first proposed solution, as done in the depth-first procedure. See if it is a solution. If so, quit; else continue.

2. From this solution, generate a new set of solutions using some applicable rules.



3. For each element of this set



(i) Apply the test function. If it is a solution, quit.

(ii) Else, see whether it is closer to the goal state than the best solution generated so far. If yes, remember it; else discard it.



4. Take the best element generated so far and use it as the next proposed solution. This step corresponds to a move through the problem space in the direction of the goal state.



5. Go back to step 2.
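A minimal Python sketch of this loop on a one-dimensional toy landscape is shown below; the score function and the move generator are illustrative assumptions standing in for the heuristic function and the application rules.

# Hill climbing on a toy landscape with a single peak at x = 7 (assumption).
def score(x):
    return -(x - 7) ** 2           # heuristic: higher is closer to the goal

def neighbours(x):
    return [x - 1, x + 1]          # "application rules" generating new solutions

def hill_climb(start):
    current = start                                 # step 1: first proposed solution
    while True:
        best = max(neighbours(current), key=score)  # steps 2-4: keep the best element
        if score(best) <= score(current):           # no neighbour improves things:
            return current                          # a local maximum (or the goal)
        current = best                              # step 5: repeat from the new state

print(hill_climb(0))   # 7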



Sometimes this procedure may lead to a position, which is not a solution, but from which there is no move that improves things. This will happen if we have reached one of the following three states.



(a) A "local maximum " which is a state better than all its neighbors , but is not better than some other states farther away. Local maxim sometimes occur with in sight of a solution. In such cases they are called " Foothills".



(b) A "plateau'' which is a flat area of the search space, in which neighboring states have the same value. On a plateau, it is not possible to determine the best direction in which to move by making local comparisons.



(c) A "ridge" which is an area in the search that is higher than the surrounding areas, but can not be searched in a simple move.



To overcome these problems we can:

(a) Backtrack to some earlier node and try a different direction. This is a good way of dealing with local maxima.

(b) Make a big jump in some direction to a new area of the search space. This can be done by applying two or more rules, or the same rule several times, before testing. This is a good strategy for dealing with plateaus and ridges.

Hill climbing becomes inefficient in large problem spaces and when combinatorial explosion occurs, but it is useful when combined with other methods.

Generate and Test Procedure.

GENERATE AND TEST


This is the simplest search strategy. It consists of the following steps;



1. Generate a possible solution. For some problems this means generating a particular point in the problem space; for others it means generating a path from a start state.

2. Test to see if this is actually a solution by comparing the chosen point, or the end point of the chosen path, to the set of acceptable goal states.

3. If a solution has been found, quit; otherwise return to step 1.




The generate-and-test algorithm is a depth-first search procedure, because complete possible solutions are generated before being tested. If the same states are likely to appear often in the tree, it can be implemented on a search graph rather than a tree.
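A tiny Python sketch of the generate-and-test loop is given below; the candidate generator and the goal test ("find an integer whose square is 1369") are stand-ins, chosen only so the three steps are visible.

from itertools import count

def generate_candidates():
    for n in count(0):               # step 1: propose one possible solution at a time
        yield n

def is_solution(n):
    return n * n == 1369             # step 2: test against the acceptable goal states

def generate_and_test():
    for candidate in generate_candidates():
        if is_solution(candidate):   # step 3: quit on success, otherwise generate again
            return candidate

print(generate_and_test())   # 37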


Thursday, July 22, 2010

Problem Reduction with AO* Algorithm.

PROBLEM REDUCTION ( AND - OR graphs - AO * Algorithm)

When a problem can be divided into a set of subproblems, where each subproblem can be solved separately and a combination of their solutions is a solution to the whole problem, AND-OR graphs or AND-OR trees are used for representing the solution. The decomposition of the problem, or problem reduction, generates AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution. Several arcs may also emerge from a single node, indicating several possible ways of solving it; hence the graph is known as an AND-OR graph rather than simply an AND graph. The figure shows an AND-OR graph.

An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately. The A* algorithm cannot search AND-OR graphs efficiently. This can be understood from the given figure.
FIGURE: AND-OR graph

In figure (a) the top node A has been expanded, producing two arcs, one leading to B and one leading to C-D. The numbers at each node represent the value of f' at that node (the cost of getting to the goal state from the current state). For simplicity, it is assumed that every operation (i.e., applying a rule) has unit cost, i.e., each arc with a single successor has a cost of 1, and each AND arc has a cost of 1 for each of its components. With the information available so far, it appears that C is the most promising node to expand, since its f' = 3 is the lowest; but going through B would be better, since to use C we must also use D, and the cost would be 9 (3 + 4 + 1 + 1). Through B it would only be 6 (5 + 1).

Thus the choice of the next node to expand depends not only on the f' value but also on whether that node is part of the current best path from the initial node. Figure (b) makes this clearer. In the figure, the node G appears to be the most promising node, with the least f' value. But G is not on the current best path, since to use G we must also use the arc G-H, with a cost of 9, and this in turn demands that other arcs be used (with a cost of 27). The path from A through B and the arc E-F is better, with a total cost of 18 (17 + 1). Thus we can see that to search an AND-OR graph, the following three things must be done.
1. Traverse the graph, starting at the initial node and following the current best path, and accumulate the set of nodes that are on that path and have not yet been expanded.

2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.

3. Change the f' estimate of the newly expanded node to reflect the new information provided by its successors. Propagate this change backward through the graph, so that the decision about the current best path can be revised.

The backward propagation of revised cost estimates through the tree is not necessary in the A* algorithm. It is needed in the AO* algorithm because expanded nodes are re-examined so that the current best path can be selected. The working of the AO* algorithm is illustrated in the figure as follows:

Referring to the figure: the initial node is expanded and D is marked initially as the most promising node. D is expanded, producing an AND arc E-F. The f' value of D is updated to 10. Going backwards, we can see that the AND arc B-C is now better; it is marked as the current best path, and B and C have to be expanded next. This process continues until a solution is found or all paths have led to dead ends, indicating that there is no solution. In the A* algorithm, the path from one node to another is always the one of lowest cost, and it is independent of the paths through other nodes.
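The backward cost revision described here can be written as a small helper: given a node's outgoing arcs and the current h' estimates of its successors, recompute the node's h' and mark its best arc. The node names and h' values in the example call are assumptions in the spirit of the figure, not its exact numbers.

def revise_cost(arcs, h, arc_cost=1):
    """arcs: list of tuples of successor names; h: current h' estimates."""
    best_arc, best_cost = None, float("inf")
    for arc in arcs:
        cost = sum(arc_cost + h[succ] for succ in arc)   # an AND arc sums its components
        if cost < best_cost:
            best_arc, best_cost = arc, cost
    return best_arc, best_cost


# Node A with a single-successor arc to D and an AND arc B-C; the h' values
# (D revised to 10, B = 3, C = 4) are assumed for illustration.
print(revise_cost([("D",), ("B", "C")], {"D": 10, "B": 3, "C": 4}))
# (('B', 'C'), 9)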

The algorithm for performing a heuristic search of an AND-OR graph is given below. Unlike the A* algorithm, which uses two lists, OPEN and CLOSED, the AO* algorithm uses a single structure G. G represents the part of the search graph generated so far. Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries with it the value of h', the cost of a path from itself to a set of solution nodes. The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm, because it is not possible to compute a single such value: there may be many paths to the same state. In AO*, h' serves as the estimate of the goodness of a node. Also, a threshold value called FUTILITY is used: if the estimated cost of a solution becomes greater than FUTILITY, the search is abandoned as too expensive to be practical.
The AO* algorithm for searching such graphs is as follows.

AO* ALGORITHM:
1. Let G consist only of the node representing the initial state. Call this node INIT. Compute h'(INIT).

2. Until INIT is labeled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure.

(I)   Trace the marked arcs from INIT and select an unexpanded node NODE.

(II)  Generate the successors of NODE. If there are no successors, then assign FUTILITY as h'(NODE); this means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, do the following:

            (a) Add SUCCESSOR to graph G.

            (b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.

            (c) If SUCCESSOR is not a terminal node, compute its h' value.

(III) Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been marked SOLVED or whose h' values have changed. Initialize S to NODE. Until S is empty, repeat the following procedure:

           (a) Select a node from S, call it CURRENT, and remove it from S.

           (b) Compute the cost of each of the arcs emerging from CURRENT. Assign the minimum of these costs as the new h' value of CURRENT.

           (c) Mark the minimum-cost arc as the best path out of CURRENT.

           (d) Mark CURRENT as SOLVED if all of the nodes connected to it through the newly marked arc have been labeled SOLVED.

           (e) If CURRENT has been marked SOLVED or its h' value has just changed, its new status must be propagated backwards up the graph; hence all the ancestors of CURRENT are added to S.
(Referred from Artificial Intelligence, TMH)

AO*  Search Procedure.

1. Place the start node on open.

2. Using the search tree, compute the most promising solution tree TP.

3. Select a node n that is both on open and a part of TP. Remove n from open and place it on closed.

4. If n is a goal node, label n as solved. If the start node is solved, exit with success, where TP is the solution tree. Remove from open all nodes with a solved ancestor.

5. If n is not a solvable node, label n as unsolvable. If the start node is labeled unsolvable, exit with failure. Remove from open all nodes with unsolvable ancestors.

6. Otherwise, expand node n, generating all of its successors. Compute the cost estimate for each newly generated node and place all such nodes on open.

7. Go back to step (2).

Note: AO* will always find a minimum-cost solution.






Best First Search Procedure with A* Algorithm.


A is the initial node, which is expanded to B, C and D. A heuristic function, say the estimated cost of reaching the goal, is applied to each of these nodes. Since D is the most promising, it is expanded next, producing two successor nodes E and F, and the heuristic function is applied to them. Now, of the remaining nodes (B, C, E and F), B looks more promising and hence is expanded, generating nodes G and H. When these are evaluated, E appears to be the best next step, so it is expanded, giving rise to nodes I and J. In the next step J is expanded, since it is the most promising. This process continues until a solution is found.

The above figure shows the best-first search tree. Since a search tree may generate duplicate nodes, usually a search graph is preferred.

Best-first search is implemented by an algorithm known as the A* algorithm. The algorithm searches a directed graph in which each node represents a point in the problem space. Each node contains a description of the problem state it represents, links to its parent and successor nodes, and an indication of how promising it is for the search process. The A* algorithm uses two lists. The list OPEN contains nodes that have been generated and had the heuristic function applied to them, but whose successors have not yet been generated. The list CLOSED contains nodes that have been examined, i.e., whose successors have been generated.

A heuristic function f estimates the merit of each generated node. This function f has two components, g and h. The function g gives the cost of getting from the initial state to the current node. The function h is an estimate of the additional cost of getting from the current node to a goal state. The function f (= g + h) gives the estimated cost of getting from the initial state to a goal state via the current node.

THE A* ALGORITHM:-
 
1. Start with OPEN containing only the initial node. Its g = 0 and f' = h'.

Set CLOSED to the empty list.

2. Repeat

If OPEN is empty, stop and return failure.

Else, pick the BESTNODE on OPEN with the lowest f' value and place it on CLOSED.

If BESTNODE is a goal state, return success and stop.

Else

Generate the successors of BESTNODE.
 
For each SUCCESSOR do the following:

1. Set SUCCESSOR to point back to BESTNODE. (Back links will help to recover the path.)

2. Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.

3. If SUCCESSOR is the same as any node on OPEN, call that node OLD and add OLD to BESTNODE's successors. Compare g(OLD) and g(SUCCESSOR). If g(SUCCESSOR) is cheaper, then reset OLD's parent link to point to BESTNODE and update g(OLD) and f'(OLD).

4. If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and, as before, if the new path is better, set the parent link and the g and f' values appropriately.

5. If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors.

Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR).
 
Best-first search will usually find good paths to a goal without having to explore the entire state space. All that is required is that a good measure of goal distance be used.
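A compact Python sketch in the spirit of the OPEN/CLOSED listing above is shown below. Instead of re-linking OLD's parent pointers, it simply re-pushes a node onto OPEN whenever a cheaper g is found (the usual lazy-deletion trick with a priority queue); the graph and heuristic values are assumptions.

import heapq

# A toy weighted graph and heuristic estimates (assumptions).
GRAPH = {
    "A": [("B", 2), ("C", 1)],
    "B": [("D", 3)],
    "C": [("D", 5)],
    "D": [],
}
H = {"A": 4, "B": 3, "C": 5, "D": 0}


def a_star(start, goal):
    open_list = [(H[start], 0, start, [start])]   # (f', g, node, path); g = 0, f' = h'
    best_g = {start: 0}
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # BESTNODE: lowest f' on OPEN
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)                              # place BESTNODE on CLOSED
        for succ, step in GRAPH[node]:                # generate BESTNODE's successors
            new_g = g + step                          # g(SUCCESSOR) = g(BESTNODE) + arc cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g                  # a cheaper path: update g and f'
                heapq.heappush(open_list,
                               (new_g + H[succ], new_g, succ, path + [succ]))
    return None, float("inf")


print(a_star("A", "D"))   # (['A', 'B', 'D'], 5)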








Breadth First Search Procedure.

Depth First Search Procedure.

ALGORITHM : DEPTH - FIRST SEARCH

1. Place the starting node on the queue.

2. If the queue is empty, return failure and stop.

3. If the first element of the queue is a goal node g, return success and stop; otherwise:

4. Remove and expand the first element and place the children at the front of the queue.

5. Go back to step 2.



Travelling Salesman Problem.

HEURISTIC FUNCTIONS

A heuristic technique helps in solving problems even though there is no guarantee that it will never lead in the wrong direction. There are heuristics of very general applicability as well as domain-specific ones. The strategies above are general-purpose heuristics; in order to use them in a specific domain, they are coupled with domain-specific heuristics. There are two major ways in which domain-specific heuristic information can be incorporated into a rule-based search procedure:

- In the rules themselves

- As a heuristic function that evaluates individual problem states and determines how desirable they are.

A heuristic function is a function that maps problem state descriptions to measures of desirability, usually represented as numbers (weights). The value of a heuristic function at a given node in the search process gives a good estimate of whether that node is on the desired path to a solution. Well-designed heuristic functions can provide a fairly good estimate of whether a path is good or not. ("The sum of the distances traveled so far" is a simple heuristic function in the traveling salesman problem.) The purpose of a heuristic function is to guide the search process in the most profitable direction by suggesting which path to follow first when more than one is available. However, in many problems the cost of computing the value of a heuristic function would be more than the effort saved in the search process. Hence there is generally a trade-off between the cost of evaluating a heuristic function and the savings in search that the function provides.



Problem Trees Vs Graphs.

Forward Versus Backward Reasoning.

FORWARD VERSUS BACKWARD REASONING

(Search Direction)

A search procedure must find a path between initial and goal states. There are two directions in which a search process could proceed.

(1) Reason forward from the initial states: Begin from the root of the search tree. Generate the next level of the tree by finding all the rules whose left sides match the root node, and use their right sides to generate the new nodes. Repeat the process until a configuration that matches the goal state is generated.

(2) Reason backward from the goal state(s): Begin building a search tree starting with the goal configuration(s) at the root. Generate the next level of the tree by finding all the rules whose right sides match the root node, and use the left sides of the rules to generate the new nodes. Continue until a node that matches the start state is generated. This method of chaining backward from the desired final state is called goal-directed reasoning, or back chaining.



The selection of forward reasoning or backward reasoning depends on which direction offers the smaller branching factor and which direction better justifies the reasoning process to the user. Most of the search techniques can be used to search either forward or backward. One exception is the means-ends analysis technique, which proceeds by reducing differences between current and goal states, sometimes reasoning forward and sometimes backward.



The following are the factors which determine the choice of direction for a particular problem.

1. Are there more possible start states or goal states? We would like to move from the smaller set of states to the larger set of states.

2. In which direction is the branching factor (that is, the average number of nodes that can be reached directly from a single node) greater? We would like to proceed in the direction with the lower branching factor.



3. Will the program be asked to justify its reasoning process to a user ? If so, it is important to proceed in the direction that corresponds more closely with the way the user will think.



4. What kind of event is going to trigger a problem-solving episode? If it is the arrival of a new fact, forward reasoning makes sense. If it is a query to which a response is desired, backward reasoning is more natural.


Tuesday, July 20, 2010

Heuristic Search Techniques.

Introduction: Many problems are too complex to be solvable by direct techniques; they have to be solved only by suitable heuristic search techniques. Though the heuristic techniques can be described independently of any particular domain, to be effective they must be combined with domain-specific knowledge. They are called "weak methods", since they are vulnerable to combinatorial explosion. Even so, they provide the framework into which domain-specific knowledge can be placed.

Every search process can be viewed as a traversal of a directed graph, in which the nodes represent problem states and the arcs represent relationships between states. The search process must find a path through this graph, starting at an initial state and ending in one or more final states. The following issues have to be considered before going for a search.

Heuristic Search Techniques.


Heuristic techniques are called weak methods, since they are vulnerable to combinatorial explosion. Even then, these techniques continue to provide the framework into which domain-specific knowledge can be placed, either by hand or as a result of learning. The following are some general-purpose control strategies (often called weak methods):

- Generate - and - test


- Hill climbing


- Breadth - First search


- Depth - First search


- Best First Search (A* search)


- Problem reduction(AO* search)


- Constraint satisfaction


- Means - ends analysis


A heuristic procedure, or heuristic, is defined as having the following properties.

1. It will usually find good, although not necessarily optimal, solutions.

2. It is faster and easier to implement than any known exact algorithm (one which guarantees an optimal solution).

In general, heuristic search improves the quality of the paths that are explored. Using good heuristics we can hope to get good solutions to hard problems, such as the traveling salesman problem, in less than exponential time. There are some good general-purpose heuristics that are useful in a wide variety of problems. It is also possible to construct special-purpose heuristics to solve particular problems. For example, consider the traveling salesman problem.

Heuristic Search.

HEURISTIC SEARCH

To solve complex problems efficiently, it is necessary to compromise on the requirements of mobility and systematicity. A control structure has to be constructed that no longer guarantees the best solution, but that will almost always find a very good answer. Such a technique is said to be heuristic (a rule of thumb). A heuristic search improves the efficiency of the search process, but may sacrifice claims of completeness. Heuristics do, however, improve the quality of the paths that are explored. Using good heuristics we can get good solutions to hard problems, such as the traveling salesman problem.

Applying a simple heuristic (always go to the nearest unvisited city) to the traveling salesman problem produces the following procedure.

1. Arbitrarily select a starting city.

2. To select the next city, look at all cities not yet visited. Select the one closest to the current city and go to it next.

3. Repeat step 2 until all the cities have been visited.
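The procedure above is the nearest-neighbour construction; a short Python sketch over a handful of made-up city coordinates (assumptions) looks like this:

import math

CITIES = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4), "E": (2, 2)}

def distance(p, q):
    return math.dist(CITIES[p], CITIES[q])

def nearest_neighbour_tour(start="A"):
    tour, unvisited = [start], set(CITIES) - {start}   # 1. arbitrary starting city
    while unvisited:                                   # 3. repeat until all are visited
        nxt = min(unvisited, key=lambda c: distance(tour[-1], c))  # 2. closest city
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour())   # ['A', 'E', 'C', 'D', 'B']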

This procedure executes in time proportional to N * N instead of N!, and it is possible to prove an upper bound on the error it incurs. In many AI problems, however, it is not possible to produce such bounds. This is true for two reasons:

i) For real-world problems, it is often hard to measure precisely the goodness of a particular solution. For instance, the answer to a question like “Why has inflation increased?” cannot be evaluated precisely.

ii) For real-world problems, it is often useful to introduce heuristics based on relatively unstructured knowledge, because a mathematical analysis is often not possible. Without heuristics, it is not possible to tackle combinatorial explosion. Moreover, we often do not insist on an optimum solution, only on one that satisfies some set of requirements; we stop with a satisfactory solution even though there might be better ones.

Ex: A good example of this is the search for a parking space. Most people will stop as soon as they find a fairly good space, even if there might be a slightly better space up ahead.

Breadth First Search & Depth First Search Trees for Water Jug Problem.

PROBLEM CHARACTERISTICS

Heuristic search is a very general method applicable to a large class of problems. It includes a variety of techniques. In order to choose an appropriate method, it is necessary to analyze the problem with respect to the following considerations.

1. Is the problem decomposable?
A very large and composite problem can be easily solved if it can be broken into smaller problems and recursion can be used. Suppose we want to solve:

Ex: ∫ (x² + 3x + sin²x cos²x) dx

This can be done by breaking it into three smaller problems (∫x² dx, ∫3x dx and ∫sin²x cos²x dx) and solving each by applying specific rules. Adding the results, the complete solution is obtained.

2. Can solution steps be ignored or undone?

Problems fall under three classes: ignorable, recoverable, and irrecoverable. This classification is with reference to the steps of the solution to a problem. Consider theorem proving: we may prove a lemma and later find that it is of no help. We can still proceed further, since nothing is lost by this redundant step. This is an example of ignorable solution steps.

Now consider the 8-puzzle problem, in which tiles in a tray must be arranged in a specified order. While moving from the start state towards the goal state we may make a stupid move, but we can backtrack and undo the unwanted move. This only involves additional steps, and the solution steps are recoverable.

Lastly, consider the game of chess. If a wrong move is made, it can neither be ignored nor recovered. The thing to do is to make the best of the current situation and proceed. This is an example of irrecoverable solution steps.

1. Ignorable problems Ex:- theorem proving

                                         · In which solution steps can be ignored.

2. Recoverable problems Ex:- 8 puzzle

                                         · In which solution steps can be undone

3. Irrecoverable problems Ex:- Chess

                                         · In which solution steps can’t be undone

A knowledge of these will help in determining the control structure.

3. Is the universe predictable?

Problems can be classified into those with a certain outcome (the eight puzzle and water jug problems) and those with an uncertain outcome (playing cards). In certain-outcome problems, planning can be done to generate a sequence of operators that is guaranteed to lead to a solution. Planning helps to avoid unwanted solution steps. For uncertain-outcome problems, planning can at best generate a sequence of operators that has a good probability of leading to a solution. Uncertain-outcome problems do not guarantee a solution, and solving them is often very expensive, since the number of solution paths to be explored increases exponentially with the number of points at which the outcome cannot be predicted. Thus one of the hardest types of problems to solve is the irrecoverable, uncertain-outcome problem (Ex: playing cards).

4. Is a good solution absolute or relative?
                                            (Is the solution a state or a path?)

There are two categories of problems. In one, like the water jug and 8-puzzle problems, we are satisfied with any solution, unmindful of the solution path taken; in the other category, not just any solution is acceptable: we want the best, as in the traveling salesman problem, where we want the shortest path. In any-path problems we obtain a solution by heuristic methods and do not explore alternatives. For best-path problems, all possible paths are explored using an exhaustive search until the best path is obtained.

5. Is the knowledge base consistent?

In some problems the knowledge base is consistent and in some it is not. For example consider the case when a Boolean expression is evaluated. The knowledge base now contains theorems and laws of Boolean Algebra which are always true. On the contrary consider a knowledge base that contains facts about production and cost. These keep varying with time. Hence many reasoning schemes that work well in consistent domains are not appropriate in inconsistent domains.

Ex.Boolean expression evaluation.

6. What is the role of Knowledge?

Though one could have unlimited computing power, the size of the knowledge base available for solving the problem does matter in arriving at a good solution. Take, for example, the game of chess: just the rules for determining legal moves and some simple control mechanism are sufficient to arrive at a solution. But additional knowledge about good strategy and tactics helps to constrain the search and speed up the execution of the program. The solution would then be more realistic.

Consider the case of predicting political trends. This would require an enormous amount of knowledge even to be able to recognize a solution, let alone find the best one.

Ex:- 1. Playing chess 2. News paper understanding

7. Does the task require interaction with a person?

The problems can again be categorized under two heads.

1. Solitary, in which the computer is given a problem description and produces an answer, with no intermediate communication and with no demand for an explanation of the reasoning process. Simple theorem proving falls under this category: given the basic rules and laws, the theorem can be proved, if a proof exists.

Ex: theorem proving (give the basic rules and laws to the computer)

2. Conversational, in which there is intermediate communication between a person and the computer, either to provide additional assistance to the computer or to provide additional information to the user, or both. Problems such as medical diagnosis fall under this category, where people will be unwilling to accept the verdict of the program if they cannot follow its reasoning.

Ex:- Problems such as medical diagnosis.

8. Problem Classification

Actual problems are examined from this point of view: the task is to examine an input and decide to which of a set of known classes it belongs.

Ex:- Problems such as medical diagnosis , engineering design.

Production System. Types of Production Systems.

A knowledge representation formalism consisting of a collection of condition-action rules (production rules or operators), a database which is modified in accordance with the rules, and a production system interpreter which controls the operation of the rules, i.e. the 'control mechanism' of a production system, determining the order in which production rules are fired.

A system that uses this form of knowledge representation is called a production system.

A production system consists of rules and facts. Knowledge is encoded in a declarative form which comprises a set of rules of the form

Situation ------------> Action

i.e., SITUATION implies ACTION.

Example:-

IF the initial state is a goal state THEN quit.


The major components of an AI production system are

i. A global database

ii. A set of production rules and

iii. A control system


The global database is the central data structure used by an AI production system. The production rules operate on the global database. Each rule has a precondition that is either satisfied or not by the database. If the precondition is satisfied, the rule can be applied. Application of the rule changes the database. The control system chooses which applicable rule should be applied and ceases computation when a termination condition on the database is satisfied. If several rules could fire at the same time, the control system resolves the conflict.
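A minimal production-system interpreter can be sketched in a few lines of Python: a global database of facts, rules made of a precondition and an action, and a control loop that fires the first applicable rule until none applies. The facts and rules used here are illustrative assumptions.

# Global database of facts (assumption).
database = {"kettle filled"}

RULES = [
    # (name, precondition(db) -> bool, action(db) -> new db)
    ("boil water", lambda db: "kettle filled" in db and "water boiled" not in db,
                   lambda db: db | {"water boiled"}),
    ("make tea",   lambda db: "water boiled" in db and "tea made" not in db,
                   lambda db: db | {"tea made"}),
]

def run(db, rules, max_cycles=10):
    for _ in range(max_cycles):                        # control system
        applicable = [r for r in rules if r[1](db)]    # match preconditions
        if not applicable:                             # termination condition
            break
        name, _, action = applicable[0]                # trivial conflict resolution
        print("firing:", name)
        db = action(db)                                # rule application changes the db
    return db

print(run(database, RULES))   # {'kettle filled', 'water boiled', 'tea made'} (set order may vary)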


Four classes of production systems:-

1. A monotonic production system

2. A non monotonic production system

3. A partially commutative production system


4. A commutative production system.

Advantages of production systems:-

1. Production systems provide an excellent tool for structuring AI programs.


2. Production Systems are highly modular because the individual rules can be added, removed or modified independently.


3. The production rules are expressed in a natural form, so the statements contained in the knowledge base read much like a recording of an expert thinking out loud.

Disadvantages of Production Systems:-

One important disadvantage is the fact that it may be very difficult to analyse the flow of control within a production system, because the individual rules do not call each other.

Production systems describe the operations that can be performed in a search for a solution to the problem. They can be classified as follows.


Monotonic production system :- A system in which the application of a rule never prevents the later application of another rule, that could have also been applied at the time the first rule was selected.

Partially commutative production system:-

A production system with the property that if the application of a particular sequence of rules transforms state X into state Y, then any allowable permutation of those rules also transforms state X into state Y.

Theorem proving falls under monotonic, partially commutative systems. Problems such as the blocks world and the 8-puzzle are non-monotonic, partially commutative. Problems like chemical analysis and synthesis come under monotonic, not partially commutative systems. Playing the game of bridge comes under non-monotonic, not partially commutative systems.

For any problem, several production systems exist. Some will be more efficient than others. Though it may seem that there is no relationship between kinds of problems and kinds of production systems, in practice there is a definite relationship.

Partially commutative, monotonic production systems are useful for solving ignorable problems. These systems are important from an implementation standpoint, because they can be implemented without the ability to backtrack to previous states when it is discovered that an incorrect path has been followed. Such systems increase efficiency, since it is not necessary to keep track of the changes made in the search process.

Non-monotonic, partially commutative systems are useful for problems in which changes occur but can be reversed, and in which the order of operations is not critical (ex: the 8-puzzle problem).

Production systems that are not partially commutative are useful for many problems in which irreversible changes occur, such as chemical analysis. When dealing with such systems, the order in which operations are performed is very important, and hence correct decisions have to be made the first time.