Saturday, September 15, 2012

What is a Boltzmann machine?


A Boltzmann machine is the name given to a type of stochastic recurrent neural network by Geoffrey Hinton and Terry Sejnowski. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They were one of the first examples of a neural network capable of learning internal representations, and are able to represent and (given sufficient time) solve difficult combinatorial problems. However, due to a number of practical issues, Boltzmann machines with unconstrained connectivity have not proven useful for practical problems in machine learning or inference. They remain theoretically intriguing, due to the locality and Hebbian nature of their training algorithm, as well as their parallelism and the resemblance of their dynamics to simple physical processes. If the connectivity is constrained, the learning can be made efficient enough to be useful for practical problems.
They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function.
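As a rough sketch of the stochastic dynamics (not the learning algorithm), the following Python fragment updates one binary unit at a time: the unit turns on with a probability given by the logistic function of its energy gap divided by a temperature T, which is exactly where the Boltzmann distribution enters. The weights, biases and states below are made-up illustrative values.

import numpy as np

def update_unit(i, s, W, b, T=1.0):
    # Energy gap for unit i: how much the network energy changes if unit i turns on.
    delta_E = W[i] @ s + b[i]
    # The Boltzmann distribution gives the probability of the unit being on at temperature T.
    p_on = 1.0 / (1.0 + np.exp(-delta_E / T))
    s[i] = 1 if np.random.rand() < p_on else 0
    return s

# Illustrative symmetric weights (zero diagonal, i.e. no self-connections) and a start state.
W = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0, 0.8],
              [-0.3, 0.8, 0.0]])
b = np.zeros(3)
s = np.array([1, 0, 1])
for _ in range(100):   # repeatedly updating randomly chosen units samples the network's states
    s = update_unit(np.random.randint(3), s, W, b, T=1.0)
print(s)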


Define simulated annealing?


A technique to find a good solution to an optimization problem by trying random variations of the current solution. A worse variation is accepted as the new solution with a probability that decreases as the computation proceeds. The slower the cooling schedule, or rate of decrease, the more likely the algorithm is to find an optimal or near-optimal solution.

Simulated annealing is a generalization of a Monte Carlo method for examining the equations of state and frozen states of n-body systems [Metropolis et al. 1953]. The concept is based on the manner in which liquids freeze or metals recrystallize in the process of annealing. In an annealing process a melt, initially at high temperature and disordered, is slowly cooled so that the system at any time is approximately in thermodynamic equilibrium. As cooling proceeds, the system becomes more ordered and approaches a "frozen" ground state at T=0. Hence the process can be thought of as an adiabatic approach to the lowest energy state. If the initial temperature of the system is too low, or cooling is not done slowly enough, the system may become quenched, forming defects or freezing out in metastable states (i.e. trapped in a local minimum energy state).
In the original Metropolis scheme, an initial state of a thermodynamic system is chosen at energy E and temperature T; holding T constant, the initial configuration is perturbed and the change in energy dE is computed. If the change in energy is negative, the new configuration is accepted. If the change in energy is positive, it is accepted with a probability given by the Boltzmann factor exp(-dE/T). This process is then repeated a sufficient number of times to give good sampling statistics for the current temperature; then the temperature is decremented and the entire process is repeated until a frozen state is achieved at T=0.
By analogy, the generalization of this Monte Carlo approach to combinatorial problems is straightforward [Kirkpatrick et al. 1983, Cerny 1985]. The current state of the thermodynamic system is analogous to the current solution to the combinatorial problem, the energy equation for the thermodynamic system is analogous to the objective function, and the ground state is analogous to the global minimum. The major difficulty (art) in implementation of the algorithm is that there is no obvious analogue of the temperature T among the free parameters of the combinatorial problem. Furthermore, avoidance of entrapment in local minima (quenching) depends on the "annealing schedule": the choice of initial temperature, how many iterations are performed at each temperature, and how much the temperature is decremented at each step as cooling proceeds.
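A minimal Python sketch of this scheme, assuming the problem supplies an initial solution, a cost() function playing the role of the energy, and a neighbour() function that produces a random variation of the current solution (all three are placeholders here):

import math, random

def simulated_annealing(initial, cost, neighbour,
                        T0=10.0, T_min=1e-3, alpha=0.95, iters_per_T=100):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    T = T0
    while T > T_min:                          # cooling schedule: T decreases geometrically
        for _ in range(iters_per_T):          # sample at the current temperature
            candidate = neighbour(current)
            dE = cost(candidate) - current_cost
            # Metropolis rule: always accept improvements;
            # accept worse moves with probability exp(-dE/T).
            if dE < 0 or random.random() < math.exp(-dE / T):
                current, current_cost = candidate, current_cost + dE
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        T *= alpha                            # decrement the temperature
    return best, best_cost

The choice of T0, alpha and iters_per_T is exactly the annealing schedule discussed above.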
Simulated annealing has been used in various combinatorial optimization problems and has been particularly successful in circuit design problems.

Describe the pattern recognition problem?


Methods for solving pattern recognition tasks generally assume a sequential model for the pattern recognition process, consisting of the pattern environment, sensors to collect data from the environment, feature extraction from the data, and association/storage/classification/clustering using the features. The simplest solution to a pattern recognition problem is to use template matching, where the data of the test pattern are matched point by point with the corresponding data in the reference pattern. Obviously, this can work only for very simple and highly restricted pattern recognition tasks. At the next level of complexity, one can assume a deterministic model for the pattern generation process, and derive the parameters of the model from given data in order to represent the pattern information in the data. Matching of test and reference patterns is done at the parametric level. This works well when the model of the generation process is known with reasonable accuracy. One could also assume a stochastic model for the pattern generation process, and derive the parameters of the model from a large set of training patterns. Matching between test and reference patterns can be performed by several statistical methods like the likelihood ratio, variance-weighted distance, Bayesian classification, etc. Other approaches to pattern recognition tasks depend on extracting features from parameters or data. These features may be specific to the task. A pattern is described in terms of features, and pattern matching is done using descriptions of the features.
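For instance, the point-by-point template matching mentioned above can be sketched as follows in Python; the tiny reference patterns and the test pattern are purely illustrative:

import numpy as np

def template_match(test, references):
    # Point-by-point (Euclidean) comparison of the test pattern with each reference pattern.
    best_label, best_dist = None, float("inf")
    for label, ref in references.items():
        dist = np.linalg.norm(test - ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label, best_dist

# Illustrative 3x3 binary "patterns"
references = {
    "cross":  np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], float),
    "square": np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], float),
}
test = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 1]], float)   # a noisy cross
print(template_match(test, references))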

Another method based on descriptions is called syntactic or structural pattern recognition, in which a pattern is expressed in terms of primitives suitable for the classes of patterns under study (Schalkoff 1992). Pattern matching is performed by matching the descriptions of the patterns in terms of the primitives. More recently, methods based on knowledge of the sources generating the patterns are being explored for pattern recognition tasks. These knowledge-based systems express knowledge in the form of rules for generating and perceiving patterns.

The main difficulty in each of the pattern recognition techniques alluded to above is that of choosing an appropriate model for the pattern generating process and estimating the parameters of the model in the case of a model-based approach, or extraction of features from data or parameters in the case of feature-based methods, or selecting appropriate primitives in the case of syntactic pattern recognition, or deriving rules in the case of a knowledge-based approach. It is all the more difficult when the test patterns are noisy and distorted versions of the patterns used in the training process. The ultimate goal is to impart to a machine pattern recognition capabilities comparable to those of human beings. This goal is difficult to achieve using most of the conventional methods because, as mentioned earlier, these methods assume a sequential model for the pattern recognition process. On the other hand, the human pattern recognition process is an integrated process involving the use of biological neural processing even from the stage of sensing the environment. Thus the neural processing takes place directly on the data for feature extraction and pattern matching. Moreover, the large size (in terms of number of neurons and interconnections) of the biological neural network and its inherently different mechanism of processing are credited with our abilities of pattern recognition in spite of variability and noise in the data. We are also able to deal effortlessly with temporal patterns and with the so-called stability-plasticity dilemma.

It is for these reasons that attempts are being made to explore new models of computing, inspired by the structure and function of the biological neural network. Such models of computing are based on artificial neural networks.

A Simple Artificial Neuron


Our basic computational element (model neuron) is often called a node or unit. It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning. The unit computes some function f of the weighted sum of its inputs:
  yi = f(neti) = f(Σj wij xj)
Its output, in turn, can serve as input to other units

 The weighted sum is called the net input to unit i, often written neti.
  • Note that wij refers to the weight from unit j to unit i (not the other way around).
  • The function f is the unit's activation function. In the simplest case, f is the identity function, and the unit's output is just its net input. This is called a linear unit. 
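A minimal sketch of such a unit in Python; the inputs, weights and the sigmoid alternative below are illustrative only:

import math

def unit_output(inputs, weights, f=lambda net: net):
    # neti: the weighted sum of the unit's inputs, i.e. the sum over j of wij * xj
    net = sum(w * x for w, x in zip(weights, inputs))
    return f(net)

x = [0.5, -1.0, 2.0]   # inputs from other units (or an external source)
w = [0.8, 0.2, -0.5]   # weights wij from unit j to unit i

print(unit_output(x, w))                                          # linear unit: output equals net input
print(unit_output(x, w, f=lambda net: 1 / (1 + math.exp(-net))))  # the same unit with a sigmoid activation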

Describe the neural model?


Computational neurobiologists have constructed very elaborate computer models of neurons in order to run detailed simulations of particular circuits in the brain. As computer scientists, we are more interested in the general properties of neural networks, independent of how they are actually "implemented" in the brain. This means that we can use much simpler, abstract "neurons", which (hopefully) capture the essence of neural computation even if they leave out many of the details of how biological neurons work.
People have implemented model neurons in hardware as electronic circuits, often integrated on VLSI chips. Remember though that computers run much faster than brains - we can therefore run fairly large networks of simple model neurons as software simulations in reasonable time. This has obvious advantages over having to use special "neural" computer hardware. 

Explain with an example the traveling salesman problem?


The "Traveling Salesman Problem" (TSP) is a common problem applied to artificial intelligence. The TSP presents the computer with a number of cities, and the computer must compute the optimal path between the cities. This applet uses a genetic algorithm to produce a solution to the "Traveling Salesman Problem".

The traveling salesman problem (TSP) is a problem in discrete or combinatorial optimization. It is a prominent illustration of a class of problems in computational complexity theory which are classified as NP-hard. In the traveling salesman problem, which is closely related to the Hamiltonian cycle problem, a salesman must visit n cities. Modeling the problem as a complete graph with n vertices, we can say that the salesman wishes to make a tour, or Hamiltonian cycle, visiting each city exactly once and finishing at the city he starts from. There is an integer cost c(i, j) to travel from city i to city j, and the salesman wishes to make the tour whose total cost is minimum, where the total cost is the sum of the individual costs along the edges of the tour.

Representing the cities by vertices and the roads between them by edges, we get a graph. In this graph a real number is associated with every edge; such a graph is called a weighted graph, the number being the weight of the edge.

In our problem, if each of the cities has a road to every other city, we have a complete weighted graph. This graph has numerous Hamiltonian circuits, and we are to pick the one that has the smallest sum of distances.

The total number of different Hamiltonian circuits in a complete graph of n vertices can be shown to be (n - 1)!/2. This follows from the fact that, starting from any vertex, we have n - 1 edges to choose from at the first vertex, n - 2 at the second, n - 3 at the third, and so on; these choices being independent, the result is (n - 1)! choices. This number is, however, divided by 2, because each Hamiltonian circuit has been counted twice.
Theoretically, the problem of the traveling salesman can always be solved by enumerating all (n - 1)!/2 Hamiltonian circuits, calculating the distance traveled in each, and then picking the shortest one. However, for a large value of n the labor involved is too great even for a digital computer.
The problem is to prescribe a manageable algorithm for finding the shortest route. No efficient algorithm for problems of arbitrary size has yet been found, although many attempts have been made. Since this problem has application in operations research, some specific large-scale examples have been worked out. There are also available several heuristic methods of solution that give a route very close to the shortest one, but do not guarantee the shortest.
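A brute-force enumeration in Python makes the (n - 1)!/2 growth concrete, and also shows why this approach is hopeless for large n; the four-city distance matrix below is an invented example:

from itertools import permutations

def tsp_brute_force(dist):
    # Enumerate every Hamiltonian circuit that starts (and ends) at city 0.
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):          # (n-1)! orderings; each circuit is counted twice
        tour = (0,) + perm
        cost = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Illustrative symmetric distance matrix for 4 cities
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist))   # ((0, 1, 3, 2), 18)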

Write a short note on best-first search?


Best-first search, rather than plunging as deep as possible into the tree (as in depth-first search), or traversing each level of the tree in succession (as in breadth-first search), uses a heuristic to decide at each stage which is the best place to continue the search.
Best-first search in its most basic form consists of the following algorithm (adapted from Pearl, 1984):
The first step is to define the OPEN list with a single node, the starting node. The second step is to check whether or not OPEN is empty. If it is empty, then the algorithm returns failure and exits. The third step is to remove the node with the best score, n, from OPEN and place it in CLOSED. The fourth step “expands” the node n, where expansion is the identification of successor nodes of n. The fifth step then checks each of the successor nodes to see whether or not one of them is the goal node. If any successor is the goal node, the algorithm returns success and the solution, which consists of a path traced backwards from the goal to the start node. Otherwise, the algorithm proceeds to the sixth step. For every successor node, the algorithm applies the evaluation function, f, to it, then checks to see if the node has been in either OPEN or CLOSED. If the node has not been in either, it gets added to OPEN. Finally, the seventh step establishes a looping structure by sending the algorithm back to the second step. This loop will only be broken if the algorithm returns success in step five or failure in step two.
The algorithm is represented here in pseudo-code:
1. Define a list, OPEN, consisting solely of a single node, the start node, s.
2. IF the list is empty, return failure.
3. Remove from the list the node n with the best score (the node where f is the minimum), and move it to a list, CLOSED.
4. Expand node n.
5. IF any successor to n is the goal node, return success and the solution (by tracing the path from the goal node to s).
6. FOR each successor node:
a) apply the evaluation function, f, to the node.
b) IF the node has not been in either list, add it to OPEN.
7. GOTO 2.
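The same algorithm can be sketched in Python using a priority queue for OPEN; the graph, node scores and evaluation function f below are invented for illustration:

import heapq

def best_first_search(start, goal, successors, f):
    # OPEN is a priority queue ordered by the evaluation function f; "seen" records
    # every node that has been placed on OPEN or CLOSED.
    open_list = [(f(start), start, [start])]
    seen = {start}
    while open_list:                           # step 2: failure if OPEN becomes empty
        _, n, path = heapq.heappop(open_list)  # step 3: remove the node with the best score
        if n == goal:
            return path
        for m in successors(n):                # step 4: expand n
            if m == goal:                      # step 5: success, return the path from start to goal
                return path + [m]
            if m not in seen:                  # step 6b: add unseen successors to OPEN
                seen.add(m)
                heapq.heappush(open_list, (f(m), m, path + [m]))
    return None

# Illustrative graph and evaluation function (f is just a made-up numeric score per node)
graph = {"s": ["a", "b"], "a": ["c"], "b": ["c", "g"], "c": ["g"], "g": []}
scores = {"s": 5, "a": 3, "b": 2, "c": 2, "g": 0}
print(best_first_search("s", "g", lambda n: graph[n], lambda n: scores[n]))   # ['s', 'b', 'g']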

Applications

Best-first search and its more advanced variants have been used in such applications as games and web crawlers. In a web crawler, each web page is treated as a node, and all the hyperlinks on the page are treated as unvisited successor nodes. A crawler that uses best-first search generally uses an evaluation function that assigns priority to links based on how closely the contents of their parent page resemble the search query (Menczer, Pant, Ruiz, and Srinivasan, 2001). In games, best-first search may be used as a path-finding algorithm for game characters. For example, it could be used by an enemy agent to find the location of the player in the game world. Some games divide up the terrain into “tiles” which can either be blocked or unblocked. In such cases, the search algorithm treats each tile as a node, with the neighbouring unblocked tiles being successor nodes, and the goal node being the destination tile (Koenig, 2004).

Describe procedural Vs declarative knowledge?


Declarative knowledge is defined as the factual information stored in memory and known to be static in nature. Other names, e.g. descriptive knowledge, propositional knowledge, etc. are also given. It is the part of knowledge which describes how things are. Things/events/processes, their attributes, and the relations between these things/events/processes and their attributes define the domain of declarative knowledge.
Procedural knowledge is the knowledge of how to perform, or how to operate. Names such as know-how are also given. It is said that one becomes more skilled in problem solving when he relies more on procedural knowledge than declarative knowledge.
Procedural knowledge:

·   high efficiency

·   low modifiability

·   low cognitive adequacy (better for knowledge engineers)

Declarative knowledge:

·   higher level of abstraction

·   suitable for independent facts

·   good modifiability

·   good readability

·   good cognitive matching (better for domain experts and end-users)

·   low computational efficiency

What do you understand by forward Vs backward reasoning?


Whether you use forward or backwards reasoning to solve a problem depends on the properties of your rule set and initial facts. Sometimes, if you have some particular goal (to test some hypothesis), then backward chaining will be much more efficient, as you avoid drawing conclusions from irrelevant facts. However, sometimes backward chaining can be very wasteful - there may be many possible ways of trying to prove something, and you may have to try almost all of them before you find one that works. Forward chaining may be better if you have lots of things you want to prove (or if you just want to find out in general what new facts are true); when you have a small set of initial facts; and when there tend to be lots of different rules which allow you to draw the same conclusion. Backward chaining may be better if you are trying to prove a single fact, given a large set of initial facts, and where, if you used forward chaining, lots of rules would be eligible to fire in any cycle.
Forward Reasoning: works from the start states. Forward rules encode knowledge about how to respond to certain input.

Backward Reasoning: works from the goal states. Backward rules encode knowledge about how to achieve particular goals.

Combining forward and backward reasoning: in a goal clause ¬ A1, …, Ak-1, Ak, Ak+1, …, An, some of the subgoals can be achieved by forward reasoning while the remaining subgoals are achieved by backward reasoning.
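As a small illustration of forward chaining in Python (the rules and facts are invented for this example), each rule fires when all of its conditions are among the known facts, and the cycle repeats until no new facts can be derived; backward chaining would instead start from a goal such as "is_mammal" and work back through the rules:

def forward_chain(facts, rules):
    # rules: list of (conditions, conclusion) pairs; facts: set of initially known facts
    facts = set(facts)
    changed = True
    while changed:                       # keep cycling while new conclusions are being drawn
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)    # the rule fires: add its conclusion as a new fact
                changed = True
    return facts

rules = [({"has_fur", "says_meow"}, "is_cat"),
         ({"is_cat"}, "is_mammal")]
print(forward_chain({"has_fur", "says_meow"}, rules))
# {'has_fur', 'says_meow', 'is_cat', 'is_mammal'}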


What is Expert System?


·        An expert system is an interactive computer-based decision tool that uses both facts and heuristics to solve difficult decision-making problems, based on knowledge acquired from an expert.
·        An expert system is a model and associated procedure that exhibits, within a specific domain, a degree of expertise in problem solving that is comparable to that of a human expert.
·        An expert system compared with traditional computer : Inference engine + Knowledge = Expert system ( Algorithm + Data structures = Program in traditional computer )
·        The first expert system, called DENDRAL, was developed in the mid-1960s at Stanford University

Expert systems are computer applications which embody some non-algorithmic expertise for solving certain types of problems. For example, expert systems are used in diagnostic applications. They also play chess, make financial planning decisions, configure computers, monitor real time systems, underwrite insurance policies, and perform many services which previously required human expertise.

Expert System Components And Human Interfaces :

Expert systems have a number of major system components (typically a knowledge base, an inference engine, and a user interface) and interface with individuals who interact with the system in various roles, such as the domain expert, the knowledge engineer, and the end user.



What are the issues in knowledge representation?


The fundamental goal of knowledge representation is to facilitate inferencing (drawing conclusions) from knowledge. The issues that arise while using KR techniques are many. Some of these are explained below.

·   Important attributes: Are there any attributes of objects so basic that they occur in almost every problem domain?

·   Relationships among attributes: Are there any important relationships that exist among object attributes?

·   Choosing granularity: At what level of detail should the knowledge be represented?

·   Sets of objects: How should sets of objects be represented?

·   Finding the right structure: Given a large amount of stored knowledge, how can its relevant parts be accessed?

Wednesday, March 21, 2012

Artificial Intelligence Applications.


Applications of Artificial Intelligence:-

1.Problem Solving

2.Game Playing

3.Theorem Proving

4.Natural Language Processing & Understanding

5.Perception

      · Speech Recognition

      · Pattern Recognition

6.Image Processing

7.Expert System

8.Computer Vision

9.Robotics

10.Intelligent Computer Assisted Instruction

11.Automatic programming

12.Planning & Decision Support systems

13.Engineering Design & Chemical Analysis

14. Neural Architecture.

15. Heuristic Classification.

1 Problem Solving:-

This is the first application area of AI research. The objective of this particular area of research is how to implement procedures on AI systems so that they solve problems the way human beings do.

2 :- Game Playing:-
Much of the early research in state space search was done using common board games such as checkers, chess and the 8-puzzle. Most games are played using a well-defined set of rules. This makes it easy to generate the search space and frees the researcher from many of the ambiguities and complexities inherent in less structured problems. The board configurations used in playing these games are easily represented in a computer, requiring no complex formalisms. Solving large and complex AI problems requires techniques such as heuristics; much of what we commonly call intelligence seems to reside in the heuristics used by human beings to solve problems.

3 :- Theorem Proving:-

Theorem proving is another application area of AI research, e.g. proving Boolean algebra theorems. As humans we first try to prove a lemma, which tells us whether the theorem has a feasible solution or not. If the theorem has a feasible solution we try to prove it; otherwise we discard it. Whether an AI system will similarly try to prove a lemma before attempting to prove a theorem is the focus of this application area of research.

4 :- Natural Language Understanding:-

The main goal of this problem is that we can ask questions of the computer in our mother tongue, the computer can receive that language, and the system gives its response in the same language. The effective use of a computer has involved the use of a programming language or a set of commands that we must use to communicate with the computer. The goal of natural language processing is to enable people and computers to communicate in a natural language, such as English, rather than in a computer language.

It can be divided into two subfields.

Natural Language Understanding : This investigates methods of allowing the computer to comprehend instructions given in ordinary English, so that computers can understand people more easily.

Natural Language Generation : This aims to have computers produce ordinary English language so that people can understand computers more easily.

5. Perception:-

The process of perception usually involves a set of operations, i.e. touching, smelling, listening, tasting, and eating. Incorporating these perceptual activities into an intelligent computer system is mainly concerned with the areas of natural language understanding and processing and computer vision. There are two major challenges in the application area of perception.

1. Speech Recognition

2. Pattern Recognition

Speech Recognition:-

The main goal of this problem is for the computer system to recognize our speech. (The next step is to understand that speech and process it, i.e. encoding and decoding, producing the result in the same language.) This is very difficult; speech recognition can be described in two ways.

1. Discrete Speech Recognition

This means people can interact with the computer in their mother tongue, but in such interaction they must insert a time gap between two words or two sentences (in this type of speech recognition the computer takes some time to search its database).

2. Continuous Speech Recognition

This means that when we interact with the computer in our mother tongue we do not insert a time gap between two words or sentences, i.e. we can talk continuously with the computer (for this purpose the speed of the computer must be increased).
Pattern Recognition:-

In this, the computer identifies real-world objects with the help of a camera. This too is very difficult, because:

- To identify regular-shaped objects, we can see the object from any angle and imagine its actual shape (i.e. picture which part the light falls on); through this we can identify the total structure of that particular object.

- To identify irregular-shaped things, we can see the thing from any angle, but through this we cannot imagine its actual structure. We can attach a camera to the computer and picture the part of the image on which the light falls; with the help of that, can the AI system recognize the actual structure of the image or not? This is somewhat more difficult than for regular-shaped things, and the research is still going on. This is related to the application area of computer vision.

A pattern is a quantitative or structural description of an object or some other entity of interest in an image. A pattern is formed by an arrangement of descriptors. Pattern recognition is the research area that studies the operation and design of systems that recognize patterns in data. It encompasses discriminant analysis, feature extraction, error estimation, cluster analysis, and parsing (sometimes called syntactic pattern recognition). Important application areas are image analysis, character recognition, speech recognition and analysis, man and machine diagnostics, person identification and industrial inspection.

Areas Closely Related to Pattern Recognition

Artificial Intelligence

Expert systems and machine learning

Neural Networks

Computer Vision

Cognition

Perception

Image Processing


6.Image Processing:-
Whereas in pattern recognition we capture images of real-world things with the help of a camera, the goal of image processing is to identify the relations between the parts of an image.

It is a simple task to attach a camera to a computer so that the computer can receive visual images. People generally use vision as their primary means of sensing their environment; we generally see more than we hear. The question is how we can provide such perceptual faculties (touch, smell, taste, and so on) to the AI system. The goal of computer vision research is to give computers this powerful facility for understanding their surroundings. Currently, one of the primary uses of computer vision is in the area of robotics.

Ex:- We can take a satellite image to identify roads and forests; we can digitize the whole image and place it on disk, using a particular scale to convert the image into dot form, so that we can identify that particular image at any time. This is a time-consuming process, and AI research is still going on into how, with the help of image processing, to reduce the time needed to process an image.

In Image Processing the process of image recognition can be broken into the following main stages.

· Image capture

· Edge detection

· Segmentation

· Recognition and Analysis.

Image capture can be performed by a simple camera, which converts light signals into a set of electrical signals, much as is done by the human visual system. These signals are obtained as a set of 0s and 1s. Each pixel takes on one of a number of possible values, often from 0 to 255. Color images are broken down in the same way, but with varying colors instead of gray scales. When a computer receives an image from a sensor in the form of a set of pixels, these pixels are integrated to give the computer an understanding of what it is perceiving.

Once an image has been obtained, the very first stage of analysis, called edge detection, is to determine where the edges are in the image. Almost all objects in the real world have solid edges of one kind or another, and detecting those edges is the first step in the process of determining which objects are present in a scene.
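A crude sketch of edge detection in Python: intensity gradients are approximated by differencing neighbouring pixels, and an edge is reported wherever the gradient magnitude exceeds a threshold (the image and threshold are illustrative):

import numpy as np

def detect_edges(image, threshold):
    # Approximate vertical and horizontal intensity gradients by simple pixel differences.
    gy = np.abs(np.diff(image.astype(float), axis=0))[:, :-1]
    gx = np.abs(np.diff(image.astype(float), axis=1))[:-1, :]
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold              # True wherever the intensity changes sharply (an edge)

image = np.array([[10, 10, 200, 200],
                  [10, 10, 200, 200],
                  [10, 10, 200, 200]])
print(detect_edges(image, 50).astype(int))    # marks the boundary between the dark and bright regions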

Once the edges in an image have been detected, this information can be used to segment the image into homogeneous areas. There are other methods available for segmenting an image, apart from using edge detection, such as the threshold method. This method involves finding the color of each pixel in an image and considering adjacent pixels to be in the same area as long as their colors are similar enough.

A similar method for segmenting images is splitting and merging. Splitting involves taking an area that is not homogeneous and splitting it into two or more smaller areas, each of which is homogeneous. Merging involves taking two areas that are the same as each other, and adjacent to each other and combining them together into a large area. This provides a sophisticated interactive approach to segmenting an image.
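A crude Python sketch of the threshold method described above: pixels are split into two groups by a brightness threshold, and adjacent pixels on the same side of the threshold are grouped into the same region (the image and threshold are invented for illustration):

import numpy as np

def threshold_segment(image, threshold):
    # 1 where the pixel is brighter than the threshold, 0 otherwise
    mask = (image > threshold).astype(int)
    labels = np.zeros_like(mask)
    current = 0
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if labels[y, x] == 0:
                current += 1
                stack = [(y, x)]
                while stack:                  # flood fill: grow the region over similar neighbours
                    cy, cx = stack.pop()
                    if labels[cy, cx]:
                        continue
                    labels[cy, cx] = current
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and labels[ny, nx] == 0 and mask[ny, nx] == mask[cy, cx]):
                            stack.append((ny, nx))
    return labels

image = np.array([[10,  12, 200, 210],
                  [11,  13, 205, 220],
                  [ 9, 180, 190, 215],
                  [ 8,  10,  12,  14]])
print(threshold_segment(image, 100))          # each homogeneous area gets its own label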

Image analysis is often divided into low-level, intermediate-level, and high-level processing.

7. Expert System:- An expert is a person who has complete knowledge of a particular field. The main aim of this problem is, with the help of experts, to load their tricks onto the computer and make those tricks available to other users. An expert can solve problems within a reasonable time.

The goal of this problem is to load the tricks and ideas of an expert onto the computer; research on this is still going on.

8. Computer Vision:- It is a simple task to attach a camera to a computer so that the computer can receive visual images. People generally use vision as their primary means of sensing their environment. We generally see more than we hear, feel, smell, or taste.

The goal of computer vision research is to give computers this powerful facility for understanding their surroundings. Currently, one of the primary uses of computer vision is in the area of Robotics.

9. Robotics:-

A robot is an electro-mechanical device that can be programmed to perform manual tasks. The Robotics Industries Association formally defines a robot as a “programmable multi-functional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks”.

Not all robotics is considered to be part of AI. A robot that performs only the actions that it has been pre-programmed to perform is considered to be a “dumb” robot. An AI-based robot, by contrast, includes some kind of sensory apparatus, such as a camera, that allows it to respond to changes in its environment, rather than just follow instructions “mindlessly”.

10. Intelligent Computer-Assisted Instruction:-
Computer-Assisted Instruction (CAI) has been used to bring the power of the computer to bear on the educational process. Now AI methods are being applied to the development of intelligent computerized “tutors” that shape their teaching techniques to fit the learning patterns of individual students.

11. Automatic Programming:- Programming is the process of telling the computer exactly what we want it to do. The goal of automatic programming is to create special programs that act as intelligent “tools” to assist programmers and expedite each phase of the programming process. The ultimate aim of automatic programming is a computer system that could develop programs by itself, in response to and in accordance with the specifications of the program developer.

12. Planning and Decision Support Systems:- When we have a goal, either we rely on luck and providence to achieve it, or we design and implement a plan. The realization of a complex goal may require the construction of a formal and detailed plan. Intelligent planning programs are designed to provide active assistance in the planning process and are expected to be particularly helpful to managers with decision-making responsibilities.

13. Engineering Design & Chemical Analysis:-

Artificial intelligence applications are playing a major role in engineering design and chemical analysis, helping to produce expert drawings and chemical syntheses.

14. Neural Architecture:-

People are more intelligent than computers, but AI researchers are trying to make computers intelligent. Humans are better at interpreting noisy input, such as recognizing a face in a darkened room from an odd angle. Even where humans may not be able to solve some problem, we can generally make a reasonable guess at its solution. Neural architectures capture knowledge in a large number of units, and they are robust because knowledge is distributed somewhat uniformly around the network.

Neural architectures also provide a natural model for parallelism, because each neuron is an independent unit. This avoids the slowdown of serially searching a database; a massively parallel architecture like the human brain does not suffer from this problem.

15. Heuristic Classification:-

The term heuristic means to find and discover: find the problem and discover the solution. Solving complex AI problems requires a great deal of knowledge and mechanisms for representing and applying it, often in the form of heuristic search techniques; this is referred to as heuristic classification.