There can be one or many solutions to a given problem, and many ways to arrive at them, depending on the scenario. Think about how you approach a problem. Let's say you need to do something straightforward, like multiplying two numbers. There is clearly one correct answer, but many algorithms to compute it, depending on the size of the input. Now take a more complicated problem, like playing a game (imagine your favorite game: chess, poker, Call of Duty, DOTA, anything). In most of these games, at a given point in time, you have multiple moves you can make, and you choose the one that gives the best possible outcome. In this scenario there is no single correct solution, but there is a best possible solution, depending on what you want to achieve. There are also multiple ways to approach the problem, based on the strategy you choose for your gameplay.
Rational agents in Artificial Intelligence approach problems in a similar fashion: an agent has to search through the solution space to find the best result. This makes search algorithms important in the study of Artificial Intelligence. What counts as the best result, and why one solution is preferred over another, is something we program into the AI. In this article, we will see how an Artificial Intelligence searches for the solution to a given problem.
To get a better understanding of Artificial Intelligence systems, refer to:
When you think of Artificial Intelligence, the first thing that comes to mind is either Robots or Machines with Brains… (hackernoon.com)
To learn more about Rational Agents powered by Artificial Intelligence, refer to:
There are multiple approaches that you might take to create Artificial Intelligence, based on what we hope to achieve… (hackernoon.com)
Simple reflex agents don't search for the best possible solution; they are programmed to perform a particular action in a particular state. In contrast, artificially intelligent agents that work towards a goal identify the action, or series of actions, that leads to that goal. The series of actions that leads to the goal becomes the solution to the given problem. Here, the agent has to consider the impact of each action on future states. Such agents search through all possible solutions to find the best one for the given problem.
Defining the problem
The agent should be clear about the goal. This means we need to program the agent so that it can clearly classify a state as a goal. Since there are many ways to reach that goal, the agent should also be able to evaluate solutions and determine its preference for one over another. Let's look at the factors that need to be defined to formulate a problem.
- Initial State: The state in which the agent starts, or the initial condition of the agent.
- States: All states reachable from the initial state by any sequence of actions, or all possible states the agent can be in. This is also referred to as the state space.
- Actions: All possible actions that the agent can execute. Specifically, this provides the list of actions the agent can perform in a particular state. This is also referred to as the action space.
- Transition Model: Describes the result of each action taken in a particular state.
- Goal Test: A way to check whether a state is the goal.
- Path Cost: A function that assigns a numeric cost to a path with respect to the performance measure.
Problem formulation provides an easy way to design a rational agent.
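As a rough illustration, the factors above can be sketched as a small Python class. The toy route-finding graph and all names here (RouteProblem, step_cost, etc.) are illustrative assumptions, not from any particular library.

```python
# A minimal sketch of problem formulation for a toy route-finding problem.

class RouteProblem:
    def __init__(self, graph, initial, goal):
        self.graph = graph        # {state: {neighbor: step cost}}
        self.initial = initial    # Initial State
        self.goal = goal

    def actions(self, state):
        """Actions: what the agent can do in this state (move to a neighbor)."""
        return list(self.graph[state])

    def result(self, state, action):
        """Transition Model: the state resulting from taking an action."""
        return action

    def goal_test(self, state):
        """Goal Test: is this state the goal?"""
        return state == self.goal

    def step_cost(self, state, action):
        """Path Cost is accumulated from these per-step costs."""
        return self.graph[state][action]

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
problem = RouteProblem(graph, initial="A", goal="D")
print(problem.actions("A"))    # ['B', 'C']
print(problem.goal_test("D"))  # True
```

The state space here is the set of graph nodes, and the action space in each state is the set of outgoing edges.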
Unlike the state space, which is a physical configuration, the search space is an abstract configuration represented by a search tree or graph of possible solutions.
A search tree is used to model the sequence of actions. It is constructed with the initial state as the root. Actions taken form the branches, and nodes are the results of those actions. Each node has a depth, a path cost, and an associated state in the state space.
The search space is divided into 3 regions, namely the explored region, the frontier, and the unexplored region.
Search involves moving nodes from the unexplored region to the explored region. A strategic ordering of these moves yields a better search. These moves are also known as node expansion. Picking the order of node expansion gives rise to different search strategies, suited to different kinds of problems.
Different search strategies are evaluated along four dimensions: completeness, time complexity, space complexity, and optimality. Time and space complexity are measured in terms of:
- b: maximum branching factor of the search tree (actions per state).
- d: depth of the shallowest solution.
- m: maximum depth of the state space (may be ∞; sometimes noted D).
Types of Search
There are 2 kinds of search, based on whether or not they use information about the goal.
Uninformed Search
This type of search does not use any domain knowledge. That is, it uses no information that would help it reach the goal, such as the closeness or location of the goal. Strategies or algorithms using this form of search ignore where they are going until they find a goal and report success.
The basic uninformed search strategies are:
- BFS (Breadth First Search): Expands the shallowest node (the node with the lowest depth) first.
- DFS (Depth First Search): Expands the deepest node first.
- DLS (Depth Limited Search): DFS with a limit on depth.
- IDS (Iterative Deepening Search): DFS with an increasing depth limit.
- UCS (Uniform Cost Search): Expands the node with the least path cost (the cost of reaching the node).
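To make the idea concrete, here is a minimal sketch of BFS over a graph given as an adjacency dict; the graph and function names are illustrative assumptions.

```python
# Breadth-First Search: expand the shallowest unexplored node first.
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (by number of steps) from start to goal, or None."""
    frontier = deque([[start]])   # the frontier, stored as partial paths
    explored = set()              # the explored region
    while frontier:
        path = frontier.popleft() # FIFO order => shallowest node first
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for neighbor in graph.get(state, []):
            frontier.append(path + [neighbor])
    return None                   # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))   # ['A', 'B', 'D']
```

Swapping the FIFO queue for a LIFO stack turns this into DFS, which is exactly the "order of node expansion" distinction described above.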
Informed Search
This type of search uses domain knowledge. It generally uses a heuristic function that estimates how close a state is to the goal. The heuristic need not be perfect; it is used to estimate the cost from a state to the closest goal.
The basic informed search strategies are:
- Greedy search (best-first search): Expands the node that appears to be closest to the goal.
- A* search: Minimizes the total estimated solution cost f(n) = g(n) + h(n), where g(n) is the cost of reaching a state and h(n) is the estimated cost of reaching the goal from that state.
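As a rough sketch of A*, assuming a weighted graph and a made-up heuristic h (its values are illustrative estimates of the remaining cost to the goal):

```python
# A* search: expand the node minimizing f(n) = g(n) + h(n).
import heapq

def a_star(graph, h, start, goal):
    """Return (path, cost) minimizing total path cost, or (None, inf)."""
    # Priority queue of (f, g, state, path); heapq pops the smallest f first.
    frontier = [(h[start], 0, start, [start])]
    best_g = {}  # cheapest known cost g to reach each expanded state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if best_g.get(state, float("inf")) <= g:
            continue  # already expanded via a cheaper or equal path
        best_g[state] = g
        for neighbor, cost in graph.get(state, {}).items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

graph = {"A": {"B": 1, "C": 3}, "B": {"D": 4}, "C": {"D": 1}, "D": {}}
h = {"A": 3, "B": 4, "C": 1, "D": 0}  # never overestimates the true cost
print(a_star(graph, h, "A", "D"))     # (['A', 'C', 'D'], 4)
```

With h set to zero everywhere, this degenerates into Uniform Cost Search; with g ignored, it becomes greedy best-first search.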
Search agents are just one kind of algorithm in Artificial Intelligence. Here, an AI has to choose from a large solution space, given a large action space over a large state space. Selecting the right search strategy for your Artificial Intelligence can greatly amplify the quality of results. This involves formulating the problem your AI is going to solve in the right way. Some real-world examples where search agents are used:
- Route finding problem: Applications include driving-directions tools on websites, in-car navigation systems, etc.
- Travelling salesman problem: Find the shortest tour that visits each city exactly once.
- VLSI layout: Position millions of components and connections on a chip to minimize area and shorten delays.
- Robot navigation: A special case of route finding for robots with no specific routes or connections, where the state space and action space are potentially infinite.
- Automatic assembly sequencing: Find an order in which to assemble the parts of an object, a difficult and expensive geometric search.
- Protein design: Find a sequence of amino acids that will fold into a 3D protein with the right properties to cure some disease.
Although we are just scratching the surface, this article gives you an outline of the kinds of algorithms that drive an AI. Depending on the problem, an Artificial Intelligence can use many other algorithms involving Machine Learning, Bayesian networks, Markov models, etc.
I’ll soon be elaborating on these AI algorithms that drive rational search agents, as well as other algorithms, including the use of machine learning in Artificial Intelligence. So, to stay aware of the world of A.I., follow me. It’s the best way to find out when I write more articles like this.
If you liked this article, be sure to show your support by clapping for this article below and if you have any questions, leave a comment and I will do my best to answer.
That’s all folks, Have a nice day :)
Content for this article is inspired by and taken from Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig, Third Edition, Pearson Education.