
Rational Agents for Artificial Intelligence

by Prashant Gupta, September 21st, 2017

There are multiple approaches you might take to create Artificial Intelligence, depending on what you hope to achieve with it and how you will measure its success. These range from rare, complex systems, like self-driving cars and robotics, to things that are part of our daily lives, like face recognition, machine translation and email classification.

The article below gives an insight into what it takes to truly create Artificial Intelligence.


So you think you know what is Artificial Intelligence? "When you think of Artificial Intelligence, the first thing that comes to mind is either Robots or Machines with Brains…" (hackernoon.com)

The path you take will depend on the goals of your AI and on how well you understand the complexity and feasibility of the various approaches. In this article we will discuss the approach that is considered more feasible and general for scientific development: the study of the design of rational/intelligent agents.

What is an Agent?

An agent is anything that can be seen as

  • perceiving its environment through sensors
  • acting upon it through actuators

An agent runs in a cycle of perceiving, thinking and acting. Take humans, for example: we perceive our environment through our five senses (sensors), we think about it, and then we act using our body parts (actuators). Similarly, robotic agents perceive the environment through the sensors we give them (cameras, microphones, infrared detectors), do some computing (think), and then act using the various motors/actuators attached to them. It should now be clear that the world around you is full of agents: your cell phone, vacuum cleaner, smart fridge, thermostat, camera and even yourself.
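
As a rough sketch of this cycle, the skeleton below shows one way it might look in code; the Agent class, the run loop and the environment methods (current_percept, apply) are illustrative assumptions, not part of any particular framework.

```python
class Agent:
    """Minimal perceive-think-act skeleton (illustrative only)."""

    def perceive(self, environment):
        # Read the current percept from the environment via the "sensors".
        return environment.current_percept()

    def think(self, percept):
        # Decide on an action; a real agent puts its intelligence here.
        return "no-op" if percept is None else f"respond-to-{percept}"

    def act(self, action, environment):
        # Apply the chosen action to the environment via the "actuators".
        environment.apply(action)


def run(agent, environment, steps=10):
    # The perceive-think-act cycle described above, repeated for a few steps.
    for _ in range(steps):
        percept = agent.perceive(environment)
        action = agent.think(percept)
        agent.act(action, environment)
```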

What is an Intelligent Agent?

An intelligent agent acts in a way that is expected to maximize its performance measure, given the evidence provided by what it has perceived and whatever built-in knowledge it has.

The performance measure defines the criterion of success for an agent.

Such agents are also known as rational agents. The rationality of an agent is judged against its performance measure, the prior knowledge it has, the percepts it can receive from the environment and the actions it can perform.

This concept is central in Artificial Intelligence.

The above properties of an intelligent agent are often grouped under the term PEAS, which stands for Performance, Environment, Actuators and Sensors. So, for example, a self-driving car would have the following PEAS (a small code sketch of this description follows the list):

  • Performance: Safety, time, legal drive, comfort.
  • Environment: Roads, other cars, pedestrians, road signs.
  • Actuators: Steering, accelerator, brake, signal, horn.
  • Sensors: Camera, sonar, GPS, speedometer, odometer, accelerometer, engine sensors, keyboard.
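
As a small illustrative sketch (the PEAS class and its field names are my own, not a standard API), the description above could be recorded like this:

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """A PEAS description of an agent's task environment (illustrative names)."""
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other cars", "pedestrians", "road signs"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "sonar", "GPS", "speedometer", "odometer",
             "accelerometer", "engine sensors", "keyboard"],
)
```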

To satisfy real-world use cases, Artificial Intelligence needs a wide spectrum of intelligent agents, which introduces diversity in both the types of agents and the types of environments. Let's take a look.

Types of Environments

A rational agent needs to be designed with its target environment in mind. Below are the main types of environment (a small code sketch characterising two example environments follows the list):

  • Fully observable and partially observable: In a fully observable environment, an agent’s sensors give it access to the complete state of the environment at each point in time; otherwise it is partially observable. For example, chess is a fully observable environment, while poker is not.
  • Deterministic and stochastic: In a deterministic environment, the next state is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, it is called strategic.) A stochastic environment is random in nature and cannot be completely determined. For example, the 8-puzzle has a deterministic environment, but a driverless car does not.
  • Static and dynamic: A static environment is unchanged while an agent is deliberating. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent’s performance score does.) A dynamic environment, on the other hand, does change. Backgammon has a static environment, while a Roomba operates in a dynamic one.
  • Discrete and continuous: A limited number of distinct, clearly defined percepts and actions constitutes a discrete environment. Checkers, for example, is a discrete environment, while a self-driving car operates in a continuous one.
  • Single-agent and multi-agent: An agent operating entirely by itself is in a single-agent environment. If other agents are involved, it is a multi-agent environment. Self-driving cars operate in a multi-agent environment.
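
As a minimal sketch, assuming a made-up EnvironmentProfile structure, the two example environments from the list can be characterised along these dimensions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProfile:
    """Characterises a task environment along the dimensions listed above."""
    fully_observable: bool
    deterministic: bool
    static: bool
    discrete: bool
    single_agent: bool

eight_puzzle = EnvironmentProfile(
    fully_observable=True, deterministic=True, static=True,
    discrete=True, single_agent=True)

self_driving_car = EnvironmentProfile(
    fully_observable=False, deterministic=False, static=False,
    discrete=False, single_agent=False)
```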

There are other environment dimensions, such as episodic versus sequential and known versus unknown, that further define the scope of an agent.

Types of Agents

There are four general types of agents, varying in their level of intelligence and in the complexity of the tasks they are able to perform. All of these types can improve their performance and generate better actions over time; such agents are generalized as learning agents.

Simple reflex agents

These select an action based on the current percept only, ignoring the percept history.

They work only if the environment is fully observable, that is, if the correct action can be chosen based on what is perceived at the current moment.
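
A minimal sketch of a simple reflex agent, using a toy vacuum-world percept format that is assumed here purely for illustration:

```python
# Toy vacuum world: the percept is (location, is_dirty) and the agent's
# choice depends only on this current percept, never on past percepts.
def simple_reflex_vacuum(percept):
    location, is_dirty = percept
    if is_dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

print(simple_reflex_vacuum(("A", True)))   # Suck
print(simple_reflex_vacuum(("A", False)))  # Right
```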

Model-based reflex agents

These agents can handle partially observable environments by keeping an internal state that depends on the percept history. The environment/world is modeled in terms of how it evolves independently of the agent and how the agent’s actions affect it.
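
A minimal sketch of a model-based reflex agent for the same toy vacuum world; the internal "believed clean" model and the two-square layout are assumptions made for illustration:

```python
class ModelBasedVacuum:
    """Keeps an internal model of which squares it believes are already clean."""

    LOCATIONS = ("A", "B")

    def __init__(self):
        self.believed_clean = set()  # internal state built from the percept history

    def update_state(self, percept):
        location, is_dirty = percept
        if is_dirty:
            self.believed_clean.discard(location)
        else:
            self.believed_clean.add(location)

    def choose_action(self, percept):
        self.update_state(percept)
        location, is_dirty = percept
        if is_dirty:
            return "Suck"
        # Use the internal model: head for a square not yet known to be clean.
        for target in self.LOCATIONS:
            if target != location and target not in self.believed_clean:
                return "Right" if target > location else "Left"
        return "NoOp"

agent = ModelBasedVacuum()
print(agent.choose_action(("A", False)))  # Right (B's status is still unknown)
```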

Goal-based agents

This is an improvement over model-based agents, used in cases where knowing the current state of the environment is not enough. The agent combines the provided goal information with the environment model to choose actions that achieve that goal.
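
A goal-based agent needs some way to find a sequence of actions that reaches the goal. One minimal sketch, assuming a toy transition model expressed as a dictionary, is a breadth-first search over that model:

```python
from collections import deque

def plan_to_goal(start, goal, transitions):
    """Breadth-first search for a sequence of actions that reaches the goal.

    `transitions` maps a state to {action: next_state}; this toy model is
    an assumption made for the example, not a standard library structure.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, nxt in transitions.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # the goal is unreachable with this model

# Tiny example: move between rooms A, B and C to reach the goal room C.
rooms = {"A": {"right": "B"}, "B": {"right": "C", "left": "A"}}
print(plan_to_goal("A", "C", rooms))  # ['right', 'right']
```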

Utility-based agents

An improvement over goal-based agents, useful when merely achieving the desired goal is not enough and we also need to consider a cost. For example, we may look for a quicker, safer or cheaper trip to a destination. This preference is captured by a utility function, and a utility-based agent chooses the action that maximizes the expected utility.
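
A minimal sketch of the expected-utility calculation; the trip options, probabilities and utility values below are made up for illustration:

```python
def expected_utility(outcomes):
    # `outcomes` is a list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

def best_action(action_outcomes):
    # Pick the action whose expected utility is highest.
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

# Hypothetical trip options: the utilities trade off speed, safety and cost.
trip_options = {
    "highway":   [(0.9, 80), (0.1, -100)],  # usually fast, small risk of a jam
    "back_road": [(1.0, 60)],               # slower but predictable
}
print(best_action(trip_options))  # highway (expected utility 62 vs 60)
```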

A general intelligent agent, also known as a learning agent, was proposed by Alan Turing and is now the preferred approach for creating state-of-the-art systems in Artificial Intelligence.

All the agents described above can be generalized into these learning agents to generate better actions.

Learning Agents

A learning agent has four conceptual components:

  • Learning element: responsible for making improvements.
  • Performance element: responsible for selecting external actions; it is what we have considered as the agent so far.
  • Critic: evaluates how well the agent is doing with respect to a fixed performance standard.
  • Problem generator: suggests exploratory actions, allowing the agent to explore.
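
A hypothetical skeleton showing how these four components might be wired together; the component interfaces here are assumptions, not a standard design:

```python
class LearningAgent:
    """Skeleton wiring of the four components listed above (illustrative only)."""

    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # scores behaviour against a standard
        self.problem_generator = problem_generator      # proposes exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        exploratory_action = self.problem_generator(percept)
        if exploratory_action is not None:
            return exploratory_action
        return self.performance_element(percept)
```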

Internal State Representation

As agents become more complex, so does their internal structure, and the way they store their internal state changes. By its nature, a simple reflex agent does not need to store a state, but the other types do. The image below gives a high-level view of agent state representations, in order of increasing expressive power (left to right); a small code sketch contrasting the three follows the list.

Image credit: Percy Liang

  • Atomic representation: The state is stored as a black box, i.e. without any internal structure. For example, for a Roomba (a robotic vacuum cleaner), the internal state might simply be a patch that has already been vacuumed; you don’t need to know anything else. As depicted in the image, such a representation works for model-based and goal-based agents and is used in AI algorithms such as search problems and adversarial games.
  • Factored representation: The state is no longer a black box; it now consists of attribute-value pairs, also known as variables, each of which can hold a value. For example, while finding a route, you have a GPS location and the amount of gas in the tank, and each variable adds constraints to the problem. As depicted in the image, such a representation works for goal-based agents and is used in AI algorithms such as constraint satisfaction and Bayesian networks.
  • Structured representation: Here we also have relationships between the variables/factored states, which brings logic into the AI algorithms. For example, in natural language processing, the states might be whether a statement contains a reference to a person and whether an adjective in that statement describes that person; the relation between these states helps decide whether the statement was sarcastic. This is high-level Artificial Intelligence, used in approaches like first-order logic, knowledge-based learning and natural language understanding.
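
To make the distinction concrete, here is one way a route-finding situation might be encoded at each of the three levels; the encodings are illustrative only:

```python
# Atomic: the state is an opaque label; algorithms can only compare whole states.
atomic_state = "at_city_B"

# Factored: the state is a set of variable/value pairs (e.g. GPS position, fuel).
factored_state = {"gps": (40.7, -74.0), "fuel_litres": 23.5}

# Structured: the state holds objects and explicit relations between them.
structured_state = {
    "objects": ["truck1", "city_A", "city_B"],
    "relations": [("at", "truck1", "city_A"), ("road", "city_A", "city_B")],
}
```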

There is much more to rational agents for Artificial Intelligence; this was just an overview. As you can tell, the study of the design of rational agents is a really important part of Artificial Intelligence, with applications in a wide variety of fields. However, these agents don’t work on their own; they need an AI algorithm to drive them, and most of these algorithms involve searching.

I’ll soon be writing more on the AI algorithms that drive rational agents and on the use of machine learning in Artificial Intelligence. So, to stay more aware of the world of A.I., follow me. It’s the best way to find out when I write more articles like this.

If you liked this article, be sure to show your support by clapping for this article below and if you have any questions, leave a comment and I will do my best to answer.

You can also follow me on Twitter at @Prashant_1722, email me directly or find me on LinkedIn. I’d love to hear from you.

That’s all folks, Have a nice day :)

Credit

Content for this article is inspired by and drawn from Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig, Third Edition, Pearson Education.