There are multiple approaches you might take to create Artificial Intelligence, depending on what you hope to achieve with it and how you will measure its success. AI ranges from rare and complex systems, like self-driving cars and robotics, to things that are part of our daily lives, like face recognition, machine translation and email classification. The path you take will depend on the goals of your AI. For background on the field itself, see my earlier article _So you think you know what is Artificial Intelligence?_ on hackernoon.com. In this article we will discuss the approach that is considered the most feasible and general for scientific development: the study of the design of rational/intelligent agents.

What is an Agent?

An agent is anything that can be seen as perceiving its environment through sensors and acting upon that environment through actuators. Take humans, for example: we perceive our environment through our five senses (sensors), we think about it, and then we act using our body parts (actuators). Similarly, robotic agents perceive the environment through the sensors we give them (cameras, microphones, infrared detectors), do some computing (think), and then act using motors and other actuators. The world around you is full of agents: your cell phone, vacuum cleaner, smart fridge, thermostat, camera and even yourself. An agent runs in a cycle of perceiving, thinking and acting.

What is an Intelligent Agent?

An intelligent agent is one that acts in the way expected to maximize its performance measure, given the evidence provided by its percepts and whatever built-in knowledge it has. The performance measure defines the criterion of success for the agent.
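The perceive-think-act cycle can be sketched in a few lines of Python. The thermostat rule and all names below are invented for illustration, not a real API:

```python
# A minimal sketch of the perceive-think-act cycle, assuming a toy
# thermostat agent; the decision rule and names are illustrative only.

def thermostat_agent(percepts, setpoint=20.0):
    """Map a sequence of temperature readings to actuator commands."""
    actions = []
    for temperature in percepts:       # perceive: read the sensor
        if temperature < setpoint:     # think: apply the decision rule
            action = "heat_on"
        else:
            action = "heat_off"
        actions.append(action)         # act: issue the actuator command
    return actions

print(thermostat_agent([18.5, 19.0, 21.2]))  # ['heat_on', 'heat_on', 'heat_off']
```

Even this toy agent has the three parts of the cycle: a percept stream in, a decision in the middle, and an action stream out.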
Such agents are also known as Rational Agents. The rationality of an agent is measured by its performance measure, the prior knowledge it has, the environment it can perceive, and the actions it can perform. This concept is central to Artificial Intelligence.

The above properties of intelligent agents are often grouped under the term PEAS, which stands for Performance, Environment, Actuators and Sensors. For example, a self-driving car would have the following PEAS:

Performance: safety, time, legal drive, comfort.
Environment: roads, other cars, pedestrians, road signs.
Actuators: steering, accelerator, brake, signal, horn.
Sensors: camera, sonar, GPS, speedometer, odometer, accelerometer, engine sensors, keyboard.

To satisfy real-world use cases, Artificial Intelligence needs a wide spectrum of intelligent agents, which introduces diversity in the types of agents and environments involved. Let's take a look.

Types of Environments

A rational agent needs to be designed with the type of environment it will operate in kept in mind. Below are the types:

Fully observable and partially observable: in a fully observable environment, an agent's sensors give it access to the complete state of the environment at each point in time; otherwise the environment is only partially observable. For example, chess is a fully observable environment, while poker is not.

Deterministic and stochastic: in a deterministic environment, the next state is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, it is called strategic.) A stochastic environment is random in nature and cannot be completely determined. For example, the 8-puzzle has a deterministic environment, but a driverless car does not.
Static and dynamic: a static environment does not change while an agent is deliberating; a dynamic environment, on the other hand, does. (The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.) Backgammon has a static environment, while a Roomba has a dynamic one.

Discrete and continuous: a limited number of distinct, clearly defined percepts and actions constitutes a discrete environment. For example, checkers is played in a discrete environment, while a self-driving car evolves in a continuous one.

Single-agent and multi-agent: an agent operating entirely by itself has a single-agent environment; if other agents are involved, it is a multi-agent environment. Self-driving cars operate in a multi-agent environment.

There are other environment distinctions, such as episodic versus sequential and known versus unknown, that further define the scope of an agent.

Types of Agents

There are four types of agents in general, varying in their level of intelligence and the complexity of the tasks they can perform. All of them can improve their performance and generate better actions over time; in that form they are generalized as learning agents.

Simple reflex agents

These select an action based on the current percept only, ignoring the history of perceptions. They can only work if the environment is fully observable, or if the correct action depends only on what is perceived currently.

Model-based reflex agents

These maintain an internal state that depends on the percept history. The environment/world is modeled in terms of how it evolves independently of the agent and how the agent's actions affect it. This lets the agent keep track of partially observable environments.

Goal-based agents

An improvement over model-based agents, used when knowing the current state of the environment is not enough. These agents combine the provided goal information with the environment model to choose actions that achieve the goal.

Utility-based agents

An improvement over goal-based agents, helpful when merely achieving the desired goal is not enough; we might also need to consider a cost. For example, we may look for a quicker, safer or cheaper trip to reach a destination. Such preferences are captured by a utility function, and a utility-based agent chooses the action that maximizes expected utility.
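The idea of maximizing expected utility can be sketched as follows. The candidate trips, probabilities and utility numbers are all made up for illustration:

```python
# Hypothetical sketch of a utility-based choice: each action leads to
# possible outcomes with probabilities, and the agent picks the action
# whose expected utility is highest. All numbers are invented.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

trips = {
    "highway": [(0.9, 8.0), (0.1, -10.0)],   # usually fast, small risk of a jam
    "backroad": [(1.0, 5.0)],                # slower, but a certain outcome
}
print(choose_action(trips))  # 'highway' (expected utility 6.2 vs 5.0)
```

Note that a goal-based agent would treat both trips as equally acceptable (both reach the destination); it is the utility function that lets the agent prefer one over the other.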
Learning Agents

A general intelligent agent, also known as a learning agent, was proposed by Alan Turing and is now the preferred way of creating state-of-the-art systems in Artificial Intelligence. All the agents described above can be generalized into learning agents that generate better actions over time. A learning agent has four components:

Learning element: responsible for making improvements.
Performance element: responsible for selecting external actions; it is what we have considered as the agent so far.
Critic: evaluates how well the agent is doing with respect to a fixed performance standard.
Problem generator: suggests actions that allow the agent to explore.

Internal State Representation

As agents get more complex, so does their internal structure, and the way they store internal state changes with it. By its nature, a simple reflex agent does not need to store a state, but the other types do. The image below gives a high-level view of agent state representations, in order of increasing expressive power (left to right). Credit: Percy Liang

Atomic representation: the state is stored as a black box, i.e. without any internal structure. For example, for a Roomba (a robotic vacuum cleaner) the internal state is simply the patch already vacuumed; you don't need to know anything else. As depicted in the image, this representation works for model-based and goal-based agents and is used in AI algorithms such as search problems and adversarial games.

Factored representation: the state is no longer a black box; it now consists of attribute-value pairs, i.e. variables that can each hold a value. For example, while finding a route you have a GPS location and the amount of gas in the tank, and these add constraints to the problem. As depicted in the image, this representation works for goal-based agents and is used in AI algorithms such as constraint satisfaction and Bayesian networks.
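The difference between atomic and factored state can be made concrete in a few lines; the route-finding attributes below are invented for illustration:

```python
# Sketch contrasting atomic vs factored state, with invented attributes.

# Atomic: the state is an opaque label with no internal structure, so an
# algorithm (e.g. graph search) can only compare whole states for equality.
atomic_state = "city_B"

# Factored: the state is a set of attribute-value pairs (variables), so an
# algorithm can reason about individual attributes, e.g. test a constraint.
factored_state = {
    "location": "city_B",
    "fuel_litres": 12.5,
}

# All an atomic algorithm can ask: "are these the same state?"
print(atomic_state == "city_B")            # True

# A factored algorithm can check a constraint on a single variable.
print(factored_state["fuel_litres"] > 5)   # True
```

A structured representation would go one step further, adding relations between variables (e.g. "the adjective describes the person"), which plain attribute-value pairs cannot express.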
Structured representation: the state now includes relationships between the variables/factored states, which introduces logic into the AI algorithms. For example, in natural language processing the states might record whether a statement contains a reference to a person and whether an adjective in that statement describes that person; the relation between these states helps decide whether the statement was sarcastic. This is high-level Artificial Intelligence, used in algorithms such as first-order logic, knowledge-based learning and natural language understanding.

There is much more to rational agents in Artificial Intelligence; this was just an overview. As you can tell, the study of the design of rational agents is a really important part of Artificial Intelligence, with applications in a wide variety of fields. These agents don't work on their own, however: they need an AI algorithm to drive them, and most of those algorithms involve searching.

I'll soon be writing more about the AI algorithms that drive rational agents and about the use of machine learning in Artificial Intelligence. To stay more aware of the world of A.I., follow me; it's the best way to find out when I write more articles like this.

If you liked this article, be sure to show your support by clapping below, and if you have any questions, leave a comment and I will do my best to answer.

You can also follow me on Twitter at @Prashant_1722, email me directly or find me on LinkedIn. I'd love to hear from you.

That's all folks, have a nice day :)

Credit

Content for this article is inspired by and taken from: Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig, Third Edition, Pearson Education.