Agents are entities that perceive their environment through sensors and act upon it using
actuators. They can be categorized based on their level of intelligence and autonomy. Below
are the primary types of agents in AI:
Simple Reflex Agents
These agents operate based on predefined rules and do not consider past experiences. They
react to specific inputs (percepts) with a corresponding output (action).
Characteristics:
- Map conditions directly to actions using if-then rules.
- Have no memory or learning capability.
- Work well in fully observable environments but struggle with uncertainty.
Example:
- A thermostat that turns on/off based on temperature.
- A self-driving car’s obstacle avoidance system that stops when an object is detected.
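The thermostat example above can be sketched as a direct condition-action mapping. This is a minimal illustration, not a real controller; the function name, target temperature, and action strings are assumptions chosen for the example.

```python
def thermostat_agent(temperature, target=20.0):
    """Simple reflex agent: maps the current percept (temperature)
    straight to an action via if-then rules. No memory, no model."""
    if temperature < target - 1.0:
        return "heater_on"
    elif temperature > target + 1.0:
        return "heater_off"
    return "no_op"

print(thermostat_agent(15.0))  # heater_on
print(thermostat_agent(25.0))  # heater_off
```

Note that the agent's entire behavior is visible in the rules: given the same percept, it always produces the same action, which is exactly why it cannot cope with situations the rules do not anticipate.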
Model-Based Reflex Agents
Unlike simple reflex agents, model-based agents maintain an internal model of the
environment, which helps them handle partially observable situations. They use a state
representation to keep track of aspects of the world they cannot currently observe.
Characteristics:
- Maintain an internal representation of the environment.
- Handle uncertainty better than simple reflex agents.
- Store information about past percepts.
Example:
- A chess-playing AI that remembers previous moves.
- A vacuum cleaner that remembers which areas it has already cleaned.
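The vacuum example can be sketched as a reflex agent with one piece of internal state: a record of which squares it has already cleaned. The class, method names, and action strings are illustrative assumptions, not a standard API.

```python
class VacuumAgent:
    """Model-based reflex agent (illustrative sketch): the sensor only
    sees the current square, but the internal model remembers which
    squares were cleaned on earlier steps."""

    def __init__(self):
        self.cleaned = set()  # internal state built from past percepts

    def act(self, location, is_dirty):
        if is_dirty:
            self.cleaned.add(location)  # update the model after acting
            return "suck"
        if location in self.cleaned:
            return "skip"   # model says this square is already done
        return "move"       # clean but unvisited square: keep exploring

agent = VacuumAgent()
print(agent.act("A", True))   # suck
print(agent.act("A", False))  # skip  (remembered from the model)
print(agent.act("B", False))  # move
```

The second call shows the difference from a simple reflex agent: the percept ("A", clean) alone does not determine the action; the stored model does.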
Goal-Based Agents
These agents act not only according to rules but also in pursuit of desired outcomes (goals).
They evaluate multiple actions and choose the one that moves them closer to their goal.
Characteristics:
- Consider the consequences of actions before making a decision.
- Use search algorithms to find the best sequence of actions.
- More intelligent than reflex-based agents.
Example:
- A GPS navigation system finding the best route to a destination.
- A robotic arm assembling a product in a factory based on specific steps.
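The GPS example can be sketched with one of the search algorithms mentioned above. This sketch uses breadth-first search, which finds a shortest route when every road segment counts equally; the toy road map and function name are assumptions for illustration (a real navigation system would use weighted search such as A*).

```python
from collections import deque

def shortest_route(graph, start, goal):
    """Goal-based agent sketch: search for the shortest sequence of
    moves that reaches the goal state, rather than reacting to rules."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:         # goal test drives the agent's choice
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # goal unreachable

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_route(roads, "A", "E"))  # ['A', 'B', 'D', 'E']
```

The key contrast with the reflex agents above is that nothing here maps a percept to an action directly: the agent compares whole action sequences and picks one because of where it leads.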