AI: A MODERN APPROACH - COMPREHENSIVE
PART I: INTELLIGENT AGENTS (Questions 1-20)
1. What is the Turing Test? A test proposed by Alan Turing where a computer
passes if a human interrogator cannot distinguish it from a human through text-
based conversation. It requires natural language processing, knowledge
representation, automated reasoning, and machine learning.
2. What are the four approaches to defining AI?
• Thinking Humanly (cognitive modeling)
• Thinking Rationally (laws of thought)
• Acting Humanly (Turing Test approach)
• Acting Rationally (rational agent approach)
3. Define a rational agent. An agent that acts to maximize its expected
performance measure, given its percept sequence and built-in knowledge.
4. What is the PEAS framework? Performance measure, Environment,
Actuators, Sensors - a framework for specifying task environments for
intelligent agents.
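The PEAS framework is easiest to see with a worked instance. The sketch below writes a PEAS specification for an automated-taxi agent as a plain dictionary; the concrete entries are illustrative assumptions, not drawn from the text above.

```python
# PEAS specification for a hypothetical automated-taxi task environment.
# Each key is one PEAS component; the values are example entries.
taxi_peas = {
    "Performance measure": ["safe", "fast", "legal", "comfortable", "maximize profit"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}
```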
5. What is the difference between a reflex agent and a goal-based agent?
Reflex agents select actions based solely on current percepts; goal-based agents
consider future consequences and select actions that achieve their goals.
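The contrast in question 5 can be sketched in a few lines of Python. The two-cell vacuum world used here (locations "A" and "B", percepts of the form (location, dirty)) is a hypothetical illustration, not an example from the text.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: chooses an action from the CURRENT percept only."""
    location, dirty = percept
    if dirty:
        return "Suck"
    # No memory, no lookahead: just a fixed condition-action rule.
    return "Right" if location == "A" else "Left"

# A goal-based agent would instead search for an action sequence whose
# predicted consequences reach a goal state (e.g. "both cells clean").
```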
6. List the properties of task environments. Fully/Partially Observable,
Single/Multi-agent, Deterministic/Stochastic, Episodic/Sequential,
Static/Dynamic, Discrete/Continuous, Known/Unknown
7. What is a model-based reflex agent? An agent that maintains internal state
to track aspects of the world not evident in the current percept, using a model of
how the world evolves.
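A minimal sketch of the internal-state idea, continuing the same hypothetical two-cell vacuum world: the agent remembers the status of cells it is not currently in, and its model records how its own Suck action changes the world.

```python
class ModelBasedVacuumAgent:
    """Keeps internal state: the last known status of each cell."""

    def __init__(self):
        self.model = {"A": None, "B": None}  # None = status unknown

    def act(self, percept):
        location, dirty = percept
        self.model[location] = "Dirty" if dirty else "Clean"
        if dirty:
            self.model[location] = "Clean"  # model of how Suck changes the world
            return "Suck"
        if all(status == "Clean" for status in self.model.values()):
            return "NoOp"  # internal state lets it stop once all cells are known clean
        return "Right" if location == "A" else "Left"
```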
8. What is a utility-based agent? An agent that uses a utility function to
measure how desirable different states are, allowing comparison when multiple
goals conflict or when goals are uncertain.
9. Define agent function vs. agent program. Agent function is an abstract
mathematical description mapping percept sequences to actions; agent program
is the concrete implementation running on the agent architecture.
10. What is bounded rationality? The idea that agents have limited
computational resources and must make decisions with incomplete information
and limited time.
11. Give an example of a fully observable environment. Chess - both players
can see the complete board state at all times.
12. Give an example of a partially observable environment. Poker - players
cannot see opponents' cards or future cards in the deck.
13. What makes an environment stochastic? Outcomes are not completely
determined by the agent's actions; randomness or uncertainty exists in state
transitions.
14. What is a multi-agent environment? An environment containing multiple
agents whose actions affect each other (competitive or cooperative).
15. What is the difference between episodic and sequential environments?
Episodic: agent's experience is divided into atomic episodes with no dependence
between them. Sequential: current decisions affect future decisions.
16. What is a learning agent composed of? Learning element, performance
element, critic, and problem generator.
17. What is the role of the critic in a learning agent? Provides feedback on
how well the agent is doing with respect to a fixed performance standard.
18. What does the problem generator do in a learning agent? Suggests
exploratory actions that might lead to better long-term performance, even if
suboptimal in the short term.
19. What is situated learning? Learning that occurs through interaction with
the environment rather than from explicit instruction.
20. What is representation in AI? The way information about the world is
encoded in data structures that an agent can manipulate.
PART II: PROBLEM SOLVING & SEARCH (Questions 21-80)
21. Define a search problem formally. A tuple (S, s₀, A, T, G, c) where S is
state space, s₀ is initial state, A is actions, T is transition model, G is goal test, c
is path cost.
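The (S, s₀, A, T, G, c) tuple maps directly onto a small data structure. The sketch below is one possible encoding, with a toy two-state instance invented for illustration; the field names mirror the formal components.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SearchProblem:
    initial: object        # s0: initial state
    actions: Callable      # A(s): actions applicable in state s
    result: Callable       # T(s, a): transition model
    is_goal: Callable      # G(s): goal test
    step_cost: Callable    # c(s, a, s'): contribution to path cost

# Toy instance: one action takes "start" directly to "goal".
problem = SearchProblem(
    initial="start",
    actions=lambda s: ["go"] if s == "start" else [],
    result=lambda s, a: "goal",
    is_goal=lambda s: s == "goal",
    step_cost=lambda s, a, s2: 1,
)
```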
22. What is the state space of a problem? The set of all possible states
reachable from the initial state by any sequence of actions.
23. What is a solution to a search problem? A sequence of actions from the
initial state to a goal state.
24. What is an optimal solution? A solution with the lowest path cost among
all solutions.
25. What is the difference between tree search and graph search? Tree
search may revisit states; graph search maintains an explored set to avoid
revisiting states.
26. Define completeness of a search algorithm. An algorithm is complete if it
always finds a solution when one exists.
27. Define optimality of a search algorithm. An algorithm is optimal if it
finds a solution with the lowest path cost.
28. What is time complexity measured in for search algorithms? The
number of nodes generated during the search.
29. What is space complexity measured in for search algorithms? The
maximum number of nodes stored in memory.
30. How does breadth-first search (BFS) work? Expands the shallowest
unexpanded node first using a FIFO queue; explores all nodes at depth d before
exploring nodes at depth d+1.
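The FIFO-queue mechanism above can be sketched directly. This is a graph-search variant (with an explored set, as in question 25); `successors` is an assumed helper returning the states reachable in one step.

```python
from collections import deque

def bfs(start, goal_test, successors):
    """Breadth-first graph search: FIFO frontier plus an explored set."""
    frontier = deque([(start, [])])   # (state, path of states leading to it)
    explored = {start}
    while frontier:
        state, path = frontier.popleft()   # shallowest unexpanded node first
        if goal_test(state):
            return path + [state]
        for nxt in successors(state):
            if nxt not in explored:        # avoid revisiting states
                explored.add(nxt)
                frontier.append((nxt, path + [state]))
    return None                            # no solution exists
```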
31. Is BFS complete and optimal? Complete: Yes (if branching factor is
finite). Optimal: Yes (if step costs are equal).
32. What is the time complexity of BFS? O(b^d) where b is branching factor
and d is depth of shallowest solution.
33. What is the space complexity of BFS? O(b^d) - major drawback as it must
store all frontier nodes.
34. How does uniform-cost search work? Expands the node with lowest path
cost g(n) using a priority queue; generalizes BFS for different step costs.
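A minimal sketch of uniform-cost search using a binary-heap priority queue ordered by g(n); here `successors` is an assumed helper yielding (next_state, step_cost) pairs, and the three-node example graph is invented for illustration.

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """Expands the frontier node with the lowest path cost g(n) first."""
    frontier = [(0, start, [start])]   # priority queue keyed on g(n)
    best_g = {start: 0}                # cheapest known cost to each state
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return g, path             # first goal expanded is cost-optimal
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None
```

With equal step costs this behaves exactly like BFS, which is the sense in which it generalizes it.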