Test With Revised Answers.
1. What is the "Frame Problem" in AI?
correct answer The Frame Problem is a classic challenge in AI and philosophy concerning
how a formal system can represent the effects of an action without having to explicitly write
down every detail that doesn't change. In a complex environment, most facts remain true
after an action is performed (e.g., if you move a book, the color of the wall stays the same).
Manually encoding all these "non-changes" leads to a computational explosion. Modern AI
addresses this through localized state representations and deep learning models that learn
to focus only on relevant environmental variables.
2. Explain the difference between "Model-Based" and "Model-Free" Reinforcement Learning.
correct answer In Model-Based RL, the agent attempts to understand its environment by
creating a "model" of how the world works, allowing it to predict future states and rewards
before taking action (planning). In contrast, Model-Free RL does not attempt to learn the
environment's dynamics; instead, it learns a policy or value function directly through trial
and error. While Model-Based approaches are typically more sample-efficient, Model-Free methods
like Q-Learning or Policy Gradients are often easier to implement in highly complex or
unpredictable environments.
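Since the answer names Q-Learning as a model-free method, a minimal tabular sketch may help. The corridor environment, learning rate, and episode count below are illustrative assumptions, not from the text; the key point is that the agent updates Q-values purely from sampled transitions and never builds a model of `step()` itself.

```python
import random

# Toy deterministic corridor: states 0..4, actions 0 (left) / 1 (right).
# Reaching state 4 yields reward 1.0 and ends the episode.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning: learn Q(s, a) directly from trial and error.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy selection: mostly exploit, sometimes explore.
        if random.random() < epsilon or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        # Temporal-difference update toward r + gamma * max_a' Q(s', a').
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# The learned greedy policy should be "go right" in every non-terminal state.
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(GOAL)]
print(policy)
```

Note that the learned Q-values implicitly encode the discounted distance to the goal even though the transition rule was never represented explicitly.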
3. What is "Simultaneous Localization and Mapping" (SLAM) in robotics?
correct answer SLAM is a computational problem where a robot needs to build a map of an
unknown environment while simultaneously keeping track of its own location within that
map. This is a "chicken-and-egg" problem because a map is needed for localization, and
accurate localization is needed to build the map. It relies on sensors like LiDAR, cameras, and
IMUs, combined with algorithms like Extended Kalman Filters or Particle Filters to handle the
inherent noise in sensor data.
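Full SLAM is too large to sketch briefly, but the noise handling the answer attributes to Kalman filters can be shown in one dimension. This is a linear special case of the Extended Kalman Filter's predict/update loop; the motion model, noise variances, and step count are illustrative assumptions.

```python
import random

random.seed(1)

# 1-D Kalman filter: a robot moves right 1.0 m per step with noisy motion,
# and fuses a noisy position measurement at each step. EKF-SLAM runs this
# same predict/update cycle jointly over the robot pose and the map.
x_est, P = 0.0, 1.0            # state estimate and its variance
Q_noise, R_noise = 0.05, 0.5   # process and measurement variance (assumed known)
x_true = 0.0

for _ in range(50):
    # True motion, corrupted by process noise.
    x_true += 1.0 + random.gauss(0, Q_noise ** 0.5)
    # Predict: propagate the estimate and grow its uncertainty.
    x_est += 1.0
    P += Q_noise
    # Update: fuse a noisy measurement, weighted by the Kalman gain.
    z = x_true + random.gauss(0, R_noise ** 0.5)
    K = P / (P + R_noise)
    x_est += K * (z - x_est)
    P *= (1 - K)

print(abs(x_est - x_true))  # typically much smaller than the raw sensor noise
print(P)                    # posterior variance settles well below its initial 1.0
```

The gain K automatically balances trust between the motion prediction and the sensor, which is exactly the "handling inherent noise" role described above.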
4. Describe the "Vanishing Gradient Problem" in deep neural networks.
correct answer The Vanishing Gradient Problem occurs during the training of very deep
networks when the gradients used to update weights become extremely small as they are
backpropagated through layers. Since weights are updated proportionally to the gradient,
the earliest layers of the network eventually stop learning. This was a major barrier for deep
learning until the introduction of the ReLU activation function, Batch Normalization, and
Residual Networks (ResNets), which allow gradients to flow more easily through the
architecture.
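The geometric decay described above can be made concrete with a few lines of arithmetic. The depth and random pre-activations below are illustrative assumptions; the only facts used are that the sigmoid's derivative never exceeds 0.25, while an active ReLU unit has derivative exactly 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid_deriv(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)  # maximum value is 0.25, attained at z = 0

# The gradient reaching the first layer is scaled by a product of one
# derivative factor per layer, so with sigmoid it shrinks geometrically.
depth = 50
z = rng.normal(size=depth)                 # one random pre-activation per layer
sigmoid_factor = np.prod(sigmoid_deriv(z))
relu_factor = 1.0 ** depth                 # every unit assumed active

print(sigmoid_factor)  # astronomically small: early layers stop learning
print(relu_factor)     # 1.0: gradients pass through unchanged
```

This is why swapping in ReLU (together with Batch Normalization and residual connections) was enough to unblock very deep training.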
5. Explain the concept of "Attention Mechanisms" in Transformers.
correct answer Attention mechanisms allow a model to focus on specific parts of the input
sequence when producing an output, rather than treating all parts of the data equally. In
Natural Language Processing, "Self-Attention" computes a relevance score between every pair
of words in a sentence, allowing the model to understand context (e.g., in the sentence "The
bank was closed," the model uses attention to determine whether "bank" refers to a riverbank or a
financial institution). This parallel processing capability is what makes the Transformer
architecture superior to older sequential models like LSTMs.
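The mechanism described above can be sketched as single-head scaled dot-product attention. The sequence length, model width, and random weight matrices are illustrative assumptions; the essential structure is the pairwise score matrix, the row-wise softmax, and the weighted sum of values.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (no masking)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights                     # context vectors + attention map

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                             # toy sizes
X = rng.normal(size=(seq_len, d_model))             # stand-in token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)                 # (4, 8): one context vector per token
print(weights.sum(axis=-1))      # each row of attention weights sums to 1
```

Because every row of `scores` is computed independently, all tokens attend to each other in a single matrix multiplication, which is the parallelism advantage over step-by-step LSTMs.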
6. What is "Symbolic AI" (or GOFAI)?
correct answer Symbolic AI, or "Good Old-Fashioned AI," is based on the idea that human
intelligence can be replicated through the manipulation of high-level symbols and logical
rules. Unlike modern neural networks that learn from data, Symbolic AI requires humans to
manually program the logic and knowledge. While it is excellent for transparent reasoning
and solving math problems, it struggles with "messy" real-world data like image recognition
or natural language nuances, where deep learning excels.
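The hand-programmed logic the answer describes can be illustrated with a tiny forward-chaining rule engine. The facts and rule names are invented for the example; the point is that all knowledge is explicit symbols and if-then rules, with no learning from data.

```python
# Minimal forward-chaining inference in the GOFAI style: start from known
# symbolic facts and repeatedly fire rules until nothing new can be derived.
facts = {"has_fur", "gives_milk"}
rules = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
    ({"is_mammal"}, "is_animal"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # A rule fires when all its premises are already known facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))  # derives is_mammal and is_animal, but not is_carnivore
```

Every inference step here is fully transparent and auditable, which is the strength noted above; the corresponding weakness is that each rule had to be written by hand.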
7. Describe "Inverse Reinforcement Learning" (IRL).
correct answer Inverse Reinforcement Learning is the process of deriving a reward function
from the observed behavior of an expert (usually a human). In standard RL, we provide the
reward; in IRL, the machine watches the expert and tries to figure out what the expert's goal
actually was. This is vital for "Learning from Demonstration," where it might be difficult to
mathematically define a complex task, but easy to show the machine how to do it.
8. What are "Residual Connections" (Skip Connections)?
correct answer Residual connections are a structural feature in neural networks where the
output of one layer is added to the output of a later layer, "skipping" one or more
intermediate layers. This allows the network to learn "identity mappings," so a deeper
network can perform at least as well as its shallower counterpart. Practically, this mitigates the vanishing
gradient problem and allows for the training of networks with hundreds or thousands of
layers, such as ResNet-50 or ResNet-101.
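The "skip path" structure can be demonstrated with a deliberately extreme toy: 100 layers with near-zero weights. The sizes and weight scale are illustrative assumptions; they are chosen so the residual branch F(x) is tiny, making each block approximately the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """One ReLU layer: the residual branch F(x)."""
    return np.maximum(0.0, x @ W)

def residual_block(x, W):
    """y = x + F(x): the skip path carries x forward unchanged."""
    return x + layer(x, W)

x = rng.normal(size=(1, 16))
W = rng.normal(size=(16, 16)) * 1e-3  # near-zero weights, reused at every layer

# Plain 100-layer stack: each tiny layer shrinks the signal toward zero.
z = x
for _ in range(100):
    z = layer(z, W)

# Residual 100-layer stack: each block stays close to the identity mapping.
y = x
for _ in range(100):
    y = residual_block(y, W)

print(np.abs(z).max())      # plain stack: the signal has effectively vanished
print(np.abs(y - x).max())  # residual stack: output remains close to the input
```

The same additive structure is what lets gradients flow backward through the skip path untouched, which is why hundreds of layers become trainable.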
9. Define "Autoencoders" and their primary use.
correct answer An autoencoder is a type of unsupervised neural network designed to learn a
compressed representation (encoding) of input data and then reconstruct the original input
from that compression. It consists of an "Encoder" that shrinks the data into a bottleneck
layer and a "Decoder" that expands it back. They are primarily used for dimensionality
reduction, image denoising, and anomaly detection, as they learn to capture only the most
essential features of the data.
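A minimal sketch of the encoder/bottleneck/decoder structure: a tiny linear autoencoder trained by gradient descent on synthetic 4-D data that secretly lies on a 1-D line, so a 1-unit bottleneck suffices. The data, sizes, learning rate, and step count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-D inputs with hidden 1-D structure: X is rank 1 by construction.
n, d, k = 200, 4, 1
direction = np.array([[1.0, -2.0, 0.5, 1.0]])
X = rng.normal(size=(n, 1)) @ direction            # shape (n, 4)

We = rng.normal(size=(d, k)) * 0.1                 # encoder: 4 -> 1 bottleneck
Wd = rng.normal(size=(k, d)) * 0.1                 # decoder: 1 -> 4
lr = 0.05

for _ in range(1000):
    code = X @ We                                  # compressed representation
    X_hat = code @ Wd                              # reconstruction
    err = X_hat - X
    # Gradients of mean squared reconstruction error w.r.t. both weight sets.
    grad_Wd = code.T @ err * (2 / n)
    grad_We = X.T @ (err @ Wd.T) * (2 / n)
    Wd -= lr * grad_Wd
    We -= lr * grad_We

mse = np.mean((X @ We @ Wd - X) ** 2)
print(mse)  # should be near zero: the bottleneck captured the 1-D structure
```

Because the bottleneck forces a compressed code, inputs that do not share the learned structure reconstruct poorly, which is the basis of the anomaly-detection use mentioned above.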
10. What is "Neural Architecture Search" (NAS)?
correct answer Neural Architecture Search is a technique for automating the design of
artificial neural networks. Instead of human engineers manually testing different numbers of
layers, filters, or connections, an AI algorithm (often using reinforcement learning or
evolutionary strategies) "searches" through a vast space of candidate architectures to find the one
that performs best for a specific task. This has led to the discovery of highly efficient models
like EfficientNet.
11. Explain "Quantization" in AI model deployment.
correct answer Quantization is the process of reducing the precision of the numbers used to
represent a model's weights and biases (e.g., converting 32-bit floating-point numbers to 8-
bit integers). This significantly reduces the model's memory footprint and increases inference
speed, which is crucial for running complex AI on "Edge" devices like smartphones,
smartwatches, and low-power IoT sensors without a major loss in accuracy.
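A hedged sketch of the float32-to-int8 conversion described above, using a simple affine (scale and zero-point) scheme; the random "weight tensor" and the exact quantization formula are illustrative assumptions, since real toolchains vary in details.

```python
import numpy as np

def quantize_int8(w):
    """Affine quantization: map the float range of w onto the 256 int8 levels."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-128 - w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)  # stand-in weight tensor

q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)

print(q.nbytes / weights.nbytes)           # 0.25: a 4x smaller memory footprint
print(np.abs(weights - restored).max())    # error on the order of one scale step
```

The 4x size reduction is exact (8 bits versus 32), while the accuracy cost is bounded by the quantization step size, which is why accuracy loss is usually minor.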
12. What is "Federated Learning"?
correct answer Federated Learning is a machine learning technique that trains an algorithm
across multiple decentralized edge devices or servers holding local data samples, without
exchanging them. Instead of uploading sensitive user data to a central cloud, the devices
train the model locally and send only the resulting model updates (e.g., weight changes) to a
central server, which aggregates them into an improved shared global model.
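The train-locally-then-aggregate loop can be sketched with federated averaging on a toy linear-regression task. The clients, data, local step counts, and learning rate are illustrative assumptions; the essential property is that only model weights ever leave a client, never its data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each client holds private data drawn from the same underlying linear model.
true_w = np.array([2.0, -1.0, 0.5])

def make_client_data(n=100):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]
global_w = np.zeros(3)

for _ in range(20):                              # communication rounds
    local_weights = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(10):                      # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_weights.append(w)                  # only weights leave the device
    global_w = np.mean(local_weights, axis=0)    # server-side averaging

print(np.round(global_w, 2))  # close to the underlying [2.0, -1.0, 0.5]
```

The server never sees `X` or `y` from any client, yet the averaged model converges toward the shared underlying solution, which is the privacy benefit described above.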