CS 234 Winter 2020: Assignment #2.
Due date: February 5, 2020 at 11:59 PM (23:59) PST
These questions require thought, but do not require long answers. Please be as concise as possible.
We encourage students to discuss in groups for assignments. We ask that you abide by the university
Honor Code and that of the Computer Science department. If you have discussed the problems with
others, please include a statement saying who you discussed problems with. Failure to follow these
instructions will be reported to the Office of Community Standards. We reserve the right to run
fraud-detection software on your code. Please refer to the Academic Collaboration and Misconduct
section of the course website for details about the collaboration policy.
Please review any additional instructions posted on the assignment page. When you are ready to
submit, please follow the instructions on the course website. Make sure you test your code using
the provided commands and do not edit outside of the marked areas.
You’ll need to download the starter code and fill the appropriate functions following the instructions
from the handout and the code’s documentation. Training DeepMind’s network on Pong takes roughly
12 hours on a GPU, so please start early! (Only a completed run will receive full credit.) We will give
you access to an Azure GPU cluster. You’ll find the setup instructions on the course assignment page.
Introduction
In this assignment we will implement deep Q-learning, following DeepMind's papers ([mnih2015human] and
[mnih-atari-2013]) on learning to play Atari games from raw pixels. The purpose is to demonstrate the
effectiveness of deep neural networks as well as some of the techniques used in practice to stabilize training
and achieve better performance. In the process, you'll become familiar with TensorFlow. We will train our
networks on the Pong-v0 environment from OpenAI Gym, but the code can easily be applied to any other
environment.
In Pong, one player scores if the ball passes by the other player. An episode is over when one of the players
reaches 21 points. Thus, the total return of an episode is between −21 (lost every point) and +21 (won every
point). Our agent plays against a decent hard-coded AI player. Average human performance is −3 (reported
in [mnih-atari-2013]). In this assignment, you will train an AI agent with super-human performance,
reaching at least +10 (hopefully more!).
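For reference, here is a minimal sketch of interacting with Pong-v0 through the Gym API. It assumes a Gym installation with Atari support and the pre-0.26 step interface; the actual training loop lives in the starter code, so this is only illustrative.

import gym

# Minimal interaction loop with Pong-v0 (assumes gym with Atari support installed).
env = gym.make("Pong-v0")

obs = env.reset()                    # raw pixel observation, shape (210, 160, 3)
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()           # random policy, just to illustrate the loop
    obs, reward, done, info = env.step(action)   # reward is +1/-1 per point scored/conceded
    episode_return += reward

print(episode_return)                # between -21 and +21; a random agent stays near -21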
0 Test Environment (6 pts)
Before running our code on Pong, it is crucial to first test it on a simple test environment. In this problem, you
will reason about optimality in the provided test environment by hand; later, to sanity-check your code,
you will verify that your implementation is able to achieve this optimality. You should be able to run your
models on CPU in no more than a few minutes on the following environment:
• 4 states: 0, 1, 2, 3
• 5 actions: 0, 1, 2, 3, 4. Action 0 ≤ i ≤ 3 goes to state i, while action 4 makes the agent stay in the same
state.
• Rewards: Going to state i from states 0, 1, and 3 gives a reward R(i), where R(0) = 0.1, R(1) =
−0.2, R(2) = 0, R(3) = −0.1. If the action is taken from state 2, the rewards defined above are multiplied
by −10. See Table 1 for the full transition and reward structure (a code sketch of these dynamics follows the table).
• One episode lasts 5 time steps (for a total of 5 actions) and always starts in state 0 (no rewards at the
initial state).
State (s) Action (a) Next State (s′) Reward (R)
0 0 0 0.1
0 1 1 -0.2
0 2 2 0.0
0 3 3 -0.1
0 4 0 0.1
1 0 0 0.1
1 1 1 -0.2
1 2 2 0.0
1 3 3 -0.1
1 4 1 -0.2
2 0 0 -1.0
2 1 1 2.0
2 2 2 0.0
2 3 3 1.0
2 4 2 0.0
3 0 0 0.1
3 1 1 -0.2
3 2 2 0.0
3 3 3 -0.1
3 4 3 -0.1
Table 1: Transition table for the Test Environment
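For concreteness, the dynamics in Table 1 can be written down as a small tabular environment. The sketch below is only illustrative; the class name and interface are ours, not the starter code's.

class TestEnv:
    """Tabular test environment from Table 1: 4 states, 5 actions, 5-step episodes."""

    # Base reward R(i) for entering state i (from the bullet list above).
    BASE_REWARD = {0: 0.1, 1: -0.2, 2: 0.0, 3: -0.1}
    HORIZON = 5  # one episode lasts 5 time steps

    def reset(self):
        self.state = 0  # episodes always start in state 0, with no initial reward
        self.t = 0
        return self.state

    def step(self, action):
        assert action in (0, 1, 2, 3, 4)
        # Actions 0-3 move to that state; action 4 stays in the current state.
        next_state = self.state if action == 4 else action
        reward = self.BASE_REWARD[next_state]
        # Acting from state 2 multiplies the reward by -10 (see Table 1).
        if self.state == 2:
            reward *= -10
        self.state = next_state
        self.t += 1
        done = self.t >= self.HORIZON
        return next_state, reward, done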
An example of a trajectory (or episode) in the test environment is shown in Figure 5, and the trajectory
can be written in terms of (s_t, a_t, R_t) as: s_0 = 0, a_0 = 1, R_0 = −0.2, s_1 = 1, a_1 = 2, R_1 = 0,
s_2 = 2, a_2 = 4, R_2 = 0, s_3 = 2, a_3 = 3, R_3 = (−0.1) · (−10) = 1, s_4 = 3, a_4 = 0, R_4 = 0.1, s_5 = 0.
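As a quick check, replaying this example trajectory through the sketch above reproduces the rewards listed here, giving an undiscounted return of −0.2 + 0 + 0 + 1 + 0.1 = 0.9.

env = TestEnv()
s = env.reset()                      # s_0 = 0
total = 0.0
for a in [1, 2, 4, 3, 0]:            # the actions from the example trajectory
    s, r, done = env.step(a)
    total += r
print(total)                         # 0.9 (up to floating-point rounding)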