CS 234 Winter 2021: Assignment #2

Due date:
Part 1 (0-4): February 5, 2021 at 6 PM (18:00) PST
Part 2 (5-6): February 12, 2021 at 6 PM (18:00) PST

These questions require thought, but do not require long answers. Please be as concise as possible.

We encourage students to discuss assignments in groups. We ask that you abide by the university
Honor Code and that of the Computer Science department. If you have discussed the problems with
others, please include a statement naming whom you discussed them with. Failure to follow these
instructions will be reported to the Office of Community Standards. We reserve the right to run
fraud-detection software on your code. Please refer to the Academic Collaboration and Misconduct
section of the course website for details about the collaboration policy.
Please review any additional instructions posted on the assignment page. When you are ready to
submit, please follow the instructions on the course website. Make sure you test your code using
the provided commands and do not edit outside of the marked areas.

You’ll need to download the starter code and fill in the appropriate functions following the instructions
from the handout and the code’s documentation. Training DeepMind’s network on Pong takes roughly
12 hours on a GPU, so please start early! (Only a completed run will receive full credit.) We will give
you access to an Azure GPU cluster. You’ll find the setup instructions on the course assignment page.



Introduction
In this assignment we will implement deep Q-learning, following DeepMind’s papers ([1] and [2]) on learning
to play Atari games from raw pixels. The purpose is to demonstrate the effectiveness of deep neural networks
as well as some of the techniques used in practice to stabilize training and achieve better performance. In
the process, you’ll become familiar with PyTorch. We will train our networks on the Pong-v0 environment
from OpenAI Gym, but the code can easily be applied to any other environment.

In Pong, one player scores if the ball passes by the other player. An episode is over when one of the players
reaches 21 points. Thus, the total return of an episode is between −21 (lost every point) and +21 (won
every point). Our agent plays against a decent hard-coded AI player. Average human performance is −3
(reported in [2]). In this assignment, you will train an AI agent with super-human performance, reaching at
least +10 (hopefully more!).
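
For a quick sanity check of the environment before any training, the sketch below runs one episode of Pong-v0 with a uniformly random policy. It assumes a pre-0.26 gym API (reset() returns an observation; step() returns a 4-tuple) and that the Atari extras are installed; the random agent is only a placeholder for the DQN you will implement.

```python
import gym

# Run one episode of Pong-v0 with a random policy (classic gym API assumed).
env = gym.make("Pong-v0")
obs = env.reset()

total_return = 0.0
done = False
while not done:
    action = env.action_space.sample()          # random placeholder policy
    obs, reward, done, info = env.step(action)  # reward is -1, 0, or +1 per point
    total_return += reward

# A random agent should finish near -21, i.e. it loses almost every point.
print("Episode return:", total_return)
env.close()
```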





0 Distributions induced by a policy (13 pts)
In this problem, we’ll work with an infinite-horizon MDP M = ⟨S, A, R, T, γ⟩ and consider stochastic policies
of the form π : S → ∆(A).¹ Additionally, we’ll assume that M has a single, fixed starting state s0 ∈ S for
simplicity.

(a) (written, 3 pts) Consider a fixed stochastic policy and imagine running several rollouts of this policy
within the environment. Naturally, depending on the stochasticity of the MDP M and the policy itself,
some trajectories are more likely than others. Write down an expression for ρπ(τ), the likelihood of
sampling a trajectory τ = (s0, a0, s1, a1, . . .) by running π in M. To put this distribution in context,
recall that

$$V^\pi(s_0) = \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\middle|\, s_0 \right].$$

Solution: Since the starting state s0 is fixed, the likelihood of a trajectory is just the product of the
policy and transition probabilities along it:

$$\rho^\pi(\tau) = \prod_{t=0}^{\infty} \pi(a_t \mid s_t)\, \mathcal{T}(s_{t+1} \mid s_t, a_t)$$
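
To make the formula concrete, here is a short sketch on a made-up two-state, two-action tabular MDP (all probabilities below are illustrative, not taken from the assignment): the likelihood of a finite trajectory prefix is the running product of policy and transition probabilities.

```python
import numpy as np

# Hypothetical tabular MDP, for illustration only.
# pi[s, a]    = probability of taking action a in state s
# T[s, a, s2] = probability of landing in state s2 after taking a in s
pi = np.array([[0.8, 0.2],
               [0.5, 0.5]])
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])

def trajectory_likelihood(states, actions):
    """rho^pi of a finite prefix (s_0, a_0, s_1, ..., s_T): the product
    of pi(a_t | s_t) * T(s_{t+1} | s_t, a_t) over the prefix."""
    p = 1.0
    for t in range(len(actions)):
        p *= pi[states[t], actions[t]] * T[states[t], actions[t], states[t + 1]]
    return p

# Prefix s0=0, a0=0, s1=1, a1=1, s2=0: 0.8 * 0.1 * 0.5 * 0.6 = 0.024
print(trajectory_likelihood(states=[0, 1, 0], actions=[0, 1]))
```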



(b) (written, 5 pts) Just as ρπ captures the distribution over trajectories induced by π, we can also
examine the distribution over states induced by π. In particular, define the discounted, stationary state
distribution of a policy π as

$$d^\pi(s) = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t p(s_t = s),$$

where p(st = s) denotes the probability of being in state s at timestep t while following policy π; your
answer to the previous part should help you reason about how you might compute this value. Consider
an arbitrary function f : S × A → R. Prove the following identity:

$$\mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t f(s_t, a_t) \right] = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\!\left[ \mathbb{E}_{a \sim \pi(s)}\!\left[ f(s, a) \right] \right].$$

Hint: You may find it helpful to first consider how things work out for f(s, a) = 1, ∀(s, a) ∈ S × A.

Hint: What is p(st = s)?
Solution:

$$\begin{aligned}
\mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t f(s_t, a_t) \right]
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ f(s_t, a_t) \right] \\
&= \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ f(s_0, a_0) \right] + \gamma\, \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ f(s_1, a_1) \right] + \gamma^2\, \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ f(s_2, a_2) \right] + \cdots \\
&= \sum_{a_0} \pi(a_0 \mid s_0) f(s_0, a_0) + \gamma \sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1} \mathcal{T}(s_1 \mid s_0, a_0) \sum_{a_1} \pi(a_1 \mid s_1) f(s_1, a_1) + \cdots \\
&= \sum_{s} p(s_0 = s)\, \mathbb{E}_{a \sim \pi(s)}\!\left[ f(s, a) \right] + \gamma \sum_{s} p(s_1 = s)\, \mathbb{E}_{a \sim \pi(s)}\!\left[ f(s, a) \right] + \cdots \\
&= \sum_{s} \sum_{t=0}^{\infty} \gamma^t p(s_t = s)\, \mathbb{E}_{a \sim \pi(s)}\!\left[ f(s, a) \right] \\
&= \frac{1}{1 - \gamma} \sum_{s} d^\pi(s)\, \mathbb{E}_{a \sim \pi(s)}\!\left[ f(s, a) \right]
 = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\!\left[ \mathbb{E}_{a \sim \pi(s)}\!\left[ f(s, a) \right] \right]
\end{aligned}$$
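
As a sanity check, the identity can be verified numerically on the same made-up MDP from the part (a) sketch: propagate the state marginals p(s_t = s) under π, accumulate both the discounted sum on the left-hand side and d^π, and compare the two sides with the infinite sums truncated at a large horizon.

```python
import numpy as np

# Illustrative pi and T, as in the part (a) sketch.
pi = np.array([[0.8, 0.2],
               [0.5, 0.5]])
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
gamma = 0.9
f = np.array([[1.0, -2.0],
              [0.5, 3.0]])  # arbitrary f(s, a)

# State-to-state matrix under pi: P[s, s'] = sum_a pi(a|s) T(s'|s, a)
P = np.einsum("sa,sax->sx", pi, T)
Ef = (pi * f).sum(axis=1)   # E_{a ~ pi(s)}[f(s, a)] for each state s

H = 2000                    # truncation horizon for the infinite sums
p_t = np.array([1.0, 0.0])  # fixed start state s0 = 0
lhs, d = 0.0, np.zeros(2)
for t in range(H):
    lhs += gamma**t * (p_t @ Ef)        # gamma^t * E_tau[f(s_t, a_t)]
    d += (1 - gamma) * gamma**t * p_t   # accumulate d^pi(s)
    p_t = p_t @ P                       # marginal of s_{t+1}

rhs = (d @ Ef) / (1 - gamma)
print(lhs, rhs)  # the two sides should agree to within the truncation error
```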




¹ For a finite set X, ∆(X) refers to the set of categorical distributions with support on X or, equivalently, the ∆^{|X|−1} probability simplex.
(c) (written, 5 pts) For any policy π, we define the following function:

$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s).$$

Prove the following statement holds for all policies π, π′:

$$V^\pi(s_0) - V^{\pi'}(s_0) = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\!\left[ \mathbb{E}_{a \sim \pi(s)}\!\left[ A^{\pi'}(s, a) \right] \right].$$

Solution: Using the fact that $\mathbb{E}\left[ R(s_t, a_t) + \gamma V^{\pi'}(s_{t+1}) \mid s_t, a_t \right] = Q^{\pi'}(s_t, a_t)$ and the identity from part (b),

$$\begin{aligned}
V^\pi(s_0) - V^{\pi'}(s_0)
&= \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \right] - V^{\pi'}(s_0) \\
&= \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t \left( R(s_t, a_t) + V^{\pi'}(s_t) - V^{\pi'}(s_t) \right) \right] - V^{\pi'}(s_0) \\
&= \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t \left( R(s_t, a_t) + \gamma V^{\pi'}(s_{t+1}) - V^{\pi'}(s_t) \right) \right] \\
&= \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t \left( Q^{\pi'}(s_t, a_t) - V^{\pi'}(s_t) \right) \right]
 = \mathbb{E}_{\tau \sim \rho^\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^t A^{\pi'}(s_t, a_t) \right] \\
&= \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\!\left[ \mathbb{E}_{a \sim \pi(s)}\!\left[ A^{\pi'}(s, a) \right] \right],
\end{aligned}$$

where the third equality folds the leading $-V^{\pi'}(s_0)$ into the sum by shifting the $+V^{\pi'}(s_t)$ terms one step forward.
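This result, a form of the performance difference lemma, can likewise be checked numerically. The sketch below adds an illustrative reward function to the toy MDP, solves the Bellman equations exactly for V and Q of each policy, and compares both sides of the identity; all specific numbers are made up.

```python
import numpy as np

pi = np.array([[0.8, 0.2], [0.5, 0.5]])   # policy pi
pi2 = np.array([[0.3, 0.7], [0.9, 0.1]])  # a second policy pi'
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.3, 0.7], [0.6, 0.4]]])
R = np.array([[1.0, 0.0], [-0.5, 2.0]])   # illustrative R(s, a)
gamma = 0.9

def values(policy):
    """Exact V and Q of a policy via the tabular Bellman equations."""
    P = np.einsum("sa,sax->sx", policy, T)         # P[s, s'] under the policy
    r = (policy * R).sum(axis=1)                   # expected reward per state
    V = np.linalg.solve(np.eye(2) - gamma * P, r)  # V = r + gamma P V
    Q = R + gamma * T @ V                          # Q(s,a) = R(s,a) + gamma E[V(s')]
    return V, Q

V1, _ = values(pi)
V2, Q2 = values(pi2)
A2 = Q2 - V2[:, None]                              # advantage A^{pi'}(s, a)

# d^pi for start state s0 = 0, truncated geometric sum as before
P1 = np.einsum("sa,sax->sx", pi, T)
d, p_t = np.zeros(2), np.array([1.0, 0.0])
for t in range(2000):
    d += (1 - gamma) * gamma**t * p_t
    p_t = p_t @ P1

lhs = V1[0] - V2[0]
rhs = (d @ (pi * A2).sum(axis=1)) / (1 - gamma)
print(lhs, rhs)  # both sides of the performance difference identity
```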