Exam (elaborations)

CS 234 assignment 2 - ALL ANSWERS 100% CORRECT

Rating
3.5
(4)
Sold
38
Pages
12
Grade
A+
Uploaded on
06-07-2021
Written in
2020/2021
Contains
Questions and answers


Preview of the content

CS 234 Winter 2021: Assignment #2

Due date:
Part 1 (0-4): February 5, 2021 at 6 PM (18:00) PST
Part 2 (5-6): February 12, 2021 at 6 PM (18:00) PST

These questions require thought, but do not require long answers. Please be as concise as possible.

We encourage students to discuss in groups for assignments. We ask that you abide by the university
Honor Code and that of the Computer Science department. If you have discussed the problems with
others, please include a statement saying who you discussed problems with. Failure to follow these
instructions will be reported to the Office of Community Standards. We reserve the right to run
fraud-detection software on your code. Please refer to the Academic Collaboration and Misconduct
section of the course website for details about the collaboration policy.
Please review any additional instructions posted on the assignment page. When you are ready to
submit, please follow the instructions on the course website. Make sure you test your code using
the provided commands and do not edit outside of the marked areas.

You’ll need to download the starter code and fill in the appropriate functions following the instructions
from the handout and the code’s documentation. Training DeepMind’s network on Pong takes roughly
12 hours on a GPU, so please start early! (Only a completed run will receive full credit.) We will give
you access to an Azure GPU cluster. You’ll find the setup instructions on the course assignment page.



Introduction
In this assignment we will implement deep Q-learning, following DeepMind’s papers ([1] and [2]), which learn
to play Atari games from raw pixels. The purpose is to demonstrate the effectiveness of deep neural networks
as well as some of the techniques used in practice to stabilize training and achieve better performance. In
the process, you’ll become familiar with PyTorch. We will train our networks on the Pong-v0 environment
from OpenAI gym, but the code can easily be applied to any other environment.

In Pong, one player scores if the ball passes by the other player. An episode is over when one of the players
reaches 21 points. Thus, the total return of an episode is between −21 (lost every point) and +21 (won
every point). Our agent plays against a decent hard-coded AI player. Average human performance is −3
(reported in [2]). In this assignment, you will train an AI agent with super-human performance, reaching at
least +10 (hopefully more!).
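For reference, a minimal interaction loop with the Pong-v0 environment looks like the sketch below. It is not part of the starter code; it assumes gym is installed with the Atari extras and uses the classic reset/step API that gym exposed around 2021. A uniformly random policy typically finishes near −21, the bottom of the return range described above.

import gym

# Sanity-check sketch (not part of the starter code): run one episode of
# Pong-v0 with a uniformly random policy and print the episode return,
# which lies in the [-21, +21] range described in the handout.
env = gym.make("Pong-v0")
obs = env.reset()                       # raw pixel observation (210x160x3 uint8)
done = False
episode_return = 0.0

while not done:
    action = env.action_space.sample()  # stand-in for the DQN's epsilon-greedy action
    obs, reward, done, info = env.step(action)
    episode_return += reward            # +1 when we score, -1 when the opponent scores

print("episode return:", episode_return)
env.close()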







0 Distributions induced by a policy (13 pts)
In this problem, we’ll work with an infinite-horizon MDP M = ⟨S, A, R, T, γ⟩ and consider stochastic policies
of the form π : S → ∆(A)¹. Additionally, we’ll assume that M has a single, fixed starting state s0 ∈ S for
simplicity.

(a) (written, 3 pts) Consider a fixed stochastic policy and imagine running several rollouts of this policy
within the environment. Naturally, depending on the stochasticity of the MDP M and the policy itself,
some trajectories are more likely than others. Write down an expression for ρ^π(τ), the likelihood of
sampling a trajectory τ = (s_0, a_0, s_1, a_1, . . .) by running π in M. To put this distribution in context,
recall that

    V^\pi(s_0) = \mathbb{E}_{\tau \sim \rho^\pi}\left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\middle|\, s_0 \right].

Solution:

    \rho^\pi(\tau) = \prod_{t=0}^{\infty} \pi(a_t \mid s_t)\, T(s_{t+1} \mid s_t, a_t)
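This product form is easy to evaluate on a small tabular MDP. The sketch below is illustrative only (the two-state MDP, policy, and trajectory are made up and not from the assignment): with s_0 fixed, the likelihood of any finite trajectory prefix is the product of policy and transition probabilities along it.

import numpy as np

# Illustration only: a made-up 2-state, 2-action MDP. T[s, a, s'] is the
# transition probability and pi[s, a] the policy. With s0 fixed, the
# likelihood of a finite trajectory prefix is the product of policy and
# transition terms, matching rho^pi(tau) from part (a).
T = np.array([[[0.9, 0.1],     # T[s=0, a=0, :]
               [0.2, 0.8]],    # T[s=0, a=1, :]
              [[0.5, 0.5],     # T[s=1, a=0, :]
               [0.1, 0.9]]])   # T[s=1, a=1, :]
pi = np.array([[0.7, 0.3],     # pi(a | s=0)
               [0.4, 0.6]])    # pi(a | s=1)

def trajectory_likelihood(states, actions):
    """Probability of observing (s0, a0, s1, a1, ..., s_n) under pi and T."""
    p = 1.0
    for t, a in enumerate(actions):
        s, s_next = states[t], states[t + 1]
        p *= pi[s, a] * T[s, a, s_next]
    return p

# tau = (s0=0, a0=1, s1=1, a1=1, s2=1)
print(trajectory_likelihood(states=[0, 1, 1], actions=[1, 1]))
# 0.3 * 0.8 * 0.6 * 0.9 = 0.1296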



(b) (written, 5 pts) Just as ρ^π captures the distribution over trajectories induced by π, we can also
examine the distribution over states induced by π. In particular, define the discounted, stationary state
distribution of a policy π as

    d^\pi(s) = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t p(s_t = s),

where p(s_t = s) denotes the probability of being in state s at timestep t while following policy π; your
answer to the previous part should help you reason about how you might compute this value. Consider
an arbitrary function f : S × A → R. Prove the following identity:

    \mathbb{E}_{\tau \sim \rho^\pi}\left[ \sum_{t=0}^{\infty} \gamma^t f(s_t, a_t) \right]
        = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\big[ \mathbb{E}_{a \sim \pi(s)}[ f(s, a) ] \big].

Hint: You may find it helpful to first consider how things work out for f(s, a) = 1, ∀(s, a) ∈ S × A.
Hint: What is p(s_t = s)?
Solution:

    \mathbb{E}_{\tau \sim \rho^\pi}\left[ \sum_{t=0}^{\infty} \gamma^t f(s_t, a_t) \right]
        = \sum_{t=0}^{\infty} \gamma^t \, \mathbb{E}_{\tau \sim \rho^\pi}[ f(s_t, a_t) ]
        = \mathbb{E}_{\tau \sim \rho^\pi}[f(s_0, a_0)] + \gamma \, \mathbb{E}_{\tau \sim \rho^\pi}[f(s_1, a_1)] + \gamma^2 \, \mathbb{E}_{\tau \sim \rho^\pi}[f(s_2, a_2)] + \cdots
        = \sum_{a_0} \pi(a_0 \mid s_0) f(s_0, a_0) + \gamma \sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1} T(s_1 \mid s_0, a_0) \sum_{a_1} \pi(a_1 \mid s_1) f(s_1, a_1) + \cdots
        = \sum_{s} p(s_0 = s) \, \mathbb{E}_{a \sim \pi(s)}[f(s, a)] + \gamma \sum_{s} p(s_1 = s) \, \mathbb{E}_{a \sim \pi(s)}[f(s, a)] + \cdots
        = \sum_{s} \sum_{t=0}^{\infty} \gamma^t p(s_t = s) \, \mathbb{E}_{a \sim \pi(s)}[f(s, a)]
        = \frac{1}{1 - \gamma} \sum_{s} d^\pi(s) \, \mathbb{E}_{a \sim \pi(s)}[f(s, a)]
        = \frac{1}{1 - \gamma} \, \mathbb{E}_{s \sim d^\pi}\big[ \mathbb{E}_{a \sim \pi(s)}[f(s, a)] \big]
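The identity can also be checked numerically. The sketch below is illustrative only (the MDP, policy, discount, and f are made up): the left-hand side is computed by unrolling the discounted sum over timesteps with a long truncation, and the right-hand side uses d^π obtained in closed form from the linear system implied by its definition; the two values agree up to truncation error.

import numpy as np

# Illustration only: verify the part (b) identity on a made-up 2-state,
# 2-action MDP with fixed start state s0 = 0.
gamma = 0.9
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # T[s, a, s']
pi = np.array([[0.7, 0.3], [0.4, 0.6]])    # pi(a | s)
f = np.array([[1.0, 2.0], [3.0, 4.0]])     # arbitrary f(s, a)

P_pi = np.einsum("sa,sap->sp", pi, T)      # state-to-state transitions under pi
f_pi = (pi * f).sum(axis=1)                # E_{a~pi(s)}[f(s, a)] for each s

# Left-hand side: sum_t gamma^t * sum_s p(s_t = s) * E_{a~pi(s)}[f(s, a)],
# truncated at a horizon where gamma^t is negligible.
p_t = np.array([1.0, 0.0])                 # p(s_0 = s); start state is s0 = 0
lhs = 0.0
for t in range(2000):
    lhs += (gamma ** t) * (p_t @ f_pi)
    p_t = p_t @ P_pi

# Right-hand side: d^pi(s) = (1 - gamma) * sum_t gamma^t p(s_t = s), obtained
# in closed form from (I - gamma * P_pi^T) x = e_{s0}.
x = np.linalg.solve(np.eye(2) - gamma * P_pi.T, np.array([1.0, 0.0]))
d_pi = (1 - gamma) * x
rhs = (1.0 / (1 - gamma)) * (d_pi @ f_pi)

print(lhs, rhs)   # the two values should agree to numerical precision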




¹For a finite set X, ∆(X) refers to the set of categorical distributions with support on X or, equivalently, the
∆_{|X|−1} probability simplex.


(c) (written, 5 pts) For any policy π, we define the following function:

    A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s).

Prove the following statement holds for all policies π, π′:

    V^\pi(s_0) - V^{\pi'}(s_0) = \frac{1}{1 - \gamma} \, \mathbb{E}_{s \sim d^\pi}\Big[ \mathbb{E}_{a \sim \pi(s)}\big[ A^{\pi'}(s, a) \big] \Big].

Solution:

    V^\pi(s_0) - V^{\pi'}(s_0)
        = \mathbb{E}_{\tau \sim \rho^\pi}\left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \right] - V^{\pi'}(s_0)
        = \mathbb{E}_{\tau \sim \rho^\pi}\left[ \sum_{t=0}^{\infty} \gamma^t \big( R(s_t, a_t) + V^{\pi'}(s_t) - V^{\pi'}(s_t) \big) \right] - V^{\pi'}(s_0)
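The stated identity can also be checked numerically on a small example. The sketch below is illustrative only (the MDP, rewards, and the two policies are made up): it evaluates V^π, Q^{π′}, A^{π′}, and d^π exactly with linear algebra and compares the two sides of the identity.

import numpy as np

# Illustration only: check the part (c) identity on a made-up 2-state,
# 2-action MDP. All quantities are computed exactly via linear solves.
gamma = 0.9
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])       # T[s, a, s']
R = np.array([[1.0, 0.0], [0.0, 2.0]])         # R(s, a)
pi = np.array([[0.7, 0.3], [0.4, 0.6]])        # pi(a | s)
pi_prime = np.array([[0.2, 0.8], [0.9, 0.1]])  # pi'(a | s)

def P_and_r(policy):
    """State transition matrix and expected reward vector under a policy."""
    P = np.einsum("sa,sap->sp", policy, T)
    r = (policy * R).sum(axis=1)
    return P, r

def V(policy):
    """Exact policy evaluation: V = (I - gamma * P)^-1 r."""
    P, r = P_and_r(policy)
    return np.linalg.solve(np.eye(2) - gamma * P, r)

V_pi, V_pp = V(pi), V(pi_prime)
Q_pp = R + gamma * (T @ V_pp)        # Q^{pi'}(s, a) = R(s, a) + gamma * E_{s'}[V^{pi'}(s')]
A_pp = Q_pp - V_pp[:, None]          # A^{pi'}(s, a) = Q^{pi'}(s, a) - V^{pi'}(s)

# Discounted state distribution of pi from start state s0 = 0, as in part (b).
P_pi, _ = P_and_r(pi)
d_pi = (1 - gamma) * np.linalg.solve(np.eye(2) - gamma * P_pi.T, np.array([1.0, 0.0]))

lhs = V_pi[0] - V_pp[0]
rhs = (1.0 / (1 - gamma)) * (d_pi @ (pi * A_pp).sum(axis=1))
print(lhs, rhs)                      # the two sides should match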