CS 234 Assignment 2 - ALL ANSWERS 100% CORRECT


CS 234 Winter 2021: Assignment #2

Due date:
Part 1 (0-4): February 5, 2021 at 6 PM (18:00) PST
Part 2 (5-6): February 12, 2021 at 6 PM (18:00) PST

These questions require thought, but do not require long answers. Please be as concise as possible.

We encourage students to discuss assignments in groups. We ask that you abide by the university
Honor Code and that of the Computer Science department. If you have discussed the problems with
others, please include a statement saying whom you discussed them with. Failure to follow these
instructions will be reported to the Office of Community Standards. We reserve the right to run
fraud-detection software on your code. Please refer to the Academic Collaboration and Misconduct
section of the course website for details on the collaboration policy.
Please review any additional instructions posted on the assignment page. When you are ready to
submit, please follow the instructions on the course website. Make sure you test your code using
the provided commands and do not edit outside of the marked areas.

You’ll need to download the starter code and fill in the appropriate functions following the instructions
from the handout and the code’s documentation. Training DeepMind’s network on Pong takes roughly
12 hours on a GPU, so please start early! (Only a completed run will receive full credit.) We will give
you access to an Azure GPU cluster. You’ll find the setup instructions on the course assignment page.



Introduction
In this assignment we will implement deep Q-learning, following DeepMind’s papers ([1] and [2]) on
learning to play Atari games from raw pixels. The purpose is to demonstrate the effectiveness of deep
neural networks as well as some of the techniques used in practice to stabilize training and achieve
better performance. In the process, you’ll become familiar with PyTorch. We will train our networks
on the Pong-v0 environment from OpenAI Gym, but the code can easily be applied to any other
environment.

In Pong, one player scores if the ball passes by the other player. An episode is over when one of the players
reaches 21 points. Thus, the total return of an episode is between −21 (lost every point) and +21 (won
every point). Our agent plays against a decent hard-coded AI player. Average human performance is −3
(reported in [2]). In this assignment, you will train an AI agent with super-human performance, reaching at
least +10 (hopefully more!).
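As a quick, optional sanity check of the environment (not part of the starter code), the sketch below runs a single Pong-v0 episode with a uniformly random policy and prints the episode return. It assumes the Atari extras for Gym are installed and uses the classic Gym reset/step interface (gym versions before 0.26).

```python
# Optional sketch (not part of the starter code): one Pong-v0 episode under a
# uniformly random policy. Assumes the classic Gym API, i.e. reset() returns an
# observation and step() returns (obs, reward, done, info).
import gym

env = gym.make("Pong-v0")
obs = env.reset()
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()          # random action as a placeholder policy
    obs, reward, done, _ = env.step(action)     # obs is a raw-pixel frame; reward is +/-1 per point
    episode_return += reward

print("episode return:", episode_return)        # close to -21 for a random agent
env.close()
```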






0 Distributions induced by a policy (13 pts)
In this problem, we’ll work with an infinite-horizon MDP $\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{R}, \mathcal{T}, \gamma \rangle$ and consider stochastic policies
of the form $\pi : \mathcal{S} \to \Delta(\mathcal{A})$.¹ Additionally, we’ll assume that $\mathcal{M}$ has a single, fixed starting state $s_0 \in \mathcal{S}$ for
simplicity.

(a) (written, 3 pts) Consider a fixed stochastic policy and imagine running several rollouts of this policy
within the environment. Naturally, depending on the stochasticity of the MDP $\mathcal{M}$ and the policy itself,
some trajectories are more likely than others. Write down an expression for $\rho^\pi(\tau)$, the likelihood of
sampling a trajectory $\tau = (s_0, a_0, s_1, a_1, \ldots)$ by running $\pi$ in $\mathcal{M}$. To put this distribution in context,
recall that
$$V^\pi(s_0) = \mathbb{E}_{\tau \sim \rho^\pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t) \,\middle|\, s_0\right].$$

Solution:
$$\rho^\pi(\tau) = \prod_{t=0}^{\infty} \pi(a_t \mid s_t)\, \mathcal{T}(s_{t+1} \mid s_t, a_t)$$
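To make the expression concrete, here is a hypothetical toy example (not part of the assignment): it evaluates the likelihood of a finite trajectory prefix in a small tabular MDP with made-up $\pi$ and $\mathcal{T}$, truncating the infinite product at the length of the prefix.

```python
# Toy illustration of rho^pi(tau): likelihood of a finite trajectory prefix in a
# tabular MDP. The policy pi and transition kernel T below are made up.
import numpy as np

n_states, n_actions = 3, 2
rng = np.random.default_rng(0)

# pi[s, a] = pi(a|s); T[s, a, s'] = T(s'|s, a); rows normalized to sum to 1.
pi = rng.random((n_states, n_actions)); pi /= pi.sum(axis=1, keepdims=True)
T = rng.random((n_states, n_actions, n_states)); T /= T.sum(axis=2, keepdims=True)

def trajectory_likelihood(states, actions):
    """Product of pi(a_t|s_t) * T(s_{t+1}|s_t, a_t) over the prefix."""
    lik = 1.0
    for t in range(len(actions)):
        lik *= pi[states[t], actions[t]] * T[states[t], actions[t], states[t + 1]]
    return lik

# Example trajectory prefix (s_0, a_0, s_1, a_1, s_2).
print(trajectory_likelihood(states=[0, 2, 1], actions=[1, 0]))
```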



(b) (written, 5 pts) Just as $\rho^\pi$ captures the distribution over trajectories induced by $\pi$, we can also
examine the distribution over states induced by $\pi$. In particular, define the discounted, stationary state
distribution of a policy $\pi$ as
$$d^\pi(s) = (1 - \gamma) \sum_{t=0}^{\infty} \gamma^t p(s_t = s),$$
where $p(s_t = s)$ denotes the probability of being in state $s$ at timestep $t$ while following policy $\pi$; your
answer to the previous part should help you reason about how you might compute this value. Consider
an arbitrary function $f : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$. Prove the following identity:
$$\mathbb{E}_{\tau \sim \rho^\pi}\left[\sum_{t=0}^{\infty} \gamma^t f(s_t, a_t)\right] = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\left[\mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right]\right].$$

Hint: You may find it helpful to first consider how things work out for $f(s, a) = 1$, $\forall (s, a) \in \mathcal{S} \times \mathcal{A}$.
Hint: What is $p(s_t = s)$?
Solution:
$$\begin{aligned}
\mathbb{E}_{\tau \sim \rho^\pi}\left[\sum_{t=0}^{\infty} \gamma^t f(s_t, a_t)\right]
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{\tau \sim \rho^\pi}\left[f(s_t, a_t)\right] \\
&= \mathbb{E}_{\tau \sim \rho^\pi}\left[f(s_0, a_0)\right] + \gamma\, \mathbb{E}_{\tau \sim \rho^\pi}\left[f(s_1, a_1)\right] + \gamma^2\, \mathbb{E}_{\tau \sim \rho^\pi}\left[f(s_2, a_2)\right] + \ldots \\
&= \sum_{a_0} \pi(a_0 \mid s_0) f(s_0, a_0) + \gamma \sum_{a_0} \pi(a_0 \mid s_0) \sum_{s_1} \mathcal{T}(s_1 \mid s_0, a_0) \sum_{a_1} \pi(a_1 \mid s_1) f(s_1, a_1) + \ldots \\
&= \sum_{s} p(s_0 = s)\, \mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right] + \gamma \sum_{s} p(s_1 = s)\, \mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right] + \ldots \\
&= \sum_{s} \sum_{t=0}^{\infty} \gamma^t p(s_t = s)\, \mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right] \\
&= \frac{1}{1 - \gamma} \sum_{s} d^\pi(s)\, \mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right]
= \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\left[\mathbb{E}_{a \sim \pi(s)}\left[f(s, a)\right]\right]
\end{aligned}$$
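As an optional, hypothetical numerical check of this identity (not required by the assignment), the sketch below builds a small random MDP with a made-up policy, estimates the left-hand side by Monte Carlo rollouts, and computes the right-hand side from the definition of $d^\pi$; the two agree up to sampling error.

```python
# Optional toy check (not part of the assignment): verify
#   E_{tau~rho^pi}[ sum_t gamma^t f(s_t,a_t) ] = 1/(1-gamma) * E_{s~d^pi} E_{a~pi(s)}[ f(s,a) ]
# on a small random MDP with a made-up policy.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, s0 = 4, 3, 0.9, 0

pi = rng.random((nS, nA)); pi /= pi.sum(axis=1, keepdims=True)    # pi(a|s)
T  = rng.random((nS, nA, nS)); T /= T.sum(axis=2, keepdims=True)  # T(s'|s,a)
f  = rng.random((nS, nA))                                         # arbitrary f(s,a)

# Right-hand side: propagate p(s_t = s) exactly and accumulate d^pi.
P_pi = np.einsum("sa,sat->st", pi, T)        # state-to-state kernel under pi
p_t, d_pi = np.zeros(nS), np.zeros(nS)
p_t[s0] = 1.0
for t in range(1000):                        # gamma^1000 is negligible
    d_pi += (1 - gamma) * gamma**t * p_t
    p_t = p_t @ P_pi
rhs = d_pi @ (pi * f).sum(axis=1) / (1 - gamma)

# Left-hand side: Monte Carlo rollouts, truncated at a long horizon.
returns = []
for _ in range(5000):
    s, total = s0, 0.0
    for t in range(150):
        a = rng.choice(nA, p=pi[s])
        total += gamma**t * f[s, a]
        s = rng.choice(nS, p=T[s, a])
    returns.append(total)

print(f"LHS (Monte Carlo) ~ {np.mean(returns):.4f}   RHS (exact) = {rhs:.4f}")
```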




¹ For a finite set $\mathcal{X}$, $\Delta(\mathcal{X})$ refers to the set of categorical distributions with support on $\mathcal{X}$ or, equivalently, the $\Delta^{|\mathcal{X}|-1}$
probability simplex.


(c) (written, 5 pts) For any policy $\pi$, we define the following function:
$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s).$$
Prove the following statement holds for all policies $\pi, \pi'$:
$$V^\pi(s_0) - V^{\pi'}(s_0) = \frac{1}{1 - \gamma}\, \mathbb{E}_{s \sim d^\pi}\left[\mathbb{E}_{a \sim \pi(s)}\left[A^{\pi'}(s, a)\right]\right].$$

Solution:
$$\begin{aligned}
V^\pi(s_0) - V^{\pi'}(s_0)
&= \mathbb{E}_{\tau \sim \rho^\pi}\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right] - V^{\pi'}(s_0) \\
&= \mathbb{E}_{\tau \sim \rho^\pi}\left[\sum_{t=0}^{\infty} \gamma^t \left(R(s_t, a_t) + V^{\pi'}(s_t) - V^{\pi'}(s_t)\right)\right] - V^{\pi'}(s_0) \\
&= \ldots
\end{aligned}$$
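As with part (b), here is an optional, hypothetical numerical check of the statement (not part of the assignment): both sides are computed exactly on a small random MDP for two random policies, solving the Bellman linear system for $Q^{\pi'}$ and $V^{\pi'}$ and computing $d^\pi$ from its definition.

```python
# Toy numerical check of part (c) on a small random MDP (exact, no sampling).
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma, s0 = 4, 3, 0.9, 0

T = rng.random((nS, nA, nS)); T /= T.sum(axis=2, keepdims=True)   # T(s'|s,a)
R = rng.random((nS, nA))                                          # R(s,a)

def random_policy():
    p = rng.random((nS, nA))
    return p / p.sum(axis=1, keepdims=True)

def value_functions(pi):
    """Exact Q^pi and V^pi via the Bellman linear system."""
    P_pi = np.einsum("sa,sat->st", pi, T)           # state kernel under pi
    r_pi = (pi * R).sum(axis=1)                     # expected reward under pi
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    Q = R + gamma * np.einsum("sat,t->sa", T, V)
    return Q, V

def discounted_state_dist(pi):
    """d^pi(s) = (1-gamma) * sum_t gamma^t p(s_t = s), starting from s0."""
    P_pi = np.einsum("sa,sat->st", pi, T)
    p_t, d = np.zeros(nS), np.zeros(nS)
    p_t[s0] = 1.0
    for t in range(1000):
        d += (1 - gamma) * gamma**t * p_t
        p_t = p_t @ P_pi
    return d

pi, pi2 = random_policy(), random_policy()          # pi and pi'
Q2, V2 = value_functions(pi2)
_, V1 = value_functions(pi)
A2 = Q2 - V2[:, None]                               # advantage A^{pi'}(s,a)

lhs = V1[s0] - V2[s0]
rhs = discounted_state_dist(pi) @ (pi * A2).sum(axis=1) / (1 - gamma)
print(f"LHS = {lhs:.6f}, RHS = {rhs:.6f}")          # the two should agree
```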