CS 234 Winter 2022: Assignment #2

Due date:
Part 1 (0-4): February 5, 2022 at 6 PM (18:00) PST
Part 2 (5-6): February 12, 2022 at 6 PM (18:00) PST

These questions require thought, but do not require long answers. Please be as concise as possible.

We encourage students to discuss in groups for assignments. We ask that you abide by the university
Honor Code and that of the Computer Science department. If you have discussed the problems with
others, please include a statement saying who you discussed problems with. Failure to follow these
instructions will be reported to the Office of Community Standards. We reserve the right to run fraud-
detection software on your code. Please refer to the website, Academic Collaboration and Misconduct
section, for details about the collaboration policy.
Please review any additional instructions posted on the assignment page. When you are ready to
submit, please follow the instructions on the course website. Make sure you test your code using
the provided commands and do not edit outside of the marked areas.

You’ll need to download the starter code and fill in the appropriate functions following the instructions
from the handout and the code’s documentation. Training DeepMind’s network on Pong takes roughly
12 hours on a GPU, so please start early! (Only a completed run will receive full credit.) We will give
you access to an Azure GPU cluster. You’ll find the setup instructions on the course assignment page.



Introduction
In this assignment we will implement deep Q-learning, following DeepMind’s paper ([1] and [2]) that learns
to play Atari games from raw pixels. The purpose is to demonstrate the effectiveness of deep neural networks
as well as some of the techniques used in practice to stabilize training and achieve better performance. In
the process, you’ll become familiar with PyTorch. We will train our networks on the Pong-v0 environment
from OpenAI gym, but the code can easily be applied to any other environment.

In Pong, one player scores if the ball passes by the other player. An episode is over when one of the players
reaches 21 points. Thus, the total return of an episode is between −21 (lost every point) and +21 (won
every point). Our agent plays against a decent hard-coded AI player. Average human performance is −3
(reported in [2]). In this assignment, you will train an AI agent with super-human performance, reaching at
least +10 (hopefully more!).





0 Distributions induced by a policy (13 pts)
In this problem, we’ll work with an infinite-horizon MDP M = ⟨S, A, R, T, γ⟩ and consider stochastic policies
of the form π : S → ∆(A)¹. Additionally, we’ll assume that M has a single, fixed starting state s0 ∈ S for
simplicity.

(a) (written, 3 pts) Consider a fixed stochastic policy and imagine running several rollouts of this policy
within the environment. Naturally, depending on the stochasticity of the MDP M and the policy itself,
some trajectories are more likely than others. Write down an expression for ρ^π(τ), the likelihood of
sampling a trajectory τ = (s0, a0, s1, a1, . . .) by running π in M. To put this distribution in context,
recall that V^π(s0) = E_{τ∼ρ^π}[ Σ_{t=0}^∞ γ^t R(st, at) | s0 ].

Solution:

    ρ^π(τ) = ∏_{t=0}^∞ π(at|st) T(st+1|st, at)
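The product formula for ρ^π(τ) can be sanity-checked numerically. Below is a minimal sketch on a made-up 2-state, 2-action MDP; the numbers in `T` and `pi` are arbitrary illustrations, not part of the assignment:

```python
import numpy as np

# Illustrative 2-state, 2-action MDP (numbers are made up).
# T[s, a, s'] = transition probability, pi[s, a] = policy probability.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
pi = np.array([[0.6, 0.4],
               [0.5, 0.5]])

def trajectory_likelihood(traj, pi, T):
    """Likelihood of a finite trajectory (s0, a0, s1, a1, ..., sT)
    under rho^pi(tau) = prod_t pi(a_t|s_t) * T(s_{t+1}|s_t, a_t)."""
    states, actions = traj[0::2], traj[1::2]
    p = 1.0
    for t, a in enumerate(actions):
        s, s_next = states[t], states[t + 1]
        p *= pi[s, a] * T[s, a, s_next]
    return p

# Trajectory s0=0, a0=0, s1=0, a1=1, s2=1:
# pi(0|0)*T(0|0,0) * pi(1|0)*T(1|0,1) = 0.6*0.9 * 0.4*0.8 ≈ 0.1728
print(trajectory_likelihood([0, 0, 0, 1, 1], pi, T))
```

Note that the likelihood of an infinite-horizon trajectory is the limit of these finite products; the starting state s0 carries no factor because it is fixed.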


(b) (written, 5 pts) Just as ρ^π captures the distribution over trajectories induced by π, we can also ex-
amine the distribution over states induced by π. In particular, define the discounted, stationary state
distribution of a policy π as

    d^π(s) = (1 − γ) Σ_{t=0}^∞ γ^t p(st = s),

where p(st = s) denotes the probability of being in state s at time step t while following policy π; your
answer to the previous part should help you reason about how you might compute this value. Consider
an arbitrary function f : S × A → R. Prove the following identity:

    E_{τ∼ρ^π}[ Σ_{t=0}^∞ γ^t f(st, at) ] = (1/(1 − γ)) E_{s∼d^π}[ E_{a∼π(s)}[f(s, a)] ].

Hint: You may find it helpful to first consider how things work out for f(s, a) = 1, ∀(s, a) ∈ S × A.
Hint: What is p(st = s)?
Solution:

    E_{τ∼ρ^π}[ Σ_{t=0}^∞ γ^t f(st, at) ]
      = Σ_{t=0}^∞ γ^t E_{τ∼ρ^π}[f(st, at)]
      = E_{τ∼ρ^π}[f(s0, a0)] + γ E_{τ∼ρ^π}[f(s1, a1)] + γ² E_{τ∼ρ^π}[f(s2, a2)] + . . .
      = Σ_{a0} π(a0|s0) f(s0, a0) + γ Σ_{a0} π(a0|s0) Σ_{s1} T(s1|s0, a0) Σ_{a1} π(a1|s1) f(s1, a1) + . . .
      = Σ_s p(s0 = s) E_{a∼π(s)}[f(s, a)] + γ Σ_s p(s1 = s) E_{a∼π(s)}[f(s, a)] + . . .
      = Σ_s Σ_{t=0}^∞ γ^t p(st = s) E_{a∼π(s)}[f(s, a)]
      = (1/(1 − γ)) Σ_s d^π(s) E_{a∼π(s)}[f(s, a)]
      = (1/(1 − γ)) E_{s∼d^π}[ E_{a∼π(s)}[f(s, a)] ]
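The identity can also be checked numerically. In the sketch below, the left side is computed exactly from the discounted state occupancy μ = (I − γPᵀ)⁻¹ p0 (where P is the state-to-state kernel under π), while the right side uses a truncated series for d^π. All numbers (the MDP, policy, and f) are arbitrary illustrations, not part of the assignment:

```python
import numpy as np

# Illustrative 2-state, 2-action MDP (numbers are made up).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.3, 0.7]]])
pi = np.array([[0.6, 0.4],
               [0.5, 0.5]])
f = np.array([[1.0, -2.0],       # arbitrary f(s, a)
              [0.5, 3.0]])
gamma, s0 = 0.9, 0

# State-to-state kernel under pi: P[s, s'] = sum_a pi(a|s) T(s'|s, a)
P = np.einsum('sa,sap->sp', pi, T)
p0 = np.zeros(2)
p0[s0] = 1.0

f_bar = (pi * f).sum(axis=1)     # E_{a~pi(s)}[f(s, a)]

# LHS: discounted occupancy mu(s) = sum_t gamma^t p(s_t = s)
# solves the linear system mu = p0 + gamma P^T mu.
mu = np.linalg.solve(np.eye(2) - gamma * P.T, p0)
lhs = mu @ f_bar                 # E[ sum_t gamma^t f(s_t, a_t) ]

# RHS: d^pi(s) = (1 - gamma) sum_t gamma^t p(s_t = s), truncated series.
d = np.zeros(2)
p_t = p0.copy()
for t in range(1000):
    d += (1 - gamma) * gamma**t * p_t
    p_t = P.T @ p_t              # propagate state marginal one step
rhs = (d @ f_bar) / (1 - gamma)

print(lhs, rhs)   # the two sides agree
```

The first hint shows up here as well: with f ≡ 1, f_bar is all ones, so both sides reduce to Σ_t γ^t = 1/(1 − γ), since d^π sums to 1.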




¹ For a finite set X, ∆(X) refers to the set of categorical distributions with support on X or, equivalently, the probability simplex ∆^{|X|−1}.
