L11-Unit_8-Part_1_Proximal_Policy_Optimization_(PPO)-E4-PPO_with_CleanRL

Original course link: https://huggingface.co/deep-rl-course/unit2/mid-way-recap?fw=pt

Hands-on



Now that we studied the theory behind PPO, the best way to understand how it works is to implement it from scratch.


Implementing an architecture from scratch is the best way to understand it, and it’s a good habit. We have already done it for a value-based method with Q-Learning and a Policy-based method with Reinforce.

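To make the goal concrete before we dive in, here is a minimal sketch (in PyTorch, with illustrative function and variable names) of the clipped surrogate loss we studied in the theory part. It is the same computation you will meet again in the policy-loss lines of the final ppo.py further down.

import torch

def clipped_surrogate_loss(new_logprob, old_logprob, advantages, clip_coef=0.2):
    # probability ratio r(theta) = pi_theta(a|s) / pi_theta_old(a|s), computed in log space
    ratio = (new_logprob - old_logprob).exp()
    # unclipped and clipped surrogate objectives (negated, because we minimize a loss)
    pg_loss1 = -advantages * ratio
    pg_loss2 = -advantages * torch.clamp(ratio, 1 - clip_coef, 1 + clip_coef)
    # PPO takes the pessimistic maximum of the two losses, then averages over the batch
    return torch.max(pg_loss1, pg_loss2).mean()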

So, to be able to code it, we’re going to use two resources:

  • A tutorial made by Costa Huang. Costa is behind CleanRL, a Deep Reinforcement Learning library that provides high-quality single-file implementations with research-friendly features.
  • In addition to the tutorial, to go deeper you can read the 13 core implementation details: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/

Then, to test its robustness, we’re going to train it in:

  • LunarLander-v2

And finally, we will push the trained model to the Hub to evaluate and visualize your agent playing.

LunarLander-v2 is the first environment you used when you started this course. At that time, you didn’t know how it worked, and now, you can code it from scratch and train it. How incredible is that 🤩.



Let’s get started! 🚀


The Colab notebook:

Open In Colab


Unit 8: Proximal Policy Optimization (PPO) with PyTorch 🤖

Unit 8
In this notebook, you’ll learn to code your PPO agent from scratch with PyTorch, using the CleanRL implementation as a model.

To test its robustness, we’re going to train it in:

  • LunarLander-v2 🚀
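If you want a quick peek at what LunarLander-v2 exposes, the short sketch below (assuming gym==0.21 and Box2D are installed, as in the dependency cells later in this notebook) just creates the environment and prints its spaces.

import gym

env = gym.make("LunarLander-v2")
print(env.observation_space)  # Box with 8 dims: position, velocity, angle, angular velocity, leg contacts
print(env.action_space)       # Discrete(4): do nothing, fire left engine, fire main engine, fire right engine
env.close()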

We’re constantly trying to improve our tutorials, so if you find some issues in this notebook, please open an issue on the GitHub Repo.


Objectives of this notebook 🏆


At the end of the notebook, you will:


  • Be able to code your PPO agent from scratch using PyTorch.
  • Be able to push your trained agent and the code to the Hub with a nice video replay and an evaluation score 🔥.

Prerequisites 🏗️


Before diving into the notebook, you need to:


🔲 📚 Study PPO by reading Unit 8 🤗


To validate this hands-on for the certification process, you need to push one model. We don’t require a minimal result, but we advise you to try different hyperparameter settings to get better results.

If you don’t find your model, go to the bottom of the page and click on the refresh button.

For more information about the certification process, check this section 👉 https://huggingface.co/deep-rl-course/en/unit0/introduction#certification-process


Set the GPU 💪


  • To accelerate the agent’s training, we’ll use a GPU. To do that, go to Runtime > Change Runtime type

GPU Step 1


  • Hardware Accelerator > GPU

GPU Step 2

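If you want to double-check that the GPU runtime is active, a quick sanity check like the one below should do it (similar to the device selection done later in ppo.py).

import torch

# True if the Colab GPU runtime is active and visible to PyTorch
print(torch.cuda.is_available())
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)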

Create a virtual display 🔽


During the notebook, we’ll need to generate a replay video. To do so in Colab, we need a virtual screen to be able to render the environment (and thus record the frames).

Hence the following cell will install the libraries and create and run a virtual screen 🖥

apt install python-opengl
apt install ffmpeg
apt install xvfb
pip install pyglet==1.5
pip install pyvirtualdisplay
# Virtual display
from pyvirtualdisplay import Display

virtual_display = Display(visible=0, size=(1400, 900))
virtual_display.start()

Install dependencies 🔽


For this exercise, we use gym==0.21


pip install gym==0.21
pip install imageio-ffmpeg
pip install huggingface_hub
pip install box2d

Let’s code PPO from scratch with Costa Huang’s tutorial

For the core implementation of PPO, we will use the excellent Costa Huang tutorial. In addition to the tutorial, to go deeper you can read the 37 core implementation details: https://iclr-blog-track.github.io/2022/03/25/ppo-implementation-details/ and watch the video tutorial: https://youtu.be/MEt6rrxH8W4

from IPython.display import HTML

HTML(
'<iframe width="560" height="315" src="https://www.youtube.com/embed/MEt6rrxH8W4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>'
)

Add the Hugging Face Integration 🤗


  • In order to push our model to the Hub, we need to define a function package_to_hub

  • Add dependencies we need to push our model to the Hub

from huggingface_hub import HfApi, upload_folder
from huggingface_hub.repocard import metadata_eval_result, metadata_save

from pathlib import Path
import datetime
import tempfile
import json
import shutil
import imageio

from wasabi import Printer

msg = Printer()
  • Add a new argument in the parse_args() function to define the repo-id where we want to push the model.
# Adding HuggingFace argument
parser.add_argument(
    "--repo-id",
    type=str,
    default="ThomasSimonini/ppo-CartPole-v1",
    help="id of the model repository from the Hugging Face Hub {username/repo_name}",
)
  • Next, we add the methods needed to push the model to the Hub


  • These methods will:


    • _evaluate_agent(): evaluate the agent.
    • _generate_model_card(): generate the model card of your agent.
    • record_video(): record a video of your agent.
def package_to_hub(
repo_id,
model,
hyperparameters,
eval_env,
video_fps=30,
commit_message="Push agent to the Hub",
token=None,
logs=None,
):
"""
Evaluate, Generate a video and Upload a model to Hugging Face Hub.
This method does the complete pipeline:
- It evaluates the model
- It generates the model card
- It generates a replay video of the agent
- It pushes everything to the hub
:param repo_id: id of the model repository from the Hugging Face Hub
:param model: trained model
:param eval_env: environment used to evaluate the agent
:param fps: number of fps for rendering the video
:param commit_message: commit message
:param logs: directory on local machine of tensorboard logs you'd like to upload
"""
msg.info(
"This function will save, evaluate, generate a video of your agent, "
"create a model card and push everything to the hub. "
"It might take up to 1min. \n "
"This is a work in progress: if you encounter a bug, please open an issue."
)
# Step 1: Clone or create the repo
repo_url = HfApi().create_repo(
repo_id=repo_id,
token=token,
private=False,
exist_ok=True,
)

with tempfile.TemporaryDirectory() as tmpdirname:
tmpdirname = Path(tmpdirname)

# Step 2: Save the model
torch.save(model.state_dict(), tmpdirname / "model.pt")

# Step 3: Evaluate the model and build JSON
mean_reward, std_reward = _evaluate_agent(eval_env, 10, model)

# First get datetime
eval_datetime = datetime.datetime.now()
eval_form_datetime = eval_datetime.isoformat()

evaluate_data = {
"env_id": hyperparameters.env_id,
"mean_reward": mean_reward,
"std_reward": std_reward,
"n_evaluation_episodes": 10,
"eval_datetime": eval_form_datetime,
}

# Write a JSON file
with open(tmpdirname / "results.json", "w") as outfile:
json.dump(evaluate_data, outfile)

# Step 4: Generate a video
video_path = tmpdirname / "replay.mp4"
record_video(eval_env, model, video_path, video_fps)

# Step 5: Generate the model card
generated_model_card, metadata = _generate_model_card(
"PPO", hyperparameters.env_id, mean_reward, std_reward, hyperparameters
)
_save_model_card(tmpdirname, generated_model_card, metadata)

# Step 6: Add logs if needed
if logs:
_add_logdir(tmpdirname, Path(logs))

msg.info(f"Pushing repo {repo_id} to the Hugging Face Hub")

repo_url = upload_folder(
repo_id=repo_id,
folder_path=tmpdirname,
path_in_repo="",
commit_message=commit_message,
token=token,
)

msg.info(f"Your model is pushed to the Hub. You can view your model here: {repo_url}")
return repo_url


def _evaluate_agent(env, n_eval_episodes, policy):
"""
Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward.
:param env: The evaluation environment
:param n_eval_episodes: Number of episode to evaluate the agent
:param policy: The agent
"""
episode_rewards = []
for episode in range(n_eval_episodes):
state = env.reset()
step = 0
done = False
total_rewards_ep = 0

while done is False:
state = torch.Tensor(state).to(device)
action, _, _, _ = policy.get_action_and_value(state)
new_state, reward, done, info = env.step(action.cpu().numpy())
total_rewards_ep += reward
if done:
break
state = new_state
episode_rewards.append(total_rewards_ep)
mean_reward = np.mean(episode_rewards)
std_reward = np.std(episode_rewards)

return mean_reward, std_reward


def record_video(env, policy, out_directory, fps=30):
images = []
done = False
state = env.reset()
img = env.render(mode="rgb_array")
images.append(img)
while not done:
state = torch.Tensor(state).to(device)
# Take the action (index) that have the maximum expected future reward given that state
action, _, _, _ = policy.get_action_and_value(state)
state, reward, done, info = env.step(
action.cpu().numpy()
) # We directly put next_state = state for recording logic
img = env.render(mode="rgb_array")
images.append(img)
imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)


def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):
"""
Generate the model card for the Hub
:param model_name: name of the model
:env_id: name of the environment
:mean_reward: mean reward of the agent
:std_reward: standard deviation of the mean reward of the agent
:hyperparameters: training arguments
"""
# Step 1: Select the tags
metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)

# Transform the hyperparams namespace to string
converted_dict = vars(hyperparameters)
converted_str = str(converted_dict)
converted_str = converted_str.split(", ")
converted_str = "\n".join(converted_str)

# Step 2: Generate the model card
model_card = f"""
# PPO Agent Playing {env_id}

This is a trained model of a PPO agent playing {env_id}.

# Hyperparameters
"""
return model_card, metadata


def generate_metadata(model_name, env_id, mean_reward, std_reward):
"""
Define the tags for the model card
:param model_name: name of the model
:param env_id: name of the environment
:mean_reward: mean reward of the agent
:std_reward: standard deviation of the mean reward of the agent
"""
metadata = {}
metadata["tags"] = [
env_id,
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
]

# Add metrics
eval = metadata_eval_result(
model_pretty_name=model_name,
task_pretty_name="reinforcement-learning",
task_id="reinforcement-learning",
metrics_pretty_name="mean_reward",
metrics_id="mean_reward",
metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}",
dataset_pretty_name=env_id,
dataset_id=env_id,
)

# Merges both dictionaries
metadata = {**metadata, **eval}

return metadata


def _save_model_card(local_path, generated_model_card, metadata):
"""Saves a model card for the repository.
:param local_path: repository directory
:param generated_model_card: model card generated by _generate_model_card()
:param metadata: metadata
"""
readme_path = local_path / "README.md"
readme = ""
if readme_path.exists():
with readme_path.open("r", encoding="utf8") as f:
readme = f.read()
else:
readme = generated_model_card

with readme_path.open("w", encoding="utf-8") as f:
f.write(readme)

# Save our metrics to Readme metadata
metadata_save(readme_path, metadata)


def _add_logdir(local_path: Path, logdir: Path):
"""Adds a logdir to the repository.
:param local_path: repository directory
:param logdir: logdir directory
"""
if logdir.exists() and logdir.is_dir():
# Add the logdir to the repository under new dir called logs
repo_logdir = local_path / "logs"

# Delete current logs if they exist
if repo_logdir.exists():
shutil.rmtree(repo_logdir)

# Copy logdir into repo logdir
shutil.copytree(logdir, repo_logdir)
  • Finally, we call this function at the end of the PPO training
# Create the evaluation environment
eval_env = gym.make(args.env_id)

package_to_hub(
    repo_id=args.repo_id,
    model=agent,  # The model we want to save
    hyperparameters=args,
    eval_env=gym.make(args.env_id),
    logs=f"runs/{run_name}",
)
  • Here’s what the final ppo.py file looks like
# docs and experiment results can be found at https://docs.cleanrl.dev/rl-algorithms/ppo/#ppopy

import argparse
import os
import random
import time
from distutils.util import strtobool

import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions.categorical import Categorical
from torch.utils.tensorboard import SummaryWriter

from huggingface_hub import HfApi, upload_folder
from huggingface_hub.repocard import metadata_eval_result, metadata_save

from pathlib import Path
import datetime
import tempfile
import json
import shutil
import imageio

from wasabi import Printer

msg = Printer()


def parse_args():
# fmt: off
parser = argparse.ArgumentParser()
parser.add_argument("--exp-name", type=str, default=os.path.basename(__file__).rstrip(".py"),
help="the name of this experiment")
parser.add_argument("--seed", type=int, default=1,
help="seed of the experiment")
parser.add_argument("--torch-deterministic", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
help="if toggled, `torch.backends.cudnn.deterministic=False`")
parser.add_argument("--cuda", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
help="if toggled, cuda will be enabled by default")
parser.add_argument("--track", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True,
help="if toggled, this experiment will be tracked with Weights and Biases")
parser.add_argument("--wandb-project-name", type=str, default="cleanRL",
help="the wandb's project name")
parser.add_argument("--wandb-entity", type=str, default=None,
help="the entity (team) of wandb's project")
parser.add_argument("--capture-video", type=lambda x: bool(strtobool(x)), default=False, nargs="?", const=True,
help="weather to capture videos of the agent performances (check out `videos` folder)")

# Algorithm specific arguments
parser.add_argument("--env-id", type=str, default="CartPole-v1",
help="the id of the environment")
parser.add_argument("--total-timesteps", type=int, default=50000,
help="total timesteps of the experiments")
parser.add_argument("--learning-rate", type=float, default=2.5e-4,
help="the learning rate of the optimizer")
parser.add_argument("--num-envs", type=int, default=4,
help="the number of parallel game environments")
parser.add_argument("--num-steps", type=int, default=128,
help="the number of steps to run in each environment per policy rollout")
parser.add_argument("--anneal-lr", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
help="Toggle learning rate annealing for policy and value networks")
parser.add_argument("--gae", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
help="Use GAE for advantage computation")
parser.add_argument("--gamma", type=float, default=0.99,
help="the discount factor gamma")
parser.add_argument("--gae-lambda", type=float, default=0.95,
help="the lambda for the general advantage estimation")
parser.add_argument("--num-minibatches", type=int, default=4,
help="the number of mini-batches")
parser.add_argument("--update-epochs", type=int, default=4,
help="the K epochs to update the policy")
parser.add_argument("--norm-adv", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
help="Toggles advantages normalization")
parser.add_argument("--clip-coef", type=float, default=0.2,
help="the surrogate clipping coefficient")
parser.add_argument("--clip-vloss", type=lambda x: bool(strtobool(x)), default=True, nargs="?", const=True,
help="Toggles whether or not to use a clipped loss for the value function, as per the paper.")
parser.add_argument("--ent-coef", type=float, default=0.01,
help="coefficient of the entropy")
parser.add_argument("--vf-coef", type=float, default=0.5,
help="coefficient of the value function")
parser.add_argument("--max-grad-norm", type=float, default=0.5,
help="the maximum norm for the gradient clipping")
parser.add_argument("--target-kl", type=float, default=None,
help="the target KL divergence threshold")

# Adding HuggingFace argument
parser.add_argument("--repo-id", type=str, default="ThomasSimonini/ppo-CartPole-v1", help="id of the model repository from the Hugging Face Hub {username/repo_name}")

args = parser.parse_args()
args.batch_size = int(args.num_envs * args.num_steps)
args.minibatch_size = int(args.batch_size // args.num_minibatches)
# fmt: on
return args


def package_to_hub(
repo_id,
model,
hyperparameters,
eval_env,
video_fps=30,
commit_message="Push agent to the Hub",
token=None,
logs=None,
):
"""
Evaluate, Generate a video and Upload a model to Hugging Face Hub.
This method does the complete pipeline:
- It evaluates the model
- It generates the model card
- It generates a replay video of the agent
- It pushes everything to the hub
:param repo_id: id of the model repository from the Hugging Face Hub
:param model: trained model
:param eval_env: environment used to evaluate the agent
:param fps: number of fps for rendering the video
:param commit_message: commit message
:param logs: directory on local machine of tensorboard logs you'd like to upload
"""
msg.info(
"This function will save, evaluate, generate a video of your agent, "
"create a model card and push everything to the hub. "
"It might take up to 1min. \n "
"This is a work in progress: if you encounter a bug, please open an issue."
)
# Step 1: Clone or create the repo
repo_url = HfApi().create_repo(
repo_id=repo_id,
token=token,
private=False,
exist_ok=True,
)

with tempfile.TemporaryDirectory() as tmpdirname:
tmpdirname = Path(tmpdirname)

# Step 2: Save the model
torch.save(model.state_dict(), tmpdirname / "model.pt")

# Step 3: Evaluate the model and build JSON
mean_reward, std_reward = _evaluate_agent(eval_env, 10, model)

# First get datetime
eval_datetime = datetime.datetime.now()
eval_form_datetime = eval_datetime.isoformat()

evaluate_data = {
"env_id": hyperparameters.env_id,
"mean_reward": mean_reward,
"std_reward": std_reward,
"n_evaluation_episodes": 10,
"eval_datetime": eval_form_datetime,
}

# Write a JSON file
with open(tmpdirname / "results.json", "w") as outfile:
json.dump(evaluate_data, outfile)

# Step 4: Generate a video
video_path = tmpdirname / "replay.mp4"
record_video(eval_env, model, video_path, video_fps)

# Step 5: Generate the model card
generated_model_card, metadata = _generate_model_card(
"PPO", hyperparameters.env_id, mean_reward, std_reward, hyperparameters
)
_save_model_card(tmpdirname, generated_model_card, metadata)

# Step 6: Add logs if needed
if logs:
_add_logdir(tmpdirname, Path(logs))

msg.info(f"Pushing repo {repo_id} to the Hugging Face Hub")

repo_url = upload_folder(
repo_id=repo_id,
folder_path=tmpdirname,
path_in_repo="",
commit_message=commit_message,
token=token,
)

msg.info(f"Your model is pushed to the Hub. You can view your model here: {repo_url}")
return repo_url


def _evaluate_agent(env, n_eval_episodes, policy):
"""
Evaluate the agent for ``n_eval_episodes`` episodes and returns average reward and std of reward.
:param env: The evaluation environment
:param n_eval_episodes: Number of episode to evaluate the agent
:param policy: The agent
"""
episode_rewards = []
for episode in range(n_eval_episodes):
state = env.reset()
step = 0
done = False
total_rewards_ep = 0

while done is False:
state = torch.Tensor(state).to(device)
action, _, _, _ = policy.get_action_and_value(state)
new_state, reward, done, info = env.step(action.cpu().numpy())
total_rewards_ep += reward
if done:
break
state = new_state
episode_rewards.append(total_rewards_ep)
mean_reward = np.mean(episode_rewards)
std_reward = np.std(episode_rewards)

return mean_reward, std_reward


def record_video(env, policy, out_directory, fps=30):
images = []
done = False
state = env.reset()
img = env.render(mode="rgb_array")
images.append(img)
while not done:
state = torch.Tensor(state).to(device)
# Take the action (index) that have the maximum expected future reward given that state
action, _, _, _ = policy.get_action_and_value(state)
state, reward, done, info = env.step(
action.cpu().numpy()
) # We directly put next_state = state for recording logic
img = env.render(mode="rgb_array")
images.append(img)
imageio.mimsave(out_directory, [np.array(img) for i, img in enumerate(images)], fps=fps)


def _generate_model_card(model_name, env_id, mean_reward, std_reward, hyperparameters):
"""
Generate the model card for the Hub
:param model_name: name of the model
:env_id: name of the environment
:mean_reward: mean reward of the agent
:std_reward: standard deviation of the mean reward of the agent
:hyperparameters: training arguments
"""
# Step 1: Select the tags
metadata = generate_metadata(model_name, env_id, mean_reward, std_reward)

# Transform the hyperparams namespace to string
converted_dict = vars(hyperparameters)
converted_str = str(converted_dict)
converted_str = converted_str.split(", ")
converted_str = "\n".join(converted_str)

# Step 2: Generate the model card
model_card = f"""
# PPO Agent Playing {env_id}

This is a trained model of a PPO agent playing {env_id}.

# Hyperparameters
"""
return model_card, metadata


def generate_metadata(model_name, env_id, mean_reward, std_reward):
"""
Define the tags for the model card
:param model_name: name of the model
:param env_id: name of the environment
:mean_reward: mean reward of the agent
:std_reward: standard deviation of the mean reward of the agent
"""
metadata = {}
metadata["tags"] = [
env_id,
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
]

# Add metrics
eval = metadata_eval_result(
model_pretty_name=model_name,
task_pretty_name="reinforcement-learning",
task_id="reinforcement-learning",
metrics_pretty_name="mean_reward",
metrics_id="mean_reward",
metrics_value=f"{mean_reward:.2f} +/- {std_reward:.2f}",
dataset_pretty_name=env_id,
dataset_id=env_id,
)

# Merges both dictionaries
metadata = {**metadata, **eval}

return metadata


def _save_model_card(local_path, generated_model_card, metadata):
"""Saves a model card for the repository.
:param local_path: repository directory
:param generated_model_card: model card generated by _generate_model_card()
:param metadata: metadata
"""
readme_path = local_path / "README.md"
readme = ""
if readme_path.exists():
with readme_path.open("r", encoding="utf8") as f:
readme = f.read()
else:
readme = generated_model_card

with readme_path.open("w", encoding="utf-8") as f:
f.write(readme)

# Save our metrics to Readme metadata
metadata_save(readme_path, metadata)


def _add_logdir(local_path: Path, logdir: Path):
"""Adds a logdir to the repository.
:param local_path: repository directory
:param logdir: logdir directory
"""
if logdir.exists() and logdir.is_dir():
# Add the logdir to the repository under new dir called logs
repo_logdir = local_path / "logs"

# Delete current logs if they exist
if repo_logdir.exists():
shutil.rmtree(repo_logdir)

# Copy logdir into repo logdir
shutil.copytree(logdir, repo_logdir)


def make_env(env_id, seed, idx, capture_video, run_name):
def thunk():
env = gym.make(env_id)
env = gym.wrappers.RecordEpisodeStatistics(env)
if capture_video:
if idx == 0:
env = gym.wrappers.RecordVideo(env, f"videos/{run_name}")
env.seed(seed)
env.action_space.seed(seed)
env.observation_space.seed(seed)
return env

return thunk


def layer_init(layer, std=np.sqrt(2), bias_const=0.0):
torch.nn.init.orthogonal_(layer.weight, std)
torch.nn.init.constant_(layer.bias, bias_const)
return layer


class Agent(nn.Module):
def __init__(self, envs):
super().__init__()
self.critic = nn.Sequential(
layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),
nn.Tanh(),
layer_init(nn.Linear(64, 64)),
nn.Tanh(),
layer_init(nn.Linear(64, 1), std=1.0),
)
self.actor = nn.Sequential(
layer_init(nn.Linear(np.array(envs.single_observation_space.shape).prod(), 64)),
nn.Tanh(),
layer_init(nn.Linear(64, 64)),
nn.Tanh(),
layer_init(nn.Linear(64, envs.single_action_space.n), std=0.01),
)

def get_value(self, x):
return self.critic(x)

def get_action_and_value(self, x, action=None):
logits = self.actor(x)
probs = Categorical(logits=logits)
if action is None:
action = probs.sample()
return action, probs.log_prob(action), probs.entropy(), self.critic(x)


if __name__ == "__main__":
args = parse_args()
run_name = f"{args.env_id}__{args.exp_name}__{args.seed}__{int(time.time())}"
if args.track:
import wandb

wandb.init(
project=args.wandb_project_name,
entity=args.wandb_entity,
sync_tensorboard=True,
config=vars(args),
name=run_name,
monitor_gym=True,
save_code=True,
)
writer = SummaryWriter(f"runs/{run_name}")
writer.add_text(
"hyperparameters",
"|param|value|\n|-|-|\n%s" % ("\n".join([f"|{key}|{value}|" for key, value in vars(args).items()])),
)

# TRY NOT TO MODIFY: seeding
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.backends.cudnn.deterministic = args.torch_deterministic

device = torch.device("cuda" if torch.cuda.is_available() and args.cuda else "cpu")

# env setup
envs = gym.vector.SyncVectorEnv(
[make_env(args.env_id, args.seed + i, i, args.capture_video, run_name) for i in range(args.num_envs)]
)
assert isinstance(envs.single_action_space, gym.spaces.Discrete), "only discrete action space is supported"

agent = Agent(envs).to(device)
optimizer = optim.Adam(agent.parameters(), lr=args.learning_rate, eps=1e-5)

# ALGO Logic: Storage setup
obs = torch.zeros((args.num_steps, args.num_envs) + envs.single_observation_space.shape).to(device)
actions = torch.zeros((args.num_steps, args.num_envs) + envs.single_action_space.shape).to(device)
logprobs = torch.zeros((args.num_steps, args.num_envs)).to(device)
rewards = torch.zeros((args.num_steps, args.num_envs)).to(device)
dones = torch.zeros((args.num_steps, args.num_envs)).to(device)
values = torch.zeros((args.num_steps, args.num_envs)).to(device)

# TRY NOT TO MODIFY: start the game
global_step = 0
start_time = time.time()
next_obs = torch.Tensor(envs.reset()).to(device)
next_done = torch.zeros(args.num_envs).to(device)
num_updates = args.total_timesteps // args.batch_size

for update in range(1, num_updates + 1):
# Annealing the rate if instructed to do so.
if args.anneal_lr:
frac = 1.0 - (update - 1.0) / num_updates
lrnow = frac * args.learning_rate
optimizer.param_groups[0]["lr"] = lrnow

for step in range(0, args.num_steps):
global_step += 1 * args.num_envs
obs[step] = next_obs
dones[step] = next_done

# ALGO LOGIC: action logic
with torch.no_grad():
action, logprob, _, value = agent.get_action_and_value(next_obs)
values[step] = value.flatten()
actions[step] = action
logprobs[step] = logprob

# TRY NOT TO MODIFY: execute the game and log data.
next_obs, reward, done, info = envs.step(action.cpu().numpy())
rewards[step] = torch.tensor(reward).to(device).view(-1)
next_obs, next_done = torch.Tensor(next_obs).to(device), torch.Tensor(done).to(device)

for item in info:
if "episode" in item.keys():
print(f"global_step={global_step}, episodic_return={item['episode']['r']}")
writer.add_scalar("charts/episodic_return", item["episode"]["r"], global_step)
writer.add_scalar("charts/episodic_length", item["episode"]["l"], global_step)
break

# bootstrap value if not done
with torch.no_grad():
next_value = agent.get_value(next_obs).reshape(1, -1)
if args.gae:
advantages = torch.zeros_like(rewards).to(device)
lastgaelam = 0
for t in reversed(range(args.num_steps)):
if t == args.num_steps - 1:
nextnonterminal = 1.0 - next_done
nextvalues = next_value
else:
nextnonterminal = 1.0 - dones[t + 1]
nextvalues = values[t + 1]
delta = rewards[t] + args.gamma * nextvalues * nextnonterminal - values[t]
advantages[t] = lastgaelam = delta + args.gamma * args.gae_lambda * nextnonterminal * lastgaelam
returns = advantages + values
else:
returns = torch.zeros_like(rewards).to(device)
for t in reversed(range(args.num_steps)):
if t == args.num_steps - 1:
nextnonterminal = 1.0 - next_done
next_return = next_value
else:
nextnonterminal = 1.0 - dones[t + 1]
next_return = returns[t + 1]
returns[t] = rewards[t] + args.gamma * nextnonterminal * next_return
advantages = returns - values

# flatten the batch
b_obs = obs.reshape((-1,) + envs.single_observation_space.shape)
b_logprobs = logprobs.reshape(-1)
b_actions = actions.reshape((-1,) + envs.single_action_space.shape)
b_advantages = advantages.reshape(-1)
b_returns = returns.reshape(-1)
b_values = values.reshape(-1)

# Optimizing the policy and value network
b_inds = np.arange(args.batch_size)
clipfracs = []
for epoch in range(args.update_epochs):
np.random.shuffle(b_inds)
for start in range(0, args.batch_size, args.minibatch_size):
end = start + args.minibatch_size
mb_inds = b_inds[start:end]

_, newlogprob, entropy, newvalue = agent.get_action_and_value(
b_obs[mb_inds], b_actions.long()[mb_inds]
)
logratio = newlogprob - b_logprobs[mb_inds]
ratio = logratio.exp()

with torch.no_grad():
# calculate approx_kl http://joschu.net/blog/kl-approx.html
old_approx_kl = (-logratio).mean()
approx_kl = ((ratio - 1) - logratio).mean()
clipfracs += [((ratio - 1.0).abs() > args.clip_coef).float().mean().item()]

mb_advantages = b_advantages[mb_inds]
if args.norm_adv:
mb_advantages = (mb_advantages - mb_advantages.mean()) / (mb_advantages.std() + 1e-8)

# Policy loss
pg_loss1 = -mb_advantages * ratio
pg_loss2 = -mb_advantages * torch.clamp(ratio, 1 - args.clip_coef, 1 + args.clip_coef)
pg_loss = torch.max(pg_loss1, pg_loss2).mean()

# Value loss
newvalue = newvalue.view(-1)
if args.clip_vloss:
v_loss_unclipped = (newvalue - b_returns[mb_inds]) ** 2
v_clipped = b_values[mb_inds] + torch.clamp(
newvalue - b_values[mb_inds],
-args.clip_coef,
args.clip_coef,
)
v_loss_clipped = (v_clipped - b_returns[mb_inds]) ** 2
v_loss_max = torch.max(v_loss_unclipped, v_loss_clipped)
v_loss = 0.5 * v_loss_max.mean()
else:
v_loss = 0.5 * ((newvalue - b_returns[mb_inds]) ** 2).mean()

entropy_loss = entropy.mean()
loss = pg_loss - args.ent_coef * entropy_loss + v_loss * args.vf_coef

optimizer.zero_grad()
loss.backward()
nn.utils.clip_grad_norm_(agent.parameters(), args.max_grad_norm)
optimizer.step()

if args.target_kl is not None:
if approx_kl > args.target_kl:
break

y_pred, y_true = b_values.cpu().numpy(), b_returns.cpu().numpy()
var_y = np.var(y_true)
explained_var = np.nan if var_y == 0 else 1 - np.var(y_true - y_pred) / var_y

# TRY NOT TO MODIFY: record rewards for plotting purposes
writer.add_scalar("charts/learning_rate", optimizer.param_groups[0]["lr"], global_step)
writer.add_scalar("losses/value_loss", v_loss.item(), global_step)
writer.add_scalar("losses/policy_loss", pg_loss.item(), global_step)
writer.add_scalar("losses/entropy", entropy_loss.item(), global_step)
writer.add_scalar("losses/old_approx_kl", old_approx_kl.item(), global_step)
writer.add_scalar("losses/approx_kl", approx_kl.item(), global_step)
writer.add_scalar("losses/clipfrac", np.mean(clipfracs), global_step)
writer.add_scalar("losses/explained_variance", explained_var, global_step)
print("SPS:", int(global_step / (time.time() - start_time)))
writer.add_scalar("charts/SPS", int(global_step / (time.time() - start_time)), global_step)

envs.close()
writer.close()

# Create the evaluation environment
eval_env = gym.make(args.env_id)

package_to_hub(
repo_id=args.repo_id,
model=agent, # The model we want to save
hyperparameters=args,
eval_env=gym.make(args.env_id),
logs=f"runs/{run_name}",
)

To be able to share your model with the community, there are three more steps to follow:

1️⃣ (If it’s not already done) create an account on HF ➡ https://huggingface.co/join

2️⃣ Sign in, and then you need to store your authentication token from the Hugging Face website.

  • Create a new token (https://huggingface.co/settings/tokens) with write role

Create HF Token

  • Copy the token
  • Run the cell below and paste the token
from huggingface_hub import notebook_login
notebook_login()
!git config --global credential.helper store

If you don’t want to use a Google Colab or a Jupyter Notebook, you need to use this command instead: huggingface-cli login


Let’s start the training 🔥


  • Now that you’ve coded PPO from scratch and added the Hugging Face integration, we’re ready to start the training 🔥
  • First, you need to copy all your code into a file you create called ppo.py (one way to do this from a Colab cell is sketched below)


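One convenient way to create that file from a Colab cell (just a suggestion, any editor works) is the %%writefile cell magic:

%%writefile ppo.py
# docs and experiment results can be found at https://docs.cleanrl.dev/rl-algorithms/ppo/#ppopy
# ... paste the rest of the final ppo.py shown above here ...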

  • Now we just need to run this Python script using python <name-of-python-script>.py with the additional parameters we defined with argparse
  • You should modify more hyperparameters, otherwise the training will not be very stable (see the example after the command below).
!python ppo.py --env-id="LunarLander-v2" --repo-id="YOUR_REPO_ID" --total-timesteps=50000
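For example, a longer run with more parallel environments and rollout steps (illustrative values only, not a tuned recipe) could look like this:

!python ppo.py --env-id="LunarLander-v2" --repo-id="YOUR_REPO_ID" --total-timesteps=1000000 --num-envs=8 --num-steps=256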

Some additional challenges 🏆


The best way to learn is to try things on your own! Why not try another environment?
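For instance, since the script takes the environment id as an argument and asserts a discrete action space, you could try CartPole-v1 with something like the sketch below (same flags as above, values for illustration only):

!python ppo.py --env-id="CartPole-v1" --repo-id="YOUR_REPO_ID" --total-timesteps=50000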

See you in Unit 8, Part 2, where we’re going to train agents to play Doom 🔥

Keep learning, stay awesome 🤗
