F5-Unit_3-Deep_Q_Learning_with_Atari_Games-H7-Conclusion

Original course link: https://huggingface.co/deep-rl-course/unit7/introduction-to-marl?fw=pt

Conclusion

Congrats on finishing this chapter! It covered a lot of information. And congrats on finishing the tutorial: you’ve just trained your first Deep Q-Learning agent and shared it on the Hub 🥳.

Take time to really grasp the material before continuing.

Don’t hesitate to train your agent in other environments (Pong, Seaquest, Q*bert, Ms. Pac-Man). The best way to learn is to try things on your own!

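If you want a starting point, here is a minimal sketch of training DQN on another Atari game with Stable-Baselines3, the library that RL-Baselines3-Zoo builds on. The environment ID, buffer size, and step count are illustrative assumptions, not tuned values:

```python
# Minimal sketch: train DQN on another Atari game with Stable-Baselines3.
# Assumes stable-baselines3 and gymnasium[atari] (with ROMs) are installed.
# All hyperparameters below are illustrative, not tuned values.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Build a preprocessed Atari environment (frame skip, grayscale, resize)
# and stack 4 frames so the agent can perceive motion.
env = make_atari_env("PongNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)

model = DQN(
    "CnnPolicy",          # convolutional policy for pixel observations
    env,
    buffer_size=100_000,  # replay buffer size; lower it if RAM is tight
    learning_rate=1e-4,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)  # Atari usually needs millions of steps
model.save("dqn-pong")
```

Swap the environment ID (for example "SeaquestNoFrameskip-v4" or "QbertNoFrameskip-v4") to try the other games.
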
[Image: Atari environments]

In the next unit, we’re going to learn about Optuna. One of the most critical tasks in Deep Reinforcement Learning is finding a good set of training hyperparameters, and Optuna is a library that helps you automate the search.

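To give a feel for what that automation looks like, here is a minimal Optuna sketch. It tunes a DQN agent on CartPole-v1 rather than an Atari game so each trial stays fast, and the search ranges, step count, and trial count are illustrative assumptions:

```python
# Minimal sketch: use Optuna to search for DQN hyperparameters.
# CartPole-v1 is used instead of Atari so each trial finishes quickly.
import optuna
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

def objective(trial):
    # Sample candidate hyperparameters for this trial.
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("gamma", 0.90, 0.9999)

    # Train a small agent with the sampled values.
    model = DQN("MlpPolicy", "CartPole-v1",
                learning_rate=learning_rate, gamma=gamma, verbose=0)
    model.learn(total_timesteps=20_000)

    # Score the trial by mean evaluation reward; Optuna maximizes this.
    mean_reward, _ = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
    return mean_reward

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("Best hyperparameters:", study.best_params)
```
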
Finally, we would love to hear what you think of the course and how we can improve it. If you have any feedback, please 👉 fill out this form.

Keep Learning, stay awesome 🤗
