E4-Unit_2-Introduction_to_Q_Learning-N13-Conclusion

Original course link: https://huggingface.co/deep-rl-course/unit4/conclusion?fw=pt

Conclusion


Congrats on finishing this chapter! There was a lot of information. And congrats on finishing the tutorials. You’ve just implemented your first RL agent from scratch and shared it on the Hub 🥳.


Implementing an algorithm from scratch when you study a new architecture is important for understanding how it works.

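As a quick recap of what you just implemented, here is a minimal sketch of the tabular Q-Learning update at the heart of this chapter. The state and action counts, hyperparameter values, and function name below are illustrative assumptions, not values from the course.

```python
import numpy as np

# Minimal sketch of the tabular Q-Learning update from this chapter.
# n_states, n_actions, alpha, and gamma are illustrative assumptions;
# plug in the values for your own environment.
n_states, n_actions = 16, 4   # e.g. a small FrozenLake-style grid world
alpha = 0.7                   # learning rate
gamma = 0.95                  # discount factor

q_table = np.zeros((n_states, n_actions))

def q_learning_update(state, action, reward, next_state):
    """Apply one TD update:
    Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]
    """
    td_target = reward + gamma * np.max(q_table[next_state])
    td_error = td_target - q_table[state, action]
    q_table[state, action] += alpha * td_error
```

If you can rewrite this update from memory and explain each term, you have grasped the core of the chapter.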

It’s normal if you still feel confused by all these elements. It was the same for me and for everyone who has studied RL.


Take time to really grasp the material before continuing.


In the next chapter, we’re going to dive deeper by studying our first Deep Reinforcement Learning algorithm based on Q-Learning: Deep Q-Learning. And you’ll train a DQN agent with RL-Baselines3 Zoo to play Atari Games.

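For a small, hedged taste of what's ahead, the sketch below trains a DQN on an Atari game using Stable-Baselines3 directly (the library that RL-Baselines3 Zoo wraps behind its training CLI). The environment id, timestep budget, and hyperparameters are illustrative assumptions, not the course's settings.

```python
# Rough preview of the next chapter's topic using Stable-Baselines3
# (RL-Baselines3 Zoo builds on this same library).
# Requires: pip install "stable-baselines3[extra]"
# The env id and hyperparameters below are illustrative assumptions.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("PongNoFrameskip-v4", n_envs=1, seed=0)  # standard Atari preprocessing wrappers
env = VecFrameStack(env, n_stack=4)                           # stack 4 frames so the state captures motion

model = DQN("CnnPolicy", env, buffer_size=100_000, verbose=1)
model.learn(total_timesteps=100_000)  # a real Atari run needs far more steps
```

The next chapter explains each of these pieces (the CNN policy, the replay buffer, frame stacking) in detail, so don't worry if they are unfamiliar for now.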

[Image: Atari environments]

Finally, we would love to hear what you think of the course and how we can improve it. If you have any feedback, please 👉 fill this form.


Keep Learning, stay awesome 🤗
