7-Main_NLP_tasks-7-Mastering_NLP

Original course link: https://huggingface.co/course/chapter7/8?fw=pt

Mastering NLP

If you’ve made it this far in the course, congratulations — you now have all the knowledge and tools you need to tackle (almost) any NLP task with 🤗 Transformers and the Hugging Face ecosystem!

We have seen a lot of different data collators, so we made this little video to help you find which one to use for each task:

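The video itself isn’t embedded in this text version, so as a rough stand-in, here is a minimal sketch of the main collators in 🤗 Transformers and the tasks they typically pair with; the bert-base-cased checkpoint is just a placeholder:

```python
# A minimal sketch of the main 🤗 Transformers data collators;
# the checkpoint name is a placeholder, not prescribed by the course.
from transformers import (
    AutoTokenizer,
    DataCollatorWithPadding,             # sequence classification
    DataCollatorForTokenClassification,  # token classification (NER, POS, ...)
    DataCollatorForLanguageModeling,     # masked or causal language modeling
    DataCollatorForSeq2Seq,              # translation, summarization
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Dynamically pads inputs to the longest sequence in each batch
clf_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Also pads the label sequences, not just the inputs
ner_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

# mlm=True randomly masks tokens for masked language modeling;
# set mlm=False to prepare batches for causal language modeling instead
lm_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)

# Pads decoder labels with -100 so the padding is ignored by the loss
seq2seq_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer)
```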

After completing this lightning tour through the core NLP tasks, you should:

  • Know which architectures (encoder, decoder, or encoder-decoder) are best suited for each task
  • Understand the difference between pretraining and fine-tuning a language model
  • Know how to train Transformer models using either the Trainer API and the distributed training features of 🤗 Accelerate, or TensorFlow and Keras, depending on which track you’ve been following (a minimal Trainer sketch follows this list)
  • Understand the meaning and limitations of metrics like ROUGE and BLEU for text generation tasks (a short metrics example also appears after the list)
  • Know how to interact with your fine-tuned models, both on the Hub and using the pipeline from 🤗 Transformers (the Trainer sketch below ends by reloading the model through a pipeline)
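
To make the Trainer, Hub, and pipeline bullets concrete, here is a minimal fine-tuning sketch on the PyTorch track; the bert-base-cased checkpoint, the GLUE MRPC dataset, and the repository names are illustrative assumptions, not something this chapter prescribes:

```python
# A minimal Trainer sketch; the checkpoint, dataset, and repository names
# are illustrative assumptions, not part of the original chapter.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
    pipeline,
)

raw_datasets = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")


def tokenize_function(examples):
    # MRPC pairs two sentences per example
    return tokenizer(examples["sentence1"], examples["sentence2"], truncation=True)


tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

training_args = TrainingArguments("bert-finetuned-mrpc", push_to_hub=True)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
trainer.push_to_hub()  # upload the fine-tuned model so anyone can load it

# Interact with the uploaded model through a pipeline
# ("your-username" is a placeholder for your own Hub namespace)
classifier = pipeline("text-classification", model="your-username/bert-finetuned-mrpc")
print(classifier({"text": "The company is doing well.", "text_pair": "Business is good."}))
```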

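Since ROUGE and BLEU come up whenever you evaluate generated text, here is a short sketch of computing both with the 🤗 Evaluate library; the prediction and reference strings are toy examples invented for illustration:

```python
# A minimal sketch of BLEU and ROUGE with 🤗 Evaluate;
# the strings below are toy examples, not data from the course.
import evaluate

predictions = ["the cat sat on the mat"]
references = ["the cat is sitting on the mat"]

# SacreBLEU expects a list of reference lists, since each prediction
# may have several valid references
bleu = evaluate.load("sacrebleu")
print(bleu.compute(predictions=predictions, references=[[ref] for ref in references]))

# ROUGE takes one reference string per prediction
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))
```

Both metrics are n-gram overlap statistics, so treat them as rough signals of generation quality rather than ground truth, as the translation and summarization sections discussed.
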
Despite all this knowledge, there will come a time when you’ll either encounter a difficult bug in your code or have a question about how to solve a particular NLP problem. Fortunately, the Hugging Face community is here to help you! In the final chapter of this part of the course, we’ll explore how you can debug your Transformer models and ask for help effectively.
