8-How_to_ask_for_help-6-End-of-chapter_quiz
中英文对照学习,效果更佳!
原课程链接:https://huggingface.co/course/chapter8/7?fw=pt
End-of-chapter quiz
章末测验
Let’s test what you learned in this chapter!
让我们来测试一下你在这一章中学到的东西!
- In which order should you read a Python traceback?
From top to bottom
您应该按照什么顺序阅读一篇Python回溯?从上到下
From bottom to top
自下而上
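To make the bottom-to-top answer concrete, here is a small sketch (the function and file names are made up for illustration): the final line of a traceback names the exception, and the frames above it show the call chain that led there.

```python
def load_config(path):
    # The deepest frame: this is where the failure actually happens
    return open(path).read()


def setup(path):
    return load_config(path)


setup("missing_file.json")
# Expected output (abridged):
#   Traceback (most recent call last):
#     File "example.py", line ..., in <module>
#       setup("missing_file.json")
#     File "example.py", line ..., in setup
#       return load_config(path)
#     File "example.py", line ..., in load_config
#       return open(path).read()
#   FileNotFoundError: [Errno 2] No such file or directory: 'missing_file.json'
#
# Start reading at the bottom: the exception type and message come first,
# then work upwards through the frames to see how execution got there.
```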
- What is a minimal reproducible example?
A simple implementation of a Transformer architecture from a research article
什么是最小的可重现示例?一篇研究文章中的一个简单的Transformer架构实现
A compact and self-contained block of code that can be run without any external dependencies on private files or data
紧凑且自包含的代码块,无需对私有文件或数据的任何外部依赖即可运行
A screenshot of the Python traceback
Python回溯的屏幕截图
A notebook that contains your whole analysis, including parts unrelated to the error
包含全部分析的笔记本,包括与错误无关的部分
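As an illustration of the correct option, a minimal reproducible example is a handful of lines that anyone can copy and run, with no access to your private files or data. A hedged sketch using a public checkpoint (chosen only for illustration):

```python
from transformers import AutoTokenizer

# Everything needed to reproduce the behaviour is inline: a public checkpoint
# and a hard-coded input, so no private files or data are required.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["A short example sentence."], return_tensors="pt")
print(batch["input_ids"].shape)
```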
- Suppose you try to run the following code, which throws an error:
假设您尝试运行以下代码,这将抛出一个错误:
from transformers import GPT3ForSequenceClassification
Which of the following might be a good choice for the title of a forum topic to ask for help?
以下哪一项可能是请求帮助的论坛主题标题的好选择?
ImportError: cannot import name 'GPT3ForSequenceClassification' from 'transformers' (/Users/lewtun/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/__init__.py)
`ImportError: 无法从 'transformers' (/Users/lewtun/miniconda3/envs/huggingface/lib/python3.8/site-packages/transformers/__init__.py) 导入名称 'GPT3ForSequenceClassification'`
Problem with from transformers import GPT3ForSequenceClassification
`from transformers import GPT3ForSequenceClassification` 的问题
Why can’t I import GPT3ForSequenceClassification?
为什么我不能导入 GPT3ForSequenceClassification?
Is GPT-3 supported in 🤗 Transformers?
🤗 Transformers 支持 GPT-3 吗?
- Suppose you’ve tried to run trainer.train() and are faced with a cryptic error that doesn’t tell you exactly where the error is coming from. Which of the following is the first place you should look for errors in your training pipeline?
The optimization step where we compute gradients and perform backpropagation
假设您尝试运行 trainer.train(),但遇到了一个隐晦的错误,无法告诉您错误的确切来源。以下哪一项是您在训练流程中首先应该查找错误的地方?我们计算梯度并执行反向传播的优化步骤
The evaluation step where we compute metrics
评估步骤,我们在其中计算指标
The datasets
数据集
The dataloaders
数据加载器
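Since the datasets are the first place to look, a quick check is to decode one example and eyeball the text and labels before anything reaches the Trainer. A minimal sketch, using a public dataset and checkpoint purely for illustration:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Public dataset and tokenizer used only to illustrate the check
raw_datasets = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Look at one raw example and its tokenization: wrong columns, truncated text,
# or out-of-range labels here are the usual cause of cryptic Trainer errors.
example = raw_datasets["train"][0]
print(example)
encoded = tokenizer(example["sentence1"], example["sentence2"], truncation=True)
print(tokenizer.decode(encoded["input_ids"]))
```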
- What is the best way to debug a CUDA error?
Post the error message on the forums or GitHub.
调试CUDA错误的最佳方法是什么?在论坛或GitHub上发布错误消息。
Execute the same code on the CPU.
在CPU上执行相同的代码。
Read the traceback to find out what caused the error.
阅读回溯以找出导致错误的原因。
Reduce the batch size.
减少批次大小。
Restart the Jupyter kernel.
重新启动Jupyter内核。
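To see why running on the CPU is the recommended answer: CUDA errors are raised asynchronously, so the GPU traceback rarely points at the real culprit, whereas on the CPU you get the exact failing line. A minimal sketch that deliberately triggers a common failure (an out-of-range label); the checkpoint and inputs are illustrative only:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

batch = tokenizer(["debugging on the cpu"], return_tensors="pt")
# A label of 2 is out of range for a 2-class head. On the GPU this typically
# surfaces as a vague "CUDA error: device-side assert triggered"; on the CPU
# the traceback points straight at the failing loss computation.
batch["labels"] = torch.tensor([2])

model.to("cpu")
outputs = model(**batch)  # raises a clear IndexError about the target being out of bounds
```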
- What is the best way to get an issue on GitHub fixed?
Post a full reproducible example of the bug.
修复GitHub上的问题的最好方法是什么?发布一个完整的可重现的错误示例。
Ask every day for an update.
每天询问最新情况。
Inspect the source code around the bug and try to find the reason why it happens. Post the results in the issue.
检查出错位置附近的源代码,并尝试找出错误发生的原因,然后将结果发布在该 issue 中。
- Why is overfitting to one batch usually a good debugging technique?
It isn’t; overfitting is always bad and should be avoided.
为什么对一个批次过拟合通常是一种好的调试技术?事实并非如此;过拟合总是不好的,应该避免。
It allows us to verify that the model is able to reduce the loss to zero.
它使我们能够验证模型能够将损失降低到零。
It allows us to verify that the tensor shapes of our inputs and labels are correct.
它允许我们验证输入和标签的张量形状是正确的。
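A common way to run the overfit-one-batch check: train repeatedly on a single fixed batch and confirm the loss collapses towards zero; if it cannot, the problem lies in the model or the data, not in the amount of training. A minimal sketch with a toy model (all names and sizes are illustrative):

```python
import torch
from torch import nn

# Toy classifier and one fixed batch, just to show the pattern
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
inputs = torch.randn(8, 10)
labels = torch.randint(0, 2, (8,))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train on the same batch over and over: a healthy setup drives the loss towards zero
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")  # should be close to zero if the pipeline is wired correctly
```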
- Why is it a good idea to include details on your compute environment with transformers-cli env when creating a new issue in the 🤗 Transformers repo?
It allows the maintainers to understand which version of the library you’re using.
为什么在 🤗 Transformers 仓库中创建新 issue 时,使用 transformers-cli env 附上您的计算环境的详细信息是一个好主意?这可以让维护人员了解您使用的库的版本。
It allows the maintainers to know whether you’re running code on Windows, macOS, or Linux.
它可以让维护人员知道您是在 Windows、macOS 还是 Linux 上运行代码。
It allows the maintainers to know whether you’re running code on a GPU or CPU.
它可以让维护人员知道您是在 GPU 还是 CPU 上运行代码。
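For reference, the environment report the question refers to comes from running transformers-cli env in the terminal where your code fails; it prints the library, Python, and platform details that the issue template asks for. A small sketch that captures the same report from Python (wrapping the CLI call purely for illustration):

```python
import subprocess

# Run the same command you would type in a terminal: `transformers-cli env`.
# Its output (transformers and framework versions, Python version, platform,
# GPU availability) is what you paste into the environment section of the issue.
report = subprocess.run(["transformers-cli", "env"], capture_output=True, text=True)
print(report.stdout)
```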
