9-Building_and_sharing_demos-4-Integrations_with_the_Hugging_Face_Hub

Original course link: https://huggingface.co/course/chapter9/5?fw=pt

Integrations with the Hugging Face Hub


To make your life even easier, Gradio integrates directly with Hugging Face Hub and Hugging Face Spaces.
You can load demos from the Hub and Spaces with only one line of code.


Loading models from the Hugging Face Hub


To start with, choose one of the thousands of models Hugging Face offers through the Hub, as described in Chapter 4.
Using the special Interface.load() method, you pass "model/" (or, equivalently, "huggingface/")
followed by the model name.
For example, here is the code to build a demo for GPT-J, a large language model, with a couple of example inputs:

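The "model/" and "huggingface/" prefixes are interchangeable, so the two calls in this minimal sketch load the same model; the full demo below adds a title, description, article, and example inputs on top of it:

import gradio as gr

# Both prefixes point to the same model on the Hub, so these lines are equivalent
demo = gr.Interface.load("model/EleutherAI/gpt-j-6B")
demo = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
demo.launch()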

import gradio as gr

title = "GPT-J-6B"
description = "Gradio Demo for GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. 'GPT-J' refers to the class of model, while '6B' represents the number of trainable parameters. To use it, simply add your text, or click one of the examples to load them. Read more at the links below."
article = "<p style='text-align: center'><a href='https://github.com/kingoflolz/mesh-transformer-jax' target='_blank'>GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model</a></p>"
examples = [
    ["The tower is 324 metres (1,063 ft) tall,"],
    ["The Moon's orbit around Earth has"],
    ["The smooth Borealis basin in the Northern Hemisphere covers 40%"],
]

gr.Interface.load(
    "huggingface/EleutherAI/gpt-j-6B",
    inputs=gr.Textbox(lines=5, label="Input Text"),
    title=title,
    description=description,
    article=article,
    examples=examples,
    enable_queue=True,
).launch()

The code above will produce the interface below:


Loading a model in this way uses Hugging Face’s Inference API,
instead of loading the model in memory. This is ideal for huge models like GPT-J or T0pp which
require lots of RAM.

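To make that distinction concrete, here is a rough sketch of what a direct call to the hosted Inference API looks like for the same model; the need for an API token and the exact response format are assumptions to check against the Inference API documentation, and Gradio handles this request for you when you use Interface.load():

import requests

# Hosted Inference API endpoint for the model (no weights are downloaded locally)
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"
headers = {"Authorization": "Bearer <your-hf-token>"}  # replace with your own token

response = requests.post(
    API_URL,
    headers=headers,
    json={"inputs": "The tower is 324 metres (1,063 ft) tall,"},
)
print(response.json())  # typically a list such as [{"generated_text": "..."}]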

Loading from Hugging Face Spaces


To load any Space from the Hugging Face Hub and recreate it locally, you can pass "spaces/" followed by the name of the Space to Interface.load().
Remember the demo from section 1 that removes the background of an image? Let's load it from Hugging Face Spaces:


gr.Interface.load("spaces/abidlabs/remove-bg").launch()

One of the cool things about loading demos from the Hub or Spaces is that you can customize them by overriding any of the parameters. Here, we add a title and get it to work with a webcam instead:


gr.Interface.load(
    "spaces/abidlabs/remove-bg", inputs="webcam", title="Remove your webcam background!"
).launch()
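The same idea extends to the other Interface parameters. As an illustration (the wording here is made up, not taken from the original Space), this sketch keeps the Space's original image input but overrides the title and description:

import gradio as gr

gr.Interface.load(
    "spaces/abidlabs/remove-bg",
    title="Remove the background from any image",
    description="Upload a photo and get it back with the background removed.",
).launch()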

Now that we’ve explored a few ways to integrate Gradio with the Hugging Face Hub, let’s take a look at some advanced features of the Interface class. That’s the topic of the next section!
