
Original course link: https://huggingface.co/course/chapter9/2?fw=pt

Building your first demo


Let’s start by installing Gradio! Since it is a Python package, simply run:


$ pip install gradio


You can run Gradio anywhere, whether from your favourite Python IDE, in a Jupyter notebook, or even in Google Colab 🤯! So install Gradio wherever you run Python!

Let’s get started with a simple “Hello World” example to get familiar with the Gradio syntax:


import gradio as gr


def greet(name):
    return "Hello " + name


demo = gr.Interface(fn=greet, inputs="text", outputs="text")

demo.launch()

Let’s walk through the code above:


  • First, we define a function called greet(). In this case, it is a simple function that adds “Hello” before your name, but it can be any Python function in general. For example, in machine learning applications, this function would call a model to make a prediction on an input and return the output.
  • Then, we create a Gradio Interface with three arguments, fn, inputs, and outputs. These arguments define the prediction function, as well as the type of input and output components we would like. In our case, both components are simple text boxes.
  • We then call the launch() method on the Interface that we created.

If you run this code, the interface below will appear automatically within a Jupyter/Colab notebook, or pop up in a browser at http://localhost:7860 if you run it from a script.


Try using this GUI right now with your own name or some other input!


You’ll notice that in this GUI, Gradio automatically inferred the name of the input parameter (name)
and applied it as a label on top of the textbox. What if you’d like to change that?
Or if you’d like to customize the textbox in some other way? In that case, you can
instantiate a class object representing the input component.


Take a look at the example below:


import gradio as gr


def greet(name):
    return "Hello " + name


# We instantiate the Textbox class
textbox = gr.Textbox(label="Type your name here:", placeholder="John Doe", lines=2)

gr.Interface(fn=greet, inputs=textbox, outputs="text").launch()

Here, we’ve created an input textbox with a label, a placeholder, and a set number of lines.
You could do the same for the output textbox, but we’ll leave that for now.


We’ve seen that with just a few lines of code, Gradio lets you create a simple interface around any function
with any kind of inputs or outputs. In this section, we’ve started with a
simple textbox, but in the next sections, we’ll cover other kinds of inputs and outputs. Let’s now take a look at including some NLP in a Gradio application.


🤖 Including model predictions


Let’s now build a simple interface that allows you to demo a text-generation model like GPT-2.


We’ll load our model using the pipeline() function from 🤗 Transformers.
If you need a quick refresher, you can go back to that section in Chapter 1.


First, we define a prediction function that takes in a text prompt and returns the text completion:


from transformers import pipeline

model = pipeline("text-generation")


def predict(prompt):
    completion = model(prompt)[0]["generated_text"]
    return completion

This function completes prompts that you provide, and you can run it with your own input prompts to see how it works. Here is an example (you might get a different completion):


predict("My favorite programming language is")

>> My favorite programming language is Haskell. I really enjoyed the Haskell language, but it doesn't have all the features that can be applied to any other language. For example, all it does is compile to a byte array.

Now that we have a function for generating predictions, we can create and launch an Interface in the same way we did earlier:


import gradio as gr

gr.Interface(fn=predict, inputs="text", outputs="text").launch()

That’s it! You can now use this interface to generate text using the GPT-2 model as shown below 🤯.


Keep reading to see how to build other kinds of demos with Gradio!
