Text Generation with Transformers (GPT-2)

In this tutorial, we will learn how to generate text from an input sentence with the help of OpenAI's GPT-2, using a Gradio interface.

Introduction: Text Generation

Text generation is a core task in natural language processing, underlying applications such as speech-to-text, conversational systems, and text synthesis. A trained text generation model learns the likelihood that a word occurs based on the preceding sequence of words in the text.

What is GPT-2?

GPT-2 is a self-supervised transformer model trained on a huge corpus of English text. This means it was pre-trained on raw text only, with no human labeling (which is why it can use so much publicly available data), with an automated process generating inputs and labels from those texts. Specifically, it was trained to predict the next word in a sentence.

In more technical terms, inputs are continuous text sequences of a certain length, while targets are the same sequences shifted one token (a word or piece of a word) to the right. To ensure that the prediction for token i uses only the inputs from 1 to i and not future tokens, the model employs a masking mechanism.
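To make the shifted-targets idea concrete, here is a small illustrative sketch. The "tokens" here are plain words chosen for illustration; a real tokenizer would produce numeric token ids.

```python
# Illustrative only: how a causal language model's inputs and targets relate.
tokens = ["The", "cat", "sat", "on", "the", "mat"]

inputs = tokens[:-1]   # the model sees tokens 1..n-1
targets = tokens[1:]   # and must predict tokens 2..n (shifted one to the right)

for i in range(1, len(inputs) + 1):
    # at position i, the mask lets the model attend only to inputs[0..i-1]
    context = inputs[:i]
    print(f"context={context} -> predict {targets[i - 1]!r}")
```

Each position predicts exactly one target token, and the mask guarantees the prediction never "peeks" at future tokens.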

What is Gradio?

Gradio is a Python toolkit that lets you quickly create easy-to-use, configurable UI components for any machine learning model, any API, or any arbitrary function in only a few lines of code. It lets you interact with your models right in the web browser, for example by dragging and dropping images, typing text, or recording your own voice, and seeing the results in real time. You can either embed the GUI directly in your Python notebook or share the URL with anyone.



Import Libraries

Let’s import all the required Python libraries.

import gradio as gd
import tensorflow as tf
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

Loading Model

Let’s load the pre-trained GPT-2 XL model from the transformers library.

Important key terms

  • GPT2Tokenizer:  the GPT-2 tokenizer has been trained to treat spaces as parts of tokens (similar to SentencePiece), so a word is encoded differently depending on whether or not it is at the start of a sentence (i.e. preceded by a space).

  • eos_token_id:  The id of the end-of-sequence token.




Generate Text

Let’s create a function that takes a sentence as input and performs the following steps:


  1. Encode the input context.
  2. Generate output sequences using beam search decoding (5 beams).
  3. Decode the generated output.

Important key terms

  • input_ids: In many cases, the input ids are the only parameters that must be supplied to the model as input. They are token indices: numerical representations of the tokens that make up the sequence the model will use as input.
  • num_beams:  Number of beams used for beam search.
  • skip_special_tokens (bool, optional, defaults to False): Whether or not to remove special tokens during decoding.
  • clean_up_tokenization_spaces (bool, optional, defaults to True):  Whether or not to clean up the tokenization spaces.
  • return_tensors: Which framework's tensors to return ("tf" for TensorFlow, "pt" for PyTorch).
  • tokenizer.decode: Converts a list of token ids back into a string.

def generate_text(inp):
  # 1. Encode the input context as token ids.
  input_ids = tokenizer.encode(inp, return_tensors="tf")
  # 2. Generate a sequence using beam search decoding (5 beams).
  beam_output = model.generate(input_ids, max_length=100, num_beams=5,
                               no_repeat_ngram_size=2, early_stopping=True)
  # 3. Decode the generated output back into text.
  output = tokenizer.decode(beam_output[0], skip_special_tokens=True,
                            clean_up_tokenization_spaces=True)
  return ".".join(output.split(".")[:-1]) + "."
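The last line trims any unfinished sentence from the end of the generated text, keeping only whole sentences. On a plain string the trick works like this:

```python
# Drop the trailing fragment after the final period, keeping whole sentences.
output = "First sentence. Second sentence. An unfinished fragm"
trimmed = ".".join(output.split(".")[:-1]) + "."
print(trimmed)  # -> "First sentence. Second sentence."
```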


Calling Function in Gradio

Let’s call the function we created earlier in a Gradio Interface to generate text from an input sentence.


  • Create a textbox to render the output text.
  • Define title = “Text Generator” in the Interface.
  • Define a description in the Interface.
  • Launch the web server that serves the UI for the interface.

output_text = gd.outputs.Textbox()
gd.Interface(generate_text, "textbox", output_text, title="Text Generator",
             description="GPT-2 is an unsupervised language model that "
                         "can generate coherent text. Go ahead and input a "
                         "sentence and see what it generates! Takes around "
                         "20s to run.").launch(debug=True)

