How to build your first LLM Application using Google Gemini Pro and Streamlit

A. Raphael
5 min read · May 2, 2024


Building your first LLM application can feel daunting, and that can lead to procrastination. In this blog post, a simple application is presented to help you get started in this lucrative field. It will take you less than 30 minutes to build. The application is a “QnA”, or question-and-answer, application.

The process for building this application is divided into 8 steps:

1. Creation of a virtual environment

2. Installation of needed libraries

3. Importation of libraries into the development environment

4. Getting the Gemini Pro API

5. Loading the model

6. Creating a function to get a response from the Gemini Pro model

7. Creating a Streamlit application

8. Testing the Application

Credit for this beautiful application goes to Krish Naik; you can check out his YouTube channel.

Creation of Virtual Environment

A virtual environment is needed to keep packages used for a particular project separate from those used in other projects. This is because sometimes packages might collide, leading to issues with application functionality and speed. It keeps the system clean and organized, making it easy to access and upgrade any packages/libraries.

The virtual environment used in this project was called “qna” and the following steps were used to create it:

i. Open your terminal (preferably the Anaconda Prompt)

ii. Use this command to create a virtual environment called “qna”.

conda create -p qna -y

You can replace the “qna” with a name of your choice.

iii. Activate the virtual environment

To activate the virtual environment, use:

conda activate ./qna

Installation of needed libraries

The libraries required for this project are Streamlit and “google-generativeai”. Streamlit is needed to create the front and back end of the application, while “google-generativeai” is needed to interact with the Gemini Pro model API. There are two ways to install these libraries: either by creating a “requirements.txt” file or by installing each library one after the other on the command line.

To install using the “requirements.txt” file, follow these steps:

i. Create a “requirements.txt” file and list these two libraries in it.

ii. On the command prompt, type in “pip install -r requirements.txt”; this will install all the needed libraries.
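For reference, the “requirements.txt” file contains just the two library names, one per line:

streamlit
google-generativeai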

To install them one after the other, run “pip install” followed by each library name:

pip install streamlit
pip install google-generativeai

Importation of libraries into the development environment

To import either of these libraries, use the keyword “import” followed by the name of the package, as can be seen below.

import streamlit as st
import google.generativeai as g_gen

“streamlit” was imported as “st”, and “google.generativeai” was imported as “g_gen”. The “as” keyword creates an alias, and these aliases are used for brevity.

Getting the Gemini Pro API

To get the Google Gemini Pro API, follow these steps:

i. Visit this website: https://aistudio.google.com/app/apikey

ii. A pop-up will come up with two options: “New Prompt” or “Get API Key”. Click on “Get API Key”. A legal notice will pop up; read it (or skim it), check all the boxes, and click “Continue”.

iii. Click on “Create API Key” and “Create API Key In New Project”.

iv. Copy the generated API key and store it somewhere easily accessible.

To use the API in the development environment, use the code below

g_gen.configure(api_key="place your api here ")

The “g_gen” is the alias for “google.generativeai” that was imported, and “configure” is the method used to register the API key with the library. Replace “place your api here” with the API key you copied from the website in step (iv).
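Hardcoding the key in source code is risky, especially if the code is shared or pushed to GitHub. A common alternative is to read it from an environment variable. The sketch below is my own addition, not part of the original tutorial, and assumes a variable named “GOOGLE_API_KEY” (any name works):

```python
import os

def load_api_key(var_name="GOOGLE_API_KEY"):
    # Read the key from an environment variable instead of hardcoding it
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable before running the app")
    return key

# Then pass it to configure as before:
# g_gen.configure(api_key=load_api_key())
```

Set the variable once in your terminal (e.g. `set GOOGLE_API_KEY=...` on Windows or `export GOOGLE_API_KEY=...` on Linux/macOS) and the key never appears in your code.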

Loading the model

Use the code below to load the model.

gemini_model = g_gen.GenerativeModel("gemini-pro")

The reason “gemini-pro” is specified is that there is another model, “gemini-pro-vision”, meant for computer-vision projects; in this case, we are dealing with a natural language (NLP) project.

Creating a function to get a response from the Gemini Pro model

A function called “generate_response” was developed to get responses from the Gemini Pro model. “res” is the variable used to store the response, and “res.text” extracts the text part of the response. The code is presented below:

def generate_response(qtn):
    res = gemini_model.generate_content(qtn)
    text = res.text
    return text
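Because “gemini_model” is a global, this function is hard to try out without a live API key. One way to sketch a more testable variant (my own refactor, not from the original post) is to pass the model in as a parameter, so that any object exposing a “generate_content” method, including a simple offline stub, can stand in for the real client:

```python
from types import SimpleNamespace

def generate_response(qtn, model):
    # Works with the real Gemini client or any stub exposing generate_content()
    res = model.generate_content(qtn)
    return res.text

class EchoModel:
    # A stand-in for the Gemini client, useful for trying the function offline
    def generate_content(self, prompt):
        # Mimic the response object's .text attribute
        return SimpleNamespace(text="You asked: " + prompt)
```

With the real client you would call `generate_response(qtn, gemini_model)`; in an offline check, `generate_response("hi", EchoModel())` returns `"You asked: hi"`.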

Creating a Streamlit application

The application creation section can be divided into these steps:

i. Naming the web app tab

ii. Adding the title

iii. Defining the input

iv. Adding the button

v. Getting model response

i. Naming the web app tab

st.set_page_config(page_title="QnA")

This code is used to name the tab or the web page title

ii. Adding the title

st.title("QnA Application")

This is used to add a title to the application

iii. Defining the input

input = st.text_input("Enter your question(s): ")

This is used to get input or questions from the user for the model. (Note that the name “input” shadows Python’s built-in input() function; a name like “question” would be safer.)

iv. Adding the button

submit = st.button("Get Answer")

This is a submit button used for triggering the model response

v. Getting model response

if submit:
    try:
        output = generate_response(input)
        st.write(output)
    except Exception:
        st.write("Please enter a question.")

To get the model response, an if condition is used. When “submit” evaluates to true (the button was clicked), the code under it runs: the function “generate_response” is called, the text response is returned, and “st.write()” displays the output on the web app. The try/except block catches errors, for example when no question is entered.
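Rather than relying on the except block to hide failures, a small guard can skip the model call when the question is empty. This helper is a hypothetical addition, not part of the original app:

```python
def has_question(text):
    # True only when the user typed something other than whitespace
    return bool(text and text.strip())

# In the app, this would wrap the model call, e.g.:
# if submit and has_question(input):
#     st.write(generate_response(input))
```

This way, the except block is reserved for genuine API errors instead of doubling as empty-input handling.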

Testing the Application

To run the Streamlit app, use “streamlit run <app name>.py”, e.g.:

streamlit run appname.py

Copy the local URL and paste it into your browser; if everything is done properly, the app will be displayed.

To test the app, type in any question on your mind and get a response from the model.

Conclusion

Getting started with building an LLM application does not have to be complex or time-consuming. You can start with a basic question-and-answer application, which is itself very powerful. This application is like a mini ChatGPT or Bard and can answer almost any general question.

Finally, I want to say thank you for reading. Please leave a clap and follow for more posts like this.
