How to Build Reliable LLM Applications with Phidata?

Ajay Kumar Reddy 01 Jun, 2024
14 min read

Introduction

With the introduction of Large Language Models, their use across applications has grown rapidly. LLMs are now part of most recent applications built across many problem statements, and much of the NLP space, including chatbots, sentiment analysis, topic modelling, and more, is handled by them. Working directly with these LLMs can often be difficult, which has led to tools like LangChain and LlamaIndex that help simplify creating applications with Large Language Models. In this guide, we will look at one such tool, called Phidata, which simplifies building real-world applications with LLMs.


Learning Objectives

  • Understand the basics and purpose of Phidata for building LLM applications
  • Learn how to install Phidata and its dependencies
  • Gain proficiency in creating and configuring LLM assistants using Phidata
  • Explore integrating various tools with LLMs to enhance their capabilities
  • Develop skills to create structured outputs using Pydantic models within Phidata

This article was published as a part of the Data Science Blogathon.

What is Phidata?

Phidata is a popular Python library built to create real-world applications with Large Language Models by integrating them with Memory, Knowledge, and Tools. With Phidata, one can add memory to an LLM, such as chat history, or combine the LLM with a Knowledge Base, i.e., a vector store, to build RAG (Retrieval Augmented Generation) systems. Finally, LLMs can even be paired with Tools, which let them perform tasks beyond their native capabilities.

Phidata is integrated with different Large Language Models like OpenAI, Groq, and Gemini, and even supports open-source Large Language Models through Ollama. Similarly, it supports the OpenAI and Gemini embedding models and other open-source embedding models through Ollama. Currently, Phidata supports a few popular vector stores like Pinecone, PGVector, Qdrant, and LanceDB. Phidata even integrates with popular LLM libraries like LangChain and LlamaIndex.
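To get a feel for the Knowledge Base integration mentioned above, here is a minimal sketch of an Assistant backed by a vector store. Treat this as an assumption-heavy illustration: it presumes a locally running PGVector (Postgres) instance, and the module paths (phi.knowledge.pdf, phi.vectordb.pgvector) and the sample PDF URL follow the Phidata documentation of this period, so check the docs for your installed version.

from phi.assistant import Assistant
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector2

# Assumes a PGVector-enabled Postgres instance is running locally
db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

# Build a knowledge base from a PDF; the URL is a sample from the Phidata docs
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
    vector_db=PgVector2(collection="recipes", db_url=db_url),
)
knowledge_base.load(recreate=False)  # embed and store the documents

# The Assistant retrieves relevant chunks from the knowledge base (RAG)
assistant = Assistant(
    knowledge_base=knowledge_base,
    add_references_to_prompt=True,  # inject retrieved references into the prompt
)
assistant.print_response("How do I make Pad Thai?", markdown=True)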

Getting Started with Phidata Assistants

In this section, we will see how to download the Phidata Python library and get started with it. Along with Phidata, we will need a few other Python libraries, including Groq, DuckDuckGo Search, and OpenAI. For this, we run the following command:

!pip install -q -U phidata groq==0.7 duckduckgo-search openai

Code Explanation

  • Phidata: This is the library we will work with to create LLM Applications
  • groq: Groq is a company known for building custom hardware, called the LPU (Language Processing Unit), designed to run LLMs at high speed. This library will let us access the Mixtral model running on Groq's LPUs
  • duckduckgo-search: With this library, we can search the internet. The Large Language Model can leverage this library to perform internet searches
  • openai: This is the official library from OpenAI to work with the latest GPT models

Setting Up API Keys

Running this will download all the Python modules that we need. Before starting, we need to store the API keys in the environment so that Phidata can access them to work with the underlying models. For this, we do the following:

import os

# Replace the placeholders below with your actual API keys
os.environ['GROQ_API_KEY'] = "<YOUR_GROQ_API_KEY>"
os.environ['OPENAI_API_KEY'] = "<YOUR_OPENAI_API_KEY>"

You can get a free API key from Groq's official website. With this key, we can access the Mixtral Mixture of Experts model. By default, Phidata works with the OpenAI models, so we also provide the OpenAI API key. Running the code will save the Groq and OpenAI API keys to the environment. Now let us get started with the Phidata library.

from phi.assistant import Assistant

assistant = Assistant(
    description="You are a helpful AI Assistant who answers every user queries",
)

assistant.print_response("In short, explain blackholes to a 7 year old?", markdown=True)

Code Explanation

  • We start by importing the Assistant class from the phi.assistant module
  • Then, we instantiate an Assistant by providing a description. An Assistant wraps a Large Language Model; we can give it a description, a System Prompt, and other LLM configurations to define its behaviour
  • We can call the LLM through the .print_response() method, to which we pass the user query along with another parameter, markdown=True

Running the code will define an Assistant with an OpenAI GPT model as the LLM, pass the user query to this Large Language Model, and generate the output. The output will be formatted neatly and, by default, generated in a streaming fashion. We can see the output below.
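If we prefer the complete response in one go instead of the token-by-token stream, print_response() also accepts a stream flag, which we will use again later in this guide:

# Disable streaming to receive the full response at once
assistant.print_response("In short, explain blackholes to a 7 year old?", markdown=True, stream=False)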


We see the response generated in a well-formatted style, along with the user question and the answer produced by the Large Language Model. Setting markdown=True is what makes the output print in a readable format. Now, if we wish to change the OpenAI model that we work with, we can check the code below.

Changing the OpenAI Model

Here’s the code:

from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

assistant = Assistant(
    llm=OpenAIChat(model="gpt-3.5-turbo"),
    description="You help people with providing useful answers to their queries",
    instructions=["List should contain only 5"],
    max_tokens = 512
)

assistant.print_response("List some of top cricketers in the world", markdown=True)

Code Explanation

  • We start by importing the Assistant class from phi.assistant, and we also import OpenAIChat from phi.llm.openai
  • Now, we again create an Assistant object. We give it a parameter called llm, to which we provide the OpenAIChat() class with the model specified; here it is GPT-3.5
  • We also give some more parameters, like instructions, to which we pass the instructions we want the LLM to follow, and max_tokens, to limit the length of the LLM's generation
  • Finally, we call .print_response() with the user query

Running the code will generate an output, which we can see below. The OpenAI model has indeed followed the instructions we gave it: it produced only 5 elements in the list, which aligns with our instructions. Apart from OpenAI, Phidata integrates with other large language models. Let us try working with a Groq-hosted model with the following code.

Working With Groq Model

Here’s the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    description="You help people with providing useful answers to their queries",
    max_tokens = 512
)

response = assistant.run('How much height can a human fly?', stream=False)
print(response)

Code Explanation

  • We start by importing the Assistant class from phi.assistant, and we also import Groq from phi.llm.groq
  • Now, we again create an Assistant object. We give it a parameter called llm, to which we provide the Groq() class with the model, which is the Mixtral Mixture of Experts
  • We also give a description and max_tokens while creating the Assistant object
  • Here, instead of .print_response(), we call the .run() function, which returns the output as a plain string rather than printing it in markdown format
  • We also set stream to False so we can see the complete output in one go

Running the code will produce the output we can see above. The response generated by Mixtral-8x7B looks good: the model recognizes that the user query makes no sense and still provides the right answer to it. Sometimes, we need the output generated by the LLM to follow a structure, i.e., we want it in a structured format so it becomes easy to take this output and post-process it in downstream code. Phidata easily supports this, as we can see in the following example.

Let’s Create a Travel Itinerary Using Phidata

Here’s the code:

from typing import List
from pydantic import BaseModel, Field
from phi.assistant import Assistant
from phi.llm.groq import Groq


class TravelItinerary(BaseModel):
    destination: str = Field(..., description="The destination of the trip.")
    duration: int = Field(..., description="The duration of the trip in days.")
    travel_dates: List[str] = Field(..., description="List of travel dates in YYYY-MM-DD format.")
    activities: List[str] = Field(..., description="Planned activities for the trip.")
    accommodation: str = Field(..., description="Accommodation details for the trip.")
    budget: float = Field(..., description="Estimated budget for the trip.")
    travel_tips: str = Field(..., description="Useful travel tips for the destination.")

travel_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    description="You help people plan travel itineraries.",
    output_model=TravelItinerary,
)

print(travel_assistant.run("New York"))

Code Explanation

  • We start by importing a few libraries, which include Pydantic, typing, and our Assistant
  • Then, we define our structured output. For this, we create a class; in our example, a TravelItinerary class, which inherits from the Pydantic BaseModel
  • In this class, we declare different variables that hold different pieces of travel-related information
  • For each variable, we provide the data type it stores and, through the Field object from Pydantic, a description of what it is expected to hold
  • Finally, we create a travel assistant object by calling the Assistant class and giving it all the parameters it needs, which include the LLM, the description, and the output_model parameter, which takes in our Pydantic model

Now, we call the .run() function of the assistant and give it a location name to get the travel itinerary. Running this will produce the output below.


We can see that the output generated from the assistant is a Pydantic object, i.e., an instance of the TravelItinerary class we defined. Each variable in this class is filled in according to the descriptions given while creating the class. The Mixtral MoE 8x7B has done a good job of filling the TravelItinerary object with the right values. It has provided us with the destination, travel dates, duration of stay, and a list of activities to perform at that location. Along with that, it even provides travel tips to help save money.
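Because the response is a plain Pydantic object, its fields can be accessed directly for downstream processing. A minimal sketch, assuming the same travel_assistant defined above:

itinerary = travel_assistant.run("New York")

# The fields declared in TravelItinerary are regular attributes
print(itinerary.destination)  # e.g., "New York"
print(itinerary.budget)       # estimated budget as a float
for activity in itinerary.activities:
    print("-", activity)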

Working with Tools and Building Custom Tools

Large Language Models by themselves are quite limited in usability: they can only generate text. But there are cases where one wants to fetch the latest news or retrieve information that requires an API call. In these scenarios, we need tools.

Tools extend what an LLM can do. LLMs can work with tools to perform actions that are impossible for vanilla LLMs. These tools can be API tools, which the LLM can call to fetch information, or math tools, which the LLM can use to perform mathematical operations.

Built-in Tools in Phidata

Phidata already has some built-in tools that LLMs can work with. Let us try one with the following code:

from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
    description="You are a senior BBC researcher writing an article on a topic.",
    max_tokens=1024
)

assistant.print_response("What is the lastest LLM from Google?", markdown=True, stream = False)

Code Explanation

  • We start by importing the Assistant class from phi.assistant and, for the tool, the DuckDuckGo tool from phi.tools.duckduckgo
  • Next, we instantiate an Assistant object by calling the Assistant class with different parameters. These include the llm parameter, where we provide the Groq model, along with the description and max_tokens parameters
  • We also provide the tools parameter, to which we pass a list of tools; here, the list contains a single element, DuckDuckGo
  • To check whether the Assistant has called the tool, we can pass another parameter, show_tool_calls=True, which will display any tool call that is made
  • Finally, we call the .print_response() function with a user query asking for information about the latest Google LLMs

Running this code will create an assistant with the DuckDuckGo tool. When we call the LLM with the user query "What is the latest LLM from Google?", the LLM decides whether to make a function call. The output shows that the LLM made a function call to the DuckDuckGo search tool with the query "latest LLM from Google."

The results fetched from this function call are fed back to the model along with the original query so that the model can work with this data and generate the final response, which it did for our query. It generated the right response about the latest LLMs from Google: Gemini 1.5 Flash and PaliGemma, which the Google team had recently announced. Besides relying on the built-in Phidata tools, we can also create our own custom tools. One example of this can be seen below.

Creating a Custom Tool

Here’s the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

def power(Base: float, Exponent: float) -> str:
    "Raise Base to the Exponent power"
    return str(Base**Exponent)

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    tools=[power],
    show_tool_calls=True)

assistant.print_response("What is 13 to the power 9.731?", stream = False)

Code Explanation

  • Here, we start by importing the Assistant class from the Phidata library, along with Groq for the LLM
  • Then we define a function called power(), which takes in two floating-point numbers, Base and Exponent, and returns Base raised to the Exponent power
  • We return a string because Phidata tools expect their output in string format
  • Then, we create an assistant object and pass this new tool, as a list, to the tools parameter along with the other parameters
  • Finally, we call the assistant with a math query related to the function

Running this code produced the following output. We can see that the model performed a function call to the power function, passing it the correct arguments taken from the user query. It then takes in the response returned by the function, and the assistant generates the final response to the user's question. We can even give multiple functions to the LLM and let the assistant call these functions multiple times, as in the code below.

Verifying Tool Invocation

Here’s the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq

def power(Base: float, Exponent: float) -> str:
    "Raise Base to the Exponent power"
    return str(Base**Exponent)

def division(a: float, b: float) -> str:
    "Divide a by b"
    return str(a/b)

assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    tools=[power, division],
    show_tool_calls=True)

assistant.print_response("What is 10 power 2.83 divided by 7.3?", stream = False)

In this code, we have defined two functions. The first takes in two floats, Base and Exponent, and returns a string containing Base raised to the Exponent. The second is a division function: given two floats, a and b, it returns a string of a/b.

Then, we create an assistant object with Groq for the Large Language Model and pass these two tools as a list to the tools parameter while creating it. Finally, we provide a query related to these tools to check their invocation: "What is 10 power 2.83 divided by 7.3?"

From the output below, we can see that two tools get called. The first is the power tool, which is called with the appropriate arguments. The answer from the power tool is then passed as one of the arguments when the second tool is called. Finally, the answer generated by the second function call is given to the LLM so that it can generate the final response to the user query.
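As a quick sanity check of what the two chained tool calls compute, we can call the functions directly in plain Python:

# First tool call: 10 raised to the power 2.83
intermediate = float(power(10, 2.83))  # ≈ 676.1
# Second tool call: divide that result by 7.3
print(division(intermediate, 7.3))     # ≈ 92.6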


Phidata even lets us create a CLI app with the Assistant. We can create such an application with the following code.

Creating a CLI Application

Here’s the code:

from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(llm=Groq(model="mixtral-8x7b-32768"),
                      tools=[DuckDuckGo()],
                      show_tool_calls=True,
                      read_chat_history=True)
assistant.cli_app(markdown=True)
  • We start by importing the Assistant class and the DuckDuckGo tool from the Phidata library
  • Then, we create an assistant object by calling the Assistant class and giving it the LLM and the tools that we wish to work with
  • We also set read_chat_history to True, which allows the model to read the chat history when needed
  • Finally, we call the .cli_app() function of the assistant object and set markdown to True to force the model to provide a markdown response

Running the above code will create a terminal app where we can chat with the Mixtral model. The conversation can be seen in the image below.


In the first reply, Mixtral called the DuckDuckGo function to get information about the latest models created by Google. In the second turn, we can notice that Mixtral invoked the get_chat_history function, which was necessary given the user query, and it was able to answer the query correctly.
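For situations where the full terminal app is not needed, a rough hand-rolled equivalent of this chat loop can be written with the run() method we used earlier. This is only a sketch, assuming the same assistant object defined above:

# A minimal chat loop: type 'exit' or 'quit' to stop
while True:
    query = input("You: ")
    if query.strip().lower() in ("exit", "quit"):
        break
    response = assistant.run(query, stream=False)
    print("Assistant:", response)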

Building a Team of Assistants

Creating a team of assistants is possible through the Phidata library. It lets us create a team of assistants that can interact with each other and delegate work among themselves, performing specific tasks with the specific tools assigned to each assistant. Phidata simplifies the process of creating such teams. We can create a simple assistant team with the following code:

Here’s the Code

from phi.assistant import Assistant
from phi.llm.groq import Groq

def power(base: float, exponent: float) -> str:
    "Raise base to the exponent power"
    return str(base**exponent)


math_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Math Assistant",
    role="Performs mathematical operations like taking power of two numbers",
    tools=[power],
)

main_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Research Team",
    show_tool_calls = True,
    team=[math_assistant],
)

main_assistant.print_response(
    "What is 5 power 9.12?",
    markdown=True,
    stream = False,
)
  • The code starts by importing the necessary modules from the Phidata library: Assistant from phi.assistant and Groq from phi.llm.groq
  • We then define a function named power, which takes two float parameters (base and exponent) and returns the result of raising base to the exponent as a string
  • An instance of Assistant named math_assistant is created with attributes like llm, where we give the LLM we want to work with; a name, here Math Assistant; the list of tools the assistant can work with, here the power tool; and a role attribute defining the assistant's role
  • Similarly, we create a main_assistant, but here we provide the team attribute, giving it a list of assistants to whom it can delegate work; in our code, it is just the math_assistant
  • The print_response function is called on main_assistant with the query "What is 5 power 9.12?", formatted to display in Markdown

So, running this has produced the below output:


When the code runs, the main_assistant is fed the user query. The query contains a math problem that involves taking a power, so the main_assistant sees this and delegates the work to the math_assistant. We can see this function call in the output. The math_assistant then receives the query from the main_assistant and makes a function call to the power tool with the base and the exponent. The answer returned from the function call is fed back to the main_assistant, which then creates the final response to the user query.
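Extending the team follows the same pattern. Below is a hypothetical sketch that adds a second specialist; the division function and the Division Assistant name are our own illustration, reusing only the API shown above:

def division(a: float, b: float) -> str:
    "Divide a by b"
    return str(a / b)

# A second specialist, responsible only for division
division_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Division Assistant",
    role="Performs division of two numbers",
    tools=[division],
)

# The main assistant can now delegate to either specialist
main_assistant = Assistant(
    llm=Groq(model="mixtral-8x7b-32768"),
    name="Research Team",
    show_tool_calls=True,
    team=[math_assistant, division_assistant],
)

main_assistant.print_response(
    "What is 5 power 9.12 divided by 3?",
    markdown=True,
    stream=False,
)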

Conclusion

In conclusion, Phidata is an easy yet powerful Python library designed to simplify the creation of real-world applications using Large Language Models (LLMs). By integrating memory, knowledge bases, and tools, Phidata allows developers to improve the capabilities of LLMs, which would otherwise be limited to plain text generation. This guide has shown the ease of building with Phidata, working with different LLMs like OpenAI and Groq, and extending functionality through different tools and assistants. Phidata's ability to produce structured outputs, work with tools for real-time information retrieval, and create collaborative teams of assistants makes it a go-to tool for developing reliable and sophisticated LLM applications.

Key Takeaways

  • Phidata simplifies building LLM applications by integrating memory, knowledge, and tools
  • Phidata supports structured outputs using Pydantic models, improving data handling
  • Developers can build teams of assistants that delegate tasks to each other for complex workflows
  • Phidata’s CLI application feature allows for interactive, terminal-based conversations with LLMs
  • It supports popular LLMs like OpenAI, Groq, and Gemini and even integrates with different vector stores

The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.

Frequently Asked Questions

Q1. What is Phidata?

A. Phidata is a Python library designed to simplify building real-world applications with Large Language Models (LLMs). It allows you to integrate LLMs with memory, knowledge bases, and tools to create powerful applications

Q2. Can Phidata handle different LLM outputs?

A. Yes, Phidata can work with different LLM outputs. You can define a structured output format using Pydantic models and retrieve the LLM response in that format

Q3. What are Phidata tools, and how do they work?

A. Phidata tools extend LLM capabilities by allowing them to perform actions like API calls, internet searches (using DuckDuckGo), or mathematical calculations (through user-built functions)

Q4. Can I create my very own tools for Phidata?

A. Absolutely! You can define your own Python functions to perform specific tasks and pass them as tools to the Assistant object

Q5. Is it possible to build a team of assistants using Phidata?

A. Yes, Phidata allows the creation of multiple assistants with assigned roles and tools. You can create a main assistant that delegates tasks to other assistants based on their knowledge of different areas

