Getting Started With LangChain for Beginners
This tutorial demonstrates how to use the LangChain framework to connect with OpenAI and other LLMs, work with various chains, and build a basic chatbot with history.
Large language models (LLMs) like OpenAI’s GPT-4 and Hugging Face models are powerful, but using them effectively in applications requires more than just calling an API. LangChain is a framework that simplifies working with LLMs, enabling developers to create advanced AI applications with ease.
In this article, we’ll cover:
- What is LangChain?
- How to install and set up LangChain
- Basic usage: accessing OpenAI LLMs, LLMs on Hugging Face, prompt templates, and chains
- A simple LangChain chatbot example
What Is LangChain?
LangChain is an open-source framework designed to help developers build applications powered by LLMs. It provides tools to structure LLM interactions, manage memory, integrate APIs, and create complex workflows.
Benefits of LangChain
- Simplifies handling prompts and responses
- Supports multiple LLM providers (OpenAI, Hugging Face, Anthropic, etc.)
- Enables memory, retrieval, and chaining multiple AI calls
- Supports building chatbots, agents, and AI-powered apps
A Step-by-Step Guide
Step 1: Installation
To get started, install LangChain and OpenAI’s API packages using pip. Open your terminal and run the following command:
pip install langchain langchain_openai openai
Set up your API key in an environment variable:
import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
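Alternatively, you can keep the key out of your source code with a .env file. A minimal sketch, assuming the python-dotenv package is installed (pip install python-dotenv) and a .env file containing OPENAI_API_KEY exists in the working directory:
from dotenv import load_dotenv
# Reads the .env file and populates os.environ, including OPENAI_API_KEY
load_dotenv()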
Step 2: Using LangChain’s ChatOpenAI
Now, let’s use OpenAI’s model to generate text.
Basic Example: Generating a Response
from langchain_openai import ChatOpenAI
# Initialize the model
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5)
# Write your prompt
prompt = "What is LangChain?"
# Print the response
print(llm.invoke(prompt))
Explanation
- from langchain_openai import ChatOpenAI: imports the ChatOpenAI class from the langchain_openai package, which lets us use OpenAI’s GPT-based models for conversational AI.
- ChatOpenAI(...): initializes the GPT model.
- model="gpt-3.5-turbo": OpenAI offers several models, so we pass the one we want to use for prompt responses. If no model is passed, ChatOpenAI defaults to gpt-3.5-turbo.
- temperature=0.5: temperature controls randomness in the response:
  - 0.0: deterministic (always returns the same output for the same input)
  - 0.7: more creative/random responses
  - 1.0: highly random and unpredictable responses
  - Since temperature = 0.5, it balances creativity and reliability.
- prompt = "What is LangChain?": defines the prompt that will be sent to the ChatOpenAI model for processing.
- llm.invoke(prompt): sends the prompt to the given LLM and returns the response.
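Note that llm.invoke() on a chat model returns an AIMessage object, so the print statement above shows the message wrapper, not just the text. A minimal sketch of pulling out only the generated text:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5)
response = llm.invoke("What is LangChain?")
# response is an AIMessage; .content holds the model's text
print(response.content)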
Step 3: Using Other LLMs With HuggingFacePipeline
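Before running this example, install the Hugging Face integration package (this assumes the transformers library is installed alongside it; exact dependencies may vary with your environment):
pip install langchain-huggingface transformers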
from langchain_huggingface import HuggingFacePipeline
# Initialize the model; here we are using google/flan-t5-base
# Note: flan-t5 is a sequence-to-sequence model, so the task is
# "text2text-generation" rather than "text-generation"
llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-base",
    task="text2text-generation",
    pipeline_kwargs={"max_new_tokens": 200, "temperature": 0.1},
)
# Print the response
print(llm.invoke("What is Deep Learning?"))
# In summary, here we learned how to use a different LLM with LangChain:
# instead of OpenAI, we used a model hosted on Hugging Face.
# This lets us interact with models uploaded by the community.
Step 4: Chaining Prompts With LLMs
LangChain lets you connect prompts and models into chains.
# Prompt templates and chaining using LangChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-4o", temperature=0.9)
# Prompt templates let you generate prompts that accept variables;
# a template can have multiple variables as well
template = "What is the impact on my health, if I eat {food} and drink {drink}?"
prompt = PromptTemplate.from_template(template)
# Chains come into the picture to go beyond a single LLM call and
# involve a sequence of calls; a chain ties the prompt and LLM together.
# Here we initialize our chain with the prompt and the LLM model reference
chain = prompt | llm
# Invoke the chain with the food parameter as Bread and the drink parameter as wine
print(chain.invoke({"food": "Bread", "drink": "wine"}))
Why Use LangChain?
- Automates the process of formatting prompts
- Helps in multi-step workflows
- Makes code modular and scalable
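As a side note, chain.invoke() above returns an AIMessage. If you prefer a plain string, you can append an output parser to the chain (used again in the next step); a minimal sketch reusing the prompt and llm defined above:
from langchain_core.output_parsers import StrOutputParser
# The parser converts the AIMessage into plain text
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"food": "Bread", "drink": "wine"}))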
Step 5: Chain Multiple Tasks in a Sequence
LangChain lets you combine multiple chains, where the output of the first chain is used as the input to the second.
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
llm = ChatOpenAI(model_name="gpt-4o", temperature=0)
# First template and chain
template = "Which is the most {adjective} building in the world?"
prompt = PromptTemplate.from_template(template)
chain = prompt | llm | StrOutputParser()
# Second template and chain, composed with the first chain;
# the first chain's output is fed in as the {building} variable
template_second = "Tell me more about the {building}?"
prompt_second = PromptTemplate.from_template(template_second)
chain_second = {"building": chain} | prompt_second | llm | StrOutputParser()
# Invoke the chain of calls, passing the value for the first chain's parameter
print(chain_second.invoke({"adjective": "famous"}))
Why Use Sequential Chains?
- Merges various chains by using the output of one chain as the input for the next.
- Operates by executing a series of chains
- Creates a seamless flow of processes
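When debugging a sequential chain, it can help to run the first link on its own and inspect the intermediate value before it is fed forward; a quick sketch reusing the objects defined above:
# Run only the first chain to see the intermediate answer
building = chain.invoke({"adjective": "famous"})
print(building)
# The full pipeline fills {building} with this value implicitly
print(chain_second.invoke({"adjective": "famous"}))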
Step 6: Adding Memory (Chatbot Example)
Want your chatbot to remember past conversations? LangChain Memory helps!
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
# Initialize model with memory
llm = ChatOpenAI(model="gpt-3.5-turbo")
memory = ConversationBufferMemory()
# Create a conversation chain
conversation = ConversationChain(llm=llm, memory=memory)
# Start chatting!
print(conversation.invoke("Hello! How is the weather today?")["response"])
print(conversation.invoke("Can I go out biking today?")["response"])
Why Use Memory?
- Enables AI to remember past inputs
- Creates a more interactive chatbot
- Supports multiple types of memory (buffer, summarization, vector, etc.)
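To see what the chatbot actually remembers, you can peek at the memory buffer directly; a quick check reusing the memory object from the example above:
# The buffer holds the accumulated human/AI turns as plain text
print(memory.buffer)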
What’s Next?
Here, we have explored some basic components of LangChain. Next, we will explore the following to unlock the real power of LangChain:
- Explore LangChain agents for AI-driven decision-making
- Implement retrieval-augmented generation (RAG) to fetch real-time data