Unlocking the Potential of LangChain Expression Language (LCEL): A Hands-On Guide
Introduction
LangChain Expression Language (LCEL) is a declarative way to compose workflows around large language models (LLMs). It simplifies complex pipelines, making it easier for developers to leverage the power of AI in their applications. In this blog, we'll explore LCEL through practical examples.
Setting the Stage: Basic Setup
Before diving into LCEL, it's crucial to set up the necessary environment. This setup involves installing the LangChain library along with other essential packages:
!pip install langchain
!pip install openai
!pip install chromadb
!pip install tiktoken
Once installed, you can begin coding by importing the required modules and setting up your API keys:
import os
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
# Set your API key
os.environ['OPENAI_API_KEY'] = "sk-..."
Understanding the Output: Detailed Examples
model = ChatOpenAI()
output_parser = StrOutputParser()
prompt = ChatPromptTemplate.from_template(
    "Create a lively and engaging product description with emojis based on these notes: \n{product_notes}"
)
Let's dive deeper into the output generated at each step of our LCEL example to fully grasp its functionality.
First, we invoke the prompt with specific product notes:
prompt_value = prompt.invoke({"product_notes": "Multi color affordable mobile covers"})
The prompt_value here holds the structured request we will send to the model:
ChatPromptValue(messages=[HumanMessage(content='Create a lively and engaging product description with emojis based on these notes: \nMulti color affordable mobile covers')])
We can convert this value to a string to see how it's presented:
prompt_value.to_string()
This yields a human-readable format of the prompt:
Human: Create a lively and engaging product description with emojis based on these notes:
Multi color affordable mobile covers
To pass this prompt to the model, we convert it to messages:
prompt_value.to_messages()
This produces the list of messages in the form the chat model consumes:
[HumanMessage(content='Create a lively and engaging product description with emojis based on these notes: \nMulti color affordable mobile covers')]
Next, the model is invoked with these messages:
model_output = model.invoke(prompt_value.to_messages())
The model_output contains the AI-generated product description:
AIMessage(content="🌈📱 Get ready to dress up your phone in a kaleidoscope of colors with our multi-color affordable mobile covers! 🎨✨...")
Finally, we parse this output:
output_parser.invoke(model_output)
This yields a well-formatted, human-readable product description:
🌈📱 Get ready to dress up your phone in a kaleidoscope of colors with our multi-color affordable mobile covers! 🎨✨...
This step-by-step breakdown showcases the power and flexibility of LCEL, illustrating how it handles and transforms data at each stage of the process. The ability to see and understand each component's output is invaluable for debugging and refining your AI-driven applications.
The Power of Chaining in LCEL
One of the most powerful features of LCEL is the ability to chain operations. This capability is showcased in the following example:
chain = prompt | model | output_parser
product_description = chain.invoke({"product_notes": "Multi color affordable mobile covers"})
print(product_description)
The | operator elegantly chains the prompt, model, and output parser into a single runnable, simplifying what would typically be a series of manual invoke calls.
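Conceptually, the pipe is just function composition: each step's output becomes the next step's input. The following is a minimal pure-Python sketch of that idea (not LangChain's actual implementation, and the "prompt", "model", and "parser" steps here are toy stand-ins):

```python
class Runnable:
    """Toy stand-in illustrating how the | operator composes steps."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b builds a new Runnable that feeds a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Toy "prompt", "model", and "parser" stages
to_prompt = Runnable(lambda notes: f"Describe: {notes}")
fake_model = Runnable(lambda prompt: prompt.upper())
parser = Runnable(lambda text: text.strip())

toy_chain = to_prompt | fake_model | parser
print(toy_chain.invoke("mobile covers"))  # DESCRIBE: MOBILE COVERS
```

This is why a chain like prompt | model | output_parser behaves the same as calling invoke on each component in sequence, as we did in the previous section.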
Streaming and Batch Processing
LCEL also supports streaming and batch processing, allowing for efficient handling of multiple inputs and real-time data flows:
# Streaming Example
for chunk in chain.stream({"product_notes": "Multi color affordable mobile covers"}):
    print(chunk, end="", flush=True)

# Batch Processing Example
product_notes_list = [
    {"product_notes": "Eco-friendly reusable water bottles"},
    # Add more product notes here
]
batch_descriptions = chain.batch(product_notes_list)
These examples illustrate LCEL's versatility in handling various types of data inputs and processing needs.
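Under the hood, batch is conceptually a concurrent map of invoke over the inputs. Here is a rough pure-Python sketch of that behavior (the describe function is a toy stand-in for chain.invoke, not a real LangChain call):

```python
from concurrent.futures import ThreadPoolExecutor


def batch(invoke, inputs, max_workers=4):
    """Conceptual sketch: map invoke over inputs concurrently, keeping order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(invoke, inputs))


# Toy stand-in for chain.invoke
describe = lambda d: f"Description for {d['product_notes']}"

results = batch(describe, [
    {"product_notes": "Eco-friendly reusable water bottles"},
    {"product_notes": "Multi color affordable mobile covers"},
])
print(results)
```

Note that pool.map preserves input order, which mirrors how chain.batch returns results in the same order as the inputs you pass in.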
Advanced Use Case: Retrieval Augmented Generation (RAG)
LCEL goes beyond simple chaining. The following RAG example demonstrates its capability in more complex scenarios:
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.schema.runnable import RunnablePassthrough

# Sample documents
docs = ["Document on Climate Change...", "AI in Healthcare..."]

# Retrieval setup
vectorstore = Chroma.from_texts(docs, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})

# A prompt that makes use of the retrieved context
rag_prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n{context}\n\nQuestion: {question}"
)

rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt
    | model
    | output_parser
)

# Invoke the chain for a query
response = rag_chain.invoke("What does the document say about climate change?")
print(response)
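The dict at the start of the RAG chain is worth a closer look: LCEL treats a dict of runnables as a parallel step that fans the same input out to each value, producing a dict of results that the prompt can fill in. A minimal pure-Python sketch of that fan-out (with toy stand-ins for the retriever and RunnablePassthrough):

```python
def run_parallel(step_map, value):
    """Sketch: fan one input out to each step and collect results by key."""
    return {key: step(value) for key, step in step_map.items()}


retrieve = lambda q: ["Document on Climate Change..."]  # stand-in retriever
passthrough = lambda q: q  # analogue of RunnablePassthrough

inputs = run_parallel(
    {"context": retrieve, "question": passthrough},
    "Query about climate change",
)
print(inputs)
# {'context': ['Document on Climate Change...'], 'question': 'Query about climate change'}
```

In the real chain, this dict of context and question is exactly what the RAG prompt template needs to render its {context} and {question} placeholders.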
Conclusion
In conclusion, LangChain Expression Language (LCEL) presents a flexible and powerful way to work with large language models, allowing for easy composition of complex tasks. As we've seen through the code snippets and explanations, LCEL simplifies the process of generating dynamic content, handling data streams, and performing advanced operations like Retrieval Augmented Generation (RAG).
If you're keen on exploring more about LCEL and would like to see these concepts in action, I highly recommend watching our detailed tutorial video. This video provides a practical, visual guide to using LCEL, complementing the insights shared in this blog post. It's a great resource for both beginners and experienced users looking to deepen their understanding of LangChain and its capabilities.
Whether you're a developer, researcher, or just someone fascinated by the potential of AI and language models, the video will offer valuable insights and enhance your skills in AI-driven application development.
Thank you for reading, and happy coding with LCEL!
Jupyter Notebook: https://github.com/PradipNichite/Youtube-Tutorials/blob/main/LangChain_Expression_Language_(LCEL)_Tutorial.ipynb
If you're curious about the latest in AI technology, I invite you to visit my project, AI Demos, at aidemos.com. It's a rich resource offering a wide array of video demos showcasing the most advanced AI tools. My goal with AI Demos is to educate and illuminate the diverse possibilities of AI.
For even more in-depth exploration, be sure to visit my YouTube channel at youtube.com/@aidemos.futuresmart. Here, you'll find a wealth of content that delves into the exciting future of AI and its various applications.