Langchain agents documentation example. LangChain is an open-source orchestration framework for developing applications powered by large language models (LLMs); in this example, we will use OpenAI tool calling to create the agent. LangChain comes with a number of built-in agents that are optimized for different use cases, and LangGraph exposes high-level interfaces for creating common types of agents as well as a low-level API for composing custom flows. Use LangGraph (or LangGraph.js) to build stateful agents with first-class streaming and human-in-the-loop support, and deploy and scale with LangGraph Platform, which provides APIs for state management, a visual studio for debugging, and multiple deployment options. LangSmith is framework-agnostic: it can be used with or without LangChain's open-source frameworks langchain and langgraph. (Google's Vertex AI SDK for Python also offers a framework-specific LangchainAgent template for developing agents on that platform.)

The quickstart shows how to get set up with LangChain, LangSmith, and LangServe; how to use the most basic and common components of LangChain (prompt templates, models, and output parsers); how to use LangChain Expression Language, the protocol that LangChain is built on and which facilitates component chaining; how to build a simple application with LangChain; and how to trace your application with LangSmith.

Why do LLMs need to use tools? In Chains, a sequence of actions is hardcoded, whereas Agents use a language model as a reasoning engine to decide which actions to take and in which order; tools are what let the model act (query an API, search a database, run code) instead of only generating text. For structured data there are convenience constructors such as create_csv_agent(llm, path, pandas_kwargs=None, **kwargs) in langchain_experimental.agents.agent_toolkits, which loads one or more CSV files into a pandas DataFrame and returns an AgentExecutor, and LangChain also offers a number of tools and functions for creating SQL agents, which provide a more flexible way of interacting with SQL databases. For working with more advanced agents, we recommend checking out LangGraph.

One of the first things to do when building an agent is to decide what tools it should have access to; when constructing an agent, you will need to provide it with a list of Tools that it can use. The tool abstraction in LangChain associates a Python function with a schema that defines the function's name, description, and expected arguments; in other words, tools encapsulate a function and its schema in a way that a language model can recognize and call. An agent action consists of the name of the tool to execute and the input to pass to that tool. The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.
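To make the tool abstraction concrete, here is a minimal sketch, assuming the langchain-core package is installed; the exchange-rate function and its hard-coded value are purely illustrative. The @tool decorator derives the name, description, and argument schema from the function signature and docstring, and that schema is what the model sees when deciding which tool to call:

from langchain_core.tools import tool

@tool
def get_exchange_rate(base: str, target: str, date: str = "latest") -> str:
    """Return the exchange rate between two currencies on a given date."""
    # A real implementation would call a rates API; this stub only illustrates the shape.
    return f"1 {base} = 0.92 {target} on {date}"

# The decorator attaches the schema the model uses for tool calling.
print(get_exchange_rate.name)         # get_exchange_rate
print(get_exchange_rate.description)  # taken from the docstring
print(get_exchange_rate.args)         # {'base': {...}, 'target': {...}, 'date': {...}}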
A big use case for LangChain is creating agents, so it helps to understand the key differences between LangChain, LangGraph, and LangSmith and how each fits into the LLM application stack. LangChain simplifies every stage of the LLM application lifecycle: development (build your applications using LangChain's open-source components and third-party integrations), productionization (test and monitor with LangSmith), and deployment. It provides essential building blocks like chains, agents, and memory components that enable developers to create sophisticated AI workflows beyond simple prompt-response interactions, and its products work seamlessly together, so you build better, get to production quicker, and grow visibility, all with less setup and friction.

In Agents, a language model is used as a reasoning engine to determine which actions to take and in which order; in Chains, the sequence of actions is hardcoded (in code). Chains are enough when we know up front how many times a tool should be used, but for certain use cases the number of tool calls depends on the input, and agents handle this by deciding dynamically when and how to use tools. In LangChain, an "Agent" is an AI entity that interacts with various "Tools" to perform tasks or answer queries: it selects and uses Tools and Toolkits for actions, executes each action (for example, runs a tool), feeds the resulting observation back to decide the next step, and returns a final value once a stopping condition is reached. An agent can also store, retrieve, and use memories to enhance its interactions with users, and it can often be useful to have an agent return something with more structure than a single string.

Powerful applications built this way include Retrieval Augmented Generation (RAG) question-answering chatbots and systems that let us ask a question about the data in a SQL database and get back a natural language answer; later, we will also walk through how to construct a conversational retrieval agent from components. For document-based question answering, LangChain's PDF loader extracts text with the pypdf package and creates a LangChain Document for each page, containing the page's content and some metadata about where in the document the text came from.

Legacy LangChain agents will continue to be supported, but it is recommended that new use cases be built with LangGraph, which provides control for custom agent and multi-agent workflows, seamless human-in-the-loop interactions, and native streaming support for enhanced agent reliability and execution. Building an agent from a runnable usually involves some data processing for the intermediate steps (the agent_scratchpad), and that processing should be tightly coupled to the instructions in the prompt; a security note applies whenever agents are given tools that can execute code or reach external systems. The built-in agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer, and many of the original "Chain" classes have likewise been deprecated in favor of the more flexible and powerful LCEL and LangGraph frameworks. Here we focus on how to move from legacy LangChain agents to more flexible LangGraph agents.
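As a sketch of what the LangGraph side of that migration looks like, assuming langgraph and langchain-openai are installed, an OPENAI_API_KEY is set, and the model name is just an example, the prebuilt create_react_agent helper replaces the legacy AgentExecutor loop (the exchange-rate tool is the illustrative stub from the previous snippet, repeated so this block is self-contained):

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def get_exchange_rate(base: str, target: str, date: str = "latest") -> str:
    """Return the exchange rate between two currencies on a given date."""
    return f"1 {base} = 0.92 {target} on {date}"  # illustrative stub

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
agent = create_react_agent(llm, [get_exchange_rate])

# The loop: the model picks a tool, LangGraph runs it, and the observation is fed back
# until the model produces a final answer.
result = agent.invoke({"messages": [("user", "What is the USD to EUR rate today?")]})
print(result["messages"][-1].content)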
This tutorial, published following LangChain's January 2024 stable release, is your key to creating your first agent with Python. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be; by definition, agents take a self-determined, input-dependent sequence of steps, tailoring their responses by intelligently selecting and chaining tools based on real-time input. That is what lets you build copilots that write first drafts for review, act on your behalf, or wait for approval before execution. This guide provides explanations of the key concepts behind the LangChain framework and AI applications more broadly; by understanding the core architecture (LLMs, tools, chains, memory, and the agent loop), developers can create sophisticated agents tailored to specific use cases. As a language model integration framework, LangChain's use cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis; common end-to-end examples include question answering over specific documents (such as a Notion database), chatbots such as Chat-LangChain, and agents such as GPT+WolframAlpha. While the LangChain framework can be used standalone, it also integrates seamlessly with the rest of the LangChain ecosystem, giving developers a full suite of tools when building LLM applications; setup also covers LangSmith for tracing. If you need the previous version of the documentation, you can access it in the v0.2 docs.

Several concrete guides build on these ideas. A quickstart goes over the basic ways to create a Q&A chain and agent over a SQL database (for detailed documentation of all SQLDatabaseToolkit features and configuration, head to the API reference), another quickstart builds a simple application that translates text from English into another language, and a sample repository demonstrates how to create a ReAct agent using LangChain. It is often useful to have a model return output that matches a specific schema, and a key feature of chatbots is their ability to use the content of previous conversational turns as context, which is covered in the guide on adding memory to chatbots. In every case, the first step is the same: load the language model you are going to use.

A few pieces of reference material are worth knowing. AgentExecutor is the legacy class (a subclass of Chain) for an agent that is using tools, the schemas for the agents themselves are defined in langchain.agents, and an agent prompt must have an agent_scratchpad key that is a MessagesPlaceholder; you can read about all the built-in agent types in the agents documentation. For multi-agent work, there is a Python library for creating hierarchical multi-agent systems using LangGraph (the supervisor pattern discussed later), and langgraph itself is an extension of langchain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
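To illustrate that edges-and-nodes model, here is a minimal sketch assuming langgraph is installed; the node body is a stand-in rather than a real model call. A graph is built from nodes that update a shared state and edges that define the order of steps, then compiled into a runnable:

from langchain_core.messages import AIMessage
from langgraph.graph import END, START, MessagesState, StateGraph

# Each node receives the current state and returns a partial update to it.
def call_model(state: MessagesState):
    # Stand-in for a real model call; a chat model's .invoke(state["messages"]) would go here.
    return {"messages": [AIMessage(content="placeholder response")]}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")  # steps are modeled as edges between nodes
builder.add_edge("call_model", END)

graph = builder.compile()
result = graph.invoke({"messages": [("user", "Hello")]})
print(result["messages"][-1].content)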
We will first create the agent WITHOUT memory, and then show how to add memory in. Getting started is straightforward, and if you host your own models you can replace the endpoint with a custom model deployed on a serving endpoint. Whatever tools you give the agent need to be represented in a way that the language model can recognize, which is exactly what the tool schema described earlier provides; in the Vertex AI template example, for instance, the agent's tool returns the exchange rate between two currencies on a specified date. LangChain implements a standard interface for large language models and related technologies, such as embedding models and vector stores, and integrates with hundreds of providers, with the langchain package serving as the main entrypoint into the ecosystem. There are also guides on returning structured output from an agent and on passing an arbitrary Runnable into an agent, and the legacy AgentExecutor parameters (for example tools, the sequence of BaseTool instances the agent has access to) map onto the LangGraph ReAct agent executor created with the create_react_agent prebuilt helper; LangGraph offers a more flexible and full-featured framework for building agents, including support for tool calling, persistence of state, and human-in-the-loop workflows.

For retrieval use cases, we start by setting up the retriever we want to use and then turn it into a retriever tool; the resulting agent is specifically optimized for doing retrieval when necessary while also holding a conversation, and a separate tutorial walks through building a Retrieval-Augmented Generation (RAG) application step by step. For multi-agent workflows, there are two main considerations: what are the multiple independent agents, and how are those agents connected? This thinking lends itself well to a graph representation such as that provided by langgraph, and in the supervisor pattern a central supervisor controls all communication flow and task delegation, making decisions about which agent to invoke based on the current context and task requirements.

The SQL agent is a good illustration of why agents are worth the extra machinery. Its main advantages are that it can answer questions based on the database's schema as well as on the database's content (like describing a specific table), and that it can recover from errors by running a generated query, catching the traceback, and regenerating the query. Its system prompt begins along these lines (a runnable sketch follows below):

from langchain_core.prompts import ChatPromptTemplate

system_message = """Given an input question, create a syntactically correct {dialect} query to run to help find the answer."""

A similar pattern appears in the JSON agent, whose default prefix tells the model that its goal is to return a final answer by interacting with the JSON. Convenience constructors such as create_csv_agent live in langchain_experimental, and the companion repository contains example code for building applications with LangChain, with an emphasis on more applied and end-to-end examples than contained in the main documentation.
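Here is that runnable sketch of a SQL agent, assuming langchain-community, langchain-openai, and langgraph are installed and a local Chinook.db SQLite file exists; the system prompt is stitched together from the fragments quoted on this page and abbreviated, and the model name is just an example:

from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_community.utilities import SQLDatabase
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # example database file
llm = ChatOpenAI(model="gpt-4o-mini")              # example model name

# The toolkit bundles tools for listing tables, inspecting schemas, and running queries.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

system_message = (
    "Given an input question, create a syntactically correct {dialect} query to run "
    "to help find the answer. Unless the user specifies a specific number of examples "
    "they wish to obtain, always limit your query to at most {top_k} results. You can "
    "order the results by a relevant column to return the most interesting examples."
).format(dialect=db.dialect, top_k=5)

agent = create_react_agent(llm, toolkit.get_tools())
result = agent.invoke(
    {"messages": [SystemMessage(content=system_message),
                  HumanMessage(content="Which artist has the most albums?")]}
)
print(result["messages"][-1].content)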
A few more reference details are worth collecting. In the legacy agent constructors, the tools_renderer parameter (Callable[[list[BaseTool]], str]) controls how the tools are rendered into text for the prompt, and the agent_scratchpad is where intermediate agent actions and tool output messages are passed in; it contains previous agent actions and tool outputs as messages. The SQL system prompt continues: unless the user specifies in the question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. The langchain.agents module defines Agent, a class that uses an LLM to choose a sequence of actions to take, and langchain_community provides load_tools(tool_names, llm=None, callbacks=None, allow_dangerous_tools=False, **kwargs), which loads tools based on their name and returns a list of BaseTool instances (the llm parameter supplies a language model for tools that need one). Additional examples such as streaming, async invocation, and function calling can be found in the LangChain documentation, and prompt templates (see the Prompt section below) can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.

Available in both Python- and JavaScript-based libraries, LangChain's tools and APIs simplify the process of building LLM-driven applications like chatbots and AI agents; it provides a standard interface for chains, many integrations with other tools, and end-to-end chains for common applications, and tools are essentially functions that extend the agent's capabilities beyond generating text. In multi-agent systems, each agent can have its own prompt, LLM, tools, and other custom code to best collaborate with the other agents, and we recommend using the prebuilt agent or ToolNode, as they natively support handoff tools returning Command. A separate guide covers a few strategies for getting structured outputs from a model. For conversation state, management can take several forms, from simply stuffing previous messages into a chat model prompt to trimming old messages to reduce the amount of distracting information the model has to deal with; this tutorial previously used the RunnableWithMessageHistory abstraction for chat history. LangSmith seamlessly integrates with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.

As a simple end-to-end exercise, you can build a LangChain agent that performs two tasks: retrieving information from Wikipedia and executing a Python function. It is designed to be simple yet informative, guiding you through the essentials of integrating custom tools with LangChain. For inspiration, other full examples of apps that combine LangChain with Streamlit include Auto-graph (build knowledge graphs from user-input text), Web Explorer (retrieve and summarize insights from the web), LangChain Teacher (learn LangChain from an LLM tutor), and Text Splitter Playground (play with various types of text splitting for RAG), each with source code available.
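As a sketch of the modern replacement for RunnableWithMessageHistory when working with agents, assuming langgraph and langchain-openai are installed and that the model name, tool, and thread id are illustrative, a checkpointer gives the agent thread-scoped memory so previous turns carry over into the next invocation:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

@tool
def get_time_of_day() -> str:
    """Return the current time of day."""
    return "It is currently 10:00 AM."  # illustrative stub

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
agent = create_react_agent(llm, [get_time_of_day], checkpointer=MemorySaver())

# All invocations that share a thread_id see the same conversation history.
config = {"configurable": {"thread_id": "example-thread-1"}}
agent.invoke({"messages": [("user", "Hi, my name is Ada.")]}, config)
followup = agent.invoke({"messages": [("user", "What is my name?")]}, config)
print(followup["messages"][-1].content)  # should mention "Ada"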
The JSON agent's prefix continues by telling the model that it has access to tools which help it learn more about the JSON, and it illustrates a general point: instead of relying on predefined scripts, agents analyze user queries and choose the appropriate tool. LangChain implements standard interfaces for defining tools, passing them to LLMs, and representing tool calls; AgentAction, a Serializable class, represents a request to execute an action by an agent. There are several key components here, starting with the schema, and LangChain has several abstractions to make working with agents easy: in the legacy prompt format, the prompt must have an input key named tools containing the descriptions and arguments for each tool, and the agent_scratchpad must be a MessagesPlaceholder. For CSV data, create_csv_agent returns an AgentExecutor with the specified agent_type and access to a PythonAstREPLTool holding the loaded DataFrame(s) and any user-provided extra_tools, while the tools within the SQLDatabaseToolkit are designed to interact with a SQL database (see the LangChain and LangGraph SQL agents example repository for question answering over a database). LangChain has many other document loaders for other data sources, or you can create a custom document loader, and there is also a LangChain MCP (Model Context Protocol) integration; you can follow the project's code on GitHub.

Using a LangChain agent with a local LLM offers a compelling way to build autonomous, private, and cost-effective AI workflows; whether you are an indie developer experimenting with AI apps or a company needing offline capabilities, the setup is highly customizable and production-ready with the right tooling. The framework enables you to build layered LLM-powered applications that are context-aware and able to interact dynamically with their environment as agents, leading to simplified code for you and a more dynamic user experience for your customers. If you are using langchain or langgraph, you can enable LangSmith tracing with a single environment variable.

To best understand the agent framework, the quickstart builds an agent that has two tools: one to look things up online, and one to look up specific data that has been loaded into an index. The typical steps are: define and configure a model; define and use a tool; optionally store chat history; optionally customize the prompt template; and optionally return structured output using the .with_structured_output() method. Prompt templates take as input a dictionary, where each key represents a variable in the prompt template to fill in. This is a multi-part tutorial whose first part introduces RAG, and even the simplest version, a bit of prompting plus a single LLM call, is a great way to get started with LangChain, since a lot of features can be built that way. Multi-agent systems can be built with handoffs, for example a travel-booking system in which specialized agents hand control to one another; see the multi-agent supervisor example for a full example of using Send() in handoffs.
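Here is a minimal sketch of the .with_structured_output() method mentioned above, assuming langchain-openai and pydantic are installed and an OPENAI_API_KEY is set; the schema and model name are illustrative:

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class ExchangeRate(BaseModel):
    """Structured answer for a currency question."""
    base: str = Field(description="Base currency code")
    target: str = Field(description="Target currency code")
    rate: float = Field(description="Units of target per one unit of base")

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
structured_llm = llm.with_structured_output(ExchangeRate)

# The reply is parsed into the Pydantic object instead of free-form text.
answer = structured_llm.invoke("Roughly what is the USD to EUR exchange rate?")
print(answer.base, answer.target, answer.rate)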
Hierarchical systems are a type of multi-agent architecture where specialized agents are coordinated by a central supervisor agent. LangChain agents (the AgentExecutor in particular) have multiple configuration parameters, among them output_parser, the AgentOutputParser used to parse the LLM output, and besides the actual function that is called, a Tool consists of several components (its name, description, and argument schema). Reference constructors follow the same pattern, for example create_json_agent(llm, toolkit: JsonToolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON. ...'), and the log field on agent actions is used to pass along extra information about the action. To improve your LLM application development, pair LangChain with LangSmith, which is helpful for agent evals and observability (LangSmith documentation is hosted on a separate site), and refer to the LangGraph documentation and guides for details, including a tutorial that shows how to implement an agent with long-term memory capabilities using LangGraph.

The agent loop itself is simple: the agent returns the observation to the LLM, which can then use it to generate the next action, and tools allow agents to interact with various resources and services like APIs. The main difference between a SQL chain and a SQL agent is that the agent can query the database in a loop, as many times as it needs, to answer the question, and it can recover from errors by re-running a corrected query; the SQLDatabase Toolkit guide will help you get started with that setup, and we will also demonstrate how to use few-shot prompting in this context to improve performance. One of the most powerful applications enabled by LLMs is the sophisticated question-answering (Q&A) chatbot, an application that can answer questions about specific source information; this material assumes knowledge of LLMs and retrieval, so if you haven't already explored those sections, it is recommended you do so. In conclusion, LangChain is a software framework that helps facilitate the integration of large language models into applications, and it provides a robust foundation for building AI agents that combine the reasoning capabilities of LLMs with the functional capabilities of specialized tools; it helps to understand how each piece fits into the LLM application stack and when to use it.

Several how-to guides round this out: how to use legacy LangChain agents (AgentExecutor) and how to migrate from them to LangGraph; callbacks, which allow you to hook into the various stages of your LLM application's execution (how to pass callbacks at runtime, attach callbacks to a module, pass callbacks into a module constructor, create custom callback handlers, and await callbacks); building an extraction chain that uses the tool-calling features of chat models to extract structured information from unstructured text, a common use case being extracting data from text to insert into a database or use with some other downstream system; building a simple LLM application; binding tools to an LLM and then invoking the LLM to generate the tools' arguments (sketched below); and a notebook on creating your own custom agent.
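Here is a sketch of binding tools to a chat model and reading back the generated arguments, assuming langchain-openai is installed and an OPENAI_API_KEY is set; the tool and model name are the illustrative ones used earlier:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_exchange_rate(base: str, target: str) -> str:
    """Return the exchange rate between two currencies."""
    return f"1 {base} = 0.92 {target}"  # illustrative stub

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
llm_with_tools = llm.bind_tools([get_exchange_rate])

# The model does not run the tool; it returns the name and arguments it wants to call.
msg = llm_with_tools.invoke("How many euros is one US dollar?")
for call in msg.tool_calls:
    print(call["name"], call["args"])  # e.g. get_exchange_rate {'base': 'USD', 'target': 'EUR'}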
Embeddings plug in just as easily; for example, the databricks-bge-large-en embedding model can be used as an embeddings component in LangChain through the Foundation Models API. By default, most of the agents return a single string, and a good example of wanting more than that is an agent tasked with doing question-answering over some sources. The tool-calling agent is generally the most reliable kind and the recommended one for most use cases, and a basic agent works in the following manner: given a prompt, the agent uses an LLM to request an action to take (e.g., a tool to run), executes the action (e.g., runs the tool), receives an observation, and repeats until it can answer. Note that the main examples use OpenAI models, as local models runnable on consumer hardware are not reliable enough yet, and that memory is needed to enable conversation. The legacy section covers building with the LangChain AgentExecutor and its high-level constructors, whose parameters include prompt (the BasePromptTemplate to use) and, for create_csv_agent, path (a file path or file-like object); LangChain agents are fine for getting started, but past a certain point you will likely want the flexibility and control that LangGraph, the low-level agent orchestration framework for building controllable, resilient language agents as graphs, provides, including handoffs for multi-agent systems. Related open-source repositories include langchain-ai/langgraph, langchain-ai/langchain-mcp-adapters, and the johnsnowdies/langchain-sql-agent-example project, and in the class hierarchy AgentAction lives in langchain_core while AgentExecutor lives in langchain.

Prompt templates help to translate user input and parameters into instructions for a language model. Providing the LLM with a few example inputs and outputs is called few-shotting, and it is a simple yet powerful way to guide generation that can in some cases drastically improve model performance; a few-shot prompt template can be constructed from a set of examples, and a separate guide shows how to create a simple prompt template that provides the model with example inputs and outputs when generating. A classic illustration of agent behavior is an agent that first looks up the date of Barack Obama's birth with Wikipedia and then calculates his age in 2022 with a calculator (the sketch below reconstructs this). Beyond agents, LangChain can be used for chatbots, text summarisation, data generation, code understanding, question answering, evaluation, and more, and its SQL Agent provides a more flexible way of interacting with SQL databases than a chain.
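Here is that Wikipedia-and-calculator example as a sketch, assuming the wikipedia, langchain-community, langchain-openai, and langgraph packages are installed and an OPENAI_API_KEY is set; the model name is an example, and "wikipedia" and "llm-math" are the tool names load_tools recognizes for the Wikipedia wrapper and the calculator:

from langchain_community.agent_toolkits.load_tools import load_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name

# "wikipedia" wraps the wikipedia package; "llm-math" gives the agent a calculator.
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = create_react_agent(llm, tools)
result = agent.invoke(
    {"messages": [("user", "When was Barack Obama born, and how old was he in 2022?")]}
)
print(result["messages"][-1].content)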
In this comprehensive guide we have seen how custom LangChain agents work in practice: with their dynamic and adaptive capabilities, they have opened up a new frontier in the development of LLM- and AI-powered applications, and real-world examples all follow the same pattern, with a language model used as a reasoning engine to determine which actions to take and in which order. Chains remain great when we know the specific sequence of tool usage needed for any user input, and tools can be passed to chat models that support tool calling, allowing the model to request the execution of a specific function with specific inputs. A few final reference notes: the SQL system prompt also tells the model that it can order the results by a relevant column to return the most relevant rows first; the legacy prompt format includes a tool_names key that contains all tool names; agent constructors take an llm parameter (the BaseLanguageModel to use as the agent); and AgentAction has a required log parameter, a string holding additional information to log about the action. A companion repository contains a collection of apps powered by LangChain. For more, see the how-to guides for setting up LangSmith with LangChain or setting up LangSmith with LangGraph.
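A sketch of that LangSmith setup in code; the environment-variable names below are the ones current LangSmith documentation uses (older setups used LANGCHAIN_TRACING_V2 and LANGCHAIN_API_KEY), and the key value is a placeholder:

import os

# Enable tracing for any langchain or langgraph code run in this process.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "<your-langsmith-api-key>"  # placeholder

# From here on, chains and agents invoked in this process send traces to LangSmith,
# where you can inspect and debug the individual steps of your runs.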