What is LangChain? A deeper look at LangChain

Publisher: breakthrough3 | Last updated: 2023-07-14 | Source: 分布式实验室 (Distributed Lab) | Author: Lemontree

In daily work, we often build end-to-end applications. Many automation platforms and continuous integration/continuous delivery (CI/CD) pipelines can automate our machine learning workflows, and tools such as Roboflow and Andrew Ng's Landing AI help automate or assemble end-to-end applications.

If we wanted to create an application based on a large language model with the help of OpenAI or Hugging Face, we previously had to do much of the work by hand. Now, two well-known libraries, Haystack and LangChain, help us create end-to-end applications and pipelines built on large language models.

Let’s take a deeper look at LangChain.

What is LangChain?

LangChain is an innovative framework that is changing the way we develop applications driven by language models. By introducing advanced composition principles, LangChain redefines the limits of what can be achieved with traditional approaches. In addition, LangChain applications can have agent-like properties, enabling language models to interact with and adapt to their environment.

LangChain consists of multiple modules. As its name suggests, LangChain's main purpose is to connect these modules in a chain: we can string modules together and invoke them all through a single chain structure.

These modules consist of the following:

Model

As discussed in the introduction, the models mainly cover large language models (LLMs): models with a large number of parameters, trained on large-scale unstructured text. Major companies have released a variety of large language models, such as:

Google’s BERT

OpenAI’s GPT-3

Google LaMDA

Google PaLM

Meta AI’s LLaMA

OpenAI’s GPT-4

……

With LangChain, interacting with large language models becomes more convenient. The interfaces and utilities provided by LangChain make it easy to integrate the power of LLMs into your applications. LangChain uses the asyncio library to provide asynchronous support for LLMs.

For scenarios that require concurrent calls to multiple LLMs, LangChain also provides asynchronous support. By releasing the thread that is handling a request, the server can serve other tasks until the response is ready, maximizing resource utilization.

Currently, LangChain provides asynchronous support for models such as OpenAI, PromptLayerOpenAI, ChatOpenAI, and Anthropic, with support for other LLMs planned. You can use the agenerate method to call an OpenAI LLM asynchronously. You can also write custom LLM wrappers, so you are not limited to the models LangChain supports out of the box.
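To make the benefit of asynchronous support concrete, here is a minimal sketch of the concurrency pattern. The `fake_llm_call` coroutine is an illustrative stand-in for a real asynchronous LLM call such as `agenerate`; with asyncio.gather, the total wall time is roughly one call's latency rather than the sum of all latencies.

```python
import asyncio

# Stand-in for an asynchronous LLM call such as LangChain's agenerate;
# here we simulate latency instead of calling a real model.
async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulated network latency
    return f"response to: {prompt}"

async def generate_concurrently(prompts):
    # Launch all calls at once and wait for them together.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

results = asyncio.run(generate_concurrently(["Tell me a joke", "Tell me a poem"]))
print(results)
```

The same pattern applies with a real LLM object: replace `fake_llm_call` with an awaited `agenerate` call.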

I used OpenAI in my application, mainly the Davinci, Babbage, Curie, and Ada models to solve my problems. Each model has its own merits, token costs, and use cases.

Case 1:


# Importing modules
from langchain.llms import OpenAI

# Here we are using text-ada-001, but you can change it
llm = OpenAI(model_name="text-ada-001", n=2, best_of=2)

# Ask anything
llm("Tell me a joke")

Output 1:

'

Why did the chicken cross the road?

To get to the other side.'

Case 2:

llm_result = llm.generate(["Tell me a poem"] * 15)

Output 2:

[Generation(text="

What if love never speech

What if love never ended

What if love was only feeling

I'll never know this love

It's not a feeling

But it's what we have for each other

We just know that love is something strong

And we can't help but be happy

We just feel what love is for us

And we love each other with all our heart

We just don't know how

How it will go

But we know that love is something strong

And we'll always have each other

In our lives."),
Generation(text='

Once upon a time

There was a love so pure and true

It lasted for centuries

And never became stale or dry

It was moving and alive

And the heart of the love-k

Is still beating strong and true.')]

Prompt

As we all know, prompts are the inputs we provide to the system in order to adjust the answer precisely to our use case. Often we want more than plain text: we want more structured information. Many new models based on contrastive pre-training and zero-shot learning accept prompts as valid inputs for prediction; for example, OpenAI's CLIP and Meta's Grounding DINO both use prompts as inputs for prediction.

In LangChain, we can set up prompt templates as needed and connect them to the main chain for output prediction. In addition, LangChain provides output parsers for further refining the results. The role of an output parser is to (1) instruct the model how its output should be formatted, and (2) parse the output into the required format (retrying when necessary).
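The two responsibilities of an output parser can be shown with a minimal, LangChain-free sketch. This stand-in class is illustrative only (LangChain ships its own parser classes); it formats instructions for the model and parses a comma-separated reply into a Python list.

```python
# A minimal sketch of what an output parser does (not LangChain's actual
# implementation): it provides format instructions and a parse step.
class CommaSeparatedListParser:
    def get_format_instructions(self) -> str:
        # (1) Tell the model how to format its answer.
        return "Answer with a comma-separated list, e.g. `a, b, c`."

    def parse(self, text: str) -> list[str]:
        # (2) Turn the raw model output into a structured value.
        return [item.strip() for item in text.split(",")]

parser = CommaSeparatedListParser()
print(parser.parse("red, green, blue"))  # -> ['red', 'green', 'blue']
```

In a real chain, the format instructions would be embedded in the prompt template, and `parse` would be applied to the LLM's raw text output.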

In LangChain, we can provide prompt templates as input. A template refers to the specific format or blueprint of the answer we want to get. LangChain provides pre-designed prompt templates that can be used to generate prompts for different types of tasks. However, in some cases, the preset templates may not meet your needs. In this case, we can use a custom prompt template.

Examples:

from langchain import PromptTemplate


# This template will act as a blueprint for the prompt

template = """
I want you to act as a naming consultant for new companies.
What is a good name for a company that makes {product}?
"""

prompt = PromptTemplate(
    input_variables=["product"],
    template=template,
)
prompt.format(product="colorful socks")
# -> I want you to act as a naming consultant for new companies.
# -> What is a good name for a company that makes colorful socks?

Memory

In LangChain, chains and agents run in stateless mode by default, i.e. they process each incoming query independently. However, in some applications (such as chat), retaining previous interactions is important in both the short and long term. This is where the concept of "memory" comes in.

LangChain provides two forms of memory components. First, LangChain provides auxiliary tools for managing and manipulating previous chat messages, which are designed to be modular and work well regardless of the use case. Second, LangChain provides an easy way to integrate these tools into the chain structure, making it very flexible and adaptable to various situations.

Examples:

from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")

history.add_ai_message("whats up?")
history.messages

Output:

[HumanMessage(content='hi!', additional_kwargs={}),
 AIMessage(content='whats up?', additional_kwargs={})]
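To see how such a history plugs into a chain, here is a minimal, LangChain-free sketch of the idea: the memory renders the past turns into the prompt for the next model call, so the model sees the conversational context. The class and method names here are illustrative stand-ins, not LangChain's API.

```python
# Minimal sketch of conversational buffer memory (not LangChain's
# implementation): past turns are rendered into the next prompt.
class BufferMemory:
    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def add(self, speaker: str, text: str):
        self.turns.append((speaker, text))

    def render(self) -> str:
        # Serialize the history in a chat-transcript format.
        return "\n".join(f"{s}: {t}" for s, t in self.turns)

memory = BufferMemory()
memory.add("Human", "hi!")
memory.add("AI", "whats up?")

# The next prompt carries the full history plus the new user message.
prompt = memory.render() + "\nHuman: tell me a joke"
print(prompt)
```

LangChain's memory classes follow the same principle, with additional options such as windowing or summarizing older turns.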

Chain

Chains provide a way to combine various components into a unified application. For example, you can create a chain that receives user input, formats it using a PromptTemplate, and then passes the formatted reply to the LLM (Large Language Model). By integrating multiple chains with other components, you can generate more complex chain structures.

LLMChain is one of the most common ways to query an LLM object. It formats the provided input key values and memory key values (if any) using the prompt template, sends the formatted string to the LLM, and returns the generated output.

After calling the language model, you can follow a series of steps, and you can make a sequence of multiple model calls. This is especially valuable when you want to use the output of one call as the input of another call. In this chain sequence, each chain has an input and an output, and the output of one step is used as the input of the next step.

# Here we are chaining everything
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
# Temperature controls randomness: the higher the temperature, the more random the answer

# Final chain
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
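The sequential idea described above, where the output of one step becomes the input of the next, can be sketched without LangChain at all. The functions below are hypothetical stand-ins for chain steps (mirroring what LangChain's sequential chains do, but not its actual API).

```python
# Sketch of a sequential chain: each step's output feeds the next step's
# input. The step functions are illustrative stand-ins for LLM calls.
def name_company(product: str) -> str:
    return f"Rainbow {product.title()} Co."

def write_slogan(company: str) -> str:
    return f"{company}: step into color."

def run_chain(product: str, steps) -> str:
    value = product
    for step in steps:
        value = step(value)  # output of one step becomes the next input
    return value

print(run_chain("colorful socks", [name_company, write_slogan]))
# -> Rainbow Colorful Socks Co.: step into color.
```

In LangChain, the same composition is expressed by wrapping each step as an LLMChain and combining them in a sequential chain.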

Agent

Some applications require not just a predetermined sequence of LLM and tool calls, but a sequence that depends on the user's input. Such sequences involve an "Agent" with access to multiple tools. Based on the user input, the agent decides whether to call these tools and what the inputs should be.

According to the documentation, the high-level pseudocode for an agent is roughly as follows:

Receive user input.

Based on the input, the agent decides whether to use a tool and what the input to the tool should be.

Invoke the tool and record the observations (i.e., the output obtained after invoking the tool with the inputs).

The history of tools, tool inputs, and observations is passed back to the agent, which decides what steps should be taken next.

Repeat the above steps until the agent decides that the tool is no longer necessary and responds directly to the user.

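The loop described by the steps above can be sketched as plain Python. This is an illustrative toy, not LangChain's implementation: the `decide` function stands in for the LLM's reasoning, and the tool registry holds one toy calculator.

```python
# Sketch of the agent loop: the decision function (standing in for the
# LLM) picks a tool and its input until it can answer directly.
def agent_loop(user_input: str, tools: dict, decide) -> str:
    history = []  # (tool, tool_input, observation) triples
    while True:
        action = decide(user_input, history)
        if action["tool"] == "final_answer":
            return action["input"]
        observation = tools[action["tool"]](action["input"])
        history.append((action["tool"], action["input"], observation))

# Toy decision function standing in for the LLM: do one calculation,
# then produce the final answer from the observation.
def decide(user_input, history):
    if not history:
        return {"tool": "calculator", "input": "25 ** 0.43"}
    return {"tool": "final_answer", "input": f"The result is {history[-1][2]}"}

tools = {"calculator": lambda expr: str(eval(expr))}
print(agent_loop("What is 25 to the 0.43 power?", tools, decide))
```

A real agent replaces `decide` with an LLM call that reasons over the history of tools, inputs, and observations.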

Examples:

from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
