A Short Summary of LangChain for LLM Application Development

LangChain is an open-source framework that makes it easier to build applications on top of LLMs (Large Language Models), especially applications that need more than plain question answering, such as connecting to databases, calling external APIs, or remembering user context.


Key features of LangChain:

  1. Prompt Templates – manage prompts in a reusable, parameterized format
  2. Memory – remember earlier turns of a conversation, e.g. a chatbot that can keep a conversation going in context
  3. Chains – combine several LLM steps into a single pipeline, e.g. take a command → generate a question → call an API → summarize the result
  4. Retrieval (RAG) / Question Answering – have the LLM look up information from documents or a vector store so it can answer more specifically, and integrate with external sources such as Google Search, WolframAlpha, or databases
  5. Agents – let the LLM decide for itself which tool to use, e.g. searching the web or querying data

LangChain is available for Python and JavaScript (TypeScript).

🧠 What can you build with LangChain?

  • Smart chatbots that remember what has already been discussed
  • Search engines that answer from company documents
  • Product/service recommendation systems
  • AI agents that can use tools to do work on a person's behalf

Code Examples

1. Models, Prompts and parsers (Prompt Templates)

  • Models are the language models themselves; many are available on the market
  • Prompts are the style of building inputs to send to a model (see the minimal sketch below)
  • Parsers take structured output back out of a model's response
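
Before the full example, here is a minimal prompt-template sketch adapted from the same course material; the style and text values are just illustrative inputs:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chat = ChatOpenAI(temperature=0.0)

# one reusable template, formatted with different inputs on each call
template_string = """Translate the text that is delimited by triple backticks \
into a style that is {style}. text: ```{text}```"""
prompt_template = ChatPromptTemplate.from_template(template_string)

messages = prompt_template.format_messages(
    style="American English in a calm and respectful tone",
    text="Arrr, me be fuming that me blender lid flew off!")
response = chat(messages)
print(response.content)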

Example


import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file (OPENAI_API_KEY)

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.output_parsers import ResponseSchema
from langchain.output_parsers import StructuredOutputParser

# To control the randomness and creativity of the generated
# text by an LLM, use temperature = 0.0
chat = ChatOpenAI(temperature=0.0, model=llm_model)
chat

customer_review = """\
This leaf blower is pretty amazing.  It has four settings:\
candle blower, gentle breeze, windy city, and tornado. \
It arrived in two days, just in time for my wife's \
anniversary present. \
I think my wife liked it so much she was speechless. \
So far I've been the only one using it, and I've been \
using it every other morning to clear the leaves on our lawn. \
It's slightly more expensive than the other leaf blowers \
out there, but I think it's worth it for the extra features.
"""

gift_schema = ResponseSchema(name="gift",
                             description="Was the item purchased\
                             as a gift for someone else? \
                             Answer True if yes,\
                             False if not or unknown.")
delivery_days_schema = ResponseSchema(name="delivery_days",
                                      description="How many days\
                                      did it take for the product\
                                      to arrive? If this \
                                      information is not found,\
                                      output -1.")
price_value_schema = ResponseSchema(name="price_value",
                                    description="Extract any\
                                    sentences about the value or \
                                    price, and output them as a \
                                    comma separated Python list.")

response_schemas = [gift_schema, 
                    delivery_days_schema,
                    price_value_schema]

output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()
print(format_instructions)

review_template_2 = """\
For the following text, extract the following information:

gift: Was the item purchased as a gift for someone else? \
Answer True if yes, False if not or unknown.

delivery_days: How many days did it take for the product\
to arrive? If this information is not found, output -1.

price_value: Extract any sentences about the value or price,\
and output them as a comma separated Python list.

text: {text}

{format_instructions}
"""

prompt = ChatPromptTemplate.from_template(template=review_template_2)

messages = prompt.format_messages(text=customer_review, 
                                format_instructions=format_instructions)

print(messages[0].content)

response = chat(messages)

print(response.content)

output_dict = output_parser.parse(response.content)

output_dict

type(output_dict)

output_dict.get('delivery_days')

#output is 2

2. Memory

When working with a model, an important consideration is keeping state in Memory. A chatbot, for example, needs to remember what has already been said so it can refer back to the earlier conversation.

Example

import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file

import warnings
warnings.filterwarnings('ignore')

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0.0, model=llm_model)
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm, 
    memory = memory,
    verbose=True
)

conversation.predict(input="Hi, my name is Andrew")

#output start
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:

Human: Hi, my name is Andrew
AI:

> Finished chain.
"Hello Andrew! It's nice to meet you. How can I assist you today?"
#output end

conversation.predict(input="What is 1+1?")

#output start
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI:

> Finished chain.
'1+1 equals 2. Is there anything else you would like to know?'
#output end

conversation.predict(input="What is my name?")

#output start
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
Human: Hi, my name is Andrew
AI: Hello Andrew! It's nice to meet you. How can I assist you today?
Human: What is 1+1?
AI: 1+1 equals 2. Is there anything else you would like to know?
Human: What is my name?
AI:

> Finished chain.
'Your name is Andrew.'
#output end

The different types of conversation Memory

  • ConversationBufferMemory - the default; stores the full message history in a variable
  • ConversationBufferWindowMemory - windowed memory; prevents the AI from accumulating and sending too much history, e.g. k=1 keeps only the single most recent exchange (one turn / one prompt)
    from langchain.memory import ConversationBufferWindowMemory
    memory = ConversationBufferWindowMemory(k=1)       
    
  • ConversationTokenBufferMemory - caps the number of tokens kept, so the stored context never exceeds the configured limit
    from langchain.memory import ConversationTokenBufferMemory
    from langchain.llms import OpenAI
    llm = ChatOpenAI(temperature=0.0, model=llm_model)
    memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=50)
    memory.save_context({"input": "AI is what?!"},
                    {"output": "Amazing!"})
    memory.save_context({"input": "Backpropagation is what?"},
                    {"output": "Beautiful!"})
    memory.save_context({"input": "Chatbots are what?"}, 
                    {"output": "Charming!"})
    memory.load_memory_variables({})
    #output if max_token_limit=30, save only first context
    #output if max_token_limit=50, save first and second context
    #output if max_token_limit=100, save all contexts
    
  • ConversationSummaryBufferMemory - keeps a summarized memory of the conversation over time (plus a buffer of recent turns up to a token limit)
    from langchain.memory import ConversationSummaryBufferMemory
    # create a long string
    schedule = "There is a meeting at 8am with your product team. \
    You will need your powerpoint presentation prepared. \
    9am-12pm have time to work on your LangChain \
    project which will go quickly because Langchain is such a powerful tool. \
    At noon, lunch at the Italian restaurant with a customer who is driving \
    from over an hour away to meet you to understand the latest in AI. \
    Be sure to bring your laptop to show the latest LLM demo."
    memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=100)
    memory.save_context({"input": "Hello"}, {"output": "What's up"})
    memory.save_context({"input": "Not much, just hanging"},
                    {"output": "Cool"})
    memory.save_context({"input": "What is on the schedule today?"}, 
                    {"output": f"{schedule}"})
    memory.load_memory_variables({})
    conversation = ConversationChain(
    llm=llm, 
    memory = memory,
    verbose=True
    )
    conversation.predict(input="What would be a good demo to show?")
    #output start
    'A good demo to show would be one that highlights the capabilities and features of the AI system. For example, a demo showcasing natural language processing, image recognition, or predictive analytics could be impressive. It really depends on what you want to showcase and what your audience is interested in. Let me know if you need more specific suggestions!'
    #output end
    memory.load_memory_variables({})
    # output: the memory now holds a running summary of the earlier conversation
    

Additional Memory types

  • Vector data memory - stores conversation turns (or text from elsewhere) in a vector database and retrieves the relevant blocks of text when needed (see the sketch below)
  • Entity memories - uses the LLM to remember details about specific entities
  • Multiple memory types can also be used at the same time.
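
A minimal sketch of vector data memory, assuming the VectorStoreRetrieverMemory class shipped in the same langchain 0.0.x releases used elsewhere in this post:

from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import DocArrayInMemorySearch

# back the memory with an in-memory vector store
vectorstore = DocArrayInMemorySearch.from_texts(
    ["conversation start"], embedding=OpenAIEmbeddings())
memory = VectorStoreRetrieverMemory(
    retriever=vectorstore.as_retriever(search_kwargs={"k": 1}))

memory.save_context({"input": "My favorite sport is football"},
                    {"output": "Noted!"})
memory.save_context({"input": "I work as a data engineer"},
                    {"output": "Interesting!"})

# returns the stored exchange most relevant to the query,
# not the whole history
print(memory.load_memory_variables({"prompt": "What is my job?"}))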

3. Chains

Chains are one of the key building blocks: a chain combines an LLM with a prompt, and the same chain can then be run over many inputs.

  • LLMChain - a simple single chain (a minimal sketch follows this list)
  • Sequential Chains - chains run one after another; examples below:
    • SimpleSequentialChain
    • SequentialChain
  • Router Chain - routes the input to the destination chain whose prompt best matches it
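
Since LLMChain is the building block the sequential chains below are made of, here is a minimal sketch first (the product value is the same example input used in the SimpleSequentialChain run below):

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)
prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
chain = LLMChain(llm=llm, prompt=prompt)

# runs the prompt through the LLM for a single input
chain.run("Queen Size Sheet Set")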

Examples

SimpleSequentialChain

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = ChatOpenAI(temperature=0.9, model=llm_model)

# example input (the output below corresponds to a bedding product)
product = "Queen Size Sheet Set"

# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)

# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)

# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)

overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True
                                            )

overall_simple_chain.run(product)

#output
> Entering new SimpleSequentialChain chain...
"Royal Rest Bedding Co."
"Royal Rest Bedding Co. specializes in luxurious and comfortable mattresses and bedding designed for a regal and restful sleep experience."

> Finished chain.
'"Royal Rest Bedding Co. specializes in luxurious and comfortable mattresses and bedding designed for a regal and restful sleep experience."'

SequentialChain

from langchain.chains import SequentialChain
llm = ChatOpenAI(temperature=0.9, model=llm_model)

# prompt template 1: translate to english
first_prompt = ChatPromptTemplate.from_template(
    "Translate the following review to english:"
    "\n\n{Review}"
)
# chain 1: input= Review and output= English_Review
chain_one = LLMChain(llm=llm, prompt=first_prompt, 
                     output_key="English_Review"
                    )
second_prompt = ChatPromptTemplate.from_template(
    "Can you summarize the following review in 1 sentence:"
    "\n\n{English_Review}"
)
# chain 2: input= English_Review and output= summary
chain_two = LLMChain(llm=llm, prompt=second_prompt, 
                     output_key="summary"
                    )
# prompt template 3: detect the language of the review
third_prompt = ChatPromptTemplate.from_template(
    "What language is the following review:\n\n{Review}"
)
# chain 3: input= Review and output= language
chain_three = LLMChain(llm=llm, prompt=third_prompt,
                       output_key="language"
                      )

# prompt template 4: follow up message
fourth_prompt = ChatPromptTemplate.from_template(
    "Write a follow up response to the following "
    "summary in the specified language:"
    "\n\nSummary: {summary}\n\nLanguage: {language}"
)
# chain 4: input= summary, language and output= followup_message
chain_four = LLMChain(llm=llm, prompt=fourth_prompt,
                      output_key="followup_message"
                     )

# overall_chain: input= Review 
# and output= English_Review,summary, followup_message
overall_chain = SequentialChain(
    chains=[chain_one, chain_two, chain_three, chain_four],
    input_variables=["Review"],
    output_variables=["English_Review", "summary","followup_message"],
    verbose=True
)

import pandas as pd
df = pd.read_csv('Data.csv')  # the product-review CSV used in the course

review = df.Review[5]
overall_chain(review)

#output start
> Entering new SequentialChain chain...

> Finished chain.
{'Review': "Je trouve le goût médiocre. La mousse ne tient pas, c'est bizarre. J'achète les mêmes dans le commerce et le goût est bien meilleur...\nVieux lot ou contrefaçon !?",
 'English_Review': "I find the taste mediocre. The foam does not hold, it's strange. I buy the same ones in stores and the taste is much better... Old batch or counterfeit!?",
 'summary': 'The reviewer is disappointed with the taste and foam quality of the product, suspecting it may be an old batch or counterfeit.',
 'followup_message': "Cher client, nous sommes désolés d'apprendre que vous n'étiez pas satisfait du goût et de la qualité de la mousse de notre produit. Nous vous assurons que nous prenons la qualité de nos produits très au sérieux et nous enquêterons immédiatement sur votre problème. Il est possible qu'il s'agisse d'un lot périmé ou contrefait, et nous ferons tout notre possible pour rectifier la situation. Veuillez nous contacter directement pour que nous puissions résoudre ce problème au plus vite. Merci pour votre retour."}
#output end

Router Chain

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise\
and easy to understand manner. \
When you don't know the answer to a question you admit\
that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. \
You are great at answering math questions. \
You are so good because you are able to break down \
hard problems into their component parts, 
answer the component parts, and then put them together\
to answer the broader question.

Here is a question:
{input}"""

history_template = """You are a very good historian. \
You have an excellent knowledge of and understanding of people,\
events and contexts from a range of historical periods. \
You have the ability to think, reflect, debate, discuss and \
evaluate the past. You have a respect for historical evidence\
and the ability to make use of it to support your explanations \
and judgements.

Here is a question:
{input}"""

computerscience_template = """ You are a successful computer scientist.\
You have a passion for creativity, collaboration,\
forward-thinking, confidence, strong problem-solving capabilities,\
understanding of theories and algorithms, and excellent communication \
skills. You are great at answering coding questions. \
You are so good because you know how to solve a problem by \
describing the solution in imperative steps \
that a machine can easily interpret and you know how to \
choose a solution that has a good balance between \
time complexity and space complexity. 

Here is a question:
{input}"""

prompt_infos = [
    {
        "name": "physics", 
        "description": "Good for answering questions about physics", 
        "prompt_template": physics_template
    },
    {
        "name": "math", 
        "description": "Good for answering math questions", 
        "prompt_template": math_template
    },
    {
        "name": "History", 
        "description": "Good for answering history questions", 
        "prompt_template": history_template
    },
    {
        "name": "computer science", 
        "description": "Good for answering computer science questions", 
        "prompt_template": computerscience_template
    }
]

from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain,RouterOutputParser
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(temperature=0, model=llm_model)

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = ChatPromptTemplate.from_template(template=prompt_template)
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain  

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)

default_prompt = ChatPromptTemplate.from_template("{input}")
default_chain = LLMChain(llm=llm, prompt=default_prompt)

# Note: braces that must survive the .format() call below are doubled,
# so PromptTemplate can still substitute {input} and emit literal JSON
# braces at run time.
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a \
language model, select the model prompt best suited for the input. \
You will be given the names of the available prompts and a \
description of what the prompt is best suited for. \
You may also revise the original input if you think that revising \
it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \\ name of the prompt to use or "DEFAULT"
    "next_inputs": string \\ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt names \
listed below, or "DEFAULT" if the input is not well suited for any \
of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input \
if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT (remember to include the ```json)>>"""
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
    destinations=destinations_str
)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)

router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(router_chain=router_chain, 
                         destination_chains=destination_chains, 
                         default_chain=default_chain, verbose=True
                        )

chain.run("What is black body radiation?")
#output start
# > Entering new MultiPromptChain chain...
# physics: {'input': 'What is black body radiation?'}
# > Finished chain.
# "Black body radiation refers to the electromagnetic radiation emitted by a perfect black body, which is an idealized physical body that absorbs all incident electromagnetic radiation and emits radiation at all frequencies. The radiation emitted by a black body depends only on its temperature and follows a specific distribution known as Planck's law. This type of radiation is important in understanding concepts such as thermal radiation and the behavior of objects at different temperatures."
#output end

chain.run("what is 2 + 2")
#output start
#> Entering new MultiPromptChain chain...
# math: {'input': 'what is 2 + 2'}
# > Finished chain.
# 'The answer to 2 + 2 is 4.'
#output end

chain.run("Why does every cell in our body contain DNA?")
#output start
# > Entering new MultiPromptChain chain...
# None: {'input': 'Why does every cell in our body contain DNA?'}
# > Finished chain.
# 'Every cell in our body contains DNA because DNA carries the genetic information that determines the characteristics and functions of an organism. DNA contains the instructions for building and maintaining an organism, including the proteins that are essential for cell function and structure. This genetic information is passed down from parent to offspring and is essential for the growth, development, and functioning of all cells in the body. Having DNA in every cell ensures that the genetic information is preserved and can be used to carry out the necessary processes for life.'
#output end

4. Retrieval (RAG) / Question Answering

Search information from documents or a vector store so the model can answer more specifically.

The simplest way to answer from your data is the Stuff method: stuff the relevant chunks into a single prompt and ask the LLM to find the answer there.

When there is too much data to fit, you can run Map Reduce or Refine over the chunks to build up an answer, or Map Rerank to score each chunk's answer and keep the highest-scoring one; these approaches take noticeably more effort (more LLM calls). A RetrievalQA sketch where chain_type selects the method follows the example below.

Example


import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from IPython.display import display, Markdown
from langchain.llms import OpenAI

file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file)

from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator(
    vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])

query ="Please list all your shirts without sun protection \\
in a table in markdown and summarize each one."

llm_replacement_model = OpenAI(temperature=0, 
                               model='gpt-3.5-turbo-instruct')

response = index.query(query, 
                       llm = llm_replacement_model)

display(Markdown(response))
#output list of all your shirts without sun protection
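
The same question can also be answered through the RetrievalQA chain imported above, where chain_type selects the method described earlier; a minimal sketch:

llm = ChatOpenAI(temperature=0.0, model=llm_model)
retriever = index.vectorstore.as_retriever()

qa_stuff = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",  # or "map_reduce", "refine", "map_rerank"
    retriever=retriever,
    verbose=True
)
response = qa_stuff.run(query)
display(Markdown(response))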

5. Agents

Agents let the LLM decide for itself which tool to use, such as searching the web or querying data.

Example: using the built-in LangChain tools LLM-Math and Wikipedia

import os

from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv()) # read local .env file

import warnings
warnings.filterwarnings("ignore")

# account for deprecation of LLM model
import datetime
# Get the current date
current_date = datetime.datetime.now().date()

# Define the date after which the model should be set to "gpt-3.5-turbo"
target_date = datetime.date(2024, 6, 12)

# Set the model variable based on the current date
if current_date > target_date:
    llm_model = "gpt-3.5-turbo"
else:
    llm_model = "gpt-3.5-turbo-0301"

from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools.python.tool import PythonREPLTool
from langchain.python import PythonREPL
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, model=llm_model)

tools = load_tools(["llm-math","wikipedia"], llm=llm)

agent= initialize_agent(
    tools, 
    llm, 
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose = True)

agent("What is the 25% of 300?")

#output start
> Entering new AgentExecutor chain...
Thought: We can use the calculator tool to find 25% of 300.
Action:
{
  "action": "Calculator",
  "action_input": "25% of 300"
}
Observation: Answer: 75.0
Thought:Final Answer: 75.0

> Finished chain.
{'input': 'What is the 25% of 300?', 'output': '75.0'}
#output end

## Wikipedia example
question = "Tom M. Mitchell is an American computer scientist \
and the Founders University Professor at Carnegie Mellon University (CMU)\
what book did he write?"
result = agent(question) 

#output start
> Entering new AgentExecutor chain...
Thought: I can use Wikipedia to find out which book Tom M. Mitchell wrote.
Action:
{
  "action": "Wikipedia",
  "action_input": "Tom M. Mitchell"
}
Observation: Page: Tom M. Mitchell
Summary: Tom Michael Mitchell (born August 9, 1951) is an American computer scientist and the Founders University Professor at Carnegie Mellon University (CMU). He is a founder and former chair of the Machine Learning Department at CMU. Mitchell is known for his contributions to the advancement of machine learning, artificial intelligence, and cognitive neuroscience and is the author of the textbook Machine Learning. He is a member of the United States National Academy of Engineering since 2010. He is also a Fellow of the American Academy of Arts and Sciences, the American Association for the Advancement of Science and a Fellow and past president of the Association for the Advancement of Artificial Intelligence. In October 2018, Mitchell was appointed as the Interim Dean of the School of Computer Science at Carnegie Mellon.

Page: Ensemble learning
Summary: In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
Unlike a statistical ensemble in statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
Thought:I can use Wikipedia to find out the book written by Tom M. Mitchell.
Action:
{
  "action": "Wikipedia",
  "action_input": "Machine Learning (book)"
}

Observation: Page: Machine learning
Summary: Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Within a subdiscipline in machine learning, advances in the field of deep learning have allowed neural networks, a class of statistical algorithms, to surpass many previous machine learning approaches in performance.
ML finds application in many fields, including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. The application of ML to business problems is known as predictive analytics.
Statistics and mathematical optimization (mathematical programming) methods comprise the foundations of machine learning. Data mining is a related field of study, focusing on exploratory data analysis (EDA) via unsupervised learning. 
From a theoretical viewpoint, probably approximately correct learning provides a framework for describing machine learning.

Page: Quantum machine learning
Summary: Quantum machine learning is the integration of quantum algorithms within machine learning programs.
The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning. While machine learning algorithms are used to compute immense quantities of data, quantum machine learning utilizes qubits and quantum operations or specialized quantum systems to improve computational speed and data storage done by algorithms in a program. This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device. These routines can be more complex in nature and executed faster on a quantum computer. Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data.
Beyond quantum computing, the term "quantum machine learning" is also associated with classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning the phase transitions of a quantum system or creating new quantum experiments.
Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa.
Furthermore, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory".

Page: Timeline of machine learning
Summary: This page is a timeline of machine learning. Major discoveries, achievements, milestones and other major events in machine learning are included.
Thought:I can now answer the question about which book Tom M. Mitchell wrote based on the information gathered from Wikipedia.
Final Answer: Tom M. Mitchell wrote the book "Machine Learning."

> Finished chain.
#output end
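
The create_python_agent and PythonREPLTool imported above can also turn the LLM into an agent that writes and executes Python code; a short sketch from the same course material:

agent = create_python_agent(
    llm,
    tool=PythonREPLTool(),
    verbose=True
)

customer_list = [["Harrison", "Chase"],
                 ["Lang", "Chain"],
                 ["Dolly", "Too"]]

# the agent writes Python (e.g. sorted(...)) and runs it in the REPL tool
agent.run(f"""Sort these customers by last name and then first name \
and print the output: {customer_list}""")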

From here you can build further with libraries such as langchain and langchain-core, and explore examples that combine LangChain with OpenAI, Hugging Face, or LlamaIndex.
