
Learning Agent - 01

Reference tutorial:

https://www.datacamp.com/tutorial/langgraph-tutorial

0x01 Building a Single-Node ChatBot with LangGraph

import os
from dotenv import load_dotenv
from pydantic import SecretStr

from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI


load_dotenv('../../.llm_env')
API_KEY: str = os.getenv("MY_OPENAI_API_KEY") or ""
BASE_URL = os.getenv("MY_OPENAI_API_BASE")

llm = ChatOpenAI(api_key=SecretStr(API_KEY), base_url=BASE_URL, model="gpt-4.1", temperature=0)

class State(TypedDict):
    # messages have the type "list".
    # The add_messages function appends messages to the list, rather than overwriting them
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

# node
def chatbot(state: State):
    print(state)
    return {"messages": [llm.invoke(state["messages"])]}

# The first argument is the unique node name.
# The second argument is the function or object that will be called whenever the node is used.
graph_builder.add_node("chatbot", chatbot)

# Set entry and finish points
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")

graph = graph_builder.compile()
try:
    with open("struct.png", "wb") as f:
        f.write(graph.get_graph().draw_mermaid_png())
except Exception as e:
    print(f"failed to render graph image: {e}")

# Run the chatbot
while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)
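Conceptually, for a single input the compiled graph just moves a State from `_start`, through the `chatbot` node, to `_end`, merging each node's partial update with the `add_messages` reducer along the way. A minimal plain-Python sketch of that flow (`fake_llm` is a hypothetical stand-in for `llm.invoke`, not part of LangGraph):

```python
# Plain-Python sketch of what the compiled single-node graph does for one input.
def fake_llm(messages):
    # hypothetical stand-in for the real model call
    return ("assistant", "echo: " + messages[-1][1])

def chatbot_node(state):
    # the node returns a partial state update, just like the real chatbot()
    return {"messages": [fake_llm(state["messages"])]}

def run_graph(initial_state):
    state = initial_state                      # _start
    update = chatbot_node(state)               # the "chatbot" node runs
    # add_messages-style reducer: append the update instead of overwriting
    state = {"messages": state["messages"] + update["messages"]}
    return state                               # _end

final = run_graph({"messages": [("user", "hi")]})
print(final["messages"][-1])  # ('assistant', 'echo: hi')
```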

Key steps:

  • Create the llm instance
  • Define the State and the node
    
    class State(TypedDict):
      # messages have the type "list".
      # The add_messages function appends messages to the list, rather than overwriting them
      messages: Annotated[list, add_messages]
    graph_builder = StateGraph(State)
    

    You can think of State as the "context state" passed between nodes: each node receives a State and applies its own processing logic:

    
    def chatbot(state: State):
      print(state)
      return {"messages": [llm.invoke(state["messages"])]}
    ...
    graph_builder.add_node("chatbot", chatbot)  # use the chatbot function to process the State received by the "chatbot" node
    

    It then produces a new State and passes it on to the next node.

  • A TypedDict defines a dict-like structure: each declared field name becomes a key, and the type after the colon specifies that value's type
  • Annotated takes a type plus extra metadata, e.g. Annotated[str, "info"]. Here the base type is list, and the metadata (add_messages) tells LangGraph to append new messages to the list when the State is passed between nodes, rather than replacing it
  • Compile the graph and run it
    
    while True:
      user_input = input("User: ")
      if user_input.lower() in ["quit", "exit", "q"]:
          print("Goodbye!")
          break
      for event in graph.stream({"messages": [("user", user_input)]}):
          for value in event.values():
              print("Assistant:", value["messages"][-1].content)
    

    When stream is called, {"messages": [("user", user_input)]} is passed to _start as the initial State; once the data reaches _end, the values in each event are read.
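The metadata that Annotated attaches is ordinary data: a framework can read it at graph-build time to decide how to merge state updates. A self-contained sketch of that mechanism, where `append_reducer` is an illustrative plain-Python analogue of `add_messages`, not the real function:

```python
from typing import Annotated, TypedDict, get_type_hints

def append_reducer(existing: list, new: list) -> list:
    # illustrative analogue of add_messages: append rather than overwrite
    return existing + new

class State(TypedDict):
    messages: Annotated[list, append_reducer]

# The metadata attached by Annotated is retrievable at runtime;
# this is how a framework can discover which reducer to apply.
hints = get_type_hints(State, include_extras=True)
reducer = hints["messages"].__metadata__[0]

merged = reducer([("user", "hi")], [("assistant", "hello")])
print(merged)  # both messages are kept
```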

Note the distinction from add_messages in the State above: each user input here becomes a fresh initial State, so there is no prompt context shared between two inputs. The earlier "append rather than overwrite" behavior refers to how information is added to the State as it moves between nodes within a single input. Test:

User: hello, your name is anla
{'messages': [HumanMessage(content='hello, your name is anla', additional_kwargs={}, response_metadata={}, id='500b6144-f418-4dc3-818a-339ff01ad07d')]}
Assistant: Hello! Nice to meet you. Yes, you can call me Anla. How can I help you today? 😊
User: what's your name
{'messages': [HumanMessage(content="what's your name", additional_kwargs={}, response_metadata={}, id='fea56bf2-3dca-4dd7-aaaa-bf48320170e9')]}
Assistant: I'm ChatGPT, your AI assistant! How can I help you today?
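The transcript above shows the second turn has no memory of the first. If you do want context to persist across turns, the simplest fix is to keep an accumulated history outside the graph and feed it back in as the initial State each time, sketched here in plain Python (`fake_llm` is a hypothetical stand-in for the model call; LangGraph's checkpointing facilities are the more idiomatic solution but are out of scope here):

```python
# Keep a running history and pass it back in each turn,
# instead of starting from a fresh State every time.
def fake_llm(messages):
    # hypothetical stand-in: reports how much context it was given
    return ("assistant", "seen %d messages" % len(messages))

history = []

def chat_turn(user_input):
    history.append(("user", user_input))   # extend the shared history
    reply = fake_llm(history)              # model sees the whole history
    history.append(reply)
    return reply

chat_turn("hello, your name is anla")
reply = chat_turn("what's your name")
print(reply)  # ('assistant', 'seen 3 messages')
```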

0x02

0x03 Multiple Nodes

This post is licensed under CC BY 4.0 by the author.