Code can be found here: GitHub - jamesbmour/blog_tutorials
In the ever-evolving world of conversational AI and language models, maintaining context and efficiently managing information flow are critical components of building intelligent applications. LangChain, a powerful framework designed for working with large language models (LLMs), offers robust tools for memory management and data persistence, enabling the creation of context-aware systems.
In this guide, we'll delve into the nuances of leveraging memory and storage in LangChain to build smarter, more responsive applications.
Memory management in LangChain allows applications to retain context, making interactions more coherent and contextually relevant. Let’s explore the different memory types and their use cases.
LangChain provides various memory types to address different scenarios. Here, we’ll focus on two key types:
ConversationBufferMemory
This memory type is ideal for short-term context retention, capturing and recalling recent interactions in a conversation.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi, I'm Alice"}, {"output": "Hello Alice, how can I help you today?"})
memory.save_context({"input": "What's the weather like?"}, {"output": "I'm sorry, I don't have real-time weather information. Is there anything else I can help you with?"})

print(memory.load_memory_variables({}))
ConversationSummaryMemory
For longer conversations, ConversationSummaryMemory is a great choice. It summarizes key points, maintaining context without overwhelming detail.
from langchain.memory import ConversationSummaryMemory
from langchain.llms import Ollama

llm = Ollama(model='phi3', temperature=0)
memory = ConversationSummaryMemory(llm=llm)
memory.save_context({"input": "Hi, I'm Alice"}, {"output": "Hello Alice, how can I help you today?"})
memory.save_context({"input": "I'm looking for a good Italian restaurant"}, {"output": "Great! I'd be happy to help you find a good Italian restaurant. Do you have any specific preferences or requirements, such as location, price range, or specific dishes you're interested in?"})

print(memory.load_memory_variables({}))
Selecting the appropriate memory type depends on several factors: the expected length of the conversation, how much detail must be retained verbatim, and the token and latency cost of carrying that context on every call. As a rule of thumb, ConversationBufferMemory suits short exchanges where every turn matters, while ConversationSummaryMemory suits long-running conversations where a condensed summary is enough; the sketch below shows one way to encode that choice.
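As a rough illustration, here is a minimal helper that encodes this decision. The choose_memory name and the 20-turn threshold are illustrative assumptions, not part of LangChain:

from langchain.memory import ConversationBufferMemory, ConversationSummaryMemory
from langchain.llms import Ollama

def choose_memory(expected_turns, llm):
    # Illustrative cutoff: past roughly 20 turns, a raw buffer consumes a
    # large share of the context window, so fall back to rolling summaries.
    if expected_turns <= 20:
        return ConversationBufferMemory()
    return ConversationSummaryMemory(llm=llm)

llm = Ollama(model='phi3', temperature=0)
memory = choose_memory(expected_turns=50, llm=llm)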
Memory can be seamlessly integrated into LangChain chains and agents to enhance conversational capabilities.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# Reuses the Ollama llm created earlier; swap in another LLM if you prefer:
# llm = OpenAI(temperature=0)
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

conversation.predict(input="Hi, I'm Alice")
conversation.predict(input="What's my name?")
This example illustrates how ConversationBufferMemory can be used to remember previous interactions, enabling more natural conversations.
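To see why the second call can answer "What's my name?", inspect the buffer after the exchange (using the memory object defined above; the exact replies will vary by model):

print(memory.load_memory_variables({}))
# -> {'history': "Human: Hi, I'm Alice\nAI: ...\nHuman: What's my name?\nAI: ..."}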
Persistent storage ensures that conversation history and context are maintained across sessions, enabling continuity in interactions.
For basic persistence, you can use file-based storage with JSON:
import json

class PersistentMemory:
    def __init__(self, file_path):
        self.file_path = file_path
        self.load_memory()

    def load_memory(self):
        try:
            with open(self.file_path, 'r') as f:
                self.chat_memory = json.load(f)
        except FileNotFoundError:
            self.chat_memory = {'messages': []}

    def save_memory(self):
        with open(self.file_path, 'w') as f:
            json.dump({'messages': self.chat_memory['messages']}, f)

# Usage
memory = PersistentMemory(file_path='conversation_history.json')
print(memory.chat_memory)
This method allows you to persist conversation history in a simple, human-readable format.
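Note that PersistentMemory as written only loads and writes the file; it has no method for recording new turns. A small, hypothetical extension (the PersistentMemoryWithSave name and message format are illustrative choices, not a LangChain API) adds one in the same ({"input": ...}, {"output": ...}) convention used earlier:

class PersistentMemoryWithSave(PersistentMemory):
    def save_context(self, inputs, outputs):
        # Append the turn in memory, then write through to disk
        self.chat_memory['messages'].append(
            {"input": inputs["input"], "output": outputs["output"]}
        )
        self.save_memory()

memory = PersistentMemoryWithSave(file_path='conversation_history.json')
memory.save_context({"input": "Hi, I'm Alice"},
                    {"output": "Hello Alice, how can I help you today?"})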
For more scalable and efficient storage, integrating with databases like SQLite is recommended:
import sqlite3

class SQLiteMemory:
    def __init__(self, db_path):
        self.db_path = db_path
        self.conn = sqlite3.connect(db_path)
        self.create_table()

    def create_table(self):
        cursor = self.conn.cursor()
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS conversations
            (id INTEGER PRIMARY KEY, input TEXT, output TEXT)
        ''')
        self.conn.commit()

    def save_context(self, inputs, outputs):
        cursor = self.conn.cursor()
        cursor.execute('INSERT INTO conversations (input, output) VALUES (?, ?)',
                       (inputs['input'], outputs['output']))
        self.conn.commit()

    def load_memory_variables(self, inputs):
        cursor = self.conn.cursor()
        cursor.execute('SELECT input, output FROM conversations ORDER BY id DESC LIMIT 10')
        rows = cursor.fetchall()
        history = "\n".join([f"Human: {row[0]}\nAI: {row[1]}" for row in reversed(rows)])
        return {"history": history}

# Usage
memory = SQLiteMemory('conversation_history.db')
print(memory.load_memory_variables({}))
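As a quick sanity check (a sketch assuming conversation_history.db is writable), save a turn through one instance and read it back from a second, which mimics restarting the application:

memory = SQLiteMemory('conversation_history.db')
memory.save_context({"input": "Hi, I'm Alice"},
                    {"output": "Hello Alice, how can I help you today?"})

# A fresh instance (e.g., a later session) sees the same history,
# because it reads from the database rather than process memory.
memory2 = SQLiteMemory('conversation_history.db')
print(memory2.load_memory_variables({})["history"])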
To ensure your application remains responsive as conversation history grows, consider optimization strategies such as limiting how much history you load per query (as the LIMIT 10 clause above does), summarizing older turns instead of storing them verbatim, and caching frequently read memory data.
Here’s an example of a memory class with basic caching:
import time

class CachedSQLiteMemory(SQLiteMemory):
    def __init__(self, db_path, cache_ttl=60):
        super().__init__(db_path)
        self.cache = None
        self.cache_time = 0
        self.cache_ttl = cache_ttl

    def load_memory_variables(self, inputs):
        current_time = time.time()
        # Refresh the cache only when it is empty or older than the TTL
        if self.cache is None or (current_time - self.cache_time) > self.cache_ttl:
            self.cache = super().load_memory_variables(inputs)
            self.cache_time = current_time
        return self.cache

memory = CachedSQLiteMemory('conversation_history.db', cache_ttl=30)
This implementation caches the results of database queries for a specified time, reducing the load on the database and improving performance for applications that frequently access memory data.
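One caveat: as written, save_context does not invalidate the cache, so a turn saved mid-TTL will not show up in reads until the TTL expires. A minimal fix, sketched here as a hypothetical subclass, clears the cache on every write:

class WriteThroughCachedMemory(CachedSQLiteMemory):
    def save_context(self, inputs, outputs):
        super().save_context(inputs, outputs)
        self.cache = None  # force the next read to hit the database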
Effective memory management is a cornerstone of building intelligent, context-aware conversational AI applications. LangChain provides a flexible and powerful framework for managing memory, allowing developers to tailor memory types to specific use cases, implement persistent storage solutions, and optimize performance for large-scale applications.
By choosing the right memory type, integrating persistent storage, and leveraging advanced techniques such as custom memory classes and caching strategies, you can build sophisticated AI systems that maintain context, improve the user experience, and operate efficiently even as the scale and complexity of interactions grow.
With these tools and techniques at your disposal, you are well equipped to harness the full potential of LangChain in creating responsive, intelligent, context-aware AI applications. Whether you're building a customer support bot, a virtual assistant, or a complex conversational system, mastering memory and storage in LangChain will be a key factor in your success.
If you'd like to support my writing or buy me a beer:
https://buymeacoffee.com/bmours