LangChain for LLM Application Development
https://learn.deeplearning.ai/langchain
Models, Prompts, and Parsers
LangChain provides convenience wrappers over OpenAI models. It lets you wrap prompts as reusable templates, so even complex prompts can be reused with different inputs. Output parsers let you define the format of the output you want to extract from the OpenAI response.
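The idea behind prompt templates and output parsers can be sketched in plain Python. This is a minimal illustration of the mechanics, not LangChain's actual API; the function names and the "key: value" reply format are assumptions for the example.

```python
# A template turns a complex prompt into a reusable function of its variables.
template = (
    "Translate the text delimited by triple dashes into a style "
    "that is {style}. text: ---{text}---"
)

def format_prompt(style, text):
    # Reuse the same complex prompt with different inputs.
    return template.format(style=style, text=text)

def parse_reply(reply):
    # A parser extracts structured data from the raw model response.
    # Here we assume the model was asked to reply with one "key: value" per line.
    result = {}
    for line in reply.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            result[key.strip()] = value.strip()
    return result

print(format_prompt("calm and polite", "Arrr, me blender lid flew off!"))
print(parse_reply("sentiment: negative\ndelivery_days: 2"))
```

LangChain's `ChatPromptTemplate` and structured output parsers play these two roles, with the parser also generating the formatting instructions to include in the prompt.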
Memory
LLMs are stateless: each transaction is independent, so the entire conversation is provided as context every single time.
ConversationBufferMemory stores the entire conversation; every new message is added to the buffer.
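Buffer memory can be sketched in a few lines of plain Python (class and method names here are illustrative, not LangChain's API): the full history is kept and replayed as context on every call.

```python
# Sketch of conversation-buffer memory: everything is stored, and the
# whole transcript is resent as context with each new request.
class BufferMemory:
    def __init__(self):
        self.buffer = []  # every (role, message) pair ever seen

    def add(self, role, message):
        self.buffer.append((role, message))

    def context(self):
        # The entire conversation is replayed as context each time.
        return "\n".join(f"{role}: {msg}" for role, msg in self.buffer)

memory = BufferMemory()
memory.add("human", "Hi, my name is Andrew")
memory.add("ai", "Hello Andrew!")
memory.add("human", "What is my name?")
print(memory.context())
```

The cost of this approach grows with conversation length, which motivates the bounded variants below.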
Alternatively, ConversationBufferWindowMemory stores only the last n messages in the context memory.
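The windowed variant only changes the eviction rule: keep the most recent n messages and drop the rest. A plain-Python sketch (names illustrative):

```python
# Sketch of window memory: only the last k messages survive.
class WindowMemory:
    def __init__(self, k):
        self.k = k  # number of most-recent messages to keep
        self.buffer = []

    def add(self, role, message):
        self.buffer.append((role, message))
        self.buffer = self.buffer[-self.k:]  # drop everything older

    def context(self):
        return "\n".join(f"{role}: {msg}" for role, msg in self.buffer)

w = WindowMemory(k=2)
w.add("human", "Hi, my name is Andrew")
w.add("ai", "Hello Andrew!")
w.add("human", "What is 1+1?")
print(w.context())  # the first message has been dropped
```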
There is also ConversationTokenBufferMemory, which limits memory by token count; this can help control spending directly, since pricing is based on tokens.
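A token-bounded buffer evicts the oldest messages until the history fits a token budget. In the sketch below (illustrative names, not LangChain's API), a one-token-per-word count stands in for a real tokenizer such as tiktoken:

```python
# Sketch of token-buffer memory: evict oldest messages until the
# conversation fits within a token budget.
class TokenBufferMemory:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.buffer = []

    def _tokens(self, text):
        # Crude stand-in for a real tokenizer: one token per word.
        return len(text.split())

    def add(self, role, message):
        self.buffer.append((role, message))
        # Drop from the front until we are under budget.
        while sum(self._tokens(m) for _, m in self.buffer) > self.max_tokens:
            self.buffer.pop(0)

tb = TokenBufferMemory(max_tokens=5)
tb.add("human", "one two three")
tb.add("ai", "four five six")   # pushes total to 6 tokens, so the oldest goes
print(tb.buffer)
```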
Finally, ConversationSummaryBufferMemory keeps a running summary of the conversation over time instead of the full transcript.
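The summary variant folds older messages into a running summary rather than discarding them. In LangChain the summarization is itself an LLM call; the sketch below (illustrative names) uses a stub `summarize` function in its place:

```python
# Sketch of summary-buffer memory: recent messages stay verbatim,
# older ones are folded into a running summary.
class SummaryBufferMemory:
    def __init__(self, max_messages, summarize):
        self.max_messages = max_messages
        self.summarize = summarize  # stand-in for an LLM summarization call
        self.summary = ""
        self.buffer = []

    def add(self, role, message):
        self.buffer.append((role, message))
        if len(self.buffer) > self.max_messages:
            # Fold the oldest message into the running summary.
            old_role, old_msg = self.buffer.pop(0)
            self.summary = self.summarize(self.summary, f"{old_role}: {old_msg}")

    def context(self):
        return f"summary: {self.summary}\n" + "\n".join(
            f"{r}: {m}" for r, m in self.buffer
        )

def summarize(old_summary, new_line):
    # Stub: a real implementation would ask an LLM to condense the text.
    return (old_summary + " " + new_line).strip()

sb = SummaryBufferMemory(max_messages=2, summarize=summarize)
sb.add("human", "Hi, my name is Andrew")
sb.add("ai", "Hello Andrew!")
sb.add("human", "What is my name?")
print(sb.context())
```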
Chains