
Building AI-powered apps | Vercel AI Accelerator (LangChain)

By Vercel

Summary

Key takeaways

  • **LangChain's new .stream() method simplifies LLM integration**: LangChain now offers a .stream() method for all its models and chains, simplifying the process of handling streaming responses and making it easier for developers to integrate LLMs into web applications without complex wrappers. [02:54], [08:34]
  • **LangChain prioritizes TypeScript and Next.js developers**: LangChain is actively working to improve the developer experience for those using TypeScript, Next.js, and Vercel, aiming to make their tools more accessible and user-friendly for the web development community. [02:17], [05:53]
  • **OpenAI Functions bridge LLMs and structured data**: OpenAI Functions enable LLMs to return structured data, bridging the gap between the unstructured nature of LLM outputs and traditional software workflows that rely on predictable data formats like JSON. [18:00], [19:19]
  • **Zod simplifies schema definition for LLM function calls**: Zod provides a clean syntax for defining schemas, which can be used with LangChain's chains to help LLMs extract structured data, simplifying the process of working with OpenAI Functions. [21:06], [22:59]
  • **Chains enable complex LLM workflows**: Chains in LangChain are abstractions for handling more complex LLM interactions that go beyond simple input-output, chaining together multiple calls and inputs, and integrating with components like memory and output parsing. [15:55], [16:26]
  • **Large context windows don't entirely replace retrieval**: While large context windows are advancing, they may not fully replace retrieval methods due to potential focus drop-offs in the middle of long inputs and cost considerations at scale, suggesting continued relevance for retrieval strategies. [44:00], [45:13]

Topics Covered

  • LangChain's new streaming API for web developers.
  • Chains: Beyond simple LLM input-output.
  • How do LLMs integrate with traditional structured data?
  • OpenAI Functions: Structured output for complex workflows.
  • Context windows aren't enough; smarter retrieval is key.

Full Transcript

all right, welcome everybody, thank you for coming. this is our seventh fireside chat of the Vercel AI Accelerator. today we have Jacob Lee from LangChain here. Jacob, uh, formerly the co-founder of Autocode and also an engineer at Google, um, he's now working at LangChain, and he's here to talk to us about, uh, how to use LangChain in your AI applications. so without further ado, please take it away, Jacob. thanks for being here.

um, thanks for having me, uh, really excited to have the opportunity to talk to everyone here. um, I know it's a super exciting space, tons of interest, um, across the board. it's just really exciting to be at LangChain and working on something that makes it easier for other folks to kind of have access to, and build apps around, all these crazy new LLMs. uh, it's hard to stay abreast of it all, but, um, you know, it's been a lot of fun.

um, yeah, I wanted to — uh, I think I sent in, uh, one possible talk, um, about, um, sort of, like, working with structured data, and, like, sort of taking, um, LLMs beyond, um, you know, chat interfaces, to use, like, structured data and, uh, things that, like, might fit better into, um, you know, more traditional pipelines. um, but given that this is a Vercel talk, we've been working on some really, really exciting things around, um, uh, streaming, and actually, uh, making our interfaces, um, go beyond kind of the simple, um, request-in, response-out model. um, and — I imagine a lot of people in the crowd are using Next.js — um, maybe a show of hands? I think a lot of cameras are off, though, so I can't see, but — um, a thumbs up is great too — that might be a bit more interesting.

yeah, and, um, you know, I think, uh, LangChain to date has been really focused on the Python side of things, and kind of, um, you know, serving the kind of, uh, traditional machine learning and data scientist crowd. um, I spend most of my day-to-day maintaining and, um, pushing for the TypeScript side of things, so for me, um, you know, folks using Next.js and, like, Vercel are, you know, super interesting, and, uh, we've been looking for ways we can, um, make the experience better for, um, those folks.

um, did you want to give a demo? I think maybe that might be a good way to start it.

um, I do, uh, yeah — I was working on this right before, so let me just get a little set up here. um, but yeah — um, this — I actually just pushed this out today, so it's really exciting — but, um, it's effectively, um, basically implementing a .stream() method that returns a readable stream that's also async-iterable, for all LangChain models, and, um, a few other pieces — uh, chains are coming very soon. um, you can call it on a chain, but it'll, uh, just return one big chunk for now. and, uh, yeah, we're internally calling it the single call protocol — we're gonna be doing a bigger blog post about it later. um, it also gives us the ability to run batch requests and everything.
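A rough sketch of what calling this new .stream() method looks like from user code — the import path and class name here are assumptions based on the `langchain` npm package of that era, and it needs a real OPENAI_API_KEY in the environment to actually produce output:

```typescript
// Sketch of the .stream() call described above: LangChain models expose
// .stream(), which returns a readable stream that is also async-iterable,
// so chunks can be consumed with for-await as they arrive.
// Assumes the `langchain` npm package and an OPENAI_API_KEY env var.
import { ChatOpenAI } from "langchain/chat_models/openai";

async function main() {
  const model = new ChatOpenAI({});
  // Each chunk is a partial message; logging them shows tokens arriving
  // incrementally instead of one final blob.
  const stream = await model.stream("What is Vercel?");
  for await (const chunk of stream) {
    console.log(chunk.content);
  }
}

// Only run when a key is configured, since this calls the OpenAI API.
if (process.env.OPENAI_API_KEY) {
  main().catch(console.error);
}
```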

um, yeah, maybe — uh, through my ignorance I need to look up the create-react-app — uh, sorry, not react app — the create-next-app command, but, um, maybe best if I share my screen. let's see, we've got — um, yeah, share screen, select the entire window, uh — I think this should do it. now I need to give it access here, let's see — uh, come on — I may drop out, unfortunately, yeah. um, I will be right back, apologies.

cool. in the meantime, how's everybody doing?

great. slogging through infrastructure.

good. yeah, good, good.

finally Friday — excited for the weekend. did you, uh — did you get any time on your project this week, Hassan?

not too much. um, I mostly work on it on weekends because I have my day job during the week, uh, so yeah, I'll probably have some updates to report next week though, yeah.

nice. fun, thanks.

so, yeah, let's try this again, yeah. okay, that looks better. um, great, cool. okay, so I've got — um, yeah, I think, uh, many of you have probably, uh, built it out yourselves, or are obviously familiar with, the AI SDK. um, I think one of the big goals with, um, this refactor was, uh, composability, but also kind of playing nicely with these, uh, web technologies, and especially the TypeScript side. um, so to start I'll just create a brand new react app — I'm sorry, excuse me, Next app — um, vercel-fireside. uh, do we want TypeScript? yes. ESLint's fine. um, let's use that, and no. um, so I'll just walk you through sort of, like, how we — um, yeah, so this new, uh, syntax, and sort of how we're trying to support web developers better, um, and flexibility around, uh, Next.js in particular.

cool, so I've got a brand new Next app here, um, no API routes yet. um, if I were to run this right now you'd just see the default, uh — oops — uh, vercel-fireside, yep — yarn install. uh, let's add langchain first, actually — um, let's install our TypeScript/JS version and, uh, its dependencies — and we'll also add the ai SDK — oops, I forgot the add.

cool, so now we've got that taken care of. um, we're going to have a look at this with yarn dev and see what we've got. and, yeah, nothing, um, too crazy at the moment — I'm sure you're all familiar with this. um, and we'll make a new API route as well, so we'll call it, uh, api/chat. and, uh, it's route.ts, right? not routes — I think it's routes, yeah — ah, sorry, it is route. cool. um, so yeah, so far you're all familiar with all this. um, we're just gonna do export, um, our function here, for that. cool.

um, yeah, a few other pieces here — I'll just, uh, save myself a little trouble, how about that. cool, so I've got, uh — yeah, a Next.js route set up here, um, that takes a request and returns a response. and, um, I think formerly, uh, there was a LangChainStream class that, um, you could import from the Vercel AI SDK, and if you wanted to use some of LangChain's, uh, various features, um, with streaming — um, you can of course use the request, you know, simple, um — uh — that's what I'm looking for — um, you know, input-in, output-out, uh, protocol — but, um, yeah, you'd need to, like, use this, uh, LangChainStream class and then wrap it with a callback handler.

um, yeah, fresh — uh, fresh out a few hours ago — uh, we've got this new fancy, um, .stream() method. so if I were to go create a model — a LangChain model, um, ChatOpenAI, just like that — um, I'll also populate environment variables in a moment, so, uh, I'll stop sharing for a second there. um, so I've got my chat model, I've got, um — yeah, just do that: uh, stream equals await — and this is the new bit — chatModel.stream(). and this is available on, um — oh yeah, okay — uh, this is available on all of the models we have, as well as the chains — um, though at the moment the chains will just, um, sort of pipe back the entire output in one go. uh, we have plans to greatly expand support for, um, all the popular, like, sort of, um, use cases that, uh, people like, and, um, to make them much more friendly and usable in, uh, a web environment like this. so, yeah, let's just type: how is — or, what is — Vercel? a crypto company?

um, and then, yeah, we can do, uh — you know, actually, why don't I start with a plain LLM, to make this a bit less — there's just the normal, sort of, uh — yeah, there's an extra step required for chat models, um, which I'll get into in a minute, but we'll start with the normal LLM model: llm.stream(). and then I can actually just pipe this directly into, um, a new Response, just like where you'd use any of the kind of, um, Vercel primitives there. and I think I do need to actually pipe it through a, uh, TextEncoder, because it expects bytes, but, um — what's going on here — okay, let's go back to the chat model version — that's why, uh — I'm ready, okay. and we have this new kind of utility class that will take care of chunk encodings and, uh, make sure they're in the right format for Vercel. we'll get that going, and, um —
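What the demo is doing here — piping the model's async-iterable stream through a TextEncoder into a Response — can be sketched with a generic helper. The LangChain model itself is omitted; this helper works on any async iterable of strings:

```typescript
// Sketch of the route's plumbing: take the async-iterable stream a
// LangChain model returns and turn it into a byte ReadableStream that
// `new Response(...)` can send to the browser.
function iterableToByteStream(
  iterable: AsyncIterable<string>
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder(); // Response bodies want bytes, not strings
  const iterator = iterable[Symbol.asyncIterator]();
  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(encoder.encode(value));
      }
    },
  });
}

// In a Next.js route handler this would be roughly:
//   const stream = await model.stream(prompt);
//   return new Response(iterableToByteStream(stream));
```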

yeah, and then for the front end, um, you know, you can cheat a little bit and just copy-paste over some stuff I prepared earlier. and now if I were to run — uh, oops — actually, that should be good. um, here we go. oh, and I think globals.css too, sorry — some funky, uh, CSS going on. cool, now if I were to go, um — uh, that's not right — there we go, okay. how is — uh, sorry, what is Vercel — actually, no, uh, it's actually just gonna take the hard-coded value there. um — oh, what's happened here? did I — did I do environment variables? uh, no, I did not, okay. uh, let me stop sharing for a moment, get my key in there, and I'll be back in a minute. good catch, thank you. all right, let's try that again — okay — um — all right, and we're back. cool.

cool

uh, so now if I were to ask, um, what Vercel is, um, I just get the streaming right out of the box here — so no more, um, you know, kind of cumbersome wrappers or callback handling. um, you know, uh, just good clean data bytes. um, and you can get a nice response here using the, um, AI SDK front end — uh, useChat, um, which you may be familiar with — and then, um, you know, kind of a little bit of UI just on top to make it a little, you know, easier on the eyes for the demo. but at the end of the day it's pretty much, um, yeah, just, uh, using primitives that are offered in the AI SDK and, uh, within LangChain. um — so, yeah, we've got — uh, and then, you know, obviously I've hard-coded the input here, so it's a little bit of a hack, but, um, you could — um, useChat will pass in, uh, the messages in the body, so there's, like, a map step as well. um, I could do more there, but I think that would be a bit boring for most folks here.

um, and yeah, like I said, we've got, uh, plans to basically move this beyond just models. um, like, if I were to, uh, let's say, use a conversation chain, and initialize that — I believe it takes — yeah, and then — oops — um, if I were to do this, though, um, it'll just, for now, blast everything back at me in one go, I think. or — yeah, uh, the key is "input" — I forgot the, uh, key there. but, um, we've got plans to get this supported across all of the various chains, and, like, um, enable this kind of nice workflow with, uh, you know, the other things, like memory and retrieval, and, um, the other modules LangChain offers. so, um, yeah, stay tuned for that — it's a big focus for us going forward, and, um, yeah, I'm excited about this, and, uh, I think it's gonna be, you know — um, well, we hope it's gonna be useful to, uh, lots of folks on the JS side of things in particular.

that's awesome. can you — can you talk to us about chains a little bit, in general? like, what is a chain, what's a conversational chain, when would we want to use something like that?

yeah, sure. um, so obviously, uh, you know, in LangChain terminology, models are, um, kind of simple wrappers around, uh, you know, OpenAI — um, there's Anthropic as well, I think, uh, Hugging Face, um, you know, Google Vertex, etc. um, really just very simple input-output — um, you know, we're basically sending OpenAI the string "what is Vercel" and we're getting some output — um, and now it can be streamed. um, you can think of chains as kind of, like, for when you need to do more complex flows. they might involve various inputs into a prompt. um, so — if I were to, um — uh, this conversation chain has that probably built in, but — you could say, um, yeah, a good example here is retrieval: um, there are multiple steps, uh, each of which requires, like, various, uh, chained calls to the LLM, with various pieces of data that may involve, um, output from previous steps. uh, so you use a chain — or, chains are our abstraction for basically these chained calls, and, um, a container for various other aspects, like memory and output parsing, um, etc.
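The chain idea he describes — multiple chained calls where later steps consume earlier outputs, threaded through an object of key/values — can be illustrated with a toy sketch. This mimics the concept only; it is not LangChain's actual Chain class:

```typescript
// Conceptual sketch of a "chain": each step takes the values produced so
// far and returns new ones; the chain threads an object of key/values
// through every step, so later steps can use earlier results. A real
// LangChain chain also carries memory, output parsers, etc.
type Values = Record<string, string>;
type Step = (values: Values) => Promise<Values>;

async function runChain(steps: Step[], input: Values): Promise<Values> {
  let values = input;
  for (const step of steps) {
    // Merge each step's outputs into the running set of values
    values = { ...values, ...(await step(values)) };
  }
  return values;
}
```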

cool. um, we can — we can transition to Q&A, if that's what you had for the demo, unless you wanted to show something else.

um, yeah, I mean, I can get into some of the, uh, more, I guess, functional pieces I had originally planned, um, if that's interesting to folks — or, um, Q&A is great as well — uh, whatever would be most interesting to people.

go for it, that'd be great.

oh, sure, okay. um, yeah, all right, let's, uh, cut that in there. um — take a show of hands: how many people are familiar with OpenAI functions, or have been using them in their, uh, accelerator projects? maybe I can — Derek, I see — okay, so a few folks, but definitely not the whole room. um —

yeah, so, uh, again — chains — uh, sorry, LLMs — are, you know, fantastic at kind of creating unstructured data out of the box. um, maybe ChatGPT really proved that there's a huge amount of interest in, like, sort of, um — it kind of captured the global imagination, right? um, I think as developers, though, um, you know — this whole natural language interface has been kind of a very interesting new frontier for us. um, rather than — you know, generally when you call an API, maybe we expect some JSON back, and then we feed that into the next, like, call, or, um, you know, we're able to render a web page based on something returned from, like, a, um — like, a database, for example. um — and, yeah, that model — uh, you know, I think for a while people were really struggling to figure out — and are still struggling to figure out — how to, like, kind of, sort of, uh, combine these two worlds. um, so this world where, uh, seemingly impossible things are very easy — like, you know, uh, generating plausible-sounding text that's, uh, you know, indistinguishable from a human, or, um, etc. — and where, um, you know, sort of, like, things like writing unit tests are actually, like, very difficult, because, like, LLMs are not deterministic. and, um — I think OpenAI, like, sort of — and, you know, lots of people in the industry — um, sort of — or, in the LLM industry — are starting to realize that, like, hey, it'd be great to, um, you know, provide more ways that developers can sort of combine these — and bridge this gap and, like, combine these two worlds. so —

OpenAI released, uh, functions, uh — I think about a month ago. um, and it's a way of, um, uh, basically returning structured data from some inputs. so one example is, um — I think the marketing around it, like — it's very useful for things like agents, where you can basically say — uh, agents, for those who aren't familiar, is a, um, a common term for, like, uh, wrapping LLMs so they can sort of, uh, decide what they want to do: like, uh, you can prompt the LLM to say, pick an action from this list, and, um, you know, perhaps supply some parameters, and then, um, sort of, like, run it through a reasoning loop like that. so I think, uh, sort of the immediate reaction to functions was, like — oh hey, this is gonna be great for agents. but it's actually got a lot of applications, um — structured data has a lot of applications for, um, sort of, like, general software workflows as well.

so — um, yeah, I think the one I want — uh — kind of — the name of the chain I need right now, uh — "structured" — yep, here, yep. and I also need to add a dependency called z — um, oh, sorry, not z — zod — uh, I hope z isn't a bad package. yes, and then we'll import this as, um — and let me actually make a new route here, and we'll just make it a GET. okay, and, um, yeah, import z from zod. uh — and I'll show you in a minute — um, Zod is a pretty popular, like, uh, validation library, and, um, like, uh — I think they'd call themselves a validation library, but, um, you can do some interesting things around, um, so, like, constructing types.

so, and then, yes — uh, this create-structured-output chain, um, was sort of, like, an experiment we were doing — and, um — uh, not an experiment, sorry — um, it's just a chain we've, uh, kind of come up with that basically makes it easier to use these, um — our OpenAI functions support — um, within a chain context, to basically extract, or, um, structure, output given some input. um, so I'll show you that in a minute — uh, we'll create a new chain here, and, uh, we actually need to define a schema as well.

so I guess, as a concrete example, let's say, um — I think a really simple one — it's, um, a bit well-worn at this point, but, like — sentiment analysis. so we want to, given some input text, um, understand — or have the LLM extract — like, some tone — so, um, like, whether the — I don't know — author was angry or sad or happy. um, and I think we'll — yeah, maybe I'll, uh, set up an object, and that object is going to have a key called tone here — um, the tone of the, um — I guess, input — or, uh, text — let's be very generic.

yeah, so, um — you can think of, uh — you know, I think OpenAI uses JSON Schema directly, and you can certainly pass in JSON Schema with another method here, but, um, I do kind of like the syntax and sort of style of, uh, Zod here. um — and then we will create our model — oops — ChatOpenAI — by the way, yeah, the, uh, model — the model here will pull in, um, the OpenAI key environment variable. so, yeah — what were the arguments again? um, yeah, the Zod schema, and then, um — because I think it'll actually create an LLM for us — so, and then, um, yeah — I think that should do it. what am I doing wrong here? oh, yes — I need to give it a prompt too.

um — yeah, rather than getting too far into prompts — uh, why don't I change this — or, yeah, we can do a prompt, sure. um, so const prompt equals new PromptTemplate — um, sorry — and we'll say, um, yeah — "given the following" — let me check this again, I think it might be — did I do this backwards? I think I might have done this backwards, yeah. um, let me check my docs a minute, sorry — I should, yeah, know this off the top of my head, but, um — just so I'm not going, uh, too far off into the weeds here. um — okay, yeah, so I need to give it the prompt, and then — yeah, so I was doing the right thing. um — like, text — and that will take a — yeah. okay, so I've got — so, um, "analyze the sentiment" — and this is our, like, templating syntax, so we can do, um, a variable here, like, um — that's fine, I'll just leave the prompt as is — and, yes, we're back in the green.
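The templating syntax he's using — input variables in curly braces substituted into the prompt string — can be illustrated with a minimal stand-in. LangChain's real PromptTemplate also validates input variables, supports partials, and so on; this sketch only shows the substitution idea:

```typescript
// Minimal stand-in for prompt templating: variables written as {name}
// in the template get substituted from an input object; unknown
// variables are left untouched.
function formatPrompt(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    name in values ? values[name] : match
  );
}
```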

and then let's just try, uh, chain.call — and, um, you'll notice I have to define an input variable here: any input in the prompt here needs to be an input into the chain. so I can do input, and then, um — yeah, you know, uh — I think running this from the web might be a little — well, okay, I'm gonna show you the logs. so, um: "I love Vercel and Next.js". yes — and let's just see what happens, uh, with my demo. um — cool, so it's all loaded, and now if I were to ping, um — oops — uh, slash api slash chat — um — yeah, okay, it won't show up here, but it'll show up in the log, I hope. yeah, a 500, so something's wrong here, but — okay, yeah, so you can see: output, tone, positive. anyway, um — even though there's something going on with the route — I think I need to return something from the handler there.

um — yeah, and if I were to, let's say, add, um, entities or something — or, um, let's see, uh — I need — yeah, that might work. yeah, Zod gives you things like arrays, so I can do, um, z.array — I think this is right. um, and then just, uh — this describe, by the way, gives the LLM more context about, um — you know, I think it'll — it may work without it, um, in this case, but giving it this description, um, gives it just a bit more context on, like, what, um, it's looking for, exactly.

uh, actually, on that, Jacob — there's — there's a couple questions in chat on — on that schema. do you mind —

oh — yeah, I'm sorry, I've, uh — let's see — absolutely, fine.

okay, so we've got, um — there's the OpenAI function, and then, actually, on the LangChain side, um, dynamic tools — are they for agents? um, OpenAI functions, um — I believe, uh — I believe you can pass in a dynamic tool, um, into — we have a way to pass them in and, like, sort of create the — uh, basically a way to translate those into OpenAI functions — but, um, this specific chain, I don't believe, has that. um — so, yeah — anyway, a dynamic tool, uh, is for agents specifically, and if you use the, uh — we have an agent that's specifically designed to work with, uh, OpenAI functions —

— that will take advantage of that, like, kind of mapping, um — between, like, a — uh, it'll, like, serialize the dynamic tool into, like, an OpenAI function, and then, um, our agent framework will call the, um, dynamic tool's, like, uh, function with, um — with the parameters returned from OpenAI. uh — that wasn't a great explanation, but, um, I can go into that more if that's — or maybe we can talk after.

um — so, I guess, yeah — the OpenAI function would be the, um — like — model being used, uh, that's returning structured data — yeah, that's returning — in this, uh — that's returning structured data — um, while a dynamic tool would be, like — the way you would describe the input — or, the dynamic tool, like, wraps up the schema of the, um — of, like, what you want, and then passes it to OpenAI functions — or the OpenAI functions model, um, more properly.

um, cool — uh, yeah, so that's great. and, um — uh, no, it currently won't throw an error if the output doesn't semantically match the schema definition — um, that would be really cool, though. um, I think it would require, like, quite a — excuse me, um — you can do — oh, okay, yeah, actually, um, you can do enums too — uh, I don't remember the syntax off the top of my head, but, um, yes, you can do enums — uh, so you would, um — you'd have a failure, uh, if some output were not matching, um, one of the enum entries, that kind of thing. cool — uh, did I get everything? I think I did. — you did, thanks. yeah, awesome.

so, yeah — I guess, uh, you could think of what we're doing here, with passing a schema, as, like, sort of a quick-and-dirty way of, um, constructing, like, a simple dynamic tool without the actual — uh — so, I guess, yeah, for those who aren't familiar, tools, um, are our LangChain abstraction for agents. um, you can basically — it's — I think I mentioned before — you can give, uh, the LLM, like, a list of actions, ask it to pick one, and then ask it for parameters to populate — um, you know, perhaps, like — uh, yeah, you can give it actions, um, and, uh, like, parameters that you ask it to populate, and then, um, you can use those populated parameters and the chosen action to call a function, essentially. so, um — and then you can, you know, pass back the output of that function and ask it to, um, you know, call another function, perhaps — and so on, like, sort of reason through steps until you get to some output.

um, so what you can think of here is, like, a very — yeah — quick-and-dirty way of, like, creating a schema that's just populating, like, parameters. um, so we're just basically asking the OpenAI functions model to, um, call a function — and populate parameters given the schema — and then, um, just, like, return that. and in this case it's actually very handy, because, um, you know, even though we're not — necessarily calling a function afterwards, um — yeah, we're just going to use this, like, extracted output in some other part of the workflow.

and, uh — William, right, yeah, that's actually a — yeah — "OpenAI functions" is poorly named, I agree. um, I think there's a lot of marketing for agents and, like, sort of that framework, but, um, yeah — like, "tools" as a name for functions, exactly, yes. so —

great — um, so, yeah, I think you guys more or less get the point, but, um — "the entities mentioned in the text" would be the kind of — we'd expect this to return Vercel and Next.js. and I think if I were to just call it in, uh, Postman — I'll get another error, but — yeah, cool, so you can see: output, tone positive, entities Vercel and, uh, Next.js. so that's great. and then, um, yeah — I'm using .call here, uh — so if I were to, like, um — by the way, um — the way chains work, um —

uh, chains take an object of, like, key-values as input — so in this case it'll be the input required by the prompt, which is a key called input — and then the output key is actually also specifiable on this specific chain. uh — chains can also return multiple values, that's why we — yeah, chains can return multiple values. um, in this case, the specific chain that I'm creating here has a default output key of "output", but it's possible to change this to, for example, um — I don't know — uh, "response" or something. um — and then if I log this, you would see that — actually, uh, let's change this back — yeah, you'd see now that the key is "response". um — so I think there's — there's some confusion sometimes when folks get started, where it's, um — you know, why is it "response" here, why is it "output" here — um, and the answer, unfortunately, is it sort of depends chain to chain, um, and you can override it in a lot of cases. um — and also, um, there's a convenience method called run as well, which — uh, specifically for chains that take one input and one output — um, basically just calls call — yeah, and extracts the value out of the key, so you don't have to worry about it. um — so if I run it again, I would just see the, um, direct output here, with no — no wrapping — compared to this, where it's wrapped in an object with key "response".
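The call-versus-run distinction he describes can be mimicked in a toy sketch. This mirrors the behavior only; it is not LangChain's implementation:

```typescript
// Toy illustration of chain .call() vs .run(): call() returns an object
// keyed by the chain's output key (which varies chain to chain and can
// often be overridden), while run() — for single-input, single-output
// chains — unwraps that object and hands back the value directly.
class ToyChain {
  constructor(private outputKey: string = "output") {}

  async call(values: Record<string, string>): Promise<Record<string, string>> {
    // Stand-in for the real LLM work: echo the input, wrapped under the key
    return { [this.outputKey]: `echo: ${values.input}` };
  }

  async run(input: string): Promise<string> {
    // run() is just call() plus extracting the single output key
    const result = await this.call({ input });
    return result[this.outputKey];
  }
}
```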

um — so, yeah, you can imagine — uh, you know, it's a very trivial example here, where I'm just passing in, like, some simple text — "I love Vercel and Next.js" — but, um, I think one use case I think gets people kind of excited is, uh, perhaps, like, extracting structured data from documents. uh, chat-with-documents is a huge use case for LLMs, and, um, you could, you know — given, for example, like, a — a contract — um, like, read the, um, PDF of a contract — you could, um, sort of parse the — uh, use one of our loaders to get the, um, raw text from the PDF, and then, you know, extract, like, stakeholders, or, um, you know, like, total value, using something like this. um — you know, or, like — um, let's say you have, like, a list of, like, sales transactions for, um, Gap — or, I don't know why I said Gap, I hate Gap — um, yeah, some other brand or something — um, you could say, like, okay, um — give me a list of all the products that were contained in these sales transactions. and, like, this is quite flexible, right? I mean, I'm passing in a string here, but this could easily be, like, a stringified JSON object itself, or, like, a list of objects — um, so you can use, like, more narrowed-down data, like, more traditional context, as well.

um — yeah, uh, just really powerful stuff, and something that I've personally found, like, very useful, um, even before OpenAI functions came out. uh, we had our own sort of, um, approach pre-functions — um, you were able to — you could try to prompt the LLM, like, oh, please, please return JSON, and, like, please make it this format, and do all this fun stuff — and, uh, it's just 20 times easier now. so, um — even back in the old days, I, uh — I still found this quite useful — this kind of workflow very useful — um, and now it's just even more powerful. um — so, yeah. um, any questions about that, or anything I could dive deeper into?

that's awesome, thank you so much, Jacob. um, we can just transition to Q&A, if that sounds good to you.

yeah, sounds excellent.

awesome. so, as usual, if anybody has any questions, please, uh, put them in the chat or raise your hand. I see Eric already has his hand up, so go ahead, Eric.

oh — you're Eric there, right? — yep, that's me. what's up, Jacob, how you doing? we, uh — we actually met, um, in 2016, uh, at a conference in Poland, and then, um, yeah, I saw his name in the, uh — yeah.

yeah, crazy. how far we've come — or fallen — one of the two.

um, yeah, so, two questions — I was, like, right in the middle of typing one in the chat when you said, well, let's just open it up, so —

um — yeah, so — the Zod thing, that's super cool. um, it reminds me a little bit of, uh, some of the stuff that Guardrails, um, does. is there an integration with Guardrails, or any of that? I mean, like, uh, anything that goes beyond what Zod, uh, um, would do?

um, I'm actually not super familiar with Guardrails. um, I think if you go to the Python side there's an integration — um, they integrate with absolutely everything, so it wouldn't surprise me, but, um — I'm looking it up now — Guardrails AI, I'm guessing? yeah — "a Python package that lets you add structure, type, and quality guarantees" — oh, this is cool, yeah. um, that's super cool. uh — I'm sure it's on — I'm sure it's on Python, or someone's added it — uh, JS, nada, yeah.

swyx, who's in the program, uh, did a, uh — an interview with the person who's developing that. it's a pretty robust thing, it's pretty neat.

um, yeah — so the other question I was going to ask is about, uh, retrieval. so you mentioned that a couple of times. um, are there any, um — are there any APIs that are, like, really neat, that are sort of not — not as obvious? I mean, you know, something — something to, uh, draw attention to — you know, that's been a theme a little bit, is, uh, uh, you know, augmented retrieval and some other things like that — so whether that's prompt expansion, or — or, um, yeah, using vector DBs, or something along those lines. I mean, there's a lot of — a lot of — there's a lot of space there.

yeah, there's a lot out there.

um — yeah, you know, um — a lot — there's a lot happening on the Python side; they're a bit ahead of us, um, in terms of, like, you know, history, and then also, um — like, I think a lot of the more academic, sort of experimental, uh, pieces tend to find their way into Python first, and then, you know, we try to take the best we can from there and bring it to JS. um — I think, uh, there's some interesting things — um, my colleague, uh, Lance Martin, um — has been doing work with, uh, context-aware splitting, where you can basically — oh, actually, here's a good one, um — yeah, so it sort of builds on the theme, um, with the structured output parsing, but —

um, we have, like, one pretty common use case of this. uh, so, for retrieval with vector stores, you split up your documents, right? you can't necessarily, uh — for longer documents — retrieve the whole thing in one go: number one, because, you know, you have context limitations, and you generally want to keep, um, prompts as small as possible, um, if possible, to, like, reduce distraction. um — so you can split documents, but then, um, you know, you also, as a result, lose context from, like, other parts of it, right? you, um — you might want to keep track of, like — um, like, what the original title was, or, like — you know, maybe later on you only want to query, um, over, like, specific — um — uh, sorry — documents from, like, a specific text, or, like, a series of texts — or, like, documents, um, maybe, that include sales data —

um as a whole so like you can imagine

So you can imagine you get an input document. More abstractly, say you get one that mentions whales in one part. Once it's split, all mentions of whales are lost from the other chunks you split out, but later on you may want to reference and query over those chunks, because some other part of the document mentioned whales, and there are hints you can give it.

So we shipped, a couple of weeks ago, a way to basically tag incoming documents: a pipeline that runs before splitting. You can use this sort of extraction technique to tag incoming documents for things like tone, or what they mention overall, and then once you split your documents you keep that metadata and can filter on it afterwards. So that's one thing.
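A minimal sketch of that tag-then-split flow, in plain TypeScript. The `tagDocument` step here is a hypothetical stand-in for the LLM-based extraction call the speaker describes, and the fixed-size splitter is deliberately naive; the point is just that document-level tags survive the split and can be filtered on afterwards:

```typescript
// A document with attached metadata, loosely modeled on LangChain's Document shape.
interface Doc {
  pageContent: string;
  metadata: Record<string, string>;
}

// Hypothetical tagging step: in practice this would be an LLM extraction call
// that labels the whole document (tone, topics, source) BEFORE splitting.
function tagDocument(doc: Doc, tags: Record<string, string>): Doc {
  return { ...doc, metadata: { ...doc.metadata, ...tags } };
}

// Naive fixed-size splitter; each chunk inherits the parent's metadata,
// so the document-level tags survive the split.
function splitDocument(doc: Doc, chunkSize: number): Doc[] {
  const chunks: Doc[] = [];
  for (let i = 0; i < doc.pageContent.length; i += chunkSize) {
    chunks.push({
      pageContent: doc.pageContent.slice(i, i + chunkSize),
      metadata: { ...doc.metadata },
    });
  }
  return chunks;
}

// Later, at query time, filter chunks on the inherited metadata.
function filterChunks(chunks: Doc[], key: string, value: string): Doc[] {
  return chunks.filter((c) => c.metadata[key] === value);
}

const report = tagDocument(
  { pageContent: "Q3 sales rose. Whale sightings dipped. Forecast steady.", metadata: {} },
  { topic: "sales", source: "q3-report" }
);
const chunks = splitDocument(report, 20);
const salesChunks = filterChunks(chunks, "topic", "sales");
```

Even a chunk that never mentions "sales" in its text still matches the `topic: "sales"` filter, which is exactly the whales problem described above.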

The thing I was about to mention earlier, which hasn't made its way to JS yet, is context-aware splitting: a concept where you keep more context within the individual chunks you generate.

Let's see, what else... there have been some interesting things, and I don't have great examples for it yet, but you can sort of embed headers. Say you don't know exactly how to, or can't, do a metadata filter on your query, but you still want to keep track of certain aspects: when you split documents, you can put additional metadata information directly in each chunk as you split it. It adds to each chunk's size and eats into your general context window, but you can also see that a chunk came from a specific original text, and if you ask the model something like what the original source was, it can give you better answers that way too.
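A rough sketch of that header-embedding idea: prepend a source header to each chunk at split time, trading a few extra tokens per chunk for inline provenance. The bracketed header format here is just an illustrative choice, not a LangChain convention:

```typescript
// Prepend a source header to each chunk so the provenance travels
// inside the text itself, not just in out-of-band metadata.
function splitWithHeaders(
  text: string,
  source: string,
  chunkSize: number
): string[] {
  const header = `[source: ${source}]\n`;
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(header + text.slice(i, i + chunkSize));
  }
  return chunks;
}

const chunks = splitWithHeaders(
  "Blue whales are the largest animals known to have existed.",
  "whales.txt",
  30
);
// Every chunk now carries its origin inline, so the model can answer
// "what was the original source?" from the chunk text alone.
```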

So it depends what you're trying to do, but yeah, there's a whole jungle of retrieval things out there now, for sure.

Cool, thank you.

Totally.

Looks like we have a question in the chat as well. Or two questions.

Sure.

Let me bring it up here... there we go, okay.

What about LangSmith, and inviting the rest of the dev team?

Definitely, ping me. We have a pretty long waitlist at the moment, prioritizing open source contributors and folks, but I can see what I can do to speed it up. I can send my email, and I'd also love to send the GitHub repo with the streaming stuff, so attendees can get in touch with me there.

Yeah, for sure, I can send your info out to everybody. What should I send: an email, or Twitter, or whatever you prefer?

Jacob at langchain dot dev, or @hacubu on Twitter. I made that handle when I was 12, so don't judge me too harshly.

Awesome. So please ping me. We are rolling it out, and it's gotten quite a bit of attention. It's been really exciting, I'll say that, and we'd love to get more people on over the coming weeks.

And then, yeah: "given the larger context windows and the cheap LLMs, GPT-3.5..." Okay. So there's actually been some really interesting work here. Anthropic, for example, has their 100K window, and people question whether you even need retrieval anymore, whether in a year everyone will just pass entire books into the LLM. I think the answer long term could candidly be yes, if these things get good enough and fast enough that it actually makes sense.

Right now, though: someone published an interesting paper, it's on Twitter, that did a more in-depth study. I'll try to find a link to it. Basically, these really large context window models tend to do really well with information at the beginning of the input, and then at the end, and the middle drops off pretty substantially.

So I'm sure that'll get figured out. And I think OpenAI has gone on record saying that one million token context windows... oh, Darlin found it. Yeah, okay, this is it. So, one billion... sorry, one million token context windows could be in the near future. I think it was on their near-term roadmap; I saw that as well.

I guess, to answer your question: I would still try to keep your chunks small, or smallish, even with these larger sizes, just because it'll save you tokens. Cost is an issue at the end of the day once you hit scale, so that's important, and quality-wise, you're going to have drop-offs with longer prompts no matter what.

That's been a lot of people's experience with agents too. After you get a few steps in, or with something like AutoGPT, it just becomes tougher for the model to keep paying attention.

And back to the topic of retrieval: there are a couple of approaches you can take to process retrieved documents as well. It doesn't have to be just stuffing them into the prompt; there are things you can do to winnow down and summarize retrieved chunks into smaller, more manageable, focused prompts. So I'd encourage folks to look into that too, if you're looking for ways to boost performance there.
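One simple version of that winnowing step, sketched below. The summarization piece would be another LLM call, but even a plain token-budget cutoff over relevance-ranked chunks keeps the final prompt small. The four-characters-per-token ratio is a rough heuristic, not an exact tokenizer, and the `ScoredChunk` shape is an assumption for illustration:

```typescript
// Retrieved chunk with a relevance score from the vector store.
interface ScoredChunk {
  text: string;
  score: number;
}

// Rough token estimate: ~4 characters per token for English text.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Take chunks in relevance order until the token budget is spent,
// instead of stuffing every retrieved chunk into the prompt.
// Using `continue` (not `break`) lets smaller lower-ranked chunks
// still fit after a large chunk is skipped.
function selectWithinBudget(
  chunks: ScoredChunk[],
  budgetTokens: number
): string[] {
  const ranked = [...chunks].sort((a, b) => b.score - a.score);
  const picked: string[] = [];
  let used = 0;
  for (const chunk of ranked) {
    const cost = estimateTokens(chunk.text);
    if (used + cost > budgetTokens) continue;
    used += cost;
    picked.push(chunk.text);
  }
  return picked;
}

const retrieved: ScoredChunk[] = [
  { text: "a".repeat(40), score: 0.9 }, // ~10 tokens
  { text: "b".repeat(80), score: 0.8 }, // ~20 tokens, too big for the budget
  { text: "c".repeat(40), score: 0.7 }, // ~10 tokens
];
const selected = selectWithinBudget(retrieved, 25);
```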

Do you have any projects or anything you've seen that you could post? That'd be super awesome. It's always hard to filter the information.

There's a lot going on, yeah.

Honestly, Twitter's a big one for me, weirdly. I never would have called myself a big Twitter guy before, until about a year ago, and now it's probably the most-opened app on my phone. Or sorry, X, x.com, I don't know.

Yeah, it's kind of staggering. It's hard to keep up with the pace, and I've just found good folks to follow who seem to be thoughtful and up to date on the more academic side; then, me being on more of the applied side, I try to process that as best I can.

I think that probably applies to most folks in the audience here.

Nice. Any recommended vector DB?

To be honest, I've not done a ton of benchmarking or anything; I think they all do a pretty good job.

um

I know when I was doing some contracting in the ML space, I'd often just literally use hnswlib, save my vectors to disk, and load them back for small projects. As your corpus gets big you can run into problems with that, but just start with a really simple one. I've also been using PGVector on Supabase a lot for side projects. Once things get bigger and bigger, with a lot of documents, I think some of the dedicated options make more sense, but I'd encourage folks to just start simple and not lock in too early.
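In that spirit of starting simple, a toy in-memory store with brute-force cosine similarity covers a surprising amount of ground before a dedicated vector DB is worth it. The embeddings here are supplied directly as small hand-picked vectors; in a real app they'd come from an embedding model:

```typescript
// Minimal in-memory vector store: brute-force cosine similarity search.
interface Entry {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0,
    normA = 0,
    normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

class InMemoryVectorStore {
  private entries: Entry[] = [];

  add(text: string, embedding: number[]): void {
    this.entries.push({ text, embedding });
  }

  // Linear scan over every entry; fine for small corpora. The point
  // at which this gets slow is when a real vector DB starts to pay off.
  search(query: number[], k: number): string[] {
    return [...this.entries]
      .sort(
        (a, b) =>
          cosineSimilarity(b.embedding, query) -
          cosineSimilarity(a.embedding, query)
      )
      .slice(0, k)
      .map((e) => e.text);
  }
}

const store = new InMemoryVectorStore();
store.add("whales", [1, 0, 0]);
store.add("sales", [0, 1, 0]);
store.add("orcas", [0.9, 0.1, 0]);
const results = store.search([1, 0, 0], 2);
```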

I guess one thing Vectara does that's cool is that they have their own embeddings, so you don't have to worry about that, which can be kind of fun. Well, not fun, but one less thing to pay OpenAI for, which is pretty nice. But as far as differentiating between the others, I'm not an expert.

Great. Well, I know we're five minutes over; there's one more question in the chat that we can answer, and then we can wrap up.

That's all right, okay.

On whether it supports streaming: yeah, retrievers don't generally... none of the retrievers support streaming at the moment. Now that we're adding streaming kind of all over the place, I think it would make sense eventually, or could make sense eventually. But yeah, I'll look into that. Please ping me after and we can chat about it.

Awesome. All right, with that I think we can wrap up. Thank you so much, Jacob, for coming; this was great.

Thanks for having me, this was a lot of fun. And best of luck with the accelerator projects; I'm excited to see the next unicorn come out of here. We'll get to say, "Hey, I was in a room with Eric there," or in a Zoom call, "and now he's partnered with Richard Branson on Necker Island" or something.

Thank you so much, man. All right, have a good one, everybody.
