NEW Tanstack AI Library is Amazing!
By Web Dev Simplified
Summary
Topics Covered
- AI Chat in 100 Lines
- Tools Run Anywhere
- Server Tools Query Data
- Client Tools Update State
- Tanstack Outshines Vercel
Full Transcript
Tanstack just launched a brand new AI library that makes working with AIs in your application incredibly easy, no matter what framework or AI library you're planning on using. In this video, I'm going to be breaking down everything you need to know about this new library, as well as showing you some of the unique features, such as the ability to create AI tools that run not only on the server, but also on the client. Welcome back to Web Dev Simplified. My name is Kyle and my job is to simplify
the web for you so you can start building your dream project sooner. And getting started with this library is actually incredibly easy. All we need to do is install Tanstack AI, Tanstack AI React if you're planning on using React, and Tanstack AI OpenAI or whatever AI tool you're using. In our case, we're using Gemini for this project, so I installed Tanstack AI Gemini, but whichever one you're planning on using, just make sure you install the proper libraries for that. Once you have that done, all
you need to do is set it up on the server as well as set it up on the client. And that is everything you need to get a full chat application working. If you take a look at our actual code here, you can see I can type in a message down here, click send, it's going to send that off, and then the assistant, which is the AI, is going to respond with some type of response for me. And you can hook this up properly with different tools. For example, I can say, how many to-dos do I have? And if I
do that, it's going to go through, it's going to call the different tools that I've created. And you can see it's saying that I have 200 different to-dos stored inside my database. I can even make it interact with the client. For example, I have this counter here, which is just a client-side thing stored in local storage. When I refresh, you can see it saves it. And then I could say something like update my local count to the number of to-dos in my database.
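That request, where a server tool reads the database and its result feeds a client tool that writes local state, can be modeled as a tiny dispatch chain. This is a self-contained sketch of the idea only; every name in it is hypothetical, and in the real library the AI model decides which tools to call rather than a hand-written chain:

```typescript
// Hypothetical model of tools that run in different places.
// In Tanstack AI the model picks the tools; here we chain them by hand.
type Tool = {
  name: string;
  location: "server" | "client";
  run: (input: number) => number;
};

const serverState = { todoCount: 200 }; // stands in for the database
const clientState = { count: 0 };       // stands in for localStorage

const tools: Tool[] = [
  { name: "getTodoCount", location: "server", run: () => serverState.todoCount },
  { name: "updateCounter", location: "client", run: (n) => (clientState.count = n) },
];

// "Update my local count to the number of to-dos in my database":
// call the server tool, then pipe its output into the client tool.
function runChain(names: string[]): number {
  return names.reduce(
    (acc, name) => tools.find((t) => t.name === name)!.run(acc),
    0
  );
}

runChain(["getTodoCount", "updateCounter"]);
```

The point is only that each tool declares *where* it runs, while the conversation flows through both sides.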
And it should go through and actually query my database, find out that there's 200 to-dos in there, and then update this count variable to that value of 200. Now, I just gave my page a quick refresh, and you can see that count has been updated to 200. You'll notice the assistant didn't return any message to me. That's specifically because I didn't really tell it to return any message; I just told it to do something, and I'm not logging out all
the different AI calls that it's making. But if I said something like, "Give me the number of to-dos in my database," it's going to go through and get that database number. And if I look inside this Tanstack panel here, we actually have different dev tools. I'll pop this out to make it a little bit easier to see. But you can see here's the request I'm sending, here's the response the AI is giving me, and also here's the stuff it's passing along to
the different tools that I have. So, we have a lot of things that we can look into to figure out what's going on. And that's kind of a demonstration of what this is at a high level. So now I want to dive into the code to understand how this actually works. In my code I essentially have three or four main files. This first file is just my route itself. You can see I have this chat component that is everything you see on the screen right now. And then I have this counter component which is just
this local storage counter down here, purely for testing how client-based tools and server-based tools work. Inside of my chat.tsx here, we really only have one small section we care about to get started. And that small section is a simple piece of state for my input that's just hooked up to this input right here. Really basic React stuff. We then have this useChat hook. This useChat hook comes from that Tanstack AI React. That's where essentially all of
the code for my chat-related stuff is being handled behind the scenes for me, and it's giving me the important things I care about, such as my loading state. I get a sendMessage function with which I can send a message to my AI, and then a messages array, which is just all the different messages between me and the AI. So you can see I get my message that I sent, the message the assistant sent, and so on. Every single message is stored in this
array. I then have the ability to pass along a connection, which uses this fetchServerSentEvents function, again coming from that AI React library. All this takes is a path to a fetch-based URL inside my project where it's going to post all these different chat messages for me. I call this /api/chat. And if I go to that specific file where I have that post request being handled, you can see inside this post request, I first get my AI API key, make sure it exists, and
then I'm just getting all my messages and conversation ID from that request. So here it's going to pass up essentially the data of messages and conversation ID that comes directly from this useChat. So I get that information right here, and then I call this chat function. And if we look, this chat function comes from that Tanstack AI library. So really, I'm calling useChat on the client to hook up everything, I'm passing it along the URL to where my chat-based stuff is on the server, and
then on the server I'm just getting the information being sent to me automatically through that useChat hook, and then I'm calling a chat function which essentially does all the AI magic behind the scenes. I tell it what adapter I want to use, essentially what AI you want to use. In our case we're using Gemini, but you could use OpenAI if you wanted, and you would just type in OpenAI here just like that. And now we're using OpenAI, and of course we have different models, so we would need to
pass in the name of a different model, but in our case, we're using Gemini. You then pass it along all your messages and conversation ID, which again come from the client, from that hook that we used right here. So super straightforward. We tell it what model we want. And finally, if you have different tools that you want to pass along, you put those inside here. We'll come to that section in a little bit because it's a bit more advanced. Finally, you just call toStreamResponse, which again is something that comes directly from that AI library in Tanstack. Pass it along the stream that you get from your chat, and that's going to do everything for you. It's going to handle streaming the data down properly. It's going to handle all the different validations and tool calls and everything else you need to do. All you need to do is just call chat and pass its stream along to be streamed down. And then here we're just logging out any error that we get inside of our application.
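Under the hood, the streaming between chat on the server and useChat on the client rides on server-sent events over a fetch connection. Here's a self-contained sketch of just that wire format; the real Tanstack helpers do much more (tool calls, validation, message framing), and the `delta` field name here is made up for illustration:

```typescript
// Encode a sequence of token chunks as server-sent events (SSE),
// the format an SSE-streaming chat endpoint writes to the response body.
function toSseBody(chunks: string[]): string {
  return chunks
    .map((c) => `data: ${JSON.stringify({ delta: c })}\n\n`)
    .join("");
}

// Decode an SSE body back into token chunks, as a client would.
function fromSseBody(body: string): string[] {
  return body
    .split("\n\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)).delta);
}

const body = toSseBody(["Hel", "lo", "!"]);
const tokens = fromSseBody(body);
```

This is why the client just needs a URL: everything else is an SSE stream the library parses for you.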
So overall, the process of this code is very simple. We call useChat somewhere on the client, pass it a URL, and inside that URL location we call chat, passing along the stuff that we get from the client as well as any additional tools, what model we want to use, and so on. So getting this to work is really quite simple. Also inside of our code, you'll notice that we have quite a bit of HTML down here. This is all just for styling. You can see that we're just looping through our messages and rendering out some code for each message. And then we have a really simple form right here that's using this loading state to disable our input, and it's just setting that input value. And whenever we actually call handleSubmit, you can just see it's calling sendMessage, which comes directly from useChat. So really this is quite simple. Every time we submit a message, it calls sendMessage, and then we render out all of our different messages in this array here. So getting
a really basic AI chatbot set up is super simple. I mean, it's literally like 100 lines of code or so, and you can copy this directly from the documentation. Now, if you want to get tools implemented, it's a little bit more involved, but really not that difficult. So, I'm going to first look at a simple tool, which is our get-to-dos tool. Every single tool uses this tool definition function. Again, this comes directly from that library up here, this Tanstack AI library. And this
tool definition takes quite a few different properties, as you can see here. The main ones you really want to focus on: you need a description telling the AI what this is, you need a name so that the AI knows what this thing is called, you need an input schema essentially saying what data is being passed in, and an output schema specifying what data is being returned by this tool. Now, some of these things are optional. For example, I could remove this output schema if I
don't have any output, but most of the time you're going to have some type of input and some type of output. You can also pass along another property, needsApproval, and set that to true. That essentially says, hey, this is some type of dangerous operation. Maybe it deletes data from your database or spends money, for example, and you want to make sure that this is approved by someone on the client. By passing needsApproval along, it'll actually send back a message to the client, essentially asking the user, hey, are you okay with this thing happening? Do you approve this? You can approve it, and then it'll go back to the server to finish running whatever this tool is that is doing that thing for you. So, this is a tool definition that you create. And when you create a tool definition, that essentially just gives the AI some type of capability, but it doesn't know how to execute it yet. This just tells it what it can do; it doesn't tell it how to do it yet.
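Conceptually, then, a tool definition is just metadata plus schemas. The sketch below models that shape with a plain validator function standing in for the Zod or JSON schema the real library expects; the property names mirror the description above, not necessarily the library's exact API:

```typescript
// A minimal stand-in for what a tool definition carries.
// Real Tanstack AI definitions use Zod/JSON-schema objects for the schemas.
type ToolDefinition<In> = {
  name: string;
  description: string;
  parseInput: (raw: unknown) => In; // stands in for inputSchema
  describeOutput: string;           // stands in for outputSchema
  needsApproval?: boolean;          // pause for user approval on the client
};

const updateCounterDefinition: ToolDefinition<{ count: number }> = {
  name: "updateCounter",
  description: "Set the local counter to a specific value",
  parseInput: (raw) => {
    const obj = raw as { count?: unknown };
    if (typeof obj?.count !== "number") throw new Error("count must be a number");
    return { count: obj.count };
  },
  describeOutput: "{ success: boolean }",
  needsApproval: false,
};

const parsed = updateCounterDefinition.parseInput({ count: 200 });
```

Note that nothing here says *how* the tool runs; that's exactly the gap the implementation fills next.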
That's where we need to create an implementation. And you can create an implementation to run either on the server or on the client. In our case, we have a server-based implementation because this is going to be pulling data from a database or something else. In our case, we're just using the JSON Placeholder API for this, but you could use a database or anything else. All you want to do is just take your definition, call the server function, and this is going to get
passed in whatever your input schema is. I'm using Zod in my case, but you can use whatever you want, whatever makes sense for you, like JSON schema for example. But you can see this is an object with a query as a string. And you can see right here, I'm getting that exact input passed along, and we get full type safety. You can see here the query is a string or undefined, because it's optional. You can then see I'm taking that query, appending it onto a URL, fetching that data, and returning it back to the user. So now my AI has all those different to-dos that are coming from this JSON Placeholder API. And I can even get it for a specific query. For example, I could say, get me all the to-dos with the title of the... something like that. And it'll tell me how many to-dos have that specific title. You can see it found 11 to-dos with that title, and it's just listing them all out for me.
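In other words, a server tool implementation is: validated input in, fetch and filter, typed result out. Here's a self-contained sketch with local data standing in for the JSON Placeholder fetch; the function name and query field are illustrative, not the video's exact code:

```typescript
type Todo = { id: number; title: string; completed: boolean };

// Stand-in for the JSON Placeholder response the video fetches.
const todos: Todo[] = [
  { id: 1, title: "delectus aut autem", completed: false },
  { id: 2, title: "buy milk", completed: true },
  { id: 3, title: "delectus omnis", completed: false },
];

// The shape of a server tool handler: validated input in, typed result out.
// In the real implementation this body would fetch from the JSON Placeholder
// API (or your database) using the query from the input schema.
function getTodos(input: { query?: string }): Todo[] {
  const q = input.query;
  if (!q) return todos;
  return todos.filter((t) => t.title.includes(q));
}
```

The type safety described above comes from the input schema: the handler only ever sees `{ query?: string }`.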
Now, I don't have any formatting or markdown stuff showing up, so it's quite ugly looking, but you can see it's getting me all that information and using that tool specifically, which is the important thing to understand. Now, in order to use a tool on the client, it's a little bit more involved, but it's essentially the exact same process. We first need to create a definition and then we need to create essentially a client version instead of the server
version. Right now, we're using this server version. And when I pass my tools along, you can see in here I'm passing that get-to-dos tool, which is what I get by calling that server function. But if you want to have a tool run on the client and not the server, then all you need to do is just pass along the definition. You can see I'm just passing along the update counter tool definition. And this update counter tool definition is just the definition itself. It has a name, it has a
description, it has some input, and it has some output. You can see it's very basic: it just has a count that it passes in, and it returns a success of true or false. And I pass that along on the server side of things. All of this is on the server, but I need to create an implementation for that on the client. So what I can do is go into my chat section, and you can see here I've created an implementation for my update counter tool that is based on that definition, but it calls the client function instead of the server function. All this means is that this code now runs on the client instead of on the server, which is a great way for you to interact with different things on the client based on responses you get from the server from your AI. So here you can see I'm just setting that local storage variable of counter to whatever that count variable is, and I'm returning a success of true every single time.
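The client implementation follows the same pattern, just with browser state instead of a database. A sketch with an injectable in-memory store standing in for localStorage so it runs anywhere; in the browser you would pass window.localStorage instead (names are illustrative):

```typescript
// Minimal key-value store matching the slice of the localStorage API we use.
type StorageLike = {
  getItem(k: string): string | null;
  setItem(k: string, v: string): void;
};

function makeMemoryStorage(): StorageLike {
  const data = new Map<string, string>();
  return {
    getItem: (k) => data.get(k) ?? null,
    setItem: (k, v) => {
      data.set(k, v);
    },
  };
}

// The client tool handler: write the count, report success.
function updateCounter(
  input: { count: number },
  storage: StorageLike
): { success: boolean } {
  storage.setItem("counter", String(input.count));
  return { success: true };
}

const storage = makeMemoryStorage();
const result = updateCounter({ count: 200 }, storage);
```

Injecting the storage is a convenience for testing; the shape of the handler (validated input in, small typed result out) is the same as the server tool's.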
Then, when I create my useChat right here, all I do is call this client tools function, which comes from Tanstack again, and I just make sure I pass it along all the different tools, which in my case is just this one single tool. Now, anytime that I'm using something and I tell it to update my local count, it's going to make sure it calls this tool and updates that value inside local storage for me. And if I look at my counter component, you can see here it's using this useLocalStorage hook that just uses this counter variable and gets the information from it. So you can see it's rendering whatever that count is from local storage. So with this simple AI library and just maybe 100 to 200 lines of code, I have a fully functional chatbot. I have the ability to run code on my server based on different tools. I have the ability to run code on my client using different tools. And all of that is incredibly easy to set up. You can
see the actual chat-related code is this small section right here: the ability to send a message, this update tool, and then essentially this entire file. But overall, that's like 200 lines of code, and you have a full, functioning AI chat application set up. And this is just the alpha version; we're essentially at version zero. This just released very recently. And as more and more features get added, I can see this becoming a great replacement for the
Vercel version of essentially the exact same thing. I was looking through the blog article for this, and one of the big things that they talk about wanting to add is essentially some headless UIs for all of this, to make it so you don't have to write as much of this UI boilerplate code yourself. It's going to have a headless version that just integrates perfectly. I think that's going to be one of the biggest things that will improve this library. And I know that Tanstack as a company
and as a product has a much better track record than someone like Vercel when it comes to creating really good, high-quality software. So, I'm really interested to see where this project goes in the future. Now, if you're interested in Tanstack in general, I highly recommend checking out my Tanstack Start crash course. It's going to be linked right over here. I also have tons of other Tanstack videos coming in the future. I will link some of them right over here as well. So,
anything Tanstack related, you can find right there. With that said, thank you very much for watching and have a good day.