Building AI Applications with the Laravel AI SDK

By Laravel

Summary

Topics Covered

  • AI Demands First-Party Framework Integration
  • Laravel Conventions Excel With LLMs
  • Agents Encapsulate Reusable AI Logic
  • Tools Supercharge LLMs Beyond Text
  • Structured Output Enables Reliable Parsing

Full Transcript

And we are live. Hello everyone. Good to

see you all. Uh we have a very special guest today. We have the Taylor Otwell.


Uh Taylor, how are you doing this morning?

>> I'm doing good. I'm rocking and rolling.

How are you?

>> I'm doing pretty good. It was a it was a good weekend for me. Hopefully everyone

in the chat uh I know we have already had quite a bit of people um in the chat already. Uh so a lot of people waiting.


So if you are here and you are uh ready to see what the AI SDK has in store for us and also just to hear Taylor's thoughts on the AI SDK, why he built it.

I'm excited for that. Personally, I

haven't gotten a chance to talk to Taylor yet about the AI SDK specifically. But if you're here, let us

know where you're watching from. And um

I guess Taylor, what's a good question? What's something that's

been on your mind like outside of work?

Like maybe a TV show, something like that? Oh man, I don't even know.


Everything's been a blur since I did [clears throat] the, uh, AI SDK, and then I was, um, in India, in Dubai, for Laracon India. So like I feel like


>> Yeah. I feel like this is like maybe the first week I feel like I'm kind of getting back in the groove. Um

>> there you go.

>> Yeah. So I don't know. I don't even know what's going on in the world. [laughter]

I guess, whenever you travel somewhere, what is like the food or thing that you try to make sure you do?

>> I try to just like have especially if I'm going somewhere like India, um try to have like whatever food that the people there really like. So,

>> okay, >> for example, like the first time I went to India, >> I was at the hotel, and like the room service guy, I got back kind of late to the hotel and he was like, you know, do you want any food? Like, what do you... It

was like almost midnight and he was like I was like just bring me like whatever you would eat at this time. Like

whatever your favorite thing to have would be just like bring that and I'll just try it. Uh so I just try to do stuff like that. Um just so I get a feel for like what people are like, you know, what they're really doing in these

places.

>> I love it. I guess then for everyone watching, I already see a bunch from, uh, all over the place. So, we got, um, Evan from Switzerland, Bakersfield, California, glad to see you here.

Albania, Kuwait, UK. I love just like the uh how how global everyone is.

India, um good day from Pakistan, I guess. Everyone, what time is it for you

right now? I always love to see that.


And then also kind of similar to Taylor, what is uh what is the thing that you always tell people to try whenever they are visiting your region? So food that

you always tell people to try?

>> Oh, that's a great question. Actually, I

think like in Arkansas and maybe other states in the south, a lot of people like to try the barbecue, you know. And

when I say barbecue, I don't mean um >> like, you know, like >> Yeah. Yeah. like barbecue with like

barbecue sauce, like pulled pork, brisket with barbecue sauce, stuff like that. Around here it's a lot of like

pulled pork, but um yeah, in Texas it's more like the brisket, you know.

>> Yeah, that makes sense. I have not been to Arkansas. In Arizona, I guess I would tell people to try any hole-in-the-wall Mexican restaurant.

Like that would be my go to try unless >> We have a lot of good Mexican places here, too. It's It's so good.


>> Yeah, it's it's great. Unless you're

like in Scottsdale, then there's some like good burger places that I would recommend. Uh, good old Greggs

sausage roll. I have not had one,

unfortunately. I've heard good things.


Uh, 9:00 p.m. Dubai time. Well, thanks so much for staying up late. Uh,

we got 14 o'clock here in Brazil. Uh,

10:03 p.m. in Pakistan. I think that might be Oh, no, never mind. 10:03 a.m.

right now in PH.

All awesome. I love it. Macedonia, 16:03.

Glad... or 18:03. Glad to have you all here.

Any Ghanaian developers in the room? says

Ovac4u.

Um Joshua, what is going on, gents? I'm

excited for this. Well, we're excited to have you here, Joshua. Glad to have you here. Um and Liam says, "I'll buy you a

sausage roll next time you're in the UK, Josh." Okay, sounds good to me. I am


always down for some food. Uh I did have Taco Bell when I was in the UK, so and it was actually pretty good, >> man. I've got a Taco Bell. It's like


right outside my neighborhood. I mean,

I'm talking it's like across the street, basically.

>> I didn't know this about you. If you love Taco Bell, then

that's awesome. I love it.


>> It's so quick. I could like run over there and get like lunch and be back home legit like 10 minutes.

>> Even if I eat at the restaurant, you know? [laughter]

>> What's your go-to Taco Bell menu item right now?

>> Oh man. Like if I just want to be really efficient and in and out, I'll do just like two bean burritos and a drink and >> Okay.

>> crush them and then dip out of there.

Um, it's like 5 minutes.

>> What do you get?

>> I I get um like either the cheesy bean and rice burrito. Like that's kind of like one of my go-to or I always love like anything chicken from them. So like

there's the cheesy chicken chipotle flatbread thing. I like that.

>> Um but yeah, like cheesy bean rice burritos I've always had since I was like in high school. So those are kind of like my, you know, uh go-to for nostalgia sake. Yeah, for sure.


>> And then a Baja Blast. I used to do Full Sugar. Now I'm Baja Blast Zero.


>> My daughter does the Baja Blast Zero, too.

>> Yeah, I I I tried it full sugar again like for the first time in like I don't know 10 years and I was like, "Wow, this is this is a lot. I can feel it."

>> Awesome. Well, glad to have everyone here. It's It's crazy to see how many

people are here. Mostly to see Taylor and also to hear about this AI SDK. Um,

so if you do have questions in the chat, be sure to drop them. I'll try my best to be watching, but mostly the goal of this live stream is one to get Taylor's thoughts on the AI SDK, what it is, how

you would build with it within Laravel, maybe like why it was built, and then two hopefully like the towards the latter half of this stream being able to to build something. There's there's some

projects that I've built, um, that we can kind of show off the AI SDK and just, like, what it's capable of. But I think, like, the perfect part is being able to say, "Okay, what actually is capable

when you're building things?" Um and then the why. So again, if you do have questions, again, everyone's kind of shouting out where they're from. I love

it. Keep doing that. Um it's awesome to see people from all over the world being able to come together and say, "Hey, this is some cool things that we're building with this cool software that we

love." Um, so Taylor powered by Taco

Bell. Uh, what was the reason like why

did... I think the first time I saw you start talking about the AI SDK

It was probably I don't know probably in October, November of last year. I could

be wrong on the timeline, but why did that come into [snorts] fruition? That

was one of your projects I think that you kind of picked up and you're like, I'm just gonna >> I'm gonna dedicate my life to this.

Yeah, I mean obviously everyone was just kind of tinkering around with AI stuff, as they have been for the last few years, and um, you know, this is like such an important part of the dev workflow at

this point. It felt like we needed some

sort of first-party opinion on interacting with AI providers. I mean,

just like we have like opinions on sending email or like queuing jobs and just like this [clears throat] is becoming such a common part of like what people do when they build apps. Um, that

made a lot of sense to bring something first party and like have like an offering here. I usually try to think of

like if I'm going to build something into the framework, I kind of think to myself like is this applicable to like

70 to 80% of devs, you know, in the Laravel world. Like if I'm debating

whether something needs to be in the framework.

>> Yeah.

>> Um and then like for a package like the AI SDK is maybe a little bit more lenient like is this applicable to maybe half of Laravel devs like in the world like probably so right with AI uh and probably increasing dramatically over

time. So made sense to have something

and yeah, I did build it myself, and it was the first package I've written at Laravel in a couple years. The last

stuff I wrote was, uh, Livewire Volt and Laravel Folio.

>> And then as we were like building Cloud, building Nightwatch, I didn't really write a lot of packages at the time. And

now I'm kind of like back in the driver's seat with the AI SDK. I just

kind of like one [clears throat] most other people here were busy with other stuff, right? So it's [laughter] like so someone has to build it, but everyone's busy. So I was like, well, I

it'd be good for me to, like, get back in the code and build something, uh, fresh, and it sounds fun. And so, like, I'll tackle it, and plus I had some ideas for what I wanted to do with it. Um

there was already some cool stuff being done in the community around AI stuff with like TJ Miller's work on Prism which we kind of built upon in the AI SDK. I kind of see like that as like

almost like a query builder and Eloquent type of relationship. Like,

you know, I think the AI SDK I tried to put like a little bit more opinions on top than people might typically build in

a in like kind of a generic AI SDK.

>> Yeah. Um Yeah, that's that's awesome. I

guess, like, for those in the chat, some quick questions, um, as well. So

uh, um, will it be paid or free? It is already out, and it is free. It's a free package to use, the AI SDK. I guess for people who are just

joining us as well. Thank you so much.

We're going to kind of take the first little bit talking a little bit to Taylor about his thoughts about why we built it, what's it for? Um and then the second half of this stream we'll be able to jump into some code and see okay what

what does this look like practically building um in today's day and age. Uh

but, uh, for those joining, what is your favorite... outside of the AI SDK, because

that's a little bit of a curveball or home run type of question. Uh, what is your favorite Laravel package? Either

first party or third party? Love to hear it in the in the chat. Um Taylor, what's your favorite package that you've built outside of the AI SDK?

>> Oh man, if it's like something that's in Laravel, it'd probably be Eloquent. Um

it's also the hardest thing I've pretty much ever built. Um, if it's like a package, I'm a pretty big fan of, like, um, I think Reverb is cool. Um, I think

Echo is cool. Like kind of the real-time stuff in Laravel. What else do we even have? Um,

>> Let me look at the docs. We've

accumulated quite a few packages.

>> There's quite there's quite a bit out there.

>> I mean, I think Octane is pretty cool to some extent. Like the bulk of Octane is

sort of handled by other people, right?

like the FrankenPHP team and Swoole and stuff like that, but, uh, >> just the ability to like boot the Laravel app and kind of keep feeding it your requests really quickly.

>> Um >> yeah. >> Pete Bishop: my favorite package is Pennant, cannot live without Fortify.

It's crazy how many packages there there are that the Laravel ecosystem has especially when on our Laravel YouTube channel during the advent season of last

year. So, for 24 days and then a special

guest message from Taylor on the 25th day, uh, we had one video for every package, and there were some packages that we didn't make a video for. So,

enough to have at least 24 videos is pretty crazy, and some of those could be split up as well. Uh, yeah, I think my personal favorite is Cashier.

And I didn't really get into Cashier too much, because when I started my Laravel journey, I started with, um, Spark, and so I didn't really have to touch Cashier too much. It was kind of already done for me. Uh

>> yeah, >> but over the last year using cashier I was like, "Oh, this is actually incredibly easy, >> man." Yeah, we're working on some other

some new Cashier stuff and some new Fortify stuff, actually. Um, that

hopefully we can show pretty soon. But

like, man, Cashier... back when, um, I first wrote Cashier, integrating with Stripe and all that, it is a lot of work to, like, catch all the webhooks, store

all that. So

>> yeah, I think a lot of people find that package pretty valuable.

>> Yeah. And especially if you look outside of the Laravel ecosystem, most people, you know, have their own set way of okay, hey, here's how to not make Stripe incredibly difficult to work with and

everything like that. So, you know, it's, you know, it's valuable. Uh, we

got some people saying, uh, Reverb, uh, queues and jobs. We'll call that as part of a package. Working with queues and jobs is incredibly easy. Yes, love for Octane. Uh, Prism is insane. Uh, I


agree. We got Inertia. Love Laravel

Nova. Uh, favorite. Love. Uh, Octane.

Um, Sail, Octane, Reverb, Fortify.

I love it. I love it. Reverb is also cool. I do love Reverb.


Makes everything easy.

Uh, so Taylor, when it comes to the AI SDK, how is this different than maybe, like, working with AI in Laravel? So how is this different

than, like, Boost for example, or even MCP? Where does this, uh, SDK kind of fall

into?

>> Yeah, so like right now we have kind of three AI related packages at Laravel. Um

um I I think MCP was actually the first one. So MCP is uh MCP stands for model

context protocol. MCP, right, model

context protocol, I think.

>> Um and it is basically a way to like I think of it as a way to like expose a standardized API for LLMs and AI things to talk to. So like I have an app on the

internet and I want to allow, like, ChatGPT to do things, or some other tool to do things with that, then I can use MCP to expose functionality. And OpenAI started coming out with this new kind of... I don't even know if it's out of

beta yet, but eventually, like, sort of an app directory of things you can interact with from ChatGPT and things like that, but that's all kind of powered by MCP. Um

so then there's, uh, Boost, which is actually a little local MCP server that

runs on your machine, um, that Claude Code and Cursor and other, like, agentic dev tools like OpenCode can plug into, and what it does is it just makes available some tools to those agents so

that they can query the Laravel docs, they can run Tinker commands, they can inspect your database schema. And so the idea is it improves, like, the quality of the Laravel code that things like Claude

Code and Cursor can write, because they have access to the latest docs. So, like,

if I release a feature, you know, tomorrow in Laravel, because we release every Tuesday, LLMs have, like, no idea that that feature exists, right?

Because they've been trained on old data, but Boost actually allows us to feed them, like, the latest info. So, um,

they always have access to the most up-to-date docs. Um, if you're writing

Laravel and you're using Claude Code or Cursor, which at this point you probably should be using some sort of AI assistance, um, I think installing Boost is just like a no-brainer, because it's

like a free package that is probably only going to make the quality of your code better. Um, so I would definitely

install that. So that's the second AI

package we released. And now the, um, AI SDK is the third, which is sort of like a streamlined, unified interface for

working with different AI providers like OpenAI, Gemini, Anthropic. So I want to generate some text or I want to stream some text uh or I want to generate an image or audio. I can do that with the

AI SDK, um, through a unified interface, and kind of try different models, try different providers, without going and researching, you know, what's the HTTP endpoint for generating an image on Gemini, and now what

is it on OpenAI? Uh it's just a really streamlined way to work with AI.
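To make that idea concrete, a unified interface like the one being described might be sketched as below. This is an illustrative sketch only: the facade, method names, and provider key are assumptions based on the conversation, not the SDK's documented API; see laravel.com/ai for the real signatures.

```php
<?php

// Hypothetical sketch, not the real laravel/ai API: the point is that the
// calling code stays the same while the provider/model is swapped out.

use Illuminate\Support\Facades\AI; // assumed facade name

// Generate text with the default provider...
$haiku = AI::prompt('Write a haiku about queues.')->generate();

// ...then try another provider without learning its HTTP endpoints.
$haiku = AI::using('gemini')          // assumed provider key
    ->prompt('Write a haiku about queues.')
    ->generate();
```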

>> Yeah. Yeah. I love it. Um, yeah, someone asked the question, uh, for someone that does not know what the Laravel AI SDK is, how would you describe it in a few words? I think Taylor just did that. Uh, but I think if I could put it

into other words, and maybe Taylor you can correct me: it's like an eloquent way of interacting with all these things that >> now, in this world of AI, you would expect

you have to interact with at some point in time.

>> Yeah, I think so too. And then which I'm sure we'll get into a little bit later.

It sort of is, um, integrated with a lot of the rest of Laravel's stack. So

like if I generate an image, I have a method to like store that image using the file system stuff in Laravel. If I

want to, like, queue the generation of an image, it's actually integrated with the queue system in Laravel. So, um, it's an AI SDK that really leans into, like, the full

stackness of Laravel.
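A sketch of that "full stack" integration might look like the following. The method names here are illustrative assumptions, not the SDK's actual API; only the filesystem and queue concepts come from the discussion above.

```php
<?php

// Illustrative only: assumed method names, shown to convey how image
// generation could lean on Laravel's filesystem and queue abstractions.

use Illuminate\Support\Facades\AI; // assumed facade name

// Generate an image and store it on a configured filesystem disk...
$image = AI::image('A watercolor lighthouse at dusk')->generate();
$path  = $image->store('s3'); // any disk from config/filesystems.php

// ...or push the generation onto the queue instead of blocking the request.
AI::image('A watercolor lighthouse at dusk')->queue();
```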

I love it. I love it. Uh, when you think of, um... and some people are saying, uh, your video quality isn't doing too good.

I don't know if there's a way for you to bump it up, but if that means I don't know you start stuttering, I don't know.

Um, but we for everyone who's saying that uh >> Oh, I'm on standard definition. So, I

can go up to... Do I go up to full high definition? Is that recommended?


>> Full HD, maybe. I

>> How do I... do I look good now? Do I look better?

>> Let's see. Um, I think it looks >> crisper.

>> Should look... "video enhance," says someone. Uh,

>> exactly.

>> Taylor will get >> Yeah. Um, sweet. But yeah, uh let's uh

let's take a turn into: okay, all of this AI stuff is fun to work with, but I'm curious of maybe what you're seeing

within um you know, current applications, enterprise applications, etc. Or just why would people start adding AI into their apps? What are the things that people are generally going

to be using it for? Um what are some practical applications that you can now build with the AI SDK?

Yeah, I mean I I think some of like the most common things that you see people build with it are sort of like almost text summarization type features. So

like on Nightwatch for us, which we've already launched, was like, you have an issue in Nightwatch, right, like an exception has occurred or you have, like, some slow query, you can actually click a button in Nightwatch to let the AI

write like a summary of that. So, and it will tell you like, okay, here's kind of what happened. Here's some suggested

solutions. Here's where you might look

next or whatever. I think that sort of, like, category of AI usage is super common. Like, I have a bunch of

text that I need to basically, like, classify or summarize. It's super easy to do that with AI. Um, and pretty fast

and affordable. Um, you know, I think

another, like, chunk of AI is around, like, audio and transcription and sort of real-time applications. That's another, like,

popular, um, use case for AI. I mean, this is kind of... I don't know if people consider this, like, full-on AI in

the normal sense of the phrase, but I mean, I think it kind of is related. It's around, like, embeddings and vector search, which is part of the AI SDK, where we can generate vector embeddings from a given string of text, store

them in a database, and then do semantic search querying, and then AI-powered ranking of those results for the most semantically similar results.
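Sketched in code, that embeddings workflow might look roughly like this. The names are assumptions again: the `embeddings` method, the `embedding` column, and the `semanticSearch` scope are all hypothetical, made up to illustrate the flow being described.

```php
<?php

// Hypothetical sketch of the embeddings + semantic search flow described:
// generate a vector for some text, persist it, then rank rows by similarity.

use App\Models\Article;
use Illuminate\Support\Facades\AI; // assumed facade name

// 1. Generate a vector embedding for a string of text.
$vector = AI::embeddings('How do I reset my password?'); // assumed method

// 2. Store it next to the row (e.g. a pgvector column named "embedding").
$article = Article::first();
$article->update(['embedding' => $vector]);

// 3. Later, query for the most semantically similar records.
$matches = Article::query()
    ->semanticSearch('password reset help') // hypothetical query scope
    ->limit(5)
    ->get();
```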

So, um, you know, I think those are like some of the more common use cases. And then there's, like,

image generation, which I think is a little bit more on, like, the consumer side of the use cases for AI, where you're generating

sort of fun images and things like that. But at Laravel we've been using it mainly, I would say, for, like, text summarization, issue summarization, things like that. Yeah, definitely. And the

neat part, I think, with the AI SDK, kind of like it said, is now you have this one central point. I know in the past, within

things, you know, there are great packages like Prism that I know you said the SDK is built on top of and expands in a lot of ways. Um, but

it's great that you can kind of have this one central place to say, okay, when I'm working with images or when I'm working with audio, the infrastructure behind it, the the code behind is

exactly the same. I'm curious your thoughts, and when we dive into building something within the code... I've already used this within things like Claude Code and Boost and everything

like that. It's it's crazy how much

better it gets, because it's just one package for the LLMs that you're working

I generated this chat." Do you think that's intentional, that you built... like, what are your thoughts on that, when it comes to this new world of coding, in that sense? >> In terms of, like, how much effort do I put into sort of, like, the elegance of

the API?

>> Yeah.

>> Either that, or does that elegance of the API matter now? Yeah, I know.

It's something I thought about myself.

Um, I I think it does sort of matter in the sense of like the LLM still do well with things that are like easy to parse and understand and summarize and read

about. I don't know if it's like, you

know, obviously the LLM doesn't derive the same joy from it that, like, the human consumer might. You know, I used to put

like a lot of effort into these sort of, like, whimsical, fun APIs, because as humans writing the code, it's kind of fun to use these tools that are sort of, like, charming in a way. Um, and

maybe it still is, but yeah, I I mean it's still important, I think, to have good API design just for the discoverability of that LLM to be able to figure out what the heck's going on, you know? Yeah. Um, and I think like we

we've tweeted about this a little bit, but, like, frameworks that sort of lean into conventions and structure I think do well with LLMs. I think we've seen

that with like Laravel and things like Rails where like the LLMs are doing a pretty good job because there's lots of training data. There's these very

conventional structure to the projects, where it's like there's a models directory, there's a controllers directory. So they kind of, like, it's

very discoverable, you know.

>> Yeah. Yeah. I completely agree, and it's neat to see. You know, I feel like the joy from those clean APIs comes now from seeing it, uh, when

you're looking at maybe your code being generated, or you're just looking back, and it's very easy to know, okay, this is exactly what's happening, rather than having to jump through 50 different files to say, okay, what actually is happening. It feels like

you're... >> Even if you're not handwriting all that code, it still feels like you understand what's happening, in that sense. And I think the LLMs do a great job at, uh, also saying, um, you know, we're using this

one package, so we're going to be doing it the exact same way in another iteration of the application.

>> Yeah, totally.

>> Uh for those just joining, thank you so much for for being here. Uh I am joined my name is Josh. I'm joined by Taylor.

probably know Taylor, uh, just from everything: the creator of Laravel, and now the creator of the Laravel AI SDK, which is what we're kind of diving into here on the stream. So if you're

just joining us let us know where you're joining from uh what time it is there for you and then what's your favorite Laravel package. Uh but if you haven't

Laravel package. Uh but if you haven't gotten the chance feel free to like and subscribe this video if you're on our YouTube platform. There's a whole bunch

of awesome videos that we try to put out as consistently as possible. But we're

going to continue to take a look at the Laravel AI SDK and then hopefully be able to have a chance to build with it as well. Um, Taylor, what is your favorite part, before we jump into, like,

each aspect of the AI SDK? Uh, just from an overall standpoint, what is your favorite part of the AI SDK? One that

you're like, when you built it, you're like, "Yeah, this is really cool.

I like this."

>> Yeah. Yeah. Yeah. It's a good question.

Um, I really like the kind of, like, agent class concept, which was a big part of, um, my initial sort of thinking behind the AI SDK. And, you know, for those

who haven't seen it, basically when you use the AI SDK you make agent classes using, like, a make:agent Artisan command, and it's basically a class that sort of encapsulates the system instructions, the

message context the tools maybe the schema of um sort of like what you're doing um with the AI provider. So you

might have, for example, like, I don't know, I think in the docs we use a lot this sort of, like, sales coach or, like, lead extractor agent that's reading transcripts of sales calls and giving

advice and things but I I think it's like this very Laravelesque way of working with AI where everything feels sort of like nice and tucked in this little class that I can reuse throughout my application. So maybe I use it in an

Artisan command in one place and I use it in an HTTP controller in another place, and it's just sort of all self-contained. Also very easy to, like,

test, um, where, you know, you can fake agents, you can fake other AI interactions. I think the testing story

was something I tried to get right. I think testing with AI is

something that's often, like, overlooked, one, because testing AI can be hard. It's

not deterministic. Um, you know, there's all kinds of quirky things that can happen. So I tried to make that pretty

streamlined.
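The shape being described might look something like this. Treat it as a sketch: the class layout, method names, `make()`, and the `fake` helper are assumptions based on the conversation, not the SDK's actual signatures.

```php
<?php

// Hypothetical agent class, e.g. generated by `php artisan make:agent` and
// living somewhere like app/Ai/Agents. Everything the call needs (system
// instructions, tools, schema) is encapsulated here so the class can be
// reused from a controller, an Artisan command, a queued job, etc.

namespace App\Ai\Agents;

class SalesCoach
{
    // System instructions for the provider (assumed method name).
    public function instructions(): string
    {
        return 'You read sales call transcripts and extract leads, '
             . 'objections, and coaching advice.';
    }

    // Tools the model is allowed to call (assumed method name).
    public function tools(): array
    {
        return [];
    }
}

// Reused anywhere in the app (assumed invocation style):
$transcript = 'Full text of a recorded sales call...';
$advice = SalesCoach::make()->prompt($transcript);

// And faked in tests so nothing hits a real, non-deterministic provider:
SalesCoach::fake(['Focus on the pricing objection first.']);
```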

>> Yeah. No, I love that. And I think that's an interesting standpoint that I've personally loved as I've worked with it, just in, like, the past couple of weeks before launch and then also even just after launch. I've built

probably two or three different applications, just familiarizing myself.

And one of the things that I really love about the agent SDK specifically is that aspect where it feels like, >> I don't know, similar to a job, or an action, I guess, would be a

better, uh, use case, but where now I'm, uh, abstracting that specific logic out of every single job or controller that I'm using, and it lives in

one place. I can change it. As we all

know, if you've built anything with AI, you have a prompt, and sometimes you might want to change that prompt. It

sucks to have to change it in 10 different places that you're using that exact text prompt. So being able to have it in one place is really awesome.

>> Yeah.

>> Awesome. I wanted to jump into the landing page real quick and just kind of talk through some of the aspects before, uh, getting into, um, the

the rest of the actual code. I'm going

to plop this open right here.

Uh perfect. Um so, oh lost you for a second. There we go. Okay. So, for those

who don't know that the Laravel AI SDK is out, it is out. It is laravel.com/ai.

So, laravel.com/ai.

Uh, and it's the AI toolkit with batteries included. It's as simple as

running this composer require command.

composer require laravel/ai.

I guess one question for you, Taylor, is... you kind of mentioned it, but what was the thought behind putting this in one single package versus, like, hey, here's a package for

>> image, here's a package for voice.

What's your kind of thoughts or idea behind that?

>> Um, you know, I kind of just wanted people to have a complete toolkit for working with AI and not have to pull in a bunch of different stuff and kind of overthink it. Um, so for me it's just kind of nice to be like

composer require laravel/ai, and boom, I sort of have everything I need to do AI SDK related things, whether that's audio or images or whatever. Um,

it doesn't really, you know, significantly increase the size of the package to add these things. All the

bulk of the work is handled by people like Gemini and OpenAI, you know, to actually, uh, do this stuff. So it's just kind of nice to have this one self-contained package, to have kind of a complete toolkit.

>> I love it. Um so yeah just like Taylor said one SDK for every capability. Uh I

wanted to walk through some of like the elements that we have here on the bottom of the page. Hey you that's us today. Uh

but there's these, you know, eight different aspects, and this is again only a small part of it, but there's prompts, context, tools, provider tools, structured output, streaming, async, and attachments.

I kind of wanted to get your thoughts and walk through when you might use each one, because I feel like, especially in today's age, one of my thoughts is that as you are learning how to build with Laravel, of course,

maybe your LLM is doing the building as you work within Laravel or adding the AI SDK for you. And so for those who are installing the AI SDK, if you are using AI agents, which, like Taylor said, is a good thing to be doing, I think now if you run php artisan boost:install after you install the Laravel AI SDK, it will ask you to install the AI SDK skill, which will help the LLM learn how to use this better. Uh, but, uh, I think, Taylor, it's one of those things

where the more you understand what can be possible behind the scenes, the better we can prompt our LLMs to do that, and the better we know what's actually happening. Maybe in just a couple sentences each, we can go through each one of these. I just want to hear your thoughts on when you would use prompts, what context is and when to use it, tools, etc.
>> Yeah. [snorts] Prompts are sort of the basic way to ask the LLM to generate some text in response to text you give it. So, you know, if you've ever used ChatGPT and you ask it a question and you get some text back, that is you prompting the LLM. Very similarly,

like in the AI SDK. This is how you do that in code, right? So we're going to send the LLM some sort of like text, a question, uh maybe like a command. Like

in this case, we're saying analyze this transcript and we're kind of passing it along this file and it's going to give us back some text. So this is like a super common use case for AI, right?

Yeah.
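To make the prompt idea concrete, a one-off call might look roughly like this. This is an illustrative sketch: the facade name, method, and response shape are assumptions based on the discussion, not verbatim SDK code.

```php
use Laravel\Ai\Facades\AI; // assumed namespace

// One-off task: no conversation state, just text in, text out.
// Uses whatever default provider/model is set in config/ai.php.
$response = AI::prompt(
    'Analyze this transcript and summarize the key takeaways: '.$transcript
);

echo $response->text;
```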

>> Um...
>> Similar to that summarization aspect that you had mentioned before.
>> Yeah, exactly. Um, and you can see even in this case, we don't have to use prompting just for chat, so to speak, right? We can just give it a task almost as a prompt, and we don't intend to ever revisit the conversation, right? It's not an ongoing conversation, but we're just prompting the AI, uh, or the LLM with a specific task and getting a response, and we're storing that or sending it to the user or whatever, but it's not

really an ongoing chat. Um, and to your question about the context tab, or, well, this is actually good, you can go back and look. This is a good example of that sales coach agent. This kind of shows you how you can encapsulate the system instructions within this class, and then we can reuse this throughout our application. So we can just new up the sales coach agent and give it a prompt, and we can do that in a variety of places throughout our app without re-specifying those system instructions. Yeah. So that's super convenient.
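As a sketch of that encapsulation idea, the sales coach agent might look something like the following. The class shape and method name here are assumptions for illustration; check the SDK docs for the real signatures.

```php
namespace App\Agents;

// Hypothetical agent class: the system instructions live in one place
// so they never have to be re-specified at each call site.
class SalesCoachAgent
{
    public function instructions(): string
    {
        return 'You are an experienced sales coach. Give concise, '
            .'actionable feedback on sales conversations.';
    }
}
```

Anywhere in the app you could then new it up and prompt it, e.g. `(new SalesCoachAgent)->prompt('Review this call transcript...')`, without repeating the instructions.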

>> One question I had here, I think this is a good place to talk about it: you kind of mentioned it's one central place to be able to interact with everything. So we have all these tools available to us that we see up here, but we're not saying, uh, send a prompt to OpenAI, or even send a prompt to a specific model, or even send a prompt to [clears throat], you know, Gemini.
>> How does all that work behind the scenes? Like, do you have different sections, providers? What happens?

>> Yeah. So, a lot of that is driven by your config file. Um, and that's where you specify it. I don't even know if that's on this page, but we definitely talk about multi-provider and failover and stuff. But similar to your Laravel app, if you've used Laravel, you probably know you have a config/database.php file or a config/filesystems.php file. When you install the AI SDK, you get a config/ai.php file, and that's where you configure, okay, I want my default AI provider to be OpenAI, or I want it to be Anthropic. And you can

actually customize that based on the types of things you're doing. So you can have a default provider for text, like how we're prompting here, but you could have a different default provider for images. So maybe for images I want to use Gemini and Nano Banana, but for text I want to use Anthropic. Um, and so if you specify that in your config file, you actually don't have to specify it here when you actually prompt. You know, it's just going to use whatever defaults you have. Of course, we can override that in the code, but, uh, it's kind of nice. Uh, you know, you can just sort of use it. You don't have to remember,
Okay.

>> What is, like, the code name for Claude? Haiku 4, whatever that is.
>> Five. Yeah.
>> Yeah. Exactly. You can just use the features.

>> Yeah. And it seems like that's the Eloquent way in a lot of ways, where you're building out database queries the same way: I could be building on SQLite in a local instance, and then when I push to production, none of my code changes, just my config.
>> Yeah, it's basically the same concept, almost like an ORM-type concept for AI providers.
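A config/ai.php along the lines Taylor describes might look roughly like this. The key names below are assumptions, but the shape mirrors config/database.php and config/filesystems.php:

```php
<?php

// config/ai.php (illustrative fragment; key names are assumptions)
return [
    // Global default provider, overridable per capability below.
    'default' => env('AI_PROVIDER', 'openai'),

    // e.g. Anthropic for text, Gemini (Nano Banana) for images.
    'text'   => ['provider' => 'anthropic'],
    'images' => ['provider' => 'gemini'],

    'providers' => [
        'openai'    => ['key' => env('OPENAI_API_KEY')],
        'anthropic' => ['key' => env('ANTHROPIC_API_KEY')],
        'gemini'    => ['key' => env('GEMINI_API_KEY')],
    ],
];
```

Swapping providers between local and production then becomes an env change rather than a code change, just like swapping SQLite for another database driver.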

>> I love it. What about context? What is

that?

>> Yeah. So context is kind of how you would actually build something more like a chat.
>> You know, what's kind of interesting about dealing with LLMs is, when you ask an LLM for some text, you know, ChatGPT and Claude give the illusion that there's this ongoing state with the LLM, where it's aware of all the previous messages. But that's not actually the case. By default, you have to feed it the whole string of messages that came before your prompt so that it knows, and you actually have to do that every time you call the LLM. It has no concept of historical interactions with you.

>> Yeah.

>> Um, so this is where you do that. So this is where, if you actually are building more of a chat-like interface where the user is having an ongoing conversation with an agent, you need to store those messages in a database, and then this is where you can pull them back out and give them to the LLM. So in this example, we're pulling all the recent messages for a given user, maybe the most recent 50 in this case, and we're passing them in chronological order to the LLM. So this is an example of how to do that manually, but the tab you're on now shows this is actually built into the AI SDK. So, like the previous tab showed, we allow you to do that manually, but if you use this RemembersConversations trait like you have here, the AI SDK will actually just do that for you. Um,

>> And the AI SDK actually is in beta, and I would say this is one of the areas I'm most working on between now and the stable release, some more features around this. But the bare-bones skeleton of it is here, where if I use this RemembersConversations trait, it's going to automatically store conversations in a conversations table, it's going to put the messages in an agent messages table, and it's going to automatically pull them back out when you continue the conversation. It's just super nice, because it's a lot of boilerplate code to store those messages, get them back out, make sure they're sorted in the right order, and pass them to the LLM. So to just be able to use this trait in one line of code, and it's like, oh, I just have conversation state. Um, that's great.
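A minimal sketch of the trait usage, going by the description above (the trait name matches the discussion, but the namespace and table wiring are assumptions):

```php
use Laravel\Ai\Concerns\RemembersConversations; // namespace assumed

class SupportAgent
{
    use RemembersConversations;

    // With the trait applied, the SDK persists each exchange to its
    // conversations / agent messages tables and replays the stored
    // history, in order, on the next prompt. No manual message
    // plumbing is required in application code.
}
```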

>> Yeah, to be able to not have to do any of this and store it yourself; it's just kind of abstracted away for you.
>> Yeah.
>> I guess there was a question I saw in the chat, though: what was the most challenging part of building the Laravel AI SDK? Is this it, or is there a different part?

>> No, I mean, I wouldn't say there was any one area that was just, oh wow, this is uber hard. I would say it was a lot of just sitting and thinking about what are the right APIs, like how do I want to design it? Kind of what we were talking about earlier: I really like to design things that feel really thoughtful, really well-designed, easy to use, and intuitive, and the process of arriving there is usually a lot of sitting there staring at the screen, you know, or even just writing pseudocode, like, oh, what would feel the best? And sometimes I'll actually go into an empty file and just start typing code as if I'm using the AI SDK, even code that doesn't exist yet, to try to discover, oh, this feels nice, or this doesn't feel nice.
>> Um, so yeah, to get back to the answer, I don't think it was any one particular thing. It was just figuring out what are the most delightful, considered APIs to give people.

>> Yeah, I love that. And to your point before, where you kind of posed the question, does it matter? I think that means it does matter in a lot of senses, because it becomes this unified platform where you know what's available to you. Even if you might not be handwriting every single piece of the code, you know what's available, and it feels unified, which I think is the big thing.
>> Jason, you mentioned, uh, Jason Torres, okay, fine, I'll go install it. You had your Santa app that was hosted on Laravel Cloud. I'm curious, if you swap in the AI SDK, it'd be cool to...

Um, all right, Taylor. What is uh tools?

I know we kind of talked about this a little bit on streams before when it comes to, like, MCP and the difference between tools and stuff like that. What is tools in the context of the AI SDK?
>> Yeah, tools are one of the coolest parts, I think, of building agents, and they give you the most interesting possibilities for the kinds of things you can build, because, um, it lets you basically supercharge LLMs with things that they couldn't do otherwise. So, and this example is a very basic example, but I

I'll share some other cool examples in a second where maybe you want the LLM to be able to generate like a cryptographically secure random number.

>> I don't think that's actually built into the LLM. So, like, if I ask ChatGPT, hey, give me a number between 1 and 100,
>> Yeah,
>> that's probably not actually a random number in the true, like, mathematical sense of the word. But using the AI SDK, I can define a tool, and when I prompt the LLM, I give it a list of tools that it has available to it. So it's going to see, hey, I have this tool that can be used to generate cryptographically secure random numbers, and it can invoke that tool, which will then invoke this code on your end. This handle method that's right in front of us will actually be called, but the LLM will decide to call it. I mean, it's pretty wild, actually. Um, and so a lot of times if you look at model evals, when OpenAI or Anthropic comes out with new models, it will actually have an eval for tool use, of how good it was at using tools,
>> you know, and the LLMs these days are pretty good at this.
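The tool on screen might be sketched like this. The class shape (public description, handle method) follows the discussion; the exact base class or interface the SDK expects is not shown here and is assumed away.

```php
// Hypothetical tool class along the lines of the example on screen.
// The LLM sees the description, decides to call the tool, and the
// handle() method then runs on your server.
class GenerateSecureRandomNumber
{
    public string $description =
        'Generate a cryptographically secure random integer between min and max.';

    public function handle(int $min, int $max): int
    {
        // random_int() uses a CSPRNG: a real random number,
        // not an LLM "guessing" one.
        return random_int($min, $max);
    }
}
```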

So this lets us actually augment the LLM with cool stuff. So, I mean, a real-world use case for tools that probably a lot of us are familiar with is, if you've ever used Claude Code or OpenCode or Cursor, you'll see that it inspects files, it reads files, it writes files, it runs bash commands. Those are actually tools that the builders of Claude Code and OpenCode have built, right? So they have, like, a run-bash-command tool that they have given to the LLM to run bash commands.
And so actually, the first time I demonstrated the AI SDK out in San Francisco a few weeks ago, I wrote a little tiny nano version of Claude Code. I think I actually called it nano code, where I gave it like seven tools. I gave it a read-file tool, a write-file tool, a bash tool, and it actually worked. Like, you could chat with it. It would update files. You could say, "Hey, make a new Eloquent model and update the routes to return all the data using the new model," and it actually works. Um, so tools let you do all sorts of random stuff. And I think we'll get into it in a second; we actually give you some tools built in. Um, but this is how you can start to integrate the LLM with your system. [snorts]

>> Yeah. No, I love that. And that's essentially what Boost is doing in a lot of ways too. You mentioned Claude Code, but Boost has a specific number of tools. Would you say then, Taylor, tools are a great place to keep the things that your app likely does over and over, or that your LLM, through your app, is doing over and over again?

>> Yeah. Yeah, I think so. Um, and some of the tools we give you, or some common tools that people expose, are like search. So, um, go to provider tools. What do we have here? I think this is, yeah, this is web search, web fetch, uh, and then file search. So people use tools a lot for this kind of thing. Um, so if I throw the web search tool or the web fetch tool onto this agent, and these are two tools that are just built into the AI SDK, you don't need to write them yourself, it will allow the LLM to actually search the web for the latest data, or fetch a web page from a given URL. And then with file search, it will allow it to search, like, vectorized PDFs or other data, to find things that it otherwise didn't have access to. So if we have maybe uploaded, I don't know, all of our company policy PDFs to a file store, we can now search this and have a chat agent that can search over this data. Um, and that's data that the LLM otherwise wouldn't have access to.
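Attaching those built-in provider tools to an agent might look something like this sketch. The tool class names match the discussion (web search, web fetch, file search), but the namespaces and constructor signatures are assumptions:

```php
use Laravel\Ai\Tools\WebSearch;  // built-in provider tools per the
use Laravel\Ai\Tools\WebFetch;   // discussion; namespaces assumed
use Laravel\Ai\Tools\FileSearch;

class PolicyAssistant
{
    public function tools(): array
    {
        return [
            new WebSearch,   // let the LLM search the web for fresh data
            new WebFetch,    // fetch a page from a given URL
            new FileSearch,  // search vectorized files, e.g. policy PDFs
        ];
    }
}
```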

>> I love it. Uh, kind of a question along these lines: how should we organize our tools? So I guess when you're building out an application with this, what are your thoughts on, one, how many tools max can or should we use? Um, and then how should we organize our tools? I guess, uh, you know, the SDK gives you a way to organize them within this tools namespace, but what are your thoughts on this?

Yeah, I know it's been kind of a pretty hot topic of discussion, the amount of tools that you should expose to an LLM. I would say, you know, you shouldn't have like 50, 60, 70 tools exposed to the LLM, because you get what's called context bloat. Kind of what I was saying: where you have to send all of the messages to the LLM to provide historical context, you also have to send all of the tool definitions, what they do, and you have to send that on every message as well. So if you have too many tools, you can consume a lot of tokens and a lot of context. Um, I think that really comes into play on some of these local coding agents, where you start to plug in different MCP providers, and they each have maybe a dozen tools, and you've got maybe five of those, and now you've got like a hundred tools, and it can get pretty out of control pretty fast. I think if you're using the AI SDK to build something in your own application, you probably aren't running into that quite as much, where you have like a hundred tools. I think that's probably pretty rare. So, um, yeah, I mean, I don't know what the exact number is, but it is a thing you have to watch out for. Let's put it that way.

>> Yeah. Yeah. I am curious, I actually don't know the answer to this, but I am curious if there's going to be a way, or if there's already one, to conditionally load up tools. That would be very interesting. I know there's probably people...
>> And what's cool with the AI SDK is you can do that a little bit, in the sense that [clears throat] you have the agent class, and so whatever you return from that tools method can be determined at runtime.
>> Okay.
>> Um, so maybe you pass a user into the constructor of the agent, and then you're using that user to say, okay, well, this user has access to this list of tools, and maybe this other user doesn't. So you can do a little bit of dynamic tool registration here.
>> I know people are starting to work on other things around tool discovery. Um, so yeah, it's a pretty ongoing, I think, evolving topic in the AI world.

>> Yeah. So that does make sense. A great use case for, like, a tools array on a particular user would be maybe they're a paid user and would have access to specific tools that you might not have for free users, for example.

>> Exactly. Yeah.
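That runtime tool registration could be sketched like this. The agent shape follows the discussion; the two tool classes (CheckInvoiceStatus, GenerateUsageReport) and the isPaidUser() helper are hypothetical names for illustration:

```php
use App\Models\User;

class BillingAgent
{
    public function __construct(private User $user) {}

    // tools() runs at call time, so the tool list can depend on the
    // user passed into the constructor: here, paid users get an
    // extra tool that free users don't.
    public function tools(): array
    {
        $tools = [new CheckInvoiceStatus];

        if ($this->user->isPaidUser()) {
            $tools[] = new GenerateUsageReport;
        }

        return $tools;
    }
}
```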

>> Uh, awesome. For those just joining, again, thanks so much for being here. We have Taylor talking through the Laravel AI SDK. We'll get to actually building things, and what that looks like practically, here in a little bit. But I want to give a good bare-bones overview, for those watching on demand, of what it is and how we think through the different pieces of this, because there are so many cool options you can build with this. Uh, even one app probably would only touch two or three of these. It's really hard to find a singular app that could use all of these things provided. Uh, if you do have questions, feel free to drop them in the comments. I am checking as much as possible. I can't promise that we'll get to all those questions, but we're glad to have all of you watching here. Feel free to like and subscribe to help us show more content to you in the future as well. Uh, so Taylor, what is structured output? So, you know, we kind of talked about in prompts that you get text back when you're saying, "Hey, give me this." What is the difference between that and structured output?

>> Yeah. So when you prompt an agent, by default, um, you just get free-form text back in the response. So, very much like you would get from ChatGPT, you know, it doesn't really have a defined structure or schema to it. You're just getting back a paragraph or two of text.
>> Um, what structured output lets you do is actually say, okay, when you respond to me, I want you to respond in this structure. Um, and I think if you scroll down a little bit, you can see, like, in this case, we're saying give me back a score, and that needs to be an integer, and it's a required field. So when the LLM responds, it's going to give you JSON that matches that schema, which is super convenient, because then you can parse that JSON. You know what to expect in terms of what data is on it, and then you can reliably store it or do whatever you want. If you're just getting back free-form text from an LLM, like, you know, how do you parse that? I don't know. It's just arbitrary text, you know. But, um, if you're getting back JSON, you know what to expect. And this is actually powered by a new Laravel component, the JSON schema component, which you can see there in the type signature. Um, so we actually used this in the MCP package to define JSON schemas for your MCP tools, and it's used here. So, um, we actually wrote this component because we needed it, you know, in the AI SDK and the MCP package. Uh, but it's a really slick little component for defining JSON schemas.
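A structured-output request along those lines might be sketched as follows. The JsonSchema fluent calls mirror the on-screen example as described; the exact namespace, parameter name, and array-access response shape are assumptions:

```php
use Illuminate\JsonSchema\JsonSchema; // the JSON schema component; namespace assumed

// Ask for a response matching a schema instead of free-form text.
// SalesCoachAgent stands in for whatever agent class you have.
$result = (new SalesCoachAgent)->prompt(
    'Score this sales call from 1 to 100 and explain why.',
    schema: [
        'score'  => JsonSchema::integer()->required(),
        'reason' => JsonSchema::string(),
    ],
);

// The reply is JSON matching the schema, so parsing is reliable:
$score = $result['score']; // an integer, per the schema
```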

Yeah, I remember the first application, actually the first Laravel application I ever made, was an AI application, and we didn't have structured output at the time. This was about three years ago. I think it was GPT-2 or something like that. Uh, but I had to hope and pray that the agent responded in the way I asked, where I said, okay, in the response, put equals-equals-equals, and then separate on that, and then I had to parse it out from there. Uh, structured output makes that so much easier.

>> Yeah. I mean, even on Anthropic's models up until recently, to get it to return structured output, you were literally telling it in the prompt, please respond with this JSON structure. [laughter]
>> It's pretty wild.
>> Yeah, it's crazy times that we live in now, where we can have it be exactly the way we want it. And the way the AI SDK does that makes it extremely easy. Um, a couple questions that I wanted to get to. Uh,

someone said, for using the SDK, do we need to purchase multiple model subscriptions, like Gemini and Claude Code? Uh, so Claude Code would probably be your local one, and then Gemini if you wanted to use that within the AI SDK, whether that's returning text responses or even, um, you know, image generation. Uh, multiple models would have to be added as API keys to the config, locally and in production. Uh, but am I correct, Taylor, that, let's say you're building something locally and just want to test it, you can use local models through Ollama. Is that correct?
>> Yeah, that's correct. Yeah, we released that pretty quickly after launch. Another thing you can do, if you don't want to sign up for multiple model subscriptions, is use something like OpenRouter, where you sign up for OpenRouter and you have one key, and then you can actually access Anthropic models, OpenAI models, all the models through one API, or through one sort of paid service. Uh, but yeah, you can also just use local models. I actually did that myself the other day, just testing stuff. You can, um, download Ollama and then pull in some kind of local model, whichever one you want, and you can just run AI stuff locally.

>> Yep. So that's Ollama, O-L-L-A-M-A. Um, and I think actually OpenRouter, now that you mention it, of course they have a paid tier where you can pay them and they'll route it to different providers, but I think they also have a free tier where they'll route it to free providers as well,

uh, I saw another one. Okay: you mentioned embeddings, um, and vector search. Can it replace LangChain, as an example, and can the tools be a RAG replacement? Yeah, a lot of the tools kind of can be a RAG replacement. As far as whether it can replace LangChain, you know, it probably depends on what you're doing with LangChain, but a lot of the ideas behind some of the built-in tools are around RAG and similar concepts, in the sense of vectorizing files, searching them, doing similarity search across data using tools, um, things like that. I think that's a super common use case for tools in general. Um, so yeah, there are opinions on that kind of built into the AI SDK, because it is so common.

>> Yeah. Um, and then, uh, we'll get to that question in a little bit. Uh, I will jump back in. What is streaming?

>> Yeah. So streaming is just a really easy way, so instead of calling the prompt method on an agent, if I call stream, it's a super easy way to send back a stream of text to the front end. So that, you know, in the same way as when you interact with chats online, how the text streams in line by line as it's available, the stream method in the AI SDK makes that super easy. So this uses HTTP server-sent events, called SSE, for streaming text back as it's available. And then in this example, you can see we're using this then method. We can actually do something once it's all been streamed. If there's some sort of follow-up action we want to do, I don't know, some arbitrary code we want to run after the stream is complete, we can do that here. But then on the front end, in our JavaScript or using Livewire or something, we can actually consume that stream to show text as it is available, so that you don't have to wait for the entire response to be generated before you see anything.
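A route using that pattern might be sketched like this. The stream() and then() calls follow the on-screen example as described; the agent class and callback signature are assumptions:

```php
use App\Agents\SupportAgent;
use Illuminate\Support\Facades\Route;

// routes/web.php: stream() sends server-sent events (SSE), so the
// browser can render text as the model produces it.
Route::post('/chat', function () {
    return (new SupportAgent)
        ->stream(request('message'))
        ->then(function ($response) {
            // Runs once the full response has streamed,
            // e.g. persist the final text.
            logger()->info('Stream complete', ['text' => $response->text]);
        });
});
```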

>> Yeah, that is a really nice part of it being so succinctly tied in with the front end. I like that. Uh, for example, within Vue and React, we have the useStream hook, but then Livewire has the wire:stream directive to take all of this and make it look like you would expect within ChatGPT, where it's kind of streaming line by line.

>> Um, totally. What about async? This is interesting to me, because I've heard a lot of cool things about it, but I'm not quite sure how it actually works behind the scenes.

>> Yeah. So, this is another example of how the AI SDK is leaning into a lot of Laravel's full-stack, you know, batteries-included opinions, where, um, instead of calling prompt or stream, in this case we're calling queue, where we're basically queuing a prompt. So we're saying, hey, we want to call the LLM with this prompt, but we want to do that in a queued job in the background, because we don't necessarily need or want to wait for the response right now. Um, we just need to queue this to happen in the background, and then we'll do something with the response later. So when you call this queue method, it's actually going to create a background job on your Laravel queue to respond to this prompt, and then the then and catch callbacks will be invoked when the response is ready or if something goes wrong, respectively.

>> Um, so once we get the response, we can store it in a database, or send an email, or whatever we want to do with that response. Or if something goes wrong, we can handle that as well in the catch callback. But basically, you know, this would be used where, uh, the user submits something that we want to analyze with AI, but we don't really want the user to have to sit and wait for it to be done, right? So we want to return a response to the user, like, hey, we're processing this, we'll let you know when it's ready, or whatever.
>> And then, um, that happens really quickly, right? Basically instantaneously. And in the background, this job is running. And then we'll do something, we'll notify the user maybe when it's ready, or when it's done, or whatever. Uh, this makes it super simple to do that.
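Sketching that queued flow, going by the description of the queue method and the then/catch callbacks (the agent class and the $upload model here are hypothetical stand-ins):

```php
use App\Agents\TranscriptionAgent;

// Queue the prompt instead of blocking the HTTP request.
(new TranscriptionAgent)
    ->queue('Transcribe and summarize this uploaded audio file.')
    ->then(function ($response) use ($upload) {
        // Runs on a queue worker when the LLM reply is ready.
        $upload->update(['summary' => $response->text]);
    })
    ->catch(function ($e) use ($upload) {
        // Something went wrong: record the failure.
        $upload->update(['status' => 'failed']);
    });

// Meanwhile the controller returns immediately:
// "We're processing this. We'll let you know when it's ready."
```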

>> Yeah. So, I would say, off the top of my head, it seems like a good use case for this would be maybe transcribing a 100-megabyte or 200-megabyte audio upload, where you don't want them to wait on a full request until they get the response for that. That's awesome. I love that. A question on the async stuff, uh, from Len: is the async stuff non-blocking? Does it use the ReactPHP PSR-18 browser?

Yeah, this async stuff is more queue-related. Um, so it doesn't really have anything to do with async in the sense of concurrency, or, you know, Go coroutines or anything like that. Um, it's more about just putting something on your background queue and then doing something with it later.

>> Yeah. Um, I think I saw another question. Uh, yes, the SDK is compatible with the Ollama API as well, I would assume. Correct?
>> I think so. Yeah. Yeah. And you can even, um, there's a new, I think we just documented this the other day, maybe two days ago, there's a setting in your config file where you can actually override the base URL for any provider. So you can actually use any AI provider that is, like, OpenAI-compatible or Ollama-compatible, just by overriding that base URL.

Yeah. Uh, a question here that we kind of glossed over at the top. I'm curious to hear how this was built, Taylor, and also what your thoughts are. I know that there's also a use-cheapest-model trait and stuff like that. So in terms of
>> automatic failover, um, how does all that work behind the scenes?

>> Yeah. So um yeah yes yes you can do this and it pretty much works how um this person is asking where you pass multiple providers and models to the prompt method or either uh or specify them in

the provider attribute on the agent itself where if one provider is either overloaded or you are rate limited on that provider um it will actually just

fall back to the next provider in the list that you give it. So maybe you try enthropic first. If enthropic is

enthropic first. If enthropic is overloaded or you're just rate limited on enthropic, it will fall back to open AI for example. Um we do that by checking like the response codes from

the AI providers that we get back. So we can see, oh, they're overloaded, or we get a 429 HTTP code if you're rate limited, something like that. And then we can fall back to the next provider, and we actually raise an event that you can listen for to know that you had a failover. And we're working on bringing a lot of this event visibility into Nightwatch, so that you're going to be able to see, you know, how many tokens am I using on which providers, and how long are my tool calls taking, and things like that. Um, anyway,

so that's kind of like model failover. And then you mentioned some of the attributes you could put on agents, like use cheapest model, use smartest model. I kind of wanted to have this nice way to just annotate an agent with "use cheapest model," for example. And sometimes you have tasks, like basic text summarization or something like that, where you don't need Anthropic Opus 4.6 for that, right?

It's like too expensive.

>> Yeah.

>> Yeah. Exactly. So if you're just trying to summarize a paragraph of text into one sentence, for example, you can just use Haiku.

>> So the benefit of putting use cheapest model on the agent is like let's say tomorrow a new cheapest model comes out.

>> You don't really have to go look, okay, what is that model, let me copy the ID for it and paste it into my code. If you're just using "use cheapest model" and you just composer update to get the latest AI SDK, you're just always on the latest cheapest or smartest model.

So this is kind of nice. You don't really have to update your code if you know that this agent is doing something that doesn't require a lot of intelligence.
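The value of the alias is that the mapping lives in the package rather than in your code. A minimal sketch of that idea in plain PHP (the alias table and model IDs here are made up for illustration; they are not the SDK's actual mapping or real pricing data):

```php
<?php

// Hypothetical alias table of the kind a package could ship and refresh
// with each release. A `composer update` would then move every "cheapest"
// agent to the new cheapest model with no application code changes.
const MODEL_ALIASES = [
    'cheapest' => 'claude-haiku',
    'smartest' => 'claude-opus',
];

// Resolve an alias to a concrete model ID; concrete IDs pass through as-is.
function resolveModel(string $nameOrAlias): string
{
    return MODEL_ALIASES[$nameOrAlias] ?? $nameOrAlias;
}
```

Keeping the opinion ("which model is cheapest today?") in the package is exactly what lets agents annotated with the alias stay current automatically.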

>> Yep. No, I love that, because it's one of those things, similar to what you mentioned at the beginning, where having the one package makes it easier and more eloquent in the sense that you have those opinions already formed for you: when it comes to cheapest, when it comes to smartest, even when it comes to automatic failover. Those are things that you probably want when you're building a full-stack application; you just don't want to do it yourself in a lot of ways.

>> Yeah totally.
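The failover flow Taylor described, try a provider and move to the next one on an overload or rate-limit response, can be sketched in plain PHP. This is an illustration of the behavior, not the SDK's implementation; the providers here are stand-in callables returning a `[status, body]` pair:

```php
<?php

// Sketch of provider failover: try each provider in order, falling
// through to the next one when the response is overloaded (503) or
// rate limited (429). Other failures are treated as hard errors.
function promptWithFailover(array $providers, string $prompt): string
{
    $retryable = [429, 503];

    foreach ($providers as $name => $call) {
        [$status, $body] = $call($prompt);

        if ($status === 200) {
            return $body;
        }

        if (! in_array($status, $retryable, true)) {
            throw new RuntimeException("{$name} failed with status {$status}");
        }

        // Otherwise fall through to the next provider in the list.
    }

    throw new RuntimeException('All providers failed.');
}
```

A real implementation would also fire an event at the fall-through point, mirroring the failover event Taylor mentions you can listen for.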

>> Uh lastly, what about attachments? This

is interesting.

>> Yes. Yeah. So attachments come up in a few places throughout the AI SDK. In this example, we're using it when prompting during text generation. So, you know, maybe we're giving it... I think in some of my demos I've given it a CSV of sales leads, or maybe you're giving it a PDF of a transcript and you're saying, analyze this document, or parse it, or do something with it. So we just make it super easy to give attachments in various places.

So you can do it here with text. We also let you do it when you're generating images. So you could prompt the model: hey, here's an image, and I want to make it cartoon style, or like a painting, or I want to change something about it; basically remix an existing image. You can also use attachments for that as well. Yeah, just super easy to attach and upload files and interact with them. Yeah, here's this image example. Yeah, maybe

you're asking what's in the image. So I

can give it an image, and I can combine that even with structured output, to analyze images and generate structured output that I can do something with.

>> I love

that. Uh, quick question. I'm going to stop sharing my screen real quick just so we can answer this question, and then I'm going to pull up a demo that I built so we can walk through it in a practical sense. Then maybe, if we still have some time, we can build something from scratch, because I'm curious to see how Claude Code also works with the AI SDK. But the question says: now that we have the Laravel AI SDK, what's next? Laravel's focus on the AI SDK is exciting, I think it's the right step forward. Are there any future plans? So, I guess you mentioned the whole remembering agent being one of the things that you're focusing on. What do you think is

>> Um, I've actually got a list of things that I'm working on, sort of post-beta and even after the stable release. So yeah, I've got a list of features I want to keep working on for the conversation storage, context retrieval, all of that stuff. I mean, it's some basic stuff like pruning of historical conversations, or compaction of the history, and sort of summaries and snapshots and things like that. There's

some stuff around tools and sort of agent loops that I want to look at. So

like human tool approval is a good one. So, say you want the LLM to be able to invoke a refund customer tool, but it needs human approval if the refund amount is greater than $200. Um,
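A human-approval gate like the refund example could be as simple as a predicate the agent loop consults before executing a tool call. A hypothetical sketch (the tool name, argument shape, and $200 threshold come from the example above; everything else is invented for illustration):

```php
<?php

// Hypothetical approval gate consulted before a tool call executes:
// matching calls would be queued for human review instead of being run.
function requiresHumanApproval(string $tool, array $arguments): bool
{
    // Refunds over $200 need a human sign-off, per the example above.
    return $tool === 'refund_customer'
        && ($arguments['amount'] ?? 0) > 200;
}
```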

so, stuff like that. It's kind of these more advanced, complicated tool use cases. And then there's other agent stuff, like agent loops: more customization of when the agent actually stops doing things. This sort of gets into, you know, you've seen people have these so-called Ralph Wiggum loops, where you can have infinite loops of agents doing things. So, a lot of agent loop configuration and

stuff like that. I also want to build a cool artisan command, like artisan agent chat, where if you have an agent and you just want to chat with it, maybe to see how it's behaving or what it's doing, I can just come into the command line and do agent chat, and then just pick an agent from a list using Laravel prompts, and chat with it and see: okay, is it behaving how I would expect, you know? Almost like the dd of agent debugging, [laughter] in a way.

>> The AI dd, in a lot of ways. That'd be really interesting.

I almost imagine then you could have its own agent MCP, where your Claude Code or LLM or whatever can interact with the agent, and they talk to each other.

>> Yeah. And then there's, you know, there's already a bunch of PRs open from the community. And one of the ones I think is most requested, or most on everyone's minds, is so-called sub-agents. And one easy way to think about this is almost like the ability to expose another agent as a tool from another agent. So imagine

I have one primary agent, and in its tools method, maybe I don't only return tools, I can also return other agents. So it can actually call on... okay, you are sort of an orchestrator agent, and you can call on maybe this planning agent, or this research agent, or this other type of agent that is more specialized, to do things. There's

actually a PR open for that right now that I need to review. Hopefully I get it reviewed today, and at least we can kind of get that sub-agent behavior out this week. But a lot of it will be driven by the community. You know, I think there's dozens of pull requests already out there, and that's kind of how it always goes in the open source world. I'm driving... I have one hand on the wheel, but the community also has a hand on the wheel, and, you know, we're hopefully getting to a good destination.

>> Yeah, co-piloting together to a great place. Kind of similar: you answered a couple questions in the chat in the sense of, can you call an agent from another agent? Doesn't sound like it's possible right now; that is something that's coming in terms of agent orchestration and everything like that.

Again, thanks so much to everyone being in the chat. I wanted to walk through a demo that I actually built, and I'm curious to get your thoughts, Taylor, on what could be added to this. But then we can jump in and maybe build something from scratch as well. Thanks so much, everyone, for joining. If you're just joining, or maybe you've been here the last 10-15 minutes and haven't heard me do this spiel yet: we are with Taylor talking about the AI SDK that was just released last Thursday. So it's only been out for four days, and it's been crazy to see how many people are already building with it, how many, like Taylor said, how many new PRs have already been added. We're walking through the possibilities of it, as well as talking through why it was built in the first place. So, if you're here, let us know you're here. If you have any questions, we'll try to get to them as best as possible. But yeah, I wanted to show this little demo that I built. It's actually live right now.

I'm not going to tell everyone the URL, just because I don't want it to crash while we're on stream, but I'll share it at the end so that you can have it, as well as a GitHub repo, and I'll put it in the description if you're watching this on replay.

This is... I wanted to try to find a way to build using almost as much of the AI SDK as possible. And the premise for this was: I always save a lot of links, and I always forget what those links are. Bookmarks have never worked for me. Even most AI tools have never worked for me, because they only save the link and maybe the meta description; they don't necessarily save any of my thoughts on it or anything like that. And so I wanted

[clears throat] to build a thing using the AI SDK that either uh takes a file that I upload, maybe a screenshot, a

document, or a URL, and then have it upload to this vector storage. So we're actually using the vector storage part of the AI SDK to auto-chunk and embed that. And this is one cool thing that I actually didn't know. I'm curious to get your thoughts on: what's the difference between storing something using OpenAI's vector storage versus putting this in Postgres with pgvector?

>> Yeah.

>> Yeah. This kind of came up at Laracon India. I mean, the end goal is really similar, I would say, in the sense that we want to store embeddings about things and then search them or query them. What is cool... I mean, I guess let's start with the pgvector stuff. So, in the AI SDK docs, we show you how to take a string of text and generate embeddings for it, and then you can put that in a vector column in a Postgres database, as long as it has the pgvector extension, which most local Postgres setups do, like the Herd version, and Laravel Cloud's Postgres has it.

>> And then you can run queries on that to find things that are similar to your query. So, to go back to our Mexican food example, which we both like: if I stored, you know, restaurants that have great cheese dip, and then I searched for good appetizers, those things are semantically similar. Even though they don't actually have the same words contained within them, they are similar ideas. And that's the purpose of semantic search: to be able to pull back things that are related. So you can actually

store those embeddings in your own Postgres database, or you can actually upload files or data to OpenAI or Gemini and they will vectorize and store them for you. So one of the convenient things about storing them on OpenAI or Gemini is you can just hand them a PDF. They will extract all of the text out of the PDF and vectorize it for you, all of that. Doing that all locally, you're going to have to pull the text out of the PDF, you're

going to have to vectorize it. Um,

you're going to have to store it. Um, so

it is kind of convenient to just be able to throw files at OpenAI and Gemini and say, "Hey, vectorize these for me. I don't even want to have to think about it."

>> Yeah.

>> Is this, you know, it's two different ways of doing similar things.
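Either way, "semantically similar" comes down to comparing embedding vectors: texts with related meanings produce vectors pointing in nearly the same direction. A toy sketch of that comparison (real embeddings have hundreds or thousands of dimensions, not three):

```php
<?php

// Cosine similarity between two embedding vectors: 1.0 means the same
// direction (very similar meaning), near 0.0 means unrelated. The two
// arrays must have the same length.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}
```

pgvector exposes the complementary cosine *distance* as its `<=>` operator, so the same comparison can run inside a SQL `ORDER BY` instead of in PHP.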

>> Yeah. Yeah. No, I love that. And so, yeah, this works in the sense of... there's a couple cool things. What I really love is how easy it is to take an attachment, let's say an image, and grab specific things out of it using an AI agent, to say: okay, I have this image now. I save a lot of screenshots, either on my phone or just grabbing things, and usually I have a directory on my device. I

wanted a place to be able to save all of those, but then also be able to query them in later things. So again, the goal of this is, one, to kind of scratch my own itch and build with the AI SDK, but then, two, to be able to show off as much of the AI SDK as possible. So I want to grab... we'll go to Laravel News. I want to just find a quick little link that I might be able to save. So let's see here: "OpenAI releases GPT 5.3 Codex." I'm going to grab this link and go back to my URL here. And if I just paste this in, I can add a specific comment about this URL, or I can just capture this. And this is one of those

async things that's happening in the background, where I can still, you know, go to different links, but it's still being stored and retrieved in the background. It looks like, because it's a JavaScript front end, this failed. So here we're going to upload a screenshot instead.

I'm going to go back to that link. I'm just going to upload... we'll zoom out here. And I'm just going to paste a quick screenshot of this. There we go. We'll go back here, paste this in, and upload and analyze. So

what it should be doing within the AI SDK is it's going to process using vision, to say: hey, let's describe what is actually happening in this image. And not even just transcribing it; the cool thing about that vision piece is you get additional aspects. You can send a specific prompt in with vision to say: okay, I actually want these particular things. If we're building a research library, I want to know what's pertinent to this. You can see here it says it's a detailed description of the image:

screenshots, a top header, a large rectangular card with rounded corners, and a soft drop shadow. All the neat things that you might want from an image.

It's a lot more than I actually thought it would be from just one simple image. But the neat part with this is kind of everything that's happening in the background. A couple people mentioned: what would you recommend for front-end things? Some people said polling versus broadcasting. What are your thoughts, Taylor? And I'll say what I did in this particular demo.

>> Oh man. Um, there is something...

>> Which would be a good answer to the question, I guess.

>> There is something just really nice, for simple stuff, about throwing a wire:poll on a div, and it just works. [laughter] So if you don't have tons of users, or maybe it's just a tool for yourself, even, where you're the only user, which I think in the age of AI we're seeing more of, sort of personal software, tools people are creating for themselves that are really intended just for them, I think throwing in some polling, especially with Livewire, which makes it so easy with wire:poll... I guess Inertia has polling as well now, since Inertia 2. I think that's nice. But if you're in more of a production use case, let's say Laravel Forge or Laravel Cloud's dashboard, where there's hundreds of users in there at a time, polling can be a little bit inefficient, since you're continually hitting that back end whether there's data ready or not, you know. Broadcasting gives you more of a push mechanism, where you're just pushing data as needed. So it's a lot more efficient.
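The trade-off Taylor describes is easy to see in Livewire, where polling is a single attribute. A minimal sketch (the interval and contents are illustrative):

```blade
{{-- Simple but chatty: re-renders this component every 5 seconds,
     hitting the server even when no new data exists. --}}
<div wire:poll.5s>
    ...
</div>
```

Broadcasting inverts this: the server pushes an event over a websocket only when something actually happens, so idle clients cost almost nothing.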

>> Yeah. Yeah. In this particular demo, I did use broadcasting, with Reverb and then the useEcho hooks to watch for that. So that way, when we're grabbing something from the web or we're uploading a screenshot, for example, if I go in here and I just say something like... I don't know, we'll just grab this from the AI SDK, and if I go in and actually paste it, it's happening behind the scenes. Yes. But once it finishes, you'll see that happen live. So this is one of the things that, of course, like Taylor said, you could use with wire:poll, where it's just constantly pinging the server every 5 seconds or whatever, and it will update automatically. But specifically for this demo, it's using the vector storage to store all this once it does read it from the vision, and then we're getting structured output, so that when we chat with it... and you can see that kind of popped up right there. Now we can chat and have all of those things. And

if I pop over to this chat, this is where we have a couple additional features that we kind of already walked through with the AI SDK. We have

the conversation features, to say, hey, it's remembering those particular agents. But then we also have streaming set up, as well as file search and web search. These are the two tools that, for me personally, were incredibly easy, and I loved having them, to not have to... I've built a couple AI tools where I'm relying on maybe, you know, web search within PHP, for example, instead of giving AIs the tools to be able to use it themselves.

>> Yeah. I'm curious: we kind of talked about the tools, but what were your thoughts on those three tools, the file search, web search, and... is it image search?

>> Uh, file search, web search, and web fetch are three built-ins. Well, I guess there's also a similarity search, which we haven't really got into. But yeah, I mean, I wanted to give people, you know, I

think these are just incredibly common use cases. Web search and web fetch, of course, are used to sort of overcome the training knowledge cutoff for the LLM, so that we can actually go out and fetch relevant data. I actually use this type of functionality in an LLM a lot when I'm maintaining the Laravel docs, or when I'm writing the docs for a new framework feature. So, one thing I will do is go into the docs, fire up Claude Code and say, "Hey, can you write docs for this PR?" And I will paste

in the PR URL on GitHub. And it works like incredibly well actually. So it's

super convenient to be able to give the AI the ability to fetch web pages in a variety of scenarios.

>> Yeah. No, I love that. I love that. I'm

just going to ask a question about... let's see, maybe we'll say: what was that latest model from OpenAI? We should see it search the specific things that it already has pertaining to my local user, and we should see those tool calls actually be made here. So it's searching the knowledge base, and then it found, you know, "the OpenAI model mentioned is GPT 5.3 Codex" in our saved notes. So this is a use case of the particular conversation API that we kind of talked about, in the sense of it's remembering the things, if I was to ask without having that context of something like...

>> And you even had the tool call thing show up, too.

That's pretty fancy.

>> That was a fun... I didn't know that was possible, because I've only ever worked with chat where it was just returning text. So when it returns... and I could be wrong or misremembering, but I think when you return the structured output, OpenAI specifically, which is the provider I'm using, indicates when it's calling particular tools as JSON. So it's really easy to parse that within the stream. Is that correct?

>> Yeah. When you stream stuff, you get sort of like tool call start, tool call end. You get these various events for when tools are being used.
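Consuming a stream like that typically means reducing a sequence of typed events into final text plus tool-call metadata. A self-contained sketch (the event names and shapes here are invented for illustration; they are not the SDK's actual event types):

```php
<?php

// Reduce a stream of events into the assembled text and the list of
// tools that were invoked along the way.
function collectStream(iterable $events): array
{
    $text = '';
    $toolCalls = [];

    foreach ($events as $event) {
        match ($event['type']) {
            'text' => $text .= $event['delta'],
            'tool_call_start' => $toolCalls[] = $event['name'],
            default => null, // ignore tool_call_end and anything else
        };
    }

    return ['text' => $text, 'tool_calls' => $toolCalls];
}
```

This is the same reducer shape a front end would apply to render "searching the knowledge base..." indicators as tool-call events arrive mid-stream.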

>> Yeah. I'm curious. I actually haven't tested if I can then search on the web for what link that is associated to. I think I marked this down so it doesn't actually go out and call. I'm just curious now [laughter] if that does it. Uh, a couple questions. Some

people said... oh, it says, there it is: using a web search tool, searching.

>> Okay. A blog post. We found something. Yeah.

>> Yeah. We'll call that good. An OpenAI blog post, which is probably what Laravel News is referencing. So,

we're good there.

>> Uh, some people said I made it easy to guess the URL. Yeah, I'm sorry. I did not scale; I think I'm using a very small Laravel Cloud server for this. So if it crashes, that's not Laravel Cloud's fault. That's my fault for not scaling it up.

>> What did you use to build this, Josh? Is this a Claude Code special, or is this Cursor? What is this? Is this all handwritten?

>> This is mostly Claude Code, with some handwritten stuff for the chat. I did figure out quickly what stuff Claude Code doesn't seem to do a great job with, like the useStream stuff. It took me a while to figure out that it returns the JSON structure for streaming, and being able to kind of push that back and forth. Whenever Claude Code goes out and tries to build five different files, and I'm like, no, this should probably just be one option in the Laravel AI SDK, because I know it's a lot cleaner than that, that's when I start having some second guesses. So most of this was built within Claude Code, with me referencing by hand each particular AI capability and being able to prompt it, saying, hey, I know that the AI SDK can do this, let's make it do that.

>> Right.

>> Someone said: in an AI-first future, do you see Laravel becoming an orchestrator of AI agents rather than just an HTTP framework? I'm curious about your thoughts on that.

>> Yeah, I mean, I could definitely see that. If there's one thing I've learned in like 15 years of maintaining Laravel, it's that people will use it for all kinds

of things I didn't expect. So,

[laughter] Um, yeah, you know, I think people building AI agents and AI features is going to be the thing for a long time, for the foreseeable future. So I think it's good for Laravel to have this good first-party option there.

>> I love it. Why don't I fire up a Claude Code instance, and we can create a Laravel application from scratch. And maybe, Taylor, I'll have you talk through it with me, how you might help start from scratch, and it'd be curious to see your thoughts on what are the things you're looking for to make sure everything is working, what kind of revisions you look at. The idea that I have, and we can maybe kickstart this a little bit more... mostly because I have a fallback to show what I built right before this, just in case things don't work out. The idea I have is: we maybe upload some markdown files about us, or about the user as an individual, and then we can use the ElevenLabs API as a voice to be able to talk to it, transcribe our voice, get an answer, and then have it talk back to us.

>> Okay. Yeah, that sounds good.

>> Um, so let me go ahead and share my screen again real quick. So, for those still in the chat with us: what's your favorite drink of choice while you're coding? I know Taylor just took a swig of a Coke Zero. So who else is a Coke Zero fan, or what else is your particular drink of choice?

How many Coke Zeros have you consumed so far today, Taylor?

>> Uh, I think like three. I drink a lot of Coke Zeros.

>> I saw some research on it. Everyone talks about, oh, you know, Diet Cokes are still bad for you and Coke Zeros are still bad for you and all this. But then some research said you had to have, I think, somewhere around 40 Coke Zeros before you even get to the required limit, which is like a thousand percent lower than what the actual limit is, or something like that.

>> Yeah. Yeah. It's pretty wild.

>> Taylor, what IDE do you use? I'll say that before we jump in.

>> I'm still using a lot of Sublime Text. You know, I'm kind of interested to see where the IDE space goes as a concept. You know, I think you even see this with Cursor, right, with them kind of leaning more into their agent mode as the default view, even for booting up the IDE.

And, you know, previously, when we were handwriting so much code... I remember when I was first starting in my career, using Visual Studio to write .NET, and the IDE was just so incredibly important in terms of the intelligence it gave you and the shortcuts, and it was kind of built for handwriting code productively, right?

>> Yeah. And now, as we move into a future where handwriting code to some extent starts to feel painful, just as a concept, like why would you handwrite all of the code, I'm curious to see how the IDE arena sort of evolves, with, you know, VS Code and PhpStorm. And Sublime Text is almost more of an editor than an IDE, but I'm still using Sublime Text paired with Claude Code and OpenCode.

>> I love it. I'm similar, not with Sublime, but I'm using the Zed editor paired with Claude Code and OpenCode, mostly because, when I'm writing handwritten stuff, I don't want even autocomplete. Well, maybe a little bit of autocomplete I'm okay with.

>> Yeah. Uh mostly

>> I'm kind of the same way.

>> Yeah.

>> I find Zed and Sublime... Zed is the closest thing to Sublime that I've ever used that's not Sublime, you know?

I find them so similar.

>> I never used Sublime. I think I did when I was first writing HTML files, but I didn't learn it enough for it to stick. So I was using VS Code for the longest time.

>> People in the chat: water. We got Coke Zero. A lot of water. [laughter] Water for code. Pepsi Zero. "I'm a villain."

>> Yeah. [laughter] Sometimes I'm a Pepsi or Pepsi Zero guy.

>> Uh, I don't know if I've even had Diet Pepsi. I think I've only had Pepsi Zero, but sometimes you're just in a Pepsi establishment, right? And you just don't have a choice, because it's a Pepsi household. So I've definitely had it. [laughter]

>> That is true. I do like Dr. Pepper Zero. That is like...

>> Yeah, that is good. I agree with that.

That's good.

>> That is one of my specialties.

Um, we got Tim Hortons steeped tea. Some lemon iced tea. I'm thrilled at how many Coke Zeros there are in the chat. I love that.

>> Yeah. Uh,

>> And I actually went to the store this morning after the kids went to school, because I was out of Coke Zero. And so I pull into the parking spot, and as I'm getting out, there's this lady loading her car next to me, and she is loading that car with Coke Zero. And I'm like, "Oh man, this is one of my people," right? [laughter]

>> Full size?

>> Yeah, they were actually bottles, like big plastic bottles of Coke Zero. I almost said something, but I figured she'd just be weirded out.

>> "Why do you care about my Coke Zero?" I'm curious if you've ever had the thought of putting Coke Zero on tap, maybe getting a whole bunch of two-liters and doing that, or cans all the way?

>> I mean, I like the can, but some places have a really good tap. You know, to me, McDonald's has some kind of magic tap where the soda just comes out so much better than it does other places. But...

>> Oh yeah.

>> Yeah. But to me, actually, the peak is glass-bottle Coke Zero, which is not easy to find, to be honest. But they do have it, and I've found it occasionally, more in Europe actually than in the United States. But...

>> A glass-bottle Coke Zero, that really hits different.

>> Maybe we'll get this to happen at this year's Laracon. By the way, Laracon US tickets are live right now. We're going to Boston this year: Laracon US. But I pitched this at last Laracon. Maybe we'll get it to happen at this one. I want to do... have you ever done those blind tasting things? But we'll do Taylor.

>> That'd be good.

>> Blind tasting Coke Zero: glass, can, McDonald's Coke Zero, or McDonald's Diet Coke, and have you figure it out.

>> Yeah, that'd be fun actually.

>> So, now that it's in the atmosphere, we'll put it out there.

>> Um, let's do a Laravel new.

>> Okay. I'm gonna call this the Larvis agent, like Laravel Jarvis.

>> Okay.

>> Um, and we'll go ahead and make this React, just for simplicity.

We'll go no authentication scaffolding, uh, Pest. How do you like to do Laravel new, Taylor? Like, what's your... dash dash everything, or...

>> I was just thinking we need a laravel new --taylor that's just all my own opinions.

>> Ooh, that'd be awesome.

>> So, usually... oh man, this is a good one. I mean, it kind of depends on what I'm doing. I do like the blank starter kits, like you picked here, with just no authentication scaffolding, for stuff like this. And then, you know, I usually just use the defaults, like Pest or whatever.

>> Yeah. I didn't know if you were a --react and then dash... because you can put all of that into one command.

>> Yeah, I don't really do that. I never do that.

>> Yeah. Do you do the empty prompt then, in terms of the Laravel new, and not even any name after?

>> I usually do specify a name, so I'll do like laravel, space, new, space, something. I actually forgot that you can leave off the app name, I guess, and it will just ask you. Okay, I kind of forgot about that. I usually provide the app name.

>> Yeah, I provide the app name as well. One of the things I've learned when I'm scaffolding something out with LLMs like Claude Code, for example, is that the blank starter kit with no dashboard and no authentication is almost easier, because otherwise it tries to confine the UI into the dashboard sometimes, and it looks a little off. Then I usually end up manually cutting that out and cleaning it up. So I like the blank side. Um, we will, um,

>> I'm not going to use the Herd MCP. I'm going to use Claude Code. Um, I'm curious, Taylor, let's say you're scaffolding something out with an LLM and you're not jumping too much into the code until you get to a certain point, whatever that might look like. Um, do you install packages first? Do you do any kind of php artisan make:model creates? What's your way of going about this?

>> Uh, yeah, that's a good question. Sometimes I will put a little bit of scaffolding in place

first. And I did this as well when I was building the AI SDK, where [clears throat] I would actually scaffold out a few classes, sometimes even empty classes, like, hey, here's the structure of how I want things to look. Um, so maybe I define an interface or a few classes just to put some rails in place, you know, as far as, okay, here's the shape of what we want this app to look like, and then let the model, you know, paint between the lines from there.

>> Yeah, I kind of go back and forth myself. Most of the time I install all the packages that I want and set them up first, and then usually any tables or migrations that I know I specifically want, and then I jump in. That way it feels like I am setting the course, for example.

>> Right.

>> Um, let's go ahead and install the Laravel AI SDK [snorts], and then, uh, I believe we publish, uh, um, I always forget the

>> Publish, and then we can search for it.

>> Yep.

>> Tags. Yeah.

>> Yeah. Service provider.

>> Oh.

>> Oh, yeah. Yeah, that works. There you go.

>> That's just the... Yeah. So, that should be all we... Oh, we actually need to publish the, uh... Let's go into the docs. We actually need to publish the, uh,

>> the migrations.

>> Yeah.

>> Yeah.

>> So, we'll go ahead and do that as well.

Perfect.
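For reference, the install-publish-migrate sequence just performed might look something like the following. The exact package name and publish tag here are assumptions; check the Laravel AI SDK docs for the real commands:

```shell
# Assumed package name for the Laravel AI SDK -- verify against the docs
composer require laravel/ai

# Publish the package's assets; the tag name is a guess, and you can also
# run vendor:publish with no tag and pick from the interactive list
php artisan vendor:publish --tag=ai-migrations

# Run the newly published migrations
php artisan migrate
```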

I'm going to hide my screen and input, um, some environment variables, because that's usually what I'll do next. I'm going to delete them anyway right after this, but just in case, so I don't pull those up anywhere.

Um, so we're going to use ElevenLabs, and we're going to use, uh, probably OpenAI, because it makes it a little bit simpler.

Going to copy this.

Perfect.

All right.

And so, Taylor, you said that you had used, uh, Claude Code mostly, and like opencode. Do you do, uh, the dangerously skip permissions all the time, or do you watch?

>> Yeah, pretty much. Um, no, I pretty much do this, to be honest.

>> Yeah. Uh, here's a quick tip that I actually didn't know until the last couple of weeks, for those who do use Claude Code. Um, if you're in bypass or dangerously skip permissions mode, you can still do Shift+Tab to go into plan mode, for example, um, without going back, and then you can go back into bypass permissions. I do that mostly because I always like to do plan mode, um, just to get things up and running.

>> Yeah, I do too. Especially for bigger features.

>> So, I'm going to do, uh, php artisan migrate, and we'll make sure I'm on the right one. Um, perfect. And so, in Claude Code, we are going to build an

application where we can upload or uh have some local markdown files

and, using the Laravel AI SDK, we are going to take those markdown files and have, uh, an assistant be able to answer questions using voice mode.

So when we talk to the assistant in an extremely simple UI, uh, it can transcribe our voice with voice-to-text from the AI SDK, pass that to an agent to generate an answer based on our markdown files, and then generate a spoken response using the voice features of the AI SDK.

So I'm curious...

>> What was the difference between the Larvis directory and the Larvis agent directory?

>> Um oh are we in Larvis? Oh

>> yeah, we're in Larvis.

>> That's... this is the one that I already did. [laughter]

>> Yeah, cancel that.

>> We'll cancel that one out. Uh, we'll go to the Larvis agent directory. There we go. Thank you.

Uh, I already did this mostly because I wanted something to show at the end if we didn't have enough time or anything.

Um, I'm curious, then, for those in the chat who are using LLMs: what are one or two things that you do, or that you've found helpful, when building in Laravel? Something you always like to do even if you're having AI do most of the work. Uh, Taylor, when it comes to reviewing the work or testing it out, what's the first thing you go to double-check in the code after you run a prompt, even plan mode?

>> I kind of pull up, um, the whole diff in GitHub Desktop, actually.

>> Okay.

>> And that way it's just this really easy visualization of every file that changed, and I can really quickly scan through them and see what's going on. Um, and then I will usually clean up a variety of things. It might be really nitpicky things like method order, or where it put a method in the class. I kind of organize methods logically, or in a certain way that I like. Um, I haven't really reliably been able to automate that with AI guidelines or anything. And then, um, I'll just kind of continue the conversation with the AI. You know, maybe sometimes it duplicated something, and I was like, "Hey, you could actually have a more general abstraction around this, and make this file and extract this here and then use it both places." I'll kind of coach it along. Um,

or try different things. But, um, yeah, that's kind of how I approach it.

I don't use a lot of, let's call them fancy AI workflows. I usually have one or two Claude Code tabs open.

>> I'm working on maybe one or two things at most at a time. You know, I haven't really dug too far into, like, ten different Claude Codes churning on parallelized tasks on the same codebase without them running into each other. I haven't really found a need for that. But, um, you know, I'm a pretty simple guy, I guess, when it comes to AI usage and how I use it.

>> Awesome. Um, so have you jumped into the OpenClaw stuff yet? For those who don't know, you can run OpenClaw on Forge right now. It's super simple to get set up. Have you jumped into it, Taylor?

>> No, I've only scratched the surface of it. You know, I have got an OpenClaw up and running when I was playing with the Forge stuff. I can't say that I'm super deep into it, like automating my entire life, but I do

love the concept. I actually wrote my own Telegram chatbot for AI, um, a year or two ago, just because I find Telegram or WhatsApp a super convenient interface for talking to AI on the go. Uh, mine was just a simple chatbot; it had none of the other functionality that OpenClaw has. But I think it's a really cool project. You know, I'm curious how it evolves and where it goes from here. But it's pretty crazy how it's, you know, got everyone buying Mac minis all over the place.

Awesome. Uh, we're doing some plan mode. So, where should the markdown files live? Let's go resources/docs. I'm curious, Taylor, to get your thoughts: if you're doing anything locally, would you do storage/app? Seems like AI loves storage/app. I don't know what that's about.

>> So, are these files that the user is going to upload?

>> Uh, maybe either user, or if this was just meant to be deployed locally, let's say.

>> Okay. If it's just going to be deployed locally, I think I would go resources/docs. I think that makes sense.

>> Uh, direct attachments. We'll probably do that instead of a file search.

>> Let's do that. Um, and then for text-to-speech, let's do ElevenLabs, and we'll do a single session. So, super simple, super generic. Um, I forgot to publish the, um, skills for the AI SDK. So, before it jumps into actually implementing this, I'm going to cancel out and do that. We had some questions, um, in the chat. If you're

still here, thanks so much for being here. Taylor's been awesome to answer some questions, and now we're just having some fun, uh, building with it as well. But some great questions, uh, specifically for Taylor about AI: "I had a feeling Taylor didn't like AI much. What made him change his mind? What was the turning point?"

>> I, uh, never have actually been anti-AI or against AI. I think, um, you know, when coding agents and AI first got started, there was this big wave of vibe coding as a concept, right? And it sort of dominated the discourse for a chunk of 2025, this concept of vibe coding where you're not even reviewing the code.

>> And I think, um, for production work, let's say, or for real work that I'm going to give to customers, I don't relate to that way of coding as much. I think that's a fun way of coding for, you know, personal projects that are low stakes, where you actually don't care about the code quality. But what we've seen is that over time, as the models get better, the quality of the code has gotten so much better. And I'm actually curious if you've seen any...

>> I know Opus 4.6 has been out not even, I guess, a week yet. Um, are you seeing any big improvements from Opus 4.5, or has it not been long enough to really even judge? [laughter]

>> Yeah, I don't think I have seen too many big improvements, in the sense that I haven't used, uh, the new Codex

as much. Um, I think the biggest improvements that I've seen from both of those, just in my gut reaction and feel, have been more in the thinking through something before doing it. I

feel like I'm telling it "no, that was wrong, let's think about this, let's do it this way instead" much less.

>> Um, and I don't know if that's just because it's using skills better, using context better, or it's just smarter. It's always hard to tell in that aspect.

>> You know, I'm sort of [snorts] curious to see where AI coding goes. I've seen Dax and opencode say similar things: I almost feel like even if the models only got marginally better at writing code from where they are now, let's say they got 10% better, but they got 10x faster at actually doing it... The speed, for me, is what I crave even more than more intelligence at this point. You know what I mean? I find that for most tasks I'm asking them to do, they're intelligent enough to do them, because I'm not doing rocket science, right? I'm building web apps and pulling stuff from a database, and it's sort of CRUD interfaces. But if they could do it 10x faster, for me that would be an even bigger unlock. Like, here we've been waiting for six minutes, but what if it only took 15 seconds to do all of this? That would be incredible, you know, and hopefully we get there over the next few years.

>> I'm going to run boost:install again. So, when we scaffolded out with Laravel new, we installed Boost and ran it. But if we run it again, because we installed the Laravel AI SDK, it's going to ask us right here to add these third-party AI guidelines, which I'm going to enter.

Um, and because we're clearing context and bypassing permissions, it should use the skill now that it has access to it. Uh, but it might not at the same time. I'm always curious; I think we have to close out of it and then open it back up again for new skills. Some people in the chat probably know a little bit more than me in that particular aspect. Um, a question: when you were creating the AI SDK, what did you want? Did you want to try and create a competitor to LangGraph and that suite of tools? Or, I guess, your thoughts?

>> I sort of, um, approached things pretty fresh. I'm not really a heavy user of LangGraph or LangChain, you know, or anything like that. Um, and I've approached a lot of things in Laravel this way, for better or worse, to be honest, over the years, where I just come at the problem with no preset opinions or, uh, baggage, or even historical experience around it, and just look at it all fresh, and based on my own opinions and use cases, try to build something that I think is interesting. Um, and when I build a new package like the AI SDK, or even other features in Laravel, I try to build just a good foundation that I can feel confident about and release into the community for further iteration, you know? So, like,

in V1 of the AI SDK, I don't necessarily set out to solve every single conceivable problem around developing with AI. I try to give, okay, here's a good foundation and some scaffolding that we can start from as a community, have a conversation about, and then iterate from there over a period of months, and eventually years, um, to where things can evolve. But I try to put a foundation in place to start from.

>> Yeah. No, that's awesome. Um, you mentioned the fast code. Uh, someone in the chat said Kimi 2.5. Have you tried it?

>> It is fast. It is quite fast. Yeah.

>> I think it's a glimpse of, like, man, life-changing, obviously, if things get even faster.

>> Yeah. And I'm with you, though, in the sense that most of the time, unless I'm doing some kind of MVP, the initial kickstart of a project, or a big feature, I might be using something like Claude Code, um, with Opus 4.5 or 4.6 or something like that, or Codex. But most of the time, if I'm just wanting to nail out certain things, either I'm hand-coding it within Zed, just tweaking, you know, UI design or parts of the code, or I'm using something like Kimi 2.5 or something very quick, because I like the feel of it being: I'm asking questions about my codebase, I want it now, kind of thing.

>> Yeah.

>> Um, so you use the GitHub Desktop UI. Is that new, or have you always been using that?

>> I really only use it for a few things.

So, I don't use it for committing or checking out branches or anything like that; I'll just use the CLI. I use it for, um, reviewing history in the repository. So, like, I want to see a list of the previous commits and what files they changed; I find it just a nice UI for doing that. Or if I want to quickly look at the staged changes and then revert a few things, it's really nice to be able to right-click a file and revert changes. I don't have to remember how to do this on the command line.

>> Um, and then, again, just for quickly reviewing things visually that the AI does. So I almost use it more as a diff viewer, a review tool, than for, you know, branch switching or committing or anything like that.

>> Yeah.

>> Um, for me it's just a really quick review tool.

>> Yeah. Uh, and for those of you just joining, we're using Claude Code to help us generate an application using the AI SDK. So, we'll jump into the code that it's generated to make sure, one, that it's doing it the way we would want it to be done, and two, that it works the way we'd want it to, too. The neat thing with the AI SDK is that, since it's built, you know, by Taylor in an eloquent way, it's designed so that whether we're using it for our chat context within OpenAI or the ElevenLabs voice context, it's the same, you know, the same API, the same mental model when we're writing with it.

>> Uh, it's almost done. Got its tests passing, at least.

I like how the AI, and I see this a lot when I use AI too, it's like, "let me run Pint one more time to be safe." You know? [laughter]

>> Looks like it just finished. Some of my personal projects that I use, I have set up with Rector, uh, PHPStan, and Pint. And so it's fun to have it make sure it does all of that. Uh, is there anything, Taylor, that you usually feel it still doesn't quite get, one or two things you check in every single thing that you build?

>> Oh, that's a good question. I don't know. Um, it can differ week to week, too.

>> Yeah, it can change.

>> Nothing huge jumps out at me. Um, I try to coach it along with my guidelines as much as I can.

>> So, let's go. We have a knowledge assistant it created. Looks like it's using

>> even attributes.

>> Nice.

>> Yeah, I love that.
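For readers following along, an attribute-configured agent class like the one on screen might look roughly like this. The namespace, attribute, and method names below are illustrative guesses, not the SDK's confirmed API:

```php
<?php

namespace App\Agents;

// Hypothetical sketch of an agent class configured via PHP attributes.
// The attribute class and its usage here are assumptions for illustration;
// consult the Laravel AI SDK docs for the real names.

use Laravel\Ai\Attributes\UseOpenAi; // assumed attribute name

#[UseOpenAi] // assumed: pins the agent to the OpenAI provider
class KnowledgeAssistant
{
    // The system prompt read aloud on the stream, as the agent's instructions.
    public function instructions(): string
    {
        return 'You are a friendly, knowledgeable assistant. If the documents '
            .'do not contain enough information to answer the question, say so '
            .'honestly. Keep your answers concise and conversational.';
    }
}
```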

>> Uh, so: "a friendly, knowledgeable assistant," um, and, uh, "if the documents do not contain enough information to answer the question, say so honestly. Keep your answers concise and conversational." So,

just so everyone in the chat knows, we're building, um, an app where, if we go to resources/docs, we can create a markdown file in here, and maybe even ask Claude Code to generate some for us. Um, I might even do that. Uh, just for fun, I'll open up a new Claude Code and I'll say: we have this app that uses markdown files about the user in order to answer questions.

Uh, find some info on Taylor Otwell and generate relevant markdown files in the resources/docs directory. Kick that off. Uh, but the goal is to be able to take all these markdown documents and use them with this knowledge assistant, so we should go into, I believe, a controller. Yeah, we have this AskController here that's using, looks like, Laravel AI transcription, files, document, audio. Um, so I would assume, hopefully, it's using it in the proper sense: we're getting an upload of audio, we're going to transcribe it, um, get all the markdown docs in the resources/docs directory, and then we're going to create a new instance of the knowledge assistant, and with that prompt attach the attachments that we have in the markdown files, get text back, and then, this is so clean, I love this, Taylor, how easy it is to say I just want an audio of this answer. Um,

>> And so, is ElevenLabs correct? I don't...

>> Yeah, that's correct.

>> Okay.

>> Yeah, that is correct.
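Putting the pieces together, the flow being reviewed, transcribe the uploaded audio, prompt the agent with the markdown docs, then synthesize the answer as speech, can be sketched like this. The facade and method names are assumptions, not the SDK's confirmed API:

```php
<?php

// Hypothetical sketch of the "ask" endpoint walked through on stream.
// AI::transcribe(), ->prompt(), and AI::speech() are assumed names;
// consult the Laravel AI SDK docs for the actual calls.

use App\Agents\KnowledgeAssistant;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\File;
use Illuminate\Support\Facades\Route;

Route::post('/ask', function (Request $request) {
    // 1. Speech-to-text: transcribe the uploaded voice recording.
    $question = AI::transcribe($request->file('audio'));

    // 2. Gather the markdown knowledge base from resources/docs.
    $docs = collect(File::files(resource_path('docs')))
        ->map(fn ($file) => $file->getContents())
        ->implode("\n\n---\n\n");

    // 3. Prompt the agent, injecting the docs directly into the prompt
    //    (the fallback used on stream when markdown attachments were rejected).
    $answer = (new KnowledgeAssistant)
        ->prompt("Documents:\n{$docs}\n\nQuestion: {$question}");

    // 4. Text-to-speech: turn the text answer into audio (e.g. via ElevenLabs).
    $audio = AI::speech($answer->text);

    return response($audio)->header('Content-Type', 'audio/mpeg');
});
```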

>> Sweet. Awesome. Uh, the one thing that I've noticed, and I don't know if this is just me not prompting well, but specifically within the React context within Laravel: for some reason, when it does stuff within a controller that's hitting our API, not an Inertia useForm request, because you wouldn't use that for chat, for example, it always likes to return it as an Inertia page, and so you get this JSON resource. Um, so I usually try to tell it to use Axios or something like that, and then it seems to do well. Are you still an Axios user?

>> I do use, uh, Axios quite a bit, but this is actually something that's coming up in Inertia 3, which, uh, I don't want to speak for when it's coming out, I think it's coming out pretty soon, let's say in the next month or so. Um, there's going to be a useHttp thing, so you won't really have to use Axios anymore. Um, I don't even think we're including Axios by default anymore, which we do now in many different things. So, yeah, we're kind of dropping that and going a little lighter weight, but still giving people some of the affordances of Axios.

>> Yeah, I love that. I think Joe Tannenbaum is going to talk about that at his Laracon Europe talk. Actually, he's going to give kind of an Inertia 3 preview or demo.

>> Perfect. Um, so, yeah, stay tuned. Uh, I don't know off the top of my head if Laracon Europe talks are being streamed, but they're probably going to be recorded at the very least. So be sure to stay tuned for that. Uh, question for Taylor: what's the recommended approach to implementing production-grade guardrails? So, within the AI SDK, what are some things you can think of in terms of prompt injection, misuse of the LLM, API keys, that kind of thing?

>> Yeah, this is something we haven't really built any opinions into thus far. Um, I'm curious to see what PRs come in, or what the community feedback is around this. You know, certain types of apps are more vulnerable to this than others, depending on what you're doing and what tools you're exposing. If you have tools that can, for example, delete records out of your database, that's of course much more potentially serious in terms of prompt injection than some other use cases. So, um, yeah, we haven't baked anything in on this yet, but I'm curious to see what people want, and what people have found most effective in other ecosystems that maybe we could learn from or borrow from.

>> Yeah, perfect. I love it. Um, can the AI SDK be extended or improved with plugins or add-ons?

>> Yeah, so I think one of the cool areas where we'll see community extension and collaboration is that you could actually package up and distribute tools, or agents and tools, as Composer packages to give to other people. So maybe you write a tool that is reusable and does some kind of functionality that a lot of people would benefit from in different kinds of applications; you could package that up and distribute it on Packagist as a tool that other people can use. Um, so I think that's one of the primary ways you'll see people have plugins or so-called add-ons for the AI SDK: distributing those types of things throughout the community.

>> That's awesome. I'm curious to know how much of this it got right on you, in terms of biography, philosophy, and projects, from Claude Code. So we have biography, um,

>> uh, this all looks correct so far.

>> And we just asked Claude Code to generate all this for us. So I'm assuming it's pulling just internet-accessible things.

>> Um, "company grew from around eight employees, started '24." Yeah.

>> Yeah, that all looks correct.

>> Simplicity and elegance warned against classic code.

>> Yeah, I think backwards compatibility has been something that's been more important in more recent years of Laravel. I think people would question that in the early years of Laravel [laughter], but that's probably how most projects go. In the early years of Laravel, you know, we were moving fast and breaking things. But lately I try to make it so most people can upgrade, you know, without any changes.

>> And it seems like backwards compatibility is even bigger now for AI, because the models are trained on things that happened a year ago. The models are getting smarter and more up to date, of course, but if they're using code patterns from a year ago, those still need to work.

>> Correct. Yeah. It is more important not to break things in the AI world. I agree.

Some projects.

>> Yeah, cool.

>> Okay. Well, I'm curious.

>> Feels decent. Yeah.

>> Yeah. I'm curious to see how this works.

Let's go ahead and composer run dev. That is my preferred way of getting things up. Are you still the same way, or php artisan serve, or...

>> I use, uh, either this or, uh, Herd.

>> Okay.

Open this up. Okay, we have Larvis. Um, yeah, it's very minimal. Let's go ahead. I don't know if my sound is going to work. Let's see.

Can you tell me a little bit more about uh Taylor's way of thinking through things when he builds a project?

Oh, "audio must be webm or wav." I actually ran across this in my initial build, because whatever the AI did initially, I think it tried to save it as webm, and by default that's classified as video instead of audio.

>> Oh wow. Okay.

>> Um, let's go and see... we have this error. I think it's because when we initially try to save the audio file, it saves it in a video format of it, question mark.

[snorts] Um, back to the chat: uh, if people want to contribute to the Laravel AI SDK, are there any specific tasks or areas you would like the community to help out with?

>> Oh, that is a good one. I mean, that's a great question. Some of the recent PRs have been around adding different providers. So, someone contributed, uh, DeepSeek, someone contributed, uh, Mistral.

This is also a tricky one, because for some of the more complicated features, like, um, some of the conversation stuff I want to improve, I've got stuff in my head, and sometimes it's hard if PRs are coming in and conflicting with that.

>> Yeah.

>> Um, you know, so let me think. I think that agent chat Artisan command thing is maybe a good one the community could riff on a little bit, and then people have been adding providers. Um, human tool approval, I mean, if someone wants to take a stab at that, go for it. I haven't started on it. That might be a good one someone could work on, but it's a bit of a meaty one. You know, it's probably not trivial.

>> Awesome. Uh, so, yeah, Taylor had mentioned a little bit earlier two things: chatting with agents that you generate, maybe with a php artisan agent:chat command, um, and then human tool approval. Uh, correct me if I'm wrong, Taylor, but you mentioned it would be nice to be able to say, okay, there are specific tool calls, but before they actually run, you have some form of human interaction to say, hey, this is good to go, let's run it, that kind of thing.

>> Right. Yeah, exactly. Those would be great, um, opportunities.

>> Uh, someone mentioned token usage. That'd be interesting. I don't know if there's anything currently built in?

>> You can get the token usage back from a given prompt, or a given generation, but, um, we're going to be building a lot of token usage tracking stuff into Nightwatch. So,

>> interesting.

>> You know, people can of course roll their own, because you do have access to that, but we'll also be providing that in Nightwatch.

All right, let's go ahead and refresh. See if it does anything. Now,

we'll say, uh, can you tell me a little bit about how Taylor would think through creating a new package for Laravel?

Uh, upload is not...

>> "Please try PDF."

>> Oh, you can't upload markdown files.

>> Markdown files. I would assume, in our agent for the chat, we're just collecting the documents as markdown.

>> It should just, like, inject them into the prompt, uh, you know what I mean?

>> Yeah. Because we get this error: instead of uploading the markdown files, let's inject them directly into the prompt.

>> Yeah, it should be like that.

>> Yeah, we'll see if that works.

>> I actually noticed that as well, that, uh, OpenAI doesn't accept a whole lot of file formats on these attachments. Um, I was surprised. I don't think you can upload a CSV; it's basically PDF or image or something.

>> Yeah. I guess, maybe, not to comment on OpenAI's business plan or anything like that, but are they trying to move everything into vector stores? Like, you do that, we'll do that?

>> That's very interesting.

>> Um, it looks like it is making the correct change, though. I saw where it was.

>> Yeah, so we're doing the docs with the question in the prompt. So it's a bigger prompt, but it'll probably be a little bit longer of a wait before we get an audio response, since we're generating the answer and pushing it to ElevenLabs. But it should still work. Uh,

I'm curious, and that might be a good one. Maybe if you're listening to this and you love ElevenLabs: I was actually going to take a stab at a PR, or I don't know if you've even tried it, Taylor,

like the ability to... I think they have voice agents where, instead of having to transcribe and then get an answer and then put it back, I think

they have the ability all wrapped up into one, where you could upload a voice and it >> with a prompt or something like that >> or a system prompt.

>> Um, which would make this a lot faster, at least for this demo.

>> One more time. Let's try this out.

Uh, if I was Taylor, how would I think about building a new uh package for Laravel?

>> That's great. "When building a new package, Taylor's [laughter] development philosophy focuses on simplicity and elegance, ensuring that the package is intuitive."

>> Okay, >> it is working. But

>> Sick reverb. [laughter] Doesn't look like it was, uh, sharing my audio.

Um but yeah, that worked. We have it. Uh

I'm assuming we could uh speak. Uh,

looks like we have a little bug with the speaking thing. Yeah. Oh, there we go. Maybe

clicking it a couple times added it. I assume, I think, because we're

still saving it as, uh, memory in terms of like the, um, ask controller. Do we have a...

Okay. It probably is just a one-off. It

probably would reset then, I'm assuming.

>> Yeah. Yeah, I think so.

>> Sweet. Awesome. It's cool to be able to get something like that up and running rather quickly. And honestly,

there's not too much code involved with this. That's one of the things I like about the AI SDK, um, is like we have a knowledge assistant. And kind of as a good recap for those

watching: you have assistants that you can create with php artisan make, um, agent. Sorry, not an assistant. Uh, where

we have a particular class that it all lives in. So instructions, um, as well as tools. Is there anything else that can live in there, Taylor? Yeah, like the output schema, things like that.
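To illustrate that recap, an agent generated with php artisan make:agent might look something like the class below. This is a hedged sketch only: the actual Laravel AI SDK class names, method signatures, and namespaces may differ, and the SearchDocs tool is a made-up placeholder.

```php
<?php

// Hypothetical sketch — the real Laravel AI SDK agent API may differ.
// Imagined output of: php artisan make:agent KnowledgeAssistant

namespace App\Agents;

class KnowledgeAssistant
{
    // System instructions that frame every conversation with this agent.
    public function instructions(): string
    {
        return 'You answer questions about the Laravel documentation.';
    }

    // Tools the model may call while answering; SearchDocs is a placeholder.
    public function tools(): array
    {
        return [
            new \App\Tools\SearchDocs(),
        ];
    }

    // Optionally, a schema describing the structured output you expect back.
    public function schema(): array
    {
        return [
            'answer'  => 'string',
            'sources' => 'array',
        ];
    }
}
```

The point of the class is exactly what is said above: instructions, tools, and output schema all live in one reusable place.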

>> Um, and then with that agent, we can then, uh, create a new instance of it in things like, uh, controllers, where we're creating a new instance of this knowledge assistant and passing it that

prompt. So we're using all the configuration that we're doing in that agent, which also uses all the configuration of the AI config in terms of what's the default for audio

transcription, um, images, uh, and embeddings, for example. And then it's as simple as using things like audio of the answer we're generating,

getting that particular response, and then converting that on the front end. But yeah, it's crazy how basically that whole demo was

just really one controller, in the sense of that, um, plus doing a couple things on the back end, and how fast it can be. And probably we could

configure this to be quicker. I think in my, uh, backup Laravel app I did a "use cheapest" model, or is there a "use fastest" model, am I correct?

>> Uh, it's "use smartest" and "use cheapest." >> "Use cheapest." So I think I did a "use cheapest" for like the actual response, and that way it just was a lot faster in a lot of ways.
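The controller flow described here could be sketched roughly as below. The smartest/cheapest model selection comes straight from the conversation, but every method name in this snippet — the useCheapest() helper, the prompt() call, even the controller shape — is an assumption rather than confirmed SDK API.

```php
<?php

// Hypothetical controller sketch — treat every method name as an assumption.

namespace App\Http\Controllers;

use App\Agents\KnowledgeAssistant;
use Illuminate\Http\Request;

class AskController
{
    public function __invoke(Request $request)
    {
        // The agent carries its own instructions and tools; the AI config
        // supplies defaults for audio transcription, images, and embeddings.
        $agent = new KnowledgeAssistant();

        // Trade some answer quality for speed and cost, as discussed above.
        $answer = $agent->useCheapest()->prompt($request->input('question'));

        return response()->json(['answer' => (string) $answer]);
    }
}
```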

Um, but yeah, sweet. Let's go ahead... We're probably coming up short on time. So, I'll jump

into maybe one more question. Those of

you who were in the chat for this whole period of time, thanks so much for being here. Thank you so much for taking a journey along with us on how the AI SDK was built, what it's good for, um, and also getting a lot of

questions for Taylor. I know uh Taylor, we're appreciative of you and your time of being here. Um

>> for sure.

>> Uh let's go ahead and see.

Uh, there's not too many more questions. Taylor, what would be

one thing that you would love to tell people about when it comes to like the AI SDK, whether in terms of building with it or just, you know, an inspirational

kind of phrase to leave? Oh

well, it makes, you know, integrating AI with your Laravel app super easy. So I

mean, it is out: composer require laravel/ai. There's a Boost skill available. So

install Boost and you will have the, uh, AI SDK skill that kind of tells Boost how to use it. And just start, you know, kind of like we did here, just building something fun, some cool ideas.

You'll kind of learn how to use it, learn the ropes, and then, uh, hopefully you can take it to, um, you know, whatever you're doing at work or whatever kind of business you're building, and hopefully it makes, uh, your life a little bit easier.

>> Awesome. I love it. And then for those who saw the, uh, other kind of demo that I had kind of put together for, um, the kind of RAG instance

and chatting with it, um, I'll put that link in the chat as well. Uh, on that demo, we'll also have the GitHub link so

that you can kind of take a look through what was built with that. Um, uh,

I guess one last question, uh, to kind of go through as well: someone mentioned Prism PHP, because I know you had mentioned a lot of the stuff in the

background is using Prism PHP. Someone

mentioned, what does it look like to switch? What are some of the gaps at the

moment in terms of switching?

>> Um, I think of... yeah, we do use Prism for some of the tech stuff under the hood.

Nice package, uh, built by TJ Miller. Um, I

think of Prism and the AI SDK a little bit like I think of the query builder and Eloquent. So like Prism has kind of this

fluent query builder interface for like querying and doing AI things. And I

think the AI SDK is like maybe a layer of abstraction on top of that, you might think of, where you have like agent classes and sort of like agent testing and things like that. So it's kind of

like a little bit higher level, uh, of abstraction in terms of like code organization and like reuse, uh, which I think is kind of the main difference between the two. Um, and then

it kind of goes a little bit beyond, around like the similarity search, the file search, some of that stuff that is maybe not available in Prism but is available in the, um, AI SDK,

as well as some extra providers like ElevenLabs and things like that. Um, so yeah, it's just a little bit higher level of abstraction, um, to get started, and then of course, um, maintained here by us at

Laravel, but again using Prism for a variety of things under the hood, and uh, yeah, great library.
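To make the query-builder-versus-Eloquent analogy concrete, here is a loose side-by-side. The first call follows Prism's documented fluent style; the agent usage on the AI SDK side is an assumption about its shape, not confirmed API.

```php
<?php

// Prism: a fluent, query-builder-like call composed per request.
use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;

$text = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o-mini')
    ->withPrompt('Summarize our release notes.')
    ->asText();

// AI SDK (hypothetical shape): the same concern lives in a reusable,
// testable agent class — closer to how an Eloquent model wraps the builder.
$answer = (new \App\Agents\KnowledgeAssistant())
    ->prompt('Summarize our release notes.');
```

The design trade-off is the same one Eloquent makes: the fluent call is more flexible per request, while the agent class centralizes configuration for reuse and testing.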

>> Awesome. Yeah, thank you so much, everyone in the chat. Uh, thank you for being here and for joining us on this journey. For

those who are watching kind of on demand, uh, thank you so much for watching this. Feel free to like and subscribe so

that we can have additional content pushed to you as quickly as possible.

But thank you so much, Taylor, for joining us. And

>> for everyone getting started with building within Laravel and maybe AI, you can use composer require laravel/ai

to get started, um, and use Laravel Boost, because it does have a skill to be able to generate some of those things. Even

like that little demo that we just did.

Uh, it was simple, but it was easy to show how the LLMs can use things like Laravel Boost and the AI SDK to create something pretty magical in such little

code. Uh, but yeah, hopefully everyone

has a fantastic day wherever you are or wherever you are joining and watching us from. Grab a Coke Zero for us and have

some Taco Bell, uh, >> in honor of Taylor and the Laravel team.

But thank you all so much.

>> Thanks.
