Build a Complete AI Agent with Lovable and Supabase (Full Tutorial)
By Supabase
Summary
Topics Covered
- API Keys Must Never Be Exposed—Use Secure Storage
- Anonymous Sign-In Enables Immediate Value Delivery
- User Impersonation Tests Verify Row Level Security
Full Transcript
Combining Lovable and Supabase allows you to create real-world apps in record time. Today I wanted to explore how we can build a simple AI agent app that users can talk to, which obtains their information, stores it securely, and gives smart suggestions. I'll also share some security tips along the way. So, let's dive right in.
So, first we're going to have to decide what kind of AI agent we want to build. For today, I want to build a fitness AI agent: something that can help me achieve a certain fitness goal, like gaining muscle, losing weight, or improving my health in general.
So, let's start with that. I've actually created a Supabase project already called fitness agent. It's a blank project, so nothing is there yet, but we can start by connecting Lovable to the Supabase project before we give the initial prompt. This is going to create a project that is preconfigured with Supabase. Let's say something like: I want to create a conversational chat AI agent that will help the user achieve their fitness goals. The initial chat should ask the user what their fitness goal is, and it should ask a few follow-up questions to obtain more information that's required to assist the user in achieving that goal. At the end, it should provide a nice plan to achieve that goal. It shouldn't use voice chat; instead, it should be a text-based chat application.
Hopefully this is clear enough. I told it that I want to create a conversational chat AI agent to help users achieve their fitness goals. Yeah, I think this is good. Let's wait and see what Lovable can cook for us. So, it looks like it's done with the initial iteration. We have a nice chat, and it's asking the user some questions. Let's see: I want to build muscle. Now, I'm guessing it's not connected to OpenAI or anything; at least I haven't provided any API key, so it shouldn't be connected. But it's giving me some plausible responses: "Great choice. Now, what's your current fitness experience level?"
What if I say something ridiculous, like, I don't know, "cats"? "Excellent. How many days per week..." It's asking a sequence of questions from a predefined, hardcoded script, I think. So let's prompt: make sure that the chat is properly using a large language model powered by OpenAI; I can provide the OpenAI API key. What else do we need? I guess that's a good step. Let's just go with that; we can keep it incremental. Okay, so I'm just going to provide an OpenAI API key. I'm on the API key generation page of OpenAI, and I'm going to create a key called fitness agent. Now, this key is very important. You shouldn't expose it publicly like this. The reason I'm exposing it is that I'm going to delete it right after recording this video. But you should never paste it into your chat window. You should always wait for this add API key window or button to appear and paste it in there, which will store the key nice and securely as a Supabase edge function secret so that nobody else can access it.
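Once the key is stored as an edge function secret, the function reads it from the environment at runtime rather than from source code. Here's a minimal sketch of that pattern; in Supabase's Deno runtime you would pass in `Deno.env.toObject()` or call `Deno.env.get` directly, and the function name here is illustrative, not from the generated app:

```typescript
// Sketch: read the OpenAI key from an environment secret inside the edge
// function, instead of hard-coding it anywhere in the repo.
function getOpenAIKey(env: Record<string, string | undefined>): string {
  const key = env["OPENAI_API_KEY"];
  if (!key) {
    // Fail fast so a missing secret is noticed immediately, not mid-chat.
    throw new Error("OPENAI_API_KEY secret is not set");
  }
  return key;
}
```

The point of this shape is that the key never appears in the front-end bundle or the git history; only the edge function's environment holds it.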
Okay. Now that I've given it the API key, it's constructing the Supabase edge function to power the chat using a large language model. Oh, I didn't realize this header up here. Beautiful. Okay, it seems like it is done constructing the edge functions, so now the chat should be powered by proper AI. "Hello, I'm your fitness coach." I want to build muscle.
Okay, that wasn't too bad. But if you've used ChatGPT before, you'll notice the difference, right? As soon as you send the chat, ChatGPT starts responding in real time, writing word by word. And that didn't happen here. Instead, it waited for the entire chunk to load, and then the whole thing was displayed all at once. But we can do better. We can implement something similar to what ChatGPT does, where everything is streamed in real time, and that is called server-sent events. We can ask Lovable to configure server-sent events on the edge functions. There's also a special package that enables server-sent events in the browser, called SSE.js. Let's ask Lovable to do that.
Currently the response is not displayed until the whole message is done loading, which is not a great user experience. Instead, use SSE.js on the front end and set up proper SSE on the edge functions so that the response can be streamed to users in real time.
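For context, server-sent events arrive as newline-delimited `data:` lines, with a blank line separating events. SSE.js handles this for you, but a hedged sketch of how a client might buffer and parse incoming chunks (names here are illustrative, not from the generated app):

```typescript
// Parse one chunk of an SSE stream. Complete events are separated by a
// blank line ("\n\n"); the trailing fragment may be incomplete, so it is
// returned as `rest` to be prepended to the next chunk.
function parseSSEChunk(buffer: string): { events: string[]; rest: string } {
  const events: string[] = [];
  const frames = buffer.split("\n\n");
  const rest = frames.pop() ?? ""; // last piece may be a partial event
  for (const frame of frames) {
    for (const line of frame.split("\n")) {
      if (line.startsWith("data: ")) events.push(line.slice(6));
    }
  }
  return { events, rest };
}
```

Each parsed `data` payload would be appended to the assistant's message in the UI as it arrives, which is what produces the word-by-word effect.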
Hopefully this does it. I've played with Lovable and tried to make it work with SSE before, and it didn't quite do the job until I specifically told it to use the SSE.js package. So that seems to be the magic keyword here. Hopefully that does the trick and we can enjoy real-time streaming of the messages. But yeah, let's wait for Lovable to do its thing. Let's try it out. So: build muscle. Oh, I misspelled muscle there.
But okay, now it's just broken. It seems like it's broken. Let's go check if there's something wrong with the edge function. I'm going to check the edge function logs. And yeah, we do see an error: stream controller cannot close. No idea what the error is, but Lovable has access to the logs, so we can just ask something like: it seems like there's an error happening on the edge functions when we send the chat, so diagnose the error and make sure it is fixed. Once that error is sorted out and we have some real-time streaming, we can get into refining the system prompts, which we can actually view in the source code. As we send chats to the AI agent, I'm looking at the Supabase edge function. We can drill down into supabase/functions, fitness chat, and index.js or .ts, and view the prompts that the system is sending to OpenAI in the background. On top of what the user has entered, it sends all these chunks over to the OpenAI API. It's saying stuff like: you are FitPlan AI, an expert personal fitness coach and nutritionist. So basically, it's giving it a persona and some more information, providing some guardrails on what kind of response it should come back with, and then it's also adding the user's goals in the middle of the prompt. This is more or less how these AI agents are constructed: you have a set of system prompts, you may or may not throw in some user responses, and you get a certain response back from the large language model. Right now it's a pretty simple case, but we could make some conditional system prompts: you know, if the user is asking for this type of thing, then use this system prompt, or something like that.
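The pattern just described — a persona, a stage-dependent instruction, then the running conversation — can be sketched roughly like this. The stage names and prompt wording are my own illustrations, not what Lovable actually generated:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
type Stage = "goal" | "experience" | "plan";

// Hypothetical conditional system prompt: a different instruction
// depending on where the conversation currently is.
function promptForStage(stage: Stage): string {
  switch (stage) {
    case "goal":
      return "Ask one concise question to identify the user's fitness goal.";
    case "experience":
      return "Ask a concise follow-up about experience level and equipment.";
    case "plan":
      return "Produce a detailed plan: weekly schedule, key exercises, nutrition tips.";
  }
}

// Assemble the messages array sent to the chat completions endpoint:
// persona first, then the stage instruction, then the conversation so far.
function buildMessages(stage: Stage, history: ChatMessage[]): ChatMessage[] {
  return [
    {
      role: "system",
      content:
        "You are FitPlan AI, an expert personal fitness coach and nutritionist.",
    },
    { role: "system", content: promptForStage(stage) },
    ...history,
  ];
}
```

Swapping the stage instruction while keeping the persona fixed is one simple way to get the "ask questions first, plan at the end" behavior without a single giant prompt.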
There's a lot of stuff you can tweak here. But let's go back and check if we've got real-time message streaming.
Okay. So
build muscle.
It looks like it says: "I've added comprehensive logging to help diagnose the issue. I see the issue: the front end is using the new direct function URL, but the edge function is still set up to handle the old URL pattern. Let me check the actual network requests more carefully."
Hm, at least it looks like it's still broken. This is the old error, I think. It's not getting any response, right? Let's see. So, it's sending something to OpenAI. So, there's something going on. Okay, I guess that's the same error. Let's tell it: it seems like on the server side there's still the same error; make sure it's properly fixed. I should also add: dig into the edge function logs and make sure it's properly fixed. Yeah, let's come back in a few minutes.
Okay, it seems like it's done doing its thing. So, let's come back and say: build muscle. And it looks like it's not... yeah, that definitely didn't do real-time streaming. So, at this point, I don't know. Let's see. At least it's not erroring out. So: make sure that the browser properly uses SSE.js and the edge function properly implements server-sent events, so that the response from the AI can be streamed and displayed in real time in the UI. Also, there's a weird UI glitch on the front end where, when the response from the AI comes back, the submitted message from the user gets replaced with the response from the AI. Make sure that doesn't happen.
Okay, let's wait another few minutes. Yeah, this is part of the process of vibe coding. Let's wait another few minutes. Okay, let's try it one last time: build muscle. And at least it's streaming the response now. It still has that weird bug where the user's message is wiped out by the AI response, but at least streaming works. So yeah, let's pat ourselves on the back. By default, Lovable, and every other large language model or AI tool I've tried, will avoid SSE.js and try to do its own implementation. The key is to explicitly ask it to use SSE.js. By default, the browser doesn't support a POST request to an SSE endpoint, but SSE.js can implement that part that's missing in the browser. And here we are. Now, let's just get rid of this bug, hopefully in a single prompt: I noticed that when we get a response back from the AI, the user's submission in the chat window is overwritten by the response from the AI. Make sure that the response from the AI is displayed and the user's message is retained untouched.
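One plausible root cause of that glitch is how the message list state is updated: if the UI replaces the last message with each streamed chunk, the user's message gets clobbered. A hedged sketch of the append-by-id approach that avoids it (types and names are illustrative, not from the generated code):

```typescript
type Msg = { id: string; role: "user" | "assistant"; content: string };

// Apply one streamed token to the assistant message identified by a stable
// id, appending rather than replacing, so other messages are untouched.
function applyToken(messages: Msg[], assistantId: string, token: string): Msg[] {
  const existing = messages.find((m) => m.id === assistantId);
  if (!existing) {
    // First token: create the assistant message alongside the user's.
    return [...messages, { id: assistantId, role: "assistant", content: token }];
  }
  // Subsequent tokens: concatenate onto that message only.
  return messages.map((m) =>
    m.id === assistantId ? { ...m, content: m.content + token } : m
  );
}
```

Keying updates by a stable id is also why the "ID collision" fix mentioned shortly is relevant: two messages sharing an id would overwrite each other.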
You would think that's easy, right? But yeah, let's see. From here on, I think we can try to refine this system prompt. Right now it's kind of static: you have the user's goal, and I wonder if it's able to take in the follow-up messages and such. Also, I've noticed that every 10 or 20 seconds or so, it displays that initial "hello, I'm your personal fitness coach" message again. We should probably get rid of that as well.
"Fixed the ID collision." Okay, that seems plausible.
Also, another thing I want to do: right now, I believe none of these responses are saved anywhere, but we should save them in the database. Maybe that even helps mitigate some of these errors, so I'm just going to start with that. Make sure each response is saved in the Supabase database. And for authentication, use anonymous authentication so that the user doesn't have to enter an email address or anything. As soon as they land on the page, if the user is not signed in already, sign them in using anonymous auth. If they are signed in, just proceed, load the past conversation, and resume from there.
So, what is anonymous authentication? It's a mechanism to sign in the user silently, behind the scenes. Normally, when users sign in or sign up, they provide some kind of credential: a Google account, a GitHub account, an email and password, whatever it is. Anonymous sign-in provides an access token behind the scenes without the user providing a credential. And this is actually a totally secure way of accessing the database. The only issue is this: if the user provides their email address and password, for example, they can take that anywhere. They can open the same website on a smartphone or a different laptop, sign in using the same credential, and view the same conversation history. Whereas with anonymous auth, if you open the app on your phone, get signed in anonymously, and then move to your laptop, those are considered two separate sessions, two separate conversation threads. So that's the benefit of asking the user for their credentials. But anonymous auth is great for presenting the value of your app to your users as soon as possible. In this case, I want the user to just land on this website and start talking to the AI, instead of having to worry about email addresses and such.
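The land-and-sign-in flow described above boils down to: check for an existing session, and only create an anonymous one when none exists. supabase-js v2 does expose `auth.getSession()` and `auth.signInAnonymously()`, but treat the exact shapes below as assumptions; the flow is modeled against a minimal interface so the logic is clear:

```typescript
// Minimal slice of the auth client surface this flow needs.
interface AuthClient {
  getSession(): Promise<{ session: object | null }>;
  signInAnonymously(): Promise<void>;
}

// Resume an existing session if there is one; otherwise sign in silently.
async function ensureSignedIn(auth: AuthClient): Promise<"existing" | "anonymous"> {
  const { session } = await auth.getSession();
  if (session) return "existing"; // load and resume the past conversation
  await auth.signInAnonymously(); // no email or password required
  return "anonymous";
}
```

Running this once on page load gives exactly the behavior requested in the prompt: returning visitors resume their conversation, new visitors get a silent session.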
So that's what I'm going to do. And let's see. It looks like Lovable came back with some tables. We have a conversations table, a messages table, and a conversation states table. Sure, create a table for storing the conversation state and stage. I'll leave it up to you, Lovable. "Users can view their own conversation." Obviously, I know row level security policies, and by reading this I can tell that it's secure. But some of you watching might not know how to tell a good row level security policy from an insecure one, and to be 100% sure you kind of have to know SQL. But I think you can totally get by by having a deep conversation with a large language model. For example, if you're unsure, you can copy and paste this and talk to a different AI model, maybe o3 or one of the smarter models out there, and make sure that it's nice and secure.
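For reference, a policy like "users can view their own conversation" typically looks something like this in SQL. The table and column names are assumed from the transcript, and `auth.uid()` is Supabase's helper returning the signed-in user's id:

```sql
-- Assumed schema: a messages table with a user_id column.
alter table public.messages enable row level security;

-- Only let a user read rows whose user_id matches their own id.
create policy "Users can view their own messages"
  on public.messages for select
  using (auth.uid() = user_id);
```

A policy is only as good as its `using` clause: a select policy with `using (true)`, for example, would expose every row to every user, which is the kind of thing to watch for when reviewing generated policies.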
Also, later on, I'll show you a little trick you can do within your Supabase dashboard to make sure your row level security policies are working as intended. But for now, it looks great. The conversations table is locked in, the conversation states table is locked in, the messages table is locked in. It looks great. So I'm just going to hit apply changes, and this is going to create the tables in our Supabase project.
By default, anonymous authentication is disabled, so we have to go into the Supabase sign-in providers, go to "allow anonymous sign-ins," and hit save. This is going to enable anonymous sign-in. On an actual production application, you are recommended to enable CAPTCHA protection, which prevents bots from creating a bunch of fake anonymous authenticated users. I'm just going to skip that for now, but you should definitely do it if you're taking your app to production.
Okay, it looks like it's done. Let's see: I want to build muscle. Great, it's streaming the response. Ideally, the loading indicator would disappear right away, but that's fine. And that bug is gone; the user's submission is properly retained. "Awesome goal. Building muscle is a fantastic way of improving your strength. Blah blah blah. Can you tell me about your current fitness experience? Are you a beginner, intermediate, or advanced?" Okay, sure: I've been to the gym a few times, but recently I've been too lazy to go. I do have a dumbbell at home, which I lift occasionally, but that's kind of it. I also play tennis, but other than that, that's pretty much my only exercise routine these days. Let's hit that. "Thanks for sharing that. Sounds great." Blah blah blah.
"Next, let's talk about your timeline." Okay, let's see where this takes us. I'm just going to keep responding. Timeline? How long do I want? The sooner the better, but I can give it a few months. "Great. Now, let's discuss availability." I can work out a few days a week. "Okay, final details to create a personalized plan. Do you have any specific preferences, limitations, injuries, or types of exercise?" Not really. I am allergic to avocado, but other than that, no restrictions, no injuries whatsoever.
Let's see if it can come up with a nice plan. "Great, thanks for sharing the information..." Wow. Wow. Wow. It's still going. I am actually impressed. This was a nice sequence: it was asking all the right questions, and the responses were nice and concise, until the final response, which is a nice, detailed plan for me to execute. Weekly schedule: work out three to four days a week. I don't know why it added an extra day, but that's all right. Rest days. Pretty specific, right? Monday, Wednesday, Friday, optionally Saturday. Fine. And then key exercises with my dumbbell, trying to focus on compound movements and muscle groups: dumbbell bench press, bent-over dumbbell rows. Specific types of exercise, numbers of reps. Pretty good. Day two, day three. It has all the days covered, nutrition tips, protein intake, complex carbs.
Wow. Getting started tips. Yeah, this is actually great. I wonder what kind of system prompt Lovable created for me. It's the same thing, isn't it? It's nothing special, but it works. So, okay, depending on the current stage... I feel like OpenAI is way more powerful than I expected. And this is one of the cheaper models, right? It's not even the nicest model. Where is the model? Which OpenAI model is it using? Right here. Yeah, GPT-4o mini. Okay. That was actually nice. "Keep the responses concise but informative. When asking questions, provide clear options. For the final plan, include a weekly schedule and key exercises." Wow. Yeah, I'm impressed. Lovable was able to construct this really helpful system prompt with my not-so-great guidance.
And just like that, we have a functioning AI agent that can construct a nice workout plan for me. This is a great starting point. I'm not going to go that far, but one potential way of expanding this is adding Google login. What that unlocks, by obtaining the Google access token after the user performs the Google login, is that you can take the token and, say, manipulate the user's calendar. You could have the AI agent input all these exercises on Monday, Wednesday, and Friday at certain times, and write the exercise lists in the descriptions of the calendar events. That way, users don't have to come back to this app to view their exercise schedule; it's all right there in their Google Calendar. Now that we have this nice-looking app, I just want to wrap up by making sure row level security is doing its thing. First, like I mentioned earlier: if you're not familiar with SQL, have more conversations with large language models, and not necessarily within Lovable, although you can talk with the Lovable agent as well. By the way, Lovable generally does a really good job of securing your database; it writes really good row level security policies. But sometimes we humans can push Lovable, or any large language model, into a corner. You know, if some data isn't loading and you think it should be, you just keep saying: hey, this should load, nothing is displaying. After you do that a few times, it might just say: okay, I'm going to open up database access to everything and make it work that way. At that point, the language model or the tool isn't to blame; it's more the human being forceful. You always want to make sure that your row level security policies are properly set, and one way to ensure that is in the table editor. Let's view the messages table, for example. All these messages are tied to a single user. I know that because I can look at the user ID column, and it's all this C1F person, whoever that is. Now, what I can do here is press this button. Right now, we're accessing this table through the postgres role, which is like an admin role, a superuser role that has permission to access everything. We can change this to, for example, the anon role, which simulates a non-signed-in user. Remember, we set proper row level security policies, and we also added Supabase authentication through anonymous auth. So if you're not signed in, you shouldn't be able to view any data, and that's exactly what's happening here: a non-signed-in user is not able to view any data. We can also simulate the authenticated role, where we can impersonate a certain user.
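If you prefer, the same anon-role check can be reproduced by hand in the SQL editor. The role names are Supabase's built-in Postgres roles, and wrapping the check in a transaction keeps the role change local:

```sql
-- Temporarily assume the anon role and query as a non-signed-in user.
begin;
set local role anon;
select count(*) from public.messages;  -- with correct RLS, zero rows are visible
rollback;
```

The dashboard's impersonation dropdown is doing essentially this for you, plus setting the JWT claims when you impersonate a specific authenticated user.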
So if I impersonate the C1F user, we see all the data, which is great and expected. But what if we want to make sure other users do not see his or her data? We can impersonate this C4 user, and as expected, they are not seeing the messages table. The same goes for the conversations table, hopefully. This user is seeing this conversation because it belongs to them, but if I go back to the C1F user, they see their own conversation. And if I switch back to postgres, we can see all the conversations that exist in this table. So this user impersonation feature is great for making sure your row level security policies are working correctly and your data is nice and secure. That was how you can build a nice AI agent app using Lovable, Supabase, and OpenAI. I hope you learned a thing or two, and I'll see you in the next Supabase video.