This OpenAI o1 pro coding workflow is insane...
By Mckay Wrigley
Summary
## Key takeaways

- **Use Repo Prompt for Massive Context**: Repo Prompt selects the entire project directory (excluding ignores like node_modules), providing 47k-60k tokens of context within o1 Pro's 128k limit, including a file tree for structure awareness. [01:25], [02:07]
- **Inject Detailed Cursor Rules**: A 481-line cursor rules file details project structure, typing, envs, frontend components, server components, and backend requests, dramatically improving o1 Pro's code output. [03:12], [03:44]
- **Force an XML Response Format**: The o1 format prompt requires a summary section and an XML section with code_changes tags containing changed files, operations, paths, and code, which is essential for the XML parser tool to apply changes. [04:17], [04:39]
- **Parse and Apply XML Instantly**: Copy o1 Pro's XML response into the local XML parser tool at port 3001, which applies all changes to the codebase instantly on a git branch for quick review, like an AI-generated PR. [04:51], [11:31]
- **o1 Pro One-Shots 80-90% of Features**: A natural-language request for sub-todos gets database schema updates, UI tweaks, CRUD actions, and imports handled in one shot, leaving only minor fixes like removing a stray 'use server' in Cursor. [15:34], [15:44]
- **Pro Mode Beats Standard o1**: A/B testing shows o1 Pro mode gives significantly better results than standard o1, and it is especially reliable for one-shotting entire feature requests in this workflow. [00:51], [17:09]
Topics Covered
- Inject Full Codebase into Prompts
- Cursor Rules Turbocharge AI Output
- o1 Pro One-Shots 80-90% of Features
- Choose AI-Friendly Tech Stacks
Full Transcript
I'm going to be showing you my full workflow for OpenAI's new o1 Pro mode model. I'm going to show you how to take the code that o1 generates in ChatGPT and apply it to your project as quickly as possible. Let me show you the app first. This is the starter template, and one of the things it has is a to-do app. It uses Supabase, we're using Postgres, and it's a very basic to-do list: I can add a to-do here and it gets added. What I would like is the ability to add sub-todos, so in my o1 workflow, here's how I'll go about this request.
I'm going to go back over to this window, and you'll see on the left I have ChatGPT. I'll start a new chat, and you'll see I have ChatGPT's o1 Pro mode selected. Again, you can try this with o1 and I think you'll still get pretty good results, but you may notice, like I do, that you get a bit better results with Pro mode. I've done a lot of A/B testing here, and I'm telling you, Pro mode is absolutely ridiculous, especially with this kind of workflow. Then I'm going to go over to Repo Prompt, and I'll go full screen for a second so you can see everything that's going on. First I'm just going to clear my selection, so everything is gone, and I'm also going to clear these saved prompts up here. What we're going to do is build the entire piece of context that we're going to feed into our request to ChatGPT.
So what I'm going to do here is just select this folder. This is my actual project directory, and it's going to select every single thing in it except anything covered by these filters. You can see this gets rid of things like node_modules, so you don't have all of that extra bulky context that's going to be completely irrelevant. I've set up that ignore file here; you can see I'm also ignoring things like my migrations folder, my cursor rules file, package-lock.json, all that kind of stuff.
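As an illustration, a filter list like the one described might look something like this; the file names and paths here are assumptions, not the exact contents of the ignore list in the video:

```
node_modules/
.next/
db/migrations/
.cursorrules
package-lock.json
```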
When I select this, you can see at the bottom (it's a little small) that it's about 47,000 tokens of context. One thing to keep in mind: requests to o1 Pro can fit up to 128k tokens of context, so we are in fact within that window. For reference, I've typically been keeping my requests at about 60k tokens; that's roughly where I cut things off. You can get more efficient with those, and you can also max them out a little more, but that's the sweet spot for me. I might try to tune my workflow to get that number down so things run a little faster, but this is how I'm working right now.
Now I'm going to go to this top-right area, where I have three stored prompts that I want to use for every single request: my o1 Pro base prompt (I'll talk about all of these in just a second) and my cursor rules template among them. I'm going to inject my cursor rules file into the o1 request. I write very detailed cursor rules; you can see this is 481 lines of rules for my project, covering things like project structure, rules for typing, envs, frontend components, server components, and backend requests. Every single thing that I want the AI to have context about when I'm working in Cursor is written out in detail in my cursor rules file. If we look it up here (it's a little small), you'll see I just have the entire file in there, and that's going to give o1 some really good instructions on how the code should actually work. This dramatically increases performance; we get much better code because o1 actually knows what it needs to do.
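For a sense of what that kind of file contains, here is a short hypothetical excerpt in the spirit of the rules described; the real file is 481 lines, and these specific rules and paths are illustrative, not quoted from it:

```
# Project Rules

## Project Structure
- Next.js App Router: routes in `app/`, shared UI in `components/`, server actions in `actions/`, Drizzle schemas in `db/schema/`.

## Types & Env
- TypeScript everywhere; avoid `any`.
- Only read env vars in server code; client-exposed vars must be prefixed with `NEXT_PUBLIC_`.

## Frontend
- Default to server components; add `"use client"` only when a component needs interactivity.
- Fetch data in server components and pass it down as props.

## Backend
- All mutations go through server actions in `actions/`, which are async and return typed results.
```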
My o1 base prompt just says: you are an expert software engineer tasked with following my instructions; use the included project instructions as a general guide. Really general stuff, but I want to make sure it's included. The other one we're going to turn on is the o1 format prompt, and this one matters a ton. If I bring up the edit section, you can see we have a prompt that says you will respond with two sections: a summary section and an XML section. This is super important, because it determines how o1 is going to respond to us, and for this workflow to work we need o1 to give us an XML response enclosed in a specific set of tags so we can copy and paste it into the o1 XML parser tool that I built. You can see that tool running on localhost:3001. You paste the XML in there, along with whatever your project directory is, and it automatically applies all of those changes to your codebase. Super cool.
So I'm going to hit cancel, because I don't want to edit that. The last piece is that we also have our file tree: if I copy this and paste it in really quick, you'll see that Repo Prompt actually generates our full project file tree, which is super cool because it really helps o1 understand where everything goes, so code ends up in the right place and it knows what it has access to. Then in the instructions box we're going to actually say what we want. Before we get to the instructions, notice that we're now at 50.7k tokens; we added several thousand tokens of context, primarily from the cursor rules file, but all of that is really important to get the results we want, especially the format. If you don't get the format back the way you want it, you won't be able to use the XML parsing tool, and your workflow is going to be a lot more cumbersome.
So, we want to add sub-todos. In the instructions, using Wispr Flow for voice dictation, I'm just going to talk through what I want: "In my app I would like to add the ability to have sub-todos, so please help me implement that into my app." Sometimes I'll ramble for a minute or two about what I want, but this is a pretty straightforward request, so we'll just use that. At this point, if I click on this arrow, you'll see we have: include saved prompts, include files, include user instructions, and include file tree. The first checkbox includes all of those blue saved prompts, the second includes all the files we added, user instructions are whatever is in this text box, and the green files are what the last one covers. I'm going to hit copy, and that copies all of those roughly 51,000 tokens to my clipboard.
Let's use the full ChatGPT window for a little more room. We paste this in, and if I grab the scroll bar you can see this is our entire request. It's a monster: we have those meta prompts and our whole codebase injected in here. Now we send it off. You can see it's scrolling down the whole way (by the way, OpenAI, if any of your engineers are watching, making this an instant scroll instead of a smooth scroll would be great, because waiting for it gets really annoying). o1 is going to start working, and on a roughly 50,000-token request like this one it typically takes about two to three minutes.
What's going to happen is o1 will respond to us in the format we specified in the o1 format prompt, the meta prompt we're using as one of our pre-saved prompts. Let's talk about that while o1 works. The prompt says "here's how you should structure the XML," and what we're doing is defining the XML structure we want o1 Pro to respond with. Everything is going to be inside a code_changes tag; within that we have changed_files, and each file gets a file tag with a summary, an operation, a path, and the actual file code. So we'll get a nice little summary (remember, two sections: the summary section) and then we'll also get the XML section back as a markdown code block.
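As a rough illustration, the structure being described looks something like this; the tag names follow the description in the video, so check the actual prompt in the parser's README for the exact format:

```xml
<code_changes>
  <changed_files>
    <file>
      <file_summary>Add a sub-todos table and CRUD actions</file_summary>
      <file_operation>CREATE</file_operation>
      <file_path>db/schema/sub-todos-schema.ts</file_path>
      <file_code><![CDATA[
// full contents of the created or updated file go here
]]></file_code>
    </file>
    <!-- one <file> entry per created, updated, or deleted file -->
  </changed_files>
</code_changes>
```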
Let's see, it looks like o1 Pro is still thinking. That's good; that's test-time compute at work, which is the entire point of a model like o1. It means it's doing work for us; you can almost think of it like a human engineer: we gave our developer a task and now they're working. Once we get the response back, we're just going to hit the copy button and go into our parser. Right now I'm going to jump into Cursor and get the working directory. You have two options when you're working with this tool. This is the repository for the XML parser: you type pwd (print working directory) to get your directory, and you can either paste that into the UI or put it in a .env file. The UI takes priority; if you don't have a value in the UI, it'll take whatever is set as the environment variable. Let's go over to our tool; we'll just use the UI because that makes for a nice demo.
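For intuition about what the parser side does, here's a minimal TypeScript sketch: resolve the target directory (UI value first, then the env var) and write each parsed file change to disk. This is a simplified illustration, not the actual code from the o1-xml-parser repo, and the PROJECT_DIRECTORY variable name is an assumption:

```ts
import { mkdir, rm, writeFile } from "fs/promises";
import { dirname, join } from "path";

interface FileChange {
  operation: "CREATE" | "UPDATE" | "DELETE";
  path: string;
  code: string;
}

// Resolve the project directory: the value typed into the UI wins, otherwise
// fall back to the environment variable (name assumed here).
function resolveProjectDir(uiValue?: string): string {
  const dir = uiValue?.trim() || process.env.PROJECT_DIRECTORY;
  if (!dir) throw new Error("Set the directory in the UI or in PROJECT_DIRECTORY");
  return dir;
}

// Very simplified extraction of <file> entries from the pasted XML response.
function parseChanges(xml: string): FileChange[] {
  const changes: FileChange[] = [];
  for (const block of xml.match(/<file>[\s\S]*?<\/file>/g) ?? []) {
    const operation = block.match(/<file_operation>([\s\S]*?)<\/file_operation>/)?.[1]?.trim();
    const filePath = block.match(/<file_path>([\s\S]*?)<\/file_path>/)?.[1]?.trim();
    const code = block.match(/<file_code><!\[CDATA\[([\s\S]*?)\]\]>/)?.[1] ?? "";
    if (operation && filePath) {
      changes.push({ operation: operation as FileChange["operation"], path: filePath, code });
    }
  }
  return changes;
}

// Apply every change relative to the project directory, e.g.
// await applyChanges(pastedXml, directoryFromUi);
export async function applyChanges(xml: string, uiDir?: string): Promise<void> {
  const projectDir = resolveProjectDir(uiDir);
  for (const change of parseChanges(xml)) {
    const target = join(projectDir, change.path);
    if (change.operation === "DELETE") {
      await rm(target, { force: true });
    } else {
      await mkdir(dirname(target), { recursive: true });
      await writeFile(target, change.code, "utf8");
    }
  }
}
```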
Let's see if o1 is done. Okay, the request is done, and you'll see this big bulky box with a copy code button. We're literally just going to hit copy code; this is the XML response that we got back from o1 Pro. I'm not going to apply it yet, I'm just going to paste it in and go to the very top, and you can see we got the changes we need for our entire feature request. If I scroll back in ChatGPT (also, OpenAI, if you're listening, it would be really nice, because these requests are so big, to have an arrow that takes you to the start of each message, since some of these requests get kind of crazy), you can see the summary section. It gives us a quick glance at what's going on: the model updated our database schema, in which case we're going to have to do a migration, which is totally fine and expected; it's tweaking our page.tsx, which is the to-do page; it's updating some of the actions, so some of those CRUD queries; and it's even updating the component for the to-do list. And then again we have that entire XML block, which is what we need for the apply to work.
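To make the schema change in that summary concrete, here is a hypothetical sketch of what a sub-todos table in Drizzle might look like; the table, column, and file names are my guesses, not the exact code o1 generated in the video:

```ts
import { boolean, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";
import { todosTable } from "./todos-schema";

// Each sub-todo belongs to a parent todo and is removed when the parent is deleted.
export const subTodosTable = pgTable("sub_todos", {
  id: uuid("id").defaultRandom().primaryKey(),
  todoId: uuid("todo_id")
    .references(() => todosTable.id, { onDelete: "cascade" })
    .notNull(),
  content: text("content").notNull(),
  completed: boolean("completed").default(false).notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
  updatedAt: timestamp("updated_at").defaultNow().notNull()
});

export type SubTodo = typeof subTodosTable.$inferSelect;
export type NewSubTodo = typeof subTodosTable.$inferInsert;
```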
Now, just to prove this is working the way we'd expect: I'm working on an o1-testing branch here, so I'll publish the branch, and you'll see our git tree is empty, we have no changes. We go to our XML parser, hit apply, and you'll see "changes were applied successfully." If we go over to our repository, you can see we've now instantly applied all of those changes. So let's go through it and almost do a little code review to see what's going on; obviously, depending on the request, you're going to want to review this to make sure it's right. It's almost like your AI is making you a PR, and you want to review those pull requests. Basically, it looks like everything's good: we're getting the sub-todos, we're building all those CRUD actions for sub-todos, it's updating the UI, which is great, and we got some db helpers. Even little things like making sure we're correctly importing the updated table into our schema exports, minor things like that, all of this is getting tackled, which is pretty crazy, and that's a combination of our prompting and just the raw power of o1 Pro.
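Likewise, for the CRUD actions mentioned in that review, here is a minimal sketch of what sub-todo server actions could look like in this kind of Next.js plus Drizzle setup; the `db` client, import paths, and revalidated route are assumptions, not the generated code itself:

```ts
"use server";

import { eq } from "drizzle-orm";
import { revalidatePath } from "next/cache";
import { db } from "@/db/db";
import { subTodosTable, type NewSubTodo } from "@/db/schema/sub-todos-schema";

// Create a sub-todo under a parent todo.
export async function createSubTodoAction(data: NewSubTodo) {
  const [subTodo] = await db.insert(subTodosTable).values(data).returning();
  revalidatePath("/todos");
  return subTodo;
}

// Toggle or rename an existing sub-todo.
export async function updateSubTodoAction(id: string, data: Partial<NewSubTodo>) {
  const [subTodo] = await db
    .update(subTodosTable)
    .set(data)
    .where(eq(subTodosTable.id, id))
    .returning();
  revalidatePath("/todos");
  return subTodo;
}

// Delete a sub-todo.
export async function deleteSubTodoAction(id: string) {
  await db.delete(subTodosTable).where(eq(subTodosTable.id, id));
  revalidatePath("/todos");
}
```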
Now, because it actually generated a new table, I need to generate that schema: npm run db:generate. I'm using Drizzle (Drizzle is the best, shout-out to the team at Drizzle, love you guys), and then we're going to run the migration. I didn't show you this, but I already set up a Supabase project and Clerk, and I have those env variables all set up. So now we're going to go to our app and see what our errors are.
This is kind of the last step: o1 one-shotted as much of this as possible, and now we're going to work in Cursor and take care of the last 10% of issues that need to be solved. It looks like we have an issue in the sub-todos schema, so let's open that up and see what the error is. It seems to be "server actions must be async functions," so we're getting some sort of weird error there. Now we're just going to work with our good friend Cursor. I'm also going to run npm run type-check to see if maybe we have a typing error somewhere; that can sometimes catch these. Using Cursor chat we can work through it: it turns out o1 just put 'use server' at the top of that file, and we've got to get rid of it. Cursor figured that out really quickly. We should now be able to go back to our app and everything will work; you can see we've got the UI for sub-todos.
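For context on that error: in Next.js, a file that begins with the `"use server"` directive may only export async server actions, so a stray directive at the top of a schema file (which exports tables, not async functions) produces exactly this "Server Actions must be async functions" error. A hypothetical sketch of the offending file after the one-line fix:

```ts
// db/schema/sub-todos-schema.ts (hypothetical path, columns trimmed for brevity)

// "use server";   <- the stray directive o1 added. This file exports a table, not async
//                    functions, so Next.js errors until the line is deleted. The directive
//                    belongs only in files whose exports are all async server actions.

import { pgTable, text, uuid } from "drizzle-orm/pg-core";

export const subTodosTable = pgTable("sub_todos", {
  id: uuid("id").defaultRandom().primaryKey(),
  content: text("content").notNull()
});
```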
Now let's actually test if this works. Let's add a sub-todo, "sub-todo test one," and try adding that. It looks like create works. Let's verify with a refresh: yes, that's actually updating. If we go to our to-dos in our Supabase project and refresh, you'll see we have a sub-todos table and all of that data is in there, so it looks like everything's hooked up. Let's test deletes really quick: add "test two," check it off, refresh, and update works; delete, refresh, and delete works. So we were able to take that really basic request and o1 one-shotted almost every single thing. The only thing we had to do was go into Cursor and get rid of that 'use server' directive; we just asked the AI in Cursor what the problem was, copy-pasted the error message, and it was a really simple one-line removal. Boom, we got everything we needed.
Obviously, because I'm having to film this, the task took a little longer than it normally would, but you can imagine that in a normal workflow you go from building the context in Repo Prompt, to pasting it into ChatGPT, to waiting for the response from o1 Pro, to pasting it into the XML parser, to making those last little tweaks in Cursor, and you can do that really fast. The reason I wanted to film this is that this is the fastest way I've been able to come up with to get reliable results that can one-shot, or at least 80-to-90-percent-shot, entire feature requests. You can even chain these and ask for multiple features at once. I'm telling you, this model is absolutely crazy; o1 Pro mode is pretty insane. So that's the stack you need in order to execute this workflow. Again, the idea is that you go from natural-language instructions to the entire feature getting written by o1 Pro. You're avoiding things like having to copy and paste back and forth, you're avoiding having to manually build your context windows, you're avoiding typing everything out into ChatGPT by hand, so you're getting all of the speed upgrades you can by using this workflow. I've been doing this for about two days now and it's honestly kind of insane. I'm really hoping OpenAI doesn't nerf o1 Pro, and I'm also hoping OpenAI makes whatever o1 Pro is available via the API; they're doing their little 12 Days of Christmas thing, and I'm hoping one of those days sneaks this model into the API. That would be great. So that's the end of the workflow section.
I'm going to talk a little more about some of the ins and outs of this for anyone who's curious. We're going to talk about speed, efficiency, and performance, everything you need to know to maximize your coding ability with o1 Pro. This is also applicable to standard o1 if you don't have $200 for Pro mode, but if you do notice performance issues when you try it with regular o1, Pro mode is significantly better, so I do want to point that out at the start. Let's talk about the stack we're going to be using. First, a tool called Repo Prompt (repoprompt.com); it's a free download, and it's great because it lets us really easily copy large pieces of context from the codebase on our local machine into ChatGPT. That's a big speed win, because you don't want to have to manually copy and paste things back and forth. Repo Prompt lets us build really sophisticated pieces of context; you can see down here we're at 50.7k tokens, so we're working with lots and lots of tokens. One of the nice things about Pro mode is you get 128k tokens of context, so you can work with really big requests, and because they're unlimited you can do a lot of different things. So that's the first tool we'll be using.
do a lot of different things okay so that's kind of the the first tool that we're going to be using one of the tools that I use is called flow. okay flow. this is the um speech
flow. okay flow. this is the um speech to text tool that I use so I will use this quite often if you've seen some of my videos before you've probably noticed me uh using that previously but uh I'll
use that to kind of increase the speed at which I can work right so speaking is about three times faster than typing so that comes in handy quite a bit uh you're also going to need my 01 XML
parser okay so I built this tool to tackle the other direction okay so repo prompt kind of takes us from codebase into chat gbt but we need a tool to take
us from uh inside of chat GPT and once we have the XML response back from o1 we need a fast way to apply that diff to our codebase K and so my 01 XML parser
tool here uh will get you all set up so go to that repo let get up. com /m rley one-x ml- Paro here okay I have a little quick start guide so you can get up and running it's a really simple tool uh if
you kind of follow this blueprint but that will allow you to take the responses that you get back from o1 and apply it really easily and instantly so that you don't have to go copy and paste
We're also going to be using Cursor, though you can pretty much use whatever IDE you want; this workflow isn't super Cursor-dependent. At the end of my workflow I do like to handle those small final tweaks with Cursor Tab and Cursor chat, things like that. If you're familiar with a Cursor-heavy workflow, the step this largely replaces is your Cursor Composer-type workflow: we're getting away from Cursor Composer and replacing it with this o1 Pro step.
Let's talk a little bit about prompting with this model. I cannot emphasize this enough: if you lazy-prompt this model, you're going to get worse results. The trick is putting in the time to come up with the fastest process you can to reliably get good results from the model. I've been testing this a bunch, and a lot of it is based on my existing workflow, the one I'd been using before this, which was all done in Cursor, and a big part of that is your cursor rules. Let's take a look at the cursor rules project: you can see I get really detailed here, and I outline my tech stack. One thing I want to emphasize is that I've picked a tech stack that works really well with AI models. We're using TypeScript exclusively, which AI models excel at, and we're using really popular libraries: Next.js, Tailwind, Postgres and Supabase (well known at this point), Clerk for authentication, and of course Stripe. So everywhere we're writing code, the core foundation, the very first level of our code, is something models are really good at understanding, which is part of why we've chosen the tech stack we've chosen; this is what I work with. As you can see, I've really let the model know how I structure my project, which is pretty nice, and of course with Repo Prompt we also include the file tree.
So o1 Pro is getting really good context and a really good feel for how everything is structured, which is great. I'm doing everything from telling the model how to import things, how I want files named, how it should handle envs, and how I want types handled (I work with TypeScript, so how I want type files written). I have all these frontend rules: here's how you should do components, here's how you should organize components, here's how you should fetch data, here's when to use server components and when to use client components, here's how data schemas work, and here's an example of a data schema so it knows how to write that code. It's an incredible way to work, and it's been powering my Cursor workflow for quite a while. It's basically as if I added somebody to my team and said, hey, here's our tech stack and here's how to use it; if you've never seen this before, here's everything you need to know. When you pass this context to the model, it knows how to work: it takes whatever your request is and says, okay, the user has outlined for me how I should write the code, how it should be organized, and why we're doing things the way we do them. If you do use it (and you can look at these cursor rules files on my template repo, that file is included there if you want to see an example), you'll obviously have to customize it for your own project if you're using a completely different tech stack than me.
That's totally fine, but I do recommend this stack, especially if you're not a developer or you haven't formulated your own and you're just getting started with this stuff; it's a really great tech stack. This is all popular, very standard stuff; we're not doing anything weird here, I'm not throwing random things at you. It's the kind of thing where if you ask AI models, go into Perplexity, or do some Googling, you're going to find resources for all of it. But the point is, if you want requests as good as the one you just saw, you're really going to need to prompt well, and the best way to instantly upgrade your prompts is of course to use the format I just showed you.
The other thing you'll really need, and I'm actually going to do this right now, is the format prompt itself, so I need to put it in the repository; you're going to see me do a quick bit of work here. I'm not going to do this with o1 because it's literally just a single copy and paste: I'm going to go to the readme and paste in the o1 XML prompt ("add o1 XML prompt"). Now you can go copy and paste that prompt, which is great, because it would be really annoying to have to decipher it from the video. So if you go to the XML parser repo and open the readme, you'll see the format for the prompt. The styling there is a little weird, but that's okay; you can literally just copy and paste the entire thing and put it into your instance of Repo Prompt. In Repo Prompt, that goes under new saved prompts: create one called "o1 format" or something like that, and paste it in there.
Of course, you're going to want to attach those prompts to your requests. I don't want to ramble too much here, but the really important thing is that you want to use o1 Pro. I know that if you don't use these models professionally for coding, it may not be worth the price tag for you, which is totally fine; in that case use it with o1 and let me know how it goes, because I've literally only been doing this for two days and it's super reliable with o1 Pro. If it's not reliable with o1, let me know, because I'd be curious; I did do A/B testing (I think I mentioned that already) and it worked for the most part, but o1 Pro definitely takes it to a new level. Again: Repo Prompt, Wispr Flow, Cursor, and make sure you clone the XML parser. You really have to put the time in to build those cursor rules; I'm telling you, it will materially impact the quality you get. The o1 formatter prompt is an absolute requirement, you cannot get around it; you will not get the XML format you need for this tool if you don't use that prompt. And of course there's the o1 base prompt; I'm not going to include that one, and you could probably improve it a little bit anyway.
So that's my workflow. I'll probably do some more videos over the coming week or so to show you this a little more in action with production-grade use cases; obviously we just did a silly little test here, but the point was to show you how it works. If you found that helpful, great. Takeoff is what I spend most of my time on; I really love teaching people, I love teaching AI skills, and we have a lot of good courses up right now with a ton of updates coming, including an o1 workflow course where I'll show you a couple more tricks. So if you found this helpful, you may want to check out Takeoff.