
How I Code with Python, Cursor, and Claude Code

By Shaw Talebi

Summary

Topics Covered

  • UV enables fast, reproducible environments
  • Voice dictation accelerates raw ideation
  • A PRD clarifies overlooked project slices
  • Iterating on plans before coding saves time
  • Prototypes slash dev time 10x

Full Transcript

Hey everyone, I'm Shaw. In this video, I'm going to share how I build AI apps with Cursor, Python, and Claude Code.

With new AI development tools coming out every month, it's easy to feel simultaneously overwhelmed and behind.

However, my goal with this video is to share the simple workflow and tech stack that works for me, in case it's helpful to other technical builders.

Here's my entire workflow. This is how I go from project idea to actually generating code in less than 30 minutes. The first thing I do is create a folder for the project. Then I open that folder up in Cursor, which is my IDE of choice. And then, since I exclusively write code in Python, I initialize the Python project using UV. For projects that aren't trivial, meaning they require some kind of system design and thinking through the tech stack, I actually won't jump straight to writing code. The first thing I'll do is go to ChatGPT and have it help me write a PRD, or product requirements document, first. Finally, I'll take that PRD, dump it in my project folder to initialize Claude Code, and start writing code.

To show you what this looks like in action, I'm going to walk through a concrete example where I build one of the AI agent ideas from my previous video: an AI tutor that takes questions from students and generates responses in my likeness. The way it does that is it has a system prompt that describes my style and how I respond to questions, but it also has access to all my YouTube videos. It runs as an agentic RAG system, where the agent can fetch relevant context, things I've said in my YouTube videos, and use that when responding to questions.

The first thing I'm going to do is create the project folder. For little prototype projects like this, I have a dedicated sandbox directory on my laptop.

Here I'll create a new folder called AI tutor. So that's step one done. The next thing I'm going to do is open up Cursor. Now that we've got Cursor open, I'll open up the project; we'll go find the folder. So now we've got our empty project.

This is Cursor. It's just an IDE, but it was one of the first IDEs to build AI directly into the user experience.

While I used to use the AI panel in Cursor a lot, ever since Claude Code came out I actually prefer using that via its command line interface, which I'll run inside Cursor; we'll see what that looks like a little later.

Now that we've got Cursor open, the first thing I want to do is initialize the Python project. Python is the main programming language that I use, and it's also the main programming language used in AI and machine learning. To do this, I have UV installed.

If you're not familiar with UV, I talked about it in a previous video, but the short version is that it's a super fast Python package and project manager. It's much faster than plain pip, which I used for the longest time. While the speed of UV is great, what I really like about it is that it makes it easy to track your project's specific dependencies, which makes my projects much easier to reproduce and deploy via cloud services.

And it's really simple to initialize a project with UV. You just run uv init, and it creates a bunch of things you might need. It's going to create a .gitignore file, which includes all the things we typically want ignored by git. But I'll go ahead and add a couple of other entries: .env, because I'll be using one for this project, and .DS_Store, a hidden file macOS always creates that I don't want tracked in git. Python 3.13 is good. It also gives us a main.py file, but I'll go ahead and delete that because I don't need it. And it automatically creates a pyproject.toml file, which tracks all the dependencies of the project and their versions. Anytime we add something, say we wanted to add OpenAI's API, we can just run uv add openai, and it adds that to our dependencies list along with all the libraries needed to run OpenAI's library. It also automatically creates a .venv folder, so it sets up the virtual environment for us.

Then we can switch to this environment by doing Select Interpreter and choosing the new virtual environment it just created for us. I'll open up a new terminal, so now we're in the proper AI tutor environment, and we can kill the old one. A couple of other things: UV automatically added a README file for us, which we'll use a little later. It also created a uv.lock file, which is a more detailed way of tracking the project's dependencies. It doesn't just include the dependencies we add explicitly, but all the transitive dependencies that were installed when we installed OpenAI. So it's a very detailed snapshot of the exact environment I'm using to create and run this project.

So far we've created a new folder, we've opened up Cursor, and we've run uv init. The next step is to actually start thinking about this specific project: what we're going to build and how we're going to build it. For this, I like using ChatGPT, because I really like its interface for dictation and talking to the LLM. What I do here is use the dictate feature so I can just talk out all my ideas. It can be raw and messy. Then I'll keep iterating with ChatGPT to refine the PRD. That way, not only do I have a document that clearly spells out what I'm trying to do, but I also get clearer in my own thinking about what it is I'm actually trying to build.

So let's generate the PRD for this specific project. I want to build a very simple prototype of an AI tutor for the students of my AI Builder Bootcamp. Basically, I want an AI version of me that can take student questions and give responses that sound like me. A key feature is that the agent will have access to all of my YouTube videos. So when a student asks a technical question and there's a relevant YouTube video for it, the agent can automatically fetch the relevant transcript and use it in responding to the student. The system should be very simple. In terms of tech stack, I want to use OpenAI's Agents SDK for the scaffolding and for accessing the large language model. For the agentic RAG piece, creating the index and search feature, I want to use ChromaDB. I want everything to run locally and be super simple. The real goal is to have a starting place so we can start iterating and optimizing the system: get some minimal version working first, and then we can optimize the retrieval system, the user interface, and the system instructions. So, am I missing anything? Is there anything else I need to consider for building this project?
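To make that tech stack concrete, here's a minimal sketch of what the core of such an agent could look like, assuming the openai-agents and chromadb packages; the collection name, paths, instructions, and sample question are my own placeholders, not values from the video:

```python
# Minimal sketch of an agentic RAG tutor, assuming the openai-agents and
# chromadb packages. Names like "youtube_chunks" and chroma_db/ are
# hypothetical placeholders.
import chromadb
from agents import Agent, Runner, function_tool

# Persistent local vector store holding chunked YouTube transcripts
chroma_client = chromadb.PersistentClient(path="chroma_db")
collection = chroma_client.get_or_create_collection("youtube_chunks")

@function_tool
def search_videos(query: str) -> str:
    """Fetch transcript chunks relevant to the student's question."""
    results = collection.query(query_texts=[query], n_results=5)
    return "\n\n".join(results["documents"][0])

tutor = Agent(
    name="AI Tutor",
    instructions="Answer student questions in Shaw's style. "
                 "Use the search_videos tool when a question is technical.",
    tools=[search_videos],
)

if __name__ == "__main__":
    result = Runner.run_sync(tutor, "How do I fine-tune an LLM?")
    print(result.final_output)
```

The key idea is that retrieval is just a tool: the agent decides when to call search_videos rather than following hard-coded routing rules.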

The reason I like to ask "What am I missing? What else do I need to consider?" is that often when you have a project idea, you only have a slice of it, and that's what's getting you excited. At least that's how it works for me: I get excited about one slice of the project, but when I actually stop and think about it, there are a lot more components to it, a lot of things I didn't consider. Having ChatGPT help me think through the project gives me a much clearer idea before I write a single line of code.

Reading through this, for the most part it's pretty aligned with what I had in mind. A couple of things: I don't want a rule-based system for deciding when to do retrieval and when not to; I just want the agent to decide on the fly. So I don't want any of these routing rules. We don't need that "level" metadata. And we don't need this simple feedback UI. So even though a lot of it's aligned, there are still some issues, as expected. This is where I'll just read through it and dictate feedback again.

I don't want any kind of rule-based classification or routing logic for the RAG system. I just want to give the agent a retrieval tool and have it access data when it decides it's necessary, letting the agent use its best judgment, coupled with specific guidance in the system prompt for when to do retrieval and when not to. We don't need "level" as a metadata tag, and we don't need tags and topics as metadata either. Just the video ID, title, URL, and the start and end time of the chunk is good enough. We want to keep it super simple; we can make it more sophisticated later.
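For illustration, the per-chunk metadata described there could be as small as this (a hypothetical sketch; the field names are mine):

```python
# Hypothetical sketch of the per-chunk metadata described above.
from dataclasses import dataclass

@dataclass
class ChunkMetadata:
    video_id: str      # YouTube video ID
    title: str         # video title
    url: str           # link to the video
    start_time: float  # chunk start, in seconds
    end_time: float    # chunk end, in seconds
```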

"Sounds like me": that's all good. The logging is good, so keep the student question, but don't do classification. We just need the question, the retrieved chunks, and the final answer. We don't need whether they clicked the video link or not, because we shouldn't expose the video link in this initial version. So the logging is good; it's just a little too much for this initial prototype.
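A minimal version of that logging could look something like this (a sketch; the file name and schema are assumptions, not from the video):

```python
# Hypothetical sketch: append each interaction as one JSON line.
import json
import os
from datetime import datetime, timezone

def log_interaction(question: str, chunks: list[str], answer: str,
                    path: str = "logs/interactions.jsonl") -> None:
    """Append one question/retrieval/answer record as a JSON line."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,        # the student's question
        "retrieved_chunks": chunks,  # transcript chunks the agent fetched
        "final_answer": answer,      # the agent's final response
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```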

We do not need the simple feedback UI, and we don't need to worry about error analysis here because I'm going to do that separately. Yes, let's use Streamlit, and we'll just run it locally. That'll help keep things very simple; we can worry about the UI in the next version of this.
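For context, a Streamlit front end for a prototype like this can be tiny. Here's a rough sketch, with a placeholder ask_tutor() standing in for the real agent call:

```python
# Hypothetical sketch of a minimal Streamlit chat UI for the tutor.
import streamlit as st

def ask_tutor(question: str) -> str:
    # Stand-in for the real agent call (e.g., Runner.run_sync(tutor, question))
    return f"(placeholder answer to: {question})"

st.title("AI Tutor")

question = st.chat_input("Ask a question")
if question:
    with st.chat_message("user"):
        st.write(question)
    with st.chat_message("assistant"):
        st.write(ask_tutor(question))
```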

There's no need to ask the student their name and current module at the start of the session; just let them ask questions, and all of that can be inferred. Don't worry about fallback escalation logic right now; we can figure that system prompt stuff out later. Config: yes, let's have a .env file for that stuff, and we can keep the config.py.
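That config.py could be as simple as loading the .env values into module-level constants, for example (a sketch assuming the python-dotenv package; the variable names are hypothetical):

```python
# Hypothetical sketch of config.py: load secrets from .env, which stays
# out of git thanks to the .gitignore entry added earlier.
import os
from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the environment

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
CHROMA_DB_PATH = os.getenv("CHROMA_DB_PATH", "chroma_db")
```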

YouTube ingestion script: yes, we'll need some kind of pipeline to do that. Local dev story and a simple entry point: yeah, we're going to use UV for this project, so it's going to be uv run app.py.

Typically the first iteration isn't great, the second iteration is usually better, and by the third iteration I have something that's pretty solid. Okay, so it seems like it's going off and writing the PRD itself. We can review this. I feel like for the most part it did a good job, but there are still some details we need to fix.

I don't really like this layout. It's good for the most part, but I want to adjust it a bit: create a utils folder, and in it have agent.py, retrieval.py, and ingest.py. Since we're only ingesting from a single source right now, a single Python file called ingest.py can handle that, and if we expand later, we can change it as needed. Everything else looks good. The data model looks good. Chunking looks good.

So this is the third iteration, and usually at this point it'll be pretty solid. System behavior looks good. One problem with ChatGPT is that it will start out formatting really well, but when it puts a markdown text box in its response, the formatting gets all messed up. It was formatting perfectly, and then when it added that text box, it started getting confused. So what I'd like to do at this point is just tell it to export the PRD as a markdown file that I can download. Oh, that's not right, so let me pause it and try again. Okay, it seems like it's going to do it right now.

Of course, you could use a different model or a different setup to do the PRD; we could have also just done it within Cursor. The only reason I used ChatGPT is because the voice interface is critical for me. If you're using one of the good voice-to-text dictation tools, like Wispr Flow or whatever, then you can do all of this directly in Cursor. But I'm going to go ahead and download this file, copy it, and then go to Cursor and create a new folder called llm. This is going to be like a secret folder that's mainly for the LLM. We'll paste the PRD in there and then add it to the .gitignore file; I'll add the entry llm/.

So now we have our development environment set up, and we know what we want to build. Now we can start building. This is the final step: spin up Claude Code to start actually generating code. To do that, you just go to the terminal, type claude, and say yes, we trust it. And we've got our little Claude Code agent spun up here. Then we'll just run the /init slash command.

What that's going to do is analyze the codebase and, hopefully, read the PRD. So it read the llm folder and is going to generate a CLAUDE.md file. The CLAUDE.md file is very similar to a PRD, but it's specifically optimized for Claude Code, and it's something we'll be able to iterate on and refine as we make progress with the project. We'll see it has a lot of the same stuff, like a project overview and environment setup, so we'll just hit yes. Here's our CLAUDE.md file. It looks pretty good just skimming it; it's pretty aligned with the PRD. Now that we have the CLAUDE.md file, we can actually start implementing this thing.

But again, because this isn't just a trivial one-script project, there are multiple scripts, and we have to give some thought to how we're actually going to design this system. So I usually like to go to Claude and ask it. Here I'm going to use the default dictation on my Mac: "Based on the CLAUDE.md file, come up with a step-by-step implementation plan for this project." Even though Macs come with dictation, it's not as good as Wispr Flow or the one built into ChatGPT, but for small stuff like this, it usually does a good job.

It did a good job on this specific one, but let's see what it's trying to do. Check for Python files in the root directory: sure. Right now the main goal is to come up with an implementation plan, basically the step-by-step instructions for making this thing happen. This is a pretty cool feature of Claude Code: it'll ask clarifying questions when it wants to. And this is a good thing; I didn't think about this. So, we do want to use the YouTube Transcript API. For the OpenAI Agents SDK implementation, what should we use? Just use the default functionality for calling the LLM; this is built into OpenAI's Agents SDK. Use web search to familiarize yourself with the library. And you can see the dictation tool completely messed this up. So close it, open it, and let's try again: ask me the questions again. Okay, we'll do this, and this time I won't use the dictation feature because it can get messed up sometimes. So I'll type: use the default functionality in the OpenAI Agents SDK, and use web search to familiarize yourself with the library. Then let's see: embeddings. Chroma uses all-MiniLM-L6-v2 as its default; let's not do that, let's just use OpenAI's embeddings. Should we include an example configuration setup in the implementation? Yeah, I guess, since we'll be sharing this. Okay, so we've got these answers, and we'll submit them.
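For reference, pointing Chroma at OpenAI's embeddings instead of its default model looks roughly like this (a sketch using chromadb's OpenAIEmbeddingFunction; the embedding model name is my assumption):

```python
# Hypothetical sketch: use OpenAI embeddings in Chroma instead of the
# default all-MiniLM-L6-v2 model.
import os
import chromadb
from chromadb.utils import embedding_functions

openai_ef = embedding_functions.OpenAIEmbeddingFunction(
    api_key=os.getenv("OPENAI_API_KEY"),
    model_name="text-embedding-3-small",  # assumed model choice
)

client = chromadb.PersistentClient(path="chroma_db")
collection = client.get_or_create_collection(
    "youtube_chunks", embedding_function=openai_ef
)
```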

I actually really like this feature of Claude Code. Now it's going to do the web search about OpenAI's Agents SDK, which is good. What's really cool is that if you build custom agents, like sub-agents in Claude Code, you can also give them access to these tools: the web search tool, the web fetch tool, the ask-questions tool. That makes it really easy to create agents that are more specialized than your default Claude Code agent. It's familiarizing itself with OpenAI's Agents SDK now. Another thing I might do, if I see that Claude is getting confused by implementation steps or I'm using some obscure library, is go to the documentation. For example, I can go to the ChromaDB docs, grab their getting started page, copy it, create a new file called chroma-getting-started.md, and just drop it in. There's also a nice MCP server called Context7, which gives Claude access to up-to-date docs. I haven't used it yet; I think it's free, but it requires an API key to set up, and that friction has stopped me from installing it. But between Claude being able to do web search and me giving it specific context files in the llm folder, it works most of the time for me.

So it finally came up with the plan after lots of thinking. Let's see. Even though I feel like I have a good idea of what's going to be in these implementation steps, it's still important to read them, because even one little mistake, one little thing that's not aligned with your expectations, can cause a lot of problems down the line. It also gives us an opportunity to keep iterating on the plan and on the scope of the project, which is just a good thing: it's not only good for our mental model of what's happening, but it's good for building this in the fastest and best way possible. Setting up dependencies.

Let's see. Create config: okay, that's a cool way to do it. Now here's a great example: I don't want it to do this. I want it to import specific files, because things might get refactored and then this becomes more of a headache, so I want to change that. Let's see. Ingest pipeline CLI interface: okay, this is a CLI interface for the chunking process, that's fine. Retrieval core components: cool. Agent layer: here's the AI tutor. Also, I want to have the system prompt in a separate folder. I should have specified that earlier, but this is why it's good to iterate, because now I realize I want a separate prompts folder. So we'll say: don't make utils a package, just import each file directly, to make it easy to refactor later. And the other thing: don't define the system prompt in a Python script. Create a system_prompt.md file and save it in a prompts folder.
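As a quick illustration of both preferences, here's a minimal sketch of how the top of agent.py might look (the paths and names are hypothetical):

```python
# Hypothetical sketch: import the utils modules directly (no package
# __init__), and load the system prompt from markdown instead of
# hardcoding it in Python.
from pathlib import Path

from utils.retrieval import search_videos  # direct file import, not a package API

SYSTEM_PROMPT = Path("prompts/system_prompt.md").read_text()
```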

But let's see. It wants to do the dependencies first, then set up the data layer, then the agent layer, then the UI, and then polish. Okay, that's a reasonable way to do it in my opinion. But we'll have it refine the plan based on these two updates.

Another thing: sure, you can jump into the coding right from the PRD. But a lot of times what happens is there will be little lines like these. If we had just had Claude implement the whole thing from the jump, and it did this utils import, which isn't the way I wanted it, or defined the system prompt within a Python script, which also isn't the way I wanted it, I would have to go in and manually fix the code, or prompt it to fix those two specific pieces. So just in terms of the feedback loop, it's much faster to iterate on a plan, like we did here, than it is to have Claude implement the software, test the behavior, review the code, and then give feedback on specific pieces. The latter just takes much longer. As much as you can iterate at the planning phase, it's going to save you a lot of headaches down the line. Of course, there are a lot of things you can't plan for and won't see coming, but for the things you can easily spot at the planning phase, just fix them, and free up your bandwidth for the things you don't see coming in the implementation.

Let's just make sure everything looks good now. prompts/system_prompt.md: okay, got that, and we can refine the system prompt later; there's no need to iterate on that yet. ingest.py: good. I think at this point we've got a pretty good plan. So now what I'll do is hit yes and accept auto-edit. Oh, actually, before I do that, I'm going to open up a new terminal and run git add --all, because it's very important to version your project in case anything goes sideways. Then git commit -m, and I'll just call it "initial commit". Now we're tracking changes. I'll just say keep going. So now it's going to go through the implementation process, and since we did the git versioning, if it does something stupid or something we didn't expect, we can always roll back to this initial snapshot, make corrections, and try again.

From here it might take a while, but this is just the beginning of my development process. This part usually takes me 30 minutes or less, and then I'll have the whole prototype done in probably less than 2 hours.

Before, building something like this would easily take me a day, or maybe even multiple days. Now something that may have taken me two days to code can take me two hours, which lets me iterate on and refine ideas much more quickly. So either, one, I build the prototype, realize it's a stupid idea no one cares about, and move on to the next thing, or, two, I get the prototype done very quickly and start iterating on the second version, basically having version 1.0 done in the same time it would have taken me to create the prototype before.

So that's my workflow. Hopefully seeing me walk through it step by step either, one, makes these tools a little less scary if you weren't sure where to start, or, two, gives you some ideas on how to adapt this to your tech stack and how you build software. If you have any questions about my process or any of the tools I used here, please let me know in the comment section below. And as always, thank you so much for your time, and thanks for watching.
