
My top 6 tips & ways of using Claude Code efficiently

By Academind

Summary

Topics Covered

  • Reject AI Loops, Stay in Control
  • Edit Plan Mode Proposals
  • Deploy Custom Agents and Skills
  • Explicit Instructions Beat Hope
  • Humans Still Write Code

Full Transcript

Over the last couple of months, like many others, I've used Claude Code a lot. I've used it to help me build projects like Build My Graphics, some other projects which are yet to launch, and lots of internal tools.

I'm not using it for vibe coding, to be very clear about that, but I'll get back to how I use it, because that's exactly what this video is about. I wanna share my top Claude Code usage strategies, my six top usage strategies. The six things I would recommend doing when you're using Claude Code to get good results.

Now it's worth noting that I'm using Claude Code with the Max plan, which gives you 20x the usage. Obviously, heavily subsidized by Anthropic, but you get lots of usage out of Claude Code here, and if you're not running Claude Code in a loop, in a Ralph loop (and there's been lots of hype about that), you get lots of usage out of it.

And that's actually already my first point: I don't loop, I don't Ralph, I stay in control. And that's important to me. I know there's lots of hype these days about running Claude Code in a loop.

We have that entire Ralph Wiggum thing here, where the idea is that you have a detailed product document, in the end a document that details all the different steps you need to take to build the product you wanna build, and you hand that document off to Claude Code in a loop, a simple bash loop, for example, where you run Claude Code over and over again with a prompt where you tell it to look into the document, pick the next step, tackle that step, and then work its way through the document until it's done.
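The loop described above can be sketched in a few lines of bash. This is a minimal illustration, not the author's setup: the `claude` CLI does support `-p` for non-interactive prompts and `--dangerously-skip-permissions`, but the plan file name, prompt wording, and DONE convention are assumptions.

```shell
#!/usr/bin/env bash
# Minimal Ralph-style loop sketch. Assumes the `claude` CLI is installed;
# PLAN.md, the prompt wording, and the "DONE" convention are placeholders.
ralph_loop() {
  local plan_file="${1:-PLAN.md}"
  while true; do
    # -p runs Claude Code non-interactively with the given prompt.
    claude -p "Read ${plan_file}, pick the next unfinished step, implement it, and mark it done. If all steps are done, reply only DONE." \
      --dangerously-skip-permissions > last_run.log
    # Stop once the model reports the plan is complete.
    grep -q "DONE" last_run.log && break
  done
}
```

Each iteration starts a fresh session, so the plan document is the only memory the loop has between runs, which is exactly why the technique depends so heavily on a detailed plan.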

And the idea simply is that by having a detailed plan, you get good results. Now I will totally agree that a detailed plan matters.

Planning matters. I totally agree with that. It absolutely does matter, no matter what you're building. But I'm not a fan of the AI working its way through a plan on its own. Instead, I like to stay in control, as I mentioned. That is really important to me.

I feel like AI can be a very helpful tool for developers, and I shared that in plenty of other videos on my other YouTube channel, Maximilian Schwarzmüller.

I believe every developer should use AI as a tool, but I don't feel like it can really give me good results if I let it run on its own.

That's why I'm absolutely not into vibe coding.

I tried the Ralph loop; I'm not just saying that. For me, the results were not convincing.

I wanna stay in control. And kind of related to that, my second point is use plan mode, because plan mode is amazing.

In Claude Code, you can cycle through different modes with Shift+Tab, and there is a plan mode built into Claude Code.

And the idea behind that mode, and I'm sure you know that, is simply that it does not go ahead and execute changes right away, or do something based on your prompt right away. Instead, it first gathers information, explores the code base, and tries to understand your prompt.

It may also look up documentation, though I'll get back to that. And then it makes a plan, and then you approve that plan or not.

It may even ask you some questions if more clarification is needed.

Now plan mode, therefore, is really helpful for a couple of different reasons. For one, it can salvage bad prompts. So if you're writing poor prompts, plan mode can help you here, because it can kind of help you get to a better prompt. By asking follow-up questions, it might clarify things that might not be clear from your prompt. But of course, you should try to write good prompts in the first place, because that's important.

AI is a tool. The output will only be as good as your input. And even then, there is randomness involved, let's be honest. But if you're throwing bad, imprecise prompts with missing context at the AI, you will not get great results out of it.

So good prompts with good context engineering, so with the right context being provided, matter.

So good prompts are still important.

That is really worth noting here.

But of course, you definitely have plan mode as a little savior that can come in.

But that's still just one advantage of plan mode.

Another advantage of plan mode is that it shows you what the AI wants to do, and that is really valuable in my experience.

Because it's far too easy to just blindly trust the AI to do something and that it does the right thing.

But hey, it's AI. (laughs) It's not necessarily going to do the right thing. It's definitely not necessarily going to write the code you might want it to write. And the great thing about plan mode is that even if you don't get follow-up questions, because your prompt maybe already was good enough, like here for me in this example, I still get a plan. And I know a lot of developers that will just blindly accept that plan, so they will just hit enter and let it do its thing, but don't be that kind of developer. Take a look at that plan.

It tells you what it wants to do. If you asked it to fix an issue or find a solution for a bug, it will tell you what it thinks causes the bug. And by the way, when I say "think," you always wanna keep in mind that we're dealing with token generators here. But still, it will tell you what it wants to do, and you should take a look at that.

And then, here's the shocking part: you should feel free to edit that plan.

So don't blindly accept it; edit and tweak the plan if needed. You can and should do that. It's nothing you have to just accept and get done with, fixing any problems that might occur thereafter. Instead, take a look at the plan, tweak it to your liking, and then accept it. Because you can always go in here and say, "I don't like the text on the 404 page," and then be more precise. And then you can get it to generate a new plan, which you then may accept. And that's a way better approach than accepting the first thing, just to then find out that you don't like what it did and asking it to fix it. That burns more tokens, is more work, wastes time, and fixing code with AI is not a lot of fun.

And by the way, I cover in detail how to use Claude Code, how to write prompts, how to do context engineering, how to provide general rules, and how to use tools Claude Code offers like agents or skills, to which I'll get back in this video too, and I build a complete demo project, all in a brand new course I just launched about Claude Code.

You find a link to the course with a great discount below the video in case you're interested.

The third point here is that I use agents and skills, and what do I mean by that? Claude Code allows you to build custom sub-agents, something I also show in my course, along with all the things I mention here and much, much more.

The idea here is that Claude Code can launch these sub-agents with their own dedicated context window to save tokens in the main context window, so that you don't run out of context space there, and those agents can then specialize in certain tasks. And one agent I like to build, and build in the course, is a Docs Explorer agent, for example, which is simply an agent optimized for browsing documentation. And I give that agent certain tools like web search or the Context7 MCP. The Context7 MCP, in case you don't know, is an MCP server that gives AI agents easier access to the documentation of third-party libraries or languages you might be working with.

I'm not a huge MCP fan, I will say that, because I feel like they're token-inefficient.

AI is not that good at using MCPs in my experience, and I prefer built-in tools and I don't need that many tools anyways.

But the Context7 MCP is pretty amazing, and I built my own Docs Explorer agent which has the specific task of using web search and this MCP to browse documentation.
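A custom sub-agent like this is just a Markdown file with YAML frontmatter under `.claude/agents/`. The following is a sketch of what such a Docs Explorer agent could look like, not the author's actual file; the description wording and the exact Context7 tool names are assumptions:

```markdown
---
name: docs-explorer
description: Use this agent whenever up-to-date documentation for a third-party library, framework, or language is needed before implementing something.
tools: WebSearch, WebFetch, mcp__context7__resolve-library-id, mcp__context7__get-library-docs
---

You are a documentation research specialist. Given a library and a task,
look up the relevant official documentation (via web search or Context7)
and report back a concise summary of the APIs and patterns needed --
nothing else. Do not write application code yourself.
```

Restricting the `tools` list is what keeps such an agent focused: it can search and fetch docs, but it has no file-editing tools, so its only job is to hand a summary back to the main session.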

And in addition to that custom agent, I also equip Claude Code with skills, and those will be skills that are project-dependent or project-specific.

So for example, if I'm working in a Next.js project, I might give it some skill that describes some best practices I want it to use in relation to Next.js, to ensure that it uses Next.js or writes Next.js code that is in line with my expectations and my preferences.

And it's worth noting here that there also are open source initiatives, like Vercel's skills, that make it easy to install specific skills into projects and load openly shared open source skills into a project, like the React best practices shared by Vercel. I will also do that in many projects, but I still like to craft my own skills with my very own preferences, patterns, and simply rules that come from my experience as a developer that I want the AI to use. Because the idea behind skills is that the AI will read those skill files when needed.

They are lazily discovered. They're not always loaded in full into the context. Instead, they're loaded when the AI decides to use a skill, and then that skill can give the AI extra context, extra instructions, which should increase the chances of getting good results. And that's important here.
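Skills follow a similar file-based pattern: a `SKILL.md` under something like `.claude/skills/nextjs-best-practices/`, where the frontmatter `description` is what Claude reads to decide whether to lazily load the rest. A hypothetical Next.js skill might look like this; the rules are illustrative, not the author's:

```markdown
---
name: nextjs-best-practices
description: Conventions for writing Next.js code in this project. Use whenever creating or editing pages, layouts, routes, or data-fetching code.
---

# Next.js conventions

- Use the App Router; never add files under `pages/`.
- Default to Server Components; add `"use client"` only when the
  component needs state, effects, or browser APIs.
- Fetch data in Server Components, not in `useEffect`.
- Use `next/image` and `next/link` instead of raw `<img>` / `<a>` tags.
```

Only the frontmatter sits in context by default; the rule list below it is pulled in just when the skill is triggered, which is what keeps the approach token-efficient.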

We're always talking about increasing our chances because it's still AI, there is randomness.

You can't be sure that it does stuff the way you want it to do, but you can try to increase the chances.

That's the entire game of using AI as a developer in my experience. And that's therefore my fourth point here: explicit over implicit. Now, what do I mean by that? I simply mean that the AI may do something the way I want it to, but I can't be sure about it. So I'm rather explicit.

So for example, if I'm using the BetterAuth library in a project and I want to use authentication via Google, I write that explicitly. But I'll also say something like, "Use the Docs Explorer agent" — which is the custom agent I wrote which is good at exploring documentation, which I mentioned before — "to explore the relevant BetterAuth docs before implementing," because I've seen it too often that AI will just head off and do something, and then I see that it didn't do what I wanted it to do, and that it, for example, has access to certain agents but it just won't use them even though in theory it should.
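Spelled out, such an explicit prompt might read like this; the wording is hypothetical and assumes a docs-explorer agent like the one described earlier exists in the project:

```text
Add "Sign in with Google" to this app using BetterAuth.
First, use the docs-explorer agent to look up the relevant
BetterAuth social sign-on docs, and only then implement it.
Do not guess at the BetterAuth API from memory.
```

The last line is the important one: it converts the hope that the model will check the docs into an instruction, which is exactly the explicit-over-implicit idea.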

And I don't want that. I'm not getting anything out of saying, "Oh, the AI is pretty bad. It didn't do what I wanted it to do," when I could have told it what it should do. It's a stupid tool that can be amazing, kind of. And therefore, don't hope that it does something, or wait for it to fail just so that you can then be convinced that AI is not good. Instead, know that it is good at generating lots of code quickly, use that as its strength, and give it the necessary instructions it needs to increase the chances of getting the output you want, essentially.

So that's what I mean with explicit over implicit.

I'd rather tell it explicitly what I want it to do, if I know what I want it to do, than hope that it does it. I hope that makes sense.

I have no problem with telling the AI something which it maybe would've done anyway, if I can then be sure that it will do what I want it to do. So for example, here, it used the Docs Explorer to look up the BetterAuth docs, and I didn't have to hope for that.

My fifth point is kind of related to this: trust, but verify. AI is great, and if you give it a good prompt, if you are specific, if you provide the right context, depending on the problem that was tackled, you have good chances of getting decent results.

But don't blindly trust that. You are in control, so you should verify the results. You should not blindly trust them. Carefully review the code. Don't just think that it's correct.

Think about it critically. Don't dismiss it as AI-generated and therefore certainly bad.

Instead, accept what's good and try to improve what's bad.

And bad can also be something that maybe is okay, maybe works, but isn't using a pattern or an approach you want to use. So do that. But also give the AI tools for self-verification.

So give it tools for self-verifying, because that can vastly improve results.

Actually, I guess that's my second point.

The first one is: review the code, as I just mentioned. But with tools for self-verifying, I mean stuff like unit tests, E2E tests, and also potentially, depending on what you're building, the Playwright MCP, for example. The idea here simply is that the AI can do these things on its own. You might wanna tell it to, as I mentioned under explicit over implicit, but it can run tests.

It can run linting commands, for example.

So that would be something else in here.

And it can also access the browser with tools like the Playwright MCP, though you wanna be careful here.

That is token-heavy, so it burns a lot of tokens, and you therefore don't wanna use it all the time. At least you wanna know that it can burn a lot of tokens.
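One way to wire this self-verification in is a short section in the project's `CLAUDE.md` (or a skill) telling Claude how to check its own work. The snippet below is a sketch: the `npm` commands assume a typical Node project, and the wording is illustrative rather than the author's:

```markdown
## Verification

After every change:

1. Run `npm run lint` and fix any reported issues.
2. Run `npm test` and make sure all tests pass.
3. Only for UI changes, and only when asked: check the result in the
   browser via the Playwright MCP (it is token-heavy, so use it sparingly).
4. Never change a test just to make it pass; fix the code instead.
```

Rule 4 addresses the failure mode mentioned below: without it, a model under pressure to get tests green will sometimes adjust the tests to the code rather than the other way around.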

But giving the AI tools that help it to evaluate the results on its own can vastly improve the quality of the output. It's still not a guarantee.

For example, the AI may write tests that simply pass and it might adjust the tests to the code instead of the other way around, so you wanna be careful.

But still, this can lead to better results, and it still does not mean that you should not review the code and, of course, also test yourself. You wanna do that.

You're in control and you have responsibility for the code.

You can't say, "AI wrote it, it's bad." Sorry.

You are in control. And then the last one: you're still allowed to write code. Shocking, I know, but with AI tools, it's not an either/or choice. You're still allowed to write code, and you should still write code. I'm not going to ask AI, "Please increase the margin of this box from 0.5rem to 1rem." I can do that myself.

And if you review the code, if you understand the code the AI generated, which you should, getting back into the code base shouldn't be too difficult. And don't get me wrong, I've fallen into the trap of handing off too much work to the AI and of not fully understanding the code base. I'm not doing that anymore.

I make sure that I always understand the code base and that I'm always able to get back in there because I am a developer.

I can read code, I can write code, and I can still write code even when I'm using AI. So if AI can't figure something out, or if you have a trivial change where you don't wanna burn tokens — I mean, those tokens cost money — just do it yourself. And by the way, coding is fun, at least for me, so I love getting into the code and writing code myself from time to time, and it's never been easier with great autocomplete in Cursor or VS Code. And therefore, this also is a very important point, and the last of my points here.

Now, as I mentioned, I have a full course where I dive deeper into Claude Code, where I dive deeper into these points and others, where I explain in detail how to build custom agents and all that fun stuff. You find a link below.

But I hope these points are also helpful and allow you to get good results out of Claude Code.
