
Opencode Is Probably The Best Coding Agent I've Ever Used

By DevOps Toolbox

Summary

Topics Covered

  • AI Agent Equals LLM Plus Tools Loop
  • Zen Delivers Pay-As-You-Go Model Routing
  • Custom Agents Beat Default Modes
  • Infinite Context Via Session Compaction
  • GitHub Bot Automates Issue Coding

Full Transcript

I have a love-hate relationship with AI. And this is why this one is special. I have zero patience for overblown LinkedIn posts, probably written by AI itself. But then, with every major announcement, I still get my 10-millisecond hype rush. Honestly, at this point, the sound of "vibe coding" makes me shiver. But I still secretly use AI to write bits of code and brainstorm. However, when you find out about a 100% open-source, 0% affiliated, terminal-based agent built by and for Neovim users and the guys from the SSH coffee shop, this is my reaction.

OpenCode, not to be confused with that other open code by that guy, which was discontinued and turned into something else in a funny chain of events. We'll talk about that. The actual OpenCode is everything I mentioned and so much more. And before you ask, what about Claude Code or Codex or any other model-based utility? Here's the short answer. You can use any model, and by any I mean there's an extensive list of them. This thing is solely focused on your experience: the interface, the themes, auto-loading LSPs, parallel models. Heck, you can even share your sessions with your team in one click. But beyond all these, the really cool thing about it is its internal model router called Zen. It finds the latest yet most cost-efficient models using one payment, and they don't profit off of it at all. Another critical component of Zen is the fact that it supports a pay-as-you-go model. I've been paying Cursor their $20 for six months, probably not using 80% of it. You know what? Let's use something more comparable, like Claude Code: 17 bucks a month, take it or leave it. With Zen, I only pay for what I use. It runs a local server, which is critical when accessing your files, unlike Devin or Codex, which run in the cloud, and it's a pleasure to work with. Let's get into it.

Before diving in, what is an agent anyway?

>> What is an agent, Dax? Everyone's been asking.

>> I got to be honest, I don't really know.

>> In all seriousness though, an agent is just a loop talking to an LLM and iterating over a task until it breaks on something, like requiring intervention, or simply completes all steps.

>> We can say agent equals LLM plus tools. Is that it? Plus loops?

>> You can think of it like a while-true loop: iterate on the task instructions until requiring more permissions, or done. When you provide those permissions, either up front or manually when it asks, you're basically running in agentic mode. The only risk to the process is the limited context window, for which OpenCode has a cool solution that both Codex and Claude have implemented as well.
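That while-true loop can be sketched in a few lines. To be clear, this is an illustrative stub, not OpenCode's code: `call_llm` stands in for a real model API, and the action format is invented for the example.

```python
# Minimal sketch of "agent = LLM + tools" as a bounded while-true loop.
# The action dict shape ("done" / "needs_permission" / tool call) is made up
# for illustration -- real agents use their provider's tool-calling format.

def run_agent(task, call_llm, tools, max_steps=10):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):  # the "while true" loop, bounded for safety
        action = call_llm(history)
        if action["type"] == "done":              # all steps complete
            return action["result"]
        if action["type"] == "needs_permission":  # break out and ask the human
            raise PermissionError(action["tool"])
        # otherwise run the requested tool and feed the output back in
        output = tools[action["tool"]](action["args"])
        history.append({"role": "tool", "content": output})
    return None  # gave up after max_steps
```

Everything interesting lives in the loop body: the model decides, a tool runs, and the result goes back into the context for the next iteration.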

Let's see it all in action. opencode.ai is the great domain these guys have. The project, as the name suggests and unlike other players in the field, is open source on GitHub, super popular, and for great reasons. Curl the install script or use your favorite method, then go ahead and fire up OpenCode. The default theme stands out, and while I don't hate it, it's not exactly embedded in the tmux window around it. So /themes pops a long list of available options to suit you fashionistas, and as usual, I'm going with Catppuccin. The default model, if you haven't added anything yet, is Grok Code Fast, which is a free one at the moment, as they're trying to gather data for model training. You can start speaking to it, and the black boxes here aren't responses. These are the thinking steps yielded by the LLM. Soon enough, I'm corrected that I'm actually conversing with OpenCode, not Grok specifically. Great job, OpenCode team. Let's start by tweaking the next visual element, which is those thinking blocks. Hit /thinking or scroll down to it and toggle them off. The next message gets a simple response. Basics out of the way.

Time to crank up the power and inject some juice with OpenCode Zen. Zen is like a model router, with models tested and approved by the OpenCode team. They'll make sure you're getting the latest and greatest and bring updates directly to your doorstep without you having to lift a finger. Not only that, OpenCode doesn't profit off of the process. You're adding your credit card, and it periodically adds tokens based on usage, but at the provider's cost level, only topped by processing fees. To be honest, I wouldn't mind paying for the service. So, thanks Dax and Adam.

This is how it works. You sign in, you add a credit card, create an API key, and run opencode auth login to pick a provider. Now, just to show you how many onboarded providers beyond Zen are already here: this is the long list of availabilities. I'm going back to Zen. The team recommends either Zen or Claude directly. Once picked, we can add the API key and it's done. We restart, hit /models, and now we've got a list of models available through Zen. Sonnet 4.5 is my current choice, as it's pretty much the latest and greatest, at least for the next 24 hours. And if you trust an AI company's benchmark saying they're on top of everyone else, well, this one seems to be doing quite well with coding tasks. Grok is suspiciously not here. And well, because they're all benchmarking Python.

>> The SWE-bench benchmark is literally just Python. There's no benchmark that says, like, given the same prompts and the same code base, here's the one that did the best job.

>> So, to leave the UI, you hit Ctrl-C twice or type exit, which allows us to open it with the context of a project. You don't have to follow it up with a period, but if you want OpenCode to have a full project's context on another path, you can just add that after the command. Now, we can start doing some real work, starting with a quick project overview, and in less than a minute you have the architecture, product goals, and tech stack of a fairly decent codebase I've been working on for a couple of years.

The one important thing, though, that any project should have before treating it with AI is agents.md. This is a common file to help the agent navigate the dos and don'ts and other instructions, to keep it under some supervision. To start one, OpenCode has a /init command that reads the files and understands conventions, common methods, and utilities. You'll note that it tries to read other common files, like Cursor rules and directories, as well as Copilot's instructions. It'll then iterate until the finished product is written. And there it is: agent guidelines.

When you fire up OpenCode for the first time, you'll see an agent type at the bottom right corner, and the two main agents are build and plan. Tab will switch between them and others we'll add later. These basically correlate with access to files in order to make changes and additions, and a read-only mode that doesn't do anything other than read and brainstorm. When the plan agent is asked to make changes, it won't, but build definitely can. These are fairly simple.

What I highly recommend is adding your own set of agents: not only adding a special instruction, but also tweaking the temperature, level of verbosity, and even a dedicated model. Looking at the agents docs, it suggests we use the opencode.json config file, but there's a much cleaner option that uses markdown with headers, configuring different files for different agents and tweaking even permissions down to the level of a specific tool. One example would be a deep thinker using GPT-5: high reasoning effort and low verbosity, no prompt needed as context. Or one I use quite often is an email responder, helping me draft and respond to messages. Now, I know there's a lot of markdown LSP warnings here, mainly over long lines.
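As a sketch of that markdown-with-headers format, an email-responder agent file might look like this. The directory and frontmatter field names are my reading of the docs, and the model identifier is illustrative, so verify both against the current agents documentation before copying:

```markdown
---
description: Drafts and reviews email replies
model: anthropic/claude-sonnet-4-5
temperature: 0.7
---

You help draft and respond to emails. Keep replies short and polite,
and ask for missing context instead of inventing details.
```

One file per agent, with the prompt as the markdown body, is what makes this cleaner than stuffing everything into opencode.json.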

How about instead of ignoring them, we use OpenCode to fix them for us as a first task? Making sure the build agent is active, ask OpenCode to fix everything according to the LSP warnings. OpenCode comes with its own built-in list of servers. Markdown, by the way, isn't one of them, which explains why one iteration didn't do it, but insisting further cleans the file of errors and leaves a clean version that's easy to read. To access the agents, we mentioned Tab earlier, but you can also run /agents and pick them from a fuzzy-searchable list, then ask it to draft an email, for example, asking a provider about their MCP server. But we're not here to discuss emails. One thing mentioned earlier you might want to do is change the temperature setting, defaulting to 0.1, which is very confident and finite, as opposed to a higher value closer to one, like 0.8, cranking up the creativity and randomness, or freedom of the model, if you will.

We're talking about so many slash-this and slash-that in OpenCode. How about we create some custom commands available from within the UI? This is great for building, testing, even git operations and code reviews. I actually do that with a different model, which I imagine as another set of fresh eyes on changes made by another team member. Under the OpenCode command directory, add markdown commands like we did with agents. A simple one would be /build, and I'm not even going to bother providing the actual command. Not very token efficient, but you get the point. Once added, /build runs, Claude does its thing, and the build is successful.
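Since I skipped the actual command body, here's a hypothetical one. The command-directory location and frontmatter fields are assumptions from my reading of the docs; the body is just the prompt the command expands to:

```markdown
---
description: Build the project and fix any compile errors
---

Run the project's build. If it fails, read the errors and fix them,
then build again until it succeeds.
```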

Here's the few-seconds-old binary to confirm the work was done. Another option I like having is a quick security scan. This can either be done with CLI scanners or using an MCP. So, with that in mind, let's add an MCP, shall we? To do that, we now have to configure opencode.json, which we've avoided so far. It starts with a large generic schema. This holds key bindings, shortcuts, and other configs to play with. I'll head over to Snyk's MCP, and it first asks for the CLI. Once installed, snyk test yields a quick security scan telling me I'm good on the dependencies front. I can actually monitor it continuously and view results in a dedicated page, which is pretty cool. But we're here for the MCP. So, in opencode.json, add any MCP directly as an object. This one only requires a simple command to run it locally.
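A minimal sketch of that object. The shape of the `mcp` key follows the opencode.json schema as I understand it, and the Snyk command line here is an assumption, so take the exact command from Snyk's MCP docs:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "snyk": {
      "type": "local",
      "command": ["snyk", "mcp", "-t", "stdio"]
    }
  }
}
```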

Now we can ask OpenCode to scan the project, and I get the result in chat, which actually offers the next step: not only dependencies, but also code. This requires a simple auth process. Let's see if OpenCode handles that for me. Yes. Let's authenticate, Mr. Terminal User Interface, and voila, up pops a page. Access granted. We're good. It found a low-severity issue too in this instance, not something to be worried about. I didn't let Snyk know it could ignore these test files, though it understandably alerted me about a hard-coded credential. Thanks to Snyk for sponsoring this video and giving me the best example of an MCP to integrate here. Learn more about Snyk MCP in the links below.

Now, OpenCode, like your standard chat interface, maintains a history of chats, or sessions as agents call them. /sessions pops that list and lets you dive into any older context from earlier conversations. When you pick one, beyond the chat itself, there's the number of tokens, percentage of context window, and price paid. We'll see a cool trick to handle that. But before that: any session, old or new, is sharable through the web. /share puts a URL in the clipboard, which is then publicly accessible, showing the model's thinking steps, prompt results, code changes, and everything you need for a session review, debug, or brainstorm.

When you're done, it's recommended to unshare the trail, effectively removing the page. Now, about those tokens and context window. Similar to other tools, you can /compact the conversation, which will ask the model for a summary, condensing the context into a short text and opening a fresh context window, which is now at almost no tokens and back to 0%. This isn't a perfect method, of course, as things get lost in translation, but it works well enough to feel like an infinite context window 90% of the time. If you want to export the session instead of publicly sharing it, /export sends it to your editor. You'd want the EDITOR environment variable set for that, which then gets you a local file with the session summary.

One thing that stood out to me is the lack of integration into a coding environment. You know, like Cursor, Windsurf, and the many other VS Code forks companies call an AI IDE now. So opencode.nvim is my new perfect weapon of choice. It adds an OpenCode sub-terminal to Neovim, communicating with the code directly in the editor. If you're using LazyVim, you can add an opencode Lua file, which in this case is the exact set of configuration taken from the plugin's page. Once installed and loaded by lazy, we can do a bunch of stuff. I'll broaden the screen to make room. And here's why I love LazyVim so much: it's already part of the menu.
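For reference, a LazyVim plugin spec for this lives in its own file under your plugins directory. The repo name, options, and mapped functions below are assumptions sketched from memory; copy the exact spec from the plugin's own README instead of this:

```lua
-- ~/.config/nvim/lua/plugins/opencode.lua
-- Plugin name and API calls are assumptions -- verify against the README.
return {
  "NickvanDyke/opencode.nvim",
  opts = {},
  keys = {
    -- mappings in the spirit of the ones shown here
    { "<leader>ot", function() require("opencode").toggle() end, desc = "Toggle opencode" },
    { "<leader>oa", function() require("opencode").ask() end, desc = "Ask about cursor" },
  },
}
```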

Leader, o, and t to toggle the tool. Now, with leader-a, we can ask OpenCode about the code at the cursor, for example using the fantastic question: what's this line about? When you leave the code, you'll notice OpenCode is still running in its own Neovim terminal pane, which is great as the session keeps going, but you'd have to kill that one too when done. Another option is leader-o-e, which just explains the line you're at. We can make some changes, then leader-o-s to select a prompt, asking for a review, for example, which luckily tells me that this change will break my code for sure given the current config file, which is greatly appreciated.

Before wrapping up, a small word and a demo to show what makes OpenCode different. What happens under the hood when you run a session is a local OpenCode server listening locally. You can then call the local REST API, getting a list of sessions or agents, and basically use the tool to integrate it anywhere you like. And that's not all.
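Scripting against that local server is simple. A minimal sketch: the port and the `/session` route here are assumptions, not OpenCode's documented API, so check the server docs for the real ones.

```python
import json
import urllib.request

# Hypothetical base URL: port and route are assumptions, check the docs.
BASE = "http://127.0.0.1:4096"

def list_sessions():
    """Fetch the session list from a locally running OpenCode server."""
    req = urllib.request.Request(
        BASE + "/session", headers={"Accept": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Swap in the real endpoint and you can wire session data into dashboards, scripts, or any other tooling you like.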

Another brilliant feature is their GitHub bot, also available on other platforms. You can run opencode github install, approve, pick a provider, then commit and push the new GitHub Action. That action will then start a job whenever /oc or /opencode is mentioned in issues, and it runs the chosen model in the context of the project and the issue to participate in the conversation. OpenCode has a bunch more options and great utilities, and is honestly a pleasure to work with, considering it's solely made for the user experience.
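The GitHub bot setup boils down to a couple of commands. The flow below follows what's described above; the install command is interactive and the generated workflow path is an assumption, so follow the prompts rather than this sketch:

```sh
# Install the bot and approve it for the repo (interactive)
opencode github install

# Commit and push the workflow file it generates
git add .github/workflows/
git commit -m "Add opencode GitHub action"
git push
```

After that, mentioning /oc or /opencode in an issue kicks off the job.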

Ported into Neovim, it makes the best setup I could wish for. Now, that's great if you're already set up with LazyVim. Whether you are or you aren't, I recommend checking the full video coverage next to make sure you're making the most out of your Neovim experience.
