
NVIDIA NemoClaw + OpenShell: FASTEST Way to Install

By Gao Dalie (高達烈)

Summary

Topics Covered

  • OpenClaw's Weak Trust Boundary
  • NemoClaw Equals OpenClaw Plus Enterprise Security
  • Out-of-Process Policy Enforcement Secures Agents
  • Dual Instances for Casual vs Confidential Tasks
  • NemoClaw Delivers Kernel-Level Isolation

Full Transcript

Over the past two weeks, we've witnessed a dramatic reversal in the popularity of OpenClaw, an AI agent that runs on your own machine. You can give it commands through everyday chat apps like Telegram and WhatsApp, and it can handle emails, appointments, files, and more.

It quickly gained popularity as a prime example of AI actually taking action rather than simply answering questions.

But security issues had been pointed out. What scares me? OpenClaw is based on a single-trusted-operator model, meaning one trusted user controls their own assistant. It's not designed to provide strong protection in shared environments with multiple users or when adversarial input is present. This isn't so much OpenClaw's fault, but rather that autonomous agents inherently have strong privileges. While this might be acceptable for personal use, it suddenly becomes a serious issue for business applications. OpenClaw's weakness isn't insufficient performance, but rather a weak trust boundary. However, NVIDIA,

which wants to control all aspects of AI from hardware to software, is not going to leave this obstacle unaddressed. NVIDIA officially announced NemoClaw to address this. My understanding is that NemoClaw essentially uses OpenClaw as is, but relies on the OpenShell software to wrap OpenClaw and make it more secure. OpenClaw runs inside an OpenShell sandbox environment where all agent network communications, file access, and inference calls are controlled through policies. NemoClaw appears to handle the configuration and setup for this system. Therefore, I think that OpenShell is the core component that makes NemoClaw secure.

NemoClaw is not a new AI agent, but rather an enterprise-grade security distribution of OpenClaw. Think of OpenClaw as the Linux kernel and NemoClaw as Red Hat Enterprise Linux: it wraps the core AI in enterprise-level security, auditing, and control. At its heart is OpenShell, a secure sandbox runtime that isolates the AI agent so it can only access permitted files, with all network activity filtered by policies. Sensitive credentials are never stored in the sandbox but injected at runtime as environment variables.
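The credential pattern described here (secrets never written inside the sandbox, only handed to the process at start time) can be sketched with plain shell and Docker; the container and variable names below are illustrative, not NemoClaw's actual ones.

```shell
# Inject a credential at container start instead of baking it into the image
# or writing it to a file inside the sandbox (names are illustrative):
#   docker run -e OPENAI_API_KEY="$OPENAI_API_KEY" nemoclaw-sandbox
#
# The same idea demonstrated with a plain child process: the secret is
# visible to the child via its environment, but is never stored on disk.
OPENAI_API_KEY="sk-demo" sh -c 'echo "key is ${OPENAI_API_KEY:+present}"'
```

The child process sees the variable only for its own lifetime, which is the whole point of runtime injection.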

Developers define these security policies in YAML, with some rules supporting live updates without restarting. NemoClaw's architecture achieves secure agent execution through the collaboration of multiple layers. Here we will organize its constituent modules and operating principles.
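As a sketch of what such a YAML policy might look like: the field names below are invented for illustration (NemoClaw's actual schema isn't shown in the video), but the deny-by-default-plus-allow-list shape matches what's described.

```shell
# Write a hypothetical deny-by-default policy (schema invented for illustration):
cat > sandbox-policy.yaml <<'EOF'
network:
  default: deny          # nothing leaves the sandbox unless listed below
  allow:
    - api.anthropic.com  # inference endpoint
    - github.com         # package/source access
filesystem:
  readwrite:
    - /workspace/sandbox # the only writable area
EOF
grep 'default: deny' sandbox-policy.yaml   # sanity check: the default rule is present
```

Because it's a plain text file, it can be code-reviewed like any other change, which is what enables the GitOps workflow discussed later.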

NemoClaw consists of four main components, as shown in the table. NemoClaw itself is essentially a small, lightweight TypeScript plug-in. The plug-in is solely dedicated to receiving a specific command, openclaw nemoclaw, and does not perform any heavy processing. So who is actually doing the work? It's a blueprint, a Python-based design document. This blueprint is responsible for creating the sandbox, applying the rules, and connecting it to the AI model. Of particular importance is the out-of-process policy enforcement design philosophy adopted by OpenShell.

Instead of embedding rules within the agent via prompts, as in the past, policies are enforced outside the agent, meaning that even if the agent is compromised, the policies cannot be bypassed. This is similar to the browser's tab isolation model, where permissions are verified for each session. Furthermore, a privacy router is responsible for routing inference requests. A hybrid approach is employed where highly sensitive data is processed by local Nemotron models and queries are made to frontier models in the cloud as needed.
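The hybrid routing decision can be pictured as a simple dispatcher. The labels and model names below are purely illustrative, not the privacy router's real interface; the point is only that the sensitivity of a request decides where inference happens.

```shell
# Toy sketch of hybrid routing: sensitive requests stay local, the rest go to the cloud.
route() {
  case "$1" in
    sensitive) echo "local: nemotron" ;;   # on-device model; data never leaves the host
    *)         echo "cloud: frontier" ;;   # cloud frontier model for everything else
  esac
}
route sensitive   # -> local: nemotron
route general     # -> cloud: frontier
```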

After testing it, the differences in how to use them became clear. Standard OpenClaw seems like a good choice for personal use. The biggest appeal of OpenClaw is its freedom to do anything, and NemoClaw's limitations severely restrict that: not being able to curl certain endpoints or search freely makes using it stressful. Use NemoClaw if you want to entrust sensitive tasks, such as those requiring confidential information, to an agent; the NemoClaw environment is suitable for that. I felt it might become practically essential if a company is implementing OpenClaw. Policies control the risk of sensitive code leaking to external APIs, and application-based network control helps meet internal compliance. Since policies are YAML-based, GitOps-like workflows are possible, with changes reviewed via pull requests.

Being able to answer the question "Can you prove that the agent won't send data externally without permission?" by showing a policy file and saying "It's deny by default, so communication is only allowed for endpoints on this permission list" is a huge advantage. In reality, the best approach is to use two separate instances of OpenClaw: one for everyday use, and another for confidential tasks using NemoClaw.

Definitely stay tuned through the end of this video. If you haven't followed me, I highly recommend that you do, so you can stay up to date with the latest AI news. Lastly, make sure you subscribe, turn on the notification bell, like this video, and check out previous videos, because there is a lot of content you will definitely benefit from. With that said, let's get right back into the video. From here, we will explain the steps to actually set up NemoClaw and run the AI agent. NemoClaw has profiles that use cloud inference and profiles that use local inference, so we will introduce each method.

Before we start installing anything, we need to make sure our system has the right environment. If you already have Docker installed, you can skip this part. But if you don't have Docker yet, don't worry: the easiest way is to install Docker Desktop. Go to the official website and download Docker Desktop for Mac, making sure to choose the correct version for your Mac. Now that Docker is installed, the next step is to install NemoClaw. Open your terminal and run the install command. This command downloads the official install script for NemoClaw. The installation may take a few minutes, so just wait until it finishes. Once the installation is complete, launch the onboarding wizard.
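The flow so far can be sketched as follows. The install URL and CLI names are assumptions shown only for shape (the video doesn't spell them out); the Docker prerequisite check at the bottom is real and works on any machine.

```shell
# Hypothetical install flow (URL and command names are assumptions, not verified):
#   curl -fsSL https://example.com/nemoclaw/install.sh | bash
#   nemoclaw onboard      # launches the onboarding wizard
#
# Real prerequisite check: is Docker installed, and is the daemon reachable?
if command -v docker >/dev/null 2>&1; then
  if docker info >/dev/null 2>&1; then
    echo "docker ready"
  else
    echo "docker installed, but the daemon is not running"
  fi
else
  echo "docker not found - install Docker Desktop first"
fi
```

Running the check before onboarding avoids the first blocker described below.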

This wizard handles gateway startup, sandbox creation, policy application, and inference provider configuration all at once. You can choose between the NVIDIA cloud API or Ollama local inference as your inference provider. When we first ran NemoClaw onboard, we got hit with two blockers back to back. The first error said that Docker was not running, so I launched Docker manually on macOS, waited a few seconds for Docker Desktop to fully start, and confirmed it was running. After that, I ran the onboard command again, but this time I got a new error: port 18789 was already in use by a node process. So I tried the easy fix and killed the process using its PID. I ran onboard again; same error, but with a different PID. That's when I realized something was automatically restarting the process. So instead of killing by PID, I killed by name. But the process kept coming back.
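The find-and-kill dance looks roughly like this. The port is the one from the video; the executable demo at the bottom uses a harmless background sleep as a stand-in, so nothing real gets killed.

```shell
# Commands of the kind used here (shown for shape):
#   lsof -i :18789   # find the PID of the node process holding the port
#   kill <PID>       # kill by PID (it came back with a new PID)
#   pkill -f node    # kill by name (it still came back)
#
# Self-contained demo of the kill-by-PID pattern, using a harmless sleep:
sleep 300 &
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null
echo "process $pid stopped"
```

As the transcript shows, this pattern only works when nothing is supervising the process; a managed service just respawns.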

At this point, it was clear this was not a random process. It was a managed service that macOS was restarting automatically. To check, I looked inside the macOS service manager, and there it was: a service called OpenClaw Gateway was running in the background. I searched for the service file, which showed the config that tells macOS to keep the process alive. I first tried to unload it the old way, but on newer macOS versions that command doesn't work anymore, so I used the modern command instead. This finally stopped the service for good. After that, I ran NemoClaw onboard one more time, and this time the port was free. Both checks passed and the sandbox started correctly.
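The old versus modern launchctl commands referred to here are roughly the following; the plist path and service label are placeholders, since the video only shows the service name.

```shell
# Old way (deprecated; no longer works on newer macOS):
#   launchctl unload ~/Library/LaunchAgents/<service>.plist
# Modern way, which finally stops a launchd-managed user service:
#   launchctl bootout gui/$(id -u)/<service-label>
#
# Guarded so this block is a safe no-op on non-macOS systems:
if [ "$(uname)" = "Darwin" ]; then
  launchctl list | grep -i openclaw || echo "no openclaw service loaded"
else
  echo "launchctl is macOS-only; skipping"
fi
```

`launchctl bootout` takes a domain target (`gui/<uid>` for per-user agents), which is why the old path-based `unload` form stopped being the recommended route.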

After I got into the sandbox, I ran NemoClaw connect. This takes you to an interactive shell within the container. The host directories are not mounted; instead, the container's workspace/sandbox becomes your working area. It starts up in an environment completely isolated from the host, with OpenClaw pre-installed. The sandbox is almost completely blocked from the network upon creation. NemoClaw onboard automatically applies its policies, adding only the endpoints necessary for OpenClaw to function (the Anthropic API, GitHub, etc.) to the allow list. I tried

running claw to talk to the AI agent but got a command-not-found error. Next, I tried installing it but hit a 403 error. It turns out the sandbox blocks this package for security reasons. That's when I noticed something interesting: OpenClaw was already installed in the sandbox. I ran OpenClaw help and saw a huge list of commands. This wasn't just Claude's CLI; this was the full agent interface, already built in. So I ran OpenClaw NemoClaw onboard. This configured the inference endpoint. There was already a config pointing to inference.local, routing through the NVIDIA cloud API using the Nemotron 3 model. I kept the existing config, then ran OpenClaw configure, went through the wizard, selected OpenAI as the provider (since NVIDIA's API is OpenAI-compatible), used my existing OpenAI API key, and picked the same Nemotron 3 model.
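Pointing an OpenAI-style client at NVIDIA's service generally comes down to two settings. The base URL below is NVIDIA's public OpenAI-compatible endpoint; the key and model id are placeholders, not values from the video.

```shell
# OpenAI-compatible clients usually only need a base URL and a key:
export OPENAI_BASE_URL="https://integrate.api.nvidia.com/v1"
export OPENAI_API_KEY="nvapi-REPLACE_ME"   # placeholder; use your own key
# A request would then name the model, e.g.:
#   curl "$OPENAI_BASE_URL/chat/completions" \
#     -H "Authorization: Bearer $OPENAI_API_KEY" \
#     -d '{"model": "<nemotron-model-id>", "messages": [{"role": "user", "content": "hi"}]}'
echo "base url set to $OPENAI_BASE_URL"
```

This is why the wizard's "select OpenAI as the provider" step works: only the endpoint and key change, not the client.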

Finally, I launched the terminal chat interface, and just like that, I was talking to the AI agent inside the sandbox. Everything was set up, secure and ready to go. Let's get an NVIDIA API key. Go to the NVIDIA website and log in with your NVIDIA account. Copy the key; it's what the sandbox and OpenClaw will use to connect to NVIDIA's servers. Keep it private and don't share it publicly. Once you have it, you're ready to configure the sandbox and start talking to the AI agent. After getting

into the sandbox, I noticed the agent kept returning HTTP 403 every time I sent a message. It turns out the NVIDIA API key was set on my host machine, but it never made it into the sandbox itself. I tried setting it inside the sandbox by exporting the OpenAI API key variable, but it didn't work. The gateway was already running on the host, so restarting it inside the sandbox failed. Next, I checked the sandbox status with NemoClaw status, then dug into the Docker container directly with docker exec to find where the config files were. I found two JSON files inside the container that needed fixing. I patched both files directly, replacing the env-var references with the literal API key: basically, I changed the reference from the env var to a literal and put the key right there. After that, I reconnected.
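The patch itself amounts to swapping an environment-variable reference for the literal key inside a config JSON. The file name, JSON shape, and key below are all invented for illustration; in the video the two real files lived inside the container and were edited via docker exec.

```shell
# Recreate the situation: a config that references an env var instead of a literal key
cat > inference-config.json <<'EOF'
{ "provider": "openai", "api_key": "${OPENAI_API_KEY}" }
EOF
# Patch in place, replacing the reference with a literal key (dummy value here);
# -i.bak works on both GNU and BSD sed
sed -i.bak 's/\${OPENAI_API_KEY}/nvapi-demo-key/' inference-config.json
grep -o 'nvapi-demo-key' inference-config.json   # confirm the literal key is in the file
```

Note this trades the security of runtime injection for a quick fix: the key now sits on disk inside the container, which is exactly what the env-var design was avoiding.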

I launched the terminal interface and sent a message, and finally it went through. No more 403 errors; the agent was talking just like it was supposed to. Once the TUI opened, I typed, "Hey, how are you?" and pressed enter. The model took a while to respond because it's a 120B-parameter model on a cold start. As you can see, we successfully set up NemoClaw and the agent responded to us. NemoClaw addresses OpenClaw's anything-goes problem with kernel-level isolation and application-level network control. While it's still in alpha and somewhat rough around the edges, it represents a promising direction for agent security. For personal use, you can use plain OpenClaw freely, but isolate confidential tasks in a NemoClaw environment. For corporate use, make NemoClaw the standard and manage policies with GitOps. This kind of distinction in usage seems to be the most practical solution.
