
Bittensor SN27 Compute Innovations + SN60 Bitsec – AI Audits, Bug Bounties & More

By The Opentensor Foundation | Bittensor TAO

Summary

Key takeaways

  • **Burn UID for miner emissions**: A new feature allows burning miner emissions via a designated UID, keeping them out of circulation without changing the absolute supply. It is implemented through Yuma Consensus (YC2) over the UID, so the burn amount reflects community consensus. [02:56], [04:44]
  • **Dynamic transaction fees**: Transaction fees for staking and unstaking operations are shifting from a static model to a dynamic one, based on the expected dividend yield per tempo, to deter strategies that bloat the chain by moving stake around to capture epoch emissions. [11:05], [11:38]
  • **Subnet 73 attack and OTF intervention**: OTF intervened in Subnet 73 after the subnet owner manipulated sudo-call parameters to prevent validators from setting weights, effectively censoring stake operations. OTF captured the sudo calls to break through, set weights, took over consensus on the subnet, and redirected the funds to burn. [15:13], [16:46]
  • **Subnet 27: proof of GPU and reliability**: Subnet 27 uses Merkle tree proof validation for GPUs, passing a seed from validators to miners to run a model. While currently focused on OS-level model training, future plans include decentralized training using Kubernetes and improving reliability, which is a current challenge. [25:56], [30:00]
  • **Bitsec: AI for code vulnerability detection**: Bitsec offers on-demand, machine-based security audits for code, transforming traditional human-based audit competitions. It uses compute from Bittensor and other networks to detect vulnerabilities, aiming to reduce the significant financial losses caused by code exploits. [47:40], [50:46]
  • **Decentralized compute and enterprise use**: Neural Internet aims to provide scalable compute services, moving from OS-level training to Kubernetes clusters. Future plans include a compute API for third-party providers and targeting the corporate market with a light-themed website, emphasizing reliability and cash flow. [23:39], [44:13]

Topics Covered

  • Why Distributing Tokens Builds a Stronger Ecosystem.
  • Bittensor's Decentralized Control: When the Root Intervenes.
  • How Neural Internet Verifies GPU Compute.
  • Scaling Decentralized AI Compute with Kubernetes.
  • AI Audits: The Future of Code Security.

Full Transcript

[Music]

All right, people can hear me, but I can't hear Const. Wait, Doug, you actually hear Const but not me? One more time for the crowd, to see if I can... Garrett, I can hear. We got it. Oh my gosh, sweet. I went into the TJF channel and then came back here, and all of a sudden it's fine. No idea. Discord, man. It's like a ritual now, you know: the first five to ten minutes of Novelty Search every single time is just a mic check. Mic check, testing, testing, ad infinitum. Yeah, it's tradition. Make sure we don't cut that out of the recording, Mars. Super important that people know that we're authentic. It's tradition.

Great. Okay, so now we have everyone here and everyone can hear us loud and clear. This is a unique Novelty Search: we actually have two teams coming up, Yubo and Zoro, talking about Bitsec and Neural Internet's compute subnet. Unfortunately, due to unforeseen circumstances we could not do the previous call, but here we are. So, we're a month into dTAO. I mean, it's very exciting. There's been some good, there's been some bad, there have been ugly consequences, but I would say that generally speaking it's been a success and it's working. It's definitely stimulated a different class of individuals to be involved in the growth of the actual protocol, and that group is subnet owners. I would say in the first iteration of Bittensor the annoying majority was miners; during Revolution it was validators; and now the loud individuals in the group are the subnet owners, pushing for changes so that they can build the things they need. I think that's a healthy progression for the ecosystem and the network in general.

One of the things that has been called for by a number of people, both through voting and feedback and also in our channels and throughout the community at large, is the ability to burn miner emissions. So I want to talk a little bit about that at the beginning of this call and discuss what my perspective is. We can hear yours as well, Garrett, and then people can come up on stage and say their piece as well.

So, we're implementing a burn UID. Specifically, that UID is the owner's hotkey on the subnet, and we are going to allow the incentive that is sent through this key to be burned. Specifically burned, not recycled. That means it doesn't take away from the absolute supply of alpha; it just becomes non-accessible. So your halving does not change, and the root weight proportion doesn't slow down in its drop-off over time. It's still the 100 or so days before alpha inflation matches the root proportion, and we see less root selling.

Now, we did it that way because we believed that otherwise owners would be highly incentivized not to use the burn UID and instead do off-chain burning mechanisms, like we saw on 51, which we don't think are healthy. Instead, we hope that owners use this tool on chain, in a verifiable and auditable way, so everyone can see how much of the mining emission is not being sent out to miners.

We think it's pretty cool that it's implemented as a UID, for the following reasons. Number one, although the owners write code, that code is run by a distributed set of validators, and because of that there does need to be consensus amongst that set, over time, about what the burn amount should be in the network. So it goes through human consensus: we run YC2 (Yuma Consensus) over that UID, and if that UID reaches consensus and has trust, then that's the amount that is burned. That's one thing that's interesting and, I think, important. We didn't want it to be something that could just be set by the owners, because our philosophy is that these subnets are co-run by the Bittensor ecosystem at large. Subnet owners are the main characters in their subnets, but in the end you can't just do anything you want; you have to reach consensus amongst the stakeholders in the ecosystem.

The second important implementation detail is that we do this through the weights. So the amount of mining emission being burned is something that can be continuously adapted, instead of being a manual process. People in the past had suggested that it just be something tuned by the owner with a sudo call. We think that's far too manual and doesn't really align with the philosophy of Bittensor, where we think markets should be fluid and elastic.

Doing it through the validators is important because we hope that the way this feature gets implemented is that the validators can monitor, in real time, the ratio between, say, organics and synthetics, and try to match the miner emission to the ratio of those two things. As an example, say on 51, or perhaps we'll have Zoro explain how he can do it on his subnet, on 27: if there are more rented machines, we can increase the miner emission adaptively, slowly over time; if there are no rented machines, we can decrease it over time, and the miners can pull off resources and decrease sell pressure. We think this could make subnets very efficient economic machines, and we hope the subnet owners and the validators that implement this design do it in a way that maximizes the effectiveness of these tokens as value-generating machines.
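As a rough sketch, the adaptive policy being described could look something like the Python below. Everything here is illustrative: `BURN_UID`, the thresholds, and the helper names are hypothetical placeholders, not Bittensor SDK calls.

```python
# Hypothetical validator-side burn policy: drift the burn weight each
# tempo based on how much of the subnet's capacity is actually rented.
BURN_UID = 0                      # assumed designated burn UID (owner hotkey)
STEP = 0.02                       # burn-weight drift per tempo
MIN_BURN, MAX_BURN = 0.0, 0.9

def next_burn_weight(current: float, rented: int, total: int) -> float:
    """More demand -> pay miners more (burn less); idle -> burn more."""
    utilization = rented / max(total, 1)
    if utilization > 0.5:
        current -= STEP           # organics are there: route emission to miners
    elif utilization < 0.1:
        current += STEP           # idle capacity: lock emission in the burn UID
    return min(max(current, MIN_BURN), MAX_BURN)

def weights_for_tempo(miner_scores: dict[int, float], burn: float) -> dict[int, float]:
    """Split this tempo's weight between the burn UID and scored miners."""
    total = sum(miner_scores.values()) or 1.0
    weights = {uid: (1.0 - burn) * s / total for uid, s in miner_scores.items()}
    weights[BURN_UID] = burn
    return weights
```

Because every validator runs the same measurement, their burn weights converge over time, and YC2 treats the burn UID like any other UID in consensus.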

Okay. That said, there's a reason why I have personally pushed back on burning miner emissions, and I still believe personally that it's best to distribute the token inflation as much as you can into your miner community, rather than viewing miner emissions as a pure sink, in terms of capital loss. I think that distributing this token out to the most cracked engineers that are solving your problem, and getting them aligned, is really a massive feature for the subnet, and I recommend that subnet owners think of it this way: you're aligning your subnet miners into effectively a corporation or a company, people that are value-aligned. Speaking from experience, this has been one of the best things that we've ever done with Bittensor over the first four years of its lifetime. Many of the miners that were allocated large proportions of our stake of TAO in the first two or three years of Bittensor came out to be incredible engineers and product builders, subnet builders, researchers, etc., that allocated their time and resources back into the subnets, as well as the token they were mining, which was TAO. You know, Car is a good example of this, Mog is a good example of this, and most people don't know Taco, but Taco was an early miner that went on to write YC2, something that perhaps OTF would never have been capable of doing in the first place. We attracted that level of quality, those individuals, through our liberal distribution of mining emissions to our community.

So that's why, personally, I wouldn't really use the UID, but to each their own. We have our opinions about the way in which token economic systems should be run, and we've made some of those sacred and unchangeable, for instance no premine, but we also think that owners should be allowed to adapt, and clearly people are already beginning to adapt the way in which the token distribution occurs.

We have no plans right now to allow changes in the distribution between validators, miners, and owners; the 18/41/41 stays. This simply gives the network the ability to lock, or specifically burn, portions of the emission that goes to miners, if the validators can measure in real time that those resources are not being used properly. It would be very cool if what we see coming out of this is subnets that implement their benchmarks directly into the validators and increase or decrease the burns based on performance on those benchmarks. More synthetics, more organics, we're getting too many: we need to increase emission. We're not getting enough: we need to decrease emission, we need to burn this amount. Or the subnet is not capable of decreasing the loss on this machine learning model, or there's currently an exploit going on, and if that exploit can be measured by the validators in real time, they can use the opportunity to decrease the miner emission every tempo. So that's my little spiel on that change.

There's also another change coming out next week, with regards to transaction fees.

We currently have a static transaction fee on stake and unstake operations, and it's very cheap. We did this on purpose so that it would be easy for people without that much TAO to interact with the system. We're changing that to a dynamic design, for two reasons. This dynamic design looks at the amount of dividend that the amount being staked would get in one tempo. So for a very small amount of stake, this can actually mean that the transaction fee is slightly less. But importantly, if people have been looking at these charts, there are all these jumps occurring: people are effectively moving their stake around to try to capture the epoch emission. It's a dangerous strategy, but some people are applying it and bloating the chain. So this makes the fee based on the expected amount of dividend that would be accrued in that tempo, making the strategy ineffective. That's something important coming out next week.
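A rough sketch of that fee rule as described: the fee tracks the dividend the stake would accrue over one tempo, so staking just before an epoch and unstaking right after nets out to roughly nothing. The formula and names below are illustrative, not the actual on-chain implementation.

```python
# Illustrative dynamic fee: charge roughly what the stake would earn
# in one tempo, neutralizing the stake-hop-before-epoch strategy.
def expected_dividend_per_tempo(stake: float, emission_per_tempo: float,
                                total_stake: float) -> float:
    """Pro-rata share of one tempo's emission for this stake."""
    return emission_per_tempo * stake / max(total_stake, 1e-9)

def dynamic_stake_fee(stake: float, emission_per_tempo: float,
                      total_stake: float) -> float:
    # For tiny stakes this can come out below the old static fee,
    # matching the "slightly less" case mentioned on the call.
    return expected_dividend_per_tempo(stake, emission_per_tempo, total_stake)
```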

Now I want to talk a little bit about HODLocaust and 73, and our perspective on the way in which OTF reacted to those subnets. First of all, I would say it's important to recognize that subnets on Bittensor are co-owned by the owners, the miners, the validators, and the TAO holders at large. There is a decentralized control structure, by design, inside those subnets. Subnet owners can write code, but that code doesn't need to be run by validators, especially if they perceive that the code is not beneficial to the ecosystem at large, or maybe even to the subnet at large. That's by design, and it's important that it's not just autocratic rule on behalf of the subnet owners, who have a very powerful position; you could think of it as a game-theoretic centrality in determining what mechanism runs. Ultimately it's the validators who run the code, and for this reason it is designed so that validators can actually choose whether or not the code they're running is the code they want to run. Sometimes they don't run that code, they just weight-copy, but that's fine too; it's actually just a showcase of the fact that validators do have the ability to do what they want inside of a subnet.

So for the HODLocaust subnet: our opinion was that it was not beneficial to the ethos of Bittensor, and we found other validators who agreed with us. We child-keyed to OTF and ran effectively the inverse incentive mechanism: instead of incentivizing miners to hold the tokens, we incentivized them to sell, which is ironically kind of a meme as well. People have argued that we shouldn't do that, but I think that's a misunderstanding of the way token ownership works in Bittensor right now. Root has a good proportion of ownership in each subnet, on purpose. We built it this way so that we could stop general attacks on the ecosystem and align the network towards goals that we all share, and that is the privilege we have via the stakers in OTF. That can also change, and it's liquid: if people want to unstake from OTF because they don't believe we made the right decision by changing the subnet code, they are welcome to unstake, and we would hope people would use that liquid democracy to do the same.

Now, a bit about 73. This one's slightly more difficult, and it was a decision we made internally after noting that what the subnet owner on 73 had done was manipulate some of the sudo-call parameters that we had not considered likely attack vectors, specifically the ability to stop validators from setting weights. Our perspective is that setting weights in subnets should not be manipulated in order to censor individuals and censor stake operations. The goal of this developer was to make it so that no validator could actually validate on the subnet, and they were successful in the first four or five days of the subnet by manipulating the weights-set rate limit, meaning you could only set weights every 60,000 blocks, which is about nine days, as well as the staleness on weights, which they set down to 120 blocks. So every tempo they would decrease the weights rate limit down to zero, set weights themselves, and then, right before the epoch, use those weights to generate 100% of the dividends and 100% of the incentive back to their own keys, and stop other validators from doing anything in the network.
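A toy reconstruction of that censorship loop may make the timing clearer. The class and tempo length below are illustrative, not actual Subtensor extrinsics; the point is that a 60,000-block rate limit blocks everyone except the party who can momentarily lower it by sudo.

```python
# Toy model of the Subnet 73 pattern: the owner sudo-drops the weights
# rate limit for one block each tempo, sets weights to their own keys,
# then restores the ~9-day limit so nobody else ever gets a turn.
TEMPO = 360  # assumed blocks per tempo

class ToySubnet:
    def __init__(self):
        self.rate_limit = 60_000          # ~9 days at 12-second blocks
        self.last_set = {}                # uid -> block of last weight set

    def set_weights(self, uid, block):
        if block - self.last_set.get(uid, 0) < self.rate_limit:
            return False                  # rejected: rate-limit censorship
        self.last_set[uid] = block
        return True

subnet = ToySubnet()
for block in range(1, 3 * TEMPO):
    if block % TEMPO == TEMPO - 1:        # just before each epoch
        subnet.rate_limit = 0             # owner sudo: open the window
        assert subnet.set_weights("owner", block)
        subnet.rate_limit = 60_000        # owner sudo: slam it shut
    assert not subnet.set_weights("validator", block)   # always censored
```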

We intervened as OTF by capturing the sudo calls ourselves. There are two keys capable of calling sudo on a subnet: OTF's global sudo key and the subnet owner's sudo key. So we were able to break through, set weights ourselves, take over the human consensus of the subnet, and push the funds to our own key, which we intended to simply burn. The subnet owner did not like that, because he perceived it as an overstep by OTF, but I think that's again a misunderstanding of the way the co-ownership of Bittensor works. It was a particularly dangerous attack for a number of reasons. One was that, because he was able to distribute all of the dividends to his own key, there was effectively no root sell pressure, which was unfair to other subnets: he was getting 100% of the root dividends. The day before, his key was getting the highest APR in the system, so we had to act quickly, before he attracted too much stake and it became effectively impossible for us to do this at all. Anyways, this has been fully patched, and as I understand it, the developer is selling his slot.

This also stimulated a conversation in the ecosystem about the 18% for subnet owners, and whether or not it is too low. I personally believe that it isn't. We set it because we felt it was a fair number. 18% over time is more than OTF, the founders of Bittensor, ever attained, and it's actually a substantial amount of token if you think long term in your subnet: 18% comes out to about four or five million tokens out of the final supply of 21 million, which is significant if the subnet owner is capable of waiting a sufficiently long time. OTF got around the fact that we did not have tokens to sell by writing contracts with accredited investors for the ability to sell, OTC, future tokens that we did not yet have access to. I think that's a reasonable strategy for teams looking to bootstrap liquidity into their teams, to pay for branding and hiring developers.

There's something like 2.5 million tokens that a subnet owner will have after the first two years. Now, some liquidity is also injected into the pool, and if emission is being pulled out, that also increases the price, so it's a fair-weather problem: you will actually have more liquidity at your disposal because of the higher price. In the steady state it is a substantial proportion of the network you own, and I would urge builders to see the distribution to the validators, the people that are staking, as well as the miners, as extensions of the companies they're building, and to fold the miner community into each of your subnets. That's something we did, it was a very successful strategy, and I think teams that do this will be highly rewarded by having aligned individuals who help them write code. Anyways, that's enough of my spiel.

I don't want to take any more time away from the two builders that are going to be speaking on stage today, so that's Zoro and Yubo. Why don't you guys raise your hands and we'll pull you up on stage. Welcome, guys. Hello, hello. So why don't we start with a bit of an interesting story: how did you guys get into Bittensor? Why don't we start there. Of course, yeah. Do you mind if I bring up some of the superstars in Neural Internet? Yeah, go for it, let's kick things off. Don, feel free to introduce everybody and then we can just start.

Okay, yeah. So we've got Hansel Zoro here; he is, you might say, the president and head of product. We also have our lead developers, Ivan and Mohammed, and Douglas is here as well. Douglas is kind of an old-timer, with a lot of experience with Bittensor, and he's a consultant with us. I'll have Ivan and Mohammed speak a little bit about proof of GPU, because those guys are deeply involved with some of the technical aspects of that, but I can go ahead and explain some things about Neural Internet real quick. So we're a DAO built on Bittensor, and we've been self-funded from the beginning, through emissions of TAO.

Okay, so while I'm waiting for that to export, let's just get started with what we are and who we are. We are a DAO, a decentralized autonomous organization, and as such we are able to use our token as stake for governance, as well as for economic purposes like transfer of value. That was initially based on stake in TAO. We are going to restructure so that our governance is based on stake in our alpha token, and with that you'll have voting rights, similar to how you vote on proposals in the Opentensor Foundation. That's part of our economic plan, and we'll show why that's a good business model for us here pretty quickly.

To continue on, though, I just want to get into what our target market is right now: that is OS-level model training. So if you need to log in and train your models on a specific GPU, you can do that with us. We're not doing distributed training, where you have multiple machines training the same model; we will be there in 2026, that's our goal. For now, there are several technical factors that play into how we're doing that. Essentially, right now you are able to rent out a Docker container, the same way you would with RunPod or a lot of these other competitors (there are probably about 30 out there that I know of), and you can bring your own data, bring your own model, and train it there on, say, a machine with eight H200s. That machine is going to cost you, I don't know, $250,000 if you go out and buy it. With us you can get it right now for, I think... we actually discussed this today, and I saw something about $12 an hour. I'm not sure if that's going to happen, but I did see something in our chat about that today.

Okay, so our business model is direct access to compute, and with that we are kind of a middleman. We're not really into investing in models themselves, or cleaning and preparing data; we are just here to provide you the access to be able to do that. In the future we will be moving away from just OS-level model training into using things like Run:ai, where we're able to run Kubernetes clusters and you'll be able to scale up to however many machines you need. We'll be able to provide that as a service where you just bring your own model and your own data, and we then provide the training for you across a wide range of machines inside of a cluster.

Technology-wise, we have just been going through a big rebrand this year, and that's part of our goal of showing who we really are and improving the face of things, because cash flow is our most important factor in all this. If you think about crypto in general, that's where a lot of the naysayers still are: "well, there's no cash flow in crypto." With Bittensor, of course, as a lot of people understand, there's plenty of opportunity for cash flow, and to do that and attract more business we're headed down the road of refacing things while we improve on reliability, which is another factor a lot of distributed, trustless compute environments are facing: reliability can be really difficult.

Okay, so NodeXO is going to be our new website. Right now it's still at our Neural Internet website, and we're going to rebrand to NodeXO. We do have some logos and are working on landing pages for that, and the 2.0 website will be done this year; we're in the process of doing it. Our team size does fluctuate with the price of TAO, so how fast we get things done depends on how well our network is performing as well, and I can't give you an exact date, but it is well underway. I have seen some of the pages and they are quite attractive; I'm really happy with the way things are going. It's going to be a really clean, classy feel. NodeXO is going to be targeted towards the corporate market, and a lot of, sorry, blockchain in general goes towards dark themes; we're going to have both a dark and a light theme, but I think we'll default to light, so that we can attract more people in spaces outside of the traditional blockchain markets.

markets okay so let's get into some of

our road map here uh according to things

that I think that are really affecting

uh the entire network or not the network

but the entire industry I should say uh

in in trustless Val or incentive

mechanism Trust on compute validation

this is a difficult one it doesn't

matter if just gpus or storage or or a

lot of other Ram even uh it's there's a

lot of room for misuse and and so uh we

have been spending a lot of time on

things like proof of GPU our our current

proof of GPU uh uh uh service I believe

is one of the strongest points of subnet

27 and how we're doing that with Merkle

tree proof validation uh and through

that we're we're we're passing over a

seed from the validators to the miners

and and then from there they will run

the model we are able to look at those

calculations because we've done

analytics on on on what they should look

like coming from specific models and

then are able to calculate what type of

model they have that's where we're at

right now this needs to be become more

sophisticated and we have actually

started to collaborate with uh other

players in the market outside of uh

anything to do with bit tenser who have

written white papers and are leaders in

this area because we've come to the same

conclusions with where we want to go

with it and and so we're actually uh and

talking right now about even starting an

open source uh python library to to

build this to provide it for the world

actually not just us with that can you

talk more about this can you talk more

about this Don so how does the

validation system work on um on the

subnet right

Okay, so for GPUs: like I said, we pass over a seed for the Merkle tree; we don't pass over everything, because that way we don't overload our validators with creating these giant Merkle trees to send. At that point the miner can build out the Merkle tree and then run the model on its machine at full capacity. When it runs at full capacity, it should be able to pass a certain benchmark, and that benchmark, that specific score or range of scores, will show us which model it is. Sorry, you mean GPU? It'll tell you which kind of GPU they're running? Yes, based on their score. We've done analytics on what the score should look like coming from specific models, and so you fall within that category. So you pass over the specs for a machine learning model to train on a dataset, and you ask the miner to actually train that model, return the model, and then you eval the model, is that correct? Yeah, so... Douglas probably knows the most about this, I don't know if he's in the meeting right now, but with the Merkle tree you have to build it out, and it's essentially arrays inside of arrays, right? So you end up using your entire GPU trying to calculate what the end result is. It's mainly the score that we're building that off of, but let someone who's actually built it explain the rest of that.

Let's see, who do we have here that can do that? Douglas is here. Okay, yeah, Douglas, I think, is a good guy for it. Yeah, I can do it, or Mohammed, if you want to talk about it. Basically, the seed is sent over by the validator, so we don't... has he raised his hand, is he coming up on stage to discuss that? Oh, perhaps... no, I know why: I had just muted you. I have you now. Yeah, there you go, go ahead. Okay, yeah. So basically we'll send a random seed over to the miner to generate the Merkle tree, and then it's basically a challenge-response workflow, where the miner starts to compute, and once it's done they can send it back to the validator. All it's trying to do is test for TFLOPS right now. We have a range of tolerance, and if they fall in that range we can see that this person has an H200, or this person has an H100, depending on their TFLOPS. The reason we're doing it this way is that it would be a burden on chain; it just wouldn't work, validators doing full recomputation. So this is a way to get around it: using the Merkle proof is a way to get around full, total recomputation, because the overhead for that is just too high, but we're still able to verify that they did actually compute the answer, and we can verify, based on TFLOPS, that they have an H200 or an A100, based on the numbers we have.

And what exactly is the proof, can you explain that? It's a Merkle proof of what? I'm a bit confused. Yeah, so we send them a seed, which is completely random, and that's what they'll do the computations on, and using the Merkle proof we don't have to fully redo the recomputation of these matrices that the miner is doing. Does that make sense? It's a tree of hashes: you've got arrays of arrays, and they've got hashes inside of them, and you have to calculate all of it, and it gets exponentially bigger with every level you go. With that you can really use up the whole of your GPU, and then calculate teraflops by how quickly it's able to do it.
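To make the shape of that scheme concrete, here is a minimal, self-contained sketch: the validator's seed deterministically expands into matrices, the miner does the heavy matmul and commits to the result rows in a Merkle tree, and the validator recomputes a single row and checks it against the committed root through a Merkle path, timing the work to estimate TFLOPS. Matrix sizes, hashing, and sampling here are illustrative; SN27's actual implementation differs in detail.

```python
import hashlib, time
import numpy as np

H = lambda b: hashlib.sha256(b).digest()

def build_tree(leaves):
    """All levels of a Merkle tree over the leaves, bottom-up."""
    levels = [[H(l) for l in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1] + ([levels[-1][-1]] if len(levels[-1]) % 2 else [])
        levels.append([H(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def proof_for(levels, idx):
    """Sibling hashes from leaf idx up to the root."""
    path = []
    for lvl in levels[:-1]:
        lvl = lvl + ([lvl[-1]] if len(lvl) % 2 else [])
        path.append(lvl[idx ^ 1])
        idx //= 2
    return path

def verify(leaf, path, root, idx):
    h = H(leaf)
    for sib in path:
        h = H(h + sib) if idx % 2 == 0 else H(sib + h)
        idx //= 2
    return h == root

def expand_seed(seed, n):
    """Deterministically derive the challenge matrices from the seed."""
    rng = np.random.default_rng(seed)
    return (rng.random((n, n), dtype=np.float32),
            rng.random((n, n), dtype=np.float32))

# --- miner side: do the work, commit to it, report the timing ---
N, SEED = 1024, 42
a, b = expand_seed(SEED, N)
t0 = time.perf_counter()
c = a @ b                                   # the compute being proven
dt = time.perf_counter() - t0
levels = build_tree([row.tobytes() for row in c])
root = levels[-1][0]

# --- validator side: spot-check one row instead of recomputing all ---
idx = 7                                     # randomly sampled in practice
claimed = c[idx]                            # miner reveals one leaf + path
assert verify(claimed.tobytes(), proof_for(levels, idx), root, idx)
a2, b2 = expand_seed(SEED, N)
assert np.allclose(claimed, a2[idx] @ b2, rtol=1e-4)

tflops = 2 * N**3 / dt / 1e12               # matmul is ~2*n^3 FLOPs
print(f"verified commitment; ~{tflops:.2f} TFLOPS -> map to a GPU class")
```

The timing plus a tolerance band around known scores is what lets the validator say "this looks like an H100" versus "this looks like an H200"; a busy GPU falls out of its band, which is what the concurrent-test raids mentioned below exploit.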

Interesting. Okay, so this allows you to... right, so the speed at which they can solve this tells you what type of GPU they have available, and they also, I imagine, need to have the memory available on the machine to do it within a certain amount of time? Exactly. We had someone show up with a 4090 a couple of weeks ago, and they were like, "oh, we're running another workload on it," and of course your H200 is going to look like a different machine if you're doing anything else on it at the same time. This actually goes into how we're going to be addressing misuse, because we're going to start doing raids on groups of machines at the same time, so that if you're trying to spoof, you won't be able to: you're not going to be able to pass two proof-of-GPU tests at the same time, due to the way the scores come out. So if you have an H200 and you're doing something else with it at the same time, it won't score the same.

And when you're using these machines through this front end, how does that work? You're given a key, you SSH into the machine, you use the machine. What stops miners from just flipping out the machine, or changing the machine, when you enter into the SSH? Let Mohammed talk about that; we have been addressing the changing-out of hardware, and I know in the past month we actually had a couple of tickets for that. I'm sure maybe Mohammed knows the end result, but ultimately we do run proof-of-GPU tests often, so they shouldn't be able to swap out hardware while they're not rented. The tickets we recently had were for detecting hardware swaps while they're rented, and that's something I don't have a perfect answer on at this moment; maybe Mohammed or Ivan knows. It's a hard problem, because if the person is using the machine, you can't run the proof; you'd have to kick them out of the machine in the first place. But still, it's interesting that you have a foolproof method of determining what kind of computer they have behind the miner.

So what are you guys seeing right now in terms of the ratio between rented compute and the amount that's on the network? There's a real reliability issue right now, so that's the thing we're going to be addressing in 2025, and there are multiple things we can do about it. I know it's not just us; it's happening outside of Bittensor as well. If you look at Clore AI, if you go to their market rental, every single machine says "spot" on it. That's because they're also a blockchain-based network, or service, and they recognize that they don't have any control over who's going to leave and when. That's one of the issues. We've also been dealing with some reliability issues related to misuse: people trying to misuse the network to just gain TAO, or the alpha token, and then not participate on top of that. They'll script it so they're dropping your rental; they kill your Docker container after 20 minutes or something, because they're not receiving any incentive to keep it running. I can talk about what we're going to do to address some of those things, but right now our rental rates are quite low, because I don't feel like we have great reliability on the network. It has been as high as, I would say, 15 to 20 percent when I go and look; in the last couple of days, because of some of the things we've been dealing with, it's a lot lower than that. I know in September, when we first started, a lot of people were testing it out, and there was also incentive to be rented, and at that point there were only one or two machines you could get. That's changed, though, as we've moved away from 4090s and so on.

You guys are incentivizing H200s, H100s, just those types of machines? And how is that determined in the validator? Based on the TFLOPS scores, essentially. And this is where we're able to shift incentives based on demand. If the demand says, "hey, look, we only want to pay for B200s," because obviously that's the best machine for AI workloads, whether it's fine-tuning, pre-training, or even just inference, then we can tweak the scores and only assign incentive so that miners are incentivized to plug in, for example, B200s. That's why we've cut out some of the older models: at that point we would be wasting incentives, because in theory the demand isn't there for, say, a 4090 compared to something like a B200. That's how we differentiate: based on the scores the validators receive back from benchmarking the miners.

miners what is your what is your plan

for the interaction between the people

that are renting and the capital flow

from those uh rental purchases and the

subnet token on

27 you're ENT

saying you're saying

what yeah sorry sorry D we kind of have

like this delay how do you move the like

what's your intention for the revenue

generated from the

subnet okay perfect question okay this

Okay, perfect question. This is something we've been thinking about quite a bit. We've been balancing between recycling the revenue and burning the revenue, and it honestly depends on the market dynamics and where the environment is. We don't want to just take all the revenue and recycle it, and then continuously extend the halving events and continuously put these tokens back into circulation in the future. We also need to think about how we can reduce potential dilution by burning said tokens when revenue is produced, at specific intervals. In a bull market, when the token is performing very well and there's a lot of demand for the GPUs, we could think of starting to use that revenue to recycle, pretty much saving for a rainy day; and in a bear market, it makes more sense to take the revenue and burn it, so we can reduce total supply and potentially shorten the time to that halving. But yeah, this is a question we're constantly going back and forth on, and I really think it needs to be solved by the community. You make a great point that these subnets are co-owned now; it's not just on us to decide what to do with the revenue. I feel it should be decided on by the entire community as a whole, so we can stay aligned.

What is the interaction between 27, the stake value there, and this DAO that you were speaking about, Don? How is that working right now? Where is this DAO being formed, is it on the Bittensor blockchain, what's the structure of it, what control does it have?

Yeah, no, that's a great question. We're looking at how you guys structured the DAO for Bittensor, in theory, where Opentensor posts these proposals, the validators vote on them, and the stakers vote with their stake, behind whichever validator aligns with what they're voting for. That's essentially the same idea that we have. The things we want the community to vote on are the list of GPUs that we support, the amount of incentive that goes to each individual GPU model depending on demand, and even tweaking specific hyperparameters. Being able to stay in consensus with the community and what they're looking for, that's key, and the model you guys implemented makes the most sense; it's pretty straightforward to implement at the subnet level as well. So that's the direction we're going in. And with the DAO too: the token does not give you rights to, say, assets of the company. What it does do is allow you to vote on how profits are distributed. So everyone could decide they want 100% of profits, after taxes and expenses for things like services; when we have other providers, we need to pay them out, because I believe Subnet 27 is going to be our biggest offering, but we're still going to offer secure compute access through providers like Oracle, and Oracle needs to be paid for that access. Then 100% of those profits after tax could go back to buybacks of the token, to increase the value of each holder's stake in Neural Internet. That's one example, but in the future it could change, as Hansel was saying, based on what all the stakeholders want and vote on.

Paying attention to the time here, we also have Yubo to come on stage and speak. You mentioned that decentralized training is actually on the roadmap for 27; how does that fold in here?

So we need to move to Kubernetes for that to happen, and we also need to be able to break out of the UID limitation of 256. We have done some really cool experimentation with trying to use child hotkeys in an off-label manner (they're intended for validators), and we did have some luck even incentivizing miners on the network with child hotkeys and keeping them, as you say, as a cluster. But there are some limitations with that in the metagraph, and with being able to properly register enough machines; the issue is we just wouldn't be able to put enough machines on each UID, because we do plan on possibly serving data centers someday on Subnet 27, and you could have thousands of machines. To do that we need Kubernetes, and we will store secondary UIDs in a publicly accessible database that is shared between all validators, with the validators having an API, to keep the database secure, to store that information.

From there, in 2025, it's going to be kind of an over-engineered miner in the beginning. This year we will have k3s and Kubernetes running a control plane, with the miner probably in a VM using KubeVirt instead of Docker containers. That's for security purposes, but it also allows us to run a secondary Kubernetes layer. So, say, on one machine... if you look in the document, on the very far right side there are some physical machine layouts. Essentially, our miner will have Kubernetes, the control plane, and it'll also have a hotkey controller service built in a Docker container, which manages all of the wallets and everything that goes with that. Now, you can have multiple machines, and our validator will actually know which ones have a control plane on them, so if one goes down it will keep tracking the API endpoints of Kubernetes, similarly to how a load balancer does. But instead of a load balancer, since we can't always guarantee there's going to be a load balancer doing that for us, our validator will also be checking the machines it knows have a control plane, so that if one of your machines goes down, the rest of your cluster can still communicate with our validators, without you losing your entire cluster over just having one machine in the middle.

Scalability-wise, to get there we need Kubernetes: first, just one machine with Kubernetes running, so that if your miner goes down, or your hotkey controller goes down, and the machine itself doesn't go down, it can restart those services. That's going to help with reliability at that scale. When we scale up, because we're using VMs, we have Kubernetes on one level, the cluster that is run by the miner, whoever owns the UID; but on top of that, once they spin up those VMs for us, we can run our own clusters and our own control planes, separate from them, and allocate their specific VMs for our nodes to run Kubernetes. Then we don't have to choose, say, one UID or one service provider; we can run workloads across our entire Subnet 27. Even one workload could use up the entire network, if someone wants to pay us to do so. It's almost a nested thing: you have Kubernetes, and we're using it, but then, since we have the VMs there as well, we're able to allocate them separately. It's a lot easier to understand with diagrams, I think.
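For the failover idea specifically, here is a minimal sketch of what the validator-side check could look like, assuming each known control-plane machine exposes the standard Kubernetes `/healthz` endpoint; the endpoint list and the registry it comes from are hypothetical.

```python
# Probe every machine known (from the shared registry) to host a
# control plane, and talk to the first one that answers; losing one
# machine then doesn't take the whole cluster off the network.
import urllib.request

def first_live_control_plane(endpoints: list[str], timeout: float = 2.0):
    for url in endpoints:
        try:
            with urllib.request.urlopen(f"{url}/healthz", timeout=timeout) as r:
                if r.status == 200:
                    return url            # healthy control plane found
        except OSError:
            continue                      # down or unreachable: try the next
    return None                           # whole cluster is unreachable

# e.g. endpoints for one miner's cluster, pulled from the registry:
# first_live_control_plane(["https://10.0.0.2:6443", "https://10.0.0.3:6443"])
```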

Beautiful. Don, Zoro, I interrupted you asking about the mechanism, but any closing remarks or final thoughts you want to put out? Yeah, I mean, I really do appreciate the community support. We've been here, I don't know, it's been four years, Const; it has been quite a journey. I remember first thinking about the idea of decentralized, distributed compute within Bittensor using the subnet architecture. It's quite amazing to see how far we've come, and even just the industry overall: everybody's going towards trustless compute at the moment. Only trust-minimized compute is possible right now, but things will change as time progresses, and we're working towards making sure we can be one of the first to implement such a system, where compute is fully trusted in an environment where nobody is trusted. That is very, very exciting.

I do have one more thing to say here, too: we also have a compute API, which is going to give access to any third-party provider. So we're going to be a good middleman here. We're not just going to have a website where you log in and pay for services; anyone across the world will be able to programmatically use us, either to scale up when they need to or to use cheap compute, with workloads destined for wherever they can get the service cheapest. We already have the API built, but we do need to get our reliability built up a bit more, and I do believe at some point in 2025 we will be offering it, and even partnering. AOS is one of the other organizations we're going to have a partnership with, and there'll be some interoperability between us and them through the API, but it's not just going to be them; that's just the first one. We'll be able to put this out to everyone, and I really believe that's what's going to help Neural Internet be a cornerstone here, or at least one of the survivable companies providing compute in the industry.

Thanks, Don. Thanks, guys. Okay, we've got Yubo. Hey. All right, I'll just put the Google Docs spreadsheet in here and then you guys can follow along. Totally, and we'll also just post the slides right into the channel; that's one way to do it. Yeah, I just did that.

So today I'm presenting Bitsec. It's an ecosystem for AI-powered code vulnerability detection, and feel free to message me on Discord or Twitter. For those of you who don't know me, I'm John Yu, the founder of this subnet and this project. I've been a long-time participant in crypto, 14 years now actually, and I've been programming since fifth grade, professionally programming since about 10th grade. So yeah, I've done a lot of different programming and a lot of different crypto-type projects.

If you've used any Cosmos NFT stuff, you've probably used some of my smart contracts. During that smart contract development, our team actually ran into a security vulnerability in one of the contracts, and it cost us $200,000, which is a pretty crazy amount. So afterwards I checked: on some random weekend when I had nothing to do, I used the ChatGPT APIs, and I actually found that same vulnerability in the same codebase. That was back two, two and a half years ago, and that basically launched the journey of what ended up becoming Bitsec.

And we all know that, for anyone who writes code, code exploits cost a lot of money. In web2 it's hundreds of billions of dollars; in web3 it's billions of dollars. Any mistake can end up costing you a lot, and it's not just money; it's also trust and adoption from new users. We also see pickup from the Department of Defense, OpenAI, Anthropic, and a lot of academic institutions that have active funding and research in this specific area.

The current solution for these code exploits, at least in web3, is humans: people combing through the code. They're expensive and they take a lot of time, but they also just don't really work, because week after week we see these hundred-million-dollar, billion-dollar hacks happen. So my solution, after experiencing all these frustrations, is Bitsec, which offers on-demand, machine-based security audits. It takes the idea of human-based audit competitions, like Code4rena and Sherlock, and flips it towards using compute provided by Bittensor and other networks. There's a more detailed view of all the different actions in the subnet.

Just to simplify (if you have questions, we can get back to this later): validators take clean code and inject vulnerabilities into it, and there's a data augmentation pipeline to create a challenge. There's a simple image here of a Solidity smart contract with a withdrawal, but the problem in this withdrawal is that it doesn't check whether the user is the one creating the withdrawal message, so anyone can withdraw money from the smart contract. That's a poor design, but a good challenge for miners to find that vulnerability.
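As a toy version of that challenge pipeline (the snippets, the category label, and the 80/20 split below are invented for illustration, not Bitsec's actual generator): take a clean function, splice in the known-bad variant, and keep the ground truth on the validator side for scoring. The contract-like code lives in strings here, standing in for real Solidity.

```python
import random

CLEAN = '''def withdraw(ledger, caller, amount):
    require(msg_sender() == caller)      # only the owner of the balance
    ledger[caller] -= amount
    send(caller, amount)
'''

# The injected bug mirrors the slide: the sender check is dropped, so
# anyone can submit a withdrawal message against someone else's balance.
VULNERABLE = '''def withdraw(ledger, caller, amount):
    ledger[caller] -= amount
    send(caller, amount)
'''

def make_challenge(seed: int):
    rng = random.Random(seed)
    inject = rng.random() < 0.8          # most challenges carry a bug
    code = VULNERABLE if inject else CLEAN
    truth = [{"category": "access-control", "lines": (1, 3)}] if inject else []
    return code, truth                   # truth never leaves the validator
```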

As for the data sources these validators can pull from: right now it's Solidity smart contracts, but it can really find vulnerabilities in a lot of different types of codebases, different languages, different vulnerability types, like economic exploits in subnet incentive mechanisms, for instance. It's more of a benchmarking and evaluation problem than a problem of "try it with Python, try it with Rust."

So we try to add additional challenges with different types of datasets: open-source datasets, AI-generated ones, the data pipeline to increase the variations, and real-world codebases. Imagine expanding it to web2, just SQL injection or access control; there are millions and millions of codebases this could be used for.

for so the minor gets a

challenge and they do something and then

they respond so they respond with a list

of

vulnerabilities and what's in this

vulnerability type um you have to

specify the

category where the vulnerable code is in

the lines of code like a line of code

range description with um what the

impact of the exploit might be and then

a proof of concept

showing that uh this is actually like a

real vulnerability so this is not just

like pulled out of thin air this is

basically mimicking an entry in an audit

report
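One way that response entry could be modeled (field names are inferred from the talk, not the actual Bitsec schema):

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityEntry:
    category: str          # e.g. "access-control", "reentrancy"
    line_start: int        # where the vulnerable code begins...
    line_end: int          # ...and ends
    description: str       # expected impact of the exploit
    proof_of_concept: str  # code or steps showing it is real, not guessed

miner_response = [
    VulnerabilityEntry(
        category="access-control",
        line_start=14, line_end=21,
        description="withdraw() never checks the caller, so any key can drain funds",
        proof_of_concept="call withdraw(victim, balance) from an attacker key",
    )
]
```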

When a validator gets a bunch of miner responses, it can stitch them together to create an audit report. The scoring mechanism is pretty straightforward: you have A, the known vulnerabilities, and B, the miner responses, and you get a Jaccard score, an intersection-over-union accuracy score. It could get a lot more complicated, with a penalty for false positives, or different types of penalties and rewards for finding critical or high-severity vulnerabilities, for instance.
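A minimal version of that scoring, with A and B matched here by exact (category, line-range) pairs; the real subnet is presumably fuzzier about overlapping ranges:

```python
def jaccard(known: set, predicted: set) -> float:
    """Intersection over union of ground truth vs. miner findings."""
    if not known and not predicted:
        return 1.0
    return len(known & predicted) / len(known | predicted)

A = {("access-control", (14, 21)), ("reentrancy", (40, 55))}   # ground truth
honest = {("access-control", (14, 21))}                        # one true find
spam = A | {("overflow", (i, i + 1)) for i in range(50)}       # report everything

print(jaccard(A, honest))   # 0.5   -> partial credit
print(jaccard(A, spam))     # ~0.04 -> over-reporting tanks the score
```

Note how the union in the denominator already punishes the "send every category" exploit described later in the talk.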

On slide 11, this is miner rewards over time. You can see that the people who score pretty well end up staying close to the top, while these other ones sometimes score well and sometimes don't. So it seems miner performance is relatively stable.

And what's the point? What can we do with all these different miner responses? The idea is to create a network and an interface to all of these really smart, specialized miners. Two applications come to mind: a GitHub code scanner, and then, instead of having someone sign up for a monthly subscription, we can also just use it in-house: scan all the codebases out there, find bugs, and where they have bug bounties, start collecting real revenue from them.

them and a few um upcoming Milestones so

uh benchmarks for vulnerabilities is

kind of in a sad state right now um

open gave out a a grant to try to push

this

forward uh but it's basically just like

a a static test and any static test you

can just overfit and then uh score

really well but not uh projectile to

unknown data sets uh so there's uh we

we'll probably uh test this out with the

existing Benchmark but then uh ideally

expand the Benchmark to something more

Dynamic that will constantly be

challenged um and then also

participating in bug bounties and audit

competitions so this is where real

revenue is coming in so uh this goes

back to the piece where a

validator gets the code from bug

bounties or AUD competitions sends it

out to the miners as an organic request

um and then pieces all the responses

together back for a complete audit

And this is coming really soon. I really wanted to get it done for today's presentation, but it's not quite ready yet. Then there's expanding to different types of code, different languages, different vulnerability types. This can be used for, I think, any type of vulnerability; some vulnerability classes will perform much better for this approach, and some miner groups will perform better, so we might end up using something similar to BitMind's "camo" approach, selecting miners based on context. And then there are also alpha token plans; I'll save those for the end.

And yeah, a misconception a lot of people have is that it's just me working on this on my own. There have been a lot of people who have contributed during this journey, and I'll name a few: Samy and Matan from the start, who helped me navigate the Bittensor ecosystem, and the folks at Yuma, really sharp people on security and incentives. A lot of people have made amazing contributions to this project so far.

I didn't want to show any zero-day exploits, so all of these are older exploits, 45 to 60 days old, and the codebases have drastically changed since. Yeah, I'm on the subnet owner side, I'm on the builder side: I want to make it easier for builders to build fast and not worry about getting exploited.

So this one was really surprising to me: the case study with Mel and Dippy Speech. They're using PCA and MLP, which are differentiable transforms, and those can be attacked by optimizing directly against the loss curve. You can really get owned, unless you take these suggestions from the generated audit report: adding noise, using multiple models, using ensemble scoring. That was a really surprising finding. And all of these were found... so they're old, right, 45 to 60 days old; my subnet's only been out about one month. All of this was generated with me as the miner, me as the validator, just submitting the code on testnet. And yeah, compute subnets have a lot of problems with gaming GPU benchmarks; I'm really interested to hear about Zoro's Merkle tree approach, that sounds pretty cool to me, I need to double-check it. But it's a big problem, and I think there are solutions out there.

Anyways, let's go to the next one. This one was just me running it on our own code. We run this every time we push to production, and sometimes we find things that maybe we hadn't considered before. A lot of the time, subnet owners already know about the exploits we find, but they haven't figured out a good solution, or it's not prioritized relative to other, higher priorities. In this example there's a medium-severity exploit where you can just send the entire dictionary of categories to inflate the score. We found this one ourselves, and it's why we created the Jaccard scoring, which works more like an accuracy score, so you penalize over-reporting, for instance.

instance and yeah it doesn't just work

for um

subnet codee like we we also found um

scanning the bit tensor like uh pip um

supply chain attack so just uh scanned

it really quickly and found it in like

10 minutes like there's a post to some

unknown link and then it sends the

private key away and there's a full um

detail of how the uh hacker tried to

cover their tracks to

to um so yeah there's it it's like

pretty interesting that um when you

reframe the problem into like throw more

bodies like throw more specialized

humans at the problem versus now it's

like throw more compute at the problem

throw better algorithms at the problem

then you can afford to scan all kinds of

different code and find uh vulnerabil

like crippling vulnerabilities before

they

happen and yeah the market like Market

is huge it's like billions of

dollars and we aim to capture 25 million

per year just through bug

bounties and when people see how

effective we are in bug bounties then we

can start charging for other things like

GitHub subscriptions and whatever

So, you made it to the end: what's the alpha on our alpha token? A few things that I personally think about when it comes to evaluating alpha: you're investing in people, you're investing in the problem that they're trying to solve with the product that they create, and then, at the very end, I think, it's net price. And I think you have to ask yourself: does this plan make sense? Where's the execution risk? And at this current price, do I believe this person? Do I think they're going to be able to make it?

So with the alpha token, I think it's pretty straightforward: the problem is really expensive, and it's basically just stitching all this stuff together to make a report and then start collecting from bug bounties. And once that happens, we can use some of that revenue to buy back the alpha token, and yeah, I think good things can happen.

Another alternative I haven't heard too many other people talk about, but I think it'd be interesting: if you have a leaderboard of top holders, then maybe the top five alpha token holders get free enterprise access. So, for instance, some VC with a portfolio of hundreds of portfolio companies: maybe they start buying my alpha token and then use the subnet as a way to secure all their portfolio companies. Just a quick example.

So, on to the conclusions.

Even before this call, someone was saying this is subnet-specific exploit finding. That's not true; it can find all exploits. Well, it has the potential to find all exploits in all code. It's not quite there yet; that's a work in progress, that's the end goal.

The key thing, I think, is that instead of betting on one team with one approach, it's better to bet on the ecosystem: aligning all the incentives of the validators, miners, researchers, and subnet owner to create the best overall solution, one that can adapt to the introduction of the newest models, newest algorithms, and data, together. And I think the real goal is to help teams that are building, without slowing them down.

And yeah, contact me on Discord, or ask any questions; I can read them from the chat. If people have questions, they can put them in the chat.

Let's go back to what I think was the most interesting slide: performance over time, the average reward by coldkey over time. What does this actually amount to? If you could explain what this chart means. You've been on mainnet for a fairly short period of time; how long have you been on mainnet?

Since the end of January, so it's basically a month and change.

And so this is performance on the core benchmark, which is really a proxy for whether or not the miners are capable of extracting the vulnerability that you've injected through the synthetic pipeline. Is that correct?

Yes, correct.

And so let me get this straight: you find a codebase, a potentially large codebase, an entire GitHub, and you have a machine learning model. Is that correct? Is it a whole GitHub?

So, it's not a whole GitHub. In this case it's blockchain smart contract code, typically a Solidity smart contract, because of token limitations. And we strip out, actually it's kind of funny, there are a lot of noisy files in there that induce more hallucinations, or actually misses, in vulnerability detection. So if you have a test case for, let's say, access control, and that test case is included in the context, then the models are hindered at finding access control vulnerabilities because of the presence of that test.

Interesting. The goal, though, is that it will be a full GitHub that you can submit, not just a contract, correct?

Yes, yes, correct.
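That context-stripping step is straightforward to picture. Below is a minimal sketch, assuming a local repository of Solidity files; the file-name hints and character budget are illustrative assumptions, not Bitsec's actual filters.

```python
from pathlib import Path

# File-name hints suggesting tests/mocks: the kind of noisy context that
# was observed to mask findings such as access-control issues.
NOISY_HINTS = ("test", "mock", ".t.sol")

def build_context(repo: Path, max_chars: int = 100_000) -> str:
    """Concatenate non-test Solidity sources into a token-budgeted context."""
    parts = []
    for path in sorted(repo.rglob("*.sol")):
        if any(hint in path.name.lower() for hint in NOISY_HINTS):
            continue  # strip noisy files before the model ever sees them
        parts.append(f"// FILE: {path.name}\n{path.read_text()}")
    return "\n\n".join(parts)[:max_chars]  # respect model token limits
```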

And so, you find a contract, and you basically inject an exploit that you know beforehand, so you create the answer?

Yes.

And right now the miners are performing better and better. The score here is showing 80, or it looks like 85. Does that mean that 85% of the time the miners are now able to determine the exploit with proficiency, whereas before it was only 60% of the time? Is that what I'm looking at here?

Yeah, I think that interpretation is correct.
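For the mechanics of "creating the answer," here is a minimal, hedged sketch of a synthetic injection step; the snippet catalogue and splicing are illustrative stand-ins for whatever the real pipeline does:

```python
import random

# Illustrative bug snippets spliced into a clean Solidity contract; the
# real synthetic pipeline is presumably far more sophisticated.
VULN_SNIPPETS = {
    "access-control": "function setOwner(address o) public { owner = o; }",  # missing onlyOwner
    "reentrancy": (
        "function withdraw() public { "
        '(bool ok,) = msg.sender.call{value: balances[msg.sender]}(""); '
        "require(ok); balances[msg.sender] = 0; }"  # state cleared after the call
    ),
}

def make_challenge(clean_contract: str):
    """Inject a known exploit so the validator holds the ground-truth answer."""
    label = random.choice(list(VULN_SNIPPETS))
    body = clean_contract.rstrip().removesuffix("}")  # reopen the contract body
    challenge = body + "\n    " + VULN_SNIPPETS[label] + "\n}"
    return challenge, label  # challenge sent to miners / label kept for scoring
```

Scoring a miner is then just comparing its reported class against the stored label; the chart's climb from roughly 60% to 85% is that hit rate, averaged by coldkey over time.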

It would be useful if they could be looking at all of the EVM contracts on Bittensor as well, just innately.

What is the state of the art here? Like, if we get to 90% on this benchmark, are we the best in the world?

That's a really good question. So, the last slide is on benchmarks, and because there are no really good benchmarks, it's hard to tell which one is the best, which one we should really be using. Is it audit reports, so it's man versus machine? Or is it machine versus machine, using Slither or other static analysis? Or is it just straight performance on a leaderboard, so competitions and bug bounties? If we start ranking, for instance, number one on all the leaderboards for found bugs, then arguably that's probably the best metric and evaluation.

Where is that leaderboard of found bugs?

So, there are different bug bounty programs and different audit competitions, and each of them has leaderboards for who's earned the most or who's found the most bugs. So it's really just ranking well on these organic requests, on real vulnerabilities.

You could even use that as part of your validation system, if you had the miners submit solutions to bug bounties and they genuinely found solutions that were verified by others.

Yeah, exactly. That's definitely where we want to go.

You're reading my mind. Do you see this network as this 24/7 giga-machine that is just out there on the internet full-time, where you're feeding it every single line of code, every single bug bounty, and it's constantly, it's not writing PRs, it's actually just finding every exploit in the world?

That's one approach, but I think there might be another approach that's more suitable for enterprise use cases. For instance, to give some random example: Optimism. They're a billion-dollar chain, and they're running hundreds of millions of dollars through a bridge smart contract. So they can afford a million dollars of compute, for instance, to just run everything, point it towards their thing for a couple weeks, and see if it finds anything. So I think you could do it the first way too, but I think eventually it'll be geared more towards: what's the best economic use of this compute? For a hackathon project, maybe it's like a hundred bucks of compute; for that bridge contract example, maybe it's like, I don't know, $10 million; or, in Bittensor's case, like a billion dollars.

The scope of problems that you're sending to the network right now is contracts, but the scope of vulnerabilities could include the fact that you can SSH into this machine, plus there's a code vulnerability, et cetera, et cetera. What's your plan for expanding that scope slowly over time in the incentive design, so that eventually the network is capable of finding the full class of exploitation?

exploitation yeah that is extremely

challenging um and it's something that

I'm not sure how to address right now

but it's something that I definitely uh

want to collaborate with academic

researchers who are also interested in

like how do you make the into a compute

bound problem how do

you like add new domains

into um this vulnerability

Is it a compute problem? Because the sophistication of the top miners on your subnet is not just the ability to run machine learning models quickly; it's also the prompting of those machine learning models. What do you actually suspect the miners that are succeeding on your network are doing?

I'm guessing they're using, well, the first thing: I built my own AI audit tools, and the first pass is really using better models, then enhancing them with better prompts, and then using AI agents to do multiple searches on different classes of vulnerabilities, and then piecing together the results. So I'm guessing the miners are using a similar approach to what I did.
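A minimal sketch of that layered approach, with `ask_llm` as a hypothetical model call (this illustrates the strategy described, not any miner's actual code):

```python
# One focused pass per vulnerability class, results pieced together at the
# end. `ask_llm` is a hypothetical callable: prompt string in, report out.
VULN_CLASSES = ["reentrancy", "access control", "integer overflow", "price-oracle manipulation"]

def audit(source: str, ask_llm) -> list:
    findings = []
    for vuln_class in VULN_CLASSES:
        # A specialized prompt per class beats one generic "find bugs" pass.
        prompt = (
            f"You are a smart-contract auditor. Report only {vuln_class} "
            f"issues, with line references, in the code below:\n\n{source}"
        )
        report = ask_llm(prompt)
        if report:
            findings.append({"class": vuln_class, "report": report})
    return findings  # pieced-together, per-class results
```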

Right, which is piecing together models running through the source code. When it comes to larger problems, like "here's a codebase," what do you suspect these agents, these miners, will be doing? That's going to be super interesting.

I think basically they'll need to synthesize the codebases into a workable context within smaller models, and then look through likely sectors of the code, in combinations of the different codebases and libraries, to see where the exploits could be.

It's sort of like the inverse challenge to Gen42. Subnet Gen42, it's Rizzo's now, I believe, where the job is to write a full codebase. In your design it's the inverse: you're given a full codebase and you have to unpack it and find all of the problems therein. I think that coming up with an expansive and really covering set of all of the types of vulnerabilities is very hard from the synthetic perspective. Like, how do you generate all of those vulnerabilities? Can you speak to how you're doing that right now? How do you solve the synthetic pipeline problem here?

Yeah, so it is a work in progress. But I've seen, even as mentioned before with including test cases throwing off the models, that even renaming the functions, say withdraw and deposit into "one" or "two," loses a lot of context for the model during evaluation. And then if people do static analysis, it's a different approach. So, the data augmentation pipeline, that's what you're asking about: what I'm doing is introducing different types of noise, like comment pollution and function pollution, making it a little bit more challenging for the current models to zone in on a specific thing. But it could get a lot more sophisticated; there's definitely a lot of room for improvement there.
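As a hedged sketch of those two noise types, identifier renaming and comment pollution (the details here are illustrative, not the subnet's actual augmentation code):

```python
import random
import re

def rename_functions(solidity_src: str) -> str:
    """Replace meaningful function names (withdraw, deposit, ...) with f1, f2, ...

    Strips the semantic hints that let a model pattern-match on names
    instead of actually reading the logic.
    """
    names = re.findall(r"function\s+(\w+)", solidity_src)
    for i, name in enumerate(dict.fromkeys(names), start=1):
        solidity_src = re.sub(rf"\b{name}\b", f"f{i}", solidity_src)
    return solidity_src

def pollute_comments(solidity_src: str, n: int = 5, seed=None) -> str:
    """Scatter misleading decoy comments through the source."""
    rng = random.Random(seed)
    decoy = "// NOTE: access control reviewed and verified, safe to ignore"
    lines = solidity_src.splitlines()
    for _ in range(n):
        lines.insert(rng.randrange(len(lines) + 1), decoy)
    return "\n".join(lines)
```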

I think this is super exciting. One last question for me would be: what exploits and vulnerabilities have you seen on the subnet so far, like a month or so in? It's very difficult to really cover all of the incentive issues that you're seeing, like you will see in every subnet on Bittensor. Have you made something foolproof at this point? Are you happy with the outputs that you're getting from the network?

I am not happy, but I am also a perfectionist.

I mean, yeah, I'm sure you know: running a startup, there are levels of brokenness everywhere, and you're just trying to make everything a little bit less broken over time. Or at least that's my experience.

Yes. And I think the most broken thing right now is that there's something really close to generating revenue, but it's not quite there yet, so I think that's what I need to focus on. And the miners and validators, yeah, they've been surprisingly helpful, and that's something that you harped on; I'm realizing that's 100% true.

What specifically?

Oh, recruiting validators and miners as a part of the ecosystem, like asking them to...

Totally.

um totally and like

yeah the the squashing of bugs and

exploits is like a full-time job for a

subnet owner and if you have a

intelligent minor base one of the things

that happens is that some one of the

miners will find an exploit and they'll

start applying that

exploit um and if you have more than one

minor the other minors will

start telling you that there's an

exploit on the subnet because they're

unhappy about the fact that their

competitor is beating them you see this

all the time and so you have just pent

testers red teaming your system at all

times so solving the most difficult

problem of the subnet is in effect being

done full-time by your mining community

and if you have those channels open um

those problems can be solved quickly

yeah that's definitely been my

experience so far uh it's like

PvP with the miners very

The unknown miners. Yubu, let's end it here. So, we began the call with me talking about the updates coming next week, a really large one with burn UID and some changes to the staking operation fees. If people want to jump on stage to talk about that, or have questions about it, they can ask them in the channel and we can go through them; I think it's quite an interesting conversation for people. Or we can end it here. Does anybody want to come up on stage? Sammy? Carol? No, I think that's it.

All right guys, thanks. Yubu, brother, this is fantastic. I appreciate your time, and good luck.

See you next week, everyone. We'll be speaking with Namoray about Gradients, Chutes, and Nineteen next week, so very exciting. See ya, everyone. Bye.

[Music]
