Hash Rate - Ep 150 - Babelbit Subnet 59
By Hash Rate Podcast
Summary
Topics Covered
- BitTensor Replaces Venture Capital
- Predict Verbs to Beat Human Interpreters
- Prediction Unlocks Sub-50ms Latency
- Stake for Vertical Exclusivity
- BitTensor Enables Bossless Anarchism
Full Transcript
Hello everybody and welcome to Hashrate.
My guest today is Matthew Keras of Babelbit. Hello Matthew. How you doing?
Babelbit. Hello Matthew. How you doing?
>> Hi. Uh thanks for having me. I'm I'm
really well. Uh it's been an exciting couple of months. Um
>> yes. And what what is your subnet number by the way? I just realized I don't have it in my number.
>> 59.
>> 59. Okay. So people looking for you. 59.
So tell us Matthew, let's just jump right into it. What is Babel Bit? Okay.
So, uh the name comes from what it is.
If you like the uh science fiction, The Hitchhiker's Guide to the Galaxy has this concept of a fish. It's a tiny little fish that you put it in your ear and then whatever you hear is in your
language. Whatever you say is in the
language. Whatever you say is in the language of the listener and uh um there is no technology that's even close to that. And that's what we're building.
that. And that's what we're building.
>> Yeah. Yeah. So, for people who don't know Douglas, I know Douglas Adams because I read the books in the 80s. I'm
that old, right? When they first came out.
>> Um, uh, if if you're familiar with Star Trek, the universal translator. That's
like another one.
>> Exactly.
>> Or or if you're British and you watch Doctor Who, the TARDIS translates for you, right? Like that's the conceit in
you, right? Like that's the conceit in the show, which is why everyone can communicate.
>> Exactly. And uh, interestingly, I went to the same school as Douglas Adams, and in the UK, it was a radio show before it was even a book. So it was 1979 that we
got into it. Um, wow.
>> So, and that's such a cool concept. And
interestingly, >> I'm one of those people that invents things all the time. Mostly I don't have the resources to actually build them.
And sometimes the time has to just come historically that something becomes relevant or it becomes possible. In this
case, it has become possible in two different ways. one the technology to do
different ways. one the technology to do it uh is something which is really only emerged in the last 5 years and really only the last two years. Uh using LLMs
for translation is something that a lot of people are just starting to do but using it for voicetooice speechtoech translation is very very new. [snorts]
Uh so that's one thing but secondly I've been around the block a lot of times fundraising starting startups trying to start projects within corporates where you've got stakeholders that want
different things and when I discovered uh bit tensor I thought oh my god now these impossible tasks on my backlog of inventions are things which I can just
do myself because the ecosystem will provide me with the brilliant machine learning people and uh and it will pay them for me. So that's [snorts] >> yeah, a lot of people don't get that.
I've actually been I've been working out a little think piece that's going to come out soon. I'm not going to say much more about that, but and doing a lot of write. So I've been writing about this
write. So I've been writing about this concept a lot. There's a lot of people don't that don't understand like you could not have done this um before Bit Tensor, right? There is no other way.
Tensor, right? There is no other way.
And the reason why is because um you would have had to raise I don't know tens of millions maybe 2030 million, right? to get the same amount of work
right? to get the same amount of work that you're getting out of a bit tensor subnet, right? There are no employees.
subnet, right? There are no employees.
You just set up a subnet with an incentive mechanism and you don't pay them. The chain pays them. You just
them. The chain pays them. You just
direct what they're supposed to be doing. A lot of people don't get, you
doing. A lot of people don't get, you know, that you can get all the free work from experts.
>> Yeah. Exactly. So, and uh to set the context, I had not heard of Bit Tensor in May this year. And uh [clears throat] I left a job where I was trying to do
some interesting stuff with LLMs. my boss got fired and his replacement wasn't interested in innovation. So, I
left and I started blogging a bit on LinkedIn about some cool stuff I was doing with LLMs. And then I don't know if you know I'm sure you know Max and Nigel and Tim from SCORE.
>> Sure.
>> Nigel I've known for over 20 years. Uh
we were doing machine vision stuff in the early 2000s and he said, "Look, you got to come and join us." Um and so uh it took me I think a month before I
believed that the economics of Bit Tensor were actually real. You know,
it's not a Ponzi scheme. It's real.
>> Yeah.
>> And it's the perfect replacement for early stage venture capital. So, as you say, to do something like what we're doing would take tens of millions. But
that would start with some seed capital and then you raise a seed extension round and then another round, then a series A at some point and then some growth capital to do something else. And
that means that usually the most significant strategist in the organization, the CEO, is touring the world fundraising instead of building products.
>> Correct.
>> It's it's a disaster.
>> It's I've done it, by the I've done several startups that way.
>> And and you if you're the CEO, which I have been, you spend half of your time on that at least.
>> Yeah.
>> And the the ones that are good at it make no uh apology for the fact that when they've just closed around, they start raising for the next one.
>> Yep.
>> It's met. [snorts] So,
>> plus you're hiring. Like, that's the other thing you got to do is like you're you're the you're actually interviewing a lot of people, right? So, that sucks more time away. Yeah. And actually the
most complex thing in Bit Tensor is you have to accurately define the tasks and accurately define a way to judge those tasks correctly to make sure they're
solving the problem that you want them to solve. And that is something that
to solve. And that is something that engineers, inventors, and product people are very good at. Fundraising, selling
maybe not. So this is a really extraordinary thing that this idea of mine from a couple of years ago combined with the emergence of multimodal LLMs combined with discovering Bit Tensor
through an old contact of mine have all just happened at once. So from finding out about bit tensor in the last week of May to joining uh score and attending
the uh uh the proof of talk uh conference in Paris in the second week of June uh to starting to uh plan my own subnet in August uh and here I am you
know and we've been live now I guess for nine weeks.
>> Yeah.
>> But so you're a spin-off series from uh Subnet 44.
>> Yeah, in a way. Um I mean people ask uh what that means really it means that uh I have three fantastic advisers. So Max,
Nigel, and Tim are all advisers to uh to Babel bit. So yeah, it's it's it's been
Babel bit. So yeah, it's it's it's been much easier for me than someone coming into it blind. Yeah.
>> Okay. So let's talk a little bit about Babel bit directly. So um so it's it's language translation and it's my understanding is it's extremely real
time. So it's it's not like you like
time. So it's it's not like you like normally with these things like we have this, right? Like so it's you know I
this, right? Like so it's you know I have something on my phone where somebody speaks into it in Spanish and then I I turn it back to me and it waits a second or two and then it gives me the English version and vice versa. Right.
Yours is not like that. Yours is
actually boom like right away like I got like the little translator guy in my ear like the UN, right? Like that kind of >> we feel that we should be able to do better than a human interpreter. And
I'll explain why. Uh but I'll go back in time a hundred years to uh a story that all linguistics students learn this. If
you at university you learn linguistics.
There's a story about uh an American journalist is listening to Otto Fon Bismar who spoke in very long sentences uh and her interpreter stops speaking
and she's getting frustrated saying what's he saying? What's he saying? And
he says sh I'm waiting for the verb. Now
[snorts] that's a very interesting point. So in German, the verb goes at
point. So in German, the verb goes at the end of the phrase. So you might not know what a sentence is about until you've heard the very last word. Now,
[clears throat] interestingly, the other thing that you would learn, and if you study psycho linguistics, which I did, uh you will also learn that a native speaker 95 times out of 100 knows what
that verb is going to be before it's spoken. And people com people compare AI
spoken. And people com people compare AI with human intelligence. And they're
similar in some ways, different in others. But one way in which they're
others. But one way in which they're very very similar is predicting language because that's what an LLM does. So the
idea that a human being can know what the verb is going to be and that an LLM can know what the verb is going to be is a really great hypothesis for a product.
[snorts] So if you think about translation, I can start translating something in my head before I've heard the whole sentence because I know what's coming.
So why is it that Google Translate waits for the whole phrase to be finished? It
doesn't mean >> so it's [clears throat] kind of like if I said may the force be with you or sorry may the force be blank you'd be like well it's probably not Fred it's probably you right
>> that's that's great because I was told to use a different example because mine was very culturally specific so at British schools we don't have the state and church separation so every British school child has to say the Lord's
prayer almost every day so I always say you know you know uh if we say our father who art you know the next hundred But uh if I say to someone that doesn't
know me, my favorite songs are, >> you got no idea what's coming next. So
the art of exploiting this knowledge of what's what is about to be said is knowing how much we know. The system has to have a measure of its own confidence.
And that might be a measure going from zero to 100, but also there's a a time window that changes all the time. So if
I say BBC stands for I know the next three words. If I say UNICEF stands for
three words. If I say UNICEF stands for I know the next six words. Both 100%
confident but different lengths. So if
you imagine the LLM that we are building, it's an LLM that has to do prediction. It has to understand the
prediction. It has to understand the confidence of its own prediction and it has to understand the size of this moving window the whole time. And the
first task we've started with is trying to master prediction because if we can't master prediction, we can't do any of the rest. Right? Not to say that the
the rest. Right? Not to say that the rest isn't complicated, but being good at prediction is is is the key, >> right? So, it's kind of early days right
>> right? So, it's kind of early days right now and you're you're just focused on this very sort of narrow sliver. So,
there's no product at this moment.
>> No, no, no. No, no criticism. Just there
isn't.
>> No, absolutely. And it's interesting how different people have different opinions about that. So some people sort of
about that. So some people sort of saying you know you have to have a you know a a good business plan with a time scale to revenues and estimates of revenues and we do have that and we we
are speaking to investors and others about that but equally you know speaking to consting to mog some really experienced people they say well look if you're going for something with the
market size of speech translation you know one of these trillion dollar markets then thinking of getting a product out in your first two quarters is just a bit of a crazy way to go about Yeah. No, you shouldn't be. Yeah, I
Yeah. No, you shouldn't be. Yeah, I
agree.
>> But actually, I disagree with them a little bit because of my own experience.
[clears throat] So, I've built speech products before and sometimes the little modules you create have a small market of their own. So, for instance, if we created a great prediction engine, other
speech R&D companies would buy that as a licensed piece of software to do their testing and do do stuff with. If we only got as far as doing texttoext, if we had
a live um press conference uh with handtyped realtime subtitles in English, we could do them in multiple languages without having to cope with all the speech stuff. It's just textto text
speech stuff. It's just textto text stuff. So there's lots of products that
stuff. So there's lots of products that will appear along the way, but none of them will have the impact of the Babelfish. You know,
Babelfish. You know, >> the Babelfish. Yes. So in its final form, I assume it's it's got to be an earpiece, right? that I'm I'm
earpiece, right? that I'm I'm experiencing it with.
>> I would say in its final form, yes, but I would say maybe I'm not responsible for that final form. So I have done a lot of global scale consumer products
but I was convinced by uh one of the colleagues of mine uh that I worked with and sold a lot of different technology with in the 2000s that actually if you're developing a fundamental
technology the people that develop the actual enduser applications might be better off being different companies. So
this could be used in lots of context.
It could be a plugin for Google Meet or for the the platform we're using now. Uh
it could be something that's part of your phone. It could be uh something
your phone. It could be uh something that's part of a helpline or 1001 other things and it might work differently in different cases. So I have a lot of
different cases. So I have a lot of faith in selling fundamental technologies through resellers. So you
might have a reseller who is an expert in the legal market where accuracy might be slightly more important than latency.
uh you might have another market uh where um there's a very very um comprehensive international vocabulary of its own. So maybe oil and gas all
over the world people use the same words which means you can create a model that's far more efficient because of that industry. But how can I as a
that industry. But how can I as a fundamental technology person and an inventor know all of those verticals well enough to sell into those verticals? So, in the past, I've worked
verticals? So, in the past, I've worked with resellers like the big guys like they were Logica, now CGI, um, Accenture and so forth because they will be
bidding for contracts all over the world and they will say, "Okay, yeah, we can see next year we've got five or six pro products that all need translation and they'll take away the headache of
understanding the end user." And
sometimes they'll pay for exclusivity as well. So that could be that could be
well. So that could be that could be very interesting in the bit tensor context because if you imagine >> somebody has an opportunity uh in a particular industry or they have an
opportunity in a particular region. Uh
so it might just be the legal industry in Southeast Asia that all works in Hindi might have some particular uh model and we could be paying miners to
uh to do that or miners would be paid for doing just that one thing and that could come from staking coming from the resellers because the resellers want to say well we want exclusivity. Well if
you want exclusivity you got to put the money in. So there's a lot of things
money in. So there's a lot of things that tie into the sort of tokconomics that come from this idea of channel sales that each channel can have its own dynamics.
>> So quick question before I I want to move on to the incentive mechanism. Um
but before we go there, what languages are you supporting out of the gate and what are you focused on right now? Is it
>> So uh right so the current challenge is just in English because we're literally trying to prove how far forward we can predict.
In the next week or so, we're going to add German and Chinese.
>> German because of the famous example and Chinese because it's different in many ways and we're very interested in whether different LLMs do Chinese
differently. Ultimately, we will have to
differently. Ultimately, we will have to be building our own LLM because we'll need to restrict it down to the tasks we want to do. But right now, a Chinese LLM
made for a Chinese market has terabytes of stuff that isn't in OpenAI. So that's
kind of an interesting thing. So
>> very interesting. Okay. All right. So
let's talk about the subnet itself and how the uh the mechanism works. So I
think you've been very clear. It's the
success of predicting the next token or the next word actually is probably a better way of putting it for >> actually it's the phrase. So
>> okay >> um what we do is we we break down a dialogue into short phrases into the
utterances and each utterance has a step for each word. So if I say um uh
hi Mark uh how are you doing? Uh first
[snorts] the first word would be hi and then the system would try and guess the whole phrase on that first word. And actually
at the beginning of a podcast, it might guess, "Hi Mark, how are you doing?" You
might guess the whole thing first time.
Uh, and then the second phrase, Mark gives it a little bit more information.
And it basically makes a guess of the whole phrase for every word that's uttered. Because what we're trying to
uttered. Because what we're trying to work out is not just um how soon we can uh we can guess the whole phrase, but also there's a strange dynamic to it.
Sometimes as you get more words it goes off track a little bit and uh there are some there are some things where the wide context matters, some things where
the close context matters. Sometimes the
second utterance in a dialogue has some advantages from knowing the first utterance. Sometimes it doesn't. So
utterance. Sometimes it doesn't. So
we're learning a lot by scoring every single phrase. However, the the ultimate
single phrase. However, the the ultimate score for the minor is to go through three dialogues and the average accuracy is their score. That's about to change
uh in in quite an interesting way, but for now it's like that. And we're going to add German and Chinese to it very shortly.
>> And do you have 256 miners working on this?
>> No, I think on average it tends to be a couple of dozen at any one time. You
know, 20 30 miners. Um, we're getting a good spread. We've had the usual things,
good spread. We've had the usual things, uh, all kinds of weird exploits and, you know, as I say, we've only been live for nine weeks, I think, and, uh, a lot of
the time has been getting used to that.
We have a, um, our lead developer at the at the, uh, subnet code level is an ex miner, so he thinks in terms of exploits all the time, which is great.
[clears throat and laughter] uh and he's part of a community that that know knows all those things. Uh but
uh yeah, we're a very small team and keeping up with all of that is almost as much work as the R&D.
>> Uh yeah.
>> Yeah. So lately uh Bit Tensor has undergone a large change that we call TFlow, right? And it sort of came out of
TFlow, right? And it sort of came out of left field, right? There's debate over whether it's a good thing or or not. Um
and there's a lot of opinions on that.
Um but you know what it does is it basically determines how much emission you get from the chain. So basically the payments that the chain is making to
your miners based on whether there is a positive amount of tow coming into your subnet and being staked in the AMM. Right? So you have So what
this means is that you as a subnet owner have to continuously you know be out there shucking and jabing and getting people to get interested in what you're doing and stake their tow and and
basically and basically be saying well I think that these subnet tokens are going to be worth more in terms of Tao somewhere down the road which is why I'm staking my TOA now. So, with you guys, it's kind of like, well, you know, I I
believe that this is a a moonshot of language translation. Um, or or maybe
language translation. Um, or or maybe one of the little products that you're you're going to you're going to put out along the way might even break out. Um,
and it might happen sometime in the relatively near future, right? Because
as an investor, I probably have some time horizon and and given that it's crypto, it's probably not longer than a year, right? So, so what what what is TO
year, right? So, so what what what is TO how do you feel about TFlow? what is it doing to you and is it hurting you or is it helping you?
>> Uh it's helped us a lot actually. So uh
I think that um we have an awful lot to say. Um if I the only thing that is
say. Um if I the only thing that is throttling our um our uh um PR I guess the public perception of us is is not having enough time for me to you know I
keep promising myself I've got to tweet at least three times a week and uh we've got to be chatting more on Discord and doing this and that and and of course uh um appearing on podcasts etc.
uh but also we have a lot of interest from those kinds of idealistic people who think that the purpose of bit tensor is to do these big ambitious projects.
So when const reached out to us there were several reasons for that. One was
very simple. He was interested in what we're doing wanted to find out about it.
So we had a chat, but another was that our subnet code was a fork of a fork of the Aphen Foundation code, which I deliberately chose because it had some
interesting uh anti-copying uh features in it. And his ideas have moved on a bit
in it. And his ideas have moved on a bit since then. We've found out from hard
since then. We've found out from hard hard experience that not everything in that that approach was right. And
actually we're now directly collaborating with his team on the next generation and he and he's very keen for some of his ideas to be implemented and even if his team's running behind on it
that we might get there first and and so that's very exciting. Uh and I think that there's so much to say about the mechanics of where we fit into Bit Tensor. There's so much to say about
Tensor. There's so much to say about speech research and you know the weird thing was the very week we launched Apple started saying they were building in real-time translation to their
AirPods and you know I think we can safely say that our technique will have lower latency than what they've delivered which is you know often 5 10 15 seconds behind you know so
>> Wow. So you think you can beat Apple?
>> Wow. So you think you can beat Apple?
That's amazing.
>> Oh yeah. Yeah. I think so. Um [laughter]
there's a there's a lot of reasons for that. Um,
that. Um, I mean, I don't know. You know, this is one of these things that uh I I think in some in some podcasts I go into a lot of
technical detail. Uh, but the idea of
technical detail. Uh, but the idea of guessing what's about to happen is a fairly new concept in the language world. It's happened in signal
world. It's happened in signal processing a little bit earlier. And our
chief scientists, Josh, and I have worked on latency problems with speech and audio for many years. And uh we're very well aware of all the latest
research and uh I came up with an idea for um speech morphing, you know, the kind of stuff where, you know, someone has a video of Obama and it sounds like Trump. Well, that's used for some really
Trump. Well, that's used for some really important things like someone with a speech impediment can lose their speech impediment. Um and we were developing
impediment. Um and we were developing something for noisy speech. So some
people have uh um as they start to go deaf they can't hear an individual voice in a crowd. Uh but it is perfectly discernible to someone with normal
hearing which means it's also discernible to a computer. So rather
what most people were doing to fix this with neural nets was to train a neural net. Here's a bunch of dirty speech.
net. Here's a bunch of dirty speech.
Here's a bunch of clean speech. Let's
map one onto the other and we'll transform them. I said let's not do
transform them. I said let's not do that. Let's recognize the sounds of the
that. Let's recognize the sounds of the dirty speech. then recognize the tone of
dirty speech. then recognize the tone of voice and the accent of the person speaking and then reynthesize them completely clean without noise. [snorts]
And the proprietor of the company wasn't interested because it would mean designing from scratch a completely new uh um type of network architecture.
Didn't want to put that many people on it. If I'd have had Bit Tensor at the
it. If I'd have had Bit Tensor at the time, no doubt I would have done this.
And so I left the company and three or four years later Josh called me and uh said that they got it working because they decided to finally do it my way.
Great. But he said the latency sucked.
It was like half a second. Then about
less than 18 months ago. So we're only talking about summer last year. He
called me and said, "We fixed the latency problem." And this is amazing
latency problem." And this is amazing because I thought, you know, you got to recognize all the sounds, then you've got to uh um reynthesize them all and make them sound like the original
speaker. So you're doing like multiple
speaker. So you're doing like multiple um uh analyses of the different aspects of speech and then recreating it. Uh how
could it be less than you know quarter of a second maybe? They got it down to 50 milliseconds in the lab. So 20th of a second and they said the theoretical minimum with the new technique was 25
milliseconds. This is like less than a
milliseconds. This is like less than a tenth of what I thought was physically possible. And I'm not a mathematician
possible. And I'm not a mathematician but Josh is. and he explained it to me about five times before I I uh actually understood it. You know, if we'd have
understood it. You know, if we'd have been in the room with a whiteboard, it might have taken me three times, but you know, it he's always a bit ahead on those kinds of things. And
>> [clears throat] >> um I said, "Oh, I see what you're doing.
You're just guessing what comes next."
>> And then the penny dropped. Well, what
if we did that for words rather than sound signals? And then the idea
sound signals? And then the idea [clears throat] for this appeared from that. And uh that was really exciting.
that. And uh that was really exciting.
And then coincidence upon coincidence uh a week after we registered uh the subnet, Josh phoned me and said, "Actually, uh I'm leaving my job. Uh
it's got very boring now. They're not
doing anything cool. Do you know of anything cool going on?" I said, "Well, oddly enough, there's something which might be just up your street." So, he's now our our chief scientist. And um uh
so uh yeah, very very exciting times. Uh
and the reason we think we can beat Apple is that um this is the absolute latest research and it depends on doing some things that they're unlikely to
ever do. So for instance, the level of
ever do. So for instance, the level of predictability is highly context specific. So if we built a meeting
specific. So if we built a meeting system for a company, we would train that meeting system with all of their own vocabulary, all of their documents,
etc. And I don't see that as being something that is likely to be an Apple product. Um but that's the way forward
product. Um but that's the way forward with this kind of stuff to and you know maybe one day >> models will be 100 times the size they are now and include everything in the
world but right now to do it fast you need a small model and to do a small model you want to target the language.
Yeah.
>> Yeah. So if you I mean if you this was like a bit tensor specific language translator you'd have phrases like sum of subnets and tile flow and you know you know miners and things like that.
>> Yeah.
>> Yeah. Or even if this was me talking to my uh to my oldest friend and we've got the last seven years of our chat history on WhatsApp.
>> Yes. So you have a very specific you have a you and your friend specific library >> all the names all the places we talk about.
And so anyway, I came back to it just before, you know, when I was between jobs, before Nigel called me to recruit me for score and after I'd left the boring job, I was uh doing some
research. I've got to come back to that
research. I've got to come back to that idea that Josh had about prediction and see whether anyone's doing it for translation. And bizarrely, there was
translation. And bizarrely, there was one paper published on it in April by Phil Woodland and team in Cambridge. He
taught me decades before. So I know Phil Woodland
decades before. So I know Phil Woodland and they are not going to catch up with us once we get going because they admitted in their paper that they hadn't
got enough money to keep going. You know
they they were saying that they had to use an open-source texton uh um LLM whereas they'd much rather use uh the most advanced multimodal one. We're we
can start with the most advanced multimodal one because we can host it on shoots or host it on targon at a fraction of the price that Cambridge can and we can also get it trained for the
specific use cases that we're experimenting with. So we can have
experimenting with. So we can have miners doing it for the oil and gas industry and the miners doing it for my friends on WhatsApp, you know. Um, so
this is uh it's an unbelievable synergy that these technologies have emerged just to the point where there's a financial ecosystem that allows me to do it without having a boss and without all the
>> Yeah. Now you brought up something just
>> Yeah. Now you brought up something just very interesting just now. So you said not only does bit tensor supply you with a subnet which is basically free work
for creating your product but uh the inference to power that free work it's not free but it is quite a bit less through using other subnets shoot and
targon right and because those things exist your inference now costs like 16 to one10enth basically is what I've >> heard when they start to need translation we might be able to swap >> [laughter]
>> Yeah.
>> So, and there's more than that. So,
we're talking to uh Macrocosmos. They
were so helpful when we were setting up the subnet with all kinds of ideas. Um
their subnet 13, the data universe.
>> Yes. Screaming.
>> Yeah. We need, you know, where are we going to find um you know, a thousand hours of conversations in Mongolian?
Well, they'll probably find a YouTube channel that has that, you know. So, uh
yeah. So, we're going to be using them.
we'll probably be using Hippius. You
know that to me it's a principle of the foundation of our company that if we have a if we need a supplier that we first look amongst our peers in the uh
in the Bit Tensor world because those will be the best kinds of suppliers.
>> Yeah. So you'll use Subnet 13 to scrape conversations to train your LLM in in the specifics of language prediction.
>> You'll use Targon and shoots for inference. You might use Hippius for
inference. You might use Hippius for storage and these are all other subnetss and so you will you know the the cost savings of using those products accretes
to you you know cost you quite a bit less in all these different dimensions >> that's very yeah that's an unappreciated sort of side effect
>> and you just said you're doing it sort of you know phil to be you know philosophically aligned with the other towel holders right >> but I think but I think just practically
it costs like a lot less Yeah, exactly. And also I if we if all
Yeah, exactly. And also I if we if all those businesses support each other then it also supports the price of towel let you know forgetting the price of our individual you know in some senses we're
competing but actually you know we're we all have an interest in in the the whole project succeeding >> and uh um yeah it's and it's an
amazingly supportive community you know uh most of what I know about how to do iterative software development on bit tensor comes comes from copying what Shaq has been doing with ridges and you
know >> I I've been he's been the most responsive person you know if it's easier you know I don't know if you know much about SCORE but the four founders
of SCORE are on different continents so there's Connecticut Paris Dubai and Brisbane [snorts] so for a while it was easier for me to get a meeting with Shaq
than with Tim [laughter] same with and obviously Steph and Will are in London so I can just hop on the train and I can see see them. So, yeah,
it's uh it's it's an amazing community overall and it's amazing. It's an
amazing community in the UK as well.
Yeah.
>> Yeah. Shack is amazing by the way. So,
we uh Canada, but that's not too far.
Yeah. Yeah.
>> Yeah. Yeah. So, still Core just invested in Ridges. Well, not that long ago, but
in Ridges. Well, not that long ago, but but yeah. So, we you know, we've been in
but yeah. So, we you know, we've been in contact and Yeah, we're very impressed.
Are you is your subnet competition also winner take all like what Ridges is doing?
>> It is. We're looking at some modifications to that. Uh I think that we might have kind of multiple parallel ones. So for instance, if we had a
ones. So for instance, if we had a challenge that had one dialogue in English, one in German, and one in Chinese, we might give a a proportion to
each. Um we might also um uh give a
each. Um we might also um uh give a small amount. So obviously for for
small amount. So obviously for for someone that's really interested in the space that that wants to mine in the space but is you know maybe a few weeks
or even months behind the leaders um then uh we might give a small amount of an incentive for anyone that makes a a good attempt uh just so that they at
least get rewarded for for their work.
But we haven't seen anyone else quite do that. So, we got to work out the maths
that. So, we got to work out the maths of of how how best to incentivize people that that that aren't doing uh aren't necessarily furthering the product. Um
because obviously there are other things they can mine where the incentives are in proportion to the amount of work you do. If you're you're providing
do. If you're you're providing processing on some problem, uh whether it's fast or slow, you solve the problem and you get some of the some of the incentive. And and you know, we're we
incentive. And and you know, we're we are completely open-minded. Um, we can Yeah. We're so new that it might not be
Yeah. We're so new that it might not be winner takes all next time we speak.
>> Yeah. Yeah. You're still you're still figuring it out. So I I would assume so at this point you're you're sort of you're semifar away from from actually having a product, but have you thought
about like how you know do you have an idea of where you are going to go to drive value into your Subnet token? You
know, there's a lot, you know, the answers to that range from, yeah, we're just going to buy back, you know, as we make money, we'll we'll buy back things off the open market, all the way to what MOG has done with Hippius, right,
>> which is a complex but highly brilliant system to deterministically tie the value of the Hippius token to a rise in revenue and usage of the Hippas product.
>> Yeah. So I um uh I I was very interested when I saw he did a long interview on that. Um and
that. Um and uh we had a chat about that a little bit after. Um and
after. Um and I think we could do something like that.
But my thinking is the obvious place to tie in tow is through the sales process.
So if uh so I mentioned it briefly that for instance if somebody wants to become a reseller and get an inside track on what we're doing that they have to stake a certain amount of tow if someone wants
to have exclusivity in a vertical or exclusivity in a region which would make a lot of sense in both cases they put tow in because the tow >> sorry is it or your alpha
>> alpha sorry it would be your alpha. Okay
I just want to clarify. Yeah. So, so
they would be staking their towel on our alpha. Uh because actually there's a
alpha. Uh because actually there's a direct link between the exclusivity and the product. So if you want to be the
the product. So if you want to be the legal industry reseller of translation services, you then stake a certain amount of uh of
alpha and you get that exclusivity, but also you're effectively funding the uh the specific competitions that train the
model for your industry. Similarly, if
it's a regional thing, so if if uh if um Huawei wanted to say actually we want to be your reseller in China exclusively, we could say okay well you stake this around town, you will own the Chinese
language model, you know. [snorts]
>> Yeah. Stake your alpha. Well, stake to Yeah. and and then
Yeah. and and then >> you must you must hold okay so another way to put it is you must hold x amount of um Babel bits alpha token
>> and then you will own the rights to distribute our stuff in a particular region. Okay.
region. Okay.
>> Exactly. Yeah. And I think that that that could be really useful because it um it has a kind of automatic proportionality. So if there was some
proportionality. So if there was some very small niche, so uh I once built a uh speech recognition based search
engine for the BBC Wales archives in Welsh and you know there are three >> Yeah, maybe there's three and a half million people in the world that speak Welsh. So
Welsh. So >> that's the only thing I know and I know that because of Doctor Who. It means Bad Wolf.
>> Well, of course that interestingly that that has been made in Wales since the reboot because of Russell B. Davis being
being Welsh.
>> But um yeah, so that's an interesting one because as um the the the key players there, there are Welsh institutions, there are Welsh
universities that all would have a part to play in that. They own archives of documents that be useful. Uh for
instance, it's part of Welsh law that in the University of Wales, all meetings take place in Welsh. And if the if the participants aren't Welsh speakers, they actually have uh interpreters. The
meetings literally don't start till the interpreter arrives, which is fair enough. Archives are involved. Now, we
enough. Archives are involved. Now, we
could automate that, but also think about those recordings. Well, how useful those would be for us for training models. So, uh you know, there's lots of
models. So, uh you know, there's lots of cool stuff there. And I think that that's the way we would link it. Um and
I've been around this before. I did a lot of work on accent detection and uh you know for instance when Siri came out it was useless with regional accents
and there were there were seven versions originally and which version was on your phone depended on where you bought your phone. So if you bought a phone in India
phone. So if you bought a phone in India you'd have to speak with an Indian accent for Siri to understand you.
Interestingly, as deep learning came along, they got a lot better at it. And
now, it's amazing all of those videos with a, you know, a Scotsman talking to to Siri in to some amusing conclusion, those don't happen anymore. And, you
know, for a while there, you know, not so much in Scotland, but in America, there were people calling it racist technology. Say, you know, I'm I'm
technology. Say, you know, I'm I'm American. I have a strong Hispanic
American. I have a strong Hispanic accent or whatever.
uh and and and if it was uh you know something like the spell checker in uh Microsoft were way ahead on this you know on doing regionalized stuff and we
think that for translation regionalization is essential you can't have a single model that does it all because you have to have the output as well you it's not just a matter of recognizing what's being said you have
to say it in a way that will be understood and the way in which a Jamaican person speaks English is very different from the I speak English and so uh um someone might say, "Yeah, we
like your translation system, but it doesn't speak our English, you know, and so >> uh and I think that staking some alpha would be the way to to get that to get that." Yeah.
that." Yeah.
>> Very interesting. Great answer. Um okay,
let's talk a little bit about you because you've sort of alluded you you've had several illusions to your past. Um and and before the show, we
past. Um and and before the show, we talked a little bit um about you in the late 90s like being in the internet. So
you're like me, right? Like I was in the the internet in the late 90s also, right? Which was sort of a weird thing
right? Which was sort of a weird thing to be in at the time.
>> Could you take take us through your history starting with that time and how you got involved in >> Yeah. So so I had a kind of um uh I've
>> Yeah. So so I had a kind of um uh I've had parallel careers literally since 1995. So I left Cambridge with a
1995. So I left Cambridge with a qualification in speech and language uh you know basically computational linguistics and speech recognition and
there were no jobs for a lab assistant in that world. I didn't want to stay on in the university. It wasn't quite me and I got a job as a web programmer for
a subsidiary of Newscore and uh then I got head-hunted by the BBC. So I led the development of BBC News online. So it's
obviously one of the biggest uh um web projects in the world. Uh I think it it hits you know over one and a half billion people a month now. And we
didn't know how big it would be. So we
designed really scalable technology and that taught me something very interesting. Uh it's perfectly possible
interesting. Uh it's perfectly possible to create something that will scale indefinitely. And actually the the
indefinitely. And actually the the system I built was decommissioned this year 28 years later.
>> Wow. You built this in the '9s? Yeah.
1996.
>> What was a content management system and how did you >> Yeah, basically it was a very sophisticated content management system and um so that kind of gave me a
reputation and a career but I couldn't help getting dragged back into uh speech and language stuff. So I formed a small
company to do um to use speech recognition not in the way that they used to show it at trade shows. So the biggest fallacy in speech is that it's used for for for
doing dictation. You know what I used it
doing dictation. You know what I used it for was recorded speech. So what if the BBC has got an archive of 10 million hours? How do you search that archive?
hours? How do you search that archive?
If you can index all the speech with speech recognition, you don't you don't even have to synchronize it as a separate process. The speech recognition
separate process. The speech recognition system actually puts a time code on every word. So you search for the word,
every word. So you search for the word, it jumps the time code. So we built systems to do that. And uh that was that company was bought by Autonomy. Uh so I I had used their technology for some
some language stuff at News International before and so I had a close relationship with them for a very long while. And so I've always
long while. And so I've always oscillated between global scale digital products. Um I built one of the
products. Um I built one of the spin-offs of the Open University called FutureLearn which is kind of like a slightly more slick Udemy or a Corsera
type product. Um but I always kept
type product. Um but I always kept coming back to speech. So I've built uh search applications using speech uh a uh
pronunciation training uh technology which basically uh it's the still the only one I have a a patent for this
which which monitors and assesses and gives feedback on continuous speech. So
most speech scoring systems they make you read a phrase and then they give you the score. The problem with that is if
the score. The problem with that is if it's just a bad habit when you're doing a test, you won't have the bad habit.
So, I don't say you're a German that pronounces your V's like Fs, uh, when you they'll give you 10 sentences with V's and you'll get them all right. But
then when you're reading your conference speech and you're excited about what you're saying, you revert to your bad habits. So, we would actually let them
habits. So, we would actually let them just read their conference speech and it would flash up when you're doing it wrong. [snorts] Uh, and so that was a
wrong. [snorts] Uh, and so that was a really interesting product. Um, and so over the over 30 years, I've kind of switched between things like the BBC. I
did um I built ITV, which is BBC's main TV competitor in the UK. Uh, I built their TV catchup service. So, these are nothing, no clever new algorithms there,
but learning how to do scalability, learning how to do uh robustness and security. And when I was working with
security. And when I was working with autonomy, the biggest one that I find is missing in startups when I'm advising them, we learned how to do sales. I
learned how to build and [laughter] run a uh a global sales operation. And that,
you know, that's that's where my experience of using resellers to get into verticals that you could never understand from your own experience. Uh
that was really exciting. And I still have those contacts. I met with one of the uh um senior guys at CGI the week before last. Told him what we were doing
before last. Told him what we were doing and you know immediately I said, you know, I told him the product wasn't ready yet, but immediately he listed the large public projects that CGI are
bidding for now that could use real-time translation, you know, just off the top of his head. So these guys, they have a visibility of the world's markets, but of course since those days other things
have appeared. So the other big channel
have appeared. So the other big channel to market would be AWS marketplace. Put
an API in there and show that your benchmarks are faster than Google Translate and give people a free test.
Um don't know whether they'd let you do that on the Google Cloud Marketplace but Azure Marketplace etc. And there there are new routes to market where you can put your product up, throw some digital
marketing spend, monitor the conversion rates, and yeah, it's a fantastic world now for launching new products in that way.
Going back to your question about whether it would be something that goes in your ear, launching a specific consumer product, launching a competitor
of the AirPod. Well, doing that even with Bit Tensor, that would still cost more than probably all the Bit Tensor projects put together. You know, developing consumer
together. You know, developing consumer products uh and marketing them globally is something which you can only do when you're already rich. you you know >> yeah you have to be a you have to be a
big centralized company it's hardware so you got to manufacture yeah so you got to take the risk of manufacturing a lot of them and maybe they won't sell but you you just have to you know
>> safety testing everything the different materials all of these things now that's not to say you know there are things like Kickstarter that help people do that sort of thing but I don't think that's me for me
>> software has been attractive because it's the fastest way to turn brain power into value. And if you can create
into value. And if you can create something valuable, someone will be able to sell it. And you know, when I first learned to code, it was I I said to a friend of mine who's a my cousin who's a
painter and decorator, I said, "This is the equivalent of that, you know, that I'm basically swapping this skill I have for money and and I pay my my rent, you know, he he paints houses for money, you know."
know." >> So very fascinating. And this is like this is another key point that I want to drive home to all you people listening out there. So you're you're a serious
out there. So you're you're a serious person who's built multiple serious scalable products um for the BBC for other large companies and you've raised
money with traditional venture in the past >> and and now at this point you're looking at Bit Tensor and you're looking at the Bit Tensor subnetss like that's your primary game. You're like, "This world
primary game. You're like, "This world is better in multiple dimensions," >> which that is. It's not like you're some somebody who just loves crypto and you know, you're you're here you're here for
the hippie right? Like cuz that's what some people think about Bit Tense, right? Like, "Oh, that's not serious."
right? Like, "Oh, that's not serious."
>> And you're like, "No, I am serious. This
is just better. It's it's practical."
>> And actually explaining uh a company that has created a perfectly orthodox product. Here's a product that you can
product. Here's a product that you can license and use. uh if you explained the Bit Tensor part, there are still people that would run a mile. They'll say, "Oh, this is something to do with crypto.
It's all going to fall apart. Uh you
know, how do I know it's going to work?"
And actually, we are doing a small fund raise for fiat currency. But um partly uh because of what's happened in the
last couple of months. So, our our November payroll cost us twice what our October payroll cost cuz at the end of October, we had this peak that went up
to over $500 and then all of a sudden it's down to 270 or something. And uh
and if you're selling your towel to pay your payroll, that's not a great place to be. But so, but we're we're so we're
to be. But so, but we're we're so we're looking to raise about half a million, which we think will see us through to um to financial stability. And once we've got that stability uh and once we've
acred a certain amount of uh of crypto I think that the the the variability will will be less of a problem. But right now when we've got a finite pool of money
literally doubling the payroll in four weeks was crazy you know just that that >> yeah because and by you're doubling it because the price of tow not your subk but to
>> got chopped in half. Yeah, and actually tal was very interesting because uh then we had some you know some good things to say about what we were doing some of the
results from our subnet we had uh um uh cons showing us a little bit of support and actually we went from uh I can't remember what the price was like 00019
hovering around there 002 and then it went up to about three and it hasn't gone down again actually we've we've managed to and I think that we now have a constant stream of news. We're doing
more, there's more R&D, there's more stuff to talk about. Uh there's more diverse miners uh making money from us.
And I think that uh if we just keep up telling the world what we're doing, uh hopefully it won't drop back down again and it'll creep up. And um and I think that again following an example from uh
from Shaq's book, you know, he's what a year in from when he took over, I think, uh nearly. and uh and he still only has
uh nearly. and uh and he still only has a staff of four, you know, and that's that's a pretty good level >> that you can't do that in the Orthodox world. You just can't
world. You just can't >> you can't do that in the Orthodox world.
You can only do it here. Absolutely
true.
>> Exactly. Yeah.
>> H Okay. So, um I I I think a lot of questions. Is there anything you think I
questions. Is there anything you think I should have covered that I didn't?
>> Well, there there Yeah, maybe a couple of things. So there's a I came to into
of things. So there's a I came to into this so fresh that when I was in Paris like two weeks into my first introduction to bit tensor I wrote a
paper about using bit tensor to do iterative software development rather than doing what score were doing at the time where you're basically using it to farm out your processing power to run
complicated uh vision algorithms. And then someone said, "Oh, you should look at ridges." And at the time, their
at ridges." And at the time, their website didn't really tell you very much. And and so that was something I
much. And and so that was something I thought right at the beginning. I
because I was thinking about my, you know, long list of great inventions I've never been able to do. And now I'm thinking, I can do all of them over the next few years. This is great.
[laughter] So, so I started to think, well, what is the difference between this and venture capital? And I drew up some little diagrams and charts to try to get my head around it, you know, partly to prove to myself that it wasn't all a big scam, you know.
>> Yeah.
>> And so I I sort of thought, well, okay, one thing is it's way more complicated than any other crypto because for every transaction, every shift in price, there
are the miners, there are the subnet owners, there are the validators, and the stakers. And all those four
the stakers. And all those four different communities have to be aligned for something to start. start succeeding
and that is really interesting and then I sort of thought well how is this different from the real world you know uh um you know the the outside world the
orthodox world the straight world um and then I thought well actually it's very different from the startup world where you're constantly being asked stupid questions by people who don't understand
anything about anything and you know this is a world of intelligent people doing clever things but um actually what it is like is like the public limited
companies that trade on the stock market. So what is it that the staker
market. So what is it that the staker gets from investing in a bit tensor startup that they don't get from investing? You know, in the UK we've got
investing? You know, in the UK we've got great tax breaks for normal investments, SEIS and DIS, if you if you know what those are. You literally can claim back
those are. You literally can claim back more tax than the than the losses. You
know, you can it's like 125% you [clears throat] can claim back.
>> So [snorts] it's amazing, but you still can't play around. You can't once you've invested in a company, it might be 10 years before you see a return literally.
By play around, you mean you can't you can't get out. You gota
you're notid there's no liquidity.
Whereas this is a way of investing in startups but with the stock market you can say well paper bit are doing great things or you know if I get hit by a bus tomorrow and uh uh and we're not doing
great things anymore then you can go and invest in bridges. But the point is that that the that the st >> Yeah. the staker can have a great time
>> Yeah. the staker can have a great time and actually our COO who's who's got a background in um banking uh DevOps. So
he's the guy that understands scalability, security, robustness way beyond most uh startup guys. The reason
why he wanted to join is that he was a or still is a a bit tensed trader. He is
he is a an alpha trader and he when I first met him he was invested in 40 different subnets and this is an incredible thing and it's more like a stock portfolio and I think that's
something that very few people mention that that uh comparison that it's a bit like the stock market. Uh that's what that's the attraction for the the stakers. It's like uh except that these
stakers. It's like uh except that these are creative startups doing amazing innovative things which corporates that you invest in on NASDAQ usually aren't you know um you >> yeah because those are later stage
companies right something's on the NASDAQ it's already gone public like you investor you missed the best you missed the best part you missed the part where you could invest when it was like a 10 million market cap right so you didn't
write it up to a billion or three billion you could only invest when it was already worth a billion or three billion or whatever >> right so here it's it's it's like a bunch of white combinator startups, but
you can invest early and also you can un uninvest. You are liquid at all times
uninvest. You are liquid at all times and that is a key >> and it's sort of crazy. Yeah.
>> Yeah, absolutely. And I really enjoy that aspect of it. Again, the penny didn't drop until I met Tom and realized that this is what he's been doing for the last year. He's just been trading. Um, and then the other thing: Nigel and I were both in our early to mid-teens when punk happened in the UK and everyone was talking about anarchy and anarchism, and some of us were quite serious about it politically. You know, we read Malatesta and Kropotkin and all of those things. And what I found with BitTensor, it's not quite a perfect example, but it is anarchism in practice.
The miner doesn't have a boss, and actually we don't even police each other. I mean, there was an instance in my first week at Score: Tim was showing me the different emissions distributions to the various validators, and one of them was exactly the same amount every day, and he said, "Oh, that's obviously broken." And I said, "Well, if it's broken, how come they're allowed to make any money at all?" And he said, "Well, it doesn't really matter. They're not getting that much." And the protocols will probably be upgraded at some stage to stop that exploit, but it's not a major exploit.
And I thought, "This is interesting, because in the orthodox world you would call the Serious Fraud Office, you know, and say the police need to go around, these people are ripping off money from investors." But actually the way we do things in the whole of the crypto world is that if there's an exploit, we change the protocols to prevent that exploit, and that is genuinely anarchism in practice. You know, this is a world where nobody is telling anyone what to do, but people are making a living and creating value, inventing extraordinary things. And for me it's kind of vindicated my whole career of fighting bureaucracy. You know, when you rise to a certain level you get a management job and you think, that's great, it means I can control my project or whatever, and then all of a sudden you've got to do everyone's staff appraisals, or it's bonus time and you've got to decide, with £4,000 and six people, who gets what bonus, and it's just crazy. Whereas with BitTensor that's all automatic. The algorithm decides who gets what. Great.
>> Yeah. [snorts]
>> So for me, you know, this is something which has been brewing in me for 50 years: to suddenly see in practice a world, an economy, an ecosystem that generates value, that pays the rent, that doesn't have bosses, doesn't have those kinds of toxic relationships, doesn't have bureaucracy, all of that crap that the corporate world is obsessed with.
>> Yeah, I'm totally with you on that. I love it for a lot of the same reasons.
Um, I wouldn't call myself a punk rocker, but, uh, I didn't suspect you for a punk rocker either.
>> I played in a punk band for many years. But actually, what I was going to say is that, because Nigel was the person that took me from leaving one job, being a hobbyist vibe coder and blogging about it, and asked me to join Score, I gave him an early Christmas present. It was a first pressing of the Sex Pistols' Anarchy in the UK, which was originally on EMI, but they got kicked off EMI, so they were all deleted, and also the very first pressing had the wrong name for the producer on the back, so it's quite a collectible record. So I sent that to him in the post the other day, and I think he's one of the very few people that would understand quite how significant that was, because when he was 13 he probably bought a copy as well.
So yeah, it's an extraordinary world we're in, and everyone is so supportive. It's just extraordinary, the community. I haven't seen any kind of toxic behavior, any kind of secrecy. The competition is all in "can I do a better job"; there's no competition in "can I get promoted to a better job." It's "can I do a better job."
>> Yeah. I got to say, like, you know, we were talking earlier that we were both around for the early internet in the '90s, right? And that was one sort of petri dish of a lot of people who were sort of crazy and excited about this new thing called the internet that was coming, that most of humanity did not believe in or see, and us crazy little people did, right?
>> And there were a lot of bad ideas, let's be fair. And then there were a couple of really great ideas in the whole internet thing, right?
>> And actually the same thing happened with Ethereum, right? There was a big, exciting time when Ethereum first launched, right?
>> Um, and we're seeing it now. What I would say is different about BitTensor from those other two periods is that the amount of signal and the amount of talent is extraordinarily greater in BitTensor than it ever was in the early internet or the early Ethereum days. It's like 80% signal.
>> Yeah. So I see the same thing. It's astonishing.
>> And when friends of mine ask me about it, there are a few different explanations I give, like the stock market one, but there's another comparison I make. I say that if you were to take Bitcoin and explain it to a conservative, risk-averse person, the two biggest problems are these. One, it has no underlying value; its value is built just on its trading price. And secondly, the world is using up a lot of resources solving completely pointless mathematical puzzles. BitTensor solves both of those problems. If you have some TAO and you stake it into subnets, you are investing in innovative startups solving real-world problems. And secondly, the mining is done by the work that solves those problems, so it's not wasted. That work would have to be done anyway if it was paid for by a venture capitalist or whoever else. So that to me is a next generation. You know, people
talk about Web 2, Web 3. I don't like those kinds of titles. But it's interesting how, you know, if the rules of BitTensor become limiting and someone wants to do something which pushes them beyond where the BitTensor community wants to go, they can start their own one. They can start another, different ecosystem. And I think that happened in a much more bureaucratic way with the web. So one of my friends, Brandon Butterworth, was one of the early pioneers of streaming video over the internet, and a lot of the W3C people and people in various organizations were against it: that's not what it was built for, it will kill the whole thing, [laughter] that much data cannot possibly be used. And then you look at Netflix and Prime and whatever. And, you know, he helped solve
some of the problems. So RealAudio was the first one that really made...
>> I remember that.
>> Yeah. So, he helped them resolve some of their protocol issues.
>> Rob Glaser. Yeah.
>> Yeah. Exactly.
>> So, yeah, this is even more anarchic than the early days of the web.
Yeah.
>> Yeah. Very cool. Well, our time is at an end. Is there anything you want to plug, website-wise or Twitter-wise, before we go?
>> Yeah. So I'll just talk about a couple of things that are coming up. Obviously, there's always too much work to do. I mentioned briefly the idea of confidence: you can't get that latency gain from a prediction unless you're confident that it's the right prediction. So our next generation of challenges, after adding the new languages, will include some kind of decision-making: not only am I making a prediction, but I'm going to say this is the right prediction. And so that is an extra bit of work. We'll put a base script out in a couple of weeks, which probably does it very badly, and then the miners will turn it into something that does it very brilliantly, and that's our next phase.
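For readers who want a concrete picture of the confidence gating Matthew describes, here is a minimal sketch of the idea. It is not Babelbit's actual challenge or miner code: the function names, the confidence field, and the 0.8 threshold are all illustrative assumptions; only the commit-or-wait logic reflects what is described above.

```python
# Hypothetical sketch of confidence-gated speculative prediction.
# Not Babelbit's real API: predict_continuation(), the confidence field,
# and CONFIDENCE_THRESHOLD are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    text: str          # the predicted completion of the partial utterance
    confidence: float  # the model's own estimate that the guess is right, 0.0-1.0

def predict_continuation(partial_utterance: str) -> Prediction:
    """Stand-in for the model call that guesses how the sentence will end."""
    # A real miner would run an LLM here; this stub only shows the shape of the output.
    return Prediction(text=partial_utterance + " ...", confidence=0.5)

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; tuning this trade-off is the miner's job

def maybe_commit(partial_utterance: str) -> Optional[str]:
    """Commit a prediction early, or return None and wait for more audio."""
    pred = predict_continuation(partial_utterance)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        # Committing early is what buys the latency gain; a wrong commit costs quality.
        return pred.text
    return None  # not confident enough: wait for more input and give up the latency win
```

The trade-off such a base script would expose is exactly the one described above: commit too eagerly and you translate the wrong words; wait too long and you give the latency gain back.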
>> Cool. Well, thank you, Matthew. This has been extremely fascinating, and I didn't really know much about you or Babelbit before this conversation. So I'm really glad that we...
>> I'd love to meet you in person. Yeah. And, um, yeah, I mean, there are very few podcasts which are consistently interesting, and I've always liked Hash Rate. So, yeah. Thanks.
>> Well, thank you. And you know, look, it's not me, it's you.
>> [laughter]
>> You were extremely interesting, so I just have to sit here and, you know, ask questions. So thank you once again. My name is Mark Jeffrey. This has been Hash Rate. See you next time.