The World's BEST New AI Model is 100% Free (Kimi K2 Thinking)
By Limitless Podcast
Summary
Key takeaways
- **Kimi K2: Free, Open-Source AI Challenger**: Kimi K2 Thinking, an open-source AI model from Moonshot AI, is now available for free download and local execution. It reportedly outperforms GPT-5, Claude, and Gemini across benchmarks, despite a significantly lower training cost of $4.6 million. [00:05]
- **Cost Efficiency: Kimi K2 vs. GPT-5**: Kimi K2 costs $0.60 per million input tokens and $2.50 per million output tokens, drastically undercutting GPT-5 Pro's reported $15 per million input and $120 per million output. This represents a potential cost reduction of up to 100x, making it highly attractive for businesses. [03:24], [05:03]
- **Mixture of Experts Architecture**: Kimi K2 uses a Mixture of Experts architecture: its trillion total parameters are organized into 384 experts, of which only eight (about 32 billion parameters) are activated for a given query. This modular approach significantly reduces computational cost and energy consumption. [06:12]
- **US vs. China: Open vs. Closed Source**: While US AI labs focus on massive compute and closed models, Chinese labs like Moonshot AI are leveraging open-source strategies and efficient architectures like Mixture of Experts. This approach lets them innovate rapidly and deploy advanced models at a fraction of the cost, challenging US dominance. [08:47], [10:35]
- **Open Source Licensing and Commercial Use**: Kimi K2's modified MIT license requires prominent attribution if a commercial product using it exceeds 100 million monthly active users or $20 million in monthly revenue. This contrasts with earlier open-source models that had fewer restrictions. [12:05]
- **Consumer Advantage in Open Source AI**: The rise of powerful open-source models like Kimi K2 directly benefits consumers by providing free access to cutting-edge AI. Users can run these models locally, ensuring privacy and avoiding vendor lock-in, though performance may be slower than cloud-based services. [11:36], [19:19]
Topics Covered
- Open-source AI challenges US dominance
- China's AI cost advantage is staggering
- Mixture of Experts architecture drives efficiency
- US AI development faces a capital expenditure mismatch
- Open-source AI is a boon for consumers
Full Transcript
The world's latest and greatest AI model
is 100% free for you to download and run
at home right now. Kimi K2 Thinking is the latest reasoning model from Moonshot AI, a Chinese frontier AI lab, and it beats OpenAI's GPT-5, Anthropic's Claude, and Google's Gemini
across pretty much all benchmarks. But
that's not even the most shocking part.
The most shocking part is that it only
costs $4.6 million to train and build,
which is only a fraction of the billions
of dollars spent by OpenAI to train GPT
in the first place. It's also 100% open
source, which means that you can
download and run Frontier AI right at
home where you're sitting right now. Um,
but of course, it begs two very
important questions. Number one, is
open-source AI the winning strategy?
We've been led to believe that closed
source is typically the better strategy
when you run a business, but China and
their AI models are proving us wrong
here. And the second question, the more
ominous question is, will the US stock
market bubble finally pop? Josh, what
have we got here? What is this new
model? And why is it taking over social
media everywhere I look? They did it
again. The Chinese did it again. They
knocked it out of the park. Grand slam,
home run. It's an unbelievably
impressive model. And this happens every
time. We get this amazing flagship model
out of the US. A couple months later, we
get the same thing, marginally better, at one-tenth of the cost. Like, a full order of magnitude less than what it costs for the leading AI labs in the US
today. The specs are really impressive.
We're going to get into everything.
We'll start with I guess just like the
high-level spec sheet. State-of-the-art on, um, Humanity's Last Exam, which is the reference point that we kind of use in terms of benchmarks. It scored the highest anyone's ever scored: 44.9%.
Um, it has a bunch of these really cool
breakthroughs, but the big thing that it
excels at, like it says in the post
here, reasoning, agentic search, and
coding. Now, there's a few cool things
that we could talk about here, EJ, maybe
we'll just get into the charts because I
feel like that's an easy way to
visualize how much better this model
really is than all the others. And what
we're seeing on the chart is that well,
GPT-5 was the best. Kimi K2 is now the new
best. And this is as it relates to
thinking and reasoning. And this again,
this is so impressive because one, this
model is fully open source. You can go
download the model and run it yourself
locally for free. What were your first
thoughts when you saw this? Cuz to me, I
was like, oh my god. Like, why would I
use anything else? My first thought, if
I'm being honest, Josh, was like to look
at the stock market. I was like, is this
going to crash the entire US stock
market? Like when DeepSeek initially released their R1 reasoning model, do
you remember? Uh it was at the end of
last year. Um, people's kind of entire
bubble and vision of how AI models were
trained was completely burst. And since
then, China has repeatedly delivered groundbreaking models, one of which comes from the Moonshot AI lab team, which built Kimi K2. Um, it's such an impressive model for
a few different reasons. Uh for me,
number one, it can now compete with all
the best. And, um, personally, GPT-5 is
something that I use pretty much every
day, whether it's for like kind of
casual prompts and requests or whether
it's kind of like for deeper thinking
and research and some of the lines of
work that I do. Um, so it's become kind
of like quintessential for me now to
have a a separate model that I can
download and run privately on my own
computer at home that I'm showing on
this tweet here, that costs $0.60 per million input tokens and $2.50 per million output tokens, is just an insane cost saving where, if I was running a
business using an AI model, I would
there's like very little reason for me
not to switch over to something like
this aside from maybe like maintenance
and setup and stuff like that. The
other really impressive thing for me,
Josh, um was the team itself. Like this
is only a 2-year-old startup, which reminds me of another 2-year-old startup, which is Elon Musk's xAI,
right? And there's a funny link between
these two models, Josh, which is um
Kimi K2's reasoning, this thinking
model, um can do so because it does this
like really neat little chain of thought
experiment where it takes many steps to
kind of think to a logical answer versus
just kind of blurting out an answer for you. That's something that Grok 4 pioneered when they launched their new product. So, Kimi K2 has drawn on some of these learnings from xAI to
produce a similar model. The other
really cool thing is it does this thing
called tool use or tool calling whilst
it's thinking. Um, so if you imagine uh
as I'm kind of like trying to think
through a complex problem, um, I will
leverage different tools to be able to
help me get to the answer. So, if I'm
doing a maths exam, I can use a
calculator or if I'm doing a deep
research question, I might use Google.
Um, this AI model naturally has access to 200 to 300 different tools it can call whilst it does its thinking. So, just overall, a very impressive new AI model.
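The interleaved thinking and tool calling described here can be sketched in a few lines of Python. This is an illustrative toy, not Kimi K2's actual agent loop; the `calculator` and `web_search` tools and the step format are invented for the example:

```python
# Toy sketch of interleaved thinking and tool calling. The tool names,
# step format, and dispatch logic are invented for illustration; a real
# model emits structured tool-call tokens mid-reasoning instead.
def calculator(expression: str) -> str:
    # Trusted toy input only; never eval untrusted strings.
    return str(eval(expression, {"__builtins__": {}}))

def web_search(query: str) -> str:
    # Stand-in for a real search tool.
    return f"top result for: {query}"

TOOLS = {"calculator": calculator, "web_search": web_search}

def run_agent(steps):
    """Walk a reasoning trace, dispatching tool calls as they appear."""
    transcript = []
    for step in steps:
        if step["type"] == "think":
            transcript.append(("thought", step["text"]))
        else:
            result = TOOLS[step["tool"]](step["arg"])
            transcript.append((step["tool"], result))
    return transcript

trace = run_agent([
    {"type": "think", "text": "I need 8 active experts times 384 total."},
    {"type": "tool", "tool": "calculator", "arg": "8 * 384"},
    {"type": "tool", "tool": "web_search", "arg": "Kimi K2 Thinking specs"},
])
print(trace[1])  # ('calculator', '3072')
```

The point of the sketch is the shape of the loop: tool results land back in the transcript mid-reasoning, so later "thoughts" can build on them.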
>> Yeah, EJ, you mentioned the the cost
being 60 cents per million tokens. And I
just want to add a little bit of context
as to how low that actually is. I was
looking at the GPT-5 Pro cost per million input tokens, and it is $15 per million tokens for GPT-5 Pro. Currently, the
output is $120
per million tokens. Granted, this is the
top of the top. If you're using GPT-5 standard, input is $1.25 per million tokens. Output is $10. So any way you slice it, it's at least a 2x cost reduction, up to like 100x on the highest end, assuming it can compete with GPT-5 Pro, which all those benchmarks
suggest it very well can. So the cost is
really like it's it's a big deal and to
dig more into the point that you were making, EJ, and how it actually works. Well, we'll get to... sorry, no, we'll get there. Save the
to get to the funny jokes next. But
basically the way this works is like
there's this very complicated diagram on
the screen. I'm not going to try to even
explain what that is. But there's this
this fun way that I like to describe it
when I was describing it to my
friend earlier this morning, which is
that like Kimi K2, it's like this giant school and it has these things called specialists. And in fact, Kimi K2 has 384
specialists. You could think of these
specialists as like a math club or a
history club, coding club, debate,
whatever it is. And when you ask it a
question, it doesn't invite the whole
school. It doesn't invite all the clubs.
It's just, EJ, if you ask a math question, it will query the math club, and it chooses eight out of those 384
clubs to help combine their answers,
pick the experts, and decide how it's
going to solve this problem. So, it has
a trillion parameters, but it only uses
32 billion of them at once. And that's
how we're able to get the huge cost
reduction because it uses this thing
called mixture of experts, as a lot of people describe it. Basically, instead of using the entire model's intelligence to answer "what should I have for breakfast this morning?", it will take the chef club, it
will take the health club, it will
combine those together and it will form
an answer that should hopefully give you
just as good as a result if you took the
entire model, but it's much more
efficient in terms of cost, in terms of
energy, and in terms of the amount of
tokens it could generate because it's so
much cheaper across the board. And I
think that's one of the big really
exciting things that has been cool to
see coming out of China. We saw it with
DeepSeek, we see it with Kimi, and it's this mixture of experts architecture
where they're really kind of
modularizing the entire model and only
using the stuff that's important for the
specific query. Um they were put in a
very constrained position um which is
they didn't have access to the latest
GPUs or Nvidia GPUs. There's been a bunch of US export restrictions on Chinese labs getting access to these
kinds of things. So they've really
needed to kind of like work within their
bounds and means. Um and so coming up
with an architecture like mixture of
experts or the one that they did is
super important. And it brings me to
this meme, Josh, which is what are we
doing here? There is an obvious mismatch
between American-made AI models and the Chinese ones. Uh, you've got OpenAI, which is now projected to spend $1.4 trillion over the next 5 years. That's trillion with a T, versus Kimi training
for $4.6 million. Now, I know there's a
bit of like clickbaitiness here. That
$4.6 million was relative to one training run, and it usually takes a few training runs. But let's say it took like 20 training runs, right? At $4.6 million each, that's still only like a hundred mil, right? Or less than
that. So, it doesn't really matter when
you put it into the context that GPT-5 is rumored to have cost $1.7 to $2.4
billion for OpenAI to train. So, there's
a mismatch that I don't quite
understand, Josh. And that's what makes
me the most nervous when it comes to um
what American-made companies and frontier labs are doing. I feel like they're
missing the mark. I don't quite know
what it is, whether it's this mixture of
experts thing, but someone's being sold a lie, and I don't know whether it's me, um, looking at this Kimi K2 model and being like, "Wow, it's so amazing."
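The mixture-of-experts routing described earlier (384 experts, eight activated per query, roughly 32 billion of a trillion parameters used at once) can be sketched as a top-k gate. This is a toy illustration of the general technique, not Moonshot's implementation; the random scores here stand in for what a learned softmax gate would produce:

```python
import random

NUM_EXPERTS = 384  # the "clubs" in the school analogy
TOP_K = 8          # experts actually consulted per query

def route(gate_scores, k=TOP_K):
    """Return the indices of the k highest-scoring experts."""
    return sorted(range(len(gate_scores)),
                  key=lambda i: gate_scores[i], reverse=True)[:k]

def moe_answer(gate_scores, expert_outputs):
    """Blend only the chosen experts' outputs, weighted by gate score."""
    chosen = route(gate_scores)
    weight = sum(gate_scores[i] for i in chosen)
    return sum(gate_scores[i] * expert_outputs[i] for i in chosen) / weight

random.seed(42)
scores = [random.random() for _ in range(NUM_EXPERTS)]  # toy gate output
outputs = [float(i) for i in range(NUM_EXPERTS)]        # toy expert outputs
active = route(scores)
# Only 8 of 384 experts do any work for this query; the other 376 stay
# idle, mirroring how ~32B of ~1T parameters are active per token.
print(len(active))  # 8
```

The cost saving falls out directly: compute scales with the experts you actually run, not with the experts you store.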
>> Yeah. When I think about the role that
China plays versus the United States in
terms of like open source companies or
closed source companies here in the US,
uh the the thing that is reassuring to
me at least is a lot of these innovative
breakthroughs that happen on the
software level actually do happen in
these private AI labs. Um we do get like
chain of thought and reasoning and
there's like this whole slew of new
innovation that becomes standard very
quickly. That all happens in the United
States AI labs. And as far as we're
concerned, the AI labs in the US are still making the most progress
the fastest. They are creating the most
innovation. And then what you kind of
see like we described earlier in the
episode is that innovation starts to
trickle down whether it's voluntary or
whether it's stolen and it gets
implemented into these new models. And
they just completely cut out the bottom
in terms of cost and efficiency because
that's kind of all they're able to do.
They don't have access to the resources
of like millions of GPUs from Jensen Huang and Nvidia. They don't have the
access to $50 billion of capex just to
spend on employees, just to spend on
salaries and compensation. Um, so it
seems to me like I mean we're still
doing very well. It's just China is very
good at implementing the technology and
applying it at scale in a way that's
open sourced. And the open source thing, there's a lot to say for that
because it's it's very impressive and
it's kind of this community effort that
we saw early days with the United States
but once they became better they closed
it off so what happens is you get
innovation in one company like Kimi and then you see it implemented in DeepSeek and then you see it implemented in Qwen
and then suddenly this technology is is
kind of synchronously growing between
the three because it's all open source
they're publishing all the code all the
open weights, and it's much easier for them to thrive, whereas innovation in
the United States very much happens
behind behind a closed wall and it's
only leaked out at the advent of a new
model when they release it to the world
and people kind of reverse engineer how
it works.
>> Mhm. Um I was reading an article in the
Financial Times where they interviewed
Jensen Huang, um, and he said verbatim that
China will win the AI race if they
continue down the path that they're
currently on and if the US doesn't kind
of ramp up their energy production. He
was making a wider point that their open
source strategy is uh pretty effective
in the way that they're that they're
building these new AI models with the
constraints that you just mentioned. Um
kind of speaking more about the open
sourceness and the benefits of this. Um,
I I've got a tweet up here which shows
that Kimi K2 Thinking, this new model, can basically run on two Mac Studios with M3 Ultra chips, which is like a couple of thousand dollars' worth of hardware, which is an insane thing: to run a frontier AI model at home, privately, in your house,
trained and fine-tuned on any of your
own private data. So, you don't need to
kind of like sell that data to Sam Altman or whoever. Um, just super cool and
super cheap, right? Cuz you're running
local inference at home. So you don't
have to worry about anyone kind of like
spying on any of your queries or your
prompts or your research. It's just all
at home which I thought was super cool.
Um the other part of the open sourceness
which I found interesting Josh was the
fact that they had an MIT license with
this new release or an adjusted MIT
license and we'll dig into that in a
second. But the point being when
DeepSeek, um, released their first major
open source model and it took the world
by storm, there weren't any major license restrictions around it. So you could pretty much download and do whatever the hell you wanted with it. You could implement
it into your own product whether you
were an American founder and if let's
say you scale that up to a million users
that used a feature that was um
leveraging that DeepSeek model, you wouldn't have to credit that team at all. Um, Kimi K2 kind of like takes a
step in a different direction here where
they've released an MIT license where I
think if you hit, I think it's 100 million monthly users or $20 million in revenue for your product, you need to show the Kimi K2
label and say that listen, I'm using
this model under the hood, but there's
uh there's some differences with this
license, right, Josh? Um, can we can we
dig into that?
>> I believe it's it's modified. I don't
know to the extent that it is modified,
but I know that there is something
different going on here. What does this
say? Our only modification part is that
if the software or any derivative works
thereof is used for any of your
commercial products or services that
have more than 100 million monthly
active users or more than 20 million US
or equivalent other currencies in
monthly revenue, you shall prominently
display Kimi K2 on the user interface
of such product or service.
>> That's a fun little marketing ploy. Fair
enough. Fair enough. You know what it
reminds me of, Josh? Um, it's what Meta
tried to do with their Llama models,
right? So, um, Meta is the only other
major American company that I can think
of that went down this open-source AI
route. And the goal or the intended goal
at the time was to basically level the
playing field, uh, between Meta and OpenAI and other frontier model AI labs
which had raced so far ahead. So if you
released all this cutting edge AI tech
for free and accessible to anyone then
it kind of drives down the cost premium
that OpenAI and all these other
frontier AI labs can charge you uh to
access this thing. China's doing that, and it's taking a vast toll on the American AI stock market, right? So that's why we
saw like Nvidia crash I think 4.2% on
the news getting released and such. Um
I'm curious whether this kind of pops
the bubble and the capex bubble in
America. Josh, is that a crazy thing to
say? I mean, the markets reacted pretty
viscerally to this news.
>> I I don't think I have a problem with
this. I don't think it's popping a
bubble. I don't think we're in trouble.
I think this is just totally fine so
long as we continue to stay slightly
ahead or at least at par. I think we're
really excellent at making software,
distributing software, creating
products. I think China's really good at
shamelessly innovating and deploying
without needing to go through all the hoops and intellectual property problems that the United States mostly
has. Um, so I don't think this will lead
to any sort of bubble popping. I think a
lot of the frontier innovative stuff
still happens in the US. The place where
I will begin to start to get a little
worried is when this switches to
embodied AI. Once we start moving from
large language models to implementing
these into robots or implementing these
into physical hardware, that's where I
think we have problems. On the software
front, we're good. We're crushing it.
Everyone's spending tons of money. Um,
on the hardware front,
>> we don't have the same lead. And over
the last what 30 to 50 years, we've kind
of outsourced our manufacturing
capabilities to other places and
therefore are just kind of I mean
everyone knows we just can't really make
things cost effectively here in the
United States. If we are at a foot race
with China when it comes to making
embodied AI like humanoid robots,
specialized robots, whatever it may be,
that's where things start to get a
little bit scary because that's where
there is a significant lead and that
lead comes in the form of atoms which
are much more difficult to move than
bits because you can steal some open
source code, create this slight
innovation on top, roll it out to a
billion users overnight and that's
innovation. That does not happen between
version two and version three of your
humanoid robot. You actually have to
build it with a factory with real
materials and people and places and it's
it's very difficult and challenging to
do and China very much stands to be the
largest winner in that so I think on the
software front I feel really confident
and as of now that's all that we're
battling on but in this near future
where things start to become embodied
where AI be becomes physically
manifested in the world around us that
that seems like a place where I would
start looking at Chinese investments a
little bit more than the American ones
>> okay I I I think uh I might push back a
little bit and say that there is
reasonable evidence to be bearish on the
software side before it gets to embodied
AI. I mean, so a few ways to think about
it. Um, there is such a gross
discrepancy when it comes to capital
expenditure for these things. On one
side, you've got the US spending
trillions of dollars literally to train
AGI or the best AI models, and on this
side, you're you're in like the hundreds
of millions of dollars, which is orders of magnitude less, right? Um, so
there's an obvious mismatch here that we
aren't seeing. Uh, whether it comes down
to training architecture, training
design, or just kind of like hardware
manufacturing. I don't know where that
um kind of advantage is being played.
But the Chinese have found it and
they're able to kind of really push down
on that lever to get ahead or on par
with the US. And they've been able to
successfully do this for years now. At
this point, DeepSeek was kind of like
test case one. Now I've seen like you
know at least 50 open source models come
out of um Chinese Frontier AI labs since
then. Um number two it's not like the US
government has kind of like not tried to
to constrain them. Um we've imposed a
number of different sanctions which
include, you know, constraining which
GPUs um Nvidia and other manufacturers
within the US can sell to China. But
that still hasn't stopped them. Um, they've been able to maintain and train
these frontier AI intelligences despite
all of these different things. So um I
think if I were to look on the other
side of this, it would be so what if you
have an open source model that is super
cool. Um why aren't you using it right
now? Like I'm not using Kimi K2 regularly, even though I use GPT-5, and it might be better than GPT-5. And the
answer for me is pretty simple. Um I'm
locked into an ecosystem in OpenAI that
I'm pretty happy with, which is um it
has memory on me. It understands who I
am. It has a context of all the previous
chats that I have with it. But also,
most importantly, Josh, if there's an
issue with something on my account or
something that I'm trying to use,
there's a community that I can access.
There's a support team that I can speak
to. There's a software ecosystem that
supports me, right? Um, versus me
jumping ship to kind of Kimi K2,
setting it up on my own, and then having
to like troubleshoot it myself. I think
a lot of people will be disincentivized
to do that. It is difficult,
but I mean we're seeing market forces
from both sides right like I I saw you
included a link here somewhere where
Cursor and Windsurf's, um, new AI models, they were using some sort of Chinese models, and in fact they were
thinking in Chinese and I found this
really fascinating that like
American-made products are now thinking
in the Chinese language. So that's
certainly a concern in terms of the
commercial side where those API costs
really matter where if you can get a
million tokens for 60 cents versus $10, that's
that really affects the margins of your
business. For consumers like us um there
there's no real incentive to use Kimi K2,
and the phenomenon you spoke about
earlier where you can actually run a
quantized version of Kimi K2 on two Mac Studios running the M3 Ultra chips. Uh,
it generates tokens at like 13 to 15
tokens per second. So it's very slow
like you're getting a sentence or two every second. Um, which is much slower. It's going to feel sluggish. It's not going to feel great. There's a case to be made that
that changes, because this year (and it's funny that Apple's really the only computer maker that supports this now) they're releasing the M5 Ultra, which
will be the new version. And um it's
going to be interesting to see how it
plays out. What I found interesting,
this one side note actually that I
wanted to share with you, EJ, because you
might find it cool too is the version
that runs on these Apple computers, the
Apple Studios, um, it's a slightly quantized version.
>> I heard about this and I learned about
this recently in the Tesla um earnings
call that they had the shareholder
meeting recently and we're going to have
an episode on this later this week. But
there's this interesting thing that Elon
mentioned during the episode where he
was talking about quantized versus floating-point AI, and I was like, what the hell is that? Like, why are you spending so much time talking about this? It doesn't make sense. And what I realized is a lot of AI models, they use many, many digits after the decimal point in their weights to get more precise results, and that is floating point. When you quantize a model, you remove the data to the right of the decimal point and go down to small integers. So you lose some precision, maybe up to like 60%. But you gain so
much faster efficiency, so much better
speed improvements, cost improvements,
and you can actually run it locally on
these things. So I think it's
interesting to see the different
decisions that people are making in
terms of well how precise does the model
have to be versus how cost effective and
how efficient does it need to be. And
what we're seeing with Kimi K2 is
it's very easy to to overindex on the
efficiency but maybe that's not the
stated goal of OpenAI, where if they really wanted to they could quantize these models; they could go more to integer-type compute. Um, and it
was just something I was thinking about
is how they approach them because it
could just be, well, Kimi's just kind of
optimizing for speed and efficiency and
the downstream effect is it's also
really fast whereas OpenAI kind of
hasn't really optimized for that
specifically yet,
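The floating-point versus quantized trade-off being discussed can be made concrete with a toy int8 quantizer. This is a generic illustration of the technique, not the actual scheme used for the Mac Studio build of Kimi K2 (real quantizers use per-channel scales, 4-bit formats, and so on), and the weights below are made up:

```python
# Toy symmetric int8 quantization: floats become small integers plus one
# shared scale factor, shrinking each weight from 4 bytes to 1.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                   # 127 for int8
    scale = max(abs(w) for w in weights) / qmax  # one scale per tensor
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    return [q * scale for q in quants]

weights = [0.013, -0.472, 0.901, -0.004]  # made-up float32 weights
quants, scale = quantize(weights)
restored = dequantize(quants, scale)
# Precision to the right of the decimal point is lost, but the rounding
# error is bounded by half a quantization step.
errors = [abs(a - b) for a, b in zip(weights, restored)]
print(max(errors) <= scale / 2)  # True
```

The design choice the hosts are circling is exactly this knob: smaller integers mean less memory and faster inference, at the price of bounded rounding error in every weight.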
>> right? And the the counterargument to to
that point would be, well, Josh, it's
crushing all the benchmarks that we've
evaluated all the other American models
on, right? So surely it's much better.
And my my push back on that would be
like, well, benchmarks don't really materialize in real-life use. So what if it crushes 50% on Humanity's Last Exam? Is it useful for me to use? Does
it understand what I'm trying to say?
Does it understand the context of the
prompts that I'm putting into it?
>> Um the other side of this um you know on
the point of quantization, Josh, is um I
think that a lot of frontier American AI
labs like OpenAI, Google, etc. actually
have enough compute to give you the best
experience, the highest floating-point experience, to put it into context, but they're using the
majority of that compute to train the
next big model that we haven't even seen
yet, right? Um there were there was news
that broke last week that OpenAI is
doing this, right? So technically they
have enough compute to give you like
amazing service all year round, but
they're using 70% of that compute to
train GPT-6. So I think it's just a
matter of prioritization right now until
we reach some kind of parity that these
AI models are are good enough. But I I
will say from all of the things that
we've discussed on this episode so far,
there is one clear winner and that is
the consumer. It's you, me, and everyone
listening to this show, which basically
gets access to frontier level
intelligence for the cost of next to
nothing. Download it completely free and
run it privately at home. Um on this
tweet that I have pulled up here uh it
basically says for every closed model
there is an open- source alternative and
and it goes through a list: like Sonnet 4.5, you've got GLM 4.6; Grok Code Fast, you've got GPT-OSS; um, GPT-5, you've got Kimi K2 Thinking, and it
just goes on and on and on. And if we
look at this kind of like a year and a
half ago, maybe even two years ago, this
list would be non-existent. It would
just be Frontier AI Labs on the closed
source side and zero open source side.
So to see this kind of progress is
really, really encouraging.
>> Mhm. Yeah. It's going to be a race. It's
going to be a battle between open and
closed source. And and perhaps that's
not even the battle. Perhaps it's open
source until they catch up to closed
source and then it's closed source
across the board. So, it's going to be
interesting to see the developments. Um,
we have a new batch of models that are
coming. We're kind of in this weird
limbo where Gemini 3 is hopefully coming
soon. We'll have some new benchmarks and
and one of the things that that was this
harsh truth to kind of wrap my head
around, which is what you just
mentioned, EJ, and the fact that everyone's just compute-constrained. Like, OpenAI could have made GPT-5
probably twice as impressive if they
really wanted to. They just have no
compute to serve that and it would have
been way too expensive and way too slow.
So, it's not that it can't be done. It's just that people
don't have the resources to do it. So,
it's this constant balancing act and
it's going to be fun to see how how
companies kind of slot themselves into
that that curve of like how much they
want to spend on compute versus cost
versus just what they have available to
actually use to train these models and
deploy them at scale to users.
>> And that's it for today folks. Um, super
fun episode. Uh I it is always
surprising to me how quickly open source
catches up with closed source
centralized AI. I always think kind of
like it's going to lag a few years and
now it's come down to the fact that it's
lagging a few weeks. Um we have a
jam-packed week. Uh we have potentially
a new nano banana model being released
by Google tomorrow.
>> Fingers crossed. I'm praying for that.
>> Fingers crossed. I'm also praying for
that as well. And we have a second
episode based on Tesla's investor day,
which had some really jam-packed
exciting news. Um, now listen, if you
want the US to win this AI race, and
make no mistake, it is a race. You need
to subscribe to American AI YouTube
channels, one of which is us. Please
subscribe, hit the notification button,
wherever you're listening to, give us a
rating. We uh are helped by these so
much. It is bringing up so much
awareness. The algorithm is favoring us.
We're getting all these wonderful views
and new incomers. We've got a thousand
of you from last week, which is just
insane. Hello, welcome to the channel.
Uh, we hope you enjoy the content and we
will see you on the next one.
>> Yeah, before I let them off the hook,
I'm I'm checking. I'm doing the stat
update. 83% of the people that watched
last week were not subscribed. If you're
watching this on YouTube, go subscribe, or go on Spotify, my preferred
place of finding this podcast. It's the
best. I'm telling you, I I don't know
how to describe this to people any
better. Spotify is so good. You have the
video, you have the audio, you could
turn it off and lock your phone without
needing a premium membership. Please go
over there. Go leave a comment over
there cuz also the comment section's
kind of popping too. So, yeah. Anyway,
>> thank you for all the support.
>> We do not pick and choose. Wherever you
listen, go for it.
>> There you go. All right, we will see you
guys in the next one. Thank you for
watching as always. Much appreciated.
Um peace.
[Music]
[Applause]