A Singaporean Investor’s $200,000 Bet on His Biggest AI Prediction
By ContraTurtle: A Singaporean Growth Investor
Summary
Topics Covered
- AI Predicts Next Token Probabilistically
- Hallucinations Persist in Probabilistic AI
- Static Models Block AGI Progress
- Applications Layer Captures AI Surplus
- Meta's $200K Ad AI Bet Wins Now
Full Transcript
Hi everyone. I'm a 39-year-old regular Singaporean working a 9-to-6 tech job, living a simple HDB lifestyle with no car. On this channel, I document my stock investing journey towards a financial freedom target of $4.5 million. The portfolio has made $163,000 in net profits so far and is starting to rebound significantly. A big Chinese New Year ang bao finally came for me, with about $30,000 in profits added over just the last two days, because two of my positions have finally started to make big moves after patiently waiting for months. But today I'll share something else: my biggest prediction on AI, and why I am placing a $200,000 bet on it.
I'll reference an article I wrote at contrato.com that I sent to all subscribers; there's a link in the description below. Feel free to subscribe for updates. In this episode, I'll explain the current state of AI and the limitations this technology has, in a way that's simple for Singaporeans to understand. I will also share my biggest AI prediction, in terms of what kinds of companies or use cases actually profit most from AI, and the $200,000 bet I'm making based on this thesis. Now, each chapter of this video is a building block for the next one, so don't skip around; watch from start to end so you fully understand how I finally came up with my $200k thesis.
Most of the AI models we know of today, like GPT or Gemini, are all known as autoregressive transformer models. Now, don't panic. Turtle will explain very simply what this all means, so that the rest of this episode makes sense. When you type a question to ChatGPT, for example, "Hey, what is the national language of Singapore?", the AI model does not actually understand the question the way a human does. Instead, what is happening under the hood is that it treats each of the words in your question as tokens. Then the AI model asks itself a question: given all the word tokens it has seen so far in the prompt you gave it, what is the next most likely word token? It searches through all the world's information and data it was previously trained on to find the next word token with the highest probability of making sense.
And in this case, it decides that the word "it" is most likely to make sense next. So the AI adds this token "it" to the back of the question you asked, to form a new sequence of word tokens, which now becomes "What is the national language of Singapore? It". You get it so far? The AI model then repeats the same process. Considering this new sequence, it again looks through its trained buffet of knowledge to find the next most likely word token, and this time it predicts, with the highest probability, the token "is". The sequence becomes "What is the national language of Singapore? It is". It repeats the process again, predicts the next word as "Malay", adds it to the sequence, and finally arrives at the full output: "What is the national language of Singapore? It is Malay." The first part, the question itself, was the prompt you gave ChatGPT. The last three words, "it is Malay", came from the AI model predicting one token at a time, referencing the trained buffet of knowledge that AI labs like OpenAI are pouring billions into, looping again and again, adding one token and predicting the next each time, because maybe somewhere in the thousands of books or online articles the model was trained on, the words "it is Malay" frequently appear after the sentence "What is the national language of Singapore?". So, to summarize: the complete sentence that GPT gives you is actually the combined result of many tiny steps, where the model adds one token at a time, each token chosen because it has the highest probability of making sense given the sequence so far. Now you finally understand how ChatGPT actually works, and the transformer architecture behind it, which, fun fact, is what the T in GPT stands for.
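The looping process above can be sketched in a few lines of Python. This is a toy illustration, not a real model: the probability table is hand-made and merely stands in for billions of trained parameters, and a real model scores every token in a large vocabulary at each step.

```python
# Toy next-token predictor: a hand-made probability table stands in for
# billions of trained parameters. (Illustrative only.)
NEXT_TOKEN_PROBS = {
    "what is the national language of singapore ?": {"it": 0.90, "the": 0.10},
    "what is the national language of singapore ? it": {"is": 0.95, "was": 0.05},
    "what is the national language of singapore ? it is": {"malay": 0.85, "english": 0.15},
}

def generate(prompt_tokens, max_steps=10):
    tokens = list(prompt_tokens)
    for _ in range(max_steps):
        context = " ".join(tokens)
        probs = NEXT_TOKEN_PROBS.get(context)
        if probs is None:  # no prediction for this context: stop
            break
        # Greedy decoding: append the highest-probability next token,
        # then loop again with the longer sequence, exactly as described.
        tokens.append(max(probs, key=probs.get))
    return tokens

prompt = "what is the national language of singapore ?".split()
print(" ".join(generate(prompt)))
# -> what is the national language of singapore ? it is malay
```

The key point of the sketch is the loop: each pass appends exactly one token, and the new, longer sequence becomes the input for the next prediction.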
Now, you may be disappointed to realize, after I've just told you all these things, what this means when you actually pour your heart out to ChatGPT. I know some people do that; I read about people going to ChatGPT for emotional therapy. The sad fact, now that you understand how GPT actually works, is that the AI is not sentient. It is not like a human that actually understands you. It is just a next-token predictor, returning the word token with the highest probability of making sense given the string of word tokens you prompted it with and its pre-trained data of the world's knowledge. So in simple terms, today's AI revolution is fundamentally about predicting the next token with the highest probability given some input. You just need to remember this. And each step of generating a word token to respond to you is probabilistic. Remember: it is based on probability. Every word token it returns, one after another, every loop, is based on probability.
This explains why AI models sometimes behave inconsistently. For example, if I ask ChatGPT, "Hey, what is Singapore's national dish?", I might get chicken rice. But let's say you spin up another ChatGPT window and ask the same question. You might not get chicken rice; maybe you get nasi lemak instead. The key point is this: the AI model is not retrieving a fixed truth. It is probabilistically sampling from the large pre-trained data it has read in the past, from the internet and from books. So the output it gives you is not guaranteed to be identical every time, because maybe some food blogger, like Seth Lui or ladyironchef or whoever else, wrote "nasi lemak" somewhere in their blog and the AI was trained on that, and maybe someone on Reddit, r/singapore or whatever, wrote "chicken rice" somewhere in their post and the model trained on that too. Sometimes the model may even produce an answer like mala xiang guo as the national dish, which Singaporeans would probably consider incorrect.
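That sampling behaviour can be sketched as follows. The dish probabilities are invented for illustration; the point is only that two separate "chat windows" (independent random states) can draw different answers from the same learned distribution.

```python
import random

# Toy illustration of probabilistic sampling: the model is not retrieving
# a fixed truth, it samples from a distribution learned from training data.
# (These dish probabilities are made up for illustration.)
dish_probs = {"chicken rice": 0.55, "nasi lemak": 0.30, "mala xiang guo": 0.15}

def ask_national_dish(rng):
    dishes = list(dish_probs)
    weights = [dish_probs[d] for d in dishes]
    # Weighted random draw, like sampling one answer token.
    return rng.choices(dishes, weights=weights, k=1)[0]

# Two separate "chat windows" (independent random states) can disagree.
print(ask_national_dish(random.Random(1)))
print(ask_national_dish(random.Random(7)))
```

Run it a few times with different seeds and you will see the answer change, even though the underlying "model" (the probability table) never changed.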
Now, when the model so confidently produces incorrect or fabricated information, we call this hallucination. Hallucination is basically a characteristic of transformer models like GPT, and it cannot be fully removed; it can only be reduced. Why? Because, now that you understand how GPT works in this looping, predicting fashion, you will know that the whole AI model has a probabilistic nature. Every time it predicts the next token, it is based on probability. It's like spinning a roulette wheel to pick the next word token each time. So there's a chance it may hallucinate, because one of the picks could give you a wrong answer; it could have been trained on some funky data in the past. And that is why, three years since the release of ChatGPT (GPT-3.5) in November 2022, even though models are getting bigger, with billions or even trillions of parameters, and more and more GPU compute is being fed to them, apparently to make them smarter, hallucination still exists today in 2026. How many of you can tell me that you've used ChatGPT, Gemini, or Anthropic's Claude and never faced any form of hallucination? I'll call bluff on that. Confirm got, right? So even though the models are getting bigger, hallucination still exists.
Now you understand why: because of the probabilistic nature of the outputs, the model sometimes gives you a wrong response, like mala xiang guo, or maybe in this case the prosperity burger, as Singapore's national dish. Some people may argue that is a national dish.
But anyway, the main limitation of today's AI models is that they are static. Once the AI model is trained on whatever past knowledge it has read, all the books and the internet, its core intelligence and core knowledge are frozen. They do not continuously learn from new mistakes or update themselves in real time the way a human does. Of course, there are temporary workarounds we're hearing about in the market now. For example, some of these models store new information in conversation memory, but that does not permanently improve the model, and it is not true continual learning. And even allowing the AI models to use tools cannot fully solve hallucination, even though some people think, "Oh, just let the AI use tools like calculators or search engines," like what we are seeing now in all the agentic AI workflows, things like OpenClaw that are starting to pop up everywhere. People have this wrong idea. Why can you still only reduce hallucination, not remove it? Because, think about it: if the AI models feed these tools rubbish at the very beginning, with hallucination, the tools will still give rubbish out. It will still be rubbish in, rubbish out. The problem here is not the tools. It is, at a very fundamental level, that AI models still give you probabilistic outputs which may at times hallucinate, and sometimes this hallucination gets fed into the tools, like calculators.
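A tiny sketch of that rubbish-in-rubbish-out point: the calculator below is deterministic and bug-free, but it cannot rescue a hallucinated input. All figures here are invented for illustration.

```python
# "Rubbish in, rubbish out": the calculator tool is deterministic and
# bug-free, but it cannot rescue a hallucinated input.
def calculator(revenue, margin):
    return revenue * margin  # the tool itself computes correctly

true_revenue = 200e9
hallucinated_revenue = 250e9   # the model "remembered" the wrong figure

# A perfectly correct tool, fed a wrong premise, returns a wrong answer.
profit = calculator(hallucinated_revenue, 0.30)
print(f"${profit / 1e9:.0f}B")
# -> $75B, precisely computed from a wrong starting number
```

The error here is not in the tool; it entered the pipeline one step earlier, which is why tool use alone cannot eliminate hallucination.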
So the rubbish will still come out. There will always be this risk; it cannot go away. So my view is that AGI, artificial general intelligence, cannot be achieved by relying only on today's next-token-prediction transformer architecture. That's a mouthful, but you should understand how everything works by now. Now, this view is not just suka-suka by me alone. For example, Jerry Tworek, a former OpenAI architect, the very same guy behind the o1 and o3 reasoning models and Codex, stated in a YouTube video: "I don't think a static model can ever be AGI. Continual learning is necessary." Andrej Karpathy, a founding member of OpenAI and former head of Tesla AI, similarly mentioned in a recent video that transformer models don't have continual learning: you can't just tell them something and have them remember it; they are cognitively lacking; it's just not working; and he thinks it will take about a decade to work through all of these issues. So both of them, and I must emphasize these are ex-OpenAI people, arrive at the same conclusion: static transformer models like GPT or Gemini are not sufficient, and continual learning is essential. Even Demis Hassabis himself, the CEO of DeepMind, the same guy behind Google's Gemini models, also revealed in a recent interview that today's transformer-based AI models have limitations and that we are at least 5 to 10 years away from AGI.
So, to summarize my current view of today's AI models: firstly, they are fundamentally based on probabilistic next-token prediction, not true reasoning. Hallucinations will persist even as more compute is fed to the models. AGI cannot be achieved using transformer-based models alone, and continual learning is required for breakthroughs. And finally, we are still at least 5 to 10 years out before we even reach AGI. All of these constraints define the ceiling of what today's AI models can realistically achieve, and they are central to what I'm about to share in the next few sections: why and how I am positioning my AI investment bets in a very specific direction.
Hey, by the way, guys, if you like the content so far, remember to give the channel a bit of turtle power and click subscribe so you can be informed when I push out new content. I've also started a new Telegram channel. It will be a one-stop place where you can be notified of new episodes, investment articles that I write, random investing memos or tweets that I share, as well as other updates. You can join with the link in the description below, or you can just search Contra Turtle in Telegram. I think there are about 300 people inside now; it's growing very quickly. There were only about six people two weeks ago, and then somehow it blew up to 300. Also, make sure you join the right channel. Nowadays there are scammers impersonating me. They even copy my profile picture and use the same name, and then they start shilling you cryptocoins or asking you to sign up for trading platforms, which, as you guys know, I don't do. I don't trade; I only invest long-term. These are all scams, so make sure you guys join the right channel. Anyway, back to the episode.
Now, let's break down the AI stack into six levels of companies we can invest in. At level zero, we have the energy companies, like GE Vernova or Constellation Energy. This is level zero because, if you think about it from first principles, without energy, everything else above in the stack cannot function. Next, at level one, we have everybody's favourite, the chips: TSMC, Nvidia, AMD, ASML, Broadcom, etc. At level two, we have the infra and data centers: Equinix, Arista Networks, Vertiv, and of course the cloud hyperscalers Amazon, Google, and Microsoft. One more level above, at level three, are the very famous AI labs: OpenAI, Gemini, Mistral, Anthropic, etc. Level four are the AI software infra companies: again the three cloud hyperscalers, plus Palantir, Snowflake, Databricks, etc. And finally, at the very top of the entire pyramid, is level five: the AI applications.
Now, I will be focusing on level five in this article. Why? Because I personally believe this is where we can confirm whether AI actually has ROI and is sustainable long-term. The key here is long-term, and the key thing we want to see is inference at this application layer; we want to see use cases. Think about it: you can have the most advanced GPUs (Rubin or whatever comes out next), the cheapest energy, the largest data centers, the most powerful foundation models trained on trillions of parameters. None of this matters if the end-user applications at layer five do not generate ROI, or enough use cases to justify the AI capex deployed lower down the stack. Layer five determines whether the entire AI stack can earn an adequate return on capital over time. And over the long term, throughout history, the bulk of economic surplus usually accrues to the layer closest to the customer, which again is level five. This layer is still very early days compared to the other layers, and many of the companies in it are in fact getting whacked by the market today; there are a lot of software companies where AI has to be deployed and applied, but they are all kind of weak right now. So I think there is something wrong with the narrative in the market, but we'll see how that goes over the next few months, and hopefully within a year. One more advantage I have is that the company I work for sits in layer five, so I have very strong visibility at work on how AI is actually deployed and used at the application layer.
So if today's AI is constrained, and it cannot overcome hallucination or reach AGI for another 5 to 10 years, then my conservative investment question becomes: assume we are stuck with today's limited AI. Let's assume, to be conservative, that AI doesn't improve by leaps and bounds from here on, like what we saw in the early years. From GPT-3.5 to GPT-4 there was a huge jump, but recently, from GPT-5 to its point releases, the improvements have been very small, and hallucinations are still there. So if we assume, ceteris paribus, that AI hits a ceiling, cannot overcome hallucination, and we don't reach AGI in 5 to 10 years, the conservative question becomes: what are the use cases at the application layer that already create the most economic value or ROI now? The best use cases, in my opinion, have at least one of two qualities, and even better if they have both. The first quality is that the cost of failure from hallucination is low relative to the reward you get out of the AI. Which means you're okay even if the AI hallucinates once in a while; it's not super mission-critical. Not like traffic lights: imagine if the traffic lights on the roads in Singapore were fitted with this transformer AI, and sometimes, when they're supposed to show red, they show green instead. We cannot have that. That's mission-critical, so those use cases are a no-go for this kind of transformer AI model. But in certain use cases, where you don't need traffic-light certainty that red is always red and green is always green, and where the reward far outweighs the occasional small hallucination, that would be great. I'll talk about an example very shortly; it ties in with my thesis. The second quality is that there is a way to verify the output of the AI at scale, to mitigate some of these hallucination risks.
From my internal observations within my own company, and conversations I have with peers at other tech companies, the most common enterprise use cases we see today are the following. Coding is definitely number one, especially in tech companies, or even MNCs that have their own internal IT departments. Coding is by far the top use case. Why? Because it possesses both of the qualities I just mentioned. Even I myself feel like there's no way I'm returning to the old days without AI for coding at work. The next use case is creating marketing assets like images, videos, and copywriting. That's followed by internal company search engines: when you want to find certain company documents, you type to an internal chatbot in natural language, and it finds you the document or the information you need very quickly. This is a very popular use case.
The next one is drafting reports or having strategic discussions with AI, like all those consultancies when they talk about very strategic stuff. It's very gray; there's no red-light-green-light kind of distinction, no clear right or wrong. It's just a general strategic discussion. This kind of use case is very common and very useful, because the AI can act like a second brain to spar with you on strategic discussions and brainstorming. And then finally, you have the very common one: summarizing documents and meetings. You ask the bot to summarize a meeting you just had, or some documentation, and it gives you one paragraph outlining all the important points. Now, most of the use cases I just mentioned give indirect earnings ROI to a company. Why? Because they do so by increasing productivity, keeping headcount flat, and helping companies expand operating margins over time.
However, I am starting to see a ceiling in productivity use cases apart from coding, at least in my company, and I think this applies to most other tech companies as well. And mind you, I work at a company that I can easily say is at the forefront of a lot of this AI stuff. I won't reveal which company it is, but it is an international company, and I'm constantly working with software and AI. One other thing I want to say is that it is still unclear how much of the recent tech layoffs are AI-driven productivity gains versus pandemic-overhiring normalization, meaning people overhired during the pandemic and are now using AI as an excuse to fire people. I think some of it is and some of it isn't; there really are productivity gains, but there are also large companies like Klarna that had to backtrack on claims that AI could replace all their humans, after they fired most of their customer support last year. So I think it's about half and half. Now, earlier on I talked about how a lot of these AI use cases provide indirect earnings ROI through productivity gains. But you know what? There's actually one use case where AI directly lifts a company's profits, and the ROI is measurable and immediate: advertising, specifically digital advertising.
Now let me explain why. Digital ads share two characteristics very similar to coding, and as I mentioned, coding has shown the most promise in the enterprise so far. The first is that ads have a very low cost of failure from hallucination. Even if the AI sometimes returns rubbish ad images or videos after you prompt it, it's still okay; it's not mission-critical, and the overall ROI is still very high. For example, in the past, if a company wanted to create images or videos for an ad campaign, it would need to engage a graphic or video artist who charges way more than prompting an AI model. Plus, I don't know if you guys have worked with graphic designers, video designers, or agencies, but there's a lot of back and forth; people don't agree on the ideas. And every iteration of the picture or the video costs money: the agency or the artist adds on money and slaps you in the face with iteration costs every time. But now, if I am the company, all I need to do is prompt an AI to create, say, 20 images at a low cost. I don't need to go back and forth with a human graphic or video designer, and this saves me a lot of money and is way faster.
In fact, you know what? My entire YouTube channel is possible because of AI, because I'm doing this every day, generating images and videos and all this stuff with AI. Imagine if I had to do all these Contra Turtle images with a human graphic designer. Oh my god, guys, this channel would not exist. It's impossible, because of the cost and the speed. Think about it: when you prompt the AI, let's say you get 20 images back. Maybe five of those images have hallucinations, just rubbish, which is what I experience every day. When I use ChatGPT, I generate 20 images, and maybe five of them are hallucinated, wrong, rubbish. And that's okay. I reject those five and use the remaining 15 images or videos. So the cost of hallucination or failure is not a big deal. I can just curate the images that I want and re-prompt the ones that fail. I don't need to be 100% correct 100% of the time, like a lot of enterprise use cases require.
And at this point, I know what some of you are thinking: "Walao eh, AI slop. How can that be good?" But have you guys seen Seedance 2.0? Have you seen how much generative AI images and videos have improved over the last two to three years? There's still some hallucination, mind you, but the hallucination and cost of failure are so low that the ROI you're getting is much higher than the small bits of hallucination you get. And with AI, when you want to create ads, it's still way more efficient and cheaper than going back and forth with a human designer. And ads have a built-in verification mechanism at scale, just like coding. For coding, the AI can write tons of code for you, and if it fails, there's something called automated tests that can catch code hallucinations at scale and ensure only the best code goes through.
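That verification-at-scale idea for code can be sketched like this: run every candidate implementation through automated tests and keep only the ones that pass. The two candidate functions below are invented stand-ins for model outputs; one of them contains a plausible-looking "hallucinated" bug.

```python
# Sketch of "verification at scale" for AI-generated code: automated
# tests accept or reject each candidate. Candidates are invented
# stand-ins for model outputs.

def candidate_a(prices):  # correct: bill total with 9% GST
    return round(sum(prices) * 1.09, 2)

def candidate_b(prices):  # hallucinated: applies GST twice per item
    return round(sum(p * 1.09 * 1.09 for p in prices), 2)

def passes_tests(fn):
    # The tests play the traffic-light role: a candidate either passes
    # every assertion or is rejected outright.
    try:
        assert fn([100.0]) == 109.0
        assert fn([10.0, 20.0]) == 32.7
        return True
    except AssertionError:
        return False

accepted = [fn.__name__ for fn in (candidate_a, candidate_b) if passes_tests(fn)]
print(accepted)  # -> ['candidate_a']
```

The hallucinated candidate never reaches production; it is filtered out mechanically, with no human reading the code.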
And the same goes for advertising; there's a similar concept. Advertisers can basically run A/B tests on, say, six different AI-generated ads, and if any one of those ads fails to give a return on investment, which you can measure automatically by, for example, the conversion rate on the ads, the advertiser can automatically filter out the low-performing ad and stop running it, and only keep running the ads that are still converting. So this is a form of verifiable output at scale, very similar to coding. And of course, sometimes it's quite important to have a human in the loop, because maybe the branding of an image is off even though it's getting a lot of engagement on Instagram, and that's still not something you want. But still, the ROI you get out of this is way higher in spite of hallucination.
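The A/B-test filter just described can be sketched as follows. The ad names, impression counts, and the 1% threshold are all made up for illustration; the mechanism is the point.

```python
# Sketch of the A/B-test filter: keep only AI-generated ads whose
# measured conversion rate clears a threshold. Numbers are invented.
ads = {
    "ad_1": {"impressions": 10_000, "conversions": 320},
    "ad_2": {"impressions": 10_000, "conversions": 45},   # low performer
    "ad_3": {"impressions": 10_000, "conversions": 210},
}

MIN_CONVERSION_RATE = 0.01  # hypothetical 1% cutoff

def still_running(ads, threshold=MIN_CONVERSION_RATE):
    # Automatically stop any ad whose conversion rate is below threshold.
    return sorted(
        name for name, stats in ads.items()
        if stats["conversions"] / stats["impressions"] >= threshold
    )

print(still_running(ads))  # -> ['ad_1', 'ad_3']
```

A hallucinated or off-target ad simply converts poorly and gets switched off; the verification happens through measurement, not through anyone inspecting the creative.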
And so advertising is a near-perfect commercial application for probabilistic transformer AI models, given the ROI you get in spite of hallucination. And there is one company that benefits disproportionately from this sort of AI architecture, and it is none other than Meta. Inside Meta's advertising tools, there is something called Advantage+. You can think of it as something like Nano Banana, the Gemini model that creates images when you prompt it, but integrated within the Meta ads system. So advertisers can use Advantage+ in Meta to create lots of images or videos for Instagram or Facebook ads. In fact, in the recent Q4 2025 earnings, Zuckerberg mentioned that the revenue run rate of AI video generation tools has now hit $10 billion in Q4, with quarter-over-quarter growth outpacing the increase in overall ads revenue by nearly 3x. In other words, AI-generated video ads are making tons of money for Meta.
Now, another reason why AI is so beneficial to Meta today is that Meta adopted the transformer AI model to more accurately predict the next most relevant ad to show you at the right time. Think about it: it's the same concept. What is the next best token? Why did the model show you nasi lemak? Now Meta is, in a sense, tweaking the model to ask: what is the next best ad to show you while you're scrolling up and down Instagram? You get it? Meta calls this the GEM model. This all sounds a bit complicated, but think about it in a general sense; it's exactly what I just told you. Today's AI models are about predicting the next token, the next nasi lemak, the next chicken rice. In this sense, you can imagine the same kind of AI architecture being used to predict the next best ad given some inputs from your profile data: say, whether you're male or female, from Singapore or from Malaysia. Think of the prompt as your profile data; the output is the next best, most relevant ad to show you while you're scrolling through Instagram. So it is using the next-token-prediction architecture of AI models to show you the highest-ROI ads. Why does that matter? Because when you click on those ads, Meta gets money from all these advertisers. And there's numerical proof here: when Meta released the GEM model early last year, it immediately increased Meta's ad conversion rates by 5%, guys.
And you think 5% is not a lot? Think about it. Meta is roughly a $200 billion annual revenue run rate company, and the majority of that $200 billion per year is display ads; only a very small portion is Reality Labs and the rest. So imagine a 5% lift on the majority of a $200 billion revenue run rate. That's insane: billions of dollars of extra profit per year, just from introducing this AI model as a recommendation system based on today's transformer architecture. And again, I emphasize: everything I just described about advertising and Meta's AI ad-prediction model is returning earnings ROI based on today's AI models. There is no requirement for AGI to arrive at all. We don't even need the models to improve any further; ceteris paribus, with what the AI models are capable of today, the GEM model and how Meta is applying it are already returning ROI. So that's already very conservative. These are proven numbers.
So in the worst-case scenario, even if AI models continue to hallucinate, and AGI is still 5 to 10 years away or maybe doesn't come at all, advertising, and Meta specifically, is already one of the cleanest demonstrations of direct AI ROI based on today's models, and that is the reason why I deployed over $200,000 into this single position. It is currently the largest position in my portfolio. One other thing, in line with my AI thesis and a somewhat worrying observation I made about another company in its Q4 earnings: I just made the biggest reallocation of capital in my portfolio, literally last night, with a total transaction value of about 62,000 USD. I sold a very large portion of this other company for the first time in years, in order to reallocate the proceeds into my AI bets and a bunch of other stuff, mostly in line with my AI thesis. Anyway, feel free to check out my newsletter for more details. Thanks for watching, everyone. I hope today's session was not too complicated; I really tried to simplify it as much as possible, so that even the lay Singaporean can understand how today's AI transformer models, like ChatGPT and Gemini, actually work. Thanks for watching, everyone, and may the long-term turtle compounding spirits be with you.