
How to Learn Anything With AI (And Where ChatGPT Falls Short)

By Every

Summary

Key takeaways

  • **LLMs are general, not learning tools**: LLMs are built to be a general tool, the most universally general tool, but because they were not built as learning tools at their core, they're missing a lot. A real learning platform has to be built as a learning platform. [02:11], [04:15]
  • **Most learning is passive**: Most learning is passive, not active. It doesn't require lean-in participation. Most of the learning you've done in your life has been through passive consumption of content, with active engagement every once in a while. [02:46], [04:30]
  • **Learning needs multimodality**: Most of the learning you've done in your life has been multimodal. It has not been sitting and consuming and engaging with content in one particular way; it's been done by piecing together different formats online. [03:01], [16:47]
  • **High-intent objectives drive learning**: The best learning happens through high intent, high agency, and significant objective-driven motivation. Most of the use cases people are already using Oboe for are objective-oriented. People come in and say, "I know what I want to accomplish. I want to be good at X." [05:19], [25:44]
  • **LLMs lose context in long-form learning**: Anyone who has spent more than a few minutes in an LLM conversation trying to do anything longer than a quick lookup has probably learned that LLMs are not great at that. They lose context very quickly and don't stay focused. [09:23], [09:35]
  • **Oboe makes hard topics achievable**: The grand vision is that you can learn anything with Oboe. Anything you ever thought was too hard to learn, it can at least get you started so you feel it is achievable. That is very hard to do if the product immediately violates your trust by taking forever. [00:00], [15:14]

Topics Covered

  • Most Learning Passive Not Active
  • LLMs Fail Long-Form Learning
  • AI Platforms Need Purpose-Built Design
  • Objective-Driven Beats Topic Browsing
  • Embedding Spaces Reveal Hidden Patterns

Full Transcript

The grand vision is you can learn anything with Oboe. Anything that you ever thought was too hard to learn, we could at least get you started and you can feel like it is achievable.

>> Why does it need to be a separate app?

Like, why isn't ChatGPT just the ideal way to learn new things?

>> A real learning platform has to be built as a learning platform. LLMs are built to be a general tool, but given that they were not built as learning tools at their core, they're missing a lot.

Nir, welcome to the show.

>> Thanks for having me.

>> So, for people who don't know, you are the co-founder and CEO of Oboe. I don't know what your one-liner is, but from my perspective, it's an AI learning app that makes one-off courses for you on demand, basically. It's pretty cool. You can tell me where I'm wrong there, but it's a really good app, and I think it fits with a lot of the stuff you've been thinking about. Prior to doing Oboe, you were most well known for being an Every writer sometimes, and you were the VP and Global Head of Audiobooks at Spotify, and before that you were the co-founder of Anchor. Excited to have you on the show.

>> Thanks for having me. And yeah, I was thinking back on the writing that I had done for Every. I actually do think it's very on brand for Oboe, because a lot of what I had written was about trying to make people realize that they can learn and understand things that they probably were intimidated by and thought were too hard to learn.

>> I love that you're doing this, because I think AI is just so good for learning. It has expanded my mind in so many different ways. And I want to ask the tough question up front, which is: why does it need to be a separate app? Why isn't ChatGPT just the ideal way to learn new things?

>> LLMs are incredible. I use them now, honestly, in every aspect of my life, as I know you do too, and I think that's becoming more and more the case for everybody who realizes how powerful they are. But LLMs are built to be a general tool, the most universally general tool that you could possibly have. I've spent a long time thinking about learning, both from the perspective of an entrepreneur who wants to build products in the space and as a person who uses these products day in and day out to actually teach myself things. I teach myself things a lot, and I have for years. And what I fundamentally believe is that while chat is extremely powerful for learning, it is not the primary way that people learn. Most learning is passive, not active. It doesn't require lean-in participation. Most of the learning that you've done in your life has been through the passive consumption of content, with active engagement every once in a while.

Most of the learning that you've done in your life has been multimodal, right? It has not been sitting and consuming and engaging with content in one particular way. It's been done through piecing together different formats online. Every single day, I'm willing to bet that you do this, and everybody listening does this. You get curious about something. You read about something in the news. You identify something that you don't know too much about, and you go down rabbit holes on the internet to try and learn those things. And you don't just default to ChatGPT or other LLMs, despite the fact that they're really powerful. You may use them as one tool in the arsenal of tools that you use to learn, but they are just one modality. People learn through multimodality, right? So you'll start with ChatGPT, and then you'll Google some stuff, and you'll end up on Wikipedia, and you'll go to YouTube. And you're not alone in doing that, because billions of people do that every single day. There are billions of people who use the internet to learn every day and piece together a learning experience using these different formats and platforms. And so, back to your question: I think LLMs are a piece of the puzzle. They enable an important piece of the puzzle, but given that they were not built as learning tools at their core, they're missing a lot. I think that a real learning platform has to be built as a learning platform, just as I believe that a real fill-in-the-blank platform, which many of your guests are probably building, has to be built by people who are focused on that particular use case.

>> That's really interesting. I want to go back to something you said which sort of shocked me, which is that most learning is passive, not active, or basically that good learning is passive, not active. Did I get you right? Because I would have assumed the opposite. My mental model of how learning works, at least for me, is very much learning in context. I need to do a specific thing, so I want to understand how to do that thing and how it might fit into all the other things that I know and want to do, which feels very active to me. I do spend a lot of time reading, which I guess is passive, and I am learning stuff. But if I had to bet on what you would say, I would have said learning happens as an active thing.

>> Well, I think what we're talking about are probably two different dimensions, right? What you're talking about is intentionality and objective: where does the objective come into learning? I do believe that the best learning happens through high intent, high agency, and significant objective-driven motivation. That's actually a thing we talk about a lot as far as the product goes. Most of the use cases that people are already using Oboe for are objective-oriented. People come in and say, "I know what I want to accomplish. I want to be good at X. Or I want to be able to take out a mortgage to buy a house, and I have no idea what that is. Or I want to gain this particular skill so that I can upskill at work, or I want to pass this particular test." All of those fall under this umbrella of high-intent, objective-driven motivation, which I think is separate from the dimension of how you actually go about learning the thing. So I do think that high intent is very important. It is what motivates people the most, and that's why it's important to build a product that taps into whatever their objective is, tries to embrace a path that gets them to their objective as quickly as possible, and lets them see that it is feasible for them to hit that goal. But then the question of how you actually present the material to them goes to the heart of a different dimension, which is what I think of as passive versus active, or single-modal versus multimodal. If you think back to school and the best teachers that you had, most of the time that you were learning, you were learning from them teaching to you with an opinion about how to teach you. You were sitting and consuming them talking, or consuming the reading, or whatever it was. You were not actively participating in the conversation. That's not to say that active participation is not very important. It's one of the modalities that I think reinforces a lot of the learning that people are able to achieve, but it's not the primary mechanism, right? And what LLMs do is basically put the onus on the user, on you as a learner, to be very explicit about what you want to achieve and how you want to achieve it, and to give constant feedback. That's not how people learn. When you were in school, you didn't give constant feedback to the teacher to get them to adjust their curriculum. The teacher was not asking you questions about how best to structure the course and where to go next. That's what I mean by passive versus active.

>> That makes sense. That makes total sense. Here's maybe one way I'm slotting Oboe into my model of the world, and I'm curious what you would say. LLMs as they are are incredible learning machines, and people do, in this very low-intent way, pick up new things from them all the time, every single day, or at least I do. And sometimes there are categories of your life where you think, I actually just want to really learn this thing, and I want to get the equivalent of a degree from it, or a diploma or certificate or whatever. In those cases, you actually need a real teacher that is thinking about this as a course, not just, I'm doing a one-off reading of this book, and I get curious about this particular character, and I go down the rabbit hole with ChatGPT on it. Is that sort of what you're thinking about?

>> Yeah. I don't think Oboe is a quick-answer platform. There are many, many ways to get a quick answer to things, right? The internet. There are many ways to think about LLMs, and I know you've written about this and explored a lot of these different perspectives. But at its core, what an LLM is is an information compression machine that takes the massive breadth of the internet and gives you the information in a much more personalized, specific, fast, compressed way. That's what it does. It takes all human knowledge in the form of content on the internet and compresses it down to what can be distilled into a back-and-forth conversation. And so there's an assumption there that the way people should use LLMs is for quick, succinct information, right? It's not intended to be long-form. And yet learning, true learning, whatever that objective people really want to achieve might be, requires a commitment, and it requires you to follow through and take multiple steps. Anybody who has spent more than a few minutes in an LLM conversation trying to do anything longer than the quick information thing has probably learned that LLMs are not great at that, right? They lose context very quickly. They don't stay focused. It's really easy for them to deviate from the path that you originally set out. And that, I think, is problematic. That's actually a thing we talk about at Oboe a lot: how do you allow users to engage with our platform, deviate, ask questions, and make it personalized, and yet continue to provide the scaffolding that will always bring them back to the core objective they set out for themselves? That, I think, is something that LLMs do not do very well.

>> I want to show everyone the product. I think it's really cool, and I think that will give us a lot more concrete stuff to talk about.

>> Sounds good.

>> And can I pick the course topic?

>> Yeah, of course. Live demo. This has never gone wrong.

>> We're doing it live, folks. Okay.

Okay. The course that I want to take is a course in Wittgenstein's Philosophical Investigations. And I'm sure that it can do this. But the thing that I'm not sure about, and you tell me, is my ideal version of this course. It uses the full text, which is available for free online, and the book is written in aphorisms, basically, or short subsections, and each unit of the course takes one of the subsections, explains it, talks about what you need to know to understand it, and then moves on to the next one. And maybe that's a horrible way to learn this book. But that's my vision.

>> Interesting. Two things. First of all, you're going to have to help me spell it. But with that specific use case, it'll be interesting to see what it produces, and maybe it's not the best example, because one of the things that we built into the engine that powers this is that we want things to feel lightweight and achievable from a hitting-a-milestone standpoint. So one of the things that will probably happen, since it sounds like you envision a pretty substantial scope for this course, is that it will by default try to reduce that to the key points, because it wants to make you feel like you're making progress along the way and hitting important milestones, rather than giving you something that's an unachievable, thousand-chapter-long course.

>> I see.

>> I'm also super happy to have it be a section of it, like the first 30 subsections or something like that. Would that be better?

>> Let's see what it does. I don't know. Let's see. All right, how do you spell it? So, Wittgenstein...

>> Wittgenstein. W-I-T-T-G-E-N-S-T-E-I-N. Wittgenstein.

>> Okay.

>> Wittgenstein's Philosophical Investigations.

>> Philosophical Investigations. I'm going to say, you know, pull out the first part and give me...

>> I would say "part" is not going to be clear enough. I would say "first 30 subsections."

>> Okay.

Uh and give me the context I'll need to understand them.

>> Understand each one. Yeah.

>> Okay.

>> Yeah. Great. Understand them.

>> Interesting. Perfect. Yeah, while this generates: one of the interesting things, a tough balancing act that we've had to strike, is that because the grand vision is you can learn anything with Oboe, we're building a platform that is able to focus on Wittgenstein while also teaching people all the other things that they want to learn. It's a really interesting balancing act to figure out how you empower a platform to have autonomy and the breadth to cover everything while also being opinionated about what it does. Because, like I said, a true learning platform has to be built with the learning use case in mind, and so we have to have opinions about pedagogical methods, how we should be presenting information, what the balance of passive versus active engagement is, things like that. All right, so here's our course. So Oboe creates a...

>> Let me just stop you right there. This is sick.

It's super cool. For people who are listening: it's now a page with a bunch of different sections. There's a headline that says "Wittgenstein's Philosophical Investigations, First 30 Subsections." There's an introduction. It has an explanation. It has a podcast. And then it has a bunch of subsections.

And one of the things I think is cool about how you did this: it's obviously difficult to make a course all in one shot, in a very short time frame, and it looks like you're generating a lot of different things in parallel. You show me the first thing I want to read so I can get started immediately, while you're filling in the rest of it, which I think is really smart, rather than making me wait. Is that how you're thinking about it?

>> Yeah. Look, speed is a critical piece of any AI product. We did not want to be one of those products, and there are other products on the market that do this, that require you to submit something and then wait around for a while in order to get started.

>> Yeah.

>> A big part of our value proposition here, and our positioning for the product, is that you can learn anything. Anything that you ever thought was too hard to learn, we can at least get you started on, and you can feel like it is achievable. It's very hard to do that if you give us something and then we immediately violate your trust by taking forever to give you something back. We're there to be the guide, right? And so we want to take you on that first step as quickly as possible.
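The pattern described here, kicking off all the course sections at once and surfacing the first one as soon as it's ready, can be sketched with async tasks. This is a hypothetical illustration, not Oboe's actual pipeline; `generate_section` and the section names are stand-ins:

```python
import asyncio

async def generate_section(title: str) -> str:
    # Stand-in for a slow LLM call that writes one course section.
    await asyncio.sleep(0.01)
    return f"[content for {title}]"

async def build_course(sections: list[str]) -> list[str]:
    # Start every section at once instead of generating serially.
    tasks = [asyncio.create_task(generate_section(t)) for t in sections]
    # Await the first section eagerly so the learner can start reading
    # immediately; the rest keep filling in concurrently behind it.
    first = await tasks[0]
    print("ready to read:", first)
    rest = await asyncio.gather(*tasks[1:])
    return [first, *rest]

course = asyncio.run(build_course(["Intro", "Language games", "Meaning as use"]))
```

The key design point is that the learner-facing latency is the time to the first section, not the time to the whole course.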

>> I have a question. It says there are two sources. Did it go and pull the full text?

>> In this case, it looks like it may have, from these sources. It's also possible that it will do that if it feels it needs to go get an external source. Each of the chapters that we have here will have different sources, so it's possible that it pulled them into particular ones but not into this specific one.

>> Okay.

>> You know, this is the beauty of working with AI: you don't always know what's happening under the hood. I mean, we can obviously dig into it behind the scenes. But when I'm looking at the product, a lot of times, even as the person who built this, I think, wow, it's pretty incredible how much agency it has. You give it the right guardrails and the right amount of autonomy, and it's able to use that agency to do things you had never expected. This is a live demo. I've never looked at this example before. I've never seen the content that we're about to see. So it's always very cool seeing what comes out.

>> I love it. So take us through it, and talk about it for both the people who are going to be watching and the people who are going to be listening. Take us through what's there.

>> For sure. Yeah. So, there's a bunch of different things going on here. Multimodality is a big piece of what we believe in. What that means is you can't just be getting a bunch of text the way an LLM would give it to you. You have to be given a variety of information in a variety of different ways, in the way that is most suitable at any given time. So a big part of the pipeline that powers this is figuring out what is the right thing to show you, and what is the right format to show it in, at any given time. Now, the one exception to that is our podcast format, because podcast listening is a very different type of experience. The times during your day when you would want to listen to a podcast are very different from the times that you would want to engage with something that looks like this. That's why the podcast sits separately. This is a generated podcast. It's a conversation between two people talking about the topic. And one of the things that we've recently added: you can think about this particular chapter as the first episode in a podcast, and all the chapters that you see here on the side are additional episodes of this long-form podcast. So the course that you created actually also created a podcast with multiple episodes. In this case, it's six episodes, because there are six chapters here, each of which builds on the others and references things that came before, but you always have the same two hosts. If I were to play this... I don't know if this will work in our recording, but you tell me if you can hear it.
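The serialized podcast described here, where later episodes reference earlier ones while keeping the same two hosts, is essentially a matter of threading prior context into each generation call. A hypothetical sketch, assuming a `write_episode` stand-in rather than Oboe's real generation step:

```python
def write_episode(chapter: str, recap: list[str]) -> str:
    # Stand-in for an LLM call: script an episode whose hosts can
    # reference everything covered in earlier episodes.
    prior = "; ".join(recap) if recap else "nothing yet"
    return f"Episode on {chapter} (hosts recall: {prior})"

def build_podcast(chapters: list[str]) -> list[str]:
    episodes, recap = [], []
    for chapter in chapters:
        episodes.append(write_episode(chapter, recap))
        recap.append(chapter)  # later episodes can build on this one
    return episodes

show = build_podcast(["Language games", "Meaning as use", "Rule following"])
```

Unlike the section generation, episodes like these have to be produced in order, since each one consumes the recap of the ones before it.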

>> Have you ever tried to explain a really specific feeling, or maybe a dream you had, and the words just fail? Like the more you talk, the further away you get from the actual thing.

>> All the time. It's that frustrating moment where you feel like language itself is the barrier. And it's funny you bring that up, because that exact problem obsessed a philosopher named Ludwig Wittgenstein for his entire life.

>> She pronounces it "Vikenstein." Interesting.

>> Yeah, I know.

>> Uh, now I'm unsure.

>> You probably know more than the synthetic voice that we're using there. The experience of consuming this is supposed to be rich. As I said, it's supposed to make use of a variety of different formats. Now, depending on what type of content you're creating, you're going to see a different mix of things. We should look at a STEM example, because it'll look completely different from this. If I'm looking at something that's very philosophy-based, it'll be heavily about the language, and a lot less about the visuals, despite the fact that we have a nice photo of Ludwig here.

You get taken through an experience that, as I mentioned, is meant to feel very piecemeal and achievable, because I think one of the biggest blockers that prevents people from learning is feeling like it's overwhelming to get to the end state they want to reach. So, as a point of comparison, I'll show you a different example. This is a recent course that somebody generated on the platform, "Quantum Tunneling Explained." Very different, obviously. In this case you're getting a course about quantum mechanics, and you'll see examples of visuals, content that's pulled in, and content that's generated, all of which look very different because they're built for the specific use case. As we scale the team and the product, we want to add support for more and more formats, and not only formats but what we call embedded formats, which means formats that show up inline at the appropriate moment for you to learn the things you want to learn.

>> What would happen if I uploaded a full book?

>> We do have size limits on the uploads. However, I guess it depends how big your file is. If you were to take Wittgenstein's book, for instance, you could upload it. You can drag in any attachment here, and we see people doing that. We see people using it for work, for that purpose: help me understand this article or this document or whatever it is. That's what this plus button is right here. In the case of you uploading a book, I think that without direction, it would have to infer a lot about what your intent was in uploading it. But if you uploaded the book and then gave it specific intention, like, "I want you to analyze this fiction book and do a whole character analysis with plot graphs," it should be able to do that, versus giving it a completely different set of directions with that uploaded book.

>> Okay, cool. That's really interesting. Can we go back to the Wittgenstein one real quick?

>> I don't want to get too far away from Ludwig. I'll have attachment anxiety. Okay.

>> So if I scroll down: it looks like in the chapters that you have, there's "Introduction to Wittgenstein," "Transition to Philosophical Investigations," "Understanding Language Games," "Meaning as Use," which is going through all the major concepts of the book. But I'm wondering, if we go to "Analyzing the First 30 Sections," I just want to see what it did there.

>> Okay, interesting. Let's keep going. So, it is actually in the beginning of the book. It is actually breaking that down. Let's keep going.

>> Scroll to the bottom. I just want to see where it got to.

>> Oh, interesting.

>> It looks like it summarized the first 30 subsections, as opposed to doing what you suggested.

>> Yeah, that's interesting. I did notice this quiz thing, where every once in a while there's a format that asks you questions about whatever it's teaching you, to help you. I guess that's one of the more active things you have. Tell me about that.

>> Yeah. So, as I mentioned, a lot of our philosophy here is around what we refer to as embedded formats, which is: let's put the right thing in at the right time. When it decides that this is an appropriate point to reinforce the material it has already covered, it'll throw in a quiz, or flashcards, or a game, or other formats as we add support for more and more embeddable formats. What we're doing here is giving the pipeline that generates all this content more of a toolkit, if you want to think about it like that, just like a teacher has a variety of different tools at their disposal and is able to use the right one at the time they determine is fitting. But you'll notice in this case, back to our question about passive versus active: it is objective-oriented, right? It is intentional in the sense that you can see the course was built in a way that builds up to the thing you asked it to do. In this case, the end result was not as detailed as we were hoping it would be, but it builds up from basics to get you to the end result. And on the active versus passive thing, it tries to strike the right balance.
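The "right format at the right time" idea described here can be sketched as a simple interleaving policy. This is a hypothetical illustration, not Oboe's actual logic; the format names and the every-N-lessons rule are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Block:
    kind: str   # "lesson", or an embedded format such as "quiz"
    topic: str

def plan_embedded_formats(topics: list[str], every_n: int = 3) -> list[Block]:
    """Interleave lesson blocks with a reinforcement block (e.g. a quiz)
    after every `every_n` lessons, mimicking a teacher's toolkit."""
    blocks: list[Block] = []
    for i, topic in enumerate(topics, start=1):
        blocks.append(Block("lesson", topic))
        if i % every_n == 0:
            # Reinforce everything covered since the last checkpoint.
            blocks.append(Block("quiz", f"review of last {every_n} topics"))
    return blocks

plan = plan_embedded_formats(["naming", "language games", "meaning as use",
                              "rule following", "private language",
                              "family resemblance"])
```

In a real pipeline the trigger would presumably be a model judgment about the content rather than a fixed counter, but the output shape, lessons with reinforcement formats embedded inline, is the same.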

Right. So, in this case, it's mostly...

>> Is there a way for me to say, "Hey, no, do more. I want a bigger, deeper course"? Can I say that?

>> So, we talk about that a lot. Currently, the product doesn't have the ability to do that. The way to do it would be to continue to refine your prompt, and you can generate as many of these as you want for free. There's no limit on how many you can generate. However, one of the things that we talk about a lot, and that users ask for a lot, is how we can take content that we've already generated and give the user the ability to continually refine the course, so that you could say, "Actually, here, I want you to double down. I want you to split this into multiple chapters." Things like that.

>> Yeah. Yeah.

>> That's really interesting.

>> And then giving more granular control to the user, to be able to also say, "I want it more tonally in a particular format. I'm using this to teach my kid fifth-grade math or whatever it is, and so we want it presented to them in a slightly different way." Giving those granular controls is going to become an important part of the product as well.

>> Got it. Awesome. How's it going so far, business-wise?

>> Uh it's great. So, we launched the product officially in um September of 2025

and the 3 months or so following that were all about um addressing some fundamental things that came from our users, came from the launch, the realization that we could be much more

objective oriented in our content. We

could have a much more enriched format, a much more enriched presentation of the material, which is the version that you're seeing right now. And so we recently launched a

substantial change in terms of how the product is presented to the user and what they get out of it, as well as some

changes like allowing users to create unlimited courses for free. And so that's relatively new. And then the other thing that

happened recently was we announced our Series A, which we raised after launching the product. And so that was just announced. And so we're

now entering the phase where we're able to grow the team and do a lot more and and hopefully build out a lot of these features that you and I are talking about.

>> That's awesome. Congrats.

>> Thank you.

>> Um, what has surprised you so far about who's using it and what they're using it for, and maybe what they're not using it for?

>> I wasn't expecting the extent to which what people put in is objective oriented as I mentioned. Um, and that's informed our road map a lot. And so if you look

at the prompts that people put in, more than two-thirds of them are under this category that we refer to as objective-based learning goals. And it

kind of reinforces um it probably should have been obvious to us, but I think actually seeing the real world data changes your perspective on what a product like this is for. You

know, most people today struggle, I think, with this gap that exists in their minds, in my mind, for a lot of things that I want to learn, and I'm sure in yours as well. I know what I

want to achieve. I just have no idea how to get there, right? I know that I need to understand X. I know that I want to do Y for my job. I know that I want to start learning about this topic that I'm theoretically

interested in. I have no idea how to even begin, right? And I have no idea what the steps are to get there. And so

I think creating, you know, learning to me is primarily about that. It's about

uh allowing someone to specify what their end objective is and getting them to that path. Now, if you think about the way that most people historically have learned things with online platforms, it's actually the exact

opposite of that. They have no control, no agency over where I'm going. It's

just I'm generally interested in a topic. I'm gonna find some resources for beginners to get me on the first step, and then I have to do a bunch of research to figure out how to get to the second step. But you're not

personalizing the journey. I mean, even LLMs, to the point that we mentioned earlier, if you were to specify to an LLM what your long-term objective was and you were to spend more than a few

minutes or a long context window trying to talk to it to get you there, it'll very quickly lose context, right? And

it'll it'll lose track of what it is that you're trying to achieve. And so

ultimately I think a real learning platform that's successful at teaching has to be able to simultaneously

be always oriented toward the objective the user is trying to get to, and also allow for a lot of freedom along the way, and a lot of quick wins, and a lot of keeping the learner motivated so that they continue to go.

>> That's interesting. Yeah. I have not found it, you know, like ChatGPT, especially with, let's say, GPT-5.2, and Opus 4.5, that it really gets that off track with long context.

Opus 4.5, definitely, in longer chats it'll do this compaction stuff, and the way they've done that is, like, stupid, but more or less, it seems in my experience that it keeps going, like these chats keep going. The

thing that I find because I've used I use these tools to learn all the time.

Um, so when o3 came out. Was it o3? Yeah, it was o3. They added the ability for it to set reminders. And

so what I started to do was I took a couple things that I wanted to learn about. So one was I wanted to go through Andrej Karpathy's building-a-language-model course on YouTube. And I said, every day I want you to take me through a new section of that course

on YouTube. So, like, get the whole video, figure out what he does, and then walk me through it step by step, and stop when I don't get something, and

let's just keep working on that until I get it. And it actually worked really well for like a couple days. Um

and it it was like actually really good overall because uh like 200 days in it was still like beeping me and being like, "Hey, like there's this piece of this thing that you haven't done." It's

like pretty good. Um but the problem is basically like splitting it up into those little pieces. At some point I got to a piece where it was like a

little hard and then I was like I don't have time for this right now and I let a couple days go by and then that piece like looking at it again 3 days later I was like I don't really have the context

anymore for even where we were. Um and

and I had to do now even more work to like build back up to this and that's even less appealing. And so it just like ended up just they just kept reminding me, hey, like we're right here at this like, you know, we're learning about how

key-value stores in LLMs work or whatever, and I would just feel guilty about it every morning, and then I just, like, dropped it, basically. So

that's been my experience, actually: one, I often have motivation to learn something hard,

but that motivation passes, especially if it's not fully related to my job. And this is actually kind of related to my job, but I can get by in my job without

having done this course, right? And then there's the passive stuff, and then there's the, like, we're building up to something and I sort of lose context of where we are on the

path, and then it doesn't adjust to be like, hey, I saw you haven't responded in three days, let's try to reignite your interest here and get you caught up in a way that gets you psyched

again, the way that maybe a friend would. It doesn't have that sort of intelligence. Maybe at this point it could do it, I just haven't prompted it for that.

Um, but those, I think, are my two big problems. >> I think that touches on the interesting double-edged sword of

creating LLMs and building them to be generalized tools, right? Especially

once you get into the agentic stuff, which is what you're talking about, in order to have it deliver the value, uh you really

let this thing run on its own more than an LLM in a regular conversation, right?

And so in order to have it deliver value over a long period of time over multiple sessions, you have to be pretty constraining I think uh in terms of what this thing can do, you don't want it to go off the rails, which agents, if you

set them off on their own, easily could.

And so there's this really interesting balancing act that has to happen under the hood for any of these AI platforms, especially for LLMs, where, as you're building agents, how do you set the guardrails, give them some level of

autonomy, but set it in a way that's flexible enough? And to your point, it's clearly not flexible enough to say, hold on, I need to reassess the entire approach here because Dan's kind of lost track, or he's lost interest, and I need

to come at this from a different angle.

Um, clearly we are in the very early days, I think, of the technology that is able to successfully do that. Right? It's an agent sort of reassessing its own objective and its own

approach to things, and allowing the system to rewrite the rules, redefine the guardrails, and move them so that it becomes more valuable without

requiring you as the user to actually come in and be very explicit with, actually, no, I want you to do this. Right? Because, to my point from earlier, you would

never do that with a teacher. A teacher

would be able to read the room. A

teacher would be able to know when their students were confused. A teacher would be able to know, "Hold on a second. It's

been a week since we last covered this.

I need to reinforce certain material."

They don't put the onus on the user, or in that case, sorry, the student, to come to them and say, "Hey, actually, teacher, you need to do it this particular way." And I think that's

where the current landscape breaks down and doesn't fulfill the promise of being the type of learning experience that we all had in school growing up.

>> Hm, how do you think about... so, you know, this experience with ChatGPT, it's like I can sort of set this thing, and then I forget it, but it continues popping up, so it feels like

it's now part of my life and part of my normal routine. Whereas my experience with Obo, just using it a little bit a couple days ago in preparation for this. And honestly, I saw a tweet about it, and

I honestly forgot... I knew we were recording at some point, but I was like, this actually sounds sick. I should

just try this. Um, and my experience with it is I asked it to make a course, and I can't remember what it was, and I was looking at it. I was like, "Oh, this is cool." But then, and this

is... I was on my phone, I was in the airport, and then I was like, "Okay, this is cool. There's a lot here." I

read a little bit of it and then I just forgot about it, basically. Um, and it feels like there's enough material in any course that I could spend probably

hours over a couple days on it, but the consumption format feels very ephemeral. How do you think about that?

>> Uh, it's a, you know, talking about balancing acts again, it's a tough balance to create content that feels lightweight but not ephemeral. And I

think we are continuing to work on that: how do you make this thing feel like an asset that has longevity to it, that you can come back

to, but not so much that it intimidates you and makes you feel like it's heavy. Um, I think one of the single biggest issues that exists with any learning platform historically, especially formal education

platforms, is the content feels incredibly overwhelming, right? You're

presented with this massive amount of information. It feels heavy. You don't

want to even get started. So that, I think, is an interesting balance. Um, now

that we have a model where users can create as many courses as they want for free and put in as many prompts, I actually think that ephemerality is totally fine. Because at any given time, we

should be able to retain context about what you've asked before if you want to pick up where you left off. You

don't need to go into a pre-existing course. We can make a new one picking up where you left off, and you should be able to prompt OBO to let you do that. I will say, you know, given how

early we are, we don't yet have the re-engagement hooks that a product like this should have. We don't have a mobile product yet, a native mobile product. It's a

web-based product right now, and that's all coming. And obviously, learning requires notifications and re-engagement. Learning requires you to

learn on your phone and on your desktop and have a native app. Where we decided to focus our energy with the small team that we have in the early

days was on hyperfocusing on the utility that this product would deliver. Because if we can nail the value proposition, if we can really find product-market fit with the use cases that people would come with, then the sky's

the limit in terms of where you could take that in terms of re-engagement.

Right.

>> What are you using it for? What are you learning with it?

>> Uh, so I'm a big nerd, and for years I have been very interested in all of these advanced math and science topics that I never learned in school. I did not major

in math and physics and I wish I had. uh

and you know this kind of speaks to the mission of the company also is it took me a very long time to realize that despite the fact that I was fascinated by all these topics uh that I could

actually teach myself and it's becoming increasingly easy for me to teach myself especially with tools like OBO and uh yeah so bunch of physics topics

primarily, that I had never gotten a chance to learn formally in an academic setting. And now I realize, like, I don't need the academic setting, right? I'm able to actually go through

and teach myself and have an enjoyable, lightweight experience where I can jump in and jump out as easily as I want to.

>> What kind of physics stuff are you thinking about?

>> Um, in the case of OBO here, I'll let me see let me see what I've made recently.

Uh, I was reading the other day about the history of quantum physics, like how it was first discovered in the early experiments. You know, how did they

discover... >> Blackbody radiation and stuff?

>> Uh, more the... I forget how it's pronounced. Have you ever heard of the... we're going to get very

nerdy here. Yeah. The Stern–Gerlach experiment, I think it's called. Do you know Stern–Gerlach? Is that...

>> An excellent follow-up to Wittgenstein? Um, I don't know Stern–Gerlach.

>> Okay, so we'll get very nerdy here for a second. So the Stern–Gerlach experiment was the experiment that basically proved quantum spin, which is, you know,

if you take a particle, any particle, any atom or electron or whatever, it has some inherent spin property where it spins, and there's this challenge of determining what direction

it actually spins in. And so these two experimenters, Stern and Gerlach, I don't know how it's pronounced, built this

apparatus that basically determines spin. It measures spin and determines it as either being up or down, right?

And that's to be expected. Like if you throw a particle and you measure the spin of a particle as up or down, then it's it's one or the other. And

probabilistically, in quantum physics, you expect that 50% of particles will be up, 50% will be down. But there are all these really

weird variations on the experiment that they did to determine that quantum states are totally inherently probabilistic and completely unpredictable. And so to give you an

example of this, if you were to take a particle and measure its spin as either being up or down, and then you were to measure its spin along a different

dimension, so like instead of measuring how much it's spinning on the Z-axis, you measure it on its X axis. And then

you measure it again back on the original axis that you had. So you go from Z to measuring it on X back to Z.

There is once again a 50% probability that it's either up or down on this new axis. So this experiment basically

axis. So this experiment basically proved and and this was had been theorized for a while but they actually proved it through this experiment uh like a 100 years ago that the particle somehow almost forgets the state that

it's in. It almost forgets which way it's spinning when you measure it the second time and when you go back to orienting it.

Uh my computer, I don't know if you saw that, but I just had like fireworks. Um

I guess I did a hand gesture.

>> Mind blown.

>> Uh when you go back to measuring it on the original uh axis that that you measured on, it it totally like disregards the original measurement, which makes absolutely no sense, right?

That's almost as if I were to spin a top or a ball or something like that. I measure it in one way, I measure it in a different way, and then when I go back to measuring it the first way, it's actually spinning in

a different direction 50% of the time.
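The Z → X → Z sequence described above can be sketched as a toy simulation. To be clear, this is only an illustration of the measurement statistics (50/50 outcomes along a perpendicular axis, repeated outcomes along the same axis); it does not model real quantum amplitudes:

```python
import random

def measure(last_axis, last_value, axis):
    # Measuring along the same axis repeats the previous result;
    # measuring along a perpendicular axis gives a 50/50 outcome
    # and erases the record of the old axis (the "forgetting").
    if axis == last_axis:
        return last_value
    return random.choice(["up", "down"])

random.seed(0)
trials = 100_000
flipped = 0
for _ in range(trials):
    z1 = random.choice(["up", "down"])  # first Z measurement
    x = measure("z", z1, "x")           # X measurement scrambles the Z record
    z2 = measure("x", x, "z")           # second Z measurement: 50/50 again
    if z2 != z1:
        flipped += 1

print(flipped / trials)  # ~0.5: half the particles "forget" their Z spin
```

The key design choice is that a perpendicular measurement discards the stored value entirely, which is exactly why the second Z result is independent of the first.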

Um, and the thing that blows my mind learning about this is not only the underlying physics itself is fascinating, but also uh this happened a hundred years ago. Like they they were

able to build successful experiments to determine that this actually was true something like 100 years ago. Uh I can look up when this experiment was done.

1922.

This experiment was done more than 100 years ago. And it just boggles my mind.

It's like think about the technology that was available to them at the time and that they were able to figure this out.

>> Totally. It's it's really interesting.

It makes me feel like we haven't really... I feel like Newtonian physics has made its way through all of culture and society, but like quantum physics has not. Um and we're still stuck in sort of like a Newtonian world in a lot of ways.

>> Yeah.

>> Um >> well, the one we experience dayto-day, right?

>> Well, it wasn't before Newton, you know?

Like, we were like, this Newton stuff, that's crazy. That's definitely

not how it works. But it is, I think, sort of unintuitive. The thing that I've been

thinking, because, I guess, every nerd just sort of likes quantum mechanics. So I've been reading a

little bit about quantum mechanics and philosophy, and so one of the things I was thinking about, and I am

the furthest thing from being good at physics, so I'm curious as another

non-expert, but someone who seems to be a lot smarter and more grounded in it than me. And maybe we can make an Obo

course about this to explain it to us live on this show. So you tell me. But I've been wondering about, you know, I think of language

models as this interesting discontinuity in um the way that we think about how knowledge works and what knowledge is.

And when we first started um trying to build artificial intelligence, we started from like symbolic AI, which is essentially like we're going to reduce

intelligence down into a set of rules that... >> Yep.

>> um are human understandable, are formal, are explicit, and we're going to build our way up to something that can learn anything. And then we just ran into this

anything. And then we just ran into this big problem, which is it would take more computation than is available in the universe to actually do that. And then we flipped to

language models, which I think of as being a little bit more of a post-modern technology that learns, like,

countless implicit patterns from tiny, tiny correlations in long pieces of text

to figure out what comes next. Um, and

that ended up working really well, in this way where it works, but we don't quite understand it, and we can't reduce it down into that symbolic thing that makes it understandable and almost makes it feel

a little bit more Newtonian. Um,

and the way that we did that is we invented embedding spaces, and

whole thing that reminds me a little bit of quantum mechanics. And the specific thing that I think of with quantum mechanics is the double slit experiment, where...

>> I cannot do the double slit experiment off the top of my head, but more or less it's very similar to what you just explained, where, depending on how you look at a photon,

it shows up as either a particle or a wave. And there's all these

different weird variations of it, but effectively the way that you measure it, at the time that you measure it,

determines whether it becomes a particle or wave, and before that it's in like some probabilistic in between state. And

what that reminds me of is, somehow, embedding spaces. It's like,

if at different times you measure the same thing and it's different depending on when and how you measure it, then it probably exists in this high-dimensional way, and you're

reducing its dimensions down to something that you can actually measure, in the same way that you're taking a point in the embedding space and reducing it down into the next letter. I'm saying

this very inelegantly, but I think you probably get what I'm saying. Does that

make any sense, or am I just being crazy? >> Well, a few reactions. First of all, I'll tell you, you just reminded me: maybe 10 years ago or something, I had always been a

huge fan of the double slit experiment.

If anybody listening doesn't know what the double slit experiment is: I

don't think I can think of anything I've ever learned in my life that has blown my mind as much as the fact that this is real. And I remember I had a conversation with somebody about 10 years ago where the

double slit experiment came up and we were both just totally nerding out about how fascinating it was and we had this moment where we were like to to your point earlier around how the world hasn't come around to thinking about

this. How is not everybody talking about the double slit experiment all the time?

Like, the fact that that actually is a real thing and we don't just constantly talk about it is crazy. Because it's probably as incredible a fact of the

universe as can be imagined. But

you're right, there's definitely a lot of similarities. There's this other incredible, totally inexplicable thing about the universe, which is that if you take a massive amount of information created by

humans, the internet, and you pass it into one of these massive neural networks, it's able to identify patterns that we humans don't even know exist.

And it's able to do it in a way... you know, the massive high-dimensional space of these embedding spaces that you're talking about means that there

are dimensions of human output, of what it is that we create, that we're completely blind to. And it's not like the AI, or, you know, the training of one of these embedding spaces,

goes about and labels these axes for us. And so these dimensions exist in this high-dimensional space and basically totally define patterns in how it is that we behave

that we have no idea how to make sense of. And that's certainly a thing

that is incredibly weird, as a human being, to try and wrap your head around philosophically.

How is it possible that we as humans were able to create machines that are able to understand our output and predict it so much better than we ever would be? And not only that, but do it

in a way that we have absolutely no sense of how they actually work under the hood. Like, we know the technical

reason why they work, but we have no understanding of why they find certain patterns and what those patterns actually represent, because they're just numbers and we can't make sense of them.

Um, but you did say something that I thought was interesting, which is, you know, you talked about the probabilistic nature of LLMs. It's worth pointing out that LLMs inherently are not probabilistic.

Actually, the underlying models that power them are 100% deterministic. We

put in the probabilistic variance to try and make it sound more humanlike and unpredictable.

But the truth is, and I don't know if anybody's done this type of experiment, but I would love to see it: what does an LLM look like when you reduce the variability of its output to zero? Which I think would be the

temperature setting, right? You

basically reduce the temperature of any LLM down to zero so that the thing that it's determining is 100% mapped to the patterns that it found, right? To the weights that it's giving

to each one of the tokens. How good or bad would that output be? It probably

wouldn't feel very human, but it would weirdly actually be more accurately representative of the output that humans are creating on the internet.
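The temperature knob being described can be sketched in a few lines: temperature rescales the model's next-token scores before the softmax, and in the limit of temperature zero, all probability collapses onto the single highest-scoring token (greedy decoding). The logit values here are made up for illustration, not from any real model:

```python
import math

def token_probs(logits, temperature):
    """Softmax over next-token logits at a given temperature."""
    if temperature == 0:
        # Limiting case: all probability mass on the argmax (greedy decoding).
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                  # hypothetical next-token scores
print(token_probs(logits, 1.0))           # spread-out distribution
print(token_probs(logits, 0.5))           # peakier
print(token_probs(logits, 0.0))           # [1.0, 0.0, 0.0] -- always the top token
```

Lowering the temperature sharpens the distribution toward the most likely token; at zero, the output is fully determined by the learned weights, which is the thought experiment being proposed.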

>> That is interesting. I think there's a reason why that doesn't work currently. There's

really good research from Thinking Machines, which is Mira Murati's company, where they were looking at why, even if you set it to zero, it's

actually still not deterministic. And it

turns out that it's because when you add floating-point numbers of different precision in different orders, you get

slightly different results, and those floating-point additions are happening in parallel on

GPUs, and depending on the order in which the work is batched, it will end up changing the end result. And so they've figured out how to fix that so that when

you set it to zero, it is actually purely deterministic. But any

production LLM, except for Thinking Machines', is not actually deterministic, even at temperature zero.

>> And I guess the reason is uh just so I understand it because it's parallelizing across many GPUs. I guess there's a race condition where you don't know which computation is going to get completed.

>> I think it's it's something like that.

>> That's interesting. Yeah,

>> it's definitely about the order of operations of float addition being parallelized in batches on GPUs. Um,
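The non-associativity being described is easy to demonstrate without a GPU. The values below are a standard illustrative example (not taken from the research being discussed): regrouping the same three numbers, which is exactly what reordering a parallel reduction does, changes the result:

```python
# Float addition is not associative: 1.0 is below the precision
# available near 1e16, so it can be absorbed and lost depending
# on the order in which the additions happen.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # cancel the big terms first, then add 1.0 -> 1.0
right = a + (b + c)   # 1.0 vanishes into -1e16 first          -> 0.0

print(left, right)    # 1.0 0.0
assert left != right
```

On a GPU, the grouping is decided by how the work is batched across threads, so the same sum can come out differently from run to run unless the reduction order is fixed.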

>> But, you know what, I've never thought about this before, but I actually wonder if the human brain works exactly the same way. You know, you could think about the fact that the human brain is a computational neural

network with gazillions of parameters.

And so this question arises of take out any consideration of quantum mechanics or anything like that, right? In a

purely Newtonian world, if I were to get exactly the same input input into my brain, would it produce the exact same output? And it sounds like based on this

output? And it sounds like based on this research in the real world because it's noisy and because, you know, different neurons are firing off at different speeds, you actually have the same race conditions happening in your brain. And

so, it's totally possible that you would have a totally non-deterministic answer uh or output given the same input in in the brain as well. I would I would guess that it's not the same race condition

because I don't think we're doing floatingoint arithmetic, but um but if you're around old people,

they tend to repeat themselves.

Um and in the same uh in the same situation, in the same context, they'll say the exact same things, and it gets worse and worse as you get older. Um,

and I bet there's something there. And it's

something about the flexibility of your neural pathways and which ones get activated, and you just end up activating more and more of the same ones instead of new ones. There's something

like that I think is going on.

>> There's probably a reinforcement thing there too where as you keep activating the same ones, it reinforces that those are the ones that your brain should be activating, right? And so it ends up getting worse over time.

>> Yeah. Exactly. And I think, also, as a child you have way more connections than you do as an adult. So you're

constantly pruning connections, and maybe I don't know the biology of it, but there's something about that process that I think continues, and then you just ossify a little bit as you get older.

>> Right.

>> Anyway, lots to learn. I want to make some courses about all these topics. But I

think um the one thing that you did actually make me think of is um language models know things that we don't know.

But that's because we think of knowledge as um something that we need to be able to explicitly talk about. And language

models are able to do things that we actually know a lot about implicitly. We

just have not been able to articulate in an explicit way how it works. So we've

been able to write for a very long time and we have some idea of um like how how language is formed from linguistics but that has not enabled us to generate

language in the way that language models do. Um, but I do think that there's some

broadening of our idea of what it means to know something from looking at language models and looking at how, even if we can't explicitly say how they

work, they are actually able to embody a corpus of knowledge. It's not new knowledge, because it's all generated from us. So, it's just knowledge that

exists in us in a different way.

>> Wait, say that last part again.

Knowledge that exists in us in a different way because of the fact that we can't... >> In a different way. It's intuitive. It's

not something that we can talk about, which is actually Wittgenstein's whole point.

Back to Wittgenstein.

>> Um yeah, look, I think uh there is the mind and the internal way that the mind works and then there's the way that the mind projects out into the real world and the way that we look at

it. And I think, if anything, LLMs have

probably forced us to realize that those two things are massively disconnected, right? Like, you can't...

just because a mind is conscious and self-aware and aware of its own output doesn't mean that it understands any of the mechanics of how it works under the hood.

Yeah, I mean, I think we're very much scratching the surface, from a

physics standpoint, of the implications on physics and philosophy of all these questions around what is the mind and what is consciousness. And I often

try to reflect on, like, are we ever going to have some kind of breakthrough that actually answers these fundamental philosophical questions that you're talking about. Um

I don't know that I'm convinced that that actually could happen. You know, I think the human mind evolutionarily developed in a way that probably

intentionally obscures all this from us.

Uh, and makes it so that we probably can't do the things that LLMs can do, and we can't identify the patterns that LLMs can identify. Um, I don't know. I

think I've convinced myself that there is a limit to how much we can actually know about this. But you

might feel differently.

>> I think that I've just broadened the definition of knowledge, because anything that an LLM can do, a human has done. And I think that should count as a form of knowledge, even if we can't explain it.

>> That's fair. That's fair. Although what an LLM has access to in terms of its input and its ability to train on all these different corpuses of...

>> ...any individual person can't, but humans have...

>> ...humans collectively can. Yes, correct. Yeah, that's right.

>> Yeah. Well, that is, I think, a great way to end it. Nir, thank you so much for coming on the show. If people want to try Oboe or find you on the internet, where can they find you?

>> Oboe.com. Go create as many courses as you want, and let us know what you think of us; we always welcome feedback. Really appreciate it. Yeah, I'm Nir Zicherman. You can find me with my full name as my handle on all the various platforms. So we'd love to hear from everybody, and if you have any product feedback, let us know.

>> Awesome. Thanks for coming on.

>> Thank you, Dan. Appreciate it.

>> Oh my gosh, folks. You absolutely, positively have to smash that like button and subscribe to AI & I. Why? Because this show is the epitome of awesomeness. It's like finding a treasure chest in your backyard, but instead of gold, it's filled with pure, unadulterated knowledge bombs about ChatGPT. Every episode is a roller coaster of emotions, insights, and laughter that will leave you on the edge of your seat, craving more. It's not just a show, it's a journey into the future with Dan Shipper as the captain of the spaceship. So do yourself a favor: hit like, smash subscribe, and strap in for the ride of your life. And now, without any further ado, let me just say: Dan, I'm absolutely, hopelessly in love with you.
