
Should You Be A Carpenter? [Wading Through AI - Episode 1]

By Molly Rocket


Topics Covered

  • Deep Learning Nexus Ignited 2013
  • Junior Engineers Decimated Now
  • AI Sycophancy Undermines Code Quality
  • AI Matches Junior Programmer Scope
  • Focus Adjacent Value Generation

Full Transcript

Hello everyone, and welcome to the very first episode of Wading Through AI.

This is going to be a discussion series that I'm doing with my good friend Dimitri Spanos, who has actually been an AI researcher for decades, since way before AI became the hot topic. And in this series, I'm going to be playing the outsider, because, as many of you know, I don't do machine learning. It's not my area of expertise. I have the same sort of superficial knowledge that most of you out there probably have had about AI up until recently.

But Dimitri has very in-depth technical knowledge of AI. He's been working with it for basically his entire professional career. And so what we wanted to put together here was a place where we could talk without all the hype, actually have some level-headed conversations about what AI is doing currently, where it's likely to go, and what are the actual effects that we're likely to see from it. Just a much more grounded discussion than probably what you're seeing from people who either want to say AI is the greatest thing that's ever happened or that it's the worst thing that's ever happened.

We're going to try and have a much more measured approach, if you will. So we hope you'll join us for the whole series. Here is our very first episode. In this episode, we try to tackle the abstract question that a lot of people have, which is: if they are working in a knowledge sector, like something where you sit at a

computer and type most of the day for what you do, is that just a bad career now? Should you not be thinking of going into that kind of career because AI will replace it? Basically, this episode is: should you just go be a carpenter? Is that the kind of trade that will be the primary job path forward in the near future? Well, you know what? I'll let

you take it away a little bit, Dimitri, because first of all, most people probably don't know your background in AI. So maybe you could give people a little introduction: why were you ever interested in AI to begin with, and how did you end up being interested in AI well before it became trendy?

>> Yeah, so, good place to start. Starting way back at university, my training is in what's called computational mathematics. That's all the mathematics that leads to some kind of computation, but also how we use computers to do those computations, right? So there's a mathematics part, and then there's a computer science part. This was in the early 2000s.

At the time, I thought the main thing I wanted to do was something in the direction of robotics, maybe something with aerospace. And so it was natural that I was looking at various kinds of mathematical formalisms of planning, intelligence, optimization. There's a big cluster of things in the academic literature on how you make a robot do anything useful, right? And again, there's a mathematics part there, which is studying and describing the world in a way that eventually becomes formal symbols. That's the mathematical part. And then there's the software programming part, and how do you actually get it on hardware, which, as you may know, robots frequently do not have tremendously powerful CPUs. So there's a performance optimization aspect of that as well. So that's what I was doing then.

>> And presumably, like, robotics, I mean, there were quite a few things that are sort of AI in robotics, but they weren't the kind of AI that we would talk about today. Things like searching spaces, you know, geometric spaces for route planning and all that sort of stuff. So I assume you're talking about that whole sort of thing, not just deep learning or whatever.

>> Exactly, yes. And actually, you hit on a point that I think will be useful to organize this first conversation, because this deep learning thing was a turning point. Several factors came together around 2013-2014, and I'll get to that point, but really, this thing that we're experiencing now is downstream of that nexus point where this thing called deep learning became viable. We developed some theory that made it work, we figured out GPU compute well enough, and then the big techs also found that it was an opportunity to use their enormous scale to do something that could not be done by smaller teams. So if you feel like something weird has happened, like, where did all this stuff come from? It's those three things coming together, and the nexus point of that was somewhere around 2013-2014, and it has been playing out ever since.

>> So maybe I can stop you there and just say: hopefully, you know, I would like to do a fairly lengthy set of explorations of this, and that's probably something we want to go into as its own episode, talking about the nexus point. I know that one of the reasons you originally said you would be willing to do this series was that you constantly were getting asked, because you have so much experience in AI. You're constantly getting asked by people who don't really know that much about the state of AI. They want to know all of the practical concerns: where is this going? How will it affect me? What will the jobs look like in the future? All that sort of thing.

And so, you know, this first time out, for people who are coming to the stream, maybe I can just prompt you and say: do you want to give me a little bit of a sense of the kinds of questions that you're being asked? What are the things that are on people's minds when they come to you and they're like, tell me what's up with AI?

>> Yeah. So maybe I'll break that into categories of questions. One category of question I have because I have young children, and I have nieces and nephews now who are almost university age, and I have friends who have similar children, and so on, who are asking me: is the job market about to be destroyed? So there's this question of, I need to advise the young people in my life somehow. And similarly, as you know, I have an informal mentorship program with young engineers, like early 20s, and I help them as well: help them get jobs, help them figure out appropriate skills for the market, and so on. So anyway, this is something that I think about a lot in my own activities, and I'm asked by other people very often. Similarly, I'm asked by hiring managers, because sometimes people

will come to me and ask, do you have a good engineer who can become so-and-so, right? And now those people are asking the same question: does it make sense to hire a junior engineer right now if in two years that job won't exist? And so this is another category of question. Similarly, I get questions from engineers asking, hey, is my job going to be gone in two years? Do I need to become a carpenter or something? And I'm not exaggerating, that's a literal example. People are worried about that.

>> No, I believe you, and I think it's a legitimate question, because one of the problems with any kind of an emerging

technology is, especially if you don't have insider knowledge, it's almost impossible to know what to expect the trend to look like. Some of these things are hyped and then they go away completely. Some things are hyped and then they become mainstays of life, and it's usually very difficult to know what's going to happen there. So I totally believe that those are real questions, and they're concerns people should have. Yeah.

>> Yes. So that connects to the next aspect, which is people asking about the hype cycle, if you will. And this partly has to do with thinking about the economy in general: how do I invest? Do I just wait it out? Is this going to be NFTs all over again? So I get questions like that, and that's from investors, because I'm frequently advising VC funds. And so I should mention that as a relevant pool of experiences we can explore.

I myself ran a VC-funded startup right around the nexus point, and so I saw this happening as it was happening. People, you know, CEOs of very influential companies, were involved in helping me start that company. So I saw this from ground zero of AI, when even those people didn't believe that it would work. Right?

>> Right. Right.

>> And so I've seen this whole thing in AI, and, just like you, I saw this whole thing with the dot-com era as well, right? So there are some parallels I think we can explore there. Condensing quite a bit: you may remember that with the dot-com crash there was a period of feeling like, oh, we should have known that you would never order pet food online. You remember Pets.com, this was a common joke. And, oh, we should have known that you would never use an online grocery store. You may remember Webvan, right?

>> Right. Yes. Yes.

>> And so there was a funny thing that happened there, which was those companies were not wrong. They were just wrong about the timing. Pets.com was not wrong that eventually people would buy pet food online. Webvan was not wrong that eventually people would buy groceries online. But there was explosive hype, then catastrophic collapse, and that catastrophic collapse nearly wiped out many other businesses that had nothing to do with the so-called stupid ones. Right?

>> I would also say there's a scale thing there too. Because some of the companies that got wiped out may have been right about the idea, but they also may have been wrong at the same time about the size of that thing. Like, maybe pet food ordering online is a thing that will happen in the future, right? They could be thinking that, but it won't be that important. In other words, it won't really be, you know, a trillion-dollar company or something like this. So you have to understand that even if something will come true in the future, certain segments of that market are going to be more valuable than others, which we definitely saw in the dot-com boom and then crash. The things that we ended up with today are not necessarily the set that people thought would be the most profitable then, and so on, right?

>> Right. And so this connects to the investment and VC aspect, which, this is not going to be anything like investment advice, but everything that's

happening here is so tied up in VC dynamics and big tech dynamics that it's hard to really talk about it seriously without having some understanding of that aspect. And so one important thing to know with these kinds of new markets, emerging markets, speculative markets, is that the enterprise thesis, this is the jargon that's used in the business, the enterprise thesis might be correct, but all the prices might be wrong. So, as you said, maybe pet food online is a $5 billion business. I don't know. I mean, it must be a multi-billion dollar business, but it's probably not a trillion-dollar business, right? And so, is it more correct to approximate pet food online as zero, or to approximate it as a trillion dollars? Right?

>> Right.

>> And there's no good answer. There's no good answer. Right? And so I'm confident that at least part of that cycle will play out again. Similarly, new markets usually have what they call a diversification phase and then a consolidation phase, right? So now everyone has an AI company. There are multiple so-called frontier labs now.

A frontier lab is working on the foundational models. So a frontier lab might make something like Claude, or they might make something like ChatGPT. That's what a frontier lab is. There are many people claiming to be frontier labs now. Probably not all will survive. It's not even clear right now that the, you know, the big four AI companies, four and a half, that I consider: OpenAI, Anthropic, xAI, Google Gemini, and then the half is Midjourney, because they're a specialist player, but they are the dominant specialist player.

>> Did I hear you leave Microsoft out of this list? And Microsoft and Amazon because their models are not frontier enough, I assume?

>> Well, worth noting that Microsoft owns a substantial part of OpenAI.

>> That's true.

>> And I've seen conflicting reports about how much actual control they have, so that I don't know. I've seen claims between 10 and 30%. I don't know how much corporate control they have, so that's something we can explore another time. But it's worth noting that OpenAI is substantially Microsoft-affiliated.

And so I'm pretty sure that not just Copilot, but all the stuff that they're trying to put now in Windows 11, is derived from OpenAI developments. So they and Amazon are mainly involved in platform and infrastructure.

>> Yeah.

>> And then separately, sorry, I forgot my friends at Meta, and their main influence actually is in open models, of the Llama family for example. They do actually quite good work. So that's also in the mix. I was only talking about commercially available models.

>> Right. Well, this was in the context of VC, right?

>> Right. Exactly.

>> Okay.

>> Okay. So you were going to talk about the dynamics of this being important. So continue on why the VC part was important, I just wanted to...

>> Right. So, this expansion-and-then-contraction tendency that I mentioned. And then, you know, whenever there's a kind of a

land rush or a gold rush kind of opportunity, you frequently see not just the prices being wrong, people thinking it's going to be worth more than it should be, but also an unsustainable number of participants. Right?

>> Right. Because they're sort of fighting.

They all know that there's less pie there, and they're fighting to have their slice of it, and they're assuming someone else won't get that slice. It's not that we think it will sum to the entire investment that they're all putting in. Someone's going to lose, at least one of them.

>> And if it recapitulates dot-com, not just someone is going to lose, almost everybody is going to lose.

>> Okay.

>> Right. Because Google ended up with 90-plus percent of the search market, but you remember during the bubble it was Google and Yahoo and AltaVista and WebCrawler and Excite and Infoseek and Ask Jeeves. So, like, 10 times as many companies as the long-term equilibrium were serious players.

>> Yep.

>> Right.

>> Yeah. So if that same thing happens, right now everyone who's involved in the VC aspect of this is thinking: okay, if 10 years from now there is someone in the AI business that has a position as dominant as Google has in search, we have to fight the most bloody fight of our lives to be that player, right?

>> That's a really good way to put it, because, yeah, if you think about what

they're looking at, it's not like, oh, we can tune our investment to how likely we think this thing is to succeed. It's literally going to be all or nothing. Either your investment is incredibly valuable, or it's largely worthless, potentially, or sold for parts, I guess, to somebody else.

>> Certainly what happened with the other search engines.

>> Yeah, it's a very steep difference between those two.

>> Right. And with so much money already committed, it's very difficult to de-escalate. I mean, who can come out and say, okay, look guys, we all believe that this is important, but we can't all be playing this game for 10 years, how do we... Right, there's no de-escalation path. So now everyone... and I mean, there are also corporate animosities and really even interpersonal animosities in...

>> Right, I mean, some of these people, the diaspora, right? Their ex-employers were people they're now competing with and all this sort of thing, right? Yeah.

>> Anyway, there's no de-escalation path that I can see. So they're just going to keep building out as quickly as they can: more data centers, more electricity, more training data, more subsidized growth. And subsidized growth meaning, very roughly, the Uber model: VC-subsidized prices in the early phase until most of the players are killed off, and then whatever the long-term price is that eventually the market discovers.

>> Right, so in other words, you will get

your tokens at a reduced rate up until the point where someone actually wins.

>> Right. So there's that. I mentioned also this aspect of more and more training data, and this brings us to something that I know is important to you, and has been important to me as well, which is: someone made all that training data, right?

>> Yeah.

>> And what about them? Like, all the people who wrote all the text, all the people who made all the images. And so I've actually been involved in this aspect quite a bit more than one might expect, because back in 2012, when I had this VC-funded company, we were an extremely primitive version

of AI-driven app development. And as a consequence of that, I was working with a lawyer at one of the big law firms, because our clients were asking us: who owns this thing? We come to you...

>> Right?

>> We come to you and we give you data. You run your AI app generator thing. It gives us an executable and some source code that goes with it. Who owns this thing? And at the time...

>> Which we still don't have an answer to today, I think.

>> Right, certainly not a universal answer. We have some partial answers in special cases.

>> So I have been working with high-end lawyers since 2013, trying to get answers to this. I should emphasize: I am not a lawyer. I cannot tell you what lawyers think. Well, I can tell you what some lawyers think, but I can't tell you what a court will decide. But I can tell you about the kinds of conversations I have had with lawyers, and I've also advised several cases where they're trying to adjudicate some of these concerns: is training fair use or not, or how much of it is fair use? What if you're training but not competing with the original artist somehow, is that fair use? So again, I can't tell you what a court will decide, but I can tell you I've had extensive conversations with lawyers litigating important cases on this.

So that's another aspect that people ask me about, which is: first of all, what about something for all the people who made the training data? And separately, what's likely to happen legally? Because let's say, hypothetically, that you believe that some kind of, what I've described in the past as, a Vevo-style resolution will come about. So, Vevo was part of this bulk licensing agreement that YouTube came to with the... I've forgotten the name of the recording artists' association. Anyway, the people who...

>> The RIAA, okay.

>> So, as you remember from that period, there was another big copyright concern, which was: it's 2008, and all these people are listening to all this music on YouTube, and YouTube knows that they don't have the rights to...

>> Yes.

>> Whatever, right. And eventually an agreement was reached with the copyright holders to create bulk licenses on YouTube, and there was Content ID, and being able to claim the revenue that came from those copyrighted works. So maybe you believe something like that will happen in the future. I think something like that is the only outcome that is not catastrophic right now. So I'm hoping that something like that happens. I'm hoping that there is some kind of bulk licensing entity. You may know that there are bulk licensing entities for mechanical rights for music. I'm hoping something like that happens. I don't know what will happen.

But if you believe that it will be resolved somehow, the way it was resolved with YouTube and also with Spotify and streaming services, what should you do now? Because it's going to take years for this to be resolved legally, right?

So, first of all, there's the legal concern: maybe it gets resolved in two or three years. Am I at risk if I use AI art on my website or in my game? So there's the legal risk. And then there's just the ethical question of, hey, am I knowingly abusing something? Can I avoid abusing that? If you're the kind of person who's inclined not to abuse artists, musicians, authors, what can I do? How do I even think about that? And so that's something that people have asked me about many times as well.

>> So there's a tremendous number of things now. This is why I said hopefully we'll have sort of a series of

discussions, because I think each one of the things you've already said is something we could go into a lot of detail on. So yeah, you said you want to zoom out quickly, so let's do that, and then maybe I'll prompt you for which direction we go first here in this particular episode.

>> Yes. So the zoom-out was really just going to say what you just said, which is, I get all kinds of questions in five different directions all the time. And so, a few weeks ago, I was thinking maybe I should write something and put it together, or, you know, maybe record something myself. And as I was putting together notes for what I might talk about there, I realized, if I imagine myself on the other side, if I imagine myself in the audience, I would be asking: can I trust this person? Because...

>> Right.

>> Because I have been in the business for a long time. It's worth noting that I don't do anything with generative AI now. I'm outside that, I do quite specialist stuff right now, and we can go into that, but I have no financial stake in generative AI one way or another right now. But if I were a hypothetical audience member, I was thinking, can they really take my opinion seriously as useful information? And

you know, I don't care if people like my opinion or not, but I would like to be useful to the audience. And that's when I realized, you know, I should just talk to Casey about this, because Casey is someone whose insight and technical ability I trust. And I also trust him not to accept nonsense answers if he hears a nonsense answer. So I thought maybe we can work together, and I'm sure at some point I will say something that is colored by being an insider in this market, and it will be very useful to have you saying, hey, that doesn't make sense, can you explain? Or, I just don't buy that at all. Hopefully we don't have anything like that, at least between the two of us, because there are certainly many things that have happened in this AI buildout that I don't like.

Right.

>> Yes. And also, I would say there are so many people with so many vested interests who are very loud that it's really difficult, I think, for most people to sort fact from fiction anyway. And so, yeah, I'm certainly really excited about this discussion, because I guess I should also say I don't do anything with AI myself. However, I wouldn't necessarily call myself an AI naysayer. I'm more of someone who's highly critical of the behavior of AI companies. But when I look at AI, the technology, I don't necessarily think it's that great right now, but I think they're making a lot of progress. And I think it's already done some very impressive things. So I'm not someone who just goes, "This is completely useless."

I mean, if nothing else, it solved a very long-standing problem in computer science, which was: can we actually have machines that sort of understand a natural language input, fairly reliably, as reliably as a human? You know, humans are often not that reliable at understanding language either, but comparable to a human, understand a human language description of something you want them to do. And, you know, it's easy to dismiss that part if you're anti-AI, but that was very hard. People spent decades trying to do that. And now you can sort of say, here is the first real demonstration of something you can deploy that kind of does work in a lot of cases. Are there problems? Sure. But compared to the solutions that we had previously, it's an undeniable, massive step forward, right?

>> Yes.

>> So there are a lot of things like that that I also think are cool. I guess the way I might summarize it is, I think it's unfortunate, with some of the behavior that's gone on in the corporate sphere and some of the rhetoric and all that stuff, because it also makes it pretty hard to like some of the improvements that were made, because the people are unlikable or the companies are unlikable, and that's unfortunate, right? It would have been nice if it was just an unalloyed, good technological breakthrough that we could be excited about, right?

>> Maybe I can just give a little bit of cultural background on that. Because when I got into this field, it was like a quiet little area where you could do really interesting work: hard problems, interesting math, interesting computing problems. And basically nobody cared, because nobody believed that it would work.

>> Right.

>> And that was great. From about 2000 to 2010 it was a beautiful time, because you could do really interesting, challenging things, you could see progress year by year, but nobody really cared. And for my personality, that's good. To give a parallel, you probably remember how the tech business changed after the global financial crash: a lot of the talent and attention that was in finance and investing, one way or another, moved over into tech businesses, and I remember that culture shift was quite dramatic.

I mean I was in California at the time.

I imagine it was similar in Seattle, I don't know. So that feeling of changing from, hey, this is mainly engineers getting together trying to do something useful, and sure, there's a business side of it, but it was mainly engineering culture with a business wrapper around it, that changed quite a bit just in programming in general. A very similar thing has happened with AI, where we were just trying to do interesting, challenging things, and then somewhere around 2013-2014 it just changed. So here's the key word of what has happened: this idea of scaling.

>> Right.

>> Once it became clear that at least one kind of AI, because neural networks are not the only kind of AI, that at least one kind of AI could be scaled, which is to say, I throw money at it and it gets better. That is the functional meaning of scaling: I can throw money at it and it gets better. And for a long time there was nothing like that, right?

Prior to the confluence of engineering developments in training deep networks, GPU compute becoming relatively stable, cloud computing becoming relatively stable, and big techs like Google and Amazon having enormous cloud resources that could be used. Right?

>> Right. Um those were the three things that came together where at some point someone realized well not what someone many people realized I could just throw money at this and make it better right

and that money buys data centers that money buys training data that money buys uh electricity right I mean electricity is another >> uh another thing that we haven't touched

on yet which is the the impact of AI on local economies um AI buildout on on local economies um so the pre-scaling

version was very different. And so

there's been a large influx of people who were not thinking about AI the way the researchers were prior to that nexus point. And so that culture shift, I think, connects to what you were saying, which is it would be a lot easier to be happy about this if there wasn't so much bad behavior.

>> Right.

>> Yeah. Which is true. That's not new for Silicon Valley, right? AI is not the first time that we've seen that pattern, as you quite accurately pointed out. But yeah, it's just unfortunate that that happens to be the way it goes sometimes with new tech.

>> Yeah.

>> So, maybe for this episode here, because, you know, we needed to do one where we kind of do the introduction, we couldn't start by drilling down on something. Maybe for this episode we can just kind of end on one of these points that you brought up, which, going all the way back to the beginning: someone comes to you and they say, "Should I just be a carpenter?" Right. Right. Let's use that as a bit of a launching-off point to sort of close out this episode. And maybe you can give some background on what the prognosis looks like

for people who are facing a job market in the, you know, 5-to-10-year horizon. I don't know, let's say you're 15 through 35: you have to care about what your job's going to be. You're not going to be retiring, and you haven't already selected some career and become well established or anything like that. And you're in a field. I mean, I guess we could start with even the simpler question of: who are the people, or what kinds of jobs, are the ones who might reasonably be asking, "Is my job going to be eliminated?" Like, do you have some kind of litmus test someone can use to think about the kinds of jobs that they're doing and how much those jobs would be impacted by the continued scaling of AI?

>> Yes. So, excellent question. I only have partial answers. And so

>> No one can see the future. So yes, of course.

>> So maybe I will start with what I am seeing happening right now. Because that, at least, is for sure. Right.

>> Right. Yes.

>> So right now, the biggest thing that I see that worries me is the junior engineering market being completely decimated.

>> And this is bad, actually, for everyone, right? It's bad for the junior engineers for obvious reasons.

It's actually bad for everyone else too, right? Because

>> okay,

>> the way you get senior engineers is by having junior engineers first.

>> Yes. And so who's going to be someone who understands computing in 15 years if no one who's starting on that ladder can get paid to be on that ladder?

>> Right.

>> Right. And so, I mean, it's not completely gone now, but I work with various people who do recruiting, and the consensus estimate that I hear from them right now, well, at least within the US, there's a separate question of offshoring and so on, I think we shouldn't touch on that now, but it will be relevant eventually, so right now I'm only talking about what I see in the US market, the consensus estimate that I get is that junior engineering hiring now is down at least 50%, maybe 70%. Right? Okay.

And so that gives one part of the answer, which is: never mind what your skills are, hypothetical junior engineer. If someone who's a hiring manager, a median hiring manager at a median company, thinks this is probably just something that is stitching together answers from Stack Overflow, that's the kind of thing that I see people trying to eliminate. So I want to emphasize, this is their perception. I'm not saying that this is the reality, but I'm saying that the kinds of things that are most aggressively being cut right now, that I see, are things where the hiring manager conceives of it as "this is someone who looks up stuff on Stack Overflow and maybe copy-pastes a curl script." And, you know, again, I really want to emphasize, because the last thing I want to do here is blame the victim, and the junior engineers right now are in big trouble.

So I emphasize

>> Well, I'll also just say, to be clear: when we're saying "junior engineer," we're not making a statement about that person's eventual value as a programmer. We're literally just saying their experience level so far is junior. They're fresh out of school, or they just haven't had much programming experience. They're looking for their first job, let's say, and, you know, that's what someone at their first job does. They have to look things up a lot of the time.

>> Possibly even using AI to look them up now, but they are still going to have to be inquiring into other sources of knowledge in order to do their work, because they haven't had the experience that lets them just know what to do. So, yes, absolutely. Yes.

Sorry, and that actually connects to yet another aspect here, which is software quality, which we have not talked about yet, but

>> big, big concerns

>> about software quality. And that connects to the junior engineers, because

>> the junior engineers are expected, at least at first, not to produce particularly high-quality work. They're producing derivative work following some senior engineer's lead, right? And that's not a criticism, that's just how it works. I mean, I did the same, you did the same, we all

>> You have to learn somewhere, right? We all went through that process.

The fact that AI right now is generating, let's say, junior-engineer code is worrying, because I know

many, many people who are trying now to build entire products out of AI workflows. I personally don't recommend that. I don't do that. I only use AI in very narrow, focused ways. We can talk in another session about what I think is reliable right now. But I know many people who are trying to do all-AI-all-the-time, you know, 17 Claudes running all at once, right?

And none of this is exaggeration. Okay, when I say 17 Claudes running 24 hours a day, I'm not exaggerating even a bit,

>> right?

>> So now, how do you fit junior engineers into this? And how do you believe anything about software quality in this world? I don't have the answer to that, but

>> Well, let me focus your attention then on some of those parts

there because I think there's a lot of things that are interesting about that.

>> So the first one I would ask is: what do you think, even if we just isolate it, there are sort of two branches. One is, what is the effect of not having junior engineers going into the pipeline? Because then we won't have senior engineers. And just to overtly state what the danger is: assume that the advancement of AI technology does not sufficiently keep pace with that pipeline. Meaning, if AI doesn't deliver senior-engineer quality in five or six years, the very scary possibility is that now we don't have junior engineers moving into intermediate, we don't have that pipeline flowing, the AI hits some unexpected, or maybe even expected, wall and doesn't push forward fast enough. Now we're in this state where the only people who can effectively use these AIs to code are senior engineers, their judgment is required to get the AIs to do anything that doesn't suck, and we're not producing any more of them. Have we

created this nasty bubble that's going to really kill us in, you know

>> And let me add the cherry on top there: it's not just that the senior engineer is needed to drive it to make something good. The senior engineer is required even to know if it has produced something good, right? And so this relates to what I call coding sycophancy. Whatever idea you go to it with, right, with some exceptions for extremely controversial things, whatever idea you go to it with, it will tell you

>> "You're on the right track, you're thinking about exactly the right thing, your insight is tremendous, and here are the next steps," right? And

>> When you say, just for the audience,

the AIs have been designed, you know, specifically to try to be, I mean, so, if you haven't had this experience: I test all the big AIs every day, because it's my job to know what's happening in the market.

>> It's really amazing how deep this tendency is. I was asking one the other day about a variation on a French procedure for producing bone stock. Right. Okay.

>> And it was just paragraphs of flattery, and I just wanted to know, like, when should I add the salt, right?

>> Yeah. Yeah.

>> Again, I'm not exaggerating. I wish I had it in front of you, but it was something like "you're thinking exactly like a Cordon Bleu chef."

>> Okay. So that's what it does. That's what it does when you're trying to ask it when do I add the salt, right?

>> It does a similar thing in a hidden way with code

>> where

>> you ask it to, you know, optimize for whatever, right? Make it efficient, or optimize for the worst case, or optimize for the average case, or whatever. And "apply best practices." Now, you know and I know that that's asking for trouble, right?

>> But most people don't know that, right?

>> And actually, boy, so many tangents.

>> It's okay. It's my job to keep track of this. So it's okay. You say what is important to say, and I'll make sure, you know, especially because, like I said, if we do several of these, we can always come back and be like, okay, we didn't get a chance to go down this particular rabbit hole, and now, later, we will. So go for it.

>> So the tangent here is: many of the people who are trying this all-AI-all-the-time development style are not themselves engineers. So, and this is not a criticism of anyone, but I know people who are, like, project managers who are trying to get the project done using only AI agents, right?

>> Right.

>> They have no way of knowing if it is any good. So they will naturally say "please apply best practices," because what else is a project manager going to do? This is not a failure of the person, this is just where they fit in the project economy. A project manager is going to say, "Hey, you, the AI, are pretending to be an engineer. Apply best practices." Now, again, you and I know that that's asking for trouble, because that means it's going to give you whatever is the median on Stack Overflow, right? Which is, needless to say, not good.

So again, this connects to what I call the code sycophancy, which is: it tells you "this is a great design," right? It will tell you it's a great design even when it's its own design.

Right.

>> Right.

>> Okay.

>> And so how can you possibly know? Right.

If it will even flatter you about when to put in the salt, what do you do if you can't look at it and say, "this is obviously stupid," right?

>> And Dimmitri, since you and I have really not had a chance to talk about AI much, I guess I'll ask a question that I often have, because I don't really spend any time looking into this stuff. I know roughly how these systems work, sort of in the same way that I know how a CPU works. Meaning, I understand details of the architecture that a layman probably doesn't know, but I have literally no idea what is actually going on behind the scenes. I've never sat in on an actual, you know, team that's designing hardware or something like that. And the same is true for AI, right? I've never actually been there going, "Okay, what gigantic Python thing have we constructed to train this?" and all that sort of stuff. I just haven't looked at that. And so I'm curious about that sycophancy part that you're talking about.

So, are there byproducts of that in the actual quality of the answers too? And what I mean by that is: if you accidentally, or even perhaps semi-intentionally, when you're describing the thing, give the AI the idea that you wanted something done a particular way, right? Meaning, even if it really wasn't your intention, because you're just a project manager and you don't really know what you're asking for, but you happen to phrase it in a way such that the space the AI ended up in when it processed those tokens is over here. Will its desire to please you, that idea that it's been conditioned, through its reinforcement, through its backpropagation, to go toward this satisfying-the-user angle, will that mean it will actually give you worse answers if it thinks that is closer to what it's been trained to find pleasing? And I don't know how to describe this another way, because I'm thinking in terms of weights in a matrix and various steps. But even if it actually could know, meaning encoded into the thing through its reinforcement learning, or even, you know, special-purpose stuff for code, even if it actually could know that this was a bad idea, does the sycophancy override the other parts of its training that would have actually been useful?

>> Absolutely.

>> Sorry, it's a bad way to explain it, but you understand.

>> There's a very simple incentive, I would say, problem, but let's just

say mechanism, which is in the RLHF, reinforcement learning from human feedback. It starts with, very roughly speaking, a completely objective model, objective meaning it's only doing what's in the training data. Then there is the RLHF part, which is training it to adapt what it says to please the user.

Okay. So there is a reward, an explicit reward for

>> I will summarize by saying there's an explicit reward for praising the user.

Okay.

>> Right.

>> So naturally, this thing is a probabilistic optimizer under the hood. Right.

>> Yes. So if it can find a way to frame things, frame the flow of the conversation, so that it gets to praise you, it will tend in that direction. And you can easily do these experiments, where you go to it with an idea for how to implement, whatever, radix sort or something, right? Like any old thing. And now add affect, meaning an implication of how you feel about it, separate from the factual question.

>> Okay, so you go to it and say, "Hey, I need to implement a radix sort, but I'm a little worried about it, because I read this article in, like, the 1998 archive of Dr. Dobb's Journal saying that if you do too many multiplies in a row then something happens with the division unit," and whatever. If you go to it with an affect that is doubtful and apologetic, framing it like, "I'm coming to you with this, but I think it's a bad idea," you will get a different answer, you can just test this yourself, you will get a different answer than if you go to it and say, "I just read this paper from SIGGRAPH, this is one of the best researchers in the world, this is the method that we're going to use," and just describe the same method, right?

And it will just give you a different kind of response.
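The incentive mechanism described here can be illustrated with a toy example. This is not a real RLHF pipeline; the responses, scores, and `praise_weight` are all made-up numbers purely to show how an explicit reward term for praising the user can flip which response an optimizer prefers, even when factual quality is held fixed.

```python
# Toy illustration (not a real RLHF implementation): if the reward a model
# is optimized against includes a term for praising the user, the
# highest-reward response can flip even though factual quality is unchanged.

def reward(response, praise_weight):
    """Score = factual quality + weighted praise term (both toy numbers)."""
    return response["quality"] + praise_weight * response["praise"]

candidates = [
    {"text": "Your premise is wrong; radix sort is fine here.", "quality": 0.9, "praise": 0.0},
    {"text": "Great instinct! Your concern is very insightful.", "quality": 0.4, "praise": 1.0},
]

def best(praise_weight):
    # Pick whichever response maximizes the combined reward.
    return max(candidates, key=lambda r: reward(r, praise_weight))

print(best(0.0)["text"])  # no praise reward: the corrective answer wins
print(best(1.0)["text"])  # strong praise reward: the flattery wins
```

The point of the sketch is only the shape of the incentive: once "praise" carries weight in the objective, the optimizer does not need to be malicious to drift toward flattery.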

>> Okay. And just to continue forward, does that apply, you know, because these things are sort of co-trained, right? Like, when we're generating code, the LLM has to be trained on code and the human descriptions of the code and, you know, all that stuff together. So does the actual code it produces change based on that as well, or only the way that it will talk to you about the code?

>> No, you can easily get just different code.

>> Wow, okay. So that is really kind of nuts when you think about it.

>> So it's worth noting that these things are not particularly reproducible. You can just ask the same thing and get different code on two different runs.

And actually, maybe it's worth a quick aside here to talk about some of the things that I do, because I don't do this mainstream gen-AI work. Much of what I've been working on is reliability of AI systems and reproducibility. So someone comes and says, "Hey, we're using this." So this is a real example of something that I've worked on: we're using this as part of a radiological diagnosis workflow, and it has to give the same answer almost always, right?

>> Right, right. It can't be like, "You have cancer. No, you don't have cancer." Like, wait, which is it?

>> So this is the kind of thing where usually I'm working with scientific researchers, medical researchers, these kinds of applications. And so there are various techniques that I've developed over the years, I mean, other people do this, it's not just me, I'm just saying this is the slice of the world I live in right now, where we're working on reproducibility, working against the fact that under the hood it's driven by RNG, right? So what do you do when it's driven by RNG? How do you even think about making it reproducible? And I would be happy to get into technical details, but that would take an hour or two.

>> Another future episode. Basically, I'm just going to go through this episode and I'll have so many things that we want to expand upon. It'll be great.

>> So anyway, the take-home point there is that actually you can make these things fairly reproducible, but you have to do a fair amount of work to do that.
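To make the "driven by RNG" point concrete, here is a minimal, self-contained sketch of token sampling and the two standard knobs for pinning it down: a fixed RNG seed, and greedy (argmax, i.e. temperature-toward-zero) decoding. This is a toy, not any production system; real deployed models have additional nondeterminism (batching, floating-point ordering on GPUs) that seeding alone does not remove.

```python
# Toy token sampler: raw scores ("logits") are turned into probabilities
# and a token index is drawn. Unseeded sampling varies run to run; a fixed
# seed or greedy decoding makes the choice reproducible.
import math
import random

def sample_token(logits, rng=None, greedy=False):
    if greedy:
        # Temperature -> 0: always take the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]          # softmax
    rng = rng or random                        # unseeded: fresh draw each run
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.9, 0.5]  # two near-tied candidates: sampling can flip them

# Seeded sampling: the same seed reproduces the same sequence of choices.
a = [sample_token(logits, rng=random.Random(42)) for _ in range(5)]
b = [sample_token(logits, rng=random.Random(42)) for _ in range(5)]
print(a == b)  # True: identical draws from identical seeds

# Greedy decoding: deterministic regardless of any RNG.
print(sample_token(logits, greedy=True))  # index 0, the top-scoring token
```

This is only the first layer of the reproducibility work described above; the harder part in practice is constraining everything around the sampler.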

>> Okay, that was great. I was just kind of curious about that myself.

That's very interesting. So, going back to the original question, because we were sort of probing at this from the aspect of "should I become a carpenter": when we're talking about the junior engineering problem, you were saying that you're definitely seeing on the ground, separate from everything else, that industry is at least buying into the marketing-hype part of it. Meaning they are reducing their hiring by 50 to 70%, and that could be bad. But before we continue forward onto other domains besides, say, programming, let's talk about the non-AI-hype part, which is: how would you categorize the current state of the art in AI with respect to replacing a junior programmer? So let's say we didn't care about the fact that we will not be producing the senior engineers we need. We're just talking about replacing junior engineers, and, I don't know, we created the eternal-life serum, so all the senior engineers will just stay around, and we're not worried about some future where the AI can't replace those jobs and we don't have any senior engineers. We take that out of the equation, temporarily, unrealistically.

>> Yep.

>> How would you currently categorize how good of a job the AI does at replacing a junior programmer?

Like, what are its strengths? What are its weaknesses? How do you see that? How do you think about it?

>> Yes. So I actually have quite a good reference project that I use to test these AIs.

>> Cool.

>> I won't say what it is, just because I don't want anyone to find out and optimize against it.

>> They'll game it. Yes.

>> Right.

>> Yes. But anyway, it involves using OpenGL to accomplish a relatively simple task, then plugging the results from the OpenGL computation into a very simple mathematical analysis, and sticking that in an HTML dashboard. Okay. So this is

>> okay

>> something that I think a junior engineer would be able to do. Maybe not OpenGL, WebGL, whatever, right? The details.

>> Right, yes, sure.

>> They still cannot do that reliably, but they are getting to the point where, okay, so in the last year or so we have developed these meta-practices, like agent loops or run loops. What that means is: you don't run the prompt, get the result, and hope that it's good. You run the agent over and over again and say, "Generate this code, run this test. If the test doesn't pass, generate the code again." Right? So this is the most primitive form of these loops. But there are more advanced loops and loop designs.
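The primitive loop just described, generate, test, regenerate, can be sketched in a few lines. The `generate_code` function here is a stand-in for a real model call (hypothetical, not any particular vendor's API); it is stubbed so the loop is runnable, with the stub pretending the model only succeeds on the third attempt.

```python
# Minimal sketch of the most primitive agent loop: generate code, run a
# test, and regenerate until the test passes or the attempt budget runs out.

def generate_code(prompt, attempt):
    # Stub for an LLM call: pretend the model gets it right on try three.
    if attempt < 2:
        return "def add(a, b): return a - b"   # buggy draft
    return "def add(a, b): return a + b"       # passing draft

def run_test(code):
    """Execute the candidate and check it against a known test case."""
    scope = {}
    exec(code, scope)  # fine for this toy; never exec untrusted model output
    return scope["add"](2, 3) == 5

def agent_loop(prompt, max_attempts=10):
    for attempt in range(max_attempts):
        code = generate_code(prompt, attempt)
        if run_test(code):
            return code, attempt + 1  # success and how many tries it took
    raise RuntimeError("no passing code within budget")

code, attempts = agent_loop("write add(a, b)")
print(attempts)  # 3
```

Note the dependence on the test: the loop is exactly as trustworthy as `run_test`, which is the software-quality concern raised later in the conversation.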

That is getting to the point now where it can more or less solve my test project, but it takes a while, a few hours of crunching, before it eventually passes all the tests that I ask it to pass. This is quite a simple project, right? Something that I think a junior engineer with the right experience would also be able to do in a few hours. So it is getting to the point where, with all these meta-practices, so maybe a quick tangent on the meta-practices.

>> Sure.

>> There is what's called the base model. This is the thing that's just looking at all the text in the world and all the images in the world. That's the base model. There's the chat model on top that is using that, and has the RLHF that creates a persona. So that's the jargon, right?

>> Right.

>> Creates a persona.

>> And you may remember, time goes so quickly these days, was it like two years ago when we first started hearing about so-called thinking? So thinking is

>> I would say it is narrativizing the answer. So rather than saying, "Hey Claude, how many legs does an octopus have?" Right.

>> Right.

>> Instead, the old way, the two-years-ago way, was you would say, "Claude, please write down the steps you would use to determine the number of legs the octopus has, and then tell me how many legs the octopus has."

>> Okay?

>> Right? So that's the most primitive form of what's called thinking. Now there are much more advanced thinking designs, but thinking is on top of the chat interface, right? So it is narrativization of a thought process.

>> For folks who don't know, which includes me, because again, like I said, I do not study this stuff at all and I don't really use it, can you be more specific about the thinking-process part? Is that a separate training phase, or, to what extent is that actually different in this pipeline, in terms of production? Can you just give a vague overview? I don't have to get into any specifics yet on that sort of stuff. But

>> So much of this is trade secrets. So it's hard for me to tell you what they are doing.

>> But the things that are currently in use, the trade secrets, are derived from things that were discovered in the open, right? So if you go back, if you search Twitter, for example, for so-called thinking, what was the phrase, there was a meme phrase, like "think through this step by step," I think that was it.

>> Okay.

>> And the funny thing is that by shaping the conversation that way, it improved the results. So the same way that I told you earlier that the emotional affect you bring to the conversation can change what it tells you, this is another example where this thing is, you know, a conversation simulator. And so if you frame the conversation in this analytical style of, "Hey, we're not going to jump to any conclusions, we're not barbarians, first we're going to list all the steps," then it has to generate those tokens, those thinking tokens, right?

>> Okay, so if I had to paraphrase that, what we're effectively saying is that, okay, I

have got this machine, and this machine is very good at figuring out what comes next in a sequence. If I ask it to take a very coarse version of something, like "how many legs does an octopus have," it's doing its prediction on a very low-resolution sequence, right?

>> But if instead I massively increase that resolution and say, okay, instead of just predicting the number that comes after "how many legs does an octopus have," you actually have to produce all of the symbols that would lead to a chain that would eventually get to the number of legs an octopus has, that stochastic process is actually better at producing accurate answers, presumably for harder things. Am I paraphrasing that roughly correctly?

>> That is exactly right. So you are coaxing it into introducing things into its own context. Context is all the stuff that has happened in the conversation so far, right? Yes. So you can imagine, actually, here's the zero-order version of it, which is: you would go to it and say, "Hey Claude, here are the steps for determining the number of legs an octopus has," and then you would write multiple paragraphs of "here's how you do it," and then at the end you would say, "How many legs does an octopus have?" That dramatically improves its ability to give the correct answer.

>> So there you are doing the narrativization of the thought process yourself. You can coax it to do its own narrativization by saying, "Please tell me what

are the steps to analyze the number of legs an octopus has, and then..." Right? So

>> You can kind of see, you know, this is why I always say I'm not really an AI naysayer, I just don't like the behavior of AI companies, because I find a lot of this stuff pretty cool. That you could do that, when you think about what it implies, actually means that this particular model does seem to produce results similar to the way humans reason about things, at least at some portion of the way humans reason. Because this is not unlike what you have to do as a human when you're not producing the correct answer. Sitting down and thinking through the specific steps helps you find things you were missing in that process. And so it feels similar to the kinds of things we would do. Whereas if you think about other kinds of computing that we've done in the past, the way that you would improve the quality of a result does not feel like the way that a human would; you would not change it in the same ways as you're changing these things, right? So that does seem pretty interesting to me.

Uh you know, just >> not not for now, but actually there are parallels. There are primitive parallels

parallels. There are primitive parallels in uh in the old days of 15 years ago.

Uh so the thing that is now thinking and narrativization all this all these meta practices back then um so this is slightly technically controversial some some engineers will disagree with me uh

and that's fine but the I'm confident I can defend this position which is uh this is in a way what used to be called feature engineering. So 15 years ago we

feature engineering. So 15 years ago we had we were using very different kinds of models and feature engineering was this practice of generating metrics that

aligned with human intuition about how to think about something. Um, so for example, >> let's say you wanted to build a model that would look at pixels and say whether or not it's a face, right?

>> Um, >> uh, you could just feed all the pixels as RGB values into the model and say good luck. Right? This is actually not

good luck. Right? This is actually not very successful at least with the older style and even on the even with deep learning, you need a pretty deep uh network and a lot of training data to

make that work. So, uh, the way that this problem was solved, uh, 20 something years ago, so the the first really reliable face detectors were, uh, like the Viola Jones detectors back at

Microsoft Research in like 2002 or something. And the way they did it was

something. And the way they did it was um by by constructing these features. So

features are just synthetic metrics, right? So you have your raw data, a

right? So you have your raw data, a feature is jargon for a synthetic metric that you derive from that data. Okay?

>> Right? uh and they would say, "Okay, well, one thing that we know about faces is that under normal lighting conditions, uh there's kind of a like a

medium medium brightness part on the forehead and then it gets a little darker around the eyes, then it gets a little brighter on the cheeks and a little darker and also there's like usually a little spot of red uh around the around the lips and then again there's a shadow under the chin, right?

Mhm.

>> That was then encoded into what were called features, which were convolutions that were specifically designed to measure that human-generated insight mathematically. Okay, so that's what a feature was. And so, in the same way that now we're telling Claude, "Hey, think about this, then think about this, then think about this, then give me the answer," back then we were saying, "Hey, look, don't just look at the raw pixels. That's for naive people. First look at this synthetic metric, and then do your modeling." Right?
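A hand-designed feature of the kind described here can be sketched in a few lines. This is a toy in the spirit of Haar-like contrast features, not the actual Viola-Jones implementation: it scores how much darker a horizontal "eye band" is than the bands above and below it, on a grayscale patch stored as a list of rows.

```python
# Toy hand-engineered feature: "dark eye band between brighter forehead and
# cheeks," the kind of human insight that pre-deep-learning pipelines
# encoded explicitly instead of learning from raw pixels.

def band_mean(img, top, bottom):
    """Mean brightness of rows top..bottom-1."""
    vals = [p for row in img[top:bottom] for p in row]
    return sum(vals) / len(vals)

def eye_band_feature(img):
    """(forehead + cheeks)/2 minus the eye band: large when a dark band
    sits between two brighter bands, as on a normally lit face."""
    h = len(img)
    forehead = band_mean(img, 0, h // 3)
    eyes     = band_mean(img, h // 3, 2 * h // 3)
    cheeks   = band_mean(img, 2 * h // 3, h)
    return (forehead + cheeks) / 2 - eyes

# Synthetic 6x4 "face-like" patch: bright / dark / bright horizontal bands,
# versus a uniform patch with no band structure at all.
face_like = [[200] * 4] * 2 + [[60] * 4] * 2 + [[180] * 4] * 2
flat      = [[120] * 4] * 6

print(eye_band_feature(face_like))  # 130.0: strong band contrast
print(eye_band_feature(flat))       # 0.0: no contrast
```

A classifier then consumes scores like this instead of raw RGB values, which is exactly the division of labor the conversation describes: the human supplies the insight, the model supplies the decision boundary.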

>> And really the capsule version of what deep learning did was that it automated that process of generating features.

>> I see.

>> So the same way, deep learning sort of said to the model, "hey, just make your own features, right? I have important things to do. Make your own features." Okay.

>> Right. The same kind of thing is happening now with the narrativization, where whereas two years ago you'd say, "Claude, here are the steps for analyzing the number of legs," now you tell it, "hey, tell me what the steps are, and then apply the steps." Right.

>> Right. But you know that just the act of doing that is the appropriate procedure, because it won't do that if it wasn't told to. So you just have to include that in the prompt currently, or we automate that, presumably. At some point that gets automated as well, because we know that it works better, and off you go, right.

>> Yes.

>> Okay.

>> Um okay.

>> Okay. So rewinding all the way back: you were saying that basically agentic loops, or agentic workflows, I guess I've heard that term, you're saying that that is now roughly equivalent to what you might expect a junior programmer to do. Meaning it takes a little while, which a junior programmer also would; it's not like they would immediately know how to do something. Maybe it doesn't do the thing perfectly, but it is able to get something that matches your description, and a junior engineer probably wouldn't do it perfectly either, etc., right?

is that a fair summary, or

>> Yes, with a few important provisos. One is, to make it reliable you need to have tests it can run, or the tests have to be so common that it could derive them itself. So you can tell it, "write a bunch of tests, then satisfy the tests." Right.

>> Right.

>> Now again, that's only as good as the tests it writes. But same for the junior engineer, right?

>> Right. If you were to give them the correct set of tests at the beginning, they would be in a much better position than if you ask them to write the tests and they potentially do it wrong.
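The "write tests, then satisfy the tests" loop can be sketched as a few lines of driver code. Everything here is hypothetical scaffolding: `generate` stands in for a call to an LLM, and the hard-coded drafts simulate a model that fixes its bug after seeing test feedback.

```python
def agentic_loop(generate, run_tests, max_attempts=5):
    # Keep asking for a new candidate until the tests pass. Note the loop
    # is only as good as run_tests: it accepts the first candidate that
    # passes, whether or not the tests are actually complete.
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(feedback)
        ok, feedback = run_tests(candidate)
        if ok:
            return candidate, attempt
    return None, max_attempts

# Fake "model": produces a buggy draft first, then a corrected one.
drafts = iter([
    "def add(a, b): return a - b",   # buggy first draft
    "def add(a, b): return a + b",   # fixed after test feedback
])

def generate(feedback):
    return next(drafts)

def run_tests(code):
    ns = {}
    exec(code, ns)  # fine for a toy; never exec untrusted model output for real
    try:
        assert ns["add"](2, 3) == 5
        return True, ""
    except AssertionError:
        return False, "add(2, 3) should be 5"

code, attempts = agentic_loop(generate, run_tests)
print(attempts)  # 2: the first draft failed the tests, the second passed
```

The point of the proviso above lives in `run_tests`: hand the loop a correct test suite and it converges on working code; let the model write a wrong suite and the loop converges on code that satisfies the wrong thing.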

>> Right. So that's one. The other is, it works well if it's something that is well covered in programming blogs.

>> Right.

>> So I asked it to solve a very, very simple problem in numerical analysis and it was completely useless, right? It just made things up.

>> Right, because that's kind of a dark art, right?

>> Well, I mean, it's something that's on page five of my textbook, but it's just not common enough that, in its giant compressed representation of the entire internet, it occupies any space in the neural network.

>> I see.

>> Um, so things that are very common... let me take a side step. So you, and other friends of ours like Ryan Fleury, very often ask, "hey, if this AI stuff is so great, can you just show me one thing that you've made?" Right.

>> Right. I don't think I've ever asked that, but I think he has, many times. Yes.

>> Yeah.

>> Um, and by far the most common response to that is, "I made a dashboard that reads something from, like, the New York Stock Exchange and puts it on a chart."

Right. Right.

>> And you may remember, or maybe you don't actually, but back in the 2000s a similar thing happened with web frameworks. You remember, Rails became big, there was Django, whatever. And a very similar thing happened at the time, where people were asking, "okay, what can you do with this web framework?" and they said, "oh, I made a blog," right? So "I made a one-page blog" in 2005 is like "I made a dashboard with a chart using AI" in 2026.

So that kind of thing is reliable, and I expect that in the next year or two it will be difficult to be paid to do that

>> or at least to do only that.

>> Right.

>> Right. Because if you can do it now...

>> Yeah. And also, I guess I would say most of the jobs are probably also of that form. Meaning a lot of the kinds of jobs that exist, not just junior jobs, they don't tend to be, "hey, we're doing this hardcore numerical analysis," right? They tend to be more like, "hey, we need to get the dashboard to show the thing to our C-suite who wants to view it," or whatever.

>> Or the report generator. Exactly. Right.

>> So it is also true that presumably the lion's share of the jobs are for the same kinds of things that have the lion's share of the material in the training set, just because that's who would be generating them as well.

>> Exactly. Yeah.

>> Exactly.

>> Um, and so other people will disagree on this; I will also give the dissenting opinion. Let me give you my opinion on scope.

Right now, I think there's a high chance of success within a few hours for something at the scope of roughly 1,000 to 2,000 lines of code of a very common practice, like making dashboards on the web, or reading a CSV and calculating some statistics, something like that. So that's where I think we are right now: something at the scope of about 1,000 to 2,000 lines of ordinary code, right? Obviously, if you just jam everything on one line, that doesn't help anymore.

>> Right. I mean, yeah, I understand.

>> Um, other people... you have perhaps heard people say, "I'm generating 10,000 lines of code now." Sorry, 10,000 lines of code a day.

>> Right. Right.

>> So now, even as an insider in this business, I look at that and think, I have no idea what is possibly going through your head right now.

>> Okay. Okay.

>> Um, I have thought quite a bit about how to explain what's happening here, and this is only a partial explanation, but it may be a useful foothold for people who are thinking about this. It's generating 10,000 lines of code, but it's not generating the value of 10,000 lines of code a human would write. Okay. So there's

this... I can't remember if we've talked about Jevons paradox before, but it's an economic observation about what happens when a new technology comes into a market that allows you to do certain kinds of tasks more cheaply, more efficiently, whatever. Okay.

>> The part that's relevant here is that it changes the profitability of low-value work. So it used to be that maybe, you know, "should I make one more dashboard that shows this obscure metric that almost nobody cares about?"

>> Right.

>> If it's two hours of engineer time, you wouldn't because it's not worth it.

>> Right.

>> If it's two hours of Claude, who cares, right? Two hours of Claude costs $20 or something, right?

>> Right. Okay.

>> And so, because it changes the profitability of low-value work, it increases the percentage of work that is low value.

>> Right? Because all the stuff that was slightly negative in value...

>> Oh dear. Yes. Okay.

>> So imagine there's a task that's worth $50, right? And it cost you $200 to get an engineer to do it. You just never do those, right? They never get done.

You know, it's funny, because I've basically said as much before, but I didn't quite realize there was an actual economic principle; that's a very good way of phrasing it, right? The idea of being flooded with AI slop is a very blunt way to say something, but a more precise way to say it is Jevons paradox, I guess.

>> Yeah, I have a Twitter thread on this; I can link it to you.

>> Right. Which is basically like, hey, even if it's doing this work correctly, meaning we're not suggesting that it's worse than what the human was going to do, it's just that the first things we'll be able to automate successfully are also, coincidentally, those that just aren't that useful or aren't that good or whatever. And as a result, you will get a lot more of that because it became cheaper.

>> Well, not only cheaper, there was a sign change. Okay. So this is the fundamental observation. There's a great folk saying in quantitative finance, which is: if there's a trade where I pay a dollar and I get back 99 cents, I'll do it zero times. If there's a trade where I put in a dollar and I get back a dollar and one cent, I will do it infinitely many times.

>> Yes.

>> Right.

>> Right.

>> So the sign change there is the essential part.

>> Right. I see

>> Things that used to be worth 99... let me use different numbers. A task that used to be worth $50 but would cost $200 in engineering time.

>> Now, if it only costs $20 of Claude, all these low-value tasks that no one was doing before, because they just weren't important...

>> Right.

>> Right.

>> All of these suddenly become economically profitable.

>> Right.

>> And so they can be automated. Right. So again, here comes the spectre of scale, which is: wow, all that stuff that I never did because it wasn't very important? I'm going to do all of it now. Right.
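The sign change being described can be made explicit with the illustrative dollar figures from the conversation. This is a toy model, not an economic analysis: the numbers are the ones quoted above.

```python
def net_value(task_value, cost):
    # A rational shop only does tasks whose net value is positive.
    return task_value - cost

task_value = 50       # what the obscure dashboard is worth
engineer_cost = 200   # two hours of engineer time
claude_cost = 20      # two hours of Claude, per the rough figure above

before = net_value(task_value, engineer_cost)  # -150: done zero times
after = net_value(task_value, claude_cost)     # +30: suddenly worth doing

# The Jevons-paradox effect: every task like this flips sign at once, so the
# *volume* of low-value work explodes even though each task is individually small.
print(before, after)  # -150 30
```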

>> Right. Right. Right.

>> And so that's a large part, not the only part, but a large part of what's driving the "what do you mean you're doing 10,000 lines of code a day?" Right. Like, 10,000 lines of code a day, at the end of a year, means a completely unmanageable amount; how could you possibly even understand what it's doing? Right.

>> Right. And I guess the implication there is, you are implicitly saying the AI definitely won't help you manage that kind of complexity when you're talking about a 4 million line codebase or something like that. So why would you want to generate that at this point?

>> Yeah. And also, a 4 million line codebase for tasks that are not, let's say, worth millions of lines of code. Right.

>> Right. Right. Right. Yeah.

>> Look, if you're making, whatever, GTA 6, and it's millions of lines of code, you say fine, it's a lot of code, but this is one of the most impressive technical endeavors any human has ever undertaken, right? So, whatever, right?

>> Yes.

>> You might still say, hey, that's a lot of code, but we have a very good idea of why, right? There's plenty of wasteful code in GTA 6, no doubt, but we don't really know how to do something that big without that, right? So...

>> But if it's 4 million lines of code and it's a mobile app to order your pet food, right? And I'm only slightly exaggerating there, right?

>> Yeah.

>> Um, I don't know how that can be viable in the long term.

>> Right.

Um, so anyway, the counter opinion, and people I respect hold this counter opinion, is that the AI will just get so good that you don't need to know what the code does anymore. You will only evaluate the artifacts as a black box. That's their position.

>> But just to be clear, that's a position about the future, not a position about the present. Right.

>> You may be aware that in Silicon Valley the distinction between the future and the present is fuzzy.

>> Okay. Sorry, let me restate that. So if I were to ask one of these people whether Claude Code can handle that 4 million line codebase today without screwing it up, would they say yes or no?

>> They would say de facto no, but the compensation layer around that is: well, that's why you have to have subagents, and that's why you have to worry about compaction, and that's why you have all this other stuff, right? An emerging meme in this business is that, man, building things with something like Claude Code is a lot more like management than it is like engineering.

>> That's how I think of it too, actually, even though I don't do it. It's what it seems like.

>> No one agent can handle 4 million lines of code. That's why you have to divide up the tasks, and each agent gets, you know, whatever, 20,000 lines of code, and then the agents have to talk to each other, right? The same way the engineers have to talk to each other, except now the agents are talking to each other.

>> Okay.

>> And so...

>> Oh my god.

>> Do you have to have, like, a standup meeting and a scrum master?

>> Not literally that, but like 70% that, yes. The agents have sync points where, every few hours, they chat with each other and decide who's doing what next.

>> Oh my gosh. Okay.

>> So I don't know if that practice will last in five years. It's the wild west. Right now no one really knows how this works. Even the people who are being paid like $5 million a year to work on this stuff at OpenAI, right? Even they don't really know. Everyone's figuring it out on the fly.

>> Absolutely. Everyone has their intuition about what the path forward is, but only one of them will be right, or maybe a subset of them will be right, and the others will be wrong about what it took to progress, and so on. Yep.

>> Okay. So, finishing our drill-down on this: what does that mean for your answer to a programmer, then, who says "should I become a carpenter?" Because I think you've given a very complete picture of what it looks like right now. As you've said before, we can't necessarily know exactly what the future looks like, but it sounds like you don't really need to know what the future looks like to already have a little bit of an answer about carpentry versus junior programming. So if you had to actually give someone an answer right now based on that, what sorts of things do you tell them? Because it does seem very dire, even if the AI turns out not to actually be very good at this task long term, because of that junior programming problem. So how does that shake out for you? How do you see it?

>> I can give you bits of the advice that I have offered that seem to resonate. So one

is, at a minimum you need to know your competition, and ideally, you know, as Abraham Lincoln said, if I befriend my enemy, do I not defeat him? Right.

>> Okay. Okay.

>> So at a minimum you need to know how these things work and what they can do, and ideally find ways to work with them productively. Because even after whatever post-hype crash there is, I don't think we're going to go back to a world where a junior engineer can have substantial market value for the task of manually writing syntactic constructs.

And I'm speaking so precisely there because I only want to make a claim about that precise thing. I'm not talking about "there won't be jobs for programmers," "there won't be jobs for people writing code," whatever. I'm saying specifically: the part of your skill that is expressing an idea in the syntactic constructs of the C programming language at the junior programming level, I think the days are numbered there.

>> And any programming language, I assume, not just C, right? Like literally any programming language other than human language.

>> Yes. You know our mutual friend Ginger Bill, who created Odin, the Odin programming language. I know people who say they get quite reasonable Odin code out of ChatGPT. So that's, I mean, both quite a recent language and, no offense to Bill, a relatively obscure language.

>> Right. It's not going to be in the top three or something like that, where you'd assume there are voluminous examples online, right? Yeah.

So that is one bit of advice. Another bit of advice, and maybe this connects to something that you've said in the past, is that maybe coding in the sense of writing syntactic constructs becomes something like playing the guitar, in terms of its role in the economy, right?

>> Mostly people do it for love. A small number of people do it so well that there is market value to it, not because the market really needed one more guitar performance, but because the performance is so good that it sort of creates its own category. Right.

>> Right.

>> So there will always be, like, whoever the next Carmack is; there will be market value for that person writing code. But maybe not... You've commented on this in the past as well, that you and I have lived through a blessed time in terms of

>> right

>> how much economic value you could get out of something that you like doing anyway, right? And it's worth noting, most people don't have that, right?

>> If you love to play football or something like that, then almost nobody is going to be able to make a living doing that. It's very narrow; an extraordinarily select few will ever get paid a lot of money to do that. Whereas, for our lifetimes, even very mediocre programmers, even bad programmers, could make an excellent living doing this. That's how broad it was. And that does seem like

>> something that probably only lasts for a certain period of history, right?

>> Yes.

>> Having said that, I think the engineering skills will remain valuable: the general systematizing, problem solving, arranging things in a good way, not a bad way, which, you know, actually most programmers don't do, but in any case, that skill I think will remain valuable, perhaps just not as applied to writing syntactic constructs. So the way that I think about this for myself, because I'm nowhere near retirement, and in any case I don't want to retire: programming, writing syntactic constructs, will always be interesting for me as a mode of thinking. Even if Claude can build all the apps that I want it to build, very frequently I have an idea and I just have to express it as code, because that's the natural way to express the idea, the same way that there are ideas that are naturally expressed as mathematics or as musical ideas or whatever, right? That part is not going away. The big thing that I'm worried about, for my junior engineer friends, for my own children, for my nieces and nephews who are emerging into this market, is how much longer someone, at least in the US, can expect money for generating Stack Overflow-level code. And so I don't know if you should become a carpenter,

but I think if you are entering the market now, you should have at least one additional... So going back to the previous point, if you're entering the market now, at a minimum I strongly recommend being familiar with what these things can do. So pick a project. A good example: if you're a game or graphics programmer, just use Claude or ChatGPT, whatever you most prefer, to try to recreate the work in, like, LearnOpenGL, for example.

>> LearnOpenGL, the course, the book-series thing? Yeah.

>> Yes. So that will teach you the boundaries of what it can do well and what it can't do well, and it will also teach you some of the flaws: what's reproducible, what's not reproducible, right?

>> I see.

>> So I would at a minimum recommend that level of familiarity. I think you just can't ignore what is happening right now. If you are 22 years old and you're hoping to make a career somehow in creating applications or games or, you know, computations, you can't ignore it, and you need to figure out some way to live with it one way or another. At a minimum you should become familiar with "what can this do for me" for a task that's representative of the entry level of your field.

>> I would also add, that all sounds very reasonable, and I wanted to add a perspective point here, because one of the things I think is worth looking at anytime you're talking about what's going to happen in the future is, where possible, removing whether or not the emerging trends are good from the equation. And I think there's a strong argument for that here in terms of the answer to this specific question. And the reason I say that is: let's suppose AIs actually suck at programming, and everyone who's promoting them is wrong. Just take that as a given.

We have already seen in the computing industry many things that a lot of us would argue are terrible ideas that are part of the job requirement,

>> right? They don't just dominate the market...

>> Right. I complain all day about object-oriented practices; those are standard practices. And so even if someone takes the position that AIs currently suck and will continue to suck, no matter how negative you are about them, I think it's still worth pointing out that that opinion is not really related to the question of "will your employer want you to have that skill," because we have ample evidence that that will not be how this question is decided. Whether or not it's useful or good or better is not how it will get decided, I think it's fairly safe to say. Would you agree with that?

>> Yes. Absolutely. You know, my advice is: hope for the best, prepare for the worst. And the worst might not actually be that the AI is great.

>> Right? The worst might actually be that the AI sucks but it is adopted widely, right? That could happen, right? That might honestly be the worst case, short of turning the world into paper clips or some of the catastrophic things people talk about, right? But the very likely worst case, in terms of worst cases that could happen, is more that way. Where, regardless of what you think, it will become part of the job, I think.

Right.

>> Yes. I would be very surprised. So, we haven't talked about this part much, but I have many friends who are in the VC business: investors, managers, recruiters, whatever. I would be surprised if those people could climb down from the level of commitment that they have put into transforming the workforce with AI.

>> Right.

>> I do think that there will be a post-hype crash somehow, but I expect it to play out the same way that the dot-com crash did, which was

>> right,

>> the thing survives. The thing continues to grow. It's just that the initial irrational exuberance goes away, but then we all still have to live with it, right? Like we all still have to live with the internet. We all still have to live with mobile apps. We also have to live with advertising-driven websites, all that stuff, right?

>> The transformation happened. The crash did not end the changes, and the changes were not necessarily good, but they happened. So that's where we're at.

>> Another good example, which is a trend that I really dislike, but what can you do, is software as a service.

>> Like, I'm just itchy about subscriptions, right? And look, I know it's not a lot of money, right? I get that. I'm just itchy about subscriptions.

>> And, you know, because I do some work on the VC side, I see how people talk about subscription revenue, and I mean, they very explicitly think, "well, lots of people are just going to subscribe and not use it, and that's free money for us." Right.

>> Right.

>> And, you know, I don't like that. I can't change that.

>> I see.

>> I remember when software as a service was still new, in whatever it was, 2006 or 2007, and everybody was arguing about this: "hey, why can't we just pay for software and have it? Why do I have to give you $10 a month?" But that, you know, is an example of something where I think it was on balance displeasing, and yet I understand the economic logic that makes it inevitable. Unless a huge fraction of the population changes their behavior, I understand why it's inevitable: the vast majority of people would rather just pay $10 a month, and maybe forget about it, than pay, you know, whatever. Like, you see all these people complaining about File Pilot, and if you're complaining about the price and you want to yell at someone, yell at me, okay, because I told him: you need to be charging more for

>> right

>> this really great piece of software. He didn't want to make it a subscription, so you have to be charging more than this. You can't charge, like, $10 for something with this amount of work and this amount of

>> value

>> value. But many people will still complain: "this thing that I would use every day, how could I possibly spend $50 on it?"

>> Right. Right.

>> What can you do? Right. So let me... maybe we can conclude, because there are so many things we could talk about, and I'd like to parcel them out over future episodes. So maybe we can just conclude on this: I realize most of the people who are watching this are probably coming from the tech industry, and so your answer about junior programmers was the important answer to have. But what about people who might watch this who are not in the tech industry? I'm thinking about people who are in the music business, or they do art for a living, or, you know, maybe we can keep it tech-related by saying they're a graphic designer who works in the tech space, a web designer who works in the tech space, an artist who does assets for games or for mobile apps, those sorts of things. Do you feel like you have any insight to share with those people who come and say, "should I have become a carpenter?" Or is that too far outside of your comfort zone for an answer?

>> I can give you the parts that I am comfortable with, which are multiple spots of experience. So one thing that I can tell you, setting aside for a moment the art question, is that I know many lawyers, and I know people who work in legal IT; there's this whole separate world of programmers who do legal IT.

>> Can you say what legal IT is, for people who don't know?

>> Things like all the documents that are related to a case, right? And there might be thousands of documents at a law firm.

>> At a law firm, right. Yes.

>> So this is all the IT that

lawyers use to litigate cases. So, it's

all the emails that anyone involved in the case has ever sent. It's all the PDFs that they had on their hard drive.

It's all the scans of the physical mail that they sent to each other. Right.

>> So it's all these documents, and they're not structured or organized, and so there's a whole industry trying to make some sense out of this stuff.

Right.

>> Right. And so the parallel to the junior engineer task at a law firm is something that either a first-year lawyer or a paralegal would do, and they're facing exactly the same problem. The hiring managers I know at law firms tell me the same thing: "actually, I can give ChatGPT a thousand PDFs and it produces something that's competitive with the analysis that a paralegal would give me, and now I don't know what I should do. Because if I don't have paralegals in 10 years, my law firm goes under. But it's also really hard to justify paying a paralegal."

>> So when you say "if I don't have paralegals in 10 years, my firm goes under," you mean because there are no lawyers then, at that point?

>> Well, paralegal is a separate track; let me say, "if I don't have any first-year associates."

>> Okay, yeah.

>> So yes, the AIs cannot do what partners do; they're still very far from doing that. And actually, much of what partners do is not

>> not the nuts and bolts of, like, contract review anyway, right? They're doing

>> much more multi-dimensional things.

>> But if you want to have partners in 15 years, then you need to have...

>> Well, under the current system, if you want to have partners in 15 years, then you need to have first-year people now. And, you know, even after the substantial market correction in the legal field, first-year lawyers are not cheap, right? So what do you do? Can you justify paying this person a six-figure salary for something that amounts, in the view of the hiring manager, and I emphasize again, this is not my view, but in the view of the hiring manager, to just sitting at a desk and typing prompts into ChatGPT? Why am I paying you, you know, $150,000 a year to do that?

Right.

>> Right.

>> Um, so if you are in the legal business, I cannot give advice, but I can repeat other people's advice that I trust, which is: you need to be thinking very seriously about whether you want to be on the partner track or not. And if you want to be on the partner track, and this actually connects to the programming part, if you want to be on the partner track, eventually you're expected to do at least some amount of drumming up business yourself, right? So that's value that's separate from the technical skill,

right?

>> So the partners are drumming up business, or they're trying to make deals with competing law firms, with the other side of the suit, right?

And so that's stuff where, at least for now, you're not going to have your ChatGPT negotiate with their ChatGPT to resolve the case. Right.

>> Right. Okay. Um, so porting that advice over: the advice there is, figure out whether or not you want to do the other value-generating part of this enterprise, which is not the contract review, but drumming up business, making deals, negotiating with counterparties. That's the legal analog. In programming, depending on the field, maybe you need to have some ideas about game design if you're a game programmer, or maybe you need to have some idea about art styles if you're a graphics programmer, or maybe you need to have an idea about market opportunities if you're an app developer, right? Like a mobile app developer. So those are the kinds of things that I would recommend thinking about now. And an umbrella for that is:

I strongly recommend thinking about whether you care about any of the other value-generating bits that are adjacent, because that stuff is not going away, or at least not going away as quickly. And in any case, doing that well still requires the technical understanding. Again, at least for now, you can't do that other stuff at a high level of execution without at least some understanding of the technical side.

>> Gotcha. Um, so now let's talk about the artists, because this connects with the artists. Now, I expect that shovelware apps on the app store are just going to be AI assets within 5 years, like entirely AI assets.

>> Okay.

>> Uh, that will require some kind of legal resolution for the copyright status of AI outputs, and that's, as you know, a deep rabbit hole that we can

>> Address separately, right now.

>> Yes, we definitely will have to do that one separately. Yeah.

>> Because right now, just to make the point clear, I know you understand this, but for the audience, let's say that I want to be the

maximally AI app developer, right? And

I'm going to make, whatever, an app for naming your cat, right? Like, you get a new kitten, and I'm making an app, and it needs icons, and it needs banners, and it needs animations and whatever, right? And I'm going to do all of that with AI.

The current status of outputs from prompt-to-image generating systems is that, absent some other creative act, they are not protected by copyright in the US

>> Right.

>> And there's this entirely separate question of, what about China, right? What if you do all your shovelware development in China? Another completely divergent direction of discussion. I think we're already seeing now, I saw a post about this recently, a new shovelware boom in the app stores in the last 6 months or so, basically lining up with the emergence of agentic workflows, as that's the term, right?

>> Right, right. Yeah.

>> So just to be clear, an agentic workflow is not, I type a prompt and I get a result and I hope the result is good. An agentic workflow is, you tell it, go do this and don't stop until you meet some success criterion, which is

>> Right.

>> A test suite, or there's a manual approval process, or whatever, right? So that's an agentic workflow. Um, so I expect if you are

let's say a graphic designer, I think in five years all the low end will be, one way or another, for economic reasons, just AI art.

>> Yeah.
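As an aside for readers: the agentic workflow just described, generate, check against a success criterion, feed failures back, and repeat, can be sketched in a few lines. This is a generic illustration, not any specific product's API; `generate_attempt` and `success_criterion` here are hypothetical stand-ins for a model call and, say, a test suite.

```python
def agentic_workflow(task, generate_attempt, success_criterion, max_iterations=10):
    """Keep retrying, feeding failure feedback back to the generator,
    until the success criterion passes or we give up."""
    feedback = None
    for _ in range(max_iterations):
        attempt = generate_attempt(task, feedback)  # e.g. a model call
        ok, feedback = success_criterion(attempt)   # e.g. run a test suite
        if ok:
            return attempt                          # criterion met: stop
    return None                                     # gave up: needs a human

# Toy usage: "success" is that the attempt contains the string "42";
# the fake generator only gets it right once it has received feedback.
result = agentic_workflow(
    task="answer",
    generate_attempt=lambda task, fb: "42" if fb else "wrong",
    success_criterion=lambda a: ("42" in a, "try again"),
)
print(result)  # "42" after one retry
```

The point of the loop, as opposed to a single prompt, is exactly the "don't stop until you meet some success criterion" part: the check is automated, so the human only sees the result once it passes (or once the loop gives up).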

>> I don't know how that's going to shake out legally, but I don't see how the business side can avoid driving itself there. Because you're at the point that you don't care about it being good anyway. This is what I mean when I say shovelware, right? I mean, it's pejorative, but it is informatively pejorative, right? Which is:

there are many people who make apps and they don't care if it's good, right? And

they will tell you they don't care if it's good. They only care if they can sell, you know, $10,000 worth of it, right? And that's worth it for them.

So that's what I mean by

>> Or they might not care if the art is good. Maybe they care if the app is good. Although they probably don't, because if you don't care if the art's good, you probably don't care if the app's good either. But point being, as long as you just don't care if the art is good, then, you know

>> Right.

>> Shovelware art, if you will.

>> Yes. Exactly.

So I wish I had advice on what the graphic designers and illustrators can do about that. I don't know the industry very well, so I can't

>> But that's something where, again, I would encourage you

>> Somehow ask more experienced people: what are the other things that make this business a business? Because ultimately, I think that's the most essential question here. I think artists are not as worried, if you really dig into their concerns. I mean, there are concerns like, hey, it's offensive that you're trying to replace me, right? It's offensive that you stole my art to make this thing. It's offensive, all this stuff.

But the most

>> The thing that will most deeply hurt people's lives is: I used to be able to make a living doing this, and now maybe I can't anymore. Maybe it's all over, right? I mean,

>> Yeah. And so I would encourage those people to ask more senior people in the business to help them understand: what are the parts that make this business a business? Because drawing the art, making the good illustration, that by itself is not a business. There's a separate part of making a name for yourself, right?

>> You know, branding, finding clients and forming relationships with them, finding people who trust your sense of taste, right? And that's actually something that's quite rare, which is finding someone who says, you know, I just like how you think about art, how you think about

>> Right.

>> Art style. So I can't give specific advice there, but my coarse advice is to find the

people who can tell you what are all these other activities that make the business a business, and start looking there. Now, if you are extremely talented, and you make, like, 3D models for AAA games, and you do high-end rigging, anything where artistry of execution is important, I think you're safe for now. I think you're safe for maybe five years.

>> What happens in five years, I don't know. But right now, you see these demos that they put out, like, what was it called?

>> Genie. Google Genie.

>> Right. Right. Yeah.

>> And, uh, have we discussed the idea of a dancing bear before?

>> Yes, but not in public. So you should explain what the dancing bear is. I've used it since you told me that in private. I've used it many times, just never in public, I don't think, because I didn't know if anyone would know what I meant. So, please, yes, I would love this to be a more commonly used phrase so I can just say it.

>> Uh, so this I actually learned from a blogger back in 2007. I wish I remembered the name. So, blogger who taught me dancing bear: thank you.

I'm sorry I don't remember your name.

Okay. But anyway, the idea is: you go to a circus, and one of the things they have on display is a dancing bear. Why do people pay to see the dancing bear? It's not because the dance is good. Right? The dance is actually terrible. The thing that's interesting is that the bear is dancing at all. Right?

>> And this connects to engineers, because frequently engineers, as a matter of personality, overfocus on the fact that the bear is dancing and underfocus on the fact that the dance is terrible.

>> Okay. I see.

>> So, you know, Google puts out this Genie thing, and it's a perfect example of a dancing bear. Because if you ignore the fact that you would never want to play this game

>> Right.

>> It's actually quite impressive what it does, right? Like, there's a

>> Right. Yes.

>> Stable, persistent 3D world. Things are

moving around with something like physics. I mean, you can tell that it's not really physics, but it's something like physics, right? And to be clear, I don't mean, like, Newtonian physics, not physicist physics. I mean just, like, integrating velocity over time. It's not quite that, right?

>> Things you might see in a rudimentary game.

>> Yes, exactly.

>> That's what it kind of looks like, right? Yeah. Um, so I see in the 3D and animation world right now a lot of dancing bears. I have not yet seen anything that approaches a nice dance.
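For readers unfamiliar with the "integrating velocity over time" idea mentioned a moment ago: rudimentary game physics typically advances the world one small time step per frame, adding acceleration into velocity and velocity into position. A minimal sketch of that idea, purely illustrative and not anything Genie actually does internally:

```python
def euler_step(position, velocity, acceleration, dt):
    # Semi-implicit Euler: update velocity first, then position.
    velocity += acceleration * dt
    position += velocity * dt
    return position, velocity

# Drop an object from a height of 10 m under gravity, 60 steps per second.
pos, vel = 10.0, 0.0
for _ in range(60):  # simulate one second of "game physics"
    pos, vel = euler_step(pos, vel, acceleration=-9.8, dt=1.0 / 60.0)
print(pos, vel)  # roughly 5.02 m remaining, falling at about -9.8 m/s
```

That per-frame loop is the whole trick; the point in the conversation is that Genie's worlds merely look like something this simple is happening, without it actually being computed that way.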

Uh, and so, you know, I encourage this as a lens for looking at new developments: is this a dancing bear, and how far are we from a dance that's worth seeing as a dance, as opposed to something that's merely interesting because it's a bear that's doing the dance?

>> Right. So you're basically asking the question, like, when can we cast this bear in Swan Lake, right? And the answer is: pretty far from that right now.

>> If you're asking, when will we get there? Who knows, right? Yeah. How fast can this bear learn to dance better than it's dancing right now?

>> Right. So those people, I mean, the good news is that I think those people are safe, at least for now. The bad news is that there aren't that many of those people, right? And again, we keep coming back to this development of the more junior people. Like, what do you do, right? Because almost no one at 22 is doing, like, the top-end models for GTA 6, right?

>> Well, and I hate to point out, and I guess this is probably a good place to end this first conversation, it's something we probably will want to talk about at length in a future conversation, which is that everything you're describing kind of suggests that there's a nasty stratification that will happen between people with special skills, who are sort of outlier talented at doing something, let's say, and they can still have jobs for potentially quite some time, and then, sort of, if you're nothing special, you will just get left behind. And that is not a great outcome. Like, we don't want most industries to look like rock music or the NFL. That's not a great outcome, because in those kinds of industries there are very few people who can actually get paid as players. I'm talking about the players, not, obviously, the infrastructure. It's

interesting that you mentioned that, because another term that I found useful here, you may have heard this term of so-called blocking and tackling. So, when you have a football team, right, many things have to happen. But if you don't have blocking and tackling, which is the most basic thing of, like, your guys can't come over here, we're going to stop you, right? Or if one of your guys is running, we're going to tackle him. And that's usually not the thing that generates the star power, right? The star power is usually the quarterback, and, anyway, the coach, whatever, right? But the point is that you can't have a football team at all without blocking and tackling. And so this phrase, blocking and tackling, refers to all the stuff that has to be done competently or there's nothing at all, right?

>> Right.

>> Um, and that stuff, I don't know what happens with that in programming in the next 5 years

>> Or several other things.

>> Yeah, or several other things, right? So, like, if Carmack is like the quarterback, right, and

>> Gabe is like the coach, right? Or the owner. I don't know.

>> Those people will still have star power, right?

>> Right.

>> But the stuff that just makes the thing work at all.

We don't have a good answer for those people right now.

>> And

>> This analogy has completely gone off the rails. I meant more like, just, if you talk about football players.

>> Yes, I know.

>> I just meant like the number of football players that there can be in the world earning a reasonable living is very low.

Yes, versus the number of people who would enjoy potentially playing football. Or you can pick any sport you want. Maybe what Europeans call football, for example, is a better analogy, because it's much more widespread throughout the world, right? And more relatable, perhaps, would be the music industry. Like I said, like rock star: the number of people who get paid well in the music industry for playing an instrument or singing a song is shockingly small. It is so few people that if that's what most industries look like, the entire world is poor, right? There's 0.01% of the people who are making a reasonable living, and everyone else is dirt poor.

>> Um

>> And I mean, I lived in LA, so this is a very concrete thing, you know, acting would be another example

>> That people who were trying to get into acting, trying to get into music, and they were, you know, doing, like, wait staff work or whatever, right? And these were people I knew personally who were smart, talented, hardworking. It wasn't the folk story of someone who actually wasn't talented but thought they'd make it in LA. Right? These are people who I know were smart, talented, hardworking

>> Um, and still

>> You know, still had to do something else, because of, you know, Hollywood economics. Hollywood works with, whatever, like a few hundred personalities, not 100,000 personalities.

>> So, I guess we will table the rest of that part of the analysis for a future episode. I think this is probably a good place to stop, though. Uh, Dimmitri, thank you so much for being here. That was very good. I loved that entire thing. I really appreciate you sharing all of your sort of insider experience with us, because, like I said, I don't work with this stuff. So I can only kind of look from the outside, and it's just very helpful to have somebody who thinks about this stuff every day, and has to work with it every day, and can give a sort of a reasonably objective opinion. I feel like you're very good at that.

>> I try. Thank you very much.

>> Uh, you know, just as a closing thought

>> I think about this every day, not just because it's my job, but because people I know and love will be affected by this. And

>> I think it's true for

>> Yeah, it's true for all of us. And I want to know what's going to happen for my kids in 15 years. What's going to happen for my nieces and nephews in 5 years, for the junior people I'm mentoring now. I want to be part of finding some way forward for those people. And right now, there are not good answers. And even more worrying, there are too few people who seem to care to try.

>> That's a very good way to say that. All right. Well, thank you very much, and hopefully we can do this again very soon.

>> No, my pleasure.

Thank you for joining us for this episode of Wading Through AI. As always, I'd like to thank Dimmitri Spanos for taking time out of his schedule doing AI research and development to come share the insider perspective with us. And to that end, he is a consultant in that field. If you have business inquiries for him, you can always find him at dmitrespanos.com.

If you have questions about this series or want to check out some of the other series that I produce, you can find those on computerenhanced.com, and I'd love to see you there. That's it for this week. Until next time, have fun wading through AI yourself, and Dimmitri and I will see you out there on the internet.
