Big Tech, AI Everywhere, and Fewer Engineers: A Laid-Off Engineer’s Reality Check

By Asian Dad Energy

Topics Covered

  • LLMs Are Probabilistic Idiot Savants
  • Software Mechanizes Like 19th Century Agriculture
  • AI Code Quality Degrades Into Slop
  • Tame AI with Generalized Intelligence

Full Transcript

Hello world. I was recently laid off from big tech with 25 years of experience working in the tech industry.

Today I'm back with another vlog. So over the last couple of years there has been a lot of activity around AI in the big tech industry. You know, it seems to me like every department, every product, every team has been talking about AI and trying to shoehorn AI into every possible feature. There have been hackathons after hackathons where large numbers of people within companies are pulled in to work on AI and to ideate about AI. And many of these people may not have anything to do with AI or even know or care about AI at all. But what is the long-term impact of AI?

Specifically, the impact on the software engineering jobs market. Now that I have some time on my hands, I was able to give this a little bit of thought and wanted to share these thoughts with everyone today.

So first, let's talk about what generative AI is today and what it is not. Generative AI, and specifically large language models, is a category of AI that is probabilistic. Meaning, when you put a prompt into an LLM like ChatGPT, for example, every word of the response it produces is essentially the most likely word to come after the input prompt, based on the vast amounts of training data that the model has been trained with. So while an AI model may return what appears to be a logical and coherent answer, there's actually no logic or reasoning happening at all. The model isn't doing any of that. The model has no understanding. In fact, I don't believe that scientists today have actually figured out how the human brain works in terms of reasoning and logic.
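
Just to make that concrete, here is a little toy sketch of what "most likely next word" means. This is not any real model; the vocabulary and the scores are made up purely for illustration:

    import math
    import random

    # A real LLM assigns a score (a "logit") to every token in its
    # vocabulary, given the prompt so far. These numbers are invented.
    vocab = ["cat", "dog", "sat", "mat", "ran"]
    logits = [2.1, 1.9, 0.3, 0.2, -0.5]  # higher = more likely next token

    def softmax(scores):
        # Turn raw scores into a probability distribution.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits)

    # Greedy decoding: always emit the single most likely token.
    print(vocab[probs.index(max(probs))])

    # Sampling: draw in proportion to probability, which is why the
    # same prompt can give different answers on different runs.
    print(random.choices(vocab, weights=probs, k=1)[0])

The model just repeats that one step, token after token. At no point does any step check whether the sentence being built is true; it only checks whether it is statistically plausible.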

So to me, the theory that there is this linear path between LLMs and artificial general intelligence via huge amounts of scaling seems really far-fetched. Instead of some kind of AGI machine god, what we have today is more like an idiot savant that is capable of parroting logic and reason.

With that said, however, I don't believe that humanity needs to achieve AGI in order to totally disrupt the tech industry. If we look back to the 19th century, right, when agriculture was starting to be mechanized, we didn't have to invent humanoid robots to be able to plant and harvest rice, right? Much simpler tractors and harvesters got the job done even better than massive numbers of peasants. And if we're honest with ourselves, the vast majority of software created today by the tech industry is kind of repetitive and unimaginative, kind of like what the farmers did back then. And because software is digital in nature, it is incredibly easy to use as training data for AI. As such, the software industry should be one of the easiest for the current kinds of AIs to replace.

But there are some caveats to consider here, right? Some languages are way more popular than others, right? Your JavaScript, Python, and Java of the world out there. There's way more code in those languages than for rarer languages, like COBOL or Apex. As such, the languages that are relatively rare in terms of training data would have more hallucinations and performance issues from these models.

And even within a single language stack, right, the total amount of code that's out there on the internet forms kind of a bell curve, where you have a small amount of modern code and way more older and deprecated code. And quite frankly, as an open-source framework contributor myself, a lot of the code in git repos out there is of inconsistent quality. Right? Depending on the frameworks available out there, there could be a lot of bugs in the code as is. So you're using that as training data for the models. And certainly within the past couple of years, with the massive adoption of AI tools, right, there has been a flood of AI-generated code. That then causes the risk of newer generations of AI models essentially being trained on their own output. It's kind of like a snake eating its own tail, where the quality of that code progressively worsens and becomes this kind of baseline slop.
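
This snake-eating-its-own-tail effect actually has a name in the research literature, model collapse, and you can watch the mechanism in a toy simulation. Pretend the "model" is just a word-frequency table; the vocabulary and numbers below are invented for illustration:

    import random

    random.seed(1)

    # Toy "model": frequencies over a tiny vocabulary. The rare entries
    # stand in for unusual-but-correct patterns in real training data.
    freqs = {"common_idiom": 0.65, "standard_lib": 0.25,
             "clever_trick": 0.08, "rare_edge_case": 0.02}

    for generation in range(10):
        words, weights = zip(*freqs.items())
        # Each generation is trained only on the previous one's output.
        samples = random.choices(words, weights=weights, k=50)
        freqs = {w: samples.count(w) / len(samples) for w in words}
        print(generation, freqs)

    # With only 50 samples per generation, a 2% pattern often draws zero
    # samples; the moment it does, its frequency is zero in every later
    # generation. The tail of the distribution collapses toward the most
    # common output, which is the "baseline slop" dynamic.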

As such, AI-generated code, even for the most popular languages, can still have subtle but serious defects. It can still have deprecated syntax, outdated libraries, security holes, and so on.
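
Here's one real, concrete example of the deprecated-syntax problem in Python. The old import below was everywhere in tutorials and repos for years, so it is heavily represented in training data, but it was removed from the language in Python 3.10:

    # Old style, abundant in pre-2020 code and tutorials:
    #
    #     from collections import Mapping   # ImportError on Python 3.10+
    #
    # The modern location is collections.abc:
    from collections.abc import Mapping

    def is_dict_like(obj):
        # True for anything that behaves like a read-only dictionary.
        return isinstance(obj, Mapping)

    print(is_dict_like({"a": 1}))   # True
    print(is_dict_like([1, 2, 3]))  # False

A model trained mostly on the older corpus can quite plausibly emit the first form, and the resulting code looks fine right up until it crashes on a modern interpreter.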

And then there's the matter of context windows, right? The short-term memory of a large language model. Most frontier models have a context window that really is not that big, especially when you want to load an entire code repository into that context. Most of them can account for maybe small to medium-sized code repositories, but large existing projects, existing repos, will be hard to load fully into the context, and that actually causes problems like hallucinations.

Now, attempts to make the context window much, much larger run into their own set of problems. Huge context windows essentially add noise that distracts the attention mechanism of the model, and that kind of causes confusion and decreases performance. So net-net, what it means is that the performance gain you get from AI when refactoring existing brownfield code bases is much less than the gain you get from creating new greenfield code bases.
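
A quick back-of-the-envelope calculation shows why. The numbers here are assumptions: roughly four characters per token is a common rule of thumb, and a 200,000-token window is in the ballpark of current frontier models:

    from pathlib import Path

    CHARS_PER_TOKEN = 4        # rough rule of thumb, not a measurement
    CONTEXT_WINDOW = 200_000   # ballpark frontier-model window (tokens)

    def estimate_repo_tokens(root, suffixes=(".py", ".js", ".java")):
        # Crude estimate of how many tokens a repo's source occupies.
        total_chars = 0
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix in suffixes:
                total_chars += len(path.read_text(errors="ignore"))
        return total_chars // CHARS_PER_TOKEN

    tokens = estimate_repo_tokens(".")
    print(f"~{tokens:,} tokens vs a {CONTEXT_WINDOW:,}-token window")
    print("fits" if tokens < CONTEXT_WINDOW else "does not fit")

A repo with a couple million lines of code, at maybe 40 characters a line, works out to tens of millions of tokens, orders of magnitude past the window. That's why tooling has to retrieve slices of a brownfield code base instead of loading the whole thing.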

And certain fundamental aspects of the tech industry still exist in sort of a semi-analog world, right? Think about how we gather business requirements from stakeholders, how we proceed with questioning and understanding the fine, intricate details of those requirements. That's not something that's digitized. Think about system design, or at the larger scale, ecosystem design. A lot of these things are only captured at the coarsest level in, like, Visio diagrams, once again not fully digitized. So these sorts of semi-analog activities would not be immediately available as training data for the AI models.

Now, all the teething issues aside, right, I have to admit that AI in its current state can, for most common languages and most functional use cases, generate code that is mostly acceptable and works most of the time. And we can expect this to get incrementally better as more compute, more training data, and better algorithms come online over time. As such, I see a gradual narrowing of the number of software engineering roles needed by the tech industry, and I see these roles transforming to be more focused on human qualities like reasoning, logic, and intuition. In the medium to longer term, it may be that the tech industry would only require a fraction of the engineers it employs today.

This transformation, by the way, isn't new. It's already happened to other engineering disciplines, right? If you look at the aerospace industry or the electrical engineering industry, for example, that same pattern has already occurred. So where does that leave us as software engineers in tech?

It's like we have this wild AI tiger that's chasing us and trying to eat us.

Well, I have thought of a couple of coping strategies, and each of these strategies has its own pros and cons.

Option one: you can ride the tiger, meaning you can take part in the building of frontier AI models. The pros of such an approach are obviously potentially very high salaries and the ability to change human civilization itself. But there are a lot of cons, right? The competition for these jobs is extremely high. The education and training requirements are extremely high. And there is a degree of unpredictability, right? We could have an AI bubble collapse and an AI winter, and you might find yourself completely out of a job in just a few years. So, these are things to think about.

Option two is to outrun the tiger. And what I mean by this is to essentially focus on a language or framework or technology that is very new and in demand. If you do that, then at least for a period of time there wouldn't be any training data for the AI to utilize, right? And so by doing that, the benefit is that you can have high-paying jobs, at least for a time. But there are some major cons here. One is that you really have to be very careful about the technology or niche that you pick. It's almost like gambling in a way. You have to pick the right technology that you know is going to go viral. And then after that viral curve sort of declines, you then have to guess the next technology correctly. And if you guess wrong, then you're out of a job. Another major disadvantage of this strategy is that as time progresses, we engineers all get a bit slower in terms of learning new things. So it becomes progressively harder to keep chasing whatever technology stack is currently ahead of the curve of AI, and to keep that up for decades on end. I think in the long run, right, you can't outrun the tiger, but you could do it in the short run.

Option three: taming the tiger. So all of these LLMs and generative AI models are fundamentally narrow-intelligence models, right? But humans are capable of true generalized intelligence.

As such, I [snorts] see a coping strategy where you master all of the AI tools and essentially use your general-intelligence qualities to hold this broader context, while the AI tools do all of the narrow grunt work required to get something done. Imagine a software engineer wearing the hats of a tech lead, an architect, a product owner, and a project manager, right? Having that kind of generalized skill set, and then having essentially an army of AI agents to do those specific tasks for you, and being able to ship complex projects entirely by yourself or with a small number of people.
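
To paint a picture of the shape of this, here's a toy sketch of that orchestration loop. Everything in it is hypothetical: the roles, the task list, and the run_agent function are stand-ins, not a real framework:

    from dataclasses import dataclass

    @dataclass
    class Task:
        role: str         # which specialist agent handles it
        description: str

    def run_agent(role, description):
        # Hypothetical stand-in for invoking a narrow AI agent; in
        # practice this would call an LLM with a role-specific prompt.
        return f"[{role}] draft for: {description}"

    # The human generalist does what the narrow models can't: decompose
    # the goal, sequence the work, and judge the output in broader context.
    plan = [
        Task("architect", "propose service boundaries for the billing feature"),
        Task("tech-lead", "split the design into reviewable steps"),
        Task("engineer", "implement step 1 with tests"),
        Task("project-manager", "summarize status and open risks"),
    ]

    for task in plan:
        draft = run_agent(task.role, task.description)
        print("human review:", draft)  # nothing ships without this step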

This can be as a job working for a company, or it can be in your own company or your own side hustle. So there are obvious benefits to this approach, right? If you're able to do this, then it may mean a very productive and high-paying job, potentially for a long time. The disadvantages are also numerous here. Chief among them is that I think almost every other engineer is looking to do this. Everybody is looking to skill up and learn these tools.

Almost every other person is probably thinking, hey, can I generalize my skill set? Another disadvantage is that the unique sort of basket of skills required to do this orchestration is something that's not necessarily very common in engineers. Now, from my experience, I feel like a lot of people working in the tech industry, not just engineers but maybe also product owners and PMs, may think they can do all of these things, but truly only a small subset have that broad breadth of skills.

Option four: hiding from the tiger. So in this option, you find an area, an industry, or a domain whose solutions are difficult to expose as training data to the AI models. Imagine highly regulated industries like defense or healthcare, where the work, the solution being done, is either prevented from being digitized and shared freely to be scraped by a model, or that solution is so heavily dependent on the real analog world that it's difficult to properly capture this data for training. The advantage of this approach is that you could have job stability, at least for a while. The disadvantage of this option is that over time, as digital transformation of the economy progresses, all of these industries and sectors would eventually digitize to the point where their solutions do become available as training data for AI. So over the long run, the hiding strategy may not pan out.

So there you have it, guys. Each of these coping strategies has its positives, but each has serious disadvantages. Honestly, I can't think of a scenario where everyone in the tech industry wins. But at least some of us may be able to find a niche for ourselves and survive using one of these coping strategies, and maybe that'll allow some of us to weather this storm.

Okay, I know this sounds a little depressing, but it's my honest opinion after thinking about this problem for a good amount of time. But hey, you know, who am I? I'm just a laid-off ex-big-tech flunky. So, [clears throat] you should take everything that I'm saying with a giant grain of salt. Anyways, I hope my rant on this subject is helpful to you. If you have a morbid curiosity to join me in this post-big-tech-layoff life journey, please feel free to subscribe to my channel. Anyway, thanks for listening. Talk soon. Bye.
