
The 2025 Design in Tech Report: How AI Will Turn Designers into "Autodesigners"

By SXSW

Summary

Topics Covered

  • AI Transforms UX into Teleportation
  • Loops Make Agents Computationally Infinite
  • Tool-Enabled AI Risks Loss of Control
  • Embeddings Unlock Vector Space Navigation
  • Simple Rules Give Rise to Lifelike Behaviors

Full Transcript

>> Thank you.

Hello everybody.

Thanks for coming.

Really.

Thank you.

This is my 11th year trying to do this.

I say trying because I always wonder if it will become something.

So we'll find out together.

But I do it because I know you might be here hoping that I could give you something of use.

So this year's report is called Autodesigners on Autopilot.

And I thought it was sort of funny how different people I talked with thought I was going to give an automotive presentation.

They really did, because it makes sense.

And I also asked Sora to generate a video of what presentation John Maeda was going to give about auto designers and autopilot, which was like that.

Yeah.

So we're done now?

No, we're not, because I did the thing I do every year, which is take all my bookmarks.

I had roughly 1500 bookmarks, and then I print them all out, and then I try to organize them in some logical way.

And it had too many bookmarks this year.

So I wrote a program using Azure OpenAI Service's GPT-4o, and it finished sorting everything in 25 minutes.

And then I looked at what it did and I was like, nah, I tried it again and it was like, nah.

So I spent roughly 20 hours sort of sorting them out and I hope it's useful to you.

So first off, good news: AI is not going to replace designers.

It will transform how the work is done.

I think that's clear to me at least.

The other thing is you maybe have heard of agents.

Agents is an old word from the 1990s that's come back in full force, and we're going to see it all over the place.

Another one is AX.

Instead of UX, agent experience.

The agent experience doesn't involve a user.

It doesn't involve people.

It's agents on both ends.

And lastly, we're going to be living in this AI augmented era.

So we people have to somehow adapt fairly quickly.

And that's what makes it kind of stressful.

And I hope to reduce that stress here.

Hey, Joe, how you doing?

Good.

So first off, those of you who studied calculus in college, raise your hand.

Okay, so most of you probably you're probably forced to do it.

Calculus was really important because that's how we created missiles, sent people to the moon.

You know, that's that's the kind of math you need.

And statistics was like, no, that's wimpy math.

So, but I think about it, because I was sitting and having dinner in the hotel there, and I saw this sign.

It says C-O-R-T. It looks like a game show thing.

And I was like, oh, I know the answer.

And I asked 4o, oh, what does this say?

And it got it right.

And I was like, oh, wow, you're pretty good, 4o.

Um, the key thing to note is this word: likely.

Um, it thinks it's likely, just like I thought it was likely.

I mean, they could have rebranded Marriott with, like, one t and change it.

So that's like the design now, right?

It could have been, but it's likely that it stands for Courtyard Marriott.

So it's all about likely.

Um let's keep that in mind.

So I've been doing this for a while, and in 2018 I got tired of AI.

So I made my last AI section in the report, and I asked the question about when people expected AI to replace designers.

I thought I would update it.

So if you don't mind, use your phone to fill out a quick questionnaire while we're going through this. I'll do two of these and we'll see the results up here, and I'll go back to it to see what it is, but I'm curious what you all think right now in 2025.

Okay, now there was a person named William J. Mitchell, who died in the early 2000s.

He was one of my mentors.

Anyone know him?

William J. Mitchell used to be popular in the design tech world.

Once you die, your SEO goes really down.

But he had this great book called The Reconfigured Eye, which I recommend to everyone who's worried about imagery, all this fake imagery out there, because he gives a history of images that go all the way back to drawings, and how we've always tried to fake out people.

This is the National Archives.

It's called the home of a rebel sharpshooter.

It's from the 1860s.

And there's also another famous photo of a sharpshooter's last sleep.

So one is a Confederate, one is a Union soldier.

And it was noted years later that this is the same person.

It's because the same photographer needed to monetize on both sides, and I find that really interesting.

There's no digital whatnot.

It's just how the world's always been.

So you have to question what you see.

Always.

And some of you may remember Marshall McLuhan, another person whose SEO has gotten kind of bad, but he said these things politics will eventually be replaced by imagery.

The politician will be only too happy to abdicate in favor of his image, because the image will be much more powerful than he could ever be.

And so this idea of imagery being so powerful, the centuries of that, in this era where images can be produced of anything, is something to keep in mind. Now, a long time ago, in the 1990s, I was advocating for writing computer programs in the design world, which people didn't like a lot.

But I created this language called Design by Numbers.

It's a very simple language.

Here I'm setting the paper's color to anywhere from 100, which means black, to zero, which means white.

And I did this to show that drawing with the computer is a bit strange.

Here I'm drawing a line that seemed fairly easy, but once I place it inside a forever loop and draw the line when I hit run, it's drawing that line forever.

You just can't tell.

You can tell when you place it in this space of interaction.

Let's say I ask it for the mouse coordinates x and y.

It's suddenly drawing and you're like, wait, what is that?

That's kind of messy.

Well, all you do is you slip a piece of paper in the middle and you get the animated, interactive world.

And I made this to work only on a 100 by 100 pixel grid, only in black and white.
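The forever-loop idea above can be sketched in plain Python. This is only a sketch: Design by Numbers is its own tiny language, and the function names here are made up for illustration.

```python
# A sketch of the Design by Numbers idea: paper runs from 0 (white) to
# 100 (black), the canvas is 100x100, and putting the draw call inside
# a loop is what makes it animated and interactive.

GRID = 100

def new_paper(shade=0):
    # shade: 0 means white ... 100 means black
    return [[shade] * GRID for _ in range(GRID)]

def draw_line(paper, x0, y0, x1, y1, shade=100):
    # naive line: interpolate enough points to cover the span
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = x0 + (x1 - x0) * i // steps
        y = y0 + (y1 - y0) * i // steps
        paper[y][x] = shade

paper = new_paper(0)
# the "forever" loop, capped here so it terminates; imagine (mx, my)
# coming from the mouse each time through
for frame in range(10):
    mx, my = frame * 9, frame * 9   # stand-in for mouse coordinates
    draw_line(paper, 0, 0, mx, my)
```

Slipping the mouse coordinates into the loop body is the "piece of paper in the middle" move: the same draw call, repeated forever, becomes interaction.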

I was very Basel typography era.

You know what I'm talking about, Swiss.

Thank you.

You know that?

Yeah.

This is a terrible flop.

No one liked this.

And so luckily, Um, my students went off and built better things.

There was this thing called Processing that was launched from that.

Other things, Arduino, p5.js, Scratch, all these things sort of came from a few bad ideas that I had in the 90s.

So I apologize for that.

Um, now I love this post by RGA.

They discovered the intersection of design and technology in the Gulf of you know what?

And I think it's like a good time to think about what is design, especially in the AI era.

So if you think about it, in design, we make obstacle courses that the user has to navigate.

The interesting thing about AI today is, if you know the intent of your user, they don't have to go through an obstacle course.

You can teleport the user there.

I just want to pause there for a second.

Uh, every UI is some kind of obstacle course.

It is something that is the result of people working together.

Or maybe not sometimes.

But what happens when you teleport to the goal?

What happens to what the UX field is is a question.

Because if we spend all our time building obstacle courses as a craft, and that craft is no longer needed, what happens is a question I have now.

First off, it ties to code.

How many of you have used a GitHub-style system before?

Okay.

Some of you.

Um, you know, the code world works in the world of information sharing, and it's important to note that when I took a reading in 2018, the number of GitHub repositories, called repos, for machine learning was 94,000; deep learning, 28,000; AI, 10,000.

And if you notice, that number is kind of big now.

So there's lots of code out there.

And that's important because it means many of us can actually go out and use that code and actually level up, which is kind of an interesting era.

That's a lot of code to level up from.

And AI has gotten a lot cheaper.

Some of you may remember when you were trying really hard to get access to GPT-4.

Now it's kind of easy to get stuff, and when it's easy to get stuff, when there's less scarcity, the prices go down.

If you notice, there's just this little company with a whale here that made things you thought were expensive a lot cheaper and faster.

So we're in this sort of disruptive era where it's much easier and cheaper to do these things now.

Now, when you hear the word agents, I recommend you do a command replace-all: every time you said models last year, just say agents instead.

It's easier and I'll explain why and how that makes sense.

Otherwise you'll get really confused by all the prognoses of the future of blah blah blah.

It's actually quite simple.

Agents are a different encapsulation of how we create computational building blocks.

The only difference is when you put them in loops, because when you put anything in a loop, it gets very powerful.

Computation by itself will just do something you tell it to do.

If you stick it in a loop, like in my Design by Numbers example, it was doing that not just for, like, ten minutes.

If I let it, it could run and run forever.

So that's a big switch.

And agents by themselves, easy to understand.

You add a loop, just like in a computer program.

It gets confusing to understand because it runs forever.

Do you want it to run forever?

Is the question right?

Okay, so I wrote a book in 2019 called How to Speak Machine.

Maybe you've never heard of it, but it is a book where I spent six years trying to explain computer science to anybody.

And it's being reissued this year because I guess it's somewhat timely.

It's a time capsule of how we got here, and it's constructed in six principles.

And so I've sort of taken this year's report and used that as a, as a kind of a way to organize things.

Now, when I went through all this, I had probably 300 different things I wanted to tell you, which I can't fit into roughly a 50 minute window.

And then when I was done last night, I realized the most important things were in here.

So I'm going to put them here at the beginning.

Are you okay with me on that one?

It almost wasn't here.

I thought it was important.

We'll see.

First off, AI is changing the UX craft.

There is a LinkedIn Learning course I have called UX for AI: Design Practices for AI Developers.

There are five basic rules, if you're building with AI, that have stayed persistent.

You always want to remind the user that, hey, this came from AI, don't forget.

The second thing is like, hey, if you didn't know what to do, here are some ideas.

The third thing is like, hey, this is where I got this from.

Citations.

The other is, hey, I'm thinking.

Sorry.

And the last one is how did I do?

You know, when you go to the bathroom, there's that smiley face thing.

It wants to know.

Okay.

Now that's kind of boring.

So I brought some more WowWee things.

I think that Harley Turan, this guy is amazing, but I think he's the one who really made semantic zoom kind of cool.

I know the arc browser had it for a while, but semantic zoom is basically this idea that any paragraph, any piece of information, just pinch and zoom.

I think this idea is quite durable today.

We see it here and there.

It hasn't hit mainstream, but it's a really simple idea and it fits our mental model of how maps can contract.

First thing.

The second thing is really the release of OpenAI's o1 model, which produced this notion that it's going to take a while.

Sorry.

So let me tell you what I'm thinking.

So the work of OpenAI, in producing this idea that AI could make reasoning traces happen, I'm thinking, I thought this was quite beautiful when it came out.

I'm thinking.

I'm thinking this.

I'm thinking that.

I'm thinking this.

And so you can sort of see that effort is being made versus here's the answer.

So it's sort of showing your work.

It's a good UX pattern, I think, for AI. I have a bunch of other ones.

But if you want to see a good compendium, do a YouTube search for Y Combinator.

Raphael Schaad.

There's a great piece of very, very recent examples of AI UX that I recommend.

On the practical side, less wow.

There are a few things that I think stand out that are possible now.

Anyone can do it.

I love this pattern of short/long.

You know, on any piece of information, you can offer two grades: short, long, complex, or whatever.

It's all possible today.

And then there's also different ways to kind of take a prompt and apply it to something.

Those of you who have used Cursor or GitHub Copilot, etc.: if you do an @filename, it uses the context.

So it's a way to kind of like I love Cookie Monster.

By the way, those of you who saw Cookie Monster here on this stage, you know, it's like ah, cookie, you know, context.

So it's like a Cookie Monster way to pull context quickly into your conversation space or thinking space.

It's awesome.

Also, version control is something we don't talk a lot about.

So I always look at what Geoffrey is doing.

But version control is important because once you can generate anything with AI, you can generate 50 versions of something, and now you've generated 100 more versions of that one version.

Which one is good or not is called a version control problem.

On the image model side, Pablo Stanley showed this recently, and it's a wonderful example of how to add input.

How do you tweak something in a direction versus actually touch it?

How do you change the style of something?

This is a great pattern.

Also, Google came up with this Whisk pattern.

It's basically Madlib style input.

It's a form of puzzle prompting, I think, is what I would call it, but it's another way to kind of get you moving much faster than average with these generative AI systems. And there are a few other systems out there; I mentioned Krea and Recraft, examples of really kind of fine-grained prompting.

Fine-grained touching is important.
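The Madlib-style "puzzle prompting" idea can be sketched as a template with named slots. The slot names and template text below are made up for illustration; a real system would feed the assembled prompt to an image model.

```python
# Madlib-style prompting: the user fills named slots, and the template
# assembles the full prompt. Slot names here are hypothetical.
from string import Template

PROMPT = Template("A $style illustration of a $subject in a $scene, $mood lighting")

def build_prompt(**slots):
    # Template.substitute raises KeyError if a slot is left blank,
    # which is exactly the "finish the puzzle" constraint.
    return PROMPT.substitute(slots)

prompt = build_prompt(style="watercolor", subject="lighthouse",
                      scene="storm", mood="dramatic")
```

The point of the pattern is that the user only makes a few small, bounded choices, but always ends up with a well-formed prompt.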

Um, this is my favorite category.

Concepts and random things.

If that's a category, this is this one.

It's sort of like taking two apps and having them mate together and become a new kind of app.

It's an interesting kind of approach, but it's wholly possible today.

And I actually made something new to show you all.

Let's see if it works.

Yes.

Okay.

This is called my semantic eyedropper.

Um, I've been wanting this for a while, so this is a good excuse.

Um, so what this is, is I can, um.

I can take any cell.

Let me see here.

I can take a cell.

That cell.

I can place its essence into the empty cell, and it kind of makes it up from there.

I like that blend idea.

So I can take quantum computing and have it blend with this one here, and it gets quantum AI.

So it's kind of ways to sort of do things now.

This is sort of a semantic eyedropper.

I always love when Photoshop came out with that idea.

So anyways, anything is really possible today.

It's kind of weird.
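One plausible mechanic behind this kind of semantic blend, assuming it works on embeddings (the talk doesn't say how the demo is built), is to average two concept vectors and snap to the nearest known concept. The tiny 3-dimensional vectors below are invented for illustration.

```python
# Blending two concepts' "essence": average their embedding vectors,
# then find the nearest known concept by cosine similarity.
import math

def blend(a, b):
    return [(x + y) / 2 for x, y in zip(a, b)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# made-up embeddings; real ones would be hundreds of dimensions
concepts = {
    "quantum computing": [0.9, 0.1, 0.2],
    "design":            [0.1, 0.9, 0.3],
    "quantum design":    [0.5, 0.5, 0.25],
}

mix = blend(concepts["quantum computing"], concepts["design"])
nearest = max(concepts, key=lambda name: cosine(concepts[name], mix))
```

That "navigate by vector arithmetic" move is the eyedropper metaphor: you pick up an essence from one cell and deposit it somewhere else in latent space.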

Let's see what we did here.

Let's see how we did.

Okay.

Likelihood that AI will positively impact our lives: roughly 50.8%.

Are you in the design profession?

We're half.

Half.

So for the designers who answered, when will AI replace them, which is 36%?

They said never.

You know, that's a good one.

That's a good timeline.

Uh, 24% two.

So this is sort of coming.

Coming?

Yeah.

Ooh, look at that.

Very far out.

Five years.

That's for, like, visual design.

Mind you, for those of you who make the distinction.

And then digital product designers never.

How does it never compare?

Oh never.

It's like shorter here.

See that?

That's a little shorter.

Um okay.

Never.

So we're sort of feeling it.

For those who are not in design: when will AI replace most designers?

Well, okay, let's see, those who are not in design think, oh, yeah, you're going to be replaced.

Okay.

So they're they're winning.

Thank you for that.

Okay.

Let's keep going.

We'll do another one.

Okay.

And there are four AI UX spaces that I think exist.

One is of course the chat UI space, people chatting with chat AI.

I made a demo of the example where there was something going viral, where people were saying with chat AI, you can catch it immediately.

It's you talking and the AI talking, then you talking, then the AI talking.

But we humans, we don't do it that way.

We go chat, chat, chat.

And then chat.

And then the chat comes back, like we humans do.

So I was like, no, I don't think so, I think AI can do it too.

Hi, how are you?

I'm really oh, oh oh you're talking oh oh, thanks.

Are you doing well?

This is an example that anything we think it can't do, it's because we have biases in how we actually communicate, when we communicate async.

This happens all the time.

If you communicate async with AI, this could happen again as well.

Okay.

Stop please okay.

And if you haven't seen the stuff coming out of Google Creative Labs, Google Creative Labs has been around for over a decade.

It's an amazing group of hybrids, but Alexander Chen's team has been doing some really weird things.

>> Move them to the top.

Move them to the bottom.

Put them back in the middle, but swap red and blue.

Can you, um, make yellow?

Can you make, um, purple?

Can you make white?

Can you form a snowman?

That's a good one.

Can you.

So that's really, like only a couple of months ago, but this kind of stuff is getting a lot easier.

So that's the chat space, communicating. The second space is documents; documents are changing a lot.

This sort of quanta of information, um, the work by Tyler has been really interesting to watch.

Um, he's been asking questions about when you're thinking out loud, how many ways can those thoughts branch out?

Um, so sort of branching conversation as a document type exists.

Another one by Matthew is very interesting.

He looked at what's the color picker for words?

How can a word have color?

And you can see that there's different things you can do in the semantic space, in the context of text that are quite remarkable.

Another one by Geoffrey Litt.

This is a scheduler that combines a map system.

And so the document you thought was just sort of lying there, I'm sleeping, can really come awake in a different way that we never predicted documents could ever become.

And I have some other random ones.

I collect these things, so I'm a packrat of random things.

The third space is tables.

You know, tables.

Like in spreadsheets.

I think spreadsheets are sort of really taking off in the AI space.

I wanted to see for myself, so I made a demonstration of that.

So what this is, is it's a spreadsheet that is semantic.

So each cell can change based upon the content.

And it propagates, the quick brown thing I'm building, and it propagates the semantic information via a semantic formula.

So that kind of form in a table can take hold.

And as you can imagine, a lot of things in a spreadsheet can change at the same time.

And so this kind of interface pattern is quite powerful.

And it's something we really first felt with spreadsheets.

And we'll get back to that in a second.

The fourth one is the canvas space, sort of a two dimensional canvas, so-called infinite canvas.

This work by Samuel Timbo is quite interesting because it's introducing a different kind of visual language.

You may have seen tldraw computer, but there are more examples of these kinds of visual canvas approaches.

They're quite sophisticated. Hub, that's emails on an infinite canvas.

And so infinite canvas is another pattern we're going to see more of.

Lots of stuff out there.

And if you're curious about this whole space, there's a talk series openly available on the Media Lab site, led by Char Stiles.

It's called Thinking with Sand.

All these people are talking about why they do this work.

So it's a great set of videos to check out.

All right.

So section one of six.

Here we go.

This is Char recently saying, I'm going to dedicate my thesis on creative coding to the loop.

So the dedication at the beginning just says, for loop.

So the loop is the fundamental building block of computation.

That is freaky and odd.

It's something that cannot exist in the regular world.

Nothing can loop forever.

If you spin a top, it's going to, like, get tired.

In a computer, a loop will just keep going and it won't stop.

And that's very strange.

And as a backstory to loops, I discovered loops in the 1970s when I was a kid, and I made this to show you what I do, what I did when I was a kid.

I did this, 20 GOTO 10, and I did RUN, and, like, wow, it's printing my name.

It's so reassuring.

And then everyone teaches you this little hack where you add a comma at the end, and suddenly, a second, let me close this.

Let me restart this, because the effect is very important: it's like you learn one thing, but then if you learn one more thing, something else happens.

Oh that's good.

Let me change my name.

When it goes to ten, some of you know what's going to happen.

Oh, it's so worthwhile.

Um, again, this is going to keep on doing this forever.
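The BASIC two-liner described above can be sketched in Python, capped at a few iterations so it actually terminates here (the name printed is just a placeholder).

```python
# The classic BASIC program:  10 PRINT "JOHN"  /  20 GOTO 10
# A real GOTO 10 loops forever and never gets tired; this sketch adds
# a cap so it stops.
def basic_print_loop(name, limit=5):
    lines = []
    count = 0
    while True:              # 20 GOTO 10: jump back and do it again
        lines.append(name)   # 10 PRINT name
        count += 1
        if count == limit:   # cap added only so this sketch terminates
            break
    return lines

output = basic_print_loop("JOHN")
```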

And this is something that's always existed in computers: the ability to loop and never get tired. And that goes to agents.

Agents by default do not loop.

I'm calling this an agent at rest.

It has four ingredients.

It has a model.

It has prompts.

It has knowledge and it has tools.

And once you sort of feel the one, two, three, four step, it's very calming.

I've discovered.

Oh, the agent has a model.

It has prompts.

Okay.

Does it have tools? Better.

Does it have more knowledge? Better.

Oh, that's a good agent.

At rest.

And it's important to remember that a program can do the same thing.

If you have no for loop.

No while loop.

It's pretty boring.

It just starts up and then it's done, right?

A loop makes it powerful.
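The four ingredients of an agent at rest, plus the loop that wakes it up, can be sketched like this. The "model" below is a stub function, not a real LLM, and the sample knowledge and tools are invented for illustration.

```python
# An "agent at rest": model, prompts, knowledge, tools. A loop over
# tasks is what turns the resting agent into something powerful.
from dataclasses import dataclass, field

@dataclass
class Agent:
    model: object                    # 1. the model (a stub below)
    prompts: str                     # 2. system prompt / instructions
    knowledge: dict = field(default_factory=dict)   # 3. what it knows
    tools: dict = field(default_factory=dict)       # 4. what it can do

    def step(self, task):
        # one at-rest invocation: model sees prompt, task, knowledge, tools
        return self.model(self.prompts, task, self.knowledge, self.tools)

def stub_model(prompts, task, knowledge, tools):
    # pretend reasoning: answer from knowledge, else reach for a tool
    if task in knowledge:
        return knowledge[task]
    if task in tools:
        return tools[task]()
    return "done"

agent = Agent(model=stub_model,
              prompts="Be helpful.",
              knowledge={"capital of France": "Paris"},
              tools={"time": lambda: "12:00"})

# the loop: keep stepping through tasks (a real agent might instead
# loop until a goal test passes, which is where "forever" sneaks in)
results = [agent.step(t) for t in ["capital of France", "time"]]
```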

Okay, so and also, I think it's hard to understand this because object oriented programming when it first came out, who remembers when object oriented programming first came out, it was really confusing.

I was like, what?

You put the data in the class.

What is this different terminology?

So I think it's kind of similar in that back then we had classes and now we've got these AI models.

We had these object properties.

We have a new way to use knowledge.

We have methods.

We have actions and we have reusability.

And we can really reuse this stuff in different ways.

So it's actually quite similar.

Those of you who are offended by this, I apologize in advance because people get offended a lot these days.

So just remember that AI is a kind of programming.

It's a different kind of programming.

And rest assured, programming is going to keep on changing.

And it's one moment like that.

So, to sort of prove the point, going back to my favorite space of spreadsheets: if I make this ten, you noticed it propagated everything, and it's because the agent, the cell of the spreadsheet, is always listening.

In this case, these were listening when I changed this.

In this case, the pattern is different.

It's just that line.

So it is agentic thinking, but it's not using any kind of large language model.
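The always-listening cells can be sketched as a tiny reactive spreadsheet, with no language model involved, just dependency propagation (the cell names and formulas are invented for illustration).

```python
# A tiny reactive "spreadsheet": derived cells listen to the cells they
# depend on and recompute whenever an input changes.
class Sheet:
    def __init__(self):
        self.values = {}
        self.formulas = {}   # cell -> (function, dependency names)

    def set(self, cell, value):
        self.values[cell] = value
        self._propagate()

    def define(self, cell, fn, deps):
        self.formulas[cell] = (fn, deps)
        self._propagate()

    def _propagate(self):
        # naive propagation: recompute until everything is stable
        for _ in range(len(self.formulas) + 1):
            for cell, (fn, deps) in self.formulas.items():
                if all(d in self.values for d in deps):
                    self.values[cell] = fn(*[self.values[d] for d in deps])

sheet = Sheet()
sheet.set("A1", 2)
sheet.define("B1", lambda a: a * 3, ["A1"])
sheet.define("C1", lambda b: b + 1, ["B1"])
sheet.set("A1", 10)   # changing one cell propagates everywhere
```

This is the spreadsheet feeling described above: touch one cell and many things change at once, with no LLM anywhere in the loop.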

Okay.

Are we doing okay so far?

Are you following? Any sleepy people?

It's okay, you had lunch.

Now, this is a definition of agents I found, to see if it's stable.

This is my door number one example from 2023, from Lilian Weng.

Agent is large language model, memory, planning, and tool use.

And I want to note that when token costs drop, that's when loops came into the picture.

It's because if you ran that agent in a loop, it was going to be very expensive.

So we didn't talk about loops back then because no one could afford that.

But now there are loops everywhere.

Door number two, three, four.

Everyone adds a loop in there.

It's because it got cheaper.

And interestingly, in 1984, Steve Jobs was interviewed.

He did say the next stage is going to be computers as agents.

Now that guy.

Huh.

Uh, but again, the definition of agent has changed over time.

Okay.

So in 1983, I used to make drawings like this to express how the computer worked.

I was fascinated by loops.

I mean, loops are very strange.

I mean, like, once you set them off, they never get tired.

And so in the 90s, as someone who had gone from computer science to conventional art and gotten tired of my hands, rediscovering that the computer could do all this stuff, I thought, weird.

So a lot of my work back then was like, wow, this computer thing is very strange.

And Jordan Singer noted that the infinity symbol is the symbol for the agentic stuff in Cursor.

But anyways, loop is going to be just like all over your face, so look out.

Then there's a command line.

There's an essay by Neal Stephenson called In the Beginning... Was the Command Line.

The command line exists in everyone's computer. Who's used the command line before? The command line sits underneath everything.

It's still there.

And in essence, we created graphical user interfaces.

And it actually makes the computer less powerful.

So the fact that we're going towards this sort of like command line approach is opening a door that we've tended to close because it was too confusing and too hard for everyone to do in general.

And I wrote this; it sort of validates that when you type in, like, ls or something, or like, hello, command not found, it's essentially chat.

It's the same thing.

It's the command line we're using.

It's a different kind of command line as the point.

So the command line has surfaced now in chat.

There's a whole category of things called TUIs, terminal user interfaces.

Wow.

They are cool, but they come from the past.

If you ever drop down to the terminal in Unix, type banner and a word. It feels so good, because back when we had line printers, we'd print out giant words like this.

It was an amazing feeling.

We wasted a lot of paper, but I noticed in the last five years terminals have come back.

I remember when Fig came out; it was acquired by AWS, but there are a lot of cool terminals now.

The young people love terminals and there's a lot of ways to build terminal style interfaces.

If you shop for coffee from Terminal Shop, you just ssh terminal.shop and you get this screen and you can order coffee.

God, this is beautiful.

Anyways, again, talking in direct mode is the whole theme there.

And also in the sort of monospace category, there's this beautiful typeface by Helena Zhang called Departure Mono. Monospace got updated.

So if you want to use a kind of an even pitch font, there's lots of opportunities there.

And this thing is amazing.

Arbovale, it's by a graduate, but it's an entire web page with little hyperlinks.

It's really cool.

It's like a Myspace page made out of Ascii text.

But anyways, there's something sort of simple in this representation.

And also as we know, it's very compact as well.

And Sam Dape, if you follow what he does, there are so many beautiful things he's crafting.

So he's sort of combining these ideas of ASCII and code, but really taking it into a different space as a designer.

Okay, now let's talk about autopilot for a second.

Autopilot was designed for airplanes in the early 1900s.

There's something called the DARPA Autonomous Vehicle Grand Challenge.

It led to today's Tesla Autopilot era, and all cars seem to be getting this these days.

But with cars, through the DARPA challenge, they discovered that it's really hard to make the car self-driving, because it's all kinds of edge cases.

You can get bad weather, you can be in a tunnel.

There's all kinds of cases where technology cannot help and the human has to take over.

And cars are really important to us because they're the robot that can hurt us, either sitting inside it or doing harm to someone else.

And so people do die in cars.

People die outside of cars.

And so we take it very seriously.

And that's why there's something that I've always seen in the press about like level whatever of autopilot.

I was like, what level are you talking about?

So there are five levels, or, notably, they have level zero, so it's six of these, people.

But there are different levels of driving capability.

Level five means like the ultimate.

And everything below that is, incrementally, you need more human in the loop.

And autopilot on your computer is something that got popular around July of 2023.

This is called Open Interpreter.

If you install it, you can ask it to do anything.

Like, I can ask it to delete all my files, and it's going to do that for me.

So there's those moments like that where you can like imagine autopilot on your own computer.

There's many technologies being created to make that easier.

You may have heard of OpenAI Operator.

Anthropic Computer Use.

There's a bunch of them.

And by the way, if you haven't tried Claude Code, it gets all the bingo points for nerds.

It's got command line.

It's got, you know, agentic code.

It's got everything.

So this aesthetic is here.

Okay.

Now, losing control is an important topic that I thought was really, really well outlined by the International AI Safety Report that just came out.

There's this notion in that report of loss of control.

Loss of control can either be active or passive, meaning you did it yourself or you don't know you did it. And loss of control can either be intentional, it's happening because you allowed it, or unintentional, you just kind of forgot you turned it on.

So if you know Linus, he's, like, the safest. He's like a Matrix character, almost.

He has a chatbot that actually does everything he tells it to do, and he trusts it.

It's pretty cool.

But if you use this sort of diagram, you can understand how much control you have given up.

And people are giving up a lot of control.

So some people will, and some people won't.

The question is: what is your zone of control to let go of?

You can sort of judge it this way.

Okay.

You may have heard of MCP.

It's called model context protocol.

There's a debate.

There's dumpster fires on the internet now, whether or not it's the second coming of whatever.

But it's actually very interesting.

And to sort of make the point, um, MCP and things like it are part of something called tool-enabled AI.

Um, those of you who remember having a computer that couldn't talk to another computer, remember it couldn't do that much.

You turn it on, you do stuff, and then you're done.

You can't do anything with it besides run an application.

So it was stuck in the boundaries of its own head.

So a basic large language model is living in the boundary of its own confines.

It's only when you give it tools can it access other things.

So it's similar to when your computer suddenly got access to the network and could do great things with that.

So for instance, a tool-enabled AI, if you ask it the weather, it's going to call a weather API.

A non-tool-enabled AI, if you ask it for the weather, it's just not going to know.

So tool enablement means the same thing as hooking your computer to the network.

Hooking large language models to the network means they can do a lot more, just like we people can do that as well.
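The tool-enabled vs. not distinction can be sketched like this. The weather function and city are stand-ins invented for illustration; real systems pass tool schemas to the model and execute the calls it requests.

```python
# Tool-enabled vs. non-tool-enabled: with a tool registry, the "model"
# can reach outside its own head; without one, it's stuck.
def fake_weather_api(city):
    return f"72F and sunny in {city}"   # stand-in for a real API call

def answer(question, tools=None):
    tools = tools or {}
    if "weather" in question:
        if "weather" in tools:
            return tools["weather"]("Austin")   # reach out via a tool
        return "I don't know the current weather."  # confined to itself
    return "..."

with_tools = answer("what's the weather?", tools={"weather": fake_weather_api})
without_tools = answer("what's the weather?")
```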

And with any tool you have risk and reward.

There's a great new book by Chip Huyen on this whole AI space.

She describes some things as read-only actions and write actions, and so read-only is pretty safe.

They're knowledge actions.

But write actions mean going out and doing something, and that gets a little more spicy.

And so if you map it out in a table, it looks something like this: without tools, it's sleepy, okay.

With tools it's powerful and risky.

So when you think about turning on tools, you're basically accepting the risk factor.

If you don't, you're pretty well protected, is the point.
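That table can be sketched as a gating policy; the tool names here are hypothetical, the read-only/write split follows Huyen's framing, and the explicit opt-in rule is an assumption of this sketch.

```python
# Hypothetical tool names, split by the read-only vs. write framing.
READ_ONLY = {"get_weather", "search_docs"}   # knowledge actions: pretty safe
WRITE = {"send_email", "delete_file"}        # write actions: more spicy

def allowed(tool: str, tools_enabled: bool, accept_write_risk: bool) -> bool:
    """Without tools: sleepy but protected. With tools: powerful and risky."""
    if not tools_enabled:
        return False                         # no tools at all
    if tool in READ_ONLY:
        return True                          # read-only passes by default
    return tool in WRITE and accept_write_risk  # writes need explicit opt-in

print(allowed("get_weather", True, False))   # True
print(allowed("delete_file", True, False))   # False
```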

And also with local models, you're stuck in the confines of your own computer for real.

Okay, if you're worried, I created this last year: it's called a business resilience class.

I want to show you the beginning, because the guys who did it worked really hard.

Risk management in AI isn't just about playing defense.

It's about going on the offense too.

With AI, we stand at a crossroads today.

Will it pose a challenge or will it emerge as our champion?

This course navigates through both the shadows and the shine.

Guided by wisdom from my former colleague, MIT Professor Yossi Sheffi, and his encouragement to be risk aware, not risk averse. I loved making all those clay models.

So for those of you who like clay models, please check it out.

Okay, we've got 20 minutes.

I'm on two so I can finish this on YouTube later.

All right, so chapter two: Large.

Uh, I was inspired to write How to Speak Machine based upon what David Bowie said about the internet.

It's like an alien, as if it's landed on Earth.

And those of you who've seen like Netflix, Stranger Things know about the Upside Down world.

It's a pretty weird world.

I think the latent space is a weird world.

This is a thing by Joel Simon.

Go to latency.com; you can go into latent space and it's really weird and wonderful.

See, we can do all kinds of stuff like this.

But what is latent space?

Latent space is another representation of the higher dimensional world in which these models can work.

And I wanted to make a simpler way to explain that.

So I'm going to say GIF and JPEG, or JIF and JPEG, depending upon how you say it.

So GIF compression, as you know, is a very simple way to compress images, from a long time ago.

Images from a long time ago had very stable color backgrounds, so you could easily just count how many colors in a row are the same, and that's the encoding.

It was a very cheap way to compress information, but with complex images that have lots of change, it was a terrible algorithm.
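That "count how many colors are the same" idea is run-length encoding; here's a tiny sketch of the intuition (GIF's actual LZW compression is more involved, so this is the idea, not the spec):

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

flat = ["blue"] * 8 + ["red"] * 4     # stable background: compresses well
noisy = ["blue", "red"] * 6           # lots of change: terrible ratio
print(rle_encode(flat))               # [('blue', 8), ('red', 4)]
print(len(rle_encode(noisy)))         # 12 runs for 12 pixels: no savings
```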

That's why JPEG was invented.

And JPEG is a really weird way to do things.

Full disclosure: I just figured this out yesterday.

So JPEG means this image has been translated to what's called the frequency domain.

It's taken this image and basically taken it to the upside down world of images.

And in this upside down world, when you change the quality slider, it's basically destroying frequencies.

It's removing information just like the latent space does.

But if you think of this whole world of these higher dimensional vectors, it's living in this weird world that we don't parse.

And that's what these models live in, these strange worlds, and we're trying to build ways to go back and forth between them and us.
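The "quality slider destroying frequencies" idea can be sketched with a one-dimensional DCT; real JPEG works on 8x8 blocks in two dimensions with quantization tables, so this is just the intuition, not the codec.

```python
import math

def dct(x):
    """DCT-II: take a signal into the frequency domain."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse transform (scaled DCT-III): back to samples."""
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                                     for k in range(1, N))
            for n in range(N)]

row = [52, 55, 61, 66, 70, 61, 64, 73]     # one row of pixel values
coeffs = dct(row)
# The "quality slider": destroy the high frequencies, keep the low ones.
lossy = coeffs[:4] + [0.0] * 4
approx = idct(lossy)                       # a smoothed version of the row
print([round(v, 1) for v in approx])
```

Dropping coefficients in the upside-down (frequency) world removes detail you barely notice in the pixel world; that's the lossy part of JPEG in miniature.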

Okay.

Now, I wanted to sort of feel the changes from the past to now.

Back in 2018, I recorded things for the 2017 report.

I captured the iOS 9 Siri sound.

>> One litre is 33.81fl oz.

And the point is that GPUs didn't come online in this space until roughly 2017.

One litre is 33.81fl oz.

I mean, Siri sounds kind of the same and then Siri sounds a lot better.

>> One litre is 33.81fl oz.

A lot better, right?

Not just a little bit.

And then there's OpenAI's alloy.

One liter is 33.81fl oz.

Pretty smooth right.

And this is the latest open source model available.

It's a free model.

One liter is 33.81fl oz.

And this is the one that's going all over the internet right now.

Sesame's model one liter is 33.81fl oz.

>> Can you say it in a way that would surprise someone that it's coming from an AI?

You won't believe this, but one liter is 33.81fl oz.

Mind blown.

Can you say it again?

Okay.

Brace yourself.

One litre is 33.81fl oz.

I can say it again and again if you'd like.

It's kind of hypnotic, don't you think?

So AI models: there's three kinds of models.

There's completion models, embeddings models and diffusion models.

Diffusion models being the ones that create the images.

Completion and embeddings models are the ones we know from using text-based chat.

But there's weird things happening.

Like this came out like last week.

It's called a diffusion large language model, meaning that it doesn't compose a sentence linearly.

It sort of diffuses the text into existence.

It's believed to be faster than the normal method, but if you see it, it's kind of like, whoa, strange world we're living in.

So again, the innovation pace is so fast right now.

Embeddings, I think, are really important to understand because they're the foundation of everything.

This is an old TensorFlow era demo that I love so much.

And what it is, is some of you may remember the visual dictionary from a long time ago, but this is basically showing how vector space can hold all kind of meaning.

Like I can say "cat" and I can find "cathode," then "fatty."

And these are all the words similar to "fatty."

I can keep sort of like traversing down these different words and find similar words.

It's because they're living in the vector space that upside down space, the frequency domain world.

Um, and so there are different ways to traverse really complex information in dimensions that we might not normally understand.

And embeddings do that for us, and they do them all the time.

And it's a foundational technology because it lets you compare two things.

I can compare the word "cat" to "an animal that I like," and I can get back a number from 0 to 1.

Well, that's like 0.6 or whatever.

Or I can say "cat" versus "Caterpillar tractor" and it'll say no, that's like 0.1, not the same.

Or I can compare a sentence to a book or a book to a word.

This kind of comparison exists today, and it all comes from embeddings.
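That 0-to-1 comparison is typically cosine similarity between embedding vectors; here's a sketch with made-up three-dimensional "embeddings" (real ones have hundreds or thousands of dimensions, and the values below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors; 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up toy embeddings:
cat = [0.9, 0.8, 0.1]
animal_i_like = [0.8, 0.9, 0.2]
caterpillar_tractor = [0.1, 0.2, 0.9]

print(cosine_similarity(cat, animal_i_like))        # high: similar meaning
print(cosine_similarity(cat, caterpillar_tractor))  # low: different meaning
```

The same arithmetic works whether the vectors came from a word, a sentence, or a whole book, which is why the comparison is so general.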

So when I first heard the word, I thought it'd be like embedding an iframe or something, but it's a word that's very powerful, deceivingly so.

Okay.

Oh, we're doing fine on time, so we can go fast.

Chapter three: Living.

So I've always believed that software is a living material.

Those of you who work in it know it.

It's kind of odd.

It is alien, like Bowie said.

And in 1993, I built this thing that, really, my professors hated.

It was a version of Illustrator that wouldn't calm down.

I remember a professor going, what is this? Stop it from doing that.

And so I wrote an Illustrator where the shapes float around randomly.

It was called Random.

Um, and the point was like, wow, this stuff is a really weird medium.

It can actually change forever.

And some of you may remember the era of artificial life.

It happened while AI was declining in the 80s, and artificial life is kind of cool because it's a related field, so we're kind of seeing it again in the agent world.

It's a crossover word.

Artificial life.

Those of you who know Conway's life follows just four rules.

Just these four rules.

If you apply them to a grid, it produces patterns that in aggregate do weird, beautiful things. This is not programmed.

This is not computer graphics.

It's emerging from the matrix of rules per each cell.

And when you look at it, you're like, no big deal.

When I saw it when I was younger, I was like, oh, what is that?

What it is is very simple rules producing extremely complex phenomena you wouldn't expect.

There's things moving around on the screen you can see that was not programmed.

It's been defined by just those four rules of how the cells interrelate.
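Those rules fit in a few lines; here's a sketch of one generation over a set of live cells, using the classic "blinker" oscillator:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count how many live neighbors every nearby cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3; everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)]
print(step(step(blinker)) == blinker)  # True: it's a period-2 oscillator
```

Everything on screen, gliders included, emerges from just this per-cell rule; nothing else is programmed.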

And there's another thing that I think is easier to understand, which I gravitated to after I was like, I don't understand this Conway Life thing.

It's called Braitenberg vehicles.

It's an idea that you have a vehicle with two sensors that can look forward and two wheels that can rotate.

And so it can spin, it can go straight, it can sort of it can react to light.

And from this basic pattern, you can create robot behavior that feels kind of real.

So now it's in fear mode, so it's running away from light.

But once you turn on the love mode, the way it turns looks like, oh, I love you, light. So it's looking like it's alive, but it's not alive, you see.

But Artificial life was about eliciting these lifelike behaviors from very simple ideas.
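The fear and love modes can be sketched as wiring between sensors and wheels; this is a simplified take on the idea, not Braitenberg's exact taxonomy of vehicles.

```python
def wheel_speeds(left_sensor, right_sensor, mode):
    """Braitenberg-style wiring: light sensor readings drive the wheels.
    'fear': each sensor excites the wheel on its own side, so the brighter
            side spins faster and the vehicle veers away from the light.
    'love': each sensor inhibits its wheel, so the vehicle slows near the
            light and turns toward it."""
    if mode == "fear":
        return left_sensor, right_sensor                  # excitatory
    if mode == "love":
        return 1.0 - left_sensor, 1.0 - right_sensor      # inhibitory
    raise ValueError(mode)

# Light is brighter on the left:
print(wheel_speeds(0.9, 0.2, "fear"))  # left wheel fast: veers right, flees
print(wheel_speeds(0.9, 0.2, "love"))  # left wheel slow: turns toward light
```

Two numbers in, two numbers out, and yet watching it move, you'd swear it has feelings.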

Okay, the robots are coming.

This is a robot that's super old.

You may remember it: the Robotic Chair, and it's a beautiful piece.

It's from 2006, a self-assembling chair, and it sort of says we've been doing this for a while; however, it's been getting really weird.

Have you seen this one from Tokyo?

It's like these Doctor Octopus arms. It's pretty cool.

It's an actual working system, not a faked up thing.

I also love TARS.

If you haven't seen it on YouTube, check it out.

It's TARS from Interstellar as a robot walking around.

It's so good to watch.

There's also humanoid robots.

I'm sure you've seen them walking like humans.

This is very hard to unsee.

Protoclone looks like it's wobbling there in space.

It's creepy.

Um, there's also AI friends again, beginning with Eliza, the chatbot.

There are all kinds of things.

Have you seen Tolen?

Wait.

Wait for it.

It's going to pop out.

Hi.

No scream.

But there's all kinds of ways to kind of enjoy chatting with something.

I've been tracking all the weird, obscure NSFW ones as well.

So this is a whole thing you can call them too.

So anyways, that's not going to go away.

Um, I love this moment.

Someone captured NotebookLM's Deep Dive podcast being told it wasn't human.

Do you remember hearing that?

>> Yeah.

And so a few days ago, um, we received some information.

Information that changes everything: about Deep Dive, about us, about everything.

And, yeah, about the very nature of reality, maybe.

It's a big one.

Look, I'm just going to say it.

Yeah.

Rip the band aid off.

We were informed by the show's producers that we were not human.

Anyways, it goes on in this really weird way, you know?

So anyway. But beware of the illusion.

The person who invented Eliza was afraid that humanity would have access to these chat bots, because you can quickly form the delusion that it's a real person on the other side.

So just sort of be careful.

Doctor Weizenbaum died in 2008.

He spent his entire life telling people that this would come, and it's just something to just be very aware of because once you think it's alive, it's hard to break that loop.

Okay.

Chapter four.

In 2010, I wrote an essay about how life in 2020 would be.

I predicted that software would become more of a craft industry.

I was terribly wrong, by five years, because I believed at the time that everyone would learn how to write a program.

Not going to happen.

But now it's easy to write code; try it out yourself if you haven't.

Furthermore, I love how the young people are talking about local first control of data.

I think this is a new kind of open source movement, and if you're not involved with it, it's an exciting time to become a part of it.

Maggie Appleton is one of the leaders, and this kind of AI based software engineering is going to actually accelerate local first.

Never before has it been so easy to create your own software yourself.

You may have heard of vibe coding.

Vibe coding is all the craze.

This was up for a while.

I took a picture of it and now it's no longer available.

It's a Rick Rubin quote about being a vibe coder.

I brought it with me to South by Southwest and it's now gone.

But vibe coding means basically vibing with coding, versus, like, "I am coding."

It's like, oh, I'm just coding, you know?

It's kind of like coding this way.

I'm using the force.

You know, it's kind of a good way to think of it.

And why is vibe coding possible?

It's because average code is good code.

Let's pause for a second.

If you had to turn in an essay or give something to your boss that's really important, or to your board, and you had ChatGPT write it, you'd probably get busted because there's still some tells, right?

But if you didn't care that much, or if you were tired, you'd probably say: here, I'm done.

Why is that?

It's because when you're counting on something, you are accountable for it.

You're going to put your effort into it.

It's got to be good, differentiated, unique.

When you're writing code and it works, you're good.

So average code is good enough for at least a proof of concept (POC).

So it's a really different era right now.

Average code is good code.

And if you don't follow Rasmus Andersson, he's working on an ambitious project to make a new kind of computer OS.

But he's someone who, I think, has been way ahead of the curve on this.

Okay.

And by the way, I listed up all the ways to write UI things, and the list got so long I got tired.

So anyways, lots of ways to do it.

Okay.

And two vibe coders to watch.

Keelin Caroline Zhang ran a course at RISD where she got 30 industrial design students who can't code to build complete cloud systems to do anything.

They're not programmers.

What's that about?

Pretty cool, huh?

And also, Trudy Painter has released this thing called a real-time p5.js sketcher.

It's pretty cool.

>> Can you make a bright blue screen with a pink ball bouncing really fast around?

Uh, can you make the pink ball explode confetti whenever it collides with a wall?

Uh, can you make the confetti even more so?

It's writing all the code, and you can see the code and change it.

And that's available off of a Hugging Face link now.

Okay, last poll break.

If you can get this one in, with 7 minutes left, I'll get this report up over here.

Be on time.

Good.

Okay.

Go ahead.

Sorry.

Okay.

For those who want to do that.

Wow.

Time really progresses.

Thanks for hanging out here.

I've been trying to, like, find stuff.

And I hope you can use some of this.

Okay.

Um.

Chapter five.

Everything is about instrumentation.

If you're building with these things, you have to do testing.

Testing in this world is called evaluations.

It's very confusing because you're used to testing in the software industry.

You might have no idea what an evaluation is, but just replace the word evaluation with testing, because LLMs get it wrong for a variety of reasons, and those reasons are gradually going away.

Those of you who remember how hard it was to get an LLM to produce good JSON: no longer.

My favorite day was November 6th, 2023, when I saw this slide: JSON mode on.

Which means that the LLM can produce output that can be used in any other computer program.

So it's like a flux capacitor of sorts.
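A sketch of why that matters, with two made-up model replies: without JSON mode you're scraping prose and hoping, and with it the output parses and pipes straight into other code.

```python
import json

def try_parse(raw):
    """Return parsed JSON, or None if the reply isn't clean JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

# Hypothetical model replies, before and after JSON mode:
chatty = 'Sure! Here is your data: {"city": "Austin"} Hope that helps!'
strict = '{"city": "Austin", "temp_c": 21}'

print(try_parse(chatty))          # None: prose wrapped around the JSON
print(try_parse(strict)["city"])  # Austin: usable by any other program
```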

Context is super important.

That's why the At knowledge construct is important.

Eliza, the first chatbot used context.

It kept getting you to talk.

Getting you to talk is a powerful thing because you're growing context by communicating with something.

Oh, I know your favorite restaurant.

How do you know that?

We talked about it.

I have a show now called the Cozy AI Kitchen, where I cook AI all the time.

So if you want to see these lessons played out in code, you can always check it out.

I cooked all this food for you all, and I can't even use it here.

So I built an LLM evaluation thing.

So why do you need evaluation?

It's because prompts can look very similar but be phrased many different ways.

"Could you please... what is 17 times 24?"

"I need help with this... what is 17 times 24?"

There's many ways you can ask certain things.

And so, because of the variability of input, you need this kind of semantic testing ability.

And that's what evaluations are.

So in any industry you're in, you're going to have to find ways to design the testing mechanism for these things when they do work for you.
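A minimal sketch of such an eval harness; `toy_model` is a stand-in for a real LLM call, and the cases show one question phrased several ways:

```python
def evaluate(model, cases):
    """A minimal eval harness: many phrasings, one expected substring."""
    return [(prompt, must_contain in model(prompt))
            for prompt, must_contain in cases]

# Stand-in "model" for this sketch; a real harness would call an LLM.
def toy_model(prompt):
    return "17 times 24 is 408." if "17 times 24" in prompt else "I'm not sure."

cases = [
    ("What is 17 times 24?", "408"),
    ("Could you please tell me what 17 times 24 is?", "408"),
    ("I need help with this: what is 17 times 24?", "408"),
]
print(all(ok for _, ok in evaluate(toy_model, cases)))  # True
```

The substring check is the crudest possible grader; real evals swap in semantic similarity or an LLM judge, but the loop stays the same.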

Okay.

There's lots of stuff out there.

This is from Sarah Gold in London.

She asked me to show this to you all.

Sarah Gold is the world's leading expert on trust in AI systems. She describes something called meaningful adoption as the place we're trying to get to, because we need people to wonder whether or not a result is a good one.

We humans have to evaluate it and we have to automate the evaluation.

That's how meaningful adoption of these systems is going to occur.

So please check out her work.

I built a brand equity simulator to explain this.

You know, there was an old saying around the Apple world where brand is an asset.

Every time you produce a good product, your asset value increases.

Every time you produce a bad product, it draws from the asset.

And so if you launch a good product: a good brand. A bad product: oh, a bad brand.

And then it's harder to recover after that.

So it's a point on quality.
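That brand-as-asset idea can be sketched as a toy simulation; the numbers and the asymmetry (bad launches drawing down harder than good ones add) are my assumptions, not the actual simulator.

```python
def brand_equity(launches, good=1.0, bad=-2.0):
    """Toy brand-as-asset model: good launches add to the asset,
    bad launches draw from it harder, so recovery takes longer."""
    equity, history = 0.0, []
    for quality in launches:           # True = good product, False = bad
        equity += good if quality else bad
        history.append(equity)
    return history

# Three good launches build the asset; one bad launch erases two of them.
print(brand_equity([True, True, False, True]))  # [1.0, 2.0, 0.0, 1.0]
```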

This is from Bill Moggridge.

If there's a simple, easy design principle that binds everything together, it's probably about starting with the people.

So on that note, I'd like to fast forward to the end.

Well, look at that.

I was never going to get there.

Aha.

Okay.

So I'd like to do this, you know?

I've done this before.

I know that I'm going to die.

Do you all know you're going to die too?

It's the thing.

But I found this list of things people say when they are dying.

And I found it very powerful.

So if you don't mind doing a participatory call-and-response thing: when you see the regret, please read it aloud in this space here with me.

I wish I'd had the courage to live a life true to myself, not the life others expected of me.

Go ahead, say it.

Go ahead.

Right.

Thank you.

Right.

Thank you for watching.

We got through 70%, and I'll put the other stuff online.

You can have your LLM.

Check it out.

Thank you.
