
The trick to AI prototyping with your design system

By Dive Club 🤿

Summary

Topics Covered

  • Pre-coded Templates Slash Navigation Errors
  • Translate Tailwind to Design System Components
  • Sticker Sheets Calibrate AI Vision Limits
  • Slack Bots Drive Adoption at Scale
  • Design Systems Shift to AI Memory Documentation

Full Transcript

We talk a lot about using AI at startups, but what are the more established companies doing to scale AI prototyping internally?

>> With AI, it's a lot more about, okay, how do we document this so that it's available in the LLM's memory at all times? Typically with a design system, the way we would do these things is through programs and people and cultural reinforcement, design reviews, that sort of stuff. Now it's like, okay, can we just tell it exactly what we care about?

>> How do you use your design system to get the most out of tools like Replit or Figma Make?

>> Now we're kind of going into a fluid model where anyone with any tool can essentially ship to a customer, and we need to figure out how to support that. The design system remit has just blown up into: anyone in the organization can essentially ship, and that's a really challenging problem to solve.

Welcome to Dive Club. My name is Ridd, and this is where designers never stop learning. This week's episode is with Louis Healey and Kyler Hall, and they're doing a deep dive into all of the ways that they scaled AI prototyping at Atlassian. And I want to start this conversation by taking a look at how they're using templates in Figma Make, because I've never seen this approach before.

>> With AI prototyping, at the beginning it very much was about just allowing people to spin up or ideate ideas with code. So very much: how do we augment product managers, product designers, content designers to essentially create a coded prototype of their idea. Initially, when we were piloting AI prototyping earlier this year, the design system team wasn't necessarily the key integration for AI prototyping. It was very much like, oh, let's just focus on low fidelity and just spin something up. And very quickly, once we gave people access to these AI prototyping tools, it was like, well, I need this to look like an Atlassian experience. How do we do that? And that's when the design system team came in and we created what you're seeing here, these templates.

So, this template is essentially that starting point, that baseline for people to spin up an experience, whether or not they want to create their own kind of sub-template. We have a lot of apps in our collections at Atlassian: Jira, Confluence, Loom. Teams may want to create their own specific template for product designers to use and experiment and ideate on, because they don't want to have to create a roadmap every single time. But to do that, they want some very similar content, like the top nav and the side nav. Those kind of need to be the same, or at very high fidelity. So we've realized that if we create this kind of base template, you get things like the top nav and the side nav consistent, but it also allows them to make their own version. We've landed on this abstracted template where it's not actually a specific product. It's just a bunch of elements that the AI would usually get quite wrong.

Our top nav and our side nav were very, very hard to consistently generate. What we would find, when we were initially creating these instructions to generate these prototypes, is that it would hallucinate the icons, the navigation elements. Our navigation system has a lot of imports, and it would just always get one or two wrong. You know, if people were uploading a screenshot and trying to replicate an experience in production, the top nav and the side nav would also be incredibly wrong, because we're in a state of improving the visual refresh of it. This is why this template was born: we said, okay, let's just code the top nav, let's just code the side nav. And then, when people are uploading a screenshot of the side navigation elements that they want, we found the agent (you know, Figma Make and Replit) is actually very, very good at just changing code that already exists, whereas before, when we were focusing on our instructions, it was taking nothing and trying to build everything. So we found that just having that initial starting point reduced the error rate from maybe half of the prototypes having a lot of navigation issues to probably nearly zero of them. Maybe just a few icon hallucinations here and there. But if you upload a screenshot of any side navigation in Atlassian, it's going to get a lot of the combination of navigation elements pretty correct.

And that really supercharged prototyping for people, because what we found is that people were spending maybe two or three hours just trying to get that top nav and side navigation pixel perfect, because that's what makes you feel like you're in an Atlassian experience. And when you're testing with customers, you want the top nav and side nav, the chrome, to feel like you're in product. Otherwise, people are going to get too distracted. But the content? It's not that important. What we found is that if you do any main content in this area, you can kind of get away with it. But when the top navigation and the side navigation are incorrect, people start to get a little bit confused. The wayfinding is a little bit off. So we've really solved that by having that hybrid approach of a pre-coded template with design system instructions. So then, when you're building on top of it and creating more product-specific templates like a roadmap, it's not going to get the basic elements incorrect.
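
To make that concrete, here is a minimal sketch of the pre-coded-shell idea in React; every name here is hypothetical, and Atlassian's actual template is certainly more involved.

```tsx
// A sketch of the "pre-coded shell" pattern described above, not
// Atlassian's real template code. The chrome ships finished; the AI
// only ever fills the content slot.
import React from "react";

function TopNav() {
  // Pre-coded chrome: instructions tell the agent to leave this alone.
  return <header className="top-nav">Product name, search, profile</header>;
}

function SideNav() {
  // Pre-coded navigation: the agent edits existing items when a screenshot
  // shows different ones, which is far more reliable than generating the
  // nav from scratch.
  return <nav className="side-nav">Navigation items</nav>;
}

// The template people duplicate. Prompts like "build a roadmap view"
// only replace the children passed into the content area.
export function PrototypeTemplate({ children }: { children: React.ReactNode }) {
  return (
    <div className="app-shell">
      <TopNav />
      <div className="body">
        <SideNav />
        <main className="content">{children}</main>
      </div>
    </div>
  );
}
```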

>> Real quick message and then we can jump back into it. So, I got a new computer recently, and what do you think was the very first app that I installed? If you've been listening to this show for a bit, then you probably guessed that the answer is Raycast. At this point, it is an extension of my fingertips and a fundamental way that I use my computer. And I'm not alone. I mean, I see this sentiment from people I look up to on Twitter all the time. So, if you're still on the fence, I mean, just do it. Download Raycast and thank me later. Just head to dive.club/raycast to get started.

All right, here's the thing. You don't need another dashboard. What you need to do is talk to customers. So, I want to introduce you to Genway AI. You can think of it kind of like vibe researching to validate your ideas quickly. Just draft your questions, select an ICP, and then their AI agent runs interviews on your behalf by pulling from a panel of global participants. I mean, you can literally set it up in the morning and get actionable insights by lunchtime. It's validation at your fingertips, and you can try it out free for 14 days. Just head to dive.club/genway to get started. That's Genway.

Okay, now on to the episode. It's genius, because I've spent so much time just making little tweaks to the shell of my UI when actually the only thing that I want to prototype is a piece of the content, or maybe it's a different layout on the screen, and I don't care about the sidebar. I just want it to feel a little bit real. I mean, I've gotten so frustrated with it that at times I've even taken screenshots of prod, and my first prompt is: add this image as the background to the page. And then I just put components on top of it, because I'm tired of adjusting the shell. And so this starting point makes a lot of sense.

I'm even just going to restate my understanding, so that for myself and everybody it's really clear how this is being used. We're looking at Figma Make. This is presumably a file in a project somewhere that anybody can just run and duplicate, and it exists in code. It's not any one real product. It's honestly kind of just a set of core scaffolding and subcomponents and sidebar interactions that the AI is then using as a foundation to build whatever that person wants. And then you've even baked some education for how to use the template into the core page layout itself. So anybody can open this up, and they just start typing and talking with the agent about what they want to make. Is that kind of a correct understanding?

>> Absolutely. Yeah. So this template, you can either use it to create your own sub-templates, you know, that kind of Jira-specific roadmap that people then want to duplicate, so it's a bit of a network effect; or, if you're just ideating something new in your experience, you can duplicate this template and go for your life. We've included some key instructions for people, you know, if you want to change the product name to Jira. What we've actually tried to do: we found that hallucinations of this icon and this logo were very, very common. There's a huge spectrum of confidence and understanding of how to prompt at Atlassian, and we found some people were saying "change logo to Jira," and what it would do is go based on its pre-trained knowledge, pull in a really old Jira logo, and swap that out.

So what we've actually found is that by having a configuration object of just some hard constants, which includes the product icon and some other elements, and then hooking that up to the top navigation component, somewhat randomly, it hallucinated a lot less. And on top of that, we found it hallucinated even less when we gave people a copyable command, so rather than saying "change logo to Jira," they can just click that, copy it, and then they know how to switch things over to that logo. So we've had this really hybrid approach of pre-coded elements and design system instructions, but then also the user experience of reducing very common and annoying hallucinations. Those would create that extra one or two prompts, but they would also really lower people's confidence, because they can't seem to get the simplest stuff working. And that's because the AI doesn't know about the intricacies of these new logos we've created. It's just going based off what it has been pre-trained on, which is like: oh, this is the Jira logo. So we found that really, really effective.

>> Before this, we just tried to give people a set of prompts: here's 100 lines, copy and paste this into Figma Make, and it might work, it might set it up. A lot of that is because we have a bunch of theming, and we have feature flags and feature gates that have to be turned on that aren't even well documented internally. So it's very hard to ask an LLM to do that. The only way to really do it is basically to have an engineer tell you how. And we want prototypes to show off where we're headed. We might use them for demos and design reviews, that sort of stuff. So of course we want them to ideally be cutting edge, possibly even future-baked. A lot of that just requires us to bake in a lot of the code things that a designer should never even know exist. We started with those prompts, and a lot more of them, and over time it was like: well, what if we just bake the entire set of defaults that an engineer would know by heart into these templates, so people can iterate very quickly?

We've also taken that a little bit further, because you may not want an entire template. We have Rovo, which is our AI at Atlassian, and it's interfaced through a chat box. Now, do you have to duplicate a template just to add Rovo in? Probably not, right? If you already have an existing prototype, that gets a little bit cumbersome. So we've created this thing called recipes, which are kind of what Kyler mentioned before: a code blob with some instructions, basically: here's how you recreate the chat box, and this is where you put it. It's not going to be pixel perfect to what's in production, but it's good enough to get that look and feel without making every single user of prototyping upload a screenshot and try to get it pixel perfect. So we found this recipe approach really helps with those smaller elements that are maybe a little more technical for people to do. Another example is dark mode by default. For some of our user personas, you know, developers probably aren't using light mode by default, and if we're prototyping, maybe we want dark mode. That involves a lot of theming code; it's a little bit tricky to actually change the default mode to dark mode. So we created a recipe where you can just paste a bunch of instructions in, and it will switch the mode to dark for you. So we're trying to think about the user experience of a non-technical user using code, which can be a challenge in itself.
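
To picture what a recipe might contain, here is a hypothetical sketch of the dark-mode one; the component name and the data-attribute theming mechanism are assumptions, not the actual recipe.

```tsx
// RECIPE (sketch): default the prototype to dark mode.
// 1. Paste this component into the template.
// 2. Render it once at the root; don't touch any other theming code.
import { useEffect } from "react";

export function ThemeDefault({ mode = "dark" }: { mode?: "light" | "dark" }) {
  useEffect(() => {
    // Assumes the template themes via a data attribute on <html>, a common
    // pattern; the real template may use a theme provider instead.
    document.documentElement.dataset.colorMode = mode;
  }, [mode]);
  return null;
}
```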

>> Okay, so this is the end state. It's beautiful. You're introducing me to a lot of concepts that I haven't considered before. Even the way that you're thinking about recipes is genius. I think my goal after seeing this is to understand how someone else could reverse engineer the system that you've built, because I'm pretty sold. So can we go back in time a bit and talk about what it took to arrive at that end state? Maybe we can dig into some of the lessons that you've learned along the way.

>> Essentially, how we started to integrate the design system is, you know, Kyler and I thought: oh, great, sweet.

We're just going to take our Atlassian documentation, put a bunch of examples into a text file, upload that text file, and say: now build with this. And we got some pretty rubbish results, because first of all the file was massive, so it was truncating the information and only creating a subset of the components. But it also didn't really know how to use those components, or how to use them in an AI prototyping world, because what we were doing was taking the mental model of a human using documentation and trying to apply it to a machine, to an AI. So that was a very quick learning: yeah, okay, that's not going to work. Let's actually figure out how to talk to this machine. And I think some of the cool things we did, where we got some really good results, came from trying to talk to the AI in a way that it understands.

What I mean by that is that we have a guidelines.md file which includes all of our design system documentation. Eventually this will be an MCP, once we can plug in MCP, but for now these are instruction files. And this is essentially how we get design system generations to a pretty high fidelity. But there's an interesting thing that we found really valuable, which I've not seen too many people do: we try to instruct it to keep thinking in Tailwind, which obviously matches its pre-trained data. I assume most models have been trained on shadcn and Tailwind because of the amount of open source code out there, which is, you know, why everything is React as well. We try to go: okay, generate, and when you think in Tailwind classes, once you see this thing, oh, it's actually this design system component. What we mean by that is that in every instruction, for every one of our components, we have a "translating from Tailwind" section where we say: if you see these class names, it should actually be this React code. And we found this really beneficial in reducing some of those hallucinations.

You know, we have a component at Atlassian called lozenge. I don't think I've seen lozenge in many other design systems. It's a very Atlassian name, and whenever new people join Atlassian they're like, what's a lozenge? And then we have to explain it. So AI is probably not going to know what a lozenge is. It's probably going to think it's something else. This gives us an opportunity to say: you see it this way, feel free to think that way, but then translate it into something that we want to generate. So you can see here: a span, if it's using these Tailwind classes, is actually this important badge. And we found that really effective in reducing quite a lot of hallucinations. And at the top, we then have really specific instructions saying: use the Atlassian design system first, use design tokens first, and then use Tailwind for the missing things, and try to swap things over.
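
To picture what one "translating from Tailwind" rule could look like, here is a hedged before/after pair; the class names and the exact mapping are illustrative, although @atlaskit/lozenge is the real package being described.

```tsx
import React from "react";
import Lozenge from "@atlaskit/lozenge"; // real package; props may vary by version

// If you catch yourself generating a Tailwind-styled status pill like this...
const TailwindVersion = () => (
  <span className="rounded bg-red-100 px-1 text-xs font-bold uppercase text-red-700">
    Blocked
  </span>
);

// ...the guideline says to emit the design system component instead:
const DesignSystemVersion = () => <Lozenge appearance="removed">Blocked</Lozenge>;
```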

>> Who's responsible for this document? I mean, this is about as robust a guidelines document as I've ever seen.

>> Myself and Kyler. The first version of this was fully vibe-coded. Kyler was away for a month, I think in Japan, and I love working with Kyler, and I was just in this depth of having to get this design system integration working, because we were in the middle of a pilot and we had 300 people wanting to use the design system, and I had no way of understanding how to do that. So the first set of these instructions was absolutely vibe-coded, and I'm sure Kyler can attest to it: there were a lot of discrepancies, there were a lot of contradictions and everything, but it got really good results. Then after that, we decided: okay, how do we actually centrally manage this, and how do we generate it with some level of consistency? Which I'm sure Kyler can talk about, how we got to that.

>> Now, so on Atlaskit, the full file is about 5,000 lines of just agentic content, and I'll say 90% of that was vibe-coded about nine months ago. I think the journey for a lot of us was: oh hey, we can use Cursor, we're allowed to use Cursor. We also got VS Code, and different things click for different people at different times. I clicked with Cursor, and I was like, okay, what does it look like to teach it how to work with the design system? And a lot of that was just: okay, let me create instructions to tell it how to use our token system, because it does not get that from its industry training at all. So we basically have to make a table of every single token. This is the first version that we started with, I think back in March. Well, it has probably come a long way since, but this is the current iteration of that first version, which was: okay, what does it look like to put all of this agentic content together? There were a bunch of prompts floating around: hey, how do you create a button? How do you style something? How do you use our tokens? How do you use our theming? That sort of stuff was floating around for probably a month, and it was like, okay, could we just make this public? So we made an llms.txt file. The full version is all of our content concatenated together. And a lot of it was just: let's create some maps to explain what these things mean. And then over in Figma Make and Replit, in addition to showing light and dark modes, we'll actually show a Tailwind class as well, just for color. I don't have that up, but...

>> Yeah, this is really where it started. But this is 5,000 lines, whereas we've settled on something more like 2,000 to 3,000 lines within prototyping, a little more succinct, because a lot of these are very verbose, in relatively good ways, but it takes a lot of tokens and a lot of context to actually generate with this. So we've tried to avoid using it in full, if that makes sense. But yeah, this is basically where I started generating, effectively just vibe coding: okay, go build the button section, go build the lozenge section, go build the token section, here's the file. And a lot of that was hallucinated originally, until I, or another engineer, or Louis could go in and be like, hey, these are all the wrong media queries, let's fill in the right media queries. And it's like, oh, okay, look at this file instead, don't just use industry-trained things. And then we get all the right things. Louis probably used some things like this to basically ask: can we have a Figma Make or Replit version of this? And then: oh, Tailwind doesn't work; we're hallucinating more on icons than we were before, so let's over-index on certain things. And then I think there were also some cases where it's like, we don't need all of these props, or we don't even need the hide component; it probably knows how to do that. So this is everything, whereas in Replit and Figma Make we probably show only 20 or 30 components, as opposed to the 100-plus we have. So not only is it specifically our content, it's also a very drilled-down version of our content.

>> For a timeline as well: this llms.txt was the first created file. That's what I tried to use when I vibe-coded while Kyler was away for a month. Then I had to vibe code my own thing. And then once Kyler came back, we figured out how to make a scalable version.
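
For a sense of what "a table of every single token" might look like once it's generated instead of hand-written, here is a tiny hypothetical slice; the token names and values are illustrative, not Atlassian's published set.

```ts
// Map each design token to the raw value the model keeps trying to
// hard-code, so the instructions can say: if you reach for this hex,
// use this token instead.
export const TOKEN_TABLE: Record<string, string> = {
  "color.text": "#172B4D",
  "color.background.brand.bold": "#0C66E4",
  "space.100": "8px",
  "space.200": "16px",
};

// Generating the guidelines section from one source keeps the many copies
// of the documentation from drifting apart.
export const tokenGuidelines = Object.entries(TOKEN_TABLE)
  .map(([token, value]) => `| ${token} | ${value} |`)
  .join("\n");
```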

>> Okay, so we saw the guidelines file. We've talked about some of the lessons learned, even about giving people coded lines of text that they can just paste into prompts. Were there other areas or clear points where you iterated, did something different, or learned something about how this process should work, where you made a change, or where it influenced the way that you thought about what it would take to enable AI prototyping at scale?

>> We were getting pretty good results, maybe 60 to 70%, from what we call a one-shot. I was benchmarking the effectiveness of these AI prototyping tools: I would screenshot a simple card, and then I'd screenshot a really complex screen, and I would see how well it performed from a single shot, a single prompt like "build this screenshot," and then see what the results were.

>> And this was just based off the guidelines file, and that's it?

>> Just off the instructions, yeah. It was getting like 50 to 60% accuracy; with the hybrid template approach it was getting a lot higher. It was still hallucinating a lot, though, and I was getting annoyed by certain parts of it, things like our text and our icons, and I was just obsessed with trying to get it higher and higher. And I had a little bit of inspiration from printer sheets. You know when you print and you need to configure your printer, it prints this special sheet of all these colors and shapes and patterns, and that's used to calibrate the heads of the printer, or something. So I was like, what if I just did that for AI prototyping? What if I just put a bunch of things on a prototype and said: describe it to me. How would you describe this? How would you actually prompt this? And then use that to reinforce the instructions, to basically talk to it the way it expects to be talked to.

And that really helped with understanding why the icons were being hallucinated, and also things like text, why it couldn't figure out certain aspects of text. I found that it was quite hard for it to determine the font size of something. When I asked it, here's a bunch of text, tell me the font size, tell me the font weight, it would often hallucinate that from just a screenshot. I found with these sticker sheets that it thought differently than I thought. So then it was very much about: okay, how do we meet in the middle and improve those outputs? And we actually got way better. Our primitive layer, essentially our typography components like heading and text, started rendering a lot more accurately, because when it was seeing a screenshot, I was talking to it in the way that it actually read it. I'm kind of talking to the computer vision element of it, where before I was very much just like: oh, this is Atlassian, deal with it. So I find that if you really try to go closer to the metal, you're probably going to get better results and understand why.

>> Is there an example that we can use just to get specific about it? Because I'm nodding along, because conceptually it makes total sense, but I'm also like, okay, but where? And what is the change that would allow you to close that gap?

>> You can see the prompt here, which is: I want to calibrate your computer vision model. Can you add the image in and then create bounding boxes? So this is very much me trying to get it to tell me about these elements and what it picks up, because what I was suspecting is that for certain components that have a weak border, or things with more white space, it was really struggling to pick up their bounding boxes and then recreate them. So this really told me a lot about the limitations of the computer vision. And what I found as well is that the more complex the screenshot, the less it picked up; it actually truncated and missed a bunch of elements. So, things lower down on the side navigation of a screenshot, as an example. Let's say this is our side navigation: things around this prompt box would just be completely skipped, because it has probably reached a context limit of how much it can ingest. And that really told me a story: okay, first of all, I now have to instruct our thousands of people prototyping that when you're uploading a really complex screenshot, it's going to miss stuff, so break things down into sections, like the top nav and the side nav. So it really helped me improve the training I gave people, but it also showed me, as you can see here, the bounding boxes it created and how it interprets certain elements.

So I got it to name things as well. You can see with this one, we have a component called icon tile. I was really interested to see how it picked up these smaller tiles with this kind of subtler imagery, which we have a lot of in our products. My suspicion was that it wasn't picking them up and that's why it was hallucinating them, and it was true. You can see at this size it didn't pick up the color at all. It just picked up the icon. So I was like, okay, I need to provide more specific instructions to improve the outputs here, or help users go: okay, if something's being hallucinated with an icon tile, which is this component, you're probably going to have to use Figma MCP, or upload a smaller or zoomed-in screenshot of it, and update it piecemeal, because if you're uploading a big screenshot, it's just going to miss that. So this didn't always influence the design system instructions that generate the code. It very much also influenced how I talk about getting the best results from that one-shot with the thousands of people using prototyping.

Hey, really quickly, let me tell you about the all-new Dive talent network. I've hand-assembled over a hundred of the most talented designers and builders that I know so I can recommend them to my favorite companies. So, if you're listening to this and you're open to new opportunities, the talent network is anonymous and super low pressure. It's just an easy way to see what's out there without having to post on social media. So, if you're interested in joining, or maybe you're looking for your next hire, head to dive.club/talent.

>> I mean, gosh, you're dealing with a heck of an adoption problem, I would imagine, and a lot of education required to get people comfortable and out of their comfort zones and whatever tools they were familiar with before. So, what have been some of the tactics that you've had the most success with, and maybe any key learnings along the way, in terms of how do we set people who are not as technical up for success with AI prototyping?

>> Our first problem is getting people to even see any training you've done. We've got hundreds of designers, hundreds of PMs, thousands of engineers; we've probably got around 11,000 employees. So there are thousands of people that are going to be using AI prototyping at any one time. It's almost impossible to even reach that many people. Even if I DM'd 100 people a day, right? It's going to take time. So that's the starting point in your brain: not everyone is going to see this thing. And I've done a lot of adoption training, design system training, before, and I've found that if you build it, they do not come. You really have to make sure it's as simple as possible, and in the different formats that people learn in. So the first approach we took was: okay, Looms. Let's focus on Looms, let's focus on video guidance, but also written guidance. We found that really effective in providing a baseline of 101 prototyping essentials, right? Here's just the basic stuff that you need to know to succeed. One of the videos I recorded was just a UI tour, a three-minute UI tour: this button does this, this button does this, this button does this. Because what you'll find is that a lot of people aren't that exploratory, right? They've maybe got a top-down push: you need to use AI prototyping. AI, AI. Maybe they got a bit of pressure. They may not have time to click every button and understand what it means. And I found it was really effective just to tell them: this does this, this does this, this does this. That's important. That's not important. Don't worry about that. That created a really nice baseline for people, and then you start building up the confidence and the knowledge and more advanced topics. But the 101 was really: here's the UI, here's this new thing you have to learn, and here's just some tips on how to get the best result.

So like that thing I mentioned before: if you want to recreate a screenshot, which is one of the most common use cases, right? Like, I don't have this thing in Figma, it's too hard for me to constantly recreate it in Figma, I just want to upload a screenshot and start from there so I can iterate, or add a text box, or whatever. Then it was providing guidance: okay, it's going to hallucinate, or it's going to get things wrong, unless you do things in this kind of way.

In terms of distributing that guidance, we've tried to be as creative as possible. We had a dedicated AI builder week, which we've talked about publicly, where our president, Anu, basically said: everyone, thousands of people, tools down for an entire week, and you're all just going to learn AI prototyping. And it created a lot of aha moments for people, where they were given that space. We had training, we had guest interviews, we had master classes. I did a 500-person Replit master class across two sessions, which was incredibly stressful but incredibly valuable, because we went through it step by step: let's build something together. And it gave people those moments where they're like: ah, okay, I can't do this in Figma, this is too hard. If I'm creating a filtering experience, I can't build these interactions in. But in Replit, in Figma Make, I can do it in five minutes. I can just instruct it: build this filtering sequence, and it will literally work and feel real. So we made sure we had a lot of space for those moments.

Now we're in a phase where it's all about: we've got a 101, we've got a 201, we've got a lot of guidance. How do we get that to the people that need to use it? Maybe inactive users, or people who haven't seen the course material before. What we actually did is create a Slack bot. Well, first I created it in Cursor in like 15 minutes, and then I created one in Replit where it's got kind of a WYSIWYG dashboard, and we have an AI enablement bot. What I found is that if you add a bunch of people to a Slack channel, they're probably going to ignore it at our scale. You know, you've got Slack fatigue. If you DM people, they'll respond to you pretty quickly, and they'll almost feel like they have to respond to you. So I was like, how can I replicate that at scale? If I put people in a channel and do an @channel or an @here in whatever communications tool, most people are going to ignore it. So what I created was a Slack bot that creates a group DM. Our design ops leads can be the key recipient. They can add in maybe 400 people that they want to reach out to, maybe inactive users or people who haven't used it recently, and it will group DM from the Slack bot to each person with a kind of canned message: "Oh hey, [first name], we've noticed that you've been an inactive user, can you fill in this survey and let us know?" And we got a lot of engagement from people: oh, I didn't realize this was a thing, or, oh, I'm so sorry, I've been busy, XYZ. And it gave us a lot of really good feedback on why people aren't using the thing, or where the gaps are, without having to DM hundreds of people constantly. So that was a real unlock for us on how to enable at scale.
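
For anyone curious how the group-DM mechanic might be wired up, here is a rough sketch against Slack's Web API; the real bot is a Replit app with a dashboard, and the batching and message copy here are stand-ins.

```ts
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Open one group DM per inactive user, anchored by the design ops lead,
// and send the canned nudge. conversations.open with multiple user IDs
// creates (or reuses) a multi-person DM.
async function nudgeInactiveUsers(leadId: string, userIds: string[]) {
  for (const userId of userIds) {
    const { channel } = await slack.conversations.open({
      users: [leadId, userId].join(","),
    });
    await slack.chat.postMessage({
      channel: channel!.id!,
      text: `Hey <@${userId}>, we noticed you haven't tried AI prototyping recently. Mind filling in this quick survey?`,
    });
  }
}
```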

>> Are there other challenges or things that you're thinking about in terms of maintaining this system and how everything works, especially as your product surface area continues to expand?

>> One of the other things we've had a lot of success with, that I've seen from the engineering side, is a lot of design leaders going out and sharing prototypes. A lot of this is coming in our design reviews, top down, in a weekly Loom sort of thing: oh, here's a prototype I built in this new tool, this new technology, or here's an idea for how we could do this with Rovo. Seeing that from your skip lead, or your leader in your design space, I think has promoted a lot of virality, where it's like: oh, my boss is doing this, or my boss's boss is doing this. Maybe I should do this as well. And then the good demos in our design reviews appear to be the prototypes. So it's like, oh hey, if you come in with just a static design, it might be a great design and everything, but if you come in with a prototype, people will actually get a little bit excited, and there's a lot more commentary on the Looms. That's what I'm seeing from afar. So I think that's really helped stir this thing as well: it's not just flashy, oh, we could use it, here are some learnings; people hit the ground running and just started using it on a daily basis, both high-level leadership as well as, you know, low-level ICs. Yeah, that's been cool.

>> I've noticed that, too. There's a buzz when you share something that is fully functional. It's just cooler. There's a novelty factor still that draws people in.

>> Yeah. In terms of maintainability, how do we maintain this thing today? I'll show you a little bit behind the scenes of what these templates actually contain. So, Replit has a lot of boilerplate, and we've put our stuff into that boilerplate. These documentation files are ours. This is the guidelines file that Louis showed. We have slightly different ones for Figma Make and for Replit. We split out the examples MD and the guidelines MD because they get indexed a little bit separately with Replit, per their guidance. We kind of let them guide us on: hey, how should we serve you this 2,000-line file? I'll get mostly into the template here, but a lot of it is just: let us define the exact base of the template.

So if I go into our actual front-end monorepo, this is good code. That's the baseline: you're starting with good code, you're starting with relatively production code. This might even be better than a brand-new POC of an application that someone might spin up. So we'll say this is front-end blessed stuff, for the most part. There are of course little bits in here that may not be super blessed, as we're trying to get it working in this environment, because we don't have GraphQL and the whole teamwork graph and everything behind it. So there are some mocks and other things; like, the way we do theming and routing is not 100% perfect. But I'll go into Cursor here, which is what I use, and show you how we maintain it. We have a folder for AI tooling that sits within the rest of our design system. We have a lot of design system packages, and we house all of these templates within the code. So they are actually production code, or at least they pass most of the linting and type checking (apparently aside from that one) in our codebase. And this helps us maintain it, so that if we change our theming, for example, this will break. And if we want to update our theme, and it's no longer the refreshed version of our typography theme, people will update this, and then we can go and deploy that to Replit so things don't break over time. A lot of it is just: literally use the design system. Now, some of this we're going to be transferring into our CSS-in-JS library shortly, but for the most part we break this down into a lot of different components, a lot of different apps. Currently, most of these, as Louis showed, are just basic "this is how to use the template" apps. We might build these up more, so Trello is actually a proper clone of the template, as opposed to just a hello world page.

And then we bake in all of our feature flags. We have a bunch of feature flags which basically control functionality that effectively is required to be turned on in order to have an Atlassian-like experience, to some extent. Not that many, but things like our new logos, some of our visual refresh stuff, or typography stuff. We take all of this, and effectively we bundle it up and do a distribution.

These guideline files are actually generated from a very large amount of other content. For example, if I go into the avatar here, everything in our avatar package that we have defined is automated and maintained from within our codebase. So instead of going through and, when we want to document something, documenting it in five different places, which is what happened: atlassian.design, atlaskit.atlassian.com, llms.txt, MCP, and now prototyping. All five of those places had different content, and at least two or three of those places were vibe-coded content. So it's like, okay, how can we make it so this actually represents what we tell our customers on atlassian.design, and make sure these are actually our usage guidelines? I think we're in a state of making them better. I don't think they're in perfect parity; we're working through this in the next half. But our approach so far has been some sort of structured content. We're not going down the full DITA XML sort of approach, but we are going with something a little bit simpler that we think we can use to maintain, I guess, the couple thousand packages that live within our monorepo.

monor repo. A lot of these are just strings or even markdown files in some cases and we basically define what our component is, how to import that. So

with this we grab all the types. With

this we actually kind of give it enough context so that the LM ideally understands what an avatar component is in its own language like it's a profile photo or a representation of a user. Uh

we give it a description. We have

examples. We even have examples that are purely for AI in this case rather than all of our internal examples which are a little bit less clean. We have ones that

are just like this is what I want AI to build with. very simple, clean, not all

build with. very simple, clean, not all the 4,000 different cases we have for this component, but just basically the three relatively basic versions cuz we realized if you give it all of those

different types, it will hallucinate more and more and more and think, well, I can do this mixed with this mixed with this, right? And then the answer is no,

this, right? And then the answer is no, you really can't. So, we we try and describe, I guess, the 80% mark for most of these components. The same goes with our content and and usage guidelines.

This is not everything we care about on this component, but is probably 80% of usages, which is kind of our our target in prototyping to be honest, is is the

80% mark. We take all of that, these

80% mark. We take all of that, these offerings JSON files, and bundle it up and just distribute it. So, we have a

bunch of sort of codegen and and scripts basically to to crawl the entire monor repo and grab a very large amount of files. So we'll just we'll just run sort

files. So we'll just we'll just run sort of the distribute command. And then what this does is it distributes um effectively a template for Figma make and replet. So we have a fast and full

and replet. So we have a fast and full version of our templates. We have a couple others but they're not really hooked up right now. So if I go in here, everything that you would see in here, all of the examples, we generate this,

yeah, from the avatar component. All the

guidelines, we generate that from that avatar component. All of these apps, we

avatar component. All of these apps, we generate that directly from our monor repo and what we do is we just copy all of these files. We're looking for a way

to sync them a little bit more directly to maintain them even better. But yeah,

the the goal is really just been about like how can we automate the content that AI needs because we've had very poor success with just asking AI to go

to elastino. design and read our

to elastino. design and read our components or index them or expecting that it knows what our lozenge or our avatar means. Especially whenever we get

avatar means. Especially whenever we get into the the nitty-gritty of all of those type interfaces or our usage guidelines or especially when we're talking about like translating from

tailwind whenever it just hallucinates and it can't understand that oh this image should be an avatar. The best way we can change that is just going in and being like well let's let's handle this

edge case and let's let's fix it.
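
To make that pipeline concrete, here is a hedged sketch of the crawl-and-bundle step; the offering.json filename, its fields, and the output layout are assumptions for illustration, not the actual scripts.

```ts
import { promises as fs } from "fs";
import path from "path";

// What one package might declare alongside its source code.
interface Offering {
  name: string;         // "Avatar"
  importPath: string;   // "@atlaskit/avatar"
  description: string;  // phrased in the LLM's own language
  aiExamples: string[]; // a few clean, AI-only examples, not all 4,000 cases
}

// Crawl the monorepo, gather every offering, and concatenate them into
// the guidelines file the prototyping templates ship with.
async function distribute(monorepoRoot: string, outFile: string) {
  const packages = await fs.readdir(path.join(monorepoRoot, "packages"));
  const sections: string[] = [];
  for (const pkg of packages) {
    const offeringPath = path.join(monorepoRoot, "packages", pkg, "offering.json");
    try {
      const offering: Offering = JSON.parse(await fs.readFile(offeringPath, "utf8"));
      sections.push(
        `## ${offering.name}\nImport: \`${offering.importPath}\`\n` +
          `${offering.description}\n${offering.aiExamples.join("\n")}`,
      );
    } catch {
      // No offering file in this package; skip it.
    }
  }
  await fs.writeFile(outFile, sections.join("\n\n"));
}
```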

>> Zooming out for a second, how long have you two been in design systems roles? And how big of a departure have the last eight months been from what you're typically used to? Because it feels like you're almost inventing an entirely new sub-discipline within what it means to run design systems at a large company.

>> I've been here for the past three and a half years on the Atlassian design system. In the past, I've worked on much smaller design systems, but definitely nothing at this scale. So I would say AI is really a step change for us. For example, I presented at Config, I think in 2024, on how we do adoption across all of Atlassian: how do we roll out our visual changes, our new navigation, our dark mode, that sort of stuff. And with AI, it's changed a lot. How would I do adoption with AI? It is absolutely night and day. With AI, it's a lot more about: okay, how do we document this so that it's available in the LLM's memory at all times? Whereas typically with a design system, the way we would do these things is through programs and people and cultural reinforcement, design reviews, that sort of stuff. Now it's like: okay, can we just tell it exactly what we care about? Can we tell it exactly what it needs to do? Let's cut out all the fluff and just give it the bare minimum. And then the other side is: okay, it primarily knows, let's say, shadcn, Lucide, Radix, whatever it's trained on. How can we make our components more closely aligned with that? Why do we call it a lozenge component if the industry doesn't have a lozenge component? Why do we call our prop "appearance" versus "variant"? Things like that are where we're getting into: can we just shift our system to be more, I guess, general, for lack of a better word, like the rest of the industry?

>> I was an adoption person too. Kyler and I worked together on adoption. I've been with Atlassian three and a half years now. I was kind of the Figma plug-in guy, then the Figma guy, and then I was the designer adoption guy. And I really saw AI as an adoption story, just like Kyler, in the sense that it's a threat to adoption, essentially. I saw all these things being generated that weren't using the Atlassian design system. And I actually wasn't an early adopter of AI; I was probably quite behind on a lot of stuff. And now I'm the lead design technologist on the AI pillar, because my passion and my drive came from an adoption lens: how do I enable product managers, product designers, content designers to generate Atlassian experiences with the design system? That's always been my mission. Before, it was very much: how do I increase adoption in Figma, and how do I increase confidence that then translates into code? This is now just the next step for me. And now, obviously, I'm a lot more involved in the overall AI effort, but at the beginning, for me, it was very much that design system adoption lens and increasing that fidelity. Interestingly enough, I was actually initially asked to be a tool lead for Figma Make as we were upskilling on Figma Make, and then it massively scope-spiraled into me essentially being responsible for the design system integration for the entire organization, with Kyler, to enable these thousands of people to do AI prototyping. So it's crazy how things can completely shift your entire mindset in the space of eight months, where I went from being a lead designer working on adoption to a lead design technologist leading up an AI pillar with this massive scope of work, in such a short space of time. So I'm very grateful for it. But sometimes I look back and I'm like, oh wow, six months ago we were just doing that. It's crazy how mature you can get so quickly.

>> Well, I mean, six months is an eternity in today's day and age, with how fast things are accelerating. So I guess before I let you go, I want to use that as a launching point to look ahead into the next six months and beyond, because you two are thinking about this a lot and you're seeing where the bottlenecks exist. I'm sure you're probably asking these questions: oh man, what if we could do this? Maybe this would unlock a different set of workflows over here. When you look into the future, what are some of the things rattling around in your mind that you get excited about?

>> We're trying to look at what an AI-native design system looks like. We don't know the answer necessarily, but it's very important to look ahead: what does the future look like in three, five, however many years? And my opinion is that it doesn't look too different from today, because the AIs are just going to get better and the tools are going to be more holistic. I'm of the opinion: what can we actually do today to bring the future forward? What agents can we create, what tooling can we spin up, what context can we create to leverage that, and how can we make our own team AI-native? So the future is actually really interesting, because you look ahead to where you need to be, but there's actually so much you can do today to get there. It's probably going to be duct-taped together, though: there's going to be an agent that does one thing really well, and then you have to switch to another agent that does one thing really well. That's maybe today's model. What I feel the future holds is something end-to-end, where maybe it's just one agent, one tool, that will handle the entire software delivery life cycle. I don't know what tool that would be. I don't know if it would be third party, or if people build it first party. Who knows how powerful AI can be in three to five years. I can imagine it's incredibly powerful, or the tools that people create are incredibly powerful, but I'm very excited about the way it's going, where I feel like organizations are going to be able to truly create velocity in the way that they want to write, create, and deliver to customers.

At the moment, we're in kind of a fixed mindset, or a fixed tool set, where traditionally the software delivery life cycle was: you have requirements, they get translated into design, usually in Figma or another design tool, and then an engineer has to read the design and build it. Now we're going into a fluid model where anyone with any tool can essentially ship to a customer, and we need to figure out how to support that. The design system remit has just blown up into: anyone in the organization can essentially ship. That's a really challenging problem to solve, and I think we need to build more tooling, more linting, everything, to actually tackle that new kind of persona, because I feel like the design system is the core of an AI-native organization, or a truly high-velocity organization.

>> The way I see it, as Louis said, is that the remit of a design system is blowing up, and I don't think this is new for other companies. A lot of companies' design systems are front-end platform teams: they are the entire front-end platform, or they own the front-end platform and happen to have a design system within that, or they ship design stuff. For us, a lot of it is: how do I enable a designer to ship to a Jira customer? And that's a very scary thing, a scary question, for a lot of people. But at the same time, we've got a couple of designers that have shipped to production, possibly in smaller apps. I can't think of an explicit one in Jira, but we're going in that direction.

But the further we go in that direction, even the further we go with prototyping, people are no longer asking about how to work with the design system. The design system is almost like: check, we've done that. We did that in six months. I wouldn't say it's perfect. We can maintain it better. We can fill in a bunch of the gaps. You know, there were bugs in the stuff we showed you, but for the most part, it's good enough to sell the idea. But prototyping for us, I think, will not stop at just showing an idea. Ideally, it will go all the way, further along: can we take that idea and put it into a pull request? Can we take that idea and put it into design campus? Can we take that idea and put it into production? I think the context that's required for that, for an LLM to know how to build within Atlassian, what's inside my head and hundreds and thousands of other engineers' heads, in order to actually land that, is a little bit scary. So the way I look at it, just some raw numbers: the Atlassian design system is about 75 packages in a front-end monorepo, which is not the entirety of the Atlassian front end, which has about 5 to 10,000 packages. So if you want to build within Jira, not only do you, or the LLM, need the context of our 75 packages, but also the 5,000 packages, all the different libraries, the tools, the engineering, the content and accessibility standards, all of those things that are expected of you when you're working in Jira.

So we have a long road ahead of us, I guess, to go outside of just this 1% box that is the design system. I know the design system is the largest 1% box; it probably makes up 50% of the React code that you might see in Jira. But for the most part, it's that other 50% that will be very, very hard, where we have to document, I guess, those second-layer systems, or even the third-layer systems: how to write tests, how to use our version of CSS-in-JS, how to do all those technical things. Because an engineer can vibe code and say: oh no, you should use Compiled instead. Oh no, you should use this internationalization library; it's specific to Jira. But a designer has no clue about that. Even myself, as an engineer, I don't know how to work in Jira. So I think that's where we're going in a lot of ways. That's how we'll get to the end vision that I think we all have, which is: can we empower anybody to at least open a pull request and get peer review on that, customer review on that, and, you know, experiment that into production? I think that's kind of, you know, the five-year goal.

>> Well, you've taken a heck of a first step here. I'm really impressed by everything that you all have shared. It's definitely changed the way that I'm thinking about the role that design systems play, and I'm just appreciative that you guys came on here and pulled back the curtain. I'm sure that you have inspired a lot of teams out there. So, we appreciate you taking the time today.

Before I let you go, I want to take just one minute to run you through my favorite products, because I'm constantly asked what's in my stack. Framer is how I build websites. Genway is how I do research. Granola is how I take notes during crit. Jitter is how I animate my designs. Lovable is how I build my ideas in code. Mobbin is how I find design inspiration. Paper is how I design like a creative. And Raycast is my shortcut every step of the way. Now, I've hand-selected these companies so that I can do these episodes full-time. So, by far the number one way to support the show is to check them out. You can find the full list at dive.club/partners.
