First Block: Interview with Daniela Amodei, Co-Founder & President of Anthropic
By Notion
Summary
Topics Covered
- Founders discover everything they're bad at
- Perfectly harmless models would be useless
- AI models improve faster than you think
- Product roadmap is really a research roadmap
- Safety and capability aren't actually in tension
Full Transcript
I oversaw the team that did GPT-2 when I was at OpenAI, and I think the distance between GPT-2 and Claude 1 feels smaller than between Claude 1 and Claude 2. It feels like things have just rapidly, rapidly improved. But early days of Claude training, Claude would sometimes sort of invent creative modes. So it was like, I'm stuck in dragon mode, and we're like, I don't know what that is or where you got that from. There was quite a lot of messiness in the middle to kind of get to the product that is available on the market today. Welcome to First Block, a Notion series where founders and executives from the world's leading companies tell us what it was like to navigate the
many firsts of their startup journey and what they learned from that experience. I'm Akshay Kothari, co-founder and COO. Our guest today is Daniela Amodei, co-founder and president of Anthropic.
Anthropic is the AI safety and research company behind Claude, the frontier model used by millions of businesses and consumers for its emphasis on safety and performance. Thanks for coming over.
Yeah, thanks so much for having me. We’re so excited to have you on this program. I
guess to start off, why did you start Anthropic? Well, first of all, thanks for having me. I’ve
been looking forward to doing this really all week, really all month, so it’s a pleasure to be here. Why I started, I mean it wasn’t just me. I had six co-founders and we had all been working together at OpenAI and really kind of felt there was this opportunity for us to go start something on our own where we were really putting safety at the center of all of
the work that we had done. We were a cohesive team. We'd worked on GPT-3 and scaling laws, and we really felt like there was this kind of unique moment where we could go off, build something really from scratch where we were able to ensure that safety, and this kind of focus on keeping humans at the center of generative AI, was something we could do.
So one thing that’s unique is one of your co-founders is also your brother.
Can you tell a little bit more about how has it been working with a sibling?
So Dario and I are, I think, pretty unusually close for siblings. We have been really close since we were little kids, and I think we always kind of had this dream of getting to work
together on something, and our careers took fairly different paths. He's a physics PhD and he really started his career on that side of the house, eventually kind of moved into working in biophysics, thinking about ways to apply biophysics to health, and eventually moved
into tech, worked at Google Brain, and I think at the same time I was sort of going on this other path of working in global health. I spent time working in politics and we both eventually sort of wound up in tech, but we were always really united around this sort of vision of wanting to do something good for the world. And so it was really kind of a special opportunity we saw to be
able to go off and found something together. In one of your interviews in Time recently, you said since you were kids you felt very aligned. It's something that I guess you don't often hear, so do you two ever disagree, and how do you handle that?
Yeah, I mean do siblings ever disagree? Definitely never. We've never disagreed on anything. Of course we disagree sometimes, but I think Dario and I, again, we're sort of unusually united around this sort of vision of the type of company we want to build, the sort of problems that we want to work on. And something I think that makes our relationship unique, especially at
Anthropic is we have these sort of clearly divided kind of zones of ownership at work. He’s this sort of technical visionary and I think really saw where the field of generative AI was heading, and I feel like he is often thinking about the 5 to 10 year timeframe and really my role is around
what is the one to two year timeframe, how do we take these incredible technical visionary ideas and turn them into something tangible that people can use today? And so while that’s, it doesn’t mean we never disagree, it just means there’s sort of areas of ownership that make it a
little bit easier to be in concert together. Just different horizons, and it's very complementary in some ways. Exactly, exactly.
Yeah, that’s really cool. Just also looking at your last decade, you worked at some storied companies from Stripe to OpenAI to Anthropic. How is it different to be a co-founder now? What’s different about that? It’s a really funny question because I think when
we started Anthropic, I sort of assumed like, oh, if you've done one hypergrowth company, you've kind of done them all. When I started at Stripe, it was around 40 people and I left at around 1,200. OpenAI was a sort of similar starting size to around 200 or 250, and I thought that zero to 40 range, it can't be that hard, I've sort of seen it before. But it's completely different. And I think being a founder you really quickly learn all of the things you're bad at, which are many. And in a lot of ways I was actually very fortunate because I had so many co-founders that were sort of on this journey with me, and often it's one person or maybe two people
and we were all kind of able to learn together. But there’s so many things that go into the very sort of beginning founding of a company. We had to figure out simple stuff like how to run payroll, which we messed up multiple times. We had to find an office in the midst of Covid that was its own kind of interesting journey, but also fundraising and figuring out who we wanted to hire and setting
up initial processes for a company was completely different from zero to 40 than from 40 on.
I guess one other unique thing is maybe having six co-founders, how is that for you? How do you all distribute the work you all do internally? One of the things that is special about this founding group is that we had all been working really closely together already, and some of the relationships actually even predated OpenAI. There were folks in that group who had worked
together at Google Research and Google Brain, and even before that, Dario and Jared, who's our chief science officer, knew each other from a physics fellowship from 15 years before. So there
was really, going into it, a lot of context and sort of trust, a lot of time that we'd really spent working together. And so in a lot of ways when we kind of got there, we all had a great sense of each other's kind of strengths and weaknesses. It felt really easy to divide up the work, and as the company has scaled, we've changed a lot of what it is that we focus on. Some of those co-founders are still managers, some of them have gone back to doing individual contributor, really senior
high level work. And it really has been kind of amazing to just get to watch this group of people really spread throughout the company and kind of seed that culture that we initially started with.
So one thing we were talking about before this is that you have a very broad role overseeing all of the managers inside the company. Can you talk a little bit about how you think about the technical research side of the world while at the same time also thinking a lot about the commercial go-to-market? And then maybe the third thing, I think, is foundationally how you're building the company up. How do you balance these different areas?
It's such a good question and it's also evolving as the company evolves. There's some component of it that feels like any startup. We're sort of building the proverbial airplane while it's taking off. But I do think something that is a little unique about it just compared to other places I've worked is we really started and kind of invested in the research organization for the first year, year and a half of time. We didn't even have a go-to-market team until 2023. And part of what was
so interesting about that was we really almost got to build this kind of cohesive culture with this clear set of goals and then we almost went and did it again, right? We said, okay, now that we’ve gone and developed these really transformative, powerful safe AI systems, how can we now
bring them to market in line with our values? And I think part of what's been so incredible about that journey is you really build one part of the airplane. We're like, okay, we actually have the base now and now we're building the controls or we're fleshing out the wings. And of course there's times where you're like, oh man, this knob, I should have that knob. I built it too slow. I should have built this other knob slower. But in general, I think it's part of the joy and uniqueness of this type of business to be able to have research impacts and publish papers and have policy impact and coordinate with governments and civil society and also be a tech startup that's
building a product that’s pushing it to customers, that’s seeing how the technology is really being used in the world on kind of a day-to-day basis. Do you have a schedule? How do you think through, if you’re, let’s say this week, is there a way you think about these different things sort of different days or does it change every quarter, every six months?
Well, let me tell you what the perfect day and the perfect week would look like, and then a little more of the reality. Everyone has their own kind of quirks, but there's so much context switching in any kind of management or leadership role. But I think for a place like Anthropic in particular, it's really heightened because thinking about safety papers
and what we're publishing and the research we're developing and what areas we should be exploring is really different from trying to go win a big customer or interviewing a head of product. Those are just completely different parts of your brain. And so I generally try to have my days a little bit aligned, like focus on research on Monday and policy and comms on Tuesday and engineering on Wednesday. In reality, it's probably like a 70% hit rate just because things come up, interviews or rescheduling or things like that. But it sounds a little bit tactical,
but I do think having a little bit of space to really go deep on each of the areas is the dream.
Let's talk a little bit about hiring. I think this is something that a lot of entrepreneurs are very curious about. Could you take us back to, so you have six co-founders, you start with that. Could you talk a little bit about maybe the first few hires you all made? How did you think about building the team and the processes maybe you thought through?
Anthropic is so unusual in terms of the types of people that we are looking for and that we attract. We're a highly, highly interdisciplinary group of people, and so what that means is we have people on staff who were former neuroscientists or biologists in their previous life who are kind of coming to study large language models the way that they used to study the human brain.
But we also have product engineers from companies like the ones I used to work for who have been in Silicon Valley and bring that kind of expertise to the company. We have policy leaders and business leaders and operators. And so I think our kind of early hiring, we were really looking for people
that understood the mission of the company and the values and really embodied it. They were there to help build these very powerful transformative systems, make them accessible, make them fair, and do it in a way that felt safe, and really with humans, again, in the driver's seat. All of us did so many different things in the early days, and we're only really now getting to the stage where we're looking much more for specialists than generalists.
Did anything change going from that first technical hiring? It sounds like the first couple were researchers. It's always interesting to me. I think for us, we went from tapping a lot into our networks, and in the early days of Notion it was probably almost a subculture of people who really thought a lot about tools and craft in some ways. And then we sort of built it more into a process. And I'm curious if you all went through a similar evolution.
That’s a great question. I think that does resonate. That sounds pretty similar. When we
first got off the ground in 2021, we were mostly researchers, mostly ML/AI researchers, and that's a fairly small group of people in the world that kind of do this type of work.
And I think that was not entirely, but almost it was a lot of people that we knew through extended networks and things like that. Also, as you sort of expand the set of things that you’re doing at a company, you can attract more and different types of people and I think we have
grown really quickly. And so my sense is that we kind of went through that change a little bit more rapidly, sort of at a faster pace maybe than your average tech startup.
So let's change to a different topic. Let's talk about Claude 2. Congrats on the launch. Recently
you all talked about it as an AI assistant that is being trained to be honest, harmless, and helpful. I guess my question to you is how do you train the model to reflect these values, that HHH framework that you talked about? Helpful, honest, harmless, we use it internally all the time, right? We'll sort of use it for talking about emergent behavior that we're seeing in Claude, for how we approach it with customers, for the training that we do really of the large language models themselves. But internally we have different teams that basically focus on each of those. And it's a little more complicated than that because sometimes the teams work on things together or there's some overlap, but each of those challenges is really pretty different in terms of how we approach them. So honesty is of course trying to reduce the ever-pesky hallucinations of large language models. Every LLM in the world today suffers from some hallucination
and the honesty team’s job is to try and get that number as low as possible. On the
helpfulness side, really what we’re looking for is this model answering your question.
So if you are talking to it or if it’s searching over something, is it really doing what it is that you want it to do? And then on the harmlessness front, we’re trying to reduce the chances of it producing toxic or biased outputs, but also trying to reduce the chance of it just helping somebody to commit a crime or produce violent content or things like
that. And part of what's been so interesting from a research perspective is that many of these kind of HHH qualities are sort of in tension between them. You can have a perfectly harmless model today if you want one. It just wouldn't be very helpful. It would just say, I can't answer your question, to every question you ask it. And so really I think the kind of interesting challenge and opportunity is how to kind of raise the watermark on all of them together.
Can you talk a little bit more about the tradeoff? That seems super interesting in that I guess another way to think about it could be maybe something's very honest, but that could be more harmful. And so can you walk a little bit through how you all think about these trade-offs and how you get the watermarks up on all three? Yeah, so first of all, it's definitely more of an art than a science, more than you would expect. We use constitutional AI to help raise the watermark on all three together. We have instructed Claude to say, these are the goals, we want you to be as helpful, honest, and harmless as possible. But I think even beyond that, even if you were to get Claude to be sort of a perfectly performant model, there would still be trade-offs that an individual or a business would have to make. Depending on
what their goals are for the model, you might want a really creative partner that is going to develop more kind of risky language. And so you might want to tune down the harmlessness score just a little bit, but that still has to have some guardrails around it. But that really varies depending on
the business or the use case. So even modulo not being at the point where the models are perfect, you still have that set of trade-offs. So for a while we have really been exploring this trade-off between especially harmlessness and helpfulness, but I think honesty plays into that too. And I don't think there's an easy answer. And again, even if we got them perfectly, your mileage would probably vary. Are there any fun stories associated with training Claude? Yeah, many fun stories. I mean, I think even just zooming out a little bit, something I reflect on a lot is just how much these models have improved in such a short period of time. I oversaw the team that did GPT-2 when I was at OpenAI, and I think the distance between GPT-2 and Claude 1 feels smaller than between Claude 1 and Claude 2. It feels like things have just rapidly, rapidly improved. But in the early days of Claude training, at one point Claude was really convinced that the best way for you to lose weight was to go on an all-potato diet. We have no idea where this came from. It was just really stuck on this idea. For a while, Claude would sometimes sort of invent creative modes. So it was like, I'm stuck in dragon mode, and we're like, I don't know what that is or where you got that from.
What are some other good ones? I mean, there was a time where we had really, really been tuning up harmlessness in one branch of the model that we were playing around with. And so Claude was just really concerned about your wellbeing, whatever question you would ask it. So Claude, we would say, Hey, can you tell me who the 34th president of the United States was? And it was like,
I'm so concerned about you. Here's a link to some therapy guidelines. If you need help, please reach out to a friend. So definitely there was quite a lot of messiness in the middle to kind of get to the product that is available on the market today. I love french fries. I'm going to tell my wife that Claude 1 agrees with my diet. It's only potatoes, all day potatoes. Yeah, plain potatoes actually. This whole aspect of evaluating the models, evaluating what your assistant is doing, is super hard and we struggled a lot with that also. I think people have such different use cases. How do you think through all of that?
Yeah, I think it's such a prescient question, especially for generative AI. And I don't think we have a perfect answer, but I think this kind of concept of tunability is really important. And
that's not to say that there shouldn't be, again, kind of guardrails, right? There's sort of a range within which we say, Hey, there's a level of harmlessness that is required in the model, or a level of honesty or helpfulness. But I do think that there's some kind of room for exploring and fine-tuning with an individual customer, just because, even outside that framework, you might just want the model to respond differently. And something that Claude, I think, is uniquely good at is taking that kind of direction. So if you are trying to use it for helping you to create fiction, you can say, Hey, I want you to be more flowery in your language, or can you kind of write in this tone? But if you're writing a business brief, you can say, this is a technical
business document, please keep your responses short. And I think some of the magic of using this kind of junior assistant comes from the ability to really tell it how you want it to work with you.
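This kind of tone direction maps naturally onto a system prompt. As a rough sketch, and only a sketch, the tone presets, helper function, and model id below are illustrative assumptions rather than Anthropic's published pattern, a caller might assemble a request like this:

```python
# Illustrative sketch: steering Claude's tone via a system prompt.
# The tone strings and model id are assumptions for this example,
# not an official Anthropic pattern -- check the current API docs.

TONES = {
    "fiction": "You are a creative writing partner. Use vivid, flowery language.",
    "brief": "This is a technical business document. Keep responses short.",
}

def build_request(tone: str, user_text: str) -> dict:
    """Assemble keyword arguments for a hypothetical Messages API call."""
    return {
        "model": "claude-2",          # assumed model id for the era discussed
        "max_tokens": 512,
        "system": TONES[tone],        # the tone direction lives here
        "messages": [{"role": "user", "content": user_text}],
    }

# With the official SDK this would then be roughly:
#   client = anthropic.Anthropic()  # requires ANTHROPIC_API_KEY
#   reply = client.messages.create(**build_request("brief", "Summarize Q3."))
```

The point is simply that the same user text, paired with a different system string, yields very different behavior, which is the tunability being described here.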
That's a good segue. I've always been curious about this question, especially for companies like Anthropic. I mean, you're building this technology and you talk about the junior assistant. How do you all use Claude internally? Do you all have maybe the next-generation Claude just as teammates right now? I'm just kind of curious about that space.
I do think we have found Claude immensely useful at Anthropic. I think there’s kind of a few ways that we use it just day to day. The first is we’ve just grown so quickly, we communicate so much across Slack in particular. And I think having this Claude and Slack integration was actually
something we built in-house first. We were trying to synthesize a lot of research discussions or technical discussions that were happening. Maybe a new hire joins and they’re like, what happened before I got here? And Claude is great at just taking and summarizing this huge corpus of information and really distilling the most important points. So Claude has been great in
helping support really the scaling of the business and the scaling of the company itself. We actually
have something called the Anthropic Times, which is Claude basically summarizing the most talked about threads in Slack. And so if you were out of the office or you just can’t keep up with all the Slack channels, you can go through and read them. But we use Claude to summarize information in meetings, in meeting notes. It’s a great first drafter. So I think really we feel it’s
important to actually make use of the products ourselves before we put them on the market.
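The Slack-digest workflow described above can be sketched as a simple prompt-assembly step. Everything here, the message schema, the prompt wording, and the truncation budget, is invented for illustration; the real internal integration is not public.

```python
# Illustrative sketch of a Slack-digest summarizer like "the Anthropic Times".
# The message fields and prompt wording are assumptions for this example.

def build_digest_prompt(messages: list[dict], max_chars: int = 8000) -> str:
    """Flatten recent Slack messages into one summarization prompt."""
    lines = [f"#{m['channel']} {m['user']}: {m['text']}" for m in messages]
    corpus = "\n".join(lines)[:max_chars]  # naive cut to fit a context budget
    return (
        "Summarize the most discussed threads below into a short digest, "
        "highlighting decisions and open questions:\n\n" + corpus
    )

# The resulting prompt would then be sent to Claude and the summary
# posted back to a digest channel for anyone catching up.
```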
So one of the things, I mean AI is obviously top of mind for everyone right now. There’s
a lot of founders working in this space. As a founder, can you talk a little bit about what it is like to build this product and company on this cutting edge, and sort of lessons you've learned that other founders could learn from? I have genuinely been amazed and sort
of impressed by how rapidly the technology is developing. And I think especially for folks who are taking the leap to found a company building on top of generative AI or using generative AI, I think part of the, again, sort of challenge but also opportunity is the research itself and the
sort of capabilities of the models themselves are evolving so quickly that product roadmap questions are kind of research questions today. Claude, you can’t rely on it to never hallucinate, but maybe in the future you’ll be able to, or it can’t yet integrate X or Y feature that you really want,
but we probably can fix that in a few months or years of time. And so I think it's almost contrary to standard founder advice, where you pick a thing and go run on the thing and don't change it until you've really achieved exactly what you're looking for from a product-market-fit perspective. I think having a little more of an eye toward flexibility and sort of imagining the potential for what this might be able to do six months from now, it's kind of unusual advice, but I think that is something that we've really found important at Anthropic too.
Yeah, it’s kind of an interesting challenge. It’s also dizzying just how fast it’s changing. And so
I think that the flip side is that things you think are really innovative today feel like they get commoditized tomorrow. And I like your advice of thinking out to a little bit further horizons of what could be possible and thinking through how you get there.
Yeah, sometimes we get questions about our product roadmap, especially from bigger customers. And
about 80% of the time it’s actually a question about our research roadmap. They’re like,
when are you going to be able to build feature X? And we’re like, well, when the model can do that.
So it’s a unique challenge for sure. I guess because it’s so fast moving, how do you all think about going first to market versus being the best in market?
I don’t necessarily think that they’re in tension, first of all. I do think we feel a lot of conviction that what we put on the market should be the best, highest quality, safest model that we’re able to produce. And I think we have a similar approach to how we
develop our products and we feel very strongly that when we bring something to others, they’re going to be depending on it, right? If you are a startup or a medium-sized business or a large business and you take the leap to integrate with us, we want you to know that that product and that
service is always going to be available and that it's going to be reliable and that we're giving you the best quality we can across all of those different dimensions that we've been talking about, including safety. And so I think really our commitment to that level of quality in our products is something that probably supersedes those questions. So we talked about you and Dario being aligned, but let's talk about alignment in AI. Alignment seems to have been top of mind for you all since you started. Can you talk a little bit about how you ensure your approach to AI remains safe and responsible into the future? This is a really important question and one we talk about a lot at Anthropic. We recently published something called our Responsible Scaling Policy,
and this is basically a set of public commitments that we are making about how we will train and test our models, and the implications for what we will do if we have concerns about the safety of them. And I think part of why we felt it was really important to put that out there now is that these models are becoming much more powerful, much more capable. And while I think there is incredible potential upside and benefit to these tools, or we wouldn't be building them, I also think it's really important that companies and sort of the ecosystem as a whole take some accountability for the potential externalities that they could create. And so I think the responsible scaling commitments are one version of that for us.
Can you talk a little bit about the trade-offs that it creates, I guess safety versus capability of the model, especially with the backdrop of more and more companies and even open source now chasing this. I suppose it's a really interesting debate internally, how you all sort of think through these trade-offs. I don't necessarily think safety and capabilities always have to be in tension. Another way of thinking about it is, to sort of be on the frontier of model performance in general, you will want the model to behave in ways that are safe, right? No customer is going to be excited about a model that is hallucinating left and right or going off script and producing kind of harmful content. And so I think there is a through line in which developing more capable systems can be in keeping with the safety mission and goals of Anthropic. I do think, though, there are other components of, again, unintended consequences of the model that are important to think about and build structures or guardrails around. The RSP addresses a lot of that, but I think it can feel easy to sort of get complacent and say, okay, well, we have these systems and safety is good for business. But I do think sort of having ongoing conversations, and we have different formats for that internally, to really talk about this tradeoff that you mentioned. It's not perfect, right? There's no perfect way of doing it, but I think having it be something that's very much at the forefront of discussion for leadership
in the company has been really helpful for us. So Anthropic and Notion, I think we have a common goal in some ways in terms of leveraging AI to make people and organizations more productive. Do you have a view, five, ten years out, of how you think people will be using AI and Claude in terms of doing their work? So one of the things we love about partnering with Notion is this idea of AI helping with productivity and sort of routinizing mundane
tasks and really allowing people to do the work that they're best suited to do. I think that's a really common mission that we both share. And I have to say, today we really describe Claude as this very helpful junior assistant. It's great at summarizing notes or taking action items. Five to ten years from now, I mean, of course it's hard to know, but I think I could imagine Claude just being a much more senior assistant, something that is actually able to help support humans in many, many different sectors of the economy. We're already seeing healthcare applications and
legal services applications, and climate tech and science making sort of early use of these systems. But I think five to ten years from now, as these models just become sort of continually more capable, hopefully you could imagine them sort of really increasing the overall productivity of society as a whole and really enabling humans to do the creative part that's really only ours to do.
As it goes from a junior assistant to a senior assistant, are you optimistic that we humans will figure out this rapidly moving technology? Will we coexist with them? Well, I guess people talk about how potentially people will lose jobs. Some people say actually they'll just use the time for more creative tasks. Based on what I heard, it seems like you're sort of more on the latter, of just people will have more time to do interesting things.
So obviously this is a super important topic and I think I would even go a little bit further and say this is something that probably should not just be left up to corporations to decide. I think part of
why we have engaged so much with policymakers and civil society groups is that we really feel this kind of responsibility for the technology that we’re building. And I think the sort of things that you mentioned, we are excited to partner with those groups on those topics. And I think the sort
of societal impact of the work that we do, we are very well positioned to opine on, but we’re not the only stakeholders. We’re not the only opinions in the room or the only ones that matter. All of
that being said, I do think that I personally feel like many of the kind of productivity gains that we’re hopefully going to get will actually help enable more creative, productive, meaningful work for people around the world. So I think every company hears community feedback
and sees overall reception to a product. Can you talk a little bit about how you all at Anthropic balance sticking to your intuition, your origins versus sort of changing course as you get some of this new information? One of the things I think that is so again,
kind of unique about this type of business is because the technology itself is sort of being built around us. I actually think that community feedback and engagement, hearing from our customers, is almost more important even than in a traditional startup. We need to hear directly
from them what the gaps are that can make the product that we’re giving them better. And of
course there’s limits within research of what we can do, but a lot of this kind of feedback loop that we’ve seen I think is really healthy, where we hear from our users, it would be great if Claude could do this thing. And then that kind of comes into our product organization, and some things they can do themselves. But a lot of the time it really then becomes a collaboration between research and
product, and then that becomes something that we ship to our users, we hear more feedback, and it kind of creates this really valuable cycle. Again, we can’t always provide every update; we wish the model could do everything. But I will say it is also interesting and inspiring to hear in sort of a day-to-day way, what are people building and what are the day-to-day challenges,
but also the safety challenges that they’re encountering in the real world. I think that does help us strengthen a lot of our convictions around safety and how we’re building the models.
I would imagine also specifically on feedback, you get feedback from all these different industries, all these different use cases. Do you all have a way internally to translate that into prioritizing what aspect of research might be you investing more versus less?
Yes. I mean, prioritization is so crucial for us, especially right now: we’re actually a really small team. On the product side, we have something like 15 engineers and five salespeople. So we do a lot of prioritizing, and that doesn’t necessarily mean there aren’t things on that stack rank that we wish we could prioritize higher. But as we’re
growing the business and hopefully soon as we have more people, we’ll obviously be able to get to things further down the stack rank. It’s tricky. It’s almost like the intersection of what is high impact and what is tractable, right? And so there are some product or kind of research goals that we have that are just really hard and we might say, Hey, we need to be working on this on a one year
timescale, but we really can’t incorporate it into our next product launch. What is something that is tractable that we can fix, that users are asking for today, that we can really give them the next time we push something out? And so of course it’s impossible to know whether you’ve prioritized everything correctly. But it really is kind of an exercise in ruthless prioritization to be scaling this quickly and to be lucky enough to have the amount of demand that we do.
Alright, so we ask these two questions for every guest. And so maybe my first one for you is describe a day in your life. How do you start your day? What rituals do you have?
I really wish I was one of these amazing Silicon Valley people who was like, I hike up Mount Everest every morning and then I fly back to San Francisco and start my day at 6:00 AM. That’s not
my story. I am a mom. I have a 2-year-old. And so I’m actually not a morning person by default, but I’ve kind of become one over the past, I don’t know, five or ten years. And so I really like to get up early. I work out first thing in the morning. I just have that as kind of me time. It really helps me focus, reset my day, get some time to myself. And then I
usually spend about an hour with my son and my husband in the morning for some family time and still try to be in the office on the reasonably early side. And whenever possible mornings are great for thinking work, I think as the day goes on, I’m better at talking than thinking.
And so it’s much easier for me to do meetings in the afternoon. So when I’m at work, I’m really, really focused on work. I don’t check my phone, I really don’t notice what’s going on outside of the walls of my office. And then when I’m at home, I really try to have some dedicated time where I’m offline and restoring balance in my life. It’s not perfect. There’s sometimes where you
sort of have to flex in either direction, but a lot of what’s been important to me is also just feeling like a human outside of work. I grew up in San Francisco. I still have a lot of friends from all different eras of my life, like high school friends who live here. And I
think making time for family, for friends, for hobbies and interests outside of work, even when it’s just sort of a narrow slice, has been really helpful and just keeping me grounded.
Alright. Final question. If you were to write a book about what’s gotten you to this point in your career, what would you title it? It was nice of you all to preempt this question for me. I actually asked Claude for help. Claude had some very hilarious suggestions, things like *An Unsuspecting Woman Thrust into the Executive Suite*. I mean, it was just really funny, but I think probably the ones that resonated with me most were *I Need an Adult*, *The Scaling Memoirs of a Generalist*, and *How Did I Get Here?* All of these, I think, really have encapsulated the experience of just going through 10 years of hypergrowth, which is a
very kind of particular experience getting to sort of three high growth technology companies in a row. But I do think that something kind of unique about this experience and just sort of my career trajectory is I truly am the most generalist generalist that there is. And it often doesn’t get talked about as kind of an opportunity for management and leadership. But I really think some
of what has been so amazing about this journey is getting to have the privilege to support all of these different parts of the company, right? Learning something about research and something about engineering and product and business and G&A functions. And I think really kind of the story of having an appreciation for the importance of every one of those disciplines is kind
of the truest expression of what I’ve learned. What was your prompt to Claude to give you that?
I gave Claude a little bit of information about me and I said, if you were to title a book about this person for a podcast interview, what would you say? I did come up with *I Need an Adult*. That wasn’t Claude, that one was me. Daniela, this has been so great. I wish,
you know, I probably could go on for many hours, but you’ve had such a storied career. Thank you for spending some time with us and sharing your wisdom with our audience.
Thank you so much for having me. It’s really a pleasure. Thanks Akshay.
First Block is brought to you by Notion for Startups. We at Notion care deeply about startups and founders, and we hope these stories inspire you to keep building.
To learn more about how we are supporting startups, please visit notion.com/startups.