⚡️ 10x AI Engineers with $1m Salaries — Alex Lieberman & Arman Hezarkhani, Tenex
By Latent Space
Summary
## Key takeaways
- **90% Downsizing Yields 10x Output**: Arman downsized his engineering team by 90% at Parthian and re-architected the product and process to be AI-first, resulting in 10x production-ready software output despite the massive headcount reduction. [02:35], [02:58]
- **Output-Based Pay Enables $1M Salaries**: 10X compensates engineers based on story points of completed quality output rather than hours, allowing multiple engineers to earn over $1 million in cash next year purely from story point earnings. [04:25], [09:04]
- **Hiring Long-Term Selfish Coders**: To prevent gaming the story point system, 10X hires 'long-term selfish' engineers who prioritize client relationships and those who genuinely love writing code and collaborating with smart people. [06:38], [07:20]
- **Two-Week Retail AI Prototypes**: 10X built prototypes for a retail camera system using off-the-shelf and custom quantized models running in parallel on edge devices, enabling heat maps, queue detection, shelf analysis, and theft detection in just two weeks—work that previously took quarters. [09:26], [10:31]
- **Four-Hour Sales-Winning Prototype**: After a fitness influencer rejected 10X, an engineer built a working AI health coach app prototype in four hours, instantly positioning the company as the top vendor choice. [11:11], [11:55]
- **Entropy Controls Autonomous Agents**: The major bottleneck in building an AI to replace human engineers is controlling entropy, the accumulating error rate that derails autonomous agents even if individual tasks are 99% accurate. [19:38], [20:39]
Topics Covered
- AI Enables 10x Engineering Output
- Compensate Engineers for Output Not Hours
- Build Retail AI Prototypes in Weeks
- Speed Wins AI Engineering Sales
- Control Entropy to Build Autonomous Agents
Full Transcript
okay we're here in the real studio with alex lieberman and arman oh my god i did not fret this leave it in how do i thought it leave it i just don't say arman keep it rolling if it makes you feel bad for the first probably 20 times that i said arman's name i said it the wrong way and he was very polite in guiding me
to the right pronunciation he used to say arman not arman so it's okay it's totally fine me too i don't think you have to i mean it's hezarkhani but we don't need to yeah yeah yeah arman hezarkhani is fine um amazing yeah that's honestly even funnier now whereas you're about to introduce him you just dub arman's
saying his own name over your mouth introducing arman hezarkhani that's so funny it's like when you're when you're like on a voicemail and you're like saying your name while like the automated machine totally um so you guys are the co-founders of 10x and also uh emcees and speakers at aie right so
i mean uh and i think for me i have a little bit extra context on alex because i follow morning brew for a while you have been an inspiration on the newsletter business but let's let's talk about 10x you know like i think that i my goal here is just to introduce people to you guys maybe you individually and then you together so whoever wants to take it first well i can
give you a little bit of the backstory uh behind the business and how arman and i got to know each other and then arman i'm sure will fill in some gaps you know arman and i met in 2020 uh when i had invested in his previous business uh parthian and parthian uh was an ai financial tools business originally for consumers
being ai tooling for financial advisors or rias and um you know throughout arman building that business we had continued to talk about just our philosophy on product how ai was influencing just product in general and i kind of think especially for non-technical folks like myself there's like a moment where
you get smacked in the face by how profound this technology can be if harnessed in the right way and i experienced that moment in conversation with arman so it was probably this point nine months nine nine ish months ago arman and i were talking and he he had shared a story about with parthian he unfortunately had to
downsize his engineering org and when he downsized his engineering org he had to decrease the size of his engineering team by 90 percent and when he did so he had to rebuild he had to basically re-architect the entire product and engineering process to be ai first because he just no longer had a human resource and so he needed to like accelerate it with this technology and basically what arman had
shared with me is that output of production ready software had 10x'd after making this shift with the org and i i kind of didn't believe him at first because i had never seen kind of that level of leverage like i'd use chat gpt i'd use grok i'd use all these things but and yes they've been life-changing for me but i wouldn't i wouldn't have explained them as 10x experiences so we
basically talked through it and he kind of shared with me why ai and specifically llms have made such a profound impact on engineering as a type of knowledge work and from there the thought was around how the way in which engineers are compensated has to change materially because if you think about it like
historically people charge for their time by the hour and then all of a sudden let's just say you're a new like you're truly an ai engineer who's truly 10x higher throughput imagine you're selling your work and someone's used to spending a hundred dollars an hour for an engineer and you go to them and you look them dead in the eyes and you're like yeah i'm a thousand bucks an hour you're going
to get laughed out of the room even though you're you're a better engineer than the engineer they would have hired and also you're perversely incentivized because as you leverage ai in your work you're operating faster but your incentive just like a lawyer or just like any hourly uh pay based knowledge worker is to rack up as many hours as possible and so like actually the kernel of insight that started all of
10x was how do we hire the best engineers in the world how do we offer them unlimited upside by compensating them for output rather than hours and then how do we harness that in the right direction to help companies transform with ai in their business so i know there's a lot there but arman is there anything i missed i mean basically yeah like i think alex covered it like like i
was writing code and i was deeply incentivized to generate more output high quality output but faster more right and it's because it was my company but the whole thought is like when you work at someone else's company even if you have some equity even if you deeply care about the mission you're not deeply incentivized day in and day out to try new ai tools and push yourself to work better and
faster and smarter and so the economic model behind our company is one that does drive that and my talk is basically to show how we do that and how i think other companies might be able to adopt similar models and this is very tempting because every question i'm asked might actually just leak your talk it's okay the talk will the talk will just reiterate very important points i mean it should stand
on its own on youtube right so it's whatever that people i do like to encourage people to remix the content in different formats so this is the podcast version totally so i think like i think that the classic thing is well what is a unit of output for a software engineer is it a pr is it a story point it's extremely unclear and it's very basically unsolved like i mean don't tell me you've
solved it you know like what's maybe you have i don't know but like i'm default skeptical on the well what gets measured gets gamed yeah we do use story points but you're right that it's it's easy to game it right like if we were to hire somebody who just like if you think about a technical system right a smart hacker will find ways to exploit it and the easy way
to exploit the story point system is to deflate the concept of a story point and decide that okay any line of code like lines of code are going to be directly proportional and equal to story points well then of course you've hacked the system right but your clients will churn and you'll probably get let go of and it just won't work long term and so
what we found is that this problem gets solved in the hiring process and it gets solved by hiring people who fall into two buckets one is people who are selfish but they're long-term selfish everybody's selfish but we need to look for people who are long-term selfish people who understand that these incentives are longer than just today's story points they're forever right and we need to think about how do we
maintain the client relationship and that means that we're going to give them very robust story points so that we can maintain the relationship and continue to make money but the other is that we hire people who just like writing code and like working with really smart people and and they're not sharp elbowed and they just want to do great work and and that sounds squishy but that really is a part of
it as well i think both are both are really important just two other things i would quickly add is one when we work with clients at 10x there's basically two role players there's the ai engineer and then there's the technical strategist and and one of the best ways to fight perverse incentives is to incentivize two people at odds with each other in a healthy way and so our technical strategists are incentivized
based on nrr are incentivized based on retention and account growth for a client and they are the final one to sign off on the engineering plan for a client before we begin a sprint so like they are the last line of defense of quality before a client ever sees anything so so that's one thought the interesting thing
and i don't know if arman has thoughts on why this is is we have not yet and again we're a young company so this could change at some point but we have not yet had any clients argue about how we assign story points or ever feel like we are sandbagging story points or any of these things which is just interesting because i think to your point swix like i would have expected
that to have already happened yeah it can be a political process when things go not well but when things go well no one you know everyone's just like steaming ahead okay you hire great people you work well with story points i think one thing i'm trying to get my guests to do a better job of is just brag could you brag a bit yeah just like some really impressive project that you
accomplished just just opens people's minds like let's get let's get specific without maybe naming the exact client unless you can um and then also like what's the highest hourly rate that one of the engineers has made since you're technically uncapped yeah so i'll answer the last one uh or the second one first we will probably have
more than one engineer make a million dollars cash next year based on this model and that is just with story point compensation it's very likely that we will have more than a handful of folks make more than a million dollars next year um the answer to the first question like for example one project we built so we work with this company that's a they build they work they partner with retailers to basically
make cameras in the business more valuable and the way that they do that is they deploy what was historically like a gen 4 raspberry pi to the stores and they would they would run like one model on that device we basically took some off-the-shelf models and trained some models ourselves and then quantized them down so
they can actually run on that um on that pi 4 but also on jetson nanos and we got them to all run in parallel so now basically what these models allow you to do is as a store you can get a heat map you can see where the lines and the queues are forming in your store you can even get pictures of shelves and understand what needs to be stocked and you
can do things like theft detection because we have body analysis and we can understand things like when someone's crossing their arms right and this took our team two weeks to put together early prototypes and now we're just refining accuracy and improving metrics from there um and again this was like one of the many examples i think of course with that specific example that's more of a research project and it's going to take
a while to improve accuracy and things like that like we're not claiming that we're like these magical beings but previously that alone building a prototype of that would take several quarters for robust teams of engineers together and we were able to prototype that out very quickly and now we're working together with that team for for a year to build more and all that stuff alex anything i mean i guess another one
snapback sports we built them a mobile app in a month that hit 20th on the app store globally and there was no ai in this app it was a really fun trivia app but we built it together deployed it hit 20th in the world yeah i mean one other example i would just add is and this is just looking at things from a different angle which is sales i think the power
of ai engineering and fast prototyping is incredibly powerful within sales motions now and so just one example is we had a big influencer who wanted to basically build basically chat gpt but specifically as if it is your fitness like your health and your your health coach and your nutritionist so it has
all this context is a fitness influencer yeah exactly and we originally reached out to work with him and basically he said no because he was like you guys are like too early you don't have your like a design team built in yet and so he said no and it seemed like the conversation was done one of our engineers was like i'm just going to build a working version of this app as
soon as humanly possible so basically within i don't know probably took him four hours he got he had just like a working version of the app that was in the hands of this influencer and that influencer hasn't launched the app yet but we are number one on their list right now to actually do this build and the only reason is is the speed by which like working product could be in hands
of someone is faster than it's ever been yeah that's amazing okay so uh like quick question on just the uh the stack that you guys have landed on like is there a house stack what are you guys finding in terms of like the various coding agents and all that yeah um we do work in a number of different stacks a number of different languages and stuff but we feel pretty strongly in
like high structure allows for agents to work autonomously for longer and so our default stack is typescript front end typescript back end with a shared file where or a shared folder where all of our shared types and schemas and things like that live and typically react front end or even something as simple as like express on the back end like we don't really care about the frameworks it's more just like typescript
allows us to have that flexibility to um like the flexibility of javascript but the constraints of typescript and then those error messages allow the claude code or cursor agents or whatever to iterate on themselves and run things see the errors and continue in terms of the actual like ai engineering stack and what coding agents and things like
that we're using i always tell clients this like our team doesn't have a favorite coding agent of the year or of the month or even of the week like if i go over there to our team right now and i ask them what model is performing the best for coding right now they'll say today at 4:42 we're noticing that claude code is actually performing better because of xyz reason but yesterday
codex was outperforming claude code on activities like xyz right and and we stay really really deeply on top of all the different models all the different applications uh of these agents um to make sure that we're really pushing the most out of this and so that we can advise teams on how they should best use these things i mean well so yeah but you're gonna it's very
anecdotal right like don't you need more comprehensive evals because otherwise it's like you are just behaving or believing things based on the luck of the draw i think at this stage did a samurai have a measurably better sword than the person to their left or right no right at a certain point i think a warrior's weapon
becomes something of a feel and i think that at this point a lot of these like the coding agents are so good like yes you can have evals that that provably show that one is better than the other but for a lot of these things it really is feel it's like hey this agent actually like it just i can work better with it on a warm blooded level or it writes code
more like i like to or whatever and at least that's what we've noticed yeah fair enough and so i think like there's this you're you have like kind of a swat team approach you're paying you're very meritocratic meritocratic i think is probably the right term in this are you human bound or are you agent bound like what is your limiting factor in tenex becoming a bigger business than
you know either of you have run before today it's human bound a hundred percent you're recruiting yeah yeah yeah we we are the thing that keeps us up at night is how can we hire enough good engineers fast enough um and then the second thing that keeps us up is how do we match those the great people in the business with the right process such that delivery doesn't suffer
as we scale and i think more and more as we build this business like technology is going to be an enabler of the work we do and i think long term if we're to talk about the long term of the business there are ambitions of this business beyond just acting as a transformation and engineering partner
for companies there there are ambitions to build our own technology but today and probably for the foreseeable future we are human capital constrained how do you interview you don't have to like give the exact interview questions but like has interviewing changed for either of you guys pre-ai versus post yeah this is actually somewhat controversial a lot
of my friends stopped doing take-home interviews after ai we still do take-homes but our take-homes are immensely they're like our take-homes are unreasonably difficult and so when i when i first wrote them up i told alex i was like hey man like people might get mad at you you know like you have a public persona like we're sending these to people like your reputation might take a
hit if we send these to people because they are so unreasonable for us to ask this of people and alex in classic alex fashion was like eff it let's just do it you know like let's send it if this is the bar then you need to do it yeah exactly and what we found is that 50% of the people don't even respond to the take-home interview but because our
take-home is so difficult our interview process is actually quite short we do two calls before the take-home then we send the take-home then we review the take-home and then if it goes well we do maybe one or two meetings afterwards so it can be done at the fastest in a week it's very very quick if if people can get through that take-home yeah and just a few things
to add like i'm thinking about what are some of the most common questions we ask a few that arman asks that i really like are one he basically says like if you had infinite resources to build an ai senior software engineer like truly one that could replace either of you on this call right
now what would be the first major bottleneck that you would have to figure out how to overcome to build that that that's one question that he always asks and arman out of curiosity i don't know if you want to share it because then people will start giving the right answer on that but is it oh i guess just for swix like yeah i can i can offer one i don't know yeah
yeah let's hear it i mean so the the classic answer is just model intelligence right like we think the models are good but like actually they have been really trained into a certain sort of local minima of like well here's all the python because swe-bench is all python all django um and actually beyond that we've maybe generalized like a little bit of front end but hasn't really done like full back
end distributed services and all that so model intelligence is going to be like the main blocker but i don't know if that's a good answer because it's kind of like well you just wait and then maybe the frontier labs will solve it yeah i i generally think that it has to do with context i think it's not necessarily context length i think that it's context engineering in andre karpathy's words right
it's it's the problem of how do you how do you get the right context into into the like into the llm and get the llm to pay attention to the right parts of that context all of which i would consider context engineering and then from there it's like okay there's a lot of ways you could solve that right you can on the model layer do a lot of work to make sure
that the attention mechanisms are paying attention to the right stuff you could do work on the application layer to to context engineering you can extend context lengths like there's a lot of different work and then it leads to a really interesting discussion so so yeah that's one thing one thing i was just gonna say is i feel like dan who's one of our engineers at 10x he shared a different answer that
i remember your reaction to it was like it it broke your brain a little bit do you remember what his answer was no i should ask him but i believe it had to do with entropy i should ask him what it was there he is dan come here what was the question that we asked you during the interview remember you like i asked you if you had unlimited resources and you needed
to build an ai engineer what would you need to solve you gave me an answer that like what would be the um what would be like the limiting factor the honest thing yeah what was it i said it was like controlling entropy oh there it is yeah controlling entropy swix does not agree with dan if there is some error rate your question was basically come closer so
they can hear you come closer to my headphones this is swix you're on a podcast yeah we can hear you we can hear you it's good it's good we're rolling so basically like if there is some your question was about a fully autonomous like coding loop so like what would it take to get the human out of the loop if in that loop you have some error rate um let's say it's
99% accurate with code even that one percent error rate will just multiply and decay more and more and that entropy will build and like accumulate and that's like kind of a compounding thing that will derail the agent more and more and so i think it's less of like a context engineering question per thing you're implementing and
it's like um making sure that the agent can reduce the entropy for a given task such that it gets to 100% accuracy um and then you don't have this like accumulating error issue cool man thanks brother no that was impressive oh no actually so yeah that's like that's that's the sophisticated version of context engineering right like a lot of people are going to answer context exactly we are one of the people
that coined context engineering is coming to speak uh i think in one i think one of the early sessions on on friday and yeah like but this is the actually like the the advanced like yes this is one of the four ways in which long contexts fail and if you have enough experience you know that this is the one that gets a lot of agents off track and once they're off track it's
really hard to get them back on track exactly exactly and going back to your question i wouldn't use the same words but yeah it's yeah i get it yeah and going back to going back to your question about constraints in the business it's just how do we find more people like him is the thing that keeps us up at night well you know i'm in the business of making more you're helping
to contribute by putting this conference together where we're just sharing knowledge and the more people that watch like are kind of like drawn to you they might answer your call to action of like trying out one of your super hard tests or at least just learning and just advancing the state of the industry for sure so i'm excited to have you guys uh do you have any questions for me i mean
you know it's like a whole like two three day affair uh you know i've done this a little bit now one question i have for you is like as arman knows that i'm voraciously curious and i'm a lifelong learner but i'm also not an engineer by training and my goal is to get as smart about this space as quickly as possible and so like you know i i one of the first
things i did is was it arman did you send me the three blue one brown lecture like you know three blue one brown like does the lecture on llms i took that then he's like if you want to go super deep do any of andre karpathy's like he does like the lecture series on how chat gpt works and he's like actually like write out notes by hand and like
you truly understand like the math behind these models and arman did that and he was like it's just like that's how you understand things at the deepest level so when when i'm not either working or taking care of a four-month-old at home that is next on the list but i guess like my question for you is like as a non-technical person who's always been like both enamored and intimidated
by technical folks what would you do if you were me to make the most of this conference where when i'm not like the core archetype of the person who's there geez yeah that's that's a tough one because i spent zero time thinking about that okay so so i think like latch on to the keywords and whatever people are excited about like context engineering people are excited about maybe four or five months
ago and now it's like entering the mainstream typically the the people at this kind of conference would be sort of stewing around those ideas like mc uh the last time we were here in new york mcp was kind of just taking off and we did the workshop and that really blew up uh mcp and i think like that is something that you will see a little bit of like the just like
by the way arman grins because he has very strong feelings about mcp very strong feelings pro or anti we're we're hosting a debate i just think that mcp is a three-letter word for api and like alex always every time he hears someone say the word the letters mcp in uh in that order he
tells them that i hate mcp and starts a war a religious debate no so well i will say though i do think a few of our engineers have warmed you up to it more with specific use cases arman yeah i mean like are mcps useful like of course i use all the mcps with claude code i just think that there's like what what bothers me is when people create a new
name for something and then use that to raise some inordinate amount of money because they know that three-letter acronyms get investors excited that's like the thing that like that's that's why i giggle when i hear mcp because i'm like a lot of people just say that and like the tweets that bother me are like mcp is coming for your job here's why you need to know about mcp and it's like
no it's just like a useful thing you know maybe maybe this is relevant to alex's question i do take a sociological and anthropological stance to tech in terms of like different groups of people coming in have different terminology to communicate with each other and it's it's just human behavior it's like i'm kind of non -judgmental about it like people just got to do what people do and
they always invent new language and they're all like there are only so many ideas going around in the world they're going to be recycled yeah totally that's it like i will defend mcp in a sense that like there actually are other parts of the spec that are not just api wrappers but people just comparatively don't use them as much but i think it's a little unfair to mcp the whole protocol
but that's why we have a debate where we actually have like a podcast booth and we're actually hosting like you know pro and con debater and i think it's really fun yeah yeah that's awesome i actually really want to get into this because i think like we learn more by contrast than by agreement right like so in a single talk like you know you're the authority you're out you're up there on
stage you say whatever you you want to say and no no one can really like people just fight in the comments but they're never going to rise to the same level i think in a real debate you can learn from both sides and make up your mind and i think that's uh what we're going to see that's awesome what we're trying for yeah yeah i love that well uh it's great to
meet you guys i'm looking forward to your talk arman and alex you're you're you're opening the show for us so all power to you i i do think like i personally i intentionally left that block that arman's in as the consulting block we also have mckinsey speaking but mckinsey is not in the consulting block so i'm very curious because i think my theory is that a lot of our attendees will
be from the enterprises that like might be looking to talk to you guys and i'm curious to like see how this sector grows it's not something i'm personally familiar with because i mostly just work in companies as as an engineer but like the sort of consulting digital transformation industry is kind of new but it's also like very very in demand as you guys know very well and i'm just like excited
to feature it for the first time we're super excited to be there and thank you for having us and pumped to learn a ton from from you and from the uh the other speakers there and just the people who are attending yeah yeah i mean like everyone from the labs to uh the fortune 500s it'll be it'll be a whole party all right thank you love it thanks man
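The "shared types" stack Arman describes in the transcript — TypeScript on both the Express backend and React frontend, with one shared folder of types and schemas so coding agents get loud compile-time and runtime errors to iterate against — can be sketched roughly like this. All names here (`Todo`, `isTodo`, `parseTodo`) are hypothetical illustrations, not Tenex's actual code:

```typescript
// --- shared/schema.ts (the shared folder both sides import from) ---
// A hypothetical shared contract; backend and frontend both reference
// this one definition, so drift becomes a compile error.
export interface Todo {
  id: string;
  title: string;
  done: boolean;
}

// Runtime type guard mirroring the shared interface, so data crossing
// the network boundary is actually checked, not just assumed.
export function isTodo(value: unknown): value is Todo {
  const v = value as Record<string, unknown>;
  return (
    typeof value === "object" &&
    value !== null &&
    typeof v.id === "string" &&
    typeof v.title === "string" &&
    typeof v.done === "boolean"
  );
}

// --- frontend-side helper ---
// Parse an API payload into the shared type, throwing a descriptive
// error on mismatch — the kind of error message an agent can read,
// fix, and re-run against.
export function parseTodo(raw: string): Todo {
  const parsed: unknown = JSON.parse(raw);
  if (!isTodo(parsed)) {
    throw new Error("payload does not match shared Todo schema");
  }
  return parsed;
}
```

The point of the pattern, as described in the conversation, is less the specific framework and more that the type checker and runtime guards give autonomous agents fast, legible failure signals.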
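Dan's point in the transcript about entropy — that even a 99%-accurate agent derails over a long autonomous run because errors compound — can be put in concrete numbers. A minimal sketch (hypothetical function, assuming independent steps with a fixed per-step success rate and no self-correction):

```typescript
// Probability that an autonomous agent completes a multi-step run with
// zero errors, assuming independent steps and no self-correction.
function chainSuccessRate(perStepAccuracy: number, steps: number): number {
  return Math.pow(perStepAccuracy, steps);
}

// Even 99% per-step accuracy decays quickly over a long run: roughly
// 90% of 10-step runs survive, but only about 37% of 100-step runs do,
// which is why a small error rate "accumulates" and derails the agent.
const after10 = chainSuccessRate(0.99, 10);   // ≈ 0.904
const after100 = chainSuccessRate(0.99, 100); // ≈ 0.366
```

This is why Dan frames the bottleneck as driving per-task accuracy to 100% (reducing entropy per step) rather than just stuffing better context into each step.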