Scaling Large Organisations: Empowering Independent Teams with Crux Micro‑Frontends
By Rust Nation UK
Summary
## Key takeaways
- **Cross-Platform Duplication Fails Scaling**: Proton faced cross-platform duplication of non-trivial business logic, such as talking to the encrypted back end, increasing bugs, inconsistencies, and integration difficulties across teams and products. [03:44], [04:04]
- **Embrace Conway's Law with Autonomy**: Proton structures teams around products such as Mail, VPN, and Calendar for speed and scaling, empowering them to make technical decisions to reach market fast, while seeking ways to integrate. [05:37], [06:20]
- **Crux Pushes Side Effects to the Edge**: Crux follows the Elm architecture with a pure, testable core in Rust that shares logic across platforms, pushing side effects such as HTTP and time out to thin native UI shells. [11:42], [14:20]
- **Lightning-Fast End-to-End Tests**: The notes example has 24 tests, including two-way-sync user journeys, running in 16 milliseconds total, providing high confidence the app works identically on every platform. [23:19], [23:46]
- **Compose Independent Crux Modules**: Proton News composes the Lumo feature as an off-the-shelf Rust crate from another team, running the identical cross-platform core and shells for article summarisation without intervention. [26:32], [27:14]
- **Push Effects for Composability**: Pushing side effects to the edge as low-level IO primitives, such as HTTP requests, facilitates feature composition across teams, as the top-level app resolves common effects from all modules. [34:02], [34:20]
Topics Covered
- Embrace Conway's Law for Autonomous Teams
- Push Side Effects to Edge for Testability
- End-to-End Tests Run in Milliseconds
- Compose Features as Reusable Rust Modules
Full Transcript
Let me introduce you to the first speakers of the day: we have Stuart Harris from Red Badger and Ludo Wicker from Proton. They'll be presenting on Scaling Large Organisations: Empowering Independent Teams with Crux Micro-Frontends. The stage is yours.
Thank you. Good morning, everyone. I'm Ludovico, senior iOS and Rust engineer at Proton. I have long-standing experience building software for Apple platforms, and my job title at Proton says iOS engineer, so you may be wondering why I'm here today at Rust Nation. Well, I love Rust, and at Proton I use Rust every day as part of my workflow building apps for iOS and other platforms.
>> Hi, I'm Stuart Harris. I'm one of the founders of Red Badger, which is a consultancy just by Old Street. But first and foremost I'm a software engineer at heart, always have been, always will be, and a core maintainer on the Crux project, which is what we're going to be talking about today.
And today I'm very excited to share with you why Proton is using Rust on the front end to share business logic across platforms: mobile, desktop, web, and more. But also, in very concrete terms, how Rust is helping Proton to scale, not just across platforms but across products and teams. To do that we use Crux, an amazing library built by Red Badger that facilitates building cross-platform apps in Rust with native UI on each of the platforms.
But before we get into that, let's start with the why. Why is Proton using Rust on the front end? To understand the why, we need to understand a little bit more about where Proton is at today as a company. Proton started in 2014 as a web-based email platform built on end-to-end encryption and privacy, and on a very simple but extremely powerful idea: what if you, and only you, could decide who can read the content of your emails, and not even your email provider could access them?
It turns out this idea was so successful that Proton kept growing and applying similar principles to a whole suite of new products that we have today: not just Mail, but Calendar, Drive, Pass, VPN, and Lumo, our private AI assistant, all built on the same principle of privacy and end-to-end encryption. And today we're not just on the web but on pretty much any platform you can think of: mobile (iOS, Android), desktop (Linux, Windows, macOS), web, TV, and more.
As a company we're also growing, to quite interesting numbers. We now count 600 employees spread across different countries and different offices in Europe, and 50% of us are in engineering. As you can imagine, given what we do, we are a very tech-focused company. And as Proton grows and evolves as a company, the technical challenges that we face also evolve.
Focusing on the front end in particular, if we take a snapshot of where Proton is at today, a few challenges emerge. The first is cross-platform duplication. We have some non-trivial business logic, for example talking to Proton's encrypted back end, on the front end, and historically we did duplicate this logic across platforms in some of the clients. The drawbacks of this include increasing the chances of introducing inconsistencies and bugs, because we multiply the number of implementations of what is essentially the same thing. But we also make integration between products and teams more difficult, because if a team that builds a feature uses different solutions and different designs across the different platforms, and another team wants to use that feature, that multiplies the cognitive load of understanding how the feature works. So we think this doesn't scale, but we're also confident that we can do better.
Another technical challenge that we face is the need for cross-product integration, which grows as the number of products that Proton offers in the ecosystem grows. We have a Drive application where you can store your files end-to-end encrypted, and we want to empower our users to access those files from other Proton apps: from Mail, from Lumo, for example. The use cases for cross-product integration can only grow, and it's something we very much want to support for our users. We also happen to have some divergent stacks across some of the products, and to understand why that's the case we need to look at how Proton is structured as a company.
You may be familiar with Conway's law, which essentially says that an organisation that designs a system is likely to produce a design which mirrors the way the organisation is structured. In the case of Proton, we are structured in distributed teams across Europe, in different countries, mostly organised around each of the products: Mail, VPN, Calendar, for example. And we like this structure, because it has helped Proton grow into what it is today; it's optimised for speed and scaling. For example, when we're launching a new product, the team working on it is fully empowered to take all the technical decisions they think are best to reach their goal and get to market fast. We like that, and we want to encourage it. But at the same time, integration across products can be a challenge, because if different teams have taken very divergent paths, it's hard to reconcile after the fact.
So the question becomes: how do we empower autonomous teams? We like teams to be autonomous; we don't want to fight Conway's law, we want to embrace it. But how do we empower autonomous teams to integrate, to work effectively together? One answer, of course, is standardisation. If we introduce some conventions, we can be confident that if team A follows these conventions, and team B wants to integrate with the work from team A following the same conventions, then everything should just slot into the right place. But at the same time there's a bit of a tension here, because we don't want to introduce too many conventions, too much technical bureaucracy, that would get in the way of the autonomy of each team. So where is that sweet spot? Where do we find that balance?
A technical solution that would empower us to find that balance would need to support at least a couple of business needs. First, we want a solution that provides us with a safe and scalable way to share business logic across platforms. As we said, cross-platform duplication is an enemy of integration, so we want a solution which empowers us to share business logic across platforms. We also want this solution to have an easy-to-follow pattern and architecture which enables us to compose different features, which may be owned by different teams and belong to different products, into something new. What we really want to achieve is empowering cross-team collaboration, and we think the way to get there is to empower cross-platform sharing of business logic and to encourage cross-product integration.
So let's have a look at each of these two constraints before we jump straight into the solution. What do we mean exactly by cross-platform? As we said, we want to share the business logic, but it's very important to understand that we don't want to compromise the user experience. We want a native user experience, native UI, on each of the platforms that we support. We want an iOS app to look and feel like an iOS app, and the same for Android and the other platforms. That automatically excludes, for example, web-based solutions that also share the UI. So we want a native UX, but we do want to share the business logic. And for sharing the business logic we want a solution that builds everywhere, since we run on lots of different platforms. We want a solution that is as safe as possible, because privacy is important: we want to do everything we can to reduce the chances of introducing bugs, which could have catastrophic consequences for our users. We also want native-grade performance, because we believe that's critical to delivering a great user experience. And as a tech company, of course, we want to provide our engineers with a great developer experience. This is a great fit for Rust, and this is why Proton has chosen Rust to share business logic across clients.
When it comes to cross-product, what we want to achieve is a way to organise features in our products as separate cross-platform units that can be shared across products and owned by different teams. The reason we want to do that is to be able to mix and match features: to have off-the-shelf components that a team can reach for and integrate into their products without ad-hoc intervention from the teams that maintain those components. That's particularly important when we are creating new apps. We don't want to start with a blank canvas; we want to be able to go and get SDKs that enable talking to Proton services and integrating with other services, for example. And to do that, Proton is using Crux. To tell you all about what Crux is and how it works, I'll hand over to Stuart.
>> Thanks, Ludo. Okay, so Ludo set the scene. Before we understand how we scale Crux across multiple products, I think we ought to understand what Crux is. Here's a very short sentence to describe it: Crux is a Rust framework that follows the Elm architecture and allows us to bring all of our application behaviour into a single, pure, testable core that can be shared across multiple platforms. We push the side effects to the edge and down into the platform so that the core remains pure (we'll look at that in a second), we can share that core across any platform we want, and then we just wrap it with a very, very thin UI layer around the edge. All the logic is in the core, and all the logic is in Rust.
We introduced Crux at Rust Nation here in 2023, three years ago, in the Queen Charlotte room, I think. Since then we've continuously evolved it. We've got 50 contributors, thank you very much, and nearly two and a half thousand stars, so thank you. That's the link there. We're now on Crux version 0.17, or we will be in a couple of days. This release is all about ergonomics and developer experience. We've introduced the command concept, which we're going to deep dive into in a second: what commands are, what managed effects are. It becomes very easy to write tests, and we're going to look at that. We've also introduced new bridges, and middleware for those bridges, so that we can run side effects within Rust as well, plus new codegen and packaging, new type generation, and new documentation. All good, in a word.
Crux, to me anyway, is all about testability. It's all about being able to have a very high level of confidence that your application works exactly correctly, and exactly the same, on every single platform. Rust gives us a lot of confidence already, as we all know, but then we can build on top of that by testing a whole user journey as unit tests at lightning speed, and we'll look at that as well in a second.
So this is what Crux is about, really. The pattern is known as ports and adapters, or hexagonal architecture, or the onion architecture, or clean architecture, or the Elm architecture, whatever you want to call it. Effectively, what happens is this: the user clicks a button or otherwise interacts with the UI, which sends an event into the core, and the core responds to that event by emitting effects. Those are effectively side effects, and they're pushed out to the edge. So if we want to make an HTTP call to a REST API, or we even want the time (time is a side effect: it changes, and you don't want that in your tests, right?), all of these interactions with the outside world are pushed to the edges, where they're effectively outside the scope of our tests.
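To make the "time is a side effect" point concrete, here is a minimal, dependency-free sketch of that idea; the names are assumptions, not the real Crux API. The core never reads the clock itself; it emits a request and is handed the time back as data, so a test can supply any fixed timestamp.

```rust
// Sketch only: a pure core that asks the shell for the time instead of
// calling SystemTime::now(), so tests stay deterministic.

#[derive(Debug, PartialEq)]
enum Event {
    SaveNote,
    GotTime(u64), // the shell resolves the time request with this event
}

#[derive(Debug, PartialEq)]
enum Effect {
    AskForTime, // request: shell should reply with Event::GotTime
    Render,
}

#[derive(Default)]
struct Model {
    saved_at: Option<u64>,
}

// Pure update function: no clock access, only data in and data out.
fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::SaveNote => vec![Effect::AskForTime],
        Event::GotTime(ts) => {
            model.saved_at = Some(ts);
            vec![Effect::Render]
        }
    }
}

fn main() {
    let mut model = Model::default();
    // In a test we resolve the time effect with a fixed value.
    let effects = update(Event::SaveNote, &mut model);
    assert_eq!(effects, vec![Effect::AskForTime]);
    let effects = update(Event::GotTime(1_700_000_000), &mut model);
    assert_eq!(effects, vec![Effect::Render]);
    assert_eq!(model.saved_at, Some(1_700_000_000));
    println!("ok");
}
```

Because the update function is pure, the "outside world" shrinks to the two points where an effect leaves and an event comes back, which is exactly what the tests later in the talk exploit.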
Crux is based on the Elm architecture; you're probably familiar with this. Elm is a pure functional programming language, and a pattern, used for building web applications. Crux has the same architecture. This is effectively the update function in Crux, which we'll look at in a second: given an event (the user clicked a button or something) and some existing state, we emit some new or modified state, and a command. That command is the managed side effect.
A command is a hierarchy of serial or parallel tasks, each of which has a request and zero, one, or many responses. So we can just notify the shell, which is the very thin UI layer around the edge; or we can request something from the shell, like an HTTP endpoint; or we can stream from the shell. That's why it's zero, one, or many responses.
So this is a Crux app. This is it, right? This is all there is to the core of the simplest Crux app. I don't know if I'm in the way here. On the left-hand side we've got an event enum. This is a counter example, so we've got Increment, Decrement, and Reset; obviously those variants can carry data, and that's how you get the user's data into the core. There's an effect enum, which is a list of all the side effects we're working with; in this example we're only telling the shell to re-render. We've got a model, and a view model which is a projection of that model. On the right-hand side we implement the App trait for Counter and set the associated types, and this is the update function, which looks exactly like the one I described: it takes the event and a mutable model and returns a command. We look at what the event was, we update the state, and we issue the side effects. This is just a render function that creates a render command. And then there's the view function, which is called by the shell when it sees the render, and which gives it the projection of the model.
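The counter core described on the slide can be sketched without any framework at all; this is a simplified stand-in, not the real `crux_core` `App` trait or `Command` type, but it mirrors the same shape of event enum, effect enum, model, view model, and update function.

```rust
// Dependency-free sketch of the counter core described above.

#[derive(Debug, PartialEq)]
enum Event { Increment, Decrement, Reset }

#[derive(Debug, PartialEq)]
enum Effect { Render } // the only side effect in this example

#[derive(Default)]
struct Model { count: i32 }

// The view model is a projection of the model, consumed by the shell.
#[derive(Debug, PartialEq)]
struct ViewModel { text: String }

fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Increment => model.count += 1,
        Event::Decrement => model.count -= 1,
        Event::Reset => model.count = 0,
    }
    vec![Effect::Render] // tell the shell to re-render
}

fn view(model: &Model) -> ViewModel {
    ViewModel { text: format!("Count is: {}", model.count) }
}

fn main() {
    let mut model = Model::default();
    let effects = update(Event::Increment, &mut model);
    assert_eq!(effects, vec![Effect::Render]);
    assert_eq!(view(&model).text, "Count is: 1");
    println!("{}", view(&model).text); // prints "Count is: 1"
}
```

The shell's only jobs are to translate UI interactions into `Event`s, process `Effect`s, and draw whatever `view` returns.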
This is a slightly more complex example, but the same kind of thing. It's also a counter, but this time there's a Get event which gets the value of the counter from an API endpoint. So that's an HTTP command, effectively, that gets from that URL. It expects some JSON in return, and once that JSON comes back and is deserialized into the type, it's passed on to the Set event. The Set event, below it, carries a success or a failure; on success it takes that object and calls the Update event, which updates the model and then issues a render command. So you can see how the logic flows through the program.
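That Get/Set flow can be sketched in the same dependency-free style; the URL and names here are illustrative assumptions, not the real crux_http API. The key point is that the core only ever emits a *description* of the HTTP request; the shell performs it and feeds the result back in as another event.

```rust
// Sketch of the request/response flow described above.

#[derive(Debug, PartialEq)]
enum Event {
    Get,
    // The shell resolves the HTTP request with the raw response body.
    Set(Result<String, String>),
    Update(i32),
}

#[derive(Debug, PartialEq)]
enum Effect {
    HttpGet { url: String }, // just data: no network call happens in the core
    Render,
}

#[derive(Default)]
struct Model { count: i32 }

fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Get => vec![Effect::HttpGet {
            url: "https://example.com/api/counter".into(), // hypothetical URL
        }],
        Event::Set(Ok(body)) => {
            // Stand-in for deserializing the JSON response body.
            let value: i32 = body.trim().parse().unwrap_or(0);
            update(Event::Update(value), model)
        }
        Event::Set(Err(_)) => vec![Effect::Render], // e.g. show an error state
        Event::Update(value) => {
            model.count = value;
            vec![Effect::Render]
        }
    }
}

fn main() {
    let mut model = Model::default();
    let effects = update(Event::Get, &mut model);
    assert_eq!(effects.len(), 1); // one HTTP effect for the shell
    // The shell would now perform the request and resolve it:
    let effects = update(Event::Set(Ok("42".into())), &mut model);
    assert_eq!(model.count, 42);
    assert_eq!(effects, vec![Effect::Render]);
    println!("ok");
}
```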
This is the same example but a little bit more complex. Here we're doing a POST to an API URL, sending some JSON and expecting some JSON in response, which we'll deserialize into a Post object. Once that has finished, we take the result, deserialize it, get the URL off it, and then call HTTP GET, expecting JSON in return, which will be the body of the post. Then we send it to the GotPost event. And finally we return a command which is that whole chain and a render, in parallel.
Note that we're not actually making HTTP calls here. All we're doing is creating a command hierarchy that represents the effects that will be sent to the shell. The shell processes those with a very tiny HTTP capability, something that just looks at the command, makes the request, and responds, and Crux manages all the wiring to make sure that everything comes back to the right place. So that's a bunch of sequential combinators, effectively, that allow us to do this synchronously.
Oh, I meant to say: that's the example effectively running in multiple shells. We've got Android and iOS, a TypeScript shell for the web, and a Rust shell with Leptos, and a terminal shell. You could wrap your core in anything, basically.
This is exactly the same example that we looked at before, but using Rust's async/await. We can create a command with a closure that takes a context, and the context allows us to convert those commands into futures which we can await. So here we make the POST request, await its response, deserialize it, get the URL, make another request, await that, and then send the body to that event. It's exactly the same as the previous example, but async instead of synchronous.
And we can get really clever with it, because we can spawn tasks. Here we've got a channel. We spawn ten tasks; each of them emits a command to the shell, awaits the response, and sends it through the channel. Then we've got another loop at the bottom that takes those events from the channel and emits them back into the Crux app. So we can be very flexible in how we build our application. It feels like we're working with real constructs like HTTP, but actually we're not, and this makes the testing easy. There's another example, too, where we combine a couple of commands that explicitly request from the shell and then send back to those events. We can abort those commands with an abort handle, so we've got full control over their life cycle.
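The fan-out-through-a-channel shape can be sketched with plain `std` threads standing in for Crux tasks; this is not the Crux spawn API, just the pattern. Several workers each "resolve" a value, send it through a channel, and a loop at the bottom drains the channel back into the app as events.

```rust
use std::sync::mpsc;
use std::thread;

// Sketch: ten workers stand in for spawned tasks awaiting shell responses.

#[derive(Debug)]
enum Event { Got(u32) }

#[derive(Default)]
struct Model { total: u32, seen: u32 }

fn update(event: Event, model: &mut Model) {
    match event {
        Event::Got(n) => {
            model.total += n;
            model.seen += 1;
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Spawn 10 "tasks"; each pretends to resolve a shell effect with a value.
    for i in 1..=10u32 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(Event::Got(i)).unwrap();
        });
    }
    drop(tx); // close the channel so the loop below terminates

    // The loop at the bottom: drain the channel back into the app.
    let mut model = Model::default();
    for event in rx {
        update(event, &mut model);
    }
    assert_eq!(model.seen, 10);
    assert_eq!(model.total, 55); // 1 + 2 + ... + 10
    println!("total = {}", model.total);
}
```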
The testing story, to me, is really important. We can test at the command level: we can test that the command is going to issue the right request, and we can resolve those requests with responses to make sure that the command's integration with the outside world is correct. So this is a test. It makes a POST to that URL. We expect that when we run this command we'll get one effect, and that's going to be an HTTP effect; we get the request out of it and test that the request was actually the correct thing, that it's got the headers and the body and everything is correct. That assert makes sure the command is doing the right thing. But then we can also create a Post, resolve our request with it, and check that the command emitted one event, which is GotPost. That's what a test might look like at the command level for what we just saw.
But really we want to test at a higher level than that. We want to test the logic of our application, the whole user journey from beginning to end: I open a file, I download the file, I save it, whatever it is. We want to be able to test that whole journey as a unit test.
This is a very simple one. We create an instance of our app, that's the counter, and a default model, so that's going to have a count of zero. We can call update ourselves, sending it the Increment event with the model we've got. Then we can check that the view on the model is the string representation of a count of one, so we know that worked, and we can also expect that one effect was emitted and that that effect is a render.
This is a slightly more complex one, where we've got a count with a value and when it was last updated on the API server. When we send an Increment, we expect it to render: that's the optimistic update to the UI, which will revert if the request doesn't work. Then we expect it to send one effect, which is an HTTP request to this URL, and we can resolve that with a response that says the count is two and was updated at this time. We now expect one event to be emitted, Set, which we can send back into the app, and then we expect another event, Update. So this is how you would test that the logic in the core is correct.
And when you've got a lot of tests like this: this is the notes example in the repo. There are 24 tests, and they run in 16 milliseconds. Some of these tests are very long, whole-user-journey kinds of tests. In fact, that one, second up from the bottom, is two-way sync. It creates two instances of a notes app, types into one, checks that it synchronises with the other, types into the second one, checks that it synchronises with the first. It does all that testing in 7 milliseconds, and they all run in parallel. We get a very, very high level of confidence that our app is absolutely correct and will work on every platform.
So I'm going to hand back to Ludo, who's going to show how we scale this across organisations.
>> Thank you.
>> WOOHOO.
>> So what Stuart has shown us is great. We have a way to share Rust business logic cross-platform which is safe and scalable, easy to reason about, and designed for great testability. We like that a lot, but at Proton we want to take this to the next level. We don't just want to share logic cross-platform; we also want to organise features as independent Crux modules which can be mixed and matched and composed across teams and across products. To show you how that is done in practice, we have prepared an example project. I'm going to leave the QR code up for a few more seconds; it will take you straight to our public GitHub page, where you can download it and check it out in your own time.
In this repo you will find two projects, both built with Crux, with a cross-platform Rust core shared for each app and two shells, one for iOS and one for Android, with native UI. The app on the left we call Ask Lumo. It's just a simple client that talks to Lumo, Proton's private AI assistant: you enter a prompt and get a response. On the right you have Proton News, a simple client to read the latest news from Proton's official blog. Just to show you that these are not just mock-ups, I'm going to quickly show you how they look. If I submit a prompt to Lumo, it will take some time because... oh, okay, it's already done. It uses the guest APIs, which are normally rate-limited. On the left side is iOS; on the right side we have Proton News on Android. You can tap and open an article. Yeah, I'm not sure what this one is about. But what if we now have a new business use
case? Let's say these two apps are owned by different teams at Proton, and now the team that owns Proton News wants to use the Lumo feature to provide users with a button to summarise an article with Lumo.
This is how it would work: you tap, there's a back-and-forth with Lumo, we send the content of the article, and we get back a summarised response. To do that, what we really want to empower the Proton News team to do is to just take this component off the shelf from the team that owns Ask Lumo.
>> Yeah, it's not quite working, but there are some internet issues; it's fine, we'll carry on with the slides.
So the takeaway here is that what we're seeing, this article-summary feature, runs exactly the same code, exactly the same cross-platform stack: the Rust core for the Lumo feature and the iOS and Android shell UI. On the left side it powers the whole Ask Lumo app, and on the right side the exact same code is composed as a feature inside the Proton News app.
To look at how we do that technically, let's have a look at the architecture of these two apps, starting with Ask Lumo. This will probably look familiar now as the architecture of a Crux app, so let's go through the Crux loop. Imagine the user taps the button to submit the prompt to Lumo. The shell will emit an event which gets routed to the Rust core. This core, the yellow circle, is the engine that runs the show, and it will call the update function on the Crux app, still on the Rust side. The update function, as Stuart showed before, is the one that takes the event in, updates the model if needed, and can output zero, one, or more side effects, which go back to the core; the core will ask the shell to resolve them, the shell resolves them, and a new loop can begin.
So let's look at how this looks in code. Again, very similar to what Stuart was showing before, and no surprises here. There's no need to go into detail on what the app does, but the Submit event is the main thing: it's where we submit the prompt to the Lumo back end. One thing to notice here is the namespace annotation on the type. When we get to composing different features, we're going to have different things that are called Event: each feature will have a type called Event, and likewise Effect, ViewModel, and so on. Of course, in Rust these are in different crates, so they're automatically namespaced. But these types will be translated, through a generation phase, to strong Swift, Kotlin, or TypeScript types, or whatever platform you're talking to from the shell side. When they get translated (Crux uses the UniFFI library for this), they need to be namespaced so that we have no collisions.
And this is how the model looks. The model is the internal state of the app. Again, pretty straightforward: we have a workflow, which stores the state of the UI, and then the response chunks as they come through from the Lumo server as an HTTP stream. The view model is just a projection of this internal state, which is consumed by the shell.
And then the update function. Again, no need to go into too much detail over how it works; it's a pretty standard Crux app. Let's just look at the Submit event: we construct an HTTP request from the prompt, update the UI workflow with a loading state, and then emit two concurrent effects, a render for the loading state and the fetch request to the Lumo back end.
So this is the architecture of Ask Lumo. What about the architecture of Proton News? Well, it's exactly the same, right? It's also a Crux app, so it will look very similar: it has its own core, its own Crux app, and shells on iOS and Android. But here we have two separate apps; these apps are not talking. So how do we go from here to integrating the Lumo functionality into Proton News?
This is how we do it. This is the architecture of Proton News after we integrate it with the Lumo feature. Let's go through the loop again. Say the user taps on the button to summarize an article. That will be translated into an event by the shell, which goes to the core. Notice we still have one core here, because the core is the engine that runs the show, and we want just one of those. This core will call into the update function of the top-level Proton News app to handle the event, and the Proton News app will know that for this event it needs the Lumo functionality, so it will call into the update function of the Lumo app. I think the main takeaway of this slide is that the Lumo app, the blue box on the top right, is exactly the same Lumo app that powers the standalone, top-level Lumo app. We're just pulling in the crate and composing it into the Proton app, and then we're doing a similar thing on the shell side. For now, let's focus on the core; we'll get to the shell in a moment.

So let's look at how this looks in code. We have a top-level Event for the Proton News app. It has a top-level initialize event that the app might need, and then we're composing two features. The articles feature is the one that fetches the news from the blog. The Lumo feature is owned by a different team; it comes from a different Crux app, which we import. We import the crate, and all we do is compose its event inside our own top-level event, and we do a similar thing for the model and the view model.

Then let's look at the update function, or not all of it, just the relevant bits that compose with the Lumo feature. When the Proton News app gets an event that it needs to delegate to the Lumo feature, it calls directly into the update function of the Lumo Crux app. That update function may emit further events, which we need to map back to our top-level events, and also effects that the top-level core needs to be able to resolve and forward to the shell, and then the shell also needs to be able to resolve these effects.
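A minimal sketch of this delegation, in plain Rust rather than the real Crux API, with the imported Lumo crate stubbed as a local module (all names illustrative):

```rust
// Stand-in for the off-the-shelf Lumo feature crate from another team.
mod lumo {
    #[derive(Debug, PartialEq)]
    pub enum Event { Submit(String) }

    #[derive(Debug, PartialEq)]
    pub enum Effect { Render, ServerSentEvents(String) }

    #[derive(Default)]
    pub struct Model { pub prompt: String }

    pub fn update(event: Event, model: &mut Model) -> Vec<Effect> {
        match event {
            Event::Submit(p) => {
                model.prompt = p.clone();
                vec![Effect::Render, Effect::ServerSentEvents(p)]
            }
        }
    }
}

// Top-level Proton News app, composing the Lumo feature.
enum Event {
    Initialize,
    Articles(String),  // stand-in for the articles feature's events
    Lumo(lumo::Event), // the composed feature's events, wrapped in ours
}

#[derive(Debug, PartialEq)]
enum Effect {
    Render,
    Http(String),             // Proton News fetches the RSS feed
    ServerSentEvents(String), // forwarded from the Lumo feature
}

#[derive(Default)]
struct Model {
    lumo: lumo::Model, // the feature's model, nested in ours
}

fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Initialize => vec![Effect::Render],
        Event::Articles(url) => vec![Effect::Http(url)],
        // Delegate to the feature, then map its effects into the top-level enum.
        Event::Lumo(e) => lumo::update(e, &mut model.lumo)
            .into_iter()
            .map(|fx| match fx {
                lumo::Effect::Render => Effect::Render,
                lumo::Effect::ServerSentEvents(s) => Effect::ServerSentEvents(s),
            })
            .collect(),
    }
}
```

Note that the two Event enums coexist without collision in Rust because they live in different modules, which is exactly the namespacing point made earlier; it is only at the typegen boundary that collisions have to be handled explicitly.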
So let's focus again on the effects, because this is important. When we compose features, the top-level app needs to be able to resolve the effects that come from all the different composed features. In this case we have render, which of course is common to all apps; we have HTTP, which comes from Proton News and is what we use to fetch the RSS feed; and we have server-sent events, which is what Lumo uses for the HTTP stream. What this tells us is that it's extremely important to push side effects to the edge. It's already important for testability, of course, but it's just as important for composition. The more these effects look like low-level IO primitives, like "here's an HTTP request" or "here's a database request" (say we talk to a key-value store, for example), the more we increase the chances that they'll be common to more features, and the easier it becomes to integrate the features into each of the apps.
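As a rough illustration of why low-level effects compose well: the shell needs only one resolver per IO primitive, and that single resolver serves every composed feature. This is a hypothetical sketch, not Crux's actual effect types:

```rust
// Hypothetical top-level effect enum after composition: one variant per
// IO primitive, regardless of how many features emitted it.
#[derive(Debug)]
enum Effect {
    Render,
    Http { url: String },             // used by the articles feature (RSS feed)
    ServerSentEvents { url: String }, // used by the Lumo feature (streaming)
}

// One shell-side handler per primitive; returns a description of the IO
// it would perform, standing in for the real platform call.
fn resolve(effect: Effect) -> String {
    match effect {
        Effect::Render => "refetch the view model and redraw".to_string(),
        Effect::Http { url } => format!("GET {url}"),
        Effect::ServerSentEvents { url } => format!("open SSE stream to {url}"),
    }
}
```

If features instead emitted high-level, feature-specific effects ("summarize this article"), every new feature would force a new handler into every host shell; low-level primitives avoid that.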
So we looked at how things work on the core side. What about the shells? We have some sort of composition there as well. Again, we wanted the Lumo iOS shell and the Lumo Android shell here to be exactly the same code that was powering the top-level Lumo app. So how do we bring that same code into this composed architecture? To look at that, we need to look at some Swift code. I'm sorry to show you that, but I promise it's not much at all, and it's very important to make sure we write as much of the logic as possible in Rust.

So this is a SwiftUI view. It's not really important to understand what it does; it just displays the view model state from the Lumo feature. And you can see that, to do that, it's talking directly to the Rust core. It has a direct dependency on the Rust core, because it needs to access the view model, which encapsulates the UI state.

Let's look at this Core type in a little more depth. This Core type is a nice Swift wrapper around the low-level type which is directly exposed from the Rust core, which is compiled as a static library on iOS and exposed through an FFI interface that has to go through the C API. Like I mentioned before, in Swift we have nice types which are generated from the corresponding Rust types, but when we communicate with the core, these types need to go through a C-level interface. The way UniFFI does that is by serializing these types into binary data; on the other side they are deserialized and turned into nice Rust types again, and the same thing happens on the way back, when the Rust side needs to communicate back to the shell. Why is that important? Because
if we look at the two cases: in the Ask Lumo app, the feature is at the top level, but in the Proton News app, the same Lumo feature is not at the top level; it's nested inside a top-level event variant of the Proton News app. Which means these two strings of binary data are not compatible with each other. So if we depend directly on one of the cores, this solution won't work out of the box, because we need to be able to compose this feature as part of different cores.

To do that, we want to go from something like this, a direct dependency on the Rust core, to something that looks a bit more like this. Here, the feature is an interface which abstracts over, and is generic over, a Lumo view model and event type. These types come from the Lumo shared crate, the import at line 2, and are the types generated from the corresponding Rust types in Lumo's core. All we're saying here is: to instantiate this view, you need to give me an object which conforms to this interface, so an object that understands the Lumo view model and event types, and that responds to an update call which takes one of these events and can render a view model as a result.

In this way, with an interface, we are able to get multiple cores to conform to the same interface, and then we can reuse the same shell logic, composed into different features and different apps. To do that, in the top-level Lumo app we can just make the core conform to this interface, because the feature is at the top level. But in the Proton News app we need a little mapping. I promise this is the last bit of Swift that I will show you, but the important bit here is line 9. This is the top-level Proton News app, and we know that the top-level view model composes the Lumo view model inside of it. So all we have to do here is extract the Lumo view model from the top level, and then on the way back, at line 11, we might get a Lumo event back. All we have to do is wrap it in the top-level event type for Proton News, and we're good to go.
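The talk shows this abstraction in Swift; the same shape can be sketched as a hypothetical Rust trait, where both the standalone core and the composed core conform to one interface that the view layer depends on. All names here are illustrative, not the real Crux or Proton APIs:

```rust
// The interface the (shell-side) Lumo view depends on, instead of a
// concrete core: feed it Lumo events, read back a Lumo view model.
trait LumoFeature {
    fn update(&mut self, event: LumoEvent);
    fn view(&self) -> LumoViewModel;
}

enum LumoEvent { Submit(String) }

#[derive(Debug, Default, Clone, PartialEq)]
struct LumoViewModel { response: String }

// Standalone Lumo core: conforms directly, the feature is at the top level.
#[derive(Default)]
struct LumoCore { vm: LumoViewModel }

impl LumoFeature for LumoCore {
    fn update(&mut self, event: LumoEvent) {
        match event {
            LumoEvent::Submit(prompt) => self.vm.response = format!("summary of: {prompt}"),
        }
    }
    fn view(&self) -> LumoViewModel { self.vm.clone() }
}

// Composed Proton News core: wraps the Lumo event into its own top-level
// event on the way in, and extracts the Lumo slice of the top-level view
// model on the way out (both collapsed to a field access in this sketch).
#[derive(Default)]
struct ProtonNewsCore { lumo: LumoCore /* plus other features */ }

impl LumoFeature for ProtonNewsCore {
    fn update(&mut self, event: LumoEvent) { self.lumo.update(event) }
    fn view(&self) -> LumoViewModel { self.lumo.view() }
}
```

The point is that the same view code works against either conformer, so the shell for the feature is written once and reused wherever the feature is composed.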
So that was quite a lot of ground to cover, but with all these pieces in the right place, we are able to achieve the goal we set ourselves. We can empower collaboration across teams through Rust business logic which is shared cross-platform. We can enable structuring features as reusable cross-platform SDKs that can be mixed and matched across different apps. To do that, we saw that it's very important to push side effects to the edge to facilitate composition, and we also need to find a way to teach our shell how to talk to multiple cores. We saw one possible way, through an interface; there are of course other ways, but this is one solution that works.

Also, beware the dragons, because this is somewhat cutting-edge technology, so we can expect to face some level of challenges when we start adopting this in an organization like Proton. Of the challenges that we faced, even before the technical ones, maybe the biggest one is the mindset shift. Mobile engineers might not necessarily be immediately on board with a foreign language coming into their iOS or Android apps. So it's very important to communicate the benefits of this approach, and to stress that we really don't want to compromise the native user experience. What we want to do is empower teams to spend more time on what matters, on delivering new features, rather than solving the same problems over and over again across platforms. That's what this is all about. Another
challenge is transitioning from legacy code. Here we saw an ideal scenario where we have two new apps, both built with Crux. In a typical organization, of course, that might not be the case: you might already have apps that you want to start migrating to this architecture. That's the case for Proton. In that situation there are different things you can do. If you have, say, a full-blown architecture in your iOS app, you don't want to change it all in one go. A couple of approaches we have tried: one is to push the communication from the shell to the core to the edge of your shell, so you maintain your iOS architecture but have a single point of contact with the Rust core. In this way you lose some of the benefit of the cross-platform sharing, because you still have a lot of code in the shell, but it can be useful as a transition phase, since you don't want to change all of your architecture at once. Or, if you're integrating a Crux feature with an app that doesn't have a Rust core, you can package the full stack, the Rust core and the UI side, as a Swift package, for example, that integrates everything. One drawback of this approach is that if you compose many Crux features this way, the binary size might increase, because you end up with different Rust binaries in different Swift packages. But as you transition more and more towards something like what we've seen, this problem becomes less important.

Another overhead is the serialization that we saw, for the types to go through the C ABI and back. However, in the Crux architecture this need for serialization is quite low frequency: it typically happens as a result of user input, which is not that frequent. So in real-world scenarios it might not be a problem, but it's something to keep in mind.
So, to summarize: we've seen the way that Proton is empowering independent teams to work effectively together. We saw a lot of technical details, but if there is one main takeaway from all of this, I think it would be: design for testability, and design for integration; really, testability is a type of integration. If you push your side effects to the edge, you can replace them, you can mock them for tests, and you can write end-to-end tests that run in a fraction of a second. But you also get clear boundaries for how you talk to external systems, and that's very useful for integration across products and teams. Another takeaway is that Rust is great at powering this. Proton is adopting Rust more and more across the organization, because we believe it's already helping us to scale, and we think it's going to help us more and more.

Proton is hiring, including for Rust positions, so if you are interested in knowing more, please come and talk to me. I'll be here all day. Thank you.
>> Thank you.
>> We have some time for questions, I think. Yeah, we've got some questions.

>> Thanks for the great talk. So there's a render effect, but you're also returning the view model. I'm curious: in Crux, why does your core have to say when to render? Is there no diffing made from the view model? Does that make sense as a question?
>> It does, and there's no reason why. Render is just an effect like any other. Those effects can carry data themselves, so you can interact with the shell any way you like. I think most modern UI frameworks, SwiftUI, Jetpack Compose, React-like frameworks, all work in a very similar way, in that they effectively bind to a view model. So it's quite nice to have a function you can just call to get the latest view model. And there's a built-in render effect in Crux; you can use it or not. It's a way of signaling to the shell that the view model needs to be refetched, but it doesn't have to work that way at all. In fact, the view model may change depending on where you are in the application. It may be small, it may be big, who knows; it depends on the needs of the application. But you don't necessarily want to pull megabytes of data across the bridge, so there are other ways, like bringing a reference across the bridge, a pointer to some shared memory or something like that, instead. So there are ways of doing this, but it's not enforced; it's just one way.
>> Gotcha.
Questions.
>> Hi, thanks for the talk. I was wondering how you manage the interface between the shell and the core. Obviously, maybe a new version of the core comes out and they've added new features. How do you manage the different shells not breaking because of changes there?
>> Yeah, that's a very good question. An approach we're exploring at Proton to handle this is a monorepo. We're trying to keep all the Crux projects in a single GitHub repo, because another challenge that I haven't mentioned, but that is very real, is that you would otherwise have different versions of the code you're producing across different platforms; the Rust core needs to match the shell code, of course. You also want to try to get to a stable-ish API as soon as you can, but at the same time, in the real world, that's not always possible. So the way we're exploring this at the moment is through a monorepo: you point to a single commit, which is a snapshot of everything, the Rust and the Swift and the Kotlin. That's one way of doing it. Of course there are other ways; it's a problem to consider. Another option would be to have different packages for the different platforms, but then you have to keep them in sync. And another thing we're thinking of is introducing some sort of high-level build system which can manage this for you, but that comes with its own pros and cons. There are different ways you can go about it, but it's something to consider for sure.
>> Okay great thanks.
>> Hi. So the update function itself is obviously sequentially executed, right? Because it takes a mutable reference to the model. And from what I understand, that function is supposed to be quite cheap: all the workloads are actually pushed out and returned as effects. But it's still kind of a bottleneck, right? Have you thought about scenarios where this could become a problem? For instance, high-throughput systems with very large numbers of events coming in that need to be processed, which might end up in a backlog and potentially require sharding these kinds of models.
>> Yeah, it's a great question, thank you. In the Crux repo there's an example, I think it's called bridge echo or something like that, that effectively hammers the bridge with lots of events coming over, in multiple calls of the update function. And we get thousands and thousands through, depending on what platform it is, even with a decent amount of serialization going on. It's not like React Native or something like that, which is quite low level, where every single thing goes across the bridge, because your UI layout and all that kind of stuff goes through it. This is effectively a behavioral bridge; it's much higher level. It's only used when someone clicks a button or types something into a text box or something like that. So there's actually not as much traffic going across as you'd expect. But you're right, it could become a problem; I think it's more the serialization overhead, potentially. We don't see that as a real thing, though; it doesn't show up in applications that we've written with Crux.
>> Cool. Thanks.
>> No worries.
>> Hey, thanks. What did Proton use before Crux, and how was the transition from that to Crux? Was it one big thing, or...?

>> Yeah. So, as I mentioned, we had, and still have in some cases, a wide range of different stacks, because the teams are independent and empowered to choose the best technical solution for the job. So yes, it's a bit of a challenge to integrate when we have a variety of stacks. Before adopting Crux, we also did some experiments with a shared Rust core. We have some projects that pioneered this type of solution at Proton and proved the concept that, yes, it's good for us to share business logic in this way; we are actually able to share a lot of business logic in the Rust core. Crux then just provides us with a really nice pattern which is easy to understand and reason about, so we are transitioning more and more to adopting it. But yes, there is a transition phase which is still very much ongoing. In a way this is the beginning of the process; we're quite far into it, but it still feels like a beginning, because we want to roll it out to more and more products and clients. But yeah, as I mentioned, transition is definitely a challenge.

>> When integrating another language, how
did you find that it worked with the various build systems that I guess you use, like Gradle maybe, or...?

>> Yeah, that's another good question, of course. You need to build the Rust core somehow before you can build the iOS or Android app. Again, there are lots of different solutions for this, from the very basic ones, like shell scripts, to more advanced build tools. We started with the very basic ones, and now we are more and more exploring slightly more sophisticated build systems that can compose all the different parts into a single build process. But I have to say, although it takes some getting used to for sure, once you understand what you have to do, in practice it's not that bad. There's not that much friction in building the Rust core, especially if the team is working together. One thing I actually forgot to mention is that Proton is training all new engineers, and especially mobile engineers, in Rust. So it's providing Rust training for us, and that's very important, because we think it's really important that the mobile engineers understand the whole stack, everything that is going on in the iOS and Android app. We also want to move to a place where everybody is able to make changes full-stack, from iOS down to the Rust. So it's very important to invest in the team culture, so that the whole team understands the challenges, the build process, what it means to build a feature in Rust, and so on.

>> Thank you.
>> Sorry. Yeah, nice talk, and it's cool to see Proton using this technology. I was curious: was it for the purposes of the demo that one application has two cores, and the front end is aware of both cores? It's quite common in the front end to abstract all of, say, the microservices behind one layer. And I'm wondering, at Proton, why wouldn't you make the single core for that application depend on smaller sections of other cores, just to bring in that little bit of functionality? I was just curious if it was for the demo.
>> Yeah. So the main idea of that demo is that we want to enable one team to incorporate a functionality that has been developed by another team, because of the way the teams are structured at Proton, which tends to be around products. But then we want to integrate these products together, and that's when this need arises: when we want to integrate with services which are owned by other teams. They will typically be outside of your system, but we want to facilitate this integration as much as possible, and that's when this composition comes in. But also, if you structure even your own app in this way, you're able to mix and match features inside the app itself. So I guess it provides you with a nice architecture for structuring an app in reusable components. Not sure if that answers your question.

>> I think it's also worth saying that you can compose Crux apps in multiple different ways. You can have two completely separate, independent apps side by side, both with their own bridge; you can do that. You can have them in a parent-child relationship, which is kind of what we've got here. But there are lots of different ways of doing it. The key to it is that it's just data in and data out, right? So as long as you route that data to the right place, it's absolutely fine.
>> Hi. Yeah, thanks for the talk. I was just wondering, because I didn't see any examples of it: does Crux have a DSL to abstract the GUI, the text fields and input fields, from all the different platforms, or how does that work?
>> No, it doesn't. It defers entirely to the platform-native UI frameworks: SwiftUI, Jetpack Compose, React, Next.js, whatever it is. It deliberately doesn't have an opinion on what the UI framework is, because we want to encourage those apps, those platforms, to have their truly native, idiomatic user experience, and that really only comes from the platform-native UI frameworks. There's no point in trying to do what Flutter or React Native would do; we don't want to get in the way of that. This is really just about the behavior of the app, and sharing the logic in a testable format across all platforms. Does that answer your question?
>> Yeah, it does. I'm just wondering where the interface layer is. Does the render component sort of pass down to a...

>> Yeah. So the shell is typically a very, very thin layer. It's some UI, plus what we've typically called capabilities: an HTTP capability, a server-sent events capability, maybe a key-value capability. And they're very generic; you only need three or four of them in the app in total. There aren't that many different types of IO. You might have a websocket one, and a few others, but there aren't many of them. They're tiny and they're generic, so you only need one of each: one HTTP capability can do all the HTTP requests. But the UI layer itself is very, very thin; it's effectively there just to render the view model.
>> Right. Yeah, thanks for clarifying.

>> It's worth saying that all those UI frameworks work the same way now, so it's nice and easy to integrate with.

>> Thanks.
>> All right. That was a brilliant talk. Thank you, Stuart and Ludo. Please give them another round of applause.