
Solana Developer Bootcamp 2026: Learn Blockchain and Full-Stack Crypto Development [Full Course]

By Solana

Summary

Topics Covered

  • Global Finance Is a Patchwork Held Together with Duct Tape
  • Financial Infrastructure Is Becoming Software
  • AI Agents Need More Context Than You Think
  • Smart Contracts Are Stateless by Design
  • Code Replaces Lawyers as the Trusted Middleman

Full Transcript

Welcome to the new, full end-to-end Solana blockchain and smart contract boot camp.

In this course, we'll show you how to design, build, and ship real full-stack applications on Solana using the latest tools and most efficient workflows available. Whether you're completely new to blockchain or already coding, by the end of this course you'll be able to build and deploy production-ready Solana applications on your own. At this moment, Solana is one of the most active developer ecosystems in the entire blockchain space. Every day, companies building on Solana are looking for skilled developers who can work across the stack and ship real applications.

While no course can guarantee a job, this boot camp is designed to equip you with the practical skills needed to build on Solana. By creating this course, we also want to solve a common problem: most tutorials stop where real building begins. This boot camp focuses on practical, hands-on work and teaches engineering thinking. The new version consists of 12 projects, each designed to give you a strong foundation for building real applications in the future. The projects progress from setting up your development environment and writing your first Solana program to designing and building complex production-ready applications.

I'm Brianna Mglachio, and I'll help you set up your environment and walk you through building core Solana programs.

Hi, my name is Gibbo. Together we're going to build a prediction market, focusing on the full stack to see the interactions between the program and the front end.

I'm Cat McGee, and I'll be covering privacy on Solana and building private transfers with zero-knowledge proofs.

I'm Mike Ma. Together we'll look at x402 client-side development and real-world asset examples built on Solana.

Hey, I'm Robert, and I'm going to be explaining some Solana security issues.

Building real systems means running into real problems. When you get stuck, you'll find help through Solana's developer support channels and community resources, and each project includes reference solutions and notes to guide you forward. All resources are linked in the description below. Take your time with each project and don't rush it. The goal isn't to finish the course, but to understand what you're building.

Every country has its own financial system: banks, stock exchanges, payment networks, all built decades ago, all operating on their own rules, their own hours, and their own fees. If you want to send money from Tokyo to São Paulo, it passes through multiple banks, it takes days, and it costs a percentage. If you buy shares in a German company from Indonesia, you need brokers, currency conversion, settlement periods. The global financial system isn't actually global. It's a patchwork of local systems held together with duct tape and SWIFT messages. Solana is what happens when you build financial infrastructure from scratch for the internet age.

Let's talk about what settlement actually means. When you buy a stock, it shows up in your account immediately, but the actual transfer of ownership takes two business days. During those two days, multiple parties are reconciling records, moving money between accounts, and verifying that everything matches.

This is called T+2 settlement: trade date plus two days. It exists because the underlying systems were not designed for instant anything. They were designed for paper certificates and phone calls. Now multiply this inefficiency across every financial transaction on earth. Trillions of dollars locked up in settlement limbo. Billions paid to intermediaries who merely reconcile records between systems that don't even talk to each other. Payment networks charge 2 to 3% because they can. International transfers cost $25 to $50 because the correspondent banking system requires it. Markets close at 4 p.m. because humans go home. None of these limitations are laws of physics. They're artifacts of infrastructure built before the internet existed. Solana is a global ledger that anyone can read, anyone can write to, and everybody agrees on.

Think about what a ledger does: it records who owns what. Your bank has a ledger. The stock exchange has a ledger. Visa has one. They're all separate, they all need to be reconciled, and accessing them requires permission from whoever runs them. Well, Solana is a single ledger that settles transactions in around 400 milliseconds. Not two days; less than half a second. It processes thousands of transactions per second. It runs 24/7, all year round. And it costs fractions of a cent. Not 2 or 3%; fractions of a cent. Here's the key part: it's not controlled by any single government or company. It's run by a network of independent validators spread across the world, all running the same software, agreeing on the same state of the ledger through a consensus mechanism.

This is not theoretical. Right now, over $150 million moves through Solana daily in stablecoin payments alone, and that number is going up. Trading platforms built on Solana execute millions of trades without traditional market makers. People in Argentina use it to save in dollars without opening a US bank account.

The infrastructure exists. It's operating at scale. The question isn't whether it works; it's how quickly industries are adopting it.

Let's get specific about trading. On a traditional exchange, when you place an order, it goes to a broker, who routes it to a market maker or exchange, who matches it with a counterparty. Each layer takes a cut. Each layer has latency. Each layer represents a point of failure or a conflict of interest. On Solana, trading happens through programs that execute automatically. You connect directly to a marketplace, your order matches against available liquidity, and settlement happens atomically, meaning the trade either completes entirely or doesn't happen at all. No partial fills sitting in limbo. The spreads are tighter because there's actual competition. Anyone can provide liquidity, not just licensed market makers. Anyone can build a trading interface, not just licensed brokers.

This isn't about replacing Wall Street with chaos. It's about replacing rent-seeking intermediaries with open protocols. The trading still needs skill. The analysis still needs expertise. But the pipes, the actual movement of assets, become a commodity rather than a moat.

Now consider payments. Each payment network is essentially a messaging system with a settlement layer underneath. Visa messages your bank. Your bank messages their bank. Eventually, money moves. On Solana, the message is the settlement. When you send value, the ledger updates, done. The recipient has the funds, not a promise of funds. No funds pending; actual ownership transferred. For someone receiving a $500 international payment, this is the difference between getting $500 and getting $465 after fees three days later. For a business doing global commerce, it's the difference between managing float across multiple currencies with multiple banks versus a single treasury that operates everywhere simultaneously.
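To make the fee comparison concrete, here is the arithmetic in plain shell. The 3% card rate and the $35 wire fee are illustrative figures picked from the ranges quoted above, not exact numbers from any particular network:

```shell
# Illustrative fee math for a $500 international payment
amount=500
card_fee_pct=3   # high end of the "2 to 3%" card-network cut
wire_fee=35      # a flat fee inside the "$25 to $50" wire range

echo "card network keeps: $(( amount * card_fee_pct / 100 ))"   # prints 15
echo "recipient gets after wire fee: $(( amount - wire_fee ))"  # prints 465
```

That $465 is the figure from the narration: a flat wire fee alone eats 7% of a $500 payment.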

Stablecoins are digital dollars that live on Solana. They have found massive product-market fit precisely because they solve this problem so elegantly: same dollar value, programmable, instant, nearly free to move. We're early in this transition, but the trajectory is clear. Financial infrastructure is becoming software. Software that runs on open networks. Software that beats locked-in proprietary systems, the same way the internet beat private networks, the same way email beat the fax machine. Solana isn't asking permission to integrate with legacy systems. It's building parallel infrastructure that's simply better: faster, cheaper, more accessible, and more transparent. The opportunity isn't to make the old system slightly more efficient. It's to build a new system that makes the old one irrelevant.

Every application that touches money (trading, payments, lending, saving, insurance) has to get rebuilt on infrastructure designed for the internet. That's what Solana is. It's not an alternative to finance. It's the next version of it.

Before we write a single line of Solana code, we need to get your machine set up properly. So let's go over what we'll be installing. First, Rust. Solana programs are written in Rust, so we need the Rust compiler and Cargo, Rust's package manager. If you've never written Rust before, don't worry; we'll cover everything you need to know, but the toolchain needs to be there. Second is the Solana CLI. This is your command-line interface for interacting with Solana networks. You'll use it to create wallets, deploy programs, check balances, airdrop yourself devnet SOL, and a lot more. It's essential. Third is Anchor. Anchor is a framework that makes Solana development significantly easier. It handles a lot of the boilerplate and security checks that you'd otherwise have to write manually. Most production Solana programs today are built with Anchor, and it's what we'll be using throughout this boot camp. And fourth is Surfpool. This is a local test validator that's extremely fast. Instead of waiting for transactions to confirm on devnet, you can test locally and get instant feedback on your code. It's a huge productivity boost when you're iterating quickly. Now, I'll be showing this on a Mac, but I'll include instructions for other operating systems. The process is pretty similar across platforms. Now, let's get started.

So we're going to go straight to the Solana docs, at solana.com/docs, where there's an installation section. This goes over everything you need to set up your environment, and it's pretty simple, because everything's been wrapped into one curl command that installs all the dependencies you need: Rust, the Solana CLI, the Anchor CLI, Surfpool, Node.js, and Yarn.

One thing to note, though: if you're running Windows or Linux, there are prerequisites you'll have to meet before actually running this curl command in your terminal. I'm running a Mac, so I can't show you how to do it, but if you just go to the documentation, everything is very explicitly stated there for you to follow along. Once that's all set up, we can get to our install. Here we're simply going to copy this command, paste it into our terminal, and let it run.

It's going to take some time, but what it does is make sure every dependency is installed on your machine, moving to the next one as each finishes. If you already have something installed, it won't reinstall it; you can see mine ran pretty fast just because I already had everything installed on my machine. It also checks versions for you. Once everything is installed, you'll see it say "Installation complete. Please restart your terminal to apply all changes." So I'm going to close this out and reopen the terminal (I'll zoom in a bit for you there). Then all we're going to do is check that everything was installed correctly. That command is also explained in the docs; all it does is check the version of each dependency installed on your computer. We'll copy it, paste it into our terminal, and you can see I have Rust, the Solana CLI, the Anchor CLI, and Surfpool all installed, so we're all good to go there.
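For reference, the install-and-verify flow from this section looks like the sketch below. The install URL is the one-liner from the Solana docs at the time of writing, and the exact command may change, so copy the current version from solana.com/docs rather than from here:

```shell
# One-liner installer from the Solana docs (verify against solana.com/docs;
# Windows/Linux users need the prerequisites listed there first):
curl --proto '=https' --tlsv1.2 -sSfL https://solana-install.solana.workers.dev | bash

# Restart your terminal, then confirm each tool is on your PATH:
rustc --version
solana --version
anchor --version
surfpool --version
```

This is an environment-setup fragment, so the output will vary from machine to machine; the point is that every version command should print something rather than "command not found".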

Now that that's set up, if you're curious to dive deeper into each of these CLIs and the basics that go with them, there are docs for each of them. You can go into those pages, take a look, and figure out how to use them. They'll be used throughout the rest of the boot camp, so you don't have to read through that documentation now; you can just follow along with the rest of our courses. But if you're curious, take a look. Next, we're going to go over a few more basics for getting your computer ready, so you can follow along with the rest of our projects.

The next thing I'm going to look at is setting up your local wallet on your computer. If I go to the terminal and type in solana address, it uses the CLI to check which wallet is configured as the local keypair on your machine. I'm going to type this in, and you can see I get "no signer found" right now. Most of the time it's going to show that, unless you've already set it up. If you're installing for the first time, this is the output you're going to get, and you'll then run solana-keygen new. (The command output actually tells you what to run, which makes it easy.) What this does is generate a new keypair and store it on your machine.

Now that that's created, your seed phrase will be displayed on the screen. Mine's going to be blurred out so you can't see it, but make sure you save yours: it's how you're going to access your wallet in the future. You'll then be able to see what the wallet address is; this is the public key that's returned. If I type in solana address now, it should return the same public key. You can see that these match up, and that is the local keypair on my machine.

So now, if I want to test anything out locally, it's going to use this wallet, and to be able to test, I'm going to need some devnet SOL in it. How are we going to get that? Well, there is a faucet. I can take this address, go to the Solana devnet faucet, paste it in, choose an amount, and confirm the airdrop. This does require you to connect your GitHub. I don't want to connect mine currently, just because that would show all of my login details, so we'll go back; there is another method you can use. It has a rate limit on it, so it's a lot more efficient to use the faucet, but if you go straight into the Solana CLI and type solana airdrop 2, it will request two SOL for my wallet. You can see the rate limit was reached, because I'm on Wi-Fi that a lot of other people are using right now, and they've probably already requested some SOL. So I'm going to go to another browser where my GitHub is actually logged in. One second; I'll open that and go to the devnet faucet.

Okay. I'll connect my GitHub here, paste in the wallet address, choose an amount, and confirm the airdrop. We'll verify that, and you can see the airdrop was successful.

So now we have funds in our wallet, and we can use them for testing purposes. We can confirm this actually happened by using the explorer. The explorer is Solana's block explorer, and it shows you recent transactions, token accounts, mint accounts, wallet accounts, all the activity that happens on chain. It can also connect to different clusters. Here you can see we're connected to mainnet-beta, which is Solana's mainnet, where all of the actual real traffic happens. There are other clusters too: localnet, when you're running a validator locally on your machine; testnet, where mainly the network team tests things; and devnet, which is where you'll be testing things while you're building. Here you can see I airdropped devnet SOL to my wallet, so let's check that. I paste my wallet address in, and since I'm now connected to devnet, we'll click over, and you can see the balance in my wallet is now five SOL, because I just requested five SOL from the airdrop.

So that's an overview of pretty much all the basics you need to be set up on your computer. We went over installing all of the dependencies on your machine, how to set up your wallet locally in your terminal, how to request funds to your wallet, and how to use the Solana block explorer. Now you're all good to go: you have your machine set up to start working through the tutorials in the boot camp.

Now that you've installed your local environment, we'll share different ways you can get started creating your Solana projects. The first way is using our templates marketplace, where you can find different applications pre-built for you. You'll be able to filter by frameworks and by use cases. We'll also share how you can use create-solana-dapp, where everything is pre-wired for you; it's the best way to make a simple hello world and get started quickly on these projects.

When you want to start a new Solana project, you have two options. The first is to clone one of our pre-made templates to get started with the specific tech stack you're looking for. Let's go and look at those options. I'm on the Solana website here. I'll go to the developer dropdown and click the templates section.

The templates section is a marketplace of templates that are ready to go with a specific tech stack, and there's a lot of variety to choose from. If you're looking for something fairly precise, if you already have an idea of where you want to get started, you can filter with the menu on the left side here. For example, if I want to build a mobile application with React Native, I can see all of the options for React Native. If you want to use one of these templates directly, simply select it, which takes us to the template details page. There you can read the README of that project, with all the information about how to use it and the different technologies that compose that template. Once you've selected it, you're ready to use it: go ahead and click "Use this template".

You'll be presented with options for npm, yarn, bun, and pnpm; we cover all of them. Simply choose your favorite package manager. If you're not sure, npm is the standard one, so I'd advise using it. Copy the command to clone it to your computer. With the command copied, I go over to my terminal and paste it, hit Enter, and here I can enter my project name, and it will clone everything for me. That's the use case if you know exactly what you want to do: if you want to build a mobile application, say, or do something with a Phantom embedded wallet. If you're not sure, what we'll do instead is use the beginner starter templates, which are the most neutral, most flexible ones. For this, you'll run npx create-solana-dapp@latest in your terminal.
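The starter-template flow condenses to a single command. The project name below is just the example used in this walkthrough:

```shell
# Interactive scaffolding: prompts for the project name and template group
npx create-solana-dapp@latest

# Or pass the name directly (example name from the video):
npx create-solana-dapp@latest bootcamp-take-2
```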

It will guide you through an interactive setup. It will ask for your project name; I'll say "bootcamp-take-2". Once you hit Enter, you'll be presented with the groups of templates you can use. These groups are the most general ones. For example, the kit framework templates are the simplest; they use our new JavaScript SDKs, Solana Kit and the Solana React hooks. I'll pick this one. Then you have a choice between Next.js, which is just a Next.js front end with an Anchor program, or the same options using the React Vite framework. I'll go ahead and select the Next.js Anchor one. Once I select it, the create-solana-dapp CLI clones the code locally into your own folder and runs the npm installation steps for the JavaScript part.

These steps should take, I would say, two to four minutes depending on the template, because we're setting everything up for you. For example, here the front-end setup has completed, and it's now progressing to setting up the actual Anchor program and generating the connections, all the piping, all the little things needed to make your full-stack application come to life, which we'll see later in the tutorials in this boot camp. All of this is getting pre-wired for you, so it does take a little bit of time, two to four minutes at most. But once this is all ready to go, you'll have a working, running application ready for you to code in directly. And these applications, by the way, come with a lot of documentation, so they're fully ready for AI agents: tools like Claude Code or OpenAI Codex will work out of the box, simply because the links to the documentation and the READMEs are well detailed and easy to get started with. So let's give it a bit of time to see how this plays out.

So now the setup is almost complete. We're running the last step, the init script. Once it's done, you'll get this "installation successful" message. You'll want to navigate with your terminal inside the project, so I'll cd into bootcamp-take-2, and you'll want to open it in your favorite code editor. In my case, I'll be using VS Code, as it's what I consider pretty much the most open, most standard one these days. So: code . to open it.

All right. Once you're in there, the basic Next.js Anchor project is split along two lines. If you open the package.json here, you'll see that we have a bunch of commands, with anchor build and anchor test taking care of your Solana program, and the other commands mostly relating to the web application side. The folders are split in a similar way: the anchor folder holds the code for your Solana program. Let me go ahead and close a few windows here, and let's close this Anchor program. If we open the app folder, we're in the Next.js application. I'll go ahead and actually start this full-stack application by running npm run dev back in my terminal. This opens our dev server on localhost:3000. Once we open it up, you'll have instructions linked to the docs, wallet connection, and a vault program deployed to devnet, all pre-wired for you.

Obviously, one of the reasons we've set it up this way is that you'll have an easy time if you simply hand this to a coding agent, a coding AI, because everything is documented and the code is pretty simple. A lot of the time AI can trip over things; in this starter template, everything is pre-wired, so it should normally do really well. All right, so that was starting with the Solana templates marketplace and using the create-solana-dapp CLI.

One of the hardest things about producing a boot camp that will last for six months to a year is that coding is changing quickly these days. I used to write all of my code by hand; these days I write most of my code with agents like Claude Code or GitHub Copilot. So in setting up these resources, we want to acknowledge that we're sharing what we think, in January 2026, is a good set of tools to get started. Just take note that these tools could change next week, next month, next year. We don't know what things are going to look like, so we're keeping all of these resources up to date for you.

Obviously, one of the best places to start if you're working with an AI agent is to always include more context than you think it needs. As an example, the Solana documentation is kept up to date by a professional team of many people; we always keep our best practices and best coding resources current on the documentation website. You should always be referring to this documentation in all the projects you're building, and you should point your AI agents at it too. It's a good resource, but there are also more AI-tailored resources you can use. Here on screen, I'm sharing the Awesome Solana AI list, an ever-growing list of AI resources that you can use for AI and Solana together. For example, it links to a bunch of dev skills for consuming different frameworks on Solana, plus links to different AI agents and developer tools. This list is brand new as of the recording, and we're expecting it to grow exponentially, so please do take the time to read through these resources so you can adjust your workflows.

We're mostly giving you knowledge in this boot camp about how to work on Solana. The trick when you're working with AI is to have the knowledge yourself and be the free thinker. You don't want to outsource your thinking to the AI; you want to guide it, to be the architect of how the AI works. So read these resources, stay up to date with the documentation, and you'll already be ahead of the curve when it comes to shipping good code fast.

And as of January 2026, one of the most popular coding tools out there is Claude Code. As part of this list, one of the links, which I consider the most important link on the list, is the Solana dev skills, which are meant to be consumed inside Claude Code so you can simply get started faster on your project. These skills are essentially an opinionated way of working on Solana. They will also be updated over time: they're a collection of Markdown files on how to work with IDLs, payments, Anchor, Pinocchio, all of the frameworks you're currently learning. I would still highly encourage you to learn these yourself, even if the code is going to be written by an AI agent, because you'll get faster and better as you do. So let's see how I can consume these AI dev skills. I'll copy this repo link from GitHub and head over to my terminal.

In my terminal, I have a project here which I cloned from create-solana-dapp, and what I'll do is start Claude Code. All right, let's set it up. Once you're inside Claude Code, you can simply say "install these skills from Solana" and paste the link to the Solana dev skills. What will happen is that Claude will ask you for a few permissions.

skill. What will happen is that claude will ask you for a few permissions. It

will ask you, can I fetch this content?

You will say yes. When you're working with AI, there's different mentalities.

You can decide to give full control, which would be referred to as people vibe coding, or you can stay in charge and still approve all the actions. As a

beginner, I would encourage you to approve action one by one. Read,

understand what's happening. So now what is happening here is that it wants to install these skills onto cloud code.

You will proceed and and say yes. A

cloud skill is a skill that um Claude is able to locally work and refer to as its own documentation, its own playbook for

interacting with these um with these specific activities that it's trying to do. So what you have to understand, I'm

do. So what you have to understand, I'm giving a demo here and there's a little bug. This happens often when you're

bug. This happens often when you're dealing with AI. AI is not predictable.

They're probabilistic systems. So as you saw there were a few um a few bugs and I have them installed already locally but just read what is happening and make

sure that these skills get installed.

You will have more of an interactive experience. I cannot script this

experience. I cannot script this perfectly in this tutorial simply given to the fact that AI is um not always predictable. So once you have these

predictable. So once you have these skills installed now I am in this vault create Solena DAP program. I could say

please um please use the Solana skills to improve this vault and add SPL tokens.

I'm giving you this example because when you're building your own projects, you will want to use these skills, especially since Claude, ChatGPT, or Gemini might be out of date on the latest documentation. If you use these skills, you'll have better access to the most recent documentation that we're curating and maintaining for you. This is not going to be a whole demo of how to use these; it's more about best practices and how to set up your environment.

What you should take away from this tutorial is that the more information you give your coding agent, the better it will perform. If you're just crossing your fingers and hoping for the best, you're likely not going to get good results. This is what we call context engineering: context is all the information you give your AI agent to get the best result out of your coding session. By giving it access to the docs, the coding skills, and the awesome AI Solana list, you're narrowing down what the agent should work on instead of casting a wide net and hoping for the best. So this is the end of this section on AI best practices.

The most important thing to remember is that before going to production, if you're handling people's money, you should always make sure your program is working well. I would highly encourage you to also do a security audit with certified auditors. And remember that you're still responsible for the code that AI writes. Your name is still on the commits, even if you're not writing every line yourself. So be responsible and understand what is happening under the hood. You don't have to be an expert in everything, but you still want to keep a good grasp on what you're building. You want to be the architect, the orchestrator, instead of outsourcing your whole process to AI.

Thank you.

The next application we're going to build introduces how to handle data on chain. Specifically, for this project we'll be tackling one of the most fundamental challenges in modern democracy: voting. Think about traditional elections for a moment.

Whether it's a government election, a DAO proposal, or a community poll, we face the same core problems. How do we ensure every vote is counted accurately? How do we prevent fraud? How do we maintain transparency while protecting voter privacy? And perhaps most frustratingly, why does it take hours or even days to get results? This is where blockchain technology shines. Let me break down why blockchain is uniquely suited for voting. First, immutability. Once a vote is cast on chain, it cannot be altered or deleted; the ledger is permanent and tamperproof. Second, transparency with privacy. Every vote is publicly verifiable on the blockchain, meaning anyone can audit the results in real time. But here's the key: voters are identified by their wallet addresses, not their personal identity, so you can prove your vote was counted without revealing how you voted. Third, instant verification and results.

Traditional elections require manual counting, recounts, and verification processes that take days. On blockchain, votes are tallied automatically as they're cast. Results are available the moment the voting period closes, and every participant can independently verify the count. Fourth, accessibility. Blockchain voting can happen from anywhere with an internet connection. No need for physical polling places, no issues with mail-in ballots getting lost: just your wallet and a few clicks.

In this project, we'll build a complete on-chain voting system on Solana. Here's what makes our implementation powerful. We'll use program derived addresses, or PDAs, to create deterministic voting accounts. This means each poll gets its own unique account derived from the poll's parameters, and each voter's ballot is stored in an account derived from their wallet and the poll ID. This architecture prevents double voting while maintaining a clean, scalable structure. You'll learn how to manage

state efficiently on chain, handling vote tallying in real time as transactions come in. We'll implement proper access controls to ensure only authorized users can create polls and only eligible voters can cast ballots. The code you write here scales from a small Discord community poll to governance systems managing millions in treasury funds. The principles are the same: trustless verification, transparent execution, and immutable results. Let's get started and build the future of voting together.
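The double-voting guard described above can be sketched off chain in plain Rust: one ballot slot is deterministically keyed by the voter's wallet and the poll ID, so a second ballot from the same voter simply cannot be created. The types and names here are illustrative, not the on-chain program we'll write later:

```rust
use std::collections::{HashMap, HashSet};

// Off-chain sketch of the PDA-per-ballot idea: each (voter, poll_id) pair
// maps to exactly one ballot slot, mirroring a ballot account whose address
// is derived from the voter's wallet and the poll ID.
#[derive(Default)]
struct PollTally {
    votes: HashMap<String, u64>,     // candidate name -> vote count
    ballots: HashSet<(String, u64)>, // (voter, poll_id) pairs already used
}

impl PollTally {
    // Rejects the vote if this voter already has a ballot for this poll,
    // just as re-initializing an already existing account fails on chain.
    fn vote(&mut self, voter: &str, poll_id: u64, candidate: &str) -> Result<(), String> {
        if !self.ballots.insert((voter.to_string(), poll_id)) {
            return Err("ballot already exists: double vote rejected".to_string());
        }
        *self.votes.entry(candidate.to_string()).or_insert(0) += 1;
        Ok(())
    }
}
```

A first vote from a wallet succeeds; a second vote from the same wallet in the same poll fails, while the same wallet can still vote in a different poll.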

So for this project, we are coding the voting application. This is actually the very first Anchor program that you're going to write in the boot camp, so for this one we're going to do it from scratch; the rest will focus more on understanding concepts at a high level. But here we're going to show exactly how to get started writing an Anchor program. I already have my voting directory open in my terminal, and all I'm going to do is type anchor init. This is going to initialize my workspace, but I should probably name it, so we'll do anchor init voting.

Okay. Now it's generating a workspace for me, set up for Anchor smart contract development on Solana. This works because you already downloaded Anchor onto your machine earlier, in the install video, and with that comes the Anchor CLI. Okay.

You can see it initialized an empty git repo for me with this anchor init command. So I'm going to cd into voting, open up the code, and we'll talk through what anchor init generated for you and then actually start writing our program. This will be pretty quick, because creating a voting app on chain is actually very simple. Let's zoom in a bit so you can see it. Here you can see a lot of generated folders. This app folder is for when you're ready to build your front end; we're not going to cover the front end here, but it would go into this directory. We have migrations as well, and then your node modules. All of this is really for the front-end TypeScript side of things. So we're going to focus on this programs folder.

Inside programs, you have voting and its src folder, and you have your Cargo file. Now, this is actually set up for a much more complex program than what we need. You can see there's an instructions folder, which holds all of the separate instructions for your project. There's constants, error, instructions, lib.rs, and state, and this is the most common best practice for structuring smart contracts. As you'll see when we get to more advanced concepts later in the boot camp, like the stable swap AMM, it's going to be structured this way. But just for simplicity here, I'm actually going to delete all of this besides lib.rs. Your main code, with the entry point for the program, always lives in your lib.rs file. So I'm just going to delete this as well. Okay, now we just have our lib.rs, and we'll get rid of the mod declarations because we don't need them; there are no other files.

Okay, great.

Now let's actually start writing the program. You can see there's already some boilerplate code here, and it's importing anchor_lang, which is the crate for the Anchor framework. There's this declare_id, and what it does is define the program's ID, which is the value you see right here. This is your public key for the program, and once you deploy the program on chain, you'll be able to identify it by this public key. We'll see that later when we connect to an explorer: searching the explorer for this public key is going to show us this program. Next, you have a program macro. All it's doing is basically saying, hey, all of the functions inside here are going to be the instructions for the smart contract that I'm writing. That is how Anchor helps simplify a lot of smart contract writing: it covers your entry point and serializes and deserializes data for you, all within this macro. So all you have to worry about is the function and the logic behind it. Okay,

so now let's figure out what we actually want to write for voting. If we think about voting, it's not going to handle tokens on chain; it's going to be one of your most simple smart contracts, really just handling data. And for voting, you're keeping a score between different candidates. So how does data work on chain? Well, one thing we know is that smart contracts on Solana are stateless, meaning no data is stored within the smart contract itself. If you're coming from a different chain like Ethereum or Cosmos, this is going to be a very new concept for smart contract architecture. So where is the state stored? It's stored in accounts. What we're going to have to do is define what we want our state to be and what those accounts are going to look like. Whenever I'm designing a smart contract, I always like to start with the state and then

move on to the logic after. So how do we actually do that? Well, we're going to define an account, because that's where state is stored. Whenever we want to create an account with Anchor, all we do is use the account macro that Anchor provides. So I'll type out the account attribute here, and that does all of the hard behind-the-scenes work of generating an account and storing data. Now all I have to do is define what data structure, what fields, I want to store in my account state. I'm going to name this struct PollAccount and enter all of the fields that I want. So let's think: if I'm creating a poll on chain, what do I want to store? Well, I'll probably want the name of the poll and a description, so let's add those first.

We'll do pub poll_name, and we'll define that it's going to be a String. Now, there's something you have to know with Anchor: whenever you have a String field, you have to define a max length. The reason is that Anchor is calculating the space the account needs to take up on chain in order to pay rent for it. So when you calculate the space, if you have a String, you want to know roughly how long the string will be in order to calculate space accurately. So I'm going to define the max_len attribute here, and for the name we'll just do 32. It doesn't need to be that long; it's just a name. And back to what I was saying about needing to calculate how much space is used on chain: there's actually a handy Anchor macro that does this automatically. I remember when I first started building on Solana, you had to manually calculate how much space an account used, and it took forever. Now you can just put derive(InitSpace) as a macro underneath your account macro, and it's going to automatically calculate that value for

you. So now I'm going to define the next field, and that's just going to be the description. I'll need the max length for this one to be a bit larger, because a description is usually longer than the title; let's make it 280. Okay. We'll name it poll_description, and it's also going to be a String.

I used the wrong brackets. Little typo. I'm like, why is there an error? That's why. Okay, cool. So now we have those fields. The other thing that's important when you're creating a poll is when it's going to start and when it's going to end, because you don't want the poll to go on forever. So let's add those fields as well. Now, this introduces the concept of having time in your smart contract. We'll talk about how you actually get the time when we're writing the logic, but for now we just need to keep those values in our state, and time is going to be a u64 type because we're using a Unix timestamp here. Okay, so we're going to have poll_voting_start and poll_voting_end, and those will also be u64s. And then the last thing is how many options I want in the poll, so we'll just keep an index here that we count up: poll_option_index.

Okay, great.
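Before relying on derive(InitSpace), it's worth seeing the arithmetic it automates. Assuming the fields above (two max_len strings and three u64s, treating poll_option_index as a u64), a Borsh-serialized String costs a 4-byte length prefix plus its maximum bytes, each u64 costs 8 bytes, and Anchor prepends an 8-byte discriminator. A hand-rolled version of the calculation:

```rust
// Manual equivalent of what #[derive(InitSpace)] computes for this
// PollAccount layout. Anchor's INIT_SPACE constant excludes the 8-byte
// discriminator, which is why the init constraint later reads
// "space = 8 + PollAccount::INIT_SPACE".
const DISCRIMINATOR: usize = 8;
const STRING_LEN_PREFIX: usize = 4; // Borsh stores a String's length first

fn poll_account_space() -> usize {
    DISCRIMINATOR
        + (STRING_LEN_PREFIX + 32)  // poll_name, max_len(32)
        + (STRING_LEN_PREFIX + 280) // poll_description, max_len(280)
        + 8                         // poll_voting_start: u64
        + 8                         // poll_voting_end: u64
        + 8                         // poll_option_index: u64
}
```

Rent is charged per byte, so getting this number right is exactly why max_len matters.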

So, that's everything for the poll account. Now let's think: if we're creating a poll, what other information do we need to store? Not just the information for the poll itself, but for what we're actually voting on. We have different candidates in a poll, and we're going to need to store that information, so that's another account we'll have to create. And after that, I think that's all of the data we really have to store for this project. Same as before, we're going to add the account macro and derive(InitSpace), because that's what you do whenever you're generating an account. Now we just define what data we want to save for the candidate. So, pub struct CandidateAccount. I'll use the correct curly brackets this time. Now let's see what we want to store.

Well, we need to keep track of the votes, and then the actual thing we're voting for: the name. So let's save the candidate name, and that's going to be a String. Once again, for the String we have to define the max length. You can actually see here it's generating an error; the Rust Analyzer is pretty good with this. It's saying expected max_len attribute. So if you ever forget, you're reminded, because it's going to be bright red and you don't like that. So, max_len, and for the name we'll do the same as the poll name: 32. Okay. I save that again, the Rust Analyzer regenerates, and now the error went away. The last thing we're saving is just the votes. So, pub candidate_votes, and that's a number: we'll just do u64 there. And we're good to go. So now we have our two states. Usually, whenever I'm writing smart contracts, I always start with the state to make sure I know where my data is being stored and what data I have to work with throughout the logic.

Throughout the rest of the boot camp videos, whenever we're explaining a concept, I'm always going to start by explaining what the state looks like. But this is the only time I'm actually manually typing out how to write the account state in your smart contracts. So if you ever get confused in the future, come back to this video and go over this section. Now that I have the account, how do I actually create it? Because all I've done so far is define what the data stored in that account state looks like; I haven't actually generated the account. So let's figure out how that works. Well, there's one key concept with Anchor, and that's the context. Let me write out what I want this to actually look like. This is going to be initializing my poll.

When I initialize the poll, I'm basically just storing all of these data fields into the account and initializing it on chain. Every time you write an Anchor program, the first parameter of your function is always going to be the context. What this context does is map to a custom data structure that you write within your Anchor program using the derive(Accounts) macro, and it basically tells the function: these are all of the accounts you're going to interact with when executing this function, and how you're going to interact with them. So we're going to write the data structure for our derive(Accounts) struct to tell this function that it needs to initialize a new account on chain. Let me name that: I'm going to name it InitializePoll, and we're going to write out our derive(Accounts) macro.

Now all you're doing is writing a struct that holds every single account you want to interact with for the corresponding instruction. For this instruction, all we're doing is initializing a poll account, so all we have to worry about right now is three accounts. The first account is always going to be the signer: whenever you execute an instruction, you need a signer signing for that transaction. So we're going to do pub signer, and that's going to be a Signer type with a lifetime. And I forgot to name my struct, so this is InitializePoll. Okay, there we go. I love how errors show up right away and then you know how to fix them. Now, the next thing we're missing: for each account, we also need to use the account attribute to declare that it's an account. Now that we have that, it shows that the signer is an account. One thing we have to add here is a constraint called mut, and what that does is make this account mutable. In Rust, variables are immutable by default, so whenever you're defining an account here and you know the account is going to change in some manner, you have to mark it as mutable. Whether or not the data in the account changes, here the signer is going to have their balance change, because they're paying for the transaction. So we do have to mark it as mutable.

Okay, that's the first account. The second account is the poll account, because we know we're initializing that. Let me define an account again; we're going to add some constraints on this as well, but we'll come back to that. First, let's just type out that I want this to be poll_account. This is going to be an Account type, and we'll need the lifetime specifier, and it holds the PollAccount data. So this just maps back to the PollAccount we've defined. Now, I left the constraints blank. Whenever you use this account attribute, there are several constraints in Anchor that describe what you want to do with that account during the logic of the corresponding instruction. Here, this account doesn't exist yet, so I'm going to initialize it. Let's type in init.

That's our first constraint. Whenever you initialize a new account, there has to be a payer. The reason is that accounts have to pay to store data on chain: you're paying rent, and that's what we talked about earlier, where you have to calculate how much space an account takes up based on its data. The reason you have to calculate that space is that the amount you pay for rent correlates to how much space you take up on chain. Because of that, we have to assign a payer to the account who is going to pay that rent. So the payer is the signer, and then we just define the space. We have eight, which is the Anchor discriminator, and we add in that handy value we talked about earlier that automatically calculates the space for us: INIT_SPACE.

Okay.

The next thing: we're going to make this poll account a PDA. A PDA is a program derived address. It's essentially an address that doesn't have a private key, and it is created from seeds and a bump. The seeds are input fields that we specify as we create the account, and later, when the program signs for a PDA, it uses those seeds rather than a private key. You can pick whatever seeds you want, whatever you think is most efficient for the program you're creating. Here we're just going to name it: we'll do "poll" and then as_ref. And I'm also going to include the poll ID in the seeds so I'm able to identify each of the polls, and we'll do as_ref there as well.
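The real derivation happens inside the Solana SDK's Pubkey::find_program_address, which hashes the seeds, a bump byte, and the program ID with SHA-256 and rejects candidates that land on the ed25519 curve. That call isn't available in a plain-Rust snippet, so here is a standard-library stand-in (DefaultHasher instead of SHA-256, a u64 instead of a 32-byte address) that demonstrates the one property we rely on: the address is a pure function of its seeds, so anyone can re-derive it without storing it anywhere.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Conceptual stand-in for PDA derivation. NOT the real algorithm: Solana
// hashes seeds + bump + program id with SHA-256 via find_program_address.
// What it shows: same seeds in, same address out, with no private key.
fn derive_address(seeds: &[&[u8]], bump: u8, program_id: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    for seed in seeds {
        seed.hash(&mut hasher);
    }
    bump.hash(&mut hasher);
    program_id.hash(&mut hasher);
    hasher.finish()
}
```

Calling it with seeds like "poll" plus the poll ID's bytes gives each poll its own deterministic account address, which is exactly why the poll ID belongs in the seeds.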

Now you can see we're again getting an error, and the reason is that every time you have seeds, you need a bump, because a PDA is generated from your input of seeds plus a bump. So all we have to do here is specify the bump, which, if I delete this, should tell you: yeah, "bump must be provided with seeds." Again, if you forget, it'll just remind you. Okay, so

there is everything we need to generate this poll account. Now, you can see we're still getting some errors, and the reason is that whenever you create a new account on chain, you also need to interact with the system program, and as we know, we need to specify every single account this instruction is going to interact with. So we're just going to add in the system program, and there we go. We're getting some errors with the lifetime, but those will go away in a second once we finish the

logic. But another error now shows up on this poll ID. You can see here, if I hover over it, it says it can't find the value of poll_id. What I'm doing is pulling a value from the poll account's state; however, this poll ID hasn't been stored in the state yet, because the account hasn't been created, so it can't find that value anywhere. Where we're going to get the poll ID is as an input parameter on the instruction. And whenever we pull variables from the instruction, we have to tell Anchor, so it knows where to look. All we're doing here is adding the instruction macro, and inside it you specify which variable you're pulling from the instruction. So we'll say poll_id is what I'm taking from the instruction, and that's going to be a u64

because it's a number. Okay, so that's good to go. Now we have all of the accounts associated with the instruction, so the very last thing to do is write the instruction logic.

If I'm initializing the poll, all I'm doing here is saving all of the data I need to the poll account. A lot of this is just user input, because I'm generating the poll and I want to be able to specify all of the fields needed to create it. As always, the context is your first parameter, and then we add all the other input parameters. So we have poll_id. I could store the poll ID in the PollAccount state, but I don't really need to, because it's already encoded in my PDA. And I wanted to introduce another Rust concept here, which I'll explain in one second: this is going to be the only parameter that isn't actually stored into the state. The poll_id we're using for our PDA, and the rest of these input parameters are everything we need that is being stored. So we have poll_id, start time, end time, name, and description.

So start is a u64, end is also a u64, name is a String, and description will also be a String.

Those are all of our input parameters, and all we have to do is store them in our state. It's very easy with Anchor, because I can just take the context and load it in. I can do ctx.accounts, and that gives access to all of the accounts I've specified here. So, ctx.accounts.poll_account. Now I'm just going to access every field, but I can bind this to a variable so it's easier to write out: we'll make poll equal to our context's poll account. Okay. Now I can just do poll.poll_description equals our description, and we're going to fill in all of these fields. What this is doing is taking each field in the account state and saving the information that came in as input to the function into our account state. So we have poll_description, poll_name, and then poll_voting_start and poll_voting_end. And I've used the wrong punctuation for all of these because I can't see that far. Okay, cool. Now that this is done, all we have to do here is return an Ok value, because our result is an empty value here. So, Ok(()).
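Stripped of the Anchor macros, the whole instruction is just field assignment. Here is a plain-Rust mirror of the handler body, with PollAccount as an ordinary struct; in the real program, poll comes from ctx.accounts and the function returns a Result:

```rust
// Plain-Rust mirror of the initialize_poll handler: the state struct plus
// the field-by-field assignment the instruction performs. In Anchor, `poll`
// would be &mut ctx.accounts.poll_account.
#[derive(Default, Debug)]
struct PollAccount {
    poll_name: String,
    poll_description: String,
    poll_voting_start: u64,
    poll_voting_end: u64,
    poll_option_index: u64,
}

fn initialize_poll(
    poll: &mut PollAccount,
    start_time: u64,
    end_time: u64,
    name: String,
    description: String,
) {
    poll.poll_name = name;
    poll.poll_description = description;
    poll.poll_voting_start = start_time;
    poll.poll_voting_end = end_time;
    poll.poll_option_index = 0; // no candidates registered yet
}
```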

And now we're good to go. There are two Rust concepts I wanted to introduce here. One: you can see we're getting an error for poll, and that's because I created a variable and didn't make it mutable; in Rust, all variables are immutable by default. If I add the mut keyword, all of those errors go away. Okay. One other thing I wanted to talk about: there should be a little yellow squiggly line here, but sometimes the Rust Analyzer is a bit funky. If you ever pass a variable as a parameter into a function but never actually use it in the function's logic, you'll get a warning, because why are you passing the variable if you're not using it? That happens quite a lot when you're writing smart contracts: as you can see here, we need the poll ID for our PDA, but we don't actually need it in the logic, because I'm not storing it in my poll account state. So basically, you're telling Rust: hey, I know this isn't being used in the logic of the function, but I do actually need to accept it as an input parameter.
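The idiomatic way to tell Rust this is deliberate is to prefix the parameter with an underscore, which silences the unused_variables warning. A tiny sketch (the function name here is just for illustration):

```rust
// `_poll_id` is required by the instruction's account constraints (it
// feeds the PDA seeds), but the handler body never reads it. The leading
// underscore tells the compiler the unused parameter is intentional.
fn initialize_poll_stub(_poll_id: u64, name: &str) -> String {
    format!("initialized poll: {name}")
}
```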

Okay, so that's your first instruction.

Now, the second one is pretty much identical to what I just explained, because all you're doing is creating an account, initializing it, pulling in input data, and storing it to that account. We've already initialized the poll, and now we're going to initialize the candidate account. Since both of these functions are really just initializing a new account on chain and saving data to it, it's very repetitive. So what I'm going to do is copy and paste this section of code so I'm not repeating everything I just explained. We're just going to copy in our struct. Again, whenever we start a function, we always start with derive(Accounts), because we want to define every single account that the instruction's logic will interact with. I pre-wrote this section so you don't have to watch me type it all out, and we'll just paste it in. So this is our initialize candidate section, and it

basically looks identical to initialize poll, except for one difference: I'm passing through a fourth account. That fourth account is actually the poll account we just created. The reason I'm passing it through is that I need to access and update data in the poll account. Whenever we add a new candidate, we have that field in the poll account called poll_option_index, and what that index keeps track of is how many candidates are actually in the poll. Every time we initialize a new candidate, it's going to increment that value. Because of that, we have to make the account mutable in order to update the poll's account state. And as you can see here, I'm once again passing through the seeds and the bump. The reason is that the poll account is a PDA account, and to access the correct PDA, you always have to pass through its seeds and bump.

Okay, so now that we have that, um, we can just go over to the actual logic for writing the initialize candidate function. Now we have both of our

function. Now we have both of our accounts initialized on chain and we have an instruction for each. So the

very last thing that we have to do here is have an instruction that's actually doing the vote. So let's start with that. But before we actually write the

that. But before we actually write the logic there, there's one thing that I want to point out back to a few fun little Rust things. So you can see earlier we wrote our poll here and we

made it mutable. But now we're getting an error and it's saying it cannot move because it's behind a mutable reference.

And the reason here is because now we have this mutable reference for the poll account.

So the way to fix this, I'm going to remove this and make a reference to it

and do make it mutable.
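To see the same error outside of Anchor, here's a minimal plain-Rust sketch; the `Poll` struct and `candidate_count` field are illustrative names, not the program's actual ones:

```rust
struct Poll {
    candidate_count: u64,
}

// `ctx_poll` is a mutable reference, like `&mut ctx.accounts.poll` in Anchor.
fn increment(ctx_poll: &mut Poll) {
    // This line would NOT compile: "cannot move out of `*ctx_poll`
    // which is behind a mutable reference":
    // let poll = *ctx_poll;

    // The fix: take a (mutable) reference instead of moving the value.
    let poll = &mut *ctx_poll;
    poll.candidate_count += 1;
}

fn main() {
    let mut p = Poll { candidate_count: 0 };
    increment(&mut p);
    assert_eq!(p.candidate_count, 1);
    println!("candidate_count = {}", p.candidate_count);
}
```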

And now our error goes away. Okay, so if that ever comes up, that's how you'll fix it. Those are two different ways to write variables in Rust. Okay, now the very last function is just our vote. So we have pub fn vote, and we're going to pass through our context. I'm just going to copy this over so I'm not writing everything out completely, and we'll update it.

Okay. Now we have the outline of our function. The first thing is creating that derive accounts struct, so I'm going to name this Vote, and I'll go down here and make the actual struct. I can copy this; I do a lot of copy and pasting with Anchor because, again, it does get repetitive, and then we'll just edit it as we need to. So here is now going to be our Vote struct.

We're always going to have the signer as our first account. We still need the poll account to access all of the information for the poll that we're voting on, and then we also need the candidate account. We're going to need to update this: I copied it over from the candidate account's derive accounts struct, and that was when we were initializing the account. But now it's already initialized, so we can get rid of that; those three constraints are only needed for initializing. Now we're just going to make it mutable, because we will be updating the account state by adding a vote to that candidate. Okay, and that is good to go. And we can actually get rid of the system program for this one, because we're not initializing any new accounts on chain.

So there you go: these are the three accounts that we're going to interact with for voting. Now we can just write the logic and we're good to go. Everything that we need to pass through for this vote instruction comes down to a few fields: one is the poll ID and the other is the candidate. We're actually not using these in the logic, but the reason we need to pass them through is that we're using them for our PDAs down here. If we look back at this Vote struct, we have a poll ID that we're using for our poll account PDA, and we have a candidate that we're using for our candidate PDA. The way this actually works: if I'm voting for candidate A, and there's A, B, and C as my candidates, every PDA is going to be generated based on the candidate's name. So there will be a PDA generated for each of A, B, and C. When I pass through which candidate I want to vote for, I'm passing through A as my candidate parameter, and it's going to pull that value from the input of the instruction and use it here when it's calculating the PDA. So it's generating the account that corresponds with candidate A. That's why we're not actually using candidate in our logic for the instruction; it's all mapping back to the PDA. It's always best to optimize how you're using PDAs, and this is one example of that. Okay, so now that we have that taken care of, let's add those two sections: the poll ID and the candidate.

Now, the logic. Okay, so what am I doing? Well, I'm going to be saving the vote to the candidate account, so let's load in the candidate account: let candidate equal &mut, and we're going to load in our context and pull in that candidate account, ctx.accounts.candidate. Now we have all of the information related to the account, and we can update it in our logic.

Now, first check: we want to make sure that someone is voting while the poll is actually active. So we need to check the current time and make sure that it falls in between the start time and the end time for the poll. Let's load in what the current time is: let current_time equal, and we're going to be able to use Clock. Clock is from a Solana crate that you can import, and it represents the network time from the blockchain. So with Clock we're going to call get, and we'll unwrap that. One field within Clock is the Unix timestamp, and that's what we're going to use to keep track of what time it currently is when someone processes a vote. Note that the Unix timestamp is an i64, while we saved our start time and end time as u64s, so we'll have to cast when we compare them. Okay, so now we have that.
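Off chain, you can get the same kind of value from the system clock. A minimal sketch (on Solana itself you would read `Clock::get()?.unix_timestamp` instead):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // Seconds since the Unix epoch, as an i64. Unix timestamps are signed
    // because they can represent dates before 1970, which is why Solana's
    // Clock::unix_timestamp is an i64 rather than a u64.
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before 1970")
        .as_secs() as i64;
    println!("current unix time: {now}");
    assert!(now > 1_600_000_000); // sanity check: later than September 2020
}
```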

Now we can do some very basic if statements. If the current time is greater than the voting end: we'll use the context and grab the voting end time from the poll account, which is why we needed to pass that poll account through here. So, poll_account.poll_voting_end. And I'm going to have to cast this as an i64, because Unix timestamps can actually be negative, which is why the clock value is an i64 and not a u64. We'll put this in parentheses because I'm casting it.

Okay. So if that happens, we're going to return an error. And I can write errors in Anchor by using the error code macro. I'm just pasting this in so I don't have to type it all out, but with the error code macro you're able to write all of the errors that you want for your instruction, plus a message to emit whenever that error occurs. So let's go back and specify our error: it's just the error code, and we're going to use VotingEnded.

Now we need another if statement for essentially the opposite, so let me copy this and paste it in. If the current time is less than or equal to the poll voting start, once again casting as an i64, then we're going to return a different error, which is VotingNotStarted.

So far in our vote instruction, we have checked the current time and made sure that it falls within the voting start and end time, so it's actually an active poll, and we've loaded in the candidate information. The very last thing we have to do is save the vote to the corresponding candidate account. Since we already know the right candidate account is being loaded in, because it's taking this input parameter and generating the PDA based off of it, all we have to do is take candidate and increment its votes field by one.
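Put together, the instruction's core logic looks like this minimal plain-Rust sketch (no Anchor; the error variants mirror the ones above, while the `Candidate` struct and field names are illustrative):

```rust
#[derive(Debug, PartialEq)]
enum VoteError {
    VotingNotStarted,
    VotingEnded,
}

struct Candidate {
    votes: u64,
}

fn vote(
    current_time: i64,
    voting_start: u64,
    voting_end: u64,
    candidate: &mut Candidate,
) -> Result<(), VoteError> {
    // The poll stores its window as u64, the network clock is i64,
    // so we cast before comparing (like `poll.poll_voting_end as i64`).
    if current_time > voting_end as i64 {
        return Err(VoteError::VotingEnded);
    }
    if current_time <= voting_start as i64 {
        return Err(VoteError::VotingNotStarted);
    }
    candidate.votes += 1; // save the vote to the candidate account
    Ok(())
}

fn main() {
    let mut c = Candidate { votes: 0 };
    assert_eq!(vote(150, 100, 200, &mut c), Ok(()));
    assert_eq!(c.votes, 1);
    assert_eq!(vote(50, 100, 200, &mut c), Err(VoteError::VotingNotStarted));
    assert_eq!(vote(250, 100, 200, &mut c), Err(VoteError::VotingEnded));
    println!("all checks passed");
}
```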

Okay, and we're good to go. Now, back to the Rust warning for having variables that are unused in your logic: we'll just put our little underscore here, because we are using it in the accounts struct. Okay. And that is everything that you need for your program, and we can just build it to make sure everything runs.

But first, a quick overview. We have our program macro here, and all of our instructions live within that program macro. So we have initialize poll; our poll account is the account that's holding all of the data for the new poll that we're generating, and it's also keeping an index of how many candidates correspond to that poll. We have initialize candidate, which is the instruction that will be run every single time you want a new candidate added to the poll. And then we have vote: when anyone wants to come in and make a vote, they're going to specify the candidate they're voting for, and it's going to update that candidate's account state with another vote. And that's everything. So, whenever you're writing smart contracts, start with your accounts: what state do I want to store? Then you write out your instruction, write all of the corresponding accounts that the instruction needs to interact with, and then you can finish out your instruction logic.

So, the last thing we're going to do here is just build our program. I go to my terminal, make sure I'm in the right directory, and just type in anchor build. Once again, we're using that Anchor CLI that we installed earlier. It's going to compile my program and make sure everything works. And the great thing about Rust code is that if your code compiles, you're pretty much good to go; it has a really great compiler. It just takes a long time to compile, so we'll let this load, probably speed it up for the video, and make sure everything looks good.

Okay, so our code compiled and everything's good to go. Now, one more thing that I want to show you is deploying, because I did mention earlier that this public key maps back to your program's address, and I just want to showcase that. In our install script, we already created our Solana wallet address and funded it with devnet SOL, so I should just be able to deploy. If you run the command anchor deploy with the flag provider.cluster set to devnet, it's going to deploy my compiled program onto devnet with that local keypair that we generated earlier. And I spelled it wrong, so one second: provider.

So you can see it's deploying to the Solana devnet cluster. The upgrade authority is my keypair that's saved on my local machine. It's deploying the program, and here is the path. This path is taking you to the .so file that was generated when you ran anchor build. That is over here: if we go to target and deploy, you have this voting.so. This file is what is being deployed onto the Solana blockchain, in our case the devnet chain.

Great. So, it's saying I need to install a package; seems like I'm a bit out of date in my CLI. We'll just let this install and then let it finish deploying. Then we're going to go to the Solana Explorer and check out our program.

Okay, great. So, we're good to go. Now, I'm going to open up a browser and type in explorer.solana.com. That's our block explorer, currently connected to the mainnet cluster. I'm going to switch over to devnet, and let's go back over to our VS Code, to that declare_id! macro that I talked about in the very beginning. Copying this public key, I'll go back to Firefox and paste it in. There we go: here's my program that was deployed on chain. You can see my deployment happened just a few minutes ago, and this is how you're able to access all of the information that's actually running on the network for what you're building.

Now we've finished writing our first Solana smart contract, and now we're going to talk about how to actually test it. The contract is written, but when you're testing a program, you want to be able to actually test against the Solana virtual machine. So how can we do that? Well, there is a framework called LiteSVM. LiteSVM is a lightweight library for testing Solana programs. It works by creating an in-process Solana virtual machine that's optimized specifically for program developers. So we're going to be writing Rust tests that correspond with the Rust smart contract that we just wrote.

Here is all of the LiteSVM documentation. There are several different helper crates available, and if we click into the documentation, it talks about how to get started, why you would want to use LiteSVM for your testing, how to test your program, and some examples. The examples are very common copy-and-paste tests for certain ways that you would want to set up your environment. Then we have our API reference, and then additional documentation for each helper crate that's available to you. A great reference as you're writing custom LiteSVM tests for the programs that you're writing.

But we're going to go to the voting app that we just wrote. Now, I don't want to take the time to actually manually write out all of this code on the video, so I already wrote it beforehand, and we're just going to go over all of the important concepts that you need to know to be able to write LiteSVM tests.

This is for an Anchor program that we wrote, so we're going to be using some additional helper crates for LiteSVM. We import Anchor, and then we also import the Anchor LiteSVM helper crate. By doing that, it's going to help generate clients from the IDL that you generated when you ran anchor build. So whenever you're writing your LiteSVM tests, you want to make sure that you ran anchor build beforehand and you have your target directory generated, and within your target directory you do have an IDL here, because it's going to be accessing this IDL to generate clients for you. And this keeps all of your tests type safe in Rust.

Now, going back to this file, here are all the important concepts that you need to understand to be able to write your own LiteSVM tests. First is declare program. You're going to be using this and calling in what your IDL is for the application, and this is going to be able to generate the types for you. So here are the paths to be able to grab each type that you have. You can see when I hover over this, I have the type that we wrote for our candidate account data structure, and then the type that we wrote for our poll account. So now we can easily access all of the data for each of these types. And then this is pulling in the .so file that was generated when you ran the anchor build command.

Now, when you're writing LiteSVM tests, you do have a little bit of setup before you can actually test your program, and the reason is that you're generating an in-process VM. So you're going to have to create all of the token accounts, deploy the program, and get everything set up to be able to actually use it. When you use the Anchor context from your Anchor LiteSVM test, you'll be able to set up your VM. And because we're using the Solana Clock, which you can see here, we're using this for testing because we do have to get accurate timestamps to know if the poll is at a valid time or not. All of that is just setting up your environment.

Now, we have a few helper functions, and these are just for generating PDAs. We know that when we're writing a PDA, we have to define our seeds and then we define our bump, and the PDA is generated by running your input seeds and the bump through a SHA-256 hash function, which returns the public key for your PDA. So how do we now get that? We're going to need to get the PDA account for each of these addresses to be able to accurately test it. You can do that with the helper function find_program_address; all you have to do here is just write out all of the seeds that are associated with the PDA. So we have these two helper functions to be able to get the PDA for our poll account and the PDA for our candidate account.

Now that all that setup is done... typically it's a bit more setup, because you'll have to initialize tokens on chain, create mint accounts, create token accounts, things like that. But here we're just handling data; we didn't introduce tokens yet. That'll come with our escrow program, which is the next project. So here we're just worrying about data, and that's all of our setup.

Now we can write our first test. To write the test, we're just using our setup function. Now, we do need to fund an account, and the reason is that we need an account that's going to be the signer paying for and processing the transactions. So this is creating an account for us, we're funding it with lamports, and then we're going to unwrap it. Now, we have all of these values defined because these are all of our input parameters for the instructions that we're testing.

A little bit more setup there. Now, all you have to do is call the instruction from your Anchor context, because the Anchor LiteSVM crate is now generating this Anchor context for you. So when I call this, I can now call the program, and I can call the accounts that are associated with the instruction that I want to execute, and then all of the arguments that have to be passed through for the instruction. Here you can see that for initializing the poll, we did specify three accounts in our derive accounts struct: the signer, the poll account, and the system program. All of that we do have to specify in the accounts section when we're calling this instruction for testing. Then the next section is your arguments, and for initialize poll the arguments that we accept are the poll ID, the start time, the end time, the name, and the description. So this is just pulling in those parameters for the function. After that, all you do is specify that it's an instruction and then unwrap it.

Then you're able to get the result by running the execute instruction function from LiteSVM. So you can see this execute instruction here. What that's going to do is execute the instruction through the in-process Solana VM that's running with LiteSVM, and it's basically like you're testing it on chain. So here we're just making sure that the result actually was successful, and then we are asserting that the result is the answer that we want: we're pulling in the poll account data after the transaction took place, and then we're asserting that each field in the poll account has been correctly updated to the input parameters that we've already defined in our arguments section here. So that's pretty much it.

Now, what this function does is, if I run it, it's going to test the initialize poll instruction. And you can see down here: one test passed. So we're good to go. Now, you can take your own time, when you clone this code, to look through the other instructions. But when you want 100% test coverage on your code, you just want to go through all of the instructions and the potential edge cases that you can think of. So here we have tests for initialize candidate, a successful vote, multiple votes, and multiple candidates as well.

And here we're testing some of our error cases. We had two errors: one was voting before the vote actually started, and one was voting after the voting window ended. So this is testing to make sure it returns the correct error. You can see here that, for the result, we want to assert that it returned an Anchor error of VotingNotStarted, and this Anchor error is from that error code macro that you generated when you were writing the program. So here's the other error case, and then testing multiple polls as well. And that covers pretty much 100% test coverage for our voting program.

So, if I just want to go through and run all of the tests, I can open my terminal and run cargo test, and that's going to pull all the LiteSVM tests and run them. You can see here it ran eight tests, all eight that we just went over, and all of them passed. So we're good to go, and that is how you're going to be testing all of the programs that you write. LiteSVM is an efficient way to test your programs. So here's an example with the voting program that's just handling data, and as you progress through the rest of the projects in the boot camp, you'll see the progression of using LiteSVM for more advanced use cases. And that is testing your voting program.

In this section we're building one of the most fundamental patterns in blockchain development: an escrow program. An escrow is a neutral third party that holds assets until certain conditions are met. In traditional finance, this might be a bank or a lawyer, but on the blockchain, we replace that trusted middleman with code: a smart contract that enforces the rules automatically. Escrows are everywhere in Web3. DEXes use escrow logic for token swaps. NFT marketplaces use it to hold assets until payments clear. Freelance platforms use it to protect both clients and contractors. Once you understand the escrow pattern, you'll recognize it in almost every DeFi protocol you interact with.

So in this program, we'll have three instructions. Make allows someone to create an escrow, deposit their tokens into a vault, and specify what they want in return. Take lets another party accept those terms: they send the requested tokens and receive what's in the vault, and the whole swap happens automatically. And refund lets the original creator cancel and reclaim their tokens if they change their mind. Beyond the escrow logic, we'll learn how to test Anchor programs using LiteSVM, which is a fast, lightweight alternative to running a local validator. By the end, you'll have a reusable pattern that you can apply to countless blockchain applications. Now, let's get started.

So, I'm going to link a repo of the code, and we're going to go over just the concepts and code snippets, rather than code from scratch line by line, to stay more engaging and avoid a lot of repetitive code. Now, we already talked about how most things in DeFi are an escrow, but let's chat about why. Let's say I'm making a $1,000 bet on the Super Bowl with my friend, and I'm betting on the Eagles and he's betting on the Chiefs. But how do we know the winner is actually going to get paid? Well, in traditional finance, either I'd have to trust my friend to actually pay me when the Eagles win, but this is an emotional time and I don't know if he will; or we could hire a third party to hold our money and give it to the correct person once the bet is over. But again, this involves a lot of trust.

So, blockchain fixes this. It creates a fully trustless third party, because this third party is just code, and that's what we're going to write today: an escrow. It holds funds and releases them to the correct person when the specified conditions are met. If we made a bet on chain instead, the code would query the outcome of the Super Bowl at the specified time and immediately release the funds to the correct person. And it's as simple as that.

So, how do we get started? Well, whenever I'm writing smart contracts, I always like to start with the state. And as we now know, Solana programs are stateless, so when we want custom data stored, we're going to need to create an account in the smart contract. So let's start with where we're going to define that account. First, let's open up the code. If I go to GitHub, I'm going to link this repo. We're going to copy this GitHub CLI command and open up our terminal. I'll just zoom in a bit so you can see, paste in the clone command, and we're going to first cd into the directory of the project that we just cloned. Then we're going to take the code that was already written for us and just go through it.

So, first we said we're starting with the state. Now, let's see where that actually is. If I go into programs, anchor escrow, source, there's a file called state.rs, and this is where you're going to define every account that you create the custom state for. Accounts are where all of your data is stored. Here, we're just going to have one account, and that's going to be our escrow account. You can see here we have this struct that is our escrow, and it's going to have all the fields of the data that I want to store for my escrow. All I'm storing is the seed and the maker; the maker is going to be the person who makes a new escrow contract, a new bet, a new whatever logic they want to have. Then we have these two mint public keys, and that is going to allow for two different tokens, so I have a mint account for one token and a mint account for another token. Then the receive field relates to the taker, whoever is going to receive the funds that the maker is locking into this vault of the escrow. And then we store a bump. As a best practice when you're writing Anchor programs, you always want to store the bump, and that is just going to make things more efficient; we'll go through exactly why once we get a little deeper into the code. But there you have it: you have your custom data structure with all of the information that you want stored.
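As a rough plain-Rust sketch of that struct (the field names and types here are assumptions based on the description above, not the repo's exact code; on Solana a Pubkey is 32 bytes):

```rust
// Illustrative mirror of the escrow state described above.
#[allow(dead_code)]
struct Escrow {
    seed: u64,        // 8 bytes
    maker: [u8; 32],  // Pubkey of the escrow's creator, 32 bytes
    mint_a: [u8; 32], // Pubkey of the deposited token's mint, 32 bytes
    mint_b: [u8; 32], // Pubkey of the requested token's mint, 32 bytes
    receive: u64,     // amount the taker side must send, 8 bytes
    bump: u8,         // stored PDA bump, 1 byte
}

// What a derive like Anchor's InitSpace would add up for these fields
// (account data only; the discriminator is added on top separately):
const INIT_SPACE: usize = 8 + 32 + 32 + 32 + 8 + 1;

fn main() {
    assert_eq!(INIT_SPACE, 113);
    println!("escrow data size: {INIT_SPACE} bytes");
}
```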

Now, you can see that there are these two macros, and we talked before about how Anchor simplifies a lot of smart contract design; it does that with macros like these. Here you can see this derive InitSpace, and what it does is automatically calculate the space needed to store the struct on chain. Anchor adds up the byte sizes of all of the fields that you've stated here in your struct, and then it calculates how much space it's going to take, so you don't have to do that manually. Now, there's also this other macro, and that is defining what the discriminator is. This is setting a custom one-byte discriminator; normally Anchor uses the first eight bytes of a SHA-256 hash as the discriminator, and this overrides that with just a simple single byte. The discriminator is written at the start of the account data to identify a specific account type. So everything for your state is good to go. But all this is doing is defining what is in your state.
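Here's a minimal plain-Rust sketch of what a discriminator does mechanically (the one-byte value and helper names are illustrative, not Anchor's actual implementation):

```rust
// Illustrative one-byte discriminator tagging the escrow account type.
const ESCROW_DISCRIMINATOR: u8 = 1;

// Serialize: the discriminator goes first, then the struct's bytes.
fn write_account(payload: &[u8]) -> Vec<u8> {
    let mut data = vec![ESCROW_DISCRIMINATOR];
    data.extend_from_slice(payload);
    data
}

// Deserialize: refuse data whose first byte tags a different account type.
fn is_escrow(data: &[u8]) -> bool {
    data.first() == Some(&ESCROW_DISCRIMINATOR)
}

fn main() {
    let data = write_account(&[0xAA, 0xBB]);
    assert!(is_escrow(&data));
    assert!(!is_escrow(&[7, 0xAA, 0xBB])); // wrong discriminator: rejected
    println!("discriminator check ok");
}
```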

So how do we actually create the account that's going to hold this state? Well,

let's go to these instructions. You have

a folder of three different instructions. Our make, refund, and

instructions. Our make, refund, and take. And we know that the

take. And we know that the escrow starts at the make instruction.

You're having a maker create a new escrow. If you go to the lib RS, you're

escrow. If you go to the lib RS, you're going to see this program macro. The

program macro is part of Ankor defining exactly all of the instructions that are going to exist within the smart contract that you're writing. So you can see we have

you're writing. So you can see we have those three instructions that were defined in this instruction folder and we're going to start with make. So make

is where you're actually initializing a new escrow account. And we're just going to click into this make context. So

if you're familiar with ankor this is a very familiar format that you'll see. Um

it always starts with this context as your first parameter. And the reason why is this is where all of the information for the accounts that you need to handle

for the instruction is going to be given. So if I click into this, it's

given. So if I click into this, it's going to take me over to the make RS file. Now here you can see

file. Now here you can see this derive accounts macro and a data structure inside of it. So this derive

accounts macro, what it does is it tells all of the accounts that are needed to be able to process the instruction that you're writing and it's going to tell you

everything that needs to be done to the account. So what does that mean? Well,

account. So what does that mean? Well,

for example, let's start with the escrow account. So we already defined what the

account. So we already defined what the state looks like, but here you can see there's a lot of constraints that were written within this account macro. So

what does that mean? Well, first we have this init constraint and what that's doing is saying this account doesn't exist yet and I need to create it. So

instead of writing all of the logic for how to create an account on Salana in inker, all you have to do is specify the init constraint when you are writing out

your derive accounts macro. So here you have the escrow. If I click into this escrow type, it's going to go back to

the data that we've already defined.

Now, let's go back over, and you can see we already talked about init and the payer. The reason we need a payer here is that we're initializing a new account, and accounts need to pay rent on chain. This is how data is actually paid for: the amount of space that is needed on chain gets calculated, and then someone has to pay for that space. So here we're defining that the maker is paying for the space on chain. Once the account is closed, that rent is able to go back to the maker, so it's not a sunk cost — it's more like holding their spot while they're using space on chain. Here we're just calculating the space, because we need to know how much to pay.
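As a rough sketch of how that space calculation works: Anchor-style accounts reserve an 8-byte discriminator up front, and then you add up the serialized size of every field in the state. The field layout below is a hypothetical escrow state — names like `seed`, `mint_a`, and `receive` are illustrative, not necessarily the exact struct from the course.

```rust
// Sketch: how Anchor-style account space is budgeted.
// The field layout is a hypothetical escrow state, not the course's exact struct.
const DISCRIMINATOR: usize = 8; // Anchor prepends an 8-byte account discriminator

fn escrow_space() -> usize {
    DISCRIMINATOR
        + 8  // seed: u64
        + 32 // maker: a 32-byte public key
        + 32 // mint_a: a 32-byte public key
        + 32 // mint_b: a 32-byte public key
        + 8  // receive: u64
        + 1  // bump: u8
}

fn main() {
    println!("escrow account needs {} bytes", escrow_space());
}
```

Whatever the real fields are, the pattern is the same: discriminator plus the byte size of each field, and that total is what the payer funds rent for.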

But this is where the actual escrow logic comes in. You can see this specific constraint defined as seeds, and there's a lot that happens after it. What this actually does is define a PDA. A PDA is a program derived address: a public key that falls off the Ed25519 curve. It's generated by taking this input of seeds plus a bump, running them through the SHA-256 hash, and producing a public key. If that public key does land on the Ed25519 curve, that means it has a corresponding private key and it's not valid as a PDA, so the bump gets regenerated. The bump always starts at 255 and decrements by one every iteration until the result actually lands off the Ed25519 curve. When that happens, you have a public key that does not have a corresponding private key. The reason that's important is that you're only going to be able to sign the transaction with the signer seeds, through a program. No one can find your private key and sign for you, so all trust lies within this code, and the program itself has to sign with the signer seeds.
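That bump-search loop can be sketched in plain Rust. This is a toy model only: the real derivation hashes the seeds, bump, and program ID with SHA-256 and does a real Ed25519 point check, while here `mock_hash` and `mock_on_curve` are stand-ins just to show the control flow.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for SHA-256; real PDA derivation hashes seeds + bump + program ID.
fn mock_hash(seeds: &[&[u8]], bump: u8) -> u64 {
    let mut h = DefaultHasher::new();
    for s in seeds {
        s.hash(&mut h);
    }
    bump.hash(&mut h);
    h.finish()
}

// Toy stand-in for "does this point lie on the Ed25519 curve?"
fn mock_on_curve(candidate: u64) -> bool {
    candidate % 2 == 0 // pretend roughly half of all outputs land on the curve
}

// Walk the bump down from 255 until the candidate falls OFF the curve.
fn find_bump(seeds: &[&[u8]]) -> Option<(u8, u64)> {
    for bump in (0..=255u8).rev() {
        let candidate = mock_hash(seeds, bump);
        if !mock_on_curve(candidate) {
            return Some((bump, candidate)); // valid PDA: no private key exists
        }
    }
    None
}

fn main() {
    let seeds: &[&[u8]] = &[b"escrow", b"maker-pubkey"];
    let (bump, addr) = find_bump(seeds).expect("no bump found");
    println!("bump {} gives off-curve address {:x}", bump, addr);
}
```

The shape is what matters: same seeds always produce the same bump and address, and the first off-curve candidate wins.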

So what are those seeds? Well, they're exactly what you define here, and you can have any seeds you want. All that I'm doing specifically for this program is writing out "escrow", taking the public key of the maker — who is the signer — and storing that as part of the input seeds for the PDA, and then also this seed parameter. This is just an example to show that if you want part of your seeds to be something that doesn't exist within the struct, you're able to take it as an input parameter for the instruction. You can see here there's this instruction macro, and what it's saying is: hey, I'm using this seed variable within my accounts, but I don't have access to it, so I'm going to get it as an input parameter. If I go back to lib.rs, where the function is defined, you can see that seed is taken in as a parameter for the function. Going back here, just to show exactly what I mean: if I remove this macro, so I'm no longer saying this comes as an input parameter of the function, you can see seed is no longer recognized, and if I save this, I should be getting an error. Yeah, you can see seed doesn't exist — cannot find the value. If I put the instruction macro back and give the Rust Analyzer a second to rerun, seed can be found again. So this is just an example: if you want part of the seeds for your PDA to be a value that's not defined somewhere within the struct, you can just pass it as a parameter in the instruction.
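Assembling those seeds can be sketched as a plain function that mixes a static prefix, a stored key, and a runtime argument. The little-endian encoding of the `u64` is an assumption here, mirroring common Anchor practice, not something confirmed by the video.

```rust
// Sketch: assembling PDA input seeds where `seed` arrives as an instruction
// argument rather than living in any account struct.
// (Little-endian encoding of the u64 is an assumption.)
fn escrow_seeds(maker_key: &[u8; 32], seed: u64) -> Vec<Vec<u8>> {
    vec![
        b"escrow".to_vec(),          // static prefix seed
        maker_key.to_vec(),          // maker's public key bytes
        seed.to_le_bytes().to_vec(), // the runtime argument
    ]
}

fn main() {
    let maker = [7u8; 32];
    let seeds = escrow_seeds(&maker, 42);
    println!("{} seeds, last one is {} bytes", seeds.len(), seeds[2].len());
}
```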

Okay, so that's kind of a high-level overview of having a PDA as your account, and we'll go through exactly how to sign later on when we're going through the code. But let's just go over the rest of the accounts that are passed through. When I'm creating this make function, I not only have the escrow — I'm also going to have to define all the other accounts that need to be interacted with in order to actually execute this instruction. So you can see I have this mint A and mint B. This allows two different tokens to be interacted with when you're actually executing the make function, and you can define what those mint accounts are. If I wanted to use USDC, I would define that mint account here when I'm creating the instruction, and the same goes for any other account you want. Now, you have this maker ATA. What is that? It's an associated token account. The mint account, which we've gone over a few times within other projects in this boot camp, holds the global information of the token, and the token account is what actually holds tokens. So if I want the maker to be able to send tokens to the vault to be stored for this escrow program, the instruction is going to need a token account associated with it to be able to send those tokens. This is just whatever the maker's token account is for the token they're sending into the vault.
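The mint/token-account relationship can be sketched like this. These are conceptual structs, not the real SPL types, and the toy `ata_key` derivation just illustrates the one real property of ATAs: the address is derived deterministically from the owner plus the mint.

```rust
// Conceptual sketch, NOT the real SPL types: a mint carries the token's
// global info; a token account holds a balance of one mint for one owner.
#[derive(Debug)]
struct Mint {
    decimals: u8,
    supply: u64,
}

#[derive(Debug)]
struct TokenAccount {
    mint: &'static str,  // which token this balance is denominated in
    owner: &'static str, // who can move the balance
    amount: u64,
}

// An associated token account's address is derived from (owner, mint), so
// every owner has one canonical account per mint. Toy string derivation here.
fn ata_key(owner: &str, mint: &str) -> String {
    format!("ata:{owner}:{mint}")
}

fn main() {
    let usdc = Mint { decimals: 6, supply: 1_000_000_000 };
    let maker_ata = TokenAccount { mint: "USDC", owner: "maker", amount: 500 };
    println!("{usdc:?} {maker_ata:?} {}", ata_key("maker", "USDC"));
}
```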

And now the very last section here is going to be the vault itself. We did define the escrow, but all that account does is store data; it doesn't actually store tokens. So I want an account associated with this escrow that's actually going to hold the tokens, so they can be correctly released when the logic has been executed. So let's see. This is initializing a new token account that's associated with this escrow program. It's defining that the payer for the token account is going to be the maker, and it's saying what this token account is. Every token account is going to have an associated mint account, because that defines the global information about which token you're actually storing in that token account. And now you can see this is kind of the key line: we have associated token, and the authority of the account is going to be the escrow. As we talked about over here, the escrow is not a typical public key with a private key that can be signed for. It's a PDA that needs signer seeds for the program to be able to execute the transaction. So what actually has authority over this vault — what can release funds — is the program itself. That's what this one line, the authority being the escrow, is doing: it's what makes everything here trustless. And then the very last three lines are very typical whenever you're defining all of the accounts for an instruction. You're going to have your associated token program, which is needed whenever you're creating an ATA; your token program, because we're interacting with tokens; and the system program, which is needed because we're initializing new accounts on chain. This is just accessing other programs' information to be able to execute all the commands that we want for this instruction.

So that's how your derive accounts struct works: it lists all of the accounts that need to be accessed to execute the logic in the instruction that you want. That defines everything we need for make, and now let's go back over to the actual instruction. If I go to lib.rs, you can see that this has been organized with a handler, and if I click into this, it's going to take us back to the make page. This handler is executing two functions: populate the escrow, and deposit tokens. Because if I'm the maker, all I'm doing is creating a new escrow, defining what information needs to be in that escrow, and then depositing tokens into the vault that I just initialized. So let's look at what the logic actually is. When you're populating the escrow, all you're doing is defining all of the information that needs to be stored in the escrow account. It's just the data. If we go back to state here, you can see all of the data specified there is the same data that has been defined for our populate escrow instruction.
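The populate step really is nothing but data writes, which can be sketched like this. Field names are illustrative; the course's exact state struct may differ.

```rust
// Sketch of the "populate the escrow" step: it's nothing but data writes.
// Field names are illustrative, not necessarily the course's exact struct.
#[derive(Debug, Default, PartialEq)]
struct Escrow {
    seed: u64,
    maker: [u8; 32],
    mint_a: [u8; 32],
    mint_b: [u8; 32],
    receive: u64, // how much of mint B the maker wants back
    bump: u8,
}

fn populate_escrow(
    e: &mut Escrow,
    seed: u64,
    maker: [u8; 32],
    mint_a: [u8; 32],
    mint_b: [u8; 32],
    receive: u64,
    bump: u8,
) {
    *e = Escrow { seed, maker, mint_a, mint_b, receive, bump };
}

fn main() {
    let mut escrow = Escrow::default();
    populate_escrow(&mut escrow, 1, [1; 32], [2; 32], [3; 32], 100, 254);
    println!("{escrow:?}");
}
```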

Now, the other function being run here is deposit tokens, and this is where we're introducing a new concept: the CPI. A CPI is a cross-program invocation, and what that means is that you are calling an instruction on another program from your program. We're writing our escrow program, but when we make the CPI we're actually calling a transfer instruction from another program on chain. When you see the CPI, there are specific accounts interacted with here. This transfer checked instruction is what we're calling, and it transfers a token from one account to another. You can see our from account is the maker's ATA, we're using the mint of the token that we've specified, it's going to the vault itself, and the authority is going to be the maker in this case, because the maker is signing from their own wallet and depositing into the vault. In this case, the maker's wallet is a public key that does have an associated private key, so it does fall on that Ed25519 curve. It's essentially going to be the wallet of the user that's signing the transaction here, so we don't have to worry about PDA signer seeds — we'll get into that later. This is just a very basic CPI for transferring a token from one account to another, and in this case the account it's being transferred into is the vault account of the escrow program. So the escrow is now holding this token.

Okay, so that's everything for make. Now, the reason we're not actually writing this line by line is that it does get pretty repetitive, and you'll see that when I open the take file. Actually, I'm just going to lay these side by side so you can see how similar they look. We're creating our derive accounts struct: we're specifying a new data structure of all the accounts that need to be interacted with for this function to execute. You can see they're pretty much the same accounts. You have your maker. In this case, we're now introducing another account, the taker, who's going to be taking those funds. And then we still have the escrow, which is defined pretty similarly. Here, the only thing that changes is that it's already been initialized, so I don't have to worry about initializing it as a constraint. But I do have to add this mut constraint, and the reason is that variables in Rust are immutable by default, and here I'm going to be changing the data in the escrow account. So I have to define it as mutable, because something's going to change — I'm just saying, hey, I am changing this, and letting you know that's going to happen. Now, here we have a few additional constraints that have been added, and these are kind of security checks. They're defining that the maker is the correct public key, and that mint A and mint B for the mint accounts are also the correct public keys. These are just additional checks to make sure the correct accounts are interacting with this instruction. If an incorrect account tries to interact with the instruction, it's going to return an error and say: you don't have access, this is an invalid account. Otherwise, everything else looks pretty much the same. You're defining all of the accounts that need to be interacted with, but now we're adding the taker into the equation.
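At runtime, those key-matching constraints boil down to a simple comparison, which can be sketched like this. This is a conceptual reduction, not Anchor's actual implementation.

```rust
// Sketch of what an address/has_one-style constraint boils down to:
// compare the key the caller passed in against the key recorded in state.
type Pubkey = [u8; 32]; // stand-in for Solana's 32-byte public key

fn check_key(expected: &Pubkey, actual: &Pubkey) -> Result<(), &'static str> {
    if expected == actual {
        Ok(())
    } else {
        Err("invalid account") // the framework would reject the whole instruction
    }
}

fn main() {
    let stored_maker: Pubkey = [9; 32];
    println!("{:?}", check_key(&stored_maker, &[9; 32]));
    println!("{:?}", check_key(&stored_maker, &[0; 32]));
}
```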

And now the one thing that is different here is how this CPI is actually executed. The reason is that we're now going to execute using our PDA. For the taker to take the funds, they're going to be taking the funds from the vault, and the vault's authority has to be able to sign for that instruction. Because we now know that authority is a PDA, the transaction is going to have to be signed from the program with the signer seeds. So we can just compare what this looks like. We're still transferring a token: we're making a CPI call to the token program, calling the transfer checked function, and that's all the same. You can see here we're defining, once again, the token program and the transfer checked function, and it has all of the same parameters. However, when we're creating this CPI context, instead of just saying new, we're saying new with signer. The reason why is that we're going to have to sign with signer seeds, so there's an additional field added here — your signer seeds — and we're going to have to specify what those seeds are: they're the seeds of the PDA that you defined. You can see here we've defined what the signer seeds are, and we can make sure they're correct by going back to our make instruction, where we actually initialized the escrow and defined what the seeds for the PDA were.

And you can see here we have "escrow" as our first seed, the maker as our second, and the seed argument as the last. Then all we're doing after that is providing the bump. The reason you provide the bump is to make it easier to actually find the PDA: we save the bump of the PDA when we initialize it — you can see here we're saving it — and the bump is that 255 value that's added when you're actually generating the PDA. Just to recap: when you're creating a PDA, it takes the input seeds plus a bump starting at 255, and on every iteration the bump decrements by one. Those values are run through the SHA-256 hash, which outputs a public key. That public key is either going to fall on the Ed25519 curve or it's not. If it doesn't, it's a valid PDA address. If it does, it's invalid, and the hash has to run again — but now with your input seeds and 254, and so on. So if I save the bump and I know what it is, when I'm recalculating my PDA the program knows the answer immediately, just by pulling the bump that I saved. If I don't save it, it's going to run through that cycle continuously until it finds the correct address. So by saving the bump and being able to use that value directly, we're saving compute units and making the contract more efficient.
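The saved-bump shortcut can be shown with the same toy model as before — `mock_hash` and `mock_on_curve` are stand-ins for SHA-256 and the real Ed25519 curve check, not the actual algorithms.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-ins again: real derivation uses SHA-256 and a real curve check.
fn mock_hash(seeds: &[&[u8]], bump: u8) -> u64 {
    let mut h = DefaultHasher::new();
    for s in seeds {
        s.hash(&mut h);
    }
    bump.hash(&mut h);
    h.finish()
}

fn mock_on_curve(candidate: u64) -> bool {
    candidate % 2 == 0
}

// Fresh search: walk the bump down from 255 (costs compute every time).
fn find_pda(seeds: &[&[u8]]) -> (u8, u64) {
    for bump in (0..=255u8).rev() {
        let c = mock_hash(seeds, bump);
        if !mock_on_curve(c) {
            return (bump, c);
        }
    }
    unreachable!("no off-curve candidate found");
}

// Cached path: one hash with the stored bump, no loop at all.
fn pda_from_cached_bump(seeds: &[&[u8]], bump: u8) -> u64 {
    mock_hash(seeds, bump)
}

fn main() {
    let seeds: &[&[u8]] = &[b"escrow", b"maker"];
    let (bump, addr) = find_pda(seeds);
    assert_eq!(pda_from_cached_bump(seeds, bump), addr);
    println!("bump {bump} recovers the same address without searching");
}
```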

Okay, so that shows the difference between signing a regular CPI and signing with a PDA, and that's what makes this program trustless: the PDA is the authority over the vault. Okay, so that goes over pretty much everything that you need for writing an escrow program.

So now that we've gone over the whole program, all we have left is to build it and then test it. To build it, we're going to open up our terminal and just type in anchor build. Anchor build uses the Anchor CLI to compile all of your Rust code and then create this target folder here. So we're going to let this run and let everything compile. And here you can see it finished and compiled successfully. If you are following along, it's going to take a little more time for you, because I precompiled this for this video — so just wait until it completely finishes. Once that's done, you can open your target directory, go to idl, and copy this IDL file into this idls directory up here. The reason is so our LiteSVM test can access it. Now that it's compiled, we're going to be able to test. I'll just copy this over: we'll go to idl, anchor escrow JSON, and paste it into the idls folder. Now that that is good to go, I'll be able to run cargo test, which is going to run our LiteSVM tests, and I'll go over how those work. So we'll do cargo test; it's going to compile and then run the tests.

And you can see here — I'll just zoom in a bit so you can see it better — we have running one test, and the escrow make-and-take test has passed. So let's see what that test looks like. I'll zoom out and go into the test folder. It's showing an error just because my Rust Analyzer is a little behind; it's waiting to be able to read this IDL file, and once it fully reloads, that error will go away. But for now, let's just go over what these tests look like. This test demonstrates a complete escrow flow using LiteSVM. LiteSVM is a fast, lightweight Solana virtual machine that's used for testing.

Here you can see we're importing a couple of crates: one is our Anchor LiteSVM crate, and the other is our LiteSVM utils crate. What those do is use LiteSVM but add some additional helpers to make it a little more compatible with the Anchor program that you're writing. You can see here we have this anchor lang declare program for the anchor escrow. What this does is generate all the client modules for the program — it's kind of like a magic line that helps you keep all of your code type-safe — and it's referencing this IDL that we pasted over here. By using this, you don't need any manual serialization. Now we're going to just go through the test. All it's doing is setting up your LiteSVM environment.

You can see right here where LiteSVM is built with the program, referencing your program ID and the program bytes from your .so file. This is where it's able to actually read the program and test against it. It runs the program against the Solana virtual machine that LiteSVM provides and tests as if it's running in a real blockchain environment, which is why you're able to create test accounts, create token mints, and fund those token accounts to fully emulate what you want your environment to look like. So we have everything set up here for our environment. We're also creating token accounts to interact with our escrow program. And then, once all of that is created, you're able to actually call the instructions from your program and execute them. Here we're calling the make instruction and passing through all of the accounts that are needed for it. If you remember, when we were writing the program, we had that derive accounts struct where you specify all of the accounts the instruction is going to interact with — and here you specify them once again when you're testing the program. So we're calling the make instruction, defining every account that it interacts with (which you can verify against the derive accounts struct that you wrote in the program), and then specifying all of the arguments that need to be passed through. Once you unwrap that function, you're able to execute the instruction on your LiteSVM virtual machine, and then we're just asserting to make sure that everything resolves how we want once the instruction is executed. Then we're able to test the take instruction as well. This follows the entire escrow flow: we make an escrow, have the logic be defined, and then, once it is met, have the take instruction called to finalize the interaction between the maker and the taker. So there's your escrow program. You have your tests — you can see them pass down here — and you're good to go. That was your escrow Anchor project for the boot camp; stay tuned for what's next.

In this next project, we're building private transfers on Solana. Users deposit SOL into a shared pool and then withdraw it later, but there's no way to link which deposit belongs to which withdrawal. We're not hiding who deposits and who withdraws, because those things are going to be visible on chain — we're hiding the connection between them. Without privacy, you've got something similar to the escrow that you built earlier: Alice deposits, Bob withdraws, and everyone knows that Alice sent funds to Bob. In the project we're going to build now, Alice deposits, Bob deposits, Carol deposits, and someone withdraws to some address. No one knows whether that address belongs to Alice, Bob, Carol, or someone else entirely. Now, before we go any further: this is an educational project. It is not recommended to deploy this onto mainnet. Privacy tools have different legal implications depending on where you are, so check the compliance requirements in your country before building something like this for real users. If you know, you know. Anyway, the way this works is through zero knowledge proofs. A zero knowledge, or ZK, proof lets you prove something is true without revealing the underlying data — like proving you're over 18 without showing your actual birthday. In our case, you prove that you made a valid deposit into the pool without revealing which one is yours. In this project, when you deposit, you'll generate random secrets on your device, hash them together, and send that hash plus your SOL to a pool on Solana. You get back a deposit note, which you need to save or give to whoever needs to withdraw. Then, when someone wants to withdraw from the pool, they use their deposit note to generate a ZK proof locally on their device and send it to Solana. Solana verifies that it's legit and releases the funds. The secrets never, ever leave your device. In this project, we're going to start with a public pool, and then we'll turn it into a private pool in six steps. We'll go through it step by step so you can get a deep understanding of how it all works together. All the code used here is linked in the description, so feel free to clone the repo and follow along — by the end, you'll understand how zero knowledge works on Solana. Let's go.

So in this first step, we're just going to see what we have at the moment. Currently, we just have a public pool, and we're going to turn that into a private pool. If you head into the repo here, you can see everything in it. We've got different branches that we're going to work through. We've got our main branch, which has the code that we want to end up with — our private transfers — and you can just dive into that now if you want. We also have our starter branch, which is what we're going to be starting with. And we have these step branches here, and we're going to go through all of them to see the different steps involved in turning a public transfer into a private transfer. So let's go ahead and clone this repo: git clone, call it private transfers, and then we'll cd into that and open it up in our code editor. I am using Cursor here, because I really like Cursor Tab — I think it's amazing — but you can use whatever you like, obviously. I'm just going to zoom in so you can see this a bit better.

And I actually need to check out the starter branch. All right, so now we have our starter branch. We've got our anchor folder — this is going to store all of our Solana programs. You've probably heard of Anchor from our previous boot camp tutorials. We've got backend — this is what's doing all of the ZK proof generation. You can also do this on the front end, client side, but I just wanted to separate things a little bit. We've got our circuits — these are what actually run to do the ZK work. They're written in a language called Noir, and we're going to get to that in a further step. We've got our front end — this is what's displaying everything and also doing all of the transaction creation for sending to the Solana program. And we've got our instructions, which you can ignore for this demo. So if we have a look at our anchor directory, we've got programs in here: private transfers, source, and this is the main file that we're going to be looking at today. It's got some steps in it so you can see what we're going to be doing. So this is just our public pool file.

If we have a look at it here, we can see we've got three different functions: initialize, deposit, and withdraw, all the way down here. These are the functions that we're going to be editing today. Our initialize function just sets up our pool. If we have a look at the initialize accounts, we can see what goes into it. We've got our pool account, which is just our program's account. Then we've got our pool vault — this is going to be our PDA that actually holds all of the SOL. We've got our seeds here: we're just setting them as "vault", and then also giving it a seed of the pool that it belongs to, and we're giving it a bump as well. Bumps are kind of cool. They're needed for PDAs because PDAs do not have private keys, and if we didn't give it a bump, the math that generates these addresses might accidentally produce one that has a private key at some point. So we give it a bump to push the result off the elliptic curve, so it doesn't have a private key. We're setting that we have our pool vault, and then we're also setting the authority, which is the person who's actually in control of the pool, and we need our system program, because the system program is going to be handling our SOL transfers.

Let's head back up to our initialize function. In our initialize function, we are setting the pool authority to whoever we passed in in the accounts, and we're setting our total deposits to zero. Total deposits are pretty useful when it comes to private transfers in a pool, because we want to know the anonymity set of the pool: the more transfers you have, the bigger the anonymity set, and the more privacy you have. So it's important to keep track of it. Then we're logging a message — pool initialized — and saying that everything is good. In our deposit function, right at the moment, everything is just public. We're taking in an amount, and we're requiring that it's above a minimum amount. This is another useful thing for ZK, because when you're generating a proof and trying to verify it on chain, there's a small computational cost and a little bit of transaction cost involved. So we're setting a minimum deposit amount so that whatever you're depositing is not less than the cost, because that would be kind of pointless. And then we have our CPI here. What we're doing at the moment is CPI-ing into the system program to transfer SOL. You might have CPI'd into a token program before, in your escrow, but token programs can only transfer tokens, and we want to transfer SOL, so we're CPI-ing into the system program. Then we're saying that we want to transfer from the depositor into the pool vault, doing that transfer, and emitting an event with everything public in it at the moment.

Our withdraw function is pretty similar to our deposit function, except the transfer comes from the pool vault to the person who's withdrawing. So we have to prove that our program is actually the owner of that pool vault PDA, and the way we do that is with the seeds. We already know the seeds are "vault", the pool key, and then the bump, so we just generate those seeds here and set them, and then when we do the CPI, we do new with signer, and our signer is going to be the signer seeds here. Then we do the system program transfer, and it'll work, going from the pool vault to whoever is doing the withdrawal. Everything is public at the moment, and we're also just going to run the front end so we can have a look at what we want to be doing. Let's do cd frontend. I'm going to be using Bun here, but you can use pnpm or whatever you want. Do bun i, and then we can do bun run dev,

and we'll run on localhost. So, this is what we're going to be doing. Neither of these buttons is going to work right now, but we'll fill in a deposit, and then this will give us something that we can copy, which is going to be our deposit note — our secrets. Then we'll paste that in here, and we can withdraw from a separate account. That's what we're going to be doing over the next few steps.

We're going to start with the deposit.

The transaction itself will always be on chain. Anyone can see that Alice called the deposit function. But we don't want our program to store this information in the pool, because then when someone withdraws, there's an obvious link. Right now, our deposit event logs the depositor's address and the amount. Instead, we're going to store and emit a commitment: a cryptographic hash that represents the deposit without revealing who can claim it later. A commitment is just a hash. In the ZK world, we call it a commitment because you're committing to certain values without revealing them. Think of it like a sealed envelope: everyone can see that the envelope exists, but no one can see what's inside, and only you know.

In our program, we will store a commitment each time someone deposits, and it will be a hash of their nullifier, a secret, and their amount. The nullifier and the secret are random numbers that only the depositor knows, and they're going to be used in order to withdraw. We'll get into what "nullifier" means a little later. When you deposit, you're going to save your nullifier, your secret, and your amount locally. This is your deposit note, kind of like your receipt. If you lose this note, your funds are gone forever. There's no recovery mechanism, because only you know the secrets; they're not on chain.

This step is pretty short. We're just going to update the deposit function to accept a commitment parameter and update the deposit event to show the commitment instead of the depositor address. Then, even though the on-chain record will say Alice called the deposit function, once it's in the pool there's no way to see which deposit in that pool was made by Alice.
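To make the sealed-envelope idea concrete, here is a small TypeScript sketch of what a client could do locally when building a deposit note. This is an illustration only: SHA-256 from Node's crypto module stands in for the Poseidon2 hash the course actually uses, and every name here (makeNote, commitment, and so on) is hypothetical rather than the course's real API.

```typescript
import { createHash, randomBytes } from "node:crypto";

// SHA-256 stands in for Poseidon2 here; the *shape* of the
// commitment construction is the same.
function hashFields(...fields: Buffer[]): Buffer {
  const h = createHash("sha256");
  for (const f of fields) h.update(f);
  return h.digest(); // 32-byte hash
}

// Encode the amount as fixed-width bytes so the hash input is unambiguous.
function amountBytes(lamports: bigint): Buffer {
  const b = Buffer.alloc(8);
  b.writeBigUInt64BE(lamports);
  return b;
}

// The deposit note: two random secrets plus the amount. Only the
// depositor holds this; losing it means losing the funds.
interface DepositNote {
  nullifier: Buffer;
  secret: Buffer;
  amount: bigint;
}

function makeNote(amount: bigint): DepositNote {
  return { nullifier: randomBytes(32), secret: randomBytes(32), amount };
}

// commitment = H(nullifier, secret, amount): the sealed envelope
// stored on chain, revealing nothing about who can claim it.
function commitment(note: DepositNote): Buffer {
  return hashFields(note.nullifier, note.secret, amountBytes(note.amount));
}
```

The important property is one-way-ness: publishing the commitment reveals nothing about the nullifier, the secret, or the depositor.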

All right. Okay, so in this one, we're just going to update the deposit function to accept a commitment instead of the depositor's address. Instead of just storing the depositor, we're going to store a commitment and emit that as the event. So we can go to our other branch, git checkout step one, hiding deposits, and all the code will be done for us. Okay, it's aborting for some reason. Let's stash it. Git checkout step one.

All right. So in here, let's have a look at what we've changed. Inside the deposit function, we can see that we are now taking a commitment, and the commitment is a 32-byte hash, an array of 32 u8s. This is a Poseidon hash. This is what it looks like; you're going to see this a lot throughout this program, so you don't need to worry too much about it, but if you're doing anything with Poseidon, this is what it's going to look like. Then if we go a little further down, we can see our deposit event, and it doesn't emit the depositor, but instead emits the commitment. This lets other people build out their own view of these commitments so that they can prove them themselves.

So if we have a look at our back end, we can see a little of what this Poseidon commitment looks like. Let's go to the backend source server, and I'm just going to search for the commitment in here. So we have our commitment: it's a Poseidon2 hash, and we're hashing together the nullifier, the secret, and the amount. We're going to talk a little bit about those in a later step. Then we're turning it into hex, so we have a hex string of our hash.

The reason we're using Poseidon here: we could use a regular hash function like SHA-256, which you may be familiar with, but those have a lot of constraints, and using them inside a ZK circuit takes a lot of compute. We want regular people to be able to generate proofs on their laptops and their phones so they can do private transfers from anywhere, and something like SHA-256 is just too expensive for ZK. Poseidon was built specifically for use within zero knowledge; it's a ZK-friendly hash. It achieves the same kind of security with far fewer constraints.

Poseidon2 is what we're going to be using a lot here instead of Poseidon. Poseidon2 is just another version of Poseidon; it's slightly faster and slightly better, but you can use Poseidon as well. Poseidon is also becoming kind of a standard on Solana for anything to do with ZK, and not just for privacy but also for scalability. For example, Light Protocol uses Poseidon to hash together lots of different accounts, which means they don't have to store as much information on chain, because storing on chain can get expensive, and it can get a little slow to fetch that information. So instead of storing all of the different accounts on Solana, they can just store a hash of them, and then generate a proof that all those accounts are true. That's not private, because all the accounts need to be public anyway, but it's using the same primitives we have, like Poseidon, for scalability rather than privacy. There's quite a big Poseidon thing happening on Solana, which is exciting.

But you might have noticed, if we have a look at the code again, that our hash hex is not the same shape as what our program expects, which is this 32-byte array.

So instead we convert it. If we go down and find the API call that we're going to be doing, which is api/deposit, let me just search for it, we'll see that we convert our hash into bytes. So yeah, for the commitment bytes, we build an array from a buffer of the hex hash we made here. We store a byte array on Solana instead of a big integer, because Rust doesn't really have a native big-integer type, so it's not great, and byte arrays are also faster to do computations on. So we have to store everything as byte arrays.
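That hex-to-bytes conversion can be sketched like this; the function name and the validation are illustrative, not the course's actual backend code.

```typescript
// Convert the hex hash the backend produces into the 32-byte array
// ([u8; 32]) the on-chain program expects.
function hexTo32Bytes(hex: string): Uint8Array {
  const clean = hex.startsWith("0x") ? hex.slice(2) : hex;
  if (!/^[0-9a-fA-F]{64}$/.test(clean)) {
    throw new Error("expected a 32-byte hash as 64 hex characters");
  }
  return Uint8Array.from(Buffer.from(clean, "hex"));
}
```

A 32-byte array like this can then be serialized straight into the deposit instruction.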

So now we've hidden the deposit, but how do we withdraw it without revealing which one is ours? We need Merkle trees.

So now we've figured out how to deposit privately. Let's move on to withdrawals.

The first thing we need to do is allow the person who is withdrawing to prove that a certain commitment exists in the pool. The naive approach would be to store all commitments on chain and check against them. But then the user would have to send their commitment with their withdrawal transaction, instantly revealing which one it is. They could alternatively send all commitments in a transaction, but that would make a huge transaction, too large for Solana to handle. Instead, we can use something called Merkle trees.

A Merkle tree is a binary tree where every node is a hash. At the bottom, the leaves are your commitments. Each pair of leaves gets hashed together to create their parent node, then those parents get paired and hashed again, and this continues level by level until you're left with just one hash at the top: the root, aka the Merkle root. To prove a leaf, a commitment, exists, you just need the sibling hashes along the path to the root. For 1,024 leaves, that's only 10 hashes, which is around 320 bytes instead of 1,024 commitments. On Solana, we only store the Merkle root, which is a single 32-byte hash, like the Poseidon one you've already seen. When somebody wants to prove their deposit exists, they provide a Merkle proof, and we verify it against the stored root. This saves a lot of storage cost.

So in this step, we're going to add a leaf index that tracks each deposit's position in the tree. Then we're going to add constants to define our tree depth and the size of the tree; it can be whatever you like. We're going to store the root history, so proofs against recent roots will work. And then we're going to update the withdraw function to validate that the root exists.
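The proof mechanics described above can be sketched in TypeScript. SHA-256 stands in for Poseidon here, and the names are illustrative; the real verification happens on chain in Rust.

```typescript
import { createHash } from "node:crypto";

// SHA-256 stands in for Poseidon; the proof mechanics are identical.
function hashPair(left: Buffer, right: Buffer): Buffer {
  return createHash("sha256").update(left).update(right).digest();
}

// Recompute the root from a leaf, its index, and the sibling hashes
// along the path. A depth-10 tree (1,024 leaves) needs just 10
// siblings (~320 bytes) instead of shipping all 1,024 commitments.
function computeRoot(leaf: Buffer, index: number, siblings: Buffer[]): Buffer {
  let node = leaf;
  let idx = index;
  for (const sib of siblings) {
    // An even index means we are the left child at this level.
    node = idx % 2 === 0 ? hashPair(node, sib) : hashPair(sib, node);
    idx = Math.floor(idx / 2);
  }
  return node; // compare this against the root stored on chain
}
```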

All right, so the goal of this step is to change our pool to hold a Merkle tree root. We don't need to store the entire Merkle tree on chain; it can be maintained off chain by indexers or backends. That's the reason we emit deposit events: so that people can build their own Merkle tree wherever they want to keep it. We only keep track of where we are in the tree and what the Merkle root is.

So if we go to the next step, we'll do git checkout step two, proving deposit, and we'll see in our program that we've got a few more constants. Let's go through what they are. We've got our tree depth of 10. The depth is just how many levels the Merkle tree has, and each level doubles the number of leaves, because it's a binary tree. A tree depth of 10 is quite small; you might want to make it bigger for production. That means we have a max of 1,024 leaves, and we do this Rust thing where we compute two to the power of the tree depth to get 1,024.

Then we also want to store our root history. We're going to store the last 10 Merkle roots in a ring buffer, because whenever somebody deposits, the root changes: they create a new commitment, which updates the Merkle root. So if someone is trying to withdraw at the same time as somebody is depositing, they'll be generating a proof against the current Merkle root, and if a deposit changes the root in between, their withdrawal would just fail. So we keep the 10 most recent roots, and whenever someone withdraws, we check that their root matches any of them; if it does, the withdrawal can go through.

We also have this empty-root constant, in our little Poseidon type. This is just what a zero looks like when it's hashed with Poseidon. We need an empty root so that the tree has a root right when we initialize the pool, because otherwise we couldn't do any proving against what a Merkle root looks like.

So if we scroll all the way down to the bottom, we can look at what our pool struct looks like with these new constants. We're still keeping the authority, we're keeping total deposits, and we're also keeping our next leaf index. This is the place in the tree where the next commitment will be inserted. A Merkle tree is append-only: you can only add new commitments, you can't replace a previous one. With all these commitments at the bottom, we need to store where the next deposit is going to go. Then we also have our current root index, which points to the most recent root in our ring buffer, just so we know which one is most recent. And then we store the ring buffer of our root history, which is our little Poseidon type, and we have 10 of them: 10 different roots stored in our pool.

And if we go to the initialize function, we can see that we're setting all of these things to zero when we start, and setting our roots to the empty root. All these different zeros.

So let's have a look at our deposit function. Remember, the full Merkle tree lives off chain; it doesn't live on Solana. It's maintained by clients, by indexers, by backends, and the program only stores the root. When a new deposit is made, the commitment becomes a leaf in the tree, which changes the root hash all the way up at the top of the tree. Since the program doesn't have the full tree, it can't compute the new root whenever somebody creates a new commitment. So instead, the client has to send both the commitment and the new root when they're making a deposit. They'll generate their local tree with their commitment, see what the new root looks like, and then pass it into our deposit function. Each deposit therefore needs to accept the new Merkle root, computed off chain, as well as the commitment.

So we accept these in our deposit, and then whenever we do the SOL transfer from the depositor into the pool vault, we need to update the tree state so the next deposit goes into the right place, and then emit the event so that everyone who's maintaining a Merkle tree can update it correctly.

Then we need to put the new root into our ring buffer and set it as the current root. The way we do that is: we get the next leaf index from the pool, which is the leaf index this deposit will go into. Then we calculate the new root index by looking at our ring buffer and seeing which slot is next. Then we write the new root into that slot and set the current root index to the new index, and we bump the next leaf index by one. Then we emit everything, so people can build their Merkle tree the way it is on chain.

And once our Merkle tree is full, we can't accept any more deposits. So we just have another require statement that says our next leaf index is less than the max leaves, because every commitment in the tree has a leaf index, and with 1,024 leaves the final index is 1,023. If the next index reaches 1,024, we can't accept it, because the tree is full.

So now we have updated our deposit function to accept the new root from the client, store that root in our ring buffer, emit the leaf index so that the next deposit knows where to go, and check that the tree isn't full before we accept new deposits.
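Here is a rough off-chain mirror of that bookkeeping, with illustrative names and plain strings standing in for 32-byte hashes; the real pool is a Rust account on chain.

```typescript
// Illustrative mirror of the pool's root bookkeeping.
const ROOT_HISTORY = 10; // how many recent roots we remember
const TREE_DEPTH = 10;   // depth 10 => 2^10 = 1,024 leaves
const MAX_LEAVES = 2 ** TREE_DEPTH;

class PoolState {
  nextLeafIndex = 0;    // where the next commitment will be inserted
  currentRootIndex = 0; // slot of the most recent root
  roots: string[] = new Array(ROOT_HISTORY).fill("EMPTY_ROOT");

  // Called on deposit: reject when the tree is full, then record the
  // client-computed root and hand back the leaf index for the event.
  insertLeaf(newRoot: string): number {
    if (this.nextLeafIndex >= MAX_LEAVES) throw new Error("tree is full");
    this.currentRootIndex = (this.currentRootIndex + 1) % ROOT_HISTORY;
    this.roots[this.currentRootIndex] = newRoot;
    return this.nextLeafIndex++;
  }

  // Called on withdraw: a proof may target any recent root, so a
  // deposit landing mid-withdrawal doesn't invalidate the proof.
  isKnownRoot(root: string): boolean {
    return this.roots.includes(root);
  }
}
```

The ring buffer is the design choice that makes deposits and withdrawals safe to interleave: a withdrawal proof stays valid as long as fewer than ten deposits have landed since it was generated.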

So if we go to our withdraw function: it just needs to prove that the commitment exists inside the tree, and the way to do that is by proving that the root the user provides is something we have seen before. So if we scroll down here, we take a root and then we verify that it's one we've seen before.

Oh, it's right there. Our withdraw function just has to prove that the commitment exists in the tree. In order to do that, the person will calculate the root, send it into the withdraw function, and then we check that it's a root we've actually seen before. So we take the root here and call a function called is known root, passing the root into it. If we have a look at this is-known-root function, it's super simple: all it does is make sure the root matches something in our root ring buffer, and if it does, we return yes and continue with the withdraw function.

So now, in this Merkle tree step, our pool stores a ring buffer of the most recent Merkle roots, each deposit gets a leaf index and updates the root, and withdrawals must prove that their root is in the recent history.

So now we can prove that a commitment exists. But you might have noticed that if we emit every single commitment publicly, then everyone can just build their own local root and therefore withdraw. In the next step, we're going to talk about nullifiers, which accomplish two things: we prove that we are actually the owner of the commitment we're trying to withdraw, and we prove that it hasn't been withdrawn already.

So now we can deposit into the pool without being stored as the depositor, and then prove that a commitment exists in order to withdraw. But what stops someone from proving the same deposit multiple times, or proving knowledge of some other commitment that isn't theirs? Currently, nothing. And that's where nullifiers come in. That's what we're going to be talking about in this step.

The nullifier solves both problems. It proves that you own a commitment, because only you know the nullifier you used when creating it, and it prevents double spending, because we can track which nullifiers have been used. The problem is that we can't just mark "commitment X was spent", because doing that links commitment X to the withdrawal. We need to track that something was spent without revealing which commitment.

Remember when we deposited, we generated a random nullifier and included it in our commitment? When you withdraw, you have to send the hash of that nullifier, the nullifier hash, and our Solana program stores all of the used nullifier hashes. So if you try to withdraw twice, you'd submit the same hash twice, and our program can reject it. It isn't possible to link a nullifier with the commitment: the commitment was a hash of the nullifier, the secret, and an amount, while the nullifier hash is a hash of just the nullifier. They have completely different outputs, and we can't reverse one or derive one from the other. Observers will see a deposit with some commitment hash, then a withdrawal with some nullifier hash, but no connection between them.
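The unlinkability argument boils down to two one-way hashes over different inputs. A tiny sketch, again with SHA-256 standing in for Poseidon and illustrative names:

```typescript
import { createHash } from "node:crypto";

// The commitment and the nullifier hash are one-way hashes over
// *different* inputs, so an observer can't connect a withdrawal's
// nullifier hash to any deposit's commitment without the secrets.
const H = (...xs: Buffer[]): Buffer =>
  xs.reduce((h, x) => h.update(x), createHash("sha256")).digest();

// Stored in the tree at deposit time.
function commitmentOf(nullifier: Buffer, secret: Buffer, amount: Buffer): Buffer {
  return H(nullifier, secret, amount);
}

// Revealed only at withdrawal time.
function nullifierHashOf(nullifier: Buffer): Buffer {
  return H(nullifier);
}
```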

This step is pretty simple. We're going to update our program to create a nullifier-set account that stores all the used nullifier hashes. We'll check that the nullifier hasn't been used during the withdrawal, mark nullifiers as used after a successful withdrawal, and then update the withdraw event to include the nullifier hash as well.

So in this next step, we're going to add nullifier tracking to prove ownership of a specific commitment and prevent the same deposit from being withdrawn twice. A nullifier is a unique value derived from the deposit secret. When you withdraw, you reveal the nullifier hash, but not the commitment it was associated with. The program records it, so if you try to withdraw again with the same nullifier, it gets rejected. Each deposit produces exactly one nullifier, and the nullifier can't be linked back to a commitment. So we know that some deposit was spent, without knowing which one.

So let's go to our terminal and do git checkout step three, prevent double spend, and we can see that our program has changed a bit.

If we scroll the whole way to the bottom, we'll see that we now have a nullifier-set account here. We store the pool the nullifier set is associated with, and we keep our nullifiers in here: a Vec of our little Poseidon type, with a max length of 256. You can store it however you want; in quite a lot of production systems, you would actually store your nullifiers in a Merkle tree as well, but we don't need to do that for this one.

So, you might wonder why we're creating a new account just to hold these nullifiers when we could put them into the pool account that already exists. There are a few reasons. The first is that Solana accounts can only be a certain size, and you pay rent depending on the size of your account, so we want many small accounts rather than one huge account. The second is separation of concerns. Say you wanted to store your nullifiers as a Merkle tree: you wouldn't want the pool's next leaf index mixed in with a nullifier tree's state. It would get complicated to use; these are different pieces of information and they don't need to live in the same place. It also lets us upgrade. For example, if you set it up this way and then realized you wanted your nullifier set to be a Merkle tree so you can store more nullifiers, you could just do that without touching the pool account at all. The pool isn't going to change or break accidentally, and you can just update the nullifier set. So it's a pretty nice upgrade-flexibility pattern on Solana.
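As a sketch of what such a dedicated nullifier-set account does, here is an off-chain mirror in TypeScript. Names are illustrative; the real thing is a Rust account holding a Vec capped at 256 entries, and strings stand in for 32-byte hashes.

```typescript
// Illustrative mirror of the nullifier-set account.
const MAX_NULLIFIERS = 256; // mirrors the course's Vec capacity

class NullifierSet {
  private used: string[] = [];

  isUsed(nullifierHash: string): boolean {
    return this.used.includes(nullifierHash);
  }

  // Mark BEFORE transferring funds: a second withdrawal carrying the
  // same hash then fails the isUsed check, closing the double-spend.
  markUsed(nullifierHash: string): void {
    if (this.used.length >= MAX_NULLIFIERS) throw new Error("nullifier set full");
    if (this.isUsed(nullifierHash)) throw new Error("already withdrawn");
    this.used.push(nullifierHash);
  }
}
```

The mark-before-transfer ordering in the comment is the same checks-effects discipline the withdraw function follows on chain.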

Anyway, if we look in here, we can see that we also have a few functions for the nullifier set. If you're unfamiliar with Rust, this is just how we do that: we've got our struct defined here, and then we write our implementations here. So we've got is nullifier used: all we're doing is taking our nullifier set, taking the hash, and seeing if the hash exists in the set. Then we also want to mark a nullifier as used at some point. So again, we take our nullifier set and our hash, and we push the hash into the nullifier Vec. We also make sure we're not going to overflow the nullifier set, by ensuring its length is less than the limit we set, 256.

We're going to use our nullifier set in our initialize function, so that we associate a nullifier set with a pool whenever we initialize a new pool. If we head up here, we can see that our initialize accounts include the nullifier set, and then in our initialize function we get the nullifier set from the context and set its pool to the pool, also from the context. So now we have a nullifier-set struct to store used nullifiers, and when the pool is created, the nullifier set is linked to the pool via a PDA, a program derived address.

So now we need to actually use it, and the way we're going to do that is in our withdraw function. The withdraw flow now needs to accept the nullifier hash from the user, check that it hasn't been used before, and mark it used before it transfers funds. If we look at our withdraw function, we take in the nullifier hash, set it as a variable from the accounts, and ensure it has not been used; that's the exclamation point, has not been used. Then we mark it as used right here.

You can see that we mark our nullifier as used before we do the transfer of the withdrawal, and there's a good reason for that: re-entrancy attacks. It's a common thing in blockchain. For example, if we marked it after our transfer down here, then somebody could call the withdraw function with a valid nullifier, the transfer would start sending funds into the attacker's account, and before the step completes they could call withdraw again. They would still have a valid nullifier, because we haven't marked it as used yet, and they might be able to do this over and over. That's quite a common attack in blockchain systems, so we just make sure we mark the nullifier as used before we do the transfer. And then, as always, when we're emitting our event, we emit that the nullifier hash has been used, so everyone can keep an updated Merkle tree and nullifier set on their systems.

So in this step, we've created a new nullifier-set account to store all of the used nullifier hashes; withdrawals must provide a nullifier hash; if the same nullifier hash is submitted twice, the second withdrawal is rejected; and the nullifier hash cannot be linked back to the original commitment. So right now, we've got a pretty good system in place.

We've got something that can hide deposits, prove ownership, and prevent double spending. But we're not actually verifying that any of this is really true. Right now, anyone could just submit a random nullifier hash that hasn't been used and withdraw, because we're not tying the nullifier to any commitment. That's where zero knowledge comes in, which is what we're going to do in the next step.

So, now let's get on to the fun part.

Zero knowledge proofs. We can deposit privately, prove membership, and prevent double spending. But right now, if we just sent a Merkle proof, we'd be revealing which commitment is ours. That's because when you submit a Merkle proof, you're basically saying, "Here's my commitment at leaf position five, and here are the sibling hashes that prove it's in the tree." The proof contains your actual commitment, the path of sibling hashes, and the index position. Anyone watching the blockchain can see exactly which commitment you're claiming, look back at when that commitment was added during deposit, and link your deposit wallet to your withdrawal wallet.

So instead, we're going to prove: I know a nullifier, a secret, and an amount; the commitment is in the Merkle tree; and the nullifier hash is correct. The verifier on Solana is convinced that we have a valid deposit, but it learns nothing about which deposit. We can write these circuits in a language called Noir. It allows someone to generate proofs client-side or on a back-end server, and then the user can send that proof to be verified on Solana. The reason we want to use Noir is that, syntax-wise, it's very familiar to our favorite language, Rust, and it also allows us to use whichever proving system we want. That means we can use Groth16. Groth16 is especially great for us because Solana needs small proofs and fast verification, which is exactly what Groth16 is known for.

These are all going to be visible on chain: the Merkle root, the nullifier hash, the recipient address, and the amount. Those are visible on chain and used by the verifier on Solana. Our private inputs, which we're going to be putting into the circuit, are never revealed: our nullifier, our secret, our Merkle proof path, which we'll get to in a second, and the path directions, where it sits in the tree.

In this step, we're going to look at the entire withdrawal circuit, install Nargo, the Noir compiler, compile the circuit, and generate the proving and verification keys.

So in this step, we're going to go through the circuits and talk about what they mean; they'll run on our back end to generate a proof, and then we can verify it on chain. Let's go to our terminal again and do git checkout step four, ZK.

The first thing we want to do is have a look at our circuits. They live in the circuits directory here. If we go into circuits, we'll see withdrawal, and we'll see Nargo.toml, which tells the Noir compiler how to compile and what dependencies we need. Then if we go into source, we find our main circuit.

The way circuits work when you write them in Noir is that if the computation succeeds, we generate a proof. Normally that means at the end of the circuit you write some sort of assertion statement. For example, here we assert that the computed root equals the root; you might assert that a equals b, or that a is not equal to b, or something like that. If that goes through, we create a proof; if it doesn't, we just get this error.

So if we have a look at this circuit, it's going to prove knowledge of our nullifier, our secret, and our amount, and prove that the commitment exists inside the tree. These are all the parameters we need. The first four are public ones. Everything is private by default in Noir, but if you put pub in front of it, it becomes public and is used by our Solana verifier as well. These things are already all public.

They're being emitted and are on chain already: we have our Merkle root, we have the nullifier hash, we have the recipient, and we have the amount. And then our private inputs: we've got our actual nullifier; we've got the little secret we used to generate our commitment; we've got our Merkle proof, which is the path from our commitment up to the Merkle root; and we have is even. When you have a commitment, there are two leaves at the bottom that are hashed together, and this says which of the two it is: is it the one on this side, or the one on that side? Is it odd or is it even?
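To see how these public and private inputs fit together, here is the circuit's logic mirrored in plain TypeScript. The real circuit is written in Noir and uses Poseidon2; SHA-256 and every name below are stand-ins, and throwing an error plays the role of a failed assert.

```typescript
import { createHash } from "node:crypto";

// SHA-256 stands in for the circuit's Poseidon2 hash.
const H = (...xs: Buffer[]): Buffer =>
  xs.reduce((h, x) => h.update(x), createHash("sha256")).digest();

// Public inputs: visible on chain, checked by the verifier.
interface PublicInputs { root: Buffer; nullifierHash: Buffer; recipient: Buffer; amount: Buffer; }
// Private inputs: known only to the prover, never revealed.
interface PrivateInputs { nullifier: Buffer; secret: Buffer; path: Buffer[]; isEven: boolean[]; }

function withdrawalCircuit(pub: PublicInputs, priv: PrivateInputs): void {
  // 1. Recompute the commitment from the private note values.
  const commitment = H(priv.nullifier, priv.secret, pub.amount);

  // 2. The public nullifier hash must equal H(nullifier).
  if (!H(priv.nullifier).equals(pub.nullifierHash)) throw new Error("bad nullifier hash");

  // 3. Walk the Merkle path: isEven[i] says whether our node is the
  //    left child at level i, i.e. which hashing order to use.
  let node = commitment;
  for (let i = 0; i < priv.path.length; i++) {
    node = priv.isEven[i] ? H(node, priv.path[i]) : H(priv.path[i], node);
  }
  if (!node.equals(pub.root)) throw new Error("commitment not in tree");

  // 4. `recipient` is an input to the proof even though we never
  //    compute with it: changing it invalidates the proof, binding
  //    the withdrawal to one specific payee.
}
```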

The way this circuit works is that we first compute the commitment. We use Poseidon2 here, which is also what we're using on the Solana side, and we hash together the inputs we have: our nullifier, our secret, and our amount. We also have to pass the next parameter to Poseidon, saying to expect three different things to be hashed together. Then we compute our nullifier hash to make sure the nullifier hash we have publicly is the same one derived from the nullifier they sent in. So we just hash our nullifier and check: is the computed nullifier hash the same as the public one? If not, we error out.

Then we compute our root. We have another function for this that we'll get to in a second. We provide the commitment we just created from our nullifier, secret, and amount; we provide the path, which is the Merkle proof all the way up from our commitment to the top of the tree; and we pass `is_even`, which basically tells Poseidon which order to hash each pair in: do we hash (this commitment, that commitment) or (that commitment, this commitment)? Then we just assert that the computed root is actually the Merkle root, and if it is, the proof can be generated. We also do this cool thing at the end where we pass in the recipient but don't otherwise do anything with it; we just want to bind the proof to that recipient. What this does is associate the proof with one specific recipient, so if somebody changes the recipient, the proof will no longer be valid. It can only be used by that one recipient.

If we have a look at this compute Merkle root function, this is something you see all the time with Noir; it's super simple code you can find anywhere. Basically, it computes a Merkle root from a leaf and its path all the way up to the top. We take in the leaf, which is just the commitment; we take in the path, which is the Merkle proof we pass in; and we take in `is_even`, which says which side each node sits on when it's hashed with its sibling to form the parent. Then we just loop up to the tree depth, keep hashing these together until we reach the very top, and return the Merkle root we've computed. Back in the main function, we make sure it's the same as the public root.

The rest of the code in here is just tests, which are pretty useful. You can test directly with Noir: you write something that looks similar to Rust, give it random nullifiers and random secrets and everything, and make sure the circuit actually behaves as expected. So if we go into our circuits directory, we can test it by running `nargo test`. Oh, we're not in the right place; we've got to go into the withdrawal directory, then run `nargo test`, and we can see that all of our tests pass. You can have a look at the tests in the code, but they're not really important right now.
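The checks the circuit performs can be sketched outside Noir, too. The following is a hypothetical Rust sketch using a stand-in mixing function instead of Poseidon2, just to show the shape of the logic: compute the commitment, check the nullifier hash, fold the Merkle path up to the root, and compare against the public root. The hash function, field type, and tree shape are all illustrative assumptions, not the real circuit.

```rust
// Stand-in mixing function -- NOT Poseidon2, just a placeholder so the
// control flow of the circuit's checks is runnable.
fn hash2(a: u64, b: u64) -> u64 {
    a.wrapping_mul(31).wrapping_add(b).wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

fn hash3(a: u64, b: u64, c: u64) -> u64 {
    hash2(hash2(a, b), c)
}

/// Fold the Merkle path from the leaf up to the root.
/// `is_even[i]` says whether our node is the left (even) child at level i.
fn compute_merkle_root(leaf: u64, path: &[u64], is_even: &[bool]) -> u64 {
    let mut node = leaf;
    for (sibling, even) in path.iter().zip(is_even) {
        node = if *even { hash2(node, *sibling) } else { hash2(*sibling, node) };
    }
    node
}

/// The circuit's assertions: true iff the statement being proven holds.
fn withdrawal_checks(
    // public inputs
    root: u64,
    nullifier_hash: u64,
    // private inputs
    nullifier: u64,
    secret: u64,
    amount: u64,
    path: &[u64],
    is_even: &[bool],
) -> bool {
    // commitment = H(nullifier, secret, amount)
    let commitment = hash3(nullifier, secret, amount);
    // nullifier hash must match the public one
    let computed_nullifier_hash = hash2(nullifier, 0);
    computed_nullifier_hash == nullifier_hash
        && compute_merkle_root(commitment, path, is_even) == root
}
```

In the real circuit the recipient is also declared as a public input purely so the proof is bound to it, as described above; it doesn't participate in any of the checks.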

We can also compile it, and that's going to be important for actually using this. When we compile it, we can see we now have a target directory inside our circuit, and it has our withdrawal JSON. We're going to use this in the next step to generate proving and verification keys so we can develop a Solana program that verifies the proof is legit.

We're going to use this compiled circuit to generate proving and verifying keys so we can actually verify this stuff on Solana. Groth16 proofs require something called a trusted setup, which is a one-time process that generates cryptographic keys specific to our circuit. The reason Groth16 can have a really, really tiny proof size and fast verification is that it bakes the circuit structure into special keys during this setup. When we run the setup, it produces our proving key, which contains all the cryptographic parameters needed to generate proofs, and our verification key, which contains just enough information to verify those proofs. That's what gets deployed on chain. The "trusted" part of the phrase "trusted setup" refers to the randomness used during the setup: if somebody knew this randomness, they could fake proofs. We're going to do this the easy way in development by just running a command, but in production you'd want multiple parties performing the trusted setup together and then discarding that information, so no single person can forge proofs.

Inside our circuit, we're going to use something called Sunspot. Sunspot is a tool that you'll want to install; you install it and just add it to your path. It will be linked in the description so you can do that yourself. Then I'm going to run `sunspot help`, and once that succeeds for you, you'll know it's actually installed. What we're going to do is run `sunspot compile` on the target withdrawal JSON, which converts it into the CCS format, which is what Sunspot uses to generate the proving keys. And now we can do the trusted setup by running `sunspot setup` and passing in that CCS file.

And now we have our proving key and our verification key, so our setup is completely done. You can look at these files, but in most code editors you're not going to be able to make sense of them unless you have something that can view these formats. The verification key is going to be baked into our on-chain verifier, which is pretty exciting. So in this step, we've got a circuit that proves you own a deposit without revealing which one, we've generated our proving key so our back end can use it to generate proofs, and we've created our verification key, which is going to be deployed on chain as a verifier.

So now we have the circuit and we have the keys; let's deploy the verifier program to Solana. We've got our ZK proof, and now we've got to verify it on Solana. This is the final piece in the puzzle of zero-knowledge proofs. We're going to deploy a program on Solana that is just a verifier generated by Sunspot, and our private transfers program is going to call that verifier via CPI. If the proof is invalid, the whole transaction fails atomically. So in this next step, we're going to generate the Solana verifier program from the verification key we just created, deploy it to devnet, and then add that verifier program ID to our code and call it via CPI to verify our proofs.

So now we're going to deploy our Sunspot verifier program and verify ZK proofs on Solana. The way we're going to do that is: generate the verifier program from the verification key we generated in the previous step, deploy that verifier to Solana devnet, and then update the program we have here to verify ZK proofs by calling the verifier via CPI. The first thing we need to do is tell Sunspot where our verifier template is. This is just a bit of a setup process: you need to set an environment variable pointing at the verifier template. I've already set it, but basically, whenever you install Sunspot, inside that directory you're going to see something called verifier-bin, and you just need to set that path as the environment variable. We can have a look here and echo the variable to see what I have set; this is where mine sits, and yours is probably going to be somewhere similar. So just make sure you set that variable before you continue. And then we're going to generate our verifier program.

Super simple: inside our circuit, where we have all of these pieces, we need the verification key. What we're going to do is run `sunspot deploy` and pass in the verification key, and this is going to generate a Solana program. All right, that was super simple. It doesn't generate it in the nice source format we're used to; it generates an already compiled format that looks a little bit scary again, but we can just deploy that directly to devnet and not worry about it. So, let's deploy to devnet.

Let's make sure first off that our config is set to devnet: we run `solana config set` and pass in the URL for devnet. You can also pass in an RPC URL there that already points to devnet. Okay, we can see that we're on devnet. Let's check that we have enough SOL balance. Okay, I've got eight SOL, and this is probably going to cost about 1.5 SOL to deploy. It's a pretty large program: it contains all of the Groth16 cryptographic math, so it's quite large. Then what we're going to do is run `solana program deploy` with the target file, yes, this one right here. It's a Solana program, and now we're deploying it to devnet.

You might notice we're doing `solana program deploy` here instead of something you might be familiar with, `anchor deploy`. That's because the program generated by Sunspot is not an Anchor program; it's just a base Solana native program, so we have to use `solana program deploy`. All right, it's deployed, and we've got our program ID here. This is going to be important, so make sure you copy it. Now that we've got it copied, we're going to head back into the program we're pretty familiar with, our private transfers program. At the very top we've got a const, which is our Sunspot verifier ID; we're going to pop this one in there. We're also going to update it in our Anchor.toml: we've got our Sunspot verifier entry there, and we make sure it's this one.

You can ignore mock verifier for now because that's something that we use for testing, but we're not using that today.

So now we've actually got to CPI into the Sunspot verifier, call it with our proof and our public inputs, and verify that proof. The first thing we have to do is encode our public inputs and our proof the way our verifier expects: it expects a very specific binary format for its input. If we head down to a function called encode public inputs, we'll see something that takes in our root, our nullifier hash, our recipient, and our amount. If you notice, that's the same order we have in our function here: root, nullifier hash, recipient, and amount. So we pass those in, and then we set the number of public inputs to four. It's important that we know how many public inputs we have, because we have to tell that to our verifier.

Then we allocate how much space we need for our inputs: a Vec with a capacity of 12, which is the header this Groth16 verifier expects (we'll get to it in a second), plus 128, which is how much space we need for our actual public inputs. The header is something our verifier expects: bytes 0 to 3 are the number of public inputs, bytes 4 to 7 are the number of commitments, which for us is zero because we're already handling that somewhere else, and bytes 8 to 11 are the number of public inputs again. It's a strange thing, but it's what verifier programs expect. If we have a look in here, we're adding the number of public inputs that we defined above as big-endian bytes. That term might be familiar to you, or maybe not; if you've studied software engineering, you might know what it is. Basically, big-endian and little-endian are two ways to order bytes in memory. Most systems now use little-endian because it's just faster: Solana uses little-endian, and your computer probably does as well, but a lot of crypto, especially zero-knowledge crypto, uses big-endian. So we've got to convert things into big-endian bytes; that's what `to_be_bytes`, "to big-endian bytes", means. So we convert our public input count into big-endian bytes.

Then we convert a zero into big-endian bytes, because we don't have any commitments here. And then we add our number of public inputs as big-endian bytes once again.

So that's our header, which fills up those 12 bytes. Then we can go into the next part, which is the public inputs themselves. All we have to do is add these into our inputs in the order they appear here. The root and nullifier hash are already in the Poseidon hash format, but the recipient isn't, so we have to convert it there. And because our amount is a little bit smaller in bytes, we just have to pad it with a bunch of zeros, which is what we're doing here: we pad it with zeros and then add it. Then we just return the inputs. So that's our public inputs, all formatted and ready to go for our verifier, and we just have to CPI into the verifier with our proof and these inputs.
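Putting the header and the four public inputs together, the encoder described above looks roughly like this. This is a sketch, not the project's actual code: it assumes each public input is a 32-byte big-endian field element and that the header is three big-endian u32 counts (inputs, commitments, inputs again), as just described.

```rust
/// Sketch of encoding public inputs for the verifier:
/// a 12-byte header followed by four 32-byte big-endian values (128 bytes).
fn encode_public_inputs(
    root: [u8; 32],
    nullifier_hash: [u8; 32],
    recipient: [u8; 32],
    amount: u64,
) -> Vec<u8> {
    const NUM_PUBLIC_INPUTS: u32 = 4;
    let mut inputs = Vec::with_capacity(12 + 128);

    // Header: number of public inputs, number of commitments (zero here),
    // then the number of public inputs again -- all big-endian u32s.
    inputs.extend_from_slice(&NUM_PUBLIC_INPUTS.to_be_bytes());
    inputs.extend_from_slice(&0u32.to_be_bytes());
    inputs.extend_from_slice(&NUM_PUBLIC_INPUTS.to_be_bytes());

    // Public inputs, in the order the circuit declares them.
    inputs.extend_from_slice(&root);
    inputs.extend_from_slice(&nullifier_hash);
    inputs.extend_from_slice(&recipient);

    // The amount is only 8 bytes, so left-pad with zeros to 32 bytes
    // and store it big-endian.
    inputs.extend_from_slice(&[0u8; 24]);
    inputs.extend_from_slice(&amount.to_be_bytes());

    inputs
}
```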

We've got to make sure we add our verifier to our withdraw accounts, because we're only verifying when we're trying to withdraw, so our withdraw function needs to be aware of the verifier program. If we head down into our withdraw accounts, we can see our verifier program here. It's an unchecked account because we don't know what that program actually looks like, so we can't check it, but we are constraining it by making sure the verifier program equals the Sunspot verifier ID that we defined at the very start of the program. Whenever we do anything with an unchecked account in Anchor, we have to add a safety comment saying, okay, yes, it's unchecked, but it's safe; here we're already validating it with the constraint. So now our program is aware of the verifier program; let's actually call it. If we go back up to our withdraw function, which is getting to be quite a long program now, it's right here.

We can scroll down and see that before we do our transfer, we call the verifier to make sure our proof is valid. What we're doing here is getting our public inputs and encoding them, passing in the root, nullifier hash, recipient, and amount that we get from the parameters. Then we build the instruction data, which is the data we're actually going to pass in for the CPI: our proof and our public inputs. Then we invoke the CPI. We're doing this in a slightly different way than when we called the system program previously, because the program we're trying to CPI into is not an Anchor program, so we have to use something called `invoke` instead. It's pretty simple: we pass in what the instruction is going to be. We use the program ID of the verifier program from the accounts, we pass in nothing for the accounts because it doesn't need any, and we pass in the instruction data, which is our proof and our public inputs.

And that's going to verify the proof.

And if that goes through and everything is verified, then we move on to the next part of the code and mark our nullifier as used.

Just a bit more information about these CPIs. We use `invoke` when we're calling another program and the caller's authority is passed straight through. We use `invoke_signed` to sign with PDAs, for example when one of our PDAs needs to authorize something. Here we're just using `invoke`, and we're using `invoke` instead of an Anchor CpiContext because the program we're calling is not an Anchor program.

Okay, so now we're ready to deploy our program. We've got everything working: we've got our Sunspot verifier here, we're verifying our proofs, and we're making sure we update our Merkle tree, so we can actually deploy. If we head into our anchor directory, this is an Anchor program, so we can just build it first to make sure everything's good. Oh yeah, we have to sync our keys first. What happened there is that the key in our Anchor config was different from the key declared at the top of the program, so we just run `anchor keys sync` to sync them all up. Then let's try building it.

So we can see in our program that withdrawals now require a ZK proof. Our program calls the verifier via CPI before releasing funds; if verification fails, the entire transaction reverts, and no funds can move without a valid proof.

So, let's just wait for this program to be built.

Okay, we got some warnings there, but we don't have any errors yet. So,

okay, awesome. It built. That's good.

So now we can deploy to devnet: we run `anchor deploy` and pass in the cluster as devnet. All right, our program has been deployed to devnet. I'm going to make sure we keep this program ID, because we'll be using it in the next step, where we wire everything together in our front end.

So now we've built all of the individual pieces. We're going to see the complete flow and have a look at how we're calling some of these things and wiring them all together.

In this step, we're going to understand everything about the front end: how it builds and sends withdrawal transactions using Solana Kit. We're going to generate our TypeScript client with Codama, walk through the withdraw transaction code, and then run and test the complete flow with everything together. So first off, we're going to generate our Codama client with our IDL.

An IDL stands for interface definition language. Think of it kind of like a JSON file that describes your Solana program's API: what instructions it has, what accounts they expect, what data types they use. When we ran anchor build just now, Anchor automatically generated an IDL file by reading our Rust code, and we can have a look at it in target/idl, the private transfers JSON. Any front end or script that wants to call your program can read this IDL to see exactly how to format its requests, which just makes working with Solana a lot easier. For example, our IDL describes our withdraw instruction: it says it needs a proof of type bytes, a nullifier hash, and all the different types and everything in there. Without this, you'd have to manually figure out how to serialize data correctly, what byte order to use, and so on, which is tedious and error-prone. So we're going to use this IDL with something called Codama, which is going to generate TypeScript definitions for us and make it super easy to use all of this from our front end.

If we go into our front end, we can find scripts/generate-client. You can find this really easily online in the Codama docs; I'm just copying and pasting from there. Basically, what we're doing here is getting the IDL path we've specified, creating the Codama root from that IDL, and saying where we want to put the output: /source/generated.

So let's go ahead and run the script by cd-ing into the front end again and using bun: `bun run scripts/generate-client`. If we have a look at our source directory, we've got all the generated stuff in here. Looking at it, you can see it makes everything super easy to use within TypeScript: we get a bunch of functions we can use and a bunch of types we can look at, and it's going to make our front end much easier to build.

Before we dive deep into the front end code, we need to set some constants. Let's again get our Sunspot verifier ID. This is not the program we just deployed; it's our Sunspot verifier program. I don't remember exactly where it is, but I do know it's in my Anchor.toml here, so I'm going to copy it from there. If we go to front end source and constants, we've got our Sunspot verifier ID there, and we're just going to change it to this one.

And then we're also going to need our program ID in our back end. We get the program ID from our private transfers program and head into the back end, which is going to be generating all of our proofs for us, in source/server. We'll see something at the top which is our program ID, and we set it there. We don't need to do this for our front end because Codama automatically gets the program ID for us, so we don't have to set it anywhere.

All right, let's go back to our front end and we'll head inside our components and have a look at our withdraw section so we can see how we are formatting all of our requests.

We're using something called Solana React hooks, which is going to make it super easy for us to work with React. We're using use wallet connection, a React hook that gives us the connected wallet, and use send transaction, a React hook that helps us sign and send transactions. We've also got Solana Kit, which is what the React hooks use under the hood. We've got address, which converts a base58 string into an address; get program derived address, or get PDA, which derives a PDA from the seeds and the program ID; get bytes encoder, which helps us create an encoder that converts our strings into bytes, which we're going to use for withdraw; and address encoder, which converts an address to 32 bytes, which we'll also use. And then we're getting some things from Codama as well: an encoder to make sure we get all of the data correct in our withdraw function, and our program address for private transfers.

Let's head down here and find our withdrawal proof. We get the withdrawal proof from our back end and then convert it into the types we need before we call our withdraw function. Then we derive some PDAs, which is super simple to do: we get our pool PDA from get PDA, passing in the program address Codama gives us and the seeds we've set in our constants. We also need our nullifier set PDA, passing in the same seeds, and our pool vault PDA, which we do the same thing for.

Then we need to encode our instruction data the way our program expects. This is super simple with what Codama has given us: get withdraw instruction encoder. We just pass in our proof, our nullifier hash, our root, our recipient, and our amount, in the same order our program expects, encode it, and we're good to go. So now we're actually ready to build the instruction. We pass in the program address and give it all the accounts it needs, which we've already derived: our pool PDA, our nullifier set PDA, our pool vault PDA, our recipient address, and then our Sunspot verifier ID and the system program ID. The roles specify what's actually going to be done with each account: 1 means it's writable, 0 means it's read-only, 2 is read-only and a signer, and 3 is writable and a signer. We just need the first ones for now. Something else we specifically should do for verifying ZK proofs is request more compute budget, because sometimes we need a little more compute to verify proofs. What we can do here is create another instruction, a compute budget instruction: we pass the compute budget program ID that we've set in a const (it just looks like this), we pass no accounts, and we pass the data we have set up here. Basically, the only important thing here is how many compute units we actually want, and we're going to request this many. That should be enough.
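The two encodings just described can be sketched in a few lines. The account-role helper matches the 0/1/2/3 values above (writable and signer as two bit flags). The compute-budget payload assumes the common SetComputeUnitLimit layout: a one-byte instruction index of 2 followed by the unit count as a little-endian u32; treat that exact layout as an assumption to check against current Solana docs.

```rust
/// Encode an account role as described above:
/// bit 0 = writable, bit 1 = signer.
/// 0 = read-only, 1 = writable, 2 = read-only signer, 3 = writable signer.
fn account_role(writable: bool, signer: bool) -> u8 {
    (writable as u8) | ((signer as u8) << 1)
}

/// Sketch of compute-budget instruction data (assumed layout:
/// index byte 2 = SetComputeUnitLimit, then a little-endian u32).
fn set_compute_unit_limit_data(units: u32) -> Vec<u8> {
    let mut data = vec![2u8];
    data.extend_from_slice(&units.to_le_bytes());
    data
}
```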

And then we can send the instructions: first we make sure we get that compute, and then we send the withdraw instruction.

I think we're ready to run the front end. Let's head back to our terminal. We're going to start with our back end first, because the back end is the one that actually generates our proofs. We'll do `bun run dev`. Oh, we need to install first. Whoops. `bun run dev`. And then we'll open our front end and `bun run dev` in here as well; I think we've already installed there.

All right, back to our browser. It's open here. I've already connected with my wallet previously, and we can deposit. Let's zoom in a bit and deposit 0.2 SOL. So, we're generating the note here, and I have to sign this in my wallet. This isn't a real wallet, so it's fine. It's going to say the transaction won't succeed; that's because we don't have any SOL on mainnet, but we do on the test network, so it will succeed.

Okay, so now we've got our deposit note, and this is what we need in order to withdraw. If we paste this deposit note into withdraw, we can see the amount we deposited, and we can hit withdraw. It's generating our proof right now; it doesn't take much time with Groth16, it's pretty fast. And then we've got our 0.2 SOL. So we've deposited into a pool privately and withdrawn privately as well.

And there you have it: you've built a privacy-preserving transfer system on Solana using zero-knowledge proofs. We built a private transfer pool where deposits are hidden behind cryptographic commitments, withdrawals reveal nothing about which deposit is being spent, and double spending is prevented without revealing identity. We practiced Anchor macros, CPIs, and PDAs, and learned a little more about Kit. We also generated a ZK verifier, learned about ZK verification, and learned a bit more about Solana compute budgets and transaction sizes. And we learned best practices for handling ZK and privacy on Solana: all about Merkle trees, commitments and nullifiers, Poseidon, and how to keep things private on a public ledger. Congratulations! I hope you enjoy the rest of the boot camp.

In this project, we're going to build something that powers billions of dollars in transaction volume on Solana: a fiat-backed stablecoin. If you've ever used USDC or USDT, you've interacted with this exact type of system. So, what exactly is a fiat-backed stablecoin? It's a token where every single coin in circulation is backed one-to-one by real dollars sitting in a bank account or in treasury bills. When someone deposits $100, we mint 100 tokens; when they want their dollars back, we burn the tokens and release the funds. Simple concept, but building it correctly requires understanding some critical Solana concepts.

Here's what you're going to learn. First, we'll dive into Token-2022, Solana's modern token standard, designed for building tokens with real on-chain roles, advanced controls, and features like freezing, burning, and role-based access. Token-2022 has several different extensions that you can add onto a token for various use cases. Specifically, for this project, we're using Token-2022 to meet the regulatory compliance needed to issue a stablecoin. For example, if a sanctioned wallet is holding your stablecoin, regulators expect you to be able to freeze or seize those assets, and Token-2022 enables this feature. Second, we'll learn about program derived addresses, or PDAs. Our program uses a PDA as the mint authority, which means no private key controls the token supply; the program itself is the only thing that can mint, and it only does so when the rules are followed. Third, we're implementing a minter allowance system. Not everyone can mint tokens, only authorized minters, and each one has a cap on how much they can create. This teaches you how to build role-based access control on chain. Fourth, you'll build emergency controls: a pause mechanism that stops all minting if something goes wrong. Real stablecoins need circuit breakers, and you'll understand exactly how to implement one. The patterns you learn here are similar to those used by actual stablecoin issuers. So, let's get to building.

Before we build our stablecoin, let's make sure we understand how tokens work on Solana. There are two account types that you need to know. First, there's the mint account. This defines the token itself. Think of it like the definition of a currency. It stores the total supply, the number of decimals, and who has authority over it. There's only one mint account per token. Second, there's the token account. This is where tokens are actually held. It's like the wallet for that specific currency. It tracks a balance and points back to its mint, but also to its owner. There can be millions of token accounts for a single mint, one for every user holding that token. The simple way to remember this: the mint is the currency definition, and the token account is a wallet holding that currency.
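To make the two account types concrete, here's a minimal sketch in plain Rust. These `Mint` and `TokenAccount` structs are simplified stand-ins, not the real SPL account layouts: one mint defines the currency, and many token accounts point back at it.

```rust
// Simplified model of the two SPL account types (illustrative only,
// not the actual on-chain byte layouts).
#[derive(Debug, Clone, PartialEq)]
struct Mint {
    supply: u64,              // total tokens in existence
    decimals: u8,             // e.g. 6 for USDC-style tokens
    mint_authority: [u8; 32], // who may mint (a PDA in our program)
}

#[derive(Debug, Clone, PartialEq)]
struct TokenAccount {
    mint: [u8; 32],  // which currency this wallet holds
    owner: [u8; 32], // who controls the balance
    amount: u64,     // balance in base units
}

// Many token accounts can reference one mint; summing their balances
// for a given mint gives how much of that currency is held.
fn total_held(accounts: &[TokenAccount], mint: &[u8; 32]) -> u64 {
    accounts.iter().filter(|a| &a.mint == mint).map(|a| a.amount).sum()
}
```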

Solana has a built-in program for handling tokens called the SPL Token program. If you're familiar with Ethereum, this is Solana's equivalent to ERC-20. It handles the basics like creating mints, minting tokens, burning tokens, and transferring between accounts. It's been around since Solana's early days and works great for simple tokens. But this is where token extensions comes in. It's also called Token 2022, and it's a superset of the token standard that's backwards compatible with SPL Token, but adds powerful new features called extensions. There are over 15 different extensions available today, and a token can mix and match different extensions as you see fit.

For example, say I'm creating a subscription service with a token-gated website: you must own the subscription token to access it. For this to work, I would create the token with extensions like non-transferable and permanent delegate enabled. When someone signs up for the subscription, I'll mint a token to their wallet, and they won't be able to transfer it. This validates that the holder has actually signed up for the subscription. Then, if the user stops paying for the subscription, the permanent delegate can go and burn their token, revoking their access to the subscription service.

Now, this project is a stablecoin, and a stablecoin would use token extensions to ensure regulatory compliance. Here are some extensions that we could use for our stablecoin.

There's permanent delegate. This lets an admin transfer or burn tokens from any account. Why would you want that? Well, for compliance: sanctions enforcement and legal recovery. Real regulated stablecoins need something like this. There's also default account state. New token accounts can start frozen or unfrozen by default, which gives you control over who can actually hold your tokens. Then there's metadata. You can store the token's name, symbol, and image URI directly on chain, so no external programs are needed. Another one is transfer hooks. Transfer hooks let you run custom logic on every single transfer, which enables things like KYC checks, blacklists, or transfer fees.

When we initialize a mint account for a stablecoin, we'll initialize all the extensions that we want to use at that time, and we'll see exactly what that looks like when we clone our code right now.

So here is our stablecoin code, and as always, we're going to start with the program state. Everything here is in one lib.rs file, so we'll be bouncing back and forth around the file a bit, but first we'll scroll down to where all of our account structures are. The way this stablecoin is built, there are three core accounts.

The first one is our config account, which is essentially the control center of the stablecoin. There's only one config per program, and here's why. If we go back to the initialize struct, where all of the accounts are passed through, you can see in the seeds field, where the PDA is defined, that the seed is just the constant "config". Because the seed is a single constant, there can only be one config PDA. If other variables were added, like the payer's public key, then there could be multiple config PDAs. Here, though, there's only one constant for the PDA, so there's one config account per program, and that config account is, like I said, the control center of the stablecoin.

Going back to our account structs, we have our config. The reason there's only one is that it's essentially the mint authority and the permanent delegate of the coin, and it signs all token operations via CPIs. When CPIs are called within the program, they're signed with signer seeds, and those signer seeds are the config account's. So this account has the main control of the stablecoin.

However, we also want people to be able to mint and burn new tokens, and we do that with a minter config, adding and removing minter authorities in the program. That's what this other account is, the minter config account. It just stores all of the information for a minter who will have access to mint tokens. You can see it stores the minter's public key, how many tokens they're allowed to mint, and how many they've minted so far. So if you're allowed to mint 1,000 tokens, each mint adds to the counter in the amount-minted field. Then we save whether it's initialized, and then the bump, and we've already gone over in the escrow program why you want to save the bump field in your program state.

Okay, so those are the two accounts that have a custom data structure defined here and hold data on chain. The other account, the last core account for this program, is the mint account of the stablecoin itself. That one doesn't get a struct here, because it doesn't hold any data besides what's already in a token's mint account, and it's created using the mint instruction of the token program, or in our case the token extensions program, which we'll be using specifically for this project. So it doesn't need a data structure defined here. Just to recap: you only define an account structure when you're creating a new account on chain with custom data, and you specify every field for that data. For mint accounts, all of the stored data is already predefined for token mints.

Okay, so before jumping into the actual instructions, one more recap on our config accounts: we have the minter config to enable multiple minters with different issuance constraints and specified amounts over time. This helps meet a lot of the regulatory compliance needed for issuing a stablecoin.

Now we can dive into the actual instructions themselves. Going back up to the top, we have the program macro from Anchor, and then our stablecoin with every function defined here. We're just going to go over at a high level what each function is, and then dig in a bit deeper where we need to review new concepts for this bootcamp. A lot of the concepts repeat from previous projects in the bootcamp, so we won't dive too deep into those, just give a high-level overview. The first instruction here is initialize.

What initialize does is bootstrap the whole system. I'm initializing the config account and storing all of the data it needs, I'm initializing the mint and the extensions I want added to that mint account, I'm calculating how much space is needed for the mint account to be created and held on chain, and I'm setting up the instruction to initialize a mint using the Token 2022 program.

The next instruction is configure minter. What this instruction does is configure a new mint authority on the stablecoin. If you're adding a new person who has the authority to mint X amount of tokens, you execute the configure minter instruction and specify their public key and the allowance they're able to mint. That way, the user will be able to execute the mint instruction that we'll show in just a second. Then we have the remove minter instruction, which basically does the opposite, removing that public key's access to mint tokens.

Now we have the mint tokens instruction. Typically, you can just call mint from SPL Token, and anyone holding the correct mint authority can call it. The reason we have a custom program here is that we want specific minters with a specific allowance over a certain amount of time, and to customize that, it has to live in its own program. Here you can see there's a require making sure the stablecoin is active, and another require making sure it's the correct minter config, so only the correct person with the correct allowance can call this mint instruction. If you don't have that allowance, the instruction fails. What sets this apart from the typical mint instruction for SPL tokens is that your minter authority is controlled with an allowance, and a window of time it's able to mint in, because of this configure minter account and these instructions.
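As a rough sketch of that allowance bookkeeping in plain Rust (hypothetical field names; the real program stores this in an Anchor account, verifies the minter's public key against the config PDA, and adds more checks):

```rust
// Hypothetical minter-allowance state, mirroring the minter config idea:
// a cap, plus a running total of how much has been minted so far.
#[derive(Debug)]
struct MinterConfig {
    allowed_amount: u64,
    amount_minted: u64,
}

impl MinterConfig {
    /// Succeeds only while the stablecoin is active and the running
    /// total stays within this minter's cap.
    fn try_mint(&mut self, amount: u64, paused: bool) -> Result<u64, &'static str> {
        if paused {
            return Err("stablecoin is paused");
        }
        let new_total = self
            .amount_minted
            .checked_add(amount)
            .ok_or("arithmetic overflow")?;
        if new_total > self.allowed_amount {
            return Err("minter allowance exceeded");
        }
        self.amount_minted = new_total;
        Ok(new_total)
    }
}
```

Threading the `paused` flag through every mint is also one simple way to model the emergency circuit breaker from the pause instruction.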

Again, this maps back to making stablecoins meet the specific regulatory compliance required of specific companies. The next few instructions are additional sanity and security checks, like being able to pause or unpause the token, and being able to force burn or force transfer tokens; these, too, exist to meet regulatory compliance. One instruction I skipped over is the burn tokens instruction, which just burns tokens once someone is ready to receive fiat back for their stablecoin.

So, to go back over the entire life cycle of a fiat-backed stablecoin: a user deposits fiat into a bank account, and then they're able to mint the one-to-one equivalent of that asset on chain. If I deposit $100 US into a bank account, then I'm able to mint 100 USDC on chain.
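The deposit-mint and burn-redeem loop above can be sketched as simple supply accounting (a toy model, assuming the fiat reserves are tracked off chain and reported honestly):

```rust
// Toy model of 1:1 fiat backing: every token minted is matched by a
// dollar of reserves, and redeeming burns tokens before releasing funds.
#[derive(Debug, Default)]
struct Stablecoin {
    reserves_usd: u64, // dollars held in the bank / treasury bills
    supply: u64,       // tokens in circulation
}

impl Stablecoin {
    fn deposit_and_mint(&mut self, usd: u64) {
        self.reserves_usd += usd;
        self.supply += usd; // mint 1:1
    }

    fn burn_and_redeem(&mut self, tokens: u64) -> Result<u64, &'static str> {
        if tokens > self.supply {
            return Err("cannot burn more than supply");
        }
        self.supply -= tokens;
        self.reserves_usd -= tokens; // release the matching dollars
        Ok(tokens)
    }

    /// The invariant the whole design protects: full backing.
    fn fully_backed(&self) -> bool {
        self.reserves_usd == self.supply
    }
}
```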

Then, when I'm ready to withdraw from the bank account, I burn those 100 tokens on chain and withdraw the funds. It's a very simple mint and burn mechanism. However, to make this meet compliance, keep the backing one to one, and keep everything working correctly, we have these additional authorities and configs: minters with specific access, validated so these transactions stay accurate.

So, now that we've gone over all of the functionality behind the stablecoin, and mainly the difference between SPL Token and token extensions and where the use case for token extensions comes in, we can test the project. We're just going to run cargo test and let all of the tests run. This project has a pretty developed test suite covering different edge cases and security constraints for the stablecoin. You can see there are 23 tests, and they all passed: testing initialize, testing failure cases, testing the mint authority, and testing all of the instructions. The best way to properly understand this program is to go through all of the tests, see how they work, and check out all of the ways to build out a stablecoin.

So that's everything for the stablecoin project. If you're interested in more depth on how each of these functions works, and how the derive accounts macro works in Anchor when you're passing through all of the accounts associated with an instruction, go back to the escrow project, because it goes in depth on that part of Anchor and on how to write out your instructions and their associated accounts. So that's everything for the stablecoin.

In this project, we're building a stable swap AMM. Now, you might be familiar with constant product AMMs like Uniswap, where swapping follows the classic x * y = k formula. That works great for volatile token pairs, but when you're swapping stablecoins like USDC for USDT, that curve creates unnecessary slippage. You're trading assets that should be worth the same, so why lose value? That's where a stable swap comes in. Originally created by Curve Finance, this design uses a hybrid invariant that flattens the curve near the 1:1 ratio. The result is a near-perfect swap between stable assets with minimal slippage.

In this project, you'll build a stable swap from scratch. We'll implement the core math, including Newton's method for computing the invariant, and you'll understand what the amplification parameter actually does. But we're not stopping at the basics. We'll also implement dynamic fees that protect liquidity providers from arbitrage, a Pyth oracle for real-time price feeds, and depeg detection that automatically pauses the pool when a stablecoin loses its peg. By the end of this project, you'll have hands-on experience with advanced DeFi mechanics, complex on-chain math, and the security patterns that separate basic projects from protocols that can handle real value. So, let's dive in.

So, we're going to open up our code for the stable swap AMM. First, we're going to walk through all of the code we have, because there is quite a lot; this project is a bit more in-depth. We'll open everything in our source file under the program. You can see there's an instructions folder, and inside it are all of the public instructions that exist for the AMM. There are several other files too. There's a constants file, holding all of the constants used throughout the project, specifically for math or different tokens; a dynamic fees file, which is how we calculate the dynamic fees; all of the errors for the program; the lib.rs, which is where all of your instructions live; a math file; an oracle file; and then state, which is where all of the custom state for the accounts we're creating in this project lives. Before we go through the actual instructions, there are two main concepts I want to cover in a bit of depth first.

First, we're going to click on this math file. It's quite long, with a lot of different calculations, so we're just going to understand it at a high level; if you want to dive in deeper, all of it is documented with some pretty in-depth notes explaining where the math came from. At a high level, there are two main functions. One is compute D. Imagine the pool has a health score called D that represents the total value. When you add or remove liquidity, D changes, but when you swap, D must stay exactly the same. That's the rule that makes the math work. The problem is that the equation for D is too complicated to solve with simple algebra, so we use a technique called Newton's method. Essentially, it makes an educated guess and refines it until we get close enough. It's like playing hot and cold, where each guess gets you a little bit closer to the answer. All of that is explained in the calculations for compute D.

The next one is compute Y, for when someone wants to swap token A for token B. We know exactly how much token A they're trading into the pool; the question is how much token B they should get back. Since swaps must keep D constant, we work backwards: if D stays the same and token A increases by this amount, then what must token B be? The difference between the old and new token B balance is what the user receives. Again, we use Newton's method to find the answer, because the math is too complex for a direct formula. All of that is explained in the documentation of the math.rs file.

Now, the next concept we're introducing in this project is oracles. Oracles are very common in DeFi projects, and I believe this is the first project in the bootcamp where an oracle is being introduced, so we're going to go over at a high level what an oracle is and why we need it here. A smart contract on Solana can only see what's happening inside Solana, like account balances, other programs, and transaction data. It has no way to look at the outside world. It can't just Google what the price of USDC is right now or check a news website. But sometimes contracts do need real-world information. In our case, we need to know: is USDC actually worth $1 on exchanges like Coinbase or Binance? The solution is integrating oracles into the smart contract. An oracle is a trusted service that brings external data onto the blockchain. Think of it like a courier who goes outside, checks the real price of USDC across major exchanges, and writes that price into a special Solana account that our program can then read. For this project, we're using the Pyth network.

There are multiple other oracles on Solana, but here we're just using Pyth. Pyth is a decentralized oracle that aggregates prices from many professional trading firms. Every few hundred milliseconds, Pyth updates price accounts on Solana with the latest market prices. This project needs oracles because we need to know when a stablecoin is at the correct price, or if it has depegged. Our pool only knows its internal price, meaning how much USDC versus USDT is in its vaults. But what if USDC crashes to 90 cents in the real market while our pool still thinks it's worth $1? That creates an arbitrage opportunity: an unrealistic market for USDC, handing people USDC at a price it's not actually valued at. By checking Pyth oracle prices, we can detect when a stablecoin has depegged from $1. When that happens, we'll pause the pool or charge emergency fees to protect the liquidity from being exploited. We have all of those checks in place within this project.

The key takeaway is that oracles are a bridge between blockchain isolation and real-world data. Without them, DeFi protocols would basically be flying blind. That's why we need oracles integrated into this project. Now let's move on to going through all of the functions in our smart contract.

Looking at the instructions, we have initialize pool, which creates a new two-token pool in the AMM. We have modify liquidity, which handles both adding and removing liquidity: adding liquidity is depositing tokens to receive LP tokens in return, and removing liquidity is burning LP tokens to receive tokens back. Then we have swap, which trades token A for token B or vice versa. And then we have check depeg, a permissionless oracle check that pauses the pool if an asset has depegged. When I say depegged, I mean it's no longer at the stable value it's assumed to be. Those are all of the public instructions.

Now we're going to start with the state, and once we review all of the state stored in the program, we'll go back and do a deep dive into the instructions. Going back to the state file, I always like to start with the state so we understand what data we're working with, and how we take that data and use it in our instruction logic. Here you can see we have two accounts.

Let's talk about them. The first one is our oracle config account. What this account holds is all of the configuration needed to query a specific oracle from the Pyth network. You can see it has oracle A and oracle B and their corresponding public keys, and those public keys are how you identify the specific Pyth price feed for a specific token. For example, say I have a USDC/USDT pair in my AMM. I'll have the public key for the USDC price feed as oracle A and the public key for the USDT price feed as oracle B, and that lets us query the real-time price feed for both of those assets. Those public keys need to be defined in the config because they're held in our oracle config account, so that throughout all of the instructions we can pull the exact public key we're going to use from Pyth.

The next field is the max depeg bps, which is basically the maximum allowed deviation from $1. This is how far we'll allow our stablecoin to vary, and if it varies past this allowance, the pool gets paused. The next field is the emergency fee bps, which is a fee to charge when the pool is in a depeg. Both of these fields are really just precautions for security concerns in case an asset does depeg from its stable value.

The last field, enabled, is just a boolean, true or false, marking whether the pool has been paused for a depeg or not. If the oracle data comes back and the deviation surpasses the depeg allowance, the boolean gets updated to pause the pool. That's everything for the oracle account. The function you see here just calculates the length of the oracle config, which determines how much space it takes up on chain.

The next account is the pool. What is the pool account? It stores all of the configuration for a stable swap pool, and it's a PDA seeded by the LP mint address. The pool is a PDA that owns the token vaults, and because it's a PDA, the program can sign transfers from those vaults. Now let's go through all of the fields.

We have the admin, which is just the public key of the person able to modify the parameters for that pool. Typically in production this would be a multisig or a governance program. Then we have the LP mint, which is the public key for the LP token. If I provide liquidity to a pool, I receive LP tokens in return. Later, when I want to withdraw the liquidity I provided, I present those LP tokens to show how much I provided; they get burned, and I receive back the tokens I originally deposited. It's really just a way to track how many shares of the pool I own, and how I can get them back in the future when I'm ready to withdraw my liquidity.

The next field is the amplification, the amplification coefficient we talked about in the math section. A controls how the swap curve behaves. A higher A gives a flatter curve, so swaps stay closer to 1:1 while the pool is balanced. A lower A gives a more curved shape, so it behaves more like the Uniswap protocol. For typical values: a pair like USDC/USDT would use about 100 to 2,000, while something like staked ETH and ETH would use about 50 to 100, since that pair is a little less tightly correlated than two 1:1 USD stablecoins.

The next field is the fee bps, your swap fee in basis points. Any time you make a swap on an AMM, a fee is accrued, and this is where we set what that fee is. All of this really is just the config you're setting for how you want your pool to be built. The next one is the token mints: an array of public keys holding the mints of the tokens you want in the pool you're creating. And again, each entry is the address of a mint account, which holds the global information for that token. The next one is the bump; as we know, you save the PDA bump seed in your account state. Then we're also saving the oracle config. Once the oracle config is initialized, we save it into the pool, because when the pool is running it needs to query the oracle, so we need the public key of each token's price feed to get the real-time price. And the last one is just a boolean for whether the pool is paused, to mark it if an asset does depeg. After that come some additional functions on the account: calculating the space, the mints, and finding the index of a mint.

So that's everything for the state. Now that we understand the state, we're going to deep dive into a few key points in the code. You have all the code, it's all documented, and a lot of it builds on the previous projects in the bootcamp, but there are a few specifics I do want to talk about. The first is the key validations that happen within this project. When we initialize a pool, a few validations take place, so I'll open up the initialize pool instruction and we're just going to search for them. We have this require for the amplification. This is just making sure your A value is reasonable: it must be positive, not too large, and not negative. So this validates it and makes sure all of our math checks out.

The next one is our fee bps. This is just a sanity check to make sure a fee doesn't exceed 100%. Maybe someone accidentally put in the wrong decimal value; this makes sure that doesn't happen, and returns an invalid-fee error if the fee is above 100%.

One other validation check to go over is validating that the vaults are correct, so let's find that one. Here we have the vault validation: the vault must be the correct associated token account for the owner of the pool and the mint of the token. This is critical for security. We want to make sure that the token account we accept as the vault has the correct owner, so it has to be an associated token address. The reason is that ATAs are deterministic: for a specific owner and a specific mint, there is only one valid ATA address. You can see there's even a note in the code explaining that if we accepted any token account as the vault, an attacker could pass their own token account, and when users swap, the tokens could go to the attacker instead of the pool. So it's very important that we're using an ATA here, because it's a deterministic address.

The next thing we're going to look at is the swap instruction, which is the core flow of your AMM. Here we get the current reserves of the vaults, then calculate the dynamic fee, which is the fee we want to charge when making a swap from the vaults. Then it does all of the swap math we talked about at a high level, which comes from the math.rs file. Lastly, it does a slippage check, which just makes sure the amount out is greater than or equal to the minimum amount out. That's the core logic for how our AMM performs a stable swap. The next thing I want to talk about is that we have PDAs signing for the vault transfers, so I want to find that function to show you.

If we search here for token transfer, I believe it'll come up. Here we go. You can see where we're transferring out of the vault. There are two transfers that happen in a swap. There's the transfer in, which is just a regular CPI, because anyone is able to transfer tokens into the vault; I could connect my wallet and deposit. But when I want to get tokens out of the vaults, there's an additional security check. This may sound familiar, because it's exactly how we designed our escrow program earlier. If I go to the transfer out, when we're calling the CPI, instead of just new, it's new_with_signer. It's new_with_signer because the vault's authority is what can transfer tokens out of the pool, and that authority is a PDA, so the transfer has to be signed with the PDA's seeds to move tokens out of the vault to the user requesting them.

This goes back to the same logic we talked about in the escrow program, where the vault has a PDA signer. This way, there's no attack vulnerability from a private key being able to sign this transaction; only the code itself can sign, with the signer seeds of the PDA. Here you can see we're making a CPI call with a signer: we specify the signer seeds, and those signer seeds are exactly the PDA seeds for this vault. The pool owns the vault, but it can't hold a private key, so we use PDA seeds to let the program sign on behalf of the pool.

Now, if we go back to this state file — sorry, not the state, the initialize pool file. Here we go; we're in the initialize pool file. If I go to the derive accounts macro, this is where all of the accounts passed in for the pool are specified. You can see the pool is defined with these seeds, and those are the seeds for the pool's PDA. And if you go back to our swap — the token transfer and the transfer with signer — you can see these signer seeds: the transfer out function takes your signer seeds as an input parameter, and those have to match up with the PDA signer seeds of the pool we just defined.

Okay. Now, one more feature to go over here is removing liquidity from the pool. I'll go back to modify liquidity, and I'm just going to search for this calculate withdrawal amounts function, and we'll talk about what it does.

This is the function for when you're withdrawing from the pool, and what it calculates is exactly what will be withdrawn. This function specifies that you always get back proportional amounts of both tokens, so there are no single-sided withdrawals allowed in this implementation of the AMM. If I deposit USDC and USDT, then when I go to withdraw and burn my LP tokens, I receive proportional amounts of both USDC and USDT back in my wallet.
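The proportional rule above is easy to sketch. This is an illustrative stand-in, not the project's actual function — the name and signature are my assumptions:

```typescript
// Burning a share of the LP supply returns the same share of both
// reserves; no single-sided withdrawals in this design.
function calculateWithdrawalAmounts(
  lpBurned: bigint,
  lpSupply: bigint,
  reserveA: bigint,
  reserveB: bigint,
): { amountA: bigint; amountB: bigint } {
  if (lpBurned > lpSupply) throw new Error("burn exceeds supply");
  // floor division favors the pool, so rounding dust stays with LPs
  return {
    amountA: (reserveA * lpBurned) / lpSupply,
    amountB: (reserveB * lpBurned) / lpSupply,
  };
}
```

So burning 10% of the LP supply returns 10% of each reserve, whatever the current balance of the pool is.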

Now we're going to talk about three advanced features that have been implemented in this AMM, and then we'll be able to wrap up our overview of this project. The first feature is dynamic fees — you can see there's an entire file dedicated to them. So why do we implement dynamic fees?

This was implemented to deter toxic arbitrage on the pool. An arbitrageur knows the true price, and they'll extract value from LPs; this happens when an asset depegs, a pool is imbalanced, or a large swap occurs. Having dynamic fees makes toxic arbitrage very expensive, and it compensates LPs for informed trades against them. So this just helps make a healthier AMM.
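One common way to implement this — and this is a hedged sketch with made-up thresholds, not the project's actual fee curve — is to ramp the fee up as the pool drifts away from balance:

```typescript
// Illustrative dynamic fee: base fee when balanced, scaling linearly
// with pool imbalance up to a cap. All constants are assumptions.
function dynamicFeeBps(reserveA: bigint, reserveB: bigint): bigint {
  const BASE_FEE_BPS = 4n;   // ~0.04% when balanced
  const MAX_FEE_BPS = 100n;  // 1% cap under heavy imbalance
  const total = reserveA + reserveB;
  // imbalance in bps: 0 when 50/50, up to 10_000 when one side is empty
  const diff = reserveA > reserveB ? reserveA - reserveB : reserveB - reserveA;
  const imbalanceBps = (diff * 10_000n) / total;
  // linearly ramp the fee with imbalance
  const fee = BASE_FEE_BPS + (imbalanceBps * (MAX_FEE_BPS - BASE_FEE_BPS)) / 10_000n;
  return fee > MAX_FEE_BPS ? MAX_FEE_BPS : fee;
}
```

The design intuition: trades that push the pool further out of balance are the ones arbitrageurs make when they know the true price, so those trades pay the most.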

Now, the next feature we're going to talk about is depeg protection. There's a function that runs when you're checking oracles, called check depeg. So let's find that.

And here's the function. What it does is check whether the asset has depegged from the stable price it should hold. If it has, it pauses the pool to stop trading, which again deters toxic arbitrage in the AMM. Anyone can call this. So if USDC drops to 90 cents, the pool will pause and start charging 5% in fees, and this protects LPs from having the pool drained.
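The check itself reduces to comparing an oracle price against the peg. Here's a minimal sketch under assumed parameters (a 6-decimal price scale and a 10% threshold — the real project's numbers may differ):

```typescript
// Illustrative depeg check: pause the pool when the oracle price
// deviates from $1.00 by more than a threshold.
const PEG = 1_000_000n;             // $1.00 with 6 decimals
const DEPEG_THRESHOLD_BPS = 1_000n; // 10% deviation (assumption)

function checkDepeg(oraclePrice: bigint): { paused: boolean } {
  const diff = oraclePrice > PEG ? oraclePrice - PEG : PEG - oraclePrice;
  const deviationBps = (diff * 10_000n) / PEG;
  // anyone can call this; if depegged, trading pauses to protect LPs
  return { paused: deviationBps >= DEPEG_THRESHOLD_BPS };
}
```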

The next topic we're going to talk about is the remaining accounts pattern. If I open the swap.rs file and go to the derive accounts macro, this is where you specify all of the accounts that are passed in for an instruction that the instruction is going to interact with. Anchor requires fixed account counts at compile time, but we want flexibility for a variable number of token accounts. Here you can see that for swap we only have the pool, the user, and the token program — we're missing the vaults and the user ATAs. Those come in as remaining accounts: in your instruction, you can read the remaining accounts from the context, and that's what pulls in the vaults and user ATAs.

You can see that when the swap function is called, we have this remaining variable, and it's pulling the remaining accounts from your context. For a two-token pool, the remaining accounts are your token A vault, your token B vault, the user's input token account, and the user's output token account. All of that comes in through Anchor's remaining accounts.

So this pattern lets us validate accounts at runtime while keeping the instruction flexible for multi-token pools.
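Conceptually, the program receives an ordered array and destructures it by position, validating at runtime. A hypothetical sketch (the interface and ordering here are illustrative, not the project's exact types):

```typescript
// The fixed context carries pool/user/token program; the variable
// accounts arrive as an ordered array we destructure at runtime.
interface AccountInfo { pubkey: string; }

function parseSwapRemainingAccounts(remaining: AccountInfo[]): {
  vaultA: AccountInfo; vaultB: AccountInfo;
  userIn: AccountInfo; userOut: AccountInfo;
} {
  // for a two-token pool we expect exactly four accounts, in this order
  if (remaining.length !== 4) throw new Error("InvalidRemainingAccounts");
  const [vaultA, vaultB, userIn, userOut] = remaining;
  return { vaultA, vaultB, userIn, userOut };
}
```

Because the order is a convention rather than a compile-time guarantee, each parsed account still needs its owner, mint, and address checked before use.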

And that is all of the features I wanted to go over for this project. Throughout the project, everything is documented with an in-depth explanation, and a lot of it refers back to previous projects we've worked on throughout the boot camp. We've just covered all of the new logic introduced specifically for the stable swap AMM project. That's everything here — check out the project, build it, and run your tests.

GM, this is Mike from the Solana Foundation. I'm going to show you how to build a paid app from scratch using the Solana x402 template. In this repo, we will focus on client-side development. If you are new to the x402 protocol, I highly recommend checking out Yunas Hans's video first. Okay, just kidding. But seriously, it's a great intro to x402 and AI agent payments.

So why do we even need an AI agent payment standard? Imagine this: you see a really interesting article in your feed. You click it and — boom — it asks you to pay for a 12-month subscription just to read one article. Annoying, right? Now think about bots and AI agents. They hit the same paywalls when they try to access data or services on the web. That's where x402 comes in. x402 is an open payment protocol developed by Coinbase. It lets websites and AI agents charge per request using stablecoins. Here's how it works in simple terms: your AI agent asks a server for something; the server replies with a price; you pay, and you get what you asked for. No signups, no subscriptions, no long forms. Even better, x402 is built directly into normal web traffic — HTTP — so it works with the internet the way we already use it today. In theory, this means you could pay just a fraction of a cent to read one article instead of buying yet another full subscription.

Now that you have a basic idea of what x402 is, if you're interested in what the x402 ecosystem looks like, jump into the ecosystem page.

So how does it work? x402 is actually part of the HTTP protocol, and an x402 payment is just a plain HTTP request. First, the client hits your URL, and you reply with a 402 Payment Required and a JSON payment requirements object. The client pays and retries with an X-PAYMENT header; your server verifies it, and the cycle ends with a 200 OK. No accounts, nothing else.

Also take a look at the Solana developer guide, Intro to x402. You can find a lot more details about the x402 ecosystem, the SDKs, the explorer, and some real examples in that guide. But today I just want to show you one thing: how insanely easy it is to build an app with native x402 support on Solana. Here's what I'm going to build: users will pay 1 cent in USDC, and after they pay, they can access an AI fortune-telling service powered by ChatGPT. They just enter their birth information, and the app gives them an AI-generated fortune prediction. That's it.
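The 402 request cycle described above can be sketched with mock request/response objects — no real network, library, or the exact x402 schema; the payment-requirements fields here are illustrative:

```typescript
// Minimal sketch of the x402 cycle: first request gets 402 plus a
// payment requirements object; the retry with X-PAYMENT gets 200.
interface Req { headers: Record<string, string>; }
interface Res { status: number; body: unknown; }

function handle(req: Req): Res {
  const payment = req.headers["X-PAYMENT"];
  if (!payment) {
    // first hit: reply 402 with a JSON payment-requirements object
    return {
      status: 402,
      body: { asset: "USDC", network: "solana-devnet", amount: "0.01" },
    };
  }
  // retried with X-PAYMENT: a real server would verify the payment
  // via the facilitator before serving the content
  return { status: 200, body: { content: "your article" } };
}
```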

Super simple. To build this, we're going to use the Solana developer templates. You can use them directly through create-solana-dapp. If you're new to the Solana developer templates, I really recommend playing with them first. With these templates plus AI, you can save a ton of time and try out ideas really fast. From these, we're going to use the Solana x402 Next.js template, because it gives us a demo app that already has x402 payment support built in.

All right. In this project, we will use the Solana x402 template. It is a simple Next.js starter template with x402 payment protocol integration for Solana. It demonstrates a streamlined integration of the x402 payment protocol using a library called x402-next, making it very easy to add cryptocurrency payment gates to Next.js apps.

All right, let's first take a look at the template. This is the template we're going to use for this project, and you can see it's very clear: we've got what x402 is, its features, how to get started, how it works, and the project structure. Since I've already installed all the prerequisites, we can simply copy the install command and jump in. You'll see an error, but no need to worry about it — we can just check that the project was already created. Okay, it's good. Let's go into it and manually install the dependencies. It will take a bit of time.

Great. Now we've got everything installed. Let's take a look at the code.

First, the template structure: we've got the app directory with its home page, a layout, and the styles. There's also a components directory. But arguably the most important part here is the middleware. You can see we import the payment middleware function from the x402-next library. As you can see, we've got a route for each payment service; here we configure the payment routes using this payment middleware from the x402-next library.

And please note that the original matcher here matches all paths, which means this payment middleware applies to every path except the static files.

This is where we set up the routes covered by the middleware. Now that we've looked at the overall structure, we also need to set up the environment. We can create an env file — .env.local — and set the Solana receiver address, and also the network we're going to use. Here we're going to use Solana devnet for testing purposes. You can also set your facilitator URL; right now we're using the official x402 facilitator. And you need to set up your CDP API key, which you can find in the Coinbase Developer Platform portal. I'm going to use mine, but it takes about a minute to set up. Just give me a minute.
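For orientation, an .env.local for this kind of setup might look roughly like the following. The variable names here are my guesses from the walkthrough, not the template's exact keys, so check the template's README for the real names:

```shell
# Illustrative only — confirm the exact variable names in the template README.
SOLANA_RECEIVER_ADDRESS=YourReceiverWalletAddressHere   # where payments land
SOLANA_NETWORK=devnet                                   # devnet for testing
FACILITATOR_URL=https://x402.org/facilitator            # public x402 facilitator
CDP_API_KEY_ID=...                                      # from the Coinbase Developer Platform portal
CDP_API_KEY_SECRET=...
```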

Great. It's already set up. Let's give it a try. We can run npm run dev directly after setting up the environment. Let's take a look at how it works — it needs a bit of time to load.

We're not going to use MetaMask here, though feel free to — the default wallet is actually MetaMask. My preferred wallet is Phantom, but you can choose your favorite. As you can see, the demo works just as expected. You can unlock your exclusive content, or any content, here: you just pay from your wallet — right now on devnet, so it's a devnet token. So this is what the template looks like.

Now let's add our own feature and turn this template into a fortune-telling app. I've already finished the code, so you can just check the code link in the description, but you can also try to vibe-code it yourself — it's actually very easy to achieve the same features. Let's go through the finished code.

Let's take a look at what we create here. Go through the app API and the route: here we create a route with a POST method to receive the birth information we need from the user, and in this POST handler we call OpenAI's GPT-4o mini model to act as the fortune teller and return the fortune results as JSON.

So that's basically the API. We also create two pages: a fortune page and a payment page. The payment page triggers a payment when the user visits it, and after a successful payment through our x402 paywall, it shows the unlocked page and provides a button for the user to open the fortune page. Once the user has paid and opened it, we can see the fortune page here.

On that page we set up state — the birth info — which accepts the user's input, and a fortune state to hold the result returned from ChatGPT. You can see it's a very simple front end. We also need to fix up the main page a little: we add another button, called Fortune Teller.

All right, so this is what the finished code looks like. Let's try it out. We again just use npm run dev, then go back to the app. As you can see, we've got a new Fortune Teller button. We can just click it and pay now. Okay, the payment succeeded. Let's open the fortune teller. Here you can input your birth date — I'll just use a placeholder, but let's see.

Ah, as you can see, you get the fortune result as JSON from the ChatGPT model. It shows something like "a gaze into the cosmic energy surrounding your birthday," and then your fortune. It's actually a tradition in Asia, especially China — it's fun, and a lot of people actually believe in it.

All right, this is the app we built today. Let's go through the architecture together. We changed the button on the main page so it goes to the payment page. For that payment page to be gated, I also need to register its path in the middleware, so that x402 payment is routed to it. Once the request hits the payment middleware, it triggers the payment and then serves the content from the payment page. Then, when the user opens the fortune teller, it renders the fortune page, and on that page we make the POST request to OpenAI and get the fortune back from ChatGPT.

All right. Today we learned how to use x402 to build a payment app on Solana with its native x402 support; we used the x402-next library in our project; and we learned how to integrate third-party APIs — we used OpenAI's ChatGPT model. Thanks for watching. If you want to learn more about x402 — how it works and what it is on Solana — feel free to check out the x402 page on solana.com. Looking forward to seeing you cook with x402 on Solana. Cheers.

GM everyone, this is Mike from the Solana Foundation. One of the most common topics we hear about these days is real-world assets. Today we're going to explore what real-world assets are and why they matter, and we'll also build an RWA app on Solana together. Real-world asset tokenization is a process where an asset is converted into a programmable token. Starting from stablecoins, more and more assets are gradually getting tokenized on chain. Solana now has over $1 billion in daily RWA trading volume. Solana aims to be the internet capital market, where a distributed internet ledger hosts all of finance within a unified liquidity layer. On the sell side, issuers can get the most efficient price discovery with the deepest liquidity. On the buy side, asset ownership can be more distributed: investors get open access to all assets, anytime and anywhere, and enjoy the most capital efficiency.

Okay, let's get started on the building part. We will build a demo app that lets users mint a real-world asset. The app will be a labubu mystery box on chain: it allows users to mint a labubu as a non-fungible token on Solana. Please note that the labubu we mention here is just used as a demo example for educational purposes. My favorite labubu collection series is Lazy Yoga. In Lazy Yoga, there are 11 types: 10 common ones and one rare hidden one. In our example, we assume we have 12 labubus of each common type and six labubus of the rare type, so in total we've got 126 labubus in our inventory. We also need to create a token mint for each labubu type. We'll use Token-2022 as the token standard and use Codama to generate client code for building the app. I've already finished the starter code with a mock front end and a finished program; you can find the repo in the linked description. The starter code is based on the create-solana-dapp templates, with Next.js as the front end and Anchor and Solana Kit as the Solana tech stack. You can find the original templates in the Solana developer templates.

Let's take a look at the starter code.

All right, first let's look at the overall code structure. It's actually a very basic Anchor project. You can see we've got the program with its src directory, a test here, and a Next.js app — you can see the front end with our components and a basic page. Let's go to our program file. First, the overall structure: we've got the necessary imports, and we've got three core instructions — initialize collection, create labubu mint, and the third one, mint random.

We've also got the data structures here. A little further down there's also solid account validation logic and the custom error definitions. First, let's look at the three core instructions. The first one is initialize collection. In this instruction, we initialize an inventory array called collection for the labubu series, and the collection account is a PDA that will be used later.

The second instruction is create labubu mint. Here we create a Token-2022 mint for each labubu type: first we validate that the ID is valid, then we calculate the rent, create the mint account using the system program (which triggers a CPI, a cross-program invocation), and finally initialize the mint through the token program's initialize_mint2. We define it as an NFT because it's indivisible.

Third, let's look at the mint random instruction, where we mint a labubu NFT to a user. First we validate that the ID is valid and that there's inventory left. We decrement the inventory by one, then we have the PDA sign — we use the collection PDA as the mint authority here — and finally we update the state.

Overall, this is a program built on CPIs: first our program makes a cross-program invocation to the system program to create the account, and then we make CPI calls to the token program — you can see we use initialize_mint2 and also mint_to for the user.

All right, now that we have a basic understanding of the program, let's go through the code generation part. We already have our program ready, so we need to use anchor build to build it. Let's go to the Anchor directory first and run anchor build. This process may need a bit of time to rebuild the program, so we wait for a while.

All right, now it's finished. So what happens during anchor build? It compiles the Rust code we just saw into the binary program, and in the process it also generates the IDL file under target/idl — you can see there's a JSON file there. This one is very important, because you can interact with any on-chain program through its IDL, and you can also find the IDL through the Solana Explorer. Let's look at the IDL file structure: you can see the address, metadata, instructions, data structures, and lots of other stuff.

In our next step, we'll show how to interact with this on-chain program by generating a client from its IDL JSON file. In this case, we will use Codama to generate a TypeScript client. Codama is a tool with JavaScript/TypeScript and Rust renderers for generating client code that interacts with an Anchor program. In our case, we'll generate the TypeScript code.

We'll use the Codama CLI, which needs its configuration set up in codama.json under the root directory. Let's take a look: you can see we've got the IDL path, and a js script that defines the JavaScript code generator — you can see we use the JS renderer here — and we also define the output directory.

All right, let's run the generation command. It's actually very simple: go to the root and run the codegen script with npm run. Bingo — it's generated. You can see that under the app there's a new directory called generated. Looking at the overall structure, it has the programs, the instructions, the accounts, the errors we customized, and some shared types. What we'll interact with is mainly these instructions.

All right, our next step is to integrate the front end. Let's take a look at what our front end looks like. There are two important pages here. The first is basically the same as the Solana template, if you've already tried it. The other is where we customize the front-end logic to interact with the client code we just generated — the labubu card component. In our front end, which is the client, we use Next.js and React as the stack, and we use Solana Kit as the client for interacting with the blockchain.

All right, let's look at the core part of our mystery box component. As you can see, the current starter code uses a mock demo for the blind box. We can run npm run dev to take a look. It's started. You can see this front-end code was already generated — you could easily generate something like it in less than five minutes with AI tooling. When you click the open mystery box button, it generates a labubu randomly. So that's the basic feature of our front end.

All right, let's move back and add the real token mint logic, to actually let users mint a labubu on chain by interacting with the Codama client we just generated.

Here we need to import the mint function from the generated code — this one here. I used Copilot to help me generate code, but sometimes it's not that accurate. This function is the one we just generated. Let's continue. In the second part, we also need a list of the mint addresses for the labubu series, which we get after we initialize the token mints. We can define them in a constant, and with these addresses a user will mint a labubu directly from the corresponding mint address.

All right, let's finish the logic. We need to define the sending logic in our front end. We use the kit's send-transaction helper here, and we define a signature state, because we want to show it in the front end. Then we add our logic.

The basic logic just uses the total weight of our inventory — in total we've got 126 labubus, 120 common ones and six rare ones — and picks one of them randomly. Continuing: we get the corresponding mint address first, then create the object used as the user signer. Under this user signer we have three parameters: the user's wallet address, a transaction signing function, and a message signing function. Then we create the instruction, and in this step we really use the generated code: we pass the user signer object we just defined, the mint address, the mint PDA address, and the selected labubu ID. Finally, we send the transaction to the Solana blockchain. We can also grab the signature directly so we can show it in the front end — here we send with our instructions and set the signature state when finished. Optionally, we can also set up an alert to say whether it succeeded.
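The weighted random pick described above can be sketched like this. This is an illustrative stand-in, not the project's actual code — the function name and shape are mine:

```typescript
// Weighted random selection over the inventory: 10 common types with
// 12 each (120 total) plus one rare hidden type with 6, 126 in all.
// Draw an index in [0, totalWeight) and walk the inventory counts.
function pickLabubuId(inventory: number[], roll: number): number {
  const total = inventory.reduce((a, b) => a + b, 0);
  if (roll < 0 || roll >= total) throw new Error("roll out of range");
  let acc = 0;
  for (let id = 0; id < inventory.length; id++) {
    acc += inventory[id];
    if (roll < acc) return id; // types with more stock are hit more often
  }
  throw new Error("unreachable");
}

// 10 common types of 12 each, plus 6 of the rare hidden type (id 10)
const inventory = [...Array(10).fill(12), 6];
```

In practice you'd derive `roll` from a random source client-side, while the program itself still validates the chosen ID and decrements the on-chain inventory.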

All right, now we've finished the fun part. But remember, we also need a script to get the mint addresses, because right now they're mocked: our collection account doesn't exist yet, and our 11 mints don't exist either. It's a bit tricky here, because we handle this in the tests. Let's look at the test file. Although it's called a test, the file is actually a deployment initialization script. The process is: after anchor build, we also need to deploy, and then we can run the initialization, which is the test. The test file has two goals: the first is to initialize the collection, and the second is to create the 11 mints, one for each labubu type.

Taking a look at the test file, it's got two tests. The first is initialize collection; the second is create the 11 mints. The first test calls the initialize collection instruction, creates the collection PDA account, and then reads and verifies the account data. The second loops 11 times to create a mint for each labubu type and prints each mint address, which we need in our front end. Let's give it a try.

First we need to deploy it. We can just deploy it to devnet. All right, as you can see, we've got a program ID and a signature — it deployed successfully. Here it's a bit tricky: usually we would run anchor test, but here I defined an initialize script in Anchor.toml — you can see the initialize actually runs the test there.

All right, I switched to another way to initialize it, because something went wrong with my yarn setup. Now you can see it's already initialized, and there are 11 addresses here. Remember, we mocked the mint addresses earlier, so we need to copy each address back — all 11 of them.

All right, now we've got the collection initialized and the front end finished. We can finally try out what the current app is like. Let's run it.

When I click here, it asks me to approve. Bingo — you can see it successfully minted a labubu, with a transaction here. And this is the mystery box I just minted. In our case, we just use labubu for educational purposes — it's not for commercial use.

You can also check your wallet to see that there's actually a labubu there. So where does the data come from? You can set up the metadata in the metadata file. Let's look at the current file: here in the metadata I set up the labubu with its ID, name, description, and attributes. You could also set this up on the token itself with the Token-2022 metadata extension.

All right, let's summarize. In this project, we went through the whole process: writing the program, running anchor build to get the IDL, using that IDL to generate the client code with Codama, importing the generated TypeScript code into the front end, and adding our core logic to allow users to mint a labubu.

That's all for the demo app we built, which lets users mint RWAs as tokens on Solana through Token-2022 and opens a mystery box to mint those tokens themselves. Solana's Token-2022 is far more than this: you can explore the token extensions on Solana to add more features to your RWA, including KYC, freezing and seizing, confidential transfers, and much more. Looking forward to seeing what you build with RWAs on Solana. Cheers.

Building secure applications on Solana is critical but surprisingly difficult. Today we'll cover some common vulnerabilities and how to prevent them.

First, you should never trust incoming accounts. This list is by no means exhaustive, but there's a laundry list of things you want to verify. Is the account owned by the expected program? Is the signer set where you expect it to be? Are the PDAs derived correctly? Is the account already initialized? Did you check for reinitialization attacks? And, more subtly, is the account a duplicate of another one? Some of the biggest hacks on Solana — Wormhole, Crema, and more recently Loopscale — happened because of unchecked or incorrect accounts. Anchor's account types provide these checks for you, but you should always be careful and verify the checks yourself if you're using unchecked accounts or writing your own program directly against the Solana SDK.

Second, protect yourself against arithmetic errors. By default, Rust won't save you here, and a single overflow or underflow can easily lead to loss of funds. Use checked math or enable overflow checks. A surprisingly tricky edge case here is truncation.
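Both pitfalls can be shown in a few lines of plain Rust; this is a sketch, not the program's actual error handling:

```rust
fn main() {
    // Overflow: checked_add makes the failure explicit instead of wrapping.
    let max: u64 = u64::MAX;
    assert_eq!(max.checked_add(1), None);

    // Truncation: `as` casts are NOT covered by overflow-checks.
    let big: u64 = 300;
    let truncated = big as u8; // silently becomes 300 % 256 = 44
    assert_eq!(truncated, 44);

    // Prefer a fallible conversion that surfaces the error.
    assert!(u8::try_from(big).is_err());
    println!("ok");
}
```

In an on-chain program you would bubble these cases up as errors rather than asserting.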

Even with overflow checks enabled, truncation is unchecked by default. This has led to quite a few high-profile security issues.

Third, you should properly validate your program-derived addresses. There are a couple of potential issues here. First, the seeds of a PDA are concatenated together, so if you have variable-length seeds, they can collide in unexpected ways with other PDAs. This might cause you to allocate an account, for example, at an address where you expected another one.
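The collision risk comes from the seeds being concatenated before hashing. A minimal sketch of just the concatenation step (the hashing itself is omitted):

```rust
// Concatenate seeds the way PDA derivation does before hashing.
fn concat_seeds(seeds: &[&[u8]]) -> Vec<u8> {
    seeds.concat()
}

fn main() {
    // Two *different* seed lists flatten to identical bytes,
    // so they would derive the same PDA.
    let a = concat_seeds(&[b"user" as &[u8], b"1", b"23"]);
    let b = concat_seeds(&[b"user" as &[u8], b"12", b"3"]);
    assert_eq!(a, b);
    println!("collision");
}
```

Fixed-length seeds, or an explicit length prefix, remove this ambiguity.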

Another potential issue is using non-canonical PDAs. Not everyone knows this, but there are actually multiple valid addresses associated with each set of seeds, because you can pass in different bumps. Anchor by default will handle this for you, but if you're hand-rolling your own program, you should be careful.

Last but not least, make sure your program's authority is properly secured under a multisig like Squads. Many times, smart contract security isn't the only thing that matters. We always recommend using a multisig with independent signers and hardware wallets.

These are just a few of the more common security vulnerabilities you should be aware of when building on Solana. There are many more, and you should always get your code audited before any large amount of funds is at stake in a production environment. Before you deploy on Solana, make sure your application is secure.

So now that we've gone over building applications on Solana, let's do a quick overview of indexing on Solana: what it is and what the best practices are. Indexing on Solana is just a way to extract, process, and store on-chain data from Solana. It enables real-time insights and analytics, and a way to monitor apps that are live on chain. Here are some resources: indexing will look different for each application you build, and these are the resources and best practices to get started. There are some Helius blog posts about how to index Solana data; definitely start there to fully understand what indexing on Solana looks like. One of the challenges with indexing on Solana is achieving high throughput, low latency, and data consistency across distributed nodes. So how do we do that effectively?

Well, one way is using a Geyser plugin. That's the fastest way to fetch updates for state transitions and events; it streams transactions as they're processed, in real time. You can take a look at Yellowstone gRPC, which is pretty simple to use: you run a validator with this Geyser plugin, and it gives you a faster way to fetch state transitions. Yellowstone is just a Geyser-based gRPC interface for Solana, and it helps you index a lot faster and more efficiently.

The next thing we're going to talk about is Carbon. Carbon is an indexing framework on Solana that helps you source account and instruction updates, decode them, and process the results, which makes it very easy to index on Solana. It's built to handle different types of updates and route them through the appropriate processors. The basics of how Carbon works: you choose a built-in data source from the set of crates available within Carbon; then you generate decoder structs for your program from your IDL, which you can do with the Carbon CLI, so they're very easy to generate; then you implement processor structs to handle the account or instruction updates you're looking for. Once you assemble each part into a Carbon pipeline, you have an indexer running for your program. Carbon also has very in-depth docs on how to set up the framework, how to use the CLI, how to implement processors, the several different crates you can work with, and the available program decoders. All of these are easy to access and help you get the data you're looking for.

So that is best practices for indexing on Solana.

Since late 2024, prediction markets have entered the mainstream discussion, the most famous ones being Kalshi and Polymarket. Prediction market platforms play uniquely to the strengths of the Solana network, as they benefit immensely from low fees and fast transaction speeds. If you're placing a $5 position, you really cannot afford $1 in transaction fees. Solana makes small stakes viable, allowing everybody to participate. Prediction markets especially grew in popularity when they correctly predicted the outcome of the 2024 American presidential election, which was famously misjudged by professional pollsters.

Why did the markets get it right? It comes down to skin in the game. Unlike polls, where opinions are free, prediction markets require you to back your belief with money. If you have bad information, you lose money. If you have good information, you profit. Over time, the price moves toward the truth. People sometimes call this the wisdom of the crowds with accountability.

At their core, these products offer a really simple premise. You are presented with questions about outcomes. Most of these questions have a binary outcome: a question for which each answer is the inverse of the other. Let's review a few examples. The price of Bitcoin will close 2026 above $150,000: yes or no? You pick a side, you stake your money, and you wait. We could create a more interesting contract by proposing different brackets. For example: the price of Bitcoin will go above $150,000 but stay below $200,000.

The second question would then be: the price of Bitcoin will go above $200,000. By composing these binary true-or-false questions, we are able to find the closest outcome and refine our predictions. Do note that some prediction markets can use questions with more than two outcomes, but today we will focus on designing within these binary-question limitations. This keeps the Solana program logic simple while still providing a great user experience.

So how do these markets actually work? The mechanics are straightforward. When you take a position, your funds go into a pool: there's a yes pool and there's a no pool. When the outcome is decided, the winners split the losers' pool proportionally to their stake. Let's say there's a market with 100 SOL in the yes pool and 50 SOL in the no pool. You put 10 SOL on yes, which means you own 10% of the yes pool. If yes wins, you get your 10 SOL back plus 10% of the no pool. That's 5 SOL profit.
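The payout arithmetic above can be sketched as a tiny function. This is a simplified model: the real program works in lamports and should use checked math throughout.

```rust
/// Winner's payout: original stake plus a proportional share of the losing pool.
fn payout(stake: u64, winning_pool: u64, losing_pool: u64) -> u64 {
    // Widen to u128 so the multiplication cannot overflow.
    stake + (stake as u128 * losing_pool as u128 / winning_pool as u128) as u64
}

fn main() {
    // 10 SOL on yes, 100 SOL yes pool, 50 SOL no pool -> 10 back + 5 profit.
    assert_eq!(payout(10, 100, 50), 15);
    println!("payout = {}", payout(10, 100, 50));
}
```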

If no wins, you lose your 10 SOL and it gets distributed to the no side. This creates a natural price discovery mechanism. If most people think yes will win, more money flows into the yes pool, but that also means the payout for yes gets smaller. At some point, contrarians see value in taking the other side. The ratio between the pools reflects the crowd's collective probability estimate.

This is what we're going to build today. By the end of this lecture, you'll understand how to create a prediction market on Solana from scratch. We'll cover the full stack, from the Rust program running on chain to the React front end that users interact with. On the Solana side, we'll write four instructions: create a market, take a position, resolve the market, and claim winnings. We'll go over account design and constraints.

We'll look at the validation logic that prevents people from taking positions after the deadline or claiming their winnings twice. On the front-end side, we'll see how to fetch all markets from the chain, decode the account data, and display it in React components. We'll use a code generation tool called Codama that reads our Rust program's IDL and generates a TypeScript client for us automatically. This means we get type-safe instruction builders without writing serialization logic by hand. The architecture might seem like a lot at first, but it breaks down into clean layers. Let's start by looking at the big picture.

Let's get an overview of the architecture. This is a full-stack lecture, so while we'll go over the program, we will focus more on the integration between applications, signers, and transactions. The front end of our application will use Next.js. We could use another React framework, or even another JavaScript framework entirely, but Next.js offers an easy way to get started with a good production standard: we get server-side rendering capabilities and many optimizations that we don't want to spend our time setting up manually.

The front end will be split between two pages. The first page will show all of the currently active markets; that's where our "Bitcoin above $150,000" example would live, with components allowing users to take a position on yes or no. The second page will be a profit-and-loss page with statistics on past markets. We'll use this page as an example of how you can display data and present it nicely for users.

These components will talk to the blockchain using two easy-to-use libraries that are part of the Solana SDK suite: Solana client and Solana React hooks.

These will handle the wallet connection and the communication with the RPC layer for us. When it comes time to connect to our program and submit instructions or read from accounts, we'll use a TypeScript client generated with the Codama IDL library. Transactions will be sent to an RPC on devnet, and our Anchor program will take care of acting on the different accounts and PDAs.

One caveat I'd like to discuss is that this is not a production-ready market. An oracle would typically be recommended to resolve the markets, as it would build trust in the process, but in our implementation the market creator is the person responsible for resolving the outcome.

As you can see, we've kept our account design pretty simple in this implementation. Each market will be a PDA so that we can easily reconstruct it on chain, and each position holding a user's stake will also be a PDA. Our whole design contains a simple count of four instructions: create market initializes a new market with a yes-or-no question; place bet takes a position on either yes or no; resolve market is for the creator who declared the market to resolve the outcome; and claim winnings allows people who've won to claim their payout. The beauty of Solana programs is that these four simple instructions can be composed into a fully functional prediction market.

At this point, you should have a clear mental model: a market is two pools, a market account, and a handful of instructions that move funds around with strict rules. Nothing magical, just constrained state transitions. That's why Solana is a great fit: it makes these tiny, frequent stakes cheap and fast while keeping everything transparent. In the next part, we'll get hands-on with the on-chain program. We'll define the account data, derive PDAs, and write the instruction handlers with the guardrails that prevent late stakes or double claims. We'll also talk about the trade-offs we're making, like creator-resolved outcomes, and how we might upgrade this in a future production system. Once the program is solid, we'll go back up the stack and wire the UI with a generated client. Let's start building.

All right. Before we write any code, we need to decide what lives on chain. Solana programs are basically pure functions over accounts: each instruction takes accounts in, reads them, and writes them back out. So the account layout is the real API. Get that right, and everything else becomes straightforward state transitions. For this build, we're keeping it minimal: two accounts, Market for global state and UserPosition for a single user's exposure. No order book, no price history, no off-chain reference, just the data we need to validate positions and pay people out. That's the whole idea.

All right, so we define these in state.rs. We have the account macro and the derive InitSpace macro. A quick note: InitSpace matters because Solana allocates space up front. If you go too small, your transaction will fail; if you go too big, you'll burn fees forever. With InitSpace, Anchor does the sizing for us, and we don't have to worry about it.

All right, so here we're going to go over how we built the actual protocol in the back end.

We're going to go over state.rs, where we defined the different accounts we'll be using in our prediction market. The first one I want you to look at is the market account. A few things worth noting: we have a creator. This is the person who actually created the market, who wrote the specific question and set every detail about it, so we're storing the creator's public key here. We also have a market ID. This is interesting because we will have multiple markets, so each one gets its own ID so that we can differentiate them.

Another important one I'm highlighting here is the question. We have to put a maximum length on it because all of this data needs to be known ahead of time; otherwise we'd be paying too many fees. I put a length of just 200 characters. That keeps the question interesting while also ensuring we have a maximum, so you can't write a full novel; some people might want really complicated prediction markets, but for good UX you should keep things light, and a maximum of 200 characters should be pretty good.

Resolution time is the next thing we'll look at. It's an i64 storing the time when the question ends. You can't really keep a market open forever; people will want to see if they were right on yes or no. The resolution time is the moment the market stops, where we say: this is finalized, after this point you cannot take any more positions.

Then we store the yes pool and the no pool. This is where we track the total amounts as more and more people take positions on yes and no. We need those later for the calculations; for example, when yes has won, we need to know who took a position and who gets a share of the pool's distribution. So we keep the total amount that people have put on each of the yes and no options.

Now, the resolved field at the end is pretty simple: it says whether the market has been closed. One clarification: the resolution time is the moment after which you cannot take a position, but the question may still need someone to verify the answer, for example whether the price really was over $150,000. So resolved is set when the person closing the market has resolved it; we flip it to true, and the outcome simply records whether yes or no won. And we keep the bump simply because it makes it easier for us to re-derive our accounts.
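To make the "space is allocated up front" point concrete, here is a hand computation of a hypothetical Market layout matching the fields described above. Anchor's InitSpace derives this for you; the exact field types here are assumptions for illustration.

```rust
// Byte sizes under Borsh serialization, plus Anchor's 8-byte discriminator.
const DISCRIMINATOR: usize = 8;
const PUBKEY: usize = 32;       // creator
const U64: usize = 8;           // market_id, yes_pool, no_pool
const I64: usize = 8;           // resolution_time
const STRING_PREFIX: usize = 4; // Borsh length prefix for the question
const MAX_QUESTION: usize = 200;
const BOOL: usize = 1;          // resolved, outcome
const BUMP: usize = 1;

fn market_space() -> usize {
    DISCRIMINATOR + PUBKEY + U64       // creator + market_id
        + STRING_PREFIX + MAX_QUESTION // question (max 200 chars)
        + I64 + U64 + U64              // resolution_time + yes_pool + no_pool
        + BOOL + BOOL + BUMP           // resolved + outcome + bump
}

fn main() {
    println!("Market needs {} bytes", market_space());
}
```

Go too small and initialization fails; go too big and you overpay rent for every market ever created.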

Now let's scroll down to the user position. What makes this a really simple design is that each user has their own position, their own account, and in there we link it to the actual market. If I'm a user taking a position on the yes pool, I need to know exactly which market and which question I'm answering; I really want my stake to go to the right place. So I put the market here on my user position so that I can link the two later.

We also keep the user here, just for simplicity, so that we can easily match this position to the user who took it. And we'll have a yes amount and a no amount. We'll have to work a bit on the constraints, because we want a user to put a position on only yes or only no; we'll work on that a bit later.

And lastly, we also keep the claim status. For example, if I claim my money once, I should not be able to take it twice, so I store this flag on the user position account to prevent double claiming. And that's the tour of the accounts we had in state.rs. Let's move on to the next part.

In this next part, even though we're not going to build the prediction market line by line, the whole goal of this section is to show you the interactions between the program, the code generation, and the front end. Even if we're staying a bit high level, we should try to understand the business logic of our program, because any time you're building a full-stack application, you really need to understand the interactions between both. That is what makes the difference between a quick screen you put up and an actual thoughtful application. So we're going to go into lib.rs, where most of our business logic lives. We saw in the previous step that we had four instructions: create market, place bet, resolve market, and claim winnings.

Obviously, creating a market means we need all of the information we defined in state.rs; we'll need it in this function so we can set it on the account. Going back to lib.rs, if I expand this section you'll see that the user has to provide a market ID, a question, and a resolution time. All the rest we're able to initialize ourselves. A few things to take into account: if I'm setting a resolution time, I obviously want it to be in the future. Creating a market about an event in the past makes no sense; you could take a position on who won the 2024 election, but we all know what happened, so there's no point in creating it. This is why we use a require here to ensure that the resolution time is above the current timestamp. We could go deeper; for example, I could require at least a day or at least an hour of lead time, but we're keeping things light and just making sure it's after the current time.
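The check reads roughly like this in plain Rust; this is a sketch, and the error name is hypothetical (the real handler uses Anchor's require! with the on-chain clock and a custom error enum):

```rust
// Reject markets whose resolution time is not in the future.
fn validate_resolution_time(resolution_time: i64, now: i64) -> Result<(), &'static str> {
    if resolution_time <= now {
        return Err("InvalidResolutionTime"); // a market about the past makes no sense
    }
    Ok(())
}

fn main() {
    assert!(validate_resolution_time(2_000, 1_000).is_ok()); // future: accepted
    assert!(validate_resolution_time(500, 1_000).is_err());  // past: rejected
    println!("ok");
}
```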

After that, we get the market account and set on it all the details we want, such as the question we received and the resolution time; the other parameters we can infer naturally. For example, at the moment of creating the market, we know that nobody has taken a position yet, so we can initialize the yes pool and the no pool, where we track the amounts people have staked, both at zero. Obviously, if we're just creating this market, it has not been resolved, so we set that to false. And we're not ready yet to set the outcome, because that happens in a later instruction when we resolve the market.

I'll close the create market instruction here, and let's look at placing a bet. This is when the user takes it into their own hands to say: yes, this is true, or no, it isn't. We'll have two pieces of information in the parameters for this interaction. Obviously, the user wants to put money where their mouth is, so they'll say "I'm going to put 10 SOL on this" or "I'm going to put 1 SOL on this," and we need to know whether they chose yes or no for the stake they're taking. Let's expand the instruction a bit.

We need a few basic checks and balances. We need to ensure they're placing an actual bet, an actual stake, so we check that the amount is bigger than zero; if not, we return an invalid-amount error. We also do checks to ensure the user is still allowed to participate in that market: we check that the current time is below the resolution time, because if it's past, the market is closed and people cannot take a position anymore. Bitcoin has already crossed the threshold; we cannot place a stake at this point.

And lastly, we do a CPI to transfer the lamports from the user into the market's account. This way, we confirm that the user is actually sending their SOL along with the position they're taking.
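Put together, the place-bet guards look roughly like this; a sketch with hypothetical error names, not the program's exact code:

```rust
// Guards before accepting a stake: non-zero amount, market still open.
fn validate_bet(amount: u64, now: i64, resolution_time: i64) -> Result<(), &'static str> {
    if amount == 0 {
        return Err("InvalidAmount"); // must stake something
    }
    if now >= resolution_time {
        return Err("MarketClosed"); // no positions after the deadline
    }
    Ok(())
}

fn main() {
    assert!(validate_bet(0, 50, 100).is_err());  // zero stake rejected
    assert!(validate_bet(5, 150, 100).is_err()); // past the deadline rejected
    assert!(validate_bet(5, 50, 100).is_ok());   // valid bet accepted
    println!("ok");
}
```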

For these last few steps, again, I'm going a bit faster; I'm not trying to get you to write this program line by line. Let's really focus on the architecture and the business logic. You could read all of these lines, but what I want you to focus on is how they'll serve the application we're building on the front end, and how you have to keep these different parts in mind. For example, I showed you that we're returning errors. When you're building your front end, you will want to know those errors. You'll want to know what to display on screen when a user does something the program is not set up to do; that's when you should understand what these errors mean. So all this business logic is simply taking care of setting the user's stake when they decide whether to place it on yes or no; if something's wrong, we return errors, and otherwise we finalize the user's stake.

Let's close place bet and quickly explore the last two instructions, which I think are the two most important ones: resolving the market and claiming the winnings.

Resolving the market is actually pretty simple: we have the actual answer. Has Bitcoin passed $150,000, yes or no? Who won the election, candidate A or candidate B? All we're doing at this point is setting that answer, which determines who has won and who has lost. It's as simple as it gets.

The last instruction is claiming the winnings. This one is also pretty simple. We again have time logic: we check that the market is resolved, because if you claim winnings on a market that is still open, you obviously should not be getting that money. We also do a require check to make sure you haven't already taken your winnings out of the pool. This is important because we're trying to prevent double claiming; we want to make sure everybody has access to their winnings. And lastly, we transfer the money out to the user and ensure the totals left in the pool are updated properly.

So those are our four instructions. As you can see, they're not super complicated; you'll have access to the code if you want to understand it line by line. What I really want you to understand here is that we will need to interact with these four instructions from the Next.js application. You could wire them up manually if you wanted; however, serializing arguments from the front end to the program and using RPCs to broadcast your transactions is a lot of manual work. This is one of the reasons why, in the next section, we're going to look at a tool called Codama, which will let us generate a TypeScript client automatically from the Anchor program we've created.
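The claim logic combines the resolved check, the double-claim guard, and the proportional payout. A sketch in plain Rust; the field and error names are assumptions, and the real instruction moves lamports on chain:

```rust
struct Position {
    stake: u64,
    claimed: bool,
}

// Pay out once: market must be resolved, and the position
// must not have been claimed before.
fn claim(
    resolved: bool,
    pos: &mut Position,
    winning_pool: u64,
    losing_pool: u64,
) -> Result<u64, &'static str> {
    if !resolved {
        return Err("MarketNotResolved");
    }
    if pos.claimed {
        return Err("AlreadyClaimed"); // prevents double claiming
    }
    pos.claimed = true;
    // Stake back plus a proportional share of the losing pool.
    Ok(pos.stake + (pos.stake as u128 * losing_pool as u128 / winning_pool as u128) as u64)
}

fn main() {
    let mut pos = Position { stake: 10, claimed: false };
    assert_eq!(claim(true, &mut pos, 100, 50), Ok(15));
    // A second claim on the same position fails.
    assert!(claim(true, &mut pos, 100, 50).is_err());
    println!("ok");
}
```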

As a final overview before we move on, I'll show you that we also have a few tests here. These tests essentially help ensure the safety of our program. They do not replace a full audit, but when you're building a full-stack application, you always want to test every bit and piece. You want to start from the program, and you also want unit tests on your JavaScript application. This is important because a full-on program means a lot of moving parts, so you need to pay special attention to these little details.

All right, next we're going to jump straight into creating our Codama IDL client, and I'll see you in the next step.

Now we have our full application running. I want to look a little at how we actually communicate between the program and the front end on screen right here. I have this Next.js application: we're fetching markets and displaying statistics for our users. The key to wiring this up is creating what we call an IDL, so I'll jump back into my code editor. We saw in our anchor folder that we have this program with a bunch of instructions and a bunch of errors; this is how we define them on chain. To consume this program, you could decide to do it from a Java application, a JavaScript application, or even Python. How you achieve this layer of communication is actually pretty important, because you need to pass the different discriminators and instruction data with the different bytes in the perfect order.

So how do you actually achieve that communication? You could wire it up manually, but just understand that this is error-prone. There are a lot of little bytes, a lot of little details, a lot of things to tweak, and doing it by hand can leave you with a bunch of errors in your program that are just not fun to debug. They're a great learning experience, but instead we should use what we call an IDL, an interface definition language. I'll go back to my terminal and run npm run anchor build.

You've probably seen that command run directly with the Anchor CLI. I've wired all of this up in a full-stack application, so I'm using npm to run commands, but what npm run anchor build is doing right now is building and compiling our program. If we go back to our editor, there is a specific file I want to look at. Under the anchor program's prediction market target directory, there's a folder called idl. Let's open it up and open the big JSON file inside. At first glance, this file is nothing special; we can see different names like the market and discriminators. But if we look a bit more closely, there's something interesting: everything we've built into our Anchor program, such as our instructions and our errors, maps perfectly into this IDL file. For example, under the instructions section, if I open it up, you can see my claim winnings instruction here.

This is essentially a translation of your program's logic for other clients to consume. The IDL is a contract layer between the program and its different consumers, and you could have a lot of different consumers. If you want to use this IDL in a more useful format, you would wire up a tool like Codama. With Codama, you can generate a client from that IDL in the target language of your choice. In our case, this will be TypeScript, but you could decide to go with Go or Python; there are a lot of different options for your target language.

So we've wired this up with this codama.json file. The way we wire it up is that we give it the information of where to look for the IDL file. If you see this property here, we're pointing to the relative path of prediction market.json in the idl folder. And we're also pointing exactly to where we want the client to be generated. More specifically, what I want to happen is to generate a client inside my Next.js application so that I'll be able to consume it later on. Let's see it in action.
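That configuration looks roughly like this. Treat the field names as illustrative: the exact codama.json schema depends on your Codama version, and only the two paths match what was just described.

```json
{
  "idl": "anchor/target/idl/prediction_market.json",
  "scripts": {
    "js": {
      "from": "@codama/renderers-js",
      "args": ["app/generated"]
    }
  }
}
```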

So if I go back to my editor here, our last command was npm run anchor build. Now we'll do npm run codama:js. What this does is read the IDL and generate a TypeScript client for us. Let's go and open that client itself. We've put it into the application's generated folder, in index.ts.

Once I'm inside my generated client file, we can see that the accounts, the errors, the instructions, and the programs match the different sections of the IDL one to one. This is important because this is a communication layer between our front end and our back end. As an example, if I click to definition on my instructions, you can see that the exact same four instructions, claim winnings, create market, place bet, and resolve market, are matched one to one as we wrote them in our program.

I'm going to click again into claim winnings to explore a little bit what these files look like. You don't have to understand these files completely. There's a good reason we generate them automatically instead of manually: doing it by hand is really error-prone. If you did this yourself, it would be a great learning experience, but you might not want to consume this low-level code directly, simply because if you make small errors, it's quite common to see them reflected in your front end quite quickly.

One important thing to understand about these generated IDLs and clients is that this is a workflow that sits between your program and your front end. If you have a bug, if something isn't working when you're decoding accounts, there's a high chance you forgot to rebuild your program and regenerate your client. The workflow is generally: make a change in Anchor, run npm run anchor build, and then run the Codama client generation again. You could put those in a loop if you wanted. You could do hot reloading, for example, where each time there's a change in your Anchor program, you rebuild your client. However, building a program can take anywhere from a few seconds up to a few minutes. So I would say do it manually at first, and once you find your groove, if you feel that you need to automate it, that's a good call. In the next section, we're going to dive directly into the front-end architecture. So I'll meet you in the next steps.

In this next section, we're going to look at the front-end architecture. In the previous section, we talked about the program and making the IDL and a client from it. Now we've got to put it all together and see how it all works and connects to make a full-stack application. So in my terminal here, I'll do npm run dev, and then I'll switch over to my code editor.

So this application is a Next.js application. You have a few options when it comes time to build a front end for a web application. The most popular ones on Solana are generally React applications, and the most popular React framework for production, I would say, is Next.js. You could be using Svelte, you could be using Vue, but Next.js is where you'll probably find the most support in libraries and in content like tutorials on the web. If we explore the structure here, I have my anchor folder, which I'm going to skip because we've seen it a bit before, and I also have my app folder.

Inside my app folder, we have our generated client, which we'll be consuming directly in our pages. And if I open the index page here, page.tsx, we're going to see a few things for our homepage. Let's scroll down just a little bit. If we look at what it looks like in the UI, our homepage has a header with the wallet connection at the top. We also have a button to create new markets, this table showing active markets (we're going to create one in a little bit), and these past markets. So this page, as you can see, needs a lot of Solana information. We need to wire up the fetching so that each component can properly display its own data.

In Next.js, when you want to provide information to multiple components, you'll want to use what we call a React context. A React context is simply a component that passes its information down to all of the child components under it. And if you want a context for the whole application in Next.js, you put it in your layout file at the root of the application.

So let's open up layout.tsx, where we wired up all the magic of this context provider. It's pretty light. As you can see, there's the regular Next.js boilerplate here; we're not going to look too deeply at that. What I really care about is this provider tag in the JSX. This provider is essentially wiring up all of the different communication layers that our components will need in order to communicate with the Solana RPCs. So let's go over and open up that component. Inside providers.tsx, there are two things I really want to look at: the Solana client npm package and the Solana React hooks npm package. A few things to note: these libraries are part of the JavaScript SDK family built on top of Solana Kit, but you don't have to write a lot of Solana Kit code by hand, because these libraries are fully wired so that you can focus straight on your business logic.

I'm going to go over what we're doing here pretty quickly. We have this devnet RPC URL; you'll want a proper RPC provider if you're going to production with your application. We'll talk about this later in the boot camp. You'll want to create a client where you pass the wallets you want to support and your commitment level. The client, by the way, supports a number of options by default. You don't have to set those, but you have full control if you decide to.

And lastly, this one is a bit interesting: I'm passing in my query configuration. This lets me set how fast I want my application to refresh and revalidate on-chain content. This is pretty fun, because some things, like a simple CRUD application, wouldn't need to refresh very often; you'd simply create and update data and refresh when it changes. But if you're building something like a prediction market, which has to feel really real-time, you might want to bump those numbers to refresh even faster. In our example, we're refreshing every 3,000 milliseconds, which translates to 3 seconds. You could even get that down to 500 milliseconds if you wanted; just watch the units you're passing in here. That still wouldn't be truly real time, but understand that there are different options between polling and WebSockets, and we're currently using a polling mechanism here.

So once this Solana provider is wired up, it will be available to all the children components, such as fetching our markets or fetching the past data for the users.

We're going to get back to the UI to understand the components we're playing with here. Our first stop: how we create a market. In the UI, we simply click New Market. We have to ask a question, for example: will SOL hit $200 this month? Then we decide when it ends; this was the resolution time in our accounts. I'll set it for one hour.

I'll go ahead and create it here. This submits a transaction to the wallet provider that I'm using, in this case Phantom, but any wallet would work. And once we confirm, we should see this market appear pretty soon. As you saw, we have around a two-to-three-second refresh delay. We could still wire it to refresh manually like I just did here. But essentially, all of this flow and communication is consuming the IDL that we created. Let's go explore a bit of the components and see what they're doing. I'll go back to my code editor here.

Let's go to the create market form, the first form that we just used to make a market. Let's focus a bit on the different sections here. I have a few constants just to make a good user experience. For example, you want to provide information for your users to get started so that they're not lost; it gives them a good series of pointers to get started and use your app.

And then we're consuming a few React hooks, such as the wallet connection hook. This is simply because we need to ensure that a user is properly connected before using the component. And we're using useSendTransaction, which will be needed when the user actually submits the creation of the market.

And here's one part that I want to draw your attention to: we have this handle create function, which is wired up to the submission of the form. There's one special place I want you to look at. You see this getCreateMarketInstructionAsync? That was a mouthful to say. This is coming from my Codama-generated client. If I click to definition here, you can see that this is all code that was generated for me, that I did not have to write by hand.

This really makes everything much easier, simply because I just pass my arguments; here I'm passing the creator (there's a tiny TypeScript bug that we're going to have to fix later on, but the call still compiles). As you can see, these arguments to the function are really way simpler than wiring all of this up manually yourself. There are a lot fewer problems you're going to deal with. You simply pass your proper arguments, and all of this is handled for you.

After that, we're ready to send the transaction with the send function, which came from the useSendTransaction hook. The rest is really just wiring up the UI to give a proper user experience. As you can see, we're doing a bit of UX cleanup; as an example, we're resetting the question to be empty and setting the status to say that the market was created. We're not going to go over all of this behavior; we'll have deeper videos on the Solana YouTube channels about doing proper UX. We're simply trying to understand the flow between submitting data on chain and actually having it reflected in the UI. All of these little details mostly exist to give your users a better experience: loading states, and really reflecting and refreshing the data that you're seeing.

So that was creating the actual market. Once you've created it, what's really important is fetching it and displaying it to the user. So I'll go back to my code and open this markets-list.tsx file.

What's really important to understand is that we're fetching the markets and doing a little bit of filtering into active and past markets. For example, if a market is still open and allows a user to take a position, we're good; we display it as active. And past markets are ones that have resolved, for example, the 2024 presidential election; we put them in this past section.

What I want to look at here is this use program hook from the Solana React hooks library. A program has different accounts, and the use program accounts hook gives you direct access to fetching all of the on-chain accounts by passing just a little bit of information, such as the program address and how you want to encode and decode them. We still, openly, have work to do on making this easier; it will probably improve a lot by the time this boot camp is released. So please go look at the code; you should probably see something much easier than what's on screen. For now, just understand that the best way of doing this is by consuming these use program hooks, and that things will get much simpler pretty soon. Then everything we're doing, if I scroll down, is logic around switching tabs between the active and past markets. But I want to bring your attention to the rest of the UI, so I'm going to get back to my application here.

Something that's always nice to give your users is data about all of their interactions with your program. This activity tab here that I've just clicked on, as you can see, is pretty data heavy, and we're still doing the same account reads to provide all of this data. All we're doing is more advanced React filtering and animations to give users a bit more insight. And all of this is really possible because I didn't have to write the IDL and the generated client by hand. This would be way more difficult if we were writing and wiring all of it manually.

So when it comes to this full-stack communication between your program and your front end, what you're looking for is a smooth experience for your users. You should try to build applications that make it feel like the blockchain is just erasing itself. You're trying to reach a point where, yes, users are connecting a wallet, or maybe even connecting with social login, but it's easy and pretty seamless for them. And the current libraries are getting much better at this. For example, the refetching in our providers really lets you create these kinds of applications where everything is live and refreshable for your users. In the next part, we're going to go over the different trade-offs we made and how you could expand upon this project.

All right, a quick but important section. This is a tutorial build, so we made a bunch of choices that keep the code simple. Some of those choices trade off decentralization or product features. That's fine for learning, but you should know exactly where the shortcuts are. We'll go through the main design decisions, the protections we did add, and the obvious upgrades you would have to make if this were a real product.

The biggest one is resolution. In our build, the creator decides the outcome. That keeps the program tiny and avoids integrating an oracle, which is a way of getting outside data onto the chain. But it creates a trust issue. In production, you would want to add an oracle, or even multi-signature resolution, so that no single party can manipulate the results.

We also made a trade-off with polling. It's easy to implement and understand, but it adds a bit of latency and it's a bit heavier on the RPC load. So if you want a real-time user interface, you would use WebSockets or an indexer. Also, all-or-nothing bets keep the math clean, but you cannot do partial exits, cash-outs, or liquidity. That makes the program easier to reason about, but it limits what traders can do, and most on-chain activity is usually people trading. And finally, we skipped fees. This keeps the math pure and the example clear, but it also means there's no revenue or sustainability for the business. In production, you'll want to add a small fee on each position taken, or on the winnings.

Let's also take a basic look at the security we built in. It won't be perfect, and it will not replace an audit, but here are the minimum protections you want in any prediction market. The arithmetic checks are there to prevent overflow attacks. In a parimutuel system, overflowing a pool can change implied prices or even allow free bets. Using checked math is boring, but it's critical.
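The program itself does this in Rust, but the idea behind checked math can be sketched in TypeScript with `bigint`. Everything below is an illustration of the technique, not the actual program code:

```typescript
// Illustrative sketch: treat pool balances as u64 values that must
// never silently wrap, and fail loudly instead of corrupting totals.
const U64_MAX = (1n << 64n) - 1n;

function checkedAddU64(a: bigint, b: bigint): bigint {
  const sum = a + b;
  if (sum > U64_MAX) {
    // In the on-chain program this would surface as a program error
    // instead of a corrupted pool total.
    throw new RangeError("u64 overflow: pool total would wrap");
  }
  return sum;
}
```

In Rust, the equivalent pattern is `checked_add`, which returns `None` on overflow so the program can bail out with its own error instead of wrapping.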

We also gate claims with position.claimed, which makes claims idempotent and prevents double spending. The resolution time check prevents late positions, and the creator check prevents random signers from resolving the market. Notice what's missing: we do not prevent a creator from resolving early with the wrong outcome, and we don't prevent griefing with spam markets. Those are product and governance problems that more mature codebases would have to handle.

We also don't do anything about front-running or transaction ordering; on Solana, that usually shows up as users racing at the end of a deadline. But for a tutorial, it's really fine.

If this were production, we'd have to do a bit more work to put everything together properly. But, you know, this is a tutorial, and it's completely fine. So, what could we add to make this a bit better? First, we could add decentralized oracles; Switchboard or Pyth really come to mind. We could also add a fee so that the protocol is more mature and sustainable. Another one, for people actually trading these markets, would be partial position exits or better liquidity. And obviously, like we discussed at the beginning, multi-outcome markets that go beyond a simple yes-or-no question. If you really wanted to keep going, I would work on oracles first; they're a bigger upgrade, but the most logical one to do. Right after that, you could look at fees to make the protocol really work for you. So the takeaway is simple: the core is solid, but this is a learning-focused version. If you treat it like production, you'll want to harden it. That's normal. Engineering and creating projects is all about a balance between solid features and security. Keep your users in mind, make the call, and continue learning.

All right, let's wrap it up. If you've made it this far, you have a full-stack prediction market running on Solana. You built the on-chain program, you generated a type-safe client, and you wired it up into a React front end. The best part is that the whole thing is small: there are only a few instructions, a couple of accounts, and a simple UI. But the architecture can really scale; you can add features without changing the core flow. Quick recap: we talked about where prediction markets work and why Solana is a good fit. We built an on-chain program with four instructions. We also generated a client from the IDL so that our front end is all wired up and type-safe. And we connected our user interface to the back end, fetching transactions and doing the full flows.

In the last section, we simply discussed the security trade-offs, the architecture, and how you can improve things going further. So, the full picture: first, a user interacts with a React component. That component calls the generated instructions. A wallet signs transactions and sends them to Solana. Our Anchor program validates and mutates the account state. And lastly, our front end polls, updates its state, and rerenders our application to make this all come alive. That loop is the entire app: the user clicks, the client builds an instruction, the wallet signs, the program updates the state, and the UI refreshes. Once you internalize that loop, you have pretty much everything you need for Solana development, and it becomes easier.
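That loop can be sketched, very loosely, as plain TypeScript. Every function below is a hypothetical stand-in for the real wallet, client, and program pieces, not an actual API:

```typescript
// Toy, in-memory sketch of the full-stack loop. Nothing here is a real
// Solana, wallet, or Anchor API -- just the shape of the data flow.
type Instruction = { name: string; args: Record<string, unknown> };
type Account = { question: string; resolved: boolean };

const chainState = new Map<string, Account>(); // stand-in for on-chain accounts

// 1. The client builds an instruction (in the real app: generated code).
function buildCreateMarket(question: string): Instruction {
  return { name: "createMarket", args: { question } };
}

// 2. The wallet "signs" (here it just tags the payload).
function sign(ix: Instruction): Instruction & { signed: true } {
  return { ...ix, signed: true };
}

// 3. The "program" validates and mutates account state.
function process(tx: Instruction & { signed: true }): void {
  if (tx.name === "createMarket") {
    const question = tx.args.question as string;
    chainState.set(question, { question, resolved: false });
  }
}

// 4. The front end refetches account state and rerenders from it.
function fetchMarkets(): Account[] {
  return [...chainState.values()];
}

// User clicks -> build -> sign -> process -> refetch.
process(sign(buildCreateMarket("Will SOL hit $200 this month?")));
```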

A few takeaways here. Programs are stateless; it's the accounts that hold the state. PDAs enable trustless escrows, and no private keys are holding funds. Code generation eliminates serialization bugs and makes front ends type-safe by default. The IDL is the contract between on-chain and off-chain; it's the connection flow. And if you remember just one thing, remember this: your program is small, the accounts are the truth, and the client should be generated. This combo keeps you sane as your project grows and really scales easily. Also, don't underestimate the value of just keeping things boring. The simpler the state model, the easier it is to debug and to extend later. Most production bugs are not clever things; they're really just mismatched assumptions between the layers.

If you want to go deeper, Anchor and the Solana cookbook are the best starting points. Codama is a bit more high-level, but it's worth exploring if you want to build programs and keep your front end clean. If you're new to Solana especially, spend time thinking about your PDAs and account ownership. These two concepts show up everywhere. Once they click, everything will feel way less mysterious. So the next steps for you: clone this repo locally and try modifying a few validations to see if you break anything.

You could also add a new field to the market struct, or trace the codegen output. All of these exercises force you to touch the full stack, which is the real skill you want here: change the Rust, regenerate the client, update the UI, and watch how the whole loop works. If you want a bigger challenge, try adding an oracle-based resolution flow or a tiny fee to the protocol. Both changes will touch every layer, which is great practice. Thank you for watching. If you're building something cool with this, I'd love to see it. Tweet us about it and just have fun building.

Now, we're going to talk about something that does not get enough attention in the Solana ecosystem: production readiness for applications. You know, there's a massive gap between building a proof of concept that works on devnet and launching something that can handle real money and real users, because there are real consequences when there are bugs. Honestly, this gap has burned a lot of dreams. So, let me start with a story that might sound familiar.

You're building this amazing payments application. Maybe it's a checkout system, a marketplace, or something with some form of subscription. You've tested it locally, you've deployed to devnet, and everything works beautifully. You're ready to flip the switch, you start processing real transactions, and then it hits you like a truck. Suddenly you're dealing with questions you never thought about. Let's go through them. What happens when the RPC endpoint goes down while you're in the middle of making a payment, or the transaction gets stuck and expires? How do you handle priority fees when the network gets congested? What happens if somebody tries to pay you, but the token is actually fake and just looks exactly like USDC?

So, how do you also keep your signing keys secure when your backend services need access to them? This is not theoretical. These things will absolutely wreck your launch if you don't handle them properly. And the tricky part is that most of these issues don't show up in testing. They only emerge when you're live, there's real money on the line, and people expect things to just work. So let's break down what production readiness actually means for Solana applications, because it's not just one thing; it's a whole system of considerations that work together to create reliable, secure, and maintainable infrastructure.

First of all, let's talk about SOL. This seems basic, but it's critical. Your mainnet wallet needs actual SOL to pay for transaction fees and rent. Not a lot necessarily, but you need to think about this operationally. How much SOL do you need? That depends on your transaction volume, but you should be thinking in terms of covering fees for thousands of transactions, plus rent for any accounts you're creating. And here's the thing that nobody tells you: you need a system to monitor this balance and alert you before you run out. Running out of SOL mid-operation is embarrassing and completely avoidable.
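A minimal sketch of that kind of monitor, assuming you have some way to fetch the fee payer's lamport balance. The callback, threshold, and alert channel are all made up for illustration:

```typescript
// Sketch of an operational balance check for a fee-payer wallet.
const LAMPORTS_PER_SOL = 1_000_000_000;

function needsTopUp(balanceLamports: number, minSol: number): boolean {
  return balanceLamports < minSol * LAMPORTS_PER_SOL;
}

// Run this on a schedule (cron, worker, etc.). `fetchLamports` stands in
// for whatever RPC getBalance call your stack provides.
async function checkFeePayer(
  fetchLamports: () => Promise<number>,
  notify: (msg: string) => void,
  minSol = 1, // alert well before you actually run out
): Promise<void> {
  const balance = await fetchLamports();
  if (needsTopUp(balance, minSol)) {
    notify(`Fee payer below ${minSol} SOL: ${balance / LAMPORTS_PER_SOL} SOL left`);
  }
}
```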

Now, RPC configuration. This trips up a lot of teams. During development, you're probably used to the public Solana RPC endpoints. They're great for testing, but they're not suitable for production: they're rate limited, they can be slow, and they don't give you any guarantee of uptime or performance. For production, you need to configure your own RPC endpoint. That normally means running your own validator or, more commonly, getting a dedicated provider like Helius, Triton, QuickNode, and many more. There are great services out there, so use them. But here's a crucial part: whatever RPC setup you choose, it shouldn't be a public endpoint that anyone on the internet can hit. Your production RPC should be private, authenticated, and locked down to only your services. Why? Because of rate limits and security, and because you want to control exactly how your application interacts with the network.

And speaking of RPC endpoints, you need a fallback. Your primary RPC is going to go down at some point. It will. There are great providers, but maybe there's a maintenance window you forgot about, or a DDoS attack, or even a ship cutting an undersea cable; that can happen, and it is a real problem. Whatever the reason, you need a backup RPC for your application that you can fail over to. This means your code needs to be designed with multiple RPC endpoints in mind from day one.
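A simplified sketch of that failover idea. A real implementation would add timeouts, health checks, and async calls; `request` here is just a placeholder for whatever RPC call you make:

```typescript
// Try each endpoint in order; the first one that succeeds wins.
function withFallback<T>(endpoints: string[], request: (url: string) => T): T {
  let lastError: unknown;
  for (const url of endpoints) {
    try {
      return request(url);
    } catch (err) {
      lastError = err; // log this and try the next endpoint
    }
  }
  throw new Error(`All RPC endpoints failed: ${String(lastError)}`);
}
```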

Let's talk about priority fees. They seem optional, but when they're actually needed, they're critical. Solana's transaction processing is first come, first served by default, so when the network is busy, priority fees let you jump the queue. For critical applications like payments, this can be the difference between a transaction landing right away or sitting around waiting; we're talking seconds, not minutes, but it can still hurt your user experience. The smart approach here is dynamic pricing. Don't just hardcode a priority fee value. Implement logic that looks at current network congestion and adjusts the priority fee accordingly. When the network is quiet, pay the minimum; you're good. When things are congested, pay what you need so that your transaction goes in. Your users want your payments to work. They cannot wait for seconds at the coffee shop, so spending an extra fraction of a cent on priority is worth making the experience better.
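One hedged sketch of dynamic pricing: take a percentile of recently observed priority fees (however your RPC provider exposes them) and clamp it between a floor and a ceiling. All the numbers are illustrative, not recommendations:

```typescript
// Pick a priority fee from recent observations, clamped to sane bounds.
function pickPriorityFee(
  recentFees: number[],          // e.g. micro-lamports per compute unit
  percentile = 0.75,
  floor = 1_000,
  ceiling = 1_000_000,
): number {
  if (recentFees.length === 0) return floor; // quiet network: pay the minimum
  const sorted = [...recentFees].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(sorted.length * percentile));
  // Never below the floor, never above the ceiling (cost protection).
  return Math.min(ceiling, Math.max(floor, sorted[idx]));
}
```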

Now we get into retry logic, and this is where it gets interesting. Solana transactions can fail for all sorts of reasons. Maybe the blockhash expired before the transaction was processed, maybe there was a temporary network hiccup, or maybe the RPC endpoint had a momentary issue. Whatever the cause, your application needs to handle these failures gracefully. But here's the tricky part: you can't just blindly retry after every failed transaction. Some failures are permanent. Maybe there's a real error in the code, or somebody tried to pay you from an account they don't control; retrying won't help. If there are insufficient funds, retrying won't help either. Your retry logic needs to be smart: you need to distinguish a temporary failure from an actual bug that is preventing your user from completing the transaction.

And there's another layer to this: blockhash expiration. Every Solana transaction includes a recent blockhash, and it's only valid for 150 slots; in today's conditions, roughly 60 to 90 seconds. If your transaction doesn't land within that window, it expires. Your retry logic needs to detect this and create a fresh transaction with a new blockhash instead of retrying the old one.
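A sketch of what "smart" retries might look like. The error strings matched below are assumptions about what your RPC or SDK surfaces, not exact Solana error codes; tune them against the errors you actually observe:

```typescript
// Classify failures: only retry the ones that can plausibly succeed later.
function isRetryable(message: string): boolean {
  const transient = ["blockhash not found", "block height exceeded", "timed out", "429"];
  const permanent = ["insufficient funds", "invalid account", "custom program error"];
  const m = message.toLowerCase();
  if (permanent.some((p) => m.includes(p))) return false;
  return transient.some((t) => m.includes(t));
}

// `buildAndSend` must rebuild the transaction with a FRESH blockhash on
// every attempt -- resending a stale transaction just fails again.
async function sendWithRetry(
  buildAndSend: () => Promise<string>,
  maxAttempts = 3,
): Promise<string> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await buildAndSend();
    } catch (err) {
      const msg = err instanceof Error ? err.message : String(err);
      if (attempt >= maxAttempts || !isRetryable(msg)) throw err;
    }
  }
}
```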

Confirmation levels. This is a concept that really trips people up because it's different from other blockchains. On Solana, a transaction can be confirmed at different levels: processed, confirmed, and finalized. Processed means it's been included in a block. Confirmed means that the block has been voted on by a supermajority. And finalized means that the block is rooted and cannot be rolled back. For most applications, confirmed is the right level to wait for. It's fast, takes a second or two, and the probability of a rollback is really low. Finalized is more secure, but it takes longer: about 32 slots, roughly 13 seconds under current conditions. So you need to choose based on your use case. High-value payments might justify waiting for finalized, but regular payments don't.
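One way to encode that choice is a small policy function. The three commitment strings are Solana's real commitment levels; the $1,000 cutoff is purely an illustrative assumption, not a protocol rule, and you'd tune it to your own risk tolerance.

```typescript
// Pick a commitment level based on payment value.
// "processed" | "confirmed" | "finalized" are Solana's actual commitment levels;
// the dollar threshold below is an assumed policy for illustration only.
type Commitment = "processed" | "confirmed" | "finalized";

const HIGH_VALUE_USD = 1_000; // assumption: above this, wait for finality

function commitmentFor(paymentUsd: number): Commitment {
  // ~1-2 seconds, near-zero rollback risk: fine for everyday payments.
  if (paymentUsd < HIGH_VALUE_USD) return "confirmed";
  // ~32 slots (~13 seconds today): the block is rooted and cannot be rolled back.
  return "finalized";
}
```

With @solana/web3.js you would then pass the result as the commitment argument when confirming the transaction, so small payments settle fast and large ones wait for finality.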

Error handling really deserves its own deep dive too, because Solana's error messages can be cryptic at times, and your users shouldn't have to decipher them themselves. When something goes wrong, you need to catch the error, understand what it means, and present it as something useful to the user. "Transaction simulation failed" is not helpful. "Insufficient funds to complete this payment" actually works well. Think about all the common error cases: insufficient SOL for fees, insufficient token balance, invalid recipient address, slippage tolerance exceeded if you're doing swaps, account not rent exempt. Every one of these should be handled specifically with a clear message.
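A simple way to do that mapping is a lookup table from raw error text to user-facing messages. The patterns and wording below are illustrative assumptions; in practice you'd extend the table with the exact errors your programs and RPC provider emit, with more specific patterns listed before more general ones.

```typescript
// Translate raw Solana/RPC error text into messages a user can act on.
// Patterns are checked in order, so the more specific rent pattern comes
// before the general "insufficient funds" pattern. All entries are
// illustrative examples, not an exhaustive catalog.

const ERROR_MESSAGES: Array<[RegExp, string]> = [
  [/insufficient funds for rent|rent.?exempt/i,
    "This account needs a small SOL deposit (rent) before it can be used."],
  [/insufficient lamports|insufficient funds/i,
    "You don't have enough SOL to cover this payment and its fees."],
  [/slippage/i,
    "Price moved too much during the swap. Try again or raise your slippage tolerance."],
  [/invalid.*(address|pubkey)/i,
    "That recipient address doesn't look valid. Please double-check it."],
];

function userMessage(rawError: string): string {
  for (const [pattern, message] of ERROR_MESSAGES) {
    if (pattern.test(rawError)) return message;
  }
  // Fall back to something honest but friendly, never the raw error dump.
  return "The payment could not be completed. Please try again.";
}
```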

If you're dealing with gasless transactions, where you're covering fees for your users, there's another level of complexity. You need a robust system to sign and submit transactions on behalf of your users while keeping things secure and preventing abuse. This typically involves some kind of transaction relay service with rate limiting and fraud detection. Now,

token verification. This is critical and often overlooked. If your application accepts payments in, for example, USDC, you must verify that the token being sent is not a scam token created to look just like it. Check the mint address against the official list, and don't trust what the token calls itself. Verify the actual on-chain address of the token mint.
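In code, that check is just an allowlist keyed by mint address. A sketch, assuming the well-known mainnet USDC mint address; verify the addresses you allowlist against the token issuer's official documentation for your own deployment.

```typescript
// Verify a payment's token mint against an explicit allowlist instead of
// trusting the token's self-reported name or symbol.
// The key below is the widely published mainnet USDC mint; treat it as an
// assumption to double-check before using in production.

const TRUSTED_MINTS: Record<string, string> = {
  EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v: "USDC",
};

function verifyMint(mintAddress: string): string {
  const symbol = TRUSTED_MINTS[mintAddress];
  if (!symbol) {
    // A scam token can call itself "USDC"; only the mint address is authoritative.
    throw new Error(`Unrecognized token mint: ${mintAddress}`);
  }
  return symbol;
}
```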

Security is paramount. Your private keys are literally the keys to your treasury. They should never, ever be stored in your front-end code or anywhere a user can access them. Keep them backend only, encrypted at rest, with proper key management practices. Consider using a hardware security module (HSM) or a key management service, and implement the principle of least privilege: keys should only be able to do exactly what they need and nothing more. Actually, let's expand on

that. You should have separate keys for different operations. Don't use the same key for signing transactions and for controlling your treasury. Set up a proper key hierarchy, with hot wallets for day-to-day operations and cold storage for the bulk of your funds. And please, please back up your keys securely: multiple backups, in different locations, with proper encryption. Now,

transaction monitoring and alerting. You need to know what's happening in your application in real time. Set up monitoring for transaction success rates, average confirmation times, and failed transaction patterns. If something is going wrong, you want to know immediately, not when users start complaining. Build dashboards, put up alerts, and integrate with incident management systems.

And finally, load testing. You cannot skip this. You need to test your system under the load you expect to handle, and then test it under way more load than that. What happens when you go from processing 10,000 transactions an hour to 1 million? What's the bottleneck? Is it your RPC connection, your database, your signing service? You need to find the limits before your users find them for you. Production readiness is not about checking boxes on a list. It's about building a system that is reliable, secure, and maintainable under real-world conditions. It's about thinking through the failure modes and the edge cases, and respecting the fact that you're handling money and users' expectations. The good news is that this is doable. Teams successfully launch applications on Solana all the time. But they do it by taking these considerations seriously from the start, not bolting them on as afterthoughts. So as you're building your next application, keep these principles in mind. Your future self, your users, and everybody else will thank you.
