In a previous article, we saw how NFTs unlock new ways of owning data. Now let's take a look at how NFTs actually work in Solana.
Since the Solana blockchain structures data differently than other blockchains, it may be confusing at first but, in my opinion, its model makes a lot of sense and is closer to how we represent things in the real world.
We'll first start by looking at how we do things in the real world and gradually work our way towards the representation of an NFT.
Bear with me, we're going on a little journey.
This article is not about coding but it's worth noting that, if you're used to other blockchains such as Ethereum, you'll have a bit of unlearning to do.
In these blockchains, or in any typical piece of code, you add variables within your program and your logic updates these variables. In Ethereum, a deployed "Smart Contract" contains both the logic and the data needed for that contract to do its job.
That's not the case for Solana. In Solana, a "Program" (the equivalent of a Smart Contract) interacts with "Accounts" that are stored outside of the program. This enables us to create more generic logic that can scale to new orders of magnitude since the data is no longer bound by the size of the program. Additionally, it enables the blockchain to run more efficiently since it can run the same program in parallel with different accounts.
The TL;DR here is: data in Solana is stored outside of programs, in reusable and scalable models called "Accounts".
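This separation of logic and state can be sketched in a few lines. The following is a conceptual model only (the names `Account` and `transfer_program` are made up for illustration, not part of any real Solana API): the "program" is pure logic, while all state lives in account objects passed into it.

```python
# A conceptual sketch, NOT the real Solana runtime: the "program" is
# stateless logic, while all state lives in separate account objects.
from dataclasses import dataclass

@dataclass
class Account:
    """State stored outside the program, keyed by a public key."""
    pubkey: str
    balance: int  # amount held by this account

def transfer_program(source: Account, destination: Account, amount: int) -> None:
    """Stateless logic: it only mutates the accounts it is given."""
    if source.balance < amount:
        raise ValueError("insufficient funds")
    source.balance -= amount
    destination.balance += amount

# The same program logic can be reused with any pair of accounts.
alice = Account("AlicePubkey", 100)
bob = Account("BobPubkey", 0)
transfer_program(alice, bob, 40)
print(alice.balance, bob.balance)  # 60 40
```

Because the logic never stores anything itself, the runtime is free to execute it in parallel against disjoint sets of accounts, which is exactly the efficiency argument made above.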
Alright, let's step back into the real world for a minute and reflect on what makes us own the money that we own in a given currency.
Finance is a complex subject and there are many different ways to own money: cash, banks, assets, etc. To get closer to how we model things in Solana, we need to simplify our real-world model.
Imagine all of the money in the world for a given currency was generated by one bank and one bank only. Let's call these banks "Printing Machines" since they would literally control how much of that currency is in circulation. For instance, the "USD Printing Machine" would be responsible for managing all the US dollars in the world. Nice and simple.
These "Printing Machines" would then allow individuals to own money via "Bank Accounts", where each individual can have as many bank accounts as they want. That way, bank accounts act as a many-to-many relationship between individuals and currencies.
In the example below, Alice owns US dollars and British pounds via three different bank accounts (two for USD and one for GBP) whereas Bob only owns British pounds via one bank account.
Good, with that model in mind, let's enter the world of tokens!
In the previous section, we created a simple model where people use bank accounts to access money in a given currency.
Well, surprise surprise, that model is analogous to how tokens are represented in Solana. You can think of a token as a decentralised currency that lives on a blockchain.
Each type of token is defined by what we call a "Mint Account". That account is analogous to a "Printing Machine" because it can literally be used to "mint tokens", which is equivalent to printing money.
Then, individuals can own tokens via "Token Accounts", which store the number of tokens owned.
Finally, there's no such thing as individuals in blockchains since people interact with them through cryptographic key pairs called wallets. The public key of each wallet points to an account in Solana that stores the amount of SOL owned by the wallet. For that reason, I will refer to these accounts as "Wallet Accounts".
So that leads us to the following analogy.
Note that it is common for individuals to have multiple wallets, usually for security purposes. Therefore, the following analogy is more accurate.
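The wallet/mint/token-account relationship described above can be modelled in a few lines. This is a toy model with made-up names (not the real on-chain account layout): each token account points to one wallet and one mint, and a wallet's total holdings are the sum over its token accounts.

```python
# A toy model of the many-to-many relationship: wallets own tokens
# through token accounts, each pointing to one wallet and one mint.
from dataclasses import dataclass

@dataclass
class MintAccount:
    symbol: str  # e.g. "USDC"

@dataclass
class TokenAccount:
    wallet: str        # public key of the owning wallet
    mint: MintAccount  # which token this account holds
    amount: int

accounts = [
    TokenAccount("AliceWallet1", MintAccount("USDC"), 10),
    TokenAccount("AliceWallet1", MintAccount("USDC"), 5),
    TokenAccount("AliceWallet2", MintAccount("GBP"), 7),
]

# Total USDC owned by Alice across all of her wallets and token accounts.
alice_usdc = sum(a.amount for a in accounts
                 if a.wallet.startswith("AliceWallet") and a.mint.symbol == "USDC")
print(alice_usdc)  # 15
```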
Before we move on to the next section, let's take a quick example of token ownership in Solana.
Instead of US dollars and British pounds, we'll use the tokens USDC and AVDO. These are real tokens in Solana. USDC is a stablecoin pegged to the US dollar and AVDO is a cryptocurrency backed by the avocado industry (because why not).
As you can see, Alice owns some USDC and some AVDO via two wallets and three token accounts. On the other hand, Bob only owns AVDO through one wallet and two token accounts.
Now, there's one big inconvenience with the model described so far.
To illustrate the issue, imagine that Alice, in our previous example, wanted to send some AVDO tokens to Bob.
Since Bob has two token accounts for his AVDO tokens, which token account should Alice choose to deposit her tokens into? Should she ask Bob to send her the public key of the token account of his choice? Even worse, imagine Alice now wants to send some USDC tokens to Bob when Bob doesn't currently have any USDC token accounts. Should she create a new token account for Bob and then send him its public key?
None of these issues makes it impossible to send tokens, but they make our lives harder than they should be.
When sending tokens to someone, you usually only have the public key of their wallet and you really don't want to worry about which token account to use or whether it even exists.
The solution to that problem is called "Program Derived Addresses", or PDAs for short. These are public keys that are derived from other public keys using a special algorithm.
What that means for us is that, given a "Wallet Account" and a "Mint Account", we can deterministically find the associated token account. In fact, these accounts are called "Associated Token Accounts" (ATAs for short) and they are managed by the "Associated Token Account Program".
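The property that matters here is determinism. The real derivation uses Solana's `findProgramAddress` with seeds and a bump; the sketch below only imitates that idea with a plain hash (the program id and function name are placeholders, not real identifiers) to show that the same wallet/mint pair always yields the same address, with no lookup table needed.

```python
# Illustrative only: the real algorithm is Solana's findProgramAddress,
# not a SHA-256 of concatenated strings. The point is determinism.
import hashlib

ASSOCIATED_TOKEN_PROGRAM = "ATokenProgramId"  # placeholder program id

def derive_associated_token_address(wallet: str, mint: str) -> str:
    seeds = f"{wallet}:{mint}:{ASSOCIATED_TOKEN_PROGRAM}".encode()
    return hashlib.sha256(seeds).hexdigest()[:32]

ata1 = derive_associated_token_address("AliceWallet", "AvdoMint")
ata2 = derive_associated_token_address("AliceWallet", "AvdoMint")
print(ata1 == ata2)  # True: anyone can recompute Bob's ATA from his wallet key
```

This is why Alice only needs Bob's wallet public key: she can recompute his associated token account locally, and even create it for him if it does not exist yet.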
Therefore, we end up with two ways of creating and using token accounts: one that's deterministic (using PDAs) and one that's not (using normal token accounts).
Note the annotation above the PDA. These are the parameters required to derive the address of the associated token account. If you'd like to read more about PDAs, the Solana Cookbook is a good place to start.
Since we're going to use this diagram a lot in this article, let's shorten its representation slightly. We'll use the following diagram to represent the generic connection between wallets and mint accounts, whether or not they use PDAs.
Before we move on from this important PDA parenthesis, let's take a quick look at our previous example using only associated token accounts.
Notice how we've now only got one token account per wallet per mint, which means Alice knows where to send the AVDO tokens for Bob. Additionally, Alice knows where to send Bob some USDC even though the account does not exist yet.
Okay, let's move on!
I appreciate your patience but you might be thinking: "Loris, what does any of that have to do with NFTs?".
I promise we're getting there! But to answer that, we first need to understand what data is stored in both the token accounts and the mint accounts.
To that end, let's update our diagram to show all of the data available under each account and, whilst we're at it, let's also display the owner of the account, i.e. the Solana program responsible for creating it.
Okay, we have a few things to notice here. Notably, token amounts are stored as integers and interpreted using the mint's Decimals attribute: if a token account has Amount = 250 and the mint account has Decimals = 2, that means the token account actually owns 2.50 of that token. That way, all monetary values can be stored using integers.
In order to move closer to the representation of an NFT, let's play with these attributes a little.
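The integer-plus-decimals convention boils down to a one-line conversion. A minimal sketch (a conceptual helper, not a real Solana API function):

```python
# Token amounts are stored as integers; the mint's "decimals" tells us
# where to place the decimal point when displaying them.
def ui_amount(raw_amount: int, decimals: int) -> float:
    return raw_amount / (10 ** decimals)

print(ui_amount(250, 2))  # 2.5, the example from the diagram above
print(ui_amount(7, 0))    # 7.0, a zero-decimal token cannot be subdivided
```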
What if we created a mint account with zero decimals and immediately minted one token to a wallet account?
The result would be a mint account with only one token in circulation that cannot be broken down into smaller units, e.g. Alice and Bob cannot both have 0.5 of that token.
The only issue with that is that nothing stops the "Mint Authority" from continuing to mint more tokens in the future. If they did, we'd suddenly have more than one token in circulation and, thus, more than one wallet could own them.
To prevent that, the mint authority needs to revoke its right to mint more tokens immediately after minting the first one.
What we end up with is a mint account whose supply will never go above one and whose token cannot be shared or divided.
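The two steps above can be sketched as follows, under this article's simplified model (the class and function names are illustrative, not real Solana instructions): mint exactly one indivisible token, then revoke the mint authority so the supply can never grow past one.

```python
# A sketch of creating an NFT-like mint: zero decimals, mint one token,
# then revoke the mint authority so no further minting is possible.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MintAccount:
    decimals: int
    supply: int = 0
    mint_authority: Optional[str] = "CreatorWallet"

def mint_token(mint: MintAccount, authority: str) -> None:
    if mint.mint_authority != authority:
        raise PermissionError("only the mint authority can mint")
    mint.supply += 1

nft_mint = MintAccount(decimals=0)
mint_token(nft_mint, "CreatorWallet")  # supply goes from 0 to 1
nft_mint.mint_authority = None         # revoke: no one can ever mint again
print(nft_mint.supply)  # 1
```

With the authority set to `None`, any later call to `mint_token` raises, which mirrors the "supply will never go above one" guarantee described above.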
Thus, there can only be one token holder for that mint at any given time.
And that, my frens, is a Non-Fungible Token! 🥳
The definition of "fungible" is:
Being of such nature or kind as to be freely exchangeable or replaceable, in whole or in part, for another of like nature or kind.
In other words, it can be replaced by something else of the same kind. For instance, olive oil is fungible. One litre of olive oil can be replaced with another litre of olive oil and we've not lost any value. The same goes for currencies: replace a 10 USD note with another 10 USD note and you've still got 10 dollars.
Therefore, for a token to be non-fungible, it needs to have such characteristics that it cannot be traded with something of the same kind. We have managed to achieve this by creating a mint account that will never have more than one token holder. Whoever owns this token, owns the mint account and therefore, owns the NFT.
To recap: An NFT is a mint account with zero decimals and whose supply will never exceed one.
I'd also like to take a second to appreciate how elegant that model is. Not only is its representation in line with its definition, but by relying on token accounts and mint accounts, we can interact with NFTs the same way we interact with tokens. Sending an NFT to someone is as simple as giving them your only token for that mint account.
Okay, I'll be honest, we haven't completely finished our journey in explaining NFTs in Solana. What we've explained so far is indeed an NFT by definition but not a very useful one. All it's telling us is that we own a token and that no one else can own that token. Great! But what's that token called? What is its purpose? Where's the picture?!
Well, it sounds like we need to attach more data to our NFT to make it useful. That's where Metaplex comes in!
Metaplex is a company that creates and maintains Solana programs. Their most popular program is called the "Token Metadata Program". You guessed it, it adds metadata to our tokens!
It does that by using Program Derived Addresses (PDAs). If you remember, PDAs allow us to deterministically find an address using other addresses.
In this case, the Token Metadata Program will create a new "Metadata Account" attached to that NFT. But can you guess which address it uses to derive its own address? Is it the mint account or the token account?
The answer is: the mint account! The mint account is the most important account for an NFT. A token account is a relationship between an NFT and a wallet. If we attached a PDA to a token account and then sold our NFT, the new owner would lose all of that data. Therefore, the mint account is the main entry point of an NFT.
Alright, let's see what sort of data Metaplex has blessed us with on the "Metadata Account".
Let's go through all of these attributes one by one.
- Key: This attribute is what we call a discriminator. Because Metaplex has many different types of account within the Token Metadata Program, this tells us which account we are dealing with. Here, the "Metadata Account" is identified by the MetadataV1 key. Note that the Token Program uses the size of the account instead of a discriminator to figure this out, which is more performant but less flexible.
- Update Authority: This is the account that can update the "Metadata Account".
- Mint: This points back to the mint account.
- Name: The on-chain name of the NFT, limited to 32 bytes.
- Symbol: The on-chain symbol of the NFT, limited to 10 bytes. This is often left as an empty string but can be useful if you want your NFT collection to have a shared symbol. For instance, your "Banana Blossom" NFT drop could have the symbol "BNBL".
- URI: The URI of the NFT, limited to 200 bytes. This is one of the most important attributes. It contains a URI that points to a JSON file off-chain. This JSON file can either be stored on a traditional server (e.g. using AWS) or it can be stored using a permanent storage solution on another chain (e.g. using Arweave). We'll talk more about this JSON file in a minute.
- Seller Fee Basis Points: The royalties shared by the creators, in basis points, i.e. 550 means 5.5%.
- Creators: An array of creators and their share of the royalties. This array is limited to 5 creators. A verified attribute exists on each creator to ensure they signed the NFT to prove its authenticity.
- Primary Sale Happened: A boolean keeping track of whether the NFT has been sold before or not. This affects the royalties.
- Is Mutable: A boolean indicating if the on-chain metadata of the NFT can be modified. Once flipped to false, it cannot be reverted.
- Edition Nonce: This optional attribute is slightly out of scope but it is used to verify the edition number of limited edition NFTs.
- Token Standard: This optional attribute captures the fungibility of the token. We'll talk more about this later in this article.
- Collection: This optional attribute links to the mint address of another NFT that acts as a Collection NFT. This is helpful for marketplaces to group NFTs together and safely verify these collections.
- Uses: This optional attribute can make NFTs usable, meaning you can load it with a certain number of "uses" and use it until it has run out. You can even make it so the NFT destroys itself when it has been completely used up.
Phew! As you can see, lots of cool features. I'm not going to go through them all here but I'm hoping to add more articles to this series on that subject. In the meantime, feel free to check the official Metaplex documentation for more information.
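To make the "Seller Fee Basis Points" attribute concrete: basis points are hundredths of a percent, so dividing by 10,000 turns them back into a fraction. A quick sketch (the function name is made up for illustration):

```python
# Basis points are hundredths of a percent: 550 basis points = 5.5%.
def royalty(sale_price: float, seller_fee_basis_points: int) -> float:
    return sale_price * seller_fee_basis_points / 10_000

# Royalty owed to the creators on a hypothetical 200 SOL secondary sale.
print(royalty(200.0, 550))  # 11.0
```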
So many attributes, yet, still no picture or digital asset? Isnât that a big part of what an NFT is?
Yes! Worry not, we do have a place to store that information.
Remember that URI attribute that points to an off-chain JSON object? Well, that JSON object follows a certain standard in order to store even more data.
As you can see, we can provide, amongst other things, a name, a description and, finally, an image! Similarly to the URI attribute of the Metadata Account, that image attribute should be a URI that can be used to download the digital asset. There's also an animation_url attribute and a files array for NFTs that have more custom needs. All of these assets can be stored either off-chain (on a traditional server) or using a permanent storage solution (on another blockchain such as Arweave). Be sure to check out the Metaplex documentation for more information on its NFT standard.
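As a concrete illustration, a hypothetical off-chain JSON object might look like the following. The field names match the ones discussed above; every value (names, URLs) is made up for the example.

```python
# A hypothetical off-chain JSON object for an NFT. Field names follow
# the shape described above; all values here are invented examples.
import json

metadata = {
    "name": "Banana Blossom #1",
    "description": "A fictional NFT used as an example.",
    "image": "https://example.com/banana-blossom-1.png",
    "animation_url": "https://example.com/banana-blossom-1.mp4",
    "files": [
        {"uri": "https://example.com/banana-blossom-1.png", "type": "image/png"},
        {"uri": "https://example.com/banana-blossom-1.mp4", "type": "video/mp4"},
    ],
}

print(json.dumps(metadata, indent=2))
```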
It is worth mentioning that you can add anything you want within that JSON object. If you're planning on building an app that recognises your own NFTs, that can be useful. But be aware that other applications and marketplaces will not be aware that this data exists and therefore will not be using it.
You might be wondering: why do we need two places to store the data of our NFT? Couldn't we just store everything on the Metadata Account?
There are several issues with that.
As a result, we need to cleverly and safely split the data into two categories: on-chain and off-chain. For instance, the Creators array is on-chain because we need a trustless way of knowing if an NFT has truly been offered and signed by a given artist. The Is Mutable attribute is on-chain because we need to ensure that, once flipped to false, it can never be reverted.
The bottom line is: the Token Metadata Program can create guarantees and expectations for on-chain data but cannot do that for off-chain data.
However, that doesn't necessarily mean the off-chain JSON file is insecure and never to be trusted. Permanent storage blockchains such as Arweave are commonly used to store both the digital assets and the JSON metadata that references them, ensuring the immutability of the off-chain data. Additionally, NFTs can be made immutable, ensuring that the URI attribute of the Metadata Account will never point elsewhere. This is the most secure configuration of an NFT as it is guaranteed nothing can be done to alter it.
Also, note that some NFT projects might need their data to be mutable in ways that benefit the NFT owners. For instance, if you're planning on creating an NFT of a baby monkey that gradually grows into an adult monkey, you need the JSON metadata stored on a server you control in order to make these gradual changes. The next time the NFT owners open up their wallets, they will see another image, but they will be delighted rather than outraged. My point is, as long as you trust the creators of your NFT (which are guaranteed by the Verified flag of the on-chain Creators attribute), mutable off-chain data can be genuine. But of course, always do your own research.
Okay, surely we've now reached the final representation of an NFT in Solana?
For the most part, yes, but not quite.
There is another important account offered by the Token Metadata Program, derived from the mint account (using a PDA).
In fact, the account located at that PDA can be one of two different types: it can either be a "Master Edition" or an "Edition".
A "Master Edition", also known as an "Original Edition", is an NFT that can be duplicated by its owner a certain number of times, dictated by the "Max Supply" attribute.
An "Edition", also known as a "Print Edition", is an NFT that was duplicated from a "Master Edition". Whenever a new "Edition" is created, it keeps track of its parent "Master Edition" and its edition number. It also increases the supply of its parent "Master Edition". Once the supply reaches the max supply, no more NFTs can be printed that way. Note that the "Max Supply" of a "Master Edition" can be null, which means an unlimited number of NFTs can be printed from it.
Also note that another, lesser-known, PDA account called the "Edition Marker" exists on "Edition" NFTs to ensure there is no overlap between the edition numbers of a given "Master Edition".
One use-case for this feature is to allow artists to sell more than one copy of their art. For instance, they can release 100 limited editions of their 1/1 NFT and each of them will keep track of their edition number on-chain.
It is worth noting that printing multiple editions of an NFT is totally optional and most NFTs out there will set their "Max Supply" to 0 to prevent using it.
So why did I decide to mention "Master Editions" and "Editions" in this, rather long, article? Because, whilst printing editions is their primary purpose, they are responsible for more than just that.
If you remember, we said earlier that for a "Mint Account" to be an NFT, it needs to have zero decimals and a supply that will never exceed one.
Well, the Token Metadata Program uses the edition PDA to guarantee these properties. When creating the edition PDA account (whether it's a "Master Edition" or an "Edition"), it will check that the mint account has zero decimals and a supply of exactly one. If either of these conditions fails, it will refuse to create the account. If they succeed, it will transfer the "Mint Authority" to that new edition PDA, ensuring no wallet can ever mint additional tokens. What that means is that the simple fact that a "Master Edition" or "Edition" account exists on a mint account is proof of its non-fungibility.
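These checks can be sketched in a few lines. This is a conceptual model of the behaviour described above, not the Token Metadata Program's actual code (names like `create_master_edition` are illustrative):

```python
# A sketch of the edition checks: the edition account is only created
# when the mint already looks like an NFT, and creating it takes over
# the mint authority so the supply is frozen at one.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MintAccount:
    decimals: int
    supply: int
    mint_authority: Optional[str]

def create_master_edition(mint: MintAccount, edition_pda: str) -> None:
    if mint.decimals != 0:
        raise ValueError("NFT mints must have zero decimals")
    if mint.supply != 1:
        raise ValueError("NFT mints must have a supply of exactly one")
    # Transfer the mint authority to the edition PDA so that no wallet
    # can ever mint additional tokens.
    mint.mint_authority = edition_pda

mint = MintAccount(decimals=0, supply=1, mint_authority="CreatorWallet")
create_master_edition(mint, "EditionPda")
print(mint.mint_authority)  # EditionPda
```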
This is why "Master Edition" and "Edition" accounts are important in Solana NFTs and are worth mentioning in this article.
Here's a little update on our NFT diagram. As you can see, the "Mint Authority" is no longer null but points to the edition PDA, which can only be controlled by the open-sourced and audited Token Metadata Program.
We now have the full picture of what makes a mint account an NFT in Solana!
However, this article wouldn't be complete if I didn't mention that the Token Metadata Program also supports what we call "Semi-Fungible Tokens", or SFTs for short.
SFTs are basically the same as NFTs but without the non-fungibility guarantees we talked about earlier.
"But why?", you might ask, "I thought non-fungibility was the whole point?".
Well, it is for NFTs but, when you think about it, the core purpose of the "Metadata Account" is to add data to tokens. Why should we restrict that feature to non-fungible tokens only?
Why couldn't our fungible Avocado token (AVDO) from earlier add on-chain data to its mint account? It could use that data to let decentralised exchanges know which symbol to use, which external links to list, which logo to display and so on.
Another use case for this would be creating a gaming asset as a zero-decimal token. For instance, you could create a token for the "Wood" resource in your game. Since players should be able to own and trade more than one piece of wood, it makes little sense to have to create one NFT for every single piece of wood in your game. That's why making game resources fungible assets whilst still benefiting from on-chain data is super valuable.
As such, the Token Metadata Program allows us to create Metadata Accounts for mint accounts that are fungible. This is why it is the responsibility of the edition PDA to guarantee non-fungibility.
To make our lives easier, the Metadata Account keeps track of "how fungible" a token is via the "Token Standard" attribute. It can be one of the following values.
NonFungible
If the token standard is NonFungible, we know we are dealing with an NFT. This standard is applied if and only if a "Master Edition" or "Edition" account was created for that mint.
That means we have the following guarantees: the mint has zero decimals, its supply is exactly one and its mint authority has been transferred to the edition PDA.
FungibleAsset
If the token standard is FungibleAsset, we know we are dealing with a zero-decimal SFT. For instance, our "Wood" resource example would be marked as a FungibleAsset.
That means we have the following guarantees: the mint has zero decimals but its supply is not restricted to one.
Fungible
If the token standard is Fungible, we know we are dealing with a decimal SFT. For instance, the Avocado token or the USDC token would both fit in this category.
That means we have the following guarantees: the mint has more than zero decimals and its supply is not restricted.
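The three standards above follow directly from two properties: whether an edition account exists and whether the mint uses decimals. A sketch of that decision (a conceptual helper, not part of the Token Metadata Program's API):

```python
# Inferring the "Token Standard" from the two properties discussed above.
def token_standard(has_edition_account: bool, decimals: int) -> str:
    if has_edition_account:
        return "NonFungible"  # the edition PDA proves non-fungibility
    return "FungibleAsset" if decimals == 0 else "Fungible"

print(token_standard(True, 0))   # NonFungible
print(token_standard(False, 0))  # FungibleAsset, e.g. the "Wood" resource
print(token_standard(False, 2))  # Fungible, e.g. USDC or AVDO
```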
It is worth noting that the off-chain JSON metadata standard varies based on the "Token Standard" we've just discussed. You can find the JSON standard definition for each token type below.
Phew, what a journey!
Not only do we now have a full picture of what an NFT and an SFT look like in Solana, but we also understand why things are structured the way they are and how they compare to other models we are used to.
This article wouldn't be complete without a final diagram summarising what we've learned here.
And for the sake of making this diagram even more useful for developers out there, let's also add the offset and the size of each account attribute in bytes. I'll use a tilde (~) for variable account sizes.
Alright, I hope you've enjoyed this article and I'll see you soon for more Solana and Metaplex adventures! ⛵️
I see a lot of hate online for NFTs and the crypto world in general. For a very long time, I myself was completely indifferent to this universe and so it was easy to convince me I shouldn't invest too much time learning about it.
However, less than a year ago, I decided to form my own opinion on the matter and invested time figuring out as much as I could. My interest was piqued very quickly and I continued to dig deeper into this world until I finally decided to quit my web2 job to dedicate my entire focus to web3.
Now, this article is not about me trying to convince anyone that crypto is great or that everyone should focus their career on web3. Far from it. This article is about explaining one of the reasons blockchains and NFTs piqued my interest, and that is: a new way of owning data.
I'll go through why I believe this is a new shift in our technology and, in a follow-up article, how this is materialised in Solana, which is the blockchain on which I decided to invest my time.
Let's start by checking how one traditionally owns something that does not have a physical form. That could be a song, a game, a movie, a particular clip of your favourite TV show, a digital illustration, a 3D animation, an icon, a website template, a license to use some software, a tweet, an event from your own life, a receipt, an expense, a tax return, a spreadsheet, a slide from your deck, a message on a chat; I'll stop here.
Let's start by discarding virtual ownership that originates from physical ownership. For instance, you buy a song as a CD or vinyl and upload its content onto your computer. Yes, you now own the virtual copy of that song, but as a proxy of the physical good. The same goes for digital assets that only live on your computer since their ownership is essentially an extension of your computer's ownership.
However, these assets are getting less frequent as hardware becomes more and more of a commodity. Conveniently, if you lose your smartphone or break your laptop, you can buy a new one and have an exact replica of its content in no time.
This means your digital assets are held somewhere else and you trust some company or external entity not to mess with them.
Similarly, ownership of assets is increasingly becoming on-demand, meaning we pay a subscription to some company to access a huge number of songs, movies, TV shows, services, etc. We never really own them but that's okay because it's convenient and it makes sense. In fact, the companies that offer these assets to us very often don't own them themselves. Netflix may buy an expiring license for a movie to get the rights to share it on its platform, therefore never really owning the movie in the first place.
Last but not least, there's a lack of ownership of just about anything anyone enters on any application. That receipt you've uploaded to your accounting software, that image you posted on Instagram, that Facebook status; you get the picture.
The gist here is that most of the data we generate or consume never really belongs to us.
More often than not, the answer is no. Whilst data is powering large corporations, at an individual level, it rarely affects us and it's often too convenient to care. I will also add that I am not one of those individuals who value privacy above all. I am willing to give away a reasonable amount of privacy when it makes my life more convenient.
Thus, we might not have data ownership, but we usually have (or at least should have) enough data privacy that it doesn't matter.
Okay, so if we're (mostly) happy with the way things are, why do we want to change things? Why do we need crypto for data ownership? Why does this even matter?
I don't see crypto as a replacement or an enhancement of the technology we currently have (at least not in the near future), but as an additional tool we can leverage to unlock new ways of owning things that, ironically, mimic the way we used to own them physically.
It opens up new opportunities in the digital world that (whether we like it or not) represents a big part of the human species, and thatâs why I think it matters.
In my opinion, NFTs are the primary reason crypto gained so much traction over the past few years. Whilst the concept of Non-Fungible Tokens is not new, it is its application to the digital art market that made the concept known worldwide. All of a sudden, you could sell a digital asset to someone and give them irrevocable proof that they (and only they) owned it. What could only be done physically, by going to an art gallery and purchasing a physical piece, could now be done with any asset, made by anyone, anywhere in the world.
Yes, you could argue that a digital asset cannot be hung on the wall for your guests to see but the reality is most people value their social walls far more than their physical ones. Sadly, the former tend to be visited a lot more.
I hear this a lot so I'd like to touch on that a little bit. Yes, it is possible (on any blockchain) for anyone to upload someone else's work and sell it to someone who has no clue what they are buying.
But how does that differ from someone buying a fake painting whilst thinking it was the original one? Yes, there are experts and certificates to prevent that but these are also prone to human errors and maliciousness.
On the other hand, with NFTs, the artist signs their art using cryptography before selling it. That means anyone can verify that a piece of digital art was indeed offered by the original artist and nothing can be done to alter that. An ill-informed user can still be scammed by not knowing how to ensure an NFT is an original, but this is something we should aim to improve by educating buyers instead of scaring them with "all NFTs are scams".
NFTs are also very volatile. You can buy a piece of digital art for thousands of dollars from a verified artist and it could be worth nothing the very next day. But then again, so is physical art. Hype and FOMO are not new and certainly not specific to NFTs. Hype can scale to new orders of magnitude through social media but that's what progress looks like. We should be careful, make informed purchases and help each other out, but gatekeeping will get us nowhere.
On that note, let's move on to another industry currently affected by this new way of owning data.
Gaming is another good use case for decentralised ownership of data because it allows players to own and exchange in-game items in real-life marketplaces.
This is huge. If you've ever played games such as MMORPGs that have their own in-game marketplace with their own in-game currency, you might have noticed how clever these economic bubbles can be and how players really enjoy coming up with new creative ways of making the game work in their favour. Imagine this same level of entertainment, except the money you make in-game translates to real money.
Additionally, games can create loyalty and scarcity by releasing limited edition assets such as skins, special packs, levels, potions, spells, etc. These can even help the game creators fund themselves to make the game free and accessible to everyone.
Another great advantage of having decentralised assets is that they are not locked into a specific game. Other games can also leverage them in ways that benefit the owners of these assets. Say Fortnite releases a skin as an NFT; nothing stops Mario Kart from unlocking a new matching skin in their game for the owner of that NFT. This would create exciting gaming ecosystems where even massive corporations and indie developers can have synergies together.
I could talk about NFTs and gaming for hours so I'll stop here but hopefully, you can see why I'm excited about this topic.
The excitement doesnât stop here. Digital art and gaming are the first organic use cases of NFTs but ultimately, all an NFT is, is a secure proof that someone owns a digital asset.
That digital asset can be anything and so it opens up a whole new era of data ownership where the only limit is our imagination. Let's have a look at some examples.
A company could release a limited amount of NFTs that give their owners special offers on its products. Imagine a Sephora NFT giving you 10% off all their products or a Prada NFT that only allows NFT owners to purchase certain clothes. The cool thing is, because NFTs can have secondary sales royalties, businesses can make up for the given discounts and even adjust that discount accordingly. Say, if lots of people are exchanging their Sephora NFTs, then the discount goes up to 15%.
Owning an NFT could make you a full member of an organisation (decentralised or not). Depending on the type of membership, this could grant you special access to things, moderation rights, admin rights, etc. In fact, an organisation could offer multiple types of NFTs based on the permission level they want to offer. Note that because NFTs are exchangeable, someone with lots of money could buy all the admin-permission NFTs and take complete control of the organisation (I guess we could compare that to shares in a company). That's why Decentralised Autonomous Organisations (DAOs) adopt different models instead, such as using voting tokens that need to be staked for some time to get voting power.
That's similar to the point above but only focuses on granting the NFT owners access to special resources. For instance, you could fund your blog by releasing, say, 1000 NFTs granting access to special articles. You could even make a decent passive income through secondary sales royalties.
This one might not please everyone but some accounting data could live on the blockchain, giving us a source of truth that cannot be toyed with. One example of accounting data as NFTs could be invoices and receipts. As business A, you could create an NFT that represents an invoice another business B has to pay for your services. The price of the NFT would equal the total amount of the invoice. Business B would then purchase that NFT, paying the invoice and keeping the NFT as proof of payment, i.e. a receipt. As it is possible to break down NFTs into multiple shares using vaults, business B could even pay in multiple instalments and the NFT would only unlock once 100% of the invoice has been paid. There would be a lot of things to sort out legal-wise but again, our imagination is the real limit here.
There is so much synergy between web3 and social media that I'm just going to focus on one example. Imagine you post a picture of your holiday in the Bahamas and mint it as an NFT. Now you literally own that life event on the blockchain as a souvenir. You can even share that NFT with friends using a vault or using what we call "Semi-Fungible Tokens", which allow more than one owner for a given NFT. Now you're probably thinking, who would buy that? To which I'd reply, who would buy that NFT you minted yesterday in a month? Just kidding; my point is, maybe the value of an NFT isn't always to sell it later. Maybe having an asset in your wallet and being a proud owner of it is enough. Especially with some of the new virtual environments that allow people to showcase their NFTs to the world. That being said, some of these life events or statuses would actually have monetary value. If a famous influencer wanted to sell the souvenir of their holidays in the Bahamas as an NFT, rest assured that followers would fight for it. The only caveat I will add here is that there is a tremendous amount of data in social media, and if a significant portion of it ended up being stored on any blockchain, it could damage that blockchain. Not so much because of the traffic but because of how big the blockchain state would become for the nodes to maintain. Nevertheless, nothing progress can't improve.
Okay, that one is a bit weird but hear me out. You could, for instance, give 50% of your total net worth to a limited amount of NFT owners in your will. Then you'd give one NFT to each member of your family (or people you care about). And just like that, they can sort things out on their own and exchange them with one another if they need the money right now. For very rich individuals, this could even become a way to trade shares in the net worth of a person rather than a company, where people speculate on the amount of money you'll be worth when you pass away. Slightly morbid, I'll give you that.
Hopefully, some of the examples I provided resonated with you and you can see why I think NFTs are a real shift in our technology.
Should everything become an NFT? Of course not. But we now have a new tool in our belt that unlocks a new way of owning data and therefore unlocks applications that could never have been built before.
Anyway, that's one of the reasons why I'm excited about web3 and I hope I managed to share some of my enthusiasm with you.
In a follow-up article, I will focus on how we can achieve all of this on the Solana blockchain at a high level - no code but lots of diagrams.
See you there!
If you've ever tried to get all accounts from a Solana program, you've probably noticed that you get everything in one go without being able to limit or offset the accounts retrieved. Additionally, we cannot control the order in which we receive them.
This makes it almost impossible for us to paginate and/or order accounts from a given program. Almost impossible.
In this article, we will go through different techniques to learn how to optimise our calls to the Solana cluster to support pagination and account ordering.
Before diving into these techniques, let's have a quick look at the tools available for querying the Solana cluster. The rest of the article will focus on how to use and arrange these tools in a way that benefits us.
Let's start by having a look at the RPC methods we'll use to query the cluster.

- `getProgramAccounts`. This RPC method allows us to fetch all accounts owned by a given program. For instance, if you've followed the series on how to create a Twitter dApp in Solana, this could fetch all the `Tweet` accounts of our program.
- `getAccountInfo`. This RPC method allows us to get the account information and data from a given public key.
- `getMultipleAccounts`. This RPC method does the same thing as `getAccountInfo` except that it retrieves multiple accounts from a provided list of public keys. This enables us to retrieve a bunch of accounts in only one API call, improving performance and avoiding rate limiting. Note that the maximum number of accounts we can retrieve in one go using this method is 100.

Some of the RPC methods above support additional parameters to either filter or slice the accounts retrieved.
- `dataSlice`. This parameter limits the data retrieved for each account. It expects an object containing an `offset` - where the data should start - and a `length` - how long the data should be. For example, providing `{ offset: 32, length: 8 }` will only retrieve 8 bytes of data starting at byte 32 for every account. This parameter is available on both the `getProgramAccounts` and `getMultipleAccounts` RPC methods.
- `dataSize`. This parameter is a filter that only selects accounts whose data is of the given size in bytes. This filter is only available on the `getProgramAccounts` RPC method. You can read more about this filter here.
- `memcmp`. This parameter is a filter that only selects accounts whose data matches the provided buffer at a given position. This filter is only available on the `getProgramAccounts` RPC method. You can read more about this filter here.

Alright, these are the tools at our disposal, now let's use them!
Let's start by trying to fetch all `CandyMachine` accounts from the Candy Machine V2 program by Metaplex.
This is a particularly interesting exercise because there are thousands of accounts in that program and some of them hold huge amounts of data.
If we wanted to fetch all accounts within the Candy Machine V2 program, here's how we could do it.
import { Connection, clusterApiUrl, PublicKey } from '@solana/web3.js';
const candyMachineV2Program = new PublicKey('cndy3Z4yapfJBmL3ShUp5exZKqR3z33thTzeNMm2gRZ');
const connection = new Connection(clusterApiUrl('mainnet-beta'));
const accounts = await connection.getProgramAccounts(candyMachineV2Program);
I don't recommend running this piece of code. If you do, chances are your browser tab will stop working after having downloaded hundreds of megabytes of data.
Also, note that we didn't need to provide a `dataSize` or a `memcmp` filter to ensure we retrieve `CandyMachine` accounts, because that's the only type of account available in the Candy Machine V2 program. That being said, this won't always be the case, and it's good practice to be explicit about which accounts we're looking for. So let's add a filter anyway.
We can't use a `dataSize` filter here because the size of a `CandyMachine` account is not static and depends on its content. So we need to use the `memcmp` filter on the first 8 bytes to compare the hash of the account type - called the discriminator.
Since this is an Anchor program, the account discriminator should be the first 8 bytes of the SHA-256 hash of `"account:CandyMachine"`. So let's compute that discriminator and ensure it is present in the first 8 bytes of every account we retrieve using a `memcmp` filter.
import { sha256 } from "js-sha256";
import bs58 from 'bs58';
// ...
const candyMachineDiscriminator = Buffer.from(sha256.digest('account:CandyMachine')).slice(0, 8);
const accounts = await connection.getProgramAccounts(candyMachineV2Program, {
filters: [
{ memcmp: { offset: 0, bytes: bs58.encode(candyMachineDiscriminator) } }, // Ensure it's a CandyMachine account.
],
})
Note that Anchor adds this filter automatically for you when using the API it provides - e.g. `program.account.candyMachine.all()`.
Okay, that's all nice, but we haven't fixed our issue, as running the code above will still try to fetch all the data from all candy machines ever created in this program.
It's time we explore how to paginate this.
The key is to pre-fetch the accounts once without any data.
You might need the account data later on, but pre-fetching them without data will allow us to scan all the accounts we need and paginate them by fetching their data page by page.
So how do we do this?
Easy! All we need to do is provide a `dataSlice` parameter with a length of zero.
const accounts = await connection.getProgramAccounts(candyMachineV2Program, {
dataSlice: { offset: 0, length: 0 }, // Fetch without any data.
filters: [
{ memcmp: { offset: 0, bytes: bs58.encode(candyMachineDiscriminator) } }, // Ensure it's a CandyMachine account.
],
})
Note that I kept our previous `memcmp` filter but this will work with any filters you want.
And that's it! Now we should have the entire list of `CandyMachine` accounts within the program. Because we didn't ask to retrieve their data, this call is much faster and requires much less memory than the previous one. In addition, it will be significantly more performant when dealing with heavy accounts and/or as the program accumulates more and more accounts.
Now what?
Well, first of all, we get the total count of our filtered accounts for free.
const accountsInTotal = accounts.length
This will be helpful when paginating the accounts as we'll know when we've reached the last page.
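As a quick sanity check, the number of pages follows directly from that count - a small sketch with made-up numbers:

```javascript
// Sketch: deriving the total number of pages from the pre-fetched account
// count. The numbers below are made up for illustration.
const accountsInTotal = 25; // e.g. accounts.length from the pre-fetch call
const perPage = 6;
const totalPages = Math.ceil(accountsInTotal / perPage);
console.log(totalPages); // 5 -> four full pages of 6 and a last page of 1
```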
Additionally, and most importantly, we might not get any account data but we do get the public key of each account.
const accountPublicKeys = accounts.map(account => account.pubkey)
That means we now have enough information to fetch the missing data page by page.
Now that we have the exhaustive list of public keys that interest us, let's implement a `getPage` function which will return all accounts in a given page with their data.
// ...
const accountPublicKeys = accounts.map(account => account.pubkey)
const getPage = async (page, perPage) => {
// TODO: Implement.
}
First, we need to slice all public keys within the requested page. We can achieve this by using the `slice` method on the `accountPublicKeys` array. Additionally, if the given page is out of bounds, `slice` will return an empty array, so let's return early if that's the case.
const getPage = async (page, perPage) => {
const paginatedPublicKeys = accountPublicKeys.slice(
(page - 1) * perPage,
page * perPage,
);
if (paginatedPublicKeys.length === 0) {
return [];
}
// TODO: Continue implementing.
}
Next, we can use the `getMultipleAccounts` RPC method to fetch all of the accounts within the page. This method is called `getMultipleAccountsInfo` on the JavaScript client.
const getPage = async (page, perPage) => {
const paginatedPublicKeys = accountPublicKeys.slice(
(page - 1) * perPage,
page * perPage,
);
if (paginatedPublicKeys.length === 0) {
return [];
}
const accountsWithData = await connection.getMultipleAccountsInfo(paginatedPublicKeys);
return accountsWithData;
}
And just like that, we can fetch our account data page by page!
const perPage = 6
const page1 = await getPage(1, perPage)
const page2 = await getPage(2, perPage)
// ...
Remember that the `getMultipleAccounts` RPC method can only accept a maximum of 100 public keys. If you need more accounts within a page, you will need to split the `paginatedPublicKeys` array into chunks of 100 and make a `getMultipleAccounts` call for each of these chunks.
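A chunking helper for that case could look like this - a minimal sketch where plain strings stand in for `PublicKey` objects:

```javascript
// Sketch: splitting public keys into chunks of at most 100 so that each
// chunk fits into a single getMultipleAccounts call.
function chunkArray(array, size) {
  const chunks = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

// 250 fake keys -> two full chunks of 100 and one chunk of 50.
const paginatedPublicKeys = Array.from({ length: 250 }, (_, i) => `key-${i}`);
const batches = chunkArray(paginatedPublicKeys, 100);
console.log(batches.length); // 3
```

You would then call `getMultipleAccountsInfo` once per chunk and concatenate the results.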
So far, we've seen how to paginate and/or access subsets of all the accounts in a program. However, no assertions can be made about the order in which we receive these accounts. In fact, calling the exact same endpoint twice can return the accounts in different orders.
So how do we get some control over the order in which we retrieve our accounts?
The key here is to add a little bit of data in the pre-fetch call that scans all available accounts. We need just enough data to successfully reorder our array of public keys before we paginate them or select a subset of them.
Let's go back to our previous example. This time, we'll want to fetch all `CandyMachine` accounts ordered by descending price, meaning we'll have the most expensive candy machine first and the cheapest last.
To achieve this, we need to slice the `price` property of every account before paginating them. If we look at the following `CandyMachine` structure inside the program, we can find out exactly where that `price` property resides in the array of bytes.
#[account]
#[derive(Default)]
pub struct CandyMachine { // 8 (discriminator)
pub authority: Pubkey, // 32
pub wallet: Pubkey, // 32
pub token_mint: Option<Pubkey>, // 33
pub items_redeemed: u64, // 8
pub data: CandyMachineData, // See below
}
#[derive(AnchorSerialize, AnchorDeserialize, Clone, Default)]
pub struct CandyMachineData {
pub uuid: String, // 4 + 6
pub price: u64, // 8
// ...
}
From the code above, we can see that the `price` property is located at byte 123 (8 + 32 + 32 + 33 + 8 + 4 + 6) and that it uses 8 bytes - or 64 bits. If you're struggling to understand how I came up with the number of bytes for each property, you might benefit from reading this article and this one too.
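The offset arithmetic is easy to get wrong, so here is the same sum spelled out as code - a sketch mirroring the struct above:

```javascript
// Sketch: summing the serialised size of everything stored before the
// price property in the CandyMachine account.
const DISCRIMINATOR = 8;   // Anchor account discriminator
const AUTHORITY = 32;      // Pubkey
const WALLET = 32;         // Pubkey
const TOKEN_MINT = 1 + 32; // Option<Pubkey>: 1 tag byte + 32 bytes
const ITEMS_REDEEMED = 8;  // u64
const UUID = 4 + 6;        // String: 4-byte length prefix + 6 characters

const priceOffset =
  DISCRIMINATOR + AUTHORITY + WALLET + TOKEN_MINT + ITEMS_REDEEMED + UUID;
console.log(priceOffset); // 123
```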
Now that we know where the `price` property is located, we can use this in our `dataSlice` parameter to fetch only that price and nothing else.
const accounts = await connection.getProgramAccounts(candyMachineV2Program, {
dataSlice: { offset: (8 + 32 + 32 + 33 + 8 + 4 + 6), length: 8 }, // Fetch the price only.
filters: [
{ memcmp: { offset: 0, bytes: bs58.encode(candyMachineDiscriminator) } }, // Ensure it's a CandyMachine account.
],
})
Unfortunately, in this particular case, it doesn't quite work as expected. I wish it did, to keep things simple in this article, but the truth is that issues like this come up frequently when reading accounts whose structure we don't control, and it's good to know how to tackle them.
Okay, so what's wrong here?
Take a look at the `token_mint` property on the account.
#[account]
#[derive(Default)]
pub struct CandyMachine { // 8 (discriminator)
pub authority: Pubkey, // 32
pub wallet: Pubkey, // 32
pub token_mint: Option<Pubkey>, // 33
pub items_redeemed: u64, // 8
pub data: CandyMachineData, // See below
}
We can see it defines an optional public key. It uses one byte to determine whether there is a public key and 32 bytes to store the public key itself - hence the 33 bytes required in total.
So when `token_mint` contains a public key, it writes the number 1 on the first byte and the public key on the other 32 bytes. However, when `token_mint` doesn't contain a public key, it writes the number 0 on the first byte and that's it! It will not use any more storage than that because it does not need to. And that's where the problem is!
Because the `price` property is located after the `token_mint` property, whether or not the latter contains a public key will affect the location of the `price` property!
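To make the shift concrete, here is the price offset computed for both scenarios - a sketch assuming the tag-byte Option encoding described above:

```javascript
// Sketch: the price offset depends on whether token_mint holds a public key.
// Some(pubkey) serialises to 33 bytes (1 tag byte + 32), None to just 1 byte.
function priceOffset(hasTokenMint) {
  const tokenMintSize = hasTokenMint ? 1 + 32 : 1;
  return 8 + 32 + 32 + tokenMintSize + 8 + (4 + 6);
}

console.log(priceOffset(true));  // 123
console.log(priceOffset(false)); // 91
```

A 32-byte difference, so a single `dataSlice` offset cannot serve both kinds of accounts.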
Note that other programs may tackle optional properties differently in order to get around this issue and make their other properties more "searchable". For example, some programs use only 32 bytes for optional public keys and store `PublicKey::default()` to indicate an empty state. Additionally, some programs might store their fixed-size properties first to ensure they are not shifted by properties of variable length. That's not to say this program didn't have valid reasons to structure its accounts that way, but just to let you know that there are other options with their own trade-offs.
Okay, enough chitchat, how do we fix this?
One way would be to fetch all the data between the `token_mint` property and the `price` property (included). That way we can analyse the first byte of the `token_mint` property and slice the `price` property at the right location. This can be problematic when the two properties are far away from each other, as you'll end up slicing much more data than you actually need.
Whilst the two properties are relatively close here, let's have a look at another way to fix this. Since we've only got two scenarios to handle, why not make two different calls to the `getProgramAccounts` RPC method? One where `token_mint` is empty and one where it's not. We know that the first byte of the `token_mint` property - determining if there is a public key or not - is located at byte 72 (8 + 32 + 32). Therefore, we just need a `memcmp` filter that compares that byte with 0 and 1 respectively. Then all we need to do is merge the two arrays together.
import bs58 from 'bs58';
import BN from 'bn.js';
const accountsWithTokenMint = await connection.getProgramAccounts(candyMachineV2Program, {
dataSlice: { offset: 8 + 32 + 32 + 33 + 8 + 4 + 6, length: 8 }, // Fetch the price only.
filters: [
{ memcmp: { offset: 0, bytes: bs58.encode(candyMachineDiscriminator) } }, // Ensure it's a CandyMachine account.
{ memcmp: { offset: 8 + 32 + 32, bytes: bs58.encode((new BN(1, 'le')).toArray()) } }, // Ensure it has a token mint public key.
],
});
const accountsWithoutTokenMint = await connection.getProgramAccounts(candyMachineV2Program, {
dataSlice: { offset: 8 + 32 + 32 + 1 + 8 + 4 + 6, length: 8 }, // Fetch the price only.
filters: [
{ memcmp: { offset: 0, bytes: bs58.encode(candyMachineDiscriminator) } }, // Ensure it's a CandyMachine account.
{ memcmp: { offset: 8 + 32 + 32, bytes: bs58.encode((new BN(0, 'le')).toArray()) } }, // Ensure it doesn't have a token mint public key.
],
});
const accounts = [...accountsWithoutTokenMint, ...accountsWithTokenMint];
There are a few things to notice here.
- The `dataSlice` parameter varies between the two calls to account for the byte shift created by the `token_mint` property. Notice how `33` becomes `1` on the second call.
- We pass `'le'` as a second parameter to the `BN` class so it knows the number should be encoded using little endian.
- The first byte of `token_mint` is always either 0 or 1 because the program enforces that constraint. However, if this wasn't the case, then we would need to use the first approach mentioned above and slice more data.

Phew! That was one tough parenthesis! I'm glad we went through it though, because this is typically the sort of exercise you can expect when trying to make optimised calls to a decentralised cluster.
Right, let's go back to ordering our accounts by descending price. So far, we've managed to use `getProgramAccounts` (twice) to list all `CandyMachine` accounts in the program whilst including the 8 bytes that store their price (in lamports).
Now, all we need to do is parse these 8 bytes and sort them in descending order to reorder our array of public keys. Let's start by parsing the price of each account using the `map` method.
const accountsWithPrice = accounts.map(({ pubkey, account }) => ({
pubkey,
price: new BN(account.data, 'le'),
}));
Here again, we use the bn.js library to parse an array of little-endian bytes into a `BN` object. We won't try to convert this into a JavaScript number because 8 bytes have the potential of creating an integer bigger than `Number.MAX_SAFE_INTEGER`.
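If you'd rather avoid a big-number library, modern JavaScript can also parse these 8 bytes natively with BigInt - a sketch using DataView:

```javascript
// Sketch: parsing 8 little-endian bytes into a u64 using the platform's
// native BigInt support instead of bn.js.
function parseU64LE(bytes) {
  const view = new DataView(Uint8Array.from(bytes).buffer);
  return view.getBigUint64(0, true); // true => little endian
}

// These 8 bytes encode 1,000,000,000 lamports (1 SOL).
const lamports = parseU64LE([0x00, 0xca, 0x9a, 0x3b, 0, 0, 0, 0]);
console.log(lamports); // 1000000000n
```

The article sticks with `BN` since that is what the Solana JavaScript ecosystem commonly uses.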
Next, we will order this `accountsWithPrice` array using the `sort` method. This method expects a callback that, given two array items, returns -1, 0 or 1 depending on whether the first item should come before, stay at the same position as, or come after the second item. Fortunately, the `BN` object provides a `cmp` (compare) method that does just that.
const sortedAccountsWithPrice = accountsWithPrice.sort((a, b) => b.price.cmp(a.price));
Here we compare the price of `b` with the price of `a` in order to get the descending order. Returning `a.price.cmp(b.price)` would generate the opposite order.
Finally, we can now extract the public keys of this ordered array of accounts.
const accountPublicKeys = sortedAccountsWithPrice.map((account) => account.pubkey);
With that sorted `accountPublicKeys` array in hand, we can reuse our asynchronous `getPage` method to fetch accounts in the desired order.
const top20ExpensiveCandyMachines = await getPage(1, 20);
Querying the Solana cluster can be tricky but, with the right tools and the right techniques, we can achieve more complex and/or performant queries.
This is a terrible comparison on many levels but, coming from web 2, it does help me to compare the tools and techniques weâve seen in this article with SQL clauses to summarise their purpose at a higher level. Please take the table below and the analogy it represents with a tablespoon of salt.
| Solana tools and techniques | SQL clauses analogy |
| --- | --- |
| `getProgramAccounts(programId)` | `SELECT * FROM programId` |
| `dataSlice` | `SELECT` less data |
| Filters (`dataSize` and `memcmp`) | `WHERE` |
| Pre-fetch with no data + `getMultipleAccounts` | `LIMIT` and `OFFSET` |
| Pre-fetch with some data + sort data + `getMultipleAccounts` | `ORDER BY` |
See you soon for more Solana adventures!
So far in this series, we've only developed our decentralised application (dApp) locally on our machine. Now that our dApp is ready, let's learn how to deploy it so everyone can benefit from it.
In Solana, there are multiple clusters we could deploy to. The main one, which everybody uses with real money, is called "mainnet". Another one, called "devnet", can be used to test our program on a live cluster that uses fake money.
When deploying dApps, it is common to first deploy to the devnet cluster, as you would to a staging server, and then, when you're happy with everything, deploy to the mainnet cluster, analogous to a production server.
In this episode, we're going to learn how to deploy to the devnet cluster. However, deploying to mainnet is a very similar process, so you should also be able to do that by the end of this episode. Additionally, I'll make sure to add a note any time there's a little difference between the two.
Alright, let's do it!
The first thing to do is to change our cluster from localhost to devnet. We need to do this in two places: in the terminal and in our program's configuration.
The former is easy to do. We simply need to run this command in our terminal to let Solana know we want to use the devnet cluster.
solana config set --url devnet
# Outputs:
# Config File: /Users/loris/.config/solana/cli/config.yml
# RPC URL: https://api.devnet.solana.com
# WebSocket URL: wss://api.devnet.solana.com/ (computed)
# Keypair Path: /Users/loris/.config/solana/id.json
# Commitment: confirmed
Note: for mainnet, run `solana config set --url mainnet-beta` instead.
From this point on, any `solana` command that we run in our terminal will be executed on the devnet cluster. That includes `solana airdrop`, `solana deploy`, etc.
This means that we no longer need to start a local ledger on our machine - using `solana-test-validator` or `anchor localnet` - to interact with the blockchain.
The next place we need to change our cluster is in the `Anchor.toml` file of our program.
If you look inside that file, you should currently see the following.
[programs.localnet]
solana_twitter = "2BDbYV1ocs2S1PsYnd5c5mqtdLWGf5VbCYvf28rs9LGj"
# ...
[provider]
cluster = "localnet"
wallet = "/Users/loris/.config/solana/id.json"
- Under `[provider]`, we're telling Anchor to use the localnet cluster and where to find the wallet that should be used to pay for transactions and storage.
- Under `[programs.localnet]`, we're giving Anchor the program ID - i.e. the public key - of our `solana_twitter` program.

It's important to notice that the program ID is provided under the context of a cluster - here, the localnet cluster. This is because the same program could be deployed to different addresses depending on the cluster. For instance, you could use a different program ID for the mainnet cluster so that only a few restricted members have the right to deploy to mainnet.
But hang on a minute, that program ID is public, right?
That's true! The program ID is public, but its keypair is located in the `target/deploy` folder and Anchor uses a naming convention to find it. If your program is called `solana_twitter`, then it will try to find the keypair located at `target/deploy/solana_twitter-keypair.json`. If that file cannot be found when deploying your program, a new keypair will be generated, giving us a new program ID. This is exactly why we had to update the program ID after the very first deployment.
Whilst we didn't pay much attention to that `target/deploy/solana_twitter-keypair.json` file before, it is important to acknowledge that this file is the proof that you own the program at this address. If you deploy to mainnet using this keypair and someone else gets hold of it, that person will be able to deploy any changes they want to your program.
In our case, we'll keep things simple and use the same keypair for all clusters, but I would recommend using a different keypair for at least the mainnet cluster.
[programs.localnet]
solana_twitter = "2BDbYV1ocs2S1PsYnd5c5mqtdLWGf5VbCYvf28rs9LGj"
[programs.devnet]
solana_twitter = "2BDbYV1ocs2S1PsYnd5c5mqtdLWGf5VbCYvf28rs9LGj"
[programs.mainnet]
solana_twitter = "2BDbYV1ocs2S1PsYnd5c5mqtdLWGf5VbCYvf28rs9LGj"
# ...
[provider]
cluster = "devnet"
wallet = "/Users/loris/.config/solana/id.json"
As you can see in the code above, we duplicated the `[programs.localnet]` section twice: once for `[programs.devnet]` and once for `[programs.mainnet]`. Then we updated the `cluster` to "devnet" under `[provider]`.
Note: for mainnet, simply set `cluster` to "mainnet".
And that's it! We're now on the devnet cluster.
Before we can deploy our program to devnet, we need to get some money on this cluster.
If you remember, we used the `solana airdrop` command in the past to give ourselves some money on our local cluster. Well, we can do the same on the devnet cluster, except that the command is limited to around 5 SOL at a time.
So let's give ourselves some SOL on devnet. We'll do that for both of our wallets: the one we use on our local machine to deploy the program, and our "real" wallet that we use in the browser to make transactions.
For the former, we don't need to specify the address because it is located at `~/.config/solana/id.json`, which is the default place to look for your machine's keypair. Therefore, we can run the following.
solana airdrop 5
Note that if we needed more than 5 SOL, we could run that command again. It's just that we can't get too many SOL at a time.
If you get the error `Error: unable to confirm transaction. This can happen in situations such as transaction expiration and insufficient fee-payer funds`, it often means that the devnet faucet is drained and you should try again a bit later. You can also try requesting fewer SOL and see if it works. Feel free to check the Solana Discord as well for updates on when it's replenished.
For the other wallet - i.e. our "real" wallet - we can run the same command, but we need to specify its address as a second argument. For me, it looks like this.
solana airdrop 5 B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN
We're now ready to deploy!
Note: for mainnet, we cannot use the airdrop command since we're handling real money. Therefore, you would need to transfer some money to your local machine's wallet from your real wallet. Alternatively, you could import your real wallet locally on your machine and use it when deploying to mainnet by providing the path to its keypair in the `wallet` option of the `Anchor.toml` configuration file.
Let's finally deploy our program to devnet! As usual, we'll use the `anchor deploy` command now that Anchor knows which cluster to deploy to. Whilst this is not necessary, I always like to run `anchor build` before deploying to make sure I'm deploying the latest version of my code.
anchor build
anchor deploy
Congratulations! Your code is now available on the devnet cluster for everyone to use!
Now let's update our frontend so we can interact with our program in the devnet cluster instead of the local one.
To achieve that, we need to change the cluster URL inside the `useWorkspace` composable.
Replace the localhost URL with the following and, boom, our frontend is now using the devnet cluster as well!
const connection = new Connection('https://api.devnet.solana.com', commitment)
At this point, you should be able to send and read tweets from the devnet cluster using the VueJS application.
Now all that's left to do is deploy our frontend to a server somewhere so that other users can interact with it too. But first, let's talk about costs.
Earlier, we ran a command to airdrop 5 SOL to the wallet of our local machine so it would have enough money to deploy our program. Let's see how much that cost us. We can run the following command to check the balance of our local wallet.
solana balance
# Outputs:
# 2.361026
We now have 2.361026 SOL, which means deploying cost us 2.638974 SOL in total. At the time of writing, that's about $450.
Fortunately for us, we deployed on devnet where we can airdrop ourselves some money but if we wanted to deploy that to the mainnet cluster, we would need to pay that from our own pocket.
So why does it cost that much and do we have to pay that sort of money every time we deploy?
The reason it cost so much is that, just like when creating accounts, we need to allocate some storage on the blockchain to hold the code of our program. Once compiled, our code requires quite a few bytes, which means the rent-exempt money to pay is usually pretty high. On top of that, Solana defaults to allocating twice the amount of space needed to store the code. Why does it do that? Because our program is likely to have updates in the future and it is trying to account for the fact that, when we next deploy, we might need more space.
If necessary, we may change this by explicitly telling Solana how much space we want to allocate for our program.
Therefore, deploying for the first time on a cluster is an expensive transaction because of the initial rent-exempt money but, afterwards, deploying again should cost virtually nothing - i.e. the price of a transaction - because we've already paid for our storage.
Good, now that we know more about the economics of deploying, let's go back to deploying the frontend of our application.
If you look inside the `useWorkspace` composable, you will see that we import the IDL file generated by Anchor using a relative path that is outside of the `app` directory containing our VueJS application.
import idl from '../../../target/idl/solana_twitter.json'
This works on our machine because we built the program locally and therefore the `target` folder was properly created. However, when deploying our frontend to a server, it won't have access to this `target` directory. Therefore, we need to copy the generated IDL somewhere inside our `app` folder.
To make our life a little easier, let's add a custom `copy-idl` script inside our `Anchor.toml` file.
[scripts]
test = "yarn run ts-mocha -p ./tsconfig.json -t 1000000 tests/**/*.ts"
copy-idl = "mkdir -p app/src/idl && cp target/idl/solana_twitter.json app/src/idl/solana_twitter.json"
The first part of the script ensures the `app/src/idl` folder exists and the second part copies the IDL file inside it.
Now, every time we want to copy that IDL over to our frontend, all we have to do is run the following command. Let's do it now so we can access the IDL file in our frontend.
anchor run copy-idl
Finally, we need to update the import path of our IDL file inside the `useWorkspace` composable so it points to the new IDL location.
import idl from '@/idl/solana_twitter.json'
Note that there's a new feature, starting from Anchor v0.19.0, that allows us to specify a custom directory that the IDL file should be copied to every time we run anchor build.
[workspace]
types = "app/src/idl/"
That sounds perfect but, at the time of writing, it won't copy the program ID inside the IDL, which we need for our workspace. Therefore, I decided to go with a traditional copy script, but keep an eye out for this feature as I'm sure it will continue to improve.
Currently, we're having to manually update the cluster URL inside the useWorkspace composable every time we want to switch clusters.
It would be much better if this could be set dynamically. One way to achieve this would be to have a little dropdown on the application where users can select their cluster. However, I prefer having one explicit cluster defined for each environment I deploy to. For instance, solana-twitter.com would be using the mainnet cluster whereas devnet.solana-twitter.com would be using the devnet cluster.
Luckily, VueJS applications support multiple environments via the âmodeâ feature.
Here is how this works.
- We can create environment files inside the app directory: .env for local variables, .env.devnet for devnet variables and .env.mainnet for mainnet variables.
- When no mode is explicitly provided, VueJS falls back to .env, which is why we're using that one locally.
- Any variable prefixed with VUE_APP_ inside these environment files will be automatically injected in the process.env of our frontend.
- We can then update the useWorkspace composable to provide a cluster URL dynamically.
- Finally, we can add scripts to our package.json file to help us compile the frontend for all the different modes.

Okay, let's implement this.
(1) Start by adding the following files inside the app directory and copy/paste their content. We'll only define one variable that provides the URL of each cluster.
.env
VUE_APP_CLUSTER_URL="http://127.0.0.1:8899"
.env.devnet
VUE_APP_CLUSTER_URL="https://api.devnet.solana.com"
.env.mainnet
VUE_APP_CLUSTER_URL="https://api.mainnet-beta.solana.com"
(4) Next, update the useWorkspace composable to use that variable.
// ...
const clusterUrl = process.env.VUE_APP_CLUSTER_URL
const preflightCommitment = 'processed'
const commitment = 'processed'
const programID = new PublicKey(idl.metadata.address)
let workspace = null
export const useWorkspace = () => workspace
export const initWorkspace = () => {
const wallet = useAnchorWallet()
const connection = new Connection(clusterUrl, commitment)
// ...
}
(5) Finally, add the following scripts to your app/package.json file.
"scripts": {
"serve": "vue-cli-service serve",
"serve:devnet": "vue-cli-service serve --mode devnet",
"serve:mainnet": "vue-cli-service serve --mode mainnet",
"build": "vue-cli-service build",
"build:devnet": "vue-cli-service build --mode devnet",
"build:mainnet": "vue-cli-service build --mode mainnet",
"lint": "vue-cli-service lint"
},
Done! Now we can build our frontend for devnet using npm run build:devnet and it will automatically know to use the URL of the devnet cluster.
Note that if you currently have npm run serve running in a terminal, you will need to exit it (Ctrl+C) and run npm run serve:devnet instead so it uses the right cluster URL.
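Under the hood, vue-cli simply injects the mode's variables into process.env at build time. The helper below is hypothetical (our useWorkspace reads the variable directly), but it sketches the lookup-with-fallback idea:

```javascript
// Hypothetical helper: pick the cluster URL injected by the active mode,
// falling back to the local validator when the variable is missing.
const resolveClusterUrl = (env) =>
  env.VUE_APP_CLUSTER_URL || 'http://127.0.0.1:8899'

// Simulating a devnet build versus a build with no variable set.
console.log(resolveClusterUrl({ VUE_APP_CLUSTER_URL: 'https://api.devnet.solana.com' }))
console.log(resolveClusterUrl({}))
```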
Alright, now it's time to push our frontend application out to the world. There are a gazillion options for deploying a frontend app, so by all means feel free to use the method you prefer.
In my case, I'm used to deploying frontend-only applications using Netlify because it's free and it's honestly pretty amazing.
All you need to do is have your code in a repository somewhere and it will ask you for the command to run and the directory to serve. Nice and simple, like it should be.
Note that before deploying, I've added a little favicon and updated the metadata of the index.html file inside the app/public folder. Feel free to download and extract the following ZIP file into your public directory if you want to do the same.
Next, in my Netlify account, I've added a new site and provided the following options.

- Branch to deploy: main
- Base directory: app
- Build command: npm run build:devnet
- Publish directory: app/dist
And that's it. Our frontend has been deployed to a random subdomain of netlify.app. You can connect your own domain name for free or, if like me you're just using this as a learning project, you can update the Netlify subdomain to something a bit nicer. In my case, I've used solana-twitter.netlify.app.
Another nice thing about Netlify is that it will automatically trigger a new deployment every time you push a commit to your selected branch, which defaults to main.
And we're done! You can now share your URL with all your frens and start tweeting on Solana! 🔥🥳
This is an optional step, but it's good to know that you can publish your IDL file on the blockchain. This allows other tools in the Solana ecosystem to recognise your program and understand what it has to offer.
Here's an example with a Solana explorer I'm building. Even though the explorer knows nothing about our program, it can fetch the IDL file and decode the Tweet account accordingly to show some valuable information.
To publish your IDL file, all you need to do is run the following in the terminal.
anchor idl init <programId> -f <target/idl/program.json>
And if your program changes in the future, you can upgrade the published IDL by running:
anchor idl upgrade <programId> -f <target/idl/program.json>
Before I leave you, let's do a tiny recap of the development cycle we have been using when creating and deploying a dApp on Solana. I'll give it to you as code because, let's be honest, that's the best thing to read.
# Make sure you're on the localnet.
solana config set --url localhost
# And check your Anchor.toml file.
# Code…
# Run the tests.
anchor test
# Build, deploy and start a local ledger.
anchor localnet
# Or
solana-test-validator
anchor build
anchor deploy
# Copy the new IDL to the frontend.
anchor run copy-idl
# Serve your frontend application locally.
npm run serve
# Switch to the devnet cluster to deploy there.
solana config set --url devnet
# And update your Anchor.toml file.
# Airdrop yourself some money if necessary.
solana airdrop 5
# Build and deploy to devnet.
anchor build
anchor deploy
# Push your code to the main branch to auto-deploy on Netlify.
git push
We've done it! You can congratulate yourself for finishing this series because it certainly was a tough journey to follow. 💪
I hope you've learned a lot along the way, and hopefully enough that you can start developing more dApps. If you do, I'd love to hear about what you're building! Nothing would make me happier than seeing this article series lift others to build amazing things.
If there's anything else you'd like to learn regarding Solana development, feel free to reach out. I'm planning on adding more bonus episodes to this series in the future and making them "GitHub sponsor only" so they can help me a little bit financially.
On top of that, I'm planning on adding more generic Solana articles for free on my blog, so feel free to follow me on Twitter to get updates.
As usual, you can find the repository for this episode on GitHub and compare its code to the previous episode.
I'll see you soon for more Solana adventures. LFG! 🚀
We are so close to having a finished decentralised application (dApp) we can share with the world! Everything is ready except that we can't send tweets from our frontend. Not so handy for a Twitter-like application.
So let's implement this right here, right now, and complete our dApp! 💪
Since the last episode, you might have tried to connect your wallet and send a tweet to see what it does. Well, nothing is what it does. Not only does our sendTweet API endpoint return mock data, but that mock data will now throw an error when trying to display the tweet, since we've updated the TweetCard.vue component in the previous episode.
So let's start by removing the last bit of mock data from our frontend and implementing the real logic to send a tweet to our program.
Open the send-tweet.js file inside the api folder and replace all of its content with the following code.
import { web3 } from '@project-serum/anchor'
import { useWorkspace } from '@/composables'
import { Tweet } from '@/models'
// 1. Define the sendTweet endpoint.
export const sendTweet = async (topic, content) => {
const { wallet, program } = useWorkspace()
// 2. Generate a new Keypair for our new tweet account.
const tweet = web3.Keypair.generate()
// 3. Send a "SendTweet" instruction with the right data and the right accounts.
await program.value.rpc.sendTweet(topic, content, {
accounts: {
author: wallet.value.publicKey,
tweet: tweet.publicKey,
systemProgram: web3.SystemProgram.programId,
},
signers: [tweet]
})
// 4. Fetch the newly created account from the blockchain.
const tweetAccount = await program.value.account.tweet.fetch(tweet.publicKey)
// 5. Wrap the fetched account in a Tweet model so our frontend can display it.
return new Tweet(tweet.publicKey, tweetAccount)
}
Okay, we've got some explaining to do.
1. We define the sendTweet method. We need access to the topic and the content of the tweet, which we require as the first two parameters. As with the other API endpoints, we import and call the useWorkspace method to access our program and the connected Anchor wallet.
2. We generate a new Keypair for our new Tweet account. The tweet will be initialised at this Keypair's public address and we will need the entire Keypair object to sign the transaction to prove we own this address.
3. We send a SendTweet instruction to our Solana program. Much like we did in our tests, we pass the data, the accounts and the signers to the sendTweet method of our program's API. This time, we use the connected wallet as the author account.
4. Our program will have created a new Tweet account at the provided tweet.publicKey address. Thus, we fetch it to access the data of our newly created tweet. We do this so we can return the created tweet account to whoever is calling this API endpoint. This enables our components to automatically add it to the list of tweets to display without having to re-fetch all the tweets on the page.
5. We wrap the fetched account in a Tweet object so that our frontend has everything it needs to display it.

If you look inside the TweetForm.vue component responsible for creating new tweets, you'll notice we don't need to change anything, as it already provides the topic and content of the tweet as the first and second parameters.
const send = async () => {
if (! canTweet.value) return
const tweet = await sendTweet(effectiveTopic.value, content.value)
emit('added', tweet)
topic.value = ''
content.value = ''
}
However, it will now send a real instruction to our program, as opposed to returning a mock tweet like it did before.
Awesome! So with our sendTweet API endpoint properly wired, we should be able to send our first tweet through the frontend, right? Sadly, no.
There's a little bug in our useWorkspace composable which causes our code to throw the following error.
TypeError: Cannot read properties of undefined (reading 'preflightCommitment')
The bug is that we only provided two parameters when instantiating our Provider object, when we should have given three.
const provider = computed(() => new Provider(connection, wallet.value)) // <- Missing 3rd parameter.
The missing third parameter is a configuration object that's used to define the commitment of our transactions.
To fix this, we could simply give an empty object (i.e. {}) as a third argument, which would fall back to the default configuration.
However, I'd like us to take this opportunity to understand which configurations are needed and how we can provide them explicitly.
This configuration object accepts two properties: commitment and preflightCommitment. Both of them define the level of commitment we expect from the blockchain when sending a transaction. The only difference between the two is that preflightCommitment will be used when simulating a transaction, whereas commitment will be used when sending the transaction for real.
If you're wondering why we would need to simulate a transaction, a good example is your wallet showing you the amount of money expected to be gained or lost from a transaction before you approve it.
Now, what exactly is a "commitment"? According to the Solana documentation, a commitment describes how finalized a block is at the point of sending the transaction. When we send a transaction to the blockchain, it is added to a block which will need to be "finalized" before officially becoming part of the blockchain's data. Before a block is "finalized", it has to be "confirmed" by a voting system on the cluster. Before a block is "confirmed", there is a possibility that the block will be skipped by the cluster.
Therefore, there are 3 commitment levels that match exactly the scenarios described above. They are, in descending order of commitment:

- finalized. This means we can be sure that the block will not be skipped and, therefore, the transaction will not be rolled back.
- confirmed. This means the cluster has confirmed through a vote that the transaction's block is valid. Whilst this is a strong indication the transaction will not roll back, it is still not a guarantee.
- processed. This means the transaction has been processed and added to a block, and we don't need any guarantees on what will happen to that block.

So which commitment level should we choose for our little Twitter-like application? Looking at the Solana documentation, they recommend the following.
When querying the ledger state, it's recommended to use lower levels of commitment to report progress and higher levels to ensure the state will not be rolled back.
In our case, I wouldn't consider a tweet being rolled back to be a critical issue. On top of that, it is very unlikely that a block containing our transaction will end up being skipped by the cluster. Therefore, the processed commitment level is good enough for our application. We'll use it for both simulated and real transactions.
Note that using the finalized commitment level might be more appropriate for (non-simulated) financial transactions with critical consequences.
Now that we know which commitment levels to use, let's explicitly configure them in our useWorkspace.js composable. First, we define two variables, preflightCommitment and commitment, for simulated and real transactions respectively.
// ...
const preflightCommitment = 'processed'
const commitment = 'processed'
const programID = new PublicKey(idl.metadata.address)
let workspace = null
Then, we pass these commitment levels to the Provider constructor as a configuration object. We also give the commitment variable as the second parameter of our Connection object so it can use it as a fallback commitment level when one is not directly provided on a transaction.
export const initWorkspace = () => {
const wallet = useAnchorWallet()
const connection = new Connection('http://127.0.0.1:8899', commitment)
const provider = computed(() => new Provider(connection, wallet.value, { preflightCommitment, commitment }))
const program = computed(() => new Program(idl, programID, provider.value))
// ...
}
Alright, now surely we can send a tweet via our frontend?! Almost… 😅
If we try to send a tweet right now, we'll get the following error in the console.
Attempt to debit an account but found no record of a prior credit.
Okay, we've seen this error in the past. It means we've got no money. Remember how I told you that starting a local ledger always gives 500 million SOL to your local machine's wallet for running tests? Well, the issue here is that we're not using that wallet in our browser. Instead, we're using our real wallet on the local cluster.
That means we need to explicitly airdrop ourselves some money before we can send transactions.
To do that, we first need to know the public key of our real wallet. We can use the dropdown menu of the wallet button for that purpose. Once your wallet is connected, click on the wallet button on the sidebar and select "Copy address".
Next, we can use the solana airdrop command followed by how many SOL we want to airdrop and the address of the account to credit.
We don't need much money but let's give ourselves 1000 SOL (why not, we deserve it). Then, paste your public key and run that command. Here's what it looks like for me.
solana airdrop 1000 B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN
# Outputs:
# Requesting airdrop of 1000 SOL
#
# Signature: 4rBNKMyRTcddaT9QHtYSD62Juk5F4AdgryiuE4N83Yj8JJVTomAnHWL8xvPitJdtDdLorSf81rBsYz89r7dXis6y4
#
# 1000 SOL
Alright, now we can finally send our first tweet from the frontend!
Enter some content and, optionally, a topic before hitting the "Tweet" button. You should get a pop-up window from your wallet provider asking you to approve the transaction.
Depending on your wallet provider, you should also see an estimation of how much SOL this transaction will credit or debit from your account. However, using Phantom, this often doesn't work for me when using the local cluster, as the simulated transaction just loops forever.
Nevertheless, clicking on "Approve" should run the transaction for real and display the new tweet at the top of the list! 🥳
If you refresh the page, you can see that our new tweet has been properly persisted to our local ledger.
Hell yeah, our decentralised application is finally functional! 🔥🔥🔥
Now, all that's left to do is deploy it to a real cluster to share it with the rest of the world. And that's exactly what we'll do in the next episode!
In the previous episode, we worked hard to allow users to connect their wallets and ended up with a program object from Anchor that lets us interact with our Solana program. Now, it's time to use that program object to remove all the mock data and fetch real tweets from the blockchain.
Some of the work we'll do in this article will be familiar because we've already seen how to fetch tweets from the blockchain when testing our Solana program.
Okay, let's start the wiring!
We'll start simple, by fetching all existing tweets and displaying them on the home page.
Open the api/fetch-tweets.js file and paste the following code.
import { useWorkspace } from '@/composables'
export const fetchTweets = async () => {
const { program } = useWorkspace()
const tweets = await program.value.account.tweet.all();
return tweets
}
A few things to notice here:

- We import the useWorkspace composable to access the workspace store.
- Since we only need the program object from the workspace, we destructure it from the result of useWorkspace().
- We use program.value because program is a reactive variable wrapped in a Ref object.
- We fetch all Tweet accounts using account.tweet.all(), just like we did when we tested our program.

Okay, let's try this in our PageHome.vue component. If you look inside the script part of the component, you'll notice we're already calling the fetchTweets API method and using its result to display tweets.
import { ref } from 'vue'
import { fetchTweets } from '@/api'
import TweetForm from '@/components/TweetForm'
import TweetList from '@/components/TweetList'
const tweets = ref([])
const loading = ref(true)
fetchTweets()
.then(fetchedTweets => tweets.value = fetchedTweets)
.finally(() => loading.value = false)
// ...
At this point, everything should be wired properly for our home page, so let's see if everything works.
First, you'll need to start a new local validator. You may do this by running solana-test-validator in your terminal or, alternatively, by running anchor localnet, which will also re-build and re-deploy your program.
For us to see some tweets in our application, we'll need some Tweet accounts inside our local ledger. Fortunately for us, we know that running the tests will create 3 of them, so let's run anchor run test to add them to our local ledger.
Okay, now we have a running local ledger that contains 3 tweets in total. Therefore, we should see these tweets on the home page.
However, if you go to the home page and open the "Network" developer tools in your browser, you should see the following.
As we can see in the network tab, we are indeed getting 3 tweet accounts, but they are not displayed properly on the home page.
That's because our frontend expects an object with a certain structure that doesn't match what we get from the API call.
So, instead of changing our entire frontend to accommodate that structure, let's create a new Tweet model that works for our frontend and abstracts the data received from the API.
Inside the src folder of our frontend application, let's create a new folder called models. Inside that new folder, we'll add two files:

- Tweet.js. This will structure our tweet accounts using a Tweet class.
- index.js. This will register the Tweet model so we can import it like we import composables and API endpoints.

Once that folder and those two files are created, paste the following code inside the index.js file.
file.
export * from './Tweet'
And paste the following inside the Tweet.js file.
export class Tweet
{
constructor (publicKey, accountData) {
this.publicKey = publicKey
this.author = accountData.author
this.timestamp = accountData.timestamp.toString()
this.topic = accountData.topic
this.content = accountData.content
}
}
As you can see, to create a new Tweet object, we need to provide:

- The publicKey, which will be an instance of Solana's PublicKey class.
- The accountData object, provided by the API endpoint.

When creating a new Tweet object, we store its public key and all of the properties inside the accountData object individually. That way we can access, say, the topic via tweet.topic. We also parse the timestamp into a string because the API endpoint gives us the timestamp as a big number (BN) object rather than a plain JavaScript number.
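As a side note, that string holds unix seconds, so turning it back into a JavaScript date only takes a multiplication by 1,000 (the timestamp value below is hypothetical):

```javascript
// A hypothetical timestamp, as the stringified unix seconds our model stores.
const timestamp = '1650000000'

// JavaScript dates are built from milliseconds, hence the * 1000.
const date = new Date(Number(timestamp) * 1000)
console.log(date.toISOString()) // → 2022-04-15T05:20:00.000Z
```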
On top of these properties, our frontend relies on Tweet objects having the following additional properties: key, author_display, created_at and created_ago.
The key property should be a unique identifier that represents our tweet. It is used in some VueJS templates when looping through arrays of tweets. Since the public key is unique for each tweet, we'll use its base-58 format to provide a unique string.
We'll use a getter function to provide this key property. You can achieve this by adding the following getter at the end of the Tweet class.
export class Tweet
{
// ...
get key () {
return this.publicKey.toBase58()
}
}
Whilst we've already got access to the author's public key through the author property, the frontend uses a condensed version of this address in the TweetCard.vue component so as not to visually overwhelm the user.
This condensed version is quite simply the first 4 characters and the last 4 characters of the public key with a couple of dots in the middle.
Thus, let's add another getter function called author_display and use the slice method to condense the author's public key.
export class Tweet
{
// ...
get author_display () {
const author = this.author.toBase58()
return author.slice(0,4) + '..' + author.slice(-4)
}
}
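We can quickly sanity-check that slicing logic on a standalone base-58 string:

```javascript
// The same condensing logic as the getter above, applied to a plain string.
const author = 'B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN'
const condensed = author.slice(0, 4) + '..' + author.slice(-4)
console.log(condensed) // → B1Af..wtRN
```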
The last two properties we need are human-readable versions of the timestamp provided by our program. created_at should be a localised human-readable date including the time, whereas created_ago should briefly describe how long ago the tweet was posted.
Fortunately, there are plenty of JavaScript libraries out there for manipulating dates. Moment.js is probably the most popular one, but I'd say it's overkill for our purpose. Instead, I often prefer Day.js, which is super lightweight by default and extendable to fit our needs.
So let's start by installing Day.js using npm.
npm install dayjs
Next, we need to import it and extend it slightly so it supports localised formats and relative times, used for created_at and created_ago respectively.
In your main.js file, add the following code after the "CSS" section.
// CSS.
import 'solana-wallets-vue/styles.css'
import './main.css'
// Day.js
import dayjs from 'dayjs'
import localizedFormat from 'dayjs/plugin/localizedFormat'
import relativeTime from 'dayjs/plugin/relativeTime'
dayjs.extend(localizedFormat)
dayjs.extend(relativeTime)
// ...
Now, back in our Tweet.js model, we can import Day.js and provide two getter functions for created_at and created_ago. Both of them can use dayjs.unix(this.timestamp) to convert our timestamp property into a Day.js object. Then, we can use the format('lll') and fromNow() methods to get a localised date and a relative time respectively.
We end up with the following Tweet.js model! 🎉
import dayjs from "dayjs"
export class Tweet
{
constructor (publicKey, accountData) {
this.publicKey = publicKey
this.author = accountData.author
this.timestamp = accountData.timestamp.toString()
this.topic = accountData.topic
this.content = accountData.content
}
get key () {
return this.publicKey.toBase58()
}
get author_display () {
const author = this.author.toBase58()
return author.slice(0,4) + '..' + author.slice(-4)
}
get created_at () {
return dayjs.unix(this.timestamp).format('lll')
}
get created_ago () {
return dayjs.unix(this.timestamp).fromNow()
}
}
Now that our Tweet model is ready, let's use it in our fetch-tweets.js API endpoint so that it returns Tweet objects instead of whatever the API returns.
For that, we can use map on the tweets array to transform each item inside it. As we've seen in a previous episode, the API returns an object containing a publicKey and an account object, which is exactly what we need to create a new Tweet object.
import { useWorkspace } from '@/composables'
import { Tweet } from '@/models'
export const fetchTweets = async () => {
const { program } = useWorkspace()
const tweets = await program.value.account.tweet.all();
return tweets.map(tweet => new Tweet(tweet.publicKey, tweet.account))
}
Right, at this point we should see all tweets displaying properly on the home page. In the image below, I've also logged the return value of the fetchTweets method so we can make sure all our custom getters are working properly.
All good! Let's move on to the next task.
When viewing the tweets on the home page, you might have noticed that each tweet contains 3 links: one on the author's address, one on the time the tweet was posted, and one on the tweet's topic.
However, if you try to click on them, they will always send you to the home page. That's simply because these links are not yet implemented, and that's what we are going to do now.
If you have a look inside the TweetCard.vue component, you should see a few comments in the template that look like this: <!-- TODO: Link to ... -->. So let's tackle each of these comments one by one, starting with the author's link.
This link is slightly more complicated than the others because the route it directs to depends on whether it's one of our tweets or not. By that I mean: if we click on our own address, it should direct us to the profile page, whereas if we click on the address of someone else, it should take us to the users page with that address already pre-filled.
Therefore, we're going to create a computed property called authorRoute that will use the connected wallet to figure out which route we should be directed to.
Update the script part of the TweetCard.vue component with the following lines.
import { toRefs, computed } from 'vue'
import { useWorkspace } from '@/composables'
const props = defineProps({
tweet: Object,
})
const { tweet } = toRefs(props)
const { wallet } = useWorkspace()
const authorRoute = computed(() => {
if (wallet.value && wallet.value.publicKey.toBase58() === tweet.value.author.toBase58()) {
return { name: 'Profile' }
} else {
return { name: 'Users', params: { author: tweet.value.author.toBase58() } }
}
})
Let's go through that piece of code:

- We import the computed method from VueJS, which we'll use to create our authorRoute computed property.
- We import the useWorkspace composable and access the connected wallet from it.
- If the connected wallet matches the tweet's author, we return { name: 'Profile' }. In Vue Router, that's how you can identify a route named "Profile".
- Otherwise, route parameters are passed through the params object. Thus, we can access the "Users" page of the tweet's author by returning: { name: 'Users', params: { author: tweet.value.author.toBase58() } }
Now that our authorRoute computed property is available, we can give it to the relevant <router-link> component and remove the comment above.
- <!-- TODO: Link to author page or the profile page if it's our own tweet. -->
- <router-link :to="{ name: 'Home' }" class="hover:underline">
+ <router-link :to="authorRoute" class="hover:underline">
{{ tweet.author_display }}
</router-link>
Next, let's implement the link to the tweet page. For that, we can use the base-58 format of the tweet's public key as a parameter of the Tweet route. We end up with the following route object.
{ name: 'Tweet', params: { tweet: tweet.publicKey.toBase58() } }
This time, we can use this object directly inside the appropriate <router-link> without the need for a new variable.
- <!-- TODO: Link to the tweet page. -->
- <router-link :to="{ name: 'Home' }" class="hover:underline">
+ <router-link :to="{ name: 'Tweet', params: { tweet: tweet.publicKey.toBase58() } }" class="hover:underline">
{{ tweet.created_ago }}
</router-link>
Finally, we need to implement the link to the topics page. Similarly to the previous links, we can pass the topic as a parameter of the Topics route and end up with the following route object…
{ name: 'Topics', params: { topic: tweet.topic } }
…which we can use directly in the final <router-link> that needs updating.
- <!-- TODO: Link to the topic page. -->
- <router-link v-if="tweet.topic" :to="{ name: 'Home' }" class="inline-block mt-2 text-pink-500 hover:underline">
+ <router-link v-if="tweet.topic" :to="{ name: 'Topics', params: { topic: tweet.topic } }" class="inline-block mt-2 text-pink-500 hover:underline">
{{ tweet.topic }}
</router-link>
And just like that, our TweetCard.vue component is complete and all of its links point to the right places.
However, if we try to click on these links, they will always show all tweets ever created, because that's what our fetchTweets method currently does.
So let's fix this. We'll start with the Topics and Users pages. Both of these pages need access to all tweets from our program that match a certain criterion. However, our fetchTweets API endpoint does not support filters yet. Therefore, we've got to sort this out first.
Since we've already seen how to filter accounts in Solana, supporting filters in our API endpoint should be nice and easy.
The first thing we need to do is add a new filters parameter to the fetchTweets method in our fetch-tweets.js file, allowing us to optionally provide filters when fetching tweets.
import { useWorkspace } from '@/composables'
import { Tweet } from '@/models'
export const fetchTweets = async (filters = []) => {
const { program } = useWorkspace()
const tweets = await program.value.account.tweet.all(filters);
return tweets.map(tweet => new Tweet(tweet.publicKey, tweet.account))
}
Now, writing Solana filters can be a little tedious, and having that logic scattered throughout our components would not be ideal to maintain. It would be nice if we could offer some helper methods that generate these filters so our components can use them instead of building filters themselves. So let's do that!
We'll start by exporting an authorFilter function that accepts a public key in base-58 format and returns the appropriate memcmp filter, as we've seen in episode 5 of this series.
Here's said function, which you can now add at the end of your fetch-tweets.js file.
export const authorFilter = authorBase58PublicKey => ({
memcmp: {
offset: 8, // Discriminator.
bytes: authorBase58PublicKey,
}
})
Next, we'll do the same for topics by exporting a topicFilter function that accepts a topic as a string and returns a memcmp filter that encodes the topic properly and provides the right offset for it.
Add the following topicFilter function at the end of your fetch-tweets.js file and don't forget to import the bs58 library so it can encode the given topic string into a base-58 formatted array of bytes.
import { useWorkspace } from '@/composables'
import { Tweet } from '@/models'
import bs58 from 'bs58'
// ...
export const topicFilter = topic => ({
memcmp: {
offset: 8 + // Discriminator.
32 + // Author public key.
8 + // Timestamp.
4, // Topic string prefix.
bytes: bs58.encode(Buffer.from(topic)),
}
})
If youâre wondering why weâre using this particular offset, it is for the exact same reasons we described in episode 5 when filtering tweets by topics in our tests.
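To make that arithmetic explicit, here's the same offset computed step by step as a standalone sketch, mirroring the comments in the filter above:

```javascript
// Byte layout of a Tweet account, as described in episode 5.
const DISCRIMINATOR = 8   // Anchor's internal account discriminator.
const AUTHOR = 32         // Author's public key.
const TIMESTAMP = 8       // i64 timestamp.
const STRING_PREFIX = 4   // 4-byte length prefix of the topic string.

// The topic's content starts right after all of the above.
const topicOffset = DISCRIMINATOR + AUTHOR + TIMESTAMP + STRING_PREFIX
console.log(topicOffset) // 52
```

In other words, the filter compares the given bytes against the account data starting at byte 52, which is exactly where the topic's characters begin.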
And that's it! We now have a fetchTweets endpoint that not only supports filters but also makes it super easy for our components to use them. Your final fetch-tweets.js file should look like this.
import { useWorkspace } from '@/composables'
import { Tweet } from '@/models'
import bs58 from 'bs58'

export const fetchTweets = async (filters = []) => {
    const { program } = useWorkspace()
    const tweets = await program.value.account.tweet.all(filters)
    return tweets.map(tweet => new Tweet(tweet.publicKey, tweet.account))
}

export const authorFilter = authorBase58PublicKey => ({
    memcmp: {
        offset: 8, // Discriminator.
        bytes: authorBase58PublicKey,
    }
})

export const topicFilter = topic => ({
    memcmp: {
        offset: 8 + // Discriminator.
            32 +    // Author public key.
            8 +     // Timestamp.
            4,      // Topic string prefix.
        bytes: bs58.encode(Buffer.from(topic)),
    }
})
Our components can now use this API endpoint to fetch and filter tweets like this.
import { fetchTweets, authorFilter, topicFilter } from '@/api'
// Fetch all tweets.
const allTweets = await fetchTweets()

// Filter tweets by author.
const myTweets = await fetchTweets([
    authorFilter('B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN'),
])

// Filter tweets by topic.
const solanaTweets = await fetchTweets([
    topicFilter('solana'),
])

// Filter tweets by author and topic.
const mySolanaTweets = await fetchTweets([
    authorFilter('B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN'),
    topicFilter('solana'),
])
Noice! Let's use that shiny new API endpoint on our Topics and Users pages.
Inside our PageTopics.vue component, let's import the topicFilter helper in addition to the already imported fetchTweets method.
import { ref } from 'vue'
import { useRouter } from 'vue-router'
import { fetchTweets, topicFilter } from '@/api'
import { useSlug, useFromRoute } from '@/composables'
import TweetForm from '@/components/TweetForm'
import TweetList from '@/components/TweetList'
import TweetSearch from '@/components/TweetSearch'
Next, let's scroll down a bit and provide the appropriate parameter to the fetchTweets method. Here, we'll use the value of the slugTopic computed property as the topic to filter tweets by.
const fetchTopicTweets = async () => {
    if (slugTopic.value === viewedTopic.value) return

    try {
        loading.value = true
        const fetchedTweets = await fetchTweets([topicFilter(slugTopic.value)])
        tweets.value = fetchedTweets
        viewedTopic.value = slugTopic.value
    } finally {
        loading.value = false
    }
}
Topics page… Done! ✅
You should now be able to click on a topic's link and view all tweets for that topic.
Let's do the same for our PageUsers.vue component.
Similarly, we import the authorFilter function next to the fetchTweets function.
import { ref } from 'vue'
import { useRouter } from 'vue-router'
import { fetchTweets, authorFilter } from '@/api'
import { useFromRoute } from '@/composables'
import TweetList from '@/components/TweetList'
import TweetSearch from '@/components/TweetSearch'
Next, we pass an authorFilter built from the author property as the first parameter of the fetchTweets function.
const fetchAuthorTweets = async () => {
    if (author.value === viewedAuthor.value) return

    try {
        loading.value = true
        const fetchedTweets = await fetchTweets([authorFilter(author.value)])
        tweets.value = fetchedTweets
        viewedAuthor.value = author.value
    } finally {
        loading.value = false
    }
}
Boom, users page… Done! ✅
Before we move on, there's one more page that needs to use the authorFilter function and that's the profile page.
So let's do the same in our PageProfile.vue component. As usual, we import the authorFilter...
import { ref, watchEffect } from 'vue'
import { fetchTweets, authorFilter } from '@/api'
import TweetForm from '@/components/TweetForm'
import TweetList from '@/components/TweetList'
import { useWorkspace } from '@/composables'
... and use it in the first parameter of the fetchTweets function.
watchEffect(() => {
    if (! wallet.value) return
    fetchTweets([authorFilter(wallet.value.publicKey.toBase58())])
        .then(fetchedTweets => tweets.value = fetchedTweets)
        .finally(() => loading.value = false)
})
Notice that we also added a line that ensures we have a connected wallet before continuing — i.e. if (! wallet.value) return. Even though the profile page is hidden when no wallet is connected, we still need that extra check because, upon refresh, there is a little delay before the wallet reconnects automatically.
There's one last page where users can access tweets and that's the Tweet page. That page is a little special because, instead of displaying multiple tweets, it simply retrieves the content of a single Tweet account at a given address.
Therefore, we can't use the fetchTweets API endpoint here. Instead, there is a getTweet API endpoint located in the get-tweet.js file that we need to update.
Replace everything inside that file with the following code.
import { useWorkspace } from '@/composables'
import { Tweet } from '@/models'
export const getTweet = async (publicKey) => {
    const { program } = useWorkspace()
    const account = await program.value.account.tweet.fetch(publicKey)
    return new Tweet(publicKey, account)
}
The getTweet method accepts a publicKey parameter which should be an instance of Solana's PublicKey class.
It then uses the fetch method from the account.tweet API provided by Anchor's program to fetch the content of the account. We can then combine this account data with the provided public key to return a new Tweet object.
Now that our getTweet API endpoint is ready, let's use it inside our PageTweet.vue component.
If you read the code inside this component, you'll notice the getTweet method is already imported and used because that's how we were displaying mock data before.
watchEffect(async () => {
    try {
        loading.value = true
        tweet.value = await getTweet(new PublicKey(tweetAddress.value))
    } catch (e) {
        tweet.value = null
    } finally {
        loading.value = false
    }
})
Notice that, for the public key, we use the tweetAddress reactive property which is dynamically extracted from the current URL. We then wrap its value inside a PublicKey object as this is what our API endpoint expects to receive.
All done! ✅
If you click on the timestamp of a tweet, you now have access to a page that can be used to share it.
Our application really is starting to take shape! At this point, you should be able to look around and read all the tweets present in the blockchain.
The only thing missing before we can share our application with the world is allowing users to send tweets directly via our frontend app, which is exactly what we'll do in the next episode.
At this point, we've got a nice user interface to send and read tweets but nothing is wired to our Solana program. In addition, we have no way of knowing which wallet is connected in the user's browser. So let's fix that.
In this episode, we'll focus on integrating our frontend with Solana wallet providers such as Phantom or Solflare so we can send transactions on behalf of a user. Once we have access to the connected wallet, we'll be able to create a "Program" object just like we did in our tests.
Okay, let's get started!
Fortunately for us, there are some JavaScript libraries we can use to help us integrate with many wallet providers out there.
These libraries are available on GitHub and there is even a package for Vue 3! So let's install these libraries right now. We'll need one for the main logic and components and one for importing all supported wallet adapters.
Run the following inside your app directory to install them.
npm install solana-wallets-vue @solana/wallet-adapter-wallets
With these libraries installed, the first thing we need to do is to initialise the wallet store. This will provide a global store giving us access to useful properties and methods anywhere in our application.
Inside the script part of your App.vue component, add the following lines.
import { useRoute } from 'vue-router'
import TheSidebar from './components/TheSidebar'
import { PhantomWalletAdapter, SolflareWalletAdapter } from '@solana/wallet-adapter-wallets'
import { initWallet } from 'solana-wallets-vue'
const route = useRoute()
const wallets = [
    new PhantomWalletAdapter(),
    new SolflareWalletAdapter(),
]
initWallet({ wallets, autoConnect: true })
This does two things:

1. It defines the wallet adapters we want to support in the wallets array. Whilst we'll only use these two in this series, note that you can use any of the supported wallet providers listed here. To add some more, import the relevant class from @solana/wallet-adapter-wallets and instantiate it in the wallets array.
2. It calls the initWallet method to initialise the global store using the wallets defined in step 1 so it knows which wallet providers we want to support. Additionally, we set the autoConnect option to true so that it will automatically try to reconnect the user's wallet on page refresh.

And just like that, our wallet store is initialised and we can use its properties and methods to create components allowing users to connect their wallets. Fortunately, one of the libraries we've installed also provides UI components that handle all of that for us.
The solana-wallets-vue library provides VueJS components that allow the user to select a wallet provider and connect to it. It contains a button to initiate the process, a modal to select the wallet provider, and a dropdown that can be used once connected to copy your address, change provider or disconnect.
All of that can be added to your application through the following component.
<wallet-multi-button></wallet-multi-button>
This component will delegate to other components — such as <wallet-connect-button> — to give the user a complete workflow to connect, manage and disconnect their wallet.
Currently, we have a fake "Select a wallet" button on the sidebar. Thus, let's replace it with the component above to connect our wallets for real.
Inside the script part of the TheSidebar.vue component, add the following line to import the component.
import { WalletMultiButton } from 'solana-wallets-vue'
Then, use it inside the template part and remove the fake button.
- <!-- TODO: Connect wallet -->
- <div class="bg-pink-500 text-center w-full text-white rounded-full px-4 py-2">
- Select a wallet
- </div>
+ <wallet-multi-button></wallet-multi-button>
Last but not least, we need to import some CSS to style that component properly. Add the following lines to your main.js file. It's important to add them before our main.css import so we can make some design tweaks in the next section.
// CSS.
import 'solana-wallets-vue/styles.css'
import './main.css'
// ...
Awesome! At this point you should be able to compile your application — using npm run serve — and connect your wallet!
Note that if you don't have a wallet or a browser extension such as Phantom installed yet, don't worry about it, we'll tackle that in a minute. But first, let's have a look at what our wallet button looks like.
Overall, not so bad but the style doesn't really match the rest of our application and the dropdown is not properly aligned so let's fix that.
Fortunately for us, all the UI components provided by the solana-wallets-vue library use CSS classes that we can override to tweak their style.
So let's do that. Open your main.css file and add the following lines at the end of the file.
.swv-dropdown {
    @apply w-full;
}

.swv-button {
    @apply rounded-full w-full;
}

.swv-button-trigger {
    @apply bg-pink-500 justify-center !important;
}

.swv-dropdown-list {
    @apply top-auto bottom-full md:top-full md:bottom-auto md:left-0 md:right-auto;
}

.swv-dropdown-list-active {
    @apply transform -translate-y-3 md:translate-y-3;
}
The @apply directive allows us to write CSS using TailwindCSS classes for convenience. Aside from that, we're just updating some CSS classes.
Okay, let's have a look at our wallet button now.
Much better!
It is worth mentioning that the wallet you're going to use in your browser is usually different from the wallet we created earlier in this series to run our tests in the console. The former is typically your "real" wallet whereas the latter is just the wallet your local machine uses to run tests or use CLI tools. They can be the same if you want them to be but I prefer to keep them separate.
Now, if you already have a wallet registered in a wallet provider such as Phantom or Solflare, you should already be good to go. If you're using another wallet provider, feel free to add it to the wallets
array we defined earlier.
However, if you don't have a wallet or a wallet provider installed as a browser extension, then you'll need to do this to interact with your application. For that purpose, I recommend installing Phantom in your browser. It's a very popular wallet provider and has a friendly user interface. Once installed, you can follow the steps to create a new wallet directly on the Phantom extension. Be sure to store your recovery phrase someplace safe since it can recover your full wallet including its private key.
By default, your wallet will show you the money or assets you have in the "mainnet" cluster. The "mainnet" cluster is basically the real cluster where real money is kept. However, the same wallet can be used in other clusters such as "devnet" — a live cluster with fake money to test things — or "localnet" — your local cluster.
As such, if you want to see your money or assets in other clusters, you may do so by changing a setting in your wallet provider. In Phantom, you can do this by clicking on the cog icon, going to the "Change Network" setting and selecting your cluster there.
Note that changing this setting is optional as it only affects the assets displayed by the wallet provider. It does not affect which cluster our application sends transactions to. We will configure this within our code a bit later in this article.
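To make that distinction concrete, here's a small reference of the endpoints involved. The devnet URL is the standard public endpoint; our application only talks to the localnet one for now.

```javascript
// Cluster endpoints: the wallet setting above only changes what the wallet
// *displays*, while our own Connection object decides where transactions go.
const clusters = {
    localnet: 'http://127.0.0.1:8899',
    devnet: 'https://api.devnet.solana.com',
}

// Our application will use the localnet endpoint for now.
console.log(clusters.localnet) // http://127.0.0.1:8899
```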
At this point, users can connect their wallet to our application and we can access that data within our VueJS components. But how do we access that data and what do we actually get from it?
Let's start with the "how".
You can access the data provided by the wallet store by using the useWallet composable from the solana-wallets-vue library.
import { useWallet } from 'solana-wallets-vue'
const data = useWallet()
As long as the initWallet() method was called, this will give you access to properties and methods regarding the connected wallet.
So what do we actually get from useWallet()?
- wallet. Potentially the most interesting piece of information for us: the user's connected wallet. If the user has connected a wallet, this will be an object containing its public key. Otherwise, this property will be null.
- ready, connected, connecting and disconnecting. These are useful booleans for us to understand which state we are in. For instance, we can use the connected boolean to know whether the user has connected their wallet or not.
- The select, connect and disconnect methods enable us to select, connect to and disconnect from a wallet provider. We don't need to use these methods directly since they are already being used by the wallet UI components we imported earlier.
- The sendTransaction, signTransaction, signAllTransactions and signMessage methods enable us to sign messages and/or transactions on behalf of the connected wallet. Whilst we will not use them directly, Anchor requires some of these methods inside its wallet object.

As you can see, useWallet() gives us lots of granular information that can be used to interact with the connected wallet. Because of that, the wallet object it provides is not compatible with Anchor's definition of a wallet. If you remember the following diagram from episode 5, you can see that Anchor uses its own "Wallet" object to interact with the connected wallet and sign transactions on its behalf.
In order to get an object compatible with Anchor's definition of a wallet, we can use yet another composable called useAnchorWallet. This will return a wallet object that can sign transactions.
import { useAnchorWallet } from 'solana-wallets-vue'
const wallet = useAnchorWallet()
And just like that, we can connect our previous Anchor diagram with our brand new wallet integration.
I'd like to take a little break here to talk about reactive variables in Vue 3. If you're not familiar with them, some of the code you'll read later could be a little confusing.
Most of the properties we've listed above are reactive and wrapped inside Ref objects. If you're not familiar with Vue's Ref variables, they ensure the content of a variable is passed by reference and not by value.
This means that, by holding a reference to a Ref variable, we can mutate its content and any code using that variable can be notified of the change. To access the content of a Ref variable, you must access its value property — e.g. wallet.value — unless you're using that variable inside a VueJS template, in which case VueJS automatically does that for you. You can read more about them in Vue's documentation or — if you're used to React — this might help.
Here's a little example to summarise how we can access Ref variables. Inside the script part, we use value. Inside the template part, we don't.
<script setup>
import { ref } from 'vue'

const name = ref('Loris')
console.log(name.value) // Outputs: Loris
</script>

<template>
    <div>{{ name }}</div> <!-- Displays: Loris -->
</template>
Okay, let's put what we've learned into practice.
At the moment, anyone can see the form that allows users to send tweets. However, that form should only be visible to users that have connected their wallets so let's fix that.
In the script part of the TweetForm.vue component, import the useWallet composable.
import { computed, ref, toRefs } from 'vue'
import { useAutoresizeTextarea, useCountCharacterLimit, useSlug } from '@/composables'
import { sendTweet } from '@/api'
import { useWallet } from 'solana-wallets-vue'
Then, back in the script part of the component — under "Permissions" — update the following line.
// Permissions.
- const connected = ref(true) // TODO: Check connected wallet.
+ const { connected } = useWallet()
This will use the connected variable from the wallet store instead of a value that is always true like it was before.
If you look inside the template of that component, you can see that this connected variable is used to toggle which HTML we are showing to the user: either the form or an empty state.
<template>
    <div v-if="connected" class="px-8 py-4 border-b">
        <!-- Form here... -->
    </div>
    <div v-else class="px-8 py-4 bg-gray-50 text-gray-500 text-center border-b">
        Connect your wallet to start tweeting...
    </div>
</template>
And that's it! Now, only users with connected wallets can see the tweet form.
Let's do another one. This time, we'll make sure the profile page is not visible on the sidebar if you're not connected.
In the script part of the TheSidebar.vue component, import and call useWallet to access the connected variable.
import { WalletMultiButton, useWallet } from 'solana-wallets-vue'
const { connected } = useWallet()
Then, inside the template, look for the comment that says "TODO: Check connected wallet". Under that comment, replace v-if="true" with v-if="connected" and voilà! You can also remove that "TODO" comment now.
<template>
    <aside class="flex flex-col items-center md:items-stretch space-y-2 md:space-y-4">
        <!-- ... -->
        <div class="flex flex-col items-center md:items-stretch space-y-2">
            <!-- ... -->
            <router-link v-if="connected" :to="{ name: 'Profile' }" ...>
                <!-- ... -->
            </router-link>
        </div>
        <div class="fixed bottom-8 right-8 md:static w-48 md:w-full">
            <wallet-modal-provider>
                <wallet-multi-button></wallet-multi-button>
            </wallet-modal-provider>
        </div>
    </aside>
</template>
To recap, here's what you should see if you have a connected wallet.
And here's what you should see if you don't: no profile page and no tweet form.
Okay, let's take a deep breath and see what we've accomplished so far in this episode.
- We used initWallet to initialise a wallet store that provides everything we need to connect a wallet and access its data.
- We accessed that data via useWallet() in various components.
- We used useAnchorWallet() to obtain a wallet object compatible with Anchor.

So if we refer to our diagram that represents all entities needed for Anchor to create a Program, we can see that we currently only have one piece of the puzzle: the "Wallet".
In order for us to have everything we need to interact with our Solana program, we need to fill the rest of the puzzle. Fortunately for us, the "Wallet" piece was the most difficult piece to find since we needed to integrate with wallet providers which we've now done.
Anchor refers to this whole picture as a "Workspace" because it gives us everything we need to work with our program.
Okay, let's fill in the missing pieces of the puzzle and create our workspace. We'll create a new useWorkspace.js file inside the composables folder and register it inside composables/index.js.
export * from './useAutoresizeTextarea'
export * from './useCountCharacterLimit'
export * from './useFromRoute'
export * from './useSlug'
export * from './useWorkspace'
Inside the useWorkspace.js composable, we'll use a global variable to provide a new global store to our application. For that, we need an initWorkspace method that initialises that variable and a useWorkspace method that accesses it. Here's how we can do this using VueJS.
let workspace = null
export const useWorkspace = () => workspace

export const initWorkspace = () => {
    workspace = {
        // Provided data here...
    }
}
Let's start simple, by importing the connected Anchor wallet and providing it as data. That way, we don't need to use the other composables to access the connected wallet. We'll have everything in one place.
import { useAnchorWallet } from 'solana-wallets-vue'

let workspace = null
export const useWorkspace = () => workspace

export const initWorkspace = () => {
    const wallet = useAnchorWallet()

    workspace = {
        wallet,
    }
}
The next thing we need is a Connection object. For that, we simply need to know which cluster — or network — we want to interact with. For now, we'll continue to develop our application locally. Therefore, we'll hardcode our localhost URL, which is http://127.0.0.1:8899. We'll have a more dynamic way to handle this in the future when we deploy to devnet.
So let's create a new Connection object using this cluster URL and provide it as data as well.
import { useAnchorWallet } from 'solana-wallets-vue'
import { Connection } from '@solana/web3.js'

let workspace = null
export const useWorkspace = () => workspace

export const initWorkspace = () => {
    const wallet = useAnchorWallet()
    const connection = new Connection('http://127.0.0.1:8899')

    workspace = {
        wallet,
        connection,
    }
}
We know that Connection + Wallet = Provider, so we can now create a new Provider object. However, this provider object needs to be a computed property so that it is recreated whenever the wallet property changes — e.g. when it is disconnected or reconnected as another wallet.
Here's how we can achieve this using VueJS. Notice how we access the wallet using wallet.value inside the computed method.
import { computed } from 'vue'
import { useAnchorWallet } from 'solana-wallets-vue'
import { Connection } from '@solana/web3.js'
import { Provider } from '@project-serum/anchor'

let workspace = null
export const useWorkspace = () => workspace

export const initWorkspace = () => {
    const wallet = useAnchorWallet()
    const connection = new Connection('http://127.0.0.1:8899')
    const provider = computed(() => new Provider(connection, wallet.value))

    workspace = {
        wallet,
        connection,
        provider,
    }
}
Next, we need to access the IDL file, which is the JSON file representing the structure of our program. This file is auto-generated in the target folder at the root of our project so let's access it directly from there.
Note that this will not work when the app is deployed to a server on its own since the target folder will be empty, but we will take care of that later on when we deploy to devnet.
import { computed } from 'vue'
import { useAnchorWallet } from 'solana-wallets-vue'
import { Connection } from '@solana/web3.js'
import { Provider } from '@project-serum/anchor'
import idl from '../../../target/idl/solana_twitter.json'

let workspace = null
export const useWorkspace = () => workspace

export const initWorkspace = () => {
    const wallet = useAnchorWallet()
    const connection = new Connection('http://127.0.0.1:8899')
    const provider = computed(() => new Provider(connection, wallet.value))

    workspace = {
        wallet,
        connection,
        provider,
    }
}
Finally, because IDL + Provider = Program, we can now create our program object. We'll use a computed property here as well because provider is also reactive.
On top of the idl and provider objects, creating a Program also requires its address as an instance of PublicKey. Fortunately for us, the IDL file already contains that information under idl.metadata.address. We just need to wrap this in a PublicKey object and feed it to the program.
⚠️ Warning: the metadata.address variable containing our program ID will only be available after running anchor deploy because that's when Anchor knows which address the program was deployed to. So if you run anchor build without running anchor deploy, you will end up with the following error: Cannot read properties of undefined (reading 'address').
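If you'd rather fail with a clearer message, you could guard the lookup yourself. The getProgramAddress helper below is purely hypothetical, not part of Anchor or our final code; it only illustrates the check:

```javascript
// Hypothetical helper: surface a clearer error when the IDL carries no
// deployment address yet (i.e. `anchor deploy` was never run).
const getProgramAddress = (idl) => {
    if (!idl.metadata || !idl.metadata.address) {
        throw new Error('No program address in IDL. Run `anchor deploy` first.')
    }
    return idl.metadata.address
}

// With a deployed IDL, this simply returns the address string.
console.log(getProgramAddress({ metadata: { address: 'FakeAddress' } }))
```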
And there we have it! The final code of our useWorkspace.js composable gives us access to everything we need to interact with our Solana program.
import { computed } from 'vue'
import { useAnchorWallet } from 'solana-wallets-vue'
import { Connection, PublicKey } from '@solana/web3.js'
import { Provider, Program } from '@project-serum/anchor'
import idl from '../../../target/idl/solana_twitter.json'

const programID = new PublicKey(idl.metadata.address)

let workspace = null
export const useWorkspace = () => workspace

export const initWorkspace = () => {
    const wallet = useAnchorWallet()
    const connection = new Connection('http://127.0.0.1:8899')
    const provider = computed(() => new Provider(connection, wallet.value))
    const program = computed(() => new Program(idl, programID, provider.value))

    workspace = {
        wallet,
        connection,
        provider,
        program,
    }
}
Now, all we need to do is call that initWorkspace method somewhere so that our application can access its data. Since the workspace store depends on the wallet store, let's call initWorkspace immediately after calling initWallet.
Inside our App.vue component, we'll add the following lines of code.
import { useRoute } from 'vue-router'
import TheSidebar from './components/TheSidebar'
import { PhantomWalletAdapter, SolflareWalletAdapter } from '@solana/wallet-adapter-wallets'
import { initWallet } from 'solana-wallets-vue'
import { initWorkspace } from '@/composables'
// ...
initWallet({ wallets, autoConnect: true })
initWorkspace()
Phew, all done! We can now access the workspace data from any component of our application.
Before we wrap up this article, let's have a quick look at how we can access that workspace data in our components.
We'll take that opportunity to update the wallet address on the profile page.
If you open the PageProfile.vue component, you should see a public key hardcoded in the template.
<div v-if="true" class="border-b px-8 py-4 bg-gray-50">
    B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN
</div>
Now that we have access to the real connected wallet, let's replace this with its public key.
<script setup>
import { ref, watchEffect } from 'vue'
import { fetchTweets } from '@/api'
import TweetForm from '@/components/TweetForm'
import TweetList from '@/components/TweetList'
import { useWorkspace } from '@/composables'

const tweets = ref([])
const loading = ref(true)
const { wallet } = useWorkspace()

// ...
</script>

<template>
    <div v-if="wallet" class="border-b px-8 py-4 bg-gray-50">
        {{ wallet.publicKey.toBase58() }}
    </div>
    <tweet-form @added="addTweet"></tweet-form>
    <tweet-list :tweets="tweets" :loading="loading"></tweet-list>
</template>
As you can see, we:

- Imported the useWorkspace composable.
- Destructured the data we need from useWorkspace() — here, the wallet object.
- Used that wallet object inside our template to display its public key in base 58 format.

This was a simple example, but we're now able to do much more than that. Just like in our tests, we can now access the program object and use its various APIs to interact with our Solana program, which is exactly what we will be doing in the next couple of episodes.
First of all, well done for following this far! Honestly, this series has been quite a journey and I'm super happy to see many of you looking forward to the next episodes.
Whilst integrating with wallets has been made super easy for us by the solana-wallets-vue repository, we still had to set it up properly and understand various concepts along the way. But look at what we've got now:
That's massive progress from our mock application that wasn't doing anything before!
As usual, you can access the code for this episode in the branch below and compare it with the previous episode.
In the next episode, we will replace the mock data from our api files and use our brand new workspace to fetch real tweets from our Solana program.
EDIT 2022-02-10: This article was updated to use the new solana-wallets-vue package.
Whilst users can technically start sending and reading tweets using our program by interacting directly with the blockchain, no one really wants that sort of user experience.
We want to abstract all of that into a nice user interface (UI) that resembles what they are familiar with. For that reason, we will build a frontend client and we'll build it using VueJS.
We'll use VueJS because A. it's my favourite JavaScript framework and B. it is very much under-documented in the Solana ecosystem. If you're more familiar with other JavaScript frameworks such as React, you can still follow along as most concepts resonate between frameworks.
Now, frontend development is a world of its own and I could easily spend hours and hours detailing how to create the UI we will end up with at the end of this episode. However, the focus of this series is Solana and I wouldn't want to deviate too much from it. There are plenty of tutorials out there about frontend development â even on this blog.
At the same time, we need a UI to continue our journey and create our decentralised application. So here's the deal. In this episode, I'll explain how to get started with VueJS and install all the dependencies we'll need so you can do it yourself. Then, when it comes to the actual design and components of the UI, I'll give you a bunch of files to copy/paste in various places and briefly explain what they do. The components will contain mock data at first so we can wire them with our Solana program in the next episodes.
Fasten your seatbelt because we're going to move quickly. Let's go!
One of the easiest ways to create a new VueJS application is to use its CLI tools.
If you don't have them installed already, you can do so by running the following.
npm install -g @vue/cli@5.0.0-rc.1
Note that we're explicitly asking for version 5 — which is still a release candidate at the time of writing — because we want our VueJS app to be bundled with Webpack 5 instead of Webpack 4.
You can check the VueJS CLI tools are installed properly by running:
vue --version
We can now create a new VueJS app by running vue create followed by the directory that should be created for it.
We want our frontend client to live under the app directory, which is currently an empty folder. Therefore, we'll also need to use the --force option to override it. Okay, let's run this.
vue create app --force
You should now be asked to choose a preset for your app. We'll be using Vue 3 in this series so let's select the Default (Vue 3) preset.
? Please pick a preset:
  Default ([Vue 2] babel, eslint)
❯ Default (Vue 3) ([Vue 3] babel, eslint) # <- This one.
  Manually select features
And just like that, we've got ourselves a VueJS 3 application inside our project.
Let's cd into it as we'll be working inside that directory for the rest of this episode.
cd app
Next, let's install the JavaScript libraries provided by Solana and Anchor. We mentioned in a previous episode that they were already included in our Anchor project for our tests but this is a different environment with its own dependencies so we need to install them explicitly.
Be sure to be inside the app
directory and run the following.
npm install @solana/web3.js @project-serum/anchor
The frontend world is full of quirks and gotchas, and here's one that I struggled with when creating this series.
Some of the JavaScript libraries we'll be using in our app depend on Node.js polyfills.
Node.js is basically "JavaScript for servers" and the purpose of Node.js polyfills is to bring some of its core dependencies into the frontend world. That way, the same code can be used on both sides.
For instance, remember how we converted a string into a buffer by using Buffer.from('some string')
? We didn't need to import that Buffer
object because it's a Node.js core dependency that was polyfilled for us.
Currently, the frontend world is moving away from bundling all these Node.js dependencies by default. And that's exactly what Webpack did when they released version 5. Here's a very good explanation from their documentation:
In the early days, webpack's aim was to allow running most Node.js modules in the browser, but the module landscape changed and many module uses are now written mainly for frontend purposes. Webpack <= 4 ships with polyfills for many of the Node.js core modules, which are automatically applied once a module uses any of the core modules (i.e. the crypto module).
Webpack 5 stops automatically polyfilling these core modules and focuses on frontend-compatible modules. Our goal is to improve compatibility with the web platform, where Node.js core modules are not available.
So that's a nice change but, as I said earlier, some of our dependencies rely on these polyfills to exist. If we don't do anything, we will end up with the following error when compiling our frontend.
BREAKING CHANGE: webpack < 5 used to include polyfills for node.js core modules by default.
This is no longer the case. Verify if you need this module and configure a polyfill for it.
Fortunately for us, there is a way to fix this issue by adding the polyfills we need back and/or telling Webpack we don't need them so it can stop complaining.
In our case, we'll only need the Buffer
polyfill and we can disable the others that would have otherwise failed. We can do this inside our vue.config.js
file which contains a configureWebpack
property allowing us to provide additional Webpack configurations.
const webpack = require('webpack')
const { defineConfig } = require('@vue/cli-service')
module.exports = defineConfig({
transpileDependencies: true,
configureWebpack: {
plugins: [
new webpack.ProvidePlugin({
Buffer: ['buffer', 'Buffer']
})
],
resolve: {
fallback: {
crypto: false,
fs: false,
assert: false,
process: false,
util: false,
path: false,
stream: false,
}
}
}
})
Awesome! We should now be safe from confusing polyfill errors.
Whilst we're configuring things, let's add a couple of things to our ESLint configurations. If you're not familiar with ESLint, it's a JavaScript linter that our code editor uses to warn us about errors or code that doesn't comply with a given code style.
Since we'll be using the super fancy <script setup>
tag in our VueJS 3 components, we need to tell ESLint about it so our code editor doesn't show lots of errors when the code is actually valid.
There's no need to worry too much about the details here, simply open the package.json
of your app
directory and replace your eslintConfig
object with the following.
"eslintConfig": {
"root": true,
"env": {
"node": true,
"vue/setup-compiler-macros": true
},
"extends": [
"plugin:vue/vue3-essential",
"eslint:recommended"
],
"parserOptions": {
"parser": "@babel/eslint-parser"
},
"rules": {
"vue/script-setup-uses-vars": "error"
}
},
I'll be using my favourite CSS framework to design the user interface: TailwindCSS. If you're not familiar with it, it's a utility-based framework that is super powerful and an absolute delight to work with. Needless to say, I highly recommend it.
To install it, we need the following dependencies. As usual, make sure to run this in the app
directory.
npm install tailwindcss@latest postcss@latest autoprefixer@latest
Then we need to generate our Tailwind configuration file. For that, simply run the following.
npx tailwindcss init -p
This generated a tailwind.config.js
file in our app
directory.
Note that we used the -p
option to also generate a postcss.config.js
file. This is necessary so that Webpack can recognise Tailwind as a PostCSS plugin and therefore compile our Tailwind configurations.
Let's immediately make a little adjustment to our Tailwind config file. We'll provide a purge
array so that, when compiling for production, Tailwind can remove all of the utility classes that are not used within the provided paths.
Basically, we need to tell it where our HTML is located which, in our case, is inside any JavaScript file within the src
folder or within the public index.html
file.
So open up your tailwind.config.js
file and replace the empty purge
array with the following lines.
module.exports = {
purge: [
'./public/index.html',
'./src/**/*.{vue,js,ts,jsx,tsx}',
],
// ...
}
Next, create a new file in the src
folder called main.css
and add the following code.
@tailwind base;
@tailwind components;
@tailwind utilities;
When compiled, these three Tailwind statements will be replaced with lots of utility classes generated dynamically.
Finally, we need to import this new CSS file into our main.js
file so it can be picked up by Webpack.
Let's import it at the top of that file and add a few comments to separate the code into little sections.
// CSS.
import './main.css'
// Create the app.
import { createApp } from 'vue'
import App from './App.vue'
createApp(App).mount('#app')
We're now fully ready to use TailwindCSS!
Next, we need some routing within our frontend. Clicking on a new page should be reflected in the URL and vice versa. Fortunately, we don't need to implement that from scratch as we can use Vue Router for that purpose.
To install it, we need to run the following. Note that we need to explicitly install version 4 of Vue Router since this is the version compatible with Vue 3.
npm install vue-router@4
Next, let's define our routes — i.e. the mapping between URLs and VueJS components.
Create a new file in the src
folder called routes.js
and paste the following inside.
export default [
{
name: 'Home',
path: '/',
component: require('@/components/PageHome').default,
},
{
name: 'Topics',
path: '/topics/:topic?',
component: require('@/components/PageTopics').default,
},
{
name: 'Users',
path: '/users/:author?',
component: require('@/components/PageUsers').default,
},
{
name: 'Profile',
path: '/profile',
component: require('@/components/PageProfile').default,
},
{
name: 'Tweet',
path: '/tweet/:tweet',
component: require('@/components/PageTweet').default,
},
{
name: 'NotFound',
path: '/:pathMatch(.*)*',
component: require('@/components/PageNotFound').default,
},
]
These are all of the pages our application contains, including a fallback for URLs that don't exist.
If we try to compile our frontend application at this point using npm run serve
it will fail because all of these components are missing but, don't worry, we'll add all of them in the next section.
Now that our routes are defined, we can import and plug the Vue Router plugin into our VueJS application.
Open your src/main.js
file and update it as follows.
// CSS.
import './main.css'
// Routing.
import { createRouter, createWebHashHistory } from 'vue-router'
import routes from './routes'
const router = createRouter({
history: createWebHashHistory(),
routes,
})
// Create the app.
import { createApp } from 'vue'
import App from './App.vue'
createApp(App).use(router).mount('#app')
As you can see, we first create a router instance by providing our routes and then make our VueJS app use
it as a plug-in.
In case you're wondering, the createWebHashHistory
method prefixes all paths with a #
so that we don't need to configure any redirections in our server later.
At this point, our VueJS app is fully configured with Vue Router and TailwindCSS. All that's left to do is implement the components that will make up the user interface of our frontend.
That means if you wanted to create your own design, you could pause here and implement the components listed in the routes.js
file yourself.
However, I've prepared all of that for you so we can focus on how to integrate the frontend with our Solana program rather than spending ages designing a user interface.
So it's time for some copy/pasting!
Okay, let's do this! Download the ZIP file below and extract it to access all of the files that will compose our user interface.
Now that you've got all the files, let's move them to the right folders.
Inside the src
folder:
- Replace the App.vue component with the one provided in the ZIP file.
- Replace the components directory with the one provided in the ZIP file.
- Add the composables and api directories from the ZIP file.

Boom, frontend ready!
At this point you should be able to run npm run serve
and have a look at the user interface by accessing: http://localhost:8080/
.
npm run serve
# Outputs:
#
# App running at:
# - Local: http://localhost:8080/
# - Network: http://192.168.68.118:8080/
Okay, let's have a little look around and explain the purpose of all of these files we've just added.
We'll start with the components. Aside from the App.vue
component which is located in the src
folder, all other components should be inside the src/components
folder.
Note that any tweet or any connected wallet is currently mocked with fake data so we can learn how to wire everything in the next episodes.
- App.vue: This is the main component that loads when our application starts. It designs the overall layout of our app and delegates the rest to Vue Router by using the <router-view> component. Any page that matches the current URL will be rendered where <router-view> is.
- PageHome.vue: The home page. It contains a form to send tweets and lists the latest tweets from everyone.
- PageNotFound.vue: The 404 fallback page. It displays an error message and offers to go back to the home page.
- PageProfile.vue: The profile page for the connected user/wallet. It displays the wallet's public key before showing the tweet form and the list of tweets sent from that wallet.
- PageTopics.vue: The topics page allows users to enter a topic and displays all tweets matching it. Once a topic is entered, it also displays a form to send tweets with that topic pre-filled.
- PageTweet.vue: The tweet page only shows one tweet. The tweet's public key is provided in the URL, allowing us to fetch the tweet account. This is useful for users to share tweets.
- PageUsers.vue: Similarly to the topics page, the users page allows searching for other users by entering their public key. When a valid public key is entered, all tweets from that user will be fetched and displayed on this page.
- TheSidebar.vue: This component is used in the main App.vue component and designs the sidebar on the left of the app. It uses the <router-link> component to easily generate Vue Router URLs. It also contains a button for users to connect their wallets but, for now, that button doesn't do anything.
- TweetCard.vue: This component is responsible for the design of one tweet. It is used everywhere we need to display tweets.
- TweetForm.vue: This component designs the form allowing users to send tweets. It contains a field for the content, a field for the topic and a little character count-down.
- TweetList.vue: This component uses the TweetCard.vue component to display not just one but multiple tweets.
- TweetSearch.vue: This component offers a reusable form to search for criteria. It is used on the topics page and the users page as we need to search for something on both of these pages.

On top of components, the ZIP file also contains an api
folder. This folder contains one file for each type of interaction we can have with our program. Technically, we donât need to extract these interactions into their own files but it is a good way to make our components less complicated and easier to maintain.
For now, each of these files defines a function that returns mock data.
- fetch-tweets.js: Provides a function that returns all tweets from our program. In a future episode, we will transform that function slightly so it can filter through topics and users.
- get-tweet.js: Provides a function that returns a tweet account from a given public key.
- send-tweet.js: Provides a function that sends a SendTweet instruction to our program with all the required information.

There's one last folder in that ZIP file to explain: composables.
In VueJS, "composables" are functions that use the composition API to extend the behaviour of a component. If you're familiar with React, they are comparable to React hooks.
Since certain components needed some extra functionality, I took the liberty of creating some composables to make the components easier to read.
- useAutoresizeTextarea.js: This composable is used in the TweetForm.vue component and makes the "content" field automatically resize itself based on its content. That way, the field contains only one line of text to start with but extends as the user types.
- useCountCharacterLimit.js: Also used by the TweetForm.vue component, this composable returns a reactive character count-down based on a given text and limit.
- useFromRoute.js: This composable is used by many components. It's a little refactoring that helps deal with Vue Router hooks. Normally, we'd need to add some code for when we enter a route and some other code for when the route updates but the component stays the same — e.g. when the topic changes on the topics page. This function enables us to write some logic once that will be fired on both events.
- useSlug.js: This composable is used to transform any given text into a slug. For instance, Solana is AWESOME will become solana-is-awesome. This is used anywhere we need to make sure the topic is provided as a slug. That way, there's less risk of users tweeting on the same topic not finding each other's tweets due to case sensitivity.

Well done, we've got ourselves a user interface! I truly hope you didn't run into any trouble along the way; the frontend world can be quite unforgiving at times. If you have any issues, feel free to comment below or, even better, create a new issue on the project's repository.
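As an illustration, the slug transformation described for useSlug.js could be sketched as follows. This is a hypothetical implementation; the actual file in the ZIP may differ (and wraps the logic in Vue's reactivity system).

```javascript
// Hypothetical sketch of a slug transformation similar to useSlug.js.
const slugify = (text) => text
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9 -]/g, '') // drop anything that isn't alphanumeric, a space or a dash
    .replace(/\s+/g, '-') // replace whitespace with dashes
    .replace(/-+/g, '-'); // collapse consecutive dashes

console.log(slugify('Solana is AWESOME')); // solana-is-awesome
```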
Speaking of repositories, you can view the code of this episode on the episode-7
branch and compare the code with the previous episode as usual. This time, I've also added another link to compare after the commit that generated the frontend via vue create app
so you can see what we've changed afterwards.
Compare with Episode 6 / Compare after creating the VueJS app
In the next three episodes, we will wire our mock user interface with real data and with real interactions with our Solana program. Weâll start with integrating our frontend with Solana wallets such as Phantom so we can identify the connected user in our application. See you in the next episode!
Let's see what we've learned so far. Implementing a Solana program that creates Tweet
accounts... Check! ✅
Interacting with our program from a client to send tweets to the blockchain... Check! ✅
Retrieving all of our tweets to display them to our users... Hmm... Nope! ❌
Let's learn how to do this now! We'll add a few tests that retrieve multiple tweets and ensure we get the right tweets in the right amount.
Let's start simple by retrieving all Tweet
accounts ever created on the blockchain.
In the previous episode, we learned that Anchor exposes a little API for each type of account inside the program
object. For instance, to use the Tweet
account API, we need to access program.account.tweet
.
Previously, we used the fetch
method inside that API to retrieve a specific account based on its public key. Now, we'll use another method called all
that simply returns all of them!
const tweetAccounts = await program.account.tweet.all();
And just like that we have an array of all tweet accounts ever created.
Let's add a new test at the end of the tests/solana-twitter.ts
file. We're adding it at the end because we need to make sure we have accounts to retrieve. The first 5 tests end up creating a total of 3 tweet accounts — since 2 of the tests make sure accounts are not created under certain conditions.
Therefore, our new test will retrieve all accounts and make sure we've got exactly 3.
it('can fetch all tweets', async () => {
const tweetAccounts = await program.account.tweet.all();
assert.equal(tweetAccounts.length, 3);
});
Now if we run anchor test
, we should see all 6 of the tests passing! ✅
Note that for this new test to always work, we need to make sure our local ledger is empty before running the tests. When running anchor test
, Anchor does that automatically for us by starting a new empty local ledger.
However, if you run tests with your own local ledger — by running solana-test-validator
and anchor run test
on a different terminal session — then make sure to reset your local ledger before running the tests by exiting the current local ledger and starting a new empty one using solana-test-validator --reset
. If you don't, you'll end up with 6 tweet accounts the next time you run your tests and therefore our brand new test will fail.
This also applies to Apple M1 users who have to run solana-test-validator --no-bpf-jit --reset
and anchor test --skip-local-validator
instead of anchor test
. Just make sure you restart your local ledger before running the tests every time.
Okay, let's move on to our next test. We know how to fetch all Tweet
accounts ever created, but how can we retrieve only the accounts matching certain criteria? For example, how can we retrieve all Tweet
accounts from a particular author?
It turns out, you can provide an array of filters to the all()
method above to narrow the scope of your result.
Solana supports only 2 types of filters and both of them are quite rudimentary.
The dataSize filter
The first filter — called dataSize — is quite simple. You give it a size in bytes and it will only return accounts that match exactly that size.
For instance, we can create a 2000-byte dataSize
filter this way.
{
dataSize: 2000,
}
Anything above or below 2000 bytes will not be included in the result.
Since all of our Tweet
accounts have a size of 1376 bytes, that's not very useful to us.
The memcmp filter
The second filter — called memcmp — is a bit more useful. It allows us to compare an array of bytes with the account's data at a particular offset.
That means we provide an array of bytes that should be present in the account's data at a certain position, and only the accounts matching it will be returned.
So we need to provide 2 things:
- offset: The position (in bytes) at which we should start comparing the data. This expects an integer.
- bytes: The data to compare to the account's data. This array of bytes should be encoded in base 58.

For instance, say I wanted to retrieve all accounts that have my public key at the 42nd byte. Then, I could use the following memcmp
filter.
{
memcmp: {
offset: 42, // Starting from the 42nd byte.
bytes: 'B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN', // My base-58 encoded public key.
}
}
Note that memcmp
filters only compare exact data. We cannot, for example, check that an integer at a certain position is lower than a provided number. Still, that memcmp
filter is powerful enough for us to use it in our Twitter-like dApp.
Using a memcmp filter on the author's public key
Okay, back to the matter at hand. Let's use that memcmp
filter to filter tweets from a given author.
So we need two things: the offset
and the bytes
. For the offset, we need to find out where in the data the author's public key is stored. Fortunately, we've already done all that work in episode 3.
We know that the first 8 bytes are reserved for the discriminator and that the author's public key comes afterwards. Therefore, our offset is simply: 8
.
Now, for the bytes
, we need to provide a base-58 encoded public key. For the purpose of our test, we'll use our wallet's public key to retrieve all tweets posted by the wallet.
We end up with the following piece of code.
const authorPublicKey = program.provider.wallet.publicKey
const tweetAccounts = await program.account.tweet.all([
{
memcmp: {
offset: 8, // Discriminator.
bytes: authorPublicKey.toBase58(),
}
}
]);
Considering only two of the three Tweet
accounts created in the tests are from our wallet, the tweetAccounts
variable should only contain two accounts.
Let's fit that code into a new test and make sure we get exactly two accounts back.
it('can filter tweets by author', async () => {
const authorPublicKey = program.provider.wallet.publicKey
const tweetAccounts = await program.account.tweet.all([
{
memcmp: {
offset: 8, // Discriminator.
bytes: authorPublicKey.toBase58(),
}
}
]);
assert.equal(tweetAccounts.length, 2);
});
Let's be a bit more strict in that test and make sure that both of the accounts inside tweetAccounts
are in fact from our wallet.
For that, we'll loop through the tweetAccounts
array using the every
function that returns true
if and only if the provided callback returns true
for every account.
it('can filter tweets by author', async () => {
const authorPublicKey = program.provider.wallet.publicKey
const tweetAccounts = await program.account.tweet.all([
{
memcmp: {
offset: 8, // Discriminator.
bytes: authorPublicKey.toBase58(),
}
}
]);
assert.equal(tweetAccounts.length, 2);
assert.ok(tweetAccounts.every(tweetAccount => {
return tweetAccount.account.author.toBase58() === authorPublicKey.toBase58()
}))
});
Done! We have our second test and we know how to filter by authors!
You might be wondering why we are accessing the author's public key via tweetAccount.account.author
whereas, when using the fetch
method, we were accessing it via tweetAccount.author
directly. That's because the fetch
and the all
methods don't return exactly the same objects.
When using fetch
, we get the Tweet
account with all of its data parsed.
When using all
, we get the same object but inside a wrapper object that also provides its publicKey
. When using fetch
, we're already providing the public key of the account so it's not necessary for that method to return it. However, when using all
, we don't know the public key of these accounts and, therefore, Anchor wraps the account object in another object to give us more context. That's why we're accessing the account data through tweetAccount.account
.
Here's a little diagram to summarise this.
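In plain JavaScript terms, the two shapes compare like this (mock values only, no network calls involved):

```javascript
// Shape returned by fetch(publicKey): the parsed account data directly.
const fetched = {
    author: 'B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN', // a PublicKey object in real life
    topic: 'veganism',
    content: 'Hummus, am I right?',
};
console.log(fetched.topic); // veganism

// Shape returned by all(): wrapper objects exposing each account's public key,
// with the parsed data nested under `account`.
const fetchedAll = [
    {
        publicKey: 'B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN',
        account: { author: 'B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN', topic: 'veganism', content: 'Hummus, am I right?' },
    },
];
console.log(fetchedAll[0].account.topic); // veganism
```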
Filtering tweets by topic is very similar to filtering tweets by author. We still need a memcmp
filter but with different parameters.
Let's start with the offset. Again, if we look at the way our Tweet
account is structured, we can see that the topic starts at the 52nd byte.
That's because we have 8 bytes for the discriminator, 32 bytes for the author, 8 bytes for the timestamp and an extra 4 bytes for the "string prefix" that tells us the real length of our topic in bytes.
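As a quick sanity check, these sizes add up to the 52nd byte mentioned above (the constant names below are just for illustration):

```javascript
// Byte layout of a Tweet account, up to the start of the topic's characters.
const DISCRIMINATOR_LENGTH = 8;
const AUTHOR_PUBLIC_KEY_LENGTH = 32;
const TIMESTAMP_LENGTH = 8;
const STRING_LENGTH_PREFIX = 4;

const topicOffset = DISCRIMINATOR_LENGTH
    + AUTHOR_PUBLIC_KEY_LENGTH
    + TIMESTAMP_LENGTH
    + STRING_LENGTH_PREFIX;
console.log(topicOffset); // 52
```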
So let's add these numbers explicitly in a memcmp
filter to make it easier to maintain in the future.
const tweetAccounts = await program.account.tweet.all([
{
memcmp: {
offset: 8 + // Discriminator.
32 + // Author public key.
8 + // Timestamp.
4, // Topic string prefix.
bytes: '', // TODO
}
}
]);
Next, we need to provide a topic to search for in our tests. Since two of the three accounts created in the tests use the veganism
topic, let's use that.
However, we can't just give 'veganism'
as a string to the bytes
property. It needs to be a base-58 encoded array of bytes. To do this, we first need to convert our string to a buffer which we can then encode in base 58.
- We can convert a string into a buffer using Buffer.from('some string').
- We can encode a buffer in base 58 using bs58.encode(buffer).

The Buffer
variable is already available globally but that's not the case for the bs58
variable that we need to import explicitly at the top of our test file.
import * as anchor from '@project-serum/anchor';
import { Program } from '@project-serum/anchor';
import { SolanaTwitter } from '../target/types/solana_twitter';
import * as assert from "assert";
import * as bs58 from "bs58";
So now we can finally fill the bytes
property with our base-58 encoded veganism
topic.
const tweetAccounts = await program.account.tweet.all([
{
memcmp: {
offset: 8 + // Discriminator.
32 + // Author public key.
8 + // Timestamp.
4, // Topic string prefix.
bytes: bs58.encode(Buffer.from('veganism')),
}
}
]);
Similarly to our previous test, let's create a new test that asserts tweetAccounts
contains only two accounts and that both of them have the veganism
topic.
it('can filter tweets by topics', async () => {
const tweetAccounts = await program.account.tweet.all([
{
memcmp: {
offset: 8 + // Discriminator.
32 + // Author public key.
8 + // Timestamp.
4, // Topic string prefix.
bytes: bs58.encode(Buffer.from('veganism')),
}
}
]);
assert.equal(tweetAccounts.length, 2);
assert.ok(tweetAccounts.every(tweetAccount => {
return tweetAccount.account.topic === 'veganism'
}))
});
Retrieving and filtering multiple tweet accounts... Check! ✅
Congratulations, you now have a fully tested Solana program! We can now spend the rest of our time implementing a JavaScript client for our program that our users can interact with. Fortunately, because we've learned so much by writing tests, this will feel very familiar.
I'll see you in the next episode where we'll start scaffolding our VueJS application. Let's go!
Our program may be ready but our job is not finished. In this article, we will write a few tests that interact with our program and, more specifically, our SendTweet
instruction.
Whilst writing tests might not feel like the most exciting task, it is the perfect opportunity for us to understand how we can interact with our program and on behalf of which wallet.
So far in this series, we've focused on program development, that is, the part of the code that lives on the blockchain.
Now, it's time to move on to the other side — a.k.a. the client.
Much like a traditional web server, we need a client to interact with our Solana program. Later on in this series, we will implement a JavaScript client using the VueJS framework, but for now, we will use a JavaScript client to test our program.
The benefit of that is, after writing our tests, we will know the exact syntax to use in our frontend to interact with our program.
Okay, so how does one interact with the Solana blockchain?
Solana offers a JSON RPC API for this purpose. Don't be scared by the RPC specifications; at the end of the day, it's just an API.
That being said, Solana provides a JavaScript library called @solana/web3.js
that encapsulates this API for us by providing a bunch of useful asynchronous methods.
All of these methods live inside a Connection
object that requires a Cluster
for it to know where to send its requests. As usual, that cluster can be localhost, devnet, etc. Since we're working locally for now, we're using the "localhost" cluster.
Here's a little visual representation that will keep growing in this article.
Now we know how to interact with the Solana blockchain but how can we sign transactions to prove our identity? For that, we need a Wallet
object that has access to the key pair of the user making the transaction.
Fortunately for us, Anchor also provides a JavaScript library called @project-serum/anchor
that makes all of this super easy.
Anchor's library provides us with a Wallet
object that requires a key pair and allows us to sign transactions. But that's not all: it also provides us with a Provider
object that wraps both the Connection
and the Wallet
and automatically adds the walletâs signature to outgoing transactions. The Provider
object makes interacting with the Solana blockchain on behalf of a wallet seamless.
Here's an updated version of our previous diagram.
But wait, there's more! If you remember, in episode 2 we mentioned that every time we run anchor build
Anchor generates a JSON file called an IDL
— which stands for "Interface Description Language". That IDL file contains a structured description of our program including its public key, instructions and accounts.
Imagine what we could get if we combined that IDL
file that knows everything about our program and that Provider
object that can interact with the Solana blockchain on behalf of a wallet. That would be the final piece of the puzzle.
Well, imagine no more because Anchor provides yet another object called Program
that uses both the IDL
and the Provider
to create a custom JavaScript API that completely matches our Solana program. Thanks to that Program
object, we can interact with our Solana program on behalf of a wallet without even needing to know anything about the underlying API.
And there you have it, the final picture illustrating how Anchor encapsulates Solana's JavaScript library to improve our developer experience.
Let's put what we've learned into practice by setting up a Program
object that we can use in our tests.
First of all, there's no need to import any new JavaScript libraries; both of the libraries mentioned above are included by default in every Anchor project.
Then, if you look at the diagram above, there are essentially two questions we need to answer to end up with a Program
object: which cluster and which wallet?
Anchor takes care of answering both of these questions for us by generating a Provider
object that uses the configurations inside our Anchor.toml
file.
More precisely, it will look into your provider
configurations which should look something like this.
[provider]
cluster = "localnet"
wallet = "/Users/loris/.config/solana/id.json"
With these provider configurations, it knows to use the localhost cluster — using a local ledger — and it knows where to find your key pair on your local machine.
Now, open your test file that should be located at tests/solana-twitter.ts
. If you look at the first 2 lines located inside the describe
method, you should see the following.
// Configure the client to use the local cluster.
anchor.setProvider(anchor.Provider.env());
const program = anchor.workspace.SolanaTwitter as Program<SolanaTwitter>;
- The first line uses the anchor.Provider.env() method to generate a new Provider for us using our Anchor.toml config file. Remember: Cluster + Wallet = Provider. It then registers that new provider using the anchor.setProvider method.
- The second line generates a Program object that we can use in our tests. Note that, since the tests are written in TypeScript, we are also leveraging the custom SolanaTwitter type that Anchor generated for us when running anchor build. That way, we can get some nice auto-completion from our code editor.

And just like that, our test client is all set up and ready to be used! Here's a little update on our diagram to reflect what we've learned here.
Right, enough theory, let's write our first test! Let's start by deleting the dummy test that was auto-generated in our tests/solana-twitter.ts
file.
// Configure the client to use the local cluster.
anchor.setProvider(anchor.Provider.env());
const program = anchor.workspace.SolanaTwitter as Program<SolanaTwitter>;
- it('Is initialized!', async () => {
- // Add your test here.
- const tx = await program.rpc.initialize({});
- console.log("Your transaction signature", tx);
- });
Now, add the following code instead.
it('can send a new tweet', async () => {
// Before sending the transaction to the blockchain.
await program.rpc.sendTweet('TOPIC HERE', 'CONTENT HERE', {
accounts: {
// Accounts here...
},
signers: [
// Key pairs of signers here...
],
});
// After sending the transaction to the blockchain.
});
Don't worry if your IDE shows some red everywhere. It's just TypeScript complaining our instruction doesn't have enough data. We'll get there gradually.
Okay, let's digest that piece of code:
- We create a new test using the it method from the mocha test framework.
- We make it an async function because we are going to call asynchronous functions inside it. More precisely, we're going to need to await the transaction before we can make sure the right account was created.
- We use the program object to interact with our program. The program object contains an rpc object which exposes an API matching our program's instructions. Therefore, to make a call to our SendTweet instruction, we need to call the program.rpc.sendTweet method.
- When calling a method on the program.rpc object, we need to first provide any arguments required by the instruction. In our case, that's the topic and the content arguments, in that order.
- The last argument of any program.rpc method is always the context. If you remember from the previous episode, the context of an instruction contains all the accounts necessary for the instruction to run successfully. On top of providing the accounts as an object, we also need to provide the key pairs of all signers as an array. Note that we don't need to provide the key pair of our wallet since Anchor does that automatically for us.

Okay, let's fill this sendTweet method with real data. Let's provide a topic and a content for our tweet and let's make it vegan because why not? 🌱
it('can send a new tweet', async () => {
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
// Accounts here...
},
signers: [
// Key pairs of signers here...
],
});
});
Next, let's fill in the accounts. From the context of our SendTweet instruction, we need to provide the following accounts: tweet, author and system_program. We'll start with the tweet account.
Since this is the account our instruction will create, we just need to generate a new key pair for it. That way, we can also prove we are allowed to initialise an account at this address because we can add the tweet
account as a signer.
We can generate a new key pair in JavaScript using the anchor.web3.Keypair.generate()
method. Then we can add that generated key pair in the signers
array and add its public key to the accounts
object.
it('can send a new tweet', async () => {
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
},
signers: [tweet],
});
});
Next up, the author
account. For that, we need to access the public key of the wallet used inside our program. Remember how our program contains a provider which contains a wallet? That means we can access our wallet's public key via program.provider.wallet.publicKey
.
Since Anchor automatically adds the wallet as a signer to each transaction, we don't need to change the signers
array.
it('can send a new tweet', async () => {
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
},
signers: [tweet],
});
});
Finally, we need to provide the system_program
account. Note that, in JavaScript, Anchor automatically transforms snake case variables into camel case variables inside our context. This means we need to provide the System Program using systemProgram
instead of system_program
.
Now, how do we access the public key of Solana's official System Program in JavaScript? Simple, we can access the System Program using anchor.web3.SystemProgram
and so we can access its public key via anchor.web3.SystemProgram.programId
.
it('can send a new tweet', async () => {
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
});
Boom! Just like that, we're ready to send tweets to our own Solana program. 🤯
Whilst running this test will successfully create a Tweet
account on the blockchain, we're not actually testing anything yet. So the next step is to make another call to the blockchain to fetch the newly created account and make sure the data matches with what we've sent.
To fetch an account on the blockchain, we need to access another API provided by the program object. By calling program.account.tweet, we have access to a few methods that help us fetch Tweet accounts from the blockchain. Note that these methods are available for every account defined in our Solana program. So if we had a UserProfile account, we could fetch them using the program.account.userProfile API.
Within these API methods, we can use fetch
to retrieve exactly one account by providing its public key. Because Anchor knows what type of account we're trying to fetch, it will automatically parse all the data for us.
So let's fetch our newly created Tweet
account. We'll use the public key of our tweet
key pair to fetch our tweetAccount
. Let's also log the content of that account so we can see what's in there.
it('can send a new tweet', async () => {
// Call the "SendTweet" instruction.
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
// Fetch the account details of the created tweet.
const tweetAccount = await program.account.tweet.fetch(tweet.publicKey);
console.log(tweetAccount);
});
Now let's run our tests using anchor test. Remember: this will build, deploy and test our program by running its own local ledger.
Reminder: Apple M1 users will need to run solana-test-validator --no-bpf-jit --reset
and anchor test --skip-local-validator
on a separate terminal session.
You should see the test passing (normal, as we haven't defined any assertions yet) but you should also see an object that looks like this in the logs.
{
author: PublicKey {
_bn: <BN: 7d9c91c77d1f5b693cf0b3960a0c037211298a1e495ac14ef0d8fb904b38388f>
},
timestamp: <BN: 619e2495>,
topic: 'veganism',
content: 'Hummus, am I right?'
}
Congrats! That's the account we've retrieved from the blockchain and it looks like it's got the right data. At least for the topic and the content.
So the last thing to do for us to complete our test is to write assertions. For that, we'll need to import the assert
library at the top of our test file. No need to install it, it's already one of our dependencies.
import * as anchor from '@project-serum/anchor';
import { Program } from '@project-serum/anchor';
import { SolanaTwitter } from '../target/types/solana_twitter';
import * as assert from "assert";
Now, we can use assert to:

- Check that two things are equal via assert.equal(actualThing, expectedThing).
- Check that something is truthy via assert.ok(something).

So let's remove our previous console.log and add some assertions inside our test.
it('can send a new tweet', async () => {
// Execute the "SendTweet" instruction.
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
// Fetch the account details of the created tweet.
const tweetAccount = await program.account.tweet.fetch(tweet.publicKey);
// Ensure it has the right data.
assert.equal(tweetAccount.author.toBase58(), program.provider.wallet.publicKey.toBase58());
assert.equal(tweetAccount.topic, 'veganism');
assert.equal(tweetAccount.content, 'Hummus, am I right?');
assert.ok(tweetAccount.timestamp);
});
A few things to note here:
- Because tweetAccount.author and program.provider.wallet.publicKey are public key objects, they have different references and therefore we can't simply compare them as objects. Instead, we convert them to their Base 58 representation using the toBase58 method, so they will be equal if and only if these two strings match. Note that the wallet address you give to people in Solana is the Base 58 encoding of your public key. Mine is: B1AfN7AgpMyctfFbjmvRAvE1yziZFDb9XCwydBjJwtRN.
- We check that the topic and the content of our tweet were stored correctly.
- Finally, we use assert.ok on the timestamp to make sure we have one. We could also check that the timestamp corresponds to the current time, but it's a bit tricky to do without having the test fail every so often due to the time not matching to the second. Thus, let's keep the test simple and just make sure we have a timestamp.

All done! We can now run anchor test
and see our test and all of its assertions passing!
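If the need for toBase58 still feels abstract, here's a stand-alone TypeScript sketch showing the same pitfall with plain byte arrays. Hex encoding stands in for Base 58 purely for brevity, and the toHex helper is my own illustration, not a Solana API.

```typescript
// Two distinct objects holding identical bytes, much like two PublicKey
// instances pointing at the same address.
const a = Uint8Array.from([1, 2, 3]);
const b = Uint8Array.from([1, 2, 3]);

// Comparing references fails even though the contents are identical.
console.log(a === b); // false

// Encoding both sides to a canonical string makes the comparison meaningful.
// (Hex here for brevity; toBase58 plays the same role for public keys.)
const toHex = (bytes: Uint8Array): string =>
  Array.from(bytes, (byte) => byte.toString(16).padStart(2, '0')).join('');

console.log(toHex(a) === toHex(b)); // true
```

The same reasoning is why the assertions compare toBase58 strings rather than the PublicKey objects themselves.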
Should we write a few more?
Now that we understand how to write tests for our program, we can simply copy/paste our first test and tweak a few things to test different scenarios.
In this case, I'd like us to add a scenario for tweets that have no topics since our frontend will allow users to send tweets without them.
To test this scenario, copy/paste our first test, rename it "can send a new tweet without a topic" and replace the 'veganism' topic with an empty string: ''. Note that I've also replaced the content with "gm".
it('can send a new tweet without a topic', async () => {
// Call the "SendTweet" instruction.
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('', 'gm', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
// Fetch the account details of the created tweet.
const tweetAccount = await program.account.tweet.fetch(tweet.publicKey);
// Ensure it has the right data.
assert.equal(tweetAccount.author.toBase58(), program.provider.wallet.publicKey.toBase58());
assert.equal(tweetAccount.topic, '');
assert.equal(tweetAccount.content, 'gm');
assert.ok(tweetAccount.timestamp);
});
Et voilà! We now have two tests testing different scenarios. Onto the next one.
Let's test a slightly more complicated scenario now. So far, we've used our wallet as the author of the tweet we're sending but, technically, we should be able to tweet on behalf of any author as long as we can prove we own their public address by signing the transaction.
So let's do that. Again, starting by copy/pasting our first test, we'll do the following:
- Generate a new key pair that we'll store in an otherUser variable.
- Provide otherUser's public key as the author account.
- Add the otherUser key pair to the signers array. Note that Anchor will only automatically sign transactions using our wallet, which is why we need to sign explicitly here.
- Update the assertions so that the author of the fetched tweetAccount matches the public key of our otherUser.

it('can send a new tweet from a different author', async () => {
// Generate another user and airdrop them some SOL.
const otherUser = anchor.web3.Keypair.generate();
// Call the "SendTweet" instruction on behalf of this other user.
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Yay Tofu!', {
accounts: {
tweet: tweet.publicKey,
author: otherUser.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [otherUser, tweet],
});
// Fetch the account details of the created tweet.
const tweetAccount = await program.account.tweet.fetch(tweet.publicKey);
// Ensure it has the right data.
assert.equal(tweetAccount.author.toBase58(), otherUser.publicKey.toBase58());
assert.equal(tweetAccount.topic, 'veganism');
assert.equal(tweetAccount.content, 'Yay Tofu!');
assert.ok(tweetAccount.timestamp);
});
Unfortunately, this test will not pass.
If we try to run it, we will get the following error â you need to read the error logs carefully to find it.
Transfer: insufficient lamports 0, need 10467840
Okay, what is happening, and what is a lamport?
A lamport is the smallest unit of Solana's native token, SOL. The Solana token has exactly 9 decimal places, which means 1 SOL is equal to 1,000,000,000 lamports.
Lamports are used a lot in Solana development as they allow us to make micropayments of fractional SOLs whilst handling amounts using integers. They are named after Solana's biggest technical influence, Leslie Lamport.
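To make that conversion concrete, here's a tiny stand-alone TypeScript sketch. The solToLamports and lamportsToSol helpers are my own names for illustration, not part of any Solana library.

```typescript
// 1 SOL = 10^9 lamports. Using bigint for lamports avoids floating-point
// rounding issues when handling large amounts.
const LAMPORTS_PER_SOL = 1_000_000_000n;

// Convert a SOL amount to lamports, rounding to the nearest lamport to
// absorb floating-point noise.
function solToLamports(sol: number): bigint {
  return BigInt(Math.round(sol * Number(LAMPORTS_PER_SOL)));
}

// Convert lamports back to a SOL amount.
function lamportsToSol(lamports: bigint): number {
  return Number(lamports) / Number(LAMPORTS_PER_SOL);
}

console.log(solToLamports(1)); // 1000000000n
console.log(lamportsToSol(10_467_840n)); // 0.01046784
```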
Okay, so the error is telling us we have insufficient funds. More precisely, we need 10467840 lamports, or 0.01046784 SOL.
It turns out this is exactly the amount of money we need for our Tweet
account to be rent-exempt. When we sized our Tweet
account in episode 3, we came up with a required storage of 1376 bytes. Let's find out how much money we need for an account of 1376 bytes to be rent-exempt.
solana rent 1376
# Outputs:
# Rent per byte-year: 0.00000348 SOL
# Rent per epoch: 0.000028659 SOL
# Rent-exempt minimum: 0.01046784 SOL <- Aha!
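Out of curiosity, we can reproduce that "Rent-exempt minimum" figure ourselves. This sketch rests on two assumptions the CLI output doesn't state: rent-exemption requires two years of rent upfront, and each account carries 128 bytes of metadata overhead on top of its data. With those assumptions, the numbers line up.

```typescript
// Figure taken from the `solana rent` output above; the overhead and the
// two-year multiplier are assumptions used to reproduce the result.
const RENT_PER_BYTE_YEAR_SOL = 0.00000348;
const ACCOUNT_OVERHEAD_BYTES = 128; // assumed per-account metadata overhead
const RENT_EXEMPTION_YEARS = 2; // assumed years of rent required upfront

// Estimate the rent-exempt minimum for an account holding `dataBytes` of data.
function rentExemptMinimumSol(dataBytes: number): number {
  return (dataBytes + ACCOUNT_OVERHEAD_BYTES) * RENT_PER_BYTE_YEAR_SOL * RENT_EXEMPTION_YEARS;
}

console.log(rentExemptMinimumSol(1376).toFixed(8)); // "0.01046784"
```

In other words: (1376 + 128) bytes × 0.00000348 SOL per byte-year × 2 years = 0.01046784 SOL, which is exactly the amount the error message asked for.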
Good, now we understand what's happening. The transaction is failing because we're using the otherUser as the author of the tweet, which makes them responsible for paying the rent-exempt money on the Tweet account, and that otherUser has no money at all!
To fix this, we need to airdrop some money to the otherUser
before we can call our SendTweet
instruction.
We can do this using the connection object, which is available under program.provider.connection. This object contains a requestAirdrop asynchronous method that accepts a public key and an amount of lamports. Let's give that user 1 SOL, or 1 billion lamports.
await program.provider.connection.requestAirdrop(otherUser.publicKey, 1000000000);
Now, this API method is a bit special because the user still won't have any money after the await call. That's because it's only "requesting" the airdrop. To ensure we wait long enough for the money to land in the otherUser account, we need to wait for the transaction to confirm.
Fortunately for us, there is a confirmTransaction
method on the connection object that does just this. It accepts a transaction signature which is returned by the previous requestAirdrop
call.
const signature = await program.provider.connection.requestAirdrop(otherUser.publicKey, 1000000000);
await program.provider.connection.confirmTransaction(signature);
Let's add this code to our test and run anchor test
to see if it passes.
it('can send a new tweet from a different author', async () => {
// Generate another user and airdrop them some SOL.
const otherUser = anchor.web3.Keypair.generate();
const signature = await program.provider.connection.requestAirdrop(otherUser.publicKey, 1000000000);
await program.provider.connection.confirmTransaction(signature);
// Call the "SendTweet" instruction on behalf of this other user.
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Yay Tofu!', {
accounts: {
tweet: tweet.publicKey,
author: otherUser.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [otherUser, tweet],
});
// Fetch the account details of the created tweet.
const tweetAccount = await program.account.tweet.fetch(tweet.publicKey);
// Ensure it has the right data.
assert.equal(tweetAccount.author.toBase58(), otherUser.publicKey.toBase58());
assert.equal(tweetAccount.topic, 'veganism');
assert.equal(tweetAccount.content, 'Yay Tofu!');
assert.ok(tweetAccount.timestamp);
});
Yes! All tests are passing! ✅
solana-twitter
✓ can send a new tweet (277ms)
✓ can send a new tweet without a topic (517ms)
✓ can send a new tweet from a different author (1055ms)
3 passing (2s)
Before we move on to our next test, you might be wondering: why didn't we need to airdrop some money to our wallet in the previous tests?
That's because every time a new local ledger is created, it automatically airdrops 500 million SOL to your local wallet, which, by default, is located at ~/.config/solana/id.json.
If you remember, running anchor test
starts a new local ledger for us and therefore airdrops some money to our wallet automatically every single time. That's why we never need to airdrop money into our local wallet before each test.
Okay, let's move on to our two final tests for this episode.
So far, we've only tested "happy paths", i.e. scenarios that are allowed.
In the previous episode, we created two custom guards in our SendTweet
instruction to ensure topics and contents could not have more than 50 and 280 characters respectively. So it could be a good idea to add a test for each of these guards to make sure they work properly.
These tests will be slightly different from the previous ones because we will be asserting that an error is being thrown.
Let's start with the topic and create a new test called "cannot provide a topic with more than 50 characters". This time, we'll only copy the first part of the first test we created.
it('cannot provide a topic with more than 50 characters', async () => {
const tweet = anchor.web3.Keypair.generate();
await program.rpc.sendTweet('veganism', 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
});
Now we need to replace the topic veganism
with anything which has more than 50 characters. To make this obvious, we'll create a topic made of only one character repeated 51 times using the repeat
JavaScript function.
it('cannot provide a topic with more than 50 characters', async () => {
const tweet = anchor.web3.Keypair.generate();
const topicWith51Chars = 'x'.repeat(51);
await program.rpc.sendTweet(topicWith51Chars, 'Hummus, am I right?', {
// ...
});
});
Good, now, if our guard works properly, this call should throw an error. But how do we assert this? There are many ways to do it, including an assert.throws method that accepts a callback and an error that should match the error thrown. However, I prefer to use a try/catch block so we can make further assertions on the error object.
The idea is:

- We wrap the instruction call in a try block.
- We catch any error and make further assertions on the error thrown before returning, so the test stops there.
- If we reach the code after the try/catch block, we call assert.fail, since we should have returned inside the catch block.

We end up with the following test.
it('cannot provide a topic with more than 50 characters', async () => {
try {
const tweet = anchor.web3.Keypair.generate();
const topicWith51Chars = 'x'.repeat(51);
await program.rpc.sendTweet(topicWith51Chars, 'Hummus, am I right?', {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
} catch (error) {
assert.equal(error.msg, 'The provided topic should be 50 characters long maximum.');
return;
}
assert.fail('The instruction should have failed with a 51-character topic.');
});
And just like that, we have a passing test that ensures our custom "topic" guard is working properly!
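If you end up writing many of these "unhappy path" tests, the try/catch pattern can be extracted into a small reusable helper. This is just a sketch of my own, not an Anchor or mocha API.

```typescript
// Runs an async operation that is expected to fail, then hands the thrown
// error to a callback for further assertions. If the operation succeeds,
// the test fails immediately.
async function expectFailure(
  operation: () => Promise<unknown>,
  assertOnError: (error: any) => void,
): Promise<void> {
  try {
    await operation();
  } catch (error) {
    assertOnError(error);
    return;
  }
  throw new Error('Expected the operation to fail, but it succeeded.');
}
```

With a helper like this, the test body shrinks to a single expectFailure call wrapping the program.rpc.sendTweet invocation, with the error assertions passed as the second argument.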
We can do the same for our custom "content" guard, by copy/pasting that test, using a content of 281 characters and tweaking some of the other variables and texts. We end up with the following test.
it('cannot provide a content with more than 280 characters', async () => {
try {
const tweet = anchor.web3.Keypair.generate();
const contentWith281Chars = 'x'.repeat(281);
await program.rpc.sendTweet('veganism', contentWith281Chars, {
accounts: {
tweet: tweet.publicKey,
author: program.provider.wallet.publicKey,
systemProgram: anchor.web3.SystemProgram.programId,
},
signers: [tweet],
});
} catch (error) {
assert.equal(error.msg, 'The provided content should be 280 characters long maximum.');
return;
}
assert.fail('The instruction should have failed with a 281-character content.');
});
Awesome, now both of our custom guards are tested!
The only annoyance with testing instruction calls that throw errors is that these errors inevitably become visible in our test logs and add a lot of noise to the terminal. I couldn't see any way around this without overriding some console methods, so if anyone has a neat solution to this issue, feel free to share it and I'll update this article accordingly.
Congrats on finishing episode 5! Not only have we implemented five tests for our Solana program, but we've also learned how to build JavaScript clients that interact with our program.
Knowing these concepts (Program, Provider, Wallet, etc.) will be a big help when we implement our JavaScript frontend.
In the next episode, we'll add three final tests that will fetch multiple Tweet accounts at once. This will allow us to understand how we can retrieve and display all tweets and filter them by topic or by author. See you there! 👋
Now that our Tweet account is defined and ready to be used, let's implement an instruction that allows users to send their own tweets.
As we've seen in the previous episode, programs are special accounts that store their own code but cannot store any other information. We say that programs in Solana are stateless.
Because of that, sending an instruction to a program requires providing all the necessary context for it to run successfully.
Similarly to how we defined our Tweet account, contexts are implemented using a struct. Within that struct, we should list all the accounts that are necessary for the instruction to do its job.
In your lib.rs
file, just above the Tweet
struct we defined in the previous episode, you should see an empty Initialize
context.
#[derive(Accounts)]
pub struct Initialize {}
Let's replace that Initialize
context with a SendTweet
context and list all the accounts we need in there.
Remove the two lines above and replace them with the following code.
#[derive(Accounts)]
pub struct SendTweet<'info> {
pub tweet: Account<'info, Tweet>,
pub author: Signer<'info>,
pub system_program: AccountInfo<'info>,
}
There's a bunch of new stuff here so I'll first focus on the accounts themselves and then explain a few Rust features that might look confusing.
First of all, adding an account on a context simply means its public key should be provided when sending the instruction.
Additionally, we might also require the account to use its private key to sign the instruction depending on what we're planning to do with the account. For instance, we will want the author
account to sign the instruction to ensure somebody is not tweeting on behalf of someone else.
Okay, let's have a quick look through the listed accounts:
- tweet: This is the account that the instruction will create. You might be wondering why we are giving an account to an instruction if that instruction creates it. The answer is simple: we're simply passing the public key that should be used when creating the account. We'll also need to sign with its private key to tell the instruction we own that public key. Essentially, we're telling the instruction: "here's a public key that I own, please create a Tweet account there for me".
- author: As mentioned above, we need to know who is sending the tweet and we need their signature to prove it.
- system_program: This is the official System Program from Solana. Because programs are stateless, we even need to pass in the official System Program. It will be used to initialize the Tweet account and figure out how much money we need to be rent-exempt.

Next, let's explain some of Rust's quirks that we can see in the code above.
- #[derive(Accounts)]: This is a derive attribute provided by Anchor that allows the framework to generate a lot of code and macros for our context struct. Without it, these few lines of code would be a lot more complex.
- <'info>: This is a Rust lifetime. It is defined like a generic type but it is not a type. Its purpose is to tell the Rust compiler how long a variable will stay alive.

Rest assured, there's no need to dig deeper into these Rust features to follow this series. I'm just throwing in some references for the interested readers.
Finally, let's talk about types. Each of these properties has a different type of account so what's up with that? Well, they all represent an account but with slight variations.
- AccountInfo: This is a low-level Solana structure that can represent any account. When using AccountInfo, the account's data is an unparsed array of bytes.
- Account: This is an account type provided by Anchor. It wraps the AccountInfo in another struct that parses the data according to an account struct provided as a generic type. In the example above, Account<'info, Tweet> means this is an account of type Tweet and its data should be parsed accordingly.
- Signer: This is the same as the AccountInfo type, except we're also saying this account should sign the instruction.

Note that, if we want to ensure that an account of type Account is a signer, we can do so using account constraints.
On top of helping us define instruction contexts in just a few lines of code, Anchor also provides us with account constraints that can be defined as Rust attributes on our account properties.
Not only can these constraints help us with security and access control, but they can also initialise an account for us at the right size.
This sounds perfect for our tweet
property since we're creating a new account in this instruction. For it to work, simply add the following line on top of the tweet
property.
#[derive(Accounts)]
pub struct SendTweet<'info> {
#[account(init)]
pub tweet: Account<'info, Tweet>,
pub author: Signer<'info>,
pub system_program: AccountInfo<'info>,
}
However, the code above will throw an error because we are not telling Anchor how much storage our Tweet
account needs and who should pay for the rent-exempt money. Fortunately, we can use the payer
and space
arguments for that purpose.
#[derive(Accounts)]
pub struct SendTweet<'info> {
#[account(init, payer = author, space = Tweet::LEN)]
pub tweet: Account<'info, Tweet>,
pub author: Signer<'info>,
pub system_program: AccountInfo<'info>,
}
The payer
argument references the author
account within the same context and the space
argument uses the Tweet::LEN
constant we defined in the previous episode. Isn't it amazing that we can do all of that in just one line of code?
Now, because we're saying that the author
should pay for the rent-exempt money of the tweet
account, we need to mark the author
property as mutable. That's because we are going to mutate the amount of money in their account. Again, Anchor makes this super easy for us with the mut
account constraint.
#[derive(Accounts)]
pub struct SendTweet<'info> {
#[account(init, payer = author, space = Tweet::LEN)]
pub tweet: Account<'info, Tweet>,
#[account(mut)]
pub author: Signer<'info>,
pub system_program: AccountInfo<'info>,
}
Note that there's also a signer
account constraint that we could use on the author
property to make sure they have signed the instruction but it is redundant in our case because we're already using the Signer
account type.
Finally, we need a constraint on the system_program
to ensure it really is the official System Program from Solana. Otherwise, nothing stops users from providing us with a malicious System Program.
To achieve this, we can use the address
account constraint which requires the public key of the account to exactly match a provided public key.
#[derive(Accounts)]
pub struct SendTweet<'info> {
#[account(init, payer = author, space = Tweet::LEN)]
pub tweet: Account<'info, Tweet>,
#[account(mut)]
pub author: Signer<'info>,
#[account(address = system_program::ID)]
pub system_program: AccountInfo<'info>,
}
The system_program::ID
is a constant defined in Solana's codebase. By default, it's not included in Anchor's prelude::*
import, so we need to add the following line afterwards, at the very top of our lib.rs file.
use anchor_lang::prelude::*;
use anchor_lang::solana_program::system_program;
EDIT 2022-03-22: In newer versions of Anchor, we can achieve the same result by using yet another type of account called Program
and passing it the System
type to ensure it is the official System program.
#[derive(Accounts)]
pub struct SendTweet<'info> {
// ...
pub system_program: Program<'info, System>,
}
And just like that, we're done with defining the context of our SendTweet
instruction.
Note that Anchor provides a lot more constraints for us. Check out the account constraints section of Anchor's documentation for the exhaustive list it supports.
Now that our context is ready, let's implement the actual logic of our SendTweet
instruction.
Inside the solana_twitter
module, replace the initialize
function with the following code.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
Ok(())
}
A few things to note here:
- We renamed the initialize instruction to send_tweet. Function names are snake cased in Rust.
- We changed the generic type of the Context from Initialize to SendTweet to link the instruction with the context we created above.
- We added two arguments: topic and content. Any argument which is not an account can be provided this way, after the context.
- The function returns a ProgramResult, which can either be Ok or a ProgramError. Rust does not have the concept of exceptions. Instead, you wrap your return value in a special enum that tells the program whether the execution was successful (Ok) or not (Err, and more specifically here, ProgramError). Since we're not doing anything inside that function for now, we immediately return Ok(()), which is an Ok type with no return value inside: (). Also, note that the last line of a function is used as the return value without the need for a return keyword.

Now that our function signature is ready, let's extract all the accounts we will need from the context.
First, we need to access the tweet account, which has already been initialised by Anchor thanks to the init account constraint. You can think of account constraints as middleware that runs before the instruction function is executed.
We can access the tweet
account via ctx.accounts.tweet
. Because we're using Rust, we also need to prefix this with &
to access the account by reference and mut
to make sure we're allowed to mutate its data.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
Ok(())
}
Similarly, we need to access the author
account to save it on the tweet
account. Here, we don't need mut
because Anchor already took care of the rent-exempt payment.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
Ok(())
}
Finally, we need access to Solana's Clock
system variable to figure out the current timestamp and store it on the tweet. That system variable is accessible via Clock::get()
and can only work if the System Program is provided as an account.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
let clock: Clock = Clock::get().unwrap();
Ok(())
}
Note that we're using the unwrap()
function because Clock::get()
returns a Result
which can be Ok
or Err
. Unwrapping a result means either using the value inside Ok (in our case, the clock) or immediately returning the error.
Including the topic and the content passed as arguments, we now have everything we need to fill our new tweet account with the right data.
Let's start with the author's public key. We can access it via author.key
but this contains a reference to the public key so we need to dereference it using *
.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
let clock: Clock = Clock::get().unwrap();
tweet.author = *author.key;
Ok(())
}
Then, we can retrieve the current UNIX timestamp from the clock by using clock.unix_timestamp
.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
let clock: Clock = Clock::get().unwrap();
tweet.author = *author.key;
tweet.timestamp = clock.unix_timestamp;
Ok(())
}
Finally, we can store the topic
and the content
in their respective properties.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
let clock: Clock = Clock::get().unwrap();
tweet.author = *author.key;
tweet.timestamp = clock.unix_timestamp;
tweet.topic = topic;
tweet.content = content;
Ok(())
}
At this point, we have a working instruction that initialises a new Tweet account for us and hydrates it with the right information.
Whilst Anchor's account constraints protect us from lots of invalid scenarios, we still need to make sure our program rejects data that's invalid according to our own requirements.
In the previous episode, we decided to use the String type for both the topic and the content properties and allocate 50 characters max for the former and 280 characters max for the latter.
Since the String type is a vector type and has no fixed limit, we haven't made any restrictions on the number of characters the topic and the content can have. We've only allocated the right amount of storage for them.
Currently, nothing could stop a user from defining a topic of 280 characters and a content of 50 characters. Even worse, since most characters only need one byte to encode and nothing forces us to enter a topic, we could end up with a content of up to (280 + 50) x 4 = 1320 one-byte characters filling all the allocated storage.
Therefore, if we want to protect ourselves from these scenarios, we need to add a few guards.
Let's add a couple of if statements before hydrating our tweet account. We'll check that the topic and the content arguments aren't more than 50 and 280 characters long respectively. We can access the number of characters a String contains via my_string.chars().count(). Notice how we're not using my_string.len() which returns the length of the underlying vector and therefore gives us the number of bytes in the string.
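A quick standalone check of that difference, using an accented word where characters and bytes diverge:

```rust
fn main() {
    let topic = "sécurité"; // 8 characters, but each "é" takes 2 bytes in UTF-8
    assert_eq!(topic.chars().count(), 8); // number of characters
    assert_eq!(topic.len(), 10);          // number of bytes in the vector
}
```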
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
let clock: Clock = Clock::get().unwrap();
if topic.chars().count() > 50 {
// Return an error...
}
if content.chars().count() > 280 {
// Return an error...
}
tweet.author = *author.key;
tweet.timestamp = clock.unix_timestamp;
tweet.topic = topic;
tweet.content = content;
Ok(())
}
Now that the if statements are in place, we need to return an error inside them to stop the execution of the instruction early.
Anchor makes dealing with errors a breeze by allowing us to define an ErrorCode enum using the #[error_code] Rust attribute. For each type of error inside the enum, we can provide a #[msg("...")] attribute that explains it.
Let's implement our own ErrorCode enum and define two errors inside of it: one for when the topic is too long and one for when the content is too long. You can copy/paste the following code at the end of your lib.rs file.
#[error_code]
pub enum ErrorCode {
#[msg("The provided topic should be 50 characters long maximum.")]
TopicTooLong,
#[msg("The provided content should be 280 characters long maximum.")]
ContentTooLong,
}
Now, let's use the errors we've just defined inside our if statements.
pub fn send_tweet(ctx: Context<SendTweet>, topic: String, content: String) -> ProgramResult {
let tweet: &mut Account<Tweet> = &mut ctx.accounts.tweet;
let author: &Signer = &ctx.accounts.author;
let clock: Clock = Clock::get().unwrap();
if topic.chars().count() > 50 {
return Err(ErrorCode::TopicTooLong.into())
}
if content.chars().count() > 280 {
return Err(ErrorCode::ContentTooLong.into())
}
tweet.author = *author.key;
tweet.timestamp = clock.unix_timestamp;
tweet.topic = topic;
tweet.content = content;
Ok(())
}
As you can see, we first need to access the error type like a constant — e.g. ErrorCode::TopicTooLong — and wrap it inside an Err enum type. The into() method is a Rust feature that converts our ErrorCode type into whatever type is required by the code — which here is Err and, more precisely, ProgramError.
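Under the hood, .into() leans on Rust's From/Into traits. Here's a standalone sketch with hypothetical error types standing in for ErrorCode and ProgramError; Anchor generates the real conversion for us behind the scenes:

```rust
// Hypothetical error types, for illustration only.
#[derive(Debug, PartialEq)]
enum MyErrorCode { TopicTooLong }

#[derive(Debug, PartialEq)]
enum MyProgramError { Custom(u32) }

// Implementing From gives us .into() for free: Anchor generates a
// similar conversion for the #[error_code] enum.
impl From<MyErrorCode> for MyProgramError {
    fn from(code: MyErrorCode) -> Self {
        MyProgramError::Custom(code as u32)
    }
}

fn validate(topic_chars: usize) -> Result<(), MyProgramError> {
    if topic_chars > 50 {
        // .into() converts MyErrorCode into MyProgramError here.
        return Err(MyErrorCode::TopicTooLong.into());
    }
    Ok(())
}

fn main() {
    assert_eq!(validate(51), Err(MyProgramError::Custom(0)));
    assert!(validate(10).is_ok());
}
```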
Awesome, not only are we protected against invalid topic and content sizes but we also know how to add more error types and guards in the future.
Before wrapping up this article, I'd like to mention the difference between an instruction and a transaction because they are commonly used interchangeably and it did bug me at first.
The difference is simple though: a transaction is composed of one or multiple instructions.
When a user interacts with the Solana blockchain, they can push many instructions in an array and send all of them as one transaction. The benefit of this is that transactions are atomic, meaning that if any of the instructions fail, the entire operation rolls back and it's like nothing ever happened.
Instructions can also delegate to other instructions either within the same program or outside of the current program. The latter is called Cross-Program Invocations (CPI) and the signers of the current instruction are automatically passed along to the nested instructions. Anchor even has a helpful API for making CPI calls.
No matter how many instructions and nested instructions exist inside a transaction, it will always be atomic — i.e. it's all or nothing.
Whilst we haven't and we won't directly use multiple and nested instructions per transaction in this series, we have used them indirectly already. When using the init account constraint from Anchor, we asked Anchor to initialise a new account for us and it did this by calling the create_account instruction of Solana's System Program — therefore making a CPI.
Before we close this long parenthesis, it can be useful to know that instructions are often abbreviated ix whilst transactions are often abbreviated tx.
Believe it or not, our Solana program is finished! 🥳
We've defined our Tweet account and implemented an instruction that creates new ones on demand and stores all relevant information. As usual, you can find the code for this episode on GitHub.
From this point forward, we will focus on using our program by interacting with its instruction and fetching existing accounts.
Eventually, we'll do this in a fully-fledged JavaScript application but first, we'll do this in tests to make sure everything is working properly. See you in the next episode!
Accounts are the building blocks of Solana. In this episode, we'll explain what they are and how to define them in our programs.
In Solana, everything is an account.
This is a fundamental concept that differs from most other blockchains. For instance, if you've ever created a smart contract in Solidity, then you ended up with a piece of code that stores data and contains the logic to interact with it. Any user that interacts with a smart contract ends up updating data inside that smart contract. That's not the case in Solana.
In Solana, if you want to store data somewhere, you've got to create a new account. You can think of accounts as little clouds of data that store information for you in the blockchain. You can have one big account that stores all the information you need, or you can have many little accounts that store more granular information.
Programs may create, retrieve, update or delete accounts but they need accounts to store information as it cannot be stored directly in the program.
But here is where it becomes more interesting: even programs are accounts.
Programs are special accounts that store their own code, are read-only and are marked as "executable". There's literally an executable boolean on every single account that tells us if this account is a program or a regular account that stores data.
So remember, everything is an account in Solana. Programs, wallets, NFTs, tweets: they're all made of accounts.
Okay, let's define our first account. We need to define a structure that will hold all of the information we need to publish and display tweets.
We could have one big account storing all of the tweets ever created but that wouldn't be a very scalable solution because we need to allocate a fixed size to our accounts. That means we would need to define a maximum number of tweets allowed to be published by everyone.
Additionally, someone has to pay for the storage that will exist on the blockchain. If we have to pre-allocate the storage for every single tweet, we will end up paying for everybody's storage. And the more storage we require, the more expensive it will be.
A better solution would be to have every tweet stored on its own account. That way, storage will be created and paid on demand by the author of the tweet. Since each tweet will require only a small amount of space, the storage will be more affordable and will scale to an unlimited amount of tweets and users. Granularity pays in Solana.
Let's implement that second solution in our Solana program. Open your lib.rs file, that's where we'll implement the entirety of our program. You can ignore all the existing code for now and add the following at the end of the file.
// programs/solana-twitter/src/lib.rs
// ...
#[account]
pub struct Tweet {
pub author: Pubkey,
pub timestamp: i64,
pub topic: String,
pub content: String,
}
That's it! These 7 lines of code are all we needed to define our tweet account. Now, time for some explanations.
- #[account]. This line is a custom Rust attribute provided by the Anchor framework. It removes a huge amount of boilerplate for us when it comes to defining accounts — such as parsing the account to and from an array of bytes. Thanks to Anchor, we don't need to know much more than that so let's be grateful for it.
- pub struct Tweet. This is a Rust struct that defines the properties of our Tweet. If you're not familiar with structs (in Rust, C or other languages), you can think of them as classes that only define properties (no methods).
- author. We keep track of the user that published the tweet by storing their public key.
- timestamp. We keep track of the time the tweet was published by storing the current timestamp.
- topic. We keep track of an optional "topic" field that can be provided by the user so their tweet can appear on that topic's page. Twitter does that differently by parsing all the hashtags (#) from the tweet's content but that would be pretty challenging to achieve in Solana so we'll extract that to a different field for the sake of simplicity.
- content. Finally, we keep track of the actual content of the tweet.

You might think that creating an account on the Solana blockchain keeps track of its owner, and you'd be right! So why do we need to keep track of the tweet's author inside the account's data?
That's because the owner of an account will be the program that generated it.
Therefore, if we didn't store the public key of the author that created the tweet, we'd have no way of displaying the author later on, and even worse, we'd have no way of allowing that user â and that user only â to perform actions such as updating or deleting their own tweets.
Here's a little diagram that shows how all accounts will be related to one another.
As you can see, even our "solana-twitter" program is owned by another account: Solana's System Program. This executable account also owns every user account. The System Program is ultimately the ancestor of all Solana accounts.
Okay, now that we know what we want to store in our Tweet account, we need to define the total size of our account in bytes. We will need to know that size in the next episode when we create our first instruction that will send tweets to the blockchain.
Technically, we don't have to provide the optimal size to store our data. We could simply tell Solana that we want our Tweet accounts to be, say, 4000 bytes (4kB). That should be more than enough to store all our content. So why don't we? Because Solana gives us an incentive not to.
Rent is an important concept in Solana and ensures everybody that adds data to the blockchain is accountable for the amount of storage they use.
The concept is simple:
- Anyone who stores data on the blockchain pays rent for it, proportionally to the size of their accounts.
- Rent is collected at regular intervals.
- If an account can no longer pay its rent, it is deleted!

Wow, wait what?!
Yes, if your account cannot pay the rent at the next collection, it will be deleted from the blockchain. But don't panic, that does not mean we are destined to pay rent on all of our tweets for the rest of our days. Fortunately, there is a way to be rent-exempt.
In practice, everybody creates accounts that are rent-exempt, meaning rent will not be collected and the account will not risk being deleted. Ever.
So how does one create a rent-exempt account? Simple: you need to add enough money in the account to pay the equivalent of two years of rent.
Once you do, the money will stay on the account forever and will never be collected. Even better, if you decide to close the account in the future, you will get back the rent-exempt money!
Solana provides Rust, JavaScript and CLI tools to figure out how much money needs to be added to an account for it to be rent-exempt based on its size. For example, run this in your terminal to find out the rent-exempt minimum for a 4kB account.
# Ensure your local ledger is running for this to work.
solana rent 4000
# Outputs:
# Rent per byte-year: 0.00000348 SOL
# Rent per epoch: 0.000078662 SOL
# Rent-exempt minimum: 0.02873088 SOL
That being said, we won't be needing these methods in our program since Anchor takes care of all of the math for us. All we need to figure out is how much storage we need for our account, so let's do that now.
Earlier, we defined our Tweet account with the following properties:
- author of type Pubkey.
- timestamp of type i64.
- topic of type String.
- content of type String.

Therefore, to size our account, we need to figure out how many bytes each of these properties require and sum it all up.
But first, there's a little something you should know.
Whenever a new account is created, a discriminator of exactly 8 bytes will be added to the very beginning of the data.
That discriminator stores the type of the account. This way, if we have multiple types of accounts — say a Tweet account and a UserProfile account — then our program can differentiate them.
Alright, let's keep track of that information in our code by adding the following constant at the end of the lib.rs file.
const DISCRIMINATOR_LENGTH: usize = 8;
Also, if you're a visual person like me, here's a little representation of the storage we've established so far where each cell represents a byte.
Good, now we can move on to our actual properties, starting with the author's public key.
How do we find out the size of the Pubkey type? If you're using an IDE such as CLion, you can control-click on the Pubkey type and it will take you to its definition. Here's what you should see.
pub struct Pubkey([u8; 32]);
This special-looking struct defines an array. The type of each item is given in the first element and the length of the array is given in the second element. Therefore, that struct defines an array of 32 items of type u8. The type u8 means it's an unsigned integer of 8 bits. Since there are 8 bits in one byte, we end up with a total array length of 32 bytes.
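You can verify these sizes in plain Rust with std::mem::size_of, using a stand-in struct with the same layout as Solana's Pubkey:

```rust
use std::mem::size_of;

// A stand-in with the same layout as Solana's Pubkey.
struct Pubkey([u8; 32]);

fn main() {
    // 32 items of 1 byte each.
    assert_eq!(size_of::<[u8; 32]>(), 32);
    assert_eq!(size_of::<Pubkey>(), 32);
    // For comparison, the i64 timestamp we'll size next weighs 8 bytes.
    assert_eq!(size_of::<i64>(), 8);
}
```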
That means, to store the author property — or any public key — we only need 32 bytes. Let's also keep track of that information in a constant.
const PUBLIC_KEY_LENGTH: usize = 32;
And here is our updated storage representation.
The timestamp property is of type i64. That means it's an integer of 64 bits, or 8 bytes.
Let's add a constant, see our updated storage representation and move on to the next property.
const TIMESTAMP_LENGTH: usize = 8;
The topic property is a bit more tricky. If you control-click on the String type, you should see the following definition.
pub struct String {
vec: Vec<u8>,
}
This struct defines a vector (vec) containing elements of 1 byte (u8). A vector is like an array whose total length is unknown. We can always add to the end of a vector as long as we have enough storage for it.
That's all nice, but how do we figure out its storage size if it's got no limit?
Well, that depends on what we intend to store in that String. We need to explicitly figure out what we want to store and what is the maximum amount of bytes it could require.
In our case, we're storing a topic. That could be: solana, laravel, accessibility, etc.
So let's make a decision that a topic will have a maximum size of 50 characters. That should be enough for most topics out there.
Now we need to figure out how many bytes are required to store one character.
It turns out, using UTF-8 encoding, a character can use from 1 to 4 bytes. Since we need the maximum amount of bytes a topic could require, we've got to size our characters at 4 bytes each.
Okay, so far we have figured out that our topic property should at most require 50 x 4 = 200 bytes.
It's important to note that this size is purely indicative since vectors don't have limits. So whilst we're allocating 200 bytes, typing "solana" as a topic will only require 6 x 4 = 24 bytes.
Note that the characters in "solana" don't actually require 4 bytes each but I'm pretending they do for simplicity.
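You can check these byte widths in plain Rust, since str::len() returns bytes:

```rust
fn main() {
    // str::len() returns bytes, so it shows each character's UTF-8 width.
    assert_eq!("a".len(), 1);  // ASCII: 1 byte
    assert_eq!("é".len(), 2);  // accented Latin letter: 2 bytes
    assert_eq!("€".len(), 3);  // euro sign: 3 bytes
    assert_eq!("🚀".len(), 4); // emoji: 4 bytes
    // "solana" is pure ASCII: 6 characters, 6 bytes (not 24).
    assert_eq!("solana".len(), 6);
}
```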
We're almost done with our topic property but there's one last thing to think about when it comes to the String type or vectors in general.
Before storing the actual content of our string, there will be a 4-byte prefix whose entire purpose is to store its total length. Not the maximum length that it could be, but the actual length of the string based on its content.
That prefix is important for knowing where the next property is located in the array of bytes. Since vectors have no limits, without that prefix we wouldn't know where the string stops.
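To make that concrete, here's a standalone sketch of a length-prefixed string, similar in spirit to how Borsh (the serialisation format Anchor uses) lays out a String: 4 little-endian bytes for the length, then the raw bytes of the content. The length_prefixed helper is made up for illustration:

```rust
// Hypothetical helper mimicking a Borsh-style length-prefixed string.
fn length_prefixed(content: &str) -> Vec<u8> {
    let mut bytes = Vec::new();
    // 4-byte little-endian prefix storing the actual length...
    bytes.extend_from_slice(&(content.len() as u32).to_le_bytes());
    // ...followed by the raw bytes of the content.
    bytes.extend_from_slice(content.as_bytes());
    bytes
}

fn main() {
    let bytes = length_prefixed("solana");
    assert_eq!(bytes.len(), 4 + 6);        // 4-byte prefix + 6 bytes of content
    assert_eq!(bytes[0..4], [6, 0, 0, 0]); // the prefix stores the actual length
}
```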
Phew! Okay, now that we know how to size String properties, let's define a few constants that summarise our findings.
const STRING_LENGTH_PREFIX: usize = 4; // Stores the size of the string.
const MAX_TOPIC_LENGTH: usize = 50 * 4; // 50 chars max.
We've already done all the hard work of understanding how to size String properties so the content property will be super easy.
The only thing that differs from the topic property is the character count. Here, we want the content of our tweets to be a maximum of 280 characters which makes the total size of our content 4 + 280 x 4 = 1124 bytes.
As usual, let's add a constant for this.
const MAX_CONTENT_LENGTH: usize = 280 * 4; // 280 chars max.
Sizing properties can be hard on Solana so here's a little recap table that you can refer back to when sizing your accounts.
If anyone would like to add to this table, feel free to reach out to me and I'll make sure to keep it up-to-date.
EDIT 2022-03-24: There's now a similar table in the official Anchor Book called "Space References". Be sure to check it out.
Type | Size | Explanation |
---|---|---|
bool | 1 byte | 1 bit rounded up to 1 byte. |
u8 or i8 | 1 byte | |
u16 or i16 | 2 bytes | |
u32 or i32 | 4 bytes | |
u64 or i64 | 8 bytes | |
u128 or i128 | 16 bytes | |
[u16; 32] | 64 bytes | 32 items x 2 bytes. [itemSize; arrayLength] |
Pubkey | 32 bytes | Same as [u8; 32] |
Vec<u16> | Any multiple of 2 bytes + 4 bytes for the prefix | Need to allocate the maximum amount of items that could be required. |
String | Any multiple of 1 byte + 4 bytes for the prefix | Same as Vec<u8> |
Let's have a look at all the code we've written in this article and combine our various constants into one that gives the total size of our Tweet account.
// 1. Define the structure of the Tweet account.
#[account]
pub struct Tweet {
pub author: Pubkey,
pub timestamp: i64,
pub topic: String,
pub content: String,
}
// 2. Add some useful constants for sizing properties.
const DISCRIMINATOR_LENGTH: usize = 8;
const PUBLIC_KEY_LENGTH: usize = 32;
const TIMESTAMP_LENGTH: usize = 8;
const STRING_LENGTH_PREFIX: usize = 4; // Stores the size of the string.
const MAX_TOPIC_LENGTH: usize = 50 * 4; // 50 chars max.
const MAX_CONTENT_LENGTH: usize = 280 * 4; // 280 chars max.
// 3. Add a constant on the Tweet account that provides its total size.
impl Tweet {
const LEN: usize = DISCRIMINATOR_LENGTH
+ PUBLIC_KEY_LENGTH // Author.
+ TIMESTAMP_LENGTH // Timestamp.
+ STRING_LENGTH_PREFIX + MAX_TOPIC_LENGTH // Topic.
+ STRING_LENGTH_PREFIX + MAX_CONTENT_LENGTH; // Content.
}
The third section of this code defines an implementation block on the Tweet struct. In Rust, that's how we can attach methods, constants and more to structs and, therefore, make them more like classes.
In this impl block, we define a LEN constant that simply sums up all the previous constants of this episode. That way we can access the length of the Tweet account in bytes by running Tweet::LEN.
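As a sanity check, we can reproduce the same constants in plain Rust and verify the total adds up to 1376 bytes:

```rust
// The same sizing constants as in our program, reproduced standalone.
const DISCRIMINATOR_LENGTH: usize = 8;
const PUBLIC_KEY_LENGTH: usize = 32;
const TIMESTAMP_LENGTH: usize = 8;
const STRING_LENGTH_PREFIX: usize = 4; // Stores the size of the string.
const MAX_TOPIC_LENGTH: usize = 50 * 4; // 50 chars max.
const MAX_CONTENT_LENGTH: usize = 280 * 4; // 280 chars max.

const LEN: usize = DISCRIMINATOR_LENGTH
    + PUBLIC_KEY_LENGTH // Author.
    + TIMESTAMP_LENGTH // Timestamp.
    + STRING_LENGTH_PREFIX + MAX_TOPIC_LENGTH // Topic.
    + STRING_LENGTH_PREFIX + MAX_CONTENT_LENGTH; // Content.

fn main() {
    // 8 + 32 + 8 + (4 + 200) + (4 + 1120) = 1376 bytes in total.
    assert_eq!(LEN, 1376);
}
```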
And we're done with this episode! 🥳
Even though we didn't write that much code, we saw why and how Solana gives us an incentive to think twice about the amount of storage we push to the blockchain.
We also took the time to understand how each property can be sized into an optimal number of bytes and defined reusable constants for better readability.
As usual, you can find the code for this episode on the episode-3 branch of the repository.
In the next episode, we will add more code to our lib.rs file to create our first instruction which will be responsible for creating a new Tweet account.
Getting started with Solana can be quite the chore without a guide. In this article, I'll make sure we have everything ready in our local machine to get started with Solana programs using the Anchor framework.
Since there is quite a lot to go through, I'll make sure to get to the point quickly so you can re-read this article as an actionable checklist in the future. That being said, we will spend a bit of time digging through how Anchor works to better understand the "Build, Deploy, Test" cycle we will use throughout this series.
If you're a Windows user, I'm afraid this guide is more tailored for Linux and Mac users. Fortunately, Buildspace has got a nice guide for installing Solana on a Windows machine so hopefully, you can still follow along after that.
I'd also like to add that the Solana ecosystem moves relatively quickly and, therefore, some of these steps might end up changing or — fingers crossed — being simplified in the future. If that's the case, please reach out to me and I'll make sure to update the article accordingly.
Finally, here's a table of contents in case you're re-reading this and looking for a particular section.
Why? Rust is the language Solana uses to build programs.
Installing Rust is as simple as running this.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
When installed, it will add the following PATH — or similar — to your shell configurations, which is good to know if you want to move that to your dotfiles or something.
export PATH="$HOME/.cargo/bin:$PATH"
You can check that Rust is properly installed by running the following commands.
rustup --version
rustc --version
cargo --version
Similarly, you can install Solana by running the following installer script.
sh -c "$(curl -sSfL https://release.solana.com/v1.9.4/install)"
Note that newer versions might have been released since then so feel free to check for the latest version on the Solana documentation.
Installing Solana will also add a new PATH to your shell configurations. Alternatively, depending on your system, it might ask you to manually update your PATH by providing you with a line to copy/paste.
export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
You can check that Solana is properly installed by running the following commands.
# Check the Solana binary is available.
solana --version
# Check you can run a local validator (Run Ctrl+C to exit).
# We'll see what this does in this article.
# Note this creates a "test-ledger" folder in your current directory.
solana-test-validator
EDIT 2022-01-15: Since Solana v1.9.4, Apple M1 users no longer need to compile the Solana binaries from source and can simply follow the instructions above! 🥳 If you have an Apple M1 computer and you're using an older version of Solana, you can read the previous instructions on this gist.
Why? Solana defaults to using the "mainnet" network. For now, we want to develop our programs locally before we deploy them for real.
solana config set --url localhost
After running this, you should see all your configs pointing to localhost URLs which is exactly what we want.
Why? We need a public and private key to identify ourselves when using the Solana binaries locally.
First, you might want to check if you've already got a local key pair by running the following command.
solana address
If you get an error, that means you don't have one yet and so let's create a new one.
Simply run the following command and follow the steps. Personally, I don't enter a passphrase since the generated key pair will only be used locally.
solana-keygen new
At the end of the process, you will be given a long recovery phrase that can be used to recover both your public and private key. Even though it's only used locally, I still store that recovery phrase on my password manager just in case.
Note that you can recover any key pair by running solana-keygen recover and providing the recovery phrase in the process.
Why? Anchor is a Solana framework that significantly improves the developer experience when creating programs.
You can install Anchor in your local machine by running the following command.
cargo install --git https://github.com/project-serum/anchor anchor-cli --locked
Note that you can also install Anchor globally using npm but, since I use multiple versions of npm via nvm, I'm not a big fan of npm global dependencies.
You may run the following command to check that the Anchor CLI is installed properly.
anchor --version
Why? By default, Anchor relies on yarn to manage JavaScript libraries.
If you don't have it installed already, you can do so by running one of the following commands:
# Using npm global dependencies.
npm install -g yarn
# Using homebrew on Mac.
brew install yarn
# Using apt on Linux
apt install yarn
Now that we have Anchor installed, we can run anchor init to start a new project!
# Go to your dev folder (for me it's "~/Code").
cd ~/Code
# Create a new Anchor project.
anchor init solana-twitter
# Cd into the newly created project.
cd solana-twitter
Inside our new project, Anchor prepared a bunch of things for us:
- A programs folder for all our Solana programs. It already comes with a very simple program we can build upon so we don't have to do all the scaffolding.
- A tests folder for all our JavaScript tests directly interacting with our programs. Again, it already comes with a test file for our auto-generated program.
- An Anchor.toml configuration file helping us configure our program ID, Solana clusters, test command, etc.
- An app folder that will, later on, contain our JavaScript client.

Now that we have our project scaffolded, let's see how we can build, deploy and test the default program generated by Anchor. That way, we'll understand more about the development cycle of building a Solana program.
Anchor has two very useful commands that will delegate to the Rust compiler and Solana CLI tools to build and deploy your programs for you.
# Compiles your program.
anchor build
# Deploys your compiled program.
anchor deploy
Whilst these commands are not necessary to compile and deploy, they certainly make the developer experience a lot more enjoyable by abstracting all of the more complex commands we would otherwise need to run.
That being said, let's have a quick look at what happens when we run these commands.
First, our code is compiled and we will be shown any warnings or errors that occur at compile time. The Rust compiler is pretty powerful so if we did something wrong in our code, it most likely won't let us compile it.
Let's run this command on our brand new project.
anchor build
As you can see, our program compiled but you should see the following warning: unused variable: 'ctx'. That's fair because the auto-generated program is so simple that it doesn't actually do anything with that ctx variable and, therefore, the compiler warns us it's not being used. We can safely ignore that warning for now.
Additionally, once our code was compiled, the target folder was updated accordingly. You don't need to fully understand what's happening inside that folder but it basically keeps track of any built releases and deployments of our program. Note that this folder is relative to your local machine and will not be committed to your git repository.
Finally, anchor build also generated an IDL file. IDL stands for "Interface Description Language" and it is quite simply a JSON file that contains all the specifications of our Solana program. It contains information about its instructions, the parameters required by these instructions, the accounts generated by the program, etc.
The purpose of this IDL file is to feed it to our JavaScript client later on so we can interact with our Solana program in a structured manner.
Running anchor deploy will take our latest build and deploy it on the cluster.
Note that the first time you build a program, it will also generate a public and private key for it — which will be stored in the target directory. The public key generated will become the unique identifier of your program — a.k.a. the program ID.
Since we've set up our cluster to be localhost earlier, we currently have no network to deploy to. That means, if you try to run anchor deploy right now, you'll get an error saying error sending request for url (http://localhost:8899/).
To fix that, we need to run a local ledger.
A local ledger is basically a simulation of a Solana cluster inside your local machine. When building locally, we don't actually want to send anything to the Solana blockchain so this is exactly what we want.
Fortunately for us, running a local ledger is as simple as running the following command.
solana-test-validator
This command will keep a session open in your terminal until you exit it by running Ctrl+C. Whilst the session is open, you now have a local ledger to deploy to!
That means you can now run anchor deploy and it will successfully deploy to your local ledger.
anchor deploy
Note that all the data sent to your local ledger is stored in a test-ledger folder created in the current directory.
So let's make sure we don't commit that entire folder to our git repository by updating our .gitignore file like so.
.anchor
.DS_Store
target
**/*.rs.bk
node_modules
+ test-ledger
Also note that exiting your local ledger (by running Ctrl+C) will not destroy any data you've sent to the cluster. However, removing the test-ledger folder will. You can achieve the same result by adding the --reset flag.
# Runs a new empty local ledger.
solana-test-validator --reset
Now that we've run anchor build and anchor deploy for the first time, we need to update our program ID.
As we've mentioned above, a new key pair for our program is generated on the very first deployment. Before that, we simply don't know what the public address of our program will be.
Your program ID should be displayed when running anchor deploy but you may also access it by using the following Solana command.
solana address -k target/deploy/solana_twitter-keypair.json
# Outputs something like: 2EKFZUwMrNdo8YLRHn3CyZa98zp6WH7Zpg16qYGU7htD
Depending on how you named your Anchor project, this file might be called something else so you may have to look at the target/deploy folder to find the right file.
Okay now that we know our program ID, let's update it.
When we created our new project using anchor init, Anchor used a random placeholder in two places as our program ID that we can now replace.
First, in our Anchor.toml configuration file.
[programs.localnet]
solana_twitter = "2EKFZUwMrNdo8YLRHn3CyZa98zp6WH7Zpg16qYGU7htD"
Then, in the lib.rs file of our Solana program. In my case, that's programs/solana-twitter/src/lib.rs.
use anchor_lang::prelude::*;
use anchor_lang::solana_program::system_program;
declare_id!("2EKFZUwMrNdo8YLRHn3CyZa98zp6WH7Zpg16qYGU7htD");
Finally, we need to build and deploy one more time to make sure our program is compiled with the right identifier.
anchor build
anchor deploy
Before we wrap up this article, I'd like to make sure we can run the generated tests on our program.
If you look inside your Anchor.toml file, you'll notice a scripts section containing a test script.
[scripts]
test = "yarn ts-mocha -p ./tsconfig.json -t 1000000 tests/**/*.ts"
This is already configured for us so that it'll run all the tests inside our tests folder using Mocha.
To run that script, run the following command.
anchor run test
If you have a local ledger running (via solana-test-validator) and you've built and deployed your project properly (via anchor build and anchor deploy), then you should see the tests passing!
Note that you can add any custom script to your Anchor.toml
configuration file and use anchor run
to execute it. Here's a quick example.
[scripts]
test = "..."
my-custom-script = "echo 'Hello world!'"
anchor run my-custom-script
# Outputs: Hello world!
Alright, we now know the full development cycle. First, you need a local ledger, then you can build, deploy and test. Here's a quick recap:
# Start the local ledger.
solana-test-validator
# Then, on a separate terminal session.
anchor build
anchor deploy
anchor run test
Well, it turns out Anchor has a special command that takes care of that full cycle for us. It's called:
anchor test
⚠️ Not to be confused with anchor run test, which only runs the test script inside your Anchor.toml file.
So what does anchor test actually do? It spins up a fresh local ledger, builds and deploys your program, and runs your test script before terminating the ledger. Note that it will fail if you already have a local ledger running, so make sure to terminate any local ledger before running anchor test; this is a common gotcha. Also, note that it uses the --reset flag to make sure our tests always start with the same empty data.
solana-test-validator --reset
anchor build
anchor deploy
anchor run test
The anchor test
command is really powerful when developing your Solana programs locally. It abstracts away all the faff and lets you focus on your program.
Remember though that running anchor test immediately after generating a new project (via anchor init) will not work, because you'll first need to update your program ID after the first deployment.
Therefore, I suggest you build and deploy manually the very first time; once you've updated your program ID, you can start using anchor test.
If you end up working on other Anchor projects, you might come across the anchor localnet
command. This command is very similar to anchor test
except that it does not run any test and does not terminate the local ledger at the end.
Thus, it is basically equivalent to:
solana-test-validator --reset
anchor build
anchor deploy
# The local ledger will stay active after deployment.
This command is typically used to quickly spin up your program when working on your frontend client.
Phew, we got there in the end! Congratulations on setting up your local machine with Solana and Anchor. On top of that, we now have a fully scaffolded project we can use to build our Twitter in Solana project.
I do hope you didn't encounter too many issues along the way. I know setting up things on your machine can be a real nightmare especially when the technology is moving quickly. If you have any issues feel free to add a comment to this article or better yet, create an issue on the GitHub repository below so anyone can jump in and help you across all episodes.
If you want to see the code we generated during this episode, here's a link to the episode-2
branch of this series' repository.
Now that we're all set up and we understand our development cycle, let's start building things! 🔥
Before we dive into this series, let's start by having a quick overview of what we are trying to achieve. I'll also list a few prerequisites that should help you follow the series but, no worries, nothing too drastic: we are building "from scratch" after all.
By the end of this series, we'll have a fully functional Twitter-like application where anyone can use their wallet to connect and start publishing tweets.
Note that I've already deployed this project on devnet so you can have a little play around.
Here's a quick overview of the features it will have:
Regarding the implementation:
Don't worry if not all of the points above make sense to you yet, we will go through them in this series.
Additionally, once we've implemented all of these features, we'll likely build on top of them as additional follow-up articles. For instance, we could allow users to edit their tweets or even delete them so they can get their rent-exempt money back â again, we'll explain what "Rent" is and how it works in Solana in this series.
Are you excited? I'm excited! Alright, let's go through a few prerequisites and get started.
There aren't many prerequisites for this series as we're going to build everything from scratch. However, some acquired knowledge might make your journey smoother and therefore is worth mentioning.
Demo... check! Prerequisites... check! In the next episode, we'll make sure our local machine has everything it needs to start working with Solana and its most popular framework: Anchor.
One last important note: the project we are building is open source and already available on GitHub. So, if you can't wait to have a look around the code, here's the link.
Alright, LFG! 🔥
Saying that you've probably heard or read about web 3 lately would be an understatement. In just a few months, the decentralised world went from being there to being everywhere. From NFT profile pictures to shiny new blockchains, it's been all over the Twitter sphere and has been both celebrated as a revolution for freedom and criticised as a frenzy that will destroy our planet.
In this article, I'll start by giving web 3 a brief high-level introduction before mentioning how and why I ended up getting more and more interested in this decentralised world and, more particularly, NFTs and the Solana blockchain.
As a little disclaimer, I would like to add that I didn't know much about web 3 until around a month ago, so I still have a lot to learn and some of my views might appear too simplistic for more advanced readers. Please be kind and feel free to point out in the comments anything that could be improved, and I will make sure to update the article accordingly.
Web 1 gave us the basis of the internet. Static pages, FileZilla, positioning with tables, etc. If you've ever had to design email templates, you've basically gone back in time to web 1.
Web 2 improved on top of that basis by allowing us to gather and display user-generated content. This allowed web applications of all sorts to flourish and gave us the internet we know today.
Web 3 isn't a direct improvement of web 2 but rather an alternative approach to computing and storing information. Web 2 uses a bunch of servers owned and controlled by the company that created the application, whereas web 3 uses a network of servers that can be owned by anyone to compute and store data.
It works by receiving events that alter the data, called transactions. They are first authenticated and verified by a network of servers, called a cluster. When the cluster reaches a consensus, they are then stored in blocks of multiple transactions that are duplicated and propagated to the entire cluster, creating a public and decentralised digital ledger that we call a blockchain.
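To make the chaining idea concrete, here is a toy sketch in plain shell (nothing blockchain-specific, just sha256sum): each block's identifier hashes the previous block's identifier together with its transactions, so tampering with any old block would change every hash that comes after it. The transactions and the 16-character truncation are of course made up for illustration.

```shell
# Toy "blockchain": each block ID commits to the previous block ID,
# which makes the whole history tamper-evident.
prev="genesis"
for tx in "alice->bob:5" "bob->carol:2"; do
  # Hash the parent ID together with this block's transaction.
  prev=$(printf '%s|%s' "$prev" "$tx" | sha256sum | cut -c1-16)
  echo "block $prev  tx: $tx"
done
```

Real blockchains add signatures, consensus rules and much more on top, but the parent-hash linkage above is the core trick that makes the ledger append-only in practice.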
Anyone, including you, can set up a server and become a node in a blockchain's cluster. If you do though, it will cost you a significant amount of power to run and you can expect your electricity bill to be much higher on your next meter reading. That's why blockchains usually reward servers in their clusters by offering a small remuneration paid in their own cryptocurrency, a.k.a. mining.
The cryptocurrency of the blockchain creates an economic balance where users of the blockchain end up contributing to the cluster's remuneration. So instead of paying monthly for your server as in web 2, you pay a small fee for each transaction that you send.
I won't talk about blockchains in much more detail because A. I'm not the most qualified person for that task and B. the actual architecture of a blockchain can vary significantly from one blockchain to another, as we will see with Solana in this article.
The decentralised web is not new. Bitcoin was released in 2009 and Ethereum in 2015. Until recently, web 3 was mostly used in finance as a way to decentralise financial entities such as banks and brokerages, disrupting their monopoly and bringing more transparency to the industry. It created a whole new era of applications called DeFi, for "Decentralised Finance".
In my opinion (biased by the fact that I never paid attention to web 3 until now), what skyrocketed the amount of attention projected onto the decentralised web was NFTs. All of a sudden, you had stories everywhere about people and even celebrities buying pixelated pictures that could have been made in Paint for millions of dollars. That sure created some interest in investing time in this technology, once considered niche, since it had the potential of yielding a tremendous return on investment.
If you're confused about what NFTs are, the acronym stands for "Non-Fungible Token": an NFT represents an entity that can't be replaced with something else and thus can only be priced based on what others are willing to pay for it. Conversely, we say a litre of olive oil, a kilogram of gold or $200 are fungible because they are replaceable by another identical version of themselves, and therefore their value is known and set by the market. Just like in the physical world, NFTs have thrived in art. Instead of buying a painting, you're buying a record on a blockchain that says you own a digital asset.
Whilst NFTs are vastly popular in web 3, they are certainly not the only blockchain use case out there. Fully decentralised communities with no leadership are also flourishing and are known as "Decentralised Autonomous Organisations" (DAOs). These communities typically use a decentralised application (known as a dApp) to reach a consensus among their members. Just like web applications in web 2, dApps can be created for any number of use cases such as crypto gaming and DeFi.
Like everyone, I could see web 3 slightly taking over my Twitter timeline but didn't pay much attention to it. I was slightly interested to know how it all worked but didn't really know where to start. Then I came across a live feed from Nader Dabit deploying an NFT smart contract similar to the Loot project but, instead of clothing items, it was developer items and characteristics that were randomly picked. The vision for that NFT was (and still is) to create a developer-focused DAO, called Developer DAO, where you need to own one of these NFTs to be part of the community. Since minting the NFT (i.e. claiming an NFT that's not been generated yet) was free, I decided it would be a good way for me to dip my toe into this world.
I downloaded the MetaMask chrome extension, created my first Ethereum (ETH) wallet and then followed this tutorial to mint my first NFT. Note that minting an NFT is usually easier than this because the creators can implement a frontend client that interacts with the blockchain and abstract all the necessary steps for you.
Whilst minting the NFT was free, I still had to fund my ETH wallet to pay for the transaction fee, a.k.a. the gas fee. That's when I found out that a single ETH transaction can cost between $40 and $400 depending on how cluttered the network is.
Imagine creating a web application that charges your users between $40 and $400 every single time they interact with it. Want to update your password? $45, please. Want to change your project's title? $168, please. Insane.
Fortunately for me, the gas price that evening was around $60 so I went for it. And just like that, I owned my very first NFT.
Whilst I'm not planning on selling this one, I did start to get interested in NFTs in general and how to predict the ones that will be successful. Minting an NFT is usually pretty low cost (depending on the hype of the project) so if you bet on the right one, you can make an insane return. For instance, Mekas were minted for 0.2 ETH (~$770) and sold the same day for 7 ETH (~$27,000).
Note that the ETH conversion rate was taken at the time of writing this article.
That being said, it's not always easy to bet on the right NFT and, even with all the research in the world, the volatility is ridiculously high. You can easily end up in a situation where you mint 2-3 NFTs for $200 each and, a week later, nobody cares about these projects anymore and you've just lost $600. Even with projects that are going to be clear winners like the Mekaverse, they often implement a lottery (a.k.a. a raffle) that randomly decides who will be able to mint one of the 8888 Mekas available. Since hundreds of thousands of people participated in the raffle, you had to be pretty lucky to get one in the first place. Sadly but predictably, I didn't.
Then I got interested in who made these NFTs, how, and how much money they were making. After all, investing and trading digital art is not really my thing but, as a developer and creative person, I could potentially create my own.
So I did some number crunching and, yep, that's a lot of money. Let's take Mekas as a successful example here to see how much they made on their launch day. They sold 8888 Mekas for 0.2 ETH each (technically they sold a bit less because they reserved some for marketing purposes, but let's keep it simple). That's 1777.6 ETH, or around 6.8 million dollars at the time of this writing. On top of that, they have a 2.5% royalty fee on secondary sales. That means every time someone that owns a Meka sells it to someone else, the creators of the Mekaverse take 2.5% of that price. Currently, Mekas have a total trade volume of 36800 ETH on OpenSea (a popular NFT marketplace for Ethereum). Take 2.5% of that and that's another 920 ETH, or 3.5 million dollars, on top of their initial 6.8 million. And that number will continue to grow as Mekas continue to be traded.
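If you want to double-check that back-of-the-envelope maths, here it is in plain shell with awk. The figures are the ones quoted above (8888 mints at 0.2 ETH, 2.5% royalties on 36800 ETH of secondary trades), not live data:

```shell
# Rough launch-day revenue for the Mekaverse drop, using the article's figures.
awk 'BEGIN {
  mint    = 8888 * 0.2     # primary sale revenue in ETH
  royalty = 36800 * 0.025  # royalties on secondary sales so far
  printf "mint: %.1f ETH, royalties: %.0f ETH\n", mint, royalty
}'
# prints: mint: 1777.6 ETH, royalties: 920 ETH
```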
Now, not all NFT projects make that sort of money and a lot of them are giving a significant percentage of their earnings to various charities but, all in all, it's not impossible to make a few million dollars by creating your own NFT projects.
Because of this, you see hundreds of new NFT projects being launched every week and, sure, there is this consideration that it could be a bubble that's not going to last forever. However, I do feel that if you design your project in a way that brings something new, something different and something that's going to create a community, you have the potential to succeed and you don't even have to rush.
But if you're going to go down this road, a candy machine NFT drop (you insert lots of randomly generated images into an existing program and allow others to mint them one by one) won't be enough. You've got to create your own smart contracts and dig deeper into the world of blockchains. And that's what I did.
Choosing a blockchain is not easy because there's plenty to choose from. A lot of people decide to go with Ethereum because it is the most popular blockchain, especially in the world of NFTs. However, I still couldn't get my head around the insanely high transaction fees and the amount of time and power it takes for the blockchain to validate a transaction.
Additionally, this has caused web 3 to be vastly criticised due to its ecological impact. For instance, I've only made 3 transactions with my ETH wallet so far and, according to carbon.fyi, they have caused 43kg of CO2 emissions. That's the equivalent of the CO2 emissions of an average person for 4 days! With the scale at which the Ethereum network is being used, you can see why people are complaining, especially when they find out it's all for exchanging pictures of pixelated monkeys.
Sadly, all blockchains get thrown into the same bag, but they shouldn't be. Old blockchains such as Bitcoin and Ethereum do not scale well for legacy reasons, but most of the recent blockchains do acknowledge that a more sustainable web 3 is crucial for the future of the decentralised internet.
A lot of those recent, more scalable blockchains build on top of what Ethereum has done and even use the same language for creating smart contracts, called "Solidity", making it easier to deploy to many blockchains and reducing the learning curve of picking up new ones. They typically improve on Ethereum by changing a few algorithms to make them more scalable.
However, there's one blockchain out there that decided not to play by those rules and to start everything from scratch, and that's Solana. Its blockchain implementation is so different from the others that it feels like it's living on its own desert island. Its mission is to solve the scalability issue with its high-performance protocol in order to make web 3 more scalable, affordable and sustainable. It has a total of 8 core innovations, making it the best blockchain network by transaction speed as of July 2021.
And the transaction fees? It's currently at $0.00025 and is set to never go above $0.01 no matter the scale. That's more like it. I don't mind updating my password for that price.
So I was hooked and decided to embark on that desert island and forget about all other blockchains. I have a few friends that have chosen other blockchains and we can never understand each other because the architectures are massively different.
Getting started with a fairly new blockchain that looks like no other was not an easy task. Depending on when you're reading this article, things might be different, but there is definitely a lack of documentation, articles and tutorials simply because of how new the ecosystem is. That being said, I have been extremely surprised by the speed at which the Solana ecosystem is growing. Almost every week comes with major improvements, new projects, frameworks, tutorials and/or courses.
I started by reading the "Programming on Solana - An Introduction" article from Paul Schaaf, which has been widely successful in the Solana ecosystem as one of the only detailed tutorials on how to create programs in Solana (smart contracts are called programs in Solana). The article creates an "Escrow" program from scratch and, whilst it says it's a one-hour read, it takes a good day to digest.
It introduces key concepts that are unique to the Solana blockchain such as programs, accounts, PDAs, etc. Even though it took me a whole day to read and digest, I have to say some of these concepts didn't resonate with me until I started putting them into practice days later. So if you're interested in reading this article, you kind of need to accept that some things won't make much sense until later.
Furthermore, the program written in this article uses no framework or any abstraction to make the code easier to understand. Whilst this was frustrating to read because you constantly see the potential in extracting generic logic (such as transforming data from and to arrays of bytes), I actually think there's value in learning how to create programs that way to really understand how they work.
That being said, I searched for a Solana framework and found Anchor. It takes a lot of the low-level pain away from you, such as serialisation, defining instructions, verifying accounts, etc. It certainly isn't what Laravel is to PHP, but it is a good step in the right direction and the framework keeps getting better and better.
Shortly after finding out about Anchor and playing with their getting started tutorials, Nader Dabit released an article called "The Complete Guide to Full-Stack Solana Development with React, Anchor, Rust, and Phantom". The timing couldn't have been more perfect. I was able to follow the steps and create a full web application that used a program on Solana as the backend. Even though the application was a simple counter, that was a key moment for me because I was finally able to create full web 3 applications (dApps) in Solana and apply everything I had learned before.
Getting started in the Solana ecosystem was a unique experience for me because I had never dived so early into a technical field. If I had a specific question, I simply couldn't rely on documentation, Stack Overflow or third-party articles like I'm used to. So I developed a few techniques that I'll mention here.
The search feature of GitHub has been a tremendous help. By simply searching for relevant pieces of code and setting the language filter to "Rust", I was able to find public repositories of other Solana developers that had gone through the same troubles as me and learn from their code. None of these repositories had any stars or were listed anywhere because they were just little labs for developers like me trying to make sense of it all. I found a lot of gems and answered a lot of my questions using this simple technique and I will definitely continue using it in the future.
Using an IDE was crucial for me as I was using Rust for the first time. At first, I kept everything in VS Code with a few helper extensions but Rust is a very unique language and I kept having to google things to understand its quirks every 5 seconds. Then I decided to use CLion from JetBrains and suddenly everything became a lot smoother. The IDE was autocompleting things for me and I was able to quickly understand why types, references and lifetimes weren't working. I won't make this mistake again and I'll make sure to have an IDE to hold my hand when learning new complex programming languages.
Searching and posting on Discord was another useful technique for me. Discord seems to be a very important tool for web 3 communities and you are going to end up signing up to a few Discord servers to get by. Most of these servers have one or more "Developer support" channels where devs can ask any question and hope someone helps them. I have to say, more often than not you won't get an answer because the probability that someone who knows the answer is online at the exact time you're sending your question is quite low. However, you can use the Discord search feature and hope that someone else has asked a similar question in the past and that it has been answered. It's not the most user-friendly way to get answers but, in certain situations, it will be the only place that information is available so it's good to know how to reach it.
One last thing I'd like to mention is that, when an ecosystem is still young and under-documented, the best thing to do to help it grow is to contribute to it. When you've made such an effort to learn about it all and put all the pieces of the puzzle together, it would be a shame not to share it with others and make their introduction to web 3 that much less painful.
Speaking of contributing, once I knew how to create dApps in Solana, I spent a lot of time contributing to one of Solana's core repositories: "wallet-adapter".
This repository provides JavaScript packages and UIs for integrating your application with almost all wallets that support Solana. However, I was surprised to see that they did not have a Vue version of their packages even though they had one for React and Angular. Being a Vue fan and determined not to work with the other frameworks, I decided to create a dApp that used a custom Vue version of their wallet adapters. I then saw they had an open issue to add support for Vue and I agreed to take on that task. Two weeks and three Pull Requests later, this repo now fully supports Vue and I couldn't be prouder to have contributed to an ecosystem I knew nothing about a few weeks earlier.
Now that I was more confident with the Solana ecosystem, it was time to dig a bit more into the world of NFTs in Solana since that was my initial goal.
In almost every other blockchain, there is a standard called ERC-721 that defines how NFTs should be modelled in smart contracts and how the metadata should be provided in order for other applications such as wallets and marketplaces to display them properly.
Since Solana does not use Solidity, it cannot follow that standard. Instead, the standard was defined by Metaplex which is a set of Solana programs designed to help you create your own NFT marketplace that supports many features such as printing duplicated editions, auctions, etc.
Now, I have to say, I have a lot of frustrations towards Metaplex, the main reason being that its main use cases are pretty niche, yet we have no other choice but to use it if we want our NFTs to be recognised by wallets and marketplaces. Chances are, the only thing you're going to need from Metaplex is their "Token Metadata Program" and, if you want to use it in isolation for your project, good luck. I have, however, managed to finally reach that point in a way that's reusable and fully encapsulated in a Solana program, so I will likely dedicate an entire article to it at some point to help others that might be stuck. That being said, I hope that, in the near future, either Solana will take over the Metaplex standard or Metaplex will make it easier to use their programs in isolation.
And that's where I am now. I can create Solana dApps that generate NFTs following the Metaplex standard!
Now I want to add additional custom data to my NFTs which can be used to make them interact with one another and keep track of their current state. Whilst most NFTs only need an array of properties with different probabilities that are stored directly in the metadata, I'd like to have real data on-chain so I can treat NFTs like players or entities in a decentralised game.
Funnily enough, I don't think this is going to be hard at all now that I know how Metaplex attaches its standardised metadata to the NFT. I'd love to explain how all of this works in Solana, but I want to keep this article focused on the journey rather than the low-level technical stuff, which I'll dedicate an entire article to.
After that, it will just be a case of implementing the logic of the game itself, using these NFTs with storage as entities, which is going to be super fun!
Whilst there is certainly a steep learning curve when entering the world of web 3, it has been a fun and exciting experience that I can only recommend to curious readers.
In addition, the decentralised ecosystem is growing day by day and more resources are constantly being released, making it easier and easier to get started. Speaking of which, if you're interested in the Solana blockchain, there is a very promising course getting released soon focused on learning Solana by creating a dApp from scratch! I will definitely check it out as I'm sure I'll learn a lot of new things from it. Also, if you're more into Solidity and other blockchains, Buildspace has other courses for them too.
Finally, I'd like to mention that, if you do enter the world of web 3, there is an enormous contribution opportunity. So many things are not documented, creating teaching opportunities. So many things can be improved in the open-source world, making it possible to have a significant impact on the technology. So many things have not even been done yet, and people will lose their minds when they get released. So come join the fun and leave your mark!
In this series, we've seen how to deploy a Laravel application from scratch by creating our server manually. Whilst it's good to know how to do it ourselves to understand the mechanics of our server, it can be a pain to maintain in the long run.
That's why SaaS applications such as Laravel Forge and Ploi exist. They provide an abstraction layer between you and your server by automating its provisioning and its maintenance, and by allowing you to configure it directly inside a user interface.
This article focuses on creating a server using Ploi and deploying to it using Deployer. The previous one focused on Laravel Forge.
Before we start, I'm going to assume you already have a Ploi account and that you've configured it appropriately.
If you're going to follow along, make sure the following points are configured.
All of these configurations can be found on your profile pages on Ploi.
Alright, let's get started. We'll create a new server directly on the Ploi interface.
Select the server provider of your choice; in our case, we'll use Digital Ocean.
Then, select your credentials, select "Server" as the "Server type" and fill in the rest of the form however you like.
Notice how you can select the PHP version of your choice before creating the server. Additionally, you'll be able to upgrade or downgrade PHP versions later on with only one click. That's much easier than having to do it ourselves as we did in the second episode of this series.
When you're done, click "Create server" and you should see a "Server installation" page showing you the progress as a percentage. This means your server is being created on Digital Ocean and Ploi is running a bunch of scripts on it to install everything we need for our Laravel applications.
Now, this may take a little while so, whilst we're waiting, let's point our domain name to our new server.
As we've seen in episode 2, we need to add a record in our DNS configuration for our domain name to point to the IP address of our server.
In this tutorial, we've already assigned jollygood.app to the server we manually created in episode 2. Thus, I am going to use the subdomain ploi.jollygood.app to point to our new server created by Ploi. Of course, feel free to use any domains and/or subdomains for your new server.
Once that's done, it may take a few minutes or even hours for the changes to go live, so it's better to do this as soon as we've got the IP address of our server. Whilst Ploi will not tell you the IP address of your server until it is fully configured, you should be able to see it fairly quickly on Digital Ocean.
With any luck, the DNS changes should be live by the time the server has finished being configured on Ploi.
As soon as the server has been successfully installed and configured, you should receive an email from Ploi with important and confidential credentials:
- A password allowing you to run any sudo command on your server.
- A database password for the ploi user. We'll need this to access our production database later on.
Speaking of databases, we'll need one for our application, so let's create one right now. On your server page, click on "Databases" on the sidebar and create a new database. We'll call ours jollygood for this article.
Now that our server has been successfully configured, let's add a site to it by going on the "Sites" page accessible via the sidebar.
First, click on "Advanced settings" to have access to all fields.
Then, enter the domain of your application that matches the DNS record created on Digital Ocean (in our case, ploi.jollygood.app).
Finally, and that's important, replace the "Web directory" and "Project directory" fields with /current/public and /current respectively. This is because, as we've seen in episode 4, when deploying with Deployer, a subfolder named current will be created pointing to the latest stable release. This will ensure Ploi knows where to run commands in our application and update the Nginx configuration accordingly.
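For reference, here is roughly what Deployer leaves on the server after a couple of deployments (the release numbers are illustrative; the current/releases/shared layout is Deployer's standard structure), which is why the /current/public and /current paths above are the right ones to give Ploi:

```text
~/ploi.jollygood.app/
├── current -> releases/2    # symlink to the latest stable release
├── releases/
│   ├── 1/
│   └── 2/
│       └── public/          # what the "Web directory" now points to
└── shared/                  # files shared across releases (e.g. .env)
```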
After clicking on "Add site", you should see the following page.
If we ignore the "1-click installation" options, Ploi is asking us to provide a Git repository so it can clone it inside the server for us.
Technically, we've got no need for that since we'll be deploying using Deployer, which already knows our repository URL. However, if we don't, the user interface for our new site will be locked in this state, which is not very helpful for maintaining it.
Thus, we're going to play the game and add our Git repository even though we'll re-deploy using Deployer in a minute.
Choose the Git provider of your choice and select your repository. There's no need to tick "Install composer dependencies" since we're going to re-deploy in a minute.
Next, there's a little adjustment we need to make to our Nginx configuration file. If you remember, in episode 2, we mentioned that the SCRIPT_FILENAME and DOCUMENT_ROOT FastCGI parameters had to be overridden to use the real absolute path to avoid symlink paths being incorrectly cached. Since Ploi does not expect us to use Deployer by default, its Nginx configuration does not account for that. But that's fine; we can update this directly inside the UI.
On your site's page, click on "Manage" from the sidebar. From there, you'll have a bunch of buttons to manage your site including "Edit NGINX configuration". Click on that button to open a modal allowing you to edit your Nginx config file.
Then, add the following lines after include fastcgi_params and remove the line before it since we're already overriding it.
- fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
+ fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
+ fastcgi_param DOCUMENT_ROOT $realpath_root;
After that, make sure to restart Nginx to apply your changes. Go to your server's page, click on "Manage" on the sidebar and click on the "Restart NGINX" button.
If you're planning on using Deployer for a lot of sites in the future, you may also create a new Nginx template that will be used instead of the default one. To do that, go to your profile's page, click on "Webserver templates" on the sidebar and create a new template by adding the two lines above and removing the overridden line.
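For reference, the FastCGI section of such a template would look roughly like this sketch. Everything except the last two lines mirrors a typical default; the PHP socket path in particular is an assumption that depends on your server's PHP version:

```nginx
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php8.0-fpm.sock; # assumption: adjust to your PHP version
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    fastcgi_param DOCUMENT_ROOT $realpath_root;
}
```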
Finally, let's make sure our domain is available via HTTPS. Ploi makes this super easy for us. On your site's page, click on "SSL" on the sidebar and select "LetsEncrypt".
Then make sure you enter the right domains and click "Add certificate". And that's it.
Okay, now that our server and our site are ready, let's make sure we can deploy using Deployer.
For this article, I will use the same configuration file we ended up with after episode 4. However, I'm going to update the host configurations slightly so it works with Ploi.
- Ploi created a ploi user on our server, so we'll use it as the remote_user.
- We'll use ploi.jollygood.app as the hostname since we've created a DNS record that points to the IP address of our server.
- Ploi deploys inside the home directory of the ploi user and uses the site's domain to name the site's folder. So we'll use the same convention here and deploy to /home/ploi/ploi.jollygood.app, which can be simplified to ~/{{hostname}}.
- Additionally, we need to make sure the php_fpm_version matches the PHP version of our server.

Thus, we end up with the following deploy.yaml file.
import:
  - recipe/laravel.php
  - contrib/php-fpm.php
  - contrib/npm.php

config:
  application: 'blog-jollygood'
  repository: 'git@github.com:lorisleiva/blog-jollygood.git'
  php_fpm_version: '8.0'

hosts:
  prod:
    remote_user: ploi
    hostname: 'ploi.jollygood.app'
    deploy_path: '~/{{hostname}}'

tasks:
  deploy:
    - deploy:prepare
    - deploy:vendors
    - artisan:storage:link
    - artisan:view:cache
    - artisan:config:cache
    - artisan:migrate
    - npm:install
    - npm:run:prod
    - deploy:publish
    - php-fpm:reload
  npm:run:prod:
    - run: 'cd {{release_or_current_path}} && npm run prod'

after:
  deploy:failed: deploy:unlock
Okay, now we should be ready to deploy. But before we do, let's delete the folder generated by Ploi when we created our site.
Deployer will be generating a different folder structure with a releases folder and a current symlink. If we don't delete the existing folder, we'll end up with a strange fusion of Deployer and a traditional deployment.
Let's SSH into our server by running dep ssh, then go to the home directory ~ and run rm -rf ploi.jollygood.app (or whatever your domain is).
Whilst we're on our server, there's something extra we should install that is not provided by Ploi out-of-the-box. By default, Deployer uses the acl library to manage permissions, which has to be installed on the server. Thus, we need to run the following command on our server to install it. Make sure to provide the sudo password received by email when the server was created.
sudo apt install acl
Alright, now we're finally ready to deploy. Simply exit the server and run dep deploy. You should see the following familiar console output.
If you remember, the artisan:migrate task did not run because the .env file generated in Deployer's shared folder is empty. So let's fix this.
First, we'll copy the .env.example file and generate a random application key.
# SSH into your server.
dep ssh
# Prepare the .env file.
cp .env.example .env
php artisan key:generate
# Exit your server.
exit
Now, if you remember, in episode 4, we had to edit our .env file directly inside our server using vim.
We can still do that, but Ploi provides a nice interface for us to update our .env directly from their application. Simply go to the "Site > General" page and you should see an "Edit environment" button on the right.
Make sure to update your production variables appropriately and use the database password provided earlier in the email.
Now that our production environment is ready, let's deploy a second time to ensure our database is migrated. Simply run dep deploy and, with any luck, you should see the following output.
And that's it! You should now be able to see your application live if you visit its URL. 🥳
Okay, we've successfully deployed our application using Ploi and Deployer but there are still a couple of points I'd like to mention.
The first point is that, once your application is deployed, you'll likely want to update some environment variables from time to time. Since Ploi has a dedicated page to do so, it can be easy to forget that our configuration files are cached due to the artisan:config:cache task we added to our deployment flow.
That means, whenever you update your .env file, the changes won't be live until the next deployment.
That being said, if you want to regenerate the configuration cache without having to redeploy the application, you may do so by running php artisan config:cache on your server.
A nice touch from Ploi is that it allows you to run such commands directly from the UI. On your site's page, click on "Laravel" on the sidebar and you'll have access to many php artisan commands that you can run by clicking a button. You may even add your own commands inside that dashboard by clicking the "Custom commands" button.
In our case, all we need to do is click the config:cache button and our environment variables will be live.
My last point is about the "Deploy Script" available on the "Site > General" page.
If you've read the previous article on Laravel Forge, you've seen us work out a bit of magic to trigger a Deployer deployment directly from the Laravel Forge interface. Concretely, we ended up with a deploy script calling dep deploy.
Unfortunately, at this time, it is not possible to do that in Ploi since it runs more than our deploy script behind the scenes. If you remove all the lines from the deploy script, you should see the following error: fatal: not a git repository (or any of the parent directories): .git. This is because deployed releases don't have git initialised inside them. Instead, Deployer uses a cached repository inside the .dep folder.
That wasn't a problem for Laravel Forge since it simply executed what we told it to execute. However, Ploi runs extra commands behind the scenes that try to access git, which makes this approach impossible.
On the other hand, it is worth noting that, starting from a certain plan, Ploi supports its own zero-downtime deployment system out-of-the-box. So with Ploi, you could ditch Deployer altogether, click a button and have zero-downtime deployments configured.
That being said, you'll need to configure your entire deployment flow inside the deploy script. I prefer using Deployer since it allows us to create powerful deployment flows via reusable recipes and custom tasks written in PHP but, if you have a simple deployment flow, it might be worth considering.
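For comparison, a Ploi-only deploy script covering the same steps as our Deployer tasks might look something like the following sketch. The folder path, branch name and exact commands are assumptions; Ploi's generated default may differ:

```shell
# Sketch of a Ploi-only deploy script (no zero-downtime, no Deployer).
cd /home/ploi/ploi.jollygood.app   # assumption: your site's folder
git pull origin main               # assumption: your default branch
composer install --no-interaction --prefer-dist --optimize-autoloader
npm install && npm run prod
php artisan migrate --force
php artisan config:cache
```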
Alright, I hope this article was useful for Ploi users and also for those who are looking for a solution to help them create and maintain servers.
As usual, you can find the deploy.yaml file updated for this episode on GitHub by clicking on the link below.
As an alternative to Ploi, you might also want to consider Laravel Forge which I have talked about in the previous episode.
I have no personal preference between the two and so I'm actually a customer of both because I'm a very indecisive person. 😅 Hopefully, these two articles will help you decide which one suits you best.
In the next episode, I will provide a complete checklist of this entire series as a gift for my wholesome sponsors. This will be the perfect article to come back to when you're ready to get your hands dirty and want a quick list of things to do to deploy your Laravel app from scratch.