I'm trying to write a simple Solana Program using Rust/Anchor which uses a PDA.
Here is the Program code:
use anchor_lang::prelude::*;

declare_id!("51v31qHaEQniLoYuvvtXByZcfiyvog3R2EKC39EPD52p");

#[program]
pub mod solana_sandbox {
    use super::*;

    pub fn initialize(ctx: Context<Initialize>, bump: u8) -> ProgramResult {
        ctx.accounts.sandbox_account.bump = bump;
        Ok(())
    }
}

#[derive(Accounts)]
#[instruction(bump: u8)]
pub struct Initialize<'info> {
    #[account(mut)]
    pub signer: Signer<'info>,
    #[account(
        init,
        seeds = [b"seed".as_ref()],
        bump,
        payer = signer,
    )]
    pub sandbox_account: Account<'info, SandboxAccount>,
    pub system_program: Program<'info, System>,
}

#[account]
#[derive(Default)]
pub struct SandboxAccount {
    pub bump: u8,
}
Here is the client code:
const [sandboxPda, sandboxBump] = await PublicKey.findProgramAddress(
  [Buffer.from('seed')],
  this.program.programId,
);

await program.rpc.initialize(sandboxBump, {
  accounts: {
    signer: keypair.publicKey,
    sandboxAccount: sandboxPda,
    systemProgram: anchor.web3.SystemProgram.programId,
  },
  signers: [keypair],
  instructions: [],
});
It works correctly, but I have a doubt: I get sandboxBump from findProgramAddress and pass it in, but it isn't used.
If I set the bump explicitly during init, like this:
#[account(
    init,
    seeds = [b"seed".as_ref()],
    bump = bump,
    payer = signer,
)]
an error occurs. So I deleted the #[instruction(bump: u8)] macro from the program code, and it still works correctly.
So is the bump value not needed when initializing a PDA, or does Anchor use it automatically?
Appreciate any help!
For enhanced security, Solana makes sure that program addresses do not lie on the ed25519 curve. Since the address is not on the curve, there is no associated private key and hence no risk of anyone signing for it.
To generate a program address, Solana uses a 256-bit pre-image resistant hash function with a collection of seeds and a program id as input. Since the output can't be predicted in advance or controlled in any manner, there is roughly a 50% chance that a newly generated address lies on the ed25519 curve. On-curve addresses are prohibited, so in that case Solana uses a different set of seeds or a seed bump to find a valid address (one off the curve).
In short, the bump is only used when the provided input doesn't generate a valid program address.
Program addresses are deterministically derived from a collection of
seeds and a program id using a 256-bit pre-image resistant hash
function. Program address must not lie on the ed25519 curve to ensure
there is no associated private key. During generation, an error will
be returned if the address is found to lie on the curve. There is
about a 50/50 chance of this happening for a given collection of seeds
and program id. If this occurs a different set of seeds or a seed bump
(additional 8 bit seed) can be used to find a valid program address
off the curve.
references:
https://docs.solana.com/developing/programming-model/calling-between-programs#hash-based-generated-program-addresses
https://en.wikipedia.org/wiki/EdDSA
PDAs are used to sign transactions in Solana. When you sign a transaction you need a secret key to prove that you own the associated address. But it is not ideal for a program to hold a private key, because the program lives on-chain and everybody can look on-chain; storing a private key there would let everyone else steal it. That is why we need PDAs: with a PDA, the program can prove that it is allowed to sign the transaction. When do you need a PDA? Say you have a treasury of tokens and you want to send some tokens from that treasury to another address. The program can approve this by signing with the seeds and bump seed (together with its program id), telling the system program to transfer the funds.
We are actually looking for a public key that cannot be derived from a secret key.
PDAs ensure that no external user can generate a valid signature for the same address.
Our PDA must not be on the elliptic curve. If a derived address lands on the curve, we use a bump seed: derivation starts with bump = 255 and simply iterates down through bump = 254, bump = 253, etc. (up to 256 attempts) until it finds an address that is not on the curve.
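The bump search described above can be sketched in plain Rust. This is an illustration only: the real solana_sdk Pubkey::find_program_address uses SHA-256 and a genuine ed25519 curve check, whereas here a stdlib hasher and an arbitrary "even addresses are on the curve" predicate stand in for both.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the real SHA-256 derivation (for illustration only).
fn derive_address(seeds: &[&[u8]], bump: u8, program_id: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for seed in seeds {
        seed.hash(&mut hasher);
    }
    bump.hash(&mut hasher);
    program_id.hash(&mut hasher);
    hasher.finish()
}

// Placeholder for the real ed25519 on-curve check: here we pretend that
// "even" addresses are on the curve, so roughly half get rejected.
fn is_on_curve(address: u64) -> bool {
    address % 2 == 0
}

// Mirrors find_program_address: start at bump = 255 and walk down until
// the derived address is off the curve.
fn find_program_address(seeds: &[&[u8]], program_id: &[u8]) -> Option<(u64, u8)> {
    for bump in (0..=255u8).rev() {
        let address = derive_address(seeds, bump, program_id);
        if !is_on_curve(address) {
            return Some((address, bump));
        }
    }
    None
}

fn main() {
    let (address, bump) = find_program_address(&[b"seed"], b"program").unwrap();
    println!("address = {}, bump = {}", address, bump);
    assert!(!is_on_curve(address));
}
```

The client then passes the found bump back in, which is why Anchor can re-derive and verify the same address on-chain from the seeds alone.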
While trying to implement the NEP-141 fungible token standard, I am using a trait:
impl FungibleTokenCore for FungibleToken {
    fn ft_transfer(&mut self, receiver_id: ValidAccountId, amount: U128, memo: Option<String>) {
        assert_one_yocto();
        let sender_id = env::predecessor_account_id();
        let amount: Balance = amount.into();
        self.internal_transfer(&sender_id, receiver_id.as_ref(), amount, memo);
    }
}
But the problem is that the function ft_transfer is inaccessible from the contract. It gives the error:
"Contract method is not found".
export TOKEN=dev-1618119753426-1904392
near call $TOKEN ft_transfer '{"receiver_id":"avrit.testnet", "amount": 10, "memo":""}' --accountId=amiyatulu.testnet
Your method must be public. See the near-sdk-rs docs README for a few examples.
https://github.com/near/near-sdk-rs
You probably need a pub before that fn. See Best Practices.
Also see FT example. You can use the near-contract-standards library to simplify your efforts.
As you mentioned in a comment, the problem is that you need to:
Use #[near_bindgen] for the impl definition:
#[near_bindgen]
impl FungibleTokenCore for FungibleToken { ... }
Use a public method:
pub fn ft_transfer(&mut self, ...)
There is already an implementation of this token in near-contract-standards
I am unable to add custom logic to the token using the library, so I am not using it.
Regarding how to extend the behaviour of the token, I suggest you take a look at how the Rainbow Bridge does it for ERC20 tokens bridged to NEAR: BridgeToken. We also needed to extend its functionality, and to this end we used the Token as an internal field, then changed the exposed public functions a bit.
There is also a useful macro to derive base implementation for common functions.
Like burning some tokens during transfer.
For this, you can follow the previous approach without using the macro to expose all the functions, and instead implement ft_transfer properly for your use case, while still making calls to the inner field token: FungibleToken.
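Stripped of the near-sdk specifics, the wrap-and-delegate approach can be sketched in plain Rust. InnerToken, MyContract, and the 1% burn are made-up stand-ins for illustration; in a real contract the inner field would be near_contract_standards' FungibleToken.

```rust
use std::collections::HashMap;

// Hypothetical inner token carrying the base transfer logic.
struct InnerToken {
    balances: HashMap<String, u128>,
    total_supply: u128,
}

impl InnerToken {
    fn internal_transfer(&mut self, from: &str, to: &str, amount: u128) {
        *self.balances.get_mut(from).expect("unknown sender") -= amount;
        *self.balances.entry(to.to_string()).or_insert(0) += amount;
    }
}

// The outer contract holds the token as a field and re-exposes transfer
// with custom logic layered on top (here: burning 1% of every transfer).
struct MyContract {
    token: InnerToken,
}

impl MyContract {
    fn ft_transfer(&mut self, from: &str, to: &str, amount: u128) {
        let burn = amount / 100; // custom logic: burn 1% of the amount
        *self.token.balances.get_mut(from).expect("unknown sender") -= burn;
        self.token.total_supply -= burn;
        // Delegate the remainder to the inner token's base logic.
        self.token.internal_transfer(from, to, amount - burn);
    }
}

fn main() {
    let mut contract = MyContract {
        token: InnerToken {
            balances: [("alice".to_string(), 1_000u128)].into_iter().collect(),
            total_supply: 1_000,
        },
    };
    contract.ft_transfer("alice", "bob", 100);
    assert_eq!(contract.token.balances["alice"], 900);
    assert_eq!(contract.token.balances["bob"], 99);
    assert_eq!(contract.token.total_supply, 999);
}
```

The point is that only the functions you re-implement change; everything else can keep delegating to the inner field.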
I have started out learning Rust and am currently trying to write a small neural network as a personal exercise. I want to define a struct for my forthcoming Layers/Clusters/Groups of nodes. My initial definition looks like this:
struct Layer {
    name: String,    // Human readable name
    id: String,      // UUID in the future
    order: u8,       // int for sorting
    width: u8,       // Number of nodes
    input: [&'Self], // References to other Layers that feed input into this
}
The thing I am struggling with is the input field, which should contain a list of references to other Layer instances. I will know at compile time how many each Layer will have in the list, so it won't have to be mutable. Is it possible to do this? I can't find a solution on the Google machine or in "the book".
Please advise.
Is it possible to do this? I can't find a solution on the Google machine or in "the book".
Possible yes, though I would not recommend it.
Let's start with the possible: &Self would be a "layer reference" with an unnamed lifetime. A lifetime name is of the form '<symbol>, so when you write &'Self you're specifying a reference with lifetime 'Self, but you're never specifying the type being referred to, which is why rustc complains about "expected type".
If you add a "proper" lifetime name, and parametrize the structure, it compiles fine:
struct Layer<'sublayers> {
    name: String,              // Human readable name
    id: String,                // UUID in the future
    order: u8,                 // int for sorting
    width: u8,                 // Number of nodes
    input: [&'sublayers Self], // References to other Layers that feed input into this
}
However I would not recommend it: the last member being a slice means it's a DST, and DSTs are difficult to work with at the best of times -- as the nomicon specifically notes, "custom DSTs are a largely half-baked feature for now".
Since Rust doesn't yet have const generics proper, you can't use an array you'd parameterize through Layer either (e.g. Layer<const Size> and input: [&Self; Size], maybe one day), so you probably want something like a vector or a slice reference, e.g.
struct Layer<'slice, 'sublayers: 'slice> {
    name: String,                       // Human readable name
    id: String,                         // UUID in the future
    order: u8,                          // int for sorting
    width: u8,                          // Number of nodes
    input: &'slice [&'sublayers Self],  // References to other Layers that feed input into this
}
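For completeness, here is a runnable sketch of the slice-reference version, collapsing the two lifetimes into one and dropping the extra fields for brevity:

```rust
struct Layer<'a> {
    name: String,               // Human readable name
    width: u8,                  // Number of nodes
    input: &'a [&'a Layer<'a>], // References to the layers feeding into this one
}

fn main() {
    // Two input layers with no inputs of their own:
    let a = Layer { name: "input_a".to_string(), width: 4, input: &[] };
    let b = Layer { name: "input_b".to_string(), width: 4, input: &[] };

    // The hidden layer borrows both; the slice itself lives on the stack.
    let refs = [&a, &b];
    let hidden = Layer { name: "hidden".to_string(), width: 8, input: &refs };

    assert_eq!(hidden.input.len(), 2);
    assert_eq!(hidden.input[1].name, "input_b");
}
```

Note that every referenced layer must outlive the layer that borrows it, which works fine for a network wired up once in a single scope but gets awkward if layers are created and dropped dynamically (at which point Rc or an arena is the usual escape hatch).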
I use generators as long-lived asynchronous threads (see How to implement a lightweight long-lived thread based on a generator or asynchronous function in Rust?) in a user interaction scenario. I need to pass user input into the generator at each step. I think I can do it with a RefCell, but it is not clear how to pass a reference to the RefCell into the generator when creating its instance.
fn user_scenario() -> impl Generator<Yield = String, Return = String> {
    || {
        yield format!("what is your name?");
        yield format!("{}, how are you feeling?", "anon");
        return format!("{}, bye !", "anon");
    }
}
The UserData structure contains the user input; the UserSession structure contains the user session, consisting of the UserData and the generator instance. Sessions are collected in a HashMap.
struct UserData {
    sid: String,
    msg_in: String,
    msg_out: String,
}

struct UserSession {
    udata_cell: RefCell<UserData>,
    scenario: Pin<Box<dyn Generator<Yield = String, Return = String>>>,
}

type UserSessions = HashMap<String, UserSession>;
let mut sessions: UserSessions = HashMap::new();
UserData is created at the time user input is received; at that moment I need to pass a reference to the UserData, wrapped in a RefCell, into the generator, but I don't know how to do it, since the generator has a 'static lifetime and the RefCell lives for less!
let mut udata: UserData = read_udata(&mut stream);
let session: &mut UserSession;

if udata.sid == "" { // new session
    let sid = rnd.gen::<u64>().to_string();
    udata.sid = sid.clone();
    sessions.insert(
        sid.clone(),
        UserSession {
            udata_cell: RefCell::new(udata),
            scenario: Box::pin(user_scenario()),
        },
    );
    session = sessions.get_mut(&sid).unwrap();
}
The full code is here, but the generator here does not see user input.
Disclaimer: resumption arguments are a planned extension for generators, so at some point in the future it will be possible to resume the generator with a &UserData argument.
For now, I will recommend sharing ownership. The cost is fairly minor (one memory allocation, one indirection) and will save you a lot of troubles:
struct UserSession {
    user_data: Rc<RefCell<UserData>>,
    scenario: ..,
}
Which is built with:
let user_data = Rc::new(RefCell::new(udata));

UserSession {
    user_data: user_data.clone(),
    scenario: Box::pin(user_scenario(user_data)),
}
Then, both the session and the generator have access to the UserData each on their turn, and everything is fine.
There is one little wrinkle: be careful of scopes. If you keep a .borrow() alive across a yield point, which is possible, then you will have a run-time error when trying to write to it outside the generator.
A more involved solution would be using a queue of messages; which would also involve memory allocation, etc... I would consider your UserData structure to be a degenerate form of a pair of queues: it's two queues with capacity for one message. You could make it more explicit with a regular queue, but that would not buy you much.
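Since generators were nightly-only, the shared-ownership idea can be demonstrated on stable Rust with a closure standing in for the generator; UserData is reduced to the one field that matters here:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct UserData {
    msg_in: String,
}

fn main() {
    let user_data = Rc::new(RefCell::new(UserData { msg_in: String::new() }));

    // The closure (standing in for the generator) holds its own Rc handle
    // and reads the latest user input each time it is resumed.
    let handle = Rc::clone(&user_data);
    let scenario = move || -> String {
        let name = handle.borrow().msg_in.clone(); // borrow ends here, before "yielding"
        format!("{}, how are you feeling?", name)
    };

    // The session writes input through its own handle...
    user_data.borrow_mut().msg_in = "anon".to_string();
    // ...and the scenario sees it on the next resume.
    assert_eq!(scenario(), "anon, how are you feeling?");
}
```

The key detail is the one flagged in the answer: the borrow is dropped before the yield point, so the session side can later take a mutable borrow without a run-time panic.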
I want to write a function to be called like this:
send("message","address");
Where some other thread that is doing
let k = recv("address");
println!("{}",k);
sees message.
In particular, the message may be large, and so I'd like "move" or "zero-copy" semantics for sending the message.
In C, the solution is something like:
1. Allocate messages on the heap
2. Have a global, threadsafe hashmap that maps "address" to some memory location
3. Write pointers into the memory location on send, and wake up the receiver using a semaphore
4. Read pointers out of the memory location on receive, and wait on a semaphore to process new messages
But according to another SO question, step #2 "sounds like a bad idea". So I'd like to see a more Rust-idiomatic way to approach this problem.
You get these sorts of move semantics automatically, and can achieve light-weight moves by placing large values into a Box (i.e. allocating them on the heap). Using type ConcurrentHashMap<K, V> = Mutex<HashMap<K, V>>; as the threadsafe hashmap (there are various ways this could be improved), one might have:
use std::collections::{HashMap, VecDeque};
use std::sync::Mutex;

type ConcurrentHashMap<K, V> = Mutex<HashMap<K, V>>;

lazy_static! {
    pub static ref MAP: ConcurrentHashMap<String, VecDeque<String>> =
        Mutex::new(HashMap::new());
}

fn send(message: String, address: String) {
    MAP.lock()
        .unwrap()
        // find the queue this message goes to, creating it if the address was empty
        .entry(address)
        .or_insert_with(VecDeque::new)
        // add the message on the back
        .push_back(message)
}

fn recv(address: &str) -> Option<String> {
    MAP.lock()
        .unwrap()
        .get_mut(address)
        // pull the message off the front
        .and_then(|buf| buf.pop_front())
}

That code uses the lazy_static! macro to achieve a global hashmap (it may be better to use a local object that wraps an Arc<ConcurrentHashMap<...>>, fwiw, since global state can make reasoning about program behaviour hard). It also uses VecDeque as a queue, so that messages bank up for a given address. If you only wish to support one message at a time, the type could be ConcurrentHashMap<String, String>, send could become MAP.lock().unwrap().insert(address, message) and recv just MAP.lock().unwrap().remove(address).
(NB. I haven't compiled this, so the types may not match up precisely.)
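For reference, here is a dependency-free sketch of the same design on modern Rust, using std::sync::OnceLock (stable since 1.70) in place of lazy_static!:

```rust
use std::collections::{HashMap, VecDeque};
use std::sync::{Mutex, OnceLock};

// One global, lazily initialised, mutex-protected map of address -> queue.
fn map() -> &'static Mutex<HashMap<String, VecDeque<String>>> {
    static MAP: OnceLock<Mutex<HashMap<String, VecDeque<String>>>> = OnceLock::new();
    MAP.get_or_init(|| Mutex::new(HashMap::new()))
}

// Moving `message` into the map only copies the String's pointer/len/capacity:
// the heap contents are never duplicated, giving the desired "zero-copy" send.
fn send(message: String, address: &str) {
    map().lock()
        .unwrap()
        .entry(address.to_string())
        .or_default()
        .push_back(message);
}

fn recv(address: &str) -> Option<String> {
    map().lock().unwrap().get_mut(address).and_then(|q| q.pop_front())
}

fn main() {
    send("message".to_string(), "address");
    assert_eq!(recv("address").as_deref(), Some("message"));
    assert_eq!(recv("address"), None); // queue is now empty
}
```

Note this recv polls rather than blocks; for the wake-the-receiver behaviour of the C sketch you would pair the map with a Condvar, or simply use std::sync::mpsc channels per address.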
Let's say I want to write a little client for an HTTP API. It has a resource that returns a list of cars:
GET /cars
It also accepts the two optional query parameters color and manufacturer, so I could query specific cars like:
GET /cars?color=black
GET /cars?manufacturer=BMW
GET /cars?color=green&manufacturer=VW
How would I expose these resources properly in Rust? Since Rust doesn't support overloading, defining multiple functions seems to be the usual approach, like:
fn get_cars() -> Cars
fn get_cars_by_color(color: Color) -> Cars
fn get_cars_by_manufacturer(manufacturer: Manufacturer) -> Cars
fn get_cars_by_manufacturer_and_color(manufacturer: Manufacturer, color: Color) -> Cars
But this will obviously not scale when you have more than a few parameters.
Another way would be to use a struct:
struct Parameters {
    color: Option<Color>,
    manufacturer: Option<Manufacturer>,
}
fn get_cars(params: Parameters) -> Cars
This has the same scaling issue: every struct field must be set on creation (even if its value is just None).
I guess I could just accept a HashMap<String, String>, but that doesn't sound very good either.
So my question is, what is the proper/best way to do this in Rust?
You could use the Builder pattern, as mentioned here. For your particular API, it could look like this:
Cars::new_get()
.by_color("black")
.by_manufacturer("BMW")
.exec();
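A runnable sketch of what such a builder could look like. CarsQuery and the query-string-returning exec are made up here for illustration; a real client would issue the HTTP request inside exec:

```rust
// A minimal builder for the /cars query. Unset parameters simply stay None,
// so callers only mention the filters they care about.
#[derive(Default)]
struct CarsQuery {
    color: Option<String>,
    manufacturer: Option<String>,
}

impl CarsQuery {
    fn new_get() -> Self {
        Self::default()
    }

    fn by_color(mut self, color: &str) -> Self {
        self.color = Some(color.to_string());
        self
    }

    fn by_manufacturer(mut self, manufacturer: &str) -> Self {
        self.manufacturer = Some(manufacturer.to_string());
        self
    }

    // Stands in for the answer's .exec(): render the URL instead of sending it.
    fn exec(self) -> String {
        let mut params: Vec<String> = Vec::new();
        if let Some(color) = self.color {
            params.push(format!("color={}", color));
        }
        if let Some(m) = self.manufacturer {
            params.push(format!("manufacturer={}", m));
        }
        if params.is_empty() {
            "/cars".to_string()
        } else {
            format!("/cars?{}", params.join("&"))
        }
    }
}

fn main() {
    let url = CarsQuery::new_get().by_color("black").by_manufacturer("BMW").exec();
    assert_eq!(url, "/cars?color=black&manufacturer=BMW");
    assert_eq!(CarsQuery::new_get().exec(), "/cars");
}
```

Adding a new optional parameter is then one field plus one method, and no existing call site has to change.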
I would like to point out that no matter the approach, if you wish for a compile-time-checked solution, the "url parsing -> compile-time-checkable" translation is necessarily hard-wired. You can generate it with an external script, with macros, etc., but in any case, for the compiler to check it, it must exist at compile time. There just is no shortcut.
Therefore, no matter which API you go for, at some point you will have something akin to:
fn parse_url(url: &str) -> Parameters {
    let mut p = Parameters { color: None, manufacturer: None };
    if let Some(manufacturer) = extract("manufacturer", url) {
        p.manufacturer = Some(Manufacturer::new(manufacturer));
    }
    if let Some(color) = extract("color", url) {
        p.color = Some(Color::new(color));
    }
    p
}
And although you can try and sugarcoat it, the fundamentals won't change.