How does the block protection system work in Proof-of-Stake?

Let's say we have a block that contains a signature.
Example:
Block {
    Id,
    Transactions,
    PreviousHash,
    Hash,
    Signature,
    Validator
}
The block was signed by the validator. What prevents an attacker from taking a block, changing the coinbase transaction, recalculating the hash, and signing the block with his private key? How do I check that the block is really valid?
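To make the question concrete, here is a minimal Rust sketch of the naive check implied by the block layout above; recompute_hash and verify_signature are hypothetical placeholders, not any particular library. It shows exactly the gap being asked about: both checks also pass for a tampered, re-signed block, so full validation has to additionally confirm, from the chain's consensus rules and validator set, that the listed validator was entitled to produce this block.

// A minimal sketch, not any particular chain's rules.
struct Block {
    id: u64,
    transactions: Vec<Vec<u8>>,
    previous_hash: [u8; 32],
    hash: [u8; 32],
    signature: Vec<u8>,
    validator: Vec<u8>, // public key recorded in the block
}

fn recompute_hash(_block: &Block) -> [u8; 32] {
    unimplemented!("hash over id, transactions and previous_hash")
}

fn verify_signature(_public_key: &[u8], _message: &[u8; 32], _signature: &[u8]) -> bool {
    unimplemented!("e.g. an Ed25519 verify")
}

fn naive_validate(block: &Block) -> bool {
    // Both checks also pass for the attacker's re-signed block: the hash matches the
    // tampered contents, and the signature verifies against whatever key the attacker
    // put in `validator`. A full check must additionally confirm, from consensus state,
    // that this validator was the one allowed to produce this block.
    recompute_hash(block) == block.hash
        && verify_signature(&block.validator, &block.hash, &block.signature)
}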

Related

Solana Anchor Rust: How to convert a public key into an AccountInfo type

I know I can give my Solana Rust program a user's token account via a Context struct as shown in Anchor tutorial 2: https://project-serum.github.io/anchor/tutorials/tutorial-2.html#defining-a-program
#[derive(Accounts)]
pub struct Stake<'info> {
    pub user_reward_token_account: CpiAccount<'info, TokenAccount>,
    ...
}
But what if I want users to save that token account in a user storage account first, and then let my Solana program get those token accounts from that storage account?
let user_acct = &ctx.accounts.user_acct;
Then when trying to mint some reward tokens to the user's token account:
let cpi_accounts = MintTo {
    mint: ctx.accounts.reward_mint.to_account_info(),
    to: user_acct.reward_user,
    authority: ctx.accounts.pg_signer.clone()
};
I got a compilation error: expected struct anchor_lang::prelude::AccountInfo, found struct anchor_lang::prelude::Pubkey
But the to_account_info() method is not found in anchor_lang::prelude::Pubkey either.
I checked the Pubkey doc: https://docs.rs/anchor-lang/0.13.2/anchor_lang/prelude/struct.Pubkey.html
But it does not say anything about AccountInfo ...
Then I tried to make an AccountInfo struct from the reward_user address with the help of https://docs.rs/anchor-lang/0.13.2/anchor_lang/prelude/struct.AccountInfo.html:
let to_addr = AccountInfo {
    key: &user_acct.reward_user,
    is_signer: false,
    is_writable: true,
    lamports: Rc<RefCell<&'a mut u64>>,
    data: Rc<RefCell<&'a mut [u8]>>,
    owner: &user_pda.user_acct,
    executable: false,
    rent_epoch: u64,
};
But this is really hard, and I do not know what the lamports, data, and rent_epoch values should be...
So how can I convert a public key into an AccountInfo type?
You will need to pass the accounts through the context in order to be able to access their data. This design allows Solana to parallelize transactions better by knowing which accounts and data are required before runtime.
So for Sealevel to parallelise batches of instructions, you need to provide an account in the list of accounts used by the program, even if that account doesn't exist yet. While an account is being created it needs some lamports transferred to it to become rent exempt. Depositing lamports modifies the account's state, so it needs to be marked as writable.
I had a problem related to a potential, but unlikely, concurrency issue where the PDA seed would include the state of a different account (a counter), and afterwards the counter would be incremented. This is Solana's way of doing arrays that aren't capped by size constraints and can be indexed or looped over. I wanted each call to assemble a new account address on chain, rather than relying on two clients not reading the counter state at the same time, with the slower one having its transaction rejected. This is impossible.
What this means in practice is that the client always has to derive the PDA and pass it to the program, even if the program itself will do the same derivation again and then submit the transaction to the system program.
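Concretely, the answer amounts to declaring the user's token account as another field of the instruction's context, so Anchor hands the program something that has to_account_info(), and the program only compares it against the Pubkey saved in the storage account instead of rebuilding an AccountInfo by hand. A rough sketch against the Anchor version used in the question; UserAccount and the extra fields are assumptions for illustration:

use anchor_lang::prelude::*;
use anchor_spl::token::{MintTo, TokenAccount};

// Hypothetical storage account holding the saved token-account address.
#[account]
pub struct UserAccount {
    pub reward_user: Pubkey,
}

#[derive(Accounts)]
pub struct Stake<'info> {
    pub user_acct: ProgramAccount<'info, UserAccount>,
    #[account(mut)]
    pub user_reward_token_account: CpiAccount<'info, TokenAccount>,
    #[account(mut)]
    pub reward_mint: AccountInfo<'info>,
    pub pg_signer: AccountInfo<'info>,
}

pub fn stake(ctx: Context<Stake>) -> ProgramResult {
    // The client passes the token account in; the program just checks that it matches
    // the Pubkey saved in the storage account.
    let to_acct = ctx.accounts.user_reward_token_account.to_account_info();
    if *to_acct.key != ctx.accounts.user_acct.reward_user {
        panic!("unexpected reward token account"); // return a proper error in real code
    }
    let cpi_accounts = MintTo {
        mint: ctx.accounts.reward_mint.clone(),
        to: to_acct,
        authority: ctx.accounts.pg_signer.clone(),
    };
    // ...wrap cpi_accounts in a CpiContext and call token::mint_to as usual
    Ok(())
}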

Pre-generating a "queue" of heavy data to get it immediately in actix-web endpoint

I use actix-web and want to generate pairs of (password, password hash).
It takes some time (0.5s).
Instead of generating each pair on demand:
pub async fn signup(data: web::Data<AppData>) -> impl Responder {
    // Generate password
    let password = data.password_generator.generate_one().unwrap();
    let password_hash = password::hash_encoded(password.as_bytes()); // 0.5s
    // ...
}
I want to always have 10-20 of them pre-generated so I can just grab an already existing pair, and then generate a new one in the background.
How can I do it using actix-web?
I'm thinking of some kind of refilling "queue", but I don't know how to implement it and use it correctly across multiple actix threads.
You can just use a regular thread (as in std::thread::spawn): while actix probably has some sort of blocking facility which executes blocking functions off the scheduler, those are normally intended for blocking tasks which ultimately terminate. Here you want something which just lives forever, so a stdlib thread is exactly what you want.
Then set up a buffered, blocking channel between the two, either an mpsc or a crossbeam mpmc (the latter is more convenient because you can clone the endpoints). With the right buffering, the producer thread just loops around producing entries, and once the channel is "full" the producer blocks on sending the extra entry (the 21st or whatever).
Once a consumer fetches an entry from the channel, the producer will be unblocked, add the new entry, generate a new one, and wait until it can enqueue that.
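A minimal sketch of that setup, assuming the crossbeam-channel crate; generate_pair stands in for the question's 0.5 s password + hash step, and the names are illustrative:

use crossbeam_channel::{bounded, Receiver};
use std::thread;

type Pair = (String, String); // (password, password hash)

// Stand-in for the expensive generation step from the question (~0.5 s).
fn generate_pair() -> Pair {
    ("password".to_string(), "hash".to_string())
}

// Spawn a plain OS thread that keeps the bounded channel topped up.
fn start_generator(capacity: usize) -> Receiver<Pair> {
    let (tx, rx) = bounded::<Pair>(capacity);
    thread::spawn(move || loop {
        // Blocks once `capacity` pairs are queued; unblocks as handlers take pairs out.
        if tx.send(generate_pair()).is_err() {
            break; // all receivers dropped, stop producing
        }
    });
    rx
}

// In main, store the (cloneable) receiver in app data:
//   let pairs = start_generator(20);
//   App::new().app_data(web::Data::new(pairs.clone()))
// and in the handler take one: let (password, password_hash) = data.recv().unwrap();
// recv() returns immediately while the queue is non-empty; if it can run dry under load,
// call it via web::block (or use try_recv) so the async executor isn't stalled.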

JDBC LockRegistry across JVMs

Is my application service, which obtains a lock using the JDBC LockRepository, supposed to run inside an @Transactional method?
We have a sample application service that updates a JDBCRepository, and since this application can run on multiple JVMs (headless), we needed a global lock to serialize those updates.
I looked at your test and was hoping my use case would work too. ... JdbcLockRegistryDifferentClientTests
My config has a DefaultLockRepository and JdbcLockRegistry;
I launched my application (java -jar boot.jar) in two terminals to simulate this. When I obtain a lock and issue a tryLock() without @Transactional on my application service, both of them get the lock (albeit one after the other) almost immediately. I expected one of them to NOT get it for at least 10 seconds (the default expiry).
Service (Instance-1) {
    Obtain("KEY-1")
    tryLock()
    DoWork()
    unlock();
    close();
}
Service (Instance-2) {
    Obtain("KEY-1")
    tryLock() <-- wait until the lock expires or the unlock happens
    DoWork()
    unlock();
    close();
}
I also noticed in DefaultLockRepository that the transaction scope (if not inherited) is only around the JDBC operation.
When I change my service to
@Transactional
Service (Instance-1) {
    Obtain("KEY-1")
    tryLock()
    DoWork()
    unlock();
    close();
}
It works as expected.
I am quite sure I missed something, but I expect my lock operation to honor global locks (the fact that a lock exists in a JDBC store with an expiration) until an unlock or expiration.
Is my understanding incorrect?
This works as designed. I didn't configure the DefaultLockRepository correctly, and the default TTL was shorter than my service's (artificial wait) lock duration. My apologies. :) Josh Long helped me figure this out :)
You have to use different client ids; the same id means the same client, which is for a special use-case. Use different client ids since they are different instances.
The behavior here is subtle (or obvious once you see how it works) and the general lack of documentation is unhelpful, so here's my experience.
I created a lock table by looking at the SQL in DefaultLockRepository, which appeared to imply a composite primary key of REGION, LOCK_KEY and CLIENT_ID - THIS WAS WRONG.
I subsequently found the SQL script in the spring-integration-jdbc JAR, where I could see that the composite primary key MUST BE on just REGION and LOCK_KEY, as @ArtemBilan says.
The reason is that the lock doesn't care about the client, obviously, so the primary key must be just the REGION and LOCK_KEY columns. These columns are used when acquiring a lock, and it is the key violation that occurs when another client attempts to insert the same lock row that keeps other client IDs out.
This also implies that, again as @ArtemBilan says, each client instance must have a unique ID, which is the default behavior when no ID is specified at construction time.

Safe to use unsafeIOToSTM to read from database?

In this pseudocode block:
atomically $ do
  if valueInLocalStorage key
    then readValueFromLocalStorage key
    else do
      value <- unsafeIOToSTM $ fetchValueFromDatabase key
      writeValueToLocalStorage key value
Is it safe to use unsafeIOToSTM? The docs say:
The STM implementation will often run transactions multiple times, so you need to be prepared for this if your IO has any side effects.
Basically, if a transaction fails, it is because some other thread wrote the value to local storage, and when the transaction is retried it will return the stored value instead of fetching from the database again.
The STM implementation will abort transactions that are known to be invalid and need to be restarted. This may happen in the middle of unsafeIOToSTM, so make sure you don't acquire any resources that need releasing (exception handlers are ignored when aborting the transaction). That includes doing any IO using Handles, for example. Getting this wrong will probably lead to random deadlocks.
This worries me the most. Logically, if fetchValueFromDatabase doesn't open a new connection (i.e. an existing connection is used) everything should be fine. Are there other pitfalls I am missing?
The transaction may have seen an inconsistent view of memory when the IO runs. Invariants that you expect to be true throughout your program may not be true inside a transaction, due to the way transactions are implemented. Normally this wouldn't be visible to the programmer, but using unsafeIOToSTM can expose it.
key is a single value, no invariants to break.
I would suggest that doing I/O from an STM transaction is just a bad idea.
Presumably what you want is to avoid two threads doing the DB lookup at the same time. What I would do is this:
See if the item is already in the cache. If it is, we're done.
If it isn't, mark it with an "I'm fetching this" flag, commit the STM transaction, go get it from the DB, and do a second STM transaction to insert it into the cache (and remove the flag).
If the item is already flagged, retry the transaction. This blocks the calling thread until the first thread inserts the value from the DB.
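For what it's worth, here is the same flag-based idea sketched in Rust (the question is about Haskell STM, so this mutex-and-condvar version is only meant to make the three steps concrete; every name in it is illustrative):

use std::collections::HashMap;
use std::sync::{Condvar, Mutex};

enum Entry {
    Fetching,      // another thread is already querying the database
    Ready(String), // the cached value
}

struct Cache {
    map: Mutex<HashMap<String, Entry>>,
    cond: Condvar,
}

impl Cache {
    fn get_or_fetch(&self, key: &str, fetch_from_db: impl Fn() -> String) -> String {
        let mut map = self.map.lock().unwrap();
        loop {
            match map.get(key) {
                // Step 1: already cached, done.
                Some(Entry::Ready(value)) => return value.clone(),
                // Step 3: someone else is fetching -- block until they finish
                // (the analogue of `retry` waking up when the flag is removed).
                Some(Entry::Fetching) => {
                    map = self.cond.wait(map).unwrap();
                }
                // Step 2: mark the key, drop the lock, do the I/O *outside* it,
                // then take the lock again to store the result and clear the flag.
                None => {
                    map.insert(key.to_string(), Entry::Fetching);
                    drop(map);
                    let value = fetch_from_db();
                    let mut map = self.map.lock().unwrap();
                    map.insert(key.to_string(), Entry::Ready(value.clone()));
                    self.cond.notify_all();
                    return value;
                }
            }
        }
    }
}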

How to model bank transfer in CQRS

I'm reading the Accounting Pattern and am quite curious about implementing it in CQRS.
I think AccountingTransaction is an aggregate root as it protects the invariant:
No money leaks: money should be transferred from one account to another.
public class AccountingTransaction {
    private String sequence;
    private AccountId from;
    private AccountId to;
    private MonetaryAmount quantity;
    private DateTime whenCharged;

    public AccountingTransaction(...) {
        raise(new AccountingEntryBookedEvent(sequence, from, quantity.negate(), ...));
        raise(new AccountingEntryBookedEvent(sequence, to, quantity, ...));
    }
}
When the AccountingTransaction is added to its repository, it publishes several AccountingEntryBookedEvents, which are used to update the balances of the corresponding accounts on the query side.
One aggregate root updated per db transaction, eventual consistency, so far so good.
But what if some accounts apply transfer constraints, such as not being able to transfer more than the current balance? I can use the query side to get the account's balance, but I'm worried that the data from the query side is stale.
public class TransferApplication {
    public void transfer(...) {
        AccountReadModel from = accountQuery.findBy(fromId);
        AccountReadModel to = accountQuery.findBy(toId);
        if (from.balance() > quantity) {
            // create txn
        }
    }
}
Should I model the account on the command side? Then I have to update at least three aggregate roots per db transaction (the from/to accounts and the account txn).
public class TransferApplication {
    public void transfer(...) {
        Account from = accountRepository.findBy(fromId);
        Account to = accountRepository.findBy(toId);
        Transaction txn = new Transaction(from, to, quantity);
        // unit of work locks and updates all three aggregates
    }
}
public class AccountingTransaction {
    public AccountingTransaction(...) {
        if (from.permit(quantity)) {
            from.debit(quantity);
            to.credit(quantity);
            raise(new TransactionCreatedEvent(sequence, from, to, quantity, ...));
        }
    }
}
There are some use cases that will not allow for eventual consistency. CQRS is fine but the data may need to be 100% consistent. CQRS does not imply/require eventual consistency.
However, the transactional/domain model store will be consistent and the balance will be consistent in that store as it represents the current state. In this case the transaction should fail anyway, irrespective of an inconsistent query side. This will be a somewhat weird user experience though so a 100% consistent approach may be better.
I remember bits of this; however, M. Fowler uses a different meaning of 'event' compared to a domain event. He uses the 'wrong' term, as we can recognize a command in his 'event' definition. So basically he is speaking about commands, while a domain event is something that has happened and can never change.
It is possible that I didn't fully understand what Fowler was referring to, but I would model things differently, more precisely as close to the Domain as possible. We can't simply extract a pattern that can always be applied to any financial app; the minor details may change a concept's meaning.
In the OP's example, I'd say that we can have a non-explicit 'transaction': we need one account debited with an amount and another credited with the same amount. The easiest way, methinks, is to implement it via a saga.
Debit_Account_A -> Account_A_Debited -> Credit_Account_B -> Account_B_Credited = transaction completed.
This should happen in a few ms (seconds at most), which would be enough to update a read model. Humans and browsers are slower than a few seconds, and a user knows to hit F5 or to wait a few minutes/hours. I wouldn't worry much about read model accuracy.
If the transaction is explicit, i.e. the Domain has a Transaction notion and the business really stores transactions, that's a whole different story. But even in that case, the Transaction would probably be defined by a number of account ids, some amounts, and maybe a completed flag. However, at this point it is pointless to continue, because it really depends on the Domain's definition and use cases.
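To make the implicit-saga flow above (Debit_Account_A -> Account_A_Debited -> Credit_Account_B -> Account_B_Credited) a bit more concrete, here is a rough Rust sketch of such a process manager; every type and name is invented for illustration and not tied to any framework or to the OP's code:

#[derive(Clone, Copy)]
struct AccountId(u64);

enum Command {
    DebitAccount { account: AccountId, amount: u64 },
    CreditAccount { account: AccountId, amount: u64 },
}

enum Event {
    AccountDebited { account: AccountId, amount: u64 },
    AccountCredited { account: AccountId, amount: u64 },
}

// The saga/process manager: each handled event produces the next command.
struct TransferSaga {
    from: AccountId,
    to: AccountId,
    amount: u64,
    completed: bool,
}

impl TransferSaga {
    fn start(&self) -> Command {
        Command::DebitAccount { account: self.from, amount: self.amount }
    }

    fn handle(&mut self, event: Event) -> Option<Command> {
        match event {
            Event::AccountDebited { .. } => {
                // The Account aggregate enforced the balance invariant before emitting this,
                // so the saga simply moves on to the credit step.
                Some(Command::CreditAccount { account: self.to, amount: self.amount })
            }
            Event::AccountCredited { .. } => {
                // Transfer finished; the read model catches up shortly afterwards.
                self.completed = true;
                None
            }
        }
    }
}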
Finally, my solution is to have Transaction as a domain model.
I project the transactions to AccountBalance, but I implement a special projection which makes sure the data is consistent before publishing the actual event.
Just two words: "Event Sourcing" with the Reservation Pattern.
And maybe, but not always, you may need the "Sagas" pattern also.
