Prof. Palam asked us to submit the assignment (answers prepared after the deadline are not considered) onto a server/drive to which the whole student group has access (read and write, but not delete). State precisely (step by step) the protocol for submission and verification. Note that Prof. Palam has no individual shared key with the students, for scalability reasons. Justify your protocol.
Objective: We are building a dual-flow system on NEAR. The flow is something like:
Client -> Escrow Wallet -----true---> Beneficiary
Client -> Escrow Wallet -----false--> Client
I was just wondering if there is a standard procedure for this, because hard-coding a wallet address to act as the escrow wallet does not sound very reasonable or safe. So please let me know what a better way to do this might be.
You can implement your desired custodian/escrow behavior in a smart contract's logic. I'm not sure what you mean by hard-coding an account, but the escrow contract's logic would remain unchanged once you have deployed it to the network; as such, you can rely on it as much as you can rely on the network for your application's logic.
There are many ways.
One is creating a wallet (account) for the specific escrow service and then deleting it at the end of the transaction:
function one - Bob asks for an escrow service; the contract is deployed and the tokens are sent in the same transaction
function two - Alice sends her part; the contract sends the tokens from Bob to Alice and from Alice to Bob, closes the contract, and sends the remaining funds to the master contract
The second, and easier, way (sketched below):
function one - send 5 NEAR from Bob to the smart contract address
function two - accept from Alice AND, in the same function, send the other part
Of course, this needs more logic and information than your question provides. The task itself may be something that can't be represented on the blockchain, so it will always require honesty from one party.
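A language-agnostic sketch of that second flow: real NEAR contracts are written in Rust (near-sdk-rs) or JavaScript (near-sdk-js), so this Python model only illustrates the escrow state logic, and every name in it is made up.

```python
# Toy model of a two-function escrow; `transfer` stands in for the chain's
# native token transfer, which on NEAR would happen inside one transaction.

def transfer(to: str, amount: int) -> None:
    print(f"transfer {amount} -> {to}")

class Escrow:
    def __init__(self):
        self.deposits = {}  # offer_id -> (depositor, amount, counterparty)

    def deposit(self, offer_id, sender, amount, counterparty):
        """Function one: Bob locks his tokens in the contract for Alice."""
        assert offer_id not in self.deposits, "offer already exists"
        self.deposits[offer_id] = (sender, amount, counterparty)

    def accept(self, offer_id, sender, amount):
        """Function two: Alice sends her part; both legs settle together."""
        depositor, locked, counterparty = self.deposits.pop(offer_id)
        assert sender == counterparty, "only the intended counterparty may accept"
        transfer(to=depositor, amount=amount)  # Alice's part goes to Bob
        transfer(to=sender, amount=locked)     # Bob's locked part goes to Alice

    def cancel(self, offer_id, sender):
        """The 'false' branch from the question: refund the client."""
        depositor, locked, _ = self.deposits.pop(offer_id)
        assert sender == depositor, "only the depositor may cancel"
        transfer(to=depositor, amount=locked)
```

Because both transfers execute inside one function (one transaction on chain), neither side can walk away holding both amounts.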
Assume the Joker is a maximally sophisticated, well-equipped, and malicious user of Batman's start-up batmanrules.com, hosted by, say, AWS infrastructure. The business logic of batmanrules.com requires that unregistered users be able to send HTTP requests to the REST API layer of batmanrules.com, which leads to the invocation (in one way or another) of queries against an AWS-based DB. Batman doesn't want to be constrained by DB type (it can be either SQL or NoSQL).
The Joker wants to ruin Batman financially by sending as many HTTP requests as he can in order to run up Batman's AWS bill. The Joker uses all the latest tricks in the book, using DDoS-like methods to send HTTP requests from different IP addresses that target all sorts of mechanisms within batmanrules.com's business logic.
Main Question: how does Batman prevent financial ruin while keeping his service running smoothly for his normal users?
Assuming a lot of traffic is going on, how can you weed out the 'malicious' queries from the non-malicious, especially when users aren't registered? I know you can do rate limiting against IP addresses, but can't the Joker (who is maximally sophisticated and well-equipped) find clever ways to issue requests from ever-changing IP addresses, and/or tweak the requests so that no two are exactly the same?
Note: my question focuses not on denial of service -- let's assume it's OK if the site goes down for a while -- but rather on Batman's financial loss. Batman has done a great job of making the architecture scale up and down with varying load; his only concern is that high loads (induced by the Joker's shenanigans) entail high cost.
My instinct tells me that there is no silver bullet here, and that Batman would have to build safeguards into his business logic (e.g., shut down if traffic spikes within certain parameters) AND/OR require reCAPTCHA tokens on all non-trivial requests submitted to the REST API.
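For illustration, the "shut down or alert if spending spikes" safeguard can be expressed as a CloudWatch billing alarm. This is a hedged sketch: it assumes boto3 and billing alerts enabled on the account (the EstimatedCharges metric lives in us-east-1), and the threshold, alarm name, and SNS topic ARN are all made-up values.

```python
# Sketch: alarm once estimated AWS charges cross a threshold, so a human (or
# an automated action subscribed to the SNS topic) can react before ruin.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="batmanrules-estimated-charges",  # hypothetical name
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                  # billing metrics update only a few times a day
    EvaluationPeriods=1,
    Threshold=500.0,               # illustrative: alert above $500
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:cost-alerts"],  # hypothetical ARN
)
```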
You can use AWS WAF and configure rules to block malicious users.
For example, a straightforward rule would be rate-based blocking: if you can determine that it is highly unlikely to receive more than X concurrent requests from the same IP address, block anything above that rate.
For advanced use cases you can implement custom rules by analyzing the request logs with Lambda and applying the block in WAF.
In addition, as you correctly identified, it is not possible to prevent all malicious requests. The goal should be to inspect and block, which is an ongoing process, with the right architecture in place to block requests as needed.
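As a concrete illustration of the rate-based rule, here is a sketch using the AWS WAFv2 API via boto3. The scope, names, and the 2000-requests-per-5-minutes limit are assumptions for the example, not values from this answer.

```python
# Sketch: a WebACL that allows traffic by default but blocks any single IP
# exceeding 2000 requests in WAF's 5-minute rate window.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="batmanrules-acl",                 # hypothetical name
    Scope="REGIONAL",                       # use "CLOUDFRONT" for a distribution
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {
            "RateBasedStatement": {
                "Limit": 2000,              # requests per 5 minutes per IP
                "AggregateKeyType": "IP",
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "batmanrulesAcl",
    },
)
```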
I am trying to understand why verification of endorsed transactions is positioned at the application layer instead of inside the Hyperledger Fabric 1.0 ledger network.
Let's assume three possible scenarios:
a) Using oracles to request information needed to perform a function, where the address of the oracle is embedded in a transaction attribute.
b) Execution of different actions depending on the origin of the transaction (i.e. through the unmarshalled peer or sender identity)
c) The original smart contract code is tampered with through an injection of malicious binary code into the dev-* container
If, let's say, a genuine network participant with malicious intent wants to inject some garbage into the ledger and has access to the application source code, he or she can tweak this SDK function in order to force proposed transactions with dissimilar results to be sent straight to the orderers. If I understand correctly, the network will not detect such misconduct.
Please correct me if I am wrong, and tell me whether this issue can somehow be mitigated at the network layer.
The application layer is the one that fulfills the endorsement policy: since the application is what invokes the chaincode, to make a transaction valid the application has to literally invoke the chaincode against all parties involved in, or related to, the given transaction.
That being said, it becomes kind of obvious that since the application has to invoke the chaincode and collect the endorsements in any case, it makes sense for the application layer to verify the endorsement results and make sure they are correct before submitting to the ordering service.
However, if the client skips that check or tries to tamper with the endorsement results, it will, first of all, not be able to produce the required signatures over the tampered data. Moreover, there is the VSCC (Validation System Chaincode), which validates each transaction to ensure the endorsement policy is satisfied, and otherwise rejects/invalidates it.
I'd say doing the verification on the application side is more of a best practice and an optimization: it spares validation cycles on transactions that are known to be inconsistent as soon as the application receives all the endorsement results.
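To make the client-side check concrete, here is a sketch of comparing endorsement results before submission. The data shapes are simplified: a real Fabric SDK exposes the proposal response payloads (read/write sets) on its ProposalResponse objects, while here they are plain bytes.

```python
# Sketch: a transaction is worth submitting only if every endorsing peer
# simulated it to the same result (identical response payloads).
import hashlib

def endorsements_consistent(payloads) -> bool:
    """payloads: iterable of raw proposal-response payload bytes, one per endorser."""
    digests = {hashlib.sha256(p).hexdigest() for p in payloads}
    return len(digests) == 1

# Usage: discard inconsistent transactions instead of wasting a validation
# cycle; VSCC would invalidate them at commit time anyway.
assert endorsements_consistent([b"rwset-v1", b"rwset-v1", b"rwset-v1"])
assert not endorsements_consistent([b"rwset-v1", b"rwset-v2"])
```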
I have a public computer that is used in an ATM sort of fashion. When a certain action occurs (a person inserts money), the program I've written on the computer sends a request to a trusted server, which performs a very critical task (transferring money).
Since I have to communicate with a server to start the critical task, the credentials for communicating with it are stored on this public computer. How do I prevent hackers from obtaining this information and running the critical task with their own parameters?
HSMs (Hardware Security Modules) are designed to store keys safely:
A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server.
HSMs may possess controls that provide tamper evidence such as logging and alerting and tamper resistance such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing.
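To illustrate how an HSM helps here: keep only a signing key on the device side and have the server verify signatures and apply its own policy. This sketch assumes Python's `cryptography` package; in a real deployment the private key would live inside the HSM (accessed through an interface such as PKCS#11), and the in-memory key here merely stands in for it.

```python
# Sketch: the public computer can ask the HSM to sign requests, but can never
# extract the key, so stolen software alone cannot impersonate the device
# from elsewhere. Server-side limits must still bound what a signed request
# can do, since an attacker at the machine can still request signatures.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

hsm_key = Ed25519PrivateKey.generate()       # stand-in for the key inside the HSM
server_known_pubkey = hsm_key.public_key()   # registered with the trusted server

# Device side: sign the request via the HSM.
request = b"transfer:amount=20;terminal=ATM-7"   # illustrative request format
signature = hsm_key.sign(request)

# Server side: verify the origin before acting; raises InvalidSignature on forgery.
server_known_pubkey.verify(signature, request)
```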
Impossible in general
If your user has access to this PC, they can easily insert fake money. Your model is doomed.
Minimize attack surface
This PC ought to have a unique token (a permanent cookie is enough), and the server will refuse any request without a valid token. The server maintains a database of device types, and this ATM-PC is only allowed certain operations (deposit money up to NNN units). Ideally it is also rate-limited (at most once per 3 seconds).
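A minimal sketch of those server-side checks, assuming Python; the token, limits, and in-memory storage are all illustrative stand-ins for a real database and configuration.

```python
# Sketch: per-device token check, operation allowlist, amount cap, and a
# simple 3-second rate limit, all enforced on the server.
import time

DEVICES = {
    "atm-7-token": {"allowed": {"deposit"}, "max_amount": 100},  # hypothetical
}
_last_request = {}

def handle_request(token: str, operation: str, amount: int) -> str:
    device = DEVICES.get(token)
    if device is None:
        return "rejected: unknown device token"
    if operation not in device["allowed"]:
        return "rejected: operation not permitted for this device type"
    if amount > device["max_amount"]:
        return "rejected: amount above device limit"
    now = time.monotonic()
    if now - _last_request.get(token, float("-inf")) < 3.0:
        return "rejected: rate limited"
    _last_request[token] = now
    return "accepted"

# Usage
assert handle_request("atm-7-token", "deposit", 20) == "accepted"
assert handle_request("atm-7-token", "deposit", 20) == "rejected: rate limited"
```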
What mechanisms already exist for designing a P2P architecture in which the different nodes do work separately, in order to split a task (say, distributed rendering of a 3D image), but, unlike torrents, they don't get to see or hijack the contents of the packets being transmitted? Only the original task requester is entitled to view the results of the complete task.
Are there any working implementations of that principle already?
EDIT: Maybe I formulated the question poorly. The idea is that even when they are able to work on the contents of the separate packets being sent, the individual nodes never get the chance to assemble the whole picture. Only the one requesting the task is supposed to be able to do that.
If you have direct P2P connections (no "promiscuous" or "multicasting" sort of mode), the receiving peers should only "see" the data sent to them, nothing else.
If you have relay servers on the way and you are worried that they can sniff the data, I believe encryption is the way to go.
What we do is have peer A transmit data to peer B in an S/MIME envelope: the content is signed with the private key of peer A and encrypted with the public key of peer B.
Only peer B can decrypt the data and is guaranteed that peer A actually sent the data.
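A minimal sketch of that sign-then-encrypt idea, assuming Python's `cryptography` package. Real S/MIME adds certificate handling and envelope structure, and a real system would use hybrid encryption for large chunks; here the payload is small enough for a single RSA-OAEP block.

```python
# Sketch: peer A signs a work chunk with its private key and encrypts it with
# peer B's public key; only B can read it, and B can prove A sent it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key_a = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # peer A
key_b = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # peer B

chunk = b"partial render job #42"   # illustrative payload

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Peer A: sign with A's private key, encrypt with B's public key.
signature = key_a.sign(chunk, pss, hashes.SHA256())
ciphertext = key_b.public_key().encrypt(chunk, oaep)

# Peer B: decrypt with B's private key, verify with A's public key.
plaintext = key_b.decrypt(ciphertext, oaep)
key_a.public_key().verify(signature, plaintext, pss, hashes.SHA256())  # raises if forged
assert plaintext == chunk
```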
This whole process is costly, both CPU-wise and byte-wise, and may not be appropriate for your application. It also requires some form of key-management infrastructure: peers need to subscribe to a community which issues certificates, for instance.
But the basic idea is there: asymmetric encryption with a key pair, or shared-secret encryption.