Is non-repudiation duplicated? - security

When we talk about security, we have the following requirements:
authentication
integrity
non-repudiation
Isn't the third requirement included in the first two? If we know A sent the message (authentication) and it has not been changed since A sent it (integrity) then how can A repudiate sending it?
Please don't talk about digital signatures, as that's at the technical level. I'm talking about the business requirements.

Neither authentication nor integrity protections prevent replay attacks. A malicious user can capture a signed and encrypted message and post it multiple times. Therefore a party can repudiate having sent the same message multiple times.
Making each message unique using timestamps and/or nonces addresses this and is therefore used for non-repudiation in combination with signing and encryption.
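To make the replay point concrete, here is a minimal sketch of a receiver that rejects stale or repeated messages. The field names, the shared key and the in-memory nonce store are all invented for the example; a real system would use a persistent store with expiry.

```python
import hashlib
import hmac
import json
import secrets
import time

SHARED_KEY = b"illustrative shared signing key"   # assumption: symmetric key
MAX_AGE_SECONDS = 300
seen_nonces = set()                                # in practice: a store with expiry

def sign(payload: dict) -> dict:
    """Attach a timestamp, a nonce and an HMAC to a message."""
    msg = dict(payload, ts=int(time.time()), nonce=secrets.token_hex(16))
    body = json.dumps(msg, sort_keys=True).encode()
    msg["mac"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return msg

def accept(msg: dict) -> bool:
    """Reject tampered, stale or replayed messages."""
    msg = dict(msg)
    mac = msg.pop("mac", "")
    body = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False                               # fails integrity/authenticity
    if abs(time.time() - msg["ts"]) > MAX_AGE_SECONDS:
        return False                               # too old: possible replay
    if msg["nonce"] in seen_nonces:
        return False                               # already processed once
    seen_nonces.add(msg["nonce"])
    return True
```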

Non-repudiation is different from integrity and authentication because it implies that the sender is accountable for the content of a message.
There are many systems that use a key for authentication and integrity where the authenticated content doesn't mean anything. For example, suppose that in order to authenticate you on my system, I send an unpredictable challenge and ask you to sign it and send it back. If the signature is valid, I trust that you know some secret and therefore are who you claim to be. I'd require the key you use for signing these challenges to be designated for digital signatures, but not necessarily for non-repudiation.
Now suppose instead of choosing a random challenge, I try to trick you by sending the challenge, "I will pay erickson one million dollars." If your system signs that, do I have a claim to a million dollars? The signed message is authentic and not tampered with, but unless you signed it with a key flagged for non-repudiation (for example, setting this flag in the key usage extension of an X.509 certificate), you can deny that you were aware of its content and reject my claim.
Non-repudiation makes sense for things like signatures on documents in a business transaction: cases where you are obligating yourself to some action or payment.
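As an aside on the X.509 detail mentioned above: with the pyca/cryptography package you can check whether a certificate's key usage extension actually asserts the nonRepudiation bit, which that API exposes as content_commitment. A minimal sketch, assuming a PEM-encoded certificate:

```python
from cryptography import x509

def asserts_non_repudiation(pem_bytes: bytes) -> bool:
    """True if the certificate's key usage includes the nonRepudiation bit."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        usage = cert.extensions.get_extension_for_class(x509.KeyUsage).value
    except x509.ExtensionNotFound:
        return False
    # X.509's nonRepudiation bit is exposed as content_commitment here.
    return usage.content_commitment

# A signature made with a key whose certificate lacks this bit still verifies,
# but the signer has a much stronger basis to deny having committed to the
# content of what was signed.
```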

With authentication and integrity, what you can achieve is message authenticity, i.e. the recipient can be confident that the sender's identity and the message content are genuine.
Non-repudiation, on the other hand, ensures that none of the involved parties can deny having sent or received the message. In the previous scheme:
While the recipient can prove that the sender has indeed sent the message,
The sender itself has no proof that the recipient actually received it.
Non-repudiation systems will therefore include some kind of acknowledgment in order to provide these proofs.

Usually the three security requirements are CIA, i.e.
Confidentiality
Integrity
Availability
But concerning non-repudiation: authentication and integrity don't necessarily provide it, since integrity only says that a message has not changed while traveling from point X to point Y.
Authentication can only tell you that a message was sent by somebody who has knowledge of some (shared) secret that should be known only to one person.
Imagine a virus stealing Alice's private key: in that case you have integrity of message X and authentication that the message is from Alice (although one can argue whether this is real authentication), yet in reality an attacker used the stolen private key to send the message.

Related

Why does Stripe not use public-key cryptography for signing webhooks?

From the docs:
Stripe can optionally sign the webhook events it sends to your endpoints by including a signature in each event’s Stripe-Signature header. This allows you to verify that the events were sent by Stripe, not by a third party. You can verify signatures either using our official libraries, or manually using your own solution.
Before you can verify signatures, you need to retrieve your endpoint’s secret from your Dashboard’s Webhooks settings. Select an endpoint that you want to obtain the secret for, then click the Click to reveal button.
The last paragraph suggests that the secret is truly something to be treated confidentially. Is there a reason why Stripe doesn't use a private-public key scheme for signing webhook events?
They could keep the private key in their DB, never displaying it on the UI. The UI would only show the public key. Every request made to a webhook by Stripe would be signed with the private key and verified at the receiving end with the public key. This way, malicious actors getting access to the public key would be irrelevant, as they could only use it for verification, whereas now — I assume — an accidentally revealed secret can be used for forgery.
I'm neither a Stripe employee nor a Stripe expert, but I think your question assumes an ideal world in which HMAC has only disadvantages compared to digital signatures from the point of view you're standing on. I suppose the main reason behind the decision is computational cost: an HMAC is much cheaper to compute and verify than a public-key signature. And for the sake of completeness, consider this problem along with the related factors: Stripe prevents replay attacks and publishes its IP addresses. Taken together, these measures provide very real protection against message forgery.
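To illustrate why the HMAC scheme is still robust in practice, here is a rough sketch of verifying an HMAC-SHA256 signature header with a timestamp tolerance. The t=...,v1=... layout mirrors Stripe's documented header format, but treat this as an illustration rather than a substitute for their official libraries:

```python
import hashlib
import hmac
import time

def verify_webhook(payload: bytes, sig_header: str, secret: str,
                   tolerance: int = 300) -> bool:
    """Check an HMAC-SHA256 webhook signature and reject old timestamps."""
    parts = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp, candidate = parts["t"], parts["v1"]
    signed_payload = timestamp.encode() + b"." + payload
    expected = hmac.new(secret.encode(), signed_payload,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, candidate):
        return False                       # forged or corrupted event
    # Replay protection: refuse events whose timestamp is too old.
    return abs(time.time() - int(timestamp)) <= tolerance
```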

Using a public/private key pair as proof of delivery

The problem
I'm working on a mobile application where user A should physically deliver something to user B, and user A MUST be able to prove that he delivered it.
There is a restriction:
User A or User B might be offline at the time of delivery, so the process cannot rely on an internet connection.
My approach
I thought about using cryptography to solve this problem:
When the delivery is scheduled, the following process occurs:
A key pair is generated and stored in the database.
The private key belongs to User B and should be transferred to his mobile app.
A well-known string + delivery_uuid is encrypted using the public key and transferred to User A.
User A is instructed to show the encrypted code (in the form of a QR code) only when the delivery occurs.
User B is instructed to scan the QR code with the mobile app at delivery time.
Since the encrypted message begins with a well-known string, User B's mobile app can decrypt it and verify that the message is OK. The application stores the delivery_uuid part if it is valid, and sends it to the server side to keep track as soon as the user gets internet access.
If User B tries to fake the delivery_uuid, it will obviously not match.
If User A tries to fake the QR code, User B's app will not be able to validate the message.
Concerns
Can the fact that a well-known piece is present in every encrypted message make it weaker, considering that the key pair is used just once?
The public key should NOT be visible to anyone; only the back-end must use it to create the delivery proof message. The same obviously applies to the delivery_uuid.
Sh*t happens. If User B's mobile app somehow crashes and loses the delivery_uuid before sending it to the back-end, User B will need to rely on User A's honesty.
How strong must my keys be, considering that the monetary value of the package is low? Is RSA the best choice of encryption in this case?
I know that this question is complex, but I'd really appreciate it if someone could help me with it.
Note: I'm not sure that Stack Overflow is the right Stack Exchange community to ask this on; please comment if it's off-topic. But since it has something to do with logic, I think this is the right place.
Seems a little complicated. Why not:
UserB (and every user who installs the app to receive deliveries) is issued a public/private key pair. The private key is held only by UserB; if it is lost, a new pair can be issued. Meanwhile, the public key is public, and is stored in a database along with UserB's identity.
Upon receipt of the delivery, UserB generates a simple text document containing the date and time, the QRCode, the name of the person receiving the package, or whatever information is needed. The document also contains the public key. Any format will do.
UserB signs the document with his private key and appends the signature to the end of the document. Now you have a cleartext document spelling out everything that happened, and proof that UserB agreed to it.
UserB shares the document with UserA, and/or uploads it anywhere that is needed, e.g. system of record. Both UserA and UserB can keep an offline copy.
If proof of delivery is ever needed, UserA just needs to produce the signed document.
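A minimal sketch of that idea with Ed25519 keys from the pyca/cryptography package; the document fields are invented for the example, and a real deployment would also pin down the exact serialization of the signed document:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issued once per user at install time; the public key goes to the database.
user_b_private = Ed25519PrivateKey.generate()
user_b_public = user_b_private.public_key()

# Upon receipt, UserB writes down what happened and signs it.
receipt = json.dumps({
    "delivery_id": "d-1234",                     # hypothetical fields
    "received_by": "UserB",
    "received_at": "2024-01-01T10:00:00Z",
}).encode()
signature = user_b_private.sign(receipt)

# Later, anyone holding UserB's public key can check the proof of delivery.
try:
    user_b_public.verify(signature, receipt)
    print("receipt genuinely signed by UserB")
except InvalidSignature:
    print("signature does not match the document")
```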
If I understand the problem correctly, there are three parties here:
The verifying party, which is your back-end.
The selling party, which is User A.
The buying party, which is User B.
Also, I don't buy the idea that "The public key should NOT be visible to anyone". Public keys are meant to be visible; that's why they are called public.
Now, in order to make sure that the item was delivered
by User A
to User B,
we can have the following setup.
The verifying party (i.e. the back-end) generates a token, associates it with the buyer, the seller and the item, and persists the info.
The verifying party encrypts the token with User A's (the seller's) public key.
The verifying party encrypts the already encrypted token with User B's (the buyer's) public key.
The verifying party sends the double-encrypted token along to User B. Since it has been encrypted with User B's public key, only User B can read it.
User B decrypts the double-encrypted token using his private key and saves the result on his device. The result is now encrypted with User A's public key, which means only User A can read it.
When User A comes to deliver the item, User B hands over the encrypted token as acknowledgement. This can be done via a QR code scan.
User A decrypts the encrypted token with his private key and keeps it on the device.
Whenever it is feasible (in terms of internet availability) to prove it to the back-end, User A encrypts the token with the back-end's public key and sends it along to the back-end.
The back-end decrypts the encrypted token with its private key, does a lookup in its persistence store, matches the buyer and the seller, and completes the verification.
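A sketch of the double-wrapped token using RSA-OAEP from the pyca/cryptography package. The buyer's key is made larger here purely so the inner ciphertext fits into a single RSA block; a real design would more likely wrap a symmetric key (hybrid encryption), and all identifiers are illustrative:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Key pairs for the seller (A) and the buyer (B); the backend keeps the token.
key_a = rsa.generate_private_key(public_exponent=65537, key_size=2048)
key_b = rsa.generate_private_key(public_exponent=65537, key_size=4096)
token = os.urandom(16)                      # persisted with buyer/seller/item

# Backend: wrap the token for A first, then for B, and send the result to B.
inner = key_a.public_key().encrypt(token, oaep)    # only A can open this
outer = key_b.public_key().encrypt(inner, oaep)    # only B can open this

# Buyer B peels off his layer and stores the result until the delivery.
handed_over = key_b.decrypt(outer, oaep)

# At delivery, B hands this over (QR code); A removes his layer and later
# shows the token to the backend, which matches it against its records.
recovered = key_a.decrypt(handed_over, oaep)
assert recovered == token
```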
Use signatures and not encryption.
0) The app, as distributed, contains the public half of a key pair whose private half only the server knows.
1) Every user that installs the app generates a key pair, keeps the private key, and uploads the public key to a database.
2) When a delivery is scheduled, the server generates a delivery ID and creates a message containing:
The Delivery ID
The deliverer's (user A's) public key fingerprint.
The recipient's (user B's) public key fingerprint.
The server signs this message with its private key. A will receive the message and signature prior to making the delivery. B can, but need not, receive it prior to delivery.
3) When A meets B, A can off-line transfer the message and signature to B. QR code would be difficult, depending on key size, but NFC would certainly work. B can verify the server signature and know that the message has not been falsified or tampered with. A can transfer his public key to B and B can verify its fingerprint via the signed message. A can prove he is who he says he is by creating a signature with the private key belonging to the public key that was just transferred and verified.
4) B can prove who he is by creating a signature with the private key corresponding to the public key whose fingerprint is in the delivery message. B would have to transfer his public key to A if A didn't already have it, and A can verify that it matches the fingerprint stored in the message.
5) B can certify receipt by creating a signature on a message saying so (however you want to format that) and off-line delivering it to A. A can then present this to the server, which the server can verify because it has B's public key.
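A sketch of the fingerprint checks in steps 2-4 with the pyca/cryptography package; the message layout and the SHA-256-over-DER fingerprint are my own choices for illustration:

```python
import hashlib
import json

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(public_key) -> str:
    """SHA-256 over the DER (SubjectPublicKeyInfo) encoding of a public key."""
    der = public_key.public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo)
    return hashlib.sha256(der).hexdigest()

# The server and both users hold Ed25519 key pairs; the app ships with the
# server's public key, and the users' public keys live in the database.
server_key = Ed25519PrivateKey.generate()
key_a = Ed25519PrivateKey.generate()
key_b = Ed25519PrivateKey.generate()

# Step 2: the server signs a delivery message naming both fingerprints.
message = json.dumps({
    "delivery_id": "d-1234",                       # hypothetical identifier
    "deliverer_fpr": fingerprint(key_a.public_key()),
    "recipient_fpr": fingerprint(key_b.public_key()),
}).encode()
server_sig = server_key.sign(message)

# Steps 3-4: B first checks the server signature (raises if forged), then
# checks that the public key A hands over matches the signed fingerprint.
server_key.public_key().verify(server_sig, message)
claimed = json.loads(message)
assert fingerprint(key_a.public_key()) == claimed["deliverer_fpr"]
```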
Let's assume that:
- A has a key pair.
- B holds a delivery UUID.
The goal is to enable B to provide a proof that only A can provide.
As in previous answers, this can be solved with a digital signature. You should let B and A exchange the necessary information: B should provide the delivery ID, and A must sign it.
You need to hold the public keys of the partners.
You need to provide a way for the information exchange; Bluetooth, perhaps?
I believe that you may need more than this simple protocol.
After reading the answers, I've formulated a solution to the problem. I'm posting it as an answer to get everyone's comments on its validity.
First, let's give names to the actors to simplify:
User A is the Seller
User B is the Buyer
The use case
The buyer decides to buy and makes the payment. Now he talks with the seller to arrange the delivery.
After the payment is confirmed, the back-end generates a UUID and associates it with that transaction_id. Let's call it delivery_uuid. By the business logic, only the buyer has access to it.
The buyer's app requests the delivery_uuid from the back-end, which produces a message containing both the delivery_uuid and the transaction_id and digitally signs it. Let's call this the delivery secret.
The buyer's app keeps this message in storage.
When the seller and buyer meet and the buyer checks that the product is OK, he hands over the delivery secret, using a QR code or NFC.
With the delivery secret in hand, the seller's app can check its authenticity (using the application's public key), then store it and send it to the back-end as proof of delivery as soon as the seller has internet access.
Now that the back-end has the proof of delivery, the payment to the seller is made (a sketch of this flow follows below).
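A rough sketch of that use case (back-end signing, seller-side verification), with invented identifiers and Ed25519 standing in for whatever signature scheme the back-end would actually use:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Back-end signing key; the matching public key ships inside the app.
backend_key = Ed25519PrivateKey.generate()
app_public_key = backend_key.public_key()

# Back-end: build and sign the delivery secret for this transaction.
payload = json.dumps({
    "transaction_id": "tx-42",                     # hypothetical values
    "delivery_uuid": "3f2c0d9e-0000-4000-8000-000000000000",
}).encode()
delivery_secret = {"payload": payload, "sig": backend_key.sign(payload)}

# Seller's app: verify authenticity before accepting it as proof of delivery.
try:
    app_public_key.verify(delivery_secret["sig"], delivery_secret["payload"])
    proof = json.loads(delivery_secret["payload"])  # queue for upload later
except InvalidSignature:
    proof = None                                    # reject the QR code
```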
Considerations
Buyer's signature on the message
I think it's not necessary, since by the back-end's design only the buyer has access to the delivery secret. If the delivery secret leaked, I'm certainly in bigger trouble.
Seller's signature on the message
Also not necessary; the only concern about the seller is that he needs to send the delivery secret to the back-end. If he loses his cellphone, that's not our problem. If the application crashes and the delivery secret is lost, then we have a problem.
Server signature
This signature allows the seller to make sure that the delivery secret:
Belongs to the right transaction, preventing the buyer from using an authentic delivery token from another transaction.
Has a valid delivery_uuid.
Is still valid (we may add an expiry to it).

How to secure an app built on phone-number registration?

I want to build a mobile app that requires the user's phone number to be filled in. After that, the phone number is sent to a server, and the server generates a random verification code that corresponds to this phone number. Then this verification code is sent via SMS to the user. Next, the user sends the verification code back to the server to prove that he/she has entered his/her real phone number and not anybody else's.
I was wondering: how do you really authenticate against the server if you only have a phone number and nothing else? I mean, in the typical scenario you have a username and a password that are checked on the server, and if both of them are correct you get access. But in the case of phone-number registration you have only a phone number, and if you authenticate with it alone, anyone who knows your number, or just guesses it, can pretend to be you.
If you send some sort of unique device ID, that means you won't be able to use your existing account anymore if, for example, you happen to replace your device with a new one.
So, how do you solve this issue?
The pattern is always: the client provides proof of something they have, and in return they receive an identifying token. In a typical username/password scenario, this means the user proves that they have a secret (username + password); in return they'll typically receive a session cookie. In your case, the user proves that they are in possession of a specific phone, and in return you give them a session token or other identifying token. The client holds on to this token and uses it to identify themselves to the server.
You're relying on the principles of the telephone system to make sure that's a uniquely identifying characteristic. You're basing your security on the assumption that only one person can receive messages for a specific phone number at any one time, and that you need to be in physical possession of the phone at the time of login to complete the proof. Of course you require this proof every time the user logs in. You do not let them register once with an SMS loop and afterwards just ask them for their number and let them through.
If a user wishes to log in, they must prove they're in physical possession of the phone in question using the SMS loop; then they'll receive a token. Period. That's the way it goes. No other way. The client (app) must hold on to the token for as long as it wishes to stay logged in. Obviously, you probably want this to last for quite a while and not require the user to do SMS confirmations all the time.
This obviously brings us to the topic of token theft, which can be a real issue. The token must be kept secret, since it essentially allows proof-less authentication. You may want to think about signing that token using some unique identifier specific to the device it's for, or encrypting it while it's stored on the device or other measures to make sure it can't be nicked from the device while it is stored on it.
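A bare-bones sketch of that loop: a short-lived random code sent over SMS, then an HMAC-signed token that binds the phone number to a device identifier. Storage and SMS delivery are stubbed out, and every name here is invented for the example.

```python
import hashlib
import hmac
import secrets
import time

SERVER_KEY = b"server-side token-signing key"     # never leaves the server
pending = {}                                      # phone -> (code, expiry)

def start_login(phone: str) -> None:
    code = f"{secrets.randbelow(10**6):06d}"      # 6-digit one-time code
    pending[phone] = (code, time.time() + 300)
    # send_sms(phone, code)  <- SMS delivery itself is out of scope here

def finish_login(phone: str, code: str, device_id: str):
    stored = pending.pop(phone, None)
    if not stored or time.time() > stored[1] or not hmac.compare_digest(stored[0], code):
        return None                               # wrong or expired code
    # Token = claims + MAC, so the server can later detect forged tokens.
    claims = f"{phone}|{device_id}|{int(time.time())}"
    mac = hmac.new(SERVER_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return f"{claims}|{mac}"

def verify_token(token: str) -> bool:
    claims, _, mac = token.rpartition("|")
    expected = hmac.new(SERVER_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)
```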
As deceze points out, the best way to ensure the communication is safe is to use a temporary token signed with the device ID. If the user logs out, changes device or reinstalls the app, then they must go through the SMS verification process again to ensure the SIM card is still in their possession. Keep in mind that the SIM card and the device have different UDIDs. To simplify this you can use the RingCaptcha SDK in your app to generate the token, verify the user possesses the SIM card, and store that token or ID temporarily. Use the phone number as an identifier - similar to a username - and the temporary PIN code as the password. That pair plus the token will give you enough confidence that the device and SIM card are joined.

Exchange Web Services: identifying spoofing

Let's assume that I use a telnet session and send an email with the address alice@domain.com to bob@domain.com, but in fact I am charly@domain.com...
On alice@domain.com I have a WCF web service running that's monitoring that specific mailbox using Exchange Web Services...
How can I tell that the message from bob@domain.com actually came from charly@domain.com?
I am using Visual Studio 2010, with .NET 4.0 and the EWS Managed API 1.1.
The server is configured to use SSL and I have Exchange Server 2007 SP1.
I tried the two properties "Sender" and "From", but they are identical and both point to bob...
Nothing in the message header actually points to charly... everything points to bob. Any ideas? Things that I might have overlooked?
If you want to make sure that identity spoofing is not possible using an email service, you can use cryptographic signatures. PGP / GPG and S/MIME are common technologies in use to implement this.
This requires every mail sent from alice@domain.com to be signed by her with a secret. The key or certificate she uses to do this must be trusted by your webservice. Your webservice can verify that a mail has really been sent by Alice by checking the validity of the signature. Only someone who possesses Alice's secret can create such a valid signature. If the signature is wrong or missing, your webservice can trigger an alert.
Should the real Alice forget to sign an email, your service will trigger as well, because it cannot tell whether it really was Alice who sent that mail. You also need to make sure that the secret in use can only be accessed by Alice. If you need further information, you should read up on public key cryptography.
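For example, with the python-gnupg wrapper the webservice could verify a clearsigned mail body and check which key produced the signature. This sketch assumes Alice's public key was imported into the service's keyring beforehand; the keyring path and fingerprint are placeholders.

```python
import gnupg                       # python-gnupg wrapper around the gpg binary

gpg = gnupg.GPG(gnupghome="/var/lib/mailcheck/gnupg")   # hypothetical keyring
ALICE_FINGERPRINT = "0123456789ABCDEF0123456789ABCDEF01234567"  # placeholder

def mail_is_from_alice(clearsigned_body: str) -> bool:
    verified = gpg.verify(clearsigned_body)
    # valid == the signature checks out against a key in our keyring;
    # the fingerprint comparison ties it to Alice's key specifically.
    return verified.valid and verified.fingerprint == ALICE_FINGERPRINT
```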
I don't think you can detect such practices with the EWS Managed API; at least I don't see anything that could be helpful in this situation. Unless the valid sender is recognized by means of a cryptographic signature, or you can somehow mark messages from valid senders with your own extended property that only you (your software) create and use, you have to believe that the mail was sent by whoever shows up in the Sender or From property.

How to verify mail origin?

I wish to code a little service where I will be able to send an e-mail to a specific address monitored by my server in order to send specific commands to that server.
I'll check against a list of permitted e-mail addresses to make sure no unauthorized person can send a command to the server, but how do I make sure that, say, an e-mail sent by "mrzombie@thezombie.net" really comes from "thezombie.net"?
I thought about checking the header for the original e-mail server's IP and pinging the domain to make sure it is the same, but would that be reliable?
Example:
Server receives a command from mrzombie@thezombie.net
mrzombie@thezombie.net is authorized, proceed with checks
Server checks "thezombie.net"'s IP from the header: W.X.Y.Z
Server pings "thezombie.net" for its IP: A.B.C.D
The IPs do not correspond, do not process command
Is there any better way to do that?
If you could solve this for generic e-mails, you would have solved the problem of spam.
Given that mail headers are free-form text in which anyone can claim anything, you can't do any sort of authorization or authentication based on them. Your best bet is to authenticate the content, and there are protocols for that, like S/MIME or PGP. They rely on cryptography for authentication, and your server will be able to verify that the content is signed by a certificate you trust. But you'll move the burden onto the mail sender, who will have to send a properly signed message. Most mail readers, though, support adding digital signatures to content.
but how do I make sure that, say, an e-mail sent by "mrzombie@thezombie.net" really comes from "thezombie.net"?
You may also want to look at the Sender Policy Framework (SPF), as it is at least in part trying to provide a means of verifying that email was sent from authorized domain servers.
Also, serverfault.com may have some helpful answers for you since it is a networking- and server-related question.
You can use SPF to verify that a given IP is/is not authorized to send email on behalf of a particular domain (assuming that domain implements SPF, of course), but that only gets you so far. For example, it may not prevent another user at the source domain from impersonating the authorized user.
Authenticating the content with a digital signature is really the best way to go.
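As a small illustration of the SPF route, here is a sketch that just fetches a domain's published SPF policy with dnspython; actually evaluating that policy against the connecting IP (includes, redirects, macros) is what a dedicated library such as pyspf does, so treat this as the lookup step only:

```python
import dns.resolver                # dnspython

def spf_record(domain: str):
    """Return the domain's published SPF policy, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        text = b"".join(record.strings).decode()
        if text.startswith("v=spf1"):
            return text            # e.g. "v=spf1 mx ip4:192.0.2.0/24 -all"
    return None

# Evaluating that policy against the sending IP (includes, redirects, macros)
# is non-trivial; a dedicated library such as pyspf handles that part.
```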
