SAM PSO (Perform Security Operation): CDS (Compute Digital Signature) 6982 error

I'm trying to compute an RSASSA-PSS digital signature with SHA-256 for my IdentityIdentificationData (ASN.1).
Directory file address: 0x3D00
Application ID: A000000061123A22738F4421
Private key file: 0x2F01
The SHA-256 digest of my ASN.1-encoded data (hex):
860c30a5f2b254ee92cbd3ec5c4282a940853aaef5f36d50ca20050637aaf4b0
I'm sending these commands after the SAM PIN has been verified:
MSE:SET
002241B606800191840110
SW1SW2:9000
Select File
00A40800043D002F0100
SW1SW2:9000
PSO: Compute Digital Signature
002A9E9A20860c30a5f2b254ee92cbd3ec5c4282a940853aaef5f36d50ca20050637aaf4b000
SW1SW2:6982
I'm fairly new to smart cards. How can I solve this problem? What is wrong or missing?
My SAM does not want an algorithm identifier for RSASSA-PSS.

6982 means: security condition not satisfied.
You should probably send the VERIFY PIN command directly before PSO: COMPUTE DIGITAL SIGNATURE. Signature generation generally has very strict requirements with regard to the PIN, because the user has to give consent for each and every signature. Hence the PIN state may be invalidated by each command, especially if that command is an MSE:SET command. Selecting a DF by name may also influence the security environment.
So try the following order:
SELECT by Name (AID)
MSE:SET (for digital signature)
VERIFY PIN
PSO:COMPUTE DIGITAL SIGNATURE
The signature may also depend on other security-related objects, such as an authentication key, for instance one used to set up secure messaging.
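A minimal sketch of that order in Python with pyscard (the SELECT-by-AID APDU is built from the question's AID, and the VERIFY PIN APDU and PIN value are placeholders; the PIN reference and format depend on your SAM's specification):

from smartcard.System import readers
from smartcard.util import toBytes

conn = readers()[0].createConnection()
conn.connect()

def send(apdu_hex):
    # Transmit one APDU and print the status word.
    data, sw1, sw2 = conn.transmit(toBytes(apdu_hex))
    print("SW1SW2: %02X%02X" % (sw1, sw2))
    return data

send("00A4040C0C" + "A000000061123A22738F4421")  # 1. SELECT by Name (AID)
send("002241B606800191840110")                   # 2. MSE:SET for digital signature
send("002000000431323334")                       # 3. VERIFY PIN (placeholder PIN "1234")
send("002A9E9A20"                                # 4. PSO: COMPUTE DIGITAL SIGNATURE
     "860c30a5f2b254ee92cbd3ec5c4282a940853aaef5f36d50ca20050637aaf4b0"
     "00")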

Can you check the access condition of the RSA_Sign key? If the access condition is NEVER, then you won't be able to sign with this key, and in that case SW 6982 makes sense.

The MSE:SET command 002241b606800191840181 worked for me (key reference 0x81 in the tag 84 data object, instead of 0x10).

Related

Can I calculate data integrity check for RSA keys using the key itself?

I'm developing a solution which stores in a DB, for each customer, an RSA key which will be used to sign payments.
Those keys are so-called SIM keys created via an SKS HSM; long story short, we don't actually store the key in our DB, only an encrypted blob that only the HSM can use. This way no one except the HSM knows what the keys are.
In order to prevent an inside attacker with access to the DB from switching keys among users, it was decided to calculate, for each key, an integrity check and store it together with the key.
The solution is to compute an HMAC of customerId + key; this way it is impossible to switch keys among users without breaking the integrity check.
The key used to calculate the HMAC is dedicated to this use case and is stored in the HSM. However, this point is the one I would like to change, and it is the reason for this question.
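Concretely, the check looks something like this (a minimal sketch; the names and key handling are illustrative, and in the real system the HMAC runs inside the HSM):

import hashlib
import hmac

def integrity_tag(hmac_key: bytes, customer_id: str, key_blob: bytes) -> bytes:
    # Bind the (encrypted) customer key blob to its owner.
    return hmac.new(hmac_key, customer_id.encode() + key_blob, hashlib.sha256).digest()

def check(hmac_key: bytes, customer_id: str, key_blob: bytes, tag: bytes) -> bool:
    # Constant-time comparison; a swapped key or customerId breaks the tag.
    return hmac.compare_digest(integrity_tag(hmac_key, customer_id, key_blob), tag)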
Technically speaking, it would be possible to calculate the integrity check using the RSA key itself, by encrypting customerId + key with the public part of the key.
However, our CTOs are blocking this solution because, they say, the same key should never be used for both signing and encrypting.
In my opinion, the guideline to have separate keys for signing and encrypting doesn't apply to this case. The guideline is sound, but only if we actually exposed an API that performs encryption and signing with the same key, which is not the case. The encryption operation we perform is over data generated by the application itself, not over external input; it is always the same for the entire lifetime of the key, and it is used only for the integrity check of the key itself.
I'm looking for someone with security knowledge who can help me understand whether the principle "don't use the same key for signing and encryption" really applies to this case, which in my opinion it doesn't.
Thanks a lot.
Your CTO is correct in blocking this. You didn't give details on what kind of payment solution you're building, but the mere fact that you're storing, processing and/or transmitting cardholder and/or sensitive authentication data puts you in scope for a PCI-DSS or PA-DSS audit.
And there are very strict rules for key rotation, strength (you didn't mention which SHA you would use with the HMAC) and storage when handling sensitive cardholder data.
If you aren't familiar with the PCI Security Standards Council, get ready to learn fast. They publish the guidelines that you as a developer must abide by, whether you are developing an in-house solution or one for resale. The first overview is the guidelines themselves:
https://www.pcisecuritystandards.org/pci_security/maintaining_payment_security
As stated above, this forum really is no place to begin to discuss the details of all of the required secure coding practices, network segmentation considerations, employee separation of duties, etc. that compliance entails, along with mountains of paperwork and possibly quarterly scans. And, depending on your size and payment volume, an outside auditor.
In a former position I managed the US, European and South American PCI & PA-DSS software compliance programs, and it was very costly. Talk to an expert, because you do not want to be the next breach of the day. It sounds like your CTO understands the implications, so I'd listen to him.

msmtp and smtp account password - how to obfuscate

I configured msmtp with my gmail account.
I obviously want to avoid writing my password in plaintext format in the config file.
Luckily, msmtp offers the passwordeval option, which can be used to obtain the password from the output of an executable.
The question is: how should I use it?
I found the following suggestion here:
passwordeval gpg -d /some/path/to/.msmtp.password.gpg
That doesn't make much sense to me: if someone is able to access my config file, they will certainly manage to run that command and obtain the password from gpg.
So I believe I'm left with the only option of obfuscating the password within a binary executable, even though I read almost everywhere that this is bad!
My impossible-to-hack implementation would be: if the sendmail process is running, output the correct password; otherwise output a fake one.
Your suggestions?
Are there other (more secure) tricks, different from storing the password in the binary file?
From Sukima's comment:
The reason gpg -d works is that it requires the private key of the person the file is encrypted to. So even if you place that encrypted file in public, it is still encrypted, and only one person (the one with the secret key) can decrypt it. It is assumed that the secret key is locked up on the user's machine and not leaked. It also assumes they have not set up any agent that caches the unlock passphrase while an attacker has direct access to the same machine. All of which is highly unlikely in 99% of all attacks.
There is no standard solution for saving credentials under the constraints of:
having to use the credentials in plain text later,
in an unattended way,
on a system which is not completely controlled by you (if it were, you would just set appropriate rights on the files holding the secrets).
You have several solutions; none solves your problem perfectly:
encrypt your credentials symmetrically: you need to input the key to decrypt them
encrypt asymmetrically: you need to provide your private key, which must be stored somewhere (unattended approach) or keyed in
obfuscate: as you mention, this only protects against part of the population
get it from somewhere else: you need a way to identify your system one way or another
You need to take into account which risk is acceptable and go from there.
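As a concrete instance of the last option (a sketch, assuming the Python keyring package and an OS keyring unlocked by your login session; the service and account names are illustrative):

# msmtp-pass.py -- print the password stored in the OS keyring.
# Store it once, interactively:
#   python3 -c "import keyring; keyring.set_password('msmtp', 'you@gmail.com', 'secret')"
import keyring

password = keyring.get_password("msmtp", "you@gmail.com")
if password is None:
    raise SystemExit("no msmtp password in the keyring")
print(password)

Then point msmtp at it with passwordeval "python3 /path/to/msmtp-pass.py"; the secret lives in the OS keyring rather than in a file readable alongside the config.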

Reading data from European DTCO company card

I need to be able to read card and company identification data from European digital tachograph company cards (smart cards). These are described in the document COMMISSION REGULATION (EC) No 1360/2002, but I have run into a problem. The data I need is contained in the file EF Identification, which must be read with secure messaging, and I therefore need to issue a Manage Security Environment APDU command that requires a key identifier identifying a key residing on the card.
I don't know where to find these key identifiers or the data that makes them up (described in an appendix of the document). I am waiting for feedback from our partners in Europe, but thought I would take a chance and ask here in the hope that someone has done this and can offer some advice.
The key identifier is made up of an equipment serial number, a date, a manufacturer code and a manufacturer-specific type. This suggests a problem, as I need to be able to access the data from any company card, regardless of manufacturer, issuer or holder. I'm not sure how I can get the data to compose the key.
I realise that this is pretty specialised information, but I have been stalled for over a week, so I am pretty desperate to find a solution so I can continue.
I believe that you first have to obtain a certificate from a country CA. You can then perform the following algorithm (simplified from Appendix 11, section 4):
Select and read the card certificate (EF_CERTIFICATE)
Issue a Manage Security Environment command to select the Root CA public key
Issue a Verify Certificate with the country CA certificate
Issue a Manage Security Environment command to select the country CA public key
Issue a Verify Certificate with your certificate
Issue a Manage Security Environment command to select your public key
Issue an Internal authenticate command. Verify response.
Issue a Get Challenge command
Issue an External authenticate command
Calculate the session key
Select File EF_IDENTIFICATION
Perform a Read Binary command using secure messaging (you need the session key to calculate the checksum and decrypt the result).
I don't know the standard, but I would assume that you read out EF Card_Certificate, recover the certificate content and extract the key identifier from that.
Assuming you have the root certificate (it is published here: http://dtc.jrc.it/erca_of_doc/EC_PK.zip), you will need to:
Read EF CA_Certificate
Follow the algorithm in Appendix 11, section 3.3.3
Extract the CA public key from the certificate content
Read EF Card_Certificate
Follow the algorithm in Appendix 11, section 3.3.3
The Key Identifier should then be bytes 20-27 of the recovered certificate content.
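In code, that final step is just a slice (a sketch; the recovery of the certificate content per Appendix 11, section 3.3.3 is not shown, and whether the spec counts bytes from 0 or 1 should be checked against the regulation):

def key_identifier(recovered_content: bytes) -> bytes:
    # Bytes 20-27 of the recovered certificate content,
    # read here as a zero-based half-open slice.
    return recovered_content[20:28]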

Security tokens with unreadable private keys?

I'd like to uniquely identify users by issuing security tokens to them. In order to guarantee non-repudiation I'd like to ensure that no one has access to the user's private key, not even the person who issues the tokens.
What's the cheapest/most-secure way to implement this?
Use a third-party certificate authority: you don't know the private key, and you don't have to care about how the client obtains and secures the private key (though you can still worry about it). Not the cheapest solution ever...
OR:
Share a secret with each client (printed on paper, through email, phone, whatever...).
Have the client generate the keys based on that secret, the time (let's say 5-minute intervals) and whatever else you can get (computer hardware ID, if you already know it, client IP, etc.); a sketch of this derivation follows after this list. Make sure that you have the user input the secret and never store it in an app/browser.
Invalidate/expire the tokens often and negotiate new ones
This is only somewhat safe (just like any other solution)...if you want to be safe, make sure that the client computer is not compromised.
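A minimal sketch of the derivation described in the list above (assumptions: HMAC-SHA-256 as the derivation function and a 5-minute window; this is TOTP-like but not any standard):

import hashlib
import hmac
import time

WINDOW = 300  # seconds, i.e. the 5-minute interval suggested above

def derive_token(shared_secret: bytes, hardware_id: str, ip: str) -> str:
    window = int(time.time() // WINDOW)
    msg = ("%s|%s|%d" % (hardware_id, ip, window)).encode()
    return hmac.new(shared_secret, msg, hashlib.sha256).hexdigest()

# The issuer recomputes the token for the current (and adjacent)
# window from the same inputs and compares the two values.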
It depends on where/how you want to use those keys*, but the bottom line is that with asymmetric keys the client signs the data sent to you (the server) using their private key, and you (the server) verify that data using the client's public key (the opposite of how HTTPS works, where the server holds the private key).
Can you verify, at any point in time, the identity of your clients?
If the client computer is compromised, you can safely assume that the private key is compromised too. What's wrong with SSL/HTTPS? Do you really need to have one certificate per client?
Tokens are not the same thing as keys and they don't have to rely on public/private keys. Transport, however, might require encryption.
*My bank gave me a certificate (which only works in IE) used to access my online banking account. I had to go through several convoluted steps to generate and install that certificate, and it only works on one computer. Do you think that your clients/users would agree to go through this kind of setup?
It would be relatively easy for a compromised computer to steal the user's private key if it were stored as a software ("soft") key, e.g. on the hard drive. (APT and botnet malware has been known to include functionality to do exactly this.)
But more fundamentally, nothing short of physically incapacitating the user will guarantee non-repudiation. Repudiation is something the user chooses to do, opposing evidence notwithstanding, and proving that a user didn't do something is impossible. Ultimately, non-repudiation involves a legal (or at least a business) question: what level of confidence do you have that the user performed the action he is denying having performed, and that his denial is dishonest? Cryptosystems can only provide reasonable confidence of a user's involvement in an action; they cannot provide absolute proof.
PIV cards (and PIV-I cards) use a number of safeguards for signing certificates. First, the private key is stored on the smart card, and there is no trivial way to extract it. The card requires a numeric PIN to use the private signing key, and effectively destroys the key after a certain number of incorrect attempts. The hardware cryptomodule must meet Level-2 standards and be tamper-resistant, and transport of the card requires Level-3 physical security (FIPS 201). The certificate is signed by a trusted CA. The PIN, if entered using a keyboard, must be sent directly to the card to avoid keylogger-type attacks.
These precautions are elaborate, intensive, and still do not guarantee non-repudiation. (What if malware convinces the user to sign a different document than the one he is intending to sign? Or the user is under duress? Or an intelligence agency obtains the card in transit and uses a secret vulnerability to extract the private key before replacing the card?)
Security is not generally a question of cheapest/most secure, but rather of risk assessment, mitigation, and ultimately acceptance. What are your significant risks? If you assess the types of non-repudiation risks you face and implement effective compensating controls, you will be more likely to find a cost-effective solution than if you seek to eliminate risk altogether.
The standard way to handle non-repudiation in a digital-signature application is to use a trusted third party: a third-party certificate authority.
Be careful about trying to create your own system; since you're not an expert in the field, you'll most probably end up either losing the non-repudiation ability that you seek or introducing some other flaw.
The reason the standards for digital signatures exist is that this stuff is very hard to get right in a provable way. See "Schneier's Law"
Also, non-repudiation eventually comes down to someone being sued: you say that "B" did it (signed the agreement, pressed the button, etc.), but "B" denies it. You say you can "prove" that B did it. But so what? You'll need to prove in court that B did it to get the court to grant you relief (to order B to do something, such as pay damages).
And it will be very, very expensive to sue someone and prove the case on the strength of a digital-signature system. If you went to all that trouble and the signature system turned out to be some homebrew scheme rather than a standard, your odds of relief would drop to about 0%.
Conclusion: if you care enough about the digital signature to sue people, then use a standard digital-signature scheme. If you will ultimately negotiate rather than sue, then look at the different options.
For example, why not use a hardware security token? They're now available as apps for people's phones, too.

Signing and verifying an automatically generated report

Last summer, I was working on an application that tested the suitability of a prospective customer's computer for integrating our hardware. One of the notions suggested was to use the HTML report generated by the tool as justification for a refund in certain situations.
My immediate reaction was, "well we have to sign these reports to verify their authenticity." The solution I envisioned involved creating a signature for the report, then embedding it in a meta tag. Unfortunately, this scenario would require the application to sign the report, which means it would need a private key. Once the application is storing the private key, we're back at square one with no guarantee of authenticity.
My next idea was to phone home and have a server sign the report, but then the user needs an internet connection just to test hardware compatibility. Plus, the application would need to authenticate with the server, and an interested party could figure out what credentials it was using to do that.
So my question is this. Is there any way, outside of obfuscation, to verify that the application did indeed generate a given report?
As Eugene has rightly pointed out, my initial answer was to authenticate the receiver. Let me propose an alternative approach for authenticating the sender.
Authenticate the sender:
When your application is deployed at your client's end, you generate and deploy a self-signed PFX certificate which holds the private key.
The details of your client and the passphrase for the PFX are set by your client; you could even have this printed and signed by your client on paper, to hold them accountable for the keys they have just generated.
Now you have a private key which can sign, and when exporting the HTML report you can export the certificate along with the report.
This is a low-cost solution and is not as secure as having your private keys in a cryptotoken, as indicated by Eugene in the previous post.
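A sketch of the signing step with Python's cryptography package (the file names, passphrase handling and meta-tag embedding are illustrative, not prescribed above):

import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import pkcs12

with open("client.pfx", "rb") as f:
    key, cert, _chain = pkcs12.load_key_and_certificates(f.read(), b"client-passphrase")

report = open("report.html", "rb").read()
signature = key.sign(report, padding.PKCS1v15(), hashes.SHA256())
# Embed the signature in the report (e.g. a <meta> tag) and ship the
# certificate with it so the receiver can verify the signature.
print(base64.b64encode(signature).decode())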
Authenticate the receiver:
Have an RSA 2048 key pair at your receiving end. Export your public key to your senders.
When the sender has generated the report, let the report be encrypted with a symmetric key, say AES-256. Let the symmetric key itself be encrypted/wrapped with your public key.
When you receive the encrypted report, use your private key to unwrap/decrypt the symmetric key, and in turn decrypt the encrypted report with the symmetric key.
This way, you make sure that only the intended receiver can view the report.
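A minimal sketch of that wrap/unwrap flow, again with the cryptography package (AES-GCM and RSA-OAEP are chosen here as reasonable defaults; the answer above doesn't prescribe a mode or padding):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Sender: encrypt the report with a fresh AES-256 key, then wrap that
# key with the receiver's RSA public key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
encrypted_report = AESGCM(aes_key).encrypt(nonce, b"<html>report</html>", None)
wrapped_key = receiver_key.public_key().encrypt(aes_key, oaep)

# Receiver: unwrap the AES key, then decrypt the report.
recovered = AESGCM(receiver_key.decrypt(wrapped_key, oaep)).decrypt(
    nonce, encrypted_report, None)
assert recovered == b"<html>report</html>"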
I'd say that you need to re-evaluate the possible risks, and most likely you will find them to be not as important as you might think. The reason is that the report has value for you, but less likely for a customer. So it's more or less a business task, not a programming one.
To answer your concrete question: there's no simple way to protect the private key used for signing from being stolen (if one really wants to steal it). A more complex solution employing a cryptotoken with the private key stored inside would work, but a cryptotoken is itself hardware, and in your scenario it would unnecessarily complicate the scheme.