Keygen tag in HTML5 - security

So I came across this new tag in HTML5, <keygen>. I can't quite figure out what it is for, how it is applied, and how it might affect browser behavior.
I understand that this tag is for form encryption, but what is the difference between <keygen> and having an SSL certificate for your domain? Also, what is the challenge attribute?
I'm not planning on using it, as it is far from being implemented in an acceptable range of browsers, but I am curious as to what EXACTLY this tag does. All I can find is vague, cookie-cutter documentation with no real examples of usage.
Edit:
I have found a VERY informative document, here. This runs through both client-side and server-side implementation of the keygen tag.
I am still curious as to what the benefit of this over a domain SSL certificate would be.

SSL is about "server identification" or "server AND client authentication (mutual authentication)".
In most cases only the server presents its server-certificate during the SSL handshake so that you could make sure that this really is the server you expect to connect to. In some cases the server also wants to verify that you really are the person you pretend to be. For this you need a client-certificate.
The <keygen> tag generates a public/private key pair in the browser and then creates a certificate request; the private key never leaves the browser's key store. The certificate request is sent to a Certificate Authority (CA). The CA creates a certificate and sends it back to the browser, and you are then able to use this certificate for user authentication.
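As a rough illustration of the server side of this exchange, here is a sketch that validates the SignedPublicKeyAndChallenge (SPKAC) blob a <keygen>-bearing form submits, by shelling out to the openssl CLI; the field name and file handling are my own assumptions, not part of any spec:

```python
# A minimal sketch of the server side of a <keygen> submission.
# Assumes the form contained something like <keygen name="spkac" challenge="...">,
# so the browser POSTs a base64 SignedPublicKeyAndChallenge under "spkac".
# Verification is delegated to the openssl CLI ("openssl spkac"), which must
# be installed; error handling is omitted for brevity.
import subprocess
import tempfile

def verify_spkac(spkac_b64: str) -> bool:
    """Check the self-signature on a SignedPublicKeyAndChallenge blob."""
    with tempfile.NamedTemporaryFile("w", suffix=".spkac") as f:
        # "openssl spkac" expects a config-style file: SPKAC=<base64>
        f.write("SPKAC=" + spkac_b64.replace("\n", ""))
        f.flush()
        result = subprocess.run(
            ["openssl", "spkac", "-in", f.name, "-noout", "-verify"],
            capture_output=True,
        )
    return result.returncode == 0
```

Historically, the CA-signed certificate was then returned to the browser with the Content-Type application/x-x509-user-cert, which Netscape-lineage browsers would install into the key store alongside the private key.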

You're missing some history. keygen was first supported by Netscape when it was still a relevant browser. IE, OTOH, supported the same use cases through its ActiveX APIs. Opera and WebKit (or even KHTML), unwilling to reverse-engineer the entire Win32 API, reverse-engineered keygen instead.
It was specified in Web Forms 2.0 (which has now been merged into the HTML specification), in order to improve interoperability between the browsers that implemented it.
Since then, the IE team has reiterated their refusal to implement keygen, and the specification (in order to avoid turning into dry science fiction) has been changed to not require an actual implementation:
Note: This specification does not specify what key types user agents are to support — it is possible for a user agent to not support any key types at all.
In short, this is not a new element, and unless you can ignore IE, it's probably not what you want.

If you're looking for "exactly" then I'd recommend reading the RFC.
The keygen element is for creating a key for authentication of the user, while SSL is concerned with the privacy of communication and the authentication of the server. Quoting from the RFC:
This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use TLS and certificate authentication.

Deprecated
This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible. Be aware that this feature may cease to work at any time.
Source

The document is useful for elaborating on what the keygen element is. The need for it arises in WebID, which may be understood as part of the Semantic Web of Linked Data; see section 2.1.1 of https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#creating-a-certificate

This might be useful for websites that provide paid services, like video on demand, or news websites for professionals, like Bloomberg. With these keys, people can only watch the content on their own computer, not on several computers simultaneously! You decide how the data is stored and processed: you can specify a .asp or .php file that will receive the variables, and that file can store the key in the user's profile. This way your users will not be able to log in from a different computer unless you allow it. You may force them to check their email to authorize the new computer, just like Steam does. Basically, it allows you to individualize service access if your licensing model is per machine, like an operating system's.
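As a toy sketch of that per-machine check (the users table, machine_key column, and first-use enrollment policy are all invented for illustration; a real deployment would add the email-confirmation step described above):

```python
# Hypothetical sketch of per-machine access control: the first key a user
# submits is remembered on their profile, and later logins must present the
# same key. Assumes a "users" table with "name" and "machine_key" columns.
import sqlite3

db = sqlite3.connect("users.db")

def authorize_machine(username: str, submitted_key: str) -> bool:
    row = db.execute(
        "SELECT machine_key FROM users WHERE name = ?", (username,)
    ).fetchone()
    if row is None:
        return False  # unknown user
    if row[0] is None:  # first login: remember this machine's key
        db.execute(
            "UPDATE users SET machine_key = ? WHERE name = ?",
            (submitted_key, username),
        )
        db.commit()
        return True
    return row[0] == submitted_key  # only the enrolled machine may log in
```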
You can check the specs here:
http://www.w3.org/TR/html-markup/keygen.html

Related

Penetration Testing HTML Posting Issue

We are planning to go for a security testing certificate. For that reason we are using the Paros tool to test our system.
The front end of the system is written in GWT, and database connectivity happens through Hibernate.
When we use this tool to test our application, the following behaviour occurs and needs to be restricted.
The tool is able to see the data that is passed to the server. That much is fine, but when we make any changes to the data through the tool, the changes get written through to the database. This is a big security issue.
Can someone guide me on this?
If you're still looking for a solution to this problem, you could use request signing. The reason I didn't mention it earlier is that the only time I had seen request signing, there were certificates involved, and it was mostly using the Web Services Security standard. The other time I recommended implementing request signing was for a mobile application; it's relatively easier to do there as well, since you can use certificates that are on the device to perform the signing, and the server can verify this signature (essentially, a public-key encryption mechanism).
As you mention in the comments, there are multiple aspects to it. One is to prevent XSRF, which essentially means including a nonce to ensure that an attacker cannot replay requests or craft requests that might harm an authenticated user. This nonce has to come from the server, since anything you create using JavaScript, the attacker can create as well. The nonce makes your request time-specific, ensuring it cannot be replayed at a later point in time.
However, a nonce isn't going to stop attacks where a user is on a hostile network and an attacker is performing a MitM attack on all traffic. The attacker can still modify a request, and since the server has never seen that nonce before, it will accept the request as valid. To prevent this, you need two countermeasures in place: one, all traffic should go via SSL, and two, all requests must be signed so as to prevent tampering. The signature part is particularly hard, especially if you have to ensure that an attacker cannot perform the same signing. The examples I have seen of it involve certificate-level authentication for the webapp, with those certificates then used to perform the signing, which might be too stringent a requirement for the application you seem to be developing. Other methodologies involve using something that the user has or knows (a token, a password, a secret answer, etc.) that cannot be replicated by an attacker, and using that information to sign requests.
Here's an example of how you can do this via PHP. I don't know whether this mechanism can be adapted for your purposes, though. OAuth might be another possible method, but since I've never seen an application do it that way, I am not very sure.
Sorry I don't have a specific methodology or examples of code for you to look at, but most implementations I've seen are only from a design standpoint, versus an actual code standpoint.
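To make the design concrete anyway, here is a minimal sketch (mine, not from any of those implementations) of server-issued nonces combined with HMAC request signing; the shared secret, parameter handling, and five-minute lifetime are all assumptions:

```python
# Sketch of server-issued nonces plus HMAC request signing, as described above.
# SECRET is a per-user secret shared out of band; parameter names are
# illustrative, not from any specific framework.
import hashlib
import hmac
import secrets
import time

SECRET = b"per-user shared secret"
issued_nonces = {}  # nonce -> expiry timestamp (use a real store in production)

def issue_nonce() -> str:
    nonce = secrets.token_hex(16)
    issued_nonces[nonce] = time.time() + 300  # valid for five minutes
    return nonce

def sign(params: dict, nonce: str) -> str:
    # Canonicalize by sorting keys so client and server hash the same string.
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hmac.new(SECRET, f"{nonce}|{canonical}".encode(), hashlib.sha256).hexdigest()

def verify(params: dict, nonce: str, signature: str) -> bool:
    expiry = issued_nonces.pop(nonce, None)  # one-time use prevents replay
    if expiry is None or expiry < time.time():
        return False
    return hmac.compare_digest(sign(params, nonce), signature)
```

Because every parameter is covered by the signature, a proxy like Paros can still see the request, but any value it changes will fail verification on the server.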

Advanced SSL: Intermediate Certificate Authority and deploying embedded boxes

Ok Advanced SSL gals and guys - I'll be adding a bounty to this after the two-day period as I think it's a complex subject that deserves a reward for anyone who thoughtfully answers.
Some of the assumptions here are simply that: assumptions, or more precisely hopeful guesses. Consider this a brain-teaser, simply saying 'This isn't possible' is missing the point.
Alternative and partial solutions are welcome, personal experience if you've done something 'similar'. I want to learn something from this even if my entire plan is flawed.
Here's the scenario:
I'm developing on an embedded Linux system and want its web server to be able to serve out-of-the-box, no-hassle SSL. Here's the design criteria I'm aiming for:
Must Haves:
I can't have the user add my homegrown CA certificate to their browser
I can't have the user add a statically generated (at mfg time) self-signed certificate to their browser
I can't have the user add a dynamically generated (at boot time) self-signed certificate to their browser.
I can't default to HTTP and have an enable/disable toggle for SSL. It must be SSL.
Both the embedded box and the web browser client may or may not have internet access, so everything must be assumed to function correctly without internet access. The only root CAs we can rely on are the ones shipped with the operating system or the browser. Let's pretend that that list is 'basically' the same across browsers and operating systems - i.e. we'll have a ~90% success rate if we rely on them.
I cannot use a fly-by-night operation i.e. 'Fast Eddie's SSL Certificate Clearing House -- with prices this low our servers MUST be hacked!'
Nice to Haves:
I don't want the user warned that the certificate's hostname doesn't match the hostname in the browser. I consider this a nice-to-have because it may be impossible.
Do not want:
I don't want to ship the same set of static keys for each box. Kind of implied by the 'can't' list, but I know the risk.
Yes Yes, I know..
I can and do provide a mechanism for the user to upload their own cert/key but I consider this 'advanced mode' and out of scope of this question. If the user is advanced enough to have their own internal CA or purchase keys then they're awesome and I love them.
Thinking Cap Time
My experience with SSL has been generating certs/keys to be signed by a 'real' root, as well as stepping up my game a little bit by making my own internal CA and distributing internally 'self-signed' certs. I know you can chain certificates, but I'm not sure what the order of operations is, i.e. does the browser 'walk up' the chain, see a valid root CA, and treat the certificate as valid, or does it need verification at every level?
I ran across the description of intermediate certificate authority which got me thinking about potential solutions. I may have gone from 'the simple solution' to 'nightmare mode', but would it be possible to:
Crazy Idea #1
Get an intermediate certificate authority cert signed by a 'real' CA. ( ICA-1 )
ROOT_CA -> ICA-1
This certificate would be used at manufacturing time to generate a unique passwordless sub-intermediate certificate authority pair per box.
ICA-1 -> ICA-2
Use ICA-2 to generate a unique server cert/key. The caveat here is: can you generate a cert/key pair for an IP address (and not a DNS name)? A potential use-case for this would be that the user connects to the box initially via HTTP and is then redirected to the SSL service using the IP in the redirect URL (so that the browser won't complain about mismatches). This could be the card that brings the house down; since the SSL connection has to be established before any redirects can happen, I can see that also being a problem. But if that all worked magically, could I then use ICA-2 to generate new cert/key pairs any time the box changes IP, so that when the web server comes back up it always has a 'valid' key chain?
ICA-2 -> SP-1
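For concreteness, here is roughly what minting SP-1 from ICA-2 could look like, sketched with the pyca/cryptography library; the function name, key size, and one-year validity are my own assumptions, and (as the answer below explains) no public CA would actually delegate this power:

```python
# Hypothetical sketch: ICA-2 signs a fresh server certificate for the box's
# current IP address. Re-running this at boot would be the "new cert/key pair
# whenever the IP changes" step, assuming ICA-2's key lives on the box.
import datetime
import ipaddress

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

def issue_server_cert(ica2_cert: x509.Certificate, ica2_key, box_ip: str):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    now = datetime.datetime.utcnow()
    cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, box_ip)]))
        .issuer_name(ica2_cert.subject)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        # Browsers match an IP-based URL against an IP-address SAN, not the CN:
        .add_extension(
            x509.SubjectAlternativeName(
                [x509.IPAddress(ipaddress.ip_address(box_ip))]
            ),
            critical=False,
        )
        .sign(ica2_key, hashes.SHA256())
    )
    return key, cert
```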
Ok, You're So Smart
Most likely, my convoluted solution won't work - but it'd be great if it did. Have you had a similar problem? What'd you do? What were the trade offs?
Basically, no, you can't do this the way you hope to.
You aren't an intermediate SSL authority, and you can't afford to become one. Even if you were, there's no way in hell you'd be allowed to distribute to consumers everything necessary to create new valid certificates for any domain, trusted by default in all browsers. If this were possible, the entire system would come tumbling down (not that it doesn't already have problems).
You can't generally get the public authorities to sign certificates issued to IP addresses, though there's nothing technically preventing it.
Keep in mind that if you're really distributing the private keys in anything but tamper-proof secured crypto modules, your devices aren't really secured by SSL. Anyone who has one of the devices can pull the private key (especially if it's passwordless) and do valid, signed, MITM attacks on all your devices. You discourage casual eavesdropping, but that's about it.
Your best option is probably to get and sign certificates for a valid internet subdomain, and then get the device to answer for that subdomain. If it's a network device in the outgoing path, you can probably do some routing magic to make it answer for the domain, similarly to how many walled-garden systems work. You could have something like "system432397652.example.com" for each system, and then generate a key for each box that corresponds to that subdomain. Have direct IP access redirect to the domain, and either have the box intercept the request, or do some DNS trickery on the internet so that the domain resolves to the correct internal IP for each client. Use a single-purpose host domain for that, don't share with your other business websites.
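A minimal sketch of the bare-IP-to-hostname redirect half of that idea (the example.com domain and serial format are placeholders echoing the paragraph above):

```python
# Each box answers for its own subdomain, which matches its certificate;
# requests that arrive via the raw IP are bounced to that name.
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICE_SERIAL = "432397652"
DEVICE_HOST = f"system{DEVICE_SERIAL}.example.com"  # resolves to this box's IP

class RedirectToDeviceHost(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Host", "").split(":")[0] != DEVICE_HOST:
            # Came in via raw IP: redirect to the name on the certificate.
            self.send_response(301)
            self.send_header("Location", f"https://{DEVICE_HOST}{self.path}")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hello over a cert-matching hostname\n")

if __name__ == "__main__":
    # Port 80 needs privileges; use 8080 or similar when testing.
    HTTPServer(("", 80), RedirectToDeviceHost).serve_forever()
```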
Paying more for certificates doesn't really make them any more or less legit. By the time a company has become a root CA, it's far from a fly-by-night operation. You should check and see if StartSSL is right for your needs, since they don't charge on a per-certificate basis.

Why do browsers show ugly errors for untrusted SSL certificates?

When faced by an untrusted certificate, every single browser I know displays a blaring error like this:
Why is that?
This strongly discourages web developers from using an awesome technology like SSL out of fear that users will find the website extremely shady. Illegitimate (i.e. phishing) sites do fine on HTTP, so that can't be a concern.
Why do they make it look like such a big deal? Isn't having SSL even if untrusted better than not having it at all?
It looks like I am being misunderstood. I am taking issue with the fact that an HTTP site cannot be more secure than an HTTPS site, even an untrusted one. HTTP doesn't do encryption or identification. Phishers can put their sites on HTTP and no warnings are shown. In good faith, I am at the very least encrypting traffic. How can that be a bad thing?
They do that because an SSL certificate isn't just meant to secure the communication over the wire. It is also a means of identifying the source of the content that is being secured (secured content coming from a man-in-the-middle attack via a fake cert isn't very helpful).
Unless you have a third party validate that you are who you say you are, there's no good reason to trust that your information (which is being sent over SSL) is any more secure than if you weren't using SSL in the first place.
SSL provides for secure communication between client and server by allowing mutual authentication, the use of digital signatures for integrity, and encryption for privacy.
(apache ssl docs)
Yep, I don't see anything about third party certificate authorities that all browsers should recognize as "legit." Of course, that's just the way the world is, so if you don't want people to see a scary page, you've got to get a cert signed by someone the browsers will recognize.
or
If you're just using SSL for a small group of individuals or for in-house stuff, you can have people install your root cert in their browser as a trusted cert. This would work fairly well on a lan, where a network admin could install it across the entire network.
It may sound awkward to suggest sending your cert to people to install, but if you think about it, what do you trust more: a cert that came with your browser because that authority paid their dues, or a cert sent to you personally by your server admin / account manager / inside contact?
Just for shits and giggles I thought I'd include the text displayed by the "Help me understand" link in the screenshot in the OP...
When you connect to a secure website, the server hosting that site presents your browser with something called a "certificate" to verify its identity. This certificate contains identity information, such as the address of the website, which is verified by a third party that your computer trusts. By checking that the address in the certificate matches the address of the website, it is possible to verify that you are securely communicating with the website you intended, and not a third party (such as an attacker on your network).
For a domain mismatch (for example trying to go to a subdomain on a non-wildcard cert), this paragraph follows:
In this case, the address listed in the certificate does not match the address of the website your browser tried to go to. One possible reason for this is that your communications are being intercepted by an attacker who is presenting a certificate for a different website, which would cause a mismatch. Another possible reason is that the server is set up to return the same certificate for multiple websites, including the one you are attempting to visit, even though that certificate is not valid for all of those websites. Chromium can say for sure that you reached , but cannot verify that that is the same site as foo.admin.example.com which you intended to reach. If you proceed, Chromium will not check for any further name mismatches. In general, it is best not to proceed past this point.
If the cert isn't signed by a trusted authority, these paragraphs follow instead:
In this case, the certificate has not been verified by a third party that your computer trusts. Anyone can create a certificate claiming to be whatever website they choose, which is why it must be verified by a trusted third party. Without that verification, the identity information in the certificate is meaningless. It is therefore not possible to verify that you are communicating with admin.example.com instead of an attacker who generated his own certificate claiming to be admin.example.com. You should not proceed past this point.
If, however, you work in an organization that generates its own certificates, and you are trying to connect to an internal website of that organization using such a certificate, you may be able to solve this problem securely. You can import your organization's root certificate as a "root certificate", and then certificates issued or verified by your organization will be trusted and you will not see this error next time you try to connect to an internal website. Contact your organization's help staff for assistance in adding a new root certificate to your computer.
Those last paragraphs make a pretty good answer to this question I think. ;)
The whole point of SSL is that you can verify that the site is who it says it is. If the certificate cannot be trusted, then it's highly likely that the site is not who it says it is.
An encrypted connection is really just a side-benefit in that respect (that is, you can encrypt the connection without the use of certificates).
People assume that https connections are secure, good enough for their credit card details and important passwords. A man-in-the-middle can intercept the SSL connection to your bank or paypal and provide you with their own self-signed or different certificate instead of the bank's real certificate. It's important to warn people loudly if such an attack might be taking place.
If an attacker uses a false certificate for the bank's domain, and gets it signed by some dodgy CA that does not check things properly, he may be able to intercept SSL traffic to your bank and you will be none the wiser, just a little poorer. Without the popup warning, there's no need for a dodgy CA, and internet banking and e-commerce would be totally unsafe.
Why is that?
Because most people don't read. They don't know what https means. A big error is MANDATORY to make people read it.
This strongly discourages web developers to use an awesome technology like SSL out of fears that users will find the website extremely shady.
No it doesn't. Do you have any evidence for that? That claim is ridiculous.
This strongly encourages developers and users to know whom they are dealing with.
"fears that users will find the website extremely shady"
What does this even mean? Do you mean "fears that lack of a certificate means that users will find the website extremely shady"?
That's not a "fear": that's the goal.
The goal is that "lack of a certificate means that users will find the website extremely shady." That's the purpose.
Judging from your comments, I can see that you're confused between what you think people are saying and what they are really saying.
Why do they make it look like such a big deal? Isn't having SSL even if untrusted better than not having it at all?
But why do they have to show the error? Sure, an "untrusted" cert can't be guaranteed to be more secure than no SSL, but it can't be less secure.
If you are solely interested in an encrypted connection, yes this is true. But SSL is designed for an additional goal: identification. Thus, certificates.
I am not talking about certs that don't match the domain (yes, that is pretty bad). I am talking about certs signed by authorities not in the browser's trusted CA's (eg: self-signed)
How can you trust the certificate if it is not trusted by anyone you trust?
Edit
The need to prevent man-in-the-middle attacks arises because you are trying to establish a privileged connection.
What you need to understand is that with plain HTTP, there is absolutely no promise of security, and anyone can read the contents passed over the connection. Therefore, you don't pass any sensitive information. There is no need for a warning because you are not transferring sensitive information.
When you use HTTPS, the browser assumes you will be transferring sensitive information, otherwise you would be using plain HTTP. Therefore, it makes a big fuss when it cannot verify the server's identity.
Why is that?
Because if there's a site that's pretending to be a legit site, you really want to know about it as a user!
Look, a secure connection to the attacker is no damn good at all, and every man and his dog can make a self-signed certificate. There's no inherent trust in a self-signed cert from anyone, except for the trust roots you've got installed in your browser. The default set of trust roots is picked (carefully!) by the browser maker with the aim that only CAs who only act in a way to secure trust will be trusted by the system, and this mostly works. You can add your own trust roots too, and if you're using a private CA for testing then you should.
This strongly discourages web developers to use an awesome technology like SSL out of fears that users will find the website extremely shady. Ilegitimate (ie: phishing) sites do fine on HTTP, so that can't be a concern.
What?! You can get a legit certificate for very little. You can set up your own trust root for free (plus some work). Anyone developing and moaning about this issue is just being lazy and/or over-cheap and I've no sympathy for such attitudes.
Ideally a browser would look for information that you want kept secure (such as things that look like credit card numbers) and throw that sort of warning up if there was an attempt to send that data over an insecure or improperly-secured channel. Alas, it's hard to know from just inspection whether data is private or not; just as there's no such thing as an EVIL bit, there's also no PRIVATE bit. (Maybe a pervasive metadata system could do it… Yeah, right. Forget it.) So they just do the best they can and flag up situations where it is extremely likely that there's a problem.
Why do they make it look like such a big deal? Isn't having SSL even if untrusted better than not having it at all?
What threat model are you dealing with?
Browser makers have focused on the case where anyone can synthesize an SSL certificate (because that's indeed the case) and DNS hacks are all too common; what the combination of these means is that you can't know that the IP address you've got for a host name corresponds to the legitimate owner of that domain, and anyone can claim to own that domain. Ah, but you instead trust a CA to at least check that they're issuing the certificate to the right person and that in turn is enough (plus a few other things) to make it possible to work out whether you're talking to the legitimate owner of the domain; it provides a basis for all the rest of the trust involved in a secure conversation. Hopefully the bank will have used other unblockable communications (e.g., a letter sent by post) to tell people to check that the identity of the site is right (EV certs help a little here) but that's still a bit of a band-aid given how unsuspicious some users are.
The problems with this come from CAs who don't apply proper checks (frankly, they ought to be kicked off the gravy train for failing their duty) and users who'll tell anyone anything. You can't stop them from deliberately posting their own CC# on a public message board run by some shady characters from Smolensk[1], no matter how stupid an idea that is…
[1] Not that there's anything wrong with that city. The point would be the same if you substituted Tallahassee, Ballarat, Lagos, Chongqing, Bogota, Salerno, Durban, Mumbai, … There are scum all over.

Backwards HTTPS; User communicates with previously generated private key

I am looking for something like https, but backwards. The user generates their own private key (in advance) and then (only later) provides the web application with the associated public key. This part of the exchange should (if necessary) occur out-of-band. Communication is then encrypted/decrypted with these keys.
I've thought of some strange JavaScript approaches to implement this (from the client's perspective: form submissions are encrypted on their way out, while web content is decrypted on AJAX response). I recognize this is horrible, but you can't deny that it would be a fun hack. However, I wondered if there was already something out there, something commonly implemented in browsers and web/application servers.
Primarily this is to address compromised security when (unknowingly) communicating through a rogue access point that may be intercepting https connections and issuing its own certificates. Recently (in my own network) I recreated this and (with due horror) soon saw my gmail password in plain text! I have a web application going that only I and a few others use, but where security (from a learning standpoint) needs to be top notch.
I should add, the solution does not need to be practical
Also, if there is something intrinsically wrong with my thought process, I would greatly appreciate it if someone set me on the right track or directed me to the proper literature. Science is not about finding better answers; science is about forming better questions.
Thank you for your time,
O∴D
This is already done. They're called TLS client certificates. SSL doesn't have to be one-way; it can be two-party mutual authentication.
What you do is have the client generate a private key. The client then sends a CSR (Certificate Signing Request) to the server, who signs the public key therein and returns it to the client. The private key is never sent over the network. If the AP intercepts and modifies the key, the client will know.
However, this does not stop a rogue AP from requesting a certificate on behalf of a client. You need an out-of-band channel to verify identity. There is no way to stop a man in the middle from impersonating a client without some way to get around that MITM.
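To make the flow concrete, here is a sketch of the client's half using the pyca/cryptography library; the subject name is illustrative:

```python
# Sketch of the client side of TLS mutual authentication: generate a private
# key locally and a CSR to send to the server for signing.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice")]))
    .sign(key, hashes.SHA256())  # signature proves possession of the private key
)

# Only the CSR travels over the network; the private key stays local.
pem_csr = csr.public_bytes(serialization.Encoding.PEM)
```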
If a rogue access point can sniff packets, it can also change packets (an ‘active’ man-in-the-middle attack). So any security measure a client-side script could possibly provide would be easily circumvented by nobbling the script itself on the way to the client.
HTTPS—and the unauthorised-certificate warning you get when a MitM is trying to fool you—is as good as it gets.
SSL, and therefore HTTPS, allows for client certificates. On the server side you can use these environment variables to verify a certificate. If you only have one server and a bunch of clients, then a full PKI isn't necessary. Instead, you can keep a list of valid client certificates in the database. Here is more info on the topic.
Implementing anything like this in JavaScript is a bad idea.
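As a rough sketch of that whitelist approach, assuming Apache's mod_ssl with SSLVerifyClient require and SSLOptions +StdEnvVars so the client certificate's fields land in the request environment (the allowed_certs table is my invention):

```python
# Sketch: accept a request only if the verified client certificate's serial
# appears in a database whitelist. Assumes a CGI-style deployment where
# mod_ssl exposes SSL_CLIENT_* variables in the process environment.
import os
import sqlite3

def client_is_authorized(db: sqlite3.Connection) -> bool:
    serial = os.environ.get("SSL_CLIENT_M_SERIAL")  # set by mod_ssl per request
    if serial is None:
        return False  # no verified client certificate was presented
    row = db.execute(
        "SELECT 1 FROM allowed_certs WHERE serial = ?", (serial,)
    ).fetchone()
    return row is not None
```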
I don't see why you are using asymmetric encryption here. For one, it is slow, and secondly, it is vulnerable to man-in-the-middle attacks anyhow.
Usually, you use asymmetric encryption to have a relatively secure session negotiation, including an exchange of keys for a symmetric encryption that is valid for the session.
Since you use a secure channel for the negotiation, I don't really understand why you even send around public keys, which themselves are only valid for one session.
Asymmetric encryption makes sense if you have a shared secret that allows verifying a public key. Having this shared secret is significantly easier if you don't change the key for every session, and if the key is generated in a central place (i.e. the server, and not by all the clients).
Also, as Rook already pointed out, JavaScript is a bad idea. You have to write everything from scratch, starting with basic arithmetic operations, since Number won't get you very far if you want to work with keys of an order of magnitude that provides reasonable security.
greetz
back2dos

I need resources for API security basics. Any suggestions?

I've done a little googling but have been a bit overwhelmed by the amount of information. Until now, I've been considering asking for a valid md5 hash for every API call but I realized that it wouldn't be a difficult task to hijack such a system. Would you guys be kind enough to provide me with a few links that might help me in my search? Thanks.
First, consider OAuth. It's somewhat of a standard for web-based APIs nowadays.
Second, some other potential resources -
A couple of decent blog entries:
http://blog.sonoasystems.com/detail/dont_roll_your_own_api_security_recommendations1/
http://blog.sonoasystems.com/detail/more_api_security_choices_oauth_ssl_saml_and_rolling_your_own/
A previous question:
Good approach for a web API token scheme?
I'd like to add some clarifying information to this question. The "use OAuth" answer is correct, but also loaded (given the spec is quite long and people who aren't familiar with it typically want to kill themselves after seeing it).
I wrote up a story-style tutorial on how to go from no security to HMAC-based security when designing a secure REST API here:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
This ends up being basically what is known as "2-legged OAuth". Because OAuth was originally intended for verifying client applications, the standard flow is 3-legged, involving the authenticating service, the user staring at the screen, and the service that wants to use the client's credentials.
2-legged OAuth (and what I outline in depth in that article) is intended for service APIs to authenticate between each other. For example, this is the approach Amazon Web Services uses for all their API calls.
The gist is that with any request over HTTP you have to consider the attack vector where some malicious man-in-the-middle is recording and replaying or changing your requests.
For example, you issue a POST to /user/create with name 'bob', well the man-in-the-middle can issue a POST to /user/delete with name 'bob' just to be nasty.
The client and server need some way to trust each other and the only way that can happen is via public/private keys.
You can't just pass the public/private keys back and forth, NOR can you simply provide a unique token signed with the private key (which is typically what most people do, thinking that makes them safe). While that will identify the original request as coming from the real client, it still leaves the arguments to the command open to change.
For example, if I send:
/chargeCC?user=bob&amt=100.00&key=kjDSLKjdasdmiUDSkjh
where the key is my public key signed by my private key, a man-in-the-middle can still intercept this call and re-submit it to the server with an "amt" value of "10000.00" instead.
The key is that you have to include ALL the parameters you send in the hash calculation, so when the server gets it, it re-vets all the values by recalculating the same hash on its side.
REMINDER: Only the client and server know the private key.
This style of verification is called an "HMAC"; it is a checksum verifying the contents of the request.
Because hash generation is SO touchy and must be done EXACTLY the same way on both the client and the server in order to get the same hash, there are super-strict rules on exactly how all the values should be combined.
For example, these two lines produce VERY different hashes when you try to sign them with SHA-1:
/chargeCC&user=bob&amt=100
/chargeCC&amt=100&user=bob
A lot of the OAuth spec is spent describing that exact method of combination in excruciating detail, using terminology like "natural byte ordering" and other non-human-readable garbage.
It is important though, because if you get that combination of values wrong, the client and server cannot correctly vet each other's requests.
You also can't take shortcuts and just concatenate everything into a huge String; Amazon tried this with AWS Signature Version 1 and it turned out wrong.
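To illustrate both points, here is a minimal sketch of mine: parameters are canonicalized into one agreed order before the HMAC is computed, so both sides derive the same signature and any tampering is detected (the secret, endpoint, and SHA-256 choice are illustrative):

```python
# Canonicalize parameters (here: sorted by name) before HMAC'ing, so the
# client and server always hash the same string regardless of insertion order.
import hashlib
import hmac
from urllib.parse import urlencode

PRIVATE_KEY = b"only the client and server know this"

def sign_request(path: str, params: dict) -> str:
    canonical = path + "?" + urlencode(sorted(params.items()))
    return hmac.new(PRIVATE_KEY, canonical.encode(), hashlib.sha256).hexdigest()

# Same parameters, different insertion order -> same signature:
a = sign_request("/chargeCC", {"user": "bob", "amt": "100.00"})
b = sign_request("/chargeCC", {"amt": "100.00", "user": "bob"})
assert a == b

# Tampering with any covered value changes the signature, so the server
# detects it when it recalculates the hash on its side:
assert a != sign_request("/chargeCC", {"user": "bob", "amt": "10000.00"})
```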
I hope all of that helps, feel free to ask questions if you are stuck.
