SagePay's form callback can be hacked by re-using the success URL that the user is directed to upon a successful transaction. This can create all sorts of problems with duplicate transactions, fake transactions etc.
You can check for a duplicate VPSTxId, but these can be generated anew by hacking around the crypt parameter of the callback URL.
The crypt parameter can also be manipulated to generate a different "Amount" field.
I have not tested what other field values can be changed by hacking the callback URL crypt parameter.
Is there any way (as per PayPal's IPN validation) of doing a double-check callback to SagePay to ensure that the transaction is new and unique?
Thanks for your post. In general we encourage clients to use Server integration where they can. We also constantly monitor transactions for suspicious behaviour and proactively contact our customers if we suspect any malicious activity.
We recommend customers make sure that they’re using the latest version of our integration protocol which is currently v3. Get the latest integration documents.
As Dan suggests you could use the Reporting and Admin API to validate that a transaction does indeed exist on the Sage Pay side but having an additional validation mechanism (like PayPal's IPN) is something we will actively explore.
If you'd like us to update you on this, then please get in contact with our customer services team at support@sagepay.com or 0845 111 44 55.
Sage Pay Support
You should always redirect a user from a success URL.
I personally use a fulfil page (the success URL) and a thank-you page. On the fulfil page, you should obviously only ever process a transaction once (based on the transaction ID), and you can store the crypt sent with the transaction. The crypt has to be valid, and it can only be generated by someone who has the encryption key.
So hacking would be extremely difficult unless you are being very lax about security; the attacker would have to know your encryption key to even begin trying.
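Below is a rough sketch of that "process once" check on the fulfil page. The `db` helpers are hypothetical placeholders, not part of any SagePay library; only the field names (VPSTxId, VendorTxCode, Status, Amount) come from the Form protocol callback.

```python
# Hypothetical sketch: idempotent fulfilment keyed on VPSTxId, with an
# amount cross-check against the original order. `db` is a placeholder
# for your own data access layer.
def fulfil(decrypted_callback, db):
    vps_tx_id = decrypted_callback["VPSTxId"]
    if db.transaction_already_processed(vps_tx_id):
        return "already processed"            # never fulfil the same transaction twice

    order = db.find_order(decrypted_callback["VendorTxCode"])
    if order is None or str(order.amount) != decrypted_callback["Amount"]:
        return "rejected"                     # amount in the crypt must match our records

    if decrypted_callback["Status"] == "OK":
        db.mark_paid(order.id, vps_tx_id)
        return "fulfilled"
    return "not paid"
```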
Alternatively, you should use the Server integration, so that the communications are server-to-server rather than client-to-server. There is little difference between Form and Server.
10 immutable laws of security
http://technet.microsoft.com/library/cc722487.aspx
I'm creating a WebApi project that handles payments for my application.
My current approach looks like this:
CreatePaymentMethod (creates the payment, contacts PayPal, and redirects the user to the payment page; once the payment is done, the user is redirected to PaymentApprovalMethod)
PaymentApprovalMethod (confirms the payment in my system, allowing the user to access the content)
What I'm wondering is: what if a user directly crafts a request to invoke PaymentApprovalMethod?
Suppose:
- CreatePaymentMethod is called, and the user sniffs the transaction ID of the redirect with Fiddler
- The user then crafts an ad hoc call to PaymentApprovalMethod with that transaction ID
I see two possible ways to handle this situation:
Restrict (I don't know how) calls to PaymentApprovalMethod to requests coming from PayPal only
Cross-check the payment inside PaymentApprovalMethod ("OK, user, you say transaction ID 123 is done; I'll contact PayPal to verify this before trusting the call")
The second one seems safer but also more costly in time and resources.
Am I missing something? Is there something else I can do to avoid this problem?
Take a look at Instant Payment Notification (IPN). PayPal's system will POST transaction data to a listener script that you set up, for every transaction on your account, be it a payment, refund, dispute, cleared payment (from pending), etc. You can use this to completely automate many post-payment procedures.
Part of your IPN listener will include a call back to PayPal's server to verify that the data actually came from them. You can then log IPNs separately, and only process verified IPNs, which means you know somebody didn't just POST directly to it on their own.
Some people do go a step further and only accept traffic to that script from PayPal's IP addresses, however, they do change their addresses quite a bit and I think that just becomes a hassle. The verification has always been enough for me.
IPN is definitely what I would recommend for you.
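As a minimal sketch of that verification round-trip (Flask and requests are my assumptions here; swap in whatever stack you use):

```python
# Sketch of an IPN listener: echo the POSTed data back to PayPal with
# cmd=_notify-validate and only act on it if PayPal answers "VERIFIED".
from flask import Flask, request
import requests

app = Flask(__name__)
PAYPAL_URL = "https://www.paypal.com/cgi-bin/webscr"  # use the sandbox host while testing


@app.route("/ipn", methods=["POST"])
def ipn_listener():
    params = request.form.to_dict()
    verify = {"cmd": "_notify-validate", **params}
    resp = requests.post(PAYPAL_URL, data=verify)
    if resp.text == "VERIFIED":
        # Log and process: check txn_id hasn't been seen before, the amount
        # and currency match your records, the receiver email is yours, etc.
        process_verified_ipn(params)   # hypothetical application function
    return "", 200                     # always return 200 so PayPal stops retrying
```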
Summary: I want to use the returnUrl as a proof that the transaction has been accepted by PayPal.
I'm implementing a very basic purchase workflow based on PayPal.
Everything works nice, the User clicks on pay, the User goes to PayPal, PayPal sends the User to my returnURL... and I'm accepting the payment in this last step.
I know I would have to implement an IPN endpoint and accept the payment there, but this project is very basic and I'm too old or too lazy to implement all that asynchronous behavior, which can be a hell of edge cases.
It would be nice if I could just make the returnUrl more trustworthy and difficult to fake.
I was thinking that the returnURL could include a checksum signature based on a secret key stored in the PayPal account and on the actual transaction token.
I don't know if this exists; I didn't find anything like it in the documentation. Any suggestion to make the returnUrl more trustworthy is welcome.
Also, if someone thinks I'm completely wrong and the returnUrl is never going to be proof that the transaction has been accepted, please say so.
When you're just doing the return URL you need to post to PayPal again to verify the transaction using your PDT token.
Say your return URL is Thanks.aspx:
"From the code-behind of Thanks.aspx, you'll parse the tx value and make an HTTP POST to https://www.paypal.com/cgi-bin/webscr with the following parameters: cmd=_notify-synch&tx=[TransactionID]&at=[PDTIdentityToken]."
This will respond with whether or not that request was valid.
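If it helps, here is a rough sketch of that PDT round-trip (the requests library and the PDT_TOKEN placeholder are my assumptions; the first line of PayPal's response is SUCCESS or FAIL, followed by key=value pairs):

```python
# Sketch of the PDT check: POST tx and your PDT identity token back to
# PayPal and parse the SUCCESS/FAIL response.
import urllib.parse
import requests

PDT_TOKEN = "your-pdt-identity-token"   # placeholder for your own token


def verify_pdt(tx):
    resp = requests.post(
        "https://www.paypal.com/cgi-bin/webscr",
        data={"cmd": "_notify-synch", "tx": tx, "at": PDT_TOKEN},
    )
    lines = resp.text.strip().splitlines()
    if not lines or lines[0] != "SUCCESS":
        return None
    # Remaining lines are URL-encoded key=value pairs describing the transaction.
    return {
        k: urllib.parse.unquote_plus(v)
        for k, v in (line.split("=", 1) for line in lines[1:] if "=" in line)
    }
```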
The problem is that this page isn't guaranteed to get hit. The user could close their browser, or their internet could get cut off, or anything else.
The IPN will be getting hit from PayPal's servers, and you really can't beat that.
It's pretty easy to set up, but I suggest reading through this document which will explain the PDT and IPN methods, and gives an easy way to figure out what you need.
http://www.codeproject.com/Articles/42894/Introduction-to-PayPal-for-C-ASP-NET-developers?msg=4382854#xx4382854xx
After completing an installation for Express Checkout I realised how it could be exploited.
Even though I am creating a unique invoice number and returning to mark that as paid, I found that it could still be exploited by a user changing the invoice parameter in the return link. Of course there were already checks to ensure that any invoice could only be paid once, but I needed to make it hack proof.
So what I finished up doing was adding some extra checks to ensure that the invoice in question was the last invoice assigned to that user. Yes, the same session ID is maintained even after a visit to PayPal and back.
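Roughly, the extra check looks like this (the session and `db` helpers are hypothetical; the point is tying the invoice in the return link back to the user who started the checkout):

```python
# Hypothetical sketch: only mark an invoice paid if it is unpaid and is
# the latest invoice assigned to the user in the current session.
def mark_invoice_paid(session, invoice_id, db):
    invoice = db.find_invoice(invoice_id)
    if (invoice is None
            or invoice.paid
            or invoice.id != db.latest_invoice_id_for(session["user_id"])):
        raise ValueError("invoice does not belong to this session")
    db.mark_paid(invoice.id)
```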
I've done a little googling but have been a bit overwhelmed by the amount of information. Until now, I've been considering asking for a valid md5 hash for every API call but I realized that it wouldn't be a difficult task to hijack such a system. Would you guys be kind enough to provide me with a few links that might help me in my search? Thanks.
First, consider OAuth. It's somewhat of a standard for web-based APIs nowadays.
Second, some other potential resources -
A couple of decent blog entries:
http://blog.sonoasystems.com/detail/dont_roll_your_own_api_security_recommendations1/
http://blog.sonoasystems.com/detail/more_api_security_choices_oauth_ssl_saml_and_rolling_your_own/
A previous question:
Good approach for a web API token scheme?
I'd like to add some clarifying information to this question. The "use OAuth" answer is correct, but also loaded (given the spec is quite long and people who aren't familiar with it typically want to kill themselves after seeing it).
I wrote up a story-style tutorial on how to go from no security to HMAC-based security when designing a secure REST API here:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
This ends up being basically what is known as "2-legged OAuth"; because OAuth was originally intended for verifying client applications, the flow has three parts involving the authenticating service, the user staring at the screen, and the service that wants to use the client's credentials.
2-legged OAuth (and what I outline in depth in that article) is intended for service APIs to authenticate between each other. For example, this is the approach Amazon Web Services uses for all their API calls.
The gist is that with any request over HTTP you have to consider the attack vector where some malicious man-in-the-middle is recording and replaying or changing your requests.
For example, you issue a POST to /user/create with name 'bob', well the man-in-the-middle can issue a POST to /user/delete with name 'bob' just to be nasty.
The client and server need some way to trust each other and the only way that can happen is via public/private keys.
You can't just pass the public/private keys back and forth, NOR can you simply provide a unique token signed with the private key (which is typically what most people do, thinking it makes them safe). While that will identify the original request as coming from the real client, it still leaves the arguments to the command open to change.
For example, if I send:
/chargeCC?user=bob&amt=100.00&key=kjDSLKjdasdmiUDSkjh
where the key is my public key signed by my private key, a man-in-the-middle can still intercept this call and re-submit it to the server with an "amt" value of "10000.00" instead.
The key is that you have to include ALL the parameters you send in the hash calculation, so when the server gets it, it re-vets all the values by recalculating the same hash on its side.
REMINDER: Only the client and server know the private key.
This style of verification is called an "HMAC"; it is a checksum verifying the contents of the request.
Because hash generation is SO touchy and must be done EXACTLY the same on both the client and server in order to get the same hash, there are super-strict rules on exactly how all the values should be combined.
For example, these two lines produce VERY different hashes when you try to sign them with SHA-1:
/chargeCC&user=bob&amt=100
/chargeCC&amt=100&user=bob
A lot of the OAuth spec is spent describing that exact method of combination in excruciating detail, using terminology like "natural byte ordering" and other non-human-readable garbage.
It is important though, because if you get that combination of values wrong, the client and server cannot correctly vet each other's requests.
You also can't take shortcuts and just concatenate everything into a huge String; Amazon tried this with AWS Signature Version 1 and it turned out to be flawed.
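To make the idea concrete, here is an illustrative sketch of request signing (not AWS's or OAuth's exact canonicalisation rules, just the shape of the technique): sort the parameters, build one unambiguous string, HMAC it with the shared secret, and have the server repeat the same steps and compare.

```python
import hashlib
import hmac


def sign_request(path, params, secret):
    # Fixed, unambiguous ordering so client and server build the same string.
    canonical = path + "&" + "&".join(f"{k}={params[k]}" for k in sorted(params))
    return hmac.new(secret.encode(), canonical.encode(), hashlib.sha256).hexdigest()


# Client side: every parameter is covered by the signature.
params = {"user": "bob", "amt": "100.00"}
sig = sign_request("/chargeCC", params, secret="shared-secret")


# Server side: recompute from the received values and compare in constant time.
def verify_request(path, params, secret, sig):
    return hmac.compare_digest(sign_request(path, params, secret), sig)
```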
I hope all of that helps, feel free to ask questions if you are stuck.
I want to create a portal website for log-in, news and user management. And another web site for a web app that the portal redirects to after login.
One of my goals is to be able to host the portal and web app on different servers. The portal would transmit the user's ID to the web app once the user had successfully logged in and been redirected to the web app. But I don't want people to be able to just bypass the login, or access other users' accounts, by transmitting user IDs straight to the web app.
My first thought is to transmit the user ID encrypted as a POST variable or query string value, using some kind of public/private key scenario and adding a DateTime stamp to the key to make it vary every time.
But I haven't done this kind of thing before, so I'm wondering if there aren't better ways to do this.
(I could potentially communicate via database, by having the portal store the user id with a key in a database and passing that key to the web app which uses it to get the user id from that database. But that seems crazy.)
Can anyone give a way to do this or advice? Or is this a bad idea all-together?
Thanks for your time.
Basically, you are asking for a single-sign-on solution. What you describe sounds a lot like SAML, although SAML is a bit more advanced ;-)
It depends on how secure you want this entire thing to be. Generating an encrypted token with an embedded timestamp still leaves you open to spoofing - if somebody steals the token (e.g. through network sniffing) he will be able to submit his own request with the stolen token. Depending on the time to live you give your token, this window can be limited, but a determined hacker will still be able to exploit it. Besides, you cannot make the time to live too small or you will start rejecting valid requests.
Another approach is to generate "use once" tokens. This is 'bullet-proof' in terms of spoofing, but it requires coordination among all the servers within the server farm servicing your app, so that if one of them has processed a token the others will reject it.
Making it really robust for failover scenarios and so on would require some additional steps, so it all boils down to how secure you need it to be and how much you want to invest in building it.
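As a sketch of the "use once" variant, assuming a shared store that every server in the farm can see (Redis here, purely as an example):

```python
# Single-use SSO tokens: issue a random token with a short TTL, and redeem
# it atomically so no two servers can accept the same token.
import secrets
import redis

store = redis.Redis()   # shared by all app servers


def issue_token(user_id, ttl_seconds=120):
    token = secrets.token_urlsafe(32)
    store.setex(f"sso:{token}", ttl_seconds, user_id)
    return token


def redeem_token(token):
    user_id = store.getdel(f"sso:{token}")   # atomic get-and-delete (Redis 6.2+)
    return user_id.decode() if user_id else None
```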
I suggest looking at SAML
PGP would work but it might get slow on a high-traffic site
One thing I've done in the past is use a shared-secret method: take a token that only I and the other website operator know, concatenate it with something identifying the user (like their user name), then hash that with a checksum algorithm such as SHA256 (you could use MD5 or SHA1, which are usually more widely available, but they are much easier to break).
The other end should do the same thing: take the passed identifying information and checksum it, then compare that to the passed checksum. If they match, the login is valid.
For added security you could also concatenate the date or some other rotating value. It helps to run SSL on both sides as well.
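A small sketch of that scheme, with one tweak: it uses HMAC-SHA256 rather than hashing a raw concatenation, since HMAC is the standard way to mix a shared secret into a digest. The secret value and field choices are assumptions.

```python
import hashlib
import hmac
from datetime import date

SHARED_SECRET = b"known-only-to-the-two-sites"   # placeholder


def make_checksum(username):
    # Rotate the value daily by folding today's date into the message.
    message = f"{username}|{date.today().isoformat()}".encode()
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()


def is_valid(username, checksum):
    # The receiving site recomputes the checksum and compares in constant time.
    return hmac.compare_digest(make_checksum(username), checksum)
```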
In general, the answer lies somewhere in SHA256 / MD5 / SHA1 plus a shared secret, but no scheme removes the need for a human to actually think. If there is money involved, we may assume there are no limits to what some people will do - I ran with [ a person ] in high school for a few months to observe what that sort will do in practice, and after a few months I learned not to run with that kind. Someone who tediously avoids work all week can, suddenly at 4 AM on a Saturday morning, operate at a level of effort and analytical functioning that can only be described as "Expertise" (note the capitalization). There has to be a solution, or sites like Google and this one would not stand the chance of a dandelion in a lightning bolt.
There is a study in the mathematical works of cryptography whereby an institution (with reputable goals) can issue information - digital cash - that can exist on the open wire without revealing anything. Who would break such schemes? My experience with [ person ] shows that it is a study in socialization; it depends on who you want to run with. And what is the defense against sniffers if the value is already available to anyone with a browser?
<form type="hidden" value="myreallysecretid">
vis a vis
<form type="hidden" value="weoi938389wiwdfu0789we394">
So which one is valuable against attack? Neither. If someone wants to snag some snake oil from you, maybe you get the 2:59 AM phone call that begins: "I'm an investor, we sunk thousands into your website. I just got a call from our security pro ....." All you can do to prepare for that moment is to use established, known tools like SHA - of which the 256 variety is the acknowledged "next thing" - and to have trace controls in place so that the security pro can factor them into insurance and bonding.
That is assuming you can even find one who knows how those tools work; their first line of defense is not talking to you, and then they have their own literature and will want you to use their tools.
Then you don't get to code anything.
Applications send out emails to verify user accounts or reset a password. I believe the following is the way it should be and I am asking for references and implementations.
If an application has to send out a link in an email to verify the user's address then, in my view, the link and the application's processing of the link should have the following characteristics:
The link contains a nonce in the request URI (http://host/path?nonce).
On following the link (GET), the user is presented a form, optionally with the nonce.
User confirms the input (POST).
The server receives the request and
checks input parameters,
performs the change,
and invalidates the nonce.
This should be correct per HTTP RFC on Safe and Idempotent Methods.
The problem is that this process involves one additional page or user action (item 3), which is considered superfluous (if not useless) by a lot of people. I had problems presenting this approach to peers and customers, so I am asking for input on this from a broader technical group. The only argument I had against skipping the POST step was a possible pre-loading of the link from the browser.
Are there references on this subject that might better explain the idea and convince even a non-technical person (best practices from journals, blogs, ...)?
Are there reference sites (preferably popular and with many users) that implement this approach?
If not, are there documented reasons or equivalent alternatives?
Thank you,
Kariem
Details spared
I have kept the main part short, but to reduce too much discussion around the details which I had intentionally left out, I will add a few assumptions:
The content of the email is not part of this discussion. The user knows that she has to click the link to perform the action. If the user does not react, nothing will happen, which is also known.
We do not have to indicate why we are mailing the user, nor the communication policy. We assume that the user expects to receive the email.
The nonce has an expiration timestamp and is directly associated with the recipient's email address to reduce duplicates.
Notes
With OpenID and the like, normal web applications are relieved of implementing standard user account management (password, email, ...), but some customers still want 'their own users'
Strangely enough, I haven't found a satisfying question or answer on this here yet. What I have found so far:
Answer by Don in HTTP POST with URL query parameters — good idea or not?
Question from Thomas -- When do you use POST and when do you use GET?
This question is very similar to Implementing secure, unique “single-use” activation URLs in ASP.NET (C#).
My answer there is close to your scheme, with a few issues pointed out - such as short period of validity, handling double signups, etc.
Your use of a cryptographic nonce is also important - a point many tend to skip over, e.g. "let's just use a GUID"...
One new point that you do raise, and it is important here, is the idempotency of GET.
Whilst I agree with your general intent, it's clear that idempotency is in direct contradiction with one-time links, which are a necessity in some situations such as this.
I would have liked to posit that this doesn't really violate the idempotency of the GET, but unfortunately it does... On the other hand, the RFC says GET SHOULD be idempotent; it's not a MUST. So I would say forgo it in this case, and stick to the one-time auto-invalidated links.
If you really want to aim for strict RFC compliance, and not get into non-idempotent(?) GETs, you can have the GET page auto-submit the POST - kind of a loophole around that bit of the RFC, but legit, and you don't require the user to double opt in, and you're not bugging him...
You don't really have to worry about preloading (are you talking about CSRF, or browser optimizers?)... CSRF is useless because of the nonce, and optimizers usually won't process the JavaScript (used to auto-submit) on the preloaded page.
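For illustration, a sketch of that GET-renders, JavaScript-auto-submits, POST-invalidates flow (Flask is an assumed framework here; `consume_nonce` is a hypothetical helper that atomically marks the nonce as used):

```python
from flask import Flask, abort, request

app = Flask(__name__)

AUTO_SUBMIT = """
<form id="confirm" method="post" action="/verify">
  <input type="hidden" name="nonce" value="{nonce}">
  <noscript><button type="submit">Confirm</button></noscript>
</form>
<script>document.getElementById("confirm").submit();</script>
"""


@app.route("/verify", methods=["GET"])
def show_confirmation():
    # The GET changes nothing; it only renders the auto-submitting form.
    # In a real app, escape/validate the nonce before embedding it in HTML.
    return AUTO_SUBMIT.format(nonce=request.args.get("nonce", ""))


@app.route("/verify", methods=["POST"])
def confirm():
    # The POST performs the change and invalidates the nonce.
    if not consume_nonce(request.form.get("nonce", "")):   # hypothetical helper
        abort(400)
    return "Your address has been verified."
```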
About password reset:
The practice of doing this by sending an email to the user's registered email address is, while very common in practice, not good security. Doing this fully outsources your application security to the user's email provider. It does not matter how long the passwords you require are or how clever your password hashing is: I will be able to get into your site by reading the email sent out to the user, given that I have access to the email account or am able to read the unencrypted email anywhere on its way to the user (think: evil sysadmins).
This might or might not be important depending on the security requirements of the site in question, but I, as a user of the site, would at least want to be able to disable such a password reset function since I consider it unsafe.
I found this white paper that discusses the topic.
The short version of how to do it in a secure way:
Require hard facts about the account:
- username
- email address
- a 10-digit account number or other information such as a social security number
Require that the user answers at least three predefined questions (predefined by you; don't let the user create his own questions) that cannot be trivial, like "What's your favorite vacation spot", not "What's your favorite color".
Optionally: Send a confirmation code to a predefined email address or cell number (SMS) that the user has to input.
Allow the user to input a new password.
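A skeleton of those steps might look like the following; every `db` and `sms` call is a placeholder for your own implementation, and the exact checks depend on which hard facts your site actually holds.

```python
def start_password_reset(db, sms, username, email, account_no, answers):
    user = db.find_user(username=username, email=email, account_no=account_no)
    if user is None or not db.security_answers_match(user.id, answers, required=3):
        return None                        # give no hint about which check failed
    code = db.create_reset_code(user.id)   # short-lived, single use
    sms.send(user.phone, f"Your password reset code: {code}")
    return user.id


def finish_password_reset(db, user_id, code, new_password):
    if not db.consume_reset_code(user_id, code):
        return False
    db.set_password(user_id, new_password)  # stored hashed, of course
    return True
```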
I generally agree with you, with some modifications suggested below.
User registers at your site providing an email address.
A verification email is sent to the user's address with two links:
a) one link with a GUID to verify the registration
b) one link with a GUID to reject the verification
When they visit the verification URL from their email they are automatically verified, and the verification GUID is marked as used in your system.
When they visit the rejection URL from their email they are automatically removed from the queue of possible verifications, but more importantly you can tell the user that you are sorry for the unwanted registration email and give them further options, such as removing their email address from your system. This will stop the customer-service complaints along the lines of "someone entered my email in your system", and so on.
Yes, you should assume that when they click the verification link they are verified. Making them click a second button on a page is a bit much and is only needed for double opt-in style registration, where you plan to spam the person that registered. Standard registration/verification schemes don't usually require this.
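For completeness, a sketch of the two handlers (the `db` helpers are hypothetical; the GUID is the single-use value embedded in each link):

```python
def handle_verification(db, guid):
    # Marks the GUID as used and the matching account as verified.
    if db.mark_verified(guid):
        return "Thanks, your address is verified."
    return "This link is no longer valid."


def handle_rejection(db, guid):
    # Drop the pending registration and offer to remove the address entirely.
    db.remove_pending_registration(guid)
    return "Sorry about the email. Would you like us to remove your address completely?"
```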