Remote activation/deactivation and protecting against going out of business

I'm in charge of an app that uses the internet to transfer data between sites, and some customers are being awkward about paying, so we need a mechanism that lets us cut off service to non-payers. I'd like to protect against their admin people using firewalls to block our checks, but conversely I'd like to make some allowance for our company web site disappearing for some reason and not being accessible.
The scheme I'm imagining is:
The server makes a twice-daily check of a web page using a URL like:
http://www.ourcompany.com/check.php?myID=GUID&Code=MyCode
This returns a response that contains either nothing of interest, or the GUID and a value, e.g.:
GUID=0
The zero indicates that the server should stop operating. To make it work again, the server checks every 5 minutes for the same information, until the returned value matches what it expects the code it passed in to be transformed into.
This scheme makes sense to me, but the question really is how to protect against blocking. Given we know we must have internet access, how long should we continue to operate without being able to get a response from our web server? Is something like 14 days, after which we just shut it off anyway, the best approach?

The solution I used in the end was pretty much as I suggested. Yes, it can be defeated using the tools outlined in the other answers, but it is better than nothing.
The app checks daily for a control file on a web site; the file is encrypted using public-key encryption. The app decrypts it in memory, and if it finds its GUID, the associated code must match. To disable operation, the code is set to 0 (zero), which always fails the match. When disabled, the app checks every two minutes to allow rapid restoration of service. There is also a manual mechanism to generate a code that will work for a week in case of server trouble.
The app will allow up to 14 days without connecting to the server before it treats this as a deliberate attempt to block it. After 10 days, it shows an error message asking the customer to contact support.
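For illustration, here is a minimal Python sketch of that kind of daily check plus grace period. The endpoint, GUID, code transform and thresholds are all placeholders, not the actual scheme:

    import time
    import urllib.request

    CHECK_URL = "http://www.ourcompany.com/check.php"   # hypothetical endpoint
    MY_GUID = "0123-4567"      # illustrative GUID for this installation
    MY_CODE = "MyCode"         # illustrative code passed to the server
    WARN_DAYS = 10             # start nagging the customer at this point
    GRACE_DAYS = 14            # run at most this long without a successful check

    def expected_response(code):
        # Placeholder for whatever transform of the code the two sides agree on.
        return code[::-1]

    def fetch_control_value():
        # Return the value the server published for our GUID, or None on failure.
        url = "%s?myID=%s&Code=%s" % (CHECK_URL, MY_GUID, MY_CODE)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                for line in resp.read().decode().splitlines():
                    if line.startswith(MY_GUID + "="):
                        return line.split("=", 1)[1]
        except OSError:
            return None
        return None

    def should_keep_running(last_success):
        value = fetch_control_value()
        if value is None:
            days_dark = (time.time() - last_success) / 86400
            if days_dark > WARN_DAYS:
                print("Cannot reach the licensing server - please contact support.")
            return days_dark <= GRACE_DAYS   # shut down after the grace period
        if value == "0":
            return False                     # explicit kill switch
        return value == expected_response(MY_CODE)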

This method is really easy to circumvent: just use a local DNS server to point www.ourcompany.com at the local machine, or use an HTTP proxy. The user can then return whatever response they want to the program.
Assuming the user hasn't circumvented the check, how long you continue to operate without confirmation is a business decision, not a programming decision.

A user can use a tool such as OWASP WebScarab to change values on the fly and subvert your security model. You need to include something more difficult to defeat, such as requiring a secure channel, comparing the server's public key (pinning), and so on.
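For example, pinning the server's certificate (or its public key) makes a man-in-the-middle proxy far less convenient. A rough Python sketch, with a placeholder fingerprint:

    import hashlib
    import socket
    import ssl

    HOST = "www.ourcompany.com"
    PINNED_FINGERPRINT = "ab12..."   # SHA-256 of the expected certificate (placeholder)

    def server_matches_pin(host=HOST, port=443):
        # Connect over TLS and compare the presented certificate's fingerprint.
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        return fingerprint == PINNED_FINGERPRINT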

Related

How to prevent snooping by user of Mac app?

I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices; it is not possible to securely prevent them from proxying or exploring what your client-side app does. Obfuscation might seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can easily observe it, if nothing else then in the network packets (though usually much more easily).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do - effective only against wannabes, I'm afraid - is to first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP, if the server exists, or properly fails if the server never existed. Anything else is a telltale that someone has taken over the network layer, and at that point you could connect to a different server, make different calls, and lament that the server isn't answering properly.
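A rough sketch of that kind of canary check in Python, using a randomly generated hostname under the reserved .invalid TLD that should never resolve; if it does resolve, something on the network layer is answering for names it shouldn't:

    import socket
    import uuid

    def network_layer_looks_tampered():
        # Resolve a hostname that cannot exist; a successful lookup is suspicious.
        bogus_host = "canary-%s.example.invalid" % uuid.uuid4().hex
        try:
            socket.gethostbyname(bogus_host)
        except socket.gaierror:
            return False   # proper failure: DNS behaves as expected
        return True        # something intercepted the lookup and answered anyway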
Strings in memory can be (air quote) protected (end air quote) by having them available only for the shortest time and otherwise stored in a different form - you can have, for example, a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL and the sequence. You can then reconstruct the URL every time you need it, remembering to clear it out of any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
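A minimal sketch of the XOR idea in Python; the URL is obviously a placeholder:

    import os

    SECRET_URL = "https://api.example.com/endpoint"   # placeholder

    # Store only the random pad and the XOR of the URL with the pad.
    _pad = os.urandom(len(SECRET_URL))
    _masked = bytes(b ^ p for b, p in zip(SECRET_URL.encode(), _pad))
    del SECRET_URL   # don't keep the plaintext lying around

    def reveal_url():
        # Reconstruct the URL only for the moment it is needed.
        return bytes(b ^ p for b, p in zip(_masked, _pad)).decode()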
Files, of course, can be encrypted with any one of several schemes - the files residing on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data were actually just gzipped in the clear, there was no password whatsoever. The guys that tried to decode the file thought it was a plain encrypted Zip file with the extension changed, wasted a significant amount of time with several Zip cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would be in outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: all outbound API calls could be stored in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.

In iOS what is the standard way to keep unknown Apps off your server?

The App could have a private key hard-coded into it, and my server could have the matching public key, and the App could sign everything. But then a hacker could extract the private key from the object code and write a malicious App that signs everything with that same key. Then that App could use my server.
The App could do a key exchange with my server but how does the server know the App is authentic when it does the key exchange?
In essence you cannot know.
The reason is simple: since anybody can get to the client and learn everything the client is and knows by reverse engineering it (and they have all they need to perform that), there is nothing to prevent them from answering any challenge you might set exactly as the real app would.
You can make it harder on fake apps though. But they could (if done right) give the answer anyway.
E.g. how to make it harder:
The server sends a challenge to the client app to calculate, e.g., the CRC32 (or MD5, SHA-1, SHA-256, ... it doesn't matter much which) of the app itself from a given start offset to a given end offset. If you make those start and end points fully random for every challenge you send, you essentially force the fake app to have the real app's compiled code in full ... So you place on it the burden of having the real app (not forcing it to actually be running the unmodified code, just to have the actual unmodified code).
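Both halves of such a challenge might look roughly like this in Python, assuming the server keeps a pristine copy of the shipped binary (paths and names are illustrative):

    import hashlib
    import random

    def make_challenge(binary_size):
        # Server side: pick a random byte range of the shipped binary.
        start = random.randrange(0, binary_size - 1)
        end = random.randrange(start + 1, binary_size)
        return start, end

    def answer_challenge(binary_path, start, end):
        # Client side: hash the requested slice of its own executable.
        with open(binary_path, "rb") as f:
            f.seek(start)
            chunk = f.read(end - start)
        return hashlib.sha256(chunk).hexdigest()

    def verify(server_copy_path, start, end, reply):
        # Server side: compare against the same slice of its pristine copy.
        return answer_challenge(server_copy_path, start, end) == reply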
Take care that the server side will need to allow for multiple versions of the client, etc., or you can't upgrade the clients anymore.
Anybody distributing a fake app would hence be forced to violate your copyright on the real app (and your lawyers would have an easier case, maybe).
Alternatives:
To pick an alternative, you need to figure out why it's so important that it be your client making the requests.
If the client contains secrets: remove them, make the client display-only, and use a 3-tier model where you only let the user run the display part and keep all secrets on your servers.
If you get your revenue from selling an app, give it away for free and sell accounts on your server. Use authentication to do that: you can authenticate users (login and password, real two-factor authentication, ...); you can also disallow dramatic changes in geo-location over a short time, disallow simultaneous logins, ...
But the price is the hoops for the user to jump through. And they might use other clients nonetheless.
If you allow logic (as used in online games, for example) to harness the power of the user's CPU, you can still keep oversight at the logic level on the server: e.g. if it takes 5 minutes at the very minimum to complete a task in the real client, and the client reports back as "achieved" before those 5 minutes are up, you have a cheater ... Similarly, make sure all important assets are only handed out by the server; don't trust the clients ...
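A minimal sketch of that kind of server-side plausibility check, with made-up task names and minimum durations:

    import time

    MIN_TASK_SECONDS = {"gather_wood": 300, "build_hut": 900}   # illustrative values
    task_started_at = {}   # (player, task) -> start time recorded by the server

    def start_task(player, task):
        task_started_at[(player, task)] = time.time()

    def report_completed(player, task):
        # Accept the result only if enough real time has passed.
        started = task_started_at.get((player, task))
        if started is None:
            return False   # the server never saw this task start: reject
        return time.time() - started >= MIN_TASK_SECONDS.get(task, 0)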

How to defend excessive login requests?

Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How do I write a program or script that can make excessive requests to our website? I need it because it will help me test our web application.
How do we restrict a user who has made some unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she has made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activities? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (who make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that are made invisible to the user through CSS.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in that field, you immediately know it's not a legit user and can ignore the request.
This is not a complete security solution but one trick that you can add to the arsenal.
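A honeypot check might look roughly like this (the field name and the framework-agnostic form dict are assumptions):

    # The login form includes an extra field, e.g. <input name="website">,
    # hidden from humans via CSS. Naive bots tend to fill it in anyway.

    def looks_like_bot(form_data):
        # Any value in the hidden honeypot field means it's not a real user.
        return bool(form_data.get("website", "").strip())

    def handle_login(form_data):
        if looks_like_bot(form_data):
            return "ignored"          # silently drop the request
        return "process login"        # continue with normal authentication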
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on the platform, this can be simple; it also depends on your language of choice for task #1).
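For the testing side, a few lines of Python are enough to hammer a login endpoint; this sketch uses the third-party requests library, and the URL and credentials are placeholders:

    import requests   # pip install requests

    LOGIN_URL = "https://example.com/login"   # placeholder

    def hammer_login(attempts=50):
        session = requests.Session()
        for i in range(attempts):
            resp = session.post(LOGIN_URL,
                                data={"username": "testuser", "password": "wrong-%d" % i})
            print(i, resp.status_code)

    if __name__ == "__main__":
        hammer_login()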
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there isn't one, it should be simple enough to create a solution.
If session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue or not depends largely on your business needs).
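A rough sketch of that thresholding idea, keyed on whatever identifier you settle on (session ID, or IP plus UserAgent); the window and limit are arbitrary:

    import time
    from collections import defaultdict

    FAILURE_LIMIT = 5          # failures allowed per window
    WINDOW_SECONDS = 15 * 60   # sliding window length

    _failures = defaultdict(list)

    def record_failure(key):
        # key could be the session ID, or something like "%s|%s" % (ip, user_agent)
        _failures[key].append(time.time())

    def is_blocked(key):
        cutoff = time.time() - WINDOW_SECONDS
        recent = [t for t in _failures[key] if t > cutoff]
        _failures[key] = recent
        return len(recent) >= FAILURE_LIMIT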
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with UserAgent strings.
RoR provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) would be enough and not too much data to store. Unless some hacker has access to a great many IP addresses... but in such situations you'd need more serious security measures, I guess.

cheat prevention for browser based xmlhttp/js/perl/php game

Let's say that in a browser-based game, completing some action increases a player's score - for simplicity, say someone clicks on a link that increases their score by 100, and that link has a URL like increase_score.pl?amount=100. What is there to prevent someone from simply sending requests to the web server to execute this command:
over and over again, without actually doing the task of clicking on the link, and
sending a false request to the server where amount is set to something ridiculous like 100000?
I am aware of checking HTTP_REFERER; however, I know people can get around that (not sure how exactly), and other than some bounds checking for the second option I'm kind of stumped. Has anyone ever experienced similar problems? Solutions?
Nothing can stop them from doing this if you implement your game how you propose.
You need to implement game logic on the server and assign points only once the server validates the action.
For example: on SO when someone votes your question up, this isn't sent as a command to increase your reputation. The web-app just says to the server user X voted question Y up. The server then validates the data and assigns the points if everything checks out. (Not to say SO is a game, but the logic required is similar.)
Short version: you can't. Every piece of data you get from the client (browser) can be manually spoofed by somebody who knows what they're doing.
You need to fundamentally re-think how the application is structured. You need to code the server side of the app in such a way that it treats every piece of data coming from the client as a pack of filthy filthy lies until it can prove to itself that the data is, in fact, plausible. You need to avoid giving the server a mindset of "If the client tells me to do this, clearly it was allowed to tell me to do this."
WRONG WAY:
Client: Player Steve says to give Player Steve one gazillion points.
Server: Okay!
RIGHT WAY:
Client: Player Steve says to give Player Steve one gazillion points.
Server: Well, let me first check to see if Player Steve is, at this moment in time, allowed to give himself one gazillion points ... ah. He isn't. Please display this "Go Fsck Yourself, Cheater" message to Player Steve.
As for telling who's logged-in, that's a simple matter of handing the client a cookie with a damn-near-impossible-to-guess value that you keep track of on the server -- but I'll assume you know how to deal with session management. :-) (And if you don't, Google awaits.)
The logic of the game (application) should be based on the rule to not trust anything that comes from the user.
HTTP_REFERER can be spoofed with any web client.
Token with cookie/session.
You could make the link dynamic, with a hash at the end of it that changes over time. Verify that the hash is correct for the current time period.
This would vary in complexity depending on how often you allowed clicks.
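One way to implement that is to derive the hash from a server-side secret and the current time window, e.g. with an HMAC. A sketch, with a placeholder secret and window length:

    import hashlib
    import hmac
    import time

    SECRET_KEY = b"server-side-secret"   # placeholder
    WINDOW_SECONDS = 60                  # how long a generated link stays valid

    def link_token(user_id, amount, when=None):
        # Hash that changes every WINDOW_SECONDS and is bound to the user and amount.
        window = int((when if when is not None else time.time()) // WINDOW_SECONDS)
        msg = ("%s:%s:%s" % (user_id, amount, window)).encode()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

    def token_is_valid(user_id, amount, token):
        now = time.time()
        # Accept the current window and the previous one to tolerate clock edges.
        for offset in (0, -WINDOW_SECONDS):
            if hmac.compare_digest(link_token(user_id, amount, now + offset), token):
                return True
        return False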
A few things to note here.
First, server requests for something like this should be POST, not GET. GET requests are supposed to be safe and idempotent; using them for state-changing actions is actually a violation of the HTTP specification.
Secondly, what you're looking at here is the classic Client Trust Problem. You have to trust the client to send scores or other in-game information to the server, but you don't want the client to send illegitimate data. Preventing disallowed actions is easy - but preventing foul-play data in an allowed action is much more problematic.
Ben S makes a great point about how you design the communication protocols between a client and a server like this. Allowing point values to be sent as trusted data is generally going to be a bad idea. It's preferable to indicate that an action took place and let the server figure out how many points should be assigned, if any. But sometimes you can't get around that. Consider the scenario of a racing game: the client has to send the user's time, and it can't be abstracted away into some other call like "completedLevelFour". So what do you do then?
The token approach that Ahmet and Dean suggest is sound - but it's not perfect. Firstly, the token still has to be transmitted to the client, which means it's discoverable by the potential attacker and could be used maliciously. Also, what if your game API needs to be stateless? That means session-based token authentication is out. And now you get into the deep, dark bowels of the Client Trust Problem.
There's very little you can do to make it 100% foolproof. But you can make it very inconvenient to cheat. Consider Facebook's security model (every API request is signed). This is pretty good and requires the attacker to actually dig into your client-side code before they can figure out how to spoof a request.
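The general shape of such request signing (not Facebook's exact scheme) is to sign the sorted parameters with a shared secret. A sketch:

    import hashlib
    import hmac

    APP_SECRET = b"per-app-secret"   # placeholder shared secret

    def sign_request(params):
        # Append a signature computed over the sorted parameters.
        canonical = "&".join("%s=%s" % (k, params[k]) for k in sorted(params))
        sig = hmac.new(APP_SECRET, canonical.encode(), hashlib.sha256).hexdigest()
        return dict(params, sig=sig)

    def verify_request(params):
        # Server side: recompute the signature and compare in constant time.
        received = params.get("sig", "")
        unsigned = {k: v for k, v in params.items() if k != "sig"}
        return hmac.compare_digest(received, sign_request(unsigned)["sig"])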
Another approach is server replay. Like for a racing game, instead of just having a "time" value sent to the server, have checkpoints that also record time and send them all. Establish realistic minimums for each interval and verify on the server that all this data is within the established bounds.
Good luck!
It sounds like one component of your game would need request throttling. Basically, you keep track of how fast a particular client is accessing your site and you start to slow down your responses to that client when their rate exceeds what you think is reasonable. There are various levels of that, starting at the low-level IP filters up to something you handle in the web server. For instance, Stackoverflow has a bit in the web application that catches what it thinks are too many edits too close together. It redirects you to a captcha that you need to respond to if you want to continue.
As for the other bits, you should validate all input not just for its form (e.g. it's a number) but also that the value is reasonable (e.g. less than 100, or whatever). If you catch a client doing something funny, remember that. If you catch the same client doing something funny often, you can ban that client.
Expanding on Ahmet's response, every time they load a page, generate a random key. Store the key in the user session. Add the random key to every link, so that the new link to get those 100 points is:
increase_score.pl?amount=100&token=AF32Z90
Each time a link is clicked, check that the token matches the one in the session, then make a new key and store it in the session. One new random key for every request they make.
If they give you the wrong key, they're trying to reload a page.
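A compact sketch of that one-time-token flow, with the session represented as a plain dict:

    import secrets

    def issue_token(session):
        # Call when rendering the page; embed the token in every scoring link.
        token = secrets.token_urlsafe(16)
        session["score_token"] = token
        return token

    def consume_token(session, presented):
        # Call when a scoring link is clicked; each token works exactly once.
        expected = session.pop("score_token", None)
        ok = expected is not None and secrets.compare_digest(expected, presented)
        session["score_token"] = secrets.token_urlsafe(16)   # rotate for the next request
        return ok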
I would suggest making a URL specific to each action. Something along the lines of:
/score/link_88_clicked/
/score/link_69_clicked/
/score/link_42_clicked/
Each of these links can do two things:
Mark in the session that the link has been clicked, so that it won't count that link again.
Add to their score.
If you want the game to only run on your server, you can also check, on the receiving side, where the request was sent from and ignore anything not coming from your domain. It will be a real pain to tamper with your code if it has to run from your dedicated domain to submit scores.
This also blocks out most of CheatEngine's tricks.

Best way to limit (and record) login attempts

Obviously some sort of mechanism for limiting login attempts is a security requisite. While I like the concept of an exponentially increasing time between attempts, what I'm not sure of is how to store the information. I'm also interested in alternative solutions, preferably not involving captchas.
I'm guessing a cookie wouldn't work due to users blocking cookies or clearing them automatically, but would sessions work? Or does it have to be stored in a database? Not knowing what methods can be or are being used, I simply don't know what's practical.
Use a couple of columns in your users table, 'failed_login_attempts' and 'failed_login_time'. The first one increments per failed login and resets on successful login. The second one allows you to compare the current time with the time of the last failure.
Your code can use this data in the db to determine how long it waits to lock out users, the time between allowed logins, etc.
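As a sketch, the lockout decision then reduces to a small check against those two columns; the exponential backoff shown here is just an illustrative policy:

    from datetime import datetime, timedelta

    MAX_FREE_FAILURES = 3                 # failures before any delay kicks in
    BASE_DELAY = timedelta(seconds=30)    # first lockout; doubles with each extra failure

    def login_allowed(failed_login_attempts, failed_login_time):
        # Compare the current time with the last failure recorded in the table.
        if failed_login_attempts < MAX_FREE_FAILURES or failed_login_time is None:
            return True
        extra = failed_login_attempts - MAX_FREE_FAILURES
        wait = BASE_DELAY * (2 ** extra)  # exponentially increasing wait
        return datetime.utcnow() - failed_login_time >= wait

    def on_failed_login(failed_login_attempts):
        # Values to write back to failed_login_attempts / failed_login_time.
        return failed_login_attempts + 1, datetime.utcnow()

    def on_successful_login():
        return 0, None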
Assuming Google has done the necessary usability testing (not an unfair assumption) and decided to use captchas, I'd suggest going along with them.
Increasing timeouts are frustrating when I'm a genuine user and have forgotten my password (with so many websites and their associated passwords, that happens a lot, especially to me).
Storing attempts in the database is the best solution IMHO since it gives you the auditing records of the security breach attempts. Depending on your application this may or may not be a legal requirement.
By recording all bad attempts you can also gather higher-level information, such as whether the requests are coming from one IP address (i.e. someone/something is attempting a brute force attack), so you can block the IP address. This can be VERY useful information.
Once you have determined a threshold, why not force them to request an email be sent to their email address (i.e. similar to 'I have forgotten my password'), or go for the CAPTCHA approach.
Answers in this post prioritize database-centered solutions because they provide a structure of records that makes auditing and lockout logic convenient.
While the answers here address guessing attacks on individual users, a major concern with this approach is that it leaves the system open to denial-of-service attacks. Any and every request from the world should not trigger database work.
An alternative (or additional) layer of security should be implemented earlier in the request/response cycle to protect the application and database from performing lockout operations that can be expensive and are unnecessary.
Express-Brute is an excellent example that utilizes Redis caching to filter out malicious requests while allowing honest ones.
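In the same spirit (though not Express-Brute itself), here is a minimal Redis-backed counter in Python using the redis-py client; it assumes a local Redis instance, and the key prefix and limits are arbitrary:

    import redis   # pip install redis

    r = redis.Redis()          # assumes Redis running on localhost
    LIMIT = 10                 # attempts allowed per window
    WINDOW_SECONDS = 60

    def allow_attempt(ip):
        # Increment a per-IP counter that expires on its own; reject above LIMIT.
        key = "login-attempts:%s" % ip
        count = r.incr(key)
        if count == 1:
            r.expire(key, WINDOW_SECONDS)   # start the window on the first hit
        return count <= LIMIT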
You know which user ID is being hit; keep a count, and when it reaches a threshold value simply stop accepting anything for that user. But that means you store an extra data value for every user.
I like the concept of an exponentially increasing time between attempts, [...]
Instead of using exponentially increasing time, you could actually have a randomized lag between successive attempts.
Maybe if you explain what technology you are using people here will be able to help with more specific examples.
A lockout policy is all well and good, but there is a balance to strike.
One consideration is the construction of usernames - are they guessable? Can they be enumerated at all?
I was on an external app pen test for a dotcom with an employee portal that served Outlook Web Access, intranet services, and certain apps. It was easy to enumerate users (the exec/management team on the web site itself, and through the likes of Google, Facebook, LinkedIn, etc.). Once you got the format of the username logon (first name then surname entered as a single string), I had the capability to shut hundreds of users out due to their three-strikes-and-out policy.
Store the information server-side. This would allow you to also defend against distributed attacks (coming from multiple machines).
You may want to block the login for some time - say, 10 minutes after 3 failed attempts, for example. Exponentially increasing time sounds good to me. And yes, store the information on the server side, in the session or in a database. A database is better. No cookie business, as cookies are easy for the user to manipulate.
You may also want to record such attempts against the client IP address, as it is quite possible that a valid user might get a blocked message while someone else is trying to guess that user's password with failed attempts.
