web development e-mail protection - security

Currently, websites give users generic messages on invalid login attempts, such as:
The username or password you entered is not valid
to protect e-mail addresses from spammers. However, I read somewhere that this is not enough, because sign-up forms will warn the user if an e-mail address is already taken. Spammers can therefore find valid e-mails by filling in registration forms rather than login forms.
The question: how can we prevent this? Is there a good way of handling this situation?

One quite effective way to deter brute forcing is to add an increasing delay before responding.
A fairly good approach is to add a 1-second delay before showing the error that implies the email is taken, then double it to 2 seconds, then 4, then 8, and so on. You could cap this at 16 seconds, or block the IP for 10 minutes after that point, for instance.
This way, real users only see a 1, 2, or 4 second delay (not much), but brute forcing becomes too laborious.
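As a sketch (in PHP, to match the rest of this page), the doubling-with-a-cap schedule could be computed like this; the 16-second cap is just the example value above:

```php
// Sketch of the exponential back-off described above. Assumes the caller
// tracks the number of failed attempts per client; the cap of 16 seconds
// is the example value from the answer, not a fixed rule.
function backoff_delay(int $attempts, int $cap = 16): int
{
    if ($attempts < 1) {
        return 0;
    }
    // 1s, 2s, 4s, 8s, ... doubling per failed attempt, capped.
    return (int) min($cap, 2 ** ($attempts - 1));
}

// Usage: sleep(backoff_delay($attempts)) before answering the request.
```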

Related

Request a change to a user's restricted attribute securely

I'm creating a web application where users earn points for using it (from time to time).
What is the best way to change the amount of points the user has in a safe way?
My first solution was to use a POST request with the data in the body, but that would be easily circumvented: the user could open the console and replay the request endlessly, earning infinite points. And if I created a token, the user could copy that same token and reuse it until it was invalidated.
My second solution was to create a websocket so that while the user maintains a connection he earns X points per X time, but that could also be circumvented by faking the connection from the console.
What to do in this situation?
Use a POST request, and validate on the back-end that the request is authentic (based on the criteria under which it is deemed reasonable to award the points).
For example, if a user can only earn 1 point every 30 minutes, store the time the user was last awarded points in a database, and then ensure that 30 minutes have passed since that point in time (again, server side).
You could also ensure that the user isn't getting more points than they should by checking the value of the points being awarded, in a similar manner. Want one point at a time? Check that their target score equals their existing score plus one.
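A minimal server-side sketch of those two checks (function and parameter names are made up for illustration; timestamps are Unix seconds):

```php
// Server-side validation of a point award. $lastAwardedAt comes from the
// database (null if the user has never been awarded); the 30-minute
// cooldown is the example value from the answer above.
function may_award_point(?int $lastAwardedAt, int $currentScore, int $targetScore, int $now): bool
{
    $cooldown = 30 * 60; // one point every 30 minutes
    if ($lastAwardedAt !== null && $now - $lastAwardedAt < $cooldown) {
        return false; // too soon since the last award
    }
    // Only ever allow exactly one point at a time.
    return $targetScore === $currentScore + 1;
}
```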

Why do some websites make us wait before redirection?

While browsing I found many websites say:
wait for 5 seconds and download will begin; or click this link to download now
or
Wait for 5 seconds, we will redirect to specific website; if you are on fire click this link
Why do websites make us wait for 5 seconds? Are they doing something in that time?
Sometimes developers do not execute all the code in the same request; they put the request into a queue (e.g. RabbitMQ) so that other servers can handle it. This increases system performance. When the queue holds many messages it takes some time, but it is fast enough that 5 seconds is more than enough to handle it. Does that make sense?
Generally there are two reasons (from my experience):
You got to the page via a link, and that page either doesn't exist anymore or was moved. If it doesn't exist anymore, you will sometimes get redirected higher in the navigation stack (Apple does this with their documentation, sending users to a pre-filled search of related/similar pages, if you're lucky). If it has been moved due to a change in the IA of the site, it may be in a "sunsetting" period wherein the user is moved from the old URL to the new one, to slow down and stop further propagation of the old link. After the sunsetting period, that redirect page will be dropped in favour of either a 404 page or the higher-level search mentioned above.
Depending on the type of form you are filling out, there may be a process which must run without user interaction; however, these rarely offer the option to click a link.
Of course, with the latter part of the first reason, there must also be a process in place to stop this sort of thing and take the page down altogether: either a date, or a rule like "when fewer than X users land on this page in a month, we can take it down". So a well-intentioned change-management measure may sometimes never get fully resolved to the new way of things.
Hope that helps.
https://ux.stackexchange.com/questions/37731/why-do-websites-use-you-are-being-redirected-pages

Categorizing Gmail messages based on the time user spent reading the message

I am looking for the following feature in Gmail.
For each message I open, it tracks the time I spent reading the message when it is feasible to do so. For example, if I open message 1 and then move to message 2, by clicking a button within 2 seconds, it notes that the time spent on message 1 is less than 2 seconds.
Gmail automatically labels the messages on which the User spends less than some configurable amount of time (say 2 seconds) and assigns them a label, say "LowAttentionSpan". This way the user can periodically look for messages with this label and take actions like unsubscribing from a list to minimize the amount of time spent on the Inbox.
Is such a feature already available now or can it be developed using Gmail API?
I believe this feature is not yet available in Gmail. Referencing the documentation, there are no labels similar to what you are looking for, nor can you customize Gmail to create such labels.
As gerardnimo said, there is currently no such feature available for Gmail. An approximate solution using the Gmail API comes to mind though:
Subscribe to push notifications and issue a watch on the UNREAD-label.
Every time you get a push notification related to a certain user, it will mean that the user just started reading a mail (or marked an old mail as UNREAD). Check the difference in time since last time you got a notification for the same user. If the difference was less than LowAttentionSpan seconds, you could add a custom label to it.
This simple solution has some caveats though.
If the user marks an old message as unread, it might cause some unwanted behavior.
Also, if the user reads only one mail and comes back, say, three hours later to read another one, the solution above will interpret that as the user having read the first mail for three hours, which will not be the case. In other words, it will only work when the user reads multiple new mails in succession.
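As a rough sketch of that heuristic (plain PHP, no actual Gmail API calls): given the times at which the push notifications arrived for one user, flag each message whose gap to the next opening was under the threshold. The 2-second value and the "LowAttentionSpan" label are the examples from the question.

```php
// Given the timestamps (in seconds) at which UNREAD-removal notifications
// arrived for one user, return the indexes of messages that would get the
// hypothetical "LowAttentionSpan" label.
function low_attention_messages(array $openTimes, int $threshold = 2): array
{
    $flagged = [];
    for ($i = 1; $i < count($openTimes); $i++) {
        // The gap between opening message $i-1 and message $i approximates
        // the reading time of message $i-1.
        if ($openTimes[$i] - $openTimes[$i - 1] < $threshold) {
            $flagged[] = $i - 1;
        }
    }
    return $flagged;
}
```

Note the last message opened in a session can never be flagged, which matches the caveat above about the solution only working for mails read in succession.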

Is my credit card page secure? Safeguard inquiry

https: xyz dot com/authenticate/cc.php
This is the page 3 of my registration....where my members enter their credit card. This is tied into my authorize.net account.
One thing I noticed recently is that this page can be accessed on its own just by typing in the URL; there is no required pre-URL that leads up to it. This seems unsafe, but regardless, if someone wanted to abuse it they could just go through the registration process and keep submitting incorrect CC numbers, costing me money, right?
I don't remember if we put an IP limit on it, or again whether that is even a 100% safeguard.
I am pretty sure we did something where, if they enter Mastercard with their number (temporarily stored) and it comes back as invalid, it will match that and not allow them to keep entering the wrong 16-digit number.
Should I just leave the page accessible without specific pages gating access and worry about IP limits instead? Couldn't someone just keep switching their IP and submitting this page with incorrect or outright fake CCs?
What is the proper way to secure this page, considering I could be the one at risk of losing my merchant account?
Thank you in advance
It is strange that you would allow direct access to the 3rd step of the process; where is all the other data like the user name, address, ...?
These are some ideas of what I would do; a completely secure system (which might not exist) would be much more complex than my simple steps.
Note that you probably want to first allow the users to register with some information by which you can know who they are (verified email, verified phone number, etc.). Then, and only then, you do the credit card step, and if they continuously input wrong or invalid numbers, you can do something else, like black-listing them, calling them, etc.
Note 2: I spent a long time writing this; the more I read it and think about it, the worse it seems to be, but as it is already written I'll post it anyway.
Some notes before begin:
There is only one address, for example /authenticate/auth.php
The process has a "state" and depending on this, it will show/do different things.
Different states have extra files which are included depending on the state.
After the process starts, a session is created and linked with the user's IP, the process state, and any other identifiable information about the user, for example the 'User-Agent'. This data is saved on the server.
It seems you would like to show a different state using different pages, so it will be like that. But I would actually do it in a single page using AJAX calls.
There is NO black-listing of suspicious IP addresses (too many normal, buggy, or completely wrong requests). It could be added if desired, but the complexity increases. You might or might not want to do this; maybe a captcha would be enough.
There is NO captcha, which might help in some cases, but the session handling I describe here might need to change.
There is NO email verification, which you probably want to do.
Let's say that the process states are ask_name, ask_address, ask_cc, etc...
So, when there is any request to the auth page (/authenticate/auth.php), this is what we could do:
1 If 'Referer' doesn't come from one of the possible process starters (main page, etc.) or this page (/authenticate/auth.php), we redirect to the main page. end.
This first step avoids people typing the address directly or coming from untrusted pages.
2 If there is no session information for this request:
2.1 If there is a 'user_name' parameter AND 'Referer' is this page (/authenticate/auth.php)
2.1.1 If that user name is already registered, show(include, not redirect) 'ask_name.php' with the extra notice "User already registered". end.
2.1.2 Create a session for this user; link it with its IP, User Agent, and other data.
2.1.3 Set the state to ask_address (the second) and show 'ask_address.php'. end.
2.2 Else (no parameter or 'Referer' was wrong)
2.2.1 Show 'ask_name.php'. end.
This second step either shows the first screen (ask_name) or the second (ask_address); it delays the creation of the session until we are sure the user wants to do something real.
It has a couple of problems:
Some user (or program) continuously sends requests without a session but with 'user_name', forcing you to always check whether the user name is valid or not, which may slow things down. This could be avoided using several different techniques, for example a captcha, or black-listing some IPs for some time.
It could be possible that one user starts the process with a 'user_name' which doesn't exist, but he is slow and takes some time to finish the process; while this is happening, a second user begins and finishes the process with the same 'user_name', so when the first user is about to finish, it will fail at the last step. This could be avoided with several different techniques, which are left as an exercise.
3 If there is session information for this request (this is the else of the previous step)
3.1 If referer is not this page OR the IP stored in the server is not the same as the current request OR some other data like User Agent is different OR the state is invalid (not in the list of states), remove the session id from the request (so the browser deletes it) and show 'ask_name.php' with the extra notice "Looks like your device changed!!!". end.
3.2 'include' the page for the state:
3.2.1 If the parameters are passed and are correct, set the state to the next state and show the page for it. If it is the last state, do something appropriate for it. end.
3.2.2 Otherwise, show the same page for this state with an error message so the user can retry. end.
This last step tries to ensure that the request is not coming from a different computer and/or using stolen session keys.
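The consistency check in step 3.1 could be sketched like this (field names are illustrative; the stored values come from the server-side session created in step 2.1.2):

```php
// Returns true only when the request still matches the data the session
// was created with, and the stored state is one of the known states
// (ask_name, ask_address, ask_cc, ...).
function session_is_consistent(array $stored, array $request, array $validStates): bool
{
    return $stored['ip'] === $request['ip']
        && $stored['user_agent'] === $request['user_agent']
        && $request['referer'] === '/authenticate/auth.php'
        && in_array($stored['state'], $validStates, true);
}

// If this returns false: remove the session id from the response (so the
// browser deletes it) and show ask_name.php with the "Looks like your
// device changed!!!" notice, as described in step 3.1.
```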

How can I throttle user login attempts in PHP

I was just reading this post The definitive guide to form-based website authentication on Preventing Rapid-Fire Login Attempts.
Best practice #1: A short time delay that increases with the number of failed attempts, like:
1 failed attempt = no delay
2 failed attempts = 2 sec delay
3 failed attempts = 4 sec delay
4 failed attempts = 8 sec delay
5 failed attempts = 16 sec delay
etc.
DoS attacking this scheme would be very impractical, but on the other hand, potentially devastating, since the delay increases exponentially.
I am curious how I could implement something like this for my login system in PHP?
You cannot simply prevent DoS attacks by chaining throttling down to a single IP or username. You can't even really prevent rapid-fire login attempts using this method.
Why? Because the attack can span multiple IPs and user accounts for the sake of bypassing your throttling attempts.
I have seen it posted elsewhere that ideally you should track all failed login attempts across the site and associate them with a timestamp, perhaps:
CREATE TABLE failed_logins (
id INT(11) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
username VARCHAR(16) NOT NULL,
ip_address INT(11) UNSIGNED NOT NULL,
attempted DATETIME NOT NULL,
INDEX `attempted_idx` (`attempted`)
) engine=InnoDB charset=UTF8;
A quick note on the ip_address field: You can store the data and retrieve the data, respectively, with INET_ATON() and INET_NTOA() which essentially equate to converting an ip address to and from an unsigned integer.
# example of insertion
INSERT INTO failed_logins SET username = 'example', ip_address = INET_ATON('192.168.0.1'), attempted = CURRENT_TIMESTAMP;
# example of selection
SELECT id, username, INET_NTOA(ip_address) AS ip_address, attempted FROM failed_logins;
Decide on certain delay thresholds based on the overall number of failed logins in a given amount of time (15 minutes in this example). You should base this on statistical data pulled from your failed_logins table as it will change over time based on the number of users and how many of them can recall (and type) their password.
> 10 failed attempts = 1 second
> 20 failed attempts = 2 seconds
> 30 failed attempts = reCaptcha
Query the table on every failed login attempt to find the number of failed logins for a given period of time, say 15 minutes:
SELECT COUNT(1) AS failed FROM failed_logins WHERE attempted > DATE_SUB(NOW(), INTERVAL 15 minute);
If the number of attempts over the given period of time is over your limit, either enforce throttling or force all users to use a captcha (i.e. reCaptcha) until the number of failed attempts over the given time period is less than the threshold.
// array of throttling thresholds (failed attempts => delay in seconds, or captcha)
$throttle = array(10 => 1, 20 => 2, 30 => 'recaptcha');

// retrieve the time of the latest failed login attempt
// (the legacy mysql_* API is kept from the original; use mysqli/PDO on PHP 7+)
$sql = 'SELECT MAX(attempted) AS attempted FROM failed_logins';
$result = mysql_query($sql);
if (mysql_num_rows($result) > 0) {
    $row = mysql_fetch_assoc($result);
    $latest_attempt = (int) date('U', strtotime($row['attempted']));

    // get the number of failed attempts in the last 15 minutes
    $sql = 'SELECT COUNT(1) AS failed FROM failed_logins WHERE attempted > DATE_SUB(NOW(), INTERVAL 15 minute)';
    $result = mysql_query($sql);
    if (mysql_num_rows($result) > 0) {
        $row = mysql_fetch_assoc($result);
        $failed_attempts = (int) $row['failed'];

        // apply the highest threshold that has been exceeded
        krsort($throttle);
        foreach ($throttle as $attempts => $delay) {
            if ($failed_attempts > $attempts) {
                if (is_numeric($delay)) {
                    // seconds the user still has to wait since the last attempt
                    $remaining_delay = ($latest_attempt + $delay) - time();
                    if ($remaining_delay > 0) {
                        echo 'You must wait ' . $remaining_delay . ' seconds before your next login attempt';
                    }
                } else {
                    // code to display recaptcha on login form goes here
                }
                break;
            }
        }
    }
}
Using reCaptcha at a certain threshold would ensure that an attack from multiple fronts would be stopped and normal site users would not experience a significant delay for legitimate failed login attempts.
You have three basic approaches: store session information, store cookie information or store IP information.
If you use session information the end user (attacker) could forcibly invoke new sessions, bypass your tactic, and then login again with no delay. Sessions are pretty simple to implement, simply store the last known login time of the user in a session variable, match it against the current time, and make sure the delay has been long enough.
If you use cookies, the attacker can simply reject the cookies, so all in all this really isn't viable.
If you track IP addresses, you'll need to store login attempts per IP address somehow, preferably in a database. When a user attempts to log on, simply update your recorded list of IPs. You should purge this table at a reasonable interval, dumping IP addresses that haven't been active in some time. The pitfall (there's always a pitfall) is that some users may end up sharing an IP address, and in boundary conditions your delays may affect users inadvertently. Since you're tracking failed logins, and only failed logins, this shouldn't cause too much pain.
The short answer is: Do not do this. You will not protect yourself from brute forcing, you could even make your situation worse.
None of the proposed solutions would work. If you use the IP as any parameter for throttling, the attacker will just span the attack across a huge number of IPs. If you use the session (cookie), the attacker will just drop any cookies. The sum of everything you can think of is that there is absolutely nothing a brute-forcing attacker could not overcome.
There is one thing, though: you could rely only on the username that tried to log in, ignoring all the other parameters, and track how often that user name tried to log in and throttle it. But an attacker wants to harm you, and if he recognizes this, he will just brute force user names as well.
This will result in almost all of your users being throttled to your maximum value when they try to log in. Your website will be useless. Attacker: success.
You could delay the password check in general by around 200 ms; the website user will hardly notice it, but a brute forcer will. (Again, he could span the attack across IPs.) However, none of this will protect you from brute forcing or DDoS, as you cannot do so programmatically.
The only way to do this is using the infrastructure.
You should use bcrypt instead of MD5 or SHA-x to hash your passwords; this will make cracking the passwords a LOT harder if someone steals your database (because I guess you are on a shared or managed host).
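For example, with PHP's built-in password API (PHP 5.5+), which uses bcrypt via PASSWORD_BCRYPT:

```php
// Hash at registration; the salt is generated and embedded automatically.
$hash = password_hash('correct horse battery staple', PASSWORD_BCRYPT);

// Verify at login; re-derives the hash from the stored salt and compares.
var_dump(password_verify('correct horse battery staple', $hash)); // bool(true)
var_dump(password_verify('wrong guess', $hash));                  // bool(false)
```

The cost factor of bcrypt can be raised over time as hardware gets faster, which is exactly what makes a stolen hash file slow to crack.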
Sorry for disappointing you, but all the solutions here have a weakness and there is no way to overcome them inside the back-end logic.
The login process needs to reduce its speed for both successful and unsuccessful logins. The login attempt itself should never take less than about 1 second. If it does, a brute forcer uses the timing to learn that an attempt failed, because success takes less time than failure, and can then evaluate more combinations per second.
The number of simultaneous login attempts per machine needs to be limited by the load balancer. Then you just need to track whether the same user name or password is re-used across multiple login attempts. Humans cannot type faster than about 200 words per minute, so successive or simultaneous login attempts faster than that come from a set of machines. These can be piped to a blacklist safely, as they are not your customers. Blacklist times per host do not need to be greater than about 1 second; this will never inconvenience a human, but it plays havoc with a brute-force attempt, whether serial or parallel.
2 * 10^19 combinations at one combination per second, run in parallel on 4 billion separate IP addresses, will take 158 years to exhaust as a search space. To last one day per user against 4 billion attackers, you need a fully random alphanumeric password at least 9 places long. Consider training users in pass phrases at least 13 places long (1.7 * 10^20 combinations).
This delay will motivate the attacker to steal your password hash file rather than brute force your site. Use approved, named hashing techniques. Banning the entire population of Internet IPs for one second will limit the effect of parallel attacks without a delay a human would notice. Finally, if your system allows more than 1000 failed logon attempts in one second without some automated response to ban the offending systems, then your security plans have bigger problems to work on. Fix that automated response first of all.
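The ~1-second floor on every login response, pass or fail, can be sketched as a small padding helper (the exact minimum is a policy choice, and check_credentials below is hypothetical):

```php
// Pad the login response so success and failure take the same minimum
// time, leaking nothing about the result through response timing.
function pad_login_duration(float $startedAt, float $minSeconds = 1.0): void
{
    $elapsed = microtime(true) - $startedAt;
    if ($elapsed < $minSeconds) {
        usleep((int) round(($minSeconds - $elapsed) * 1000000));
    }
}

// Usage:
// $t0 = microtime(true);
// $ok = check_credentials($user, $pass); // hypothetical
// pad_login_duration($t0);               // always >= 1 second total
```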
Store fail attempts in the database by IP. (Since you have a login system, I assume you know well how to do this.)
Obviously, sessions is a tempting method, but someone really dedicated can quite easily realize that they can simply delete their session cookie on failed attempts in order to circumvent the throttle entirely.
On attempt to log in, fetch how many recent (say, last 15 minutes) login attempts there were, and the time of the latest attempt.
$failed_attempts = 3; // for example
$latest_attempt = 1263874972; // again, for example
$delay_in_seconds = pow(2, $failed_attempts); // that's 2 to the $failed_attempts power
$remaining_delay = ($latest_attempt + $delay_in_seconds) - time();
if ($remaining_delay > 0) {
    echo "Wait $remaining_delay more seconds, silly!";
}
session_start();
$_SESSION['hit'] += 1; // Only Increase on Failed Attempts
$delays = array(1=>0, 2=>2, 3=>4, 4=>8, 5=>16); // Array of # of Attempts => Secs
sleep($delays[$_SESSION['hit']]); // Sleep for that Duration.
or, as suggested by Cyro (note that `^` is XOR in PHP, not exponentiation, so use pow()):
sleep(pow(2, intval($_SESSION['hit']) - 1));
It's a bit rough, but the basic components are there. If you refresh this page, each time you refresh the delay will get longer.
You could also keep the counts in a database, where you check the number of failed attempts by IP. By using it based on IP and keeping the data on your side, you prevent the user from being able to clear their cookies to stop the delay.
Basically, the beginning code would be:
$count = get_attempts(); // get the number of attempts
sleep(pow(2, intval($count) - 1)); // again, use pow() rather than ^

function get_attempts()
{
    // login_attempts is a hypothetical table name; substitute your own
    $result = mysql_query("SELECT Hits FROM login_attempts WHERE IP = '" . mysql_real_escape_string($_SERVER['REMOTE_ADDR']) . "'");
    if (mysql_num_rows($result) > 0) {
        $row = mysql_fetch_assoc($result);
        return $row['Hits'];
    }
    return 0;
}
IMHO, defense against DOS attacks is better dealt with at the web server level (or maybe even in the network hardware), not in your PHP code.
You can use sessions. Anytime the user fails a login, you increase the value storing the number of attempts. You can figure the required delay from the number of attempts, or you can set the actual time the user is allowed to try again in the session as well.
A more reliable method would be to store the attempts and the next allowed try time in the database for that particular IP address.
Cookies or session-based methods are of course useless in this case. The application has to check the IP address or timestamps (or both) of previous login attempts.
An IP check can be bypassed if the attacker has more than one IP to launch requests from, and can be troublesome if multiple users connect to your server from the same IP. In the latter case, someone failing login several times would prevent everyone who shares that IP from logging in with that username for a certain period of time.
A timestamp check has the same problem as above: everyone can prevent everyone else from logging in a particular account just by trying multiple times. Using a captcha instead of a long wait for the last attempt is probably a good workaround.
The only extra thing the login system should prevent is race conditions in the attempt-checking function. For example, consider the following pseudocode:
$time = get_latest_attempt_timestamp($username);
$attempts = get_latest_attempt_number($username);
if (is_valid_request($time, $attempts)) {
do_login($username, $password);
} else {
increment_attempt_number($username);
display_error($attempts);
}
What happens if an attacker sends simultaneous requests to the login page? Probably all the requests run at the same priority, and chances are that no request reaches the increment_attempt_number instruction before the others are past the 2nd line. So every request gets the same $time and $attempts values and is executed. Preventing this kind of security issue can be difficult in complex applications and involves locking and unlocking some tables/rows of the database, which of course slows the application down.
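One common fix, sketched here for MySQL/InnoDB with a hypothetical per-user counter table, is to read the counter under a row lock so that simultaneous logins for the same username serialize instead of all reading the same $attempts value:

```sql
START TRANSACTION;
-- the FOR UPDATE lock blocks other transactions on this row until COMMIT
SELECT attempts, last_attempt
  FROM user_login_counters
 WHERE username = 'example'
   FOR UPDATE;
-- decide in the application whether the attempt is allowed, then:
UPDATE user_login_counters
   SET attempts = attempts + 1, last_attempt = NOW()
 WHERE username = 'example';
COMMIT;
```

Each concurrent request then sees the counter only after the previous request has committed its increment, at the cost of the slowdown mentioned above.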
As per discussion above, sessions, cookies and IP addresses are not effective - all can be manipulated by the attacker.
If you want to prevent brute force attacks then the only practical solution is to base the number of attempts on the username provided, however note that this allows the attacker to DOS the site by blocking valid users from logging in.
e.g.
$valid = check_auth($_POST['USERNAME'], $_POST['PASSWD']);
$delay = get_delay($_POST['USERNAME'], $valid);
if (!$valid) {
    header("Location: login.php");
    exit;
}
...
function get_delay($username, $authenticated)
{
    $loginfile = SOME_BASE_DIR . md5($username);
    if (@filemtime($loginfile) < time() - 86400) {
        // last attempt was never or over a day ago
        return 0;
    }
    $attempts = (integer) file_get_contents($loginfile);
    $delay = $attempts ? pow(2, $attempts) : 0;
    $next_value = $authenticated ? 0 : $attempts + 1;
    file_put_contents($loginfile, $next_value);
    sleep($delay); // NB this is done regardless of whether the passwd is valid
    // you might want to put in your own garbage collection here
    return $delay;
}
Note that, as written, this procedure leaks security information: someone attacking the system can see when a user logs in successfully (the response time for the attacker's next attempt will drop to 0). You might also tune the algorithm so that the delay is calculated based on the previous delay and the timestamp on the file.
I generally create login-history and login-attempt tables. The attempt table would log username, password, IP address, etc. Query against the table to see if you need to delay. I would recommend blocking completely for more than 20 attempts in a given time (an hour, for example).
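A sketch of that check, reusing the failed_logins table defined earlier in this thread (the 20-attempt/1-hour threshold is this answer's example):

```sql
-- count this IP's failures in the last hour; block entirely above 20
SELECT COUNT(*) AS recent_failures
  FROM failed_logins
 WHERE ip_address = INET_ATON('192.168.0.1')
   AND attempted > DATE_SUB(NOW(), INTERVAL 1 HOUR);
```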

Resources