I have an application where every request to a page must be recorded in a count table. How can I check, in ASP.NET, that the request comes from a real user and not from some tool a user built to fire a million requests per second?
Note: I cannot put a captcha or anything similar on the page, because the user is not entering any data.
I guess most sites do this via a cookie, or they store the IP address of the requester. The user agent can be faked, and there are quite a lot of tools around that do exactly that.
If a cookie is out of the question, then I would suggest time tracking on the server. I assume you have some way of identifying the user, whether it be cookie based or by authentication (they logged in to your site); without identifying the user it would be pointless to count requests.
This means that for each request from each user, you track two things:
the user ID
the time when they requested the page
When they make a request, you increment the counter. If they make another request, you check the timestamp of the previous request; if it is too soon, you don't count the current one. This mechanism can easily be modified to count requests only once every x seconds, or even only once per day, etc. If you settle on a timespan that would represent continual requests from a normal user, you can skip counting any requests that happen before that timespan has elapsed. You could also log those suspect requests. If you are sending data of some sort back with each request, you can send nothing back for the suspect ones.
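As a rough sketch of that check (TypeScript; an in-memory map stands in for the count table, and the five-second threshold is made up):

// Count a request only if the user's previous request wasn't too recent.
const MIN_INTERVAL_MS = 5_000; // tune to whatever represents a normal user

const lastRequestAt = new Map<string, number>();
const requestCount = new Map<string, number>();
const suspectCount = new Map<string, number>();

function recordRequest(userId: string, now: number = Date.now()): boolean {
  const previous = lastRequestAt.get(userId);
  lastRequestAt.set(userId, now);
  if (previous !== undefined && now - previous < MIN_INTERVAL_MS) {
    // Too soon: don't count it, but optionally log it as suspect.
    suspectCount.set(userId, (suspectCount.get(userId) ?? 0) + 1);
    return false;
  }
  requestCount.set(userId, (requestCount.get(userId) ?? 0) + 1);
  return true; // counted
}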
You can check the user-agent and see if the request is coming from a web browser.
See this article for more info.
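As a rough illustration of that check (bearing in mind the header is trivially spoofable):

// Very crude user-agent screen; any serious tool can fake this header.
function looksLikeBrowser(userAgent: string | undefined): boolean {
  if (!userAgent) return false; // many scripts send no User-Agent at all
  return /Mozilla|Chrome|Safari|Firefox|Edg/i.test(userAgent);
}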
I need to add only my personal Instagram posts to my personal website so I can use them as a portfolio.
I don't want to do authentication every time a request is made, and I don't want to use the legacy API.
There are some answers here and there; some of them are outdated and some are incomplete (they don't answer this question). I am looking for an answer that summarizes this and that I can come back to when I need it.
Assuming you have already created an Instagram app and obtained a 1-hour (short-lived) token.
First you do something like this:
GET https://graph.instagram.com/access_token
?grant_type=ig_exchange_token
&client_secret={instagram-app-secret}
&access_token={short-lived-access-token}
This will give you a 60-day access token.
Source
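A minimal sketch of that exchange in TypeScript, using the endpoint and parameters shown above (error handling omitted):

// Exchange a short-lived Instagram token for a ~60-day long-lived one.
async function exchangeForLongLivedToken(shortLivedToken: string, appSecret: string): Promise<string> {
  const url =
    "https://graph.instagram.com/access_token" +
    "?grant_type=ig_exchange_token" +
    `&client_secret=${encodeURIComponent(appSecret)}` +
    `&access_token=${encodeURIComponent(shortLivedToken)}`;
  const response = await fetch(url);
  const body = await response.json(); // { access_token, token_type, expires_in }
  return body.access_token as string;
}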
Once you have the long-lived token, you can make a GET request to this endpoint: https://graph.instagram.com/me/media
Add the token:
.../me/media?access_token={access-token}
You can also add some of these fields:
.../me/media?fields=media_url,thumbnail_url,caption&access_token={access-token}
This should return a JSON response that includes the things you need to build the portfolio.
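A sketch of that call, requesting the fields mentioned above plus the media ID and permalink:

// Fetch the user's media with the fields needed for a simple portfolio.
async function fetchPortfolioMedia(longLivedToken: string) {
  const url =
    "https://graph.instagram.com/me/media" +
    "?fields=id,caption,media_url,thumbnail_url,permalink" +
    `&access_token=${encodeURIComponent(longLivedToken)}`;
  const response = await fetch(url);
  const body = await response.json();
  return body.data; // array of media objects
}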
Keep in mind that the token only lasts for 60 days, and you need to refresh it before this time is over:
See this for more information
I've come across the same issue and I've gathered my findings in a step-by-step guide. Here are the key points:
You need to go and create an app on developers.facebook.com and generate a long-lived token (there is no need for POST requests; the token can be generated with the click of a button).
To fetch the photos, you need to make more than one call to the API. One for fetching the photos' IDs and then, one separate call for each ID to get the actual data (permalink, image URL, caption, etc).
There is a limit to the calls that you can make per hour, which, as I realized, can be easily reached on a site with moderate traffic. So, you have to somehow cache the results (on WordPress, I used a transient for that which seems to work fine).
The token expires after 60 days and you need to refresh it on time for your app to keep working.
To make things easier for future implementations, I also made a small PHP function that takes care of all of the above (except the token generation). It will make the necessary calls, store the response in a transient which expires after an hour, and refresh the token, if it can be refreshed.
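The PHP function itself isn't reproduced here, but the cache-and-reuse idea is roughly this (a TypeScript sketch; an in-memory variable stands in for the WordPress transient, and the fetcher is passed in):

// Reuse the API response for an hour so moderate traffic doesn't
// exhaust the hourly rate limit.
const ONE_HOUR_MS = 60 * 60 * 1000;
let cache: { data: unknown; fetchedAt: number } | null = null;

async function getCachedMedia(fetchFromApi: () => Promise<unknown>) {
  const now = Date.now();
  if (cache && now - cache.fetchedAt < ONE_HOUR_MS) {
    return cache.data;               // serve from cache, no API calls
  }
  const data = await fetchFromApi(); // the actual Graph API call(s)
  cache = { data, fetchedAt: now };
  return data;
}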
I'm working on an implementation of the Facebook API and I'm at the point where I can fetch a user's Pages; I would now like to display these to the user so they can select where to send the post. These Page objects carry an access token used to verify requests with Facebook, and intuition tells me you wouldn't want to send these through to the UI and back again. I could just make two API calls when sending and receiving: filter the results to remove the access tokens, and then, when a request comes back in, make another call to the API and filter the Page results by ID.
I'm curious, though, whether there's a way to avoid making two API requests, reduce overall calls to the API, and keep usage down.
You could just store the Page tokens in the session when you get the list of Pages; then you don't need to make a second API request after the user has made their choice.
(Session data is tied to a specific client, and never leaves the server. Only the session ID is passed between client and server.)
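A sketch of that, assuming a Node/Express back end with express-session (the route names and the fetchPagesFromFacebook helper are hypothetical):

import express from "express";
import session from "express-session";

// Hypothetical helper that calls the Graph API with the user's token.
declare function fetchPagesFromFacebook(
  req: express.Request
): Promise<Array<{ id: string; name: string; access_token: string }>>;

const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: false }));

app.get("/pages", async (req, res) => {
  const pages = await fetchPagesFromFacebook(req);
  // Keep the tokens server-side, keyed by Page ID...
  (req.session as any).pageTokens = Object.fromEntries(
    pages.map((p) => [p.id, p.access_token] as const)
  );
  // ...and send only the harmless fields to the UI.
  res.json(pages.map((p) => ({ id: p.id, name: p.name })));
});

app.post("/publish/:pageId", (req, res) => {
  // Look the token up again when the user has made their choice.
  const token = (req.session as any).pageTokens?.[req.params.pageId];
  if (!token) return res.status(400).send("Unknown Page");
  // ...call the Graph API with `token` here...
  res.sendStatus(202);
});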
In our web application, we have a function where the user resets his/her password. Part of the process requires sending an OTP via SMS. The thing is, we have a function on our page that allows the user to resend the OTP in case it was not received for certain reasons (SMS provider error, network error, etc.). In a recent penetration test, it was found that the back-end call for sending the OTP is vulnerable to DoS attacks: hackers can run it to flood users with SMS messages.
We already have a mechanism in our firewall that detects automated denial-of-service attacks. The problem is that there is a minimum number of requests per second before the firewall classifies traffic as an attack (e.g. at 100 requests per second the FW blocks it, but anything below that it allows).
Let's say a hacker wrote a program that resends the OTP via SMS once per second; the firewall would not be able to detect it. Another option is to handle it programmatically, but we can't think of the best way to do it. Can anyone advise us on this? We can't just limit the number of times an OTP can be resent, because we are worried about the effect on user experience.
Two things come to my mind:
Take Macuistin's idea, but make the timeouts grow over time (a sketch follows after this list). I know I wouldn't want three text messages a minute. After X messages, don't send any more and have the user contact support; if this is a legitimate user, something isn't right after that many messages and you should just stop.
How about adding a step before this: send a one-time link to the user's email address; clicking the link takes them to the page where they enter the OTP that the link triggered (there could be a resend link on that page as well, which would not trigger another email).
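A sketch of the growing-timeout idea from the first point (the thresholds are made up; keep the per-user state server-side):

// Per-user resend state, kept server-side (e.g. in a DB or cache).
interface ResendState {
  sends: number;      // how many OTPs we have sent for this reset attempt
  lastSentAt: number; // timestamp of the last SMS, ms since epoch
}

const MAX_SENDS = 5;          // after this, route the user to support
const BASE_DELAY_MS = 30_000; // first resend allowed after 30 s

function canResendOtp(state: ResendState, now = Date.now()): { allowed: boolean; reason?: string } {
  if (state.sends >= MAX_SENDS) {
    return { allowed: false, reason: "Please contact support." };
  }
  // Delay doubles with every send: 30 s, 60 s, 120 s, ...
  const requiredDelay = BASE_DELAY_MS * 2 ** Math.max(0, state.sends - 1);
  if (now - state.lastSentAt < requiredDelay) {
    return { allowed: false, reason: "Please wait before requesting another code." };
  }
  return { allowed: true };
}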
Have you looked at the timings in real world use cases?
For example, if a real user takes 20 seconds before pressing retry then you could add that restriction to your service without real users knowing that the restriction is in place.
That doesn't mean you couldn't accept another request before this time; it could just be queued until the timeout has passed.
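One way to read "queued until the timeout has passed" (a sketch; the 20-second figure is the example above):

// Instead of rejecting an early resend, delay it until the minimum
// interval observed for real users has elapsed.
const MIN_RETRY_MS = 20_000;

function scheduleSend(lastSentAt: number, send: () => void, now = Date.now()): void {
  const waitFor = Math.max(0, MIN_RETRY_MS - (now - lastSentAt));
  setTimeout(send, waitFor); // fires immediately if enough time has passed
}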
This will not be possible through the WAF alone; here you can use a captcha for repeated attempts.
The captcha only pops up once a particular limit is crossed. You can set the limit per IP, user ID, or session variable.
I am building a web app wherein a user can like some choices displayed on the page.
I want to build this like/unlike system in the most efficient way possible. Does every press of the like button need to send an HTTP request to the Node.js server to modify user data in MongoDB?
I'm asking because I will have a Python script as a recommender system that listens to every change happening in MongoDB.
Yes, every click should go to the server by making a request. Someone might say:
you could also tweak this functionality, for example by pushing all the IDs of posts liked by a specific user into an array and sending it to the server at the end of the session or after a specific amount of time.
But think: what if that array somehow loses its data by mistake? Or the session fails for some reason? Also, how will other users see which posts are liked and which are not?
See, these are the reasons we always send the update each time. However, jQuery and other frameworks are there to make it fast.
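For example, the like button can simply fire a small request on every click (a sketch; the /like endpoint is made up):

// Client side: send each like/unlike immediately; the payload is tiny.
async function sendLike(postId: string, liked: boolean): Promise<void> {
  await fetch("/like", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ postId, liked }),
  });
}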
Does every press of the like button need to send an HTTP request to the Node.js server to modify user data in MongoDB?
You need to get your data to the server somehow, yes. An HTTP request is generally a good choice, and doesn't have to be as heavyweight as it once was.
Firstly, your server should enable HTTP keep-alive, where the underlying TCP connection stays open for some amount of time once a request is finished. That way, subsequent requests can be made on the same connection.
Additionally, you should ensure you have HTTP/2 enabled, which is a more efficient protocol due to its binary framing. More importantly, headers like Cookie and so on aren't re-sent in full over and over again, thanks to header compression.
By following these best practices, you'll find that your request/responses are just a few bytes down the wire of an existing connection. And, you won't have to change anything in your code to do it!
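For instance, in a plain Node.js HTTP server the keep-alive window is just a server setting (the values here are illustrative):

import http from "node:http";

const server = http.createServer((req, res) => {
  res.end("ok");
});

// Keep idle connections open so follow-up requests reuse the same socket.
server.keepAliveTimeout = 65_000; // ms an idle socket stays open
server.headersTimeout = 66_000;   // should be slightly larger than keepAliveTimeout

server.listen(3000);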
I am new to back-end development. The only way I can think of is this:
on a visit, if the browser doesn't have the cookie, go to the next step
generate a unique ID and set it as a cookie
then, upon every request, check whether that ID is present in the database; if it is not, go to step 1.
if it is present, fetch the data stored under that ID and respond as needed.
Now, is it safe? Is it logical? What actually happens in practice?
The scenario this will be used in:
This is meant for users who are not logged in. Basically, users visit my site and click something that takes time, so the user is redirected to a page with a waiting GIF while the server is polled for results via AJAX (long polling). To differentiate between requests from multiple users, I am thinking this will work. It's important because the data I'm going to send back is private and comes from a third party.
You have to decide up front if you want a:
Temporary session for a given browser that will only work for that user in one specific browser and may be reset at any time
or
A longer-term session associated with a particular user that the user can use at any time and from any browser.
The first can be done with a server- or client-generated cookie holding any globally unique value. You can then use that ID as a key into your database to get the user's server-side settings/data on any given request. In node.js, there are a number of session-related NPM modules that will handle the generation of a session ID for you automatically. The problem with this first method is that it relies on the preservation of a cookie value in the user's browser. Not only can cookies be temporary (they can be cleared), but they are only set in one specific browser.
If you're only planning on using it for the duration of one session, then this first method should work just fine. It is common to use a time value (e.g. Date.now()) combined with a random number for a unique ID. Since no two requests can be processed in the same millisecond, this guarantees a unique ID value; the random part makes it hard to predict. Other NPM session modules offer further features such as an encryption key, etc.
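A sketch of that first method as an Express-style middleware (the cookie name and the in-memory store are made up; in practice the store would be your database or a session module):

import crypto from "node:crypto";
import express from "express";

const app = express();
const sessions = new Map<string, { createdAt: number; data: unknown }>();

app.use((req, res, next) => {
  // Read the session cookie (a cookie-parser middleware would normally do this).
  const match = /(?:^|;\s*)sid=([^;]+)/.exec(req.headers.cookie ?? "");
  let sid = match?.[1];

  if (!sid || !sessions.has(sid)) {
    // No valid cookie: mint a new unique ID from a timestamp plus a random part.
    sid = `${Date.now().toString(36)}-${crypto.randomBytes(16).toString("hex")}`;
    sessions.set(sid, { createdAt: Date.now(), data: {} });
    res.setHeader("Set-Cookie", `sid=${sid}; HttpOnly; Path=/`);
  }

  // The ID now always maps to a record in the store.
  (req as any).sessionData = sessions.get(sid);
  next();
});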
The second method requires some sort of identifier that the user must enter so you know which user it is (often an email address). If you don't want other people to be able to impersonate a user just by knowing their user ID, then you also need to require a password. This essentially requires a sign-up process on your site where the user ends up with a user ID and password that they use to log in to your site.
It is not uncommon to see the first method used for short term storage on behalf of the user. For example, a shopping cart on a site that you are not registered for.
The second method is used by all the sites that have a user login.