How to give a time-limited download link? - security

I am using PHP and MySQL, and I want to sell my digital e-books online. I want to give a download link to clients who buy my e-book, but I want to secure the links so that each one works only 3 times, and the link should be deleted automatically after 24 hours or after 3 download attempts.
How can I fulfill this requirement?
I don't want to buy a digital download service; I want to create my own.

At the point that they make a purchase, generate a unique identifier and incorporate it into the URL.
Then, when that URL is visited, look up the code, see when it was created and how many times it's been used, and either send the content or an error accordingly.
To expand on this:
User visits your site, buys somecontent.txt, and pays their money. You then generate a random string and give them a link that looks like:
www.yoursite.com/download?content=somecontent.txt&authKey=fdkjhwsiufhwaoeuhfq
where the string at the end is the one you generated, and stored in the DB.
At some time later, the buyer uses the link. Your code then goes: 'Aha! That code was created as a result of the user Fred buying some content. His payment was received, and it's the first time it's been downloaded, so I'll send him the file and increment the counter that records how many times that specific user has used that specific code.'
Eventually, due to either a time expiry or having been used too many times, your code will instead turn around and say 'Sorry Fred, that link is too old/has been used too many times', at which point they can, for example, re-buy the content and you can generate a new code.
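In PHP/MySQL terms, a minimal sketch of that flow might look like the following. The table layout, file handling, and download URL are my own assumptions, and I deliberately look the filename up from the token rather than trusting a filename in the URL:

    <?php
    // On purchase: create a time- and use-limited download token.
    // Assumed MySQL table:
    //   CREATE TABLE downloads (
    //     token      CHAR(32) PRIMARY KEY,
    //     file       VARCHAR(255) NOT NULL,
    //     created_at DATETIME NOT NULL,
    //     uses       INT NOT NULL DEFAULT 0
    //   );
    $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

    function createDownloadLink(PDO $pdo, string $file): string {
        $token = bin2hex(random_bytes(16));  // unguessable identifier
        $pdo->prepare('INSERT INTO downloads (token, file, created_at)
                       VALUES (?, ?, NOW())')
            ->execute([$token, $file]);
        return 'https://www.yoursite.com/download.php?authKey=' . $token;
    }

    // download.php: look the token up, enforce the limits, stream the file.
    function serveDownload(PDO $pdo, string $token): void {
        $stmt = $pdo->prepare(
            'SELECT file FROM downloads
             WHERE token = ?
               AND uses < 3
               AND created_at > NOW() - INTERVAL 24 HOUR');
        $stmt->execute([$token]);
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row === false) {
            http_response_code(410);
            exit('Sorry, that link is too old or has been used too many times.');
        }
        $pdo->prepare('UPDATE downloads SET uses = uses + 1 WHERE token = ?')
            ->execute([$token]);
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="'
               . basename($row['file']) . '"');
        readfile($row['file']);
    }

Expired rows can then be purged by a periodic DELETE using the same two conditions.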

Related

Instagram API media/popular

What queries can we use with media/popular? Can we localize it by country or geolocation?
Also, is there a way to get the discovery feature's results with the API?
This API is no longer supported.
Ref: https://www.instagram.com/developer/endpoints/media/
I was recently struggling with the same problem and came to the conclusion that there is no way other than the hard one.
If you want location-based popular images, you must go with the locations endpoint.
https://api.instagram.com/v1/locations/214413140/media/recent
This link brings up recent media from a given location, the key being the location ID. Your job is now to follow the simple pagination API and merge the response arrays into one big bunch of JSON. The $response['pagination']['next_max_id'] parameter is responsible for pagination, so you simply send each subsequent request with the max_id from the previous response.
https://api.instagram.com/v1/locations/214413140/media/recent?max_id=1093665959941411696
The end result will depend on how much information you gathered. At the end you just need to sort the array by like count and you're good to go.
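A rough PHP sketch of that loop, assuming the old v1 JSON layout (a data array and a likes.count field) and a placeholder access token:

    <?php
    // Collect all recent media for one location by following next_max_id,
    // then sort by like count. ACCESS_TOKEN is a placeholder.
    $base = 'https://api.instagram.com/v1/locations/214413140/media/recent'
          . '?access_token=ACCESS_TOKEN';

    $media = [];
    $maxId = null;
    do {
        $url = $base . ($maxId !== null ? '&max_id=' . $maxId : '');
        $response = json_decode(file_get_contents($url), true);
        $media = array_merge($media, $response['data']);
        $maxId = $response['pagination']['next_max_id'] ?? null;
    } while ($maxId !== null);

    // Sort the merged array by like count, most-liked first.
    usort($media, function ($a, $b) {
        return $b['likes']['count'] - $a['likes']['count'];
    });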
Of course, an important part is to save the results locally rather than regenerating them every time a user opens the page, not only because of the generation time but because of the limited number of API requests per hour.
I hope someone comes up with a better solution, or that the Instagram API will finally support media/popular by location.

How can I scrape data from a website?

I want to scrape just four data items for each and every product on the following page, which is an infinite-scroll page:
name of the product
price of the product
href of the product
img src of the product.
All the data should be stored in a single CSV file.
How can I do this?
Any idea?
I am not sure about this method:
Get the original source code of the page, where you can find all of the site's information, including the photo links and any text.
This is usually considered a bad idea. If you write code to scrape a website for its content, what happens when they change their markup? Or what happens when they realize you're scraping (stealing) their original content and ban your server's IP address, or even your whole IP range? It's a losing battle, so unless you have permission from them I wouldn't recommend trying. It may work for a little while, but probably not for long. It's generally considered poor form to do something like this, so personally I wouldn't encourage anyone to teach someone how to scrape a website for its content.
Furthermore, it says very clearly in their Terms of Use not to do exactly that:
You agree not to access (or attempt to access) the Website and the materials
or Services by any means other than through the interface that is provided by
Snapdeal. You shall not use any deep-link, robot, spider or other automatic
device, program, algorithm or methodology, or any similar or equivalent manual
process, to access, acquire, copy or monitor any portion of the Website or
Content (as defined below), or in any way reproduce or circumvent the
navigational structure or presentation of the Website, materials or any
Content, to obtain or attempt to obtain any materials, documents or
information through any means not specifically made available through the
Website.

The New Google Takeout

Is there a way to include one's search history within Google Takeout?
https://www.google.com/takeout/
Takeout purports to let you download everything stored within your Google account.
As far as I can tell, no. It's an obvious and mysterious gap in service.
You can download your recent Google searches via an RSS feed.
https://www.google.com/history/?output=rss
You can add parameters to the URL. The maximum per query is 1000. num is the number of searches to return, and start is how far back to draw from. Like so:
https://www.google.com/history/?output=rss&num=1000&start=4000
Unfortunately, it starts to become somewhat reduced (as in not actually all of your searches) after a few thousand. I have over 40,000 searches on Google, but I can only go back 7,000 on this RSS feed. Bummer. This means we still don't have access to all the data of ours that they have.
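If you wanted to script that export, the paging might look like the sketch below. Note the feed only worked while signed in, so the session cookie here is a placeholder you would have to supply yourself:

    <?php
    // Walk backwards through the history feed, 1000 searches per request.
    // COOKIE_STRING stands in for an authenticated Google session cookie.
    $pages = [];
    for ($start = 0; $start < 7000; $start += 1000) {
        $url = 'https://www.google.com/history/?output=rss'
             . '&num=1000&start=' . $start;
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIE, 'COOKIE_STRING');
        $pages[] = curl_exec($ch);
        curl_close($ch);
    }
    // $pages now holds the raw RSS documents, ready to parse or archive.
    file_put_contents('search_history.xml', implode("\n", $pages));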
Please prove me wrong!
Today is the last day to delete your search history, as suggested by the EFF (https://www.eff.org/deeplinks/2012/02/how-remove-your-google-search-history-googles-new-privacy-policy-takes-effect), before the new Google terms come into force, linking that history with all the other Google products. So if you can't grab it today, delete it so as to partially anonymise it eventually, or be tentacularised.

Is this method of checking a "gift code" secure?

I have a backend that generates gift codes, each with a certain number of uses. Give these to a blogger or whatever, and their readership can redeem the code for a promotional item.
I'm working on the best way to check a code's validity without collisions/dupes or anything like that. I need to 1) validate the code and 2) collect shipping info.
My first draft was:
A) Check the code via a form; if it's good, proceed to address input. When the input is received, save the code and the address/name etc.
This fails because if 74 of the 75 uses of a code have been redeemed, 25 people could still "validate" against the one remaining use before entering their addresses, and we'd end up with more than 75 valid redemptions.
My current solution looks more like:
B) Just have the code as the first field in the information gathering form, and when a valid code is typed in, ajaxify that and live check it against the DB. If the code is valid, it then shows the rest of the form, and that entry of the code is "claimed" for half an hour or something. If no DB entry w/in half an hour, it's then released.
This seems pretty complex, and I'm wondering if I'd need to throttle the AJAX attempts to make sure people don't brute-force a valid code.
Is this method secure, and/or are there any other blatantly obvious patterns I'm missing for this type of application?
Let everyone enter their gift code and address, and then submit.
In the backend, verify the address and the gift code.
If the gift code is valid and not exhausted, congratulate the user. Otherwise, apologise to them and suggest they buy the item instead.
Does it have to be more complicated than that?
Why don't you just have one form with all the information (redemption code and shipping info)?
Then, when the user submits, atomically (using transactions on your database) check if it's valid and commit the user's information.
If the code is no longer valid, just show a message like "Sorry, the redemption code you used has been depleted and is no longer valid."
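A sketch of that atomic check-and-commit with PDO; the table and column names here are assumptions:

    <?php
    // Validate and redeem in one step so two simultaneous submissions
    // can't both claim the last use of a code. Assumed schema:
    //   gift_codes(code, max_uses, uses)
    //   redemptions(code, name, address)
    function redeem(PDO $pdo, string $code, string $name, string $address): bool {
        $pdo->beginTransaction();
        try {
            // Atomic: the row is only updated if a use is still available.
            $stmt = $pdo->prepare(
                'UPDATE gift_codes SET uses = uses + 1
                 WHERE code = ? AND uses < max_uses');
            $stmt->execute([$code]);
            if ($stmt->rowCount() === 0) {   // invalid or depleted
                $pdo->rollBack();
                return false;
            }
            $pdo->prepare('INSERT INTO redemptions (code, name, address)
                           VALUES (?, ?, ?)')
                ->execute([$code, $name, $address]);
            $pdo->commit();
            return true;
        } catch (Exception $e) {
            $pdo->rollBack();
            throw $e;
        }
    }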
Just wanted to add: if you're worried about brute-forcing attempts, you can require a CAPTCHA or a JavaScript-based hashcash value to be submitted along with the gift code. If you want to be as unobtrusive as possible, you can require this only for subsequent attempts after the first failed one.
One thing you might consider is, after the user enters a gift code, creating an intermediate page that has more details about the offer, shows the number of claims remaining, and has some information about what will be required to complete the offer (address, credit card, whatever). If the user chooses to claim the offer, have a 10-15 minute countdown (updated via JavaScript) on the data-entry page for the address and other personal information, so the user knows that the offer might expire if they don't enter their information immediately.
Another thing to consider is implementing a "cancel" button that indicates the user can make the offer available for another user, without waiting for the countdown to expire.
Your current solution looks like the proper one, although I think you left out the method by which you associate the user with the code. Still, providing the functionality of "reserving" a redemption of the code for a user is a good solution.
Option B seems reasonable. Just use a CAPTCHA rather than trying to throttle it. CAPTCHAs aren't perfect, but they're less obnoxious than, say, misreading the code three times and then being denied the ability to try another for 24 hours. This will work particularly well if you're already planning on doing it AJAX-style.
So -
The user will fill in the code field and the CAPTCHA.
You'll confirm the CAPTCHA, then confirm the code.
Once successful, the user will fill in the other info and submit.
Using this method you could also probably only lock the code for something more like 5 minutes (ticket agency style) and show a timer on the form somewhere notifying the user.
Your A method (check the code via a form; if good, proceed to address input) looks very reasonable. Just combine it with B's idea that the code is "claimed" for half an hour or something, and everything should work as you expect.
That is:
Customer enters code
Check the code: if it is valid and not already used MAX or more times, add an entry to the code-use table with a timestamp that expires after x minutes.
Collect other info
On submit, permanently mark the code as used (remove entry expiration)
If the customer never completes the order, the timestamped entry is removed (or ignored) after time x, releasing that use for others.
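In sketch form, with assumed table and column names (a production version would wrap the count-then-insert in a transaction, e.g. with SELECT ... FOR UPDATE, to close the remaining race):

    <?php
    // Reserve one use of the code; confirm it on final submit.
    // Assumed table: code_uses(id, code, reserved_at DATETIME, confirmed TINYINT)
    function reserve(PDO $pdo, string $code, int $maxUses): ?int {
        // Count confirmed uses plus reservations younger than 30 minutes;
        // stale unconfirmed holds are simply ignored, i.e. released.
        $stmt = $pdo->prepare(
            'SELECT COUNT(*) FROM code_uses
             WHERE code = ?
               AND (confirmed = 1
                    OR reserved_at > NOW() - INTERVAL 30 MINUTE)');
        $stmt->execute([$code]);
        if ((int)$stmt->fetchColumn() >= $maxUses) {
            return null;                     // no uses left right now
        }
        $pdo->prepare('INSERT INTO code_uses (code, reserved_at, confirmed)
                       VALUES (?, NOW(), 0)')
            ->execute([$code]);
        return (int)$pdo->lastInsertId();
    }

    function confirm(PDO $pdo, int $reservationId): void {
        // Removing the expiry: the hold becomes a permanent use.
        $pdo->prepare('UPDATE code_uses SET confirmed = 1 WHERE id = ?')
            ->execute([$reservationId]);
    }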
We do low-end encryption (RC4) with a checksum added for this type of thing. Because RC4 generates a problematic character set, we also convert the result to hex. The combination is relatively secure and self-checking. The decrypted value is just a number that we can verify in the database. This works for both our e-mail reminders and gift certificates.
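Something along these lines; this is a sketch rather than that poster's exact scheme. RC4 availability through openssl_encrypt depends on your OpenSSL build, and the crc32 checksum here is my own choice:

    <?php
    // Encrypt a numeric ID, append a checksum, and hex-encode the result
    // so the code is safe to print or put in a URL.
    const SECRET = 'your-secret-key';        // placeholder key

    function encodeGiftCode(int $id): string {
        $cipher = openssl_encrypt((string)$id, 'rc4', SECRET, OPENSSL_RAW_DATA);
        $check  = pack('N', crc32($cipher)); // 4-byte checksum
        return bin2hex($cipher . $check);
    }

    function decodeGiftCode(string $code): ?int {
        $raw = @hex2bin($code);
        if ($raw === false || strlen($raw) <= 4) return null;
        $cipher = substr($raw, 0, -4);
        $check  = substr($raw, -4);
        if ($check !== pack('N', crc32($cipher))) return null;  // tampered
        $plain = openssl_decrypt($cipher, 'rc4', SECRET, OPENSSL_RAW_DATA);
        return $plain === false ? null : (int)$plain;
    }

The decoded number is then verified against the database, as described above.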

How does the "mark as read" system on webforums work?

I've wondered about this for some time now. How do web forums implement the option to highlight something you haven't read? How does the forum know?
Since most web forums have a function to show you all posts since your last visit, they must save the time you last visited one of their pages in your user data in the database.
But that doesn't explain how individual topics are still highlighted after you've read just one.
A many-to-many table connecting a user to a topic/post, with flags for read/favorite etc.
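For example, such a relation table might look like this; the column names are my guesses:

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=forum', 'user', 'pass');
    $pdo->exec(
        'CREATE TABLE topic_reads (
            user_id     INT NOT NULL,
            topic_id    INT NOT NULL,
            is_read     TINYINT NOT NULL DEFAULT 0,
            is_favorite TINYINT NOT NULL DEFAULT 0,
            read_at     DATETIME NULL,
            PRIMARY KEY (user_id, topic_id)
        )');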
Many web forums store, for each topic you've looked at, the last time you looked at it.
This gets out of hand quickly, but there are mitigations. See Determining unread items in a forum.
Keeping track of which posts a visitor has read is of course not that big a deal, since the number of posts a visitor has read will very likely be much smaller than the number of posts not read. So, if you know which posts a visitor has read, you also know which posts that visitor didn't read. To make this less computationally intensive, you'd normally only do this over a certain period of time, say the last two weeks; everything before that time is considered read.
Usually, this list of "unread" items only shows changes that have been made since the last time you logged out.
Use the user's last activity date/time to mark items as "unread" (any activity in a topic after that time marks it "unread"). Then store, in a session variable, a list of topic IDs that the user has viewed since the last login. Combining these two would give you a relatively accurate list of unread topics.
Of course, this data would then be lost on log-out or session expiry, and the cycle would start again, without spending an unnecessary number of SQL queries.
On the custom forum I used to work with, we used a combination of your last visit time (updated every time you viewed another page, usually stored in a cookie) and a "mark read" button on each topic that added a date/time value to a SQL table containing your UserID, the TopicID, and the date/time.
Thus, to view new topics, we would look at your last visit date; anything created after that point in time was a new topic.
Once you entered a topic, any topic you had clicked "mark read" on would show only the initial post and then any replies added after you clicked the mark-read button. If you have fewer viewers and performance to spare, you could basically set it up to add an entry to the table for every topic the user clicks on, at the moment they click on it.
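A sketch of how those two pieces combine when listing topics; the schema and names are assumed:

    <?php
    // Flag each topic as new/unread from the visitor's last-visit time
    // plus their per-topic "mark read" rows.
    // Assumed tables: topics(id, title, created_at, last_post_at)
    //                 mark_read(user_id, topic_id, marked_at)
    function listTopics(PDO $pdo, int $userId, string $lastVisit): array {
        $stmt = $pdo->prepare(
            'SELECT t.id, t.title,
                    (t.created_at > ?)  AS is_new,
                    (t.last_post_at > COALESCE(mr.marked_at, ?))
                                        AS has_unread_replies
             FROM topics t
             LEFT JOIN mark_read mr
                    ON mr.topic_id = t.id AND mr.user_id = ?
             ORDER BY t.last_post_at DESC');
        $stmt->execute([$lastVisit, $lastVisit, $userId]);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }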
Another option you have, and I have actually seen this done before in a vBulletin installation, is to store a comma-separated list of viewed topic IDs client-side in a cookie.
Server-side, the only thing stored was the time of the user's previous visit. The forum system used this in conjunction with the information in the user's cookie to show a topic 'as read' when either:
its last modified date (i.e. last post) was older than the user's previous visit, or
its topic ID was found in the user's cookie as one the user had visited this session.
I'm not saying it's a good idea, but I thought I'd mention it as an alternative; the obvious way to do it has already been stated in other answers, i.e. store it server-side as a relation table (a many-to-many table).
I guess it does have the advantage of relieving the server of the burden of keeping that information.
The downsides are that it ties the data to the session, so once a new session is started, everything that occurred before the last session is considered 'already read'. Another downside is that a cookie can only hold so much information, and a user may view hundreds of topics in a session, so the list can approach the cookie's storage limit.
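For what it's worth, the cookie side of that approach might look like this simplistic sketch (names are assumed; a real implementation would cap the list as discussed above):

    <?php
    // Append the topic the user just viewed to a comma-separated cookie.
    function markTopicViewed(int $topicId): void {
        $viewed = isset($_COOKIE['viewed_topics'])
            ? explode(',', $_COOKIE['viewed_topics'])
            : [];
        if (!in_array((string)$topicId, $viewed, true)) {
            $viewed[] = (string)$topicId;
        }
        // Session cookie (no expiry): the list resets with the browser.
        setcookie('viewed_topics', implode(',', $viewed), 0, '/');
    }

    // Dates are 'YYYY-MM-DD HH:MM:SS' strings, so they compare correctly.
    function isTopicRead(int $topicId, string $lastVisit, string $lastPostAt): bool {
        $viewed = isset($_COOKIE['viewed_topics'])
            ? explode(',', $_COOKIE['viewed_topics'])
            : [];
        return $lastPostAt < $lastVisit                     // no new posts
            || in_array((string)$topicId, $viewed, true);   // seen this session
    }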
One more approach:
Make sure your stylesheet shows a clear difference between visited and non-visited links, taking advantage of the fact that browsers remember visited pages persistently.
For this to work, however, you'd need consistent URLs for topics, and most forum systems don't tend to provide that. Another downside is that users may clear their history, or use more than one browser. This therefore puts this measure in the 'not highly reliable' category; you would probably just use it to augment whatever other measure you are using to track viewed topics.
