I don't know if this is the best approach, but here it is:
I've made a system in Django and I only want users in a lab to be able to access it, without being able to go to other web pages (it's a program in which students answer some tests).
I've read that using a proxy to restrict access by IP is very easily bypassed (since all the students are from IT).
Then I read somewhere that you can build your own "Chrome" or Firefox browser.
And it made me wonder whether I could make a browser that can only access one domain (in this case my project's domain). This way it would be less obvious to the users what's going on.
But I can't find any good references for doing this, and I don't know how complicated it is.
Is it necessary to change the code of an existing browser, or can I just create an extension for it?
Why not run your test on a fixed private network where the only IP address the connected machines can see is the one hosting the test? Any requests to external pages will fail because the Internet won't be reachable.
Editing a browser is possible, but it is likely to be simultaneously excessive for what you want and insufficient to stop users from getting content you don't want them to have.
I'm hoping to get some idea of whether what I have in mind is even possible, or whether I'm looking in the wrong place.
Basically, my company provides a website which users access online with credentials we sell and provide to them. We have another potential customer who would like to access this website. Sadly, this customer is very stuck in the past, and they don't allow their users any internet access at all.
For a number of reasons, I don't want them to host their own version of this website. However, I considered that we might configure a web proxy on their network (which is allowed internet access) that acts as a reverse proxy, forwarding connections to our website. Is this even possible? And should it be attempted? Or are there better ways to achieve this?
Yes, it's possible. You can install a simple proxy script on their intranet, for example
https://github.com/Athlon1600/php-proxy-app
and modify its index.php so that it allows only a single host: your website.
I don't know what technology you can use on their intranet network, but such software is available for virtually every web language.
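If PHP isn't an option on their intranet, a rough sketch of the same idea in Python (standard library only) might look like the following; the upstream address is a placeholder, and a real deployment would still need to handle POSTs, headers, and HTTPS properly:

```python
# Minimal single-host forwarding proxy sketch (Python standard library only).
# UPSTREAM is a placeholder for the real website; nothing else is ever contacted.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen, Request
from urllib.error import HTTPError

UPSTREAM = "https://example.com"  # placeholder: the one site users may reach


class SingleHostProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the requested path to the single allowed upstream host.
        upstream_url = UPSTREAM + self.path
        try:
            with urlopen(Request(upstream_url, headers={"User-Agent": "intranet-proxy"})) as resp:
                body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type", resp.headers.get("Content-Type", "text/html"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except HTTPError as err:
            self.send_error(err.code, err.reason)


if __name__ == "__main__":
    # Listen on the intranet only.
    HTTPServer(("0.0.0.0", 8080), SingleHostProxy).serve_forever()
```

Because the upstream host is hard-coded, the machine running this script is the only thing on their network that ever needs outbound internet access.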
Here is some discussion related to accessing sites blocked on a network; it is specifically about Gmail, but it should definitely help you too:
https://superuser.com/questions/453825/how-to-bypass-web-url-filtering-service-to-access-blocked-websites-proxy
For bypassing the firewall and getting access to the blocked sites:
http://www.makeuseof.com/tag/how-to-get-into-blocked-websites-in-school-with-freeproxy/
Apologies if this is a basic question, but I was unsure what to search for to try to get an answer. If someone can point me in the right direction I would be grateful.
Basically this is what I want to do.
I have a pretty much blank website on which I want to display text from a text file on my local PC, updated at regular intervals (the contents of the text file will change regularly). What are the things I'll need to learn to do this?
I read up on how you can do this with AJAX, but as I understand it the text file would have to be on the server, which in this case it isn't.
I understand that this is a month old, but since no one has really paid any attention to it, I might as well answer it.
It really doesn't make sense to keep the file on your local PC, because for your web app to be able to access that text resource, your local machine would have to be reachable from the outside world. If you want it that way, you would run a web-server-like application on your local machine, make it accessible via a public domain, and then fetch the text with a (hopefully) secure GET request from your hosted application.
The simplest solution I can think of, and a saner way to do this, would be:
Save the text in a database on the hosting server. Whenever you need to update it, change the value in the DB using a DB administration application such as phpMyAdmin hosted on that same machine (hopefully restricting access to phpMyAdmin to your own IP, but that is another matter). Better yet, you can always build a small CMS app to make things easier.
Or, if you still find that a tad too hard, why not just put the text in the source code, keep it versioned, and update it whenever you need to.
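To make the database route concrete, here is a minimal sketch; it assumes Flask and SQLite purely for illustration (the table name, column, and URL are made up), and the page would simply poll the /message endpoint with AJAX at whatever interval you like:

```python
# Sketch: the text lives on the hosting server (here in SQLite),
# and the page polls this endpoint instead of reading a file on a local PC.
# Flask is assumed to be installed; table and column names are made up.
import sqlite3
from flask import Flask

app = Flask(__name__)
DB_PATH = "site.db"  # hypothetical database file on the hosting server


def current_message():
    # Read the single row holding the text to display.
    with sqlite3.connect(DB_PATH) as conn:
        row = conn.execute("SELECT body FROM message WHERE id = 1").fetchone()
    return row[0] if row else ""


@app.route("/message")
def message():
    # The page can fetch this URL at regular intervals (e.g. setInterval + AJAX).
    return current_message(), 200, {"Content-Type": "text/plain"}


if __name__ == "__main__":
    app.run()
```

Updating the displayed text is then just an UPDATE against that one row, whether from a DB admin tool or a tiny admin view.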
In another question dealing with a bug in BlackBerry 10 that blocks cross-origin XHR calls, it is proposed to get around the issue by disabling web security.
But what does disabling web security really imply here? Am I going to torture small harmless woodland creatures if I use this?
Seriously though, does doing this expose my app to additional security risks beyond those introduced by adding the popular wildcard access uri="*" or access origin="*" line in my config.xml for BlackBerry 10?
Please advise.
But what does disabling web security really imply here? Am I going to torture small harmless woodland creatures if I use this?
No.
It means your application could access ANY resource on the Internet, good, bad or ugly, IF (and only if) the user is able to navigate to or access that resource.
By disabling web security, the following scenario could happen:
If you publish a link in your app to a remote page that you do not control, you risk that the page may display unexpected/malicious/inappropriate content OR enable the user to navigate elsewhere to another page that might. Example: say you are displaying content in your app loaded directly from some remote URL. Do you know exactly what type of content your users might 'see' in your app? If that remote URL was loading 'buy these pills now to get huge' advertisements from a different URL, would you be okay with YOUR users seeing that content in YOUR app?
Most devs will only include content in their app that they 'trust' and whitelist just the specific URLs they need. However, sometimes you do need to unlock the front door if you don't know which URLs your users will want to access.
So disabling web security is available if you really need it, but not recommended. Use it at your own risk, not as a matter of convenience.
I'm trying to make an internet voting service, but the problem is that on the internet it's just so easy to cheat by creating multiple accounts and voting for the same thing. CAPTCHAs and email verification don't help, as a human can pass them in just three seconds. The IP can be changed with a proxy. If we put a cookie in the voter's browser, they just clear it the next time.
I created this question to ask for help with methods we can use, with the basic features all browsers have (JavaScript etc.), to prevent our service from being cheated easily.
The first idea I had myself: is it possible for my website to see all the cookies a user has in their browser just by visiting my site? Because when they clean everything with CCleaner to create new accounts, I would know the browser is empty and the person is perhaps a cheater, since most real users who come to my site always have at least several cookies from different sites.
There is no way to address the issue of uniquely identifying real-world assets (here: humans) without stepping out of your virtual system, by definition.
There are various ways to ensure a higher reliability of the mapping "one human to exactly one virtual identity", but none of them is fool-proof.
The most accessible way would be to do it via a smartphone app. A human usually only has one smartphone (and a phone number).
Another way is to send them snail mail to their real address, with a secret code, which you require them to enter in your virtual system.
or their social insurance number,
or their fingerprints as login credentials.
The list could go on, but the point is, these things are bound to the physical world. If you combine more such elements, you get a higher accuracy (but never 100% certainty).
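To make the secret-code idea concrete, here is a minimal sketch of issuing and redeeming one-time codes, each bound to exactly one physical-world identifier (a postal address or phone number); the in-memory storage and function names are just for illustration:

```python
# Sketch: one-time voting codes, each bound to a single physical-world identifier
# (a postal address or phone number). In-memory storage only, for illustration.
import secrets

issued = {}       # code -> physical identifier it was sent to
redeemed = set()  # codes that have already been used to vote


def issue_code(identifier: str) -> str:
    # One code per identifier; re-requesting does not produce a second code.
    if identifier in issued.values():
        raise ValueError("a code was already issued for this identifier")
    code = secrets.token_urlsafe(8)
    issued[code] = identifier
    return code  # sent by letter or SMS, outside the virtual system


def redeem(code: str) -> bool:
    # A vote is accepted only once per code, hence once per identifier.
    if code not in issued or code in redeemed:
        return False
    redeemed.add(code)
    return True
```

The point is that getting a second code requires a second physical address or phone number, which is far more expensive for a cheater than clearing cookies.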
I need dev and beta sites hosted on the same server as the production environment (let's let that fly for practical reasons).
To keep things simple, I can accept the same protections on both dev and beta -- basically, don't let them get spidered, and put something short of usernames and passwords in place to prevent everyone and their brother from gaining access (again, there's a need to be practical). I realize that many people would want different permissions on dev than on beta, but that's not part of the requirements here.
Using a robots.txt file is a given, but then the question: should the additional host(s) (a.k.a. "subdomain(s)") be submitted to Google Webmaster Tools as an added preventive measure against inadvertent spidering? It should go without saying, but there will be no links into the dev/beta sites, so you'd have to type the address perfectly (with no help from URL rewriting or other assistance).
How could access be restricted to just our team? IP addresses won't work because of the various methods of internet access (meetings at lunch spots with wifi, etc.).
Perhaps have dev/beta and production INCLUDE a small file (or call a component) that checks for a URL variable to be set (on the dev/beta sites) or does not check for it (on the production site). This way you could leave a differently behaving INCLUDE or component (with the same name) on the respective sites, and the source would otherwise not require any changes when it's moved from development to production.
I really want to avoid full-on user authentication at any level (app level or web server), and I realize that leaves things pretty open, but the goal is really just to prevent inadvertent browsing of pre-production sites.
Usually I see web-server-based authentication with a single shared username and password for all users; this should be easy to set up. An interesting trick might be to check for a cookie instead, and then just have a well-hidden page that sets that cookie. You can remove that page once everyone has visited it, or implement authentication just for that file, or allow access to it only from the office and require people working from home to use a VPN or visit the office if they clear their cookies.
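A rough sketch of that cookie check, assuming a Flask app purely for illustration (the cookie name, value, and hidden path are all placeholders; a real version should at least use HTTPS and a long random value):

```python
# Sketch of the cookie-based gate for dev/beta: every request needs a known
# cookie, and a deliberately obscure URL sets it. Names and values are made up.
from flask import Flask, request, abort, make_response

app = Flask(__name__)
GATE_COOKIE = "dev_gate"
GATE_VALUE = "replace-with-a-long-random-string"
SETTER_PATH = "/letmein-6f3a"  # the "well-hidden page"


@app.before_request
def require_gate_cookie():
    # The setter page itself stays reachable; everything else needs the cookie.
    if request.path == SETTER_PATH:
        return None
    if request.cookies.get(GATE_COOKIE) != GATE_VALUE:
        abort(404)  # pretend the site isn't there at all


@app.route(SETTER_PATH)
def set_gate_cookie():
    resp = make_response("ok")
    resp.set_cookie(GATE_COOKIE, GATE_VALUE, max_age=60 * 60 * 24 * 90)
    return resp
```

On the production site you would simply leave this gate out, which fits the "same name, different include" idea from the question.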
I have absolutely no idea if this is the "proper" way to go about it, but we place all Dev and Beta sites on very high port numbers that crawlers/spiders/indexers never go to (in fact, I don't know of any off the top of my head that go beyond port 80 unless they're following a direct link).
We then have a reference index page listing all of the sites with links to their respective port numbers, with only that page being password-protected. For sites involving real money transactions or other sensitive data, we display a short red bar on top of the website explaining that it is just a demo server, on the very rare chance that someone would directly go to a Dev URL and Port #.
The index page is also on a non-standard (!= 80) port. But even if a crawler were to reach it, it wouldn't get past the password input and would never find the direct links to all the other ports.
That way your developers can access the pages with direct URLs and ports, and they have a password-protected index as a backup should they forget them.
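For what it's worth, here is a minimal sketch of such an index page, using only the Python standard library; the port number, credentials, and site list are all placeholders:

```python
# Sketch: a password-protected index of dev/beta sites, served on a high port.
# Port, credentials, and the site list are placeholders.
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8443  # non-standard port; crawlers rarely probe here
EXPECTED = "Basic " + base64.b64encode(b"team:changeme").decode()
SITES = {"Dev": "http://example.com:8081", "Beta": "http://example.com:8082"}


class IndexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.headers.get("Authorization") != EXPECTED:
            # Ask for credentials; crawlers stop here.
            self.send_response(401)
            self.send_header("WWW-Authenticate", 'Basic realm="dev index"')
            self.end_headers()
            return
        links = "".join(f'<li><a href="{url}">{name}</a></li>' for name, url in SITES.items())
        body = f"<ul>{links}</ul>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), IndexHandler).serve_forever()
```

Basic auth over plain HTTP sends the password essentially in the clear, so in practice you would put this behind TLS.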