How to stay on the same IP for days and weeks in Tor Browser?

I want to create a secondary, anonymous social media profile.
Most social media websites can easily detect that the same person is using two profiles, so I want to avoid that by using Tor Browser to work on my second profile.
To achieve this, how do I keep the same IP address for weeks, logging into Tor each time with an IP address in a specific city/area?
Kindly suggest.

To achieve this you can edit the Tor Browser's torrc configuration file and use the ExitNodes configuration option to specify which exit node(s) you wish to use for your browsing session. You can pick the exit nodes using a country code, or a specific exit relay's identity fingerprint.
Selecting by city isn't an option unless you know a relay operator and where they operate their relays; selecting by country or autonomous system number is the best you can do to control where your traffic appears to come from. Depending on where you want the traffic to come from, you may have limited selections available, and there's no guarantee the same relay will always be fast, or online at all.
Note that all Tor exits are known so no matter which exit you choose, the site you're browsing can easily identify your Tor usage, and using the same exit doesn't help with anonymity or making it harder to detect if you're the same person using another profile. In fact, it could be argued that using the same exit over a long period of time could reduce anonymity depending on how it's used. But if you're logging in to the same profile over time, then using the same or different exits may not make much difference.
One way to find suitable nodes is to use Tor's relay search in Advanced mode, since you will need to select only nodes with the "Exit" flag. The advanced search also lets you narrow the list further by selecting a country or AS.
Here are examples of what you might put in torrc:
# Only allow exits through relays located in Russia
ExitNodes {RU}
# Only exit through a single node of your choosing
# The fingerprint here is displayed in relay search
ExitNodes ABCD1234CDEF5678ABCD1234CDEF5678ABCD1234
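One caveat worth adding: by default Tor treats ExitNodes as a preference and may still fall back to other exits if none of the listed ones are usable. If you want Tor to fail rather than silently use a different exit, you can combine the option with StrictNodes, e.g.:

```
# Prefer exits in Russia, and fail rather than fall back to other
# exits if none of them are reachable
ExitNodes {RU}
StrictNodes 1
```

Note that StrictNodes makes your connection less robust: if all matching exits are down, Tor simply won't build a circuit.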

Related

Achieve site-specific TOR exit nodes

Anyone who has enough experience using the TOR Browser, and is persistent and adamant enough to keep using it to regain a little privacy and anonymity on the internet, knows how challenging this becomes on the many sites that automatically play the white knight in shining armor to "protect" your account from being compromised.
The technically aware cannot choose to take responsibility for their own security and must be baby-sat along with the masses.
That being said, to avoid tripping the security alarms and algorithms employed by such sites and services, it may be necessary to consistently force an exit node from a particular region, thereby satisfying one of the major triggers those algorithms use.
Given, however, that one would access a number of different sites in a session, this poses a problem: some services are better accessed from one region and others from another.
Thus the question is, is there a way to configure TOR Browser to choose site specific exit nodes?

How to identify whether visitors are unique?

I'm trying to make an internet voting service, but the problem is that on the internet it's just so easy to cheat by creating multiple accounts and voting for the same thing. CAPTCHAs and email verification don't help, as a human can pass them in three seconds. IPs can be changed via proxy. If we put a cookie on the voter's browser, they just clean it next time.
I created this question to ask for help with methods we can use, with basic features that all browsers have (JavaScript etc.), to prevent our service from being cheated easily.
The first idea I had myself: is it possible for my website to access all the cookies a user has in their browser just by them visiting my site? Because when they clean everything with CCleaner to make new accounts, I can tell the browser is empty, so the person is perhaps a cheater; most real users who come to my site always have at least several cookies from different sites.
There is no way to address the issue of uniquely identifying real-world assets (here: humans) without stepping out of your virtual system, by definition.
There are various ways to ensure a higher reliability of the mapping "one human to exactly one virtual identity", but none of them is fool-proof.
The most accessible way would be to do it via a smartphone app. A human usually only has one smartphone (and a phone number).
Another way is to send them snail mail to their real address, with a secret code, which you require them to enter in your virtual system.
or their social insurance number,
or their fingerprints as login credentials.
The list could go on, but the point is, these things are bound to the physical world. If you combine more such elements, you get a higher accuracy (but never 100% certainty).
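The snail-mail idea above can be sketched in a few lines: issue a random one-time code per postal address, mail it, and let it be redeemed exactly once. This is a minimal illustration with hypothetical names, not a hardened implementation (a real system would persist codes, add expiry, and rate-limit attempts):

```python
# Minimal sketch of snail-mail verification: one random code per
# postal address, redeemable exactly once. Class and method names
# are illustrative, not from any real service.
import secrets

class PostalVerifier:
    def __init__(self):
        self._pending = {}  # address -> outstanding code

    def issue(self, address):
        """Generate the code to print on the letter sent to this address."""
        code = secrets.token_hex(4).upper()
        self._pending[address] = code
        return code

    def redeem(self, address, code):
        """True only for the first correct redemption of a code."""
        expected = self._pending.get(address)
        if expected is not None and secrets.compare_digest(expected, code):
            del self._pending[address]  # enforce one account per address
            return True
        return False
```

The physical letter is what binds the virtual identity to the real world; the code itself is just a capability proving the letter was received.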

Future proofing client-server code?

We have a web-based client-server product. The client is expected to be used by upwards of 1M users (a famous company is going to use it).
Our server is set up in the cloud. One of the major questions while designing is how to make the whole program future-proof. Say:
Cloud provider goes down, then move automatically to backup in another cloud
Move to a different server altogether etc
The options we thought till now are:
DNS: Running a DNS name server on the cloud ourselves.
Directory server - The directory server also lives on the cloud
Have our server returning future movements and future URLs etc to the client - wherein the client is specifically designed to handle those scenarios
Since this should be a common problem, which is the best solution? Since our company is a very small one, we are looking for the least technically and financially expensive solution (say option 3, etc.).
Could someone provide some pointers?
K
I would go for the directory server option. It's the most flexible and gives you the most control over what happens in a given situation.
To avoid the directory itself becoming a single point of failure, I would have three or four of them running at different locations with different providers. Have the client app randomly choose one of the directory URLs at startup and work its way through them all until it finds one that works.
To make it really future-proof you would probably need a simple protocol to dynamically update the list of directory servers -- but be careful: if this is badly implemented you will leave your clients open to all sorts of malicious spoofing attacks.
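The client-side failover described above is simple to sketch: a baked-in list of directory mirrors, tried in random order until one answers. The URLs, and the idea of fetching the list over a plain HTTPS GET, are placeholder assumptions for illustration:

```python
# Sketch of directory-server failover: try baked-in mirrors in random
# order until one responds. URLs are hypothetical placeholders.
import random
import urllib.request

DIRECTORY_URLS = [
    "https://dir1.example.com/servers",   # hypothetical mirror A
    "https://dir2.example.net/servers",   # hypothetical mirror B
    "https://dir3.example.org/servers",   # hypothetical mirror C
]

def first_working(urls, fetch):
    """Try mirrors in random order; return the first successful reply."""
    candidates = list(urls)
    random.shuffle(candidates)            # spread load across mirrors
    for url in candidates:
        try:
            return fetch(url)
        except OSError:
            continue                      # dead mirror: try the next one
    raise RuntimeError("no directory server reachable")

def http_fetch(url, timeout=5):
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

# At startup the client would call:
#   server_list = first_working(DIRECTORY_URLS, http_fetch)
```

Separating `first_working` from the actual HTTP fetch keeps the retry logic testable and lets you swap in a different transport later.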
Re. DNS: requests can be cached, and it might take a while for changes to propagate (hours to days).
I'd go for a list of prioritized IPs that can be updated on the client. If one IP fails, the client would retry with the 2nd, 3rd, and so on.
I'm not sure I 100% understood your question, but if I did it boils down to: if my server moves, how can my clients find it?
That's exactly what DNS has done for nearly three decades.
Every possible system you could choose would need to be bootstrapped with initial working data: address for a directory server, address of a working server to get an updated list of addresses, etc. That's what the root dns servers are for and OS vendors will do the bootstrapping part for you.
Sure, DNS queries can be cached; that's how it's supposed to work and how it scales to internet size. You control the caching (read about the TTL) and can usually keep it at sane values (it doesn't make sense to keep it shorter than the absolute minimum time needed to re-deploy the server somewhere else).

How to determine nationality based on IP address?

How can I tell the nationality of a user of my web site based on client ip?
Edit: As commented, this question has been answered before:
https://stackoverflow.com/questions/283016/know-a-good-ip-address-geolocation-service
Use the GeoIP database. There is a free one. There are also a lot of GeoIP web services you can use.
If you're thinking localization, let the user choose the correct language instead of doing it automatically -- or at least provide an easy way for them to change it and make it sticky via cookies. You can do ok most of the time at guessing using GeoIP, but sometimes you'll get it really wrong. Google sometimes sends my wife to the German version of their web site even though we're in the middle of the US. Using anonymization services (like TOR) will also likely result in guessing errors. Having the option to choose and keeping the choice on the computer will make it a better experience for your users.
Besides the already-mentioned GeoIP database, you could also use the IP2LOCATION service. It's a paid one, but it will also work.
Keep in mind that all these services will give you an estimate of the location, but not a very accurate geographic position. I read a networking paper once stating that giving an accurate position for an IP address is an impossible task to accomplish.
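Under the hood, these lookups are all the same idea: a sorted table of IP ranges mapped to countries, searched by binary search. A toy illustration (the ranges and country codes below are made up for the example; a real GeoIP or IP2LOCATION database ships millions of real rows):

```python
# Toy illustration of how a GeoIP lookup works internally: binary
# search over sorted (range_start, range_end, country) rows.
# These ranges are fictional example data.
import bisect
import ipaddress

RANGES = [
    (int(ipaddress.ip_address("10.0.0.0")),
     int(ipaddress.ip_address("10.255.255.255")), "AA"),
    (int(ipaddress.ip_address("192.0.2.0")),
     int(ipaddress.ip_address("192.0.2.255")), "BB"),
]
STARTS = [r[0] for r in RANGES]

def country_for(ip):
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1   # last range starting <= ip
    if i >= 0 and n <= RANGES[i][1]:
        return RANGES[i][2]
    return None                              # not in any known range
```

This also shows why accuracy is limited: the database can only be as precise as the ranges the registries and providers report.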

Dynamic IP-based blacklisting

Folks, we all know that IP blacklisting doesn't work: spammers can come in through a proxy, and legitimate users might get affected... That said, blacklisting seems to me an efficient mechanism for stopping a persistent attacker, given that the actual list of IPs is determined dynamically, based on the application's feedback and user behavior.
For example:
- someone trying to brute-force your login screen
- a poorly written bot issues very strange HTTP requests to your site
- a script-kiddie uses a scanner to look for vulnerabilities in your app
I'm wondering if the following mechanism would work, and if so, do you know if there are any tools that do it:
In a web application, developer has a hook to report an "offense". An offense can be minor (invalid password) and it would take dozens of such offenses to get blacklisted; or it can be major, and a couple of such offenses in a 24-hour period kicks you out.
Some form of web-server-level block kicks in before every page is loaded, and determines whether the user comes from a "bad" IP.
There's a "forgiveness" mechanism built-in: offenses no longer count against an IP after a while.
Thanks!
Extra note: it'd be awesome if the solution worked in PHP, but I'd love to hear your thoughts about the approach in general, for any language/platform
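The mechanism described above (weighted offenses, a rolling window, and automatic "forgiveness") can be sketched in a few lines. The weights and thresholds here are arbitrary examples, and a real deployment would persist state and block at the firewall rather than in the app:

```python
# Sketch of the offense-scoring idea: weighted offenses per IP, a
# 24-hour rolling window, and forgiveness as old offenses expire.
# All thresholds and weights are arbitrary illustrative values.
import time
from collections import defaultdict

class OffenseTracker:
    MINOR, MAJOR = 1, 10          # example offense weights
    THRESHOLD = 20                # score at which an IP is blocked
    WINDOW = 24 * 3600            # offenses expire after 24 hours

    def __init__(self, clock=time.time):
        self._clock = clock       # injectable clock, handy for testing
        self._events = defaultdict(list)   # ip -> [(timestamp, weight)]

    def report(self, ip, weight):
        """Called from the application's offense hook."""
        self._events[ip].append((self._clock(), weight))

    def is_blocked(self, ip):
        """Called before serving each page."""
        now = self._clock()
        # Forgiveness: drop events that have aged out of the window.
        live = [(t, w) for t, w in self._events[ip]
                if now - t < self.WINDOW]
        self._events[ip] = live
        return sum(w for _, w in live) >= self.THRESHOLD
```

So a couple of MAJOR offenses in a day blocks an IP, while it takes many MINOR ones; and an IP that stays quiet for 24 hours is forgiven automatically.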
Take a look at fail2ban: a Python framework that raises iptables blocks by tailing log files for patterns of errant behaviour.
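As a rough sketch of how fail2ban maps onto the question: your app logs an "offense" line, a filter's failregex matches it, and a jail bans the IP once enough matches land inside the window. The jail name, log path, and regex below are made-up examples for a hypothetical app:

```
# /etc/fail2ban/jail.local -- illustrative jail for a custom app log
[myapp-offenses]
enabled  = true
port     = http,https
filter   = myapp-offenses
logpath  = /var/www/myapp/offense.log
# count offenses within 24 hours, ban for one hour
maxretry = 5
findtime = 86400
bantime  = 3600

# /etc/fail2ban/filter.d/myapp-offenses.conf
[Definition]
failregex = ^.*OFFENSE from <HOST>.*$
```

Here findtime gives you the 24-hour window and bantime the "forgiveness": the ban simply expires.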
are you on a *nix machine? this sort of thing is probably better left to the OS level, using something like iptables
edit:
in response to the comment, yes (sort of). however, the idea is that iptables can work independently. you can set a certain threshold to throttle (for example, block requests on port 80 TCP that exceed x requests/minute), and that is all handled transparently (ie, your application really doesn't need to know anything about it, to have dynamic blocking take place).
i would suggest the iptables method if you have full control of the box, and would prefer to let your firewall handle throttling (advantages are, you don't need to build this logic into your web app, and it can save resources as requests are dropped before they hit your webserver)
otherwise, if you expect blocking won't be a huge component, (or your app is portable and can't guarantee access to iptables), then it would make more sense to build that logic into your app.
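The transparent iptables throttling described above can be done with the `recent` match module; the thresholds and rule placement below are illustrative, not a drop-in ruleset:

```
# Drop a source that has opened more than 20 new port-80 connections
# in the last 60 seconds...
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
         -m recent --name http --update --seconds 60 --hitcount 20 -j DROP
# ...and record every new port-80 connection toward that counter.
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate NEW \
         -m recent --name http --set
```

As the answer notes, excess requests are dropped before they ever reach the web server, so the application needs no knowledge of the throttling.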
I think it should be a combination of user-name plus IP block. Not just IP.
You're looking at custom lockout code. There are applications in the open-source world that contain various flavors of such code; perhaps you should look at some of those, although your requirements are pretty simple: mark an IP/username combo and use it to block the IP for x amount of time. (Note I said block the IP, not the user. The user may still get online via a valid IP/username/pw combo.)
Matter of fact, you could even keep traces of user logins, and when logging in from an unknown IP with a 3 strikes bad username/pw combo, lock that IP out for however long you like for that username. (Do note that a lot of ISPs share IPs, thus....)
You might also want to place a delay in authentication, so that an IP cannot attempt a login more than once every 'y' seconds or so.
I have developed a system for a client which kept track of hits against the web server and dynamically banned IP addresses at the operating system/firewall level for variable periods of time for certain offenses, so, yes, this is definitely possible. As Owen said, firewall rules are a much better place to do this sort of thing than in the web server. (Unfortunately, the client chose to hold a tight copyright on this code, so I am not at liberty to share it.)
I generally work in Perl rather than PHP, but, so long as you have a command-line interface to your firewall rules engine (like, say, /sbin/iptables), you should be able to do this fairly easily from any language which has the ability to execute system commands.
Err, this sort of system is easy and common; I can give you mine easily enough.
It's simply and briefly explained here: http://www.alandoherty.net/info/webservers/
The scripts as written aren't downloadable (as no commentary is currently added), but drop me an e-mail from the site above and I'll fling the code at you, and gladly help with debugging/tailoring it to your server.