Tor StrictExitNodes is not working

So, following advice from all over the Internet, including Tor documentation, I'm trying to force US-only exit nodes by editing the torrc file like so:
StrictNodes 1
ExitNodes {US}
But, I’m still getting exit nodes from Western Europe, Australia, and the US. I’m using the Vidalia bundle, though I’m starting Tor and Polipo from the command line programmatically and executing HttpWebRequests via Polipo. Any thoughts? I really, really need the exit nodes to only be from the US, and I'm really surprised this isn't working. Thanks.

I appear to have fixed the problem by adding this argument when I start Tor from the command line:
-f C:\Users\Frank\AppData\Local\Vidalia\torrc
I'm not sure why Tor wasn't using this config file by default, but now that I'm pointing to it explicitly, it appears to be following the StrictNodes directive. Thanks.
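For anyone hitting the same thing, the working setup looks roughly like this (the torrc path is the one from my machine; adjust it to wherever your profile lives):

```shell
# torrc contents — restrict circuits to US exit nodes only:
#   StrictNodes 1
#   ExitNodes {US}

# Start Tor pointing at that torrc explicitly, so the directives
# are actually read (path is an example, adjust as needed):
tor.exe -f "C:\Users\Frank\AppData\Local\Vidalia\torrc"
```

Without the -f flag, Tor may fall back to a different (or default) config file and silently ignore the exit-node restriction.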

Related

Strange problem with varnish automatically reverts varnish.service file

I am facing an extremely rare and peculiar problem.
We use Magento 2 on many websites, and it uses Varnish almost out of the box. We run into problems occasionally, but they are rare and easily fixable.
Yesterday, we noticed something really strange.
The file /lib/systemd/system/varnish.service somehow reverts to its default form, without us updating or changing it. When it reverts, Varnish stops working (on a default installation Varnish listens on port 6081, but usually everybody changes this to port 80). So the fix is really easy, but it's really frustrating. I have seen this on different Varnish versions too, both 5 and 6.
Does anybody know if Varnish is autoupdating these files somehow?
Many thanks, I am at your disposal for further explanations.
The fact that /lib/systemd/system/varnish.service is reverted shouldn't really be a problem, because you should have a copy of that file in /etc/systemd/system that contains the appropriate values.
If you perform sudo systemctl edit varnish and perform changes, a new file called /etc/systemd/system/varnish.service.d/override.conf will be created.
If you call sudo systemctl edit --full varnish, a file called /etc/systemd/system/varnish.service will be created.
It's also possible to do this manually by running sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/, but this also requires calling sudo systemctl daemon-reload.
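Putting the manual route together, a minimal sketch of persisting the usual 6081-to-80 port change (the sed pattern assumes the stock `-a :6081` listen flag in the packaged unit; check your ExecStart line first):

```shell
# Copy the unit out of /lib so package updates can't clobber your edit
sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/

# Edit the copy: change the listen address from :6081 to :80
# (assumes the default "-a :6081" flag is present in ExecStart)
sudo sed -i 's/-a :6081/-a :80/' /etc/systemd/system/varnish.service

# Make systemd pick up the new unit file, then restart Varnish
sudo systemctl daemon-reload
sudo systemctl restart varnish
```

Since units in /etc/systemd/system take precedence over /lib/systemd/system, the package can rewrite its own copy all it likes without affecting yours.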
Have a look at the following tutorial, which explains the systemd configuration of Varnish for Ubuntu: https://www.varnish-software.com/developers/tutorials/installing-varnish-ubuntu/#systemd-configuration.
If you're using another distribution you can find the right tutorial on https://www.varnish-software.com/developers/tutorials/#installations.

Using a fixed server ID with Ookla speedtest-cli

Some time ago, I set up a Linux task to run speedtest-cli every 30 minutes to figure out a network issue. The task used the "--server ID" argument to test against the same server each time. I used it for a while, then forgot about it. Today I went back to revisit this, only to find that the API seems to have changed: the --list argument no longer prints a list of hundreds of servers, but only the few (~10) nearest you. In my case, the servers it reports seem to change at least daily. Requesting a speedtest to any server ID not reported in the list fails. Has anyone figured out a way to run a periodic speedtest against a fixed server using speedtest-cli or any other tool?
If you are still looking for a solution, here is my suggestion.
While this does not use speedtest-cli (which is no longer supported; you should look at the Ookla Speedtest command-line client instead), I believe this is what you are looking for. I'm running this in a Debian VM, but if you have a spare RPi you can dedicate to the task, you may want to check this out.
https://github.com/geerlingguy/internet-pi
You can modify the docker-compose file to hard-code the server ID of your choice. You can get the ID from the Ookla Speedtest command-line client.
You would need to run the command:
speedtest -L
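Once you have picked an ID from that list, a crontab sketch for the original every-30-minutes setup (the server ID 12345 and the log path are placeholders; substitute your own):

```shell
# Run the Ookla CLI against one fixed server every 30 minutes,
# appending machine-readable JSON results to a log.
# (12345 and /var/log/speedtest.log are examples.)
*/30 * * * * speedtest --server-id=12345 --format=json >> /var/log/speedtest.log 2>&1
```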
Good Luck!

Ubuntu apt-get with .pac files

I would like to use Ubuntu's apt-get on my computer, which is behind a company proxy server. Our browsers use a .pac file to figure out the right proxy. In the past, I browsed through that file and manually picked a proxy, which I used to configure apt, and that usually worked. However, they are constantly changing this file, and I'm getting so tired of this that I would love if I could just somehow tell apt to use the .pac file.
I've done some research, but every thread I found regarding this topic so far ended in: 'it's probably not possible, but why don't you just read the .pac file and manually pick a proxy'. It's a pain in the neck and sounds like computer stone age, that's why.
I find it hard to believe that something as - what I believe to be, but I may be wrong - ubiquitous as the .pac method has not been addressed yet by Ubuntu. Can someone give me a definitive answer for that? Are there other distributions that allow that sort of thing?
The console’ish way
If you don't want to use dconf-editor, or if you use another flavor of Linux, you can use this second method.
Create a .proxy file in your home directory. Make it readable and writable only by yourself, as we will store your credentials in there (including your password).
...
Optional step: if you want your proxy setting to be propagated when you’re using sudo, open the sudo config file with sudo visudo and add the following line after the other Defaults lines:
Defaults env_keep += "http_proxy https_proxy no_proxy"
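For apt itself, once you have picked a proxy out of the .pac file, the setting usually goes into an apt.conf fragment rather than an environment variable — a sketch with a placeholder host and port:

```shell
# Write an apt.conf fragment pointing apt at the proxy chosen from
# the .pac file (proxy.example.com:8080 is a placeholder).
sudo tee /etc/apt/apt.conf.d/95proxy >/dev/null <<'EOF'
Acquire::http::Proxy "http://proxy.example.com:8080/";
Acquire::https::Proxy "http://proxy.example.com:8080/";
EOF
```

This still means re-reading the .pac file whenever it changes, unfortunately; apt has no built-in PAC support, which is exactly the asker's complaint.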
From http://nknu.net/ubuntu-14-04-proxy-authentication-config/

IPTables rules being applied multiple times at startup

Specifically, I'm talking about an Ubuntu 10.04 LTS (Lucid Lynx) server, although it's probably applicable to other Linux versions.
I was trawling through the logs for a few websites, doing some spring cleaning so to speak, and noticed a few IP addresses that have been acting dodgy, so I wanted to add them to the blacklist.
Basically I got playing around with IPtables; the blacklist of IPs is just a text file. I then created a shell script to loop through the text file and block each IP address in IPtables.
This worked fine when the shell script was run manually. But obviously I wanted it to run automatically at startup, for whenever the server may be rebooted, so I added a call to the shell script in
Code:
/etc/network/if-pre-up.d/iptables
So it now looks like
Code:
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.up.rules
sh /etc/addBlacklist.sh
So I rebooted the server and the blacklist rules were applied, but it seems they were applied multiple times, as in duplicate lines appearing when iptables -L is run.
Just wondering if anyone would know the reason for this?
I suppose it doesn't really matter in the grand scheme of things but I'm curious.
I never did find out why they were being applied multiple times, but I just removed the separate blacklist file and amalgamated it into the iptables.up.rules file.
Not as pretty but stops the duplication.
Just add iptables -F at the start of the script so that when it runs, it automatically flushes the old entries and then blocks the IPs again.
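An alternative to flushing everything: make the blacklist script idempotent by checking each rule with `iptables -C` before adding it, so running it twice (as the if-pre-up.d hook apparently does) cannot create duplicates. A sketch, assuming the blacklist lives in /etc/blacklist.txt with one IP per line (note that -C needs a reasonably recent iptables):

```shell
#!/bin/sh
# addBlacklist.sh sketch: only append a DROP rule if it is not
# already present, so repeated runs at boot stay duplicate-free.
# (/etc/blacklist.txt is an example path.)
while read -r ip; do
    [ -n "$ip" ] || continue
    # -C returns 0 if the rule already exists; append only if it doesn't
    iptables -C INPUT -s "$ip" -j DROP 2>/dev/null \
        || iptables -A INPUT -s "$ip" -j DROP
done < /etc/blacklist.txt
```

This avoids the brief window where `iptables -F` leaves the box with no rules at all during startup.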

Using directory traversal attack to execute commands

Is there a way to execute commands using directory traversal attacks?
For instance, I access a server's /etc/passwd file like this:
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
..... and get an output?
To be clear: I've found the vulnerability in our company's server. I'm looking to raise the risk level (or earn bonus points for me) by proving that it may give an attacker complete access to the system.
Chroot on Linux is easily breakable (unlike FreeBSD's jail). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
If you are able to view /etc/passwd because the document root or a Directory block is not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application passes user input (a filename) to calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
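To demonstrate the difference, if user input does reach a shell call, the classic proof is appending a command separator to the parameter. The URL and parameter name below are purely hypothetical, for illustration only:

```shell
# Hypothetical: if ?file= is passed unsanitized to shell_exec/popen,
# the ";id" suffix runs a second command and its output leaks back
# in the response. If you only get the file contents and no id output,
# you are likely looking at a read-only path traversal, not injection.
curl 'http://server.com/view?file=/etc/passwd;id'
```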
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (to answer the first question) if the application is really, really bad in terms of security.
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments as they were deemed sarcastic and blunt. OK, now that more information has come from gAMBOOKa about this (Apache on Fedora, which you should have put in the question), I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
In both cases, include your httpd.conf when posting to their forums.
To minimize access to files like passwd, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside it. If you have a spare box lying around to experiment with (or, even better, use VMware to simulate an environment identical to your Apache/Fedora setup), run the httpd server within the VM and access the virtual machine remotely to check whether the exploit is still present. Then chroot/sandbox it and re-run the exploit.
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if there is minimal impact to running the web server in a sandboxed/chrooted environment, push them to do so.
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd, then the server must be poorly configured.
If you really want to execute commands, you need to know whether the PHP script running on the server contains any system() call, so that you can pass commands through the URL,
e.g.: url?command=ls
Also try to view the .htaccess files; that may do the trick.
