I am facing an extremely rare and peculiar problem.
We use Magento 2 on many websites, and it uses Varnish almost out of the box. We do run into problems, but they are rare and easily fixed.
Yesterday, we noticed something really strange.
The file /lib/systemd/system/varnish.service is somehow reverting to its default form, without us updating or changing it. When it reverts, Varnish stops working, because the default installation configures Varnish on port 6081, whereas usually everybody changes this to port 80. So the fix is really easy, but it's really frustrating. I have seen this on different versions too, both 5 and 6.
Does anybody know whether Varnish is auto-updating these files somehow?
Many thanks, I am at your disposal for further explanations.
The fact that /lib/systemd/system/varnish.service is reverted shouldn't really be a problem, because you should have a copy of that file in /etc/systemd/system that contains the appropriate values.
If you perform sudo systemctl edit varnish and perform changes, a new file called /etc/systemd/system/varnish.service.d/override.conf will be created.
If you call sudo systemctl edit --full varnish, a file called /etc/systemd/system/varnish.service will be created.
It's also possible to do this manually by running sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/, but this also requires calling sudo systemctl daemon-reload.
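For illustration, a drop-in override that moves Varnish to port 80 might look like the following. The varnishd flags shown here are assumptions; copy the real ExecStart line from your current unit and change only the -a listen address:

```
# /etc/systemd/system/varnish.service.d/override.conf
[Service]
# The empty ExecStart= clears the packaged command before redefining it.
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,256m
```

After saving, run sudo systemctl daemon-reload and restart Varnish. Since /etc/systemd/system takes priority over /lib/systemd/system, a package update that rewrites the packaged unit will no longer revert your port.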
Have a look at the following tutorial, which explains the systemd configuration of Varnish for Ubuntu: https://www.varnish-software.com/developers/tutorials/installing-varnish-ubuntu/#systemd-configuration.
If you're using another distribution you can find the right tutorial on https://www.varnish-software.com/developers/tutorials/#installations.
Related
Ever since I ran the command dnscmd /Config /SocketPoolSize 9100, my Windows 2008 R2 DC has not been working properly: after rebooting, it is stuck for hours at "Applying Computer Settings" before it finally logs in. Obviously 9100 was a big mistake, but I figured that by re-running the same command with 2500 (the default) things would be fine. I was wrong.
So I then deleted SocketPoolSize directly from the registry, under SYSTEM\CurrentControlSet\services\DNS\Parameters, in hopes of undoing it, but the system still takes hours before I can finally log in. I have a feeling that the dnscmd command I ran and reverted is just a coincidence, and perhaps something else is causing the Exchange service to get stuck at "Starting".
This is also preventing some services from starting (i.e. Exchange 2010 - not best practice on a DC, I know, but it was never a problem).
Is there anything else that happens after running that command? Is there a way I can undo it using the same command or some other way (rather than through the registry)?
It turns out it had nothing to do with the dnscmd command I ran. I had forgotten that, days ago, I temporarily disabled TCP/IPv6 on the network adapter but never rebooted until today, after running the dnscmd. This incorrectly led me to believe it had something to do with dnscmd.
I would like to use Ubuntu's apt-get on my computer, which is behind a company proxy server. Our browsers use a .pac file to figure out the right proxy. In the past, I browsed through that file and manually picked a proxy, which I used to configure apt, and that usually worked. However, they are constantly changing this file, and I'm getting so tired of this that I would love if I could just somehow tell apt to use the .pac file.
I've done some research, but every thread I found regarding this topic so far ended in: 'it's probably not possible, but why don't you just read the .pac file and manually pick a proxy'. It's a pain in the neck and sounds like computer stone age, that's why.
I find it hard to believe that something as - what I believe to be, but I may be wrong - ubiquitous as the .pac method has not been addressed yet by Ubuntu. Can someone give me a definitive answer for that? Are there other distributions that allow that sort of thing?
The console’ish way
If you don’t want to use dconf-editor, or if you use another flavor of Linux, you can use this second method.
Create a .proxy file in your home directory. Make it readable and writable only by yourself, as we will store your credentials in there (including your password).
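For example, the file might contain something like this (host, port, and credentials are placeholders, not real values):

```
# ~/.proxy - placeholders only; substitute your proxy host, port, and credentials
export http_proxy="http://username:password@proxy.example.com:8080"
export https_proxy="$http_proxy"
export no_proxy="localhost,127.0.0.1"
```

Restrict it with chmod 600 ~/.proxy, then source it from your shell startup file (e.g. add ". ~/.proxy" to ~/.bashrc).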
...
Optional step: if you want your proxy setting to be propagated when you’re using sudo, open the sudo config file with sudo visudo and add the following line after the other Defaults lines:
Defaults env_keep += "http_proxy https_proxy no_proxy"
From http://nknu.net/ubuntu-14-04-proxy-authentication-config/
Is there a way to execute commands using directory traversal attacks?
For instance, I can access a server's /etc/passwd file like this:
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
..... and get the output?
To be clear: I've found the vulnerability on our company's server. I'm looking to raise the risk level (or earn bonus points for myself) by proving that it may give an attacker complete access to the system.
chroot on Linux is easily breakable (unlike FreeBSD jails). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
If you are able to view /etc/passwd because the document root or Directory access is not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application passes user input (a filename) to calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
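For illustration, here is that second case reduced to a shell one-liner (the path /var/www/files and the parameter value are invented): untrusted input is spliced into a command string, so a "filename" carrying a shell metacharacter turns a file read into command execution.

```shell
# Hypothetical vulnerable pattern: user input concatenated into a command line.
# A "filename" of 'x; echo injected' makes the shell run a second command.
filename='x; echo injected'
sh -c "cat /var/www/files/$filename" 2>/dev/null
# cat fails on the nonexistent file, but the injected echo still runs.
```

A patched application would pass the filename as a single argument (or validate it against a whitelist) instead of building a command string.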
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (the first question) if the application is really, really bad in terms of security.
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments, as they were deemed sarcastic and blunt. Now that more information has come from gAMBOOKa about this - Apache with Fedora, which you should have put into the question - I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
It should be noted: include your httpd.conf when posting to both forums.
To minimize access to passwd files, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside the sandbox. Do you have a spare box lying around to experiment with? Even better, use VMware to simulate an environment identical to the Apache/Fedora setup you are using - try to get it IDENTICAL - run the httpd server within VMware, and access the virtual machine remotely to check whether the exploit is still present. Then chroot/sandbox it and re-run the exploit...
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if there is minimal impact to running the webserver in a sandboxed/chrooted environment, push them to do so...
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd, then the server must be poorly configured...
If you really want to execute commands, you need to know whether the PHP script running on the server contains any system() call, so that you can pass commands through the URL,
e.g. url?command=ls
Try to view the .htaccess files... that may do the trick.
How come I always get
"GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)"
when I start 'gedit' from a shell from my superuser account?
I've been using GUI apps as a logged-in user and as a secondary user for 15+ years on various UNIX machines. There's plenty of good reasons to do so (remote shell, testing of configuration files, running multiple sessions of programs that only allow one instance per user, etc).
There's a bug report on Launchpad that explains how to eliminate this message by setting the following environment variable:
export DBUS_SESSION_BUS_ADDRESS=""
The technical answer is that gedit is a Gtk+/GNOME program and expects to find a current gconf session for its configuration. But when you run it as a separate user who isn't logged in on the desktop, it doesn't find one, so it spits out a warning telling you. The failure should be benign, though, and the editor will still run.
The real answer is: don't do that. You don't want to be running GUI apps as anything but the logged-in user, in general. And you never want to be running any GUI app as root, ever.
For some distributions (RHEL, CentOS) you may need to install the dbus-x11 package ...
sudo yum install dbus-x11
Setting and exporting DBUS_SESSION_BUS_ADDRESS to "" fixed the problem for me; I only had to do this once, and the problem was permanently solved. However, if you have a problem with your umask setting, as I did, then the GUI applications you are trying to run may not be able to properly create the directories and files they need to function correctly.
I suggest creating (or, have created) a new user account solely for test purposes. Then you can see if you still have the problem when logged in to the new user account.
I ran into this issue myself on several different servers. I tried all of the suggestions listed here: making sure ~/.dbus had the proper ownership, service messagebus restart, etc.
It turns out that my ~/.dbus was mode 755, and the problem went away when I changed the mode to 700. I found this by comparing known working servers with the servers showing this error.
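A quick way to check and repair the ownership and mode is sketched below (this assumes GNU coreutils stat; adjust the flags on other platforms):

```shell
# ~/.dbus must belong to you and be closed to everyone else (mode 700);
# sudo-launched GUI apps sometimes leave root-owned or group-readable files here.
mkdir -p ~/.dbus
chown -R "$(id -un)" ~/.dbus
chmod 700 ~/.dbus
stat -c '%a' ~/.dbus   # should print 700
```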
I understand there are several different answers to this problem, as I have been trying to solve it for three days.
The one that worked for me was to
rm -r .gconf
rm -r .gconfd
in my home directory. Hope this helps somebody.
I have Apache running on a public-facing Debian server, and am a bit worried about the security of the installation. This is a machine that hosts several free-time hobby projects, so none of us who use the machine really have the time to constantly watch for upstream patches, stay aware of security issues, etc. But I would like to keep the bad guys out, or if they get in, keep them in a sandbox.
So what's the best, easy-to-set-up, easy-to-maintain solution here? Is it easy to set up a user-mode Linux sandbox on Debian? Or maybe a chroot jail? I'd like to have easy access to files inside the sandbox from the outside. This is one of those times when it becomes very clear to me that I'm a programmer, not a sysadmin. Any help would be much appreciated!
Chroot jails can be really insecure when you are running a complete sandbox environment. Attackers have complete access to kernel functionality and may, for example, mount drives to access the "host" system.
I would suggest that you use linux-vserver. You can see linux-vserver as an improved chroot jail with a complete Debian installation inside. It is really fast, since it runs within a single kernel and all code is executed natively.
I personally use linux-vserver to separate all my services, and there are only barely noticeable performance differences.
Have a look at the linux-vserver wiki for installation instructions.
regards, Dennis
I second what xardias says, but recommend OpenVZ instead.
It's similar to Linux-Vserver, so you might want to compare those two when going this route.
I've set up a webserver with a proxying HTTP server (nginx), which then delegates traffic to different OpenVZ containers (based on the hostname or the requested path). Inside each container you can set up Apache or any other webserver (e.g. nginx, lighttpd, ...).
This way you don't have one Apache for everything, but could create a container for any subset of services (e.g. per project).
OpenVZ containers can quite easily be updated all together ("for i in $(vzlist -H -o ctid); do vzctl exec $i apt-get upgrade; done")
The files of the different containers are stored on the hardware node, so you can quite easily access them by SFTPing into the hardware node.
Apart from that, you could add a public IP address to some of your containers, install SSH there, and then access them directly.
I've even heard of SSH proxies, so the extra public IP address might be unnecessary even in that case.
You could always set it up inside a virtual machine and keep an image of it, so you can re-roll it if need be. That way the server is abstracted from your actual computer, and any viruses and so forth are contained inside the virtual machine. As I said before, if you keep an image as a backup, you can restore to your previous state quite easily.
To make sure it is said: chroot jails are rarely a good idea. Despite the intention, they are very easy to break out of; in fact, I have seen it done by users accidentally!
No offense, but if you don't have time to watch for security patches, and stay aware of security issues, you should be concerned, no matter what your setup. On the other hand, the mere fact that you're thinking about these issues sets you apart from the other 99.9% of owners of such machines. You're on the right path!
I find it astonishing that nobody mentioned mod_chroot and suEXEC, which are the basic things you should start with, and, most likely the only things you need.
You should use SELinux. I don't know how well it's supported on Debian; if it's not, just install CentOS 5.2 with SELinux enabled in a VM. That shouldn't be too much work, and it is much, much safer than any amateur chrooting, which is not as safe as most people believe.
SELinux has a reputation for being difficult to administer, but if you're just running a webserver, that shouldn't be an issue. You might just have to set a few booleans with setsebool to let httpd connect to the DB, but that's about it.
While all of the above are good suggestions, I also suggest adding an iptables rule to disallow unexpected outgoing network connections. Since the first thing most automated web exploits do is download the rest of their payload, preventing the network connection can slow the attacker down.
Some rules similar to these can be used (beware: your webserver may need access to other protocols):
iptables --append OUTPUT -m owner --uid-owner apache -m state --state ESTABLISHED,RELATED --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --protocol udp --destination-port 53 --jump ACCEPT
iptables --append OUTPUT -m owner --uid-owner apache --jump REJECT
If using Debian, debootstrap is your friend, coupled with QEMU, Xen, OpenVZ, Lguest, or a plethora of others.
Make a virtual machine. Try something like VMware or QEMU.
What problem are you really trying to solve? If you care about what's on that server, you need to prevent intruders from getting into it. If you care about what intruders would do with your server, you need to restrict the capabilities of the server itself.
Neither of these problems can be solved with virtualization without severely crippling the server itself. I think the real answer to your problem is this:
Run an OS that provides you with an easy mechanism for OS updates.
Use the vendor-supplied software.
Back up everything often.