Guides I've found via Google indicate that Apache virtual hosts should be configured through /etc/apache2/sites-available (on Debian). But I configured my server via apache.conf, without sites-available and sites-enabled, and my sites work fine.
What's the difference between these two approaches, and how does it affect security, performance, or anything else?
Basically, all the configuration snippets get merged into one configuration before parsing, as if you had written them in a single file (which is what you're actually doing). So it makes no difference to Apache's performance.
However, if you put the configuration for a single vhost into a separate file, adding or removing a vhost becomes a simple file operation, which makes the setup easy to maintain and automate.
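On Debian, that per-file workflow is what the a2ensite/a2dissite helpers support: they just manage symlinks in sites-enabled that point at files in sites-available. A typical sequence (the site name is illustrative):

sudo nano /etc/apache2/sites-available/example.com.conf
sudo a2ensite example.com.conf     # symlinks the file into sites-enabled/
sudo apachectl configtest          # check syntax before reloading
sudo systemctl reload apache2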
I'm configuring a new Apache 2.4 webserver. I've installed and run Apache many times in the past, but I always tended to stick to what I learned in the early days when it came to configuration files. In particular, I'd usually just have one virtualhosts.conf file somewhere and add all virtual hosts to it, with the first one in the file automatically becoming the default host served if someone visited the server by IP address alone. Where a server had multiple IPs, the default host would be the first one in the file configured to answer on that IP.
However, with the new box I'm trying to do things "properly", which I gather means that in /etc/apache2/vhosts.d I should create a separate config file for each virtual-hosted website. That works, and is fine so far, but there doesn't seem to be any specific mechanism for specifying which one should be the default for each IP on the server, other than their alphabetical order. OK, I can work around this by naming the one I want first 00-name.conf so that it sorts first in ASCII, but that seems a bit clunky. Is there a formal method for telling Apache that a specific virtual host, out of the many that may exist for a given IP, is to be the default one?
I've tried Googling, and reading the Apache documentation, but haven't found anything that answers this specific question.
There's nothing in the Apache documentation because that default configuration is a Debian/Ubuntu invention. In that environment, using numbered config files to determine the default virtual host for a set of name-based virtual hosts is absolutely a standard thing to do.
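For example, the usual way to make the choice explicit rather than accidental is a catch-all vhost in a file that sorts before the others (the file name and paths are illustrative):

# /etc/apache2/vhosts.d/00-default.conf
<VirtualHost *:80>
    # With name-based vhosts, Apache falls back to the first vhost defined
    # for a given address:port when no ServerName matches, so the file's
    # sort position is what makes this the default.
    ServerName default.example.com
    DocumentRoot /srv/www/default
</VirtualHost>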
I have a Perl server which needs the ability to read and write users' files and data. The users are authenticated via LDAP, so I can verify passwords and learn their home directories.
From here I need some way for this webserver (running as www-data) to access their files. I've thought about running every command through su/sudo but that's really not optimal when I just need to open/write/close/glob files in their home directories.
Is there standard practice for this? I haven't been able to turn up anything so far.
Notes
I want the files in their home directory, as the users will be SSHing in and running other commands on them that won't be available via the web
The web connection is made over HTTPS of course.
Related
How to successfully run Perl script with setuid() when used as cgi-bin?
You might want to reconsider your architecture. This sounds like a job for virtual hosts in an ISP-like configuration.
First, read the "Dynamically configured mass virtual hosting" page in the Apache VirtualHost documentation. Then read about how to run each virtual host as a different user.
Under this approach you would have a vhost for each user at $user.example.com; when Apache forks off a worker for the vhost, the fork runs setuid as the appropriate user. You then set up the DocumentRoot and ScriptAlias for each vhost to point at the site code.
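As a minimal sketch of such a per-user vhost, assuming the third-party mpm-itk module (its AssignUserID directive handles the vhost's requests as the given user rather than www-data; names and paths are illustrative, and you would add your usual mod_ssl directives for HTTPS):

<VirtualHost *:80>
    ServerName alice.example.com
    DocumentRoot /home/alice/public_html
    ScriptAlias /cgi-bin/ /home/alice/cgi-bin/
    # mpm-itk: run this vhost's request handling as alice, not www-data
    AssignUserID alice alice
</VirtualHost>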
Long story short, it's probably better to use Apache's (well-tested and well-documented) features for managing user identity than it is to do it in Perl or whip up your own suid wrapper. It's notoriously difficult to get it right.
Are you running Apache? This sounds like a job for WebDAV.
The trouble is that your web server is running as www-data. By design, it won't be able to change the owner of any file. Some other privileged process will need to change ownership on the webserver's behalf.
You could write a minimal setuid script to handle changing the ownership of files and deleting them, but this path is fraught with peril (especially if you've never written a setuid program before).
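If you take the WebDAV route suggested above, the Apache side is small. A bare-bones sketch, assuming mod_dav and mod_dav_fs are enabled (the path and auth details are illustrative; a real deployment would use mod_authnz_ldap to authenticate against the existing LDAP directory):

<Directory /home/alice/public_html>
    Dav On
    AuthType Basic
    AuthName "User files"
    # Illustrative flat-file auth; swap in mod_authnz_ldap directives
    AuthUserFile /etc/apache2/dav.passwd
    Require valid-user
</Directory>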
I've started to optimize my WordPress blog, and one of the things I need to do is configure the .htaccess file. I have a VPS running CentOS, so should I move all my rules from .htaccess to httpd.conf? Also, I use W3 Total Cache for WordPress, and this plugin inserted some rules into my .htaccess file; what should I do with that code?
Thanks!
Apache best practice is to use your system or vhost config (if necessary with <Directory> sections) and to disable .htaccess files, provided you have (root) access to your system config. The upside is that there are fewer pitfalls in using the root config, and there is also a performance bonus. The downside is that you will need to restart Apache whenever you change the configuration.
People (and the Apache docs) often discuss the performance hit of using .htaccess files, but in terms of CPU load it is minimal (~1 ms per file parsed). The main cost is in I/O: if .htaccess files are enabled, Apache searches the entire path to any requested file for putative .htaccess files. On a dedicated system these will all end up in the VFS cache, but the probes still involve a lot of cache I/O, which can add up to ~10 ms per request even when fully cached.
One other thing to be aware of is that root, vhost and directory rules are by nature single-pass, whereas .htaccess processing is a loop which keeps selecting and retrying the deepest .htaccess file until no more rewrite changes are made. There are also subtle syntax differences in the regexps and substitution rules.
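As a sketch of what the move looks like, here are the standard WordPress permalink rules relocated from .htaccess into a <Directory> block in the server/vhost config, with .htaccess scanning switched off (the path is illustrative, and remember the change only takes effect after an Apache restart):

<Directory /var/www/html>
    AllowOverride None            # stop Apache probing for .htaccess files
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
</Directory>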
Independent of all this is the point that Gerben makes about network tuning, which I endorse 100%, however you end up addressing these issues.
Our client currently has a website on their own domain: we are in the process of setting up their new site on a new server, with the same domain name.
Originally we started work on a subdomain that they could access when need be. As the switchover came closer we pushed the work to the actual domain (on the new server) and continued to make changes, adding a line to our hosts file to ensure we were looking at the new server.
The client wants to see the site as it stands today, before switching the DNS to point to the new server. While we could copy everything back to the original subdomain that is not as easy as we first hoped, as unfortunately there's a few too many links and references to files using the domain name (as opposed to just using relative paths).
One other thing: the code auto-redirects back to the 'proper' domain if it's not currently being used (it's a Magento install) and this stops the possibility of pointing the subdomain document root to the current directory (as the first thing that will happen is that it will see that we're using the subdomain, and will push the client to the original domain).
What are our options? I know that we could get them to change their hosts file, but I'm hoping for something a little less 'techy' for the client.
Is there a proxy server out there that we could run against our own DNS settings, or perhaps some Windows client-side application they could install, to make this a bit simpler?
It would be pretty darn simple to write a program — in pretty much any language — which would change the hosts file for your clients. All they'd need to do is run the program.
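In the simplest case the program only has to append a single line to the hosts file (/etc/hosts on Unix, C:\Windows\System32\drivers\etc\hosts on Windows); the IP address here is illustrative:

203.0.113.10    www.example.com    # point the live domain at the new server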
Alternately (this is more work, and not necessarily any more benefit) you could set up a DNS server on the subnet and configure the client machines to use it as their resolver. I really don't see this being any easier than just (somehow) modifying the hosts file, though.
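If you did go that route, a lightweight resolver such as dnsmasq makes the override a one-liner in its config (the IP is illustrative); the client machines would then just use it as their DNS server:

# dnsmasq.conf: resolve the domain and all its subdomains to the new box
address=/example.com/203.0.113.10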
I wrote my own proxy server for exactly this purpose: http://chiselapp.com/user/evilotto/repository/web-tools/wiki?name=hr-proxy
The standalone executable is not there, but it can be bundled into a starpack fairly easily. It is only a proxy, though, and does not do things like change the user's system proxy settings (meaning the user would need to change those themselves through Internet Options, etc.).
Is there a way to execute commands using directory traversal attacks?
For instance, I can access a server's /etc/passwd file like this:
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
... and get its output?
To be clear here, I've found the vuln in our company's server. I'm looking to raise the risk level (or bonus points for me) by proving that it may give an attacker complete access to the system
Chroot on Linux is easily breakable (unlike FreeBSD's jails). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
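As an illustration, a single ModSecurity rule along these lines (the rule id and message are illustrative) will block obvious traversal attempts even when they are URL-encoded:

SecRuleEngine On
SecRule REQUEST_URI "@contains ../" "id:100001,phase:1,t:urlDecode,deny,status:403,msg:'Path traversal attempt'"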
If you are able to view /etc/passwd because the document root or a <Directory> access control is incorrectly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application feeds user input (a filename) into calls such as popen, exec, system, shell_exec, or their variants without adequate sanitization, then you may be able to execute arbitrary commands.
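For instance, if that filename ends up in a shell command unsanitized, a request along these lines (the script name and parameter are purely illustrative) could chain an extra command onto the file read:

http://server.com/view.php?file=/etc/passwd;id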
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (as to your first question) if the application is really, really bad in terms of security.
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my earlier comments, as they were deemed sarcastic and blunt. Now that more information has come from gAMBOOKa (Apache on Fedora, which you should have put into the question), I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
In both cases, include your httpd.conf when posting.
To minimize access to files like passwd, look into running Apache in a sandboxed/chrooted environment where such files are simply not visible from inside the jail (see the sketch below). If you have a spare box lying around to experiment with, or even better, use VMware to reproduce an environment IDENTICAL to your Apache/Fedora setup, run the httpd server inside the virtual machine and access it remotely to check whether the exploit is still reachable. Then chroot/sandbox it and re-run the exploit.
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if running the webserver sandboxed/chrooted has minimal impact, push them to do so.
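For the chroot step, note that if you are on Apache 2.4 the mod_unixd module has this built in, which is a simpler starting point than a hand-built jail; everything httpd needs at runtime must exist under the chosen directory (the path is illustrative):

ChrootDir /var/chroot/apache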
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd then the server must be poorly configured...
If you really want to execute commands, you need to know whether the PHP script running on the server passes user input to a system() call, so that you can inject commands through the URL, e.g.:
url?command=ls
Also, try to view the .htaccess files; that may do the trick.