I have a web server with two websites: a & b.
a is production.
b is testing/staging.
Whoever wrote these apps before me relies on
Request.ServerVariables("LOGON_USER")
which is assigned when the user authenticates against the server via Windows Authentication. On a this works great; on b there's some weirdness:
I get my login prompt, but I can't use [domain]\myusername to log in; I can do it with \\myusername though, with the same password (AD based). The IIS configs are identical as far as I can tell; the only inconsistency is a DNS CNAME pointing b.domain.com at a.domain.com. Changing that DNS record to point at the IP fixed the problem, but I'm trying to understand what was going on.
Previous DNS record: b.domain.com > a.domain.com
Working DNS record: b.domain.com > 10.0.x.131
It should've been b > a > regular Windows authentication, but for some reason I found myself needing \\. Is it tacking on the domain name twice or something? And what exactly does \\ mean with regard to authentication?
Make sense?
A few thoughts.
Which specific version of the OS is your server running? Microsoft in particular tends to have somewhat different behaviors across versions, and the documentation is version-specific.
It's difficult to answer "what's going on" questions because there's no way to be sure what's correct. I can toss out hypotheses (and will), and if you could phrase the question as "how do I fix this" rather than "what's going on", you could check whether I'm right and respond, probably having gathered a bit more pertinent data along the way.
This sounds like it's more about deep system administration understanding than programming understanding - if you don't get what you need here, you might have better luck asking on serverfault.
That having been said, in the absence of other information, the extra '\' most likely results from one of two things.
It's possible that you have two different parts of the code that each add a '\'. Domain names are in many cases valid both with and without a trailing '\'. Thus, it's quite possible that Windows authentication adds one immediately after the domain name and before the login ID in order to ensure separation between the two. If your DNS CNAME lookup is automatically adding one at the end of the domain name for similar reasons, the two might stack.
It's possible that somewhere in the DNS process the domain went through a converter that escapes control characters (as a way of avoiding certain security exploits). '\' is used as the basis of such escape sequences, and thus requires an escape character of its own ('\').
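For what it's worth, one quick way to narrow it down would be to log what LOGON_USER actually contains on both sites and split it yourself. A minimal sketch (plain Python used purely for illustration; the sample values, including the CONTOSO domain, are hypothetical and not taken from your servers):

# Illustrative only: split a LOGON_USER-style value into domain and user.
def split_logon_user(value):
    domain, sep, user = value.partition("\\")
    return (domain, user) if sep else ("", value)

for sample in ["CONTOSO\\jdoe", "\\jdoe", "\\\\jdoe"]:
    print(sample, "->", split_logon_user(sample))
# CONTOSO\jdoe -> ('CONTOSO', 'jdoe')   normal domain\user
# \jdoe        -> ('', 'jdoe')          one stray separator, empty domain
# \\jdoe       -> ('', '\\jdoe')        two separators stacked, as in the first hypothesis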
I have a WSGI application running in PythonPaste. I've noticed that the default 'Server' header leaks a fair amount of information ("Server: PasteWSGIServer/0.5 Python/2.6").
My knee-jerk reaction is to change it... but I'm curious what others think.
Is there any utility in the server header, or benefit in removing it? Should I feel uncomfortable about giving away information on my infrastructure?
Thanks
Well "Security through Obscurity" is never a best practice; your equipment should be able to maintain integrity against an attacker that has extensive knowledge of your setup (barring passwords, console access, etc). Can't really stop a DDOS or something similar, but you shouldn't have to worry about people finding out you OS version, etc.
Still, no need to give away information for free. Fudging the headers may discourage some attackers, and, in cases like this where you're running an application that may have a known exploit crop up, there are significant benefits in not advertising that you're running it.
I say change it. Internally, you shouldn't see much benefit in leaving it alone, and externally you have a chance of seeing benefits if you change it.
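If you do decide to change it, one low-effort option is a bit of WSGI middleware that rewrites the header on the way out. A rough sketch (the class name and the replacement value are made up for illustration; depending on how PasteWSGIServer injects its own Server header, you may also need to adjust the server side rather than just the app):

class ServerHeaderMiddleware(object):
    # Replace (or add) the Server response header with a fixed value.
    def __init__(self, app, server_name='webserver'):
        self.app = app
        self.server_name = server_name

    def __call__(self, environ, start_response):
        def custom_start_response(status, headers, exc_info=None):
            # Drop any Server header already set, then add our own.
            headers = [(k, v) for (k, v) in headers if k.lower() != 'server']
            headers.append(('Server', self.server_name))
            return start_response(status, headers, exc_info)
        return self.app(environ, custom_start_response)

# app = ServerHeaderMiddleware(app, server_name='webserver')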
Given the requests I find in my log files (like requests for IIS-specific bugs in Apache logs, and I'm sure IIS server logs show Apache-specific requests as well), there are many bots out there that don't care about any such header at all. I guess almost everything is brute force nowadays.
(And actually, since I've set up quite a few instances of Tomcat sitting behind IIS, for example, I guess I would not take the headers into account either if I were trying to hack my way into some server.)
And above all: when using free software, I kind of find it appropriate to give the makers some credit in the statistics.
Masking your version number is a very important security measure. You do not want to give the attacker any information about what software you are running. This feature is available in mod_security, the open source web application firewall for Apache:
http://www.modsecurity.org/
Add this line to your mod_security configuration file:
SecServerSignature "IIS/6.0"
As can be seen from two other questions I asked, I am looking for a secure web server, since there have been discussions at work about how safe Tomcat really is.
But basically, what I found on the net regarding how safe it is is Greek to me. So I was hoping someone could explain how safe Tomcat really is. For example, is it possible to mess with the Java code on the server, or something like that?
I know this is probably a dumb question, but I really can't seem to find an answer that helps me argue that writing our own server is not safer than using Tomcat, or why it might be better to use Tomcat.
Maybe someone knows a good way to secure Tomcat and to minimize certain functions of Tomcat? (I really don't know how else to explain it...)
I hope you can help me.
Thanks in advance!
... dg
Writing your own server? As opposed to using Tomcat? That is a classic case of reinventing the wheel and (unless you are the NSA) likely to result in a less secure server. Rhetorical question: Why not write your own OS to go with it!
Tomcat 6 is a very mature, stable, current, well-understood code base that has had zillions of very, very smart people reviewing it, testing it, and operating it in production for years and years.
Tomcat is very secure.
Maybe Tomcat was pretty insecure before, but nowadays... just having Apache in its name is enough for me to trust it. Anyway, security was ALWAYS imaginary; there is no such thing in real life, so there will always be some factor of (in)security.
The problem with Tomcat is like the problem with Windows: no matter how 'secure' they build it, if there are millions of people out there using it, hackers will have an interest in investing their energy in finding ways to break into it (and eventually they will succeed). So to feel more secure you could consider using something less widely used, but that won't help if a hacker is intentionally attacking your site for some special reason; he will find out what technology you are using, and at that point it would be better if it were Tomcat.
That is why it is very important to 'get married' to open-source technologies like Tomcat: there is little chance of a hole in the system living long, people have a chance to fix things, you can always do the job yourself, you don't have to wait for a new version, etc.
Look at the changelog and count the security issues fixed;
Look at the CVE entries for Tomcat and see how they seem to you.
But all in all, it's a really bad idea to write your own Servlet Container, especially if the Tomcat security arguments are not clear to you.
If you need boss-convincing arguments, show him the servlet spec you would need to implement, and estimate the time on the order of man-years (not kidding!), contrasting this with the 'download, unzip, start' option of using Tomcat.
"I know this is probably a dumb question, but I really can't seem to find an answer that helps me argue that writing our own server is not safer than using Tomcat, or why it might be better to use Tomcat."
What you have to remember is that Tomcat has had thousands of hours of people looking at the code and fixing bugs and holes. Thinking about writing secure code is easy. Doing it is extremely hard. There are lots of little things that can be overlooked, and any of them can contribute to a massive hole.
Tomcat is a secure server. However, it is even more secure to use the Apache web server to proxy it. You can use mod_proxy to connect Apache with Tomcat using the AJP or HTTP protocol. This is the safest configuration, and you can leverage the many plug-in modules available for the Apache HTTP Server.
Some tips for a secure installation:
Create a user to run Tomcat. Do not use the root user.
Uninstall the example applications.
Uninstall the manager application. If you use Apache to proxy Tomcat, you can safely keep the manager and make it available only through your local network.
While I'm no hacker, I'd find it hard to imagine how Tomcat would be your first port of call if you were trying to attack a system - after all, it's running your code and is presumably behind a firewall and fronting servers. If this isn't the case, then it should be!
Once the network is as secure as possible, remember Tomcat is just a servlet engine - you're going to have trouble exploiting it with HTTP requests alone. I'd focus on your application code: things like user authentication and avoiding the various injection attacks - this, in my mind, is the highest risk to your system and will exist whatever server you're running on.
As others have already mentioned, Tomcat is ready for production use and security of Tomcat itself is certainly better than what any small team could achieve while writing their own servlet server.
That said, probably the weakest point in a Tomcat setup is commonly the setup of the underlying OS.
This question is more security related than programming related, sorry if it shouldn't be here.
I'm currently developing a web application and I'm curious as to why most websites don't mind displaying their exact server configuration in HTTP headers, like versions of Apache and PHP, with a complete "mod_perl, mod_python, ..." listing and so on.
From a security point of view, I'd prefer that it would be impossible to find out if I'm running PHP on Apache, ASP.NET on IIS or even Rails on Lighttpd.
Obviously "obscurity is not security" but should I be worried at all that visitors know what version of Apache and PHP my server is running ? Is it good practice or totally unnecessary to hide this information ?
Prevailing wisdom is to remove the server ID and the version; better yet, change them to another legitimate server ID and version - that way the attacker goes off trying IIS vulnerabilities against Apache or something like that. Might as well mislead the attacker.
But honestly, there are so many other clues to go by, I wonder about whether this is worth it. I suppose it could stop attackers using a search engine to find servers with known vulnerabilities.
(Personally, I don't bother on my HTTP server, but it's written in Java and much less vulnerable to the typical kinds of attack.)
I think you usually see those headers because the systems send them by default.
I routinely remove them as they provide no real value and could, as you suggested, reveal information about the server.
Hiding the information in the headers usually just slows down the lazy and ignorant villains. There are many ways to fingerprint a system.
Running nmap -O -sV against an IP will give you the OS and service versions with a fairly high degree of accuracy. The only extra info you're giving away by having your server advertise that information is which modules you have loaded.
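And if you just want to see what your own server currently advertises at the HTTP level, a few lines are enough. A small sketch (Python 3 standard library; the URL is a placeholder for your own host):

from urllib.request import urlopen

# Placeholder URL; point this at your own server.
with urlopen('http://example.org/') as response:
    for name, value in response.getheaders():
        print(name + ': ' + value)
# Look for Server, X-Powered-By and similar headers in the output.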
It seems that some of the answers are missing an obvious advantage of turning off the headers.
Yes, you are all right; turning off the headers (and the status line present, e.g., in directory listings) does not stop an attacker from finding out what software you use.
However, turning this information off prevents malware which uses google to look for vulnerable systems from finding you.
tl;dr: Don't use it as a (or even as THE) security measure, but as a measure to drive away unwanted traffic.
I normally turn off Apache's long header version information with ServerTokens; it adds nothing useful.
One point which nobody has picked up on is that it looks like better security to a prospective client, pen-testing company, etc. if you're giving out less information from your web server.
So giving less information out boosts the perceived security (i.e. it shows you have actually thought about it and done something).
I want to know if people here typically disable SELinux on installations where it is on by default. If so, can you explain why, what kind of system it was, etc.?
I'd like to get as many opinions on this as possible.
I did, three or four years ago, when the defined policies had many pitfalls, creating policies was too hard, and I had 'no time' to learn. This was on non-critical machines, of course.
Nowadays, with all the work done to ship distros with sensible policies, and the tools and tutorials that help you create, fix, and define policies, there's no excuse to disable it.
I worked for a company last year where we were setting it to enforcing with the 'targeted' policy on CentOS 5.x systems. It did not interfere with any of the web application code our developers worked on because Apache was covered by the default policy. It did cause some challenges for software installed from non-Red Hat (or CentOS) packages, but we managed to get around that with our configuration management tool, Puppet.
We used Puppet's template feature to generate our policies. See SELinux Enhancements for Puppet, heading "Future stuff", item "Policy Generation".
Here are some basic steps from the way we implemented this. Note that other than the audit2allow step, this was all automated.
Generate an SELinux template file for some service named ${name}.
sudo audit2allow -m "${name}" -i /var/log/audit/audit.log > ${name}.te
Create a script, /etc/selinux/local/${name}-setup.sh:
#!/bin/sh
# ${name} is the service name, filled in when the script is generated.
SOURCE=/etc/selinux/local
BUILD=/etc/selinux/local
# Compile the .te file into a module, package it, load it, then clean up.
/usr/bin/checkmodule -M -m -o ${BUILD}/${name}.mod ${SOURCE}/${name}.te
/usr/bin/semodule_package -o ${BUILD}/${name}.pp -m ${BUILD}/${name}.mod
/usr/sbin/semodule -i ${BUILD}/${name}.pp
/bin/rm ${BUILD}/${name}.mod ${BUILD}/${name}.pp
That said, most people are better off just disabling SELinux and hardening their systems through other commonly accepted, consensus-based best practices such as the Center for Internet Security's Benchmarks (note that they recommend SELinux :-)).
My company makes a CMS/integration platform product. Many of our clients have legacy third-party systems which still hold important operational data, and most want to go on using these systems because they just work. So we hook up our system to pull data out for publishing, reporting, etc. through diverse means. Having a ton of client-specific stuff running on each server makes configuring SELinux properly a hard, and consequently expensive, task.
Many clients initially want the best in security, but when they hear the cost estimate for our integration solution, the words 'SELinux disabled' tend to appear in the project plan pretty fast.
It's a shame, as defense in depth is a good idea. SELinux is never required for security, though, and this seems to be its downfall. When the client asks 'So can you make it secure without SELinux?', what are we supposed to answer? 'Umm... we're not sure'?
We can and we will, but when hell freezes over, and some new vulnerability is found, and the updates just aren't there in time, and your system is unlucky enough to be ground zero... SELinux just might save your ass.
But that's a tough sell.
I used to work for a major computer manufacturer in third-level support for Red Hat Linux (as well as two other flavors) running on that company's servers. In the vast majority of cases, we had SELinux turned off. My feeling is that if you REALLY NEED SELinux, you KNOW that you need it and can state specifically why you need it. When you don't need it, or can't clearly articulate why, and it is enabled by default, you realize pretty quickly that it is a pain in the rear end. Go with your gut instinct.
SELinux requires user attention and manual permission granting whenever (oh, well) you don't have permission for something. Many people find that it gets in the way and turn it off.
In recent versions, SELinux is more user-friendly, and there is even talk of removing the option to turn it off, or of hiding it so that only knowledgeable users would know how to do it - and it is assumed that those users are precisely the ones who understand the consequences.
With SELinux there's a chicken-and-egg problem: in order to be able to keep it on all the time, you as a user need to report problems to the developers so they can improve it. But users don't like to use it until it's improved, and it won't get improved if not many users are using it.
So it's left ON by default in the hope that most people will use it long enough to report at least some problems before they turn it off.
In the end, it's your call: are you looking for a short-term fix, or for a long-term improvement of the software, which will one day remove the need to ask such a question?
Sadly, I turn SELinux off most of the time too, because a good number of third-party applications, like Oracle, do not work very well with SELinux turned on and/or are not supported on platforms running SELinux.
Note that Red Hat's own Satellite product requires you to turn off SELinux too, which - again, sadly - says a lot about difficulties people are having running complex applications on SELinux enabled platforms.
Usage tips that may or may not be useful to you: SELinux can be switched between enforcing and permissive mode at runtime using setenforce (use getenforce to check the current status). restorecon can be helpful in situations where chcon is cumbersome, but YMMV.
I hear it's getting better, but I still disable it. For servers, it doesn't really make much sense unless you're an ISP or a large corporation wanting to implement fine-grained access-level controls across multiple local users.
Using it on a web server, I had a lot of problems with Apache permissions. I'd constantly have to run
chcon -R -h -t httpd_sys_content_t /var/www/html
to update the security contexts when new files were added. I'm sure this has been solved by now, but still, SELinux is a lot of pain for the limited reward you get from enabling it on a standard website deployment.
I don't have a lot to contribute here, but since it's gone unanswered, I figured I would throw my two cents in.
Personally, I disable it on dev boxes and when I'm dealing with unimportant things. When I am dealing with anything production, or that requires better security, I leave it on and/or spend the time tweaking it to handle things how I need.
Whether or not you use it really comes down to your needs, but it was created for a reason, so consider using it rather than always shutting it off.
Yes. It's brain-dead. It can introduce breakage in standard daemons that's nearly impossible to diagnose. It can also close a door but leave a window open. For example, for some reason on fresh CentOS installs it was blocking smbd from starting via "/etc/init.d/smb". But it didn't block it from starting when invoked as "sh /etc/init.d/smb" or "smbd -D", or when the init.d/smb file was moved to another directory, from which it would start smbd just fine.
So whatever it thought it was doing to secure our systems - by breaking them - it wasn't even doing consistently. We consulted some serious CentOS gurus, and they don't understand the inconsistencies in its behavior either. It's designed to make you feel secure. But it's a facade of security. It's a substitute for doing the real work of locking your system's security down.
I turn it off on all my cPanel boxes, since cPanel won't run with it on.
I do not disable it, but there are some problems.
Some applications don't work particularly well with it. For example, I believe I enabled smartd to try to keep track of my RAID disks' S.M.A.R.T. status, but SELinux would get confused about the new /dev/sda* nodes created at boot (I think that's what the problem was).
You have to download the source of the rules to understand things. Just check /var/log/messages for the "avc denied" messages and you can decode what is being denied. Google "selinux faq" and you'll find a Fedora SELinux FAQ that will tell you how to work through these problems.
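If scanning the log by eye gets tedious, even a small script can summarize what is being denied before you reach for audit2allow. A rough sketch (the log path and the regular expression are assumptions about the typical "avc: denied" message format; on many systems the messages land in /var/log/audit/audit.log instead):

import re
from collections import Counter

# Assumed location; adjust if your AVC messages go to the audit log instead.
LOG = '/var/log/messages'

pattern = re.compile(r'avc:\s+denied\s+\{(?P<perm>[^}]+)\}.*?comm="(?P<comm>[^"]+)"')

counts = Counter()
with open(LOG) as log:
    for line in log:
        match = pattern.search(line)
        if match:
            counts[(match.group('comm'), match.group('perm').strip())] += 1

for (comm, perm), count in counts.most_common():
    # e.g. "    12  smbd            read"
    print('%6d  %-15s %s' % (count, comm, perm))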
I never disable SELinux; my contractor requires me to use it. And if some daemon (with an OSS license, by the way) doesn't have a security policy, it is mandatory to write a (good) one. This is not because I believe SELinux is an invulnerable MAC on Linux - no point giving examples - but because it considerably strengthens operating system security anyway. For web apps, the better OSS security solution is mod_security, so I use both. Most of the problems with SELinux come from the scarce or hard-to-understand documentation, although the situation has improved a lot in recent years.
A CentOS box I had as a development machine had it on and I turned it off. It was stopping some things I was trying to do in testing the web app I was developing. The system was (of course) behind a firewall which completely blocked access from outside our LAN and had a lot of other security in place, so I felt reasonably secure even with SELinux off.
If it's on by default I'll leave it on until it breaks something, then off it goes.
Personally I see it as not providing any security and I'm not going to bother with it.
Under Red Hat, you can edit /etc/sysconfig/selinux and set SELINUX=disabled.
I think under all versions of Linux you can add selinux=0 noselinux to the boot line in lilo.conf or grub.conf.