Our company currently hosts a web application for a customer which runs on Orion Application Server. Unfortunately, support for OAS stopped about 10 years ago after Oracle acquired the source code and turned it into OC4J, so there is very little documentation available other than its Wikipedia page.
The issue I am having is that, now that Chrome/Firefox/Opera are actively blocking insecure SSL connections, the site is inaccessible (Chrome gives the error: "Server has a weak, ephemeral Diffie-Hellman public key").
I believe that to fix this issue I need to specify a list of acceptable ciphers that the server is allowed to use, but with no documentation available I have no idea how or where to set this (if it is even possible).
Has anyone else had this issue and been able to resolve it?
It turned out updating to Java 8 resolved this issue for me; after the update, the server started using TLS 1.2 (though I couldn't work out which exact cipher it was using) and the website is now working in Chrome 45.
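For anyone hitting the same error, one quick way to check which protocol and cipher a server actually negotiates after a change like this is openssl s_client; the hostname and port below are placeholders for your own site:

# Show the negotiated protocol and cipher; replace host/port with your own
openssl s_client -connect www.example.com:443 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'

# Check whether weak export-grade ciphers are still accepted (a handshake failure here is what you want)
openssl s_client -connect www.example.com:443 -cipher 'EXPORT' < /dev/null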
Are there any built-in features that prevent
1) too many login attempts from the same IP
2) too many login attempts for the same username
or should I add this to the web applications myself?
My server already got hacked once because I had a weak password; now I have a 10k-bit key file for my peace of mind.
The Tomcat manager apps seem to me to be the next most dangerous thing, besides someone exploiting my web applications with malicious requests.
Not as far as I know. The best approach would be to disable the Host Manager if you don't need it, or to restrict the Manager app to a single IP address (see the sketch below). As long as you have a strong password and a secure application, you should be fine.
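For example, one way to restrict the Manager app by IP is a RemoteAddrValve in its context file. This is just a sketch; the file path and the allowed addresses below are placeholders for your own setup:

<!-- conf/Catalina/localhost/manager.xml (or the Manager app's META-INF/context.xml) -->
<Context privileged="true">
  <!-- Only accept requests from localhost and one admin workstation (example addresses) -->
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.0\.0\.1|192\.168\.1\.10" />
</Context>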
You want the LockOutRealm:
http://tomcat.apache.org/tomcat-8.0-doc/config/realm.html#LockOut_Realm_-_org.apache.catalina.realm.LockOutRealm
It is configured by default in server.xml in any reasonably recent version of Tomcat 7 and in all versions of Tomcat 8.
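For reference, the stock server.xml wraps the UserDatabaseRealm in a LockOutRealm roughly like this (failureCount and lockOutTime are shown with their documented defaults of 5 attempts and 300 seconds; tune them to your needs):

<!-- conf/server.xml: lock a user account after repeated failed logins -->
<Realm className="org.apache.catalina.realm.LockOutRealm"
       failureCount="5" lockOutTime="300">
  <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
         resourceName="UserDatabase"/>
</Realm>

Note that the LockOutRealm throttles by username, not by source IP, so it covers point 2) of the question; per-IP throttling is usually handled in front of Tomcat (firewall rules, fail2ban and the like).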
Is a standard CentOS installation relatively safe for a web server? (Without considering CMS safety, and only for WordPress.) The contents are:
- Virtualmin & Webmin
- APC caching
- Apache, MySQL and PHP
Everything is installed with default settings.
I installed the CentOS server at home and access it 100% from the local network.
If it is not safe then what is the minimum requirement for safety?
'Safe' is too relative a term really. CentOS 6, Virtualmin and Webmin all have security bugs filed against them, some of which can even be exploited automatically by scripts and packages like Metasploit.
That said, no system will ever be perfectly secure unless you bury it underground with no net connection, so here are some good initial steps to take to improve security a little:
- Turn off services and daemons that you don't need. For instance, if you will use SFTP rather than FTP for file transfer, turn FTP off.
- Enforce a policy of unique and secure passwords of a decent length.
- Install system updates, especially security updates.
- Modify iptables rules to disallow access to unused ports, and look into further iptables settings that can help (see the sketch after this list).
- Consider key-based logins, two- or three-factor authentication, etc., and weigh the pros and cons (the Google Authenticator PAM module is very easy to install, for example).
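As a rough sketch of the iptables point above (assuming SSH on port 22 and a web server on ports 80/443; adjust to your own services):

# Keep established connections and loopback traffic, allow SSH/HTTP/HTTPS, drop the rest inbound
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -P INPUT DROP

# Persist the rules across reboots on CentOS 6
service iptables save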
That's a good start. A key thing is to keep an eye on the server: try to monitor for unusual bandwidth usage or logins.
No box is a fortress, but you can at the very least discourage opportunists.
I have a pretty strange problem with collectd. I'm not new to collectd; I used it for a long time on CentOS-based boxes, but now we have Ubuntu 12.04 LTS boxes, and I have a really strange issue.
So, I'm using version 5.2 on Ubuntu 12.04 LTS. Two boxes residing on Rackspace (maybe important, but I'm not sure). The network plugin is configured using two local IPs, without any firewall in between and without any security (just to set up a simple client-server scenario).
On both machines collectd writes to its configured folders locally as it should, but on the server machine it doesn't write the data received from the client.
I troubleshot with tcpdump, and I can clearly see UDP traffic and collectd data, including the hostname and plugin names from my client machine, arriving at the server, but it is never flushed to the appropriate folder (configured by collectd). I'm also running everything as the root user, to rule out permissions problems.
Does anyone have any idea or similar experience with this? Or some suggestion for troubleshooting it, besides crawling the internet (I think I clicked on every sensible link Google gave me in the last two days) and checking the network layer (which looks fine)?
And just a small note: exactly the same thing happened with the official 4.10.2 version from Ubuntu's repo. After trying to troubleshoot it for hours, I moved to upgrade to version 5.
I'd suggest trying out the quite generic troubleshooting procedure based on the csv and logfile plugins, as described in this answer. As everything seems to be fine locally, follow this procedure on the server, activating only the network plugin (in addition to logfile, csv and possibly rrdtool).
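A stripped-down server-side collectd.conf for that test might look roughly like this (the Listen address and paths are placeholders; the point is to run only logfile, csv and network while debugging):

# /etc/collectd/collectd.conf (server side, minimal debugging setup)
LoadPlugin logfile
LoadPlugin csv
LoadPlugin network

<Plugin logfile>
    LogLevel debug
    File "/var/log/collectd.log"
</Plugin>

<Plugin csv>
    DataDir "/var/lib/collectd/csv"
</Plugin>

<Plugin network>
    # Listen on the local IP the client sends to (placeholder address and default port)
    Listen "10.0.0.1" "25826"
</Plugin>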
So, after finding no way to fix this, I upgraded my Ubuntu to 12.04.2 LTS (3.2.0-24-virtual) and it just started working fine, without any further intervention.
Let me preface this by saying I have basically 0 knowledge of web development. That being said, I'll still try to provide you with as much information as I possibly can. Our client is using IIS7 on a Windows Server 2008 R2 machine. The TortoiseSVN error they're getting is this:
Error: Could not send request body: an existing connection was forcibly closed by the remote host.
Using the powers of Google, it seems there are two possible things that could be occurring here. As it is a 4 GB file, I've seen people mention that it could be a configuration issue (the timeout could be a little short, or I might need to enable a setting somewhere to allow commits of larger files), or that it could be a network issue. It might be useful to note that they can commit smaller files.
I've already tried disabling the firewall, as well as the antivirus, on the server and having them retry, but that didn't work. They are trying to upload from a desktop to the server, and they are on the same network through a gigabit switch. I'm sure I'm missing useful information for you guys, but I'm a total noob to web dev, their setup, and actually understanding what they're trying to do. If you need any more information from me, I'll be glad to provide it.
The problem could be overly strict timeout options configured in Apache's reqtimeout module. I simply disabled it:
a2dismod reqtimeout
/etc/init.d/apache2 restart
Credit to: https://serverfault.com/questions/297562/svn-https-problem-could-not-read-status-line-connection-was-closed-by-ser
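If you'd rather not disable the module entirely, another option is to loosen its limits instead. A sketch (the numbers are only guesses to tune for your commit sizes) for /etc/apache2/mods-available/reqtimeout.conf:

<IfModule reqtimeout_module>
    # Give clients more time to send headers and a large request body
    RequestReadTimeout header=60-120,MinRate=500 body=300,MinRate=500
</IfModule>

followed by /etc/init.d/apache2 restart as above.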
I have configured the IIS 7.5 FTP service to use SSL. I have two environments (one for testing purposes, without SSL). When we activate SSL, users can log on and list and get files maybe one time if they're lucky; then the host (service?) becomes unreachable for some reason. I have no idea what happens or why the FTP "locks" itself. When the FTP is in the "locked" state I am still able to telnet the FTP service, but logins do not work.
The test environment without SSL works perfectly and never locks itself. I have also tried turning off SSL on the production environment and that makes that environment work perfectly too.
So the problem must be with SSL (the certificate is from VeriSign). Has anyone experienced the same problem, or does anyone know what could be the cause of this?
/ Tommy
See this document
Specifically these sections:
Using Windows Firewall with secure FTP over SSL (FTPS) traffic
More Information about Working with Firewalls
(At the bottom)
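In short, those sections boil down to giving FTPS an explicit passive data-channel port range, opening that range in Windows Firewall, and disabling stateful FTP filtering (which cannot inspect the encrypted control channel and tends to cause exactly this "works once, then hangs" behaviour). A rough sketch, assuming a 5000-5100 port range is acceptable in your environment; the appcmd section name is my recollection of the FTP firewall-support settings, and the linked document also shows the same steps via IIS Manager:

rem Define a passive data channel port range for the FTP service
%windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5100 /commit:apphost

rem Open that range in Windows Firewall
netsh advfirewall firewall add rule name="FTPS data channel" dir=in action=allow protocol=TCP localport=5000-5100

rem Stateful FTP filtering cannot follow the encrypted control channel, so turn it off
netsh advfirewall set global StatefulFTP disable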