pdflush on port 80 preventing Apache from restarting - linux

Preliminaries:
+ CentOS 5
+ Plesk 10.4.4 Update #35
Problem: When a domain/host is added or altered in Plesk, it normally writes new (or updates existing) Apache vhost config files and then restarts the Apache service. The updating/rewriting seems to go fine and there are no errors in the files, but lately Apache fails to restart after shutting down because port 80 is unavailable. Further examination via "netstat -tulpn..." shows the following...
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::80 :::* LISTEN 25794/PDFLUSH
tcp 0 0 :::443 :::* LISTEN 25794/PDFLUSH
You can see that PDFLUSH is occupying a high process ID and is sitting on both ports 80 and 443, which prevents Apache from coming back up. I'm having to manually get the PID and issue a kill before I can run "service httpd start" again to get Apache up.
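For the record, the recovery steps I run each time are roughly the following (a rough sketch; it assumes fuser is installed, and lsof -t -i :80 would work just as well for grabbing the PID):
PID=$(fuser -n tcp 80 2>/dev/null)   # PID(s) holding port 80
kill $PID
sleep 2
service httpd start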
In my searching I've seen an old reference to someone being hacked, but I can't find any similar symptoms, and honestly I don't know what to look for in the logs or which log file to look at specifically. I've also heard that this could be a symptom of failing memory, but I don't know how to test memory on a production server.
Please, any help would be greatly appreciated; my heart sinks every time I get an SMS that the server's down again!
EDIT
It's happened again, simply by adding a subdomain. However, this time I was able to run a quick ps -aux prior to killing the PDFLUSH instance and bringing Apache back up...
apache ... ./PDFLUSH -b service.config
Trying to search out the location of that now...
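While the rogue process is still up, /proc can point at the binary and its working directory; a quick sketch, using the PID from the netstat output above:
ls -l /proc/25794/exe /proc/25794/cwd    # path to the executable and its working dir
tr '\0' ' ' < /proc/25794/cmdline; echo  # full command line it was started with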

The good news is that I found the culprit; the bad news is that it is "c99". Just do a Google search on it and you'll find a long history. Now the real fun begins: has the server been rooted?
For those who have similar issues and think it might be the same thing, even if it's using a name other than "PDFLUSH", just do a
find /var/www/vhosts -name PDFLUSH
to figure out where the little bastard is hiding. I found mine in one of my shared hosting clients' sites, buried deep in a directory tree inside the webroot.
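If the file has been renamed on your system, a couple of additional sweeps can help (a rough sketch; adjust the path to your own webroot):
grep -rl 'c99' /var/www/vhosts --include='*.php' 2>/dev/null   # files mentioning the shell's name
find /var/www/vhosts -type f -name '*.php' -mtime -7           # PHP files modified in the last week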

The netstat output that you have included is highly suspect:
The program that you are seeing is called PDFLUSH, with all characters in upper case. This seems like an attempt to evade detection; pdflush (all lower case) is the name of a legitimate kernel thread that handles writing dirty memory pages back to disk. It is highly unlikely that any legitimate program would use such a name.
The legitimate pdflush does not have any networking capabilities - it has nothing to do with networking at all. This one seems to be acting as a web server, yet no web server with such a name exists. Unless you explicitly installed a custom web server with that unfortunate name, you have a problem.
Have you tried connecting to those two ports with netcat or a Telnet client? That might give you a clue on what is going on.
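For example, a quick probe of each port (the Server header, if any, that the fake daemon returns may identify it; port 443 may or may not actually speak TLS):
printf 'HEAD / HTTP/1.0\r\n\r\n' | nc 127.0.0.1 80
openssl s_client -connect 127.0.0.1:443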
As far as testing the memory of your system goes, memtest86 is the de facto standard tool these days. Failing memory, though, usually appears in the form of random crashes - what you are seeing seems way too specific.

Related

linux -- determine what service was running on a down port

I need to write a bash script that:
-- takes an IP address and a list of ports as standard input,
-- checks to see if each port is up or down,
-- if a port is down, restarts the service via ssh
I've got the first two working; however, I am stuck on the last part: determining what service was running on the down port, as I may not know what services the machine is supposed to be running. lsof and netstat are not useful because the service is down.
The assumption is that this script will run on the user's machine to check server status and restart any downed services automagically. It is known that some services may use ports listed in /etc/services for other services (for example, the cPanel customer portal uses 2083, which /etc/services lists as radsec).
Any help is most appreciated, thank you!!
There is no way to determine what nonstandard ports a non-running application may have used. All you can do is check for services which are not running, and (perhaps) restart those that are not running.
Even doing that runs into problems:
some services may not be running for reasons other than loss of connectivity
some services may not give a useful status when asked if they are running (Apache Tomcat, for instance, seems to come with service scripts which never do more than half the job).
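Along those lines, a minimal sketch of the "predefined map" approach (all ports and service names here are assumptions; the map has to be maintained by hand precisely because it cannot be discovered once a service is down, and it requires bash 4 and an nc build that supports -z):
#!/bin/bash
HOST="$1"
# port -> init script name, maintained manually
declare -A SERVICE_FOR_PORT=( [80]=httpd [443]=httpd [2083]=cpanel [3306]=mysqld )
for port in "${!SERVICE_FOR_PORT[@]}"; do
    if ! nc -z -w 3 "$HOST" "$port"; then
        echo "$HOST:$port down - restarting ${SERVICE_FOR_PORT[$port]}"
        ssh "root@$HOST" "service ${SERVICE_FOR_PORT[$port]} restart"
    fi
done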

Configuring Deluge over Tor

I know this has already been discussed, but the Q&A I found here and via Google doesn't address my issue.
I've read that I shouldn't do this for several reasons (and that I should look at the alternatives; an affordable seedbox would probably be the best option). Even so, I'm trying to get it working, but so far I can't.
Based on what I've read, to get it working, first of all I have to run the Tor Browser and start it successfully. I've got that: the Tor Browser is running and everything is OK.
Then I launched Deluge, went to Edit -> Preferences, and for each field (Peer, Web Seeds, Tracker and DHT) set SOCKSv5, 127.0.0.1, port 9050, then restarted Deluge.
But it doesn't work at all... Deluge works fine without that configuration.
I've been trying to track this with Wireshark, and I noticed that the source port for Tor's TCP connections is 9666. I tried that port too and got nothing. I also tried SOCKSv4 and SOCKSv5 with auth. Ping to 127.0.0.1 is OK and I can 'ssh 127.0.0.1'.
nmap 127.0.0.1 -> 22/25/80/111/631/9418 are open.
I'm out of ideas.
I recommend disabling certain features to stay anonymous when using a SOCKS proxy: click the Network tab in the side menu and, under "Network Extras", disable UPnP and NAT-PMP (to disable them you just click each one once), then click Apply and then OK.
In general, I wouldn't use Tor for torrents, as it's horribly slow, plus torrent clients are prone to leaking info even when properly configured to use Tor. You're better off getting Private Internet Access or TorGuard for three or four bucks a month.
I guess if you really wanted to do this, Whonix would be an option, as 100% of traffic would be funneled through Tor. But again, I would go for the VPN option.
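If you do want to keep debugging the proxy setup, one more check worth making (a hedged suggestion; these are the usual default ports, not something taken from your output): the Tor Browser normally exposes its SOCKS port on 9150, while a standalone tor daemon listens on 9050, so Deluge pointed at 9050 may simply have nothing to talk to. For example:
ss -ltnp | grep -E ':(9050|9150)'    # which SOCKS port, if any, is Tor actually listening on?
curl -s --socks5-hostname 127.0.0.1:9150 https://check.torproject.org/ | grep -i -m1 congratulations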

Cannot access websites on apache from outside the server

I have a Debian 7.5 based Ubuntu server running Apache 2.2.22.
It's a fairly vanilla XAMP install used as a basic web server.
It used to work fine and I have no idea why it suddenly stopped working (there was some maintenance today, but it was still working when I left it - I changed partition sizes with GParted).
When I try to access a website from the server itself (tried with w3m), everything works, including PHP and MySQL access.
When I try to access the same host (using a domain) from outside, the browser keeps loading for a long while and eventually (after a few minutes) says the page could not be loaded.
I made sure that the ports are open and accessible with an outside scanner.
So I'm sure Apache is available (it works from inside the network; websites load over SSH using w3m, and ping works).
I'm sure the server is connected to the web (I can use PuTTY to SSH in).
The host resolves to the correct IP (but won't ping from outside, only from inside).
The ports seem to be open (scanned and got an OK for port 80).
I'm not an IT professional, so if there is info I can add that could help, just ask away.
I would really appreciate any idea or direction.
Thanks!
I still suspect the UFW/iptables firewall is blocking all incoming connections... Please go through this article and double-check:
http://www.cyberciti.biz/faq/ubuntu-server-disable-firewall/
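A few quick commands to verify it (assuming UFW and the standard iptables tooling are installed):
sudo ufw status verbose
sudo iptables -L -n -v
sudo iptables -t nat -L -n -v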
If you're sure that the firewall config is OK, please try packet capturing with Wireshark to see what's going on underneath.
How to install: http://www.youtube.com/watch?v=sOTCRqa8U9Y
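Alternatively, a plain tcpdump on the server works too (a sketch; replace eth0 with your actual interface). If SYN packets arrive on port 80 but no reply leaves, the problem is on the server; if nothing arrives at all, it's upstream (router/ISP):
sudo tcpdump -ni eth0 'tcp port 80'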
Thanks for the help,
Oddly enough, it just started working again after 12 hours of not working.
A friend of mine, an IT person, just called to try and help; he simply connected (5 minutes after I tried) and said it was all working for him.
I tried, and it's working for me too.
I have no idea why it stopped working, or why it is working now.
I think it might be an ISP problem or a router issue... The server is in our offices, so I guess it could be either. I just don't understand why SSH would work and HTTP wouldn't.

Slow website even though VPS is up and running

Sorry if this is a bit of a newbie question, but I am quite new to VPSes and their relatively more complicated setup. I have a VPS set up, and once or twice a day the site tries to load for about 10 minutes with no luck. Then, when it comes back online, it's fine after that. Upon logging in to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
The MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be a problem somewhere else? On daily.co.uk, two nameservers are set: ns0.etc*** and ns1.etc***. I did a tracert in the Windows command prompt; it traced down to the server, with two timeouts.
I also tried a check on http://dnscheck.pingdom.com/ while the site was slow, and it came back fine except for this: Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems.
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic check for the issue you are experiencing is dig.
On a Windows system this tool is not available out of the box, but it can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, you'll want to run it from the command prompt with the following syntax:
C:> dig -insert your domain here- +trace
This will show you how DNS resolution happens from your location to the requested end point. Chances are, the error you received is correct. Most DNS setups assign several name servers to a domain registration to allow round-robining of the delegated name servers in the event that one becomes unresponsive.
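For example, a quick way to see how many name servers the zone actually delegates to (yourdomain.example is just a placeholder):
dig NS yourdomain.example +short
You want at least two records back, which matches what the Pingdom check complained about.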
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone, and reduce latency.

Is it a good idea to put Node.js behind nginx

Is it a good idea to put Node.js behind nginx? Also, can someone let me know whether nginx supports HTTP 1.1?
Also, how do I make sure WebSockets work with this setup (Node.js behind nginx)?
If you want WebSockets, don't put it behind nginx. There might be some way that I don't know of, but DotCloud doesn't support WebSockets with Node.js because of its reliance on nginx, and they know nginx pretty well.
I assume you want to run your server on port 80. If node is your main server, that means either:
Running node as root. This is often not ideal, because there is potential for bugs in app code, and with root access they could cause more damage. If a VM is set aside for a very particular purpose, all backups are made outside of the VM, and rebuilding is quick, this may not be a big problem, though.
Using iptables to redirect traffic arriving on port 80 to a higher-numbered port. I set this up and felt it was a good solution (a minimal rule is sketched after this list).
Edit: You can also start node.js as root and drop to a non-root user with setuid after binding to port 80. The Jetty project (a web server for Java) suggests this technique.
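A minimal sketch of the iptables approach (it assumes the node app listens on port 3000; adjust the port, and add an equivalent rule for 443 if you terminate TLS in node):
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000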
nginx doesn't fully support HTTP 1.1 yet. However, work is being done and will possibly be integrated into the development branch soon, so keep your digits crossed, and have a look at this mailing list thread to see what I'm talking about (there are patches, but I haven't tried them yet). More discussion here.
Depending on your needs, you can do what Ben suggests with iptables, although I would also 'stealth' the high port using the mark module; I've put up a simple shell script that will do it for you.
If you need other applications on port 80 you'll need to proxy; haproxy is one option, but you can keep it all Node using the excellent node-http-proxy.