PHP-FPM and Capistrano, "No input file specified" - Linux

I use Capistrano to deploy new versions of a website to servers running nginx and php-fpm, and sometimes php-fpm seems to get a bit confused after a deployment and expects the old files to still exist, generating the "No input file specified" error. I thought it might have something to do with APC, which I uninstalled, but I realized the request doesn't even get as far as APC.
Is there a permission-friendly way to tell php-fpm after deployment that it needs to flush its memory (or similar)? I don't think I want to do sudo restarts.
rlimit_files isn't set in php-fpm.conf, and ulimit -n is 250000.

Nginx has its own rather aggressive file cache. It's worse when NFS is involved, since that has its own cache as well. Tell Capistrano to restart nginx after deployment.
It can also be an issue with your configuration, as Mohammad suggests, but in that case a restart wouldn't fix it, so you can tell the two apart.
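A minimal sketch of what that post-deploy step could run on each server, assuming sysvinit-style services named nginx and php5-fpm (both names are assumptions; adjust to your distribution). A reload is gentler than a full restart and avoids dropping in-flight requests:
sudo /etc/init.d/php5-fpm reload   # gracefully respawns the FPM workers, clearing their cached view of old paths
sudo /etc/init.d/nginx reload      # re-reads the config and replaces workers, dropping nginx's open-file cache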

Related

Recovering loaded Varnish config

I've accidentally copied over my default.vcl and erased my fairly complex configuration. So long as I don't try to reload the configuration or restart Varnish everything is running fine - I'm hoping there's a way to view or "extract" my loaded configuration from Varnish so I don't have to rewrite it from scratch. Thanks for any ideas.
You can log in to the admin console (varnishadm) and run vcl.list to list all the VCLs that are loaded, then vcl.show <name> to display the most recent one.
The VCL list is cleared when the Varnish service is restarted or stopped.
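A minimal example of that flow, assuming the default admin socket and that the loaded configuration is named boot (the name printed by vcl.list is what you pass to vcl.show):
varnishadm vcl.list          # lists the loaded VCLs and their names
varnishadm vcl.show boot     # prints the source of the VCL named "boot"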
I solved my issue by recovering my loaded config file from the raw disk device:
grep -i -a -B100 -A100 'text' /dev/vda
Replaced 'text' with a line of code I remembered from the config.

Having an issue getting the GitLab-CE docker container running on VPS

OK, I have a test setup running on a local server that is running like a champ.
I would like to reimplement this on my VPS. The config file only differs with regard to the mail server section, which is enabled on the VPS but not on my local server.
The most apparent issue (there may be more) is that when I hit my domain:9080 it redirects to the login page but loses the port information. My local install does not.
For the life of me, I cannot figure out what I need to change to fix this.
To get an idea of what I mean, if the above was unclear, you can go to shadow.schotty.com:9080 and that works perfectly (well, obviously not the new user part, as the email isn't set up). schotty.com:9080 has that redirection issue.
As for the obvious questions for me:
Here is the docker publish ports copied from my start script:
--publish 9443:443 --publish 9080:80 --publish 9022:22 \
No, I did not copy over any existing part of the install from the local host. I wanted to document what I did, and since I am using a newer version I wanted to avoid the potential issues that crop up with incompatible config files.
I did copy my startup script, and modified it appropriately for the volume directories.
The only modifications to any configuration files are the mail server section entries.
Thanks to anyone who can toss an idea my way.
Andrew.
OK, I figured a few things out here that should be of help to others.
First off, something had changed since I did the install on shadow, but both now behave the same since they are on the exact same revision.
To fix the web port across the board, you will need to pick a port that is not used by the rest of the GitLab suite, nor by other containers or daemons on the host. 8080 is indeed used, so I chose to stick with 9080.
There are two places where this matters, and each has to be done in a specific way. The first is in the config: you need to set the variable as follows:
external_url 'http://host.domain.tld:9080'
I am sure many tried stopping there and failed (I sure as heck did). The second spot is in the docker container initialization. For some reason this used to work, but does not anymore. The simple fix is to map the external port 1:1 to the internal one. In my case I am using 9080, so the following publish must be used:
--publish 443:443 --publish 9080:9080 --publish 22:22 \
This fixes everything.
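For reference, a sketch of the full docker run invocation that publish line sits in; the image tag, hostname, and volume paths here are generic placeholders rather than the exact values from my start script:
docker run --detach \
    --hostname host.domain.tld \
    --publish 443:443 --publish 9080:9080 --publish 22:22 \
    --name gitlab \
    --volume /srv/gitlab/config:/etc/gitlab \
    --volume /srv/gitlab/logs:/var/log/gitlab \
    --volume /srv/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest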
Now off to other issues :D

NFSClient issue on FreeBSD: "rpc.umntall: not found"

We have a FreeBSD 8 server that hadn't been restarted since it was first booted. It has now been restarted and we're trying to reconnect the NFS mount to it.
$ sudo /etc/rc.d/nfsclient start
NFS access cache time=60
rpc.umntall: not found
The obvious reason for the error rpc.umntall: not found is because the program doesn't exist on the computer.
Is there any way to mount an NFS server that is connected to the network other than using nfsclient? Or can I force the client to move past the part in the script that requires rpc.umntall?
I only ask because it was started before, and I'd be very surprised if we removed any programs from it.
rpc.umntall is installed as part of the base system, usually in /usr/sbin/.
If you take a look at the contents of /etc/rc.d/nfsclient, you'll find this:
unmount_all()
{
	# If /var/db/mounttab exists, some nfs-server has not been
	# successfully notified about a previous client shutdown.
	# If there is no /var/db/mounttab, we do nothing.
	if [ -f /var/db/mounttab ]; then
		rpc.umntall -k
	fi
}
A cheap workaround would be to delete /var/db/mounttab.
However, if you want to fix the problem, you'll want to fix the missing rpc.umntall. Is it not in /usr/sbin/? If not, you could try to restore it from a published image, or you could attempt to build it from source.
If it's somewhere else on the computer, you could try to find it using find / -name rpc.umntall.
If it exists in /usr/sbin, but isn't working, then that would likely mean that something is wrong with the PATH variable being used by your rc subsystem. You could double check that by hardcoding the path to rpc.umntall right in the /etc/rc.d/nfsclient script.
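If you go the hardcoding route, the change is small; a sketch of the edited unmount_all() in /etc/rc.d/nfsclient, assuming the binary really is at /usr/sbin/rpc.umntall:
unmount_all()
{
	# same logic as before, but with an absolute path so rc's PATH no longer matters
	if [ -f /var/db/mounttab ]; then
		/usr/sbin/rpc.umntall -k
	fi
}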

Why does mongodb complain about transparent_hugepage?

A few questions are already asking about how to fix the mongodb warning:
** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
** We suggest setting it to 'never'
But I'm wondering if it should be fixed. I get this warning from MongoDB 3.0.1 on an Ubuntu VM running on Google's Cloud. Should I trust MongoDB that 'never' is better? Or should I trust Google/Ubuntu that they set it to 'always' for a good reason? I imagine there are trade-offs to be considered, and I don't know what I'd be trading by keeping it or fixing it.
Asking how to fix it is fine, but asking whether to fix it seems wiser.
Edit: MongoDB has addressed this issue since I wrote this answer. Their recommendation is at https://docs.mongodb.com/master/tutorial/transparent-huge-pages/ and probably ought to be your go-to solution. My original answer will still work, but I'd consider it a hack now that an official solution is available.
Original answer: According to the MongoDB documentation, http://docs.mongodb.org/manual/reference/transparent-huge-pages/, and support, https://jira.mongodb.org/browse/DOCS-2131, transparent_hugepage (THP) is designed to create fewer large memory blocks rather than many small memory blocks in systems with a lot of memory. This is great if your software needs large contiguous memory accesses. For MongoDB, however, regardless of memory available, it requires numerous smaller memory accesses and therefore performs better with THP disabled.
That makes me think either way will work, but you'll get better mongo (or any database) performance with THP off, giving you smaller chunks of memory. If you don't have much memory anyway, THP probably ought to be off no matter what you run.
Several ways to do that are outlined in the link above. The most universally applicable appears to be editing rc.local.
$ sudo nano /etc/rc.local
Insert the following lines before the "exit 0" line.
...
if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
    echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
exit 0
Note: redhat-based systems may use "redhat_transparent_hugepage" rather than "transparent_hugepage" and can be checked by:
ls /sys/kernel/mm/*transparent_hugepage*/enabled
cat /sys/kernel/mm/*transparent_hugepage*/enabled
To properly apply the changes made above, reboot (which will run rc.local), or:
$ sudo su
# source /etc/rc.local
# service mongod restart
# exit
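To confirm the change took effect, read the files back; the exact set of values varies by kernel, but [never] should be the bracketed (active) entry in both:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
$ cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]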
For Ubuntu using upstart scripts:
Since we are deploying machines with Ansible I don't like modifying rc files or GRUB configs.
I tried using sysfsutils / sysfs.conf but ran into timing issues when starting the services on fast (or slow) machines. It looked like sometimes mongod was started before sysfsutils. Sometimes it worked, sometimes it did not.
Since mongod is an upstart process I found that the cleanest solution was to add the file /etc/init/mongod_vm_settings.conf with the following content:
# Ubuntu upstart file at /etc/init/mongod_vm_settings.conf
#
# This file will set the correct kernel VM settings for MongoDB
# This file is maintained in Ansible
start on (starting mongod)

script
    echo "never" > /sys/kernel/mm/transparent_hugepage/enabled
    echo "never" > /sys/kernel/mm/transparent_hugepage/defrag
end script
This will run the script just before mongod is started.
Restart mongod (sudo service mongod restart) and done.
In Ubuntu I used the option 'Init Script' of this document: http://docs.mongodb.org/manual/tutorial/transparent-huge-pages/
None of these worked for me on an Amazon EC2 instance running Ubuntu 14.04, not even the init.d script recommended by MongoDB. I had to use the hugeadm tool, first installing it via apt-get and then running sudo hugeadm --thp-never; this post pointed me to hugeadm. I'm still trying to figure out how to disable transparent_hugepage defrag, since hugeadm doesn't seem to have an easy way to do that.
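For reference, the hugeadm route looked roughly like this; the package name hugepages is, to the best of my knowledge, what provides hugeadm on Ubuntu 14.04, so verify it for your release. For defrag, the only option I found was the same manual echo used in the rc.local answer above:
sudo apt-get install hugepages     # provides the hugeadm tool
sudo hugeadm --thp-never           # writes "never" to /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag   # defrag still has to be set by hand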

How do I restore CronTab to my WebMin system

I don't know if this was an effect of the Shellshock attack my server fell victim to (or another attack that worked), but it basically enabled the hacker to overwrite my SSH config file when the server rebooted.
This new file used wget to load in a file from a website, then another library of hack functions which I guess he then used to run hacks/DoS attacks from my server. I caught it pretty fast and ideally want to upgrade, but because I have cancer and just had a big operation it is too much effort at the moment.
Therefore I did a lot of housekeeping: changing passwords, removing shell access, reverting back to DASH, replacing the default shell for root and any other users with symbolic links to another folder, restoring the config file for SSH, and removing CGI functionality from config files, e.g.
ScriptAlias /cgi-bin/ /home/searchmysite/cgi-bin/
#
allow from all
#
Removed AWStats and Webalizer for all Virtualmin sites.
I already had DenyHosts and Fail2Ban installed.
I also blocked in/outbound traffic to the IPs of the sites he was getting the files from.
However, it seems that since this change I have lost the visual cron manager in Webmin.
When I go to the menu item "Scheduled Cron Jobs", it says, "The command crontab for managing user Cron configurations was not found. Maybe Cron is not installed on this system?"
However, I can see in the file system that it exists.
When I run crontab -l or crontab -e I get "Permission Denied"
whoami shows "root"
I did think at the time of the hack this was all related and he had used SSH and a Cron job to get his hack running.
What I want to know is how I can get the CronTab manager back.
All the cron jobs are still running such as importing feeds into my websites, running scheduled emails and so on, what I don't know is how to resolve this without a full rebuild.
If I had the time and energy I would do that but I am totally drained and before this hack everything was just running smoothly and my websites which bring me in money were working fine.
They are currently still working fine and I regularly check my logs for IPs that look odd, have strong .htaccess rules for XSS/SQL/path traversal/file hacks, and ban whole countries through Cloudflare, which the site sits behind. So I don't "think" the machine is compromised at the moment, even if it is old - could be wrong though!
Details of the box:
Operating system: Debian Linux 5.0
Virtualmin version: 3.98.gpl GPL
Webmin version: 1.610
Kernel and CPU: Linux 2.6.32.9-rscloud on x86_64
So if anyone can help me get my crontab manager back that would be great.
Thanks
1) Check whether chattr exists; if not, download a fresh copy.
2) Type whereis crontab, then chattr -isa /path/to/crontab (usually /usr/bin/crontab), then chmod crontab back to its original settings (a concrete sketch follows after this list).
3) Navigate to /var/spool/ and run:
chattr -isa cron
cd cron
chattr -isa crontabs
4) Remove the cron entry the attacker added in /etc/cron.weekly; look in /etc/cron.weekly for any new entries you did not create yourself.
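A sketch of steps 2 and 3 on a Debian box; the paths and the root:crontab 2755 ownership/permissions are the usual Debian defaults, so check them against a known-good system before applying:
lsattr /usr/bin/crontab                  # look for the immutable 'i' attribute
chattr -isa /usr/bin/crontab             # clear immutable/secure-deletion/append-only attributes
chown root:crontab /usr/bin/crontab      # typical Debian ownership
chmod 2755 /usr/bin/crontab              # typical Debian setgid permissions
chattr -isa /var/spool/cron
cd /var/spool/cron
chattr -isa crontabs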
