Since this weekend I have been getting an email every hour from my server with the following message:
/etc/cron.hourly/mcelog.cron:
mcelog: Cannot access bus threshold trigger `bus-error-trigger': Permission denied
With the subject: "Cron <root@s1> run-parts /etc/cron.hourly"
On my VPS I run CentOS 6.7 and Plesk v12.0.18.
Does anyone know how I can fix this?
Thanks, Alexander
I've seen this on a couple of Plesk servers with SELinux enabled. The problem is that the security contexts of the scripts under /etc/mcelog are incorrect, so SELinux prevents mcelog from executing them. To fix this, run the following commands as root:
# semanage fcontext -a -t bin_t '/etc/mcelog/.*-error-trigger'
# restorecon -R /etc/mcelog
(If the semanage command is not available, install the policycoreutils-python package. You could just use chcon, but this would not survive a filesystem relabel.)
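To verify the relabel took effect, you can list the contexts; the trigger scripts should now show bin_t (just a sanity check, not part of the fix):
# ls -Z /etc/mcelog/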
See: http://forum.odin.com/threads/mcelog-cron-error.334110/
I am having a problem keeping SSH running on the Windows Subsystem for Linux. It seems that if a shell is not open and running bash, all processes in the subsystem are killed. Is there a way to stop this?
I have tried to create a service using nssm but have not been able to get it working. Now I am attempting to start a shell and then just send it to the background, but I haven't quite figured out how.
You have to keep at least one bash console open in order for background tasks to keep running: as soon as you close your last open bash console, WSL tears down all running processes.
And, yes, we're working on improving this scenario in the future ;)
Update 2018-02-06
In recent Windows 10 Insider builds, we added the ability to keep daemons and services running in the background, even if you close all your Linux consoles!
One remaining limitation with this scenario is that you do have to manually start your services (e.g. $ sudo service ssh start in Ubuntu), though we are investigating how we might be able to allow you to configure which daemons/services auto-start when you login to your machine. Updates to follow.
To maintain WSL processes, I place this file in C:\Users\USERNAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\wsl.vbs
set ws=wscript.createobject("wscript.shell")
ws.run "C:\Windows\System32\bash.exe -c 'sudo /etc/rc.local'",0
In /etc/rc.local I kick off some services and finally "sleep" to keep the whole thing running:
/usr/sbin/sshd
/usr/sbin/cron
#block on this line to keep WSL running
sleep 365d
In /etc/sudoers.d I added an 'rc-local' file to allow the above commands without a sudo password prompt:
username * = (root) NOPASSWD: /etc/rc.local
username * = (root) NOPASSWD: /usr/sbin/cron
username * = (root) NOPASSWD: /usr/sbin/sshd
This worked well on 1607, but after the update to 1704 I can no longer connect to WSL via SSH.
Once you have cron running you can use 'sudo crontab -e -u username' to define cron jobs with @reboot to launch at login.
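For example, a user crontab entry along these lines (the script path is just a placeholder) runs once each time cron itself starts up:
@reboot /home/username/start-services.sh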
Just read through this thread earlier today and used it to get sshd running without having a wsl console open.
I am on Windows 10 Version 1803 and using Ubuntu 16.04.5 LTS in WSL.
I needed to make a few changes to get it working. Many thanks to google search and communities like this.
I modified /etc/rc.local as such:
mkdir /var/run/sshd
/usr/sbin/sshd
#/usr/sbin/cron
I needed to add the directory for sshd, or I would get the error "Missing privilege separation directory: /var/run/sshd".
I commented out cron because I was getting similar errors and haven't had the time or need yet to fix it.
I also changed the sudoers entries a little bit in order to get them to work:
username ALL = ....
Hope this is useful to someone.
John Butler
Having just updated to the newest Windows 10 release (build 14316), I immediately started playing with WSL, the Windows Subsystem for Linux, which is supposed to run an Ubuntu installation on Windows.
Maybe I'm attempting the impossible by trying to install Apache on it, but if so, someone please explain to me why this isn't possible.
At any rate, during installation (sudo apt-get install apache2), I received the following error messages after the dependencies were downloaded and installed correctly:
initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: No such file or directory
runlevel:/var/run/utmp: No such file or directory
* Starting web server apache2 *
* The apache2 configtest failed.
Output of config test was:
mktemp: failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file or directory
chmod: missing operand after '755'
Try 'chmod --help' for more information.
invoke-rc.d: initscript apache2, action "start" failed.
Setting up ssl-cert (1.0.33) ...
Processing triggers for libc-bin (2.19-0ubuntu6.7) ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
WARN: / is group writable!
Now, I understand that there seem to be some folders and files missing for Apache2 to work. Before I start changing anything that might mess with my Windows installation, I want to ask whether there's a different way. Also, should I worry about / being group-writable, or is this just standard Windows behaviour?
In order to eliminate this warning:
Invalid argument: AH00076: Failed to enable APR_TCP_DEFER_ACCEPT
Add this to the end of /etc/apache2/apache2.conf
AcceptFilter http none
Note the following in your output
failed to create directory via template '/var/lock/apache2.XXXXXXXXXX': No such file
I tried listing /var/lock. It points to /run/lock, which doesn't exist.
Create the directory with
mkdir -p /run/lock
The install should now work (you may need to clean the installation first)
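If the earlier failure left apache2 half-configured, something like this usually gets apt back into a consistent state before retrying (a sketch; adjust to whatever actually failed):
sudo mkdir -p /run/lock
sudo dpkg --configure -a                  # finish any half-configured packages
sudo apt-get install --reinstall apache2  # rerun the package's setup scripts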
You have to start bash.exe in administrator mode to avoid a lot of network-related problems.
I installed LAMP (Apache/MySQL/PHP) without any problem:
Start bash.exe in administrator mode
type : sudo apt-get install lamp-server^
add these two lines to /etc/apache2/apache2.conf:
ServerName localhost
AcceptFilter http none
then you can start apache :
/etc/init.d/apache2 start
Following the great advice here, after hitting all the various errors above I edited apache2.conf and inserted the following at the end of the file; apache2 then worked great on the Debian WSL package:
ServerName localhost
AcceptFilter http none
AcceptFilter https none
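A quick way to confirm the edits worked (assuming the default site is still enabled, and curl is installed):
sudo /etc/init.d/apache2 restart
curl -I http://localhost    # should return an HTTP 200 header for the default page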
I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to setup an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to set up SSH key authentication to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps, and choose a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privilege to the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right;
5. Now back to your development server
Navigate to your directory where you'd like to work, and run the following;
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your passphrase (this is your SSH key passphrase, not your server password) and asked to confirm adding the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine at DigitalOcean, but they shouldn't differ much on Google Compute Engine.
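Before running git fetch you can also test the SSH side on its own; if the key is accepted, GitLab should greet you by username (the hostname below is the same placeholder as above):
ssh -T git@my-gitlab-server.com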
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
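One follow-up worth adding to the quoted fix: semanage only records the new context rule; it is applied when the path is relabeled, so a restorecon afterwards is usually needed as well:
sudo restorecon -Rv /var/opt/gitlab/.ssh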
In my situation the git user wasn't set up completely. If you see messages in your log files like "User git not allowed because account is locked" (on CentOS or Red Hat that's /var/log/secure), then you simply need to activate the user via "passwd -d git".
varnishlog is returning:
_.vsm: No such file or directory
Has anyone else seen this before?
It looks like varnishlog is not pointing to the correct directory, or does not have access to it.
First, check the command-line options of varnishd: if the daemon runs with a -n <instancename> argument, you have to pass the same argument to varnishlog as well.
The second thing to check is the permissions on the varnish directory.
To see the directory currently in use, log in as root and run the command below:
$ lsof -p <PID of varnishd> | grep vsm
Once revealed, just make sure the full path is readable by your user.
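For example, if the daemon was started with an instance name (the name here is just illustrative), the tools need the same -n value:
varnishlog -n myinstance
varnishadm -n myinstance ping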
In Varnish 4.1 the root cause can be incorrect permissions for reading the _.vsm file. For example:
# service varnishncsa start
* Starting HTTP accelerator log deamon [fail]
Can't open log - retrying for 5 seconds
Can't open VSM file (Cannot open /var/lib/varnish/dev-me/_.vsm: Permission denied
varnishncsa runs as the varnishlog user, but /var/lib/varnish/dev-me/_.vsm is readable only by the varnish group and the root user:
# ls -l /var/lib/varnish/dev-me/_.vsm
-rw-r----- 1 root varnish 84934656 Apr 15 05:58 /var/lib/varnish/dev-me/_.vsm
So you can fix this problem in the following way:
# usermod -a -G varnish varnishlog
# id varnishlog
uid=110(varnishlog) gid=116(varnishlog) groups=116(varnishlog),115(varnish)
And now you can start varnishncsa.
In our case the hostname of the server had been changed.
If you do not specify an instance name, Varnish uses the hostname. It was looking for the directory holding the shared-memory logging configuration under the new hostname, but the instance was still running from the directory named after the old hostname.
Restarting varnish solved the problem.
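You can also sidestep this class of problem by pinning the instance name instead of relying on the hostname; exactly where the option lives depends on the distro (/etc/default/varnish, /etc/sysconfig/varnish, or the systemd unit), but the idea is simply to start the daemon with a fixed -n and use the same name with the tools:
varnishd -n main ...      # fixed instance name in the startup options
varnishlog -n main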
I just had the same error message while trying to issue varnishadm commands. It turned out that I had renamed my machine without stopping Varnish. There was a directory in /var/varnish/ corresponding to the machine name that Varnish needed access to. "sudo service varnish restart" fixed this for me.
I am trying to set up FTP on Amazon Cloud Server, but without luck.
I searched the net and there are no concrete steps for how to do it.
I found those commands to run:
$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add the following lines at the end of the file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart
But I don't know where to write them.
Jaminto did a great job of answering the question, but I recently went through the process myself and wanted to expand on Jaminto's answer.
I'm assuming that you already have an EC2 instance created and have associated an Elastic IP Address to it.
Step #1: Install vsftpd
SSH to your EC2 server. Type:
> sudo yum install vsftpd
This should install vsftpd.
Step #2: Open up the FTP ports on your EC2 instance
Next, you'll need to open up the FTP ports on your EC2 server. Log in to the AWS EC2 Management Console and select Security Groups from the navigation tree on the left. Select the security group assigned to your EC2 instance. Then select the Inbound tab, then click Edit:
Add two Custom TCP Rules with port ranges 20-21 and 1024-1048. For Source, you can select 'Anywhere'. If you decide to set Source to your own IP address, be aware that your IP address might change if it is being assigned via DHCP.
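If you prefer the command line to the console, the same rules can be added with the AWS CLI; this is a sketch with a placeholder group ID:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 20-21 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 1024-1048 --cidr 0.0.0.0/0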
Step #3: Make updates to the vsftpd.conf file
Edit your vsftpd conf file by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Disable anonymous FTP by changing this line:
anonymous_enable=YES
to
anonymous_enable=NO
Then add the following lines to the bottom of the vsftpd.conf file:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
Your vsftpd.conf file should look something like the following - except make sure to replace the pasv_address with your public facing IP address:
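Since the original screenshot isn't reproduced here, the relevant lines end up looking roughly like this (the pasv_address value is a placeholder):
anonymous_enable=NO
# ... rest of the stock configuration unchanged ...
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=203.0.113.10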
To save changes, press escape, then type :wq, then hit enter.
Step #4: Restart vsftpd
Restart vsftpd by typing:
> sudo /etc/init.d/vsftpd restart
You should see a message that looks like:
If this doesn't work, try:
> sudo /sbin/service vsftpd restart
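On newer, systemd-based images (Amazon Linux 2, for example) the equivalent would be:
> sudo systemctl restart vsftpd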
Step #5: Create an FTP user
If you take a peek at /etc/vsftpd/user_list, you'll see the following:
# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
# If userlist_deny=YES (default), never allow users in this file, and
# do not even prompt for a password.
# Note that the default vsftpd pam config also checks /etc/vsftpd/ftpusers
# for users that are denied.
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody
This is basically saying, "Don't allow these users FTP access." vsftpd will allow FTP access to any user not on this list.
So, in order to create a new FTP account, you may need to create a new user on your server. (Or, if you already have a user account that's not listed in /etc/vsftpd/user_list, you can skip to the next step.)
Creating a new user on an EC2 instance is pretty simple. For example, to create the user 'bret', type:
> sudo adduser bret
> sudo passwd bret
Here's what it will look like:
Step #6: Restricting users to their home directories
At this point, your FTP users are not restricted to their home directories. That's not very secure, but we can fix it pretty easily.
Edit your vsftpd conf file again by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Uncomment the line:
chroot_local_user=YES
It should look like this once you're done:
Restart the vsftpd server again like so:
> sudo /etc/init.d/vsftpd restart
All done!
Appendix A: Surviving a reboot
vsftpd doesn't automatically start when your server boots. If you're like me, that means that after rebooting your EC2 instance, you'll feel a moment of terror when FTP seems to be broken - but in reality, it's just not running! Here's a handy way to fix that:
> sudo chkconfig --level 345 vsftpd on
Alternatively, if you are using Red Hat, another way to manage your services is this nifty graphical user interface for controlling which services should automatically start:
> sudo ntsysv
Now vsftpd will automatically start up when your server boots up.
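On systemd-based images, the equivalent of the chkconfig step would be:
> sudo systemctl enable vsftpd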
Appendix B: Changing a user's FTP home directory
* NOTE: Iman Sedighi has posted a more elegant solution for restricting users' access to a specific directory. Please refer to his excellent solution posted as an answer. *
You might want to create a user and restrict their FTP access to a specific folder, such as /var/www. In order to do this, you'll need to change the user's default home directory:
> sudo usermod -d /var/www/ username
In this specific example, it's typical to give the user permissions to the 'www' group, which is often associated with the /var/www folder:
> sudo usermod -a -G www username
To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.
I'm not that familiar with linux, but the commands you posted are the steps to install the ftp server, configure the ec2 firewall rules (through the AWS API), then configure the ftp server to use the ports you allowed on the ec2 firewall.
So this step installs the FTP server (vsftpd):
> yum install vsftpd
These steps configure the FTP server:
> vi /etc/vsftpd/vsftpd.conf
-- Add the following lines at the end of the file --
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
> /etc/init.d/vsftpd restart
but the other two steps are more easily done through the Amazon console under EC2 Security Groups. There you need to configure the security group that is assigned to your server to allow connections on ports 20, 21, and 1024-1048.
Thanks @clone45 for the nice solution, but I had one important problem with Appendix B of his solution. Immediately after I changed the home directory to /var/www/html, I couldn't connect to the server through SSH and SFTP because it always showed the following errors:
permission denied (public key)
or in FileZilla I received this error:
No supported authentication methods available (server: public key)
But I could access the server through a normal FTP connection.
If you encounter the same error, just undo Appendix B of @clone45's solution by setting the default home directory back for the user:
sudo usermod -d /home/username/ username
But when you set the user's default home directory back, the user has access to many other folders outside /var/www/html. So, to secure your server, follow these steps:
1- Make sftponly group
Make a group for all the users whose access you want to restrict to FTP and SFTP access to /var/www/html. To create the group:
sudo groupadd sftponly
2- Jail the chroot
To restrict this group's access to the server via SFTP, you must set up a chroot jail so that the group's users cannot access any folder except the html folder inside their home directory. To do this, open /etc/ssh/sshd_config with vim under sudo.
At the end of the file, comment out this line:
Subsystem sftp /usr/libexec/openssh/sftp-server
And then add this line below that:
Subsystem sftp internal-sftp
So we have replaced the subsystem with internal-sftp. Then add the following lines below it:
Match Group sftponly
ChrootDirectory /var/www
ForceCommand internal-sftp
AllowTcpForwarding no
After adding these lines, save your changes and restart the SSH service:
sudo service sshd restart
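Before relying on that restart, it's worth validating the file, since a broken Match block can lock new SSH sessions out; sshd has a built-in syntax check:
sudo sshd -t    # prints nothing if sshd_config parses cleanly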
3- Add the user to sftponly group
Any user whose access you want to restrict must be a member of the sftponly group, so add the user to it:
sudo usermod -G sftponly username
4- Restrict user access to just var/www/html
To restrict the user's access to just the /var/www/html folder, we need to make a directory named 'html' in that user's home directory and then bind-mount /var/www to /home/username/html as follows:
sudo mkdir /home/username/html
sudo mount --bind /var/www /home/username/html
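Note that a plain mount --bind does not survive a reboot; if you want it to be permanent, an /etc/fstab entry along these lines (username is a placeholder) does the same thing at boot:
/var/www  /home/username/html  none  bind  0  0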
5- Set write access
If the user needs write access to /var/www/html, then you must jail the user at /var/www, which must have root:root ownership and permissions of 755. You then need to give /var/www/html ownership of root:sftponly and permissions of 775 by running the following commands:
sudo chmod 755 /var/www
sudo chown root:root /var/www
sudo chmod 775 /var/www/html
sudo chown root:sftponly /var/www/html
6- Block shell access
If you want to block shell access to make it more secure, just change the user's default shell to /bin/false as follows:
sudo usermod -s /bin/false username
Great Article... worked like a breeze on Amazon Linux AMI.
Two more useful commands:
To change the default FTP upload folder
Step 1:
edit /etc/vsftpd/vsftpd.conf
Step 2: Create a new entry at the bottom of the page:
local_root=/var/www/html
To apply read, write, and delete permissions to everything under the folder so that you can manage it using an FTP client:
find /var/www/html -type d -exec chmod 777 {} \;
In case you have ufw enabled, remember to allow ftp:
> sudo ufw allow ftp
It took me two days to realise that I had enabled ufw.
It will not work until you add your user to the www group with the following command:
sudo usermod -a -G www <USER>
This solves the permission problem.
Set the default path by adding this:
local_root=/var/www/html
Don't forget to update your iptables firewall, if you have one, to allow the 20-21 and 1024-1048 port ranges in.
Do this in /etc/sysconfig/iptables
by adding lines like this:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 20:21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1024:1048 -j ACCEPT
And restart iptables with the command:
sudo service iptables restart
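You can then confirm the rules are active with:
sudo iptables -L INPUT -n --line-numbers    # the two ACCEPT rules for 20:21 and 1024:1048 should be listed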
I've simplified clone45's steps:
Open the ports as he mentioned
sudo su
sudo yum install vsftpd
echo -n "Public IP of your instance: " && read publicip
echo -e "anonymous_enable=NO\npasv_enable=YES\npasv_min_port=1024\npasv_max_port=1048\npasv_address=$publicip\nchroot_local_user=YES" >> /etc/vsftpd/vsftpd.conf
sudo /etc/init.d/vsftpd restart
I followed clone45's answer all the way to the end. A great article! Since I needed FTP access to install plug-ins on one of my WordPress sites, I changed the home directory to /var/www/mysitename. Then I went on to add my FTP user to the apache (or www) group like this:
sudo usermod -a -G apache myftpuser
After this I still saw this error on WP's plugin installation page: "Unable to locate WordPress Content directory (wp-content)". I searched and found this solution in a wordpress.org support thread: https://wordpress.org/support/topic/unable-to-locate-wordpress-content-directory-wp-content and added the following to the end of wp-config.php:
if ( is_admin() ) {
    add_filter( 'filesystem_method', create_function( '$a', 'return "direct";' ) );
    define( 'FS_CHMOD_DIR', 0751 );
}
After this my WP plugin was installed successfully.
Maybe worth mentioning in addition to clone45's answer:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
The vsftpd version that comes with Ubuntu 12.04 Precise does not
permit chrooted local users to write by default. By default you will
have this in /etc/vsftpd.conf:
chroot_local_user=YES
write_enable=YES
In order to allow local users to write, you need to add the following parameter:
allow_writeable_chroot=YES
Note:
Issues with write permissions may show up as following FileZilla errors:
Error: GnuTLS error -15: An unexpected TLS packet was received.
Error: Could not connect to server
References:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
VSFTPd stopped working after update
In case you are getting a "530 password incorrect" error, one more step is needed:
in the file /etc/shells, add the following line:
/bin/false
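A one-liner to append it, in case you'd rather not open an editor (only needed if the line isn't there already):
echo '/bin/false' | sudo tee -a /etc/shells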
FileZilla is a good FTP tool to set up with Amazon Cloud.
Download the FileZilla client from https://filezilla-project.org/
Click on File -> Site Manager ->
New Site
Provide the Host Name: the IP address of your Amazon Cloud location (and the port, if any)
Protocol - SFTP (may change based on your requirement)
Login Type - Normal (so the system will not ask for the password each time)
Provide user name and password.
Connect.
You need to do these steps only once; later it will upload content to the same IP address and the same site.
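If you'd like a quick command-line sanity check that SFTP works before configuring FileZilla, something like this (key path, user, and IP are placeholders) will do:
sftp -i ~/my-key.pem ec2-user@203.0.113.10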