Where do I keep my Amazon .pem file on a Mac - security

Where should I keep this file for security? At the moment it is on my desktop - should I put it somewhere else?

The 'standard' location would be a .ssh directory in your $HOME, i.e.
/Users/$USER/.ssh/
You should protect this directory with permissions 700, and the key file itself with 600 or 400 (ssh refuses to use a private key that other users can read). You can set up a config file to automatically use the .pem and set the username when sshing to EC2 instances, as explained here.
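A minimal ~/.ssh/config entry along those lines (the host alias, hostname, and key file name below are placeholders to adapt):

```
Host my-ec2
    HostName ec2-203-0-113-10.compute-1.amazonaws.com
    User ec2-user
    IdentityFile ~/.ssh/my-ec2-key.pem
```

With that in place, ssh my-ec2 picks up the key and the username automatically.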

What is the most secure way to use Let's Encrypt certificates with Node.js?

I am developing a secure web server on node.js and I am using Let's Encrypt certificates with the https module.
I want it to run on Ubuntu/Debian machines.
By default, the certificate and private key are stored in:
/etc/letsencrypt/live/domain.name.example/fullchain.pem
/etc/letsencrypt/live/domain.name.example/privkey.pem
These files' permissions allow only the root user to read them, so the problem is that the Node.js server can't load them normally using:
const cert = fs.readFileSync("/etc/letsencrypt/live/domain.name.example/fullchain.pem");
const privKey = fs.readFileSync("/etc/letsencrypt/live/domain.name.example/privkey.pem");
(which will throw a permission error)
The only solutions I know of are:
running the node server as root so it has permission to read the files (not recommended for node);
copying the files with sudo cp to a local directory and making them readable with sudo chmod +r after every certificate renewal (Let's Encrypt does not recommend copying these files, though this is my current solution);
running node as root, loading the certificate and private key, and then switching to a non-root user with process.setgid() and process.setuid(), which drops root privileges.
My question is whether there is a better way to achieve this, or whether one of these methods is in fact fine.
Use setgid.
Set the group ownership of the directory to the group you're using to run Node.js. If your user and group are itay:staff, for example, say this
chgrp -R staff /etc/letsencrypt/live/domain.name.example
Then set the setgid bit in the directory's permissions like so.
chmod 02755 /etc/letsencrypt/live/domain.name.example
Thereafter, any files written to that directory will be owned by that group (staff in this example), so your Node.js program will be able to read them without any further ado.
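The setgid behaviour is easy to sandbox-test in a scratch directory before touching /etc/letsencrypt; in this sketch your own primary group stands in for staff:

```shell
# Create a scratch directory, give it a group, and set the setgid bit.
demo=$(mktemp -d)
grp=$(id -gn)                  # your primary group stands in for 'staff'
chgrp "$grp" "$demo"
chmod 02755 "$demo"            # the leading 2 is the setgid bit
touch "$demo/newfile"          # new files inherit the directory's group
stat -c '%G' "$demo/newfile"
```

(On macOS, use stat -f '%Sg' instead of stat -c '%G'.)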
As per O. Jones' comment, I solved this problem by using nginx as a reverse proxy for my Node.js server. This way nginx handles the TLS without permission issues, and Node.js only needs to run a plain HTTP server.
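A minimal sketch of such an nginx server block (the domain and backend port are placeholders): nginx starts as root and can read the key, while Node listens unprivileged on localhost.

```
server {
    listen 443 ssl;
    server_name domain.name.example;

    ssl_certificate     /etc/letsencrypt/live/domain.name.example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.name.example/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```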
The problem was solved by following the second recommendation here in the Let's Encrypt documentation (Quote B), which doesn't require me to create any script to move or copy the files whenever the certificate auto-renews. (I installed mine with the --apache plugin. As a side note, if you have your HTTP-to-HTTPS redirect inside your virtual host, use --no-redirect when you first run certbot to avoid an error being reported during the installation of the certificates.)
That said, I found it unnecessary to move or copy the .pem files. In the certbot documentation here, I don't find any statement that Let's Encrypt recommends against moving the certificates; as of now their documentation even tells you how to do it right:
Quote A:
If you would like the live certificate files whose symlink location
Certbot updates on each run to reside in a different location, first
move them to that location, then specify the full path of each of the
four files in the renewal configuration file. Since the symlinks are
relative links, you must follow this with an invocation of certbot
update_symlinks.
For example, say that a certificate’s renewal configuration file
previously contained the following directives:
archive_dir = /etc/letsencrypt/archive/example.com
cert = /etc/letsencrypt/live/example.com/cert.pem
privkey = /etc/letsencrypt/live/example.com/privkey.pem
chain = /etc/letsencrypt/live/example.com/chain.pem
fullchain = /etc/letsencrypt/live/example.com/fullchain.pem
The following commands could be used to specify where these files are
located:
mv /etc/letsencrypt/archive/example.com /home/user/me/certbot/example_archive
sed -i 's,/etc/letsencrypt/archive/example.com,/home/user/me/certbot/example_archive,' /etc/letsencrypt/renewal/example.com.conf
mv /etc/letsencrypt/live/example.com/*.pem /home/user/me/certbot/
sed -i 's,/etc/letsencrypt/live/example.com,/home/user/me/certbot,g' /etc/letsencrypt/renewal/example.com.conf
certbot update_symlinks
Quote B (my solution, just because it is the simplest - KISS principle)
Regarding permissions and group ownerships they say the following:
For historical reasons, the containing directories are created with
permissions of 0700 meaning that certificates are accessible only to
servers that run as the root user. If you will never downgrade to an
older version of Certbot, then you can safely fix this using chmod
0755 /etc/letsencrypt/{live,archive}.
For servers that drop root privileges before attempting to read the
private key file, you will also need to use chgrp and chmod 0640 to
allow the server to read /etc/letsencrypt/live/$domain/privkey.pem.
Which is VERY interesting: they are 700 only for historical reasons. What they don't clarify is that the /etc/letsencrypt/live and /etc/letsencrypt/keys folders themselves are 700, and on Ubuntu 20.04 you can't even see that the folder exists unless you become root (even with sudo you get a "folder not found" error).
The per-domain folders are 755 (/etc/letsencrypt/live/domain.com) and the symlinks to the .pem files are themselves 777 (a symlink's own permissions are irrelevant; the target's permissions are what count).
The Let's Encrypt documentation says that the .pem files in the directory mentioned above are only symlinks:
/etc/letsencrypt/archive and /etc/letsencrypt/keys contain all
previous keys and certificates, while /etc/letsencrypt/live symlinks
to the latest version
The keys themselves have permissions: 600
In my Ubuntu 20.04 system, with a certbot --apache certificate and installation, I find that the keys folder has 000x_key-certbot.pem files with permissions 600, and the archive directory has the actual cert1.pem, chain1.pem, fullchain1.pem and privkey1.pem files with permissions 644, 644, 644 and 600 respectively.
The /etc/letsencrypt/archive/domain.com folder has permissions 755 and the /etc/letsencrypt/archive folder has permissions 700.
So access is blocked by hiding the directory and by locking down the keys themselves.

Where to administer apache fallback directory

I have been put in charge of an Ubuntu 13 server installation. Apache is configured to use /var/www as the default directory, which is correct. The issue is that there seems to be a fallback directory configured that points to /usr/share. So if I type www.address.com into a browser it serves documents out of /var/www, but if I know the name of a directory in /usr/share and type www.address.com/sharedir, then it serves out of /usr/share. I have looked in the Apache config file and the default site config file and do not see this mapping. I do not want this behavior and am concerned that it is the default behavior out of the box.
Can anyone point me to other areas where this behavior may be controlled/managed?
Thanks for any assistance.
Open your
/etc/apache2/sites-available/default
file and change
/var/www
to
/path/to/folder/you/wish
then save. It is best to restart Apache afterwards with
service apache2 restart
Now put the website contents in the new location, /path/to/folder/you/wish.
Once you have changed the DocumentRoot of the site as mentioned above, no files will be fetched from any other location. Hope this helps you. :)
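For reference, the relevant part of the site file looks roughly like this (the path is a placeholder; Apache 2.2, as shipped with Ubuntu releases of that era, uses Order/Allow directives rather than the newer Require syntax):

```
<VirtualHost *:80>
    DocumentRoot /path/to/folder/you/wish
    <Directory /path/to/folder/you/wish>
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```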
[SOLVED] After a bunch more digging around I discovered that the user who originally set up this server had erroneously put .conf files in the conf.d and mods-enabled directories that were routing traffic to the other directories. Sorry to anyone who noodled on this one.

Users can't upload files, even with permissions set to them using vsftpd

I have a cloud-hosted Linux solution. I had vsftpd working on it, but after having issues and tinkering with a lot of settings, I now have a problem where users can log in over FTP, land in the correct home directory, navigate within it, and download files, but they cannot upload files to the server. They get a time-out error, which appears to be a permissions error, but I can't narrow it down any further than that. /var/log/syslog gives nothing away.
The folders belong to the users. The parent www folder is set to 555. Can anyone help with this issue at all?
Cheers,
T
Try setting the permissions to 755; 555 doesn't allow writing for anyone. Are your user and group different?
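The difference is easy to verify on a scratch directory: 555 clears the write bit even for the owner, so non-root uploads fail.

```shell
d=$(mktemp -d)
chmod 555 "$d"        # r-xr-xr-x: nobody may create files here
stat -c '%a' "$d"
chmod 755 "$d"        # rwxr-xr-x: the owner may write again
stat -c '%a' "$d"
```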
You may also need to enable logging for the FTP server; the time-out error may mask errors other than permission denied.
To get extended logging, change these variables in your vsftpd config file:
dual_log_enable=YES
log_ftp_protocol=YES
xferlog_enable=YES
syslog_enable=NO
and check the log file name there.
You must create a folder inside the user's folder (example: /var/www/user1/upload)
and set permission 777 (example: chmod 777 /var/www/user1/upload),
then upload files into this folder. (Note that 777 lets any local user write there; if the directory is owned by the FTP user, 755 is enough and safer.)

File permissions changing on save (using root)

Using a fresh installation of CentOS 6.2, when I connect to the server (SFTP mount with Nautilus) and edit files, no matter what permissions the file had before, they are reset to 700: read+write+execute only for the owner.
When SSHing directly into the machine and editing files on the command line - no permissions are changed.
The files I am editing are website scripts sitting in my Apache folders.
Why is this behavior happening? Any suggestions are welcome.
Your SFTP client might be "downloading and re-uploading" your files when you edit them, recreating them with your local umask. Change your umask if you want different permissions, or use SSH and a proper editor if you want to keep the permissions...
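How the umask shapes the permissions of newly created files can be seen in a scratch directory; the results below are the standard POSIX mode calculation (new file mode = 666 masked by the umask):

```shell
d=$(mktemp -d)
(umask 022; touch "$d/default")     # 666 & ~022 = 644
(umask 077; touch "$d/private")     # 666 & ~077 = 600
stat -c '%a' "$d/default" "$d/private"
```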

How to allow file uploading outside home directory with SSH?

I'm running a Fedora 8 Core server. SSH is enabled and I can log in with Transmit (an FTP client) on port 22. When logged in, I can successfully upload files to the user's home directory. Outside the home directory I can only browse files, not upload or change anything. How can I allow file uploads to a specific directory outside the user's home directory?
An easy method is to grant the user rights on the folder you want them to be able to upload to, then add a symlink (ln -s) from their home folder to the destination.
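A sketch of that approach with throwaway paths standing in for the real upload target and home directory:

```shell
target=$(mktemp -d)              # stands in for e.g. /srv/uploads (chown it to the user)
home=$(mktemp -d)                # stands in for the user's home directory
ln -s "$target" "$home/uploads"
touch "$home/uploads/demo.txt"   # the file actually lands in $target
ls "$target"
```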
You can also just use
scp file user@server:/path
which will let you upload to any directory you have permissions to.
file is the file to copy;
user and server should be obvious;
/path is any destination path on the server to which you have rights, so /home/user/ would be your likely default home folder.
You need to make those directories writable by the proper users, or (easier) that user's group. This is of course a huge security hole, so be careful.
Hi,
Give the FTP user write permission on the directory where you want to upload your files.
