For the context: I'm a student and I have to do a project with some other people from my class. My role is to prepare a web server that each of them can use and access from anywhere. I plan to host everything on a dedicated server I already have, to avoid additional cost, and to give each person a subdomain served by its own VirtualHost. They will be able to send files to the server over SFTP (OpenSSH); each person will get an account chrooted to their virtual host directory.
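For illustration, each person's VirtualHost would look roughly like this (the domain and paths are just placeholders):

<VirtualHost *:80>
    ServerName alice.example.org
    DocumentRoot /var/www/alice
    <Directory /var/www/alice>
        # no .htaccess overrides
        AllowOverride None
        Require all granted
    </Directory>
</VirtualHost>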
My main problem: will this be secure? I mean, if one of the users sets an easy password or does something risky, can someone access the other people's virtual hosts, or even the host machine itself? I have already thought about .htaccess files and they will be deactivated. Is there another way to break out of an Apache virtual host?
Things to note: they will have Apache, PHP and access to a MySQL (or maybe MariaDB, I don't know yet) database. So they may upload some old, insecure code. Some of these users are not very educated about cybersecurity.
The server runs Ubuntu 16.04 LTS.
Thanks for the advice,
If you limit their access to only their own home directories, that's a good start.
A good additional layer of security would be to implement 2FA; check out Duo Mobile, which you can set up for SSH logins. (Or do I need more details, e.g. what options do they have to log in to the server?)
If the users are not very educated in cybersecurity as you mentioned, it will be difficult for them to escape the virtual host they have access to.
Although I need more details: will each virtual host have a separate database, or will they all talk to a central one? Also, as a paranoid measure, consider where the server is hosted. There are lots of variables in what you have described, but it is best to keep the server on its own network with nothing critical in the same subnet, just in case.
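If everyone ends up on one shared MySQL/MariaDB instance, a quick way to keep them separated is one database and one account per person, for example (names and password are placeholders):

mysql -u root -p <<'SQL'
CREATE DATABASE alice_db;
CREATE USER 'alice'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON alice_db.* TO 'alice'@'localhost';
SQL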
I have programmed an application that users can use to process genome data. The application relies on a 10 GB database file that users have to download in order to run it. At the moment I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for the others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is: is this safe to do, or could people potentially use it to hack into our server? If this method is a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you. It is pretty secure, like a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on github (https://github.com/filebrowser/filebrowser)
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is, in general, not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a small PC left over, set it up in a DMZ, a zone with special access restrictions inside your company network.
You want to use SSH (scp) as a transport and authentication method for file hosting. It is possible to keep this safe with caution. For example, GitHub uses SSH for transport when providing Git access over the git+ssh protocol.
Now for the caution part, if you haven't done it before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and create an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you can add only the files that are absolutely necessary to the chroot, and give users SSH access to it.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure on this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There are lots of write-ups on the Internet about this.
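As a sketch, a minimal Nginx server block for this could look something like the following (the hostname, certificate paths and data directory are assumptions):

server {
    listen 443 ssl;
    server_name data.example.org;
    ssl_certificate     /etc/ssl/certs/data.example.org.pem;
    ssl_certificate_key /etc/ssl/private/data.example.org.key;
    location /downloads/ {
        # serves files from /srv/static/downloads/
        root /srv/static;
        autoindex off;
    }
}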
Either way, if you are going to expose your server to the Internet or an intranet, you need to make sure it is firewalled. Consider learning about nftables or firewalld or the like, if you haven't already.
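For example, with firewalld, allowing only SSH and HTTPS would be roughly this (adjust the services to whatever you actually expose):

firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=https
firewall-cmd --reload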
SSH is reasonably safe. Always keep software up-to-date.
Set up an SFTP-only user with a chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
ChrootDirectory /var/ssh/chroot
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no
This user will not get a shell (because of internal-sftp), and cannot see files outside of /var/ssh/chroot. Note that sshd requires the chroot directory itself to be owned by root and not writable by any other user.
Use an SSH key pair on the client side, in addition to the password.
A good description of the key-based setup process:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put it on a server accessible via SFTP, behind a password, but also to encrypt the file: that way you introduce a second layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Then share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
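For example, with gpg you could encrypt the file symmetrically with a passphrase (the filename is a placeholder):

# encrypt; produces genome_db.tar.gz.gpg
gpg --symmetric --cipher-algo AES256 genome_db.tar.gz
# your partners decrypt with the shared passphrase
gpg --output genome_db.tar.gz --decrypt genome_db.tar.gz.gpg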
I have a Linux web server that is looking for a Kerberos realm. I need to give it a keytab file, which I can do. However, what's really getting me is the KDC. I cannot determine the parent KDC, and I don't know which server would be the admin server. Also, I'm not sure how to go about the process with ktpass. Has anyone done this before, and if so, how did you do it?
This has been really frustrating me, as I know the architectural process, but I can't figure it out in a Windows domain with multiple DCs. The Linux portion isn't a problem; I know what to do where, but I have no idea how to pull that information from Windows in a way that Tomcat can read.
Any help would be appreciated. Thanks!
In theory, you can map any machine in a DNS domain to any Kerberos realm by getting every machine involved to use the same krb5.conf file. However, in practice the machine with DNS name web.foo.com is in the realm FOO.COM.
To find the KDC for a realm, you can generally do DNS queries for SRV records like this:
dig -t SRV _kerberos._udp.foo.com
Active Directory publishes these records for its domain controllers.
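A minimal krb5.conf pointing at your domain controllers might look like this (the realm and hostnames are placeholders; with AD, any writable DC can act as both KDC and admin server):

[libdefaults]
    default_realm = FOO.COM

[realms]
    FOO.COM = {
        kdc = dc1.foo.com
        admin_server = dc1.foo.com
    }

[domain_realm]
    .foo.com = FOO.COM
    foo.com = FOO.COM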
I'm looking to launch a linux EC2 instance.
Although I understand Linux quite well, my ability to secure/harden a Linux OS would undoubtedly leave me vulnerable to attack, e.g. there are others who know more about Linux security than me.
I'm looking to just run Linux, Apache & PHP5.
Are there any recommended Amazon AMIs that come pre-hardened, running Linux/Apache/PHP or something similar to this?
Any advice would be greatly appreciated.
Thank you.
Here is an older article regarding this (I haven't read it, but it's probably a good place to start): http://media.amazonwebservices.com/Whitepaper_Security_Best_Practices_2010.pdf
I would recommend a few best practices off the top of my head:
1) Move to VPC, and control inbound and outbound access.
2a) Disable password authentication in SSH and only allow SSH from known IPs (see the example after this list).
2b) If you cannot limit SSH access by IP (due to roaming etc.), allow password authentication and use Google Authenticator to provide multi-factor authentication.
3) Put an Elastic Load Balancer in front of all public-facing websites, and disable access to those servers except from the ELB.
4) Create a central logging server that holds your logs in a different location in case of attack.
5) Change all system passwords every 3 months.
6) Employ an IDS; as a simple place to start I would recommend Tripwire.
7) Check for updates regularly (you can employ a monitoring system like Nagios with NRPE to do this on all your servers). If you're not a security professional you probably don't have time to read Bugtraq all day, so use the update services provided by your OS (on CentOS/RHEL that's yum).
8) Periodically (every quarter) do an external vulnerability assessment. You can learn and use Nessus yourself (for non-corporate use) or use a third party such as Qualys.
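As an example for item 2a, the relevant sshd_config lines would be roughly the following (restricting the allowed source IPs themselves is usually done in the security group rather than in sshd):

PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no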
If you're concerned and in doubt, contract a security professional for an audit. This shouldn't be too cost-prohibitive and can give you some great insight.
Actually, you can always relaunch your server from a pre-configured AMI if something happens.
It can be done very easily with Auto Scaling, for example. Use SSH without a password. Adjust your security groups accordingly. Here's a good article on Securing Your EC2 Instance.
You have to understand two things:
Tight security makes life hard for attackers as well as for you...
Security is an ongoing task.
Having your server secure at a specific point in time says nothing about the future.
New exploits and patches are published every day, and a lot of "development" activity renders security unstable.
Solution?
You might consider services like https://pagodabox.com/,
where you get PHP resources without having to manage Linux, security, and so on.
Edit:
Just to emphasize...
Running a production system, where you are responsible for the ongoing security of the site, forces you to do much more than just starting from a secure instance!
Otherwise, your site will become much less secure as time passes (and as more people learn about it).
As I see it (for a real production site), you have two options:
Get a security expert (in-house or freelance) who will check your site regularly and apply the needed patches and so on.
Get a hosting service that will manage the security aspect for you.
I pointed to one service like that, where you put your PHP code in and they take care of everything else for you.
I would consider this type of service for every production site that doesn't have the ability to get real, periodic security checkups/fixes.
Security is a very complex field... do not underestimate the risks...
One of the things I like most about using Amazon is how quickly and easily I can restrict my attack surface. I've made a prioritized list here. Near the end it gets a bit advanced.
Launch in a VPC
Put your webserver behind a load balancer (ELB or ALB) and terminate SSL there too
Only allow web traffic from your load balancer
Create a restrictive security group. The only things allowed into your host should be incoming traffic from the load balancer and SSH from your IP (or your DHCP subnet if your ISP does not offer a static address)
Enable automatic security updates
yum-cron (Amazon Linux)
or unattended-upgrades (Ubuntu; see the example after this list)
Harden SSH
disallow root login and the default Amazon accounts
disallow password login in favor of SSH keys
Lock down your AWS root account with 2FA and a long password.
Create and use IAM credentials for day-to-day operations
If you have a data layer deploy encrypted RDS and put it in a private subnet
Explore connecting to RDS with IAM credentials (no more db password saved in a conf file)
Check out YubiKey for 2FA on SSH.
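For the Ubuntu case mentioned in the automatic-updates item above, the setup is roughly this (a sketch; the package's defaults install security updates automatically):

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades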
Advanced: For larger or more important deployments you might consider using something like ThreatStack. They can warn you of AWS misconfigurations (an S3 bucket containing customer data open to the world?) and security vulnerabilities in packages on your hosts. They also alert on signs of compromise and keep a command log, which is useful for investigating security incidents.
I've installed a WordPress instance on a Linux server, and I need to give it FTP access in order to install plugins and run automatic backups/restores. I've just installed vsftpd and started the service, but now what?
How do I figure out/set what the username/pass is?
Should I allow anonymous access?
Is the hostname just 'localhost'?
Any advice would be appreciated. I've never messed with FTP on Linux before. Thanks.
Your question is a little unclear because you don't specify what aspect of WordPress "wants" FTP access. If you got WP installed, you clearly already have at least some access to the machine. That said, I'll try to answer around that lack of clarity.
Your questions in order, then some general thoughts:
How do I figure out/set what the username/pass is?
Remember that the man page for a program is a good first stop. A good man page will also contain a FILES or "SEE ALSO" section near the bottom that will point you to relevant config files.
In this case, "man vsftpd" mentions /etc/vsftpd.conf, so you can then do "man vsftpd.conf" to get info on how to configure it.
VSFTPD is configurable, and can allow users to log in in several ways. In the man page, check out "guest_enable" and "guest_username", "local_enable" and "user_sub_token".
The easiest route for your single-user usage is probably configuring local_enable; then your username and password are simply those of your local system account (the users listed in /etc/passwd).
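As a sketch, the relevant lines in /etc/vsftpd.conf for that local-user setup would be roughly:

# no anonymous logins
anonymous_enable=NO
# allow local system users to log in with their normal credentials
local_enable=YES
write_enable=YES
# keep each user inside their own home directory
chroot_local_user=YES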
Should I allow anonymous access?
No. Since you're using this to administer your WordPress, there's no reason anyone else should be using this FTP server. Most distributions' shipped vsftpd configs have anonymous access off by default.
Is the hostname just 'localhost'?
It depends where you're coming from. 'localhost' maps back to the loopback, i.e. the same physical machine you're on. So if you need to put FTP configuration for Server A into a WordPress configuration file on Server A, then 'localhost' is perfectly acceptable. If you're trying to configure the pasv_addr_resolve/pasv_address options of vsftpd, then no: you'll want to either pass in the fully qualified name of Server A (serverA.mydomain.com) or leave it off and rely on the IP address.
EDIT: I actually forgot the critical disclaimer: never send credentials over plain FTP. Plain old FTP (as opposed to SFTP or FTPS) sends your username and password in cleartext. I didn't install vsftpd and play with it, but you'll want to make sure some form of encryption is happening when you connect. Try hitting it with WinSCP (from Windows) or sftp (from Linux) to make sure you're getting encrypted SFTP rather than plaintext FTP.
Apologies if you already knew that ;)
You would probably get better answers on server fault.
That said:
vsftpd should use your local users by default, and drop you in that user's home directory on login.
Disable anonymous access if you don't need it; I don't think WordPress will care, but your server will be safer.
Yes, or 127.0.0.1, or your public IP if you think you might split the front end and back end some day.
WordPress does not natively support SFTP. You can get around this in two ways:
chmod the permissions in the appropriate directories to allow the normal, automatic update to work correctly (see the example after this list). This is the approach most certain to work, as long as it doesn't trip over any local security policies.
Try hacking it in yourself. There have been any number of threads on this in the WordPress.org forums. Here is a recent one which also talks about non-standard ports. Here is an article about how to get it working on Debian Lenny (which also addresses the non-standard port issue).
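For the first option, a typical permission layout is something like the following sketch (assuming the web server runs as www-data and WordPress lives in /var/www/html; adjust both to your setup):

# let the web server write to wp-content for updates/uploads
sudo chown -R www-data:www-data /var/www/html/wp-content
# directories 755, files 644
sudo find /var/www/html -type d -exec chmod 755 {} \;
sudo find /var/www/html -type f -exec chmod 644 {} \;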
We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
Three simple steps:
Back up configurations and source files from the HTTP server, and dump the SQL tables.
Reinstall the operating system.
Follow standard hardening steps on the fresh OS.
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file which is usually in /etc/ssh/:
Disable Root Logins
PermitRootLogin no
Change the SSH port from the default of 22
Port 9222
Manually specify which accounts can log in
AllowUsers Andrew Jane Doe
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
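Beyond that article, a quick practical first step is the script that ships with MySQL, which removes anonymous accounts and the test database and lets you change the root password:

mysql_secure_installation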
The best thing you could do would be to reinstall and make sure that when you bring files over from the old system to the new one, it is just data, not executables that could be nasty. If that is too much, change all the passwords, watch the logs for a few weeks, and play with iptables to block the former vendor (see the example below). Also, given that there could be a rootkit at the kernel level, it's probably a good idea to swap the kernel out too, and to watch traffic coming out of the box for anything that might be going to the vendor. It really is a hassle to take someone else's machine and say it is safe now; I would go as far as to say it is nearly impossible.
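For example (203.0.113.0/24 is just a placeholder for the vendor's addresses):

iptables -A INPUT -s 203.0.113.0/24 -j DROP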
Side note: this isn't really programming related, so it probably shouldn't be on this site.