What is Secure Shell or SSH? How does it work? [closed] - linux

I am in the process of setting up a DigitalOcean droplet. I have very little experience with networking and sysadmin tasks.
All of the documentation and tutorials about setting up this droplet strongly suggest that I set up an SSH connection. Upon googling, I get very broad definitions and videos of what SSH is, but I cannot seem to conceptualize exactly how it works.
I've even followed the directions of some of the tutorials without any issue, so apparently I've accomplished this before with my other droplets. However, whenever I log into my droplet with PuTTY or WinSCP, I still need to provide a username and password (even if the password is saved, I need to type it in to save it).
Other pieces of information I've obtained:
When stepping through this process, I noticed that Linux will STILL ask me to create a passphrase. But a lot of the reading I did seemed to suggest I would not need one, for some reason.
There is a public key and a private key. I can't seem to understand what each is for, or what the difference is.
I don't do anything to my home computer. Is an SSH connection verifying that I am indeed logging into my server through my home computer? If that is in fact the case, how does this process know I am logging into my server with my home computer if I did not provide any information about my home computer at all? (Everything was done through PuTTY on my server remotely).
According to a lot of what I read, after setting up SSH, you are then supposed to disable root user access. I'm just not seeing why.
I'm just not really understanding what it is that I'm doing when I create private and public keys. I still have to provide my username and a password when logging into my server with WinSCP and PuTTY. Am I doing something wrong? In reference to SSH: what am I doing? Why am I doing this? Am I doing it right, despite the fact that I still have to provide a password when logging in?
If possible, take an "explaining this to a 5-year old" approach.

PuTTY is an SSH client, so you've already been logging into your server via SSH without knowing it. Public-private keys are just an alternative way to log in (besides password login). The way it works is that you generate a public/private key pair on your home computer. Then you give your public key to the server, and instead of logging in using your password (which requires you to type it in), you can log in automatically using your private key. Private key login is also considered much more secure than password-based login when it is done right.
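To make that concrete, here is a minimal sketch of the key-based flow using Python's paramiko library (PuTTYgen and Pageant do the equivalent job on Windows). The hostname and file names are placeholders, not anything from your actual setup:

```python
# Minimal sketch of key-based SSH login with paramiko.
# "example-droplet.example.com" and the key file names are placeholders.
import paramiko

# 1. Generate a key pair on YOUR computer (PuTTYgen does the same job on Windows).
key = paramiko.RSAKey.generate(2048)
key.write_private_key_file("my_droplet_key")         # private key: never leaves your machine
public_line = f"ssh-rsa {key.get_base64()} me@home"  # public key: safe to hand out
print(public_line)

# 2. Append `public_line` to ~/.ssh/authorized_keys on the droplet
#    (done once, e.g. while you are still logged in with your password).

# 3. From then on, log in with the private key instead of typing a password.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # fine for a first test
client.connect("example-droplet.example.com", username="root",
               key_filename="my_droplet_key")
_, stdout, _ = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```

The passphrase Linux asks you about simply encrypts the private key file itself, so a stolen laptop does not immediately mean a stolen key; it is optional, which is why some tutorials skip it.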
There are already a lot of resources for explaining how public-private key encryption works, so here's one I found on Reddit:
Another way of looking at it is the familiar box analogy. Imagine you want to send a briefcase of information to your friend across the US but need it to be locked so that thieves can't see it. Obviously you can't just put your own lock on there and send it because your friend doesn't have your key to that lock.
The box analogy offers a solution. You put your own lock on the bag and send it to your friend. There, your friend also puts HIS own lock on it and sends it back. You then unlock your own lock with your key, meaning that the only lock left is your friend's lock. Send it back, and they can easily unlock it and take a look at the information. This is foolproof because a thief would need the keys to both locks to open the briefcase.
Computing uses a similar model but rather than locks and keys it uses one master lock that can be opened with combinations of three keys, one public key and two private ones that you and your friend each know. Also it takes into account the properties of prime numbers and modular arithmetic. When studying CS, I found that this video helps a lot in understanding how the numberized process of locking and unlocking works.
Source:
https://www.reddit.com/r/explainlikeimfive/comments/1kocba/eli5_rsa_algorithm_and_publicprivate_keys/cbr0l24
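If you want to see the prime-number/modular-arithmetic trick from that last paragraph, here is a toy RSA round trip in Python. The numbers are deliberately tiny so you can follow them by hand; real keys use primes hundreds of digits long, so this is for intuition only:

```python
# Toy RSA with tiny primes -- for intuition only, never for real use.
p, q = 61, 53
n = p * q                  # 3233: the modulus, part of the public key
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (shared with everyone)
d = pow(e, -1, phi)        # private exponent: e*d = 1 (mod phi) -> 2753

message = 65
ciphertext = pow(message, e, n)     # anyone can encrypt with the PUBLIC key -> 2790
recovered  = pow(ciphertext, d, n)  # only the PRIVATE key holder can decrypt -> 65
assert recovered == message
```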
In addition, if you want to get public-private key login working with PuTTY, here's a tutorial on that (and it's even specific to digitalocean!):
https://www.digitalocean.com/community/tutorials/how-to-create-ssh-keys-with-putty-to-connect-to-a-vps

Related

SSH session without ANY authentication

I have a special user, called update, whose shell is a special command that fetches any pending updates to our system.
I'd like to be able to open an ssh session with this user without any kind of authentication (password or ppk, or anything), so anyone who wants to update a system could just do "ssh update@<>", without having to know a password or have a pre-shared public key on the box.
Insecure, I know, but this is over a VPN, so it should not be a problem, and they will only run the update, and then be thrown out.
Can this be done?
A VPN is not a good reason to skip authentication when using ssh. Even if there is a way to do this, you shouldn't use it. Using an SSH key is the best way to do it. If you really want to do something like this, use the same key and distribute it to each box.
What do you do if the local network of your box is compromised? You just have a security hole.
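In case it helps, here is a rough sketch of that key-based alternative from the client side, using Python's paramiko library; the hostname, key path, and update command are placeholders for whatever your setup actually uses:

```python
# Sketch of the key-based alternative: a dedicated key for the "update" user,
# distributed to each client box, so no interactive password is ever typed.
# Hostname, key path and update command are placeholders.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()              # verify the server's host key
client.connect("target.example.com",
               username="update",
               key_filename="/etc/update/update_key")   # private key on the client box
_, stdout, _ = client.exec_command("run-pending-updates")
print(stdout.read().decode())
client.close()
```

On the server side, the key's entry in ~/.ssh/authorized_keys can additionally carry a command="..." option, so that this key can never be used for anything except triggering the update.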
As this RFC points out, there is support for host-based authentication: https://www.ietf.org/rfc/rfc4252.txt
So using it carefully should be possible by following this tutorial: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Host-based_Authentication#Server_Configuration_for_host-based_authentication.
That may not be a final solution, but it should help in finding one.
But really, you should not do it for this use case... Just offer a basic web endpoint that does nothing but start the update process on the next cron run. I know it's not as "simple", but it's a lot more secure.
Or, if they have access to this server anyway, add a script with the setuid bit set which triggers the update.
Also, if you have a central server in your company, to which everyone has access, you can use it as a step in between to host the key pair, so you don't need to manage X keys for everyone.
Or use a more modern setup with Puppet or similar, or just configure the server to always update without any user interaction needed...

making a website local

I'm going to build a website for file manipulation. The idea is that the user uploads his files to the website, clicks the "manipulate" button, and then gets the resulting file. The user will also pay according to the number of files he is manipulating.
The code for the file manipulation is already written in Java.
The thing is, some of these files will probably be truly sensitive and private, so users will not be delighted to upload them to my site over the internet.
I thought about making a local version of the website and letting the user download it (the local version) to his computer (so the only internet access would be for the payment step).
But there seem to be two problems:
When I decide to change anything in my website, it will not affect the local users.
The local site will be very easy to "crack" in order not to pay...
This is my first website; do you have any suggestions for how to solve either of these two problems?
Thanks!
Concerning question (1): you would have to implement some update mechanism. For example, your "local web site" (which might be a .jar file containing a web server) could check over the internet whether a new version is available and then download and install it (however, you should generally ask for the user's permission to do so, as many users are not delighted with silently auto-updating software).
Concerning question (2): you might use a code obfuscator to make your compiled Java classes more difficult to decompile, and use an encrypted SSL connection for the payment-related transactions (while checking the server certificate to avoid man-in-the-middle attacks by the end user); however, any software that a user has on their computer will eventually be cracked by somebody. Therefore, the best solution is probably to keep everything on your server while securing the whole setup as much as possible: use encrypted SSL connections for everything, or, if the files are highly sensitive, provide a public key so users can encrypt their files with GPG (or similar software) before sending them to the site, and encrypt the files to be sent back to the user with the user's own public key (which they have to provide you, and which is not at all critical to transfer over the internet). Also carefully check the security of your web server and all the software running on it, to avoid bugs that might allow somebody to break in. Using GPG/public-key encryption and only storing encrypted data on your server may already be good protection (but you have to make sure that it is impossible to obtain your private key in any way!).
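Here is a rough sketch of that GPG flow using the python-gnupg wrapper (it shells out to the gpg binary and assumes the relevant public keys have already been imported; file names and recipient IDs are placeholders):

```python
# Rough sketch of the GPG flow described above, using the python-gnupg wrapper.
# Assumes gpg is installed and the recipients' public keys are already imported.
# File names and recipient IDs are placeholders.
import gnupg

gpg = gnupg.GPG()

# On the user's machine: encrypt the sensitive file to the SITE's public key
# before it ever leaves their computer.
with open("input.dat", "rb") as f:
    gpg.encrypt_file(f, recipients=["files@your-site.example"], output="input.dat.gpg")

# On the server: encrypt the processed result to the USER's public key
# (which they provided earlier) before sending it back.
with open("result.dat", "rb") as f:
    gpg.encrypt_file(f, recipients=["user@example.com"], output="result.dat.gpg")
```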

Encrypting user data [closed]

I have an Android application that transmits some user account information as JSON over SSL to a central server. Third parties can send messages to the users if they have the users' username.
The username can never be queried from our server; in fact, no user data can be queried. The entire system relies on the fact that the user willingly shared their information with the third parties, and the third parties can use that information to deliver messages to the users.
The transmitted data is not encrypted by me, since it is already sent over SSL. I feel that I need to encrypt the data stored on my server to keep it safe. What would be the best way to encrypt this data? When I query the user data, must I encrypt the supplied value and compare it to what is stored in the database, or should I rather decrypt the database?
Or is it overkill, since only my server application will ever have access to this data?
It's not overkill to encrypt the private data of your users, customers, etc. on your filesystems. For one thing, that hard drive will eventually end up out of your control --- and it's extremely unlikely that you're going to properly destroy it after it seems to be non-functional, even though there's a bunch of private data on it, potentially accessible to someone with a modicum of data recovery expertise and initiative.
I'd suggest PyCrypto.
The real challenge is how you'll manage your keys. One advantage of PK (public key) cryptography is that you can configure your software with the public (encrypting) key in the code, exposed ... that's sufficient for the portions of your application which are storing the data. Then you need to arrange a set of procedures and policies to keep the private key private. That means it can't be in your source code nor your version control system ... some part of your software has to prompt for it and get it (possibly typed in, possibly pushed in from a USB "keyboard emulator" or other password/key vault device).
This has to be done on every restart of your server software (the part that needs to read back this customer data) ... but that can be a long-running daemon, so it only needs this a few times per year -- or less. You might use something like a modified copy of ssh-agent to decouple the password management functionality from the rest of your application/server.
(If you're wondering what value there is in prompting for the key when it ends up in memory anyway while the machine is running --- consider what happens if someone breaks in and steals your computer. In the process they'll almost certainly power it off, thus protecting your data from the eventual restart. One option, though weaker, is to use a small USB drive for the private key (password/passphrase) storage. This is still vulnerable to the risk of theft, but it is less of a problem when it comes to your eventual drive disposal. Hard drives are relatively hard and expensive to properly destroy --- but physically destroying a small, cheap USB drive isn't difficult at all.)
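Here is a minimal sketch of that pattern using the cryptography package (the answer mentions PyCrypto, but the shape is the same); the file names are placeholders, and note that raw RSA-OAEP can only encrypt small records, so larger payloads need a hybrid scheme where RSA wraps a symmetric key:

```python
# Sketch of the pattern above: the PUBLIC key ships with the application and is
# used to encrypt data as it is stored; the passphrase-protected PRIVATE key is
# loaded interactively at (re)start and never lives in source control.
# File names are placeholders; large records need a hybrid (RSA + symmetric) scheme.
import getpass
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Write path -- runs everywhere, needs no secrets:
with open("app_public.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

def store_user_record(plaintext: bytes) -> bytes:
    return public_key.encrypt(plaintext, OAEP)

# Read path -- only in the long-running daemon, passphrase typed at startup:
passphrase = getpass.getpass("Private key passphrase: ").encode()
with open("app_private.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=passphrase)

def load_user_record(ciphertext: bytes) -> bytes:
    return private_key.decrypt(ciphertext, OAEP)
```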

What's the most secure way to send data from a-b? [closed]

If I had let's say a sensitive report in PDF format and wanted to send it to someone, what is the most secure way?
Does a desktop application make it more secure, since we are basically doing client-to-server communication via a private IP address? Then add some kind of standard encryption algorithm to the data as you send it over the wire?
What about a web-based solution? In a web-based one, you have a third person in the loop. Sure, it would do the same kind of encryption that I would have on a desktop... but now, instead of client->server directly, you have client->server | server<-client... You also have exposure to the broad internet for any intruders to jump in, making yourself more open to a man-in-the-middle attack... One thing the web has going for it is digital certificates, but I think that is more authentication than authorization... which the desktop approach doesn't have?
Obviously from a usability point of view, a person wants to just go to a web page and download a report he's expecting. But most secure? Is desktop the answer? Or is it just too hard to do from a usability perspective?
OK there seems to be some confusion. I am a software engineer and am facing a problem where business users have some secure documents that they need to distribute - I am just wondering if using the web and SSL/CA is the standard solution to this, or maybe a desktop application could be the answer??
The method that comes to mind as being very easy (as in it has been done a lot and is proven) is just distributing via a web site that is secured with SSL. It's trivial to set up (doesn't matter if you're running Windows, *nix, etc) and is a familiar pattern to the user.
Setting up a thick client is likely more work because you have to do the encryption yourself (not difficult these days, but there is more to know in terms of following best practices). I don't think that you'll gain much (any?) security from having to maintain a significantly larger set of code.
Most secure would be print it, give it to a courier in a locked briefcase, and have the courier hand deliver it. I think that'd be going overboard, though :)
In real world terms, unless you're talking national security (in which case, see courier option above), or Trade Secrets Which Could Doom Your Company (again, see courier option above), having a well encrypted file downloaded from the web is secure enough. Use PGP encryption (or similar), and I recommend the Encrypt and Sign option, make the original website a secure one as well, and you're probably fine.
The other thing about a desktop application is: how is it getting the report? If it's not generating the report locally, it's really doing just as many steps as a web page: app requests report, report generated, server notifies client, client downloads.
A third option, though, is to use something other than the website to download the reports. For instance, you could allow the user to request the report through the web, but provide a secure FTP (SFTP or FTPS) site or AS2 (or AS3) connection for the actual download.
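For what it's worth, the SFTP download step is only a few lines with Python's paramiko library; the hostname, credentials, and paths below are placeholders:

```python
# Minimal sketch of the SFTP download step with paramiko.
# Hostname, username, key path and file paths are placeholders.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()            # refuse servers we don't already know
client.connect("reports.example.com", username="reportuser",
               key_filename="/home/me/.ssh/report_key")

sftp = client.open_sftp()
sftp.get("/outgoing/quarterly_report.pdf", "quarterly_report.pdf")
sftp.close()
client.close()
```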
Using a secure file transfer (or managed file transfer) is definitely the best option for securely transferring electronic data. There are smaller, more personal-use solutions out there like Dropbox, or enterprise solutions like BiscomDeliveryServer.com.
Print it off, seal it in an envelope, hire some armed guards for protection and hand deliver it to them.
You may think its a silly answer, but unless you can identify what your threat vectors are any answer is pretty meaningless, since there is no guarantee it will address those threats.
Any system is only as secure as its weakest link. If you sent the document securely and the user downloaded / saved it to their desktop, then you'd be no better off than with an insecure system. Even worse, they could get the document and then send it on to loads of people that shouldn't see it, etc. That leads on to the question of whether you have an actual requirement that they can only view, and not download, the document. If not, why go to all this effort?
But if they are able to download it, then the most secure method may be to send them an email telling them that the document is available. They then connect to a system (web / ftp?) using credentials sent separately to authenticate their access.
I'm surprised no one has mentioned a PK-encryption over email solution. Everyone in the "enterprise" gets a copy of everyone else's public key and their own private key. Lots of tools exist to do the heavy-lifting. Start with PGP and work from there.

Best way to implement an SFTP server solution? [closed]

I'm currently setting up a commercial SFTP server, and I'm just looking for some of your opinions on the set-up I'm currently thinking of implementing, as well as a recommendation as to which commercial secure FTP server software would best suit it. Bear in mind that the data I'm responsible for is highly sensitive, so any comments/feedback are much appreciated.
Here's the scenario:
1) Before file upload, files are compressed & encrypted using AES 256 with a salt (a rough sketch of this step appears just after this list).
2) Files uploaded from the clients' server over SFTP (port 22) to our SFTP server.
3) Files are then downloaded over HTTPS by our other client using one time password verification (strong 10 char alphanumeric password)
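For concreteness, step (1) looks roughly like the sketch below -- the exact KDF and layout here (PBKDF2, salt stored next to the ciphertext) are illustrative rather than the real code:

```python
# Sketch of step (1): derive an AES-256 key from a password with PBKDF2 and a
# random salt, then encrypt with AES-GCM. The specific KDF and output layout
# are illustrative assumptions, not the actual implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_file(path: str, password: bytes) -> bytes:
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = kdf.derive(password)            # 32 bytes -> AES-256
    nonce = os.urandom(12)
    with open(path, "rb") as f:
        ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
    return salt + nonce + ciphertext      # salt and nonce are not secret
```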
The specifics of the implementation I'm thinking of are:
For part (2) above, the connection is opened using host key matching, public key authentication and a user name/password combination. The firewall at both sides is restricted to only allow the static IP of the client server to connect.
For part (3), the other client is supplied with a username/password on a per-user basis (for auditing) to log into their jailed account on the server. The encryption password for the file itself is supplied on a per-file basis, so I'm trying to apply two modes of encryption at all times here (except when the files are resting on the server).
Along with dedicated firewalls on both sides, access control on the SFTP server will be configured to block IP addresses with a certain number of failed attempts over a short time, invalid password attempts will lock out users, password policies will be implemented, etc.
I like to think that I've covered as much as possible but I'd love to hear what you guys think about this implementation?
For the commercial server side of things, I've narrowed it down to GlobalSCAPE SFTP with the SSH & HTTP module, or JSCAPE Secure FTP Server - I'll be assessing the suitability of each over the weekend, but if any of you have any experience with either, I'd love to hear about it also.
Since the data is clearly both important and sensitive from your clients' perspectives, I'd suggest you consult a security professional. Home-grown solutions are typically a combination of over- and underkill, resulting in mechanisms that are both inefficient and insecure. Consider:
The files are pre-encrypted, so the only gain from SFTP/HTTPS is encryption of the session itself (e.g. login), but...
You're using PKI for upload and OTP for download, so there's no risk of exposing passwords, only user IDs -- is that significant to you?
How will you transmit the one-time passwords? Is the transmission secure?
Keep in mind that any lockout scheme should be temporary, otherwise a hacker can disable the entire system by locking each account.
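For instance, a lockout that expires on its own (rather than needing an admin to unlock the account) can be as simple as counting failures within a sliding window; this in-memory sketch only illustrates the idea:

```python
# Illustration of a temporary (rather than permanent) lockout: an account is
# locked only while it has too many recent failures, so an attacker cannot
# permanently lock every user out. In-memory and single-process, for the idea only.
import time
from collections import defaultdict

MAX_FAILURES = 5
WINDOW_SECONDS = 15 * 60          # failures older than this no longer count

_failures = defaultdict(list)     # username -> timestamps of recent failures

def record_failure(user: str) -> None:
    _failures[user].append(time.time())

def is_locked(user: str) -> bool:
    now = time.time()
    _failures[user] = [t for t in _failures[user] if now - t < WINDOW_SECONDS]
    return len(_failures[user]) >= MAX_FAILURES
```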
Questions to ask yourself:
What am I protecting?
From whom am I protecting it?
What are the attack vectors?
What are the likelihoods and risks of a breach?
Once you've answered those questions, you'll have a better idea of the implementation.
In general:
Your choice of AES256 + salt is very reasonable.
Multi-factor authentication is probably better than multiple iterations of encryption. It's often thought of as "something you have, plus something you know," such as a certificate and a password, requiring both for access.
As far as available utilities, many off-the-shelf packages are both secure and easy to use. Look into OpenSSH, OpenVPN, and vsftp for starters.
Good luck - please let us know what method you choose!
So what's wrong with OpenSSH that comes with Linux and the BSDs?
Before file upload, files are compressed & encrypted using AES 256 with a salt.
This part rings some alarm bells... have you written some code to do this encryption/compression? How are you doing the key management? You also say your key is password-derived, so your use of AES 256 and a salt is giving you a false sense of security - your real key space is much smaller. Also, the use of the term 'salt' is inappropriate here, which suggests further weaknesses.
You would be better off to use a well proven implementation (e.g. something like PGP or GPG).
Also, if you use PGP style public key encryption for the file itself (and decent key management), the security of your SFTP server will matter a lot less. Your files could be encrypted at rest.
The argument for the security of the rest of the system is very convoluted (lots of protocols, authentication schemes, and controls) - it would be a lot easier to secure the file robustly, then do best practices for the rest (which will matter a lot less and also be independent controls).
