SSH session without ANY authentication - linux

I have a special user, called update, whose shell is a special command that fetches any pending updates to our system.
I'd like to be able to open an SSH session as this user without any kind of authentication (no password, no key, nothing), so that anyone who wants to update a system can just run "ssh update@<host>" without having to know a password or have a pre-shared public key on the box.
Insecure, I know, but this is over a VPN, so it should not be a problem; they will only run the update and then be thrown out.
Can this be done?

A VPN is not a good reason to avoid authentication when using SSH. Even if there is a way to do this, you shouldn't use it. Using an SSH key is the best way to do it. If you really want to do something like this, use the same key and distribute it to each box.
What do you do if the local network of your boxes is compromised? You've just created a security hole.
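If you do go the shared-key route, you can at least constrain what the key is allowed to do with a forced command in the update user's authorized_keys file. A sketch, assuming the update logic lives in a script at /usr/local/bin/run-update (a hypothetical path):

```
# ~update/.ssh/authorized_keys on each box: the key can only run the update
# command, never an interactive shell, and cannot forward ports or agents.
command="/usr/local/bin/run-update",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... update-key
```

With this in place, even if the shared key leaks, the holder gets exactly one capability on the box: triggering the update.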

As this RFC points out, there is support for host-based authentication: https://www.ietf.org/rfc/rfc4252.txt
So, used carefully, it should be possible by following this tutorial: https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Host-based_Authentication#Server_Configuration_for_host-based_authentication.
That may not be a complete solution, but it should help you find one.
But really, you should not do this for this use case... Just offer a basic web endpoint that does nothing except start the update process on the next cron run. I know, it's not as "simple", but it's a lot more secure.
Or, if they have access to the server anyway, add a script with the setuid bit set which triggers the update.
Also, if you have a central server in your company that everyone has access to, you can use it as an intermediate step to host the key pair, so you don't need to manage X keys for everyone.
Or use a more modern setup with Puppet or similar, or just configure the server to always update without any user interaction needed...

How to "hide" top-secret data that need to be fed to the app

Let's say I have an application that should run on a VPS. The app uses a configuration file that contains very important private keys, in the sense that no one else should ever have access to them! I know VPS providers can easily access my files. So, how can I "hide" the sensitive data from malicious actors while still keeping it usable by the app?
I believe encryption will be of no help, since the decryption would have to be done on the same machine! Also, I know running my own private server is a no-brainer; but that's not an option, unfortunately.
You cannot solve this problem. Whatever workaround you can find, there will be a way for someone with access to repeat the same steps. You can only solve this if you have full control over the server (both hardware and software), otherwise, it's a lost battle.
Some links:
https://cheatsheetseries.owasp.org/cheatsheets/Key_Management_Cheat_Sheet.html
https://owaspsamm.org/model/implementation/secure-deployment/stream-b/
https://security.stackexchange.com/questions/223457/how-to-store-api-keys-when-algo-trading
You can browse Security SE for some direction and ask a more targeted question.
This problem is mitigated by using your own servers, using specialized hardware for key storage, trusting your hosting provider or cloud, and using well-designed security protocols.
But the VPS provider doesn't know how your app will decrypt the keys in the file? Perhaps your app has a decrypt key embedded in it, or maybe it is something even simpler. Without decompiling your app they are no closer to learning the secrets. Of course if your "app" is just a few scripts then they can work it out.
For example if the first key in the file is customerID, they don't know that all the other keys are simply xor'ed against a hash of your customerID - they don't even know the hashing algorithm you used.
Ok, that might be too simplistic if you used one of the few well-known hashes, but if there are only a few clients, it can be enough.
Obviously, they could be listening to the network traffic your app is sending, but then that should be end-to-end encrypted already, if you are that paranoid.
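The xor-against-a-hash scheme described above can be sketched in a few lines of Python. To be clear, this is purely illustrative and, as the answer itself concedes, it is weak obfuscation rather than real encryption:

```python
import hashlib

def obfuscate(secret: bytes, customer_id: str) -> bytes:
    """XOR each byte of the secret against a repeating SHA-256 hash of the customer ID."""
    keystream = hashlib.sha256(customer_id.encode()).digest()
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(secret))

# XOR is its own inverse, so the same function both hides and recovers the secret.
hidden = obfuscate(b"api-key-12345", "customer-42")
assert obfuscate(hidden, "customer-42") == b"api-key-12345"
```

Anyone who decompiles the app learns the whole scheme, which is exactly the point the answer above makes: this only works while the algorithm stays secret.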

Security precautions to take, related to the `pivotal` user, after installing Chef (12)?

I've just installed Chef (12), on a public facing server, along with the Chef Manage interface.
The user pivotal is created by default, but I can't find any obvious information about the security considerations, which are crucial for public web services.
As far as I can see, it's not possible to log in to the web interface.
Is there anything that needs to be done with this user after installation (e.g. change the password, rename it, disable permissions, etc.)?
That user is magic and internal to Chef Server. You will never touch it directly or be able to see it unless you do something very wrong like copying superuser authentication keys manually to a workstation.

How to protect password in Selenium scripts

I am writing Selenium (Seleno) scripts to test a C# MVC web application which requires users to log in. At the moment the username and password are hard-coded into the script, but I need to make sure the password is protected before I can commit the scripts to our code repository.
The scripts will be run autonomously through CI (TeamCity), so the password must be available to the program without any human input.
In terms of security requirements, the password is common knowledge amongst devs, but it is also bundled with the software that is deployed to clients (which obviously opens a back door to anyone in possession of the password, for better or for worse). So if someone gains access to our codebase, we need to be sure that they can't get at the password. The password itself is stored (salted and hashed) in a SQLite database.
If I pass an encrypted value into the program and then decrypt it, will that protect us? I'm not too bothered about the password being in memory on the server where the test runs, as that server should be securely locked down and will only exist for the duration of the tests.
The only other thing I can think of is to insert a temporary password into the SQLite database once TeamCity has spun up the temporary server instance and before the tests are run. I'm not sure how to achieve that, though.
I would have thought this would be a really common problem with Selenium, but I haven't yet been able to find a definitive solution.
The solution is to set your passwords at runtime. I would suggest environment variables. Then they are not in your codebase and instead somebody would need to hack into where you run your tests from.
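The environment-variable approach above can be sketched in a few lines, assuming the CI server is configured to export a variable named TEST_USER_PASSWORD (the name is arbitrary):

```python
import os

def get_test_password() -> str:
    """Read the test account password from the environment; fail fast if it's missing."""
    password = os.environ.get("TEST_USER_PASSWORD")
    if not password:
        raise RuntimeError(
            "TEST_USER_PASSWORD is not set; configure it in the CI build parameters"
        )
    return password
```

TeamCity supports password-type build parameters, which are masked in the build log, so the value never appears in the repository or the test output.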
As SiKing suggests, the solution is to use a temporary, test-specific password which won't make it into production code. Simples.
One approach that I have used is to execute JavaScript to evaluate things:
<td>storeEval</td>
<td>prompt("What password")</td>
<td>secretPassword</td>
That only really works for user-run tests via WebDriver, though.
You could set up some kind of small AJAX request at the start of the test to http://localhost/credentials.json or similar, which is set up on your CI instance (but not available anywhere else).
Add a password-manager extension like Bitwarden, KeePass, etc., configure it to auto-login, and give it 2-3 seconds in the code to complete the login.

My applications need to send emails, where and how should I store the SMTP password?

It seems like every application I create needs to be able to send the occasional email. E.g. status emails. For this question, assume my application is a backup tool, locally installed on many windows clients, and each installation needs to send daily status mails. It could be installed on an organization's server or on a private computer.
I am asking the user to provide the credentials to an email account he owns (SMTP host, port, username, password, from-address). I copied this approach from applications like Atlassian Jira/Confluence or JFrog Artifactory. Where and how are they storing the SMTP passwords anyway?
My current understanding is: Salting/Hashing approaches do not apply here as I need to be able to retrieve the plaintext password to actually send the emails. I don't want to store the passwords in plaintext, so it's got to be some kind of encryption/decryption approach (right?).
I can tell the user not to use his main email account, but to use some secondary account or, even better, setup a special email account just to be used by my application. If the user is an admin of an organization, he might be able to setup an email account on his exchange server or configure SMTP relaying. But, I know me, and I know my private users, some of them will just use their main email account anyway, so I want to do everything I can to keep their credentials as safe as possible (by that I mean "follow best practices").
Preferably, I would like to store the encrypted password in the application's database.
I've spent hours and hours reading through questions on stackoverflow, but I cannot see a consensus (like there is for user account login credentials). I find this surprising, as I expect basically every developer to be confronted with this problem sooner or later.
There must be some best practices to follow, some established way to go about this, but I haven't found it yet.
Please point me to resources on SO/the web that explain how to tackle this problem. If at all possible written by some specialist in the field.
Some SO questions I have looked at:
Protecting user passwords in desktop applications (Rev 2)
Windows equivalent of OS X Keychain?
It would have been good if you had provided more details on the operating system and the programming language...
However, here is some general advice:
The most important thing you have to know is: If your application is able to decrypt it without user interaction (e.g. a password by the user or a hardware token) any attacker will be able to do it. All measures you implement will just increase the complexity of gaining this password.
Of course, you should raise the bar as high as possible. On Windows, the DPAPI will be your friend. You can find some information on how to use it from C# here, for example: http://www.c-sharpcorner.com/UploadFile/mosessaur/dpapiprotecteddataclass01052006142332PM/dpapiprotecteddataclass.aspx (I don't know which environment you use).
You can also implement your own configuration encryption using RSA with a key stored in the local key container - see http://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider%28v=vs.100%29.aspx.
Maybe some other people can help you with other operating systems, but the concept will be the same there.
It may also be possible to use some kind of SSO authentication like Kerberos or NTLM or ..., but that means modifications on the mail server side.
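To make the "raise the bar" idea concrete outside of DPAPI, here is a stdlib-only Python sketch that encrypts the SMTP password with a per-installation key kept in a separate, permission-restricted file. Everything here (function names, the key-file layout, the SHA-256 counter-mode construction) is my own illustration, not how Jira or Artifactory actually do it, and as the answer stresses: anything the app can decrypt unattended, an attacker with the same access can decrypt too.

```python
import hashlib
import os
import secrets

def load_or_create_key(path: str) -> bytes:
    """Fetch the per-installation key, creating it with restrictive permissions on first run."""
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(secrets.token_bytes(32))
        os.chmod(path, 0o600)  # readable only by the service account
    with open(path, "rb") as f:
        return f.read()

def _keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode -- adequate for a sketch; in production use a
    # vetted AEAD cipher (e.g. AES-GCM) from a real cryptography library.
    blocks = []
    for counter in range(length // 32 + 1):
        blocks.append(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
    return b"".join(blocks)[:length]

def protect(secret: bytes, key: bytes) -> bytes:
    """XOR against the keystream; applying it twice with the same key decrypts."""
    return bytes(a ^ b for a, b in zip(secret, _keystream(key, len(secret))))
```

The value of the separate key file is operational, not cryptographic: a backup or dump of the application database alone no longer reveals the SMTP password.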

Can I allow my program to run scripts?

Some users are suggesting that my (C#) program should be able to run scripts after completing its job. This would be done through a command line entered in my configuration dialog.
I'm no security expert, so I'm not sure if this acceptable in terms of security. Since the app runs with admin privileges (on Windows), wouldn't that be a huge security risk? Someone could just modify the config files of my application to point to a potentially dangerous script, couldn't they?
On the other hand, plenty of applications allow this, while requesting admin privileges, so I guess it must be ok, but I thought I'd better seek advice before opening wide security holes everywhere =)
Can I allow my application running with full privileges to launch user-specified scripts?
You can restrict access to your config in different ways - from obfuscating the config file to using NTFS permissions to limit access of non-admin accounts to it.
C# certainly allows you to run a user script. System.Diagnostics.Process makes that real easy. The question of security here is another problem.
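The launching side of this can be sketched briefly; I'll use Python for the sketch (in C# the equivalent is System.Diagnostics.Process.Start). The point is to avoid handing the configured string to a shell, so stray metacharacters in the config cannot inject extra commands:

```python
import shlex
import subprocess

def run_completion_script(command_line: str, timeout_s: int = 60) -> int:
    """Run the user-configured post-completion command without a shell.

    Splitting the string ourselves and passing an argv list means the
    command is executed directly, not interpreted by /bin/sh or cmd.exe.
    """
    argv = shlex.split(command_line)
    result = subprocess.run(argv, timeout=timeout_s, check=False)
    return result.returncode
```

This only addresses injection through the command string; the deeper risk that the configured command is itself malicious still comes down to protecting the config file.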
Running scripts when a process completes can be incredibly useful and can make or break your target audience's opinion of your application. Understandably, you don't want your product to be turned against your own customers through a malicious hack like the one you're describing.
The root of this problem is that your options are (I'm assuming) text based and easily editable. Your best bet is to encrypt your config file to prevent outside changes to it. Note that this doesn't prevent people from using your app to change your options to allow a malicious script, but for somebody to do that, they need access to an instance of your application instead of simply file read/write access.
This brings up one more aspect you should watch for. Don't use the same key for every installation of your application. If you do, then Bob could cause Alice to run a malicious script by copying Alice's config, using his own instance of your app to decrypt it and make the change, and then replacing Alice's config with the new malicious one.
Here is another SO question for how to encrypt strings in C#.