I'd like to allow users of my web application to upload the contents of a directory via rsync. These are just users who've signed up online, so I don't want to create permanent unix accounts for them, and I want to ensure that whatever files they upload are stored on my server only under a directory specific to their account. Ideally, the flow would be something like this:
1. user says "I'd like to update my files with rsync" via authenticated web UI
2. server says "OK, please run: rsync /path/to/yourfiles uploaduser123abc@myserver:/"
3. client runs that, updating whatever files have changed onto the server
4. upload location is chrooted or something -- we want to ensure the client only writes to files under a designated directory on the server
5. ideally, the client doesn't need to enter a password - the 123abc in the username is enough of a secret token to keep this one rsync transaction secure, and after the transaction this token is destroyed - no more rsyncs until a new step 1 occurs.
6. server has an updated set of the user's files.
If you've used Google AppEngine, the desired behavior is similar to its "update" command -- it sends only the changed files to appengine for hosting.
What's the best approach for implementing something like this? Would it be to create one-off users and then run an rsync daemon in a chroot jail under those accounts? Are there any libraries (preferably Python) or scripts that might do something like this?
You can run sshd chrooted (jailed) and use rsync over it normally; just use PAM to authenticate against an "alternate" authdb.
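If you would rather avoid ssh and PAM entirely, another approach is to run the rsync daemon itself with a throwaway, chrooted module per upload token. A minimal sketch, assuming hypothetical paths and a module whose name doubles as the one-time token (your web app would generate these entries at step 1 and remove them after the transfer; use chroot requires the daemon to be started as root):

# /etc/rsyncd.conf -- generated per transaction by the web app
[uploaduser123abc]
    path = /srv/uploads/user123          # this account's designated directory
    use chroot = yes                     # confine writes to that path
    read only = no
    auth users = uploaduser123abc        # one-off user for this transaction
    secrets file = /etc/rsyncd.secrets   # contains "uploaduser123abc:onetimepassword"

# server: rsync --daemon
# client, as handed out by the web UI:
#   rsync -r /path/to/yourfiles rsync://uploaduser123abc@myserver/uploaduser123abc/

To honor the "no password prompt" requirement literally, the web UI can also hand out the one-time password for the client to supply via --password-file, or you can drop auth users and rely on the unguessable module name alone. Deleting the module and its secrets entry after the transfer invalidates the token until the user starts a new session.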
We want to make a Subversion repository read only. Doing this for a single repository in a Subversion instance did not work where ssh is concerned; svn+ssh access appears to bypass svn's access controls.
We followed the suggestions here:
Read-only access of Subversion repository
Write access should have been restricted, but that did not happen: the repository is still writable despite the read-only changes made to it.
The easiest way to restrict access (assuming there are no users who require write access) is to remove the w (write) bit on the files in the SVN repo.
chmod -R gou-w /path/to/svn-repo
That will prevent writes at the filesystem / OS level.
If some users still require write access, you can create separate svn+ssh endpoints for each user class that map to different users on the host server, using the group write bit vs. the other write bit to determine which class can actually write:
groupadd writers-grp
chgrp -R writers-grp /path/to/svn-repo
chmod -R ug+w /path/to/svn-repo
chmod -R o-w /path/to/svn-repo
I would then register the SSH keys for writers against the writing user on the server, and prevent password access.
The "read-only" users could be allowed a well-known password.
This isn't as "clever" or "elegant" as configuring the SVN server configs, but it works pretty darned well as long as the users keep their SSH keys secret.
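If you go the svn+ssh route, you can also enforce read-only access at the svn level with a forced command in the read-only account's authorized_keys, pinning every connection for that key to a read-only svnserve. A sketch with illustrative paths and usernames, assuming your svnserve build supports the -R/--read-only flag:

# one line per key in the read-only account's ~/.ssh/authorized_keys
command="svnserve -t -R -r /path/to/svn-repos --tunnel-user=alice",no-port-forwarding,no-pty ssh-ed25519 AAAA... alice@example

Writers' keys get a similar entry without -R, registered against the writing account.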
Restrict commit access with a start-commit hook.
Description
The start-commit hook is run before the commit transaction is even created. It is typically used to decide whether the user has commit privileges at all.
If the start-commit hook program returns a nonzero exit value, the commit is stopped before the commit transaction is even created, and anything printed to stderr is marshalled back to the client.
Input Parameter(s)
The command-line arguments passed to the hook program, in order, are:
1. Repository path
2. Authenticated username attempting the commit
3. Colon-separated list of capabilities that a client passes to the server, including depth, mergeinfo, and log-revprops (new in Subversion 1.5)
Common uses
Access control (e.g., temporarily lock out commits for some reason).
A means to allow access only from clients that have certain capabilities.
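As a concrete example of the access-control use, here is a minimal start-commit hook sketch in shell. The allow-list file name and location are assumptions for illustration; the script goes in hooks/start-commit inside the repository and must be executable:

#!/bin/sh
# start-commit hook: reject commits from anyone not in the allow list.
# Arguments (per the documentation above): repository path, authenticated user, capabilities.
REPOS="$1"
USER="$2"

# One username per line; the location is illustrative.
ALLOWED="$REPOS/conf/allowed-committers"

if grep -qx "$USER" "$ALLOWED" 2>/dev/null; then
    exit 0    # user may commit
fi

echo "Commit access denied: this repository is read-only for user '$USER'." >&2
exit 1        # nonzero exit: the commit is rejected before the transaction is created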
I have a PHP application with usernames and public SSH keys in it. I would like to use these accounts as the user back end of OpenSSH.
I think I need to use pam_exec and a PHP/Bash script. I've written a PHP script that I can execute at the CLI (the shebang invokes the PHP executable via env). If I need to wrap it in a bash script instead to access environment variables, I can do that. The script currently takes a username as its first and only parameter, like so:
/opt/scripts/my-auth-script.php user_to_look_for
The script exits 0 on success (the user exists) or 1 if not. It also currently echoes OK or Failed, but I can easily turn that off.
So, my question is, how do I have pam_exec call my script to look for user accounts, before looking on the actual host system for user accounts?
I've got it working. The way to do this is to set the AuthorizedKeysCommand and AuthorizedKeysCommandUser settings of OpenSSH in sshd_config. There is a caveat: the reason that GitHub and others provide ssh as a service through a single login user shared among customers is that the user being logged into must be resolvable by the system, so it must exist locally, or the user database must be connected to a remote source like LDAP, which would then also have to be integrated into the application.
The way to get around this, though, is that AuthorizedKeysCommand can take parameters: %u for the username, and also in this case %k for the key or %f for the SHA256 fingerprint of the key. The script can then ignore the generic username it was given and just check the database for a match on the key or fingerprint. If we find it, we have the user for that key and authentication succeeds; if not, it fails.
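A minimal sketch of that setup (the shared login user, the script path, and the fingerprint-lookup mode of the PHP script are assumptions for illustration; the command must print the matching public key(s) in authorized_keys format for sshd to compare against):

# /etc/ssh/sshd_config
Match User git-user
    AuthorizedKeysCommand /opt/scripts/lookup-key.sh %u %f
    AuthorizedKeysCommandUser nobody

#!/bin/sh
# /opt/scripts/lookup-key.sh <username> <key-fingerprint>
# Ignore the shared username; ask the application database which account owns
# this fingerprint and print that account's public key(s). The --by-fingerprint
# mode of my-auth-script.php is hypothetical here.
FINGERPRINT="$2"
exec /opt/scripts/my-auth-script.php --by-fingerprint "$FINGERPRINT"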
With mongoose.connect('mongodb://username:password@host:port/database?options...');, which I use in a script, I don't suppose there is any real way to hide the password?
Should I even be concerned if MongoDB is only listening on 127.0.0.1? If my server gets exploited, then they can just cat that script to get the password.
You can put the password in an environment variable when launching node, or read it from a file not checked into source control. If MongoDB is only listening on localhost, an attacker would not be able to connect directly to the database from a remote machine. It would still be advisable to configure your firewall to block remote access, just in case some configuration change opens MongoDB up publicly.
Here is one related topic: Store db password as plain text in node.js
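A small sketch of the file-plus-environment-variable approach (the file path and app entry point are illustrative; keep the password file out of source control and readable only by the user running the app):

#!/bin/sh
# start-app.sh: read the password from a restricted file and hand it to node
# via the environment, so it never appears in the source tree.
export MONGO_PASSWORD="$(cat /etc/myapp/mongo_password)"
node app.js

Inside the app it is then read with process.env.MONGO_PASSWORD, exactly as in Solution 1 below.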
Solution 1:
Use an environment variable.
Run your app with MONGO_PASSWORD=yourpasswd node app
Then you can access it inside the app with process.env.MONGO_PASSWORD
Solution 2:
Make a module (I call it "secrets") that exports all of your secret credentials. Don't check it into source control. Then your app can just require('./secrets').
Solution 3:
Trousseau is an encrypted key-value store designed to be a simple, safe and trustworthy place for your data.
All the answers above are good suggestions, but they still leave the password visible on the host in an easy-to-find location, whether in a shell environment variable or in a file.
What I decided to do is have a job on every server boot that creates a file containing the password for the mongoose script to read, and then a cron job that deletes that file five minutes after boot. The password still exists on the system, but it is much harder to trace where it lives.
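A sketch of that boot-time arrangement with cron (the paths, the source of the secret, and the five-minute window are all illustrative, and @reboot requires a cron implementation that supports it):

# root crontab entries (crontab -e as root)
@reboot  install -m 600 -o appuser /root/mongo_password /run/mongo_password
@reboot  sh -c 'sleep 300 && rm -f /run/mongo_password'

The first job exposes the secret to the app user at boot; the second removes it five minutes later.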
Create a .env file in your Node project. Then put your username and password in it like this: DB_USER=your_username and DB_PASS=your_password.
Then load those variables in your index.js file, typically with the dotenv package.
I want to save the user's IP when they connect to their home folder. This is because I'm a user on a server where my team has a folder containing our public_html, but we all use the same account, so I just want to record who connected.
So I want to make a script that triggers when a connection is made and saves the user's IP into a hidden file.
But I don't know whether I could leave a script running in the background to do this, and how?
If you're root on that machine, you can simply check the auth log / messages / journal / ... (it depends on the distribution). By default sshd already logs all you need.
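For example, on a Debian/Ubuntu-style system (the log location, unit name, and account name vary, so treat these as illustrative):

# successful logins for the shared account, with source IPs
grep "Accepted" /var/log/auth.log | grep "shared-account"
# or, on systemd distributions (the unit may be sshd instead of ssh):
journalctl -u ssh --grep "Accepted"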
If you're not root, then keep in mind this will never be secure. You can do this in the user's bash profile, but:
Since it's running as the same user, whoever logs in can just change the file (you can't hide it)
Anyone can work around the script by executing some other command instead of the shell (for example, ssh user@host /some/command will not be logged)
It's not secret.
If that's OK with you, then you just need to add this to your bashrc:
echo "new connection at $(date) from ${SSH_CLIENT}" >> ~/your_connection_log
A different solution, which should actually have been the default: most distributions provide a login history that you can query for your account without root privileges.
Running last your_username should give you the details of the last few logins, which cannot be manipulated by the user (the log can possibly be spammed with entries, however).
I am using FileZilla to log in to an SFTP host with my credentials. However, I need to use an equivalent of sudo su - user (as used on Linux) to change the user. There is no password set for this general user, and hence direct login is not allowed.
What FTP command can I use with the "Enter custom command.." option in FileZilla to switch users after connecting?
(This is required so I can transfer files as a different user and not my login.)
The SFTP protocol doesn't support changing the user in the middle of a transfer session (so there is no way to log in and then change user with some custom command). But you can launch the SFTP server under the needed user using sudo, by changing the SFTP client configuration. I don't know whether this trick is supported by FileZilla, but it is supported by PuTTY and WinSCP. There, in the SFTP server settings, you can specify something like "sudo /bin/sftp-server" in order to launch the transfer session under a different user.
For example, here are instructions on how to do this with WinSCP:
https://winscp.net/eng/docs/faq_su#sudo
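For this to work, the login account also needs server-side permission to run the SFTP server binary as the target user without a password prompt, since the SFTP session gives sudo no terminal to ask on. A sketch of the sudoers entry, with illustrative account names and a binary path that varies by distribution:

# /etc/sudoers.d/sftp-as-appuser (edit with visudo)
# Let "myloginuser" run the SFTP server as "appuser" without a password.
myloginuser ALL=(appuser) NOPASSWD: /usr/lib/openssh/sftp-server

In the client's SFTP server setting you would then enter something like sudo -u appuser /usr/lib/openssh/sftp-server.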