Ubuntu: security risks of copying files from Windows to /var/www

I have a Windows machine for development. When the code is working, I SSH in and copy the files from development to production, which is /var/www.
Is this copy-and-paste approach safe? I see that some files are marked read/write and some others read-only. Should I be concerned when transferring files from dev to prod this way?
What should the output look like when I do ls /var/www?
If somebody can give some hints - thanks.

/var/www/ is the default document root and as such should/would contain the root folders of your web sites. ls would list them out, while ls -ltr would also show the permissions, ownership, and timestamps of those folders/files.
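For illustration, on a stock Ubuntu install the listing might look something like this (the names and dates are only examples):
$ ls -l /var/www
drwxr-xr-x 2 root root 4096 Jan 10 12:00 html
-rw-r--r-- 1 root root  612 Jan 10 12:00 index.html
Directories owned by root with mode 755 (drwxr-xr-x) and files with mode 644 (-rw-r--r--) are a sensible baseline; the web server only needs read access to serve them.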
You can SSH into dev and transfer files to prod, but a better way would be to set up secure FTP (SFTP) with a specific user account that only has access to specific directories under /var/www/, and then lock it down to all others. That way only one user has access to prod at the filesystem level, and that access is very limited and logged.
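One way to implement that (a minimal sketch, assuming OpenSSH and a hypothetical deploy account named deploy) is an SFTP-only chroot in /etc/ssh/sshd_config:
Match User deploy
    ChrootDirectory /var/www
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
Note that OpenSSH requires the ChrootDirectory itself to be owned by root and not group- or world-writable, so the deploy user would get write access only to specific subdirectories beneath it.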

Related

Access Control Lists vs. chmod for proper nginx permissions in /var/www

On our server, I work with another developer. We both want to be able to create and edit files, including each other's files, in full collaboration, without permission errors and without having to use sudo all the time. We are also using Git. We have had issues in the past with Git creating repository files owned by root, or by only one of us; then when we try to push to the repository we get crazy errors and have to chmod everything back to one of us so the ownership is unified. Just a general mess. We are using a Debian server.
Should we make two usernames and add them to the www-data group? Is it secure to add us to www-data while people are visiting the website?
Should we then chmod all folders to 755 and all files to 644?
We want all new files made within /var/www to be owned by one of our usernames but with the www-data group by default, so that we can both edit them. Is chmod -R g+rws /var/www enough for this? We want files to be usable as soon as they are made, with permissions set properly by default.
Should we use ACL for this instead of all the chmod stuff?
Is this a good guide to follow?
http://machiine.com/2013/easy-way-to-give-user-permission-to-edit-and-add-files-in-varwww/
Thanks
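For reference, a minimal sketch of the setup being asked about (the usernames alice and bob are hypothetical):
sudo usermod -aG www-data alice
sudo usermod -aG www-data bob
sudo chgrp -R www-data /var/www
sudo find /var/www -type d -exec chmod 2775 {} +   # setgid bit: new files inherit the www-data group
sudo find /var/www -type f -exec chmod 664 {} +
sudo setfacl -R -d -m g:www-data:rwX /var/www      # default ACL: new entries stay group-writable
The setgid bit handles group inheritance; the default ACL covers the group-write bit that a restrictive umask would otherwise drop.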

Copy site but preserve permissions

I wanted to copy my Drupal site to another location (a VDS). I got a full backup from my provider (in tar.gz), untarred and ungzipped it, deleted some folders, zipped it again in 7zip format, then copied it with SFTP to /var/www on the VDS and unzipped it. But all permissions are now read-only, so Drupal doesn't work at all because it cannot access its files.
Can anyone tell me where I lost my permissions, the right way to migrate to my VDS, or (and) how I can manage my corrupted-permission Drupal now (maybe I can just change the permissions)?
You most likely lost the permissions when you repacked the backup in 7zip format: unlike tar, the 7z format does not store Unix ownership or permission bits. Read-only permission is generally fine for a Drupal site, except for the upload folder (it's normally called files, and it can be in sites/default, in sites/YOUR_SITE_CONFIGURATION_FOLDER, or wherever you set it in admin/config/media/file-system). The files folder, and every subfolder it contains, must be writable by the web server, so if your web server is running as the www-data user (the standard user for Apache in Ubuntu; other systems may differ) you can for example do
chmod -R o+w sites/default/files
chown -R www-data sites/default/files
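If you still have SSH access to both machines, one way to migrate without losing permissions (a sketch, with a hypothetical hostname) is to keep the site in tar end to end, since tar stores ownership and permission bits:
tar czf - . | ssh root@vds.example.com 'tar xzf - -C /var/www'
Run this from inside the extracted site directory after deleting the folders you don't want; the -C flag tells the receiving tar where to unpack.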

Where to administer apache fallback directory

I have been put in charge of an Ubuntu 13 server installation. Apache is configured to use /var/www as the default directory, which is correct. The issue is that there seems to be a fallback directory configured that points to /usr/share. If I type www.address.com into a browser, it serves documents out of /var/www, but if I know the name of a directory in /usr/share and type www.address.com/sharedir, then it serves out of /usr/share. I have looked in the Apache config file and the default site config file and do not see this mapping. I do not want this behavior and am concerned that it might be the default behavior out of the box.
Can anyone point me to other areas where this behavior may be controlled/managed?
Thanks for any assistance.
Open your
/etc/apache2/sites-available/default
file and replace every occurrence of
/var/www
with
/path/to/folder/you/wish
then save. It is best to restart Apache afterwards with
service apache2 restart
Now put the website contents in the new location, /path/to/folder/you/wish.
Once you have changed the DocumentRoot of the site as described above, no files will be fetched from any other location. Hope this helps. :)
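For illustration, the relevant part of that default site file typically looks something like this (a sketch of an Apache 2.2-era stock config; details vary by version):
<VirtualHost *:80>
    DocumentRoot /path/to/folder/you/wish
    <Directory /path/to/folder/you/wish>
        Options FollowSymLinks
        AllowOverride None
    </Directory>
</VirtualHost>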
[SOLVED] After a bunch more digging around, I discovered that the user who originally set up this server had erroneously put .conf files in the conf.d directory and the mods-enabled directory that were routing traffic to the other directories. Sorry to anyone who noodled on this one.
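If you need to hunt down such directives yourself, a quick way (assuming the stock Debian/Ubuntu Apache layout) is to search the whole config tree for Alias rules:
grep -ri alias /etc/apache2/conf.d /etc/apache2/mods-enabled /etc/apache2/sites-enabled
Any Alias or ScriptAlias line that maps a URL path to /usr/share is a candidate for the fallback behavior described above.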

db.* files in /home from Perforce?

I see several db.* files in my /home directory, and it seems they come from Perforce; for example db.archmap, db.bodtext, db.change, db.changex.
Are these files useful? Can I delete them? They are making my /home directory messy.
You have started a server using your home directory as the Perforce server's P4ROOT folder. Those files are generated when the server starts, and they cannot be deleted unless you want to hose your server installation. It's not clear to me how you've started the server instance, so I'll try to cover multiple bases with my answer.
If you want to start up the server under your own account, you should set the P4ROOT environment variable and point it to where you want the server to store its files. Alternatively, when you start the server, you can specify the root folder on the command line using the -r option:
p4d -r /home/mark/p4server
which would put the server's files into the directory called 'p4server' off of my home directory.
Typically it is best to run the Perforce server as a user dedicated to running Perforce. I use a user called 'perforce' and set P4ROOT (and the other variables) in that user's environment. If you cannot use a separate user, it might be easier to use the -r command-line option mentioned above.
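A minimal sketch of that setup (the account name and root path are just examples, not Perforce defaults):
sudo useradd -m perforce                           # dedicated account for the server
sudo mkdir -p /srv/p4root
sudo chown perforce /srv/p4root
sudo -u perforce env P4ROOT=/srv/p4root p4d -d     # start p4d as a daemon with its own root
With P4ROOT pointed away from /home, the db.* files land under /srv/p4root instead of cluttering a home directory.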
Those files are server files only, not client files, so it is safe to delete them; but if you start the server back up, it will recreate them, so you might want to uninstall the server.
Unless you are running a beta version: p4sandbox is coming soon (maybe it's in the beta, I forget), and it MAY create those files. I don't have a beta version, so I can't verify what new files the client may or may not create.
You can check the documentation here to see what these files do/are for.

How to allow file uploading outside home directory with SSH?

I'm running a Fedora 8 Core server. SSH is enabled, and I can log in with Transmit (an FTP client) on port 22. When logged in, I can successfully upload files to the user's home directory. Outside the home directory I can only browse files, not upload or change anything. How can I allow file uploads to a specific directory outside the user's home directory?
An easy method is to grant the user rights to the folder you want them to be able to upload to, then add a symlink (ln -s) from their home folder to the destination.
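A sketch of that approach, assuming a hypothetical user webdev and target directory /var/www/uploads:
sudo chown webdev /var/www/uploads               # give the user write access to the target
ln -s /var/www/uploads /home/webdev/uploads      # convenient entry point inside the home folder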
You can also just use
scp file user@server:/path
which will let you upload to any directory you have permission to write to:
file is the file to copy
user and server should be obvious
/path is any destination path on the server that you have rights to; /home/user/ would be your likely default home folder
You need to make those directories writable by the proper users, or (easier) that user's group. This is of course a huge security hole, so be careful.
Hi,
Give the FTP user write permission on the directory where you want to upload your files.
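For example, via the group (a sketch; the group name and path are hypothetical):
sudo chgrp webdev /srv/uploads    # put the target directory in the user's group
sudo chmod g+w /srv/uploads       # and make it group-writable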
