Is there any way to tell Git to stop copying file group and owner settings? My situation is as follows:
I am developing on a home server, where I need to use my own user's permissions (not root) in order to develop in the Eclipse IDE (Eclipse complains if files are owned by root and the root group, as it cannot work with them).
Once I am done, I use Git to synchronize with a remote server running Red Hat, where the file/folder groups and owners are server specific. However, when I synchronize, my home server's permissions are carried over as well, and Apache on the remote server throws errors because it cannot read the files, so I have to reset them myself after every commit that adds or changes files.
Any thoughts on how to change my workflow?
P.S. I am using Linux/Debian on my home server.
Check the answer here: How do you deal with file ownership in git?
You're not doing anything wrong; this is just basic Git behavior. You can change the permissions locally to what they need to be on the server and make a new commit, or you can create a script that fixes all of the ownerships/permissions on the server when you do your sync.
If you push the changes to your server via a Git repository on the server, you can create a post-receive hook that calls this script.
http://git-scm.com/book/en/Customizing-Git-Git-Hooks
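For illustration, here is a minimal post-receive hook sketch; the paths, user and group are made up, and chown generally needs root, so adjust it to however your server is set up:

#!/bin/sh
# hooks/post-receive in the server-side repository
# check out the pushed files into the web root, then reset ownership and permissions for Apache
GIT_WORK_TREE=/var/www/myapp git checkout -f
chown -R apache:apache /var/www/myapp      # may require root or a sudo rule
chmod -R u+rwX,g+rX,o+rX /var/www/myapp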
One other thing did occur to me: a lot of Linux distros set the default umask to 0077 or 0007. Since I'm the only one using my laptop, I changed mine to 0002, since it just makes many things easier (plus my home directory is still 700). So directories I create will be rwxrwxr-x and files rw-rw-r--. Changing your umask would keep you from needing to think about setting the permissions later.
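For example, a quick sketch of checking and persisting the umask (assuming a Debian-style login shell; the exact file to edit varies by distro):

umask                              # print the current value
echo "umask 0002" >> ~/.profile    # apply it to future login shells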
I found the answer to my question after a while and thought I would post it here, just for the record.
User/group ownership is not shared through (and therefore not stored in) the repository. Only the file mode is transferred, essentially whether or not the file is executable (644 vs. 755).
A file that is updated or created takes its user and group ownership from whichever user runs the Git commands.
Related
I am tasked with monitoring the changes made to the source files of a website. I am not developing the website, just watching it. I am a firm believer in using version control, and am a fan of git, but the developer who is actually maintaining the site is not, and I have decided it is better to let him continue to work however he wants (don't ask). I do not want to have to give him any instructions whatsoever (except possibly telling him that I am adding files or directories that he can ignore).
I consider myself an intermediate-level user of git, so I want to run this by an expert or two.
I am thinking I can install git on the (Linux) server, and then check status and make commits via SSH. Will this work without jeopardizing the normal operation of the web server?
Yes, using Git on a server should not interfere with the normal operation of the server (as mentioned in the comments, doing this on a production server is dodgy but I'll leave that to one side.)
Note that using Git normally will create a .git directory at the root of whatever you're tracking. If that is your web server root directory, you might want to consider whether this is a risk as far as external access to the contents of the .git directory (depending on your server setup, this may or may not be a concern).
If you want to create the .git directory somewhere else outside your working tree, see the GIT_DIR environment variable.
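For example, a hedged sketch (the paths are made up) of tracking a web root while keeping the repository data elsewhere:

export GIT_DIR=/home/me/site.git      # where the repository data will live
export GIT_WORK_TREE=/var/www/html    # the directory being tracked
git init                              # creates /home/me/site.git; no .git appears under /var/www/html
git status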
I have two computers: Work, and Home.
My workspace on my work laptop is synced to Dropbox; that's how I can access my work at home.
But when I try to set up Perforce at home, I don't know how to link it to, or have it detect, the existing workspace in my Dropbox.
How?
First, I'll assume you can access your Perforce server from home, since you don't mention this in your problem statement.
Next, I'll assume you're able to use the same directory structure at home that you use at work (e.g. C:\Dropbox\projx).
When you create your client spec at work, be sure to edit out the HOST: line, since you'll be using it on two machines.
Use the same client name at home that you use at work.
If for some reason you are unable to use the same directory structure on both machines, you may have to use "p4 client" to change the ROOT: line of your client spec every time you switch between home and work.
Alternatively, you could use two different clients, and use "p4 shelve" to move files to the server when you're done for the day.
You basically just need to tell Perforce the name of your client. Usually you'll do this by adding a .p4config file in your Perforce client's root directory (or in one of its parent directories) that contains the line:
P4CLIENT=your-perforce-client-name
You might also need to add an environment variable that points to this file: P4CONFIG=.p4config.
If the local path to your client's root is different between your work and home machines, you'll also need to set AltRoots in your Perforce client specification and add the path for your home machine.
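For reference, a hedged sketch of what the relevant client spec fields might look like (the client name and paths are made up); note that the Host: field is left out so the spec can be used from both machines:

Client: projx-dropbox
Root:   C:\Dropbox\projx
AltRoots:
    D:\Dropbox\projx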
I am creating a WordPress framework that has an auto update facility. When the system updates the framework, it downloads a .zip file (works ok, stored in a temp folder), and afterwards tries to extract that zip file to a place within the theme. When unzipping, it throws an error complaining about not being able to create a directory ("mkdir_failed").
The parent of the target folder has permission "775", with user "bitnami" and group "bitnami":
root@linux:/home/bitnami# ls -al /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
...
drwxrwxr-x 6 bitnami bitnami 4096 Oct 23 14:02 nexusframework
...
And I tried to put the "daemon" user in the "bitnami" group:
usermod -a -G bitnami daemon
Which indeed seems to be assigned correctly, as I can see:
root@linux:/home/bitnami# id daemon
uid=1(daemon) gid=1(daemon) groups=1(daemon),1000(bitnami)
So; if the "daemon" user is in the "bitnami" group and the folder has 775 access rights, then why does it fail with "mkdir_failed"?
(Note: assigning "777" to the parent folder solves the problem, but that is not an option for security reasons.)
Thanks!
- Gert-Jan
update;
After doing more investigation on Linux in general, I read that Linux automatically creates a 'private' group for each user (so a bitnami group for the bitnami user, etc.). I don't know whether the problem is caused by the fact that I was trying (and apparently succeeding?) to add other users to that same group.
update;
See my answer below on how I resolved my issue.
Ok, thanks for all the comments. I eventually decided not to continue my investigation, but to head in another direction, as having to rely on the containing folder having "775" permissions would be unwise for the framework (many clients would have 755 instead, so getting this to work for a group is nice but would ultimately not solve my problem).
Instead, I further investigated how WordPress itself downloads and unzips themes, and decided to follow that route.
The key problem I was trying to tackle was to have the unzipped files owned not by the 'daemon' user, but by the 'bitnami' user. The reason it "impersonated" the daemon user was that I had manually told the code to use the 'direct' FS_METHOD (as it turns out, WP offers various ways to interact with the filesystem, of which the easiest is 'direct'; see here). However, using the 'direct' FS_METHOD is the core reason I had this problem, as it uses the credentials of the web server (the 'daemon' user in my case). By using a different FS_METHOD, I am now able to unzip the files into the folder as the correct 'bitnami' user (and since that user owns the containing folder, its permissions, whether 775 or 755, no longer matter), so my problem is solved. Note that instead of writing directly to the filesystem, PHP will now use FTP (see here).
Does it work if you change the group of the folder to daemon?
chgrp -R daemon /opt/bitnami/apps/wordpress/htdocs/wp-content/themes/nexus
I have a web dev. client using a shared host that doesn't allow shell access, and thus no access to SVN, Git, etc. I've tried to convince him to move to one of the many cheap options that allow it, but he won't do it. If I use version control on my staging server, are there any tools that will allow me to replicate the changes to production via ftp? Locally I have both mac & windows, the staging server is linux, so something that works on any of those platforms....
Using your Linux staging server, you could keep a separate checked-out copy that you use specifically for that host, and then use a utility to mirror that directory to the host server.
LFTP is useful for this kind of thing. Its available for most Linux distributions and includes a 'mirror' function:
Mirror specified source directory to local target directory. If target directory ends with a slash, the source base name is appended to target directory name. Source and/or target can be URLs pointing to directories.
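For example, a hedged one-liner (the host, credentials and paths are made up):

lftp -u ftpuser,ftppass -e "mirror -R --only-newer /var/www/staging-checkout /public_html; quit" ftp.example.com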
Some kind of FTP mirror software is what you need. I have not tested it, but a quick search gave me this Java application. You could run that over your up-to-date checked-out repository.
A good tool for keeping an SVN repo and an FTP copy in sync is svn2web. May I suggest creating a separate branch for the production copy and merging into that branch whenever you upload to the production server.
You probably need to write a batch file or shell script that is able to do the following (see the sketch after this list):
Export the SVN repository
Upload the exported files to your Linux server via FTP
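A minimal shell sketch of such a script, assuming it runs on the Linux staging server (the repository URL, host, credentials and paths are all made up):

#!/bin/sh
# export a clean copy of the repository (no .svn metadata)...
svn export --force https://svn.example.com/project/trunk /tmp/site-export
# ...and upload it to the production host over FTP
lftp -u ftpuser,ftppass -e "mirror -R --only-newer --delete /tmp/site-export /public_html; quit" ftp.example.com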
Short of finding or implementing some FUSE-based copy-on-write file system that supports immutable versions... I'd just find another (more developer-friendly) host. As far as I know, no FTP server supports this natively, nor can I think of any elegant means of putting it in place with script hackery.
I could be wrong.
This question (and answer) really helped me just now as I implemented version control via gitolite on a separate server and lftp.
Here’s what I did:
Set up gitolite on my Ubuntu staging server
created base repo (i.e. foo.git) on staging server
cloned foo.git into working directory on staging server
cloned foo.git into working directory on local development machine
Developed locally
Pushed changes to foo.git repo on staging server
On the staging server, went into the working directory and pulled in the changes from foo.git
lftp-ed into shared host (like you mention above)
Once in shared host, ran:
mirror -R --only-newer --delete --parallel=10 /source/directory/ /target/directory
Notes on the mirror command options:
-R - this pushes the source/directory to the target/directory. (mirror pulls in from target to source without this, think reverse)
--only-newer - without this option, even if you only changed one file, the mirror command will send all the files in the source directory over to the target directory. With this option, only the changed (newer) files are transferred over the wire.
--delete - deletes files that are no longer in the source directory but still in the target directory. One of my pushes involved deleting expired assets; without this option, the same files would have stayed put on my shared host after executing the mirror command.
--parallel=10 - transfers 10 files at once (instead of 1 by default). This made the process much faster.
While this is what worked for me, I’m sure there are ways to improve on it. I was grateful for this question and thought I’d share my experience.
Rsync will do this over an FTP connection. You probably already have it installed if you’re on a Unix-like system.
I am having frequent problems with my web hosting (it's shared).
I am not able to delete, or change the permissions of, a particular directory. The response is:
Cannot delete. Directory may not be empty
I checked the permissions and they look OK. There are hundreds of files in this folder which I don't want.
I contacted support and they solved it, saying it was a permission issue, but the problem reappeared. Any suggestions?
The server is Linux.
You can't rmdir a directory with files in it. You must first rm all files and subdirectories. Many times, the easiest solution is:
$ rm -rf old_directory
It's entirely possible that some of the files or subdirectories have permission limitations that might prevent them from being removed. Occasionally, this can be solved with:
$ chmod -R +w old_directory
But I suspect that's what your support people did earlier.
This could also be because your FTP client might not be showing hidden files (like cache files, or any hidden files that your application might create), and those hidden files are preventing you from deleting the directory. (Though, in your case, I am not sure if this is the cause... it could be a permission issue with your hosting provider: the web server running as another user (like apache or www) combined with your directories having global write permissions.)
I assume that's a response from an FTP server?
Usually, a message from an FTP server means what it says. If it says the directory is not empty, there might be files in the directory that you cannot see, which may be one of the following:
Your PHP/JSP/ASP/whatever scripts may run under a different user account, thus creating files which you may not be able to see or delete
Does your hosting's web interface run under your FTP account? There might be conflicting permissions if you manage some files from the web interface and others later via FTP.
Hosting server/operating system files created unintentionally, e.g. from the hosting's web interface
If it comes from a script, write a one-time, throw-away script that deletes the files and the directory, then upload and execute it.
And just to be sure: some FTP servers don't support direct directory deletion; you need to delete all the files first. Could that be the case here?