We have a strange problem here at work that I've been unable to figure out. We all use MacBooks running Snow Leopard on our desktops, and we have a handful of Linux servers we also use remotely. Some of my team members put git repositories on an NFS filesystem that's shared between both the Macs and the Linux servers, so they don't have to think about sharing code between repositories in their personal workflow.
This is where the strangeness starts: on the OS X machines, git will randomly report some files as out of date when you try to merge or switch branches, etc. If you run git status, no files are shown as out of date. gitk will show the files as modified but not committed, the same way status normally does. If you reset --hard those files you can sometimes change branches before this reoccurs, but mostly not. If you log into one of the Linux machines and view the same repository, everything works perfectly: the files are not marked as changed and you can do whatever you like.
I've eliminated line-ending differences and file-mode differences as the culprit, but I'm not sure what else to try. Is there some OS X-specific NFS interaction that we have to work around somehow?
Maybe unsynchronized time between the servers and workstations makes the modification times of the files unreliable. Does setting core.trustctime to false help? (It is true by default.)
There is an even heavier setting, core.ignoreStat, which ignores the complete stat(2) information in the change-detection code.
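If you want to try both, a minimal sketch, run inside the repository on the NFS mount (the update-index call afterwards just rewrites the index's cached stat data):

    git config core.trustctime false    # stop comparing ctime, which NFS clients can report inconsistently
    git config core.ignoreStat true     # heavier: skip most stat(2) checks in change detection
    git update-index --refresh          # rebuild the index's cached stat information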
I wrote a script that runs several executable updates from a shared network folder. Several separate machines must run these updates.
I would like to archive these updates once they have been run. However, you may see the dilemma: if the first machine runs an update and archives the executable,
the rest of the connected machines won't run it, as it will no longer be in the working directory. Any ideas?
It took me a while to understand what you meant by "archiving": probably moving the files to another folder on the network-shared mount. Also, the title should definitely be changed; I accidentally marked it as OK in the Review Triage system.
You probably want to assign an ID to each machine, then have each of them create a new file once it finishes the installation (e.g. an empty finished1.txt for the PC with ID 1, finished2.txt for PC 2, etc.). Then one "master" PC should periodically scan for such files and, when it finds all the ones it expects, delete/move/archive the installers. It may be a good idea to add timeout functionality to the script on the master PC, so that if one of the PCs gets stuck, you will be notified in some way.
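A rough bash sketch of the master-PC side (the share path, machine count, and file names are all made up):

    #!/bin/bash
    SHARE=/mnt/updates    # hypothetical shared folder
    EXPECTED=3            # how many machines must report in
    TIMEOUT=3600          # seconds before we assume a machine is stuck

    # Each client, after installing, runs: touch "$SHARE/finished$ID.txt"

    elapsed=0
    while [ "$(ls "$SHARE"/finished*.txt 2>/dev/null | wc -l)" -lt "$EXPECTED" ]; do
        sleep 30
        elapsed=$((elapsed + 30))
        if [ "$elapsed" -ge "$TIMEOUT" ]; then
            echo "Timed out waiting for all machines to finish" >&2
            exit 1
        fi
    done

    mkdir -p "$SHARE/archive"
    mv "$SHARE"/*.exe "$SHARE/archive/"    # archive the installers
    rm -f "$SHARE"/finished*.txt           # reset for the next round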
I hope I can explain this in a simple way ...
The files I am adding to git are on a Linux server. I access these files from various computers, depending on where I am. Sometimes it is a Windows machine with the directory mapped to a network drive; sometimes I ssh into the server.
I created my git repository while working on the Windows machine, with a network drive mapped to the appropriate file system; let's call it W:. I was in W:\ when I created the repository.
When I ssh into the server, the directory is something like /home/mydir/WORKING_DIR/.
Can I now, while in my ssh session, issue git commands to update the repository on the Linux machine?
This is not an answer, but it is too long for the comments.
I'm getting to the end of my tether with git. It has now completely messed everything up, and trying to google for a solution is fruitless: nothing is specific enough, and when you do try something that might be relevant, it just screws things up further.
I tried changing the path in the config file manually. But I really didn't know what to change it to. If it should be relative, then relative to what?
I tried a couple of things and ended up with /home/myname/myworkingdir/
However, it has now deleted my files again and set me back to some unknown state. Fortunately I backed my files up beforehand, so I tried to copy them back into place and add them again. I get "fatal: 'myfilename and path in here' is beyond a symbolic link". I have no idea what that is supposed to mean.
git status just shows more things to be deleted.
There are probably situations where this works without any issue (e.g. git status) and others where git assumes exclusive access (e.g. attempting to commit the same change simultaneously from two computers which both have access to the same working directory).
Wanting to ask this seems like a symptom of misunderstanding the Git model, anyway. You'll be much better off with a separate working directory on each computer (or even multiple check-outs on the same computer). Git was designed for distributed, detached operation - go with that, and you'll be fine.
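A sketch of that separation, with illustrative host and path names: keep one bare repository on the server and give every computer its own clone.

    # one shared bare repository on the Linux server:
    ssh mydir@linuxserver git init --bare /home/mydir/project.git

    # each computer keeps its own local working directory:
    git clone ssh://linuxserver/home/mydir/project.git ~/project
    cd ~/project
    # ...edit and commit locally...
    git push origin master    # publish; the other clones pull the changes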
How good an idea is it to keep Linux configuration files in a version control system (SVN, for example)? For example, to see all changes to the firewall as a change history. In particular, one could keep these groups of files:
Access files
Booting and login / logout
File system
System administration
Networking
System commands
Daemons
...
That is, for example, I make a change to the firewall and commit the file to the repository. Then, if something goes wrong, I can extract the earlier version and compare it with the current one. This could also help detect unauthorized changes.
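As a minimal sketch of that workflow using git (SVN would work the same way; the existing etckeeper tool automates exactly this for /etc):

    cd /etc
    git init
    git add .
    git commit -m "baseline configuration"
    # ...edit the firewall rules...
    git diff                                  # review exactly what changed
    git commit -am "tightened firewall rules"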
This is old hat, so to speak.
Check this out.
We are deploying a new development platform.
We have a really complicated environment that we cannot reproduce on developers' computers, so people cannot clone the Git repository onto their own machines.
Instead, they clone the repository into a mapped network drive (a Samba share) that is the DocumentRoot of a website for that developer on our servers.
Each developer has his or her own share + DocumentRoot/website, so they cannot impact other people this way.
Developers run either Linux or Windows as their operating system.
We are using a 1 Gbit/s connection, and Git is really slow compared to local use.
Our repository size is ~900 MB.
git status on the Samba share takes about 3 minutes to complete, which is unusable.
We tried some SAMBA tuning, but still, it's really slow.
Does anyone have an idea?
Thank you for your time.
Emmanuel.
I believe git status works by simply looking for changes in your repository. It does this by examining all of the files and checking for ones that have changed. When you execute this against a Samba share, or any other network share, it has to do that inspection over the network connection.
I don't have any intimate knowledge of the git implementation, but my guess is that it essentially boils down to:
Examine all files in directory
Repeat for every directory
So instead of a single pass over local files, there is a network round trip for every single file in the repository, and with a 900 MB repository that's going to be slow even with a fast connection.
Have you considered the following workflow instead?
Have every developer clone to their local machine
Do work on the local machine
Push changes to their share when they need to deploy / test / debug
This would avoid the use of git on the actual share and eliminate this problem.
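A sketch of that flow, with hypothetical host and path names:

    # clone once onto the local disk (all the per-file checks are then local):
    git clone ssh://devserver/srv/git/project.git ~/project
    cd ~/project
    # ...edit and commit locally...
    git push origin master

    # deploy into the developer's DocumentRoot on the server:
    ssh devserver 'cd /srv/www/dev-emmanuel && git pull'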
I'm trying to figure out how to do this with Eclipse. We currently run SVN and everything works great, but I'd really like to cut my SSH requests in half and use Eclipse to modify some files directly on the server. I'm using the Eclipse build below... how can I do this?
Eclipse for PHP Developers
Build id: 20100218-1602
Update
I have no intention of eliminating SVN from the equation, but when we need to make a hotfix, or run a specific report or function as a one-time thing, I'd much rather use Eclipse than a terminal for modifying that kind of thing.
Have a look at How can I use a remote workspace over SSH? on the Eclipse wiki. I'm just quoting the summary below (read the whole section):
Summing up, I would recommend the following (in order of preference):
VNC or NX when available remotely, Eclipse can be started remotely, and the network is fast enough (try it out).
Mounted filesystem (Samba or SSHFS) when possible, the network is fast enough, and the workspace is not too huge.
rsync when offline editing is desired, sufficient tooling is available locally, and no merge issues are expected (single-user scenario).
RSE on very slow connections or huge workspaces where minimal data transfer is desired.
EFS on fast connections when all tooling supports it, and options like VNC or a mounted filesystem or rsync are not available.
But whatever you experiment with, don't bypass the version control system.
You could use something like SSHFS, but really, it's a better idea to use some kind of source control system instead of editing files directly on the server. If Subversion isn't sufficient, perhaps you might try a DVCS like Git or Mercurial.
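If you do try SSHFS, a minimal sketch (the hostname and paths are placeholders):

    mkdir -p ~/remote-www
    sshfs user@server:/var/www ~/remote-www    # mount the remote docroot locally
    # ...point Eclipse at ~/remote-www and edit as if the files were local...
    fusermount -u ~/remote-www                 # unmount when done (plain umount on a Mac)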