FTP folder mounted locally and used for an SVN repository - Linux

I would like to create an SVN repository remotely using the FTP protocol.
Is it advisable to do the following steps:
mount the FTP directory locally with curlftpfs
create a repository on it as if it were local with svnadmin create
use it as in everyday life?
Do you know of any issues with that approach?
RESULT AFTER MY ATTEMPT
I did try it, but I get an error that looks like a timeout. The real problem is that this approach is too slow. Copying the repository each time, or a simple script to back up the folder, looks more feasible.
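For reference, the sequence described in the steps above would look roughly like this (a sketch only; the FTP host, credentials, and paths are placeholders):
mkdir -p ~/ftp-mount
# Mount the remote FTP directory locally via FUSE.
curlftpfs ftp://user:password@ftp.example.com/svn ~/ftp-mount
# Create and use the repository as if it were local.
svnadmin create ~/ftp-mount/myrepo
svn checkout file://$HOME/ftp-mount/myrepo ~/myrepo-wc
Every Subversion operation then touches many small files through the FTP mount, which is consistent with the timeouts and slowness described above.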

It is a dangerous approach; however, if you are working alone (as in "single user"), it would work. The biggest problems are:
You cannot provide an exclusive locking mechanism over the network
All users will have direct access to the repository's internal files; if somebody deletes a file in revs, your repository is damaged beyond repair
You should set up Apache with
SVNAutoversioning on
Then you could mount your repository URL as a WebDAV folder. Each change to these files will result in a single commit without the need for a working copy.
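A minimal Apache configuration sketch of that setup (assumes mod_dav_svn is installed; the location and repository path are placeholders):
<Location /repo>
  DAV svn
  SVNPath /var/svn/myrepo
  SVNAutoversioning on
</Location>
On Linux the resulting URL can be mounted as a WebDAV folder with, for example, davfs2.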

Related

How to perform multithreading with PowerShell?

How can we use multithreading in PowerShell?
My query is below:
I have a root folder which has various base folders under it.
I want to sync the folders from one server to another using SFTP.
So I want all the folders under the root to be synced to a folder on the other server using multithreading, so that the transfer becomes faster.
I am using the WinSCP .NET assembly's SynchronizeDirectories to sync, but it's quite slow.
Please suggest a better way if anyone can.

git: can I issue commands from two computers mounted to the same file system?

I hope I can explain this in a simple way ...
The files I am adding to git are on a Linux server. I access these files from various computers, depending on where I am. Sometimes it is with a Windows machine, with a drive mapped to a network drive. Sometimes I ssh into the server.
I created my git repository while working on the Windows machine with a network drive mapped to the appropriate file system; let's call it W:. I was in W:\ when I created the repository.
When I ssh into the server, the directory would be something like: /home/mydir/WORKING_DIR/
Can I now, while in my ssh session, issue git commands to update the repository on the Linux machine?
This is not an answer, but it is too long for the comments.
I'm getting to the end of my tether with git. It has now completely messed up everything. Trying to google for a solution is really fruitless. Nothing is specific enough and then when you do try something that might be relevant it just totally screws things up further.
I tried changing the path in the config file manually. But I really didn't know what to change it to. If it should be relative, then relative to what?
I tried a couple of things and ended up with /home/myname/myworkingdir/
However, now it has deleted my files again and set me back to some unknown state. Fortunately, I backed my files up beforehand. So I tried to copy them back into place and add them again. I get "fatal: 'myfilename and path in here' is beyond a symbolic link". I have no idea what that is supposed to mean.
git status just shows more things to be deleted.
There are probably situations where this works without any issue (e.g. git status) and others where git assumes exclusive access (e.g. attempting to commit the same change simultaneously from two computers which both have access to the same working directory).
Wanting to ask this seems like a symptom of misunderstanding the Git model, anyway. You'll be much better off with a separate working directory on each computer (or even multiple check-outs on the same computer). Git was designed for distributed, detached operation - go with that, and you'll be fine.
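One common arrangement, as a rough sketch (paths are placeholders; the shared bare repository is an assumption, not something from the question):
# One shared bare repository on the server acts as the meeting point.
git init --bare /home/mydir/project.git
# Each computer keeps its own clone and working directory.
# From an ssh session on the server:
git clone /home/mydir/project.git ~/work/project
# From the Windows machine (with W: mapped to /home/mydir):
#   git clone W:\project.git C:\work\project
# Changes are then exchanged through the bare repository with git pull and git push.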

backing up entire linux server on external hard drive or another cluster

We have a Linux server for the database. Most of our data is on /var/. I would like to back up the entire directory to an external hard drive or to another Linux system so that, if something goes wrong, I can entirely replace the directory. Since the directory has many files, I do not want to copy and paste every time; instead, I would like to sync them.
Is there an easy way to do that? rsync can do that, but how do I avoid logging in to the server every time? BTW, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
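A minimal sketch of such a dump (the user and output path are placeholders):
# Dump all databases to a single SQL file; you will be prompted for the password.
mysqldump -u root -p --all-databases > /var/backups/all-databases.sql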
Then sync these files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
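For example (hostname and paths are placeholders; this assumes ssh access to the backup server):
rsync -avz /var/backups/all-databases.sql backupuser@backup.example.com:/var/backups/
# With an ssh key in place (ssh-keygen, then ssh-copy-id), this runs without a
# password prompt and can be scheduled from cron.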
Make sure that you have installed MySQL on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the database backup server. If you need the backup database to always be up to date, you can set up MySQL replication. You can follow the guide below to do the same.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html

Git slow when cloning to Samba shares

We are deploying a new development platform.
We have a really complicated environment that we cannot reproduce on developers' computers, so people cannot clone the Git repository on their own machines.
Instead, they clone the repository onto a mapped network drive (Samba share) that is the DocumentRoot of a per-developer website on our servers.
Each developer has their own share + DocumentRoot/website, so they cannot impact other people this way.
Developers run Linux or Windows as their operating system.
We are using a 1 Gbit/s connection, and Git is really slow compared to local use.
Our repository size is ~900 MB.
git status on the Samba share takes about 3 minutes to complete; that's unusable.
We tried some Samba tuning, but it's still really slow.
Does anyone have an idea?
Thank you for your time.
Emmanuel.
I believe git status works by simply looking for changes in your repository. It does this by examining all of the files and checking for ones that have changed. When you execute this against a Samba share, or any other network share, it has to do that inspection over the network connection.
I don't have any intimate knowledge of the Git implementation, but my guess is that it essentially boils down to:
Examine all files in the directory
Repeat for every directory
So instead of making a single persistent connection to the share, it is effectively creating one for every single file in the repository, and with a 900 MB repository that's going to be slow even with a fast connection.
Have you considered the following workflow instead?
Have every developer clone to their local machine
Do work on the local machine
Push changes to their share when they need to deploy / test / debug
This would avoid the use of git on the actual share and eliminate this problem.
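A rough sketch of that workflow, under a couple of assumptions (placeholder paths and URL, and Git 2.3 or newer so a push can update the share clone's working tree):
# One-time setup: clone to the local disk; the existing clone on the mounted
# share stays where it is (/mnt/devshare/repo is a placeholder path).
git clone ssh://gitserver/project.git ~/work/project
cd ~/work/project
git remote add devshare /mnt/devshare/repo
# Let the share clone update its checked-out files when pushed to (Git 2.3+).
git -C /mnt/devshare/repo config receive.denyCurrentBranch updateInstead
# Day to day: edit and commit locally, then publish to the share to test.
git push devshare master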

Linux: Uploading files to a live server - how to automate the process?

I'm developing on my local machine (Apache 2, PHP, MySQL). When I want to upload files to my live server (nginx, MySQL, PHP5-FPM), I first back up my www folder, extract the databases, scp everything to my server (which is tedious, because it's protected with opiekey), log myself in, copy the files from my home directory on the server to my www directory, and if I'm lucky and the file permissions and everything else work out, I can view the changes online. If I'm unlucky, I'll have to research what went wrong.
Today, I changed only one file, and had to go through the entire process just for this file. You can imagine how annoying that is. Is there a faster way to do this? A way to automate it all? Maybe something like "commit" in SVN and off you fly?
How do you guys handle these types of things?
PS: I'm very, very new to all this, so bear with me! For example, I'm always copying files into my home directory on the server because scp cannot seem to copy them directly into the /var/www folder?!
There are many utilities that will do that for you. If you know Python, try Fabric. If you know Ruby, you may prefer Capistrano. They allow you to script both local and remote operations.
If you have a farm of servers to take care of, those two might not work at the scale you want. For over 10 servers, have a look at Chef or Puppet to manage your servers completely.
Whether you deploy from a local checkout, packaged source (my preferred solution), a remote repository, or something entirely different is up to you. Whatever works for you is OK. Just make sure your deployments are reproducible (that is, you can always say "5 minutes ago it wasn't broken; I want to have now what I had 5 minutes ago"). Whatever way of versioning you use is better than no versioning (tagged releases are probably the most comfortable).
I think the "SVN" approach is very close to what you really want. You make a cron job that will run "svn update" every few minutes (or hg pull -u if you are using Mercurial; similar with Git). Another option is to use Dropbox (we use it for our web servers sometimes) - this one is very easy to set up and to share with non-developers (like UI designers)...
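A minimal sketch of such a cron job (the working-copy path is a placeholder, and it assumes the server can update from the repository without a password prompt):
# Added with `crontab -e` on the server: update the live working copy every 5 minutes.
*/5 * * * * cd /var/www/mysite && svn update --quiet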
rsync will send only the changes between your local machine and the remote machine. It would be an alternative to scp. You can look into how to set it up to do what you need.
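For example, a single deploy command might look roughly like this (hostname and paths are placeholders; --delete removes remote files that no longer exist locally, so test with --dry-run first):
rsync -avz --delete --dry-run ~/www/ user@example.com:/var/www/mysite/
# Drop --dry-run once the output looks right.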
You can't copy to /var/www because the credentials you're using to log in for the copy session don't have permission to write to /var/www. Assuming you have root access, change the group (chown) of /var/www (or, better yet, a subdirectory) to your group and change the permissions to allow your group write access (chmod g+w).
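Something like the following, run as root (the group name and path are placeholders):
chown -R :developers /var/www/mysite
chmod -R g+w /var/www/mysite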
rsync is fairly lightweight, so it should be simple to get going.
