SVN Setup Of Existing Directory - linux

I have been going through the documentation and have SVN working, but I want to use it with an existing directory. I imported that directory, so do I now rename/delete the non-SVN directory and then check out from SVN into the old directory's location? I am just trying to understand how to get it to start posting to our website URL.
If so, is there any way to keep the current non-SVN directory and turn it into an SVN working copy, rather than importing and overwriting it?
Thanks, I am trying to understand SVN, but I find a lot of the tutorials and such on the web confusing.

Yes, you have it exactly. Once your code has been added to the repository, you can get rid of or rename your original code directory. Then check out the project from the repository into the same location as your previous code and continue working from there.
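For example (the repository URL and paths below are just placeholders, adjust them to your setup), the sequence might look like this:
svn import /var/www/mysite file:///path/to/repo/mysite/trunk -m "Initial import"
mv /var/www/mysite /var/www/mysite.old    # keep the old copy until you're confident
svn checkout file:///path/to/repo/mysite/trunk /var/www/mysite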
UPDATE
To make it so that your website is updated from the repository, you actually need two working directories, and a repository.
Repository: The repository stores the code and changesets, but isn't directly accessible as a file system. Keep a backup!
Working directory 1: You develop and test your code from a working directory checked out from the repository. Commit changes back to the repository.
Working directory 2: Rename the code directory on your web server. Check out a copy of the code to your web server in its place. Technically it is now a working directory, since it contains the .svn metadata directories, though you won't usually make changes here.
Make changes to your code from your development working directory (1) and commit them back to the repository. When you are satisfied that they are working correctly and have been properly tested, on the web server's code copy (2) do svn update (or if you're using Tortoise SVN on the web server, do an update). This will synchronize the server code with the current development version.
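Roughly, the day-to-day cycle looks like this (paths and messages are placeholders):
# in your development working copy (1)
svn commit -m "describe the change"
# then, on the web server, in the checked-out site directory (2)
svn update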
Subversion will not automatically push updates to your web server; you will need to pull them in with an update when you need to. It is possible to use what's called a "post-commit hook" to have Subversion execute a script when commits are made, and that script could update or export code to your production web server. However, you would need to write the script, and it's kind of an advanced usage of Subversion. I would recommend trying out the method I described, with a working copy on the web server, to get accustomed to the workflow before trying anything more complicated.
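For reference, a post-commit hook is just an executable script named hooks/post-commit inside the repository (remember to make it executable). A minimal sketch, where the working-copy path and log file are assumptions:
#!/bin/sh
# hooks/post-commit -- Subversion passes the repository path and revision number
REPOS="$1"
REV="$2"
# update the working copy that the web server serves (path is an example)
svn update /var/www/html --quiet >> /var/log/svn-deploy.log 2>&1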
Addendum: If you really want to do this (and I don't recommend it unless you test thoroughly), a very easy method would be to schedule a cron job that runs svn update every couple of hours (or minutes) on your production site.
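For example, a crontab entry along these lines (the path and interval are only illustrative) would do it:
*/15 * * * * cd /var/www/html && svn update --quiet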
Don't forget that if you do happen to modify your code directly on the web server, you must commit it back to the repository from there, and do an update on your development working copy.

Related

Is it possible to svn checkout in a database

I am writing a little webapp that lets users browse a VisualSVN Server. I would like to add an online editor like GitHub's to this webapp, so users can edit the files online, leave a message, and have the changes appear in the repository.
For that I need to check out the files locally. My idea was to check them out into a MongoDB, so I can save the changes per user, like a local working copy.
Is there a way (without reimplementing the SVN protocol) to make a checkout into a database, or even just into memory, and then write it to the database?
If there are any questions, just ask :)
Btw. if someone is interested, here is the code https://bitbucket.org/Knerd/svn-browser
There is no way to do an svn checkout directly into a database, but there are some options.
First of all, you can simply create a virtual disk that resides in memory and perform checkouts to that disk. Then you can store the checked-out files in the database.
Another option is to use the rich Subversion API directly. Note that Subversion is written in C, so you will need to build a bridge between Node.js and SVN (as far as I can remember, there are no official Subversion bindings for Node.js, but there are for Python and Java, and there is an unofficial nodesvn package available for Node.js). Using the API you can implement your own "in-database" working copy.
Also, you can use the svnmucc utility (which is shipped with VisualSVN Server) to make commits directly to the repository (without even making a working copy). If you combine it with svn ls, svn info, etc., you can implement repository browsing and editing of files.
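As a rough sketch of that last approach (the server URL and file paths are made up for illustration), you could fetch a file with svn cat, let the user edit it, and commit it back without any working copy:
svn cat https://svn.example.com/repo/trunk/readme.txt > /tmp/readme.txt
# ... the user edits /tmp/readme.txt in the web editor ...
svnmucc -m "Edited via web UI" put /tmp/readme.txt https://svn.example.com/repo/trunk/readme.txt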

Creating SVN Repo and Checking Out

I'm moving my current server contents to a new one, and am currently in the process of setting up SVN. I'm fairly unfamiliar with SVN, typically using it to the extent of commits and updates.
I have two locations that I use SVN on the old server:
PROD location:
/var/www/html/new_dwutils/
and local:
/home/{user_name}/public_html/new_dwutils/
My interaction svn-wise is normally committing and updating at the /new_dwutils/ level.
Note: Running svn --version says I'm at version 1.6.11 for both servers.
I'm now trying to recreate this structure on the new server. My initial thought was to create the svn repo using something like:
svnadmin create /var/www/html/new_dwutils/
This creates the repo dir, but when I copy my files into it, I am unable to run svn commands like status. However, when I go into a sub-dir of the copied data, I can use the svn commands.
This has me thinking that the repo is /new_dwutils/ and the copied data is considered a project? And the sub-dirs are working copies then?
Going off that thought, I deleted the repo, and made the html dir a repo:
svnadmin create /var/www/html/
I then copied my new_dwutils dir, and sure enough, I was able to run svn commands like I used to. What I've noticed is that when creating the repo, a few things are added that were not on the previous server: conf/, db/, format, hooks/, locks/, and README.txt. I get that these are svn files, but I'm not seeing the .svn directory. I know that there was an update to svn that "removed" .svn directories, but these files are now in /var/www/html/.
Now I want to setup my local working copy.
I've been doing (location /home/{user_name}/public_html/):
svn checkout file:///var/www/html/
Problem is it creates the html/ directory, but with nothing in it, and I don't want html/, I want html/new_dwutils/.
I feel like I'm doing it wrong from the start, and would greatly appreciate some explanation on how to get on the right track. A step-by-step would be extremely useful, and if further clarification is needed for files or directory paths, I would gladly provide it.
Thanks!
The Subversion Manual will answer all of your questions.
If you're making a Subversion repository under /var/www/html, I'm assuming you're using Apache httpd as your server. Look at Chapter 6. If you already have a repo, create a dump file, then use that dump file to recreate the repo. Look at Chapter 5 on moving repositories.
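In outline (the paths are placeholders), moving a repository with a dump file looks like this:
# on the old server
svnadmin dump /path/to/old/repo > repo.dump
# copy repo.dump to the new server, then:
svnadmin create /path/to/new/repo
svnadmin load /path/to/new/repo < repo.dump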
If you don't know anything about Subversion, or are confused by the difference between the repository location directory and a working directory, read the on-line manual. It's one of the best pieces of documentation I've seen.
From the description of your question it appears that '/var/www/html/new_dwutils/' is your working copy and not a repo.
Go to '/var/www/html/new_dwutils/' on the old server and type "svn info"; this should give you the location of the old repo. You should then simply be able to run 'svn co' with that URL in the new location to check out a copy of all your files from the old server (everything that is checked in; you will not get anything that is not checked in on the old server).
However, if your repo was local on the old server and you want to move it to your new server too, you can simply copy the entire repository folder to the new server and access it directly via its new location in the 'svn co' command.
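To tie this back to your setup, a common arrangement (the repository path below is only an example; pick anything outside the web root) would be:
# create the repository outside the web root
svnadmin create /home/svn/dwutils
# import the existing code as a project
svn import /var/www/html/new_dwutils file:///home/svn/dwutils/trunk -m "Initial import"
# replace the plain directory on the web server with a working copy
mv /var/www/html/new_dwutils /var/www/html/new_dwutils.old
svn checkout file:///home/svn/dwutils/trunk /var/www/html/new_dwutils
# and check out a second working copy for local development
svn checkout file:///home/svn/dwutils/trunk /home/{user_name}/public_html/new_dwutils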

What happens when SVN isn't used?

I am wondering what happens in SVN when a file is updated directly instead of through SVN? The main reason I am asking is that there was a problem updating SVN on my machine (Windows) when the server (Linux) had two file names that were the same but differed in case. I resolved this on the server, but didn't do it through SVN since it wouldn't update correctly, and I still get the issue. Do I need to run some kind of command to update it?
Thanks.
EDIT:
I deleted the conflicting file in the working directory and wanted to know whether doing things directly in the working directory gets tracked at all, or what needs to be done to resync.
When SVN gets blocked because the repository is more "up to date" than the local checkout, one brain-dead foolproof solution is:
Move (or remove) the files that are causing the conflict at the command line (don't use SVN tools to do this, and don't use the GUI if you have Tortoise installed).
Run svn update, which will restore the current copy of the files from the Subversion server.
Decide what to do with your cached copies of the old files. Either manually merge them back into the repository, discard them, or remake the changes in the new svn managed files (depending on your needs).
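As a sketch (the file name is only an example):
# stash the conflicting file somewhere outside any working copy
mv Readme.TXT /tmp/Readme.TXT.saved
# let Subversion restore the repository's version of the files
svn update
# then compare and merge your saved copy back in by hand if needed
diff /tmp/Readme.TXT.saved Readme.TXT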
Note that if you do move the files into a directory using Tortoise, make sure that you move them into a directory that's not associated with ANY SVN project. It's not fun trying to undo the helpful changes Tortoise makes when it thinks you want an SVN move to accompany the file-system move.
There is no need to run any special commands. If you updated the sources, the next time you run svn update, Subversion will seamlessly merge the changes and you will get an up-to-date working copy.
If you changed some files, they will appear as modified or conflicted, depending on the changes made by you and other users.

Syncing website files between local and live servers using GIT?

Say I have two web servers, one local development and one live.
Under SVN I would check out the website files to my local web server's public_html directory and also to the live web server's public_html directory. I would then work on the files directly on the local server and commit any changes to the central repository.
When I'm ready for those changes to go live on the live server, I would SSH in and perform an SVN update.
Essentially I have two working copies, one on live and one locally, though other users may also have working copies on their local machines. But there will only ever be one working copy on the live server. The reason for this is so that we can just perform SVN update on live server every time we want changes to be published.
How can a similar workflow be accomplished using Git?
To model your current work flow almost exactly do:
Set up a git repo.
Clone the repo on the server and locally.
Work locally
git push to the git repo
ssh to server
git pull.
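A minimal command sketch of that flow, assuming a central bare repository on a host called githost and the default master branch (all host names and paths here are placeholders):
# one-time setup: central bare repository, plus a clone on each machine
ssh githost 'git init --bare /srv/git/site.git'
git clone githost:/srv/git/site.git ~/dev/site
ssh webserver 'git clone githost:/srv/git/site.git /var/www/site'

# day-to-day: work locally, then publish
cd ~/dev/site
git commit -am "Update stylesheet"
git push origin master

# deploy: pull the changes on the live server
ssh webserver 'cd /var/www/site && git pull'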
Another way to do it would be to set up a "production" branch in git, have a cron job on the server that continually pulls this branch, and then just merge into and push to the "production" branch any time you want to publish your changes. It sounds like you need a more concrete branching strategy.
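For instance, a crontab entry on the server such as this one (branch name, path, and interval are illustrative only):
*/5 * * * * cd /var/www/site && git pull --quiet origin production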
See: Git flow branching model && git flow cli tool
Good luck! This is a very solvable problem with git.
You might find this useful: http://joemaller.com/990/a-web-focused-git-workflow/
In your local working copy:
git push ssh://you@yourserver/path/to/your/wc
will push the committed changes in your local version to yourserver.
Having a setup that automatically triggers a pull, like leonbloy and codemac suggested, may seem like a good idea at first, but it tends to be very fragile. I suggest a different alternative:
http://toroid.org/ams/git-website-howto

Using hg repository as web site

This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical people (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These co-workers mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
Yes, it is a bad idea.
Do not serve your website straight from your repository. It means that things that are checked in but not working will immediately be available, and it means that accidental checkins (it happens) will be reflected live as well (e.g. documents that don't belong there).
I have actually addressed this concept (source control as deployment) with a tool I've written; a few other companies are addressing this topic now as well, so you'll see it more. Mine is for SVN (at the moment), so it's not particularly relevant here; I mention it only to show that I've considered this before. That was for a working copy rather than a repository, but in that scenario the answer is the same: it is better to have a non-versioned "free" area as the website directory and to automate (via user action) the copying of the versioned data to that directory.
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live site, you're fine. Have a staging/dev area where your non-revision-control folks make their changes, and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as it's the conscious action of a judging human doing the staging-area -> production-repo push, you're fine. You can even put a hook into the production clone that automatically does an 'hg update' of the working directory within that production clone, so that 'push' is all it takes to deploy.
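For example, two lines in the production clone's .hg/hgrc will update its working directory whenever changesets are pushed into it (this is the standard Mercurial changegroup hook):
[hooks]
# after changesets arrive, bring the working directory up to date
changegroup = hg update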
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Personally (I'm a team of 1), I quite like the idea of using source control as a live website, more so with hg than with svn.
The way I see it, you can update an entire site (add/remove files) with a single command,
which is much easier than FTP/SSH this, delete that, etc.
If you are using Apache (and probably IIS as well) you can create a simple .htaccess file that blocks access to all .hg directories (or .svn if you are using SVN).
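A minimal sketch of such an .htaccess, assuming Apache's mod_alias is available:
# return 404 for anything inside a .hg or .svn directory
RedirectMatch 404 /\.(hg|svn)(/|$)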
My preferred structure is:
development site on my local machine, running directly out of a repository (no security is really required here; do what you like, commit as required)
staging/test machine: a separate box or VM running a recent copy of the live database
(I have a script to push committed changes to the staging server and run tests)
live machine
(open an ssh connection, push changes to the live server, test again; this can all be scripted reasonably easily, google for examples)
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. Like you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
Why not have a repo also be an active web server (for dev or test/QA environment anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes the working changesets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
BTW, we're using Mercurial. I believe this model would only work using a distributed source code management tool.
