Handmade continuous integration with shell script - linux

I put together a temporary solution for small-scale continuous integration with a shell script.
It updates from SVN and then copies the files to the site's root directory.
So it looks like this:
cd ...
svn update
cp -R ... ...
Then I put it in crontab.
Well, it works fine as a temporary solution, but I'd like to improve it: detect somehow that SVN has changed (a new revision appeared) and only then copy the files (the problem is that copying the files every minute makes the server slower).
But I'm only an average Linux user :(
So the question is:
how can a bash script detect that SVN has new commits, and only in that case do the update and the rest, like copying the files?

You can run 'svn info' in the directory (and use awk | grep | your favorite tool) to extract the revision number of what you've checked out. Do the same in the location you copy to. If the revision number in the checkout directory is higher than the one in the destination directory, then do the copy.
That's assuming that you copy everything including the .svn directories.
If you exclude them, then you should 'svn info' before you update, and again after, and compare the two revisions.
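For example, a minimal sketch of that before/after comparison (the paths are placeholders you'd adjust; the awk pattern just pulls the number from the "Revision:" line of svn info):
#!/bin/sh
# Hypothetical paths -- adjust to your setup.
CHECKOUT_DIR=/path/to/working/copy
SITE_DIR=/path/to/site/root

cd "$CHECKOUT_DIR" || exit 1

# Revision of the working copy before the update...
OLD_REV=$(svn info | awk '/^Revision:/ {print $2}')
svn update -q
# ...and after.
NEW_REV=$(svn info | awk '/^Revision:/ {print $2}')

# Copy only if the update actually pulled in something new.
if [ "$NEW_REV" -gt "$OLD_REV" ]; then
    cp -R "$CHECKOUT_DIR"/. "$SITE_DIR"/
fi
You can then put this script in crontab in place of the unconditional update-and-copy.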

Stop.
Why in the world would you do this?
With eighty totally free, super easy to install CI tools out there, why in the world would you start hacking your own together with shell scripts and cron?
Unless you're looking to work on your scripting / cron skills and want to use building your own CI as a fun little scenario to play through, you're just wasting your time here.

Related

how to use git with a package I am distributing

I have been using git for some time now and I feel I have a good handle on it.
I did, however, build my first small program as a distribution (something with ./configure, make, and make install), and I want to put it up on GitHub, but I am not sure how exactly to go about tracking it.
Should I, for instance, initialize git but only track the source code files, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)?
Or should I make an entirely different directory and put the source files in there and then manually rebuild everything for version 0.2 when it is time?
Or do something else entirely?
I have tried searching but I cannot come up with any search terms that give me the kind of results I am looking for.
for instance initialize git but only track the source code file, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)
Yes: anything used to build needs to be tracked.
Anything that is the result of the build does not need to be tracked.
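For a typical autoconf/automake project that might look roughly like this (the file names are only placeholders for your own sources):
git init
# Track only the inputs to the build.
git add configure.ac Makefile.am src/ mypackage.1 README
# Ignore what autoconf/automake and the build generate.
cat > .gitignore <<'EOF'
/configure
/aclocal.m4
/autom4te.cache/
/config.log
/config.status
Makefile
Makefile.in
*.o
EOF
git add .gitignore
git commit -m "Initial import: sources only"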
should I make an entirely different directory
No: in version control, you could make a new tag to mark each version, and release branches from those tags to isolate patches which could be specific to the implementation details of a fixed release.
But you don't create folders (that was the Subversion way).
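A quick sketch of that flow (the version numbers are made up):
git tag -a v0.2 -m "Release 0.2"       # mark the release
git checkout -b release-0.2 v0.2       # branch off the tag if 0.2 ever needs its own fixes
# ...commit the release-specific patch here, then tag v0.2.1 from this branch.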
should I make an entirely different directory for sources
Yes, you can (if you have a large set of source files).
But see also "Makefiles with source files in different directories"; you don't have just one Makefile.
The traditional way is to have a Makefile in each of the subdirectories (part1, part2, etc.) allowing you to build them independently.
Further, have a Makefile in the root directory of the project which builds everything.
And don't forget to put your object files in a separate folder (not tracked) as well.
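Roughly, the layout works like this (the directory names part1/part2 are just examples):
# Each part can be built on its own with its local Makefile:
make -C part1
make -C part2
# The root Makefile simply recurses into the parts (effectively running the
# two commands above), and object files go into an untracked directory such
# as build/ that is listed in .gitignore.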
See also this question as a concrete example.

svn checkout on subdirectory, which is then renamed

So, we probably did something we shouldn't have done, but now I wonder how this should typically be handled.
We have a large project with multiple applications which are grouped in different sub-systems. I was working on one specific application which was found in the following subdirectory:
/svnroot/subSystemName/myApp
Since I didn't need the whole SVN, I just did a checkout of that subdirectory.
Some time later, someone else figured that the name of the sub-system wasn't quite right, so he did a svn rename on the directory, so that my application is now found in:
/svnroot/subSystemNewName/myApp
As you might imagine, this causes problems because when I try to do an update for instance, it says "target path does not exist", as it's still looking for the original path.
What am I to do? Is the only solution to do a full checkout again? How should this have been handled in the first place?
PS: I'm on Linux.
svn switch should do the trick. Run svn switch <url_of_new_location> <local_checkout_dir>
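With the paths from the question, that would be something like this (the server URL is made up):
cd myApp          # your existing working copy of the application
svn switch http://svn.example.com/svnroot/subSystemNewName/myApp .
svn switch rewrites the working copy's URL and only downloads what actually changed, so it is much cheaper than a fresh checkout.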

shell script to create backup file when creating new file in particular directory

Recently I was asked the following question in an interview.
Suppose I create a new file named myfile.txt in the /home/pavan directory.
It should automatically create myfileCopy.txt in the same directory.
If I create A.txt, it should automatically create ACopy.txt;
if I create B.txt, then BCopy.txt, in the same directory.
How can this be done with a script? As far as I know, the script would have to run from crontab.
Please don't use inotify-tools.
Can you explain why you want to do this?
Tools like VIM can create a backup copy of a file you're working on automatically. Other tools like Dropbox (which works on Linux, Windows, and Mac) can version files, so it backs up all the copies of the file for the last 30 days.
You could do something by creating aliases for the tools you use to create these files. You'd still edit files with the tools you normally use, but the alias could create a copy before invoking the tool.
Otherwise, your choice is to use crontab to occasionally make backups.
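If you really want to go that route, a minimal sketch might look like this (the directory and naming scheme come from the question; everything else is an assumption):
#!/bin/sh
# Run from crontab, e.g. once a minute:  * * * * * /home/pavan/bin/copy_new.sh
DIR=/home/pavan
for f in "$DIR"/*.txt; do
    [ -e "$f" ] || continue                     # no .txt files at all
    case "$f" in *Copy.txt) continue ;; esac    # skip the copies themselves
    copy="${f%.txt}Copy.txt"
    [ -e "$copy" ] || cp "$f" "$copy"           # create the copy only once
done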
Addendum
Let me explain: suppose I have the directory /home/pavan. Now, when I create the file myfile.txt in that directory, a myfileCopy.txt file should immediately and automatically be generated in the same folder.
paven
There's no easy user tool that could do that. In fact, the way you stated it, it's not clear exactly what you want to do and why. Backups are done for two reasons:
1) To save an older version of the file in case I need to undo recent changes. In your scenario, I'm simply saving a new, unchanged file.
2) To save a file in case of disaster. I want that file to be located elsewhere: on a different computer, maybe in a different physical location, or at least not on the same disk drive as my current file. In your case, you're making the backup in the same directory.
Tools like VIM can be set to automatically back up a file you're editing. This satisfies reason #1 above: getting back an older revision of the file. Emacs can even keep an endless series of numbered backups.
Tools like Dropbox create a backup of your file in a different location across the aether. This satisfies reason #2, keeping the file in case of a disaster. Dropbox also versions the files you save, which covers reason #1 as well.
Version control tools can do both, if I remember to commit my changes. They store every change to my file (reason #1) and can store it on a server in a remote location (reason #2).
I was thinking of crontab, but what would I back up? I could back up any file that has been modified (reason #1), but that doesn't make much sense if I'm storing the copy in the same directory; all I would have are duplicate copies of files. It would make sense to back up the previous version, but how would a simple crontab job know which that is? Do you want to keep the older version of a file, or only the original copy?
The only real way to do this is at the system level, with tools that layer over the disk I/O calls. For example, at one location we used Netapps to create a $HOME/.snapshot directory that contained the way your directory looked every minute for an hour, every hour for a day, and every day for a month. If someone deleted a file or messed it up, there was a good chance the version of the file they needed existed somewhere in the $HOME/.snapshot directory.
On my Mac, I use a combination of Time Machine (which backs up the entire drive every hour and gives me snapshots of my drive stretching back over a year and a half) and Dropbox, which keeps my files stored on the main Dropbox server somewhere. I've been saved many times by that combination.
I now understand that this was an interview question. I'm not sure what the position was. Did the questioner want you to come up with a system-wide way of implementing this, as for a network tech position, or was this one of those brain leaks someone comes up with on the spur of the moment when they interview someone, but were too drunk the night before to go over what they should really ask the applicant?
Did they want a whole discussion on what backups are for, and why backing up a file immediately upon creation in the same directory is a non-optimal solution, or were they attempting to solve an issue that came up, but aren't technical enough to understand the real issue?

Copying just files not present with SCP

I need to move my web server directory to another server. I'd like to do it with a simple "scp -r destination:destdirectory". But in the meantime the directory will keep filling up with new stuff, so I'll have to take the old server down while I move the newest files to the new one. How can I do an scp that writes just the differences? That way it won't take much time, and I won't have to take the website down for long.
Probably not at all, or only painfully. But if you have the possibility to use rsync, just do that. It automatically skips files that haven't changed, and for changed files it transfers only the differences.
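For example (the paths and host name are made up):
# First pass while the old server is still live:
rsync -az /var/www/ user@newserver:/var/www/
# Take the site down, then run it again: only files added or changed since
# the first pass are transferred (--delete also removes files that were
# deleted in the meantime), so the downtime stays short.
rsync -az --delete /var/www/ user@newserver:/var/www/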

SVN: Ignoring an already committed file

I have a settings file that is under version control using subversion. Everybody has their own copy of this file, and I need this not to be ever committed. However, like I said, there is already a copy under version control. My question is: how do I remove this file from version control without deleting everyone's file, then add it to the ignore list so it won't be committed? I'm using linux command line svn.
Make a clean checkout, svn delete the file and add the ignore. Then commit this. Everyone else will have to take care (once) that their local copy isn't deleted on the next svn update, but after that, the local file would stay undisturbed and ignored by SVN.
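A sketch of that, assuming the file is called settings.conf and lives in the working-copy root:
svn delete --keep-local settings.conf   # removes it from version control, keeps your local file
svn propset svn:ignore settings.conf .  # note: this overwrites any existing svn:ignore value on "."
svn commit -m "Remove settings.conf from version control; ignore it from now on"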
If you remove the file from version control, how does a developer new to the project (or the one who accidentally deleted his local copy) get it after initial checkout? What if there are additions to the settings file?
I would suggest the following: keep a default settings file (with no passwords, hostnames, connection strings, etc.) in SVN, name it something like settings.dist, and let the code work with a copy of it named settings. Every developer has to make this copy once and can then work with her personalized settings. If there are additions, add them to settings.dist – everyone else will get them with an update and can merge them into her personalized copy.
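In command form, the migration might look like this (the file names follow the settings/settings.dist convention above):
# One-time setup by whoever makes the change:
svn rename settings settings.dist      # keep the defaults under version control
svn propset svn:ignore settings .      # the personalized copy must never be committed
svn commit -m "Replace settings with settings.dist; ignore personal settings"
# Each developer, once, after updating:
cp settings.dist settings              # personal copy, edit freely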
After you delete the file, your users will have to recover the file from the repository using svn export.
$ svn export -r x path ./
Where x is a revision where the file existed before it was deleted, path is the full path to the file, and ./ is where the file will be placed.
See svn help export for more information.
Simply define a file containing settings that will override the default ones. This file is not checked into Subversion, and each developer is responsible for maintaining it according to their environment.
In an Ant-based world, you would have the files:
settings.properties
settings-local.properties (ignored for Subversion)
and in your build.xml file
<property file="settings-local.properties"/>
<property file="settings.properties"/>
For those who couldn't connect the dots:
1) modify the build.xml file as proposed
2) set settings-local.properties as ignored in Subversion
3) in an init target of your build, copy settings.properties to settings-local.properties
4) wait a couple of days until everyone has had the chance to run this target
5) delete settings.properties from Subversion
Voila, every developer has his or her own settings-local.properties and everything was done automatically (and no developer lost his or her settings, which is what happens if you brutally delete the file from Subversion and there is no "Everyone else will have to take care...").
I have a similar issue. In my case it's an auto-generated user settings file (visual studio) that was accidentally checked in very early in the project. While just deleting it might work, it seems more correct to have it removed from the history, as it was never supposed to be in there in the first place.
I came across this, which might be a new feature since this question was originally posted 7.5 years ago:
https://stackoverflow.com/a/6025750/779130
Seems like an idea would be to:
1) create a dump of the project.
2) filter the dump using `svndumpfilter` to exclude the unwanted file(s).
3) load the dump as a new project.
This might be the only way to completely get rid of the file. In most cases the "delete and ignore" approach might be good enough.
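In outline (the repository paths are placeholders, and you need admin access to the repository host):
svnadmin dump /path/to/repo > repo.dump
svndumpfilter exclude trunk/path/to/settings.file < repo.dump > filtered.dump
svnadmin create /path/to/new-repo
svnadmin load /path/to/new-repo < filtered.dump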
[[ I'm new to Subversion, so maybe this doesn't make sense. Marking this as wiki -- if you know the right answer, please APPEND it in the later section. ]]
Couldn't you have a custom set of checkout steps so each user gets a different settings folder?
$ svn checkout http://example.com/project project
..
$ dir project
original_settings\ folder1\ folder2\
$ svn checkout http://example.com/project/aaron_settings project\settings
..
$ dir project
original_settings\ folder1\ folder2\ settings\
Or for new users
$ svn import project\settings http://example.com/project/aaron_settings
What I'm getting at is you want each user to have a custom view of the repository. In other version control systems, you could set up a custom listing of which projects you were using and which you weren't and which you put in odd places.
Does this work in Subversion? The above code looks really risky, but maybe I'm doing it wrong.
WIKI:
(nothing yet)
