I'm running a long build script that compiles a lot of projects, and for most builds make returns with "make: Nothing to be done for 'all'", which is good because I'm not changing that much code between builds. The script still takes a long time to finish, however, simply because it calls the autotools configure script for each project, plus some other scripts.
I've read somewhere that make uses timestamps to test if it needs to build.
Is there a way to ask Make if it needs to build or not before calling the configure script?
Or is there another way to prevent re-configuring all of the projects?
Having configured each project once, you do not need to run each configure script again unless those configure scripts themselves change, or you have reason to believe that the test results will have changed. If necessary, you can run config.status instead to recreate the output files without re-running all the tests. If you can be confident that the output files do not need to be recreated, either, then you can just run make.
Moreover, if you are using Automake in conjunction with Autoconf, then your Makefiles are built with tooling that detects when configure's m4 source, normally configure.ac, has changed, therefore requiring a reconfiguration. As long as the Makefile generated by Automake-based configure remains, you can just run make, and it will do the right thing. If the Makefile is missing, then you needed to run configure or config.status anyway.
Additionally, you could consider using a shared cache file for all your configure scripts by passing that file to each one via the --cache-file=FILE option (--config-cache, or -C, is shorthand for --cache-file=config.cache). That should reduce the time each configure script consumes by avoiding redundant testing.
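As a rough illustration, a per-project wrapper along these lines only runs configure when no Makefile exists yet and otherwise goes straight to make. This is a minimal sketch: the project names and the shared cache path are placeholders for your own layout.

CACHE="$HOME/.config.cache"
for dir in project-a project-b project-c; do
    (
        cd "$dir" || exit 1
        if [ ! -f Makefile ]; then
            # First build (or the Makefile was removed): configure with the shared cache.
            ./configure --cache-file="$CACHE"
        fi
        # With Automake-generated rules, make re-runs config.status or configure
        # by itself when configure.ac or Makefile.in has changed.
        make
    )
done

If you only need to ask whether make would do anything at all, GNU make's -q ("question mode") flag exits with status 0 when there is nothing to rebuild.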
Cargo has the --target-dir flag, which specifies a directory to store temporary or cached build artifacts. You can also set it user-wide in the ~/.cargo/config file. I'd like to set it to a single shared directory to make maintenance easier.
I saw that some artifact directories in the target dir are suffixed with what look like unique hashes, which seems safe, but the final products are not suffixed with hashes, which seems prone to name clashes. I'm not sure about this, as I am not an expert on Cargo.
I tried setting ~/.cargo/config to
[build]
target-dir = "./.build"
My original intention was to use the project's local ./.build directory, but somehow Cargo places all build files into the ~/.build directory. I got curious what would happen if I put all build files from every project into a single shared build directory.
It has worked well with several different projects so far, but working for a few samples doesn't mean it's designed or guaranteed to work with every case.
In my case, I am using a single shared build directory for all projects of all workspaces of a user: not only the projects in one workspace, but literally every project in every workspace of that user. As far as I know, Cargo is designed to work with a local target directory. If it is designed to work only with a local directory, a shared build directory is likely to cause some issues.
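For reference, the same shared layout can also be expressed explicitly with an absolute path, for example via the CARGO_TARGET_DIR environment variable (~/.build is just the directory I happen to use; any absolute path works):

# Point every cargo invocation at one shared target directory.
export CARGO_TARGET_DIR="$HOME/.build"
cargo build --release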
Rust/Cargo 1.38.0.
Yes, this is intended to be safe.
I agree with the comments that there are probably better methods of achieving your goal. Workspaces are a simple solution for a small group of crates, and sccache is a more principled caching mechanism.
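If the goal is mainly to avoid recompiling shared dependencies, a sketch of the sccache route (assuming you are happy to install it via cargo) looks like this:

# Install sccache once, then route every rustc invocation through it.
cargo install sccache
export RUSTC_WRAPPER=sccache
cargo build            # compiler output is now cached across projects
sccache --show-stats   # inspect cache hits and misses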
See also:
Fix running Cargo concurrently (PR #2486)
Allow specifying a custom output directory (PR #1657)
Can I prevent cargo from rebuilding libraries with every new project?
I need to install small programs I do not fully trust.
Therefore I would like to monitor all files for changes - whether this script places some files it is not supposed to or edits others.
As I want to monitor all folders and files, I thought about using something similar to rsync - but is there an alternative that only watches for changes?
Does this approach guarantee that I catch everything the software changes? Or are there some kinds of "registry entries" or configuration changes that I could miss?
Thanks a lot!
I would suggest you use some kind of sandbox (probably the most straightforward way nowadays is to use Docker).
You could use Git to track all the changes that are made inside the sandbox/container:
Initialize a git repo in the root dir
Add all files and commit as the base version
Execute the install script you do not trust
Running git status will then show you all the changes that were made during installation; a rough sketch of the whole flow follows below.
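This is only a sketch; the image name, the mount path and the installer script name are placeholders for your setup.

# Start a throwaway container with the installer mounted inside.
docker run -it --rm -v "$PWD/installer:/installer" ubuntu:22.04 bash

# Inside the container:
apt-get update && apt-get install -y git
git config --global user.email "sandbox@example.com"
git config --global user.name  "sandbox"
cd /
printf 'proc/\nsys/\ndev/\nrun/\n' > .gitignore   # skip pseudo-filesystems
git init
git add -A
git commit -q -m "base system before install"
/installer/install.sh      # the script you do not trust
git status                 # new, modified and deleted files
git diff                   # the exact content changes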
I've just started to get to grips with Jenkins. It currently performs the following tasks:
Pulls the latest codebase from git
Uploads the codebase via sftp to my environment
Sends a notification email to the testers and the PM to inform them of a completed deployment.
However for it to be truly useful I need it to perform two more tasks:
Delete the robots.txt and .htaccess files that exist in the git repo and replace them with predefined versions specific to the server
Go through all the code and remove specific code blocks (perhaps anything in between marker comments, e.g. /** Dev only **/ Code to be removed goes here /** Dev only **/, or something like that).
Are there any plugins which can accomplish these things, or would I have to read up on writing Groovy scripts for this sort of thing (I don't know anything about those yet)?
On a related note: I'd also love it if it could combine kit and SASS files. I can't see a plugin for these, but I assume I can just install compass on my build server and then run it via the command line in the build process. Is that correct?
Instead of putting your build tasks directly into the Jenkins job, I recommend writing a build script to accomplish your publishing/deployment tasks.
Jenkins is great for having a single point of automation that is easy to run, can publish build results, and can track successes and failures. In my experience though, you're better off not putting your individual tasks and configuration steps into the Jenkins job configuration. At some point, you'll want to be able to run this job without Jenkins, either because you want to test local changes, or you want to handle multiple jobs and trying to keep job configurations in sync is not fun, or because you're moving to another build/deployment system. Also, putting the build script into a file allows you to put it into your source control system and track changes.
My advice: choose a scripting language (Python, Ruby, Perl, whatever you're comfortable with) or a build system (SCons and Rake are options) and write a build script. Python, Ruby, and Perl all make it easy to manipulate files (#1), and all have a wide choice of templating systems that will accomplish #2. The Jenkins job then becomes running your build script on the command line (or through a language-specific builder), and the build script can include any of the other tasks you decide to put in your build (compass, etc.).
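As a concrete illustration in shell, the script could look roughly like this, with the Jenkins job reduced to a single "execute shell" step that calls it. The file names, directory layout, dev-only markers and file patterns here are assumptions based on the question, not a definitive implementation.

#!/bin/sh
# deploy-prep.sh - rough sketch of a publish/deploy preparation script
set -e

BUILD_DIR=build
rm -rf "$BUILD_DIR"
cp -r src "$BUILD_DIR"                       # work on a copy, not the checkout

# 1. Swap in the server-specific robots.txt and .htaccess.
cp server-config/robots.txt "$BUILD_DIR/robots.txt"
cp server-config/htaccess   "$BUILD_DIR/.htaccess"

# 2. Strip blocks between paired /** Dev only **/ markers
#    (assumes the markers sit on their own lines, in pairs).
find "$BUILD_DIR" \( -name '*.php' -o -name '*.js' \) -exec \
    sed -i '/\/\*\* Dev only \*\*\//,/\/\*\* Dev only \*\*\//d' {} +

# 3. Compile SASS, assuming a compass project (config.rb) lives in the build dir.
compass compile "$BUILD_DIR"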
I write company internal software in PHP and C++.
What are the best methods of deploying this type of software to Linux machines? Currently we use svn export; are there any other methods?
We use checkinstall. Just write a simple Makefile that copies the files to the target directories on the target machine, then run checkinstall to create an RPM, DEB or TGZ package, which you can later easily install with the distribution's package management tools.
You can even add shell scripts that are executed before and after files are copied, so you can do some pre and post processing like adding user accounts, crontab entries, etc.
Once you get more advanced, you can add dependencies to these packages, so a single command can also pull in and install PHP, MySQL, Apache, GCC libraries, and even the required PHP or Apache modules or any external C++ libs you might need.
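For illustration, a stripped-down flow could look like the sketch below; checkinstall wraps the "make install" step and records the files it installs into a package. The package name, version and install paths are placeholders.

# The Makefile's install target just copies files into place, e.g.:
#   install:
#           install -D -m 755 myapp    $(DESTDIR)/usr/local/bin/myapp
#           install -D -m 644 app.php  $(DESTDIR)/var/www/app/index.php

# Build a .deb (use -R for RPM, -S for a Slackware tgz) without installing it locally:
sudo checkinstall -D --pkgname=myapp --pkgversion=1.0 --install=no make install

# Later, on a target machine (the exact filename will vary):
sudo dpkg -i myapp_1.0-1_amd64.deb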
I think it depends on what you mean by deploy. Typically, a deploy process for web projects involves a configuration scripting step in which you take the same deploy package and tailor it to specific servers (staging, development, production) by altering simple configuration directives.
In my experience with Linux servers, these systems are often custom built, and they often use rsync rather than svn export and/or scp alone.
A script might be executed from the command line like so:
$ deploy-site --package=app \
--platform=dev \
--title="Revsion 1.2"
Internally, the system would take whatever was in trunk for the given package from SVN (I'm sure you could adapt this really easily for git too) and generate a new unique tag with the log entry "deploying Revision 1.2".
Then it would patch any configuration scripts with the appropriate changes (URLs, hosts, database passwords, etc.) before rsyncing everything to the appropriate destination.
If there are issues with the deployment, it's as easy as running the same command again only this time using one of your auto-generated tags from an earlier deploy:
$ deploy-site --package=app \
--platform=dev \
--title="Reverting to Revision 1.1" \
--tag=20090714200154
If you also have to compile on the other end, you could include a Makefile as part of your configuration patching and then, once the rsync process completes, execute a command via ssh that compiles the recently deployed code.
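The core of such a script, stripped of the SVN tagging and configuration patching, is essentially an rsync followed by a remote make over ssh. Host names, the user and the paths below are placeholders:

# Sync the prepared release to the target, then build it remotely.
rsync -az --delete build/ deploy@dev-server:/var/www/app/
ssh deploy@dev-server 'cd /var/www/app && make'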
There is, in my experience, a tradeoff between security and ease of deployment.
For my deployments, I've never had a problem using scp to move the files from one machine to another. You can write a simple BASH script to take a list of machines (from a text file or STDIN) and push a given directory/application to a given directory on all of the machines. If you hypothetically pushed it to a bin directory, the end user would never know the difference.
The only problem with that arises when you have multiple architectures and OSes, because it then has to be compiled on each one individually. In that case, you could just write a script (the first example that pops into my mind is Net::SSH from Ruby) to take that list of servers, cd to the given directory, and run the compilation script. However, if all machines use the same architecture and configuration, you can hypothetically compile just once on the machine you are using to distribute.
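A bare-bones version of that BASH script might look like the sketch below; the machine list file, user name and paths are assumptions, not a fixed convention.

#!/bin/bash
# push-app.sh <app-dir> <remote-dir> - copy an application to every host in machines.txt
set -e
APP_DIR=$1
REMOTE_DIR=$2

while read -r host; do
    echo "Deploying to $host"
    scp -r "$APP_DIR" "deploy@$host:$REMOTE_DIR"
    # If the hosts differ in architecture or configuration, compile remotely instead:
    # ssh "deploy@$host" "cd $REMOTE_DIR/$(basename "$APP_DIR") && make"
done < machines.txt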
I've been doing some research into finally automating our Development builds and still have one nagging question that I'm hoping the StackOverflow community can solve for me.
My understanding is that an IntervalTrigger, when set up properly, will check VSS every X seconds for changes and, if it finds a modified file, will run my tasks. One of my tasks would be to check out the AssemblyInfo files and update the version numbers. After these files are updated, they would be checked back into VSS.
Thinking about this solution, it doesn't make much sense, because in my mind I'm forcing the check for changed files to come back true every time the trigger fires. Am I missing something here? Is there a way of doing this without triggering an automatic build on the AssemblyInfo check-in?
You can use a Filtered Source Control Block to exclude certain files from the trigger.
I just posted a bunch about my default build process here which may be of some interest to you: SVN Website Development and Deployment Solution
The way I usually configure my projects with CC.NET is to have two project blocks per solution. One configured as an interval trigger that does nothing more than get the latest from my repository, build the solution, and run unit tests. The other is a schedule trigger that does all the things the other one does, but actually publishes a build. This includes changing version numbers, publishing files, etc. This might work in your case, since the change in version would cause the interval project to trigger, but only once.
Checking the automatically generated AssemblyInfo into the version control system is a bad idea; don't do it. You'll get a lot of noise (50% of all commits!) in your history. Also, it does not give you any new information - you can always derive it from the VCS. Having your build script autogenerate those files is good practice, but don't push those changes back!