Is it possible to have a directory-isolated bin folder, with all installed packages available only in that specific directory?
For example, I have a directory ~/projects and I would like to have the git command available only in that folder.
I think you may be interested in using one of these two tools:
https://github.com/kennethreitz/autoenv
https://github.com/direnv/direnv
The first tool (autoenv, mostly written in Bash) is simpler to install and use but is not maintained anymore, and the second tool (direnv, mostly written in Go) provides more features, including the ability to unset environment variables.
For more details on their respective features, you can take a look at this GitHub issue.
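For example, a minimal direnv setup for the ~/projects case could look like this (it assumes you have installed git into ~/projects/bin, which is an assumption on my part, not something from your setup):

# ~/projects/.envrc
PATH_add bin    # prepends ~/projects/bin to PATH only while you are inside ~/projects

After creating the file, run direnv allow inside ~/projects once; when you leave the directory, the entry is removed from PATH again.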
Cargo has the --target-dir flag, which specifies a directory to store temporary or cached build artifacts. You can also set it user-wide in the ~/.cargo/config file. I'd like to set it to a single shared directory to make maintenance easier.
I saw that some artifact directories under the target dir are suffixed with what look like unique hashes, which seems safe, but the final products are not suffixed with hashes, which seems prone to name clashes. I'm not sure about this, as I am not a Cargo expert.
I tried setting ~/.cargo/config to
[build]
target-dir = "./.build"
My original intention was to use the project's local ./.build directory, but somehow Cargo placed all build files into the ~/.build directory instead. I got curious what would happen if I put all build files from every project into a single shared build directory.
It has worked well with several different projects so far, but working for a few samples doesn't mean it's designed or guaranteed to work in every case.
In my case, I am using a single shared build directory for all projects in all of a user's workspaces: not only the projects in one workspace, but literally every project in every workspace of that user. As far as I know, Cargo is designed to work with a local target directory, and if it is designed to work only with a local directory, a shared build directory is likely to cause some issues.
Rust/Cargo 1.38.0.
Yes, this is intended to be safe.
I agree with the comments that there are probably better methods of achieving your goal. Workspaces are a simple solution for a small group of crates, and sccache is a more principled caching mechanism.
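If you do want one shared directory for everything, the environment variable form may be the least surprising way to set it, since it makes the resulting location explicit (the path below is just an example):

# e.g. in ~/.profile: send every project's build artifacts to one shared directory
export CARGO_TARGET_DIR="$HOME/.cargo-shared-target"
cargo build    # run from any project; artifacts now land in the shared directory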
See also:
Fix running Cargo concurrently (PR #2486)
Allow specifying a custom output directory (PR #1657)
Can I prevent cargo from rebuilding libraries with every new project?
I need to install small programs I do not fully trust.
Therefore I would like to monitor all files for changes: whether the script places files it is not supposed to, or edits others.
Since I want to monitor all folders and files, I thought about using something similar to rsync, but is there an alternative that only watches for changes?
Does this approach guarantee that I catch everything the software changes? Or are there some kinds of "registry entries" or configuration changes I could miss?
Thanks a lot!
I would suggest you use some kind of sandbox (probably the most straightforward way nowadays is to use Docker).
You could use Git to track all the changes that are made inside the sandbox/container:
Initialize a git repo in the root dir
Add all files and commit as the base version
Execute the install script you do not trust
Running git status afterwards will show you all the changes that were made during installation.
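A rough sketch of that sequence inside the container (the installer file name is made up):

cd /                                    # track the container's whole filesystem
git init
git add -A
git commit -m "baseline before install"
./install-untrusted.sh                  # the script you do not trust
git status                              # lists every file added, modified, or deleted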
How would you get ansible or puppet to deal with the following use case:
An application, X version 1, is installed with its configuration variables for version 1. Subsequently X version 2 is released with a different set of config variables (e.g. the application may have added or removed a variable in its files under /etc). I want to upgrade to X version 2 and preserve the old configuration from X version 1. I also want the option to roll back to X version 1 at a later date, restoring it to the configuration state it had prior to upgrading to X version 2.
How would you go about ensuring this using Ansible or Puppet?
Your question is likely to be flagged as overly broad because there are so many potential answers/approaches, and it's going to depend greatly upon a number of other questions, such as:
Are you using a package manager (rpm, apt, etc) or are you installing applications manually, using gnu automake, or something else?
Precisely what sorts of configuration files are involved, how many, where are they located, etc?
At the most basic level, if you're relying on well-maintained packages then simply using the appropriate package manager may suffice. If you're doing anything beyond that then you're going to have to customize things based on your own preferences. There is no single wrong or right answer as to how to do this sort of thing simply because there are so many different approaches based on your individual needs/requirements.
By way of example, suppose you have an application that relies on the configuration file /etc/service.conf, which only has a single entry containing a version number:
version: 1.2.3
You could simply template this file and specify the version number in Ansible or Puppet. For example, in Ansible you would just have a template that looks like this:
version: {{ version }}
And then a playbook that looks something like this:
- hosts: localhost
  vars:
    version: 1.2.3
  tasks:
    - yum: name=package-{{ version }} state=present
    - template: src=service.template dest=/etc/service.conf
Of course you might want to expand this to ensure other versions of the package are removed so only the latest version exists.
If your environment is more complex, for example if there are a lot of different configuration files to maintain or templating is not a viable solution, then you probably want to implement some sort of backup/archiving of the configuration files before updating them. This could be done in any one of a number of ways, for example:
Using the Ansible fetch module to fetch configuration files from the target server
Simply invoking tar, cp, or something similar to make a backup of the files on the target server (a minimal sketch follows below)
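For the second option, a minimal sketch, reusing the /etc/service.conf file from the earlier example:

timestamp=$(date +%Y%m%d%H%M%S)
mkdir -p /var/backups/service
cp -a /etc/service.conf "/var/backups/service/service.conf.$timestamp"   # keep a dated copy before templating over it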
You could also design a completely unique method of maintaining multiple versions of applications. For example, we use symlinks to manage multiple versions of third party applications as well as our own applications. We build and install different versions of Apache in locations like /deploy/software/httpd-2.2.21, /deploy/software/httpd-2.4.0, etc. and have a symlink named /deploy/software/httpd that points to the version we currently want to run. All the version-specific configuration files, etc. remain in the version-specific directory, so switching versions is as simple as shutting down Apache, changing the symlink, and restarting Apache.
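In shell terms, the switch described above boils down to something like this (the apachectl paths follow the example layout; adjust to taste):

/deploy/software/httpd/bin/apachectl stop                       # stop the currently linked version
ln -sfn /deploy/software/httpd-2.4.0 /deploy/software/httpd     # repoint the symlink
/deploy/software/httpd/bin/apachectl start                      # start the newly linked version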
I write company internal software in PHP and C++.
What are the best methods of deploying this type of software to Linux machines? Currently we use svn export; are there any other methods?
We use checkinstall. Just write a simple Makefile that copies the files to target directories on the target machine and then run checkinstall to create RPM, DEB or TGZ package, which you can later easily install with distribution package management tools.
You can even add shell scripts that are executed before and after files are copied, so you can do some pre and post processing like adding user accounts, crontab entries, etc.
Once you get more advanced, you can add dependencies to these packages so it could also pull in and install PHP, MySQL, Apache, GCC libraries, and even required PHP or Apache modules or some external C++ libs you might need, all with a single command.
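A typical run, assuming a Makefile with an install target, looks roughly like this (package name and version are placeholders):

make
sudo checkinstall --pkgname=myapp --pkgversion=1.0 make install   # wraps "make install" into a .deb, .rpm, or .tgz package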
I think it depends on what you mean by deploy. Typically a deploy process for web projects involves a configuration scripting step in which you can take the same deploy package and cater it to specific servers (staging, development, production) by altering simple configuration directives.
In my experience with Linux servers, these systems are often custom built and tend to use rsync rather than svn export and/or scp alone.
A script might be executed from the command line like so:
$ deploy-site --package=app \
--platform=dev \
--title="Revsion 1.2"
Internally, the system would take whatever was in trunk for the given package from SVN (I'm sure you could adapt this really easily for git too) and generate a new unique tag with the log entry "deploying Revision 1.2".
Then it would patch any configuration scripts with the appropriate changes (urls, hosts, database passwords, etc.) before rsyncing it to the appropriate destination.
If there are issues with the deployment, it's as easy as running the same command again only this time using one of your auto-generated tags from an earlier deploy:
$ deploy-site --package=app \
--platform=dev \
--title="Reverting to Revision 1.1" \
--tag=20090714200154
If you have to also do a compile on the other end, you could include as part of your configuration patching a Makefile and then execute a command via ssh that would compile the recently deployed code once the rsync process completes.
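That last sync-and-build step might look roughly like this in shell form (host, paths, and build command are all hypothetical):

rsync -az --delete ./build/ deploy@dev-host:/var/www/app/   # push the prepared tree
ssh deploy@dev-host 'cd /var/www/app && make'               # compile on the target once the sync completes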
There is, in my experience, a tradeoff between security and ease of deployment.
For my deployments, I've never had a problem using scp to move the files from one machine to another. You can write a simple BASH script to take a list of machines (from a text file or STDIN) and push a given directory/application to a given directory on all of the machines. If you hypothetically pushed it to a bin directory, the end user would never know the difference.
The only problem with that is when you have multiple architectures and OSes, where it has to be compiled on each one individually. In that case, you could just write a script (the first example that pops into my mind is Net::SSH from Ruby) to take that list of servers, cd to the given directory, and run the compilation script. However, if all machines use the same architecture and configuration, you can hypothetically just compile it once on the machine you are using to distribute from.
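A minimal sketch of such a push script, reading machine names from a text file (all names and paths are made up):

while read -r host; do
    scp -r ./myapp "user@$host:/usr/local/bin/"   # copy the application directory to each machine
done < machines.txt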
I have 3 Linux machines, and want some way to keep the dotfiles in their home directories in sync. Some files, like .vimrc, are the same across all 3 machines, and some are unique to each machine.
I've used SVN before, but all the buzz about DVCSs makes me think I should try one - is there a particular one that would work best with this? Or should I stick with SVN?
I've had this problem for years, and I don't think version control is necessarily the right way to go. I've had good success with the Unison file synchronizer, which is designed for the express purpose of maintaining consistent home directories on two machines. I'm currently managing seven replicas with Unison; the details are a bit tricky, but it is a great tool, and if you start with two machines you will be extremely pleased.
The key difference between Unison and a VCS is that Unison is willing to delay dealing with conflicts that have to be merged. Plus it gets all the defaults right. And it is fast: I use it daily, over a DSL line, to synchronize about 40GB of data.
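A typical two-machine invocation looks something like this (the host name is a placeholder; ssh:// roots are standard Unison syntax):

unison -batch ~ ssh://othermachine//home/you   # propagate non-conflicting changes in both directions, skip conflicts for later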
Any DVCS would likely work fine. My favorite is Bazaar. It would be easiest to keep your config files in .config, version that, and then symlink as appropriate.
A benefit of DVCS is that you can version the per-machine config files as well, without interfering with versioning global configs.
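A rough sketch of that layout with Bazaar (the file name is just an example):

cd ~/.config
bzr init
bzr add && bzr commit -m "initial dotfiles"
ln -s ~/.config/vimrc ~/.vimrc   # symlink each dotfile into place on every machine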
I've had the same problem, and built a tool on top of Subversion that adds permission, ownership, and SELinux context tracking, keeps the .svn directories out of the actually versioned trees, and adds a concept of layers, so you can, for example, track all your config related to development, which you then only check out on machines you use for developing.
This has helped me organize my settings much better across the 50+ machines I log into.
Here's the project page. It's still a little rough around the edges, but we also use it at work to version system configuration for our 60+ servers.
In general, any version control system that uses some sort of metadata files to track stuff is going to cause you pain when you actually use it.
Version control software isn't really great for home directories. Worse, some software doesn't really like the .svn folders or starts to interpret their contents. You could of course try to fix this with some very complex mirroring setup, but that's hard.
Here's a Mozilla developer that's tried to do this: Version controlling my home dir, there's a couple of suggestions in the comments.
git or Mercurial's cheap branching would work great for this situation. I started with Mercurial because it is simpler, but subsequently moved to git.
One way to handle this very flexibly is to have a build directory under revision control, rather than trying to svn your actual home directory (which has its own issues).
So inside this you keep a structure like:
/home/you/code/dotfiles
/home/you/code/dotfiles/dotbashrc
/home/you/code/dotfiles/dotemacs
...
/home/you/code/dotfiles/makefile
and the makefile can contain logic for specializing files (or not).
This might be heavier than you need, but if your actual setup is complex (I've done this across 3 or 4 different Unices at a time), then it's worth doing something like this.
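In shell terms, the makefile's install logic need not amount to much more than this (specialization would just wrap these lines in conditionals):

install -m 644 dotbashrc ~/.bashrc
install -m 644 dotemacs  ~/.emacs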
I use git for this. So far, I have been able to keep the home directories on several machines synchronized, with no need for branching and merging. Instead, I use git rebase. Conflicts so far have been few and far between and easy to resolve.
I keep files that need to have separate contents out of revision control by putting them into .gitignore.
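The day-to-day flow on each machine is roughly the following (remote and branch names are whatever you happen to use):

git pull --rebase origin master   # replay this machine's local commits on top of the other machines' changes
git push origin master
# one-off: keep machine-specific files out of the repo, e.g.
echo ".bashrc_local" >> .gitignore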
I keep configuration files for the following tools in git:
various shells
emacs and applications, i.e.
gnus
BBDB
emacs-w3m
mutt
screen
various utilities and scripts
I keep notes and such in a subdirectory which has its own git repository.
I would suggest looking into etckeeper if you haven't already. It's designed for versioning configuration files in /etc using a version control system:
etckeeper is a collection of tools to let /etc be stored in a git, mercurial, darcs, or bzr repository. It hooks into apt (and other package managers including yum and pacman-g2) to automatically commit changes made to /etc during package upgrades. It tracks file metadata that revision control systems do not normally support, but that is important for /etc, such as the permissions of /etc/shadow. It's quite modular and configurable, while also being simple to use if you understand the basics of working with revision control.
Although it's designed for /etc I think it would probably also work well (perhaps with some adaptation) for home directories since the basic needs are the same.
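Getting started is roughly this (the package installation command depends on your distribution):

sudo apt-get install etckeeper      # or your distribution's equivalent
sudo etckeeper init                 # put /etc under version control
sudo etckeeper commit "initial commit of /etc"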
I know this is an old thread, but I found it while searching for some dotfiles.
My current system uses Subversion. The key thing I did was check out the working copy into ~/.svnhome/ (in hindsight I should have called it .dotfiles or something more generic). I then created symlinks from my home directory to the files I actually use on that computer. For example, my .procmail and .spamassassin folders are only needed on the mail server, so I don't link those on my home server.
The only file that differs between machines is .bashrc, which has some extra lines on my Mac for MacPorts. So at the bottom of .bashrc I have it check whether .bashrc_local exists and source it if it does.
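That check at the bottom of .bashrc is just a few lines, roughly:

if [ -f "$HOME/.bashrc_local" ]; then
    . "$HOME/.bashrc_local"   # machine-specific additions, e.g. the MacPorts lines
fi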
This is the last remaining thing I have using Subversion (everything else is using git, aside from work). The benefit of svn is that, because it's not a DVCS, I don't have to worry about accidentally committing on one server and forgetting to push it.
I have considered moving it to git so I could create branches. Using the above example, I would have a branch for my mail server to which I would add the .procmail and .spamassassin folders, but not have those in the master branch. But the current system has worked fine for years, since before git even existed, and I don't have any particular motivation to change it now.