I am creating a build system for development purposes for the FreeCAD application. The repo is here if you want to get a better sense of the scope of what I'm talking about.
Essentially the folder structure is:
(Main)
    (Linux)
        (Ubuntu)
            ubuntu.sh
            ubuntu.Dockerfile
        (Fedora)
            fedora.sh
            fedora.Dockerfile
    (Windows)
    (Mac)
    .env
What I want to do is use the env variables in .env as a central source of truth for all the build scripts in the tree. But I don't want to have to explicitly define the path of the .env inside the files, absolute or relative, since I'm still iterating and I don't want to update all the files if I rearrange the tree. Nor do I want to put independent .envs in all the child dirs, for the same reason (unless they auto-update somehow).
My question is as follows:
How do I refer to a "local" .env path in each script, Dockerfile, etc., while only having to modify one top-level .env file to auto-update an evolving tree, in a cross-platform way?
Some things I thought through:
Windows has "hard links" that are equivalent to, but not compatible with, POSIX hard links. I thought about creating a windows.env and a posix.env in each child dir that both point to the same main .env, but most config files can only take one .env path argument.
I thought about writing a script that updates all the .envs when run (I'd rather not). Alternatively, I'll accept an answer that uses some dotenv tooling to accomplish the same goal, as long as it's cross-platform and runs locally; I'm just not very familiar with those tools. I'd prefer the tooling or script to run as a service rather than having to be run every time to update the files.
If I'm using Git and only dealing with shell scripts, then a command at the top of the script such as . "$(git rev-parse --show-toplevel)/.env" works well, but it has major limitations for Dockerfiles and other YAML-based file types.
I currently use a run.sh file in the top-level dir that sources the .env and then calls the other files within the tree. This seems to be the most common pattern I see in other repos, but it means I need two files, run.sh and run.pwsh, which just seems extraneous and hacky: extra files that are basically one-liners.
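For illustration, a top-level run.sh along those lines might look like the sketch below; FREECAD_BRANCH is a made-up variable, and the Dockerfile path just follows the tree above. The --build-arg flags are what carry values into a Dockerfile (which needs a matching ARG line), since a Dockerfile can't source a file at build time:
#!/bin/sh
# Locate the repo-root .env without hard-coding its path (requires Git).
. "$(git rev-parse --show-toplevel)/.env"

# Forward selected values into the Docker build. FREECAD_BRANCH is a
# hypothetical variable; the Dockerfile would declare: ARG FREECAD_BRANCH
docker build \
    --build-arg FREECAD_BRANCH="$FREECAD_BRANCH" \
    -f Linux/Ubuntu/ubuntu.Dockerfile .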
Related
I need to use a variable between the node_modules folder and the src folder. Is there a way to define a variable that can be used within the whole project?
Thanks.
Typically this is done with process environment variables. Any project can look for a common environment variable and handle that case accordingly, as can the node modules of said project. As long as they agree on the name, they all share the same environment. Have a good default; don't force people to set this environment variable.
The environment can be set using an environment file (do not check this into source control!), in your container or cloud configuration, or even right on the command line itself:
TLOG=info npm test
That is an example that I use frequently. My project runs logging for the test cases only at alert level: there are a lot of tests, so this keeps the output less verbose. However, sometimes while developing I want to see all the logs, so my project looks for an environment variable TLOG (short for "test logging") and I can set it just for that run! No code change is needed, which is nicer than JavaScript variables that need to be set back to their original values, get forgotten and left on, etc.
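The same default-handling idea is a one-liner in shell (a minimal sketch; in Node the analogue would be reading process.env.TLOG and falling back to a default):
# Fall back to "alert" when TLOG is unset or empty.
LOG_LEVEL="${TLOG:-alert}"
echo "logging at: $LOG_LEVEL"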
I'm using the Kaldi toolset for speech recognition on a computer where I don't have the rights to modify the contents of the install in /var/kaldi. The directory contains a folder of scripts that are provided as a sample of usage; the scripts are also heavily linked to each other.
The structure is as follows: the main scripts folder for dataset mydataset is found in /var/kaldi/egs/mydataset/v1/, where scripts such as run.sh or path.sh are located. In particular, the user is expected to run the run.sh script, which then calls path.sh, which then exports a KALDI_ROOT variable:
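# `pwd` is the directory you run from, so this only resolves to /var/kaldi
# when run from /var/kaldi/egs/mydataset/v1.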
export KALDI_ROOT=`pwd`/../../..
The scripts folder also contains many links that point to folders in other scripts' locations, so that scripts can be re-used if they're not changed. An example would be the local entry in v2 pointing to the local folder in v1 as follows:
IntxLNK^A.^#.^#/^#v^#1^#/^#l^#o^#c^#a^#l^#/^#
or
../v1/local/
I have to run the scripts from a folder I've been given somewhere else in the system, as in myfolder/egs/mydataset/v2/.
How can I modify path.sh and/or link to the installation folder so that I can run everything located in the intended kaldi root /var/kaldi, but also link to the rest of the scripts in myfolder/egs?
After talking with the admin of the system, the solution is to rebuild each link one by one to point to the new script locations. I'll leave the question open in case someone wants to add something else. Also, feel free to delete the question if you believe it not to be useful.
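If anyone needs the same, here is a rough sketch of the relinking, assuming the links are relative (like the ../v1/local/ example above) and should be repointed at the original install under /var/kaldi:
cd myfolder/egs/mydataset/v2
find . -type l | while read -r link; do
    rel=$(readlink "$link")                            # e.g. ../v1/local/
    # Resolve the relative target against the original install location...
    abs="/var/kaldi/egs/mydataset/v2/$(dirname "$link")/$rel"
    ln -sfn "$abs" "$link"                             # ...and repoint the link there.
done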
What I do is make a ProgramFiles directory in my home directory, i.e. ~/ProgramFiles.
There I make folders for all programs I want to install or git-clone.
In path.sh I always use the full /home/<user>/ProgramFiles/kaldi as the root. Defining an absolute path helps overcome many errors along the way. You may have to define DATA_ROOT at some points in your path.sh.
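So the top of path.sh ends up as something like this sketch (the data path is made up; only define DATA_ROOT if your recipe actually uses it):
# Absolute root instead of `pwd`/../../.., so the scripts work from any directory.
export KALDI_ROOT="$HOME/ProgramFiles/kaldi"
# Some recipes also expect a data root (hypothetical path, adjust to taste).
export DATA_ROOT="$HOME/kaldi-data"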
This is probably a basic question but I've been Googling for a while on it... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare them to a corresponding output file to make sure they match. (I've read about "golden" but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock, whose source code is at https://github.com/haskell/haddock. They have the test files under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and they are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). Then the test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) uses a CPP macro (__FILE__) to get the path of the current source file, and can then resolve the test files relative to that folder.
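For instance, the cabal stanza for a top-level resources folder would look something like this (a sketch; the folder name and file patterns are made up). In the test code, takeDirectory applied to __FILE__ then gives you a directory to resolve those files against:
extra-source-files:
  resources/start-db.sh
  resources/*.txt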
I know Puppet modules always have a files directory, I know where it's supposed to be, and I have used the source => syntax effectively in my own handwritten modules. But now I need to learn how to deploy files using Hiera.
I'm starting with the saz-sudo module and I've read the docs, but I can't see anything about where to put the sudoers file, i.e. the one I want to distribute.
I'm not sure whether I need to set up a site-wide files dir in /etc/puppetlabs/puppet and then make subdirs in there for every module, or what. And does Hiera know to look in /etc/puppetlabs/puppet/files/sudo if I say source => "puppet:///files/etc/sudoers"? Do I need to add a pathname in /etc/hiera.yaml? Add a line "- files"?
Thanks for any clues.
My cursory view of the puppet module, given their example of using Hiera:
sudo::configs:
  'web':
    'source' : 'puppet:///files/etc/sudoers.d/web'
  'admins':
    'content' : "%admins ALL=(ALL) NOPASSWD: ALL"
    'priority' : 10
  'joe':
    'priority' : 60
    'source' : 'puppet:///files/etc/sudoers.d/users/joe'
This suggests it assumes you have a "files" Puppet module. So under your Puppet modules directory:
mkdir -p files/files/etc/sudoers.d/
Drop your files in there.
Explanation:
The url 'puppet:///files/etc/sudoers.d/users/joe' is broken down thus:
puppet: protocol
///: Three slashes indicate the source of the file is in a module.
files: name of the module
etc/sudoers.d/users/joe: full path to the file within the module's "files" directory.
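Concretely, on a stock Puppet 4+ layout the URL maps to an on-disk path like this (a sketch; the modulepath varies by setup):
# Default modulepath on Puppet 4+ (adjust for your environment).
MODULES=/etc/puppetlabs/code/environments/production/modules
# The first "files" is the module name; the second is the module's files/ dir.
mkdir -p "$MODULES/files/files/etc/sudoers.d/users"
# puppet:///files/etc/sudoers.d/users/joe is then served from:
#   $MODULES/files/files/etc/sudoers.d/users/joe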
You don't.
The idea of a module (Hiera-backed or not) is to relieve you of the need to manage the whole sudoers file. Instead, you can manage each single entry in the sudoers file.
I recommend reviewing the documentation carefully. You should definitely not have a file { "/etc/sudoers": } resource in your manifest.
Hiera doesn't have anything to do with files.
Hiera is like a variables database, and it serves you values based on the hierarchy you have defined.
The files inside Puppet are usually accessed through attributes like source =>, and those files follow a basic structure: in most cases you reference either a file or a template.
A template can serve your needs here, automatically building a sudoers file from your data.
There are also modules that support modifying sudoers directly.
It is up to you what to do.
In this case, saz stores the location of the file in Hiera, but the real location can be a file inside your Puppet setup (like a module file or something similar), which is a concern completely unrelated to Hiera.
Read about the Puppet file server.
If you have questions, just ask.
I'm trying to store the whole output of my build, which includes some empty folders. These aren't included by the artifact mechanism in TeamCity:
What doesn't work:
OAR => OAR.zip
OAR->OAR.zip
OAR
Inside OAR I have a folder structure that needs to be stored. I know I could put a placeholder file in each folder, but that is not the answer I'm after. Otherwise I'll have to zip it myself?
Unfortunately TeamCity, by design, searches for files and uploads them as artifacts, which means that empty folders are never included. Given the open and very old issue in the TeamCity tracker, I doubt they are going to fix it any time soon.
I would recommend zipping the folder yourself; that is the approach we have taken. How you implement it depends on the build technology you are using. For example, if you are building with NAnt you could add the zip task to your build; there are similar options for MSBuild and Ant.
If you don't want to rely on the build performing the zip, I would recommend installing 7zip on your build agents and using the command line to perform the zip. Just remember: if you want 7zip to include empty directories, use * as the wildcard rather than *.*, like so:
7z a -r OAR.zip *
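For example, as a single command-line build step (a sketch; run from the directory that contains OAR so the paths stored in the archive are relative to OAR itself):
cd OAR && 7z a -r ../OAR.zip *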
Technically you could use PowerShell to do the zipping, which would be better than having to install something on your agents. I haven't tried this option myself.
Apologies for not linking all my references above. Apparently, and understandably so, I need at least 10 reputation to post more than 2 links.