How to edit and save a Python file via the command line - Linux

I have a Dockerfile that in one of its RUN instructions creates a Conan file. I'd like to edit and save that Conan file in my Dockerfile to set project-specific settings. Is there a way to do so via the command line, for example the Python prompt?
Alternatively, is there a way to embed a Python file in a Dockerfile?

I don't know of a command to do that, but I would suggest another approach:
Create a Conan template file with environment variables as placeholders (yourconanfile.dist).
Use the envsubst command to generate, at build time, the file you need from the current project variables.
I use this technique in a Docker stack to generate multiple files (wp-cli.yml, deploy.php...). My example is in a Makefile. If you need to use it in your Dockerfile, it is possible, assuming that:
envsubst is installed in your container;
the COPY instruction is used to copy the Conan template file into your container.
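For instance, a minimal sketch, assuming a Debian-based image and a hypothetical template named conanfile.py.dist that contains placeholders such as ${PROJECT_NAME}:
FROM debian:stable-slim
# envsubst is part of the gettext tools on Debian-based images
RUN apt-get update && apt-get install -y gettext-base
COPY conanfile.py.dist /app/conanfile.py.dist
ENV PROJECT_NAME=myproject
# replace the ${...} placeholders with the current environment values
RUN envsubst < /app/conanfile.py.dist > /app/conanfile.py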

mlflow run git-uri clone to specific directory

I am using mlflow run with a GitHub URI.
When I run using the below command
mlflow run <git-uri>
The command sets up a conda environment and then clones the Git repo into a temp directory, but I need it set up in a specific directory.
I checked the entire documentation, but I can't find such an option. Is there no way to do this in one shot?
For non-local URIs, MLflow uses Python's tempfile.mkdtemp function (source code), which creates the temporary directory. You may have some control over it by setting the TMPDIR environment variable as described in the Python docs (they list TMP & TEMP as well, but those didn't work for me on macOS) - but it only sets the base path for temporary directories and files; the directory and file names themselves will still be random.
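For example, in a POSIX shell (a sketch; the run still lands in a randomly named subdirectory under that base path):
export TMPDIR=/my/mlflow/workdir
mlflow run <git-uri>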

I am using Coverity to analyse a node-ts template for a service. What should I use to build it?

Steps:
Installed Coverity
Configured the compilers:
cov-configure --javascript
cov-configure --cs
I am stuck at the build step, cov-build. Yarn is used to run and configure the service, but I am not sure what Coverity expects here.
I tried a couple of npm run commands, but every time I end up getting this:
[WARNING] No files were emitted. This may be due to a problem with your configuration
or because no files were actually compiled by your build command.
Please make sure you have configured the compilers actually used in the compilation.
I also tried different compilers, but no luck.
What should be done in this case?
You need to do a filesystem capture for JavaScript files. You can accomplish this by running cov-build with the --no-command flag.
cov-build --dir CoverityIntermediateDir --no-command --fs-capture-list list.txt
Let's break down these options:
--dir: the intermediate directory in which to store the emitted results (used by cov-analyze later).
--no-command: do not run a build command; instead, look for certain file types to capture.
--fs-capture-list: use the provided file to specify which files to look at and possibly emit to the intermediate directory.
A recommended way to generate the list.txt file is to grab it from your source control. If using git, run:
git ls-files > list.txt
I also want to point out that if you don't have a convenient way to get a file listing for --fs-capture-list, you can use the --fs-capture-search option instead and pair it with a filter to exclude the node_modules directory.
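Something along these lines (a sketch; --fs-capture-search-exclude-regex is my assumption for the filter option - check the cov-build reference for your Coverity version):
cov-build --dir CoverityIntermediateDir --no-command --fs-capture-search . --fs-capture-search-exclude-regex node_modules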
The Coverity forums have some useful questions and answers:
Node.js File system capture
Really, though, the best place to look is the documentation. There are several examples of what you want to do in their guides.

Dockerizing a Node.js app - what does ENV PATH /app/node_modules/.bin:$PATH do?

I went through one of the very few good tutorials on dockerizing Vue.js, and there is one thing I don't understand: why is this mandatory in the Dockerfile?
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json #not sure though how it relates to PATH...
I found only one explanation here which says:
We expose all Node.js binaries to our PATH environment variable and
copy our projects package.json to the app directory. Copying the JSON
file rather than the whole working directory allows us to take
advantage of Docker’s cache layers.
Still, it didn't make me any smarter. Can anyone explain it in plain English?
Error prevention
I think this is just a simple method of preventing an error where Docker isn't able to find the correct executables (or any executables at all). Besides adding another layer to your image, there is, as far as I know, no downside to adding that line to your Dockerfile.
How does it work?
Adding node_modules/.bin to the PATH environment variable ensures that the executables created during the npm or yarn build processes can be found. You could also COPY your locally built node_modules folder into the image, but it's advised to build it inside the Docker container to ensure all binaries are built for the OS running in the container. The best practice would be to use multi-stage builds.
Furthermore, adding node_modules/.bin at the beginning of the PATH environment variable ensures that exactly these executables (the ones from the node_modules folder) are used, instead of any other executables with the same names that might also be installed inside the Docker image.
Do I need it?
Short answer: Usually no. It should be optional.
Long answer: It should be enough to set the WORKDIR to the path where node_modules is located; the RUN, CMD or ENTRYPOINT commands in your Dockerfile would then find the correct binaries and execute successfully. But I, for example, had a case where Docker wasn't able to find the files (I had a pretty complex setup with a so-called devcontainer in VSCode). Adding the line ENV PATH /app/node_modules/.bin:$PATH solved my problem.
So, if you want to increase the stability of your Docker setup and make sure that everything works as expected, just add the line.
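For illustration, a minimal sketch of how the pieces fit together (node:20, the /app path and the tsc devDependency are my assumptions, not from the tutorial):
FROM node:20
WORKDIR /app
# let the project's locally installed binaries win the PATH lookup
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json .
RUN npm install
COPY . .
# "tsc" now resolves to /app/node_modules/.bin/tsc without spelling out the full path
RUN tsc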
So I think the benefit of this line is to add the node_modules path inside the Docker container to the container's list of PATH entries. If you're on a Mac (or Linux) and run:
$ echo $PATH
you should see a list of paths which are used to run global commands from your terminal, i.e. gulp, husky, yarn and so on.
The line above adds the node_modules path to the list of PATHs in your Docker container, so that such commands, if needed, can be run globally inside the container.
.bin (short for 'binaries') is a hidden directory; the period before bin indicates that it is hidden. This directory contains the executable files of your app's modules.
PATH is just a collection of directories/folders that contains executable files.
When you try to do something that requires a specific executable file, the shell looks for it in the collection of directories in PATH.
ENV PATH /app/node_modules/.bin:$PATH adds the .bin directory to this collection, so that when Node tries to do something that requires a specific module's executable, the shell will look for it in the .bin folder as well.
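A quick illustration from inside such a container (eslint is a hypothetical project dependency here):
$ /app/node_modules/.bin/eslint src/   # explicit path, always works
$ eslint src/                          # works only because .bin is on PATH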
For each instruction, like FROM, COPY, RUN, CMD, ..., Docker creates an image with the result of that instruction, and these images are called layers. The final image is the result of merging all the layers.
If you use the COPY command to store all the code in one layer, that layer will be larger than one that stores only an environment variable with the path to the code.
That's why the cache layers are a benefit.
For more info about layers, take a look at this very good article.
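To make the caching benefit concrete, a common ordering looks like this (a sketch; the paths follow the question's /usr/src/app):
COPY package.json /usr/src/app/package.json
# this layer is reused from cache as long as package.json is unchanged
RUN npm install
# source edits only invalidate this layer, not the npm install above
COPY . /usr/src/app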

Making an Executable out of an entire Python Project

Is there any way I can make an executable out of my Python project? There are many Python scripts in my project, and there are SQLite db files as well as other files and folders that are required for the software to run correctly. What is the best way of making this entire project executable? Should I only make the Python scripts executable?
I have tried PyInstaller, but I am not sure how to bundle all the files into one single executable. Shown above is a copy of all the files and folders in my directory.
I think you need to modify the spec file, which PyInstaller creates on the first run. There are special parameters for data files (datas, filled by the --add-data option) and for binaries:
binaries: non-python modules needed by the scripts, including names given by the --add-binary option;
Try adding your database and other data files to the datas field and they should be included in your package.
For further questions I recommend referring to the official documentation and checking the examples on GitHub.
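Alternatively, the same can be passed on the command line without editing the spec file, for example (a sketch; main.py and app.db are hypothetical names, and the ':' separator is for Linux/macOS - use ';' on Windows):
pyinstaller --onefile --add-data "app.db:." main.py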

Jenkins build: use parameters from a file

I want to use the option "This project is parameterized" in Jenkins (Hudson), but instead of String parameters I want to load the settings from an external file that contains all the parameters (var=value ...).
I found the "Parameterized Trigger" plugin, which can pass a properties file, but only to a build triggered after the current one; I need the file in the first build itself.
Thank you.
I would suggest that you specify the path to the file as a String parameter; call it PARAMS_FILE.
The file should look like:
VAR1=someValue
VAR2=someOtherValue
If you use bash in your build steps, then you could do:
. ${PARAMS_FILE}
at the beginning of an execute-shell step, and the params would be set for that shell.
That would solve the problem for shell build steps, at least.
If you use other build steps, you would need another solution.
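Put together, an "Execute shell" build step could look like this (a sketch, using the PARAMS_FILE String parameter suggested above):
#!/bin/bash
# source the properties file so its var=value lines become shell variables
. "${PARAMS_FILE}"
echo "Building with VAR1=${VAR1} and VAR2=${VAR2}"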
