How to set and use ENV variable with pip in requirements.txt - azure

I use a Personal Access Token to download a package from a private repo. I don't want to store it in requirements.txt, so I want to use an environment variable for that.
So I reference it in requirements.txt:
git+https://random:${PAT_AZURE}#myorg.visualstudio.com/myproject/_git/myrepo
Then I set it in Windows locally using:
set PAT_AZURE=MYACCESSGENERTEACCESSTOKEN
But it's not working: when I try pip install -r requirements.txt I get an authentication failure (it works when I hardcode the token).
Any idea how to make it work?

You can open requirements.txt and add the variable:
...
PAT_AZURE=MYACCESSGENERTEACCESSTOKEN
...
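For what it's worth, pip itself expands environment variables written in the ${VAR} form inside a requirements file, so a sketch along these lines may be closer to what you want. The @ between the credentials and the host is an assumption about the intended URL, and the token has to be set in the same shell session that runs pip (set only affects the current cmd session):
# requirements.txt -- pip substitutes ${PAT_AZURE} at install time
git+https://random:${PAT_AZURE}@myorg.visualstudio.com/myproject/_git/myrepo

:: Windows cmd, in the same session that will run pip
set PAT_AZURE=MYACCESSGENERTEACCESSTOKEN
pip install -r requirements.txt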


Docker File and Python

Apologies, I am very new to Docker. I have the following Dockerfile, which contains the commands below. I am not sure I understand all of them and I would appreciate some explanation. I commented all the lines I understood and put a question mark on the others. Please see below.
# I think this line means that Python will be our base. Can someone please explain this line more?
FROM python:3.9 as base
#create a working directory in the virtual machine (VM)
WORKDIR /code
# copy all the python requirements stored in requirements.txt into the new directory (in the VM)
COPY ./requirements.txt /code/requirements.txt
# activate the package manager pip. But why use no-cache-dir?
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# copy all files to the new directory (in the VM)
COPY ./ /code/
# I don't understand the line below. Please explain: why uvicorn? app.main:app is the
# location of the FastAPI app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "180"]
Thank you
A Dockerfile states all the steps that Docker will execute when creating your image. From that image, a container can be created.
# I think this line means that Python will be our base. Can someone please explain this line more?
FROM python:3.9 as base
FROM python:3.9 as base starts the build from the official Python 3.9 image and names that stage base. Beyond that, this is very basic Docker stuff; follow a (beginner's) tutorial and you will learn a lot more than from someone spoon-feeding you little bits of knowledge.
#create a working directory in the virtual machine (VM)
WORKDIR /code
You are creating a container image, not a VM. These are related but very different concepts and should not be mixed up.
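As a quick illustration of the image vs. container distinction, a minimal sketch (the image tag myapp is made up here; the port matches the CMD at the end of the Dockerfile):
docker build -t myapp .        # builds an image from the Dockerfile in the current directory
docker run -p 180:180 myapp    # creates and starts a container from that image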
# copy all the python requirements stored in requirements.txt into the new directory (in the VM)
COPY ./requirements.txt /code/requirements.txt
This copies requirements.txt into the image (the rest of the source tree is copied in a later step).
# activate the package manager pip. But why use no-cache-dir?
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
RUN is an image build step, and its outcome is committed to the Docker image. So in this step you are telling Docker that you want an image that has everything installed as outlined in requirements.txt, using pip. As for --no-cache-dir: by default pip keeps the wheel files (.whl) of the packages you install, but inside an image they would only increase its size and are no longer needed afterwards. So, no cache.
# copy all files to the new directory (in the VM)
COPY ./ /code/
Again, not a VM but an image: an image that will later be used to create a container.
# I don't understand the line below. Please explain: why uvicorn? app.main:app is the
# location of the FastAPI app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "180"]
Because you are trying to run a FastAPI project, and FastAPI is just the application; you need a server (uvicorn is an ASGI server) to actually be able to fire requests at FastAPI. This is explained on the very first page of the FastAPI documentation, actually.
"app.main:app" express your project has such python file:
<Project Root Dir>
    app           - folder
        main.py   -- python file
In main.py, you initialize a FastAPI instance named app, like this:
# main.py
from fastapi import FastAPI
...
app = FastAPI()
...
uvicorn uses the rule above to find the FastAPI instance app, then loads it.
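For reference, the CMD in the Dockerfile is just the exec-form equivalent of running the server by hand; a minimal sketch, assuming the same project layout:
# run from the project root (the directory that contains the app folder)
uvicorn app.main:app --host 0.0.0.0 --port 180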

How to install a global npm package using fnm (Fast Node Manager)?

My Problem
I have installed fnm (Fast Node Manager) from this GitHub repo and it all works great except for installing global npm packages. For example, the well-known package nodemon is something I want installed globally and not in my project's node_modules directory.
When installing the package globally there seems to be no problem:
And when checking the global package list, nodemon seems to be there:
But when running the command nodemon I get the following output:
As also seen in the fnm repository documentation, you need to run eval "$(fnm env --use-on-cd)" on shell load in order to get fnm to work properly, and this is what I have done in my .bashrc file.
Note
I am using Windows 10; it seems to be working on my Mac laptop.
The Question
How can I have a global npm package installed for all, or at least a single, fnm Node version? What I mean by this is that by running fnm use <NODE_VERSION> you specify which Node version to use, as also seen in the repository documentation. I want to be able to run the nodemon command without it being installed in a project's node_modules directory.
You do not need to delete the multishells. The problem is the Git Bash path.
Fix is here: https://github.com/Schniz/fnm/issues/390
Put this in your .bashrc
eval $(fnm env | sed 1d)
export PATH=$(cygpath $FNM_MULTISHELL_PATH):$PATH
if [[ -f .node-version || -f .nvmrc ]]; then
  fnm use
fi
As mentioned, this actually worked on my macOS machine (aka my MacBook Pro) but not on my Windows 10 computer. The solution I came up with after thoroughly analyzing the behaviour of fnm is the following:
1. Go to C:\Users\<YOUR_USER>\AppData\Local\fnm_multishells and delete the directory if it exists.
2. When installing global packages, do it via CMD or any terminal that isn't Bash (or rather, any terminal that does not run the "$(fnm env --use-on-cd)" script), because that script makes fnm search for the global package in the wrong place.
This approach mitigates the path errors, which I found to be the core problem. As shown in the screenshot above, when trying to run nodemon it looks for it in C:\Program Files\Git\Users\Valeri..... but this directory simply does not exist. After removing the directory mentioned in step 1, fnm stops looking for nodemon in that path and instead uses the one installed via CMD.
Essentially, the "$(fnm env --use-on-cd)" script allows us to use fnm properly but at the same time causes this issue. Simply install global npm packages from a terminal that does not run this command.
Edit
I just had the same issue, and to confirm: you don't even need to delete the fnm_multishells directory. Just run npm -g remove <whatever> and install it again via cmd or PowerShell, i.e. a command line that does not run "$(fnm env --use-on-cd)" on load.
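A minimal sketch of that workflow, using nodemon from the question:
:: run in cmd or PowerShell (a shell that does not load the fnm env --use-on-cd script)
npm -g remove nodemon
npm install -g nodemon
:: back in the usual terminal, nodemon should now resolve
nodemon --version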

Should I dockerize a Django app as non-root?

Should I dockerize a Django app as a non-root user? If yes, how can I set up a non-root user for Django? In a Node.js app you would have USER node, which is considered better practice.
Code example from the official Docker page, which does not include a non-root user:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
It's generally a good practice.
At the start of your Dockerfile, before you COPY anything in, create the user. It doesn't need to have any specific properties and it doesn't need to match any specific host user ID. The only particular reason to do this early is to avoid repeating it on rebuilds.
At the end of your Dockerfile, after you run all of the build steps, only then switch USER to the new user. The code and any installed libraries will be owned by root, and that's good, because it means the application can't accidentally overwrite its own code.
FROM python:3
# Create the non-root user. Doing this before any COPY means it won't
# be repeated on rebuild, for marginal savings in space and rebuild time.
# The user can have any name and any uid; it does not need to match any
# particular host system where the image might run.
RUN adduser --system --no-create-home someuser
# Install the application as in the question (still as root).
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Explain how to run the container. Only switch to the non-root user now.
EXPOSE 8000
USER someuser
CMD ["./main.py"]
Do not try to write files inside the container; instead, use a separate database container for persistence. Do not pass a host user ID as a build argument. Do not configure a password for the user or otherwise attempt to set up interactive logins. Do not create a home directory; it won't be used.
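To verify the result, a quick sketch (the image tag django-app is made up here); running id in the container should report someuser rather than uid=0 (root):
docker build -t django-app .
docker run --rm django-app id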

Setting up a way for different computers to access the same virtual environment

I'm trying to set up a system on multiple computers where I'll be able to run the same set of scripts and have them work on all of them. If I have the same version of Python installed locally on all of the computers, am I able to set up a virtual environment on a network location? If so, does the Python executable need to be on the local drive, or can it be on the network location?
If this isn't possible then what is the best way to do it?
Thanks.
Yes you can. You can export all dependencies with pip freeze > requirements.txt and prepare a script which installs missing packages on each machine.
I would personally implement a script that sorts the dependencies in requirements.txt alphabetically, store both in a git repository, and define a cron job on each machine that pulls the current version of requirements.txt from the remote and installs any missing dependencies, plus another script that updates requirements.txt and pushes the changes so they propagate to the other machines. A sketch of both follows below.
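A minimal sketch of the two scripts, assuming the repository is already cloned on every machine; the script names, the branch name main, and the cron setup are assumptions:
# update_requirements.sh -- run on the machine where dependencies changed
pip freeze | sort > requirements.txt
git add requirements.txt
git commit -m "Update requirements"
git push origin main

# sync_requirements.sh -- run from a cron job on every other machine
git pull origin main
pip install -r requirements.txt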

Installing a software and setting up environment variable in Dockerfile

I have a jar file for which I need to create a Docker image. My jar file depends on an application called ImageMagick. Basically, ImageMagick needs to be installed and the path to ImageMagick added as an environment variable. I am new to Docker, and based on my understanding, I believe a container can only access resources within the container.
So I created a Dockerfile, as such:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
RUN ["yum","install","ImageMagick"]
RUN ["export","imagemagix_home","whereis ImageMagick"]
(This is where I am struggling: I need to set the env variable to the installation directory of ImageMagick. Currently I am getting null.)
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
Please let me know whether the solution I am trying is proper, or whether there is a better solution for my problem.
Update
As I am installing an application and setting the env variable at build time, passing an argument with -e at runtime is no use. I have updated my Dockerfile as below:
FROM openjdk:8
ADD target/eureka-server-1.0.0-RELEASE.jar eureka-server-1.0.0-RELEASE.jar
EXPOSE 9991
RUN ["yum","install","ImageMagick"]
ENV imagemagix_home = $(whereis ImageMagick)
RUN ["wget","https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-64bit-
static.tar.xz"]
RUN ["tar","xvf","ffmpeg-git-*.tar.xz"]
RUN ["cd","./ffmpeg-git-*"]
RUN ["cp","ff*","qt-faststart","/usr/local/bin/"]
ENV ffmpeg_home = $(whereis ffmpeg)
ENTRYPOINT ["java","-jar","eureka-server-1.0.0-RELEASE.jar"]
And while building, I am getting this error:
OCI runtime create failed: conatiner_linux.go: starting container process caused "exec": "\yum": executable file not found in $PATH: unknow.
Update
yum is not available in my base image, so I changed yum to apt-get as below:
RUN apt-get install build-essential checkinstall && apt-get build-dep imagemagick -y
Now I am getting an error that the packages build-essential and checkinstall cannot be found, and the command returned a non-zero code: 100.
Kindly let me know what's going wrong.
It seems build-essential or checkinstall is not available. Try installing them in separate commands, or search for them.
Maybe you need to run apt-get update to refresh the repository cache before installing them; see the sketch below.
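A minimal sketch of that suggestion, with the cache refreshed first and the packages split into separate commands (whether checkinstall is available depends on the Debian release behind the base image, so treat that line as an assumption):
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y checkinstall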
