GitLab CI code to deploy an Azure Function - how to control version numbers - Linux

Background Information
I'm trying to ensure that no matter how many times or when I run my .gitlab-ci.yml pipeline, it will consistently download and install the EXACT same Azure Function deployment environment each time. I don't want to run the script today and get Azure CLI version 2.25, and then tomorrow, when the pipeline is triggered again, have it install / use version 2.26.
I recently came across an article that shows how to deploy an Azure Function. It's found here: https://dev.to/alandecastros/gitlab-ci-script-to-deploy-a-azure-function-3gc4
For ease of readability, I've copied and pasted the .gitlab-ci.yml code here:
stages:
  - deploy

deploy:
  stage: deploy
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - curl -sL https://aka.ms/InstallAzureCLIDeb | bash
    - apt-get install curl && curl -sL https://deb.nodesource.com/setup_12.x | bash -
    - apt-get install nodejs
    - npm install -g azure-functions-core-tools@3 --unsafe-perm true
    - az login --service-principal -u $APPLICATION_ID -p $APPLICATION_SECRET --tenant $TENANT_ID
    - func azure functionapp publish $FUNCTION_APP --csharp
  only:
    - master
QUESTIONS
From what I can tell, it feels like the first command under the script section will install the latest version of the Azure CLI. Is this correct? I reviewed the https://azurecliprod.blob.core.windows.net/$root/deb_install.sh file, and it seems it adds the necessary repositories to the Debian image and then runs
apt-get install -y azure-cli
In the case of nodejs, it seems it will always install major version 12... but the minor/patch version can change. Is this correct?
How can I change this logic to control version numbers? One idea is to create my own Docker image using this logic once, and then just keep reusing the custom image. I've tested it and it's working.
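For reference, a minimal sketch of what I mean by that, built from the same install commands as the pipeline above (the name and tag you publish the image under are up to you):

FROM mcr.microsoft.com/dotnet/core/sdk:3.1

# Whatever versions these scripts resolve at build time get frozen into
# the image, so every future pipeline run reuses the exact same tools.
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash && \
    curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
    apt-get install -y nodejs && \
    npm install -g azure-functions-core-tools@3 --unsafe-perm true

The pipeline's image: line would then point at the published custom image instead of mcr.microsoft.com/dotnet/core/sdk:3.1, and the script section would only need the az login and func publish steps.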
But is there a way to install a very specific version of Node? I tried testing like this:
# apt-get install curl && curl -sL https://deb.nodesource.com/setup_12.x | bash -
I can see it installed 12.22.1:
Unpacking nodejs (12.22.1-1nodesource1) ...
Setting up nodejs (12.22.1-1nodesource1) ...
I tried to follow up and do something like this:
# apt-get install nodejs12.22.1
and also
# apt-get install node_12.22.1
But in both cases I'm getting errors that it can't find these packages.
Thanks for reading / for the help.

Both the Azure CLI and Node.js offer a bash script for installing the tools, with the drawback of always getting the latest release (then again, for the majority of users this is probably a good thing). These scripts, as you figured out, do additional things like managing the repositories and trust.
Azure CLI
Microsoft offers documentation on how to pin to a version: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt#install-specific-version
In essence, you have to manually trust the signing key and add the repository, as the script would otherwise do for you. Afterwards you can use the regular apt-get install <package>=<version> syntax to specify a version.
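A condensed sketch of those steps, assuming the Debian 10 ("buster") base of the mcr.microsoft.com/dotnet/core/sdk:3.1 image; the version string below is only an example, check the repository for the ones actually on offer:

# Trust Microsoft's signing key and register the azure-cli repository
apt-get update && apt-get install -y curl gnupg
curl -sL https://packages.microsoft.com/keys/microsoft.asc | \
    gpg --dearmor > /etc/apt/trusted.gpg.d/microsoft.gpg
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ buster main" \
    > /etc/apt/sources.list.d/azure-cli.list
apt-get update

# apt-cache madison azure-cli lists the available versions;
# the pin below is an example only
apt-get install -y azure-cli=2.25.0-1~buster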
Node.js
In the case of Node, they at least offer different scripts for each major release. Otherwise it seems to be a bit more involved, as is evident from https://github.com/nodesource/distributions/issues/33. I haven't tried the workarounds proposed there, as I personally am not interested in pinning Node.js to a minor release.
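For completeness, an untested sketch of the workarounds discussed in that issue, using the package version from the apt output in your question (the pool URL and .deb file name follow standard Debian repository conventions and are assumptions on my part):

# Run the NodeSource setup script first so the repo and dependencies are in place
curl -sL https://deb.nodesource.com/setup_12.x | bash -

# Option 1: pin the full package version, if the repo index still lists it
apt-get install -y nodejs=12.22.1-1nodesource1

# Option 2: fetch the exact .deb straight from the pool and install it
curl -fsSLO https://deb.nodesource.com/node_12.x/pool/main/n/nodejs/nodejs_12.22.1-1nodesource1_amd64.deb
dpkg -i nodejs_12.22.1-1nodesource1_amd64.deb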

Related

How to download and install Nodejs from Nexus proxy repo

I am trying to download and install Node.js in a Dockerfile.
It works when I run the below commands in the Dockerfile:
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs
But as per my company policy, I need to use Nexus for any third-party component, so I need to download Node.js through Nexus. Can someone help me with how to do this?
How can I replace https://deb.nodesource.com/ with https://comoany-nexus.com/repository/, or is there any other way, like using APIs or a hosted package proxy?
Note: the Nexus version is 3.30.1-01, PRO edition.
You can set a custom registry pointing to your company's Nexus by adding a .npmrc config file in the root directory of your project.
https://docs.npmjs.com/cli/v8/configuring-npm/npmrc
file: .npmrc
; Point the default npm registry at your company's Nexus
registry=https://mycustomregistry.example.org
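That covers packages installed via npm. For the apt side of the question (replacing the deb.nodesource.com repository itself), one possibility, assuming your Nexus instance has an apt proxy repository configured in front of deb.nodesource.com (the repository name "nodesource-proxy" and the codename below are hypothetical), would be to point apt at it instead of running the setup script:

# Hypothetical Nexus apt proxy of deb.nodesource.com; adjust the
# repository name and distribution codename to your setup
echo "deb https://comoany-nexus.com/repository/nodesource-proxy/ buster main" \
    > /etc/apt/sources.list.d/nodesource.list
apt-get update && apt-get install -y nodejs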

Pulumi CLI not available after installing it within a Docker image

I am building a web application that makes use of the Pulumi automation SDK. The application is to be deployed within a Linux Docker container, so I added the following lines to the Dockerfile:
RUN curl -fsSL https://get.pulumi.com | sh
ENV PATH="/root/.pulumi/bin:${PATH}"
However, it looks like the Pulumi CLI is not available in bash after the image is built. I have validated that the Pulumi executables were indeed installed into /root/.pulumi/bin.
I suspect that the cause is Pulumi installing into the /root folder rather than into ~/, but I am not sure how to fix it.
Thanks

Install Linux package on launch when deploying Airflow with Helm in Azure Kubernetes Service

I'm trying to install Airflow on Azure Kubernetes Service (AKS) using Helm. I've found some guides for doing so, and with some difficulty I've managed to get it working. I was now trying to run some DAGs I made, and in one of those DAGs I use the bash operator to run a specific command, and that command needs a Linux package that does not come with the default image Helm uses for Airflow:
https://github.com/airflow-helm/charts/tree/main/charts/airflow
Is there a way to include extra Linux packages in the Helm Airflow chart installation? I've looked all over but couldn't find anything.
Hope someone can help me on this.
Helm is essentially a templating engine for Kubernetes manifests. Most Helm charts give you the flexibility to change the base image of the application. As long as you use a compatible image, the Helm chart will create a correct deployment for you.
In your case, if you want to extend the functionality or install more packages in the base image, you will need to create your own image and push it to an image registry.
For example, you can install your package like this, with a Dockerfile defined locally:
FROM apache/airflow:1.10.12-python3.6
RUN apt update && \
    apt install vim -y && \
    rm -rf /var/lib/apt/lists/*
Then, run the following commands to build and upload your custom image:
docker build -t my-account/my-airflow:latest .
docker push my-account/my-airflow:latest
In your values file, you can then specify your image name and tag (matching the tag pushed above):
airflow:
  image:
    repository: my-account/my-airflow
    tag: latest
Once you apply this values file, Helm will change the default image to your customized one.
The tutorial in the docs also mentions a custom image, but in the context of DAG building. With the same technique, though, you can customize your own base image.

Creating a custom NodeJS Docker image on RHEL7

I am building some base Docker images for my organization, to be used by application teams when they deploy their applications in OpenShift. One of the images I have to make is a NodeJS image (we want our images to be internal rather than sourced from DockerHub). I am building on Red Hat's RHEL7 Universal Base Image (UBI). However, I am having trouble configuring NodeJS to work in the container. Here is my Dockerfile:
FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    yum clean all
USER myuser
However, when I run the image, no node or npm commands are available unless I run scl enable rh-nodejs10 bash. That does not work in the Dockerfile, as it creates a subshell that will not be usable by a user accessing the container.
I have tried installing from source, but ran into a different issue: needing to upgrade the gcc/g++ versions, which are not available in the repos configured by my org. I also figure that if I can get NodeJS working through the package manager, it will pick up security patches whenever the package is updated.
My question is, what are the recommended steps to create an image that can be used to build applications running on NodeJS?
Possibly this is a case where the best code is code you don't write at all. Take a look at https://github.com/sclorg/s2i-nodejs-container
It is a project that creates an image with nodejs installed. It might be a perfect solution out of the box, or it could serve as a great example of what you're trying to build.
Also, their readme describes how they get around the scl enable command:
Normally, SCL requires manual operation to enable the collection you want to use. This is burdensome and can be prone to error. The OpenShift S2I approach is to set Bash environment variables that serve to automatically enable the desired collection:
BASH_ENV: enables the collection for all non-interactive Bash sessions
ENV: enables the collection for all invocations of /bin/sh
PROMPT_COMMAND: enables the collection in interactive shell
Two examples:
* If you specify BASH_ENV, then all your #!/bin/bash scripts do not need to call scl enable.
* If you specify PROMPT_COMMAND, then on execution of the podman exec ... /bin/bash command, the collection will be automatically enabled.
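Applied to the Dockerfile above, that approach might look like the sketch below. The /opt/rh/rh-nodejs10/enable path follows the usual SCL packaging convention, but verify it exists in your image before relying on it:

FROM myimage_rhel7_base:1.0
USER root
RUN INSTALL_PKGS="rh-nodejs10 rh-nodejs10-npm rh-nodejs10-nodejs-nodemon nss_wrapper" && \
    yum install -y --setopt=tsflags=nodocs $INSTALL_PKGS && \
    yum clean all
# Source the collection's enable script for every shell flavor, so node
# and npm are on PATH without an explicit scl enable call
ENV BASH_ENV=/opt/rh/rh-nodejs10/enable \
    ENV=/opt/rh/rh-nodejs10/enable \
    PROMPT_COMMAND=". /opt/rh/rh-nodejs10/enable"
USER myuser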
I decided in the end to install Node from the binaries rather than from our RPM server. Here is the implementation:
FROM myimage_rhel7_base:1.0
USER root
# Get node distribution from nexus and install it
RUN wget -P /tmp http://myrepo.example.com/repository/node/node-v10.16.3-linux-x64.tar.xz && \
    tar -C /usr/local --strip-components 1 -xf /tmp/node-v10.16.3-linux-x64.tar.xz && \
    rm /tmp/node-v10.16.3-linux-x64.tar.xz
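Since the whole point is pinning the version, a sanity check at build time may be worth adding (a hypothetical extra step, not part of the original Dockerfile):

# Fail the image build early if the expected Node release is not on PATH
RUN node --version | grep -qx "v10.16.3"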

gcloud components update fails

I've deployed VMs running Debian on GCE and have cron scripts that use gcloud commands.
I noticed that gcloud components update returns this error:
ERROR: (gcloud.components.update) The component manager is disabled for this installation
My Mac works fine for updating gcloud and adding new components.
The built-in gcloud tools that came with the VM image won't update, and I have not found out how to enable the component manager.
UPDATED
Now you can use the sudo apt-get install google-cloud-sdk command to install or update the Google Cloud SDK.
You may need to add the Cloud SDK repository on your Linux machine first; a sketch of those steps follows.
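A minimal sketch of adding the repository, following Google's documented apt setup at the time (verify the key URL and package name against the current docs):

# Register Google Cloud's apt repository and its signing key
echo "deb https://packages.cloud.google.com/apt cloud-sdk main" \
    | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

# Install (or upgrade) the SDK from the repository
sudo apt-get update && sudo apt-get install -y google-cloud-sdk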
Note: The following workaround should not be used anymore.
The component manager is enabled on the latest images, and the gcloud components update command should be working now.
In case you're still experiencing this issue, use the following command to enable the updater:
sudo sed -i -e 's/true/false/' /usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/config.json
You cannot update components using the built-in SDK tools on a Compute Engine instance. However, you can download a separate local copy of the SDK from https://cloud.google.com/sdk/ (curl https://sdk.cloud.google.com | bash) and update your PATH accordingly to use the new install, and you will have the component manager enabled.
Came here while trying to gcloud components install [x] in a Docker container based on google/cloud-sdk and getting the same error (I am probably not the only one in this situation).
Unfortunately, apt-get install google-cloud-sdk (as suggested in the most upvoted answer) didn't help.
But the ugly sed on the config file did the trick. A dirty but efficient fix (for the moment):
RUN sed -i -e 's/"disable_updater": true,/"disable_updater": false,/' /usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/config.json
Building off of Vilas's explanation above: you can't run the updater on the built-in gcloud install, but you can install a copy of gcloud outside of the package manager and run the updater on that install.
You can now run sudo apt-get install google-cloud-sdk on the Google Compute Engine default images to update the Cloud SDK.