I am building a web application that uses the Pulumi Automation SDK. The application will be deployed in a Linux Docker container, so I added the following lines to the Dockerfile:
RUN curl -fsSL https://get.pulumi.com | sh
ENV PATH="/root/.pulumi/bin:${PATH}"
However, it looks like the Pulumi CLI is not available in bash after the image is built. I have verified that the Pulumi executables were indeed installed into /root/.pulumi/bin.
I suspect the cause is that Pulumi was installed into the /root folder rather than into ~/, but I am not sure how to fix it.
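For reference, here is a trimmed-down Dockerfile that reproduces the setup (the base image is just an example, and the final RUN is a sanity check I would add to confirm whether the CLI lands on PATH during the build):
# example base image; the real one differs
FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y curl ca-certificates
RUN curl -fsSL https://get.pulumi.com | sh
ENV PATH="/root/.pulumi/bin:${PATH}"
# the build fails here if pulumi is not on PATH
RUN pulumi version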
Thanks
I am working on a project that needs PDF files created from a webpage, and I went with wkhtmltopdf. The project is a Python-based web app that runs in an Ubuntu 20 environment. An Azure pipeline deploys the project to a Linux-based Azure App Service running Python 3. The project runs fine on localhost, but deploying it to the Azure App Service has been causing issues.
After searching and some trial and error, I ended up deploying the project through the Azure pipeline on Ubuntu and then, once the project has been uploaded to the App Service, going into Azure, opening the SSH console for the App Service, and manually installing wkhtmltopdf. For some reason, the App Service runs on Debian 9, so I cannot script the install in the pipeline's .yml file: the wkhtmltopdf package the pipeline installs doesn't work on Debian.
I was wondering if there is a way to have the Debian App Service install wkhtmltopdf automatically. It can be done manually via SSH in Azure, but with a lot of builds it would be very time consuming.
Another option is changing the yml file to target Debian 9 (which appears not to be supported here), or changing the App Service OS to Ubuntu, which I could not figure out how to do after hours of searching. It appears that it is automatically Debian 9, based on here.
Here is a screenshot of the SSH on Azure
I was able to get wkhtmltopdf to install in the Debian environment by creating a script in the /home directory and then setting the Startup Command in Azure to point to this script.
I don't think Azure runs its usual startup logic when you give it a custom Startup Command, so I start the application myself at the end of the script.
Here is the script:
#!/bin/bash
# install wkhtmltopdf from the Debian 9 (stretch) release package
wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox_0.12.6-1.stretch_amd64.deb
apt-get install -y ./wkhtmltox_0.12.6-1.stretch_amd64.deb
rm wkhtmltox_0.12.6-1.stretch_amd64.deb
# Azure skips its default startup when a custom Startup Command is set, so start the app here
gunicorn --bind=0.0.0.0 --timeout 600 app:app
Note that I also had to add pip install python-dotenv above pip install -r requirements.txt in the .yml file. I'm not sure why, since dotenv is in the requirements file, but without this line in the yml I would get a dependency exception.
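For reference, the install step in the yml ended up looking roughly like this (a sketch; the displayName and step layout are illustrative):
- script: |
    pip install python-dotenv
    pip install -r requirements.txt
  displayName: 'Install Python dependencies'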
Background Information
I'm trying to ensure that no matter how many times or when I run my .gitlab-ci.yml pipeline, it will consistently download and install the EXACT same Azure Function deployment environment each time. I don't want to run the script today and get Azure CLI version 2.25, and then tomorrow, when we trigger the pipeline, have it install version 2.26.
I recently came across an article that shows how to deploy an Azure Function. It's found here: https://dev.to/alandecastros/gitlab-ci-script-to-deploy-a-azure-function-3gc4
For ease of readability, I've copied and pasted the gitlab-ci.yml code here:
stages:
  - deploy

deploy:
  stage: deploy
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - curl -sL https://aka.ms/InstallAzureCLIDeb | bash
    - apt-get install -y curl && curl -sL https://deb.nodesource.com/setup_12.x | bash -
    - apt-get install -y nodejs
    - npm install -g azure-functions-core-tools@3 --unsafe-perm true
    - az login --service-principal -u $APPLICATION_ID -p $APPLICATION_SECRET --tenant $TENANT_ID
    - func azure functionapp publish $FUNCTION_APP --csharp
  only:
    - master
QUESTIONS
From what I can tell, the first command under the script section will install the latest version of the Azure CLI. Is this correct? I reviewed the https://azurecliprod.blob.core.windows.net/$root/deb_install.sh file, and it seems it adds the necessary repositories to the Debian image and then runs
apt-get install -y azure-cli
In the case of nodejs, it seems it will always install major version 12, but the minor/patch version can change. Is this correct?
How can I change this logic to control the version numbers? One idea is to create my own Docker image using this logic once, and then just keep reusing that custom image. I've tested it and it's working.
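As a sketch, that custom image could be built once from the same commands as the yml above (this is my assumption of how it would look; the base image and versions mirror the pipeline):
# one-time "build once, reuse" image capturing the toolchain from the yml above
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
RUN curl -sL https://aka.ms/InstallAzureCLIDeb | bash && \
    curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
    apt-get install -y nodejs && \
    npm install -g azure-functions-core-tools@3 --unsafe-perm true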
But is there a way to install a very specific version of node? I tried to test like this:
# apt-get install curl && curl -sL https://deb.nodesource.com/setup_12.x | bash -
I can see it installed 12.22.1:
Unpacking nodejs (12.22.1-1nodesource1) ...
Setting up nodejs (12.22.1-1nodesource1) ...
I tried to follow up and do something like this:
# apt-get install nodejs12.22.1
and also
# apt-get install node_12.22.1
But in both cases I'm getting errors that it can't find these packages.
Thanks for reading / for the help.
Both the Azure CLI and Node.js offer a bash script for installing the tools - with the drawback that you always get the latest release (then again, for the majority of users this is probably a good thing). These scripts - as you figured out - do additional things like managing repositories and trust.
Azure CLI
Microsoft offers documentation on how to pin to a specific version: https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt#install-specific-version
In essence, you have to manually trust the signing key and add the repository, as the script would do. Afterwards you can use the regular apt-get install <package>=<version> syntax to request a specific version.
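As a sketch, assuming the signing key and repository have already been set up per the linked docs (the version string below is only an example; check what apt actually offers first):
# list the azure-cli versions the Microsoft repository currently offers
apt-cache policy azure-cli
# pin to an exact version (example version string; substitute a real one)
sudo apt-get install -y azure-cli=2.25.0-1~buster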
Node.js
In the case of Node.js, they at least offer a separate script per major release. Otherwise it seems to be a bit more involved, as is evident from https://github.com/nodesource/distributions/issues/33. I haven't tried the workarounds proposed there, as I personally am not interested in pinning Node.js to a minor release.
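For what it's worth, the apt pin syntax itself is the same as for the Azure CLI; the catch (and the point of the linked issue) is that the NodeSource repository generally only serves the most recent release of each major line, so an exact pin may stop resolving over time:
# after setup_12.x has added the NodeSource repo; version taken from the install log above
apt-get install -y nodejs=12.22.1-1nodesource1
# keep apt from silently upgrading it later
apt-mark hold nodejs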
I'm trying to install Airflow on Azure Kubernetes Service (AKS) using Helm. I found some guides for doing so and, with some difficulty, managed to get it working. I was then trying to run some DAGs I made, and in one of those DAGs I use the BashOperator to run a specific command, and that command needs a Linux package that does not come with the default image the Helm chart uses for Airflow:
https://github.com/airflow-helm/charts/tree/main/charts/airflow
Is there a way to include extra Linux packages in the Helm Airflow chart installation? I've looked all over for it but couldn't find anything.
Hope someone can help me on this.
Helm is essentially a templating tool for Kubernetes manifests, and most Helm charts give you the flexibility to change the base image of the application. As long as you use a compatible image, the chart will create a correct deployment for you.
In your case, if you want to extend the functionality or install more packages in the base image, you will need to build your own image and push it to an image registry.
For example, with a Dockerfile defined locally, you can install your package like this:
FROM apache/airflow:1.10.12-python3.6
# the official image runs as the unprivileged "airflow" user; switch to root to install packages
USER root
RUN apt-get update && \
    apt-get install -y vim && \
    rm -rf /var/lib/apt/lists/*
# switch back to the user the image expects at runtime
USER airflow
Then run the following commands to build and upload your custom image:
docker build -t my-account/my-airflow:latest .
docker push my-account/my-airflow:latest
In your values file, you can then specify your image name and tag:
airflow:
  image:
    repository: my-account/my-airflow
    tag: latest
Once you apply this values file, Helm will change the default image to your customized one.
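For example, applying it could look like this (release name and namespace are illustrative; the repo URL is from the linked chart's repository):
helm repo add airflow-stable https://airflow-helm.github.io/charts
helm upgrade --install airflow airflow-stable/airflow \
  --namespace airflow \
  --values ./values.yaml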
The tutorial in the docs also mentions a custom image, but in the context of DAG building. With the same technique, though, you can customize your own base image.
It's not exactly a normal thing to do, but it's something that works as a temporary solution.
I have laradock installed on a system, along with a Laravel app.
Everything I use from laradock is brought up with the command below:
docker-compose up -d nginx mysql php-worker workspace redis
I need the Node package tiktok-scraper (https://www.npmjs.com/package/tiktok-scraper) installed globally in my Docker setup, so I can get results by executing PHP code like the below:
exec('tiktok-scraper user username -n 3 -t json');
This needs to be available at the php-fpm and php-worker level, as I need it in jobs and for endpoints that invoke the scrape.
I know that I'm doing it wrong, but I have tried installing it within the workspace container using
docker-compose exec workspace bash
npm i -g tiktok-scraper
and after this it's available in my workspace (I can run, for instance, tiktok-scraper --help and it shows me the different options).
But this doesn't solve the issue, as I get nothing from exec('tiktok-scraper user username -n 3 -t json'); in my Laravel app.
I'm not so familiar with Docker, and I'm not sure in which Dockerfile I should put something like
RUN npm i -g tiktok-scraper
Any help will be appreciated
Thanks
To execute the npm package from inside your php-worker, you would need to install it in the php-worker container. Installing it in the workspace has no effect, because exec() runs in the container that executes your PHP code (php-fpm or php-worker), not in the workspace container.
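As a sketch, the addition to laradock's php-worker Dockerfile could look like the following (this assumes an Alpine-based image, which laradock uses for php-worker; adjust the package manager calls if your base differs):
# hypothetical addition to php-worker/Dockerfile; Alpine package names assumed
RUN apk add --no-cache nodejs npm && \
    npm i -g tiktok-scraper
After editing the Dockerfile, rebuild the container with docker-compose build php-worker so the change takes effect.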
I've deployed to VM's running Debian on GCE and have cron scripts that use gcloud commands.
I noticed that gcloud components update returns this error:
ERROR: (gcloud.components.update) The component manager is disabled for this installation
My Mac works fine for updating gcloud and adding new components.
The built-in gcloud tools that were in the VM image won't update, and I have not found out how to enable the component manager.
UPDATED
Now you can use the sudo apt-get install google-cloud-sdk command to install or update the Google Cloud SDK.
You may need to add the Cloud SDK repository on your Linux machine first. These are the instructions.
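For reference, adding the repository looks roughly like this (a sketch of the documented procedure; follow Google's current instructions for the authoritative steps):
# add the Cloud SDK repo and its signing key, then install/update the SDK
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" \
  | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
sudo apt-get update && sudo apt-get install google-cloud-sdk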
Note: The following workaround should not be used anymore.
The component manager is enabled on latest images and gcloud components update command should be working now.
In case you're still experiencing this issue, use the following command to enable the updater:
sudo sed -i -e 's/true/false/' /usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/config.json
You cannot update components using the built-in SDK tools on a Compute Engine instance. However, you can download another local copy of the SDK from https://cloud.google.com/sdk/ (curl https://sdk.cloud.google.com | bash), update your PATH accordingly to use the new SDK install, and you will have the component manager enabled.
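In practice that looks like this (the exec line just reloads the shell so the installer's PATH changes take effect):
# install a user-local copy of the SDK, outside the package manager
curl https://sdk.cloud.google.com | bash
# reload the shell so ~/google-cloud-sdk/bin is on PATH
exec -l $SHELL
# the component manager works in this install
gcloud components update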
Came here while trying to run gcloud components install [x] in a Docker container from google/cloud-sdk and getting the same error (I am probably not the only one in this situation).
Unfortunately, apt-get install google-cloud-sdk (as suggested on the most upvoted answer) didn't help.
But the ugly sed on the config file did the trick. A dirty but efficient fix (for the moment):
RUN sed -i -e 's/"disable_updater": true,/"disable_updater": false,/' /usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/config.json
Building off of Vilas's explanation above: you can't run the updater for the built-in gcloud install. However, you can install a copy of gcloud outside of the package manager and run the updater on that gcloud install.
You can now run sudo apt-get install google-cloud-sdk on the Google Compute Engine default images to update the Cloud SDK.