We have a requirement to run the image scan enforcer, packaged as an MSI installer, on AKS Windows nodes. We want to install the package via a DaemonSet so that the image scan enforcer is applied to every Windows node. I have tried building an image that installs the MSI package as a DaemonSet, but the package is not installed on the host. Is there any other option to install it on the Windows nodes?
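An MSI run inside an ordinary Windows container cannot reach the host, because the container filesystem is isolated from the node. One option, assuming your AKS Windows nodes run Windows Server 2022 on containerd (which supports HostProcess containers), is a HostProcess DaemonSet that runs msiexec directly on the node. This is only a sketch; the image name and MSI filename below are placeholders:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-scan-enforcer
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: image-scan-enforcer
  template:
    metadata:
      labels:
        app: image-scan-enforcer
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      securityContext:
        windowsOptions:
          hostProcess: true               # run directly on the node
          runAsUserName: "NT AUTHORITY\\SYSTEM"
      hostNetwork: true                   # required for HostProcess pods
      containers:
      - name: installer
        image: "yourregistry.azurecr.io/enforcer-installer:latest"  # placeholder image containing the MSI
        command:
        - powershell.exe
        - -Command
        # The image contents are mounted on the host under CONTAINER_SANDBOX_MOUNT_FOLDER;
        # install silently, then sleep so the DaemonSet pod is not restarted in a loop.
        - >-
          Start-Process msiexec.exe -ArgumentList '/i',
          "$env:CONTAINER_SANDBOX_MOUNT_FOLDER\scanner.msi", '/qn' -Wait;
          Start-Sleep -Seconds 2147483
```

If your node pools do not support HostProcess containers, the usual alternative is to install the MSI outside Kubernetes, e.g. via a VM extension or a custom node image.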
When creating a Docker image, I currently run pip install -r requirements.txt.
Instead of pip install, can I just copy all the modules already installed in my project's venv on the local host into the Docker image? Is it equivalent, or is there a difference? I am assuming here that the local host is the same as the Docker container in terms of image and configuration.
It is not recommended to copy the installed modules from your host machine into the container. The code might not work if your host OS is different from the container's base OS, because packages with compiled extensions are platform-specific. Moreover, you may be copying unwanted cache files, which will increase the Docker image size.
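The recommended pattern is to install from requirements.txt inside the image itself, so pip resolves wheels for the container's OS. A minimal sketch (the base image tag and entrypoint are assumptions for illustration):

```dockerfile
FROM python:3.9-slim

WORKDIR /app

# Copy only the requirements first so this layer is cached
# until requirements.txt actually changes.
COPY requirements.txt .
# --no-cache-dir avoids shipping pip's download cache in the image
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

CMD ["python", "app.py"]
```

This also keeps the image reproducible: anyone building from the same requirements.txt gets the same environment, independent of what happens to be in your local venv.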
I have started learning django and Python.
I have developed one service using Visual Studio Code on my Windows machine.
It is working as expected on Windows machine.
Now I want to deploy the same on an Ubuntu server. (But it is failing, as there is no 'bin' folder inside the virtual environment.)
How can I do it? I know I am missing something basic here.
Could you please help, or point me to where I can read about it?
Let's suppose you are deploying your application at path /opt/.
Follow the steps below to deploy.
To get started, we need to install some packages; run the following commands:
sudo apt-get update
sudo apt-get install python3-pip python3-dev libmysqlclient-dev ufw virtualenv
The above commands install the basic Python dev packages on the Ubuntu server.
Create a virtualenv at a path of your choice (for example /opt/.env) with the following command:
virtualenv .env
Activate the environment: source .env/bin/activate
Install all requirement packages in virtual-env that you require to run your Django application.
Test your service manually by running python manage.py runserver (this shows that all dependencies are installed).
After that, you can install Gunicorn as the WSGI server and Supervisor as a process-monitoring tool for your service. Please refer to: http://rahmonov.me/posts/run-a-django-app-with-nginx-gunicorn-and-supervisor/
Nginx can run on the server as the web server, routing requests to your application's port/socket file.
These are the high-level steps to deploy a Django application.
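As a sketch of the Gunicorn/Supervisor step, a minimal Supervisor program entry might look like this (the file path, project name myproject, port, and user are all placeholders to adapt to your setup):

```ini
; /etc/supervisor/conf.d/myproject.conf  (hypothetical path and names)
[program:myproject]
; Run Gunicorn from the virtualenv created above
command=/opt/.env/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000 --workers 3
directory=/opt/myproject
user=www-data
autostart=true
autorestart=true
stderr_logfile=/var/log/myproject.err.log
stdout_logfile=/var/log/myproject.out.log
```

Nginx would then proxy incoming requests to 127.0.0.1:8000 (or to a Unix socket, if you bind Gunicorn to one instead).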
When I try to publish my Python Azure Function to Azure, I get the following error:
pip version 10.0.1, however version 19.2.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
ERROR: cannot install cryptography-2.7 dependency: binary dependencies without wheels are not supported. Use the --build-native-deps option to automatically build and configure the dependencies using a Docker container. More information at https://aka.ms/func-python-publish
So I installed Docker and tried to publish my function using the following command:
func azure functionapp publish timertriggerforstreaming --build-native-deps
I didn't do anything else; I just installed Docker.
When I tried to publish while Docker was using Windows containers (my machine is Windows 10), I got the following error:
Error running docker pull mcr.microsoft.com/azure-functions/python:2.0.12493-python3.6-buildenv.
output: 2.0.12493-python3.6-buildenv: Pulling from azure-functions/python
image operating system "Linux" cannot be used on this platform
Then I switched Docker from Windows to Linux containers and tried again. Now it takes a long time, and I have been seeing the following output for a long time:
(see the attached image)
Is the way I'm doing it right or wrong? Or do I need to Dockerize the Azure Function on my own and publish it that way?
Follow this doc for deployment: https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image
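For the error itself, the usual fixes are to switch Docker Desktop to Linux containers before retrying, or to skip local Docker entirely with a remote build (availability of the remote-build flag depends on your Azure Functions Core Tools version). A sketch, using the function app name from the question:

```shell
# Option 1: switch Docker Desktop to Linux containers
# (system-tray menu -> "Switch to Linux containers..."), then retry:
func azure functionapp publish timertriggerforstreaming --build-native-deps

# Option 2 (newer Core Tools): let Azure build the native wheels
# remotely, so no local Docker is needed:
func azure functionapp publish timertriggerforstreaming --build remote
```

The long-running output after switching to Linux containers is often just the first pull of the multi-GB build image; subsequent publishes should be faster once it is cached.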
I have provisioned an Azure HDInsight cluster type ML Services (R Server), operating system Linux, version ML Services 9.3 on Spark 2.2 with Java 8 HDI 3.6.
I am able to log in to RStudio on the head node via SSH, and I ran the script
from this tutorial - https://blogs.msdn.microsoft.com/azuredatalake/2017/06/26/run-h2o-ai-in-r-on-azure-hdinsight/
located here:
https://bostoncaqs.blob.core.windows.net/scriptaction/install-h2opackages.sh
to install the H2O-related packages onto the head and worker nodes.
library(sparklyr) and library(dplyr) load fine; however, RStudio does not find the h2o package. When I try to install h2o, it fails because RCurl is not installed. When I then try to install RCurl, I get the error "Error : package 'bitops' required by 'RCurl' could not be found". bitops installs successfully, but RCurl does not seem to find the bitops package within the default install directory (a temp folder) on the HDInsight head node VM's hard drive.
My question is: how do I get RStudio Server to recognize where packages are installed on my HDInsight head node? I am using the default install directory when installing each package, but subsequent packages do not recognize that their dependencies are installed.
Thanks!
I did not realize that I was not installing the packages on the edge node. Once I installed the packages on all of the nodes, I had no more problems with the packages.
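For reference, a script-action-style sketch that installs the dependency chain in one call on a node (the CRAN mirror and the idea of listing the packages in dependency order are assumptions, not taken from the linked script) might look like:

```shell
#!/usr/bin/env bash
# Run this on head, worker, AND edge nodes (e.g. as an HDInsight
# script action applied to all node types), so every node has the
# same library.
sudo Rscript -e 'install.packages(c("bitops", "RCurl", "h2o"), repos = "https://cran.r-project.org")'
```

Installing into the system-wide library with sudo (rather than a per-user temp library) is what lets RCurl find bitops, and h2o find RCurl, across sessions.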
I'm trying to run Kubernetes on a local Centos server and have had some issues (for example, with DNS). A version check shows that I'm running Kubernetes 1.2 Alpha 1. Since the full release is now available from the Releases Download page, I'd like to upgrade and see if that resolves my issue. The documentation for installing a prebuilt binary release states:
Download the latest release and unpack this tar file on Linux or OS X, cd to the created kubernetes/ directory, and then follow the getting started guide for your cloud.
However, the Getting Started Guide for Centos says nothing about using a prebuilt binary. Instead, it tells you to set up a yum repo and run a yum install command:
yum -y install --enablerepo=virt7-docker-common-release kubernetes
This command downloads and installs the Alpha 1 release. In addition, it attempts to install Docker 1.8 (two releases behind the current 1.10), which fails if Docker is already installed.
How can I install from a prebuilt binary and use an existing Docker?
According to the Table of Solutions for installing Kubernetes, the maintainer of the CentOS getting-started guide is #coolsvap. You should reach out to him about getting the prebuilt binary updated to the official release.
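In the meantime, if you want to try the prebuilt binaries on top of your existing Docker, a rough sketch is below. The release URL, nested-tarball layout, and /usr/bin target path are assumptions for a 1.2-era release; adjust them to the tarball you actually download from the releases page:

```shell
# Download and unpack the release tarball
curl -LO https://github.com/kubernetes/kubernetes/releases/download/v1.2.0/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes

# Server binaries ship as a nested tarball inside the release
tar -xzf server/kubernetes-server-linux-amd64.tar.gz

# Copy the binaries over the yum-installed ones, then restart the services
sudo cp server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /usr/bin/
sudo systemctl restart kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy
```

Because this only replaces the binaries, it leaves the yum-managed Docker and service unit files untouched; note that a later yum update of the kubernetes package could overwrite them again.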