I have pushed a container image to a container registry and am able to deploy it to Kubernetes. What's the best way to run this container to check that the deployment is working correctly?
I have gone through the documentation and seen that I can set up an endpoint, but I am unable to figure out how to call the container once I have set up a POST request to the endpoint. Note that the container hosts a Python script that runs an ML model and returns a prediction. So I would like a way to make API calls to the cluster that run the container, and a call to print the container's results.
Or, instead of setting up an endpoint, are there better ways to accomplish this?
• Setting up an endpoint on the Kubernetes pod to access the container and execute the Python script inside it is a good approach.
• As described in the Microsoft documentation, there are three options for deploying API Management in front of AKS; the diagram in the document below illustrates them.
https://learn.microsoft.com/en-us/azure/api-management/api-management-kubernetes#kubernetes-services-and-apis
• Once the API is configured with the Kubernetes cluster, you can deploy a model to the Azure Kubernetes Service cluster. For that, you need a deployment configuration that describes the compute resources needed (for example, the number of cores and the amount of memory), and an inference configuration that describes the environment needed to host the model and web service. For more information on creating the inference configuration, see how and where to deploy models.
For more information on how to deploy and reference a Python ML model, refer to the document below:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=azcli
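If all you need is to test the deployment, a minimal sketch of that "endpoint on the pod" approach is to expose the deployment as a Kubernetes Service and POST to it. The names, port, and /score route below are hypothetical; adjust them to whatever your Python script actually serves:

# Expose the existing deployment through a public LoadBalancer service (hypothetical names).
kubectl expose deployment ml-model --name=ml-model-svc --type=LoadBalancer --port=80 --target-port=5000

# Wait for an external IP to be assigned to the service.
kubectl get service ml-model-svc --watch

# Send a prediction request and print the result (the payload shape depends on your script).
curl -X POST http://<EXTERNAL-IP>/score -H "Content-Type: application/json" -d '{"data": [[1.0, 2.0, 3.0]]}'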
Related
I am facing the following problem: I need to run on-demand, long-running workers on Azure VMs. These workers are wrapped in a Docker image.
So I looked at what Azure is offering and I seem to have the following two options:
Use a VM with docker-compose. This means I need to be able to programmatically start a VM, run the Docker image on it, and then shut down the VM (the specs we use are quite expensive and we can't let it run indefinitely). However, this means writing the orchestration logic ourselves. Is there a service we could use to make life easier?
Set up a k8s cluster. However, I am not sure how pricing works here. Would I be able to add the VM type we use to the cluster, and then use the k8s API to start containers on demand? How would I be billed in this case?
If the only thing you need is workers, you have a few more options. Which service fits best depends on your requirements. Based on what's in your question, I think one of the following two might fit best:
Azure Container Instances
Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.
Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs.
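For an on-demand worker, an ACI container group can be created per job, billed per second while it runs, and deleted when it finishes. A rough sketch with the Azure CLI; the resource group, image, and sizes are hypothetical:

# Start a container group that runs the worker once and never restarts.
az container create --resource-group my-rg --name worker-job-001 --image myregistry.azurecr.io/worker:latest --cpu 4 --memory 16 --restart-policy Never --environment-variables JOB_ID=001

# Follow the worker's output.
az container logs --resource-group my-rg --name worker-job-001

# Remove the container group once the job has finished so it stops incurring cost.
az container delete --resource-group my-rg --name worker-job-001 --yes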
Azure Container Apps (preview)
Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:
Deploying API endpoints
Hosting background processing applications
Handling event-driven processing
Running microservices
According to Azure's Container services page, here are your options:
IF YOU WANT TO → USE THIS
Deploy and scale containers on managed Kubernetes → Azure Kubernetes Service (AKS)
Deploy and scale containers on managed Red Hat OpenShift → Azure Red Hat OpenShift
Build and deploy modern apps and microservices using serverless containers → Azure Container Apps
Execute event-driven, serverless code with an end-to-end development experience → Azure Functions
Run containerized web apps on Windows and Linux → Web App for Containers
Launch containers with hypervisor isolation → Azure Container Instances
Deploy and operate always-on, scalable, distributed apps → Azure Service Fabric
Build, store, secure, and replicate container images and artifacts → Azure Container Registry
EDIT:
Based on the comment
Let's say the only requirement is that I am able to use the resources on-demand, so I only end up spending the amount of money that would take for a certain job to finish execution. What would you use?
the answer would most probably be Container Apps, if the code you have is not easily migrated to an Azure Function. The most important reason: they are serverless, which means they scale to zero and you only pay for actual consumption. Beyond that, you have to write little to no orchestration logic, since Container Apps can scale based on event sources.
Enables event-driven application architectures by supporting scale based on traffic and pulling from event sources like queues, including scale to zero.
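As an illustration of the scale-to-zero behavior, a Container App can be created with a minimum of zero replicas via the containerapp CLI extension. This is only a sketch with hypothetical names, and the exact flags may differ between extension versions:

# Create a Container App that scales between 0 and 5 replicas.
az containerapp create --name worker-app --resource-group my-rg --environment my-containerapp-env --image myregistry.azurecr.io/worker:latest --min-replicas 0 --max-replicas 5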
Another great resource is Comparing Container Apps with other Azure container options.
MarkLogic publishes its DB to Docker Hub.
I would like to explore how to run the MarkLogic Docker Hub image on ACI.
I tried to follow the link below to do it.
(It works with the sample Microsoft aci-helloworld image deployment, so I assume it should also work for MarkLogic.)
However, I got the error message below.
{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InaccessibleImage","message":"The image 'store/marklogicdb/marklogic-server:10.0-8.1-centos-1.0.0-ea2' in container group 'ml-container' is not accessible. Please check the image and registry credential."}]}
The issue is probably with the image type. It's free, but since you're required to subscribe, it's not a public image. Try the private image type; you'll likely need to authenticate against Docker Hub. We have a detailed example at https://github.com/marklogic/marklogic-docker, but I'm not sure how you set up private image access on Azure.
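On the Azure side, az container create accepts registry credentials, so authenticating against Docker Hub for a subscription-gated image would look roughly like this. The resource group, ports, and credentials are placeholders:

# Pull the gated image from Docker Hub using explicit registry credentials.
az container create --resource-group my-rg --name ml-container --image store/marklogicdb/marklogic-server:10.0-8.1-centos-1.0.0-ea2 --registry-login-server index.docker.io --registry-username <dockerhub-user> --registry-password <dockerhub-password-or-token> --ports 8000 8001 8002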
I think Azure Container Instances for Linux is based on Ubuntu, and there are some known limitations. That may be the reason there are still compatibility issues when deploying the MarkLogic Docker image to run in ACI.
It is pretty straightforward and cost-effective to follow the guide below to run MarkLogic in Azure.
https://www.marklogic.com/resources/deploy-on-azure/
MarkLogic regularly updates its Azure images in the marketplace.
https://azuremarketplace.microsoft.com/en-in/marketplace/apps/marklogic.marklogic-developer-10?tab=Overview
As a result, there is no significant benefit to getting MarkLogic running in ACI.
Deploying a multi-container application to Azure Kubernetes Service without using Azure DevOps
We have a use case with a Java application (Spring) and an Oracle database as two containers.
We want to try the same in AKS (not using Azure DevOps).
The app (8080) and DB (1521) run on different ports.
Let me know if you have implemented a similar use case.
The point of discussion here might be whether you want to use a CI/CD tool other than Azure DevOps or not.
If yes, you'll need to set up a pipeline, write some Kubernetes templates, build the code, push the image, and then deploy.
You can always refer to the Kubernetes official docs for more in-depth knowledge of multi-container pods, and to the Jenkins official docs for understanding the CI/CD process.
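For a purely manual deployment (no CI/CD tool at all), one common pattern is two Deployments, with the app reaching the database through a Service name. A rough sketch using only kubectl; the image paths and environment variable names are hypothetical:

# Database: deployment plus an internal service listening on 1521.
kubectl create deployment oracle-db --image=myregistry.azurecr.io/oracle-db:latest
kubectl expose deployment oracle-db --name=oracle-db --port=1521

# Application: deployment, DB connection via the service DNS name, public service forwarding to 8080.
kubectl create deployment spring-app --image=myregistry.azurecr.io/spring-app:latest
kubectl set env deployment/spring-app DB_HOST=oracle-db DB_PORT=1521
kubectl expose deployment spring-app --name=spring-app --type=LoadBalancer --port=80 --target-port=8080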
I'm trying to get some Docker images running on Azure. To be concrete, I have a Redis service, a MongoDB server (not Cosmos DB) from Bitnami, and the coralproject Talk app. In order to start the Docker containers locally, I have to set some environment variables like
docker run -e key1=value1 -e key2=value2 -p 80:3000 ...
Now I am trying to get the app running in Azure. Searching for how to start a Docker container in Azure, I found several options:
Container Instances
App Services
Virtual Machine
Managed Kubernetes (Preview state)
Container Service (being deprecated; will be replaced by Managed Kubernetes in the future)
Running a VM for one Docker instance doesn't sound economical. Managed Kubernetes or Container Service is maybe a bit too much for now, and I cannot select any version even with Managed Kubernetes; I guess this is related to its current preview state. I also tried App Services, but without success, e.g. there are no proper settings for environment variables. I saw that in App Services you can set a Start File, but Microsoft doesn't explain what that should be. So I tried the first option, Container Instances.
Unfortunately, I cannot find a way to pass multiple environment variables when starting the container. In the setup wizard you can set one environment variable, plus another two if you like:
First, it is limited to three environment variables, and I need more. Second, the value needs to be alphanumeric, so setting a domain is not possible.
Does anyone here have experience running Docker instances on Azure? What is the best setup for you?
Thanks in advance.
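For reference, the three-variable limit applies only to the portal wizard; the Azure CLI accepts as many environment variables as needed when creating a container instance. A sketch mirroring the docker run example above, with placeholder names, images, and values:

# Create the container instance with several environment variables in one call.
az container create --resource-group my-rg --name talk-app --image coralproject/talk:latest --ports 80 --environment-variables key1=value1 key2=value2 MONGODB_URL=mongodb://my-mongo:27017/talk REDIS_URL=redis://my-redis:6379 --secure-environment-variables SESSION_SECRET=changeme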
I would like to provide some pre-defined scripts that need to be run on Azure Container Service Swarm cluster VMs. Is there a way to provide these scripts to ACS so that, when the cluster is deployed, they are automatically executed on all the VMs of that ACS cluster?
This is not a supported feature of ACS, though you can run arbitrary scripts on the VMs once they are created. Another option is to use ACS Engine, which allows much more control over the configuration of the cluster, as it outputs templates that you can customize. See http://github.com/Azure/acs-engine.
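As an illustration of the "run arbitrary scripts on the VMs once created" route, you can invoke a script on each cluster VM with the Azure CLI after deployment. This is only a sketch; the resource group, VM names, and script URL are hypothetical:

# Run a bootstrap script on every VM in the cluster's resource group.
for vm in $(az vm list --resource-group my-acs-rg --query "[].name" --output tsv); do
  az vm run-command invoke --resource-group my-acs-rg --name "$vm" --command-id RunShellScript --scripts "curl -sSL https://example.com/bootstrap.sh | bash"
done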