Question
How do I specify which virtual network (vnet) or subnet an Azure Container Instance (ACI) runs on with the Azure Container Agents Plugin for Jenkins?
Assumptions
To transfer large amounts of data quickly between two machines in Azure, those machines ought to be in the same vnet.
It is possible to run ACIs within a subnet of a vnet to get this fast communication.
Background
I'm running an Azure VM with Jenkins on it. The same VM also has Nexus installed for proxying/caching third-party dependencies. I run my builds in Docker containers that are dynamically created as needed and destroyed afterwards for cost savings. This ACI creation/destruction introduces a problem: the local .m2 cache does not survive from one build to the next. Nexus is being used to fix this by providing fast access to third-party dependencies.
But for Nexus to really solve this problem, it seems the ACIs need to be in the same vnet as Nexus. I'd also like the advantage of not opening ports to the world: data can be passed around within the vnet without exposing any ports from that vnet to the internet.
My problem is that I seem to have no control over which vnet or subnet the ACIs run on with the plugin I'm using (Azure Container Agents Plugin).
I've found instructions on how to specify the subnet on an ACI in general (link), but that doesn't help me, as I need a solution that works with the Jenkins plugin I'm using.
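For reference, the "in general" approach looks roughly like the sketch below, driving the Azure CLI from Python. The resource group, vnet, subnet, and image names are all made-up placeholders, and this is plain ACI, not something the plugin exposes:

```python
import subprocess

# Sketch only: "build-rg", "build-vnet", "aci-subnet", and the image are
# hypothetical. `az container create` accepts --vnet/--subnet to place the
# container group inside an existing subnet (delegated to ACI).
subprocess.run(
    [
        "az", "container", "create",
        "--resource-group", "build-rg",
        "--name", "jenkins-build-agent",
        "--image", "jenkins/inbound-agent",
        "--vnet", "build-vnet",
        "--subnet", "aci-subnet",
    ],
    check=True,
)
```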
But perhaps this plugin will not work for my purposes and I need to abandon it for another approach. If so, suggestions?
AFAIK, the Azure Container Agents Plugin for Jenkins currently doesn't support specifying which virtual network (vnet) an ACI runs on.
I think you should raise an issue here to see if you get a better response.
And yes, you may need to abandon this approach of using ACI Jenkins agents; as a workaround (for now) you could go with VMs as Jenkins agents,
or run the Jenkins jobs on the Jenkins master itself so that the local .m2 cache survives from one build to the next.
Related References:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-jenkins
Hope this helps!!
Related
I am running an Ubuntu self-hosted build agent for Azure DevOps in Container Instances, and the container outputs only "Determining matching Azure Pipelines agent." and that's it.
It has a PAT with full access to the whole organization, the given agent pool really exists, and the URL is correct as well. The only thing that comes to my mind is that I see our URL as https://XXXX.visualstudio.com/ but I gave the agent the URL https://dev.azure.com/XXX, which still seems to work when used in the browser.
How can I solve this, please?
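For context, a self-hosted agent container of this kind is normally parameterized through the AZP_* environment variables. A minimal sketch of the docker run invocation, driven from Python; the organization URL, pool name, and image are placeholders:

```python
import os
import subprocess

# Sketch: "myorg", "Default", and the image are placeholders.
# AZP_URL / AZP_TOKEN / AZP_POOL are the variables Microsoft's Dockerized
# Azure Pipelines agent reads. The PAT is taken from the environment so it
# never appears in the script itself.
subprocess.run(
    [
        "docker", "run",
        "-e", "AZP_URL=https://dev.azure.com/myorg",
        "-e", f"AZP_TOKEN={os.environ['AZP_TOKEN']}",
        "-e", "AZP_POOL=Default",
        "myregistry.azurecr.io/azp-agent:ubuntu-20.04",
    ],
    check=True,
)
```

Both URL forms (https://dev.azure.com/XXX and https://XXXX.visualstudio.com/) normally point to the same organization, so the hang usually has another cause, as the answers below suggest.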
I suppose that your issue is caused by the agent upgrade to support .NET 6 (.NET Core 3.1 will be out of support in December). You could try upgrading the container image to Ubuntu 20.04 or higher.
You could also refer to issue #3834 for more information.
The problem was that the agent was put into a subnet which had no NSG, and therefore all inbound/outbound traffic was denied. So we added an NSG to this subnet with an outbound rule for TCP port 443, and it works now.
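For anyone scripting the same fix, a minimal sketch of that outbound 443 rule with the Azure CLI from Python; the resource group, NSG name, rule name, and priority are placeholders:

```python
import subprocess

# Sketch: "agents-rg", "agents-nsg", and the priority are placeholders.
# Allows the agent subnet to reach Azure DevOps over HTTPS (TCP 443).
subprocess.run(
    [
        "az", "network", "nsg", "rule", "create",
        "--resource-group", "agents-rg",
        "--nsg-name", "agents-nsg",
        "--name", "AllowHttpsOutbound",
        "--priority", "100",
        "--direction", "Outbound",
        "--access", "Allow",
        "--protocol", "Tcp",
        "--destination-port-ranges", "443",
    ],
    check=True,
)
```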
I'm using Azure DevOps to build and pack my Docker application.
The goal is to execute docker-compose commands on an Azure VM, which is behind a firewall and can be accessed only through a VPN connection or standard web-browsing ports.
You can use deployment groups to achieve that. The reason this works is that the communication is one-way (from the agent to Azure DevOps), so you don't really need to open ports for the VM; the VM only has to be able to reach the Azure DevOps endpoints (more on this).
TL;DR: using agents will work, because it's an outgoing connection from the agent, not from Azure DevOps to the agent.
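As a rough sketch of what the registration looks like on the VM (the Azure DevOps portal generates the exact script for you; the organization, project, and group names below are placeholders):

```python
import os
import subprocess

# Sketch of the deployment-group registration command the portal generates;
# "myorg", "MyProject", and "my-deployment-group" are placeholders, and the
# PAT comes from the environment. The agent only makes outbound HTTPS calls
# to Azure DevOps, so no inbound port needs to be opened on the VM.
subprocess.run(
    [
        "./config.sh",
        "--deploymentgroup",
        "--deploymentgroupname", "my-deployment-group",
        "--url", "https://dev.azure.com/myorg",
        "--projectname", "MyProject",
        "--auth", "pat",
        "--token", os.environ["AZP_TOKEN"],
    ],
    check=True,
)
```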
If I create an Azure VM and set up/install everything required (Node.js, npm modules, IIS, MongoDB), and based on my usage I shut the VM down, will I get the entire machine back with all installations when I start the VM again?
Yes, you will. VMs are stateful. The only thing that might change (depending on the setup) is the internal/external IP addresses.
Azure even provides auto shutdown feature, to save you some clicking.
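If you prefer scripting it, a minimal sketch of both operations with the Azure CLI from Python; the resource group, VM name, and shutdown time are placeholders:

```python
import subprocess

# Sketch: "dev-rg" / "dev-vm" are placeholders. Deallocating stops compute
# billing while keeping the disks (and everything installed on them).
subprocess.run(["az", "vm", "deallocate", "-g", "dev-rg", "-n", "dev-vm"], check=True)

# Schedule the built-in auto-shutdown for 22:00 (the time format is HHMM).
subprocess.run(
    ["az", "vm", "auto-shutdown", "-g", "dev-rg", "-n", "dev-vm", "--time", "2200"],
    check=True,
)
```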
I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using Mesos-DNS to get hold of the MySQL container host (for now I don't really care which container I get hold of). I set the WORDPRESS_DB_HOST environment variable to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
I created a new rule for the agent load balancer and a probe for port 3306 in Azure itself. This worked, but seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what difference is there in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See screenshot attached.)
Update: If I SSH into the master node, then I can dig mysql.marathon.mesos; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod) you can either use Marathon's persistent volumes feature (simple but no automatic failover/HA for the data) or, since you are on Azure, a robust solution like I showed here (essentially mounting a file share).
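To make Q1 concrete, here is a minimal sketch of the persistent-volumes shape in a Marathon app definition, written as the payload you would POST to Marathon's /v2/apps endpoint. The app id, sizes, and password are placeholders, and the exact fields vary by Marathon version:

```python
# Sketch of Marathon's local persistent volumes: the data survives task
# restarts but stays pinned to one agent, hence no automatic failover/HA.
mysql_app = {
    "id": "/mysql",
    "cpus": 1,
    "mem": 1024,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "mysql:5.7", "network": "BRIDGE"},
        "volumes": [
            {
                "containerPath": "mysqldata",  # relative path => persistent volume
                "mode": "RW",
                "persistent": {"size": 2048},  # MiB, reserved on the agent
            }
        ],
    },
    "residency": {"taskLostBehavior": "WAIT_FOREVER"},
    "env": {"MYSQL_ROOT_PASSWORD": "change-me"},  # placeholder
}
```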
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.
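Putting Q2 together with the Mesos-DNS name from the question, a minimal sketch of a BRIDGE-mode app definition POSTed to Marathon; the Marathon URL and image tag are placeholders:

```python
import requests

# Sketch: in BRIDGE mode a fixed containerPort is mapped to a host port;
# hostPort 0 asks Marathon to allocate one. WORDPRESS_DB_HOST uses the
# Mesos-DNS name from the question. "marathon.example" is a placeholder.
wordpress_app = {
    "id": "/wordpress",
    "instances": 1,
    "cpus": 0.5,
    "mem": 512,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "wordpress:4.4",
            "network": "BRIDGE",
            "portMappings": [
                {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
            ],
        },
    },
    "env": {"WORDPRESS_DB_HOST": "mysql.marathon.mesos:3306"},
}

resp = requests.post("http://marathon.example:8080/v2/apps", json=wordpress_app)
resp.raise_for_status()
```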
I've been evaluating Azure for a couple months. I'm using it via my MSDN subscription. The intention is to determine if my development team should migrate from VMWare to Azure machines.
I managed to set up multiple VMs and work on them successfully. I tend to shut down all VMs as often as I can in order not to use up my monthly resource allowance.
Very often I lose RDP connectivity to all my VMs. Sometimes resizing the VM helps, but not always. I've tried all the steps included in the link below, for instance.
What am I missing?
https://azure.microsoft.com/pl-pl/documentation/articles/virtual-machines-troubleshoot-remote-desktop-connections/
Thx guys (and sorry). It was indeed due to network issues (DNS fails on my home internet provider from time to time).
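For anyone hitting the same symptom, a quick way to rule the local resolver in or out before digging into Azure-side RDP troubleshooting; the VM's DNS name below is a placeholder:

```python
import socket

# Sketch: if this raises socket.gaierror while the Azure portal shows the
# VM running, the problem is local DNS, not the VM or its RDP endpoint.
print(socket.gethostbyname("myvm.westeurope.cloudapp.azure.com"))
```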