I'm using Azure DevOps to build and package my Docker application.
The goal is to execute docker-compose commands on an Azure VM that is behind a firewall and can be accessed only through a VPN connection or standard web-browsing ports.
You can use deployment groups to achieve that. The reason this works is that the communication is one-way (from the agent to Azure DevOps), so you don't really need to open ports for the VM; the VM only has to be able to reach the Azure DevOps endpoints (more on this).
TL;DR: Using agents will work, because it's an outgoing connection from the agent, not from Azure DevOps to the agent.
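For reference, registering a deployment group agent is done from the VM itself, and the registration script (generated for you under Pipelines > Deployment groups in the portal) boils down to something like this. A minimal sketch, assuming a Windows VM, an organization `myorg`, a project `myproject`, and a deployment group `my-vms`; the agent version/URL is also an assumption, take the current one from the portal:

    # Download and extract the agent on the VM.
    mkdir C:\azagent; cd C:\azagent
    Invoke-WebRequest -Uri "https://vstsagentpackage.azureedge.net/agent/3.236.1/vsts-agent-win-x64-3.236.1.zip" -OutFile agent.zip
    Expand-Archive agent.zip -DestinationPath .

    # Register against the deployment group. All traffic is outbound HTTPS
    # from the VM to dev.azure.com, so no inbound ports need to be opened.
    .\config.cmd --deploymentgroup `
        --deploymentgroupname "my-vms" `
        --url "https://dev.azure.com/myorg" `
        --projectname "myproject" `
        --auth PAT --token "<personal-access-token>" `
        --agent $env:COMPUTERNAME --runasservice --unattended

Once the agent is registered, a deployment group job in your release pipeline can run the docker-compose commands locally on that VM.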
We have an on-premises data centre that is connected to Azure via VPN.
There are some on-premises Jenkins jobs that need to run when code is pushed in the Azure repository, and there is an on-premises Nexus server to store artifacts from other Azure pipelines. The rest can and should run in Azure.
I know it is possible to use a self-hosted agent placed in the Azure virtual network, which could then connect to on-premises resources, but we do not want to manage/pay for a self-hosted agent.
My question is, is there something like a virtual network integration for Azure DevOps? The idea is to let DevOps connect to on-premises resources via the Azure VNet and the VPN without self-hosted agents in between.
E.g., does the ARM service connection only allow access to resources like VMs for deployments, or does it also allow connecting to a VNet and, via the VPN, to on-premises resources?
Thanks in advance!
I have already created a service connection between DevOps and the Azure subscription. I cannot check whether the connection to on-premises works, for internal reasons.
The Microsoft-hosted agents for Azure DevOps only allow public internet connections to other resources. VPNs, ExpressRoute, or other connections to an internal corporate network are not supported. See this section of the docs for reference.
Ping uses the ICMP protocol.
RDP uses the TCP protocol (how different is this from WinRM?).
Does Azure DevOps use RDP or WinRM (for tasks such as WindowsMachineFileCopy, which uses Robocopy, and PowerShellOnTargetMachines)?
What does AzCopy use to move files between a storage account and a VM?
What method does an Azure Automation runbook use? (Clearly it is not what Azure DevOps uses, because I have hardened private VMs that are not accessible by Azure DevOps but can easily be accessed to run scripts inside the VM using Automation runbooks.)
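To make the distinction concrete, here is how each of these can be probed from PowerShell (a sketch; `10.0.0.4` stands in for a target VM):

    # ICMP echo (what ping uses); blocked whenever ICMP is filtered.
    Test-Connection -ComputerName 10.0.0.4 -Count 2

    # TCP port checks: RDP listens on 3389, WinRM on 5985 (HTTP) / 5986 (HTTPS).
    # RDP and WinRM are unrelated protocols; they only share TCP as a transport.
    Test-NetConnection -ComputerName 10.0.0.4 -Port 3389
    Test-NetConnection -ComputerName 10.0.0.4 -Port 5985

    # WindowsMachineFileCopy (Robocopy over SMB) needs TCP 445;
    # PowerShellOnTargetMachines needs the WinRM ports above.
    Test-NetConnection -ComputerName 10.0.0.4 -Port 445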
Edit:
Hardened VM = firewall rules and registry settings that block inbound connections, with WinRM disabled. So all connections coming into the VM are forbidden. Even a self-hosted agent fails, but an Automation runbook still succeeds in connecting to the VM.
What does AzCopy use to move files between a storage account and a VM? It uses the standard Azure Storage REST APIs (it relies on HTTP), which are based on TCP.
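For example, a typical AzCopy invocation is nothing more than HTTPS calls against the blob endpoint (a sketch; the storage account, container, and SAS token are placeholders):

    # Download a blob to the VM over HTTPS (TCP 443); only outbound
    # connectivity from the machine running azcopy is required.
    azcopy copy "https://mystorageacct.blob.core.windows.net/drops/app.war?<SAS-token>" "C:\deploy\app.war"

    # Upload works the same way, in the other direction.
    azcopy copy "C:\deploy\logs" "https://mystorageacct.blob.core.windows.net/logs?<SAS-token>" --recursive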
To copy files from Git to the VM: you can use self-hosted agents, which initiate the connection from the VM over HTTPS; see https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops
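Configuring such an agent happens on the VM itself and only requires outbound HTTPS (a sketch, assuming an organization `myorg` and an agent pool `Default`; the full steps are in the linked doc):

    # Run from the extracted agent folder on the VM.
    .\config.cmd --unattended `
        --url "https://dev.azure.com/myorg" `
        --auth PAT --token "<personal-access-token>" `
        --pool "Default" `
        --agent $env:COMPUTERNAME `
        --runAsService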
I am using the Azure Pipelines agent pool (agent specification: windows-2019) for CI/CD builds. I also want the same agent to deploy the WAR to an Azure VM that is bound to a specific domain.
I am able to deploy the WAR using a self-hosted agent, but for this I have to allocate a VM for CI/CD builds.
I want to eliminate this and use the same hosted agent to build and deploy the WAR to the server.
You don't. Microsoft-provided hosted agents don't have connectivity to your on-prem environment. If you need to deploy to a server that is not internet-addressable, then you need to create a private agent within your network.
According to this document:
If your on-premises environments do not have connectivity to a Microsoft-hosted agent pool (which is typically the case due to intermediate firewalls), you'll need to manually configure a self-hosted agent on on-premises computer(s).
If, in your case, it's internal firewalls that block the hosted agent, then you do need a self-hosted agent for deployment. Otherwise, you can try port forwarding, as 4c74356b41 suggests.
I have looked for documentation on the right set of steps to get an agent within our network to deploy to a Service Fabric cluster (also within our network) using a gMSA.
The error received is "##[error]Could not ping any of the provided Service Fabric gateway endpoints."
The same agent can connect to the cluster using PowerShell just fine. What's worse, there is a development cluster on the agent itself and it cannot even connect to that.
There is nothing about how to ensure an on-prem agent can connect to an on-prem machine if using the online version (Azure DevOps) and gMSA for the Service Connection. If anyone has had success in this area or has pointers to better documentation, it would be greatly appreciated.
I'd think your agent service needs to run under the gMSA identity, not under the system or network service identity. Reinstall/reconfigure it to use the gMSA identity and it should work.
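Reconfiguring the service identity is quick (a sketch; `CONTOSO\sfagent$` is a hypothetical gMSA, and no password is supplied for a gMSA because Windows manages it; exact flags may differ by agent version):

    # From the agent installation folder: remove the old registration,
    # then re-register with the service running as the gMSA.
    .\config.cmd remove --auth PAT --token "<personal-access-token>"
    .\config.cmd --unattended `
        --url "https://dev.azure.com/myorg" `
        --auth PAT --token "<personal-access-token>" `
        --pool "Default" `
        --runAsService `
        --windowsLogonAccount "CONTOSO\sfagent$"

    # Sanity check under the same identity: roughly what the task's
    # gateway-endpoint ping needs to be able to do.
    Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.contoso.local:19000" -WindowsCredential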
I'd like to set up Jenkins with a mix of Azure and on-premise agents. Ideally, I would like the Jenkins master to be in Azure and have on-premise agents connect to that master; however, the on-premise agents will not be publicly exposed, though they will be able to access the Jenkins master on Azure.
Is it possible to have a mix of Azure and on-premise agents? Is it possible to have the on-premise agents talk to a Jenkins master on Azure? If so, how would I configure this and what would I need to know?
Yes, it's possible. You just need to create an Azure VPN, and then the on-premise agent can connect to the Azure master through its private IP. This way, you do not need to expose the on-premise agent to the public.
Yes, this is possible. Jenkins doesn't care where agents are located. For a Jenkins agent to talk to a master on Azure, the two need to be able to communicate, normally via SSH (with SSH agents the master connects to the agent; with inbound/JNLP agents the agent connects to the master). Either way, it's possible to achieve with proper networking on Azure/on-premise.
If you need the agents to connect to the Azure master, you'd need to assign a public IP address to the master; in the reverse scenario, you'd need to expose some port on your infrastructure that can be used to talk to your agents.
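For the agent-to-master direction, an inbound (JNLP) agent is the usual approach (a sketch; the master URL, agent name, and secret are placeholders taken from the node's page in the Jenkins UI):

    # Download the remoting jar from the master and launch the agent.
    # The connection is initiated from on-premise, so only the master's
    # endpoint has to be reachable; the agent needs no inbound exposure.
    Invoke-WebRequest -Uri "https://jenkins.example.com/jnlpJars/agent.jar" -OutFile agent.jar
    java -jar agent.jar -url "https://jenkins.example.com/" -name "onprem-agent-01" -secret "<secret-from-node-page>" -workDir "C:\jenkins"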
Another alternative is using a site-to-site VPN/ExpressRoute to establish a direct connection to the on-premise environment (this is a bit harder to achieve, but might be beneficial in the long run).