How to deploy using Azure build agent to internal server?

I am using a Microsoft-hosted agent (Agent pool: Azure Pipelines; Agent specification: windows-2019) for CI/CD builds, and I want the same agent to deploy a WAR to an Azure VM that is bound to a specific domain.
I am able to deploy the WAR using a self-hosted agent, but that means dedicating a VM to CI/CD builds.
I want to eliminate that VM and use the same hosted agent to both build the WAR and deploy it to the server.

You don't. Microsoft-provided hosted agents don't have connectivity to your on-prem environment. If you need to deploy to a server that is not internet-addressable, then you need to create a private agent within your network.

According to this document:
If your on-premises environments do not have connectivity to a Microsoft-hosted agent pool (which is typically the case due to intermediate firewalls), you'll need to manually configure a self-hosted agent on on-premises computer(s).
If, in your case, internal firewalls block the hosted agent, then you do need a self-hosted agent for deployment. Otherwise, you can try port forwarding, as 4c74356b41 suggests.
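As a sketch of the split the answer describes, a pipeline can build on a Microsoft-hosted agent and run only the deployment stage on a self-hosted pool inside your network. The pool name `Default`, the Maven step, the artifact path, and the internal share are all assumptions for illustration:

```yaml
# azure-pipelines.yml — build on a hosted agent, deploy from a self-hosted one
stages:
- stage: Build
  jobs:
  - job: BuildWar
    pool:
      vmImage: 'windows-2019'   # Microsoft-hosted agent
    steps:
    - script: mvn -B package    # produces the WAR (hypothetical build step)
    - publish: target/app.war   # hypothetical artifact path
      artifact: war

- stage: Deploy
  dependsOn: Build
  jobs:
  - job: DeployWar
    pool:
      name: 'Default'           # self-hosted pool with network access to the server
    steps:
    - download: current
      artifact: war
    - script: copy $(Pipeline.Workspace)\war\app.war \\internal-server\deploy\  # illustrative copy
```

Only the deploy stage needs the self-hosted agent, so the machine running it can be small; the heavy build work stays on the hosted pool.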

Related

Gitlab - Using one self-hosted instance to deploy on several cloud providers?

Is it a good idea to use one self-hosted instance of GitLab (let's say on Azure) and use it to deploy to multiple clouds (e.g. AWS, Azure, GCP)?
My first thought is that this would require a private network link between the cloud providers so that the private self-hosted runners can communicate with the self-hosted GitLab instance.
I can't find architecture examples of this kind of solution.
There is not necessarily any reason why you cannot deploy solutions on multiple clouds, irrespective of where your GitLab runners happen to be hosted. Your runners don't have to be hosted in the cloud to which you are deploying. You can even deploy to all clouds from GitLab.com hosted runners, for example.
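As a sketch of that answer, a single `.gitlab-ci.yml` can target several clouds by tagging each deploy job so it runs on a runner (self-hosted or GitLab.com shared) that holds the relevant cloud credentials. The runner tags and deployment commands below are assumptions:

```yaml
stages: [deploy]

deploy-aws:
  stage: deploy
  tags: [aws-runner]          # hypothetical tag on a runner with AWS credentials
  script:
    - aws s3 sync ./site s3://my-bucket          # illustrative

deploy-azure:
  stage: deploy
  tags: [azure-runner]
  script:
    - az webapp deploy --name my-app --src-path app.zip  # illustrative

deploy-gcp:
  stage: deploy
  tags: [gcp-runner]
  script:
    - gcloud app deploy                          # illustrative
```

The runners only need outbound access to the GitLab instance and to their target cloud's APIs, so no private link between the cloud providers themselves is required.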

Is port 9000 allowed in Microsoft hosted ubuntu agent in azure devops

I have SonarQube configured on my localhost on port 9000 (http://www.localhost:9000/).
I have created a service connection in Azure DevOps with this URL.
When I try to analyze a project via a pipeline on a Microsoft-hosted Ubuntu image, it throws a connection error for port 9000 during the Prepare Analysis Configuration task.
error:
##[error][SQ] API GET '/api/server/version' failed, error was: {"code":"ENOTFOUND","errno":"ENOTFOUND","syscall":"getaddrinfo","hostname":"www.localhost","host":"www.localhost","port":"9000"}
Finishing: SonarQubePrepare
Could someone help to fix this?
If SonarQube is configured on your local machine, Microsoft-hosted agents will not be able to reach it, because your local machine cannot be accessed from the cloud. (Note also that the error is ENOTFOUND for the hostname `www.localhost`, which does not resolve at all; but even with a plain `localhost:9000`, a hosted agent would be looking at its own localhost, not your machine.)
You will need to create a self-hosted agent on your local machine and run your pipeline on it by targeting your private agent pool when queuing the pipeline. The locally hosted SonarQube server is then reachable, since the agent runs on the same machine.
Another workaround is to expose your localhost:9000 to the public network using a tool like ngrok, as 4c74356b41 mentioned.
Yeah, basically this will never work; you need an externally available endpoint for your SonarQube and connect to that one.
Something like ngrok.com might help with that if you want to host SonarQube on your workstation.
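If you take the ngrok route, the idea is to tunnel port 9000 and point the service connection at the public URL ngrok prints. The forwarding hostname below is a placeholder that ngrok generates for you:

```shell
# Expose the local SonarQube server (assumes ngrok is installed and authenticated)
ngrok http 9000
# ngrok prints a public forwarding URL, e.g. https://<random-id>.ngrok.io
# Use that URL (not http://www.localhost:9000) in the Azure DevOps service connection.
```

The free-tier URL changes on every restart, so the service connection has to be updated each time; for anything ongoing, hosting SonarQube on a reachable server is the cleaner option.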

Jenkins with mix of Azure and On-premise agents

I'd like to set up Jenkins with a mix of Azure and on-premises agents. Ideally the Jenkins master would be in Azure and the on-premises agents would connect to that master; however, the on-premises agents will not be publicly exposed, though they will be able to reach the Jenkins master on Azure.
Is it possible to have a mix of Azure and on-premise agents? Is it possible to have the on-premise agents talk to a Jenkins master on Azure? If so, how would I configure this and what would I need to know?
Yes, it's possible. You just need to create an Azure VPN, and then the on-premises agents can connect to the Jenkins master in Azure through its private IP. That way you do not need to expose the on-premises agents to the public internet.
Yes, this is possible. Jenkins doesn't care where agents are located. For a Jenkins agent to talk to a master on Azure, the two just need to be able to communicate, normally via SSH or JNLP; depending on the launch method, either the master connects to the agents or the agents connect to the master, but either way it's achievable with proper networking between Azure and on-premises.
If the agents need to connect to the Azure master, you'd assign a public IP address to the master; in the reverse scenario, you'd need to expose a port on your infrastructure that the master can use to reach the agents.
Another alternative is a site-to-site VPN or ExpressRoute to establish a direct connection to the on-premises network (this is a bit harder to set up, but might be beneficial in the long run).
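One outbound-only option that fits the constraint above: configure the on-premises node as an inbound (JNLP) agent, so the agent dials out to the Azure master and no inbound port is needed on-premises. The master URL, node name, and secret below are placeholders from your own Jenkins node configuration:

```shell
# Run on the on-premises machine; agent.jar is downloaded from the Jenkins master's node page.
java -jar agent.jar \
  -url https://jenkins.example.com/ \
  -name onprem-agent-1 \
  -secret <node-secret-from-jenkins> \
  -webSocket   # tunnels the agent connection over HTTPS, so only outbound 443 is required
```

With `-webSocket`, the master does not even need the separate JNLP TCP port open; the public HTTPS endpoint is enough.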

azure VM staging to production

I am currently trying to figure out the best architecture for my Azure VMs.
I have a production website hosted in IIS on 2 VMs with a load balancer over them; this is called Prod1 and is currently the live site.
I also have the exact same setup as my "staging" environment (called Prod2), which is obviously not live.
I wish to deploy to Prod2, test, and, when happy, switch it to live, thereby making Prod1 not live.
I can simply drop the TTL and re-point the A record on the site's domain name to the public IP of Prod2's load balancer.
But is there a better way of doing this that enables faster switching between the two environments?
Azure Traffic Manager would be my suggestion for a seamless blue-green deployment.
For workloads running in Azure, the recommendation is to set up the Blue environment, which has the old code, and the Green environment, which has the new code, in separate Azure resource groups.
If the endpoint is external, you can use any continuous integration and deployment tool to manage and deploy the two environments.
Once you have the environments ready you can create a Traffic Manager profile using the Azure portal, PowerShell, or CLI, with weighted round-robin as the routing method and add the endpoints corresponding to these environments.
See Dilip's full post here.
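As a sketch of that setup with the Azure CLI (the profile name, DNS label, resource group, and endpoint resource IDs are placeholders), a weighted profile can shift traffic gradually from Blue to Green and flip entirely by swapping the weights:

```shell
# Weighted routing: send ~90% of traffic to blue, ~10% to green
az network traffic-manager profile create \
  --name my-tm-profile --resource-group my-rg \
  --routing-method Weighted --unique-dns-name my-app-tm

az network traffic-manager endpoint create \
  --name blue --profile-name my-tm-profile --resource-group my-rg \
  --type azureEndpoints --target-resource-id <blue-public-ip-id> --weight 90

az network traffic-manager endpoint create \
  --name green --profile-name my-tm-profile --resource-group my-rg \
  --type azureEndpoints --target-resource-id <green-public-ip-id> --weight 10
```

The site's CNAME then points at the Traffic Manager DNS name, so the cutover happens by updating endpoint weights rather than re-pointing the A record and waiting out the TTL.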

How to deploy application from AzureDevOps to custom VM inside Azure?

I'm using Azure DevOps to build and pack my Docker application.
The goal is to execute docker-compose commands on an Azure VM, which is behind a firewall and can be accessed only through a VPN connection or standard web-browsing ports.
You can use deployment groups to achieve that. This works because the communication is one-way (from the agent to Azure DevOps), so you don't really need to open inbound ports on the VM; the VM only has to be able to reach the Azure DevOps endpoints (more on this).
TL;DR: using agents works because the connection is outgoing from the agent, not from Azure DevOps to the agent.
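For YAML pipelines, the equivalent of a deployment group is an environment with VM resources: you register the VM using the script Azure DevOps generates for the environment, then target it with a deployment job that runs steps directly on the VM. The environment name and compose file path below are assumptions:

```yaml
jobs:
- deployment: DeployCompose
  environment:
    name: my-vm-env            # environment containing the registered VM
    resourceType: VirtualMachine
  strategy:
    runOnce:
      deploy:
        steps:
        - script: docker-compose -f /opt/app/docker-compose.yml up -d  # runs on the VM itself
```

Because the registered agent on the VM polls Azure DevOps for jobs, this works through the firewall with only outbound HTTPS.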
