I am running an Ubuntu self-hosted build agent for Azure DevOps in Azure Container Instances, and the container outputs only: "Determining matching Azure Pipelines agent." and nothing else.
The agent has a PAT with full access to the whole organization, the given agent pool really exists, and the URL is correct as well. The only thing that comes to my mind is that I see our URL as https://XXXX.visualstudio.com/ but I gave the agent the URL https://dev.azure.com/XXX, which still seems to work when used in the browser.
How can I solve this, please?
I suppose that your issue is caused by the agent upgrade to support .NET 6 (.NET Core 3.1 will be out of support in December). You could try upgrading the container image to Ubuntu 20.04 or higher.
You could also refer to issue #3834 for more information.
The problem was that the agent was put into a subnet which had no NSG, and therefore all inbound/outbound traffic was denied. So we added an NSG to this subnet with an outbound rule for TCP port 443, and it works now.
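For anyone hitting the same thing, a minimal sketch of that fix with the Azure CLI (the resource, vnet, and subnet names below are placeholders):

# Create an NSG and allow outbound HTTPS so the agent can reach Azure DevOps
az network nsg create --resource-group my-rg --name agents-nsg
az network nsg rule create --resource-group my-rg --nsg-name agents-nsg \
  --name AllowHttpsOut --direction Outbound --access Allow --protocol Tcp \
  --priority 100 --destination-port-ranges 443

# Attach the NSG to the subnet hosting the container instances
az network vnet subnet update --resource-group my-rg --vnet-name agents-vnet \
  --name agents-subnet --network-security-group agents-nsg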
My team has created an IR on an on-premises VM, and we are trying to create a Linked Service to an on-premises DB using that IR.
Whenever we click Test Connection in the Linked Service, the connection fails and the IR goes into a limited state.
We also whitelisted the IPs provided by Microsoft for the ADF integration runtime and checked the network traces, and everything seems fine there.
We also stopped and restarted the IR, and uninstalled and reinstalled it, but the problem still persists.
Has anyone faced a similar kind of issue?
We have been facing this issue for a long time and it has now become a blocker for us.
Thanks!
This is observed when nodes can't communicate with each other.
Log in to the virtual machine (VM) hosting the node, open Event Viewer, go to Applications and Services Logs > Integration Runtime, and filter the error logs. If you find the error System.ServiceModel.EndpointNotFoundException or "Cannot connect to worker manager", follow the official documentation, which has detailed troubleshooting steps for "Error message: Self-hosted integration runtime node/logical self-hosted IR is in Inactive/'Running (Limited)' state".
As it states:
try one or both of the following methods to fix it:
- Put all the nodes in the same domain.
- Add the IP-to-host mapping to the hosts file on every hosted VM (see the example below).
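For example, an entry like the following in the hosts file on each node (the host name and IP here are made up; on Windows the file is C:\Windows\System32\drivers\etc\hosts):

10.0.0.5    ir-node-2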
I ran into the same issue. Our organization has firewall rules blocking specific ports or URLs from outside the network. We added the Data Factory service tag with internet-facing routing to the route table, and the IR then connected successfully.
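Roughly what that looks like with the Azure CLI, assuming the route table already exists and that your CLI version accepts a service tag as the address prefix of a user-defined route (all names are placeholders):

# Route Data Factory service traffic straight out to the internet instead of the firewall
az network route-table route create --resource-group my-rg \
  --route-table-name my-route-table --name AllowDataFactory \
  --address-prefix DataFactory --next-hop-type Internet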
I am getting the below error while deploying an app service via Azure DevOps. I tried to search for this issue but could not find its root cause.
Error:
2021-03-15T06:01:27.7479723Z ##[error]Error: Error Code: ERROR_COULD_NOT_CONNECT_TO_REMOTESVC
More Information: Could not connect to the remote computer ("web-app.scm.azurewebsites.net") using the specified process ("Web Management Service") because the server did not respond. Make sure that the process ("Web Management Service") is started on the remote computer. Learn more at: http://go.microsoft.com/fwlink/?LinkId=221672#ERROR_COULD_NOT_CONNECT_TO_REMOTESVC.
Error: The remote server returned an error: (403) Forbidden.
Error count: 1.
I tried everything until I spotted (after reading this) that my (dev) shared App Service plan was out of storage space! When I upgraded it to a bigger one, I could deploy again!
According to this document, the error is caused by Web Deploy being unable to connect to the remote service. Please refer to the following points to troubleshoot your problem:
- Make sure the Azure app service works fine. You can ping the remote machine.
- Make sure the msdepsvc ("Microsoft Web Deployment Agent Service") or wmsvc ("Web Management Service") service is started on the remote server.
- Make sure your firewall is not blocking incoming connections to these ports on the destination. With the default installation, that is port 80 for msdepsvc and 8172 for wmsvc. A quick connectivity check is sketched after this list.
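One quick way to check basic reachability of the Kudu/Web Deploy endpoint from the build agent (the host name is just the one from the error above; a 401/403 response means the endpoint is reachable but needs credentials, while a timeout points to a network or firewall problem):

# Print only the HTTP status code returned by the Web Deploy handler
curl -s -o /dev/null -w "%{http_code}\n" https://web-app.scm.azurewebsites.net/msdeploy.axd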
In addition, you could try adding -retryInterval:6000 -retryAttempts:10 to Additional Arguments in the Azure App Service Deploy task, as this thread suggests.
By the way, if this issue still exists in the Azure pipeline, please check whether it also exists locally. You could refer to these threads for more guidance: "Got 403 Error when doing Web Deployment" and "Web Deploy results in ERROR_COULD_NOT_CONNECT_TO_REMOTESVC".
Thank you, Edward, for the insightful explanation of the possible root cause. The issue is resolved now.
The root cause was that the selected agent pool did not have rights for deployment (its IPs are not whitelisted for the production App Service), since we are not using the agents provided by DevOps directly for the production environment.
An instance that had worked great for years started giving me this error yesterday during Web Deploy. No changes from our side. No amount of poking around non-invasively solved it, but simply hitting the Restart button on the Azure App Service Overview page put it to bed quite easily.
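If you prefer the CLI over the portal, the equivalent is roughly this (resource group and app names are placeholders):

# Restart the app service; this often clears a wedged Web Deploy/SCM site
az webapp restart --resource-group my-rg --name my-web-app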
In short: Double-check your publish profile (each element).
A bit longer: in my case, my publish profile contained a ResourceGroup element which pointed to the wrong resource group. (I'm using WebPublishMethod: MSDeploy.) I went over all elements and made sure they point to the correct resource, credentials, and so on.
That seemed to solve the issue.
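One way to double-check which resource group the app actually lives in, so you can compare it against the ResourceGroup element in the publish profile (the app name is a placeholder):

# Find the resource group the app service actually belongs to
az webapp list --query "[?name=='my-web-app'].resourceGroup" --output tsv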
In my case, I had modified machine.config to capture traffic in Fiddler:
<system.net>
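  <!-- Routes all outbound .NET HTTP/HTTPS traffic through Fiddler listening on 127.0.0.1:8888 -->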
  <defaultProxy
      enabled="true"
      useDefaultCredentials="true">
    <proxy autoDetect="false" bypassonlocal="false" proxyaddress="http://127.0.0.1:8888" usesystemdefault="false" />
  </defaultProxy>
</system.net>
and this was interfering with the VS deployment to Azure
Question
How do I specify which virtual network (vnet) or subnet an Azure Container Instance (ACI) runs in with the Azure Container Agents Plugin for Jenkins?
Assumptions
In order to get lots of data transferred between two machines in Azure, those machines ought to be in the same vnet.
It is possible to get ACIs to run within a subnet of a vnet to get this fast communication.
Background
I'm running an Azure VM with Jenkins on it. This same VM also has Nexus installed on it for proxying/caching third-party dependencies. I'm running my builds in Docker containers that are dynamically created as needed and destroyed afterwards for cost savings. This ACI creation/destruction introduces a problem in that the local .m2 cache does not survive from one build to the next. Nexus is being used to fix this problem by providing fast access to third-party dependencies.
But for Nexus to really solve this problem, it seems as if the ACIs need to be in the same vnet as Nexus. I'd also like the advantage of not needing to open up ports to the world, and instead be able to pass data around within the vnet without having to open ports from that vnet to the internet.
My problem is that I seem to have no control over which vnet or subnet the ACIs run in with the plugin I'm using (Azure Container Agents Plugin).
I've found instructions on how to specify the subnet on an ACI in general (link), but that doesn't help me, as I need a solution that works with the Jenkins plugin I'm using.
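For reference, the general (non-plugin) approach I found looks roughly like this with the Azure CLI (resource, image, vnet, and subnet names are placeholders):

# Create a container instance inside an existing vnet/subnet
az container create --resource-group my-rg --name build-agent \
  --image myregistry.azurecr.io/jenkins-agent:latest \
  --vnet build-vnet --subnet aci-subnet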
But perhaps this plugin will not work for my purposes and I need to abandon it for another approach. If so, suggestions?
AFAIK Azure Container Agents Plugin for Jenkins currently doesn't support specifying what virtual network (vnet) an ACI runs on.
I think you should raise an issue here to see if you get a better response.
And yes, you may need to abandon this approach of using ACI Jenkins agents and, as a workaround (for now), go with VMs as Jenkins agents,
or run the Jenkins jobs on the Jenkins master itself so that the local .m2 cache survives from one build to the next.
Related References:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-jenkins
Hope this helps!!
In Azure, I turned on IP restrictions for:
- Web App (Networking > Access Restrictions)
- SQL server (Firewalls and virtual networks > Add client IP)
- SQL database (Set server settings)
The solution still builds locally and in DevOps (aka Team Foundation Server).
However, Azure App Service Deploy now fails:
##[error]Failed to deploy App Service.
##[error]Error Code: ERROR_COULD_NOT_CONNECT_TO_REMOTESVC
More Information: Could not connect to the remote computer
("MYSITENAME.scm.azurewebsites.net") using the specified process ("Web Management Service") because the server did not respond. Make sure that the process ("Web Management Service") is started on the remote computer.
Error: The remote server returned an error: (403) Forbidden.
Error count: 1.
How can I deploy through the firewall?
Do I need a Virtual Network to hide Azure resources behind my whitelisted IP?
The SCM (Kudu) site, scm.azurewebsites.net, must have Allow All, i.e. no restriction. Also, "Same restrictions as ***.azurewebsites.net" should be unchecked.
It does not need additional restrictions because URL access already requires Microsoft credentials. If restrictions are added, the deployment will be blocked by the firewall, hence the many complications I encountered.
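If you script this rather than using the portal, that checkbox corresponds roughly to the following setting, assuming a recent Azure CLI (resource group and app names are placeholders):

# Stop the SCM (Kudu) site from inheriting the main site's access restrictions
az webapp config access-restriction set --resource-group my-rg --name my-web-app \
  --use-same-restrictions-for-scm-site false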
I think that answer is incorrect, as you might face data exfiltration; that's the reason Microsoft provides the feature to lock down the SCM portal (Kudu console).
There is also a security issue with the Kudu portal, as it can display the secrets from your Key Vault (if you use Key Vault), and you don't want just anyone in your organisation to access the Kudu portal, for example.
You have to follow this link:
https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops
It gives you the Azure DevOps IP ranges that you need to allow in the SCM access restrictions.
Update: to make it work as expected and to use App Service access restrictions (the same applies to an Azure Function), you need to use the service tag "AzureCloud", not the Azure DevOps IP ranges, as those are not enough. In the Azure Pipelines logs you can see the blocked IP, and you can confirm that it falls within the "AzureCloud" service tag in the service tags JSON file.
It's not really clear in the Microsoft docs, but the reason is that they struggled to define a proper IP range for Azure DevOps Pipelines, so they use IPs from the AzureCloud service tag.
https://www.microsoft.com/en-us/download/details.aspx?id=56519
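A sketch of adding that rule with the Azure CLI, applied to the SCM site used by Web Deploy (names and priority are placeholders):

# Allow the AzureCloud service tag on the SCM (Kudu) site
az webapp config access-restriction add --resource-group my-rg --name my-web-app \
  --rule-name AllowAzureCloud --action Allow --service-tag AzureCloud \
  --priority 200 --scm-site true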
In my case I was deploying using Azure DevOps and got the error. It turned out the app service my API was being deployed to had the box "Same restrictions as xxxx.azurewebsites.net" checked under access restrictions (IP restrictions). You need to allow access to scm.azurewebsites.net.
Try adding the application setting WEBSITE_WEBDEPLOY_USE_SCM with a value of false to your Azure App Service. This was able to solve my issues deploying to a private endpoint.
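For example, via the CLI (resource group and app names are placeholders):

# Make Web Deploy bypass the SCM/Kudu endpoint
az webapp config appsettings set --resource-group my-rg --name my-web-app \
  --settings WEBSITE_WEBDEPLOY_USE_SCM=false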
In my case it was because the daily quota had been exceeded.
So the solution in this case is either to wait or to pay more (scale up the App Service plan).
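Scaling up can be done from the portal or with something like this (the plan name and target SKU are placeholders; the daily CPU quota only applies to the Free/Shared tiers):

# Move the App Service plan to a larger SKU
az appservice plan update --resource-group my-rg --name my-plan --sku S1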
In my case this was because the wrong agent (Windows Hosting) was being used when I should have been using a self-hosted internal agent, so I needed to change it at the following location.
I'm using Azure DevOps to build and package my Docker application.
The goal is to execute docker-compose commands on an Azure VM which is behind a firewall and can be accessed only through a VPN connection or standard web-browsing ports.
You can use deployment groups to achieve that. The reason this works is that the communication is one-way (from the agent to Azure DevOps), so you don't really need to open ports for the VM; the VM only has to be able to reach the Azure DevOps endpoints (more on this).
TL;DR: using agents will work, because it's an outgoing connection from the agent, not from Azure DevOps to the agent.
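For illustration, once the VM is registered in a deployment group, the deployment job simply runs a script locally on that VM, so the compose step can be as plain as this (the directory and compose file are assumptions):

# Runs on the VM itself via the deployment group agent, so no inbound ports are needed
cd /opt/myapp                           # hypothetical directory holding docker-compose.yml
docker-compose pull                     # fetch the images produced by the build stage
docker-compose up -d --remove-orphans   # recreate containers with the new images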