Azure Data Factory Integration Runtime Going Into Limited State

My team has created an IR on an on-premises VM, and we are trying to create a Linked Service to an on-prem DB using that IR.
Whenever we click Test Connection in the Linked Service, the connection fails and the IR goes into a limited state.
We have whitelisted the IPs provided by Microsoft for the ADF IR and checked the network traces, and everything looks fine there.
We have also stopped and restarted the IR and uninstalled and reinstalled it, but the problem persists.
Has anyone faced a similar issue?
We have been facing this for a long time and it has now become a blocker for us.
Thanks!

This is observed when the nodes can't communicate with each other.
Log in to the VM that hosts the node, open Event Viewer, go to Applications and Services Logs > Integration Runtime, and filter for error logs. If you find the error System.ServiceModel.EndpointNotFoundException or "Cannot connect to worker manager", follow the official documentation, which has detailed troubleshooting steps for the error message "Self-hosted integration runtime node/logical self-hosted IR is in Inactive/'Running (Limited)' state".
As it states, try one or both of the following methods to fix it:
- Put all the nodes in the same domain.
- Add the IP-to-host-name mapping to the hosts file on every hosted VM.
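
As a quick sanity check before touching domains or hosts files, a short script along these lines can confirm whether each IR node resolves and can reach the others. This is only a sketch: the node names are placeholders, and 8060 is assumed to be the node-to-node remote-access port (the default unless you changed it).

    import socket

    # Placeholder host names of the self-hosted IR nodes and the assumed
    # node-to-node remote-access port (8060 by default) - replace with yours.
    NODES = ["ir-node-01", "ir-node-02"]
    PORT = 8060

    for name in NODES:
        try:
            ip = socket.gethostbyname(name)   # does the name resolve at all?
            print(f"{name} resolves to {ip}")
        except socket.gaierror as err:
            print(f"{name} does NOT resolve ({err}); add it to the hosts file")
            continue
        try:
            with socket.create_connection((ip, PORT), timeout=5):
                print(f"{name}:{PORT} is reachable")
        except OSError as err:
            print(f"{name}:{PORT} is NOT reachable ({err})")

If a name fails to resolve or the port is blocked, that points straight at the domain/hosts-file fix above.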

I ran into the same issue. Our organization has firewall rules blocking specific ports and URLs from the outside network. We added a route for the Data Factory service tag with next hop Internet in the route table, and the IR then connected successfully.

Related

How can I diagnose a connection failure to my Load-balanced Service Fabric Cluster in Azure?

I'm taking my first foray into Azure Service Fabric using a cluster hosted in Azure. I've successfully deployed my cluster via ARM template, which includes the cluster manager resource, VMs for hosting Service Fabric, a Load Balancer, an IP address, and several storage accounts. I've successfully configured the certificate for the management interface, and I've successfully written and deployed an application to my cluster. However, when I try to connect to my API via Postman (or even via a browser, e.g. Chrome), the connection invariably times out and never gets a response. I've double-checked all of my Load Balancer settings, and traffic should be getting through, since my load-balancing rules use the same port on the front end and back end as my API uses in Service Fabric. Can anyone provide some tips for how to troubleshoot this situation and find out where exactly the connection problem lies?
To clarify, I've examined the documentation here, here and here
Have you tried logging in to one of your Service Fabric nodes via Remote Desktop and calling your API directly from the VM? I have found that if I can confirm it's working directly on a node, the issue likely lies with the LB or potentially an NSG.
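
For that on-node check, a minimal sketch like the one below compares a call made directly on the node with one made through the Load Balancer. The localhost port, path, and public host name are placeholders, not values taken from the question.

    import urllib.request
    import urllib.error

    # Placeholder endpoints: the port/path the Service Fabric service listens
    # on locally, and the public address behind the Azure Load Balancer.
    ENDPOINTS = {
        "direct on node": "http://localhost:8080/api/values",
        "via load balancer": "http://mycluster.westus.cloudapp.azure.com:8080/api/values",
    }

    for label, url in ENDPOINTS.items():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                print(f"{label}: HTTP {resp.status}")
        except urllib.error.URLError as err:
            print(f"{label}: failed ({err.reason})")
        except OSError as err:
            print(f"{label}: failed ({err})")

If the direct call succeeds but the load-balancer call times out, that confirms the problem sits in the LB rules or an NSG rather than in the service itself.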

MSDTC, Communication with the underlying transaction manager has failed + Windows Azure VM

My application is deployed on two Windows Azure virtual machines: one machine for SQL Server and the other for the application.
In the application I am using TransactionScope, so I applied the transaction configuration on both VMs as shown in the image below.
In addition, I have allowed the Distributed Transaction Coordinator through the firewall on both machines.
I have a long-running process with a loop, and each iteration has its own TransactionScope. Sometimes, not always, I get the exception below.
Communication with the underlying transaction manager has failed. ------- Inner Exception: The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers.
System Center Endpoint Protection is installed on both VMs; I also turned off its real-time protection, with no result.
When I run the process on the SQL VM itself, everything works fine with no exception.
Actually, I found the root of the problem after several days of searching and investigating. The two machines were not pingable by NetBIOS name; they were pingable only by IP. After fixing the name resolution, everything worked fine.
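
To verify that fix from each side, a small check along these lines shows whether the peer resolves by name at all. The machine names APP-VM and SQL-VM are placeholders; run it on each VM against the other machine's name.

    import socket
    import subprocess

    # Placeholder computer names - MSDTC needs each machine to resolve the
    # other, so run this on both VMs with the peer's name substituted in.
    PEERS = ["APP-VM", "SQL-VM"]

    for name in PEERS:
        try:
            ip = socket.gethostbyname(name)
            print(f"{name} resolves to {ip}")
        except socket.gaierror:
            print(f"{name} does not resolve; add a hosts-file entry "
                  r"(C:\Windows\System32\drivers\etc\hosts) or fix DNS")
            continue
        # Quick reachability probe (Windows ping syntax); ICMP may be blocked
        # even when DTC traffic would get through, so treat this as a hint.
        result = subprocess.run(["ping", "-n", "1", name], capture_output=True)
        print(f"ping {name}: {'ok' if result.returncode == 0 else 'failed'}")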

WorkerRole in Azure Cloud Service net connection

This afternoon I uploaded my WorkerRole to a Cloud Service on Azure; the service runs on a VM with Windows Server 2012. I have realized that the WorkerRole can't run queries against the databases (BigQuery, T-SQL). When I read the service log on the VM, I saw the following error:
The VM and host networking components failed to negotiate protocol version '5.0'
I think Hyper-V VSC has something to do with it. Does anybody know what is happening?
Thanks,
Roger
The first thing I would check is whether the databases you are trying to connect to have whitelisted the VIP of the cloud service you're connecting from. And if you haven't already, remote into an instance of the worker role and try reaching the DBs using as thin a client as you can.
In my experience, these issues are usually on the DB end. Azure doesn't do much to block outbound connections; those that fail are usually more a matter of protocol (UDP multicast, for example).
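
When you remote into a worker instance, even a raw socket test is "thin" enough to show whether the outbound path is open. The endpoints below are placeholders, not the poster's actual servers; substitute whatever SQL and BigQuery endpoints the role talks to.

    import socket

    # Placeholder endpoints - substitute the SQL endpoint and the BigQuery
    # API host the worker role actually uses.
    TARGETS = [("myserver.database.windows.net", 1433),
               ("www.googleapis.com", 443)]

    for host, port in TARGETS:
        try:
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable - check whitelisting on the DB side")
        except OSError as err:
            print(f"{host}:{port} NOT reachable: {err}")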

Services down after implementing High Availability on Windows Azure

I tried to implement High Availability for one of the existing servers following this article:
https://www.windowsazure.com/en-us/documentation/articles/virtual-machines-capture-image-windows-server/
After I was done, the newly created machine was running; however, I cannot RDP to it or ping any of the services that were running on the existing server. The portal shows that the VM is running.
Has anyone faced such a problem before ?
In case this unexpected issue happens, the workaround is to create an XSmall VM and RDP to the unresponsive VM through its internal IP. This should get it to start.

Azure Connect won't connect

I just installed Azure Connect on my local machine, but it won't connect. I see my machine dbates-HP as an active endpoint in the Virtual Network > Connect section of the Azure portal, and I have organized it into a group.
I can see in the Azure Connect portal that the machine endpoint is active and that its last-connected time keeps refreshing.
My local connect client lists the following diagnostics messages:
Policy Check: There is no connectivity policy on this machine.
IPsec certificate check: No IPsec certificate was found.
I also tried with the firewall turned off.
Duncan
In some scenarios, getting Windows Azure Connect to work becomes very complex. I have worked on multiple such scenarios and found that the most common issues are related to network settings. To start investigating, first collect the Azure Connect logs from your machine and try to figure out the problem yourself. I have described how to collect the logs here:
http://blogs.msdn.com/b/avkashchauhan/archive/2011/05/17/collecting-diagnostics-information-for-windows-azure-connect-related-issues.aspx
To open a free Windows Azure support incident, please use the link below:
https://support.microsoft.com/oas/default.aspx?gprid=14928&st=1&wfxredirect=1&sd=gn

Have you "linked" your Azure role with the machine group you created? The message "There is no connectivity policy on this machine" suggests that you haven't defined (in the portal) which machines or roles this machine should connect to.
