Am I getting something fundamentally wrong here...
I created a Service Fabric Mesh application in Visual Studio 2017, created two services, and then tested it on my local 5-node Mesh cluster; it worked as expected.
I then created a Service Fabric Cluster in Azure.
Next, I published my app to my Azure Subscription in the same Resource Group as my SF Cluster.
My app gets published fine and I can see the SF Mesh Application in my Azure Subscription, and I can access the services directly via the IP addresses that the Publish process tells me.
However, when I look at Service Fabric Explorer for the Azure cluster I created before publishing, why doesn't it show my application or services there?
I only see the services listed under the Mesh application, but I can't see the public IPs listed anywhere for those services.
What am I missing? The services are clearly working as I can get data back out from them.
If I understand correctly, you cannot see anything in SF Explorer because it is connected to the regular Service Fabric cluster, and your Mesh application is not running on the cluster you provisioned. I haven't found any information on connecting SF Explorer to Mesh yet; I expect MS will provide more information in the future. If you want to find the public IP for your Mesh application, try:
az mesh gateway show -g ResourceGroupName -n GatewayName
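If you only want the IP itself, the az CLI's JMESPath --query flag can filter the output. A minimal sketch, assuming the gateway resource exposes the address as an ipAddress property (that property name is an assumption; check the full `az mesh gateway show` output for your CLI version):

```powershell
# Full gateway resource (requires the preview 'mesh' CLI extension)
az mesh gateway show -g ResourceGroupName -n GatewayName

# Assumed: filter down to the public IP; 'ipAddress' is a guess at the
# property name, so inspect the full output above if this comes back empty
az mesh gateway show -g ResourceGroupName -n GatewayName --query ipAddress -o tsv
```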
Related
How do I create a single Azure Application Gateway for multiple sites? The applications are in different resource groups and VNets.
I need a single application gateway for test.example.com and test1.example.com.
• Since you included the 'terraform' tag in your question, I assume you want to create the application gateway using Terraform IaC. You can certainly create an application gateway for multiple sites across different tenants/subscriptions and different virtual networks, as long as the hosted app services are reachable over the internet, have IP connectivity, and are accessible. Refer to the official Microsoft documentation on whether Application Gateway can communicate with instances outside of its virtual network:
https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-faq#can-application-gateway-communicate-with-instances-outside-of-its-virtual-network-or-outside-of-its-subscription
To create the application gateway you need through Terraform, follow these steps:
a) Install the 'Azure Terraform' extension in Visual Studio Code as described in the documentation below:
https://learn.microsoft.com/en-us/azure/developer/terraform/configure-vs-code-extension-for-terraform?tabs=azure-cli
b) Once that is done, edit the 'main.tf' file with the example code from the link below, and update the 'variables.tf' file with the parameter values required for your application gateway deployment:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/application_gateway
c) Once the above is done, initialize Terraform with the command 'terraform init', then create an execution plan with 'terraform plan -out main.tfplan'.
d) Apply the plan with 'terraform apply main.tfplan'.
In this way, the application gateway will be deployed using Terraform IaC; a consolidated sketch of the command sequence follows below.
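Putting steps (c) and (d) together, a minimal sketch of the workflow, assuming your working directory already contains the main.tf and variables.tf described above:

```powershell
# Initialize the working directory and download the azurerm provider
terraform init

# Build an execution plan and save it to a file for review
terraform plan -out main.tfplan

# Apply exactly the reviewed plan
terraform apply main.tfplan
```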
I have created a custom OS image on Azure containing my app, omsagent, and a configuration file to track my application logs. I verified that the custom logs were available in the Log Analytics workspace for that VM.
When I create a new VM with this custom OS, using Python SDK, I don't receive logs in the workspace. I verified that omsagent is working. It is sending heartbeats that are visible on the Log Analytics Workspace > Insights > Agents.
I found out that the new VM was not connected to the workspace.
So my question is how do I automatically connect a VM to Log Analytics Workspace at creation time?
I would advise against baking the Log Analytics agent (OMS agent) into the image directly; Azure doesn't recommend this kind of setup. Instead, you should use the Azure Policy they provide exactly for this scenario.
We have dozens of VMs and scale sets to manage, with the Log Analytics agent installed on each of them when we build the custom images. In the beginning everything worked fine, but a couple of months later those images stopped working.
After spending some time investigating with the Azure team, we found that the agent's certificate wasn't being renewed, so it wouldn't connect to the workspace automatically. Even worse, this was failing all our image builds.
We were told that this is not the right practice and that we should look at Azure Policies. They are rather easy to set up: just assign them once and forget about them. They are also good for compliance and will let you know if any machine is non-compliant.
Check this link for more info about Azure Monitor policies.
And this link will open your Azure Portal directly into the policies page for Windows VMs.
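As a hedged sketch of what "assign once and forget" can look like from the CLI (the display-name filter and parameter values below are illustrative assumptions; match them against the actual built-in policy names in your portal, and note that a DeployIfNotExists policy needs a managed identity with rights on the workspace):

```powershell
# Look up built-in definitions for deploying the Log Analytics agent to
# Windows VMs (the display-name filter is an assumption; confirm the
# exact wording in the Policy blade)
az policy definition list --query "[?contains(displayName, 'Log Analytics agent') && contains(displayName, 'Windows')].{name:name, displayName:displayName}" -o table

# Assign the definition at a scope, e.g. a resource group; the name,
# scope and location are placeholders
az policy assignment create --name "deploy-la-agent-windows" --policy "<definition-name-from-above>" --scope "/subscriptions/<sub-id>/resourceGroups/<rg-name>" --assign-identity --location westeurope
```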
I have created a new Service Fabric cluster in Azure via an ARM template, and this worked fine. But when I switch to the Azure portal, the "Overview" shows only "Updating".
I use the Service Fabric version: 6.4.637.9590.
In the "Activity log" is written only "Create or Update Cluster" for 2 hours. Now it looks to me as if there are some issues, but I don't know where I can get more information.
Is there a way I can get some deploy / create information via Azure CLI?
You can get the health information for a Service Fabric cluster by using the Get-ServiceFabricClusterHealth cmdlet; a short sketch follows after the list below. Service Fabric reports the following health states:
OK. The entity meets health guidelines.
Error. The entity does not meet health guidelines.
Warning. The entity meets health guidelines but has experienced some issues.
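A minimal sketch of checking health from PowerShell, plus the ARM-side state from the Azure CLI; the endpoint and names are placeholders, and for a secure cluster you would also pass client certificate parameters to Connect-ServiceFabricCluster:

```powershell
# Connect to the cluster's client endpoint (placeholder address; a
# secure cluster also needs -X509Credential plus certificate details)
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westeurope.cloudapp.azure.com:19000"

# Overall cluster health, including any unhealthy evaluations
Get-ServiceFabricClusterHealth

# From the Azure CLI, the ARM resource's cluster state can hint at a
# stuck "Updating" deployment (property name per the ARM schema; verify
# against your CLI version's output)
az sf cluster show -g MyResourceGroup -c mycluster --query clusterState -o tsv
```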
You can also use Service Fabric Explorer to visualize your cluster. Service Fabric Explorer (SFX) is an open-source tool for inspecting and managing Azure Service Fabric clusters. Service Fabric Explorer is a desktop application for Windows, macOS and Linux.
After long research, I found my solution.
You can connect to your Service Fabric cluster nodes if they were created successfully. Use the public IP, which you can find in the overview of the VM scale set, and connect to a VM via Remote Desktop (RDP).
How to set this up is described here.
Check the event log, and check the logs in the Service Fabric installation directory at D:\SvcFab\Log\ (look through all its subdirectories); a sketch follows below.
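A short sketch of that flow; the public IP is a placeholder, and the port mapping is an assumption based on the default load balancer setup (inbound NAT rules usually map ports 3389, 3390, ... to node instances 0, 1, ...; verify under the load balancer's inbound NAT rules):

```powershell
# Public IP from the VM scale set overview blade (placeholder)
$publicIp = "<public-ip-from-portal>"

# RDP to the first node; 3389 usually maps to instance 0 through the
# load balancer's inbound NAT rules (use 3390 for instance 1, etc.)
mstsc "/v:$($publicIp):3389"

# Once inside the VM, list the newest Service Fabric log files
Get-ChildItem -Recurse D:\SvcFab\Log -File | Sort-Object LastWriteTime -Descending | Select-Object -First 20 FullName, LastWriteTime
```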
Azure Service Fabric can be run on Windows Server. Can Service Fabric Mesh be hosted that way as well?
The underlying platform is the same Service Fabric binaries; the only difference is that on Mesh you don't manage nodes, and all definitions are based on containers and hardware resources (network, CPU, storage). You will be able to simulate a single-node Mesh cluster, like you do with the current SF, and deploy your Mesh applications there.
If your plan is to have a production environment on-premises, I haven't gone into much detail about it. Now that it is becoming an open-source solution, I assume yes, but I don't think it is a top priority for them.
For now, there is not much documentation about it, so the best you can find is in these links:
https://learn.microsoft.com/en-gb/azure/service-fabric-mesh/
https://azure.microsoft.com/en-us/blog/azure-service-fabric-mesh-is-now-in-public-preview/
How to set up a local development cluster:
https://learn.microsoft.com/en-gb/azure/service-fabric-mesh/service-fabric-mesh-howto-setup-developer-environment-sdk
I have deployed a Web API written with .NET Core to a local dev Azure Service Fabric cluster on my machine. I have plenty of disk space, memory, etc., and the app gets deployed there. However, success is intermittent. Sometimes it doesn't deploy to all the nodes; and now that I have it deployed to all the nodes, within the Azure Service Fabric manager page I see each application on each node has an Error status with the message: "The ServiceType was not registered within the configured timeout." I don't THINK I should have to remove and redeploy everything. Is there some way I can force it to 're-register' the installed service type? Microsoft docs are really thin on troubleshooting these clusters.
Is there some way I can force it to 're-register' the installed service type?
On your local machine you can set the deployment to always remove the application when you're done debugging. However, if it's not completing in the first place I'm not sure if this workflow would still work.
Since we're on the topic: in the cloud, I think you'd have to use PowerShell scripts to first compare the existing application types and versions and remove them before "updating". Since orchestrating this is complicated, I like to use tools to manage it; a hedged sketch of the manual approach follows below.
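A sketch of that compare-and-remove flow with the Service Fabric PowerShell module; the endpoint, application name, and type/version are placeholders, and removing an application deletes its state, so treat this as a troubleshooting step rather than a routine upgrade path:

```powershell
# Connect to the cluster first (endpoint is a placeholder; a local dev
# cluster is usually "localhost:19000")
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# See which application types and versions are already registered
Get-ServiceFabricApplicationType

# Remove the running instance, then unregister the type/version so the
# same version can be provisioned again cleanly
Remove-ServiceFabricApplication -ApplicationName "fabric:/MyWebApi" -Force
Unregister-ServiceFabricApplicationType -ApplicationTypeName "MyWebApiType" -ApplicationTypeVersion "1.0.0" -Force
```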
In VSTS, for instance, there is an overwrite SameAppTypeAndVersion option.
And finally, if you're just tired of using the Service Fabric UI to remove the application over and over while you troubleshoot, it might be faster to use the icon in the system tray to reset the cluster to a fresh state.