Can I get state/information of an "Updating" Service Fabric cluster? - azure

I have created a new Service Fabric cluster in Azure via an ARM template, and this works fine. But when I switch to the Azure portal, the "Overview" shows only the status "Updating".
I use the Service Fabric version: 6.4.637.9590.
The "Activity log" has shown only "Create or Update Cluster" for two hours. It looks to me as if there is some issue, but I don't know where to get more information.
Is there a way I can get some deploy / create information via Azure CLI?

You can get the health information for a Service Fabric cluster by using the Get-ServiceFabricClusterHealth PowerShell cmdlet. Service Fabric reports the following health states:
OK. The entity meets the health guidelines.
Warning. The entity meets the health guidelines but has experienced some issue.
Error. The entity does not meet the health guidelines.
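Since the question asks specifically about the Azure CLI, here is a rough sketch of the commands I would try to see why the deployment is stuck (resource group, cluster, and deployment names are placeholders, and flag names can differ between CLI versions):

```shell
# Show the cluster resource, including its provisioning state.
az sf cluster show --resource-group myRG --cluster-name myCluster \
  --query provisioningState

# List the ARM deployments in the resource group, then inspect the
# operations of the long-running one to see which resource is still
# being created and any status message it reports.
az deployment group list --resource-group myRG --output table
az deployment operation group list --resource-group myRG --name myDeployment \
  --query "[].{resource:properties.targetResource.resourceName, state:properties.provisioningState}"
```

A Service Fabric cluster commonly stays in "Updating" until all nodes have reported in, so the deployment operations list is usually where a genuinely stuck resource shows up first.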
You can also use Service Fabric Explorer (SFX) to visualize your cluster. It is an open-source tool for inspecting and managing Azure Service Fabric clusters, available as a desktop application for Windows, macOS, and Linux.

After a lot of research I found my solution.
You can connect to your Service Fabric cluster nodes if they were created successfully. Take the public IP address from the overview of the "VM scale set" and connect to a VM via Remote Desktop (RDP).
How to set this up is described here.
Check the Windows event log, and check the logs in the Service Fabric installation directory at D:\SvcFab\Log\ - go through all of the directories there.

How to enable logging in new Azure VMs, automatically?

I have created a custom OS image on Azure containing my app, omsagent, and a configuration file to track my application logs. I verified that the custom logs were available in the Log Analytics Workspace for that VM.
When I create a new VM with this custom OS, using Python SDK, I don't receive logs in the workspace. I verified that omsagent is working. It is sending heartbeats that are visible on the Log Analytics Workspace > Insights > Agents.
I found out that the new VM was not connected to the workspace.
So my question is how do I automatically connect a VM to Log Analytics Workspace at creation time?
I would advise against baking the Log Analytics agent (OMS agent) into the image directly - Azure doesn't recommend this kind of setup. Instead, you should use the Azure Policy that Microsoft provides exactly for this scenario.
We have dozens of VMs and scale sets we need to manage, so we installed the Log Analytics agent into each of our custom images when we built them. In the beginning everything worked fine, but a couple of months later those images stopped working.
After spending some time investigating with the Azure team, we found out that the agent's certificate wasn't being renewed, so it wouldn't connect to the workspace automatically. Even worse, because of this it was failing all our image builds.
We were told that this is not the right practice and that we should look at Azure Policies instead. They are rather easy to set up - just assign them once and forget about them. They are also good for compliance and will let you know if any machine is non-compliant.
Check this link for more info about Azure Monitor policies.
And this link will open your Azure Portal directly into the policies page for Windows VMs.
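If you prefer not to click through the portal, the same policy can be assigned from the Azure CLI. A rough sketch, with the caveat that the exact built-in definition name, its parameter names, and the identity flags vary across policy versions and CLI releases - look them up in your tenant first:

```shell
# Find the built-in "deploy Log Analytics agent" policy definition
# (the display-name filter here is approximate).
az policy definition list \
  --query "[?contains(displayName, 'Log Analytics agent')].{name:name, displayName:displayName}" \
  --output table

# Assign it at resource-group scope. New VMs in that group then get the
# agent installed and connected to the workspace automatically
# (it is a DeployIfNotExists policy, so it needs a managed identity).
az policy assignment create \
  --name deploy-law-agent \
  --scope "/subscriptions/<sub-id>/resourceGroups/myRG" \
  --policy "<definition-name-from-above>" \
  --mi-system-assigned --location westeurope \
  --params '{"logAnalytics": {"value": "<workspace-resource-id>"}}'
```

With that in place, VMs created later via the Python SDK are remediated by policy instead of depending on anything baked into the image.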

Monitoring request duration on azure aks

I'm working on a project hosted on Azure using AKS. I was asked to monitor the performance of some requests. I remember I could use Application Insights to see requested URLs and their duration when hosting an application in Azure AppService.
Is there something similar for AKS? I'd like to see the URLs I'm hitting and the completion times of those requests.
Yes, you can. Adding to @ZakiMa's comment above: codeless instrumentation of Azure Kubernetes Service is currently available only for Java applications, through the standalone agent. To monitor applications in other languages, use the SDKs.
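For a Java service on AKS, for example, codeless attach is just a JVM flag pointing at the standalone Application Insights agent jar. A minimal sketch - the agent version, key, and jar names are illustrative, not taken from the question:

```shell
# Download the standalone Application Insights Java agent (3.x line;
# pick the current release from the ApplicationInsights-Java repo).
curl -L -o applicationinsights-agent.jar \
  "https://github.com/microsoft/ApplicationInsights-Java/releases/download/<version>/applicationinsights-agent-<version>.jar"

# Point the agent at your Application Insights resource and attach it
# at JVM startup (typically in the container's Dockerfile/entrypoint).
export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=<your-key>"
java -javaagent:applicationinsights-agent.jar -jar app.jar
```

Incoming request URLs and their durations then appear in the Application Insights "requests" data, much like they did for App Service.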

Service Fabric Mesh - Doesn't Show Application In Service Fabric Cluster

Am I getting something fundamentally wrong here...
I created a Service Fabric Mesh application in Visual Studio 2017, created 2 services and then tested on my local 5 Mesh Node cluster - worked as expected.
I then created a Service Fabric Cluster in Azure.
Next, I published my app to my Azure Subscription in the same Resource Group as my SF Cluster.
My app gets published fine and I can see the SF Mesh Application in my Azure Subscription, and I can access the services directly via the IP addresses that the Publish process tells me.
However, when I look at SF Cluster Explorer for the Azure cluster I created before publishing, it doesn't show my application or services.
I only see the services listed under the Mesh Application - but I can't see the public IPs listed anywhere for those services.
What am I missing? The services are clearly working as I can get data back out from them.
If I understand correctly, you cannot see anything in SF Explorer because it is connected to the regular Service Fabric cluster, while your Mesh application is not running on the cluster you provisioned. I haven't found any information on connecting SF Explorer to a Mesh application yet; I expect MS will provide more information in the future. If you want to find the public IP for your Mesh application, try:
az mesh gateway show -g ResourceGroupName -n GatewayName

Troubleshooting Azure Service Fabric: "The ServiceType was not registered within the configured timeout."

I have deployed a Web API written with .NET Core to a local dev Azure Service Fabric cluster on my machine. I have plenty of disk space, memory, etc., and the app gets deployed there. However, success is intermittent. Sometimes it doesn't deploy to all the nodes; right now it is deployed to all the nodes, but within the Service Fabric Explorer page each application on each node has an Error status with the message: "The ServiceType was not registered within the configured timeout." I don't THINK I should have to remove and redeploy everything. Is there some way I can force it to 're-register' the installed service type? Microsoft docs are really thin on troubleshooting these clusters.
Is there some way I can force it to 're-register' the installed service type?
On your local machine you can set the deployment to always remove the application when you're done debugging. However, if it's not completing in the first place I'm not sure if this workflow would still work.
Since we're on the topic: in the cloud I think you'd just have to use PowerShell scripts to first compare the existing application types and versions, and remove them before "updating". Since orchestrating this is complicated, I like to use tools to manage it.
In VSTS for instance there is an overwrite SameAppTypeAndVersion option.
And finally, if you're just tired of using the Service Fabric UI to remove the Application over and over while you troubleshoot it might be faster to use the icon in the system tray to reset the cluster to a fresh state.
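The remove-and-redeploy dance can also be scripted against the local cluster with the Service Fabric CLI (sfctl), which is sometimes faster than clicking through the UI. A sketch - the application and type names below are placeholders for whatever your deployment uses:

```shell
# Connect to the local dev cluster's management endpoint.
sfctl cluster select --endpoint http://localhost:19080

# Delete the broken application instance, then unprovision its type so
# the next deployment registers the ServiceType from a clean slate.
sfctl application delete --application-id MyApp
sfctl application unprovision \
  --application-type-name MyAppType \
  --application-type-version 1.0.0
```

This only clears out the failed registration; if the root cause is the service crashing during startup (a very common reason for the "not registered within the configured timeout" error), check the node's event log and the service's own logs next.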

Azure Service Fabric - continuous integration on VSTS

Is it possible to setup Continuous Integration on VSTS without using external VM as build agent (https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/)?
What I would like to achieve is to have one Service Fabric solution with 2 stateful/stateless services (ServiceA and ServiceB). I want to build and deploy them separately as different build jobs on VSTS, but deploy them to the same Service Fabric cluster on Azure (fabric:/App/ServiceA, fabric:/App/ServiceB).
As of the Service Fabric SDK 2.1.150 and Runtime 5.1.150 release, it is possible to deploy a Service Fabric application using VSTS's hosted build agent, because the dependencies can be added via a NuGet package - refer to the following video for details: http://www.dotjson.uk/azure-service-fabric-continous-integration-and-deployment-in-15-minutes/
In your specific case; just create 2 build definitions (1 for each service) and 2 release definitions (1 for each service) and hook them up to the same hosted Service Fabric cluster.
Unfortunately, deploying applications relies on the Service Fabric SDK being installed, so you'll need to set up an agent as the instructions suggest. If you don't want to pay for the Azure VM, you might want to consider running the agent service locally, e.g. on your dev box.
Note that with Service Fabric you deploy applications, not services. You can however update services independently.
It sounds like you need the Service Fabric SDK installed on the build machine, and I'm guessing the hosted agent doesn't have it. If that's the case, then yes, you need to create your own build server VM.
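Note that even though you deploy applications rather than services, a per-service pipeline can still work: each build bumps only its own service's version inside the application package and performs a rolling upgrade. A rough sketch with the Service Fabric CLI (cluster endpoint, package path, and names are placeholders, and the upgrade parameters are simplified):

```shell
# Connect to the Azure cluster's management endpoint.
sfctl cluster select --endpoint http://mycluster.westeurope.cloudapp.azure.com:19080

# Upload and provision the new application package version; only the
# changed service's manifest version needs to be bumped inside it.
sfctl application upload --path ./AppPackage --show-progress
sfctl application provision --application-type-build-path AppPackage

# Rolling upgrade of the running application to the new version -
# the unchanged service is left alone by the upgrade.
sfctl application upgrade \
  --app-id App \
  --app-version 1.0.1 \
  --parameters '{}' \
  --mode Monitored
```

Each of the two build/release definitions would run this same flow with its own version bump, both targeting fabric:/App on the shared cluster.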
