I created a SparkPlugin and it runs fine locally in IntelliJ. I can see my custom metrics alongside the other metrics.
https://blog.madhukaraphatak.com/spark-plugin-part-4
But when I connect from my local machine to my Bitnami Spark Docker container, I cannot find my custom metrics.
https://dzlab.github.io/bigdata/2020/07/03/spark3-monitoring-1/
Does anybody know where I can find my custom metrics, or have an example?
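A minimal sketch of the kind of plugin in question, modelled on the SparkPlugin blog post linked above (the class and metric names are illustrative):

```scala
import java.util.{Map => JMap}

import com.codahale.metrics.Counter
import org.apache.spark.api.plugin.{DriverPlugin, ExecutorPlugin, PluginContext, SparkPlugin}

// Registers a custom Dropwizard counter on each executor. The plugin is enabled
// with spark.plugins=CustomMetricSparkPlugin, and the JAR containing it has to be
// on the classpath of the driver and the executors (e.g. inside the Docker container).
class CustomMetricSparkPlugin extends SparkPlugin {
  override def driverPlugin(): DriverPlugin = null

  override def executorPlugin(): ExecutorPlugin = new ExecutorPlugin {
    override def init(ctx: PluginContext, extraConf: JMap[String, String]): Unit = {
      val counter = new Counter
      // Shows up in the executor metrics namespace, e.g. plugin.CustomMetricSparkPlugin.eventCount
      ctx.metricRegistry.register("eventCount", counter)
    }
  }
}
```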
I'm trying to achieve the following: I want the Performance metric (page view URL host) under Application Insights to be displayed in Grafana. I connected Azure to Grafana and can pull metrics through, but I want to filter on certain data; below is a screenshot of where I want the data from.
I added variables under the data sources and followed this link to add them: https://grafana.com/docs/grafana/latest/datasources/azuremonitor/
Any help or input would be appreciated.
Thanks.
I managed to resolve this issue on my own.
I went through the metrics of the connected Azure Monitor service, found the specific metric I was looking for, and then set the panel to that metric.
Because of the details in the screenshot I won't be able to share it, but feel free to ask me for more information.
Thanks.
I deployed an Azure Machine Learning model to AKS and would like to know how to set an alert if the deployment status changes to any value other than 'Healthy'. I looked at the monitoring metrics in the workspace, but they seem to be more related to the training process (Model and Run) and quotas. Please let me know if you have any suggestions.
Thanks!
Azure Machine Learning does not provide a way to continuously monitor the health of your webservice and generate alerts.
You can set this up fairly easily using Application Insights (an AML workspace comes with a provisioned Application Insights instance).
You can monitor the webservice scoring endpoint using a URL ping test or web test in App Insights.
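For illustration only, a URL ping test boils down to a periodic HTTP probe of the scoring URI. A rough, stand-alone sketch of that check (the endpoint URL below is a placeholder; in practice you configure the test in the App Insights portal rather than write code):

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Rough equivalent of what a URL ping test does: call the AKS scoring endpoint
// and treat any non-200 response as unhealthy. The URL is a placeholder.
object ScoringEndpointProbe {
  def main(args: Array[String]): Unit = {
    val scoringUri = "http://your-aks-endpoint/api/v1/service/your-service/score"
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create(scoringUri)).GET().build()
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    if (response.statusCode() != 200)
      System.err.println(s"Unhealthy scoring endpoint: HTTP ${response.statusCode()}")
  }
}
```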
Problem statement:
As per my understanding, we can run Elasticsearch, Kibana, Logstash, etc. as pods in a Kubernetes cluster for log management, but they are also memory-intensive applications. AWS provides various managed services like CloudWatch, CloudTrail, and a managed ELK stack for log management.
Do we have a similar substitute in Azure as well, i.e. some managed service?
You can use AKS with Azure Monitor (reading). I'm not sure you can apply this to a non-AKS cluster (at least not in a straightforward fashion).
Onboarding (for AKS clusters) is really simple and can be done using various methods (portal included).
You can read more on the docs I've linked (for example, about capabilities).
Azure Monitor for Containers is available now, and once integrated, cluster metrics as well as logs are automatically collected and made available through Log Analytics.
I am using AWS RDS SQL Server and I need to do enhanced monitoring via CloudWatch. By default some basic monitoring is available, but I want to use custom metrics as well.
In my scenario I need to create an alarm whenever we get a high number of deadlocks in SQL Server. We are able to fetch the deadlock details via a script, and I need to prepare a custom metric from that.
Can anyone help with this or kindly suggest an alternate solution?
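A sketch of how such a deadlock count could be pushed as a custom CloudWatch metric, using the AWS SDK for Java v2 from Scala (the namespace, metric name, dimension value, and hard-coded count are placeholders; in practice the count would come from the deadlock script):

```scala
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient
import software.amazon.awssdk.services.cloudwatch.model.{Dimension, MetricDatum, PutMetricDataRequest, StandardUnit}

object DeadlockMetricPublisher {
  def main(args: Array[String]): Unit = {
    // Placeholder: in practice this value comes from the script that queries
    // SQL Server for deadlock details.
    val deadlockCount = 3.0

    val cloudWatch = CloudWatchClient.create()

    val datum = MetricDatum.builder()
      .metricName("SqlServerDeadlocks")   // illustrative metric name
      .unit(StandardUnit.COUNT)
      .value(deadlockCount)
      .dimensions(Dimension.builder()
        .name("DBInstanceIdentifier")
        .value("my-rds-instance")         // illustrative instance name
        .build())
      .build()

    // Publish into a custom namespace; a CloudWatch alarm can then be created on this metric.
    cloudWatch.putMetricData(PutMetricDataRequest.builder()
      .namespace("Custom/RDS")            // illustrative namespace
      .metricData(datum)
      .build())

    cloudWatch.close()
  }
}
```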
I have some Selenium test code that I need to run in parallel. In order for Selenium to run effectively, certain configurations have to be done on the machine (i.e. zone settings, Chrome and Firefox installs, etc.), and these settings are hard (if not impossible) to apply via an automated approach. I've manually created a VM, done all the setup, and created an image following the directions in Microsoft's documentation.
Now I need to set up my code so that I can specify a VM image to use when creating the nodes. I've searched as much as I can and have not found any documentation that explains how to go about doing this. The example in the DotNetTutorial sample doesn't seem to have any way to specify an image.
There is a feedback item here on this same topic that shows the request as started on Jun 1st, 2015. I'm hoping this means it's done now and just hasn't been documented well.
Q: How can I specify a custom VM image as the source for my Azure Batch nodes?
https://github.com/Azure/azure-sdk-for-net/blob/AutoRest/src/Batch/Client/changelog.md
• Added support for deploying nodes using custom VHDs, via the OSDisk property of VirtualMachineConfiguration. Note that the Batch account being used must have been created with PoolAllocationMode = UserSubscription to allow this.
Updated Answer on 2017-12-05:
Custom images are now supported through normal Batch accounts (i.e., Batch service pool allocation mode accounts). You will need to specify a valid ARM image ID and use Azure Active Directory authentication in order to use custom images (shared key auth does not support custom images).
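For illustration, with the Batch Java SDK model classes this looks roughly as follows (a sketch only: the image resource ID, node agent SKU, pool ID, and VM size are placeholders, and the BatchClient that submits the pool must be authenticated with Azure AD rather than a shared key):

```scala
import com.microsoft.azure.batch.protocol.models.{ImageReference, PoolAddParameter, VirtualMachineConfiguration}

object CustomImagePoolSketch {
  // Points a Batch pool at a custom ARM image by resource ID.
  // All IDs, the node agent SKU, and the VM size below are placeholders.
  def buildPoolParameters(): PoolAddParameter = {
    val imageRef = new ImageReference()
      .withVirtualMachineImageId(
        "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/images/<image-name>")

    val vmConfig = new VirtualMachineConfiguration()
      .withImageReference(imageRef)
      .withNodeAgentSKUId("batch.node.windows amd64")

    new PoolAddParameter()
      .withId("selenium-pool")
      .withVmSize("STANDARD_D2_V3")
      .withVirtualMachineConfiguration(vmConfig)
      .withTargetDedicatedNodes(2)
  }
}
```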
Updated Answer on 2017-03-17:
Custom images are now supported through "User Subscription" Batch accounts. You can create these types of accounts in Azure Portal or through the newest management SDKs for supported languages.
Previous Answer:
Currently, custom VM images are not supported. As you noted, this is a feature that is being worked on. In addition to uservoice, you can periodically check for product updates at this site.