[Question posted by a user on YugabyteDB Community Slack]
Is there a metrics endpoint that clients can use to determine how loaded the servers are and throttle/back off requests accordingly? Currently, the /prometheus-metrics endpoint returns too many metrics, most of which are irrelevant. I want a way to obtain only the metrics I care about, such as rpcs_in_queue, rpc_{in,out}bound_calls_alive, etc.
Since commit f7438c2, it is possible to filter Prometheus metrics by building a request like the one below:
/prometheus-metrics?metrics=<metric-substring1>,<metric-substring2>
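To illustrate the client side, here is a minimal sketch in TypeScript (Node 18+ for the built-in fetch) that polls the filtered endpoint and decides whether to back off. Only the endpoint path and the metrics filter come from the answer above; the host, port, threshold, and the simplified line parsing are assumptions for the example.

// Poll the filtered metrics endpoint and report whether the server looks busy.
// Assumed: a server webserver on localhost:9000 and a made-up threshold.
const METRICS_URL =
  "http://localhost:9000/prometheus-metrics?metrics=rpcs_in_queue,rpc_inbound_calls_alive";
const QUEUE_THRESHOLD = 100; // hypothetical backoff trigger

async function serverIsBusy(): Promise<boolean> {
  const body = await (await fetch(METRICS_URL)).text();
  for (const line of body.split("\n")) {
    if (line.startsWith("#")) continue; // skip HELP/TYPE comment lines
    // Prometheus text format: <name>{<labels>} <value> [<timestamp>]
    const match = line.match(/^rpcs_in_queue(?:\{[^}]*\})?\s+(-?\d+(?:\.\d+)?)/);
    if (match && Number(match[1]) > QUEUE_THRESHOLD) return true;
  }
  return false;
}

A client could call serverIsBusy() before a batch of requests and sleep or shrink its request window whenever it returns true.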
Here is some information about the monitoring metrics Spark exposes. I don't want to pull them from the REST API. Is there a way of telling the driver to send specific metrics that I find valuable to Elasticsearch (so I can then display them in Kibana)?
I can see all the metrics that are available, but I couldn't understand how to push those metrics anywhere, as opposed to having them pulled (as Prometheus does, for example).
Good day
Is there a way to see what is pulling data out of the system, and how much? I have looked at the Access History (OData refresh), but I am thinking the API could also be an issue. We are currently experiencing massive data pulls via IIS on our server, and I can't see what is pulling the data.
Any ideas or suggestions would be helpful.
You can monitor lots of things such as SQL and Memory through the Request Profiler.
Search for Request Profiler in the search box.
Click Log Requests and Log SQL to enable full logging.
Remember to turn it off when you are done as it will have a small performance hit.
An alternative is to use the License Monitoring Console within Acumatica. You can view historical transactions, whether they are commercial or ERP-related.
From the help file:
Commercial transactions are a subset of ERP transactions. The system regards a transaction as commercial when the transaction creates or updates any of the following entities: sales orders, shipments, Accounts Receivable invoices, payments, purchase orders, purchase receipts, and Accounts Payable invoices. All requests generated by using the web services API that create or update data in these documents are also considered commercial transactions.
Also, you can review the number of web service API requests, requests per minute, and the maximum number of users. This can help determine whether your client needs to be on a higher Acumatica tier.
You can also follow the troubleshooting recommendations listed on Acumatica's help site.
We have legacy applications that currently write various runtime metrics (SQL call run times, API/HTTP request run times, etc.) to a local SQL database, in the format (source, event, data, executionduration).
We are moving away from storing those in the local SQL database and are now publishing the same metrics to Azure Event Hub.
We are looking for a good place to store those metrics for the purpose of monitoring the health of the application. A simple solution would be to store them in some database and build a custom application to visualize the data in custom ways.
We are also considering using Azure Monitor for this purpose via the Data Collector API (https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api).
QUESTION: Are there any issues with Azure Monitor that would prevent us from achieving this type of health monitoring?
Details
each event is small (a few hundred characters)
expecting ~10 million events per day
retention of 1-2 days is enough
the ability to aggregate old events per source and per event is important (to keep historical runtime information)
Thank you
You can build some simple graphs, and with the Log Analytics query language you can do just about any form of data analytics you need.
Here's a pretty good article on Monitor Visualizations.
learn.microsoft.com/en-us/azure/azure-monitor/log-query/charts
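On the ingestion side, if you do go the Data Collector API route mentioned in the question, pushing events is a signed HTTP POST. Here is a minimal sketch in TypeScript (Node 18+ for the built-in fetch); the workspace ID, shared key, and log type name are placeholders, and the string-to-sign format follows the Data Collector API documentation linked in the question.

// Push a batch of metric events to the Azure Monitor HTTP Data Collector API.
import { createHmac } from "node:crypto";

const WORKSPACE_ID = "<workspace-id>"; // placeholder: your Log Analytics workspace ID
const SHARED_KEY = "<primary-key>";    // placeholder: workspace key (base64-encoded)
const LOG_TYPE = "AppRuntimeMetrics";  // hypothetical custom log type name

async function pushEvents(events: object[]): Promise<void> {
  const body = JSON.stringify(events);
  const date = new Date().toUTCString(); // RFC 1123 date required by the API
  // String to sign, as documented for the Data Collector API.
  const stringToSign =
    `POST\n${Buffer.byteLength(body)}\napplication/json\nx-ms-date:${date}\n/api/logs`;
  const signature = createHmac("sha256", Buffer.from(SHARED_KEY, "base64"))
    .update(stringToSign, "utf8")
    .digest("base64");

  const res = await fetch(
    `https://${WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Log-Type": LOG_TYPE,
        "x-ms-date": date,
        Authorization: `SharedKey ${WORKSPACE_ID}:${signature}`,
      },
      body,
    }
  );
  if (!res.ok) throw new Error(`ingestion failed: HTTP ${res.status}`);
}

// Example batch matching the (source, event, data, executionduration) shape.
pushEvents([{ source: "billing", event: "sql_call", data: "GetInvoices", executionduration: 42 }]);

At ~10 million small events per day you would want to batch events into arrays like this rather than posting them one at a time.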
I want to implement API request rate limiting per account plan. Let's say we have users, and every user has a plan that defines a limit on how many API requests per day they can make.
So now, how can I implement such an API limit policy in Loopback 3.x?
Thanks
If you're planning on using Loopback on IBM Bluemix hosting, you can use their API Connect service, which includes customer plan-based policies with API-level throttling, monitoring, API billing, and many other API management features.
The StrongLoop API Microgateway used by API Connect is now open source (as of April 2017).
Since Loopback is just a layer on top of Express, you can alternatively just use an Express lib.
For rate limiting on a single standalone Loopback server you can use one of these Express libs (a minimal sketch follows the list):
express-rate-limit
express-throttle
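For instance, here is a minimal sketch using express-rate-limit. A Loopback 3 app is an Express app underneath, so the same middleware can be registered from a server/boot script; the window, quota, and /api/ mount path are made-up values for the example.

// Daily per-client quota with express-rate-limit (older versions call the
// option `max`, newer ones `limit`).
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();
app.use(
  "/api/",
  rateLimit({
    windowMs: 24 * 60 * 60 * 1000, // one-day window to match a daily quota
    max: 1000,                     // hypothetical per-client daily limit
  })
);

To vary the limit by account plan, the keyGenerator option can key counts on the authenticated user instead of the IP, and newer versions accept a function for the limit so it can be looked up from the user's plan.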
If you plan to use this on a cluster of Loopback servers, you'll need to store the API call counts as shared state per user or user session across all servers. The weapon of choice for this is Redis, since it's a high-performance in-memory data store that can be scaled. Rate-limiting Express libs that support Redis include the following (a per-user sketch using express-limiter follows the list):
strict-rate-limiter
express-brute
express-limiter
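As a sketch of the Redis-backed, per-user variant, here is express-limiter keyed on the user ID. The lookup path, quota, and window are assumptions, and the example assumes the classic callback-style redis client that express-limiter was written against.

// Shared daily quota per user, stored in Redis so every server in the
// cluster sees the same count.
import express from "express";

const app = express();
const client = require("redis").createClient(); // classic callback-style client
const limiter = require("express-limiter")(app, client);

limiter({
  path: "/api/*",
  method: "all",
  lookup: "user.id",           // assumes auth middleware sets req.user.id
  total: 10000,                // hypothetical daily quota for the user's plan
  expire: 24 * 60 * 60 * 1000, // one-day window
});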
Finally, you could also implement rate limiting on a reverse proxy. See Nginx Rate Limiting
This is an access control policy.
You can handle it with a custom role created via a role resolver: register a custom role and, in the resolver callback, check whether the current user has exceeded their rate limit.
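A sketch of that approach as a Loopback 3 boot script: Role.registerResolver is Loopback 3's actual API, while checkQuota is a hypothetical helper you would implement against the user's plan and a shared counter (e.g. in Redis).

// server/boot/rate-limit-role.ts
// Hypothetical quota check: resolves true while the user is under their plan's limit.
async function checkQuota(userId: number): Promise<boolean> {
  return true; // placeholder implementation
}

export = function (app: any) {
  app.models.Role.registerResolver(
    "withinRateLimit",
    (role: string, context: any, cb: (err: Error | null, inRole?: boolean) => void) => {
      const userId = context.accessToken && context.accessToken.userId;
      if (!userId) return cb(null, false); // deny unauthenticated callers
      checkQuota(userId).then((ok) => cb(null, ok), (err) => cb(err));
    }
  );
};

A model ACL can then ALLOW the withinRateLimit role and DENY $everyone, so requests over the quota are rejected.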
Such a policy can only* be implemented with a database, such as Redis or Memcached. For my projects I rely on redback, which is based on Redis. It has a built-in RateLimit helper (among others) and takes care of some race conditions and atomic transactions.
* If you don't have a database, you could store the counts in memory (in a hash or array) and use intervals to flush them, but I'd go with redback :)
We want to enable geo-replication in Azure SQL Database. However, for compliance reasons, we want to be sure that replication to the secondary region happens over a secure, encrypted channel.
Is there any documentation available to confirm that data in transit during geo-replication goes over a secure, encrypted channel?
I have looked into the Microsoft Azure Trust Center, and there is a brief mention of using standard protocols for in-transit data. However, I could not find information about which protocols are used and how the security of in-transit data is ensured.
Thank you for this question. Yes, geo-replication uses a secure channel. If you are using V11 servers, the SSL certificates are global and regularly rotated. If you are using V12 servers, the certificates are scoped to the individual logical servers. This provides secure-channel isolation not only between different customers but also between different applications. Based on this post, I have filed a work item to reflect this in the documentation as well.