I have a Vitess operator-controlled deployment (v12) running on Kubernetes. I want to generate logs recording every read query against the DB made by specific users, aggregate the logs, and send them elsewhere.
I can't find anything in the Vitess docs about audit logs or audit records. How do I generate them?
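In case it helps: Vitess doesn't ship anything labelled "audit logs", but vtgate has general query logging that can approximate one. A rough sketch, assuming the /debug/querylog streaming endpoint and the -log_queries_to_file vtgate flag (both worth verifying against your Vitess version, and the port against your operator deployment; the pod name and username below are placeholders):

```bash
# Stream live query logs from a vtgate pod; <vtgate-pod> and the status
# port (15000 here) are placeholders for your actual deployment.
kubectl port-forward pod/<vtgate-pod> 15000:15000 &

# /debug/querylog emits one line per query while the connection is open.
# Filter for the users you care about and pipe into your log shipper.
curl -sN http://localhost:15000/debug/querylog | grep '<username>'

# Alternatively, set -log_queries_to_file on vtgate (e.g. via the
# operator's extraFlags) and collect the file with a sidecar.
```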
I am running a couple of VMs in Azure with my own private Besu nodes on them. I have my metrics set up in Prometheus, and I was hoping to hook it up securely to Grafana, but I've tried everything and I can't. So the next thing is to see whether I can get the metrics from Prometheus into Azure Monitor, specifically into Log Analytics.
The aim is to get the sync status, and the highest block number on each node, into Log Analytics so we can see what each is doing. That way we know, at a quick glance, the status of each node and, by extension, the condition of the private chain. What worries me is that although I have alerts for when blocks stop being created or nodes lose peers, we cannot see it quickly.
Prometheus is one option to give us those stats. If we can get data from Prometheus into Log Analytics, that would solve the problem.
Can anyone help me with how to go about this, or point me to any links? All I am seeing is for containers, but I want this for my VMs.
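Not an officially blessed route, but one pattern that works on plain VMs is a small scheduled job that queries Prometheus and pushes the values to Log Analytics through the HTTP Data Collector API. A rough sketch, assuming jq/openssl/xxd are available and that ethereum_blockchain_height is the Besu height metric (verify against your node's /metrics output); WORKSPACE_ID and SHARED_KEY are placeholders:

```bash
#!/usr/bin/env bash
# Sketch: pull chain height from Prometheus and push it to Log Analytics
# via the HTTP Data Collector API.
WORKSPACE_ID="<workspace-id>"
SHARED_KEY="<primary-key>"            # base64-encoded workspace key
PROM_URL="http://localhost:9090"

# Query Prometheus for the chain height of every node it scrapes.
BODY=$(curl -s "${PROM_URL}/api/v1/query?query=ethereum_blockchain_height" |
  jq -c '[.data.result[] | {node: .metric.instance, height: .value[1]}]')

# Build the SharedKey signature the Data Collector API requires.
LEN=$(printf '%s' "$BODY" | wc -c | tr -d ' ')
DATE=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
SIGN_INPUT=$(printf 'POST\n%s\napplication/json\nx-ms-date:%s\n/api/logs' "$LEN" "$DATE")
HEXKEY=$(printf '%s' "$SHARED_KEY" | base64 -d | xxd -p -c 256)
SIG=$(printf '%s' "$SIGN_INPUT" |
  openssl dgst -sha256 -mac HMAC -macopt "hexkey:$HEXKEY" -binary | base64)

# Records land in Log Analytics as a custom table named BesuNodeStatus_CL.
curl -s -X POST "https://${WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" \
  -H "Content-Type: application/json" \
  -H "Log-Type: BesuNodeStatus" \
  -H "x-ms-date: ${DATE}" \
  -H "Authorization: SharedKey ${WORKSPACE_ID}:${SIG}" \
  -d "$BODY"
```

Run it from cron on each VM (or one box that can reach all the Prometheus instances) and the heights show up as a custom table you can chart and alert on in Log Analytics.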
I want to set up monitoring to know who is executing what at the gcloud level, for example to know if someone runs:
gcloud iam service-accounts list
The goal is to have a control in case an attacker or anyone else manages to get in and enumerate the service accounts, and then to be able to see this in the Logs Explorer and create a sink towards the SIEM.
Can this be done?
Every time someone (or something, e.g. Terraform) makes changes to your GCP environment or performs some sensitive access, audit records are automatically recorded and are immutable. This means that they cannot be deleted or otherwise hidden. These audit records are written to GCP Cloud Logging and can be viewed/reviewed using the Cloud Logging explorer tools. Should you need to, you can also set up alerts or other triggers that fire automatically if certain log records (audit activities) are detected. The full documentation for GCP Audit Logs can be found here:
https://cloud.google.com/logging/docs/audit
Rather than try and repeat that information, let me encourage you to review that article in depth.
For the specific question on gcloud, it helps to realize that everything in GCP happens through APIs. This means that when you execute a gcloud command (anywhere), it results in an API request being sent to GCP to perform the task. It is at this point that GCP writes the audit records into the log.
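To make that concrete for gcloud iam service-accounts list: listing service accounts should land as a Data Access (ADMIN_READ) entry, which for the IAM API has to be enabled first. A sketch of the filter, usable in the Logs Explorer or via gcloud; the exact method name here is my assumption, so confirm it against a real entry:

```bash
# Show who listed service accounts; requires Data Access audit logs
# to be enabled for the IAM API in this project.
gcloud logging read '
  logName="projects/<project-id>/logs/cloudaudit.googleapis.com%2Fdata_access"
  AND protoPayload.methodName="google.iam.admin.v1.ListServiceAccounts"
' --project <project-id> --limit 10 --format json
```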
As far as sinking the audit trail written to Cloud Logging to a SIEM, that is absolutely possible. My recommendation is to split the overall puzzle into parts. For part 1, prove to yourself that the audit records you care about are being written to Cloud Logging; for part 2, prove to yourself that any and all Cloud Logging records can (with filters) be exported out of Cloud Logging to an external SIEM or to GCP Cloud Storage for long-term storage.
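For part 2, once the filter is right, the export itself is a single command; a sketch assuming a Pub/Sub topic as the hand-off to the SIEM (a Cloud Storage bucket destination works the same way):

```bash
# Route matching audit records to a Pub/Sub topic the SIEM can consume.
gcloud logging sinks create audit-to-siem \
  pubsub.googleapis.com/projects/<project-id>/topics/<siem-topic> \
  --log-filter='logName:"cloudaudit.googleapis.com"'

# The sink's writer service account is printed on creation; grant it
# roles/pubsub.publisher on the topic or the export will silently fail.
```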
Is there a way to monitor access to the Firestore DB by users?
I'm wondering about this because, from my understanding, at least one user must be an owner of a GCP project, which means that at least one user has full access to the DB in the production environment.
Even if the project owner is a highly trusted person, I would like to be able to monitor reads of the DB by the user to ensure there is no unnecessary access.
I tried exploring Cloud Monitoring to do so, but was not able to find any solutions.
You can monitor a user's activity (data reads, writes, etc.) on Firestore with Data Access audit logs.
Firestore's Data Access audit logs are not enabled by default, so go to IAM & Admin -> Audit Logs.
Then you can enable Firestore audit logs as in the screenshot below.
After that, the audit logs can be seen in Cloud Logging.
Refer to this page for how to find audit logs in Cloud Logging.
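Once enabled, a filter like the one below (pasted into the Logs Explorer, or via gcloud logging read) narrows things to Firestore activity by a given user. The field names are standard audit-log fields, but double-check the serviceName against one of your own entries (Datastore-mode databases log under datastore.googleapis.com instead):

```bash
# Data Access entries for Firestore activity by a specific user.
gcloud logging read '
  logName="projects/<project-id>/logs/cloudaudit.googleapis.com%2Fdata_access"
  AND protoPayload.serviceName="firestore.googleapis.com"
  AND protoPayload.authenticationInfo.principalEmail="<user@example.com>"
' --project <project-id> --limit 10
```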
We have configured Grafana user and admin roles using grafana.ini, which works great.
Now we want to give users permission to see specific dashboards, e.g. user X can see 5 dashboards and user Y can see 8 dashboards, according to some configuration (permissions).
We were able to set this up in the Grafana UI, but if the pod (K8s) fails, the details are deleted. We are using the latest Prometheus Helm chart.
My question is: how should we store this data correctly, so it survives even if the pod is restarted?
https://grafana.com/docs/grafana/latest/permissions/dashboard-folder-permissions/
https://github.com/grafana/helm-charts
https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml#L253
Any solution/direction would be helpful, as I believe Grafana stores this data somewhere, but I'm not sure where...
I found this link, which talks about storing users in the database, etc.:
https://grafana.com/docs/grafana/latest/administration/configuration/#database
Not sure what is missing, as the data should be kept in a K8s volume...
If there is any other solution or way to solve this, please let me know.
You need to deploy your Grafana instance with persistent storage. Either:
Keep using the built-in SQLite DB; just make sure to use a PVC to store its data. The default path can be set using this config property.
Use an external DB, such as MySQL or PostgreSQL, and configure Grafana to talk to it. See the database config section for more details.
Grafana's persistence will hold other settings as well, and also persist dashboards, alerts, etc.
All settings can be set via the grafana.ini Helm chart variable; see the sketch below.
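A minimal sketch of both options with the grafana/grafana chart linked above (value names from its values.yaml; if you are on the kube-prometheus-stack chart instead, nest them under the grafana: key):

```bash
# Option 1: keep SQLite but back it with a PVC so restarts don't wipe it.
helm upgrade --install grafana grafana/grafana \
  --set persistence.enabled=true \
  --set persistence.size=10Gi

# Option 2: point Grafana at an external database instead; these keys are
# rendered into grafana.ini (note the escaped dot in the chart value name).
helm upgrade --install grafana grafana/grafana \
  --set 'grafana\.ini.database.type=mysql' \
  --set 'grafana\.ini.database.host=mysql:3306' \
  --set 'grafana\.ini.database.name=grafana' \
  --set 'grafana\.ini.database.user=grafana' \
  --set 'grafana\.ini.database.password=<secret>'
```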
I was trying to configure the default Cosmos DB metrics in Azure Monitor to get requests, throughput, and other related info, as given in the documentation.
One issue I found is that if I have a collection named test in my database in a Cosmos DB account, I sometimes see two collections in Azure Monitor under my database: Test and test.
This is somewhat intermittent, and if I change the time range it sometimes starts showing only one collection. I have checked that there is no collection named "Test" (with a capital T) in my database.
Also, the results are actually split between the two metrics.
I could not find anything in the documentation about this.
Is this something on Azure's side, or is something wrong with my configuration?
(Screenshot of the issue attached.)