I'm trying to create some metrics graphs to track our API calls, and I want to start breaking them down by event name. Looking through the web interface and the CLI, I have to scroll through a lot of data to see the different types of events.
I just want the list of all Event names.
Thank you
Devon
If you just want to see a list of event names, you can use the --query option to filter the CLI output. For example:
aws cloudtrail lookup-events --query 'Events[].EventName'
[
"ConsoleLogin",
"DescribeAccountLimits",
"ConsoleLogin",
...
]
Is that what you are looking for?
You can also find a fairly comprehensive list of CloudTrail event names in the AWS CloudTrail documentation, so you know what you're looking for.
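If you only need each event name once, you can pipe the text output through standard shell tools; a small sketch (relies on the CLI's default auto-pagination):

# print each distinct EventName on its own line
aws cloudtrail lookup-events --query 'Events[].EventName' --output text | tr '\t' '\n' | sort -u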
I have ~20 services that I want to monitor differently; for example, I want the monitor to alert me if ServiceA is over 1 second but ServiceB is over 3 seconds. I currently have a text file listing the services that is set up like:
ServiceName,Threshold
For example:
ServiceA,1
ServiceB,3
(For context: eventually I want other tools to access this list of services, so I kind of just want a central list to maintain for all the tools.)
I use a for_each loop in Terraform to access each string ("ServiceA,1").
Then use ${tolist(split(",", "${each.key}"))[0]} -> Name(ServiceA)
or ${tolist(split(",", "${each.key}"))[1]} -> Threshold(1)
In my Datadog dashboard this creates the SLO and separates the name from the threshold fine. But when I want to create a monitor for this SLO I use:
query = "error_budget("${datadog_service_level_objective.latency_slo["${tolist(split(",", "${each.key}"))[0]}"].id}").over("7d") > 100"
But I am getting an error like this: [error message screenshot]
The ".id" Worked before and currently is working for the Availability monitor that is using a text file with just the names of the services. So no ",2" in the text file.
So I want to be able to loop through this list and have it create custom monitors based on the metadata I put in the text file. My end goal is to have multiple data points so I can get really granular across 100+ services eventually. I do not want to do this manually.
I have tried creating a variable for the list of services, but I need to loop through the list inside the resource along with the metadata, and I really do not see how keeping a separate list for just the metadata would even work. I would love and appreciate any feedback or advice. Thank you in advance.
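For reference, a minimal working shape of this pattern reads the file into a map keyed by service name, so each.key is the name and each.value is the threshold. A sketch (the monitor arguments are illustrative, it assumes the SLO resource is keyed by the bare service name, and the inner quotes in the query are escaped):

locals {
  # Each "ServiceA,1" line becomes the map entry ServiceA => 1.
  services = {
    for line in split("\n", trimspace(file("${path.module}/services.txt"))) :
    split(",", line)[0] => tonumber(split(",", line)[1])
  }
}

resource "datadog_monitor" "latency" {
  for_each = local.services

  name    = "${each.key} latency SLO burn"
  type    = "slo alert"
  message = "Latency SLO budget exceeded for ${each.key} (threshold: ${each.value}s)"
  # Escaping the inner quotes keeps the string literals inside the query intact.
  query   = "error_budget(\"${datadog_service_level_objective.latency_slo[each.key].id}\").over(\"7d\") > 100"
}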
I want my Discord bot to see incoming audit log entries and perform certain actions based on the entries' data while it's running. Even after reading the documentation, I still have no clue how to go about this.
How would the code look?
There isn't currently an event you can hook to define actions for audit log entries, but you can list them with the Guild.audit_logs() iterator to check the available entries, for example:
async for entry in guild.audit_logs(action=discord.AuditLogAction.ban):
print(f'{entry.user} banned {entry.target}')
For more information:
https://discordpy.readthedocs.io/en/master/api.html?highlight=audit_logs#discord.Guild.audit_logs
https://discordpy.readthedocs.io/en/master/api.html?highlight=audit_logs#audit-log-data
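Since there is no push-style event for this, one workaround is to poll on a timer and act only on entries you have not seen yet. A sketch using discord.ext.tasks, assuming discord.py 1.x as in the linked docs and a bot with the View Audit Log permission (the interval, action, and print are placeholders):

import discord
from discord.ext import commands, tasks

bot = commands.Bot(command_prefix="!")
last_seen = {}  # guild.id -> snowflake of the newest entry already handled

@tasks.loop(seconds=60)
async def watch_audit_logs():
    for guild in bot.guilds:
        newest = last_seen.get(guild.id, 0)
        # Entries come newest-first, so stop at the first already-seen one.
        async for entry in guild.audit_logs(limit=20, action=discord.AuditLogAction.ban):
            if entry.id <= last_seen.get(guild.id, 0):
                break
            newest = max(newest, entry.id)
            print(f'{entry.user} banned {entry.target}')  # act on the new entry here
        last_seen[guild.id] = newest

@bot.event
async def on_ready():
    if not watch_audit_logs.is_running():
        watch_audit_logs.start()

bot.run("YOUR_TOKEN")  # placeholder token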
We have an ELK setup, and Logstash is receiving all the logs from the Filebeat installed on the server. When I open Kibana and it asks for an index, I put just a * for the index value, go to the Discover tab to check the logs, and it shows each line of the log in a separate expandable section.
I want to be able to group the logs based on the timestamp first and then on a common ID that is generated in our logs per request to identify it from the rest. An example of the logs we get :
DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
INFO [2018-11-23 11:27:33,152][298b364850d8] Some information
INFO [2018-11-24 11:31:20,407][b66a88287eeb] Some information
DEBUG [2018-11-23 11:31:20,407][b66a88287eeb] Some information
I would like to see all logs for request ID 298b364850d8 in the same dropdown, given they are continuous logs. Then it can break into a second dropdown, again grouped by request ID b66a88287eeb, in timestamp order.
Is this even possible or am I expecting too much from the tool?
OR, if there is a better strategy for grouping logs, I'm more than happy to listen to suggestions.
I have been told by a friend that I could configure Logstash to group logs based on some regex and the like, but I just don't know where and how to configure it to do the grouping.
I am completely new to the whole ELK stack, so bear with my questions, which might be quite elementary in nature.
Your question is, as you say, a little vague and broad. However, I will try to help :)
Check the index that you define in the Logstash output. This is the index that needs to be defined in Kibana, not *.
Create an index pattern to connect to Elasticsearch. This will expose the fields of the logs and allow you to filter as you want.
I recommend using a GUI tool (like Cerebro) to better understand what is going on in your ES. It will also help you get a clearer picture of the indices you have there.
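As for the grouping your friend mentioned: that part lives in the Logstash filter section, where a grok pattern can split each line into level, timestamp, and request ID fields. A sketch matched to the sample lines above (the field names are my own):

filter {
  grok {
    # e.g. DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
    match => { "message" => "%{LOGLEVEL:level}\s+\[%{TIMESTAMP_ISO8601:log_time}\]\[%{WORD:request_id}\]\s+%{GREEDYDATA:log_message}" }
  }
  date {
    # Use the timestamp from the line itself rather than the ingest time.
    match => ["log_time", "yyyy-MM-dd HH:mm:ss,SSS"]
  }
}

Once request_id is its own field, you can search Discover for request_id:298b364850d8, add it as a column, or sort by @timestamp so each request's lines stay together.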
Good Luck
You can use the @timestamp filter and a search query to filter out what you want, as in the sample image below. [sample screenshot]
In API Management's developer portal, we have the problem that all operation (API) calls are listed in one long list, making it difficult for our customers to figure out which calls belong together. What we'd like is the ability to group calls by something, e.g. the controller name. (In Swagger this can be done using the tags field in the Swagger specification.)
In the templates section, there's an option Operation list (grouped), which, judging by its name, might solve our problem. But how can I use this template?
I'm currently importing the API list using the OpenAPI specification.
Update 1:
This is what it looks like in a sample operation list for us [screenshot]; there's no search box available.
The Operation list (grouped) template is used when the user selects the "Group by tags" option, i.e. the rightmost button next to the search field on the APIs/operations list. Here is how it looks in the UI: [screenshot]
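For completeness: the grouping keys off the tags of the operations in the imported OpenAPI document, so make sure each operation carries one. A minimal sketch (the path and tag names are illustrative):

paths:
  /orders:
    get:
      tags:
        - Orders          # operations sharing this tag are grouped together
      operationId: listOrders
      responses:
        '200':
          description: OK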
I need to be able to create a different item from an existing one, but still keep most of the details and only change some information.
Thanks,
Your best bet is to use event receivers. The ItemAdding (synchronous) or ItemAdded (asynchronous) event receivers will allow you to access the data from the item that is being added or was just added.
You can use this information to create a different item, either in the same list, in a new list, or in any type of storage medium you can get to.
Custom workflows will also give you much of the same ability.
Excellent article on ER's --> http://developers.de/blogs/adis_jugo/archive/2009/03/12/develop-and-deploy-a-sharepoint-event-receiver-from-the-scratch.aspx
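A minimal sketch of that event receiver approach (the class name, target list, and fields are mine; assumes full-trust, server-side code):

using Microsoft.SharePoint;

public class CopyOnAddReceiver : SPItemEventReceiver
{
    // Fires after the new item has been committed.
    public override void ItemAdded(SPItemEventProperties properties)
    {
        using (SPWeb web = properties.OpenWeb())
        {
            SPListItem source = properties.ListItem;
            SPList targetList = web.Lists["Copies"];  // hypothetical destination list

            SPListItem copy = targetList.Items.Add();
            copy["Title"] = source["Title"];          // keep most of the details...
            copy["Status"] = "Draft";                 // ...and change some information
            copy.Update();
        }
    }
}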
Have you tried using SPListItem.Copy( oldItemURL, newItemURL ) ?