We have been ingesting logs from an MDM for approximately 2 years. Recently, the MDM folks upgraded security on their (Windows) hosts. Logstash previously showed <6> March 29 HH:MM:SS AirWatch... as the beginning of the message. With the security updates, Logstash now interprets this as a field within _source.
When we send this to Elasticsearch, it quickly tells us we have more than 1,000 fields and crashes.
We need help on how to either delete this entry (the log receipt timestamp is sufficient) or insert a consistent field in front of it.
We can make the change in the message field with mutate / gsub, but that has no impact on _source and the incorrect "field".
Changed the codec from the default (json) to line.
Getting clean input now.
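For anyone hitting the same thing, the change amounts to something like this on the input side (the tcp plugin and port below are placeholders; keep whatever input you already have and just swap the codec):

input {
  tcp {
    port  => 5514      # placeholder - use your existing syslog port
    codec => line      # previously json, which split the syslog header into bogus fields
  }
}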
I want to extract the alerts log in CSV format to show that I have received these types of alerts.
But I am unable to extract it from an Azure log query. Or do I have to install some agent?
You may list all existing alerts, where the results can be filtered on the basis of multiple parameters (e.g. time range). The results can then be sorted on the basis of specific fields, with the default being lastModifiedDateTime:
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.AlertsManagement/alerts?api-version=2018-05-05
Similar, with optional parameters:
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.AlertsManagement/alerts?targetResource={targetResource}&targetResourceType={targetResourceType}&targetResourceGroup={targetResourceGroup}&monitorService={monitorService}&monitorCondition={monitorCondition}&severity={severity}&alertState={alertState}&alertRule={alertRule}&smartGroupId={smartGroupId}&includeContext={includeContext}&includeEgressConfig={includeEgressConfig}&pageCount={pageCount}&sortBy={sortBy}&sortOrder={sortOrder}&select={select}&timeRange={timeRange}&customTimeRange={customTimeRange}&api-version=2018-05-05
To check the other URI parameters for logging, you may refer to this URL.
And finally, when you have the response(s) in JSON format, you can get them automatically converted into CSV format using any of the freely available online conversion utilities (like this service HERE).
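If you would rather script that last step instead of using an online converter, a minimal Python sketch along these lines should work. The bearer token and subscription ID are placeholders, and the field names read from properties.essentials are assumptions about the response shape; adjust them to whatever the actual JSON contains:

import csv
import requests

subscription_id = "<subscription-id>"   # placeholder
token = "<bearer-token>"                # placeholder - obtain from Azure AD

url = ("https://management.azure.com/subscriptions/" + subscription_id +
       "/providers/Microsoft.AlertsManagement/alerts?api-version=2018-05-05")
response = requests.get(url, headers={"Authorization": "Bearer " + token})
response.raise_for_status()

# Flatten the alerts into a simple CSV
with open("alerts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "severity", "monitorCondition", "lastModifiedDateTime"])
    for alert in response.json().get("value", []):
        essentials = alert.get("properties", {}).get("essentials", {})
        writer.writerow([alert.get("name"),
                         essentials.get("severity"),
                         essentials.get("monitorCondition"),
                         essentials.get("lastModifiedDateTime")])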
We have an ELK setup, and Logstash is receiving all the logs from the Filebeat installed on the server. When I open Kibana and it asks for an index, I put just a * for the index value, go to the Discover tab to check the logs, and it shows each line of the log in a separate expandable section.
I want to be able to group the logs based on the timestamp first and then on a common ID that is generated in our logs per request to identify it from the rest. An example of the logs we get:
DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
INFO [2018-11-23 11:27:33,152][298b364850d8] Some information
INFO [2018-11-24 11:31:20,407][b66a88287eeb] Some information
DEBUG [2018-11-23 11:31:20,407][b66a88287eeb] Some information
I would like to see all logs for request ID 298b364850d8 in the same dropdown, given they are continuous logs. Then it can break into a second dropdown, again grouped by request ID b66a88287eeb, in order of timestamp.
Is this even possible or am I expecting too much from the tool?
OR if there is a better strategy to grouping of logs I'm more than happy to listen to suggestions.
A friend told me that I could configure Logstash to group logs based on some regex, but I just don't know where and how to configure it to do the grouping.
I am completely new to the whole ELK stack, so bear with my questions, which might be quite elementary in nature.
Your question is, as you say, a little vague and broad. However, I will try to help :)
Check the index that you define in the Logstash output. This is the index that needs to be defined in Kibana, not *.
Create an Index Pattern to connect to Elasticsearch. This will parse the fields of the logs and allow you to filter as you want (see the filter sketch below).
I recommend using a GUI tool (like Cerebro) to better understand what is going on in your ES. It will also help you get a better idea of the indices you have there.
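For the request ID in the question to show up as its own filterable field, Logstash first has to parse it out of the raw line. A rough grok + date filter sketch for the log lines shown in the question (the field names loglevel, request_id, and log_message are my own choice, not anything Filebeat or Logstash produces by default):

filter {
  grok {
    # DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
    match => { "message" => "%{LOGLEVEL:loglevel} \[%{TIMESTAMP_ISO8601:log_timestamp}\]\[%{DATA:request_id}\] %{GREEDYDATA:log_message}" }
  }
  date {
    # use the timestamp from the log line as the event @timestamp
    match => ["log_timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
  }
}

With request_id as a real field you can then filter or sort on it in Discover, which is about as close as Kibana gets to the grouping you describe.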
Good Luck
You can use the @timestamp filter and a search query, as in the sample image below, to filter what you want.
I'm using the snapshot version of Apache Chainsaw (http://people.apache.org/~sdeboy) and I just need to read in a text log file. It works fine when I'm reading in keyword columns, e.g. LEVEL, MESSAGE, etc., but when I want to add in a user-defined column, it doesn't work.
To read in the text file, I use TIMESTAMP: LOGGER: LEVEL : MESSAGE : PROP(TIER) as my log format, where TIER is my user-defined property.
User-specified properties via PROP work fine in general - I'm pretty sure the issue is that the MESSAGE field is not the last field in your log format.
Can you reformat your log format to make MESSAGE the last field?
If you can't, I'd try replacing the MESSAGE entry in your log format with a user-defined property like PROP(TEXT).
Either option may work for you.
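For example, the two variants would look roughly like this (same delimiters as the original format; which one parses cleanly depends on your actual log lines):

TIMESTAMP: LOGGER: LEVEL : PROP(TIER) : MESSAGE
TIMESTAMP: LOGGER: LEVEL : PROP(TEXT) : PROP(TIER)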
Is it possible to delete audit log data pertaining to a specific entity only? We have a huge audit log which we want to reduce by purging the log data of specific entities, though we do want to keep the other entities' logs.
There is no supported method for deleting audit log entries by entity type. The only supported method for audit deletion is by date (i.e., all records older than X date). *Note: depending on the SQL environment, the available end dates may be limited to the end date of an audit log partition.*
That said, there is an unsupported method for meeting this requirement. CRITICAL: Take your CRM server offline, back up the database, and test a restore before attempting - there is no support available for what I'm going to suggest, since this goes against the supported actions on the Dynamics CRM 2011 SQL database.
The audit logs are stored in a table dbo.AuditBase. This table does not have an extension base, so there is only one record per audit entry to worry about.
You will need the ObjectTypeCode of the entity. You can get this from the database by running the following script:
SELECT [EntityId],[Name],[ObjectTypeCode]
FROM [].[MetadataSchema].[Entity] ORDER BY Name
Now that you have the ObjectTypeCode, simply replace the xxxx in the script below with that value and run the script.
DELETE FROM [].[dbo].[AuditBase] WHERE ObjectTypeCode = xxxx
Audit records for specific entity type are now gone!
I know it isn't quite what you are looking for, but there is a DeleteAuditDataRequest API message that you can call to delete all audit data before a specific date.
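A rough sketch of that call, assuming orgService is an already-connected IOrganizationService (the cut-off date is a placeholder):

// Deletes all audit data created before EndDate (subject to partition boundaries on SQL)
var deleteRequest = new Microsoft.Crm.Sdk.Messages.DeleteAuditDataRequest
{
    EndDate = new DateTime(2013, 1, 1)   // placeholder cut-off date
};
orgService.Execute(deleteRequest);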
As far as deleting specific records, I don't believe you can. If you try the following code, you will get the error: The 'Delete' method does not support entities of type 'audit'
orgService.Delete("audit",auditId);
If it is an on-premise environment, you have direct DB access and can archive the audit records or delete them via SQL.
Hope that helps.
I'm currently writing an application that moves Notes documents between databases based on the amount of days that have elapsed from the creation/modified/last accessed dates. I would just like to get ideas on a simple and convenient way to create documents with specific dates, without having to change the time on the Domino server, so that I could test out my application.
The best way I found so far was to create a local replica and change the system clock to the date I want. Unfortunately there are problems associated with this method. It does not work on the modified date - I'm not sure how it is getting the modified date information when the location is set to Island (Disconnected) - and it also changes the modified and last accessed dates when the documents are replicated to the server replica.
Someone suggested trying to create a DXL of the document, modify the date time in the DXL file, then import it back into the database as a Notes document; but that does not work. It just takes on the date-time that it was created.
Can anyone offer any other suggestions?
You can set the created date for a document by setting the UNID (which is fundamentally a struct of timestamps, although the actual implementation has changed in recent versions). Accessed and modified times, though, would be unsettable from within the Notes/Domino environment, since the changes you make would be overwritten by the process of saving the changes. If you have a flair for adventure and a need to run with scissors, you could make the changes in the database file itself either programmatically from an external application, or manually with a hex editor. (Editing the binary will work -- folks have been using hex editors to clear the "hide design" flag safely for years. Keep in mind that signed docs will blow up badly, and that you need to ensure that local encryption is off for the database file.)
There's actually a very simple way to spoof the creation date/time: just add a field called $Created with whatever date/time you want. This is alluded to in the Notes C API header file nsfdata.h:
Time/dates associated with notes:
OID.Note Can be Timedate when the note was created
(but not guaranteed to be - look for $CREATED
item first for note creation time)
Obtained by NSFNoteGetInfo(_NOTE_OID) or
OID in SEARCH_MATCH.
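A quick LotusScript sketch of that trick (the form name and date below are placeholders; $Created is the item described above):

Dim session As New NotesSession
Dim doc As NotesDocument
Dim created As New NotesDateTime("01/15/2010 09:30:00 AM")   ' placeholder creation date
Set doc = session.CurrentDatabase.CreateDocument
doc.Form = "TestForm"                                        ' placeholder form name
Call doc.ReplaceItemValue("$Created", created)
Call doc.Save(True, False)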
Unfortunately, there's no analogous technique for spoofing the mod or access dates. At least none that's ever been documented, as far as I know.
I imagine given how dependent Lotus Notes is on timestamps (for replication, mainly), there isn't an API call that allows you to change the modified, created, or last access dates of a note. (More on the internals of Lotus Notes can be found here.)
I dug around the Notes C API documentation and found only one mention of how to get/set information in the note's header, including the modified date. However, the documentation states that when you try to update that note (i.e. write it to disk), the last modified date will be overwritten with the date/time it is written to disk.
As an alternative, I would suggest creating your own set of date items within the documents that only you control, for example MyCreated, MyModified, and MyAccessed, and reference those in your code that moves documents based on dates. You would then be able to change these dates as easily as changing any other document item (via agents, forms, etc.)
For MyCreated, create a hidden calculated form field with the formula @Created or @Now. Set the type to computed when composed.
For MyModified, create a hidden calculated form field with the formula @Now, and set the type to computed.
MyAccessed gets a bit tricky. If you can do without it, I suggest you work with just MyCreated and MyModified. If you need it, you should be able to manage it by setting a field value within the QueryOpen or PostOpen events. Problems occur if your users have only read access to a document - the code to update the MyAccessed field won't be able to store that value.
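Something like this in the form's PostOpen event would cover the MyAccessed case, with the caveat above about read-only users (a rough sketch; the field name comes from the suggestion above):

Sub Postopen(Source As NotesUIDocument)
    Dim doc As NotesDocument
    Set doc = Source.Document
    ' stamp the access time; the save will fail for users with read-only access
    Call doc.ReplaceItemValue("MyAccessed", Now)
    Call doc.Save(True, False)
End Sub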
Hope this helps!