I have Windows event logs being sent from NXLog to Logstash. My Windows box and my receiving server are both in UTC.
NXLog appears to be adding an EventTime field to the logs it ships, and the datetime is 7 hours behind UTC. No explanation: I'm not setting it, and it doesn't match the timezone of either of my VMs or my host.
What is this EventTime? Is NXLog creating it? Why does it have the wrong timezone or date?
EventTime stores the value of TimeCreated in the case of im_msvistalog. When EventTime is converted to a string (e.g. by to_json()) it is shown in local time.
The application server shows the date as UTC, and the database server also shows UTC. When we insert SYSTIMESTAMP into a timestamp column from the database server it is stored as UTC, but when we insert from the application server the data gets loaded in the MST timezone.
I need some help figuring out what the issue is.
How do you insert the date from the application server? Most likely as LOCALTIMESTAMP or CURRENT_TIMESTAMP.
You have several possibilities:
Change timezone of your application server to UTC
Use SYSTIMESTAMP instead of LOCALTIMESTAMP or CURRENT_TIMESTAMP
Insert LOCALTIMESTAMP AT TIME ZONE 'UTC' or SYS_EXTRACT_UTC(LOCALTIMESTAMP)
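The options above can be sketched in Oracle SQL as follows. The table and column names (my_table, ts_col) are hypothetical, for illustration only; the functions (SYSTIMESTAMP, LOCALTIMESTAMP, SYS_EXTRACT_UTC) are standard Oracle built-ins:

```sql
-- Option 2: SYSTIMESTAMP returns the database server's time,
-- so the result follows the DB server's (UTC) clock.
INSERT INTO my_table (ts_col) VALUES (SYSTIMESTAMP);

-- Option 3a: take the session's local time and normalize it to UTC.
INSERT INTO my_table (ts_col) VALUES (SYS_EXTRACT_UTC(LOCALTIMESTAMP));

-- Option 3b: the AT TIME ZONE equivalent of the above.
INSERT INTO my_table (ts_col) VALUES (LOCALTIMESTAMP AT TIME ZONE 'UTC');
```

With either variant of option 3, it no longer matters what timezone the application server's session runs in; the stored value is always UTC.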
I have Zabbix version 3.4 and two templates: one for monitoring the OS and the other for monitoring databases. I have a few servers with CentOS 6.9 added to these templates, and everything works just fine.
Then I added 4 servers with CentOS 7 to these templates. The items work correctly and return the expected results. The problem is that when a trigger is activated for these 4 servers, it doesn't resolve: it stays active and we see it in the dashboard.
For example, in the database template we have an item for service status: if it is 1 the service is running, and if it is anything other than 1 the service is not running. I stopped the service on one of the CentOS 7 servers; the value the agent got was 0 and the trigger was activated. Then I started the service. In Latest data I can see that the value is 1, which means the service is running, but the trigger didn't resolve and is still up.
Then I repeated the steps above on one of the CentOS 6.9 servers, and everything worked just fine.
Why does this happen, and how can I fix it?
Update:
the trigger expression is:
{log-b:db2stat.db2instance_service[].last()}<>1
Long story short: check the DB logs to see whether some inserts/updates are failing (especially into the event_recovery and problem tables).
Short story long:
We observed similar behaviour on ZBX 4.4, and only with certain triggers checking the last 10 minutes of data (e.g. item_key.str('problem',10m)=1). The problem gets detected, but later it will not get resolved, even after several days and even though the trigger conditions are no longer matched.
In our particular case:
I looked into the DB and found the event with the appropriate eventid (e.g. 123) in the events table, and noted down its objectid (e.g. 100123)
Then I queried the events table for that specific objectid (100123) and found that there was indeed a "resolution" event (e.g. 125)
When I checked the event_recovery table, I couldn't find an entry matching those two eventids (while other triggers did have an entry in event_recovery after they got resolved)
I simply created the entry: insert into event_recovery (eventid, r_eventid) VALUES ('123', '125');
That is not enough, however, as a similar pairing needs to be adjusted in the problem table
In the problem table I found the problem with my eventid (123) and simply mapped the recovery event to it: update problem set r_eventid='125' where eventid='123' and objectid='100123';
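Put together, the diagnosis and one-time fix above look roughly like this in SQL. The IDs (123, 125, 100123) are the example placeholders from this thread; the table and column names follow the Zabbix server schema:

```sql
-- 1. Find the stuck problem event and note its objectid.
SELECT eventid, objectid, clock, value FROM events WHERE eventid = 123;

-- 2. Look for a later OK ("resolution") event on the same trigger.
SELECT eventid, clock, value FROM events WHERE objectid = 100123 ORDER BY clock;

-- 3. Check whether the pairing is missing from event_recovery.
SELECT * FROM event_recovery WHERE eventid = 123;

-- 4. One-time workaround: create the missing pairing by hand.
INSERT INTO event_recovery (eventid, r_eventid) VALUES (123, 125);
UPDATE problem SET r_eventid = 125 WHERE eventid = 123 AND objectid = 100123;
```

Note that step 4 only cleans up the already-stuck events; it does not prevent the next occurrence.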
The problem with this is that it is not a solution, just a one-time workaround. The issue keeps popping up, and at this point we suspect the problem is on the database side (we have a primary+standby DB with selects directed to the standby, which can cause certain select operations that end up writing to fail, as the standby DB is in read-only mode).
We will try to redirect everything to the primary DB to see if it helps.
I created a managed instance in Azure using the UTC timezone at creation time. Now I want to change the timezone to GMT. Is there any way to change the timezone of a SQL Server managed instance?
This can't be changed once the managed instance is created. You need to redeploy the managed instance with the correct time zone and use cross-instance point-in-time restore (PITR) to move the databases.
The date/time is derived from the operating system of the computer on which the instance of SQL Server is running.
I searched a lot, and according to my experience we cannot change the timezone once the SQL Server instance is created.
The only thing we can do is convert from the UTC timezone to GMT. Many people have posted similar problems on Stack Overflow, such as:
Date time conversion from timezone to timezone in sql server
SQL Server Timezone Change
Azure provides the built-in function AT TIME ZONE (Transact-SQL), which applies to SQL Server 2016 or later. The AT TIME ZONE implementation relies on a Windows mechanism to convert datetime values across time zones.
inputdate AT TIME ZONE timezone
For example, to convert values between different time zones:
USE AdventureWorks2016;
GO
SELECT SalesOrderID, OrderDate,
OrderDate AT TIME ZONE 'Pacific Standard Time' AS OrderDate_TimeZonePST,
OrderDate AT TIME ZONE 'Central European Standard Time' AS OrderDate_TimeZoneCET
FROM Sales.SalesOrderHeader;
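For the specific UTC-to-GMT case asked about here, a common pattern is to first tag the stored value as UTC (since the instance stores times in UTC, but a plain datetime carries no zone) and then convert it. 'GMT Standard Time' is the Windows time zone name covering GMT (it includes UK daylight saving); use 'UTC' as the target instead if you want GMT with no DST shifts:

```sql
-- Tag the stored datetime as UTC, then convert it to the GMT zone.
DECLARE @utc datetime2 = SYSUTCDATETIME();
SELECT @utc AT TIME ZONE 'UTC' AT TIME ZONE 'GMT Standard Time' AS GmtTime;
```

The double AT TIME ZONE is needed because the first application attaches an offset to the zone-less value and the second performs the actual conversion.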
I don't have an Azure SQL MI, so I could not test it for you.
Hope this helps.
We have the following infrastructure to index application log data into ELK.
filebeat -------> Logstash ------> Elastic search-----> kibana
Everything was working fine, but suddenly the Logstash server consumes 99.9% CPU, after which no indexing happens. In Filebeat we can see "Error publishing events (retrying): EOF".
If we restart the Logstash service it starts indexing again, but once CPU reaches 99.9% it stops doing anything.
Elastic search and kibana : AWS service
Logstash : AWS Medium server
Filebeat : AWS instance of our application test environment.
Please help us to resolve this issue.
Let me know if you need any other details.
Thanks in advance.
Thanks Arivazhgan and daniel for your support and suggestions.
I found the problem: my filter logic was taking too long to process the log messages. I modified the log message format and optimized the grok expression. Now everything is working fine.
The following changes were made:
01. I used mutate to convert a few fields to int and float. I made this change in the pattern file itself.
02. I modified the log message format.
03. I optimized the grok expression.
We recently moved the database to Amazon RDS SQL Server. We are having some difficulties with the datetimes (timezone). By default RDS provides the UTC date. Is there any way to override/manipulate the local timezone at the database level in SQL Server? Please help me with this.
Thanks in advance,
SqlLover
https://forums.aws.amazon.com/thread.jspa?messageID=161339
The time zone is currently not modifiable. There is indeed a Parameter value in rds-describe-db-parameters called "default_time_zone" but it's marked as not modifiable.