Why use TO_LOCALTIME when analyzing IIS logs

I've searched for several examples of analyzing IIS logs using Log Parser, taking time into account. For example, this query shows the number of hits per hour:
SELECT
    QUANTIZE(TO_LOCALTIME(TO_TIMESTAMP(date, time)), 3600) AS Hour,
    COUNT(*) AS Hits
FROM D:\Logs\*.log
GROUP BY Hour
However, I cannot understand why TO_LOCALTIME is used. Also, if there is a time difference (and a difference in results depending on whether TO_LOCALTIME is used or not), why is that? Thank you!

IIS uses UTC for all timestamps in its logs, regardless of the time zone of the server, so to get your local time, you can use TO_LOCALTIME.
I guess if you are fine with UTC, you don't need to use TO_LOCALTIME.
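To make the difference concrete, here's a minimal sketch in Node.js/TypeScript (illustrative only, not Log Parser itself; the timestamp is made up) of how one UTC log entry lands in a different hourly bucket once it's converted to local time:

// An IIS log records date/time in UTC, e.g. 2017-03-03 23:30:00.
const utc = new Date(Date.UTC(2017, 2, 3, 23, 30, 0)); // month is 0-based

// Bucketing by UTC hour (no TO_LOCALTIME): hour 23 on 3 March.
const utcHour = utc.getUTCHours();

// Bucketing by local hour (what TO_LOCALTIME effectively does): on a
// UTC+9 machine this is hour 8 on 4 March, a different bucket entirely.
const localHour = utc.getHours();

console.log({ utcHour, localHour });

That shift is why the per-hour counts can differ depending on whether you apply TO_LOCALTIME.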

Related

Ongoing time frame in Azure Application Insights

This line is in my Azure Application Insights Kusto query:
pageViews
| where timestamp between(datetime("2020-03-06T00:00:00.000Z")..datetime("2020-06-06T00:00:00.000Z"))
Each time I run it, I manually replace the datetime values with the current date and the current date minus ~90 days. Is there a way to write the query so that no matter what day I run it, it uses that day minus 90 days by default?
The reason for 90 is that I believe Azure Application Insights allows a maximum of the most recent 90 days to be exported. In other queries I might choose to use minus 30 days or minus 7 days, if that's possible.
If this is easily spotted in Microsoft documentation and I have missed it in my exploration, I apologize.
Thank you for any insight anyone may have.
IIUC, you're interested in running something like this:
pageViews
| where timestamp between(startofday(ago(90d)) .. startofday(now()))
(Depending on your requirements, you can omit the startofday()s, use endofday(), or perform any other datetime manipulation/arithmetic.)
It should be easy to use the ago operator. The query is as below:
pageViews
| where timestamp > ago(90d) // d means days here
And regarding "The reason for 90 is I believe Azure Application Insights allows a maximum of the most recent 90 days to be exported": take a look at the Continuous Export feature. It's different from exporting via query, and you can choose whichever of the two suits your requirements.

Dealing with a daily time window across timezones in Node.js

Currently, I'm working on a project that requires a window of time to be selected and used as a valid window within which to trigger an event. This window is selected by the user as a start time (24-hour time), an end time (24-hour time), and a timezone. My goal is then to convert these times into UTC, based on the offset of the provided timezone, and save them into MySQL.
The main problem is I have set up the entire flow to deal with time-only data types from the mobile app all the way back to the MySQL database. I have been trying to figure out a solution that won't require changing all those data types to include date and time which would require changes in many parts of the project.
Can I make this calculation without dealing with the date? I don't believe I can as timezone offsets range from -12:00 to +14:00 which would push some windows to the next or previous days when turned into UTC.
Is the correct approach to add in the date component and then continue to update it as time progresses? I also want to ensure daylight savings doesn't create errors.
Ultimately I would like to take the best approach, so if I have to change a lot now, I'd rather do that than deal with a headache later. Any thoughts would be greatly appreciated!
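A minimal sketch of the conversion (using Luxon here, though any IANA-zone-aware library works; the 14:00 New York window is a made-up example) shows why the date component matters, since DST changes the offset:

import { DateTime } from "luxon";

// Hypothetical user input: a window starting at 14:00 in New York.
const zone = "America/New_York";

function windowStartUtc(year: number, month: number, day: number): string {
  return DateTime.fromObject(
    { year, month, day, hour: 14, minute: 0 },
    { zone }
  ).toUTC().toFormat("yyyy-MM-dd HH:mm");
}

console.log(windowStartUtc(2021, 1, 15)); // EST (UTC-5) -> 2021-01-15 19:00
console.log(windowStartUtc(2021, 7, 15)); // EDT (UTC-4) -> 2021-07-15 18:00

The same wall-clock window maps to different UTC times across the DST boundary (and can cross midnight into another day), so a stored time-only UTC value can't represent it. Storing the user's local times plus the IANA zone name, and converting per occurrence, sidesteps both the date problem and the DST problem.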

Azure: Resource usage API issue

I tried to pull Azure resource usage data for billing metrics. I followed the steps mentioned in this blog post to get usage data for resources:
https://msdn.microsoft.com/en-us/library/azure/mt219001.aspx
Even if I set the start and end time parameters in the URL, they don't take effect; it returns the entire output [from the time the resource was created/added].
For example :
https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedStartTime=2017-03-03T00%3a00%3a00%2b00%3a00&reportedEndTime=2017-03-04T00%3a00%3a00%2b00%3a00&aggregationGranularity=Hourly&showDetails=true
As per the above URL, it should return the data between 2017-03-03 and 2017-03-04, but it shows data from 2 March [2017-03-02]. I don't know why it returns the entire output and the time filter is not working.
Note: the end time parameter does take effect, meaning it shows output up to the time mentioned in the end time, but the start time is not honored.
Does anyone have a suggestion on this?
So there are a few things to consider: there is usage date/time and then there is reported date/time. The former tells you the date/time when the resources were used, while the latter tells you the date/time when this information was received by the billing sub-system. There will be some delay between when the resources are used and when they are reported. From this link:
Set {dateTimeOffset-value} for reportedStartTime and reportedEndTime to valid dateTime values. Please note that this dateTimeOffset value represents the timestamp at which the resource usage was recorded within the Azure billing system. As Azure is a distributed system, spanning across 19 datacenters around the world, there is bound to be a delay between the resource usage time (when the resource was actually consumed) and the resource usage reported time (when the usage event reached the billing system) and callers need a predictable way to get all usage events for a subscription for a given time period.
The query only lets you search by reported date/time; there is no provision for searching by usage date/time. However, the data returned to you contains the usage date/time, not the reported date/time.
Long story short, because of the delay in propagating the usage information to the billing sub-system, the behavior you're seeing is correct. In my experience, it takes about 24 hours for all the usage information to show up in the billing sub-system.
The way we handle this scenario in our application is to fetch the data for a longer duration and then pick out only the data we're interested in seeing. So, for example, if we need to see the data for 1 March, we query for reported date/time from 1 March to, say, 4 March (today's date) and then discard any data whose usage date is not 1 March.
If we don't find any data (which is quite possible and is happening in your case as well), we simply tell the users that usage information is not yet available.
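In code, that over-fetch-and-filter approach looks roughly like this sketch (TypeScript; the record shape is a guess, so adjust the field names to the actual UsageAggregates response):

interface UsageRecord {
  usageStartTime: string; // when the resource was actually consumed
  quantity: number;
}

// Query a wide reported-time window (e.g. 1 March through today), then
// keep only the records whose *usage* date is the day we care about:
function filterByUsageDate(records: UsageRecord[], day: string): UsageRecord[] {
  return records.filter((r) => r.usageStartTime.startsWith(day));
}

// const march1 = filterByUsageDate(allFetchedRecords, "2017-03-01");
// An empty result means the usage likely hasn't propagated to the
// billing sub-system yet (roughly 24 hours in practice).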

Run a CRON job that depends on entries of a database in NodeJS using AWS

I want to create schedules that depend on entries in a database. For example, if there's an entry in the database with a timestamp of 2:00 PM, 3 Apr, I want to send users an email on 2 Apr. I also want to send notifications at 1:55 PM on 3 Apr.
So this means I have to look into the database, find the entries after the current timestamp, see if they meet the criteria for notification (like 5 minutes to the timestamp, or 1 day to the timestamp), and send the notification or email. I'm only worried that polling every minute seems like too much overhead. Are the AWS web workers built for this sort of thing?
Any suggestions on how this can be accomplished?
I don't think crontab will be the best choice, but if you're familiar with it, it's fine.
First you should estimate how frequently your entries are created. If it's, let's say, only a couple of hundred a day, my suggestion is to create the cron job right after the entry is created. But if it's more than a hundred a minute, polling will be fine.
There are also side effects to consider, though, like canceling or updating the cron jobs.
I think it's better to use a proper MQ (message queue).
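For the polling variant, the once-a-minute job can be as simple as this sketch (TypeScript with the node-cron package; the in-memory entries array and the console.log calls are stand-ins for your database queries and mail/push services):

import cron from "node-cron";

interface Entry { id: number; eventAt: number } // eventAt in epoch ms

const entries: Entry[] = []; // stand-in for a DB table indexed on eventAt

const MINUTE = 60_000;
const DAY = 24 * 60 * MINUTE;

cron.schedule("* * * * *", () => {
  const now = Date.now();
  for (const e of entries) {
    const untilEvent = e.eventAt - now;
    // ~5 minutes before the event: send the notification.
    if (untilEvent > 4 * MINUTE && untilEvent <= 5 * MINUTE) {
      console.log(`notify for entry ${e.id}`);
    }
    // ~1 day before the event: send the mail.
    if (untilEvent > DAY - MINUTE && untilEvent <= DAY) {
      console.log(`mail for entry ${e.id}`);
    }
  }
});

With an index on the timestamp column, the real query for each window is cheap, so a one-minute poll is usually not as much overhead as it sounds; the MQ alternative instead enqueues one delayed message per entry at creation time.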

Time of CloudFront logs

I'm trying to collect CloudFront logs from an S3 bucket and put them into a database.
The date/time of the logs in these files is a real problem for me.
Are they logged in some standard time, or in the local time of the x-edge-location?
If I want to convert them to Japan Standard Time, should I calculate the offset based on the x-edge-location?
I have one more question: are logs delayed when they are written to the S3 bucket?
When I observe my S3 bucket using "s3cmd ls s3://mys3bucket/", the log count keeps changing for up to 2 hours.
Per this forum thread, "The date and hour are specified according to the GMT time zone":
https://forums.aws.amazon.com/thread.jspa?threadID=30346
As for the delay, I found this answer in the documentation:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html
"CloudFront saves log files within 24 hours after receiving the corresponding requests."
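So no per-edge-location math is needed; converting to Japan Standard Time is just rendering the GMT timestamp in UTC+9. A tiny sketch (TypeScript; the date/time values are made up):

// CloudFront's "date" and "time" log fields are GMT regardless of the
// edge location that served the request.
const logDate = "2017-03-03";
const logTime = "15:30:00";

const gmt = new Date(`${logDate}T${logTime}Z`); // parse as UTC

// Render the same instant in Asia/Tokyo (UTC+9, no DST):
console.log(gmt.toLocaleString("ja-JP", { timeZone: "Asia/Tokyo" }));
// -> 2017/3/4 0:30:00 (already the next day in Japan)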
