I have a Node.js script that generates HAR files by connecting to Chrome (in remote debugging mode).
What I'm trying to figure out is what the "timestamp" property in the "Network.requestWillBeSent" event stands for. This is a sample "message.params" object:
{
requestId: '6868.1321',
frameId: '6868.2',
...
timestamp: 70683.66357,
wallTime: 1449020270.68483
}
I initially thought that timestamp was in milliseconds or seconds, so I tried doing addition/subtraction against wallTime, but none of the results made much sense. Can anybody help?
Let me know if you need additional info on this. Thanks!
UPDATE:
I think I got it figured out. It seems wallTime is Unix time in seconds, while timestamp is a monotonic clock reading, also in seconds, counted from an arbitrary origin, so it is only useful for computing deltas between events. I'll go with this for now. I'll provide another update in case something new turns up.
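For reference, here is a minimal Node.js sketch of how the two values can be combined under the assumption above (both in seconds, timestamp monotonic); the anchoring approach is just an illustration, not something defined by the DevTools protocol:
// Sketch: map monotonic DevTools timestamps to wall-clock times by anchoring
// the first observed event. Assumes both fields are expressed in seconds.
let anchor = null;

function toEpochMillis(params) {
  if (!anchor) {
    anchor = { timestamp: params.timestamp, wallTime: params.wallTime };
  }
  // Offset of this event relative to the first one, in seconds.
  const deltaSeconds = params.timestamp - anchor.timestamp;
  return (anchor.wallTime + deltaSeconds) * 1000; // epoch milliseconds
}

// Example with the values from the question (the first event maps back to its own wallTime):
const when = toEpochMillis({ timestamp: 70683.66357, wallTime: 1449020270.68483 });
console.log(new Date(when).toISOString());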
I'm trying to integrate Cronofy into an app using the Python package pycronofy, but I can't seem to get the results back in the right timezone even after setting it as required.
events = cronofy.read_events(calendar_ids=('CAL_ID',),from_date=from_date, to_date=to_date, tzid='Asia/Kolkata')
print (events.json())
The returned JSON is always in the UTC timezone. This library doesn't seem to have an SO tag, but I'm hoping someone can help.
The tzid parameter only relates to the time restriction applied to the request; to get richer time information in the response you need to request localized_times:
events = cronofy.read_events(calendar_ids=('CAL_ID',),
from_date=from_date,
to_date=to_date,
tzid='Asia/Kolkata',
localized_times=True)
I burned a couple of hours on a problem today and thought I would share.
I tried to start up a previously-working Azure Stream Analytics job and was greeted by a quick failure:
Failed to start Streaming Job 'shayward10ProcessLogs'.
I looked at the JSON log and found nothing helpful whatsoever. The only description of the problem was:
Stream Analytics job has validation errors: The given key was not present in the dictionary.
Given the error and some changes to our database, I tried the following to no effect:
Deleting and Recreating all Inputs
Deleting and Recreating all Outputs
Running tests against the data (coming from Event Hub); the output looked good
My query looked as follows:
SELECT
dateTimeUtc,
context.tenantId AS tenantId,
context.userId AS userId,
context.deviceId AS deviceId,
changeType,
dataType,
changeStatus,
failureReason,
ipAddress,
UDF.JsonToString(details) AS details
INTO
[MyOutput]
FROM
[MyInput]
WHERE
logType = 'MyLogType';
Nothing made sense, so I started deconstructing my query. I took it down to a single field and it succeeded. Then I went field by field, trying to figure out which field (if any) was the cause.
See my answer below.
The answer was simple (yet frustrating). When I got to the final field, that's where the failure was:
UDF.JsonToString(details) AS details
This was the only field that used a user-defined function. After futzing around, I noticed that the Function Editor showed the title of the function as:
udf.JsonToString
It was a casing issue. I had UDF in UPPERCASE and Azure Stream Analytics expected it in lowercase. I changed my final field to:
udf.JsonToString(details) AS details
It worked.
The strange thing is, it was previously working. Microsoft may have made a change to Azure Stream Analytics to make it case-sensitive in a place where it seemingly wasn't before.
It makes sense, though. JavaScript is case-sensitive. Every JavaScript object is basically a dictionary of members. Consider the error:
Stream Analytics job has validation errors: The given key was not present in the dictionary.
The "udf" object has a dictionary member holding my function; a "UDF" object, on the other hand, would be undefined, and undefined doesn't have my function as a member.
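As a rough JavaScript analogy (illustrative only; these names are not the actual Stream Analytics internals):
// Member lookup on a JavaScript object is case-sensitive.
var udf = {
  JsonToString: function (value) { return JSON.stringify(value); }
};

console.log(udf.JsonToString({ a: 1 })); // works: {"a":1}
console.log(typeof UDF);                 // "undefined" - no uppercase UDF binding exists
// UDF.JsonToString({ a: 1 });           // would throw: UDF is not defined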
I hope my 2-hour head-banging session helps someone else.
I was looking for a way to calculate a ratio in Kibana. After a lot of research I found this way:
Using the "JSON Input" feature in a visualisation.
I have all my information in one index, with two types of documents (boots and reboots).
I am looking for a script that counts the number of documents of type boots, does the same for the reboots type, and then divides the second by the first.
It sounds really easy, but I haven't found a way to do it, and I'm not yet comfortable enough with Groovy to work it out by myself.
I found many ways to manipulate document values (doc['mydocname'].values, etc.), but nothing about the document type.
Thanks in advance.
EDIT: I tried this
{
"aggs" : {
"boots_count" : { "value_count" : { "_type" : "boots" } }
}
}
This is supposed to count the number of values of a field (here _type) in the index. But when I put it into "JSON Input" in a visualisation, it results in an error:
Error: Request to Elasticsearch failed: {"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[BbXJ0O6tRxa_OcyBfYCGJQ][informationbe][0]: SearchParseException[[informationbe][0]: from[-1],size[0]: Parse Failure [Failed to parse source [{\"size\":0,\"aggs\":{\"2\":{\"terms\":{\"field\":\"#sitePoste\",\"size\":5,\"order\":{\"1\":\"desc\"}},\"aggs\":{\"1\":{\"avg\":{\"script\":\"0\",\"lang\":\"expression\",\"ratio\":{\"boots_count\":{\"value_count\":{\"_type\":\"boots\"}}}}}}}}
I am doing something wrong, but where?
EDIT 2: On the other hand, I am trying scripted fields, with something like this using a Lucene expression:
doc['_type:boots'].count / doc['_type:reboots'].count
but that doesn't work either. I am fairly confident about the doc['_type:boots'] part; I guess the problem is with the .count part.
After many attempts, I understand better and better how it works. A scripted field's scope is the individual document, not the whole index, so I can't count values across the whole index from inside a document-level script.
I am looking for a workaround; I'll post it if I find something interesting.
I finally solved my problem:
I added a scripted field: if the type of the document is boots, the scripted field is 1, otherwise 0. Then I created a search containing only boots and reboots documents (filter: _type:boots _type:reboots) and calculated the average of the scripted field in a metric.
Everything works well!
I have a simple XPage that does a partial refresh every 30 seconds for demo purposes.
Randomly, new Date() returns a date that is off by one hour.
But if I do
var d=session.createDateTime(new Date())
d.setNow()
it always returns the correct datetime.
I've also tried printing everything on the console and the result is the same.
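For anyone who wants to reproduce the comparison, here is a minimal SSJS sketch of the kind of output I'm looking at (the toJavaDate and TimeZone lines are just extra diagnostics, not something the problem depends on):
// Print both values side by side, plus the server JVM's default time zone.
var jsDate = new Date();
var nd = session.createDateTime(jsDate);
nd.setNow();
print("new Date():       " + jsDate);
print("NotesDateTime:    " + nd.toJavaDate());
print("JVM default zone: " + java.util.TimeZone.getDefault().getID());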
A database showing the problem can be found here
http://l.bitcasa.com/gco-V3cq
Anybody know what could cause this?
When running iisdirinfo against IIS 7 I'm seeing an error:
.dll,1,GET,HEAD,POST,DEBUG
BUILD FAILED
C:\iisinfo.build(9,2):
Error retrieving info for virtual directory 'WebServices' on 'localhost:80' (wesite: Webservice).
Object reference not set to an instance of an object.
Total time: 0.5 seconds.
This is after displaying a number of the properties correctly. So I guess it's being tripped up by another property later on.
Anyone seen this or have any ideas on what could be causing the issue?
So I forked the repo and put some extra debug output in, and it seems the failure was down to a null value for a property.
Using appcmd, the property looks like this:
<redirectHeaders>
</redirectHeaders>
I've put a check for null into the code and will be submitting a pull request later today.