Comparing two values evaluates to false instead of true - node.js

I'm using Node.js with Express.js and Redis. I'm recording the uptime of a site component by incrementing a Redis key. I want to update the uptimerecord:tracker key whenever the current uptime exceeds the current uptime record, but somehow it's not being updated: uptimeTracker > uptimeRecordTracker evaluates to false even though it should be true.
Is there anything I'm missing?
Thanks!
db.get("uptime:tracker", function(err, uptimeTracker) {
    db.get("uptimerecord:tracker", function(err, uptimeRecordTracker) {
        console.log("[Stats] uptimeTracker: " + uptimeTracker)
        console.log("[Stats] uptimeRecordTracker: " + uptimeRecordTracker)
        console.log("[Stats] Compare: " + (uptimeTracker > uptimeRecordTracker))
        if(uptimeTracker > uptimeRecordTracker) {
            console.log("[Stats] Tracker Records updated")
            db.set('uptimerecord:tracker', uptimeTracker)
        }
    });
});
The console output:
[Stats] uptimeTracker: 213
[Stats] uptimeRecordTracker: 99
[Stats] Compare: false

It looks like you're comparing strings instead of integers. In fact:
"213" > "99" == false
while
213 > 99 == true
Redis returns values as strings, so try converting them to integers before doing the comparison:
parseInt(uptimeTracker, 10) > parseInt(uptimeRecordTracker, 10)
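For context, here is a minimal sketch of the corrected callback (assuming db is the same node_redis client as in the question):
db.get("uptime:tracker", function(err, uptimeTracker) {
    db.get("uptimerecord:tracker", function(err, uptimeRecordTracker) {
        // GET returns strings, so coerce to numbers before comparing
        var current = parseInt(uptimeTracker, 10)
        var record = parseInt(uptimeRecordTracker, 10)
        if(current > record) {
            console.log("[Stats] Tracker Records updated")
            db.set('uptimerecord:tracker', current)
        }
    });
});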

Related

Watir Scroll For All Comments

I'm trying to scroll until there is no more content loading on a YouTube video. I'm testing a counter against the current number of comments displayed.
mycount = 0
1.upto(20) do
  thiscount = browser.div(id: "contents").divs(id: "comment-content").size
  puts "#{ mycount } : #{ thiscount }"
  break if mycount == thiscount
  mycount = thiscount
  browser.driver.execute_script("window.scrollBy(0,1000)")
  sleep 10
end
After the first pagedown, the count of comments should increase, but it doesn't. I've added a sleep 10 to give the comments time to load and the count to update, but it never changes. I keep getting 20 : 20, so the loop breaks after a single iteration.
I'm not sure why that value isn't updating. How can I fix this so that the script gets to the end of the comments?
I ran into two problems when running your script:
browser.div(id: "contents").divs(id: "comment-content") returned nothing. There were many "contents" divs, and the first one did not include any comments, so I removed this locator.
Scrolling by 1000 is not enough to get to the bottom of the page. Therefore the loading of additional comments does not get triggered.
Consider the following script:
scroll_by = 1000000000000
browser.goto('https://www.youtube.com/watch?v=u9DPBxiZZfo&ab_channel=America%27sTestKitchen')
browser.driver.execute_script("window.scrollBy(0,#{scroll_by})")
sleep 15

mycount = 0
1.upto(20) do
  thiscount = browser.divs(id: "comment-content").size
  puts "#{ mycount } : #{ thiscount }"
  break if mycount == thiscount
  mycount = thiscount
  browser.driver.execute_script("window.scrollBy(0,#{scroll_by})")
  sleep 10
end
You can see that using a larger scroll value gives the expected results:
0 : 20
20 : 40
40 : 60
60 : 80
80 : 100
100 : 120
120 : 140
140 : 160
160 : 180
180 : 185
185 : 185
In contrast, setting scroll_by to just 1000 did not trigger the loading of additional comments. The output was just:
0 : 0
This works for me:
total = 0
while true
  before = browser.divs(id: "comment-content").count
  browser.div(id: "columns").scroll.to :bottom
  sleep 10 # or sleep 5
  total = browser.divs(id: "comment-content").count
  puts "There are #{total} comments for now"
  break if before == total
end
puts "There are #{total} comments in total"
Instead of while true you can use 20.times do, but that would only count up to ~400 comments.

Presto fails to read hexadecimal string: Not a valid base-16 number

Is there a way for Presto to check whether a string is hex or not? I have the following query that keeps failing:
from_base(hexstring, 16)
with the error:
/usr/local/lib/python3.7/dist-packages/pyhive/presto.py in _process_response(self, response)
    347             self._state = self._STATE_FINISHED
    348         if 'error' in response_json:
--> 349             raise DatabaseError(response_json['error'])
    350
    351

DatabaseError: {'message': 'Not a valid base-16 number: ffffffffffdfae90', 'errorCode': 7, 'errorName': 'INVALID_FUNCTION_ARGUMENT', 'errorType': 'USER_ERROR', 'failureInfo': {'type': 'io.prestosql.spi.PrestoException', 'message': 'Not a valid base-16 number: ffffffffffdfae90', 'cause': {'type': 'java.lang.NumberFormatException', 'message': 'For input string: "ffffffffffdfae90"', 'suppressed': [], 'stack':
However, Python is fine with the string:
int('ffffffffffdfae90',16)
returns
18446744073707433616
from_base returns a BIGINT, which can hold values only up to 2^63 - 1, i.e. 9223372036854775807. That is less than 18446744073707433616, and while Python's int is unbounded, this particular number is simply too big for Presto.
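If the goal is simply to keep the query from failing on such values, Presto's try() function may help (a suggestion, not tested against your data): try(from_base(hexstring, 16)) returns NULL instead of raising an error when the conversion fails, and rows with oversized values can then be filtered out.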

Logstash convert date duration from string to hours

I have a column like this:
business_time_left
3 Hours 24 Minutes
59 Minutes
4 Days 23 Hours 58 Minutes
0 Seconds
1 Hour
and so on..
What I want to do in Logstash is convert this entirely into hours, so the values should end up like this:
business_time_left
3.24
0.59
119.58
0
1
Is this possible?
My config file:
input {
  http_poller {
    urls => {
      snowinc => {
        url => "https://service-now.com"
        user => "your_user"
        password => "yourpassword"
        headers => { Accept => "application/json" }
      }
    }
    request_timeout => 60
    metadata_target => "http_poller_metadata"
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
  }
}
filter {
  json { source => "result" }
  split { field => ["result"] }
}
output {
  elasticsearch {
    hosts => ["yourelasticIP"]
    index => "inc"
    action => "update"
    document_id => "%{[result][number]}"
    doc_as_upsert => true
  }
  stdout { codec => rubydebug }
}
Sample JSON input data when the URL is hit:
{"result":[
  {
    "made_sla": "true",
    "Type": "incident resolution p3",
    "sys_updated_on": "2019-12-23 05:00:00",
    "business_time_left": " 59 Minutes"
  },
  {
    "made_sla": "true",
    "Type": "incident resolution l1.5 p4",
    "sys_updated_on": "2019-12-24 07:00:00",
    "business_time_left": "3 Hours 24 Minutes"
  }
]}
Thanks in advance!
Q: Is this possible?
A: Yes.
Assuming your json and split filters are working correctly and the field business_time_left holds a single value like the one you showed (e.g. 4 Days 23 Hours 58 Minutes), I would personally do the following:
First, make sure your data follows a consistent pattern, i.e. standardize the "quantity descriptions" so that minutes are always labeled "Minutes", not Mins, min, or anything else.
Next up, you can parse the field with the grok filter like so:
filter {
  grok {
    match => { "business_time_left" => "(%{INT:calc.days}\s+Days)?%{SPACE}?(%{INT:calc.hours}\s+Hours)?%{SPACE}?(%{INT:calc.minutes}\s+Minutes)?%{SPACE}?(%{INT:calc.seconds}\s+Seconds)?%{SPACE}?" }
  }
}
This will extract all available values into the desired fields, e.g. calc.days. The ? quantifier prevents grok from failing when a unit is absent, e.g. when there are no seconds. You can test the pattern in an online grok debugger.
With the data extracted, you can implement a ruby filter to aggregate the numeric values like so (untested, though):
ruby {
  code => '
    days = event.get("calc.days")
    hours = event.get("calc.hours")
    minutes = event.get("calc.minutes")
    sum = 0.0
    if days
      # each day contributes 24 hours
      sum += days.to_i * 24
    end
    if hours
      sum += hours.to_i
    end
    if minutes
      # use to_f so that e.g. 58 Minutes becomes 0.58, not 0 (integer division)
      sum += minutes.to_f / 100
    end
    # seconds and so on ...
    event.set("business_time_left_as_hours", sum)
  '
}
So basically you check which values are present and add them to a sum using your custom logic.
event.set("business_time_left_as_hours", sum) writes the result to a new field on the document.
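As a sanity check on the arithmetic: 4 Days 23 Hours 58 Minutes yields 4 * 24 + 23 + 58 / 100.0 = 119.58, which matches the expected output in the question.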
These code snippets are not intended to work out of the box; they are just hints. So please check the documentation on the ruby filter, Ruby coding in general, and so on.
I hope I could help you.

Redis Node - Querying a list of 250k items of ~15 bytes takes at least 10 seconds

I'd like to query a whole list of 250k items of ~15 bytes each.
Each item (some coordinates) is a ~15-byte string like this: xxxxxx_xxxxxx_xxxxxx.
I'm storing them using this function:
function setLocation({id, lat, lng}) {
    const str = `${id}_${lat}_${lng}`
    client.lpush('locations', str, (err, status) => {
        console.log('pushed:', status)
    })
}
Using Node.js, doing a lrange('locations', 0, -1) takes between 10 and 15 seconds.
Slowlog from Redis Labs: (screenshot not included)
I tried to use sets instead; same results.
According to this post, this shouldn't take more than a few milliseconds.
What am I doing wrong here?
Update: I'm using an instance on Redis Labs.
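For reference, a minimal sketch of the full-list read described above, with timing (an illustration only, assuming client is the same node_redis client used in setLocation):
const t0 = Date.now()
client.lrange('locations', 0, -1, (err, items) => {
    if (err) throw err
    // 250k items of ~15 bytes is roughly 4 MB returned in a single reply
    console.log(`fetched ${items.length} items in ${Date.now() - t0} ms`)
})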

Using reduce with a composite key in couchdb view returns no result on GET

I have a CouchDB view with the following map function:
function(doc) {
    if (doc.date_of_operation) {
        var date_triple = doc.date_of_operation.split("/");
        var d = new Date(date_triple[2], date_triple[1]-1, date_triple[0], 0, 0, 0, 0);
        emit([d, doc.name], 1);
    }
}
When I issue a GET request for this, I get the whole view's data (2.8MB):
$ curl -X GET http://somehost:5984/ops-db/_design/ops-views/_view/counts
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2751k 0 2751k 0 0 67456 0 --:--:-- 0:00:41 --:--:-- 739k
However, when I add a reduce function:
function (key, values, rereduce) {
    return sum(values);
}
I no longer get any data when using curl:
$ curl -X GET http://somehost:5984/ops-db/_design/ops-views/_view/counts
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 42 0 42 0 0 7069 0 --:--:-- --:--:-- --:--:-- 8400
The result looks like this:
{"rows":[
{"key":null,"value":27065}
]}
The view's map and reduce functions were added using the Futon interface, and when the Reduce checkbox is checked there, I do get one row for every (date, name) pair, with the values accumulated for that pair. What changes when the view is queried through a GET?
When calling the view through curl, you can try passing the parameters that trigger the reduce and the grouping.
E.g., explicitly tell CouchDB to run the reduce function:
$ curl -X GET http://somehost:5984/ops-db/_design/ops-views/_view/counts?reduce=true
or use the group and group_level params. Without grouping, CouchDB reduces the whole view to a single row with a null key, which is exactly the result you are seeing; Futon's Reduce checkbox enables the grouping for you.
You can read more about the available options here (under the Querying Options section).
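For example (untested, using the same host and view as above), asking for grouped results should return one row per distinct [date, name] key instead of a single null-keyed row:
$ curl -X GET 'http://somehost:5984/ops-db/_design/ops-views/_view/counts?reduce=true&group=true'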
The reduce can simply be the built-in:
_sum
So a simple view would look like this:
{
  "_id": "_design/foo",
  "_rev": "2-6145338c3e47cf0f311367a29787757c",
  "language": "javascript",
  "views": {
    "test1": {
      "map": "function(doc) {\n emit(null, 1);\n}",
      "reduce": "_sum"
    }
  }
}
