What would be the most efficient way to compare string datetime values? - python-3.x

I have a DynamoDB table that has two string time fields. One is when the item was created and the other is when it was updated (processed). I am checking whether the item was updated within 2 days of creation using the following conversions, and then just using an if statement to compare whether updateTime is greater than creationTime. But is this the best way to go about it?
creationTime = datetime.datetime.strptime(acc.createdAt,'%H:%M')
modcreationTime = creationTime + datetime.timedelta(days=2)
updateTime = datetime.datetime.strptime(acc.updatedAt,'%H:%M')
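Note that '%H:%M' only captures the hour and minute, so both parsed values share the same default date and a 2-day window can never actually elapse. Below is a minimal sketch of the comparison, assuming createdAt and updatedAt are full ISO-8601 strings (the format string and example values are hypothetical):
import datetime

# Hypothetical values; in practice use acc.createdAt and acc.updatedAt
created_at = '2016-06-21T10:15:00'
updated_at = '2016-06-23T09:00:00'

creation_time = datetime.datetime.strptime(created_at, '%Y-%m-%dT%H:%M:%S')
update_time = datetime.datetime.strptime(updated_at, '%Y-%m-%dT%H:%M:%S')

# True if the item was updated within 2 days of creation
updated_within_two_days = update_time - creation_time <= datetime.timedelta(days=2)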

Creating the Metrics in CloudWatch using the data turned out to be the best solution. It simplified the code to a great extent and made the data easy to work with.
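For reference, a minimal sketch of publishing such a value as a custom CloudWatch metric with boto3; the namespace and metric name here are made up, since the answer does not specify them:
import boto3

cloudwatch = boto3.client('cloudwatch')

# Hypothetical namespace and metric name
cloudwatch.put_metric_data(
    Namespace='MyApp/Items',
    MetricData=[{
        'MetricName': 'UpdatedWithinTwoDays',
        'Value': 1,  # e.g. 1 if updated within 2 days, else 0
        'Unit': 'Count',
    }],
)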

Related

How to visualize a count of all values in an array field in Kibana

I am having trouble creating a particular type of visualization in Kibana. My events in Kibana are statistics on communications between two IP addresses. Two of the fields are lists of ports used by the particular IP address. An example of the fields would be:
ip1 = 192.168.101.2
ip2 = 192.168.101.3
ip2Ports = 80,443
ip1Ports = 80,57000,0
I would like to have a top count of all the values such as
port    count
80      2
57000   1
443     1
I have been able to parse ip2Ports into ip2Ports_List.column1, ip2Ports_List.column2, etc., but I can only choose one term with a terms aggregation in the visualization. I can split the chart, but that leads to separate counts for each field. If I go by the original ip2Ports field, it is just aggregated as the string, such as "80,443".
Is it even possible to create a top count visualization of fields with multiple values? If so, how would I do so? If not, is there a way to restructure my data so I can do it? Thank you!
My issue stemmed from the format of the values being sent in by Logstash. I had thought that the 'ip2Ports_List.column1' format, which resulted from using the csv filter, was part of an array. It wasn't. After analyzing it, 'ip2Ports_List.column1' didn't seem to be much different from a new field.
Elasticsearch needed an array to give me the visualization I wanted. I wasn't sure what the best way to produce it was, so I just ended up using the ruby filter. This is what the code ended up looking like:
ruby {
  code => "fields = event.get('portsIp').split(',')
           event.set('portsIpArray', fields)"
}
Where 'portsIp' looked something like "80,443". Splitting it turned 'portsIp' into a Ruby array. I just set that array as the value for a new event field, 'portsIpArray'.
From there, when I tried to visualize the 'portsIpArray' field, it looked exactly how I wanted it to, treating each port as a separate value while still associating each port with the same event/field.
Extra:
Something else I discovered: if you're writing your code like I was, directly in the Logstash conf file, Logstash doesn't like it if you use double quotes within the double-quoted code. In hindsight that makes sense, but it doesn't give a clear error, so it's difficult to figure out.

Netsuite - Transfer Inventory error

I have been using NetSuite for only a short time, and already hate it. I am sorry if this is a stupid question, but I haven't been able to find an answer so far, either in the NetSuite docs, Stack Overflow, or other websites. In fact, the answers I found have resulted in an error.
My company requires a script to transfer inventory based on an EDI input file. Reading the file is no problem, even parsing it is working. However, actually inserting the data is proving problematic.
I have been able to insert normal records, but Inventory Transfer records are giving me problems.
From Stack Overflow I found and adapted some code into the following:
var xfer = nlapiCreateRecord("inventorytransfer");
xfer.setFieldValue("trandate", FormatDate("20160101"));
xfer.setFieldValue("location", 9);
xfer.setFieldValue("transferlocation", 9);
nlapiSelectNewLineItem('invt');
nlapiSetLineItemValue("invt","invtid",1, 189);
nlapiSetLineItemValue("invt","adjustqtyby", 1, "5");
nlapiCommitLineItem('invt');
var id = nlapiSubmitRecord(xfer);
The FormatDate function just converts the date from the text file into a date NetSuite can understand.
However, when I run this code I get the following error:
USER_ERROR: You must enter at least one line item for this transaction.
I thought inserting the line item was the reason to use nlapiSelectNewLineItem, but I guess not. Also nlapiCreateNewLineItem doesn't seem to exist.
The values I am inserting are all just test data, as I'm testing this in the debugger. Location 9 exists, as does item 189.
My full script finds these IDs based on string values from the text files, but since this is the section that doesn't work, I have set it apart to test.
Can anyone help with this?
You did not specify the type of script you are using, but it looks like you are not setting the line items on the record object; instead you are setting them on the current record. Below is the suggested code.
Also, there is no sublist named invt; it should be inventory. And there is no field called invtid; most probably you want to set the item, so the field ID should be item. You might want to refer to the SuiteScript Record Browser for help with the correct IDs.
var xfer = nlapiCreateRecord("inventorytransfer");
xfer.setFieldValue("trandate", FormatDate("20160101"));
xfer.setFieldValue("location", 9);
xfer.setFieldValue("transferlocation", 9);
xfer.selectNewLineItem('inventory');
xfer.setCurrentLineItemValue("inventory", "item", 189);
xfer.setCurrentLineItemValue("inventory","adjustqtyby", "5");
xfer.commitLineItem('inventory');
var id = nlapiSubmitRecord(xfer);
If you are using Bin/Lot Numbered Items, please see help topic "Sample Scripts for Advanced Bin / Numbered Inventory Management"

Presto - static date and timestamp in where clause

I'm pretty sure the following query used to work for me on Presto:
select segment, sum(count)
from modeling_trends
where segment='2557172' and date = '2016-06-23' and count_time between '2016-06-23 14:00:00.000' and '2016-06-23 14:59:59.000'
group by 1;
Now when I run it (on Presto 0.147 on EMR) I get an error about trying to assign varchar to date/timestamp.
I can make it work using:
select segment, sum(count)
from modeling_trends
where segment='2557172' and date = cast('2016-06-23' as date) and count_time between cast('2016-06-23 14:00:00.000' as TIMESTAMP) and cast('2016-06-23 14:59:59.000' as TIMESTAMP)
group by segment;
but it feels dirty...
Is there a better way to do this?
Unlike some other databases, Presto doesn't automatically convert between varchar and other types, even for constants. The cast works, but a simpler way is to use the type constructors:
WHERE segment = '2557172'
AND date = date '2016-06-23'
AND count_time BETWEEN timestamp '2016-06-23 14:00:00.000' AND timestamp '2016-06-23 14:59:59.000'
You can see examples for various types here: https://prestosql.io/docs/current/language/types.html
Just a quick thought: have you tried omitting the dashes in your date? Try 20160623 instead of 2016-06-23.
I encountered something similar with SQL Server, but I haven't used Presto.

Node's Postgres module pg returns wrong date

I have declared a date column in Postgres as date.
When I write the value with node's pg module, the Postgres Tool pgAdmin displays it correctly.
When I read the value back using pg, instead of a plain date I get a date-time string with the wrong day.
e.g.:
Date inserted: 1975-05-11
Date displayed by pgAdmin: 1975-05-11
Date returned by node's pg: 1975-05-10T23:00:00.000Z
Can I prevent node's pg from applying a time zone to date-only data? It is intended for a date of birth, and IMHO a time zone has no relevance here.
EDIT: Issue response from the developer on GitHub
The node-postgres team decided long ago to convert dates and datetimes without timezones to local time when pulling them out. This is consistent with some documentation we've dug up in the past. If you root around through old issues here you'll find the discussions.
The good news is it's trivially easy to override this behavior and return dates however you see fit.
There's documentation on how to do this here: https://github.com/brianc/node-pg-types
There's probably even a module somewhere that will convert dates from postgres into whatever timezone you want (utc I'm guessing). And if there's not...that's a good opportunity to write one & share with everyone!
Original message
Looks like this is an issue in the pg module.
I'm a beginner in JS and Node, so this is only my interpretation.
When dates (without a time part) are parsed, local time is assumed.
pg\node_modules\pg-types\lib\textParsers.js
if(!match) {
  dateMatcher = /^(\d{1,})-(\d{2})-(\d{2})$/;
  match = dateMatcher.test(isoDate);
  if(!match) {
    return null;
  } else {
    //it is a date in YYYY-MM-DD format
    //add time portion to force js to parse as local time
    return new Date(isoDate + ' 00:00:00');
  }
}
But when the JS Date object is converted back to a string, getTimezoneOffset is applied.
See pg\lib\utils.js, function dateToString(date).
Another option is to change the data type of the column. You can do this by running the command:
ALTER TABLE table_name
ALTER COLUMN column_name_1 [SET DATA] TYPE new_data_type,
ALTER COLUMN column_name_2 [SET DATA] TYPE new_data_type,
...;
as described here.
I had the same issue; I changed the column to text.
Just override node-postgres parser for the type date (1082) and return the value without parsing it:
import pg from 'pg'
pg.types.setTypeParser(1082, value => value)

Allocating matrix / structure data instead of string name to variable

I have a script that opens a folder and does some processing on the data present. Say, there's a file "XYZ.tif".
Inside this tif file, there are two groups of datasets, which show up in the workspace as
data.ch1eXYZ
and
data.ch3eXYZ
If I want to continue with the 2nd set, I can use
A=data.ch3eXYZ
However, XYZ usually is much longer and varies per file, whereas data.ch3e is consistent.
Therefore I tried
A=strcat('data.ch3e','origfilename');
where origfilename of course is XYZ, which has (automatically) been extracted before.
However, that gives me a string A, since I have practically typed
A='data.ch3eXYZ'
instead of getting the matrix that data.ch3eXYZ actually is.
I think it's just a problem with ()'s, []'s, or {}'s, but I can't seem to figure it out.
Thanks in advance!
If you know the string, dynamic field references should help you here, and they are far better than eval.
Slightly modified example from the linked blog post:
fldnm = 'fred';
s.fred = 18;
y = s.(fldnm)
Returns:
y =
18
So for your case:
test = data.(['ch3e' origfilename]);
should be sufficient.
Edit: Link to the documentation
