Calculate time differences in nodejs from a manually created format - node.js

I want to calculate time differences (in minutes). However, the data I received does not use a conventional time format; it uses the format "yyyy-mm-dd-HH-MM-ss" in UTC. It seems I can't feed this directly into Moment.js or other libraries. What is the recommended way to handle this specific format?
How can I use a library such as "moment" to calculate time differences with this format?

Not sure, but possibly try it with moment and a custom format string, something like:
const moment = require('moment');

const yourSpecialFormat = 'YYYY-MM-DD-HH-mm-ss';
const someDateInYourFormat = '2020-02-22-05-58-57';

// Parse as UTC, since the data is in UTC time
let now = moment.utc();
let then = moment.utc(someDateInYourFormat, yourSpecialFormat);

console.log('Hours ----> ', moment.duration(now.diff(then)).asHours());
console.log('Minutes ----> ', moment.duration(now.diff(then)).asMinutes());

Related

siphon error - 400: NETCDF4 format not supported for ANY_POINT feature type

I'm trying to get a dataset from a TDS catalog with siphon, but with multiple variables I get that error on the last line. Here is the code:
import datetime
from siphon.catalog import TDSCatalog
from xarray.backends import NetCDF4DataStore
import xarray as xr

point = [-7, 41]
hours = 48
best_gfs = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/grib/NCEP/GFS/'
                      'Global_0p25deg/catalog.xml?dataset=grib/NCEP/GFS/Global_0p25deg/Best')
best_ds = list(best_gfs.datasets.values())[0]
ncss = best_ds.subset()
query = ncss.query()
query.lonlat_point(point[1], point[0]).time_range(
    datetime.datetime.utcnow(),
    datetime.datetime.utcnow() + datetime.timedelta(hours=hours))
query.accept('netcdf4')
query.variables('Temperature_surface',
                'Relative_humidity_height_above_ground',
                'u-component_of_wind_height_above_ground',
                'v-component_of_wind_height_above_ground',
                'Wind_speed_gust_surface')
data = ncss.get_data(query)  # raises 400: NETCDF4 format not supported for ANY_POINT feature type
Thanks!
That message is because your point request is trying to return a mix of time series (your _surface variables) and time series of profiles (the u/v wind components). The combination of different features in a single netCDF file is unsupported by the netCDF CF-Conventions.
One work-around is to request CSV or XML formatted data instead (which siphon can still parse and return as a dictionary-of-arrays).
The other is to make separate requests for fields with different geometry. So one for Temperature_surface and Wind_speed_gust_surface, one for u-component_of_wind_height_above_ground and v-component_of_wind_height_above_ground, and one final one for Relative_humidity_height_above_ground. This last split is working around an apparent bug in the THREDDS Data Server where profiles with different vertical levels can't be combined either.
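A hypothetical sketch of that split (the variable names come from the question; the grouping tuples and `fetch_by_geometry` are invented here, assuming the same `ncss` point-subset object and time range as above):

```python
# Group the variables so each NCSS request contains only one feature
# geometry: surface time series vs. profiles on height-above-ground levels.
SURFACE_VARS = ('Temperature_surface', 'Wind_speed_gust_surface')
WIND_PROFILE_VARS = ('u-component_of_wind_height_above_ground',
                     'v-component_of_wind_height_above_ground')
RH_VARS = ('Relative_humidity_height_above_ground',)

def fetch_by_geometry(ncss, point, start, end):
    """Issue one point-subset request per variable group and return the responses."""
    responses = []
    for group in (SURFACE_VARS, WIND_PROFILE_VARS, RH_VARS):
        query = ncss.query()
        query.lonlat_point(point[1], point[0]).time_range(start, end)
        query.accept('netcdf4')
        query.variables(*group)
        responses.append(ncss.get_data(query))
    return responses
```

Each response can then be opened separately (e.g. via NetCDF4DataStore and xarray) and merged afterwards if needed.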

How to easily create a datetime object from time with unit since epoch?

Let's say I have a timestamp since the epoch in microseconds, 1611590898133828. How can I easily convert this into a datetime object, taking the unit (microseconds) into account?
from datetime import datetime
timestamp_micro = 1611590898133828
dt = datetime.fromtimestamp(timestamp_micro / 1e6)
I would like to be able to do easy conversions since sometimes I have microseconds, sometimes seconds, sometimes nanoseconds to convert.
timestamp_micro = 1611590898133828
dt = datetime.fromtimestamp(timestamp_micro, unit="us")
Is this somehow possible? For me, using Python's datetime package is just a pain. Maybe you can also recommend another package that makes timestamp handling easier?
pandas.to_datetime provides the option to set the unit as a keyword:
import pandas as pd
t, UNIT = 1611590898133828, 'us'
dt = pd.to_datetime(t, unit=UNIT)
print(dt, repr(dt))
# 2021-01-25 16:08:18.133828 Timestamp('2021-01-25 16:08:18.133828')
You can now work with pandas' timestamps or convert to a regular Python datetime object like
dt.to_pydatetime()
# datetime.datetime(2021, 1, 25, 16, 8, 18, 133828)
Please also note that if you use fromtimestamp without specifying a time zone, you'll get a naive datetime, which Python treats as local time (so the UTC offset might not be 0).
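If you'd rather stay with the standard library, here is a small sketch (the from_epoch helper is a name invented here) that handles the different units and returns an aware UTC datetime:

```python
from datetime import datetime, timezone

def from_epoch(value, unit='s'):
    """Convert an epoch timestamp in s/ms/us/ns to a timezone-aware UTC datetime."""
    scale = {'s': 1, 'ms': 1e3, 'us': 1e6, 'ns': 1e9}[unit]
    return datetime.fromtimestamp(value / scale, tz=timezone.utc)

print(from_epoch(1611590898133828, 'us'))
# 2021-01-25 16:08:18.133828+00:00
```

Note that for nanosecond input the sub-microsecond digits are lost, since datetime only stores microsecond precision.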
You can create new JavaScript Date objects by simply calling const dt = new Date(timestamp). The timestamp value here is an epoch in milliseconds; JavaScript does not have native support for higher precision.
If you constantly need to work with dates, I would recommend a package such as Moment.js, since handling dates/times in native JS is quite a pain.

How to include large numbers in transactions >=1e21

I am testing an ERC20 Token on the Rinkeby testnet.
I am sending transfer transactions of 1e23 units.
The response from web3 is the error shown below the code.
I have tried converting the amount to a string using the JavaScript toString method, and converting it with web3.utils.toHex(). Both return errors.
dat = token.methods.transfer(w3.utils.toHex(to), amount.toString()).encodeABI()
/*
OR
dat = token.methods.transfer(w3.utils.toHex(to), web3.utils.toHex(amount)).encodeABI()
*/
w3.eth.sendTransaction({from: from, to: TOKEN_ADDRESS, data: dat, gas: gasLimit()}, (err, txhash) => {
    if (err) throw err
    console.log(txhash)
    callback(txhash)
})
Uncaught Error: Please pass numbers as strings or BigNumber objects to avoid precision errors.
TLDR
Use the built-in util functions to convert ether to wei:
var amount = web3.utils.toWei('1000000','ether');
Old answer below:
Literally just follow the advice in the error.
The to value should stay a string, because the JavaScript number type is too small to store addresses.
If the amount starts off as a reasonable number, then convert it to a BigNumber using a bignumber library. Web3 internally uses bn.js as its bignumber library, so for full compatibility you should use the same, but bignum is also compatible in my experience:
const BN = require('bn.js');
token.methods.transfer(to, new BN(amount)).encodeABI()
Based on your comment it appears you are trying to pass 1e+24 as a number. The problem is it is too large to fit in a double without losing precision. Web3 is refusing to use the number because it has already lost precision even before web3 has a chance to process it. The fix is to use a string instead:
var amount = '1000000000000000000000000';
token.methods.transfer(to,amount).encodeABI()
If you really don't want to type 24 zeros you can use string operations:
var amount = '1' + '0'.repeat(24);
Or if this amount is really a million ether, it's better to use the built-in util functions to show what you really mean:
var amount = web3.utils.toWei('1000000','ether');
I know this is old, but I was having trouble with some tests for Solidity using Chai, and I added this comment:
/* global BigInt */
With that you can use BigInt literals:
const testValue = 2000000000000000000n;

Pandas .rolling.corr using date/time offset

I am having a bit of an issue with pandas's rolling function and I'm not quite sure where I'm going wrong. If I mock up two test series of numbers:
df_index = pd.date_range(start='1990-01-01', end ='2010-01-01', freq='D')
test_df = pd.DataFrame(index=df_index)
test_df['Series1'] = np.random.randn(len(df_index))
test_df['Series2'] = np.random.randn(len(df_index))
Then it's easy to have a look at their rolling annual correlation:
test_df['Series1'].rolling(365).corr(test_df['Series2']).plot()
which produces a sensible-looking rolling correlation plot.
All good so far. If I then try to do the same thing using a datetime offset:
test_df['Series1'].rolling('365D').corr(test_df['Series2']).plot()
I get a wildly different (and obviously wrong) result.
Is there something wrong with pandas or is there something wrong with me?
Thanks in advance for any light you can shed on this troubling conundrum.
It's tricky: the behavior of an integer window and an offset window is different:
New in version 0.19.0 is the ability to pass an offset (or convertible) to a .rolling() method and have it produce variable-sized windows based on the passed time window. For each time point, this includes all preceding values occurring within the indicated time delta.
This can be particularly useful for a non-regular time frequency index.
You should checkout the doc of Time-aware Rolling.
r1 = test_df['Series1'].rolling(window=365) # has default `min_periods=365`
r2 = test_df['Series1'].rolling(window='365D') # has default `min_periods=1`
r3 = test_df['Series1'].rolling(window=365, min_periods=1)
r1.corr(test_df['Series2']).plot()
r2.corr(test_df['Series2']).plot()
r3.corr(test_df['Series2']).plot()
This code produces similar-shaped plots for r2.corr().plot() and r3.corr().plot(), though the calculated values can still differ: compare r2.corr(test_df['Series2']) with r3.corr(test_df['Series2']).
I think for a regular time-frequency index, you should just stick with r1.
This is mainly because the results of rolling(365) and rolling('365D') are different.
For example
sub = test_df.head()
sub['Series2'].rolling(2).sum()
Out[15]:
1990-01-01 NaN
1990-01-02 -0.355230
1990-01-03 0.844281
1990-01-04 2.515529
1990-01-05 1.508412
sub['Series2'].rolling('2D').sum()
Out[16]:
1990-01-01 -0.043692
1990-01-02 -0.355230
1990-01-03 0.844281
1990-01-04 2.515529
1990-01-05 1.508412
Since there are a lot of NaNs in rolling(365), the correlations of the two series computed the two ways are quite different.
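The min_periods difference is easier to see with a deterministic series (a small sketch mirroring the example above):

```python
import numpy as np
import pandas as pd

# Daily index with known values, so the two window types can be compared exactly.
idx = pd.date_range('1990-01-01', periods=5, freq='D')
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0], index=idx)

int_roll = s.rolling(2).sum()        # integer window: NaN until 2 observations exist
offset_roll = s.rolling('2D').sum()  # offset window: starts immediately (min_periods=1)

print(int_roll.iloc[0], offset_roll.iloc[0])
# nan 1.0
```

From the second point onward the two agree; only the warm-up period differs, which is exactly what distorts the rolling correlation at the start of the series.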

Sphinx 4 Transcription Time Index

How do I get the time index (or frame number) in Sphinx 4 when I set it to transcribe an audio file?
The code I'm using looks like this:
audioURL = ...
AudioFileDataSource dataSource = (AudioFileDataSource) cm.lookup("audioFileDataSource");
dataSource.setAudioFile(audioURL, null);

Result result;
while ((result = recognizer.recognize()) != null) {
    Token token = result.getBestToken();
    //DoubleData data = (DoubleData) token.getData();
    //long frameNum = data.getFirstSampleNumber(); // data seems to always be null
    String resultText = token.getWordPath(false, false);
    ...
}
I tried to get the time of the transcription from the Result/Token objects, similar to what a subtitler does. I found Result.getFrameNumber() and Token.getFrameNumber(), but they appear to return the number of frames decoded, not the time (or frame) at which the result was found within the entire audio file.
I looked at AudioFileDataSource.getDuration() [which is private] and the Recognizer classes, but haven't figured out how to get the time index of the transcription.
Ideas? :)
The frame number is the time multiplied by the frame rate, which is 100 frames/second.
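That relation is just arithmetic; a small sketch (in Python for illustration, though Sphinx-4 itself is Java):

```python
FRAME_RATE = 100  # Sphinx-4 decodes 100 frames per second

def frame_to_seconds(frame_number):
    """Convert a decoder frame number to a time offset in seconds."""
    return frame_number / FRAME_RATE

def seconds_to_frame(seconds):
    """Convert a time offset in seconds to the corresponding frame number."""
    return int(seconds * FRAME_RATE)

print(frame_to_seconds(250))
# 2.5
```

So a token reported at frame 250 corresponds to 2.5 seconds into the audio, provided the frame number is relative to the start of the file rather than to the decoding session.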
Anyway, please find the patch for subtitles demo which returns timings here:
http://sourceforge.net/mailarchive/forum.php?thread_name=1380033926.26218.12.camel%40localhost.localdomain&forum_name=cmusphinx-devel
The patch applies to subversion trunk, not to the 1.0-beta version.
Please note that this part is under major refactoring, so the API will be obsolete soon. However, I hope you will be able to create subtitles with just a few calls, without all the current complexity.
