Node.js - Oracle DB and fetchAsString format

I am stuck on a problem and I am not sure what is the best way to solve it. I have a date column that I want to select and fetch as a string. Which is great, the node-oracledb module has this option, fetchAsString. But it fetches the date like this, for example: 10-JAN-16, and I want to fetch it like this: 10-01-2016. Is there a way to do that from the node-oracledb module, or should I modify the date after I get the result from the query?
UPDATE: I mean a solution without to_char in the query and without any query modifications.

Check out this section of my series on Working with Dates in JavaScript, JSON, and Oracle Database:
https://dzone.com/articles/working-with-dates-using-the-nodejs-driver
The logon trigger shows an example of using ALTER SESSION to set the default date format. Keep in mind that there are NLS_DATE_FORMAT, NLS_TIMESTAMP_FORMAT, and NLS_TIMESTAMP_TZ_FORMAT.
I only show NLS_TIMESTAMP_TZ_FORMAT because I convert to that type in the examples that follow, as I need to do some time zone conversion for the date format I'm using.
Another way to set the NLS parameters is to use environment variables of the same name. Note that this method will not work unless you set the NLS_LANG environment variable as well.
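If you prefer to handle it from the Node.js side rather than a logon trigger, here is a minimal sketch of the same idea (my own, not from the article), assuming node-oracledb, placeholder credentials, and a hypothetical employees table with a hire_date DATE column:

const oracledb = require('oracledb');

// Fetch all DATE columns as strings (the fetchAsString option from the question).
oracledb.fetchAsString = [oracledb.DATE];

async function run() {
  const conn = await oracledb.getConnection({
    user: 'hr',                          // placeholder credentials
    password: 'secret',
    connectString: 'localhost/XEPDB1'
  });

  // Set the session's default date format so DATE values come back as 10-01-2016.
  await conn.execute(`ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY'`);

  const result = await conn.execute(`SELECT hire_date FROM employees`);
  console.log(result.rows); // e.g. [ [ '10-01-2016' ], ... ]

  await conn.close();
}

run();

Note that an ALTER SESSION issued this way only affects the current connection; with a connection pool you would need to run it for each session, which is exactly what the logon trigger described in the article avoids.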

Related

JDBC variable names when called in the groovy script not giving the correct value for a CSV parameterization

I have a JMeter test with a CSV file containing multiple rows of comma-separated values, for example: internalID, drivername, usreg, canadareg. I am basically using the CSV file to compare the values with the database table values. To compare them, I am adding a JDBC Request with the query select internalID,drivername,usreg,canadareg from data where internalid = '${internalID}' and providing the variable names to store the column data results. I use a JSR223 sampler with Groovy and call the variable names in the script by declaring String a = vars.get("dintID_${counter}"), where dintID is the variable name provided in the JDBC Request. The issue is that when I run the script, the first line of data in the CSV file gets executed successfully, and the second line of data in the CSV file is passed to the SQL statement correctly, however vars.get("dintid_${counter}") always stays at the previous record, meaning it does not go to the next internalid (dintID). I have checked that my counter is incrementing. No idea how to resolve the issue. Does anyone know what mistake I am making?
If you take a look at JSR223 Sampler documentation you will see that:
The JSR223 test elements have a feature (compilation) that can significantly increase performance. To benefit from this feature:
Use Script files instead of inlining them. This will make JMeter compile them if this feature is available on ScriptEngine and cache them.
Or Use Script Text and check Cache compiled script if available property.
When using this feature, ensure your script code does not use JMeter variables or JMeter function calls directly in script code as caching would only cache first replacement. Instead use script parameters.
So if ${counter} is referenced directly in the script code as a JMeter variable, only its first value gets compiled and cached, and it won't increment on subsequent iterations.
So you need to change the line to:
String a = vars.get('dintID_' + vars.get('counter'))
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

Increase column's size in Oracle DB tables with Knex.js

I have a Password column in a table, stored in Oracle DB 11g.
In order to store hashed passwords in it, I need to increment its size from 25 to 60 or 100 BYTE.
I do not want to do this manually; I hope I can find a script or something else using Knex.js (something like migrations or seeds).
Thank you.
The correct term for what you want to do is "increase", not "increment". It looks like Knex.js supports changing the default DDL action for columns (which is to create them) to alter via the alter method: http://knexjs.org/#Schema-alter
In theory, it should work something like this:
knex.schema.alterTable('user', function(t) {
  t.string('password', 100).alter();
});
I must admit, the following verbiage in this method's documentation has me a little concerned:
Alter is not done incrementally over older column type so if you like to add notNull and keep the old default value, the alter statement must contain both .notNull().defaultTo(1).alter().
I'm not sure what that means at the end of the day. Just be sure to test this in development before trying it in production!
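For completeness, here is a hedged sketch (my own, not from the answer) of what that alter could look like inside a Knex migration, assuming a hypothetical migration file and the table and column names from the snippet above:

// migrations/20200101000000_increase_password_length.js (hypothetical file name)
exports.up = function(knex) {
  return knex.schema.alterTable('user', function(t) {
    t.string('password', 100).alter(); // widen the column from 25 to 100
  });
};

exports.down = function(knex) {
  return knex.schema.alterTable('user', function(t) {
    t.string('password', 25).alter(); // roll back to the original length
  });
};

Run it with knex migrate:latest and undo it with knex migrate:rollback, as with any other migration.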

icCube - How can I modify the information that will be exported to an Excel file

This is the output of my Excel file:
I want to change the date to be more comprehensible. Thanks for your help.
This is a known issue and will be fixed in the next release.
As a workaround you can use the DateToString function in your measure.
The DateToString function currently does not convert measures to their values; this will be done in the next release (see issues).
As a workaround you'll have to do this manually:
[Measures].[My Date].value
or
[Measures].[My Date].value->asValue()
The second is needed if you're using a special aggregation method (e.g. min/max/open/close), and it will need Java to be active in icCube (doc).

Node's Postgres module pg returns wrong date

I have declared a date column in Postgres as date.
When I write the value with node's pg module, the Postgres tool pgAdmin displays it correctly.
When I read the value back using pg, instead of a plain date, a date-time string comes back with the wrong day.
e.g.:
Date inserted: 1975-05-11
Date displayed by pgAdmin: 1975-05-11
Date returned by node's pg: 1975-05-10T23:00:00.000Z
Can I prevent node's pg from applying a time zone to date-only data? It is intended for a date of birth and IMHO a time zone has no relevance here.
EDIT: Issue response from the developer on GitHub
The node-postgres team decided long ago to convert dates and datetimes without timezones to local time when pulling them out. This is consistent with some documentation we've dug up in the past. If you root around through old issues here you'll find the discussions.
The good news is it's trivially easy to override this behavior and return dates however you see fit.
There's documentation on how to do this here:
https://github.com/brianc/node-pg-types
There's probably even a module somewhere that will convert dates from postgres into whatever timezone you want (UTC I'm guessing). And if there's not... that's a good opportunity to write one & share with everyone!
Original message
Looks like this is an issue in the pg module.
I'm a beginner in JS and Node, so this is only my interpretation.
When dates (without a time part) are parsed, local time is assumed.
pg\node_modules\pg-types\lib\textParsers.js
if (!match) {
  dateMatcher = /^(\d{1,})-(\d{2})-(\d{2})$/;
  match = dateMatcher.test(isoDate);
  if (!match) {
    return null;
  } else {
    // it is a date in YYYY-MM-DD format
    // add time portion to force js to parse as local time
    return new Date(isoDate + ' 00:00:00');
  }
}
But when the JS date object is converted back to a string, getTimezoneOffset is applied.
pg\lib\utils.js, see function dateToString(date).
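To make the local-time assumption concrete, here is a small illustration of my own (not part of the original answer), run in Node on a machine in a UTC+1 time zone:

// A bare ISO date is parsed as UTC midnight, but once a time portion is
// appended (as pg-types does above), V8 parses the string as local time.
new Date('1975-05-11').toISOString();           // '1975-05-11T00:00:00.000Z'
new Date('1975-05-11 00:00:00').toISOString();  // '1975-05-10T23:00:00.000Z'
// The second value is exactly the shifted date-time the question reports.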
Another option is to change the data type of the column.
You can do this by running the command:
ALTER TABLE table_name
ALTER COLUMN column_name_1 [SET DATA] TYPE new_data_type,
ALTER COLUMN column_name_2 [SET DATA] TYPE new_data_type,
...;
as described here.
I had the same issue; I changed the column type to text.
Just override the node-postgres parser for the date type (1082) and return the value without parsing it:
import pg from 'pg'
pg.types.setTypeParser(1082, value => value)
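For context, here is a minimal sketch of that override in use; it is my own example rather than part of the answer, and the users table, birth_date column, and connection string are placeholders:

import pg from 'pg'

// 1082 is the OID of Postgres' DATE type; return the raw 'YYYY-MM-DD' string untouched.
pg.types.setTypeParser(1082, value => value)

const pool = new pg.Pool({ connectionString: 'postgres://localhost/mydb' }) // placeholder

const { rows } = await pool.query('SELECT birth_date FROM users WHERE id = $1', [1])
console.log(rows[0].birth_date) // '1975-05-11' with no time zone shift

await pool.end()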

Search formula not working on Linux machine

I have a strange behavior: an agent called via an AJAX request should search documents to display in a calendar. For that reason I compute a search formula and then run the Search method of my database in LotusScript. This is the formula:
form="mholiday" | form="mserviceevent" | (form="mereignis" & co_status!="9") & #texttotime(#text(startdatetime)) >= [29.09.2014] & #texttotime(#text(enddatetime)) =< [10.11.2014]
Everything's fine on Domino on Windows but fails with "formula error" on a Linux machine. Am I missing something?
If I omit the term with the dates everything is fine, so this is the part that causes the error.
Try it with @ToTime() and @Date() instead. That might help to get away from the influence of local settings on the server:
... & @ToTime(startdatetime) >= @Date(2014; 9; 29) & ...
@ToTime() doesn't convert the field if it's a date-time value already.
@Date doesn't depend on local settings, whereas [29.09.2014] probably does.
I don't think it's a Linux problem, I think it's a data problem. It sounds like either a date format problem or a problem with the UNK table, used by full text search.
If the first document created on that server that had a field called "startdatetime" had a text value, then any search expects "startdatetime" to be a text value, even if there is another field in the database called startdatetime that is a date, or the startdatetime field is subsequently changed to be a date. To confirm this, you can use the search bar and select the field: the operators it offers will confirm whether it's expecting a date or a text value. See this answer for details on how to resolve "Query is not understandable" - Full text searching where field types have changed.
Alternatively, it may be a problem with the date format, as Knut says, in which case a test for 9/9/2014 would work but 29/9/2014 wouldn't.
