Node-RED output a message to CSV - node.js

For the past couple of days I have been trying and reading to get something very specific done in Node-RED: I want to write the (LoRa) message to a CSV.
This CSV should contain the following items:
topic
date
payload
I can insert the date using a function node:
var str1 = Date();
I have been playing around with the CSV node, but I can't get it to output comma-separated values. This probably has to do with my lack of JavaScript programming skills, which is why I turn to you.
Can you help me out?
Edit:
I'm still looking for the answer, which has brought me the following:
Function node:
var res = Date() + "," + msg.topic + "," + msg.payload; return [ { payload: res } ];
Output:
[{"col1":"Mon Oct 17 2016 17:10:20 GMT+0200 (CEST)","col2":"1/test/1","col3":"string1"}]
All I want now is to lose the extra information such as column names and [{}]

The CSV node works only on the msg.payload field, so you will have to copy the extra data into the payload object to get it to output what you want.
To format the data correctly, place a function node with the following code before the CSV node:
var originalPayload = msg.payload;       // keep the incoming LoRa payload
var newPayload = {};
newPayload.date = new Date().toString(); // date column
newPayload.topic = msg.topic;            // topic column
newPayload.payload = originalPayload;    // payload column
msg.payload = newPayload;
return msg;
And configure the CSV node to output columns "date,topic,payload"
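With that in place, the CSV node should emit one plain comma-separated line per message, roughly like this (using the sample values from the question; the exact date string depends on your locale):
Mon Oct 17 2016 17:10:20 GMT+0200 (CEST),1/test/1,string1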

Related

Failed to execute 'atob' on 'Window': The string to be decoded is not correctly encoded - hex is different

I have a legacy classic ASP application that allows us to upload PDF files. The file is saved as hex in an image column in an MS SQL DB. Here is a snip of the data in the column:
0x4A564245526930784C6A634E436957317462573144516F784944416762324A7144516F...
We are now working on a rework of the app using node.js as our backend and angular4 as the front end. As part of that, we need to be able to download the same file in the new app. If I upload the same file using angular4 then the image data looks like this:
0x255044462D312E370D0A25B5B5B5B50D0A312030206F626A0D0A3C3C2F547970652F43...
As you can see, the hex is completely different and I am not sure what is causing this. I have tried looking at the classic ASP code, but it is very old and there is no special encoding happening there that would cause this.
In Angular4 we are using the standard input-type-file control with the change event to capture the contents of the file and then save it into our DB:
myReader.onloadend = (f) => {
    let image64;
    if (myReader.result.toString()) {
        let base64id = ';base64,';
        image64 = myReader.result.substr(myReader.result.indexOf(base64id) + base64id.length);
    }
    someService.upload(image64);
}
So nothing crazy going on, right? The problem now is that I am able to download the files uploaded via Angular fine, but not the ones that were uploaded via classic ASP. I receive the following error when I try:
Failed to execute 'atob' on 'Window': The string to be decoded is not
correctly encoded.
Here is the code used to upload the files via classic ASP:
Public Function Load()
    Dim PosBeg, PosEnd, PosFile, PosBound, boundary, boundaryPos, Pos
    Dim Name, Value, FileName, ContentType
    Dim UploadControl
    PosBeg = 1
    PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(13)))
    boundary = MidB(m_FormRawData, PosBeg, PosEnd - PosBeg)
    boundaryPos = InstrB(1, m_FormRawData, boundary)
    'Get all data inside the boundaries
    Do Until (boundaryPos = InstrB(m_FormRawData, boundary & getByteString("--")))
        Set UploadControl = Server.CreateObject("Scripting.Dictionary")
        Pos = InstrB(BoundaryPos, m_FormRawData, getByteString("Content-Disposition"))
        Pos = InstrB(Pos, m_FormRawData, getByteString("name="))
        PosBeg = Pos + 6
        PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(34)))
        Name = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
        PosFile = InstrB(BoundaryPos, m_FormRawData, getByteString("filename="))
        PosBound = InstrB(PosEnd, m_FormRawData, boundary)
        If PosFile <> 0 And (PosFile < PosBound) Then
            PosBeg = PosFile + 10
            PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(34)))
            FileName = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
            UploadControl.Add "FileName", FileName
            Pos = InstrB(PosEnd, m_FormRawData, getByteString("Content-Type:"))
            PosBeg = Pos + 14
            PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(13)))
            ContentType = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
            UploadControl.Add "ContentType", ContentType
            PosBeg = PosEnd + 4
            PosEnd = InstrB(PosBeg, m_FormRawData, boundary) - 2
            Value = MidB(m_FormRawData, PosBeg, PosEnd - PosBeg)
            UploadControl.Add "value", Value
            m_dicFileData.Add LCase(name), UploadControl
        Else
            Pos = InstrB(Pos, m_FormRawData, getByteString(chr(13)))
            PosBeg = Pos + 4
            PosEnd = InstrB(PosBeg, m_FormRawData, boundary) - 2
            Value = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
            UploadControl.Add "value", Value
        End If
        m_dicForm.Add LCase(name), UploadControl
        BoundaryPos = InstrB(BoundaryPos + LenB(boundary), m_FormRawData, boundary)
    Loop
    Load = m_blnSucceed
End Function
Any idea what I can do here to fix this? I need to be able to download the old files as well. Is there an encoding or something else I am missing here?
After looking into this further, it seems the following is causing the error: Node.js sends a buffer to the front-end, where I then convert the buffer into base64 using the code below:
_arrayBufferToBase64(buffer) {
    let base64 = '';
    let bytes = new Uint8Array(buffer);
    let len = bytes.byteLength;
    for (let i = 0; i < len; i++) {
        base64 += String.fromCharCode(bytes[i]);
    }
    return base64;
}
The base64 returned for the old classic ASP files comes back as garbage and looks like this:
*R{£ÌõGpÃì#j¾é>i¿ê A l Ä ð!!H!u!¡!Î!û"'"U""¯"Ý#
(?(q(¢(Ô))8)k))Ð5*hÏ++6+i++Ñ,,9,n,¢,×-''I'z'«'Ü(
3F33¸3ñ4+4e44Ø55M55Â5ý676r6®6é7$7`77×88P88È99B99¼9ù:6:t
I am still not sure how to fix this though.
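One detail worth checking (an observation from the hex samples above, not something stated in the question): the legacy hex 0x4A564245526930784C6A63... spells the ASCII text "JVBERi0xLjcN...", which is itself the base64 encoding of "%PDF-1.7", while the new hex 0x255044462D312E37... is "%PDF-1.7" as raw bytes. In other words, the classic ASP app appears to have stored the files already base64-encoded, and the new app stores raw binary, so a single decoding path cannot serve both. Under that assumption, a minimal server-side sketch in Node.js (toBase64 is a hypothetical helper, not an existing API):
// Hypothetical Node.js helper: normalize both storage formats to a single
// base64 string before sending it to the Angular front-end.
// Assumption: legacy rows hold base64 text bytes, new rows hold raw binary.
function toBase64(imageColumnBuffer) {
    const asText = imageColumnBuffer.toString('ascii');
    // Legacy row: the stored bytes already spell out a base64 string.
    if (/^[A-Za-z0-9+/=\r\n]+$/.test(asText)) {
        return asText.replace(/[\r\n]/g, '');
    }
    // New row: raw binary - let Buffer produce the base64.
    return imageColumnBuffer.toString('base64');
}
Encoding on the server this way would also make the _arrayBufferToBase64 loop on the front-end unnecessary: the client would receive valid base64 for both old and new files.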

multiple column copy format postgresql Node.js

I am using a Postgres stream to insert records into Postgres. It works fine for a single column, but what is the ideal data format for COPY with multiple columns?
Code snippets:
var sqlcopysyntax = 'COPY srt (starttime, endtime) FROM STDIN delimiters E\'\\t\'';
var stream = client.query(copyFrom(sqlcopysyntax));
console.log(sqlcopysyntax);
var interndataset = [
    ['1', '4'],
    ['6', '12.074'],
    ['13.138', '16.183'],
    ['17.226', '21.605'],
    ['22.606', '24.733'],
    ['24.816', '27.027'],
    ['31.657', '33.617'],
    ['34.66', '37.204'],
    ['37.287', '38.58'],
    ['39.456', '43.669'],
    ['43.752', '47.297'],
    ['47.381', '49.55'],
];
var started = false;
var internmap = through2.obj(function(arr, enc, cb) {
    /* updated this part with the solution provided by #VaoTsun */
    var rowText = arr.map(function(item) { return (item.join('\t') + '\n'); }).join('');
    started = true;
    //console.log(rowText)
    rowText = rowText + '\\\.';
    /* end here */
    started = true;
    cb(null, rowText);
})
internmap.write(interndataset);
internmap.end();
internmap.pipe(stream);
With this I first got the error missing data for column "endtime" (due to the delimiter; resolved), but then got the error below:
error: end-of-copy marker corrupt
COPY intern (starttime, endtime) FROM STDIN
1 4
6 12.074
13.138 16.183
17.226 21.605
22.606 24.733
24.816 27.027
31.657 33.617
34.66 37.204
37.287 38.58
39.456 43.669
43.752 47.297
47.381 49.55
Any pointer on how to resolve this?
What would be the ideal format for multiple-column inserts using the COPY command?
With immense help from #jeromew from the GitHub community, and a proper implementation of node-pg-copy-streams (which takes away the COPY command's complexity), we were able to solve this issue:
https://github.com/brianc/node-pg-copy-streams/issues/65
Below is the working code snippet:
var sqlcopysyntax = 'COPY srt (starttime, endtime) FROM STDIN';
var stream = client.query(copyFrom(sqlcopysyntax));
console.log(sqlcopysyntax);
var interndataset = [
    ['1', '4'],
    ['6', '12.074'],
    ['13.138', '16.183'],
    ['17.226', '21.605'],
    ['22.606', '24.733'],
    ['24.816', '27.027'],
    ['31.657', '33.617'],
    ['34.66', '37.204'],
    ['37.287', '38.58'],
    ['39.456', '43.669'],
    ['43.752', '47.297'],
    ['47.381', '49.55'],
];
var started = false;
var internmap = through2.obj(function(arr, enc, cb) {
    var rowText = (started ? '\n' : '') + arr.join('\t');
    started = true;
    cb(null, rowText);
})
interndataset.forEach(function(r) {
    internmap.write(r);
})
internmap.end();
internmap.pipe(stream);
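For completeness, here is a minimal self-contained sketch of the same approach (assuming the pg, pg-copy-streams and through2 packages, an existing srt table, and connection settings taken from the usual PG* environment variables):
var pg = require('pg');
var copyFrom = require('pg-copy-streams').from;
var through2 = require('through2');

var client = new pg.Client();
client.connect(function(err) {
    if (err) throw err;
    var stream = client.query(copyFrom('COPY srt (starttime, endtime) FROM STDIN'));
    var started = false;
    var internmap = through2.obj(function(arr, enc, cb) {
        // one tab-separated line per row; no \. terminator is needed here,
        // the driver ends the COPY when the stream ends
        cb(null, (started ? '\n' : '') + arr.join('\t'));
        started = true;
    });
    internmap.pipe(stream);
    stream.on('finish', function() { client.end(); });
    stream.on('error', function(e) { console.error(e); client.end(); });
    [['1', '4'], ['6', '12.074']].forEach(function(r) { internmap.write(r); });
    internmap.end();
});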
https://www.postgresql.org/docs/current/static/sql-copy.html
DELIMITER
Specifies the character that separates columns within each row (line)
of the file. The default is a tab character in text format, a comma in
CSV format. This must be a single one-byte character. This option is
not allowed when using binary format.
Try using a non-default delimiter (as a tab can be replaced on copy/paste), e.g.:
t=# create table intern(starttime float,endtime float);
CREATE TABLE
t=# \! cat 1
COPY intern(starttime,endtime) FROM STDIN delimiter ';';
1;4
6;12.074
13.138;16.183
17.226;21.605
22.606;24.733
24.816;27.027
31.657;33.617
34.66;37.204
37.287;38.58
39.456;43.669
43.752;47.297
47.381;49.55
49.633;54.68
54.763;58.225
59.142;62.98
64.189;68.861
69.82;71.613
72.364;76.201
76.285;78.787
78.871;81.832
\.
t=# \i 1
COPY 20
Also, the data in your question lacks the \. terminator. Try typing the COPY in psql and you will see the instructions:
t=# COPY intern(starttime,endtime) FROM STDIN delimiter ';';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.

NetSuite - get CSV values using SuiteScript

I would like to update a whole bunch of 'rev rec plan' records. Updating via mass update isn't available, nor is an import possible.
So the only way I think I could do this is via a script (probably scheduled), but I am not sure how to retrieve the 3 fields in my Excel file (internal id, start date and end date). I need the script to get the date fields from my file and update the rev rec dates using the internal id.
Thanks
You should be able to get started with the following code. Just get the internal id of the file you want to parse.
function parseCSVFromFile(fileId) {
    var csvFile = nlapiLoadFile(fileId);    // load the CSV from the file cabinet
    var csv = csvFile.getValue();           // contents as a string
    var parsedCSV = CSVParser().parse(csv); // CSVParser is the minified parser below
    var data = parsedCSV.data;              // array of rows
    return data;
}
function CSVParser(){function e(r,n){if(Array.isArray(r)){var i=[];return r.forEach(function(t){"object"==typeof t?i.push(e(t.file,t.config)):i.push(e(t,n))}),i}var i={data:[],errors:[]};if(!/(\.csv|\.txt)$/.test(r))return i.errors.push({type:"",code:"",message:"Unsupported file type.",row:""}),i;try{var a=fs.readFileSync(r).toString();return t(a,n)}catch(s){return i.errors.push(s),i}}function t(e,t){var r=a(t),i=new n(r),s=i.parse(e);return f(r.complete)&&r.complete(s),s}function r(e,t){function r(){"object"==typeof t&&("string"==typeof t.delimiter&&1==t.delimiter.length&&-1==l.BAD_DELIMITERS.indexOf(t.delimiter)&&(o=t.delimiter),("boolean"==typeof t.quotes||t.quotes instanceof Array)&&(f=t.quotes),"string"==typeof t.newline&&(d=t.newline))}function n(e){if("object"!=typeof e)return[];var t=[];for(var r in e)t.push(r);return t}function i(e,t){var r="";"string"==typeof e&&(e=JSON.parse(e)),"string"==typeof t&&(t=JSON.parse(t));var n=e instanceof Array&&e.length>0,i=!(t[0]instanceof Array);if(n){for(var s=0;s<e.length;s++)s>0&&(r+=o),r+=a(e[s],s);t.length>0&&(r+=d)}for(var f=0;f<t.length;f++){for(var l=n?e.length:t[f].length,u=0;l>u;u++){u>0&&(r+=o);var p=n&&i?e[u]:u;r+=a(t[f][p],u)}f<t.length-1&&(r+=d)}return r}function a(e,t){if("undefined"==typeof e||null===e)return"";e=e.toString().replace(/"/g,'""');var r="boolean"==typeof f&&f||f instanceof Array&&f[t]||s(e,l.BAD_DELIMITERS)||e.indexOf(o)>-1||" "==e.charAt(0)||" "==e.charAt(e.length-1);return r?'"'+e+'"':e}function s(e,t){for(var r=0;r<t.length;r++)if(e.indexOf(t[r])>-1)return!0;return!1}var f=!1,o=",",d="\r\n";if(r(),"string"==typeof e&&(e=JSON.parse(e)),e instanceof Array){if(!e.length||e[0]instanceof Array)return i(null,e);if("object"==typeof e[0])return i(n(e[0]),e)}else if("object"==typeof e)return"string"==typeof e.data&&(e.data=JSON.parse(e.data)),e.data instanceof Array&&(e.fields||(e.fields=e.data[0]instanceof Array?e.fields:n(e.data[0])),e.data[0]instanceof Array||"object"==typeof e.data[0]||(e.data=[e.data])),i(e.fields||[],e.data||[]);throw"exception: Unable to serialize unrecognized input"}function n(e){function t(){if(E&&m&&(p("Delimiter","UndetectableDelimiter","Unable to auto-detect delimiting character; defaulted to '"+l.DefaultDelimiter+"'"),m=!1),e.skipEmptyLines)for(var t=0;t<E.data.length;t++)1==E.data[t].length&&""==E.data[t][0]&&E.data.splice(t--,1);return r()&&n(),a()}function r(){return e.header&&0==w.length}function n(){if(E){for(var e=0;r()&&e<E.data.length;e++)for(var t=0;t<E.data[e].length;t++)w.push(E.data[e][t]);E.data.splice(0,1)}}function a(){if(!E||!e.header&&!e.dynamicTyping)return E;for(var t=0;t<E.data.length;t++){for(var r={},n=0;n<E.data[t].length;n++){if(e.dynamicTyping){var i=E.data[t][n];"true"==i||"TRUE"===i?E.data[t][n]=!0:"false"==i||"FALSE"===i?E.data[t][n]=!1:E.data[t][n]=u(i)}e.header&&(n>=w.length?(r.__parsed_extra||(r.__parsed_extra=[]),r.__parsed_extra.push(E.data[t][n])):r[w[n]]=E.data[t][n])}e.header&&(E.data[t]=r,n>w.length?p("FieldMismatch","TooManyFields","Too many fields: expected "+w.length+" fields but parsed "+n,t):n<w.length&&p("FieldMismatch","TooFewFields","Too few fields: expected "+w.length+" fields but parsed "+n,t))}return e.header&&E.meta&&(E.meta.fields=w),E}function o(t){for(var r,n,a,s=[","," ","|",";",l.RECORD_SEP,l.UNIT_SEP],f=0;f<s.length;f++){var o=s[f],d=0,u=0;a=void 0;for(var p=new i({delimiter:o,preview:10}).parse(t),c=0;c<p.data.length;c++){var h=p.data[c].length;u+=h,"undefined"!=typeof 
a?h>1&&(d+=Math.abs(h-a),a=h):a=h}u/=p.data.length,("undefined"==typeof n||n>d)&&u>1.99&&(n=d,r=o)}return e.delimiter=r,{successful:!!r,bestDelimiter:r}}function d(e){e=e.substr(0,1048576);var t=e.split("\r");if(1==t.length)return"\n";for(var r=0,n=0;n<t.length;n++)"\n"==t[n][0]&&r++;return r>=t.length/2?"\r\n":"\r"}function u(e){var t=g.test(e);return t?parseFloat(e):e}function p(e,t,r,n){E.errors.push({type:e,code:t,message:r,row:n})}var c,h,m,g=/^\s*-?(\d*\.?\d+|\d+\.?\d*)(e[-+]?\d+)?\s*$/i,y=this,v=0,b=!1,w=[],E={data:[],errors:[],meta:{}};if(f(e.step)){var x=e.step;e.step=function(n){if(E=n,r())t();else{if(t(),0==E.data.length)return;v+=n.data.length,e.preview&&v>e.preview?h.abort():x(E,y)}}}this.parse=function(r){if(e.newline||(e.newline=d(r)),m=!1,!e.delimiter){var n=o(r);n.successful?e.delimiter=n.bestDelimiter:(m=!0,e.delimiter=l.DefaultDelimiter),E.meta.delimiter=e.delimiter}var a=s(e);return e.preview&&e.header&&a.preview++,c=r,h=new i(a),E=h.parse(c),t(),!f(e.complete)||b||y.streamer&&!y.streamer.finished()||e.complete(E),b?{meta:{paused:!0}}:E||{meta:{paused:!1}}},this.pause=function(){b=!0,h.abort(),c=c.substr(h.getCharIndex())},this.resume=function(){b=!1,h=new i(e),h.parse(c),b||(y.streamer&&!y.streamer.finished()?y.streamer.resume():f(e.complete)&&e.complete(E))},this.abort=function(){h.abort(),f(e.complete)&&e.complete(E),c=""}}function i(e){e=e||{};var t=e.delimiter,r=e.newline,n=e.comments,i=e.step,a=e.preview,s=e.fastMode;if(("string"!=typeof t||1!=t.length||l.BAD_DELIMITERS.indexOf(t)>-1)&&(t=","),n===t)throw"Comment character same as delimiter";n===!0?n="#":("string"!=typeof n||l.BAD_DELIMITERS.indexOf(n)>-1)&&(n=!1),"\n"!=r&&"\r"!=r&&"\r\n"!=r&&(r="\n");var f=0,o=!1;this.parse=function(e){function l(){return w.push(e.substr(f)),v.push(w),f=c,y&&p(),u()}function d(t){v.push(w),w=[],f=t,O=e.indexOf(r,f)}function u(e){return{data:v,errors:b,meta:{delimiter:t,linebreak:r,aborted:o,truncated:!!e}}}function p(){i(u()),v=[],b=[]}if("string"!=typeof e)throw"Input must be a string";var c=e.length,h=t.length,m=r.length,g=n.length,y="function"==typeof i;f=0;var v=[],b=[],w=[];if(!e)return u();if(s){for(var E=e.split(r),x=0;x<E.length;x++)if(!n||E[x].substr(0,g)!=n){if(y){if(v=[E[x].split(t)],p(),o)return u()}else v.push(E[x].split(t));if(a&&x>=a)return v=v.slice(0,a),u(!0)}return u()}for(var D=e.indexOf(t,f),O=e.indexOf(r,f);;)if('"'!=e[f])if(n&&0===w.length&&e.substr(f,g)===n){if(-1==O)return u();f=O+m,O=e.indexOf(r,f),D=e.indexOf(t,f)}else if(-1!==D&&(O>D||-1===O))w.push(e.substring(f,D)),f=D+h,D=e.indexOf(t,f);else{if(-1===O)break;if(w.push(e.substring(f,O)),d(O+m),y&&(p(),o))return u();if(a&&v.length>=a)return u(!0)}else{var A=f;for(f++;;){var A=e.indexOf('"',A+1);if(-1===A)return b.push({type:"Quotes",code:"MissingQuotes",message:"Quoted field unterminated",row:v.length,index:f}),l();if(A===c-1)return w.push(e.substring(f,A).replace(/""/g,'"')),v.push(w),y&&p(),u();if('"'!=e[A+1]){if(e[A+1]==t){w.push(e.substring(f,A).replace(/""/g,'"')),f=A+1+h,D=e.indexOf(t,f),O=e.indexOf(r,f);break}if(e.substr(A+1,m)===r){if(w.push(e.substring(f,A).replace(/""/g,'"')),d(A+1+m),D=e.indexOf(t,f),y&&(p(),o))return u();if(a&&v.length>=a)return u(!0);break}}else A++}}return l()},this.abort=function(){o=!0},this.getCharIndex=function(){return f}}function a(e){"object"!=typeof e&&(e={});var t=s(e);return("string"!=typeof 
t.delimiter||1!=t.delimiter.length||l.BAD_DELIMITERS.indexOf(t.delimiter)>-1)&&(t.delimiter=o.delimiter),"\n"!=t.newline&&"\r"!=t.newline&&"\r\n"!=t.newline&&(t.newline=o.newline),"boolean"!=typeof t.header&&(t.header=o.header),"boolean"!=typeof t.dynamicTyping&&(t.dynamicTyping=o.dynamicTyping),"number"!=typeof t.preview&&(t.preview=o.preview),"function"!=typeof t.step&&(t.step=o.step),"function"!=typeof t.complete&&(t.complete=o.complete),"boolean"!=typeof t.skipEmptyLines&&(t.skipEmptyLines=o.skipEmptyLines),"boolean"!=typeof t.fastMode&&(t.fastMode=o.fastMode),t}function s(e){if("object"!=typeof e)return e;var t=e instanceof Array?[]:{};for(var r in e)t[r]=s(e[r]);return t}function f(e){return"function"==typeof e}var o={delimiter:"",newline:"",header:!1,dynamicTyping:!1,preview:0,step:void 0,comments:!1,complete:void 0,skipEmptyLines:!1,fastMode:!1},l={};return l.parse=t,l.parseFiles=e,l.unparse=r,l.RECORD_SEP=String.fromCharCode(30),l.UNIT_SEP=String.fromCharCode(31),l.BYTE_ORDER_MARK="\ufeff",l.BAD_DELIMITERS=["\r","\n",'"',l.BYTE_ORDER_MARK],l.DefaultDelimiter=",",l.Parser=i,l.ParserHandle=n,l}
Example:
var parsedCSV = parseCSVFromFile(123);
nlapiLogExecution('DEBUG', "parsedCSV ", JSON.stringify(parsedCSV));
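From there, a scheduled script could walk the parsed rows and write the dates back. The sketch below is only a starting point: 'revrecschedule', 'revrecstartdate' and 'revrecenddate' are assumed ids, so substitute the actual record type and field ids of your rev rec plan records.
// Hypothetical SuiteScript 1.0 scheduled function; the record type and
// field ids are placeholders, not confirmed ids.
function scheduled_updateRevRecDates() {
    var rows = parseCSVFromFile(123); // each row: [internal id, start date, end date]
    for (var i = 0; i < rows.length; i++) {
        nlapiSubmitField('revrecschedule', rows[i][0],
            ['revrecstartdate', 'revrecenddate'], [rows[i][1], rows[i][2]]);
        // yield before running out of governance units on large files
        if (nlapiGetContext().getRemainingUsage() < 200) {
            nlapiYieldScript();
        }
    }
}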

SoapUI Load test groovy sequentially reading txt file

I am using the free version of SoapUI. In my load test, I want to read a request field value from a text file. The file looks like the following:
0401108937
0401109140
0401109505
0401110330
0401111204
0401111468
0401111589
0401111729
0401111768
In the load test, I want each request to read this file sequentially. I am using the code mentioned in Unique property per SoapUI request using groovy to read the file. How can I use the values from the file in a sequential manner?
I have the following setup script to read the file:
def projectDir = context.expand('${projectDir}') + File.separator
def dataFile = "usernames.txt"
try
{
    File file = new File(projectDir + dataFile)
    context.data = file.readLines()
    context.dataCount = context.data.size
    log.info " data count" + context.dataCount
    context.index = 0; //index to read data array in sequence
}
catch (Exception e)
{
    testRunner.fail("Failed to load " + dataFile + " from project directory.")
    return
}
In my test, I have the following script as a test step. I want to read the record at the current index from the array and then increment the index value:
def randUserAccount = context.data.get(context.index);
context.setProperty("randUserAccount", randUserAccount)
context.index = ((int)context.index) + 1;
But with this script, I always get the 2nd record of the array. The index value is not incrementing.
You defined the variable context.index as 0 and then just add 1 to it, so you always read the same record.
You may need a loop to read all the values, something like this:
for (int i = 0; i < context.data.size; i++) {
    context.setProperty("randUserAccount", context.data.get(i));
    //your code
}
You can add this setup script to the setup script section of the load test and access the values in the Groovy script test step using:
context.LoadTestContext.index =((int)context.LoadTestContext.index)+1
This might be a late reply, but I was facing the same problem in my load testing for some time. Using the index as a global property solved the issue for me.
The index is set to -1 initially. The code below increments the index by 1, sets the incremented value as a global property, and then picks the context data for that index.
<confirmationNumber>${=com.eviware.soapui.SoapUI.globalProperties.setPropertyValue( "index", (com.eviware.soapui.SoapUI.globalProperties.getPropertyValue( "index" ).toLong()+1 ).toString()); return (context.data.get( (com.eviware.soapui.SoapUI.globalProperties.getPropertyValue( "index" )).toInteger())) }</confirmationNumber>

NodeSchool IO Exercise 3

I've started learning Node.js.
I'm currently on exercise 3, where, given a file buffer, we have to calculate the number of newline characters ("\n").
I pass the tester, but if I create my own file file.txt, I am able to get the buffer and print out the string, yet it fails to count the newlines (console.log(numNewLines) returns 0).
Here is the code:
//import file system module
var fs = require("fs");
//get the buffer object based on argv[2]
var buf = fs.readFileSync(process.argv[2]);
//convert buffer to string
var str_buff = buf.toString();
//length of str_buff
var str_length = str_buff.length;
var numNewLines = 0;
for (var i = 0; i < str_length; i++)
{
    if (str_buff.charAt(i) == '\n')
    {
        numNewLines++;
    }
}
console.log(numNewLines);
If I understand your question correctly, you are trying to get the line count of the current file.
From the documentation:
The first element will be 'node', the second element will be the name
of the JavaScript file.
So you should replace process.argv[2] with process.argv[1].
Edit:
If you are passing a parameter for a file name on command-line like:
node server.js 'test.txt'
your code should work without any problem.
Your code is fine. You should check the file that you are using for the input.
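As an aside, the counting loop can be collapsed into a one-liner; a sketch of the same idea (splitting on '\n' yields one more piece than there are newlines):
var fs = require('fs');
// count newlines by splitting the file contents on '\n'
var count = fs.readFileSync(process.argv[2]).toString().split('\n').length - 1;
console.log(count);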
