I've started learning Node.js.
I'm currently on exercise 3, where we have to count the number of newline characters ("\n") in a file buffer.
I pass the tester, but if I create my own file file.txt, I can get the buffer and print out the string, yet the newline count (console.log(numNewLines)) returns 0.
Here is the code:
// import file system module
var fs = require("fs");
// get the buffer object based on argv[2]
var buf = fs.readFileSync(process.argv[2]);
// convert buffer to string
var str_buff = buf.toString();
// length of str_buff
var str_length = str_buff.length;
var numNewLines = 0;
for (var i = 0; i < str_length; i++)
{
    if (str_buff.charAt(i) == '\n')
    {
        numNewLines++;
    }
}
console.log(numNewLines);
If I understand your question correctly, you are trying to count the newline characters in the current file.
From the documentation:
The first element will be 'node', the second element will be the name
of the JavaScript file.
So process.argv[1] is the script itself, and a file name passed on the command line lands in process.argv[2], which is what your code already uses.
Edit:
If you are passing a parameter for a file name on command-line like:
node server.js 'test.txt'
your code should work without any problem.
Your code is fine. You should check the file that you are using for the input.
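As a quick sanity check, here is a minimal sketch (check.js and file.txt are just example names) that prints process.argv and cross-checks the count with split:
// check.js - run as: node check.js file.txt
var fs = require("fs");
// prints something like ['/path/to/node', '/path/to/check.js', 'file.txt']
console.log(process.argv);
// counting the pieces between newlines gives the same answer as the loop
var buf = fs.readFileSync(process.argv[2]);
console.log(buf.toString().split('\n').length - 1);
If this also prints 0, the file genuinely contains no "\n" characters, for example a single line saved without a trailing newline.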
I am working with a system that syncs files between two vendors. The tooling is written in JavaScript and transforms file names before sending them to the destination. I am trying to fix a bug where it fails to properly compare file names between the origin and destination.
The script uses the file name to check whether the file already exists at the destination.
For example:
The following file name contains a special character that has a different encoding between source and destination.
source: Chinchón.jpg // decomposed: "o" followed by U+0301 (combining acute accent)
destination: Chinchón.jpg // precomposed: U+00F3 "ó" (0xf3)
The function that does the transformation is:
export const normalizeText = (text: string) => text
    .normalize('NFC')                // compose combining marks into single code points
    .replace(/\p{Diacritic}/gu, "")  // strip any remaining combining diacritics
    .replace(/\u{2019}/gu, "'")      // right single quotation mark -> apostrophe
    .replace(/\u{ff1a}/gu, ":")      // fullwidth colon -> ASCII colon
    .trim()
and the comparison happens just like the following:
const array1 = ['Chinchón.jpg'];
console.log(array1.includes('Chinchón.jpg')); // false
Do I reverse the transformation before comparing? What's the best way to do that?
If I understood your question correctly:
// prepare dictionary
const rawDictionary = ['Chinchón.jpg']
const dictionary = rawDictionary.map(x => normalizeText(x))
...
const rawComparant = 'Chinchón.jpg'
const comparant = normalizeText(rawComparant)
console.log(dictionary.includes(comparant))
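For clarity, the reason the raw includes() fails is that the two names are different Unicode sequences for the same visible text. A small sketch (the escape sequences stand in for the two encodings described above):
const decomposed = 'Chincho\u0301n.jpg'; // "o" followed by U+0301, the combining acute
const precomposed = 'Chinch\u00F3n.jpg'; // single precomposed "ó" (0xf3)
console.log(decomposed === precomposed); // false - different code points
console.log(decomposed.normalize('NFC') === precomposed.normalize('NFC')); // true
So as long as both sides pass through normalizeText (the NFC step alone already unifies the two forms), the includes() check should hold, and there is no need to reverse the transformation.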
So, I've run into an issue here that has me sort of perplexed. To give you an idea of what I'm trying to accomplish: I'm executing this from After Effects to get the path of each image in a directory, then writing a text file that holds each path on its own line. Currently everything works almost exactly as I want; the issue comes down to how AE lists a file path. I feel like I'm missing something simple, but here's the chunk of code I'm having an issue with:
var saveTextFile = File(savePath + "images.txt");
if (saveTextFile.exists)
    saveTextFile.remove();
saveTextFile.encoding = "UTF8";
saveTextFile.open("e", "TEXT", "????");
var files = Folder(savePath).getFiles("*.PNG");
if (files.length == 0) return;
for each (var file in files) {
    //var drive = '/x';
    //var fixName = fileName.replace(drive, 'X:');
    //name = fixName.toString();
    //$.writeln(name)
    saveTextFile.writeln(('file ' + "'" + file.toString() + "'"));
}
saveTextFile.close();
The issue exists in the for each (var file in files) section. If I run it as is, I end up with a path similar to what's listed here:
file '/x/_CURRENT_/sequence_PNG_00000.png'
file '/x/_CURRENT_/sequence_PNG_00001.png'
file '/x/_CURRENT_/sequence_PNG_00002.png'
file '/x/_CURRENT_/sequence_PNG_00003.png'
file '/x/_CURRENT_/sequence_PNG_00004.png'
file '/x/_CURRENT_/sequence_PNG_00005.png'
Now this is great, except that it's reading the drive letter as "/x". This is problematic, so if I uncomment the variables in the for each loop, I end up with something similar to this:
file 'X:/_CURRENT_/sequence_PNG_00000.png'
file 'X:/_CURRENT_/sequence_PNG_00000.png'
file 'X:/_CURRENT_/sequence_PNG_00000.png'
file 'X:/_CURRENT_/sequence_PNG_00000.png'
file 'X:/_CURRENT_/sequence_PNG_00000.png'
file 'X:/_CURRENT_/sequence_PNG_00000.png'
And that's great because it formats the X: drive properly in the string.... But alas, it writes the same frame number (00000.png) for every image.
Can anyone spot what I might be overlooking?
for each (var file in files){
does not look like valid JS syntax. Also, ExtendScript is ES3, so there is no [1,2,3].forEach(function(ele, i, arr) {}) either.
Try it with
for (var i = 0; i < files.length; i++) {
    var file = files[i];
    // ...and so on
}
I'm not sure my code is the most efficient here, but it's effective, and at the moment that's all I can ask for. In hindsight, the earlier commented-out lines referenced fileName, a variable that was never reassigned inside the loop, which is most likely why every line repeated the same 00000 frame; the version below derives the name from files[i] on each pass.
This is the chunk that ended up working for me.
var saveTextFile = File(savePath + "images.txt");
if (saveTextFile.exists)
    saveTextFile.remove();
saveTextFile.encoding = "UTF8";
saveTextFile.open("e", "TEXT", "????");
var files = Folder(savePath).getFiles("*.PNG");
if (files.length == 0) return;
for (var i = 0; i < files.length; i++) {
    var file = files[i];
    var drive = '/x';
    var fileIt = file.toString();
    var fixName = fileIt.replace(drive, 'X:');
    $.writeln(fixName);
    saveTextFile.writeln(('file ' + "'" + fixName + "'"));
}
saveTextFile.close();
I have a legacy classic ASP application that allows us to upload PDF files. The file is saved as hex into an MS SQL DB, in an image column. Here is a snip of the data in the column:
0x4A564245526930784C6A634E436957317462573144516F784944416762324A7144516F...
We are now working on a rework of the app using Node.js as our backend and Angular 4 as the front end. As part of that, we need to be able to download the same files in the new app. If I upload the same file using Angular 4, the image data looks like this:
0x255044462D312E370D0A25B5B5B5B50D0A312030206F626A0D0A3C3C2F547970652F43...
As you can see, the hex is completely different and I am not sure what is causing this. I have looked at the classic ASP code, but it is very old and there is no special encoding or similar happening there that would cause this.
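One clue, though: decoding the first bytes of each value as ASCII (a quick sketch in Node.js; the hex prefixes are copied verbatim from the two rows above) suggests the two apps are not storing the same kind of data.
// decode the leading bytes of the two stored values
const legacyHex = '4A564245526930784C6A63'; // from the classic ASP row
const angularHex = '255044462D312E37'; // from the Angular row
console.log(Buffer.from(legacyHex, 'hex').toString('ascii')); // "JVBERi0xLjc", the start of base64("%PDF-1.7")
console.log(Buffer.from(angularHex, 'hex').toString('ascii')); // "%PDF-1.7", a raw PDF header
So it looks like the legacy column holds the base64 text of the PDF, while the new rows hold the raw PDF bytes.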
In Angular 4 we are using the standard input type="file" control with the change event to capture the contents of the file, and then save it into our DB:
myReader.onloadend = (f) => {
    let image64;
    if (myReader.result.toString()) {
        let base64id = ';base64,';
        image64 = myReader.result.substr(myReader.result.indexOf(base64id) + base64id.length);
    }
    someService.upload(image64);
}
So nothing crazy going on, right? The problem now is that I can download the files uploaded via Angular fine, but not the ones that were uploaded via classic ASP. I receive the following error when I try:
Failed to execute 'atob' on 'Window': The string to be decoded is not
correctly encoded.
Here is the code used to upload the files via classic ASP:
Public Function Load()
    Dim PosBeg, PosEnd, PosFile, PosBound, boundary, boundaryPos, Pos
    Dim Name, Value, FileName, ContentType
    Dim UploadControl
    PosBeg = 1
    PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(13)))
    boundary = MidB(m_FormRawData, PosBeg, PosEnd - PosBeg)
    boundaryPos = InstrB(1, m_FormRawData, boundary)
    'Get all data inside the boundaries
    Do Until (boundaryPos = InstrB(m_FormRawData, boundary & getByteString("--")))
        Set UploadControl = Server.CreateObject("Scripting.Dictionary")
        Pos = InstrB(BoundaryPos, m_FormRawData, getByteString("Content-Disposition"))
        Pos = InstrB(Pos, m_FormRawData, getByteString("name="))
        PosBeg = Pos + 6
        PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(34)))
        Name = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
        PosFile = InstrB(BoundaryPos, m_FormRawData, getByteString("filename="))
        PosBound = InstrB(PosEnd, m_FormRawData, boundary)
        If PosFile <> 0 And (PosFile < PosBound) Then
            PosBeg = PosFile + 10
            PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(34)))
            FileName = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
            UploadControl.Add "FileName", FileName
            Pos = InstrB(PosEnd, m_FormRawData, getByteString("Content-Type:"))
            PosBeg = Pos + 14
            PosEnd = InstrB(PosBeg, m_FormRawData, getByteString(chr(13)))
            ContentType = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
            UploadControl.Add "ContentType", ContentType
            PosBeg = PosEnd + 4
            PosEnd = InstrB(PosBeg, m_FormRawData, boundary) - 2
            Value = MidB(m_FormRawData, PosBeg, PosEnd - PosBeg)
            UploadControl.Add "value", Value
            m_dicFileData.Add LCase(name), UploadControl
        Else
            Pos = InstrB(Pos, m_FormRawData, getByteString(chr(13)))
            PosBeg = Pos + 4
            PosEnd = InstrB(PosBeg, m_FormRawData, boundary) - 2
            Value = getString(MidB(m_FormRawData, PosBeg, PosEnd - PosBeg))
            UploadControl.Add "value", Value
        End If
        m_dicForm.Add LCase(name), UploadControl
        BoundaryPos = InstrB(BoundaryPos + LenB(boundary), m_FormRawData, boundary)
    Loop
    Load = m_blnSucceed
End Function
Any idea what I can do here to fix this? I need to be able to download the old files as well. Is there an encoding or something else I am missing here?
After looking into this further, it seems the following is causing the error: Node.js sends a buffer to the front end, where I convert the buffer into base64 using the code below:
_arrayBufferToBase64(buffer) {
    let base64 = '';
    let bytes = new Uint8Array(buffer);
    let len = bytes.byteLength;
    // build a string with one character per byte
    for (let i = 0; i < len; i++) {
        base64 += String.fromCharCode(bytes[i]);
    }
    return base64;
}
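Note that, as written, this helper only builds a raw binary string. The usual form of this pattern finishes with btoa() to produce actual base64; a sketch, assuming a browser context where btoa is available:
_arrayBufferToBase64(buffer) {
    let binary = '';
    let bytes = new Uint8Array(buffer);
    for (let i = 0; i < bytes.byteLength; i++) {
        binary += String.fromCharCode(bytes[i]); // one character per byte
    }
    return btoa(binary); // encode the binary string as base64
}
Feeding a string that is not valid base64 into atob() later is what produces the "Failed to execute 'atob'" error above.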
The base64 returned for the old classic ASP files comes back as garbage and looks like this:
*R{£ÌõGpÃì#j¾é>i¿ê A l Ä ð!!H!u!¡!Î!û"'"U""¯"Ý#
(?(q(¢(Ô))8)k))Ð5*hÏ++6+i++Ñ,,9,n,¢,×-''I'z'«'Ü(
3F33¸3ñ4+4e44Ø55M55Â5ý676r6®6é7$7`77×88P88È99B99¼9ù:6:t
I am still not sure how to fix this though.
I am using a Postgres COPY stream to insert records into Postgres.
It works fine for a single column, but what is the ideal data format for COPY with multiple columns?
Code snippet:
var sqlcopysyntax = 'COPY srt (starttime, endtime) FROM STDIN delimiters E\'\\t\'';
var stream = client.query(copyFrom(sqlcopysyntax));
console.log(sqlcopysyntax)
var interndataset = [
    ['1', '4'],
    ['6', '12.074'],
    ['13.138', '16.183'],
    ['17.226', '21.605'],
    ['22.606', '24.733'],
    ['24.816', '27.027'],
    ['31.657', '33.617'],
    ['34.66', '37.204'],
    ['37.287', '38.58'],
    ['39.456', '43.669'],
    ['43.752', '47.297'],
    ['47.381', '49.55'],
];
var started = false;
var internmap = through2.obj(function(arr, enc, cb) {
    /* updated this part with the solution provided by @VaoTsun */
    var rowText = arr.map(function(item) { return (item.join('\t') + '\n') }).join('')
    started = true;
    //console.log(rowText)
    rowText = rowText + '\\\.';
    /* end here */
    started = true;
    cb(null, rowText);
})
internmap.write(interndataset);
internmap.end();
internmap.pipe(stream);
Initially I got the error missing data for column "endtime" due to the delimiter (now resolved), but then I got the error below:
error: end-of-copy marker corrupt
COPY intern (starttime, endtime) FROM STDIN
1 4
6 12.074
13.138 16.183
17.226 21.605
22.606 24.733
24.816 27.027
31.657 33.617
34.66 37.204
37.287 38.58
39.456 43.669
43.752 47.297
47.381 49.55
Any pointers on how to resolve this?
What would be the ideal format for multiple-column inserts using the COPY command?
With immense help from @jeromew from the GitHub community, and a proper implementation of node-pg-copy-streams (which takes away the COPY command's complexity), we were able to solve this issue:
https://github.com/brianc/node-pg-copy-streams/issues/65
Below is the working code snippet:
var sqlcopysyntax = 'COPY srt (starttime, endtime) FROM STDIN';
var stream = client.query(copyFrom(sqlcopysyntax));
console.log(sqlcopysyntax)
var interndataset = [
    ['1', '4'],
    ['6', '12.074'],
    ['13.138', '16.183'],
    ['17.226', '21.605'],
    ['22.606', '24.733'],
    ['24.816', '27.027'],
    ['31.657', '33.617'],
    ['34.66', '37.204'],
    ['37.287', '38.58'],
    ['39.456', '43.669'],
    ['43.752', '47.297'],
    ['47.381', '49.55'],
];
var started = false;
var internmap = through2.obj(function(arr, enc, cb) {
    // prefix every row after the first with a newline; no trailing "\." marker
    var rowText = (started ? '\n' : '') + arr.join('\t');
    started = true;
    cb(null, rowText);
})
// write one row at a time instead of the whole array at once
interndataset.forEach(function(r) {
    internmap.write(r);
})
internmap.end();
internmap.pipe(stream);
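The key differences from the failing version: each chunk is plain tab-separated text, and there is no hand-written \. terminator, because node-pg-copy-streams signals end-of-copy itself when the stream ends (a literal \. in the data is what triggered "end-of-copy marker corrupt"). A condensed sketch of the same idea, assuming a connected pg client and the copyFrom import from above:
function copyRows(client, rows, done) {
    var stream = client.query(copyFrom('COPY srt (starttime, endtime) FROM STDIN'));
    stream.on('error', done);
    stream.on('finish', done);
    rows.forEach(function(r) {
        stream.write(r.join('\t') + '\n'); // e.g. "1\t4\n"
    });
    stream.end();
}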
https://www.postgresql.org/docs/current/static/sql-copy.html
DELIMITER
Specifies the character that separates columns within each row (line)
of the file. The default is a tab character in text format, a comma in
CSV format. This must be a single one-byte character. This option is
not allowed when using binary format.
Try using a non-default delimiter (as tabs can get replaced on copy/paste), e.g.:
t=# create table intern(starttime float,endtime float);
CREATE TABLE
t=# \! cat 1
COPY intern(starttime,endtime) FROM STDIN delimiter ';';
1;4
6;12.074
13.138;16.183
17.226;21.605
22.606;24.733
24.816;27.027
31.657;33.617
34.66;37.204
37.287;38.58
39.456;43.669
43.752;47.297
47.381;49.55
49.633;54.68
54.763;58.225
59.142;62.98
64.189;68.861
69.82;71.613
72.364;76.201
76.285;78.787
78.871;81.832
\.
t=# \i 1
COPY 20
Also, the data in your question lacks the \. terminator; try typing the COPY in psql and you will see the instructions:
t=# COPY intern(starttime,endtime) FROM STDIN delimiter ';';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
For the past couple of days I have been trying and reading to get something very specific done in Node-RED: I want to write the (LoRa) message to a CSV file.
This CSV should contain the following items:
topic
date
payload
I can insert the date using a function node:
var str1 = Date();
I have been playing around with the CSV node, but I can't get it to output comma-separated values. All this probably has to do with my lack of JavaScript programming skills, which is why I turn to you.
Can you help me out?
Edit:
I'm still looking for the answer, which has brought me the following:
Function node:
var res = Date() + "," + msg.topic + "," + msg.payload;
return [ { payload: res } ];
Output:
[{"col1":"Mon Oct 17 2016 17:10:20 GMT+0200 (CEST)","col2":"1/test/1","col3":"string1"}]
All I want now is to lose the extra information such as the column names and the [{}] wrapping.
The CSV node works only on the msg.payload field, so you will have to copy the extra data into the payload object to get it to output what you want.
To format the data correctly, place a function node with the following before the CSV node:
var originalPayload = msg.payload;
var newPayload = {};
newPayload.date = new Date().toString();
newPayload.topic = msg.topic;
newPayload.payload = originalPayload;
msg.payload = newPayload;
return msg;
And configure the CSV node to output columns "date,topic,payload"
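With the sample values from the question, each message should then come out of the CSV node as one plain comma-separated line, along the lines of (illustrative output built from the earlier example data):
Mon Oct 17 2016 17:10:20 GMT+0200 (CEST),1/test/1,string1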