I have a CSV file with 5 lines at the top that I want to remove using Node.js. I then want to add my own header line that better matches the header I would use. I have no control over the original CSV file, so I am unable to do this at the source.
It will be easiest to use one of the following modules:
https://www.npmjs.com/package/csv
https://www.npmjs.com/package/tsv
or another that you find in:
https://www.npmjs.com/browse/keyword/csv
https://www.npmjs.com/browse/keyword/tsv
(Don't worry whether it's CSV or TSV; just make sure you use the correct delimiter, which is a comma in your case.)
You could do it all manually, parsing the file as text, but using a module for this will be much less error-prone.
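For illustration, here is a minimal sketch using only Node's core fs and readline modules (no third-party parser), assuming the unwanted preamble is exactly the first 5 lines; the replacement header columns are made up:

const fs = require('fs');
const readline = require('readline');

const input = fs.createReadStream('original-file.csv');
const output = fs.createWriteStream('the-result.csv');
const rl = readline.createInterface({ input, crlfDelay: Infinity });

// Write the replacement header first (example column names).
output.write('id,name,price\n');

let lineNo = 0;
rl.on('line', (line) => {
  lineNo += 1;
  if (lineNo > 5) output.write(line + '\n'); // skip the 5-line preamble
});
rl.on('close', () => output.end());

A dedicated CSV module becomes worthwhile once you need to handle quoted fields that contain newlines, which a line-based approach like this cannot.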
Alternatively, from the shell (note that tail -n +6 starts output at line 6, i.e. it drops the first 5 lines):
(cat good-header.csv; tail -n +6 original-file.csv) > the-result.csv
I have a .c file that I want to open with Python 3 to update a specific number on a specific line.
It seems the most common way to do this is to read the file in, writing each line to a temporary file; when I get to the line I want, I modify it, write it to the temp file, and keep going. Once I'm done, I write the contents of the temp file back over the original file.
The problem I have is that the comments in the file contain Japanese characters. I know I can still read the file by passing errors='ignore' to open(), which lets me read the lines in, but it strips the Japanese characters completely, and I need to preserve them.
I haven't found a way to do this. Is there any way to read in a file that is partly in Japanese and partly in English?
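For what it's worth, here is a minimal sketch of the temp-file approach that preserves the Japanese text by opening the file with an explicit encoding instead of errors='ignore'; the filename, target line number, and edit are all made up, and the encoding is assumed to be UTF-8 (try 'shift_jis' or 'euc_jp' if the file uses a Japanese legacy encoding):

import os

path = 'source.c'            # hypothetical filename
tmp_path = path + '.tmp'

# Read and write with the same explicit encoding so no characters are lost.
with open(path, encoding='utf-8') as src, \
        open(tmp_path, 'w', encoding='utf-8') as dst:
    for lineno, line in enumerate(src, start=1):
        if lineno == 42:                       # hypothetical target line
            line = line.replace('100', '200')  # hypothetical edit
        dst.write(line)

os.replace(tmp_path, path)   # atomically swap the temp file in

If you are unsure of the encoding, opening with errors='replace' (rather than 'ignore') at least makes the damaged spots visible instead of silently dropping them.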
I want to convert a CSV file to a PDF file from the command line using the soffice command. But my CSV file is semicolon-separated instead of comma-separated.
If I use command:
soffice --convert-to pdf ./sampleCSVFile.csv
This gives me a PDF file, but the semicolons (;) are still in it. I found an article about converting ODS to CSV with a semicolon as the delimiter: https://ask.libreoffice.org/t/cli-convert-ods-to-csv-with-semicolon-as-delimiter/5021
Similar to that, I tried:
unoconv -f pdf -e FilterOptions="59,34,0,1" ./sampleCSVFile.csv
But it didn't help.
sampleCSVFile.csv is as follows:
Level 1;Level2
Level 1;Level2
Level 1 ;Level2
Level 1;Level2
Level 1 ;Level2
Level 1;Level2
Level 1;Level2
Level 1;Level2
Level 1;Level2
Is there a way to convert this semicolon-separated CSV file to PDF
(without first changing the delimiter from semicolon to comma)?
Traditionally in DOS you used Edlin to write a text file, then either COPY or TYPE it to the CON, COM, or LPT (Line PrinTer) device.
Windows still allows the print command to do that, and it's possible to echo text via Notepad to a PDF virtual printer set up as a port. I will skip that, as it's not quite suited to your usage.
By way of example, though, you can take your file, print it virtually to a PDF file port, and then send the port's result to the console. It could be done in one line rather than two, but two is more visual.
However, that approach is not cross-platform, and there are other, simpler ways to convert text to PDF on each platform.
You ask about soffice, and the principle has been much the same since before PDFs were invented:
soffice --infilter="calc_pdf_export" --convert-to pdf sampleCSVFile.csv
The text you export is the same as the text you import, although printing blind like this can add default print headers, footers ("Page 1"), and styles.
Because it is the most basic of methods, whatever is in your character-separated values file will come out much the same in the output. The only real difference is that there is no such thing as a tab stop or line wrap in a PDF, since it behaves like a virtual laser printer rather than a mechanical line-feed one.
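If the goal is to have Calc actually parse the semicolons into columns before the PDF export, one variant worth trying (a sketch, not tested here) is to name LibreOffice's standard CSV import filter explicitly and pass its FilterOptions, where 59 is the ASCII code of the semicolon separator and 34 is the double-quote text delimiter:

soffice --infilter="Text - txt - csv (StarCalc):59,34,0,1" --convert-to pdf sampleCSVFile.csv

With the separator declared, the values should land in separate spreadsheet columns in the PDF rather than being printed as raw semicolon-joined lines.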
I want to split a large file into small files of 10000 lines each. I know I can do this using:
split --lines=10000
However, the above command does not give extensions to the split files. I want to give all my split files the extension .txt. Is it possible to do this using split on Linux? If yes, then how?
Also, is it possible to number the files so that the first file is named a1.txt, the second a2.txt, and so on? I know split names the files aa, ab, etc., but I want a1.txt, a2.txt, a3.txt, a4.txt, a5.txt, a6.txt, a7.txt, etc. instead.
Use the -d parameter, which switches to numeric suffixes:
split --lines=10000 -d <file>
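To also get the .txt extension and the a1.txt, a2.txt, ... naming, GNU split can combine numeric suffixes with an added extension (a sketch; --additional-suffix needs a reasonably recent GNU coreutils, which is an assumption worth checking):

split --lines=10000 --numeric-suffixes=1 --suffix-length=1 --additional-suffix=.txt <file> a

This produces a1.txt, a2.txt, and so on. Note that --suffix-length=1 runs out of names after a9.txt; for larger inputs, drop that option and accept a01.txt-style names instead.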
I'm trying to filter out lines from all .js source files and put them into a separate file. (Specifically, I'm trying to grep all calls to a string-translation function and post-process them.)
I think I have the different parts figured out but can't make them fit together.
For each file, process it
Write each file's grepped lines to output.
Append the result to a file
I've tried calling through.push(<output per file>) from the plugin, but the following step expects a file, not a string.
From there, I expect I could do something like gulp-concat or a stream merge on the results and pipe it on to gulp.dest, but there's a bit missing here.
I figured out a way: simply replace the Vinyl file's contents with the lines to output, and push that file along with through's push.
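Here is a minimal sketch of that idea, assuming through2 and gulp-concat are installed; the regex for the translation calls and the paths are made up:

const gulp = require('gulp');
const through = require('through2');
const concat = require('gulp-concat');

// Keep only the lines of each file that match the given pattern.
function grepLines(pattern) {
  return through.obj(function (file, enc, cb) {
    const lines = file.contents.toString().split('\n')
      .filter((line) => pattern.test(line));
    file.contents = Buffer.from(lines.join('\n')); // replace the Vinyl contents
    cb(null, file); // push the modified file downstream
  });
}

gulp.task('extract', () =>
  gulp.src('src/**/*.js')
    .pipe(grepLines(/translate\(/)) // hypothetical translation-call pattern
    .pipe(concat('translations.txt'))
    .pipe(gulp.dest('build/')));

Replacing the contents keeps the object a Vinyl file all the way through, which is why gulp-concat and gulp.dest accept it downstream.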
OK, what I need is fairly simple.
I want to download LOTS of different files (from a specific server) via cURL, and I want to save each of them under a specific new filename on disk.
Is there an existing way (parameter, or whatever) to achieve that? How would you go about it?
(If there were an option to list all URL-filename pairs in a text file, one per line, and have cURL process it, that would be ideal.)
E.g.
http://www.somedomain.com/some-image-1.png --> new-image-1.png
http://www.somedomain.com/another-image.png --> new-image-2.png
...
OK, I just figured out a simple way to do it myself.
1) Create a text file with pairs of URL (what to download) and filename (what to save it as on disk), separated by a comma (,), one pair per line, and save it as input.txt.
2) Use the following simple Bash script:
while read -r line; do
  IFS=',' read -ra PART <<< "$line";
  curl "${PART[0]}" -o "${PART[1]}";
done < input.txt
*I haven't thoroughly tested it yet, but I think it should work.
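Alternatively, curl can read the pairs natively from a config file via -K/--config (a sketch based on curl's documented config-file format, not tested here); each url entry is matched with an output entry in order, just like URLs and -o options on the command line:

url = "http://www.somedomain.com/some-image-1.png"
output = "new-image-1.png"
url = "http://www.somedomain.com/another-image.png"
output = "new-image-2.png"

Save that as urls.txt and run:

curl -K urls.txt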