Edit a single row in a CSV with Node.js

I am trying to understand how I would use TypeScript and Node.js to grab a CSV file row by line number and edit that row. I assume the steps would be: get line 2, read that single line into an object, change the data, delete line 2, then write a new line 2 with my new object data.
I'm able to read the file and iterate row by row, so I can easily grab the row ID. I'm hitting a wall trying to figure out the actual editing of a single row; it seems the only way to 'edit' is to delete the row and add it back. I need to add the line back at the exact line number, because I'm iterating row by row, and if I just appended the new line to the end of the file my script would never stop.
I'm using the csv-writer package to write a full CSV file, and the csv-parse package together with fs.createReadStream() to iterate row by row through a CSV.
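There is no way to edit one line of a CSV in place on disk; the usual pattern is to read the whole file, replace the target line in memory, and write everything back. A minimal sketch of that pattern in TypeScript; `replaceRow` and `editCsvRow` are illustrative names, not part of csv-writer or csv-parse:

```typescript
import * as fs from "fs";

// Replace one row of a CSV, identified by zero-based line index.
// Done in memory: fine for small and medium files; for very large
// files, stream line by line to a temp file and rename it instead.
function replaceRow(csv: string, lineIndex: number, newRow: string): string {
  const lines = csv.split("\n");
  if (lineIndex < 0 || lineIndex >= lines.length) {
    throw new RangeError(`no line ${lineIndex} in a ${lines.length}-line file`);
  }
  lines[lineIndex] = newRow;
  return lines.join("\n");
}

// Read, edit, and write back in one shot.
function editCsvRow(path: string, lineIndex: number, newRow: string): void {
  const updated = replaceRow(fs.readFileSync(path, "utf8"), lineIndex, newRow);
  fs.writeFileSync(path, updated); // overwrites the whole file
}
```

Because the whole file is rewritten only after you finish iterating, the "appended line re-enters the loop" problem never arises.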

Related

How to transfer each line of a text file to Excel cell?

I need to transfer some PDF table content to Excel. I used the PyMuPDF module to put the PDF content into a .txt file, which is easier to work with, and I did that successfully.
As you can see in the .txt file I was able to transfer each column and row of the pdf. They are displayed sequentially.
- I need some way to read the txt strings sequentially so I can put each line of the txt into an .xlsx cell.
- Some way to set up triggers that start the sequential reading and decide which lines to throw away.
Example: start reading after a specific word, stop reading when some other word is reached, and so on. These documents have headers and useless information that are also transcribed to the txt file, so I need to ignore some contents of the txt and gather only the useful information to put in the .xlsx cells.
*I'm using the xlrd library; I would like to know how I can work with it here. (optional)
I don't know if it is a problem, but when I used the code below to count the number of lines, it returned only 15, even counting the empty ones; the document has 568 lines in total.
count = 0
with open(nome_arquivo_nota, 'r') as arquivo:
    # Iterate over the file object, not the filename string:
    # `for line in nome_arquivo_nota` loops over the characters of the
    # filename itself (15 of them), which is why count came out as 15.
    for line in arquivo:
        count += 1
print(count)
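The start/stop trigger logic described above is simple to express in any language (the question is in Python, but the filtering translates directly). A minimal TypeScript sketch, where the marker words are hypothetical placeholders:

```typescript
// Collect lines that appear after a start marker and before a stop marker,
// dropping blank lines. Marker words are placeholders for whatever header
// and footer text the real documents contain.
function extractBetween(lines: string[], startWord: string, stopWord: string): string[] {
  const kept: string[] = [];
  let reading = false;
  for (const line of lines) {
    if (!reading && line.includes(startWord)) { reading = true; continue; }
    if (reading && line.includes(stopWord)) break;
    if (reading && line.trim() !== "") kept.push(line);
  }
  return kept;
}
```

Each element of the returned array can then be written to one spreadsheet cell in order.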

Excel with CSV table

We have a measuring device that's connected to a CSV file. Whenever we measure a product, the CSV gets updated.
This CSV file is in turn connected to an Excel file to make it user friendly and to allow adding comments, so the CSV is connected and displayed in a table. We added another column to this table for a comment.
The problem is that when the CSV file updates (new row), the comment of the last row moves down to the new row.
Example: the digits come from the CSV file, and the 'ok' is added manually.
When we refresh the workbook so the new row gets added we get the following result:
The 'ok' from row 2 goes down to the new row.
It should be like this:
Is there a way to keep the manually added value stuck to its row?

When I publish a batch (upload CSV files) on Amazon Mechanical Turk, why does it always throw an error for the row after my last data observation?

I am trying to publish a batch on Amazon Mechanical Turk.
The design and the CSV file organization were done by my professor and me; I am pretty sure these parts are correct.
However, my data only has 27921 rows (the last line number in the CSV is 27921). Yet after I click the publish tab, MTurk always pops up an error message about line 27922, which is completely empty in my file!
I have tried to download the template and paste my original data into that template. It didn't work.
The Error is:
Line 27922: Row has 1 column while the header has 2
I just had the exact same problem.
For some reason, MTurk doesn't treat a trailing blank line as the end of the file.
I opened the CSV file in a text editor (in my case Notepad++, but I guess any regular text editor will work as well) and just deleted the last line.
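A trailing newline at the end of the file is what makes the uploader see a phantom empty row after your last data row. Deleting the last line in a text editor works; equivalently, you can trim trailing blank lines programmatically before upload. A minimal sketch (the function name is illustrative):

```typescript
// Strip trailing blank lines (and any other trailing whitespace) from
// CSV text so the uploader doesn't see a phantom empty final row.
function stripTrailingBlankLines(csv: string): string {
  return csv.replace(/\s+$/, "");
}
```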

Change the starting line number in Sublime Text

I have access to a small portion of a code file. When I get an exception, the line number in the exception refers to the entire code file rather than just the section I have access to.
This means I'll get an error message for line 300 that is actually line 5 in my file. The starting number varies depending on the file I'm working on.
To get around this at the moment, I just insert the relevant number of blank lines so that my line 1 lines up with where it will be in the parent file.
I'd like to know if there is a way to make the line numbers in Sublime start at something other than 1.
That way I'd be able to set the first line number in my file to the actual line number it will have when it's inserted into the parent file.
The line numbers in Sublime are based on the number of lines that are actually in the current view. There is no method in the API to alter their appearance, unfortunately.

Outputting a single Excel file with multiple worksheets

Is there a component in Talend Open Studio for Data Integration to be able to output a single Excel file but with 2 separate sheets in it?
I want to separate some columns of the original file into one sheet and another set of columns into the second sheet.
You'll need to output your data into two separate tFileOutputExcel components with the second one set to append the data to the file as a different sheet.
A quick example has some name and age data held against a unique id that needs to be split into two separate sheets with id and name on one sheet and id and age on another sheet.
I'm generating this data using the tRowGenerator component configured to generate a sequence for the id and random first names and ages between 18 and 75:
I then split this data using a tMap component:
The first flow of data can go to the first tFileOutputExcel component to create the file with a "Names" sheet:
Unfortunately we can't output the second sheet of data straight away, because Talend needs to take a write lock on the Excel file. Instead we stash the data in memory using a tBufferOutput component (we could also use a tHashOutput component, or stash the data on disk in a temporary file or database if it is likely to exceed available memory).
Once the first sub job is completed writing the names data to the Names sheet of our target file we can then read the Age data out of the buffer and into the second tFileOutputExcel which is then configured to append the sheet of data to the target file:
