How to transfer each line of a text file to an Excel cell? - python-3.x

I need to transfer some PDF table content to Excel. I used the PyMuPDF module to extract the PDF content into a .txt file, which is easier to work with, and that part worked.
As you can see in the .txt file, I was able to transfer each column and row of the PDF; they are displayed sequentially.
- I need some way to read the txt lines sequentially so I can put each line of the txt into a .xlsx cell.
- I need some way to set up triggers that start and stop the sequential reading and mark lines to throw away. Example: start reading after a specific word, stop reading when some other word is reached, things like that. These documents have headers and useless information that are also transcribed to the txt file, so I need to ignore some contents of the txt and gather only the useful information to put in the .xlsx cells (see the sketch after the code below).
*I'm using the xlrd library; I would like to know how I can work with it here. (optional)
I don't know if it is a problem, but when I count the number of lines, it returns only 15. The document has 568 lines in total; it only seemed to pick up the completely empty ones.
    count = 0
    with open(nome_arquivo_nota, 'r'):       # the file object is never bound to a name here
        for line in nome_arquivo_nota:       # so this loops over the filename string, not the file
            count += 1
    print(count)                             # prints 15, not the expected 568
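One way to cover both points is sketched below. Note that xlrd only reads workbooks, so this sketch writes the .xlsx with openpyxl instead; the trigger words "START" and "END" are placeholders for whatever actually brackets the useful part of your txt file, and nome_arquivo_nota is the filename variable from the question.

    from openpyxl import Workbook

    start_word = "START"     # placeholder trigger words - replace with the real ones
    stop_word = "END"

    wb = Workbook()
    ws = wb.active

    reading = False
    row = 1
    with open(nome_arquivo_nota, 'r') as arquivo:        # bind the file object this time
        for line in arquivo:
            line = line.strip()
            if not reading:
                if start_word in line:                   # start trigger found: begin collecting
                    reading = True
                continue
            if stop_word in line:                        # stop trigger found: ignore the rest
                break
            if line:                                     # skip empty lines
                ws.cell(row=row, column=1, value=line)   # one txt line per cell
                row += 1

    wb.save("resultado.xlsx")                            # placeholder output name

Looping over arquivo rather than the filename string is also why the counting snippet above reports 15 instead of 568.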

Related

Filter specific Values from a csv file

I would like to build a python3 script that will download a CSV file every day and send it to me by mail.
I have managed to download the file and send the mail. The problem is the filtering. The CSV file contains
"155;03155;Northeim;1669;1261.7;9;35;0;131.55;66;49.9"
and I want to filter out only the last value, 49.9, and send it to me by mail. This value changes daily, but the order of the values doesn't.
Can someone help me?
If your CSV file only contains one row, then read the file, split at ; and access the last element of the resulting list using [-1].
    with open('your_csv_file.csv') as fp:
        value = fp.read().strip().split(';')[-1]    # strip() drops the trailing newline before splitting
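Since the value then has to go out by mail, here is a minimal follow-up sketch using the standard library's smtplib and email modules; the addresses, SMTP host, and credentials are placeholders:

    import smtplib
    from email.message import EmailMessage

    with open('your_csv_file.csv') as fp:
        value = fp.read().strip().split(';')[-1]          # last field, e.g. 49.9

    msg = EmailMessage()
    msg['Subject'] = 'Daily CSV value'
    msg['From'] = 'sender@example.com'                    # placeholder addresses
    msg['To'] = 'you@example.com'
    msg.set_content(f"Today's value: {value}")

    with smtplib.SMTP('smtp.example.com', 587) as server:     # placeholder SMTP host
        server.starttls()
        server.login('sender@example.com', 'app-password')    # placeholder credentials
        server.send_message(msg)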

When I publish a batch (upload csv files) in Amazon Mechanical Turk, it always throws an error for the row after my last data observation?

I am trying to publish a batch on Amazon Mechanical Turk.
All the design and csv file organizing have been done by my professor and me; I am pretty sure these parts are correct.
However, my data only has 27921 rows (the last line number in the csv is 27921). But after I click the publish tab, MTurk always pops up an error message about line 27922, which is completely empty in my file!
I have tried to download the template and paste my original data into that template. It didn't work.
The error is:
Line 27922: Row has 1 column while the header has 2
I just had the exact same problem.
For some reason MTurk doesn't treat a trailing blank line as the end of the file.
I opened the csv file in a text editor (in my case Notepad++, but I guess any regular text editor will work as well) and just deleted the last line.
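If deleting the line by hand is impractical (for example, the file is regenerated every day), a small Python sketch can strip trailing blank lines before the upload; the file names are placeholders:

    # Drop every blank line at the end of the CSV before uploading to MTurk.
    with open('batch.csv', 'r', encoding='utf-8') as f:
        lines = f.read().splitlines()

    while lines and not lines[-1].strip():    # remove empty trailing lines
        lines.pop()

    with open('batch_clean.csv', 'w', encoding='utf-8', newline='') as f:
        f.write('\n'.join(lines) + '\n')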

csv file is changing when I just open it and save it again without editing it

I am using a piece of software that starts by importing some csv files. These csv files are given to me, but I need to make some changes and import them again into that software in order to get some results. If I just open these csv files and save them again without making any changes, I get a message saying 'Some features in your workbook might be lost'. If I then import the new csv file, which in reality should be identical to the one before, the software I am using fails to run.
As I understand it, something changes in the csv files just from opening and saving them. Does anybody know what is happening?
Thank you in advance!
Consider the following example csv file:
toto,titi,tata
1,2,3,4,5
1,2,3,4
1,2,3
1,2
1
1,2,3,4,5
1,2,3,4
1,2,3
1,2
1
Notice that not every row has the same number of elements. If I load it into Excel and then save it back as a csv file, Excel will add the necessary delimiters (, in my example) so that every row has the same number of "columns" (even if some are empty).
Sure enough, if I open the new .csv file saved by Excel in a normal text editor, I get:
toto,titi,tata,,
1,2,3,4,5
1,2,3,4,
1,2,3,,
1,2,,,
1,,,,
1,2,3,4,5
1,2,3,4,
1,2,3,,
1,2,,,
1,,,,
This is the behaviour of Excel, and I couldn't find an option in Excel to change it. If this is what makes your program's import fail, you'll have to consider making your changes to the csv file in a normal text editor (which doesn't make automatic assumptions like Excel does).
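If opening the file in Excel is unavoidable, a small Python sketch can undo the padding afterwards by dropping the empty fields Excel appended to each row; the file names are placeholders and a plain comma delimiter is assumed, as in the example above:

    import csv

    with open('excel_saved.csv', newline='') as src, \
         open('restored.csv', 'w', newline='') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            while row and row[-1] == '':    # strip the empty cells Excel added at the end
                row.pop()
            writer.writerow(row)

Note that this also drops any trailing empty fields that were genuinely part of the original data.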

Excel CSV file with more than 1,048,576 rows of data

I have been given a CSV file with more rows than the maximum Excel can handle, and I really need to be able to see all the data. I understand and have tried the method of "splitting" it, but it doesn't work.
Some background: the CSV file is an Excel CSV file, and the person who gave me the file has said there are about 2m rows of data.
When I import it into Excel, I get data up to row 1,048,576. I then re-import it in a new tab, starting at row 1,048,577 of the data, but it only gives me one row, and I know for a fact that there should be more (not only because "the person" said there are more than 2 million, but also because of the information in the last few sets of rows).
I thought that maybe the reason for this happening is that I have been provided the CSV file as an Excel CSV file, so all the information past row 1,048,576 is lost (?).
DO I need to ask for a file in an SQL database format?
You should try Delimit: it can open up to 2 billion rows and 2 million columns very quickly, and it has a free 15-day trial too. Does the job for me!
I would suggest loading the .CSV file in MS Access.
With MS Excel you can then create a data connection to this source (without actually loading the records into a worksheet) and create a connected pivot table. You can then have a virtually unlimited number of lines in your table (depending on processor and memory: I now have 15 million lines with 3 GB of memory).
An additional advantage is that you can now create an aggregate view in MS Access. In this way you can create overviews from hundreds of millions of lines and then view them in MS Excel (beware of the 2 GB size limit on an Access database file).
Excel 2007+ is limited to somewhat over 1 million rows (2^20 = 1,048,576 to be precise), so it will never load your 2M-line file. I think the technique you refer to as splitting is the built-in thing Excel has, but afaik that only works for width problems, not for length problems.
The easiest way I see right away is to use a file-splitting tool - there are tons of them - and load the resulting partial csv files into multiple worksheets.
ps: "Excel csv files" don't exist; there are only files produced by Excel that use one of the formats commonly referred to as csv files...
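If you would rather not install a separate splitting tool, the same idea fits in a few lines of Python; the input file name is a placeholder, and each piece repeats the header so it opens cleanly in Excel:

    import csv

    MAX_ROWS = 1_048_575     # leave one row per piece for the header
    with open('big_file.csv', newline='', encoding='utf-8') as src:
        reader = csv.reader(src)
        header = next(reader)
        part, rows = 1, []
        for row in reader:
            rows.append(row)
            if len(rows) == MAX_ROWS:        # piece is full: write it out
                with open(f'part_{part}.csv', 'w', newline='', encoding='utf-8') as dst:
                    out = csv.writer(dst)
                    out.writerow(header)
                    out.writerows(rows)
                part, rows = part + 1, []
        if rows:                             # write whatever is left over
            with open(f'part_{part}.csv', 'w', newline='', encoding='utf-8') as dst:
                out = csv.writer(dst)
                out.writerow(header)
                out.writerows(rows)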
You can use PowerPivot to work with files of up to 2GB, which will be enough for your needs.
First you want to change the file format from csv to txt. That is simple to do: just edit the file name and change csv to txt. (Windows will warn you about possibly corrupting the data, but it is fine; just click OK.) Then make a copy of the txt file so that you now have two files, both with 2 million rows of data. Then open the first txt file, delete the second million rows, and save the file. Then open the second txt file, delete the first million rows, and save the file. Now change the two files back to csv the same way you changed them to txt originally.
I'm surprised no one mentioned Microsoft Query. You can simply request data from the large CSV file as you need it by querying only what you need. (Querying is set up like filtering a table in Excel.)
Better yet, if one is open to installing the Power Query add-in, it's super simple and quick. Note: Power Query is an add-in for 2010 and 2013 but comes with 2016.
If you have Matlab, you can open large CSV (or TXT) files via its import facility. The tool gives you various import format options including tables, column vectors, numeric matrix, etc. However, with Matlab being an interpreter package, it does take its own time to import such a large file and I was able to import one with more than 2 million rows in about 10 minutes.
The tool is accessible via Matlab's Home tab by clicking on the "Import Data" button.
Once imported, the data appears in the Workspace on the right-hand side; it can then be opened (double-clicked) in an Excel-like view and even plotted in different formats.
I was able to edit a large 17GB csv file in Sublime Text without issue (line numbering makes it a lot easier to keep track of manual splitting), and then dump it into Excel in chunks smaller than 1,048,576 lines. Simple and quite quick - less faffy than researching into, installing and learning bespoke solutions. Quick and dirty, but it works.
Try PowerPivot from Microsoft. Here you can find a step by step tutorial. It worked for my 4M+ rows!
"DO I need to ask for a file in an SQL database format?" YES!!!
Use a database, is the best option for this problem.
Excel 2010 specifications .
Use MS Access. I have a file of 2,673,404 records. It will not open in Notepad++, and Excel will not load more than 1,048,576 records. It is tab delimited, since I exported the data from a MySQL database, and I need it in csv format. So I imported it into Access. Change the file extension to .txt so MS Access will take you through the import wizard.
MS Access will link to your file, so for the database to stay intact, keep the csv file.
The best way to handle this (with ease and no additional software) is with Excel - but using Power Pivot (which has MSFT Power Query embedded). Simply create a new Power Pivot data model that attaches to your large csv or text file. You will then be able to import multi-million-row files into memory using the embedded X-Velocity (in-memory compression) engine. The Excel sheet limit does not apply, as the X-Velocity engine puts everything into RAM in compressed form. I have loaded 15 million rows and filtered at will using this technique. Hope this helps someone... - Jaycee
I found this subject while researching.
There is a way to copy all this data to an Excel worksheet.
(I had this problem before with a 50 million line CSV file.)
If any formatting is needed, additional code could be included.
Try this.
    Sub ReadCSVFiles()
        Dim i As Long, j As Long
        Dim UserFileName As String
        Dim strTextLine As String
        Dim iFile As Integer: iFile = FreeFile

        UserFileName = Application.GetOpenFilename
        Open UserFileName For Input As #iFile
        i = 1
        j = 1
        Do Until EOF(iFile)                          ' use the file number that was actually opened
            Line Input #iFile, strTextLine
            Sheets(1).Cells(i, j) = strTextLine      ' each whole csv line goes into one cell
            i = i + 1
            If i > 1048576 Then                      ' sheet row limit reached:
                i = 1                                ' start again at row 1 ...
                j = j + 1                            ' ... in the next column
            End If
        Loop
        Close #iFile
    End Sub
You can try downloading and installing TheGun Text Editor, which can help you open a large csv file easily.
You can check a detailed article here: https://developingdaily.com/article/how-to/what-is-csv-file-and-how-to-open-a-large-csv-file/82
Split the CSV into two files in Notepad. It's a pain, but you can just edit each of them individually in Excel after that.

save datawindow as text in powerbuilder with some additional text

***Process Date From:
01/05/2012 0:00
Group;Member
Status:****
Rcp Cd Health Num Rcp Name Rcp Dob
1042231 1 MARIA TOVAR DIAS 14-Feb-05
1042256 2 KHALID KHAN 04-Mar-70
1042257 3 SAMREEN ISMAT 25-Mar-80
1042257 5 SAMREEN ISMAT 25-Mar-80
1042257 4 SAMREEN ISMAT 25-Mar-80
I want my PowerBuilder datawindow SaveAs text to look like this. The bold text (between the asterisks) is the additional text I want to add; the rest is the current SaveAs result.
Text files cannot contain formatting. There's no way to get bold text in a plain text file. I suggest adding the text to your datawindow header band (bolded, with an expression to make sure it only displays on the first page), then saving the results as HTML.
Well, you didn't mention which version of PB you are using, so I'll assume a recent one, in which case you have some better options such as SaveAsAscii and/or SaveAsFormattedText, which offer more flexibility in displaying column headers, computed fields, etc.
If you want to add the top section, I would add one or more additional dummy columns (or computed fields) to your dataobject for the additional data. Then either populate the dummy columns manually after retrieve, or via an expression in a computed field. You could put all of it in one computed field that wraps, or use four different ones (e.g. process_date_label, process_datetime, group_status, status).
The two newer versions of SaveAs will work better for you, as they display column header values instead of the column header name. SaveAsAscii came out pretty early, somewhere around version 7 of PowerBuilder. SaveAsFormattedText is relatively new, came out somewhere around PB version 11, and is a lot like SaveAsAscii but lets you choose the file encoding.
If you need more explicit detail, let me know, but I am sure you can get something to work using SaveAsAscii and extra columns.
Pseudo code:

    Do the SaveAs to a temp file
    Open the temp file for read in line mode
    Open the output file for write (replace) in line mode
    Write your additional text lines to the output file
        (note: you can include CRLF to write multiple lines at once)
    Loop:
        Read a line from the temp file
        If EOF, exit the loop (note: 0 is not EOF, -100 is EOF)
        Write the line to the output file
    Close the temp file and the output file
    Delete the temp file
