How can I achieve the following in Python 3?
cat file | sigtool --hex-dump | head -c 2048
I'm attempting to read a .ndb database which I have created in order to check for malicious PHP files; to check files against this database, I need to be able to create hex signatures of those files in Python.
Try using the binascii.hexlify function:
import binascii
# get the filename from somewhere
# read the raw bytes
with open(filename, 'rb') as f:
    raw_bytes = f.read(1024)  # two hex digits per byte, so 1024 bytes -> 2048 hex characters
# convert to hex
hex_bytes = binascii.hexlify(raw_bytes)  # this will be 2048 bytes long
# use the hex, as desired
hex_str = hex_bytes.decode('ascii')  # hexadecimal uses a subset of ASCII
print(hex_str)  # or put this in a database, or whatever
I converted the hex_bytes into a string so I could print them out, but you might omit that if you can use the bytes object returned by hexlify directly (e.g. writing it to a file in binary mode, or saving it directly into a database).
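On Python 3.5 and newer, the bytes.hex() method is an alternative that gives you the string form directly; a minimal sketch of the same read using it:
# Python 3.5+ alternative: bytes.hex() returns the hex digits as a str directly
with open(filename, 'rb') as f:
    hex_str = f.read(1024).hex()  # 2048 hex characters, same as the hexlify version above
print(hex_str)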
I have a binary file (.man) containing data that I want to read using Python 3.7. The idea is to convert this binary file into a txt or a csv file.
I know the total number of values in the binary file but not the number of bytes per value.
I have read many posts about binary files but none was helpful...
Thank you in advance,
Simply put, yes.
with open('file.man', 'rb') as f:
    data = f.readlines()
print(data)  # a list of raw bytes objects, one per line
Opening a file with mode 'rb' reads it as raw bytes with no decoding; when printed, the result is shown as a bytes literal, with printable ASCII characters as-is and everything else as \x escapes.
The solution I found is this:
import struct

data = []
with open('binary_file', 'rb') as f:
    while True:
        chunk = f.read(4)  # one single-precision float per 4 bytes
        if len(chunk) < 4:
            break
        data.append(struct.unpack('f', chunk)[0])
Of course, this works because I know that the values are single precision.
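If numpy is an option, numpy.fromfile does the same job in one call; a minimal sketch under the same assumption that the file is a flat sequence of single-precision floats:
import numpy as np

# Native-endian single-precision floats, matching struct's plain 'f' format
data = np.fromfile('binary_file', dtype=np.float32)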
I am currently using the below code to import 6,000 csv files (with headers) and export them into a single csv file (with a single header row).
import glob
import pandas as pd

# import csv files from folder
path = r'data/US/market/merged_data'
allFiles = glob.glob(path + "/*.csv")
stockstats_data = pd.DataFrame()
list_ = []

for file_ in allFiles:
    df = pd.read_csv(file_, index_col=None)
    list_.append(df)
    stockstats_data = pd.concat(list_)
    print(file_ + " has been imported.")
This code works fine, but it is slow. It can take up to 2 days to process.
I was given a one-line script for the Terminal command line that does the same (but with no headers). That script takes 20 seconds.
for f in *.csv; do cat "`pwd`/$f" | tail -n +2 >> merged.csv; done
Does anyone know how I can speed up the first Python script? To cut the time down, I have thought about not importing it into a DataFrame and just concatenating the CSVs, but I cannot figure it out.
Thanks.
If you don't need the CSV data in memory, and are just copying from input to output, it'll be a lot cheaper to avoid parsing at all and copy without building anything up in memory:
import shutil
import glob

# import csv files from folder
path = r'data/US/market/merged_data'
allFiles = glob.glob(path + "/*.csv")
allFiles.sort()  # glob lacks reliable ordering, so impose your own if output order matters

with open('someoutputfile.csv', 'wb') as outfile:
    for i, fname in enumerate(allFiles):
        with open(fname, 'rb') as infile:
            if i != 0:
                infile.readline()  # Throw away header on all but first file
            # Block copy rest of file from input to output without parsing
            shutil.copyfileobj(infile, outfile)
            print(fname + " has been imported.")
That's it; shutil.copyfileobj handles copying the data efficiently, dramatically reducing the Python-level work, since nothing is parsed or reserialized. Don't omit the allFiles.sort() call!†
This assumes all the CSV files share the same format, encoding, line endings, etc.; that the encoding represents a newline as a single byte equivalent to ASCII \n and that byte is the last byte of the character (so ASCII and all ASCII-superset encodings work, as do UTF-16-BE and UTF-32-BE, but not UTF-16-LE and UTF-32-LE); and that the header doesn't contain embedded newlines. If that's the case, it's a lot faster than the alternatives.
For the cases where the encoding's version of a newline doesn't look enough like an ASCII newline, or where the input files are in one encoding and the output file should be in a different encoding, you can add the work of encoding and decoding without adding CSV parsing/serializing work. (On Python 2, add a from io import open to get Python 3-like efficient encoding-aware file objects.) Define known_input_encoding as some string naming the known encoding of the input files, e.g. known_input_encoding = 'utf-16-le', and optionally a different output_encoding for the output file:
# Other imports and setup code prior to first with unchanged from before

# Perform encoding to chosen output encoding, disabling line-ending
# translation to avoid conflicting with CSV dialect, matching raw binary behavior
with open('someoutputfile.csv', 'w', encoding=output_encoding, newline='') as outfile:
    for i, fname in enumerate(allFiles):
        # Decode with known encoding, disabling line-ending translation
        # for same reasons as above
        with open(fname, encoding=known_input_encoding, newline='') as infile:
            if i != 0:
                infile.readline()  # Throw away header on all but first file
            # Block copy rest of file from input to output without parsing,
            # just letting the file objects decode from input and encode to output
            shutil.copyfileobj(infile, outfile)
            print(fname + " has been imported.")
This is still much faster than involving the csv module, especially in modern Python (where the io module has undergone greater and greater optimization, to the point where the cost of decoding and reencoding is pretty minor, especially next to the cost of performing I/O in the first place). It's also a good validity check for self-checking encodings (e.g. the UTF family) even if the encoding is not supposed to change; if the data doesn't match the assumed self-checking encoding, it's highly unlikely to decode validly, so you'll get an exception rather than silent misbehavior.
Because some of the duplicates linked here are looking for an even faster solution than copyfileobj, some options:
The only succinct, reasonably portable option is to continue using copyfileobj and explicitly pass a non-default length parameter, e.g. shutil.copyfileobj(infile, outfile, 1 << 20) (1 << 20 is 1 MiB, a number which shutil has switched to for plain shutil.copyfile calls on Windows due to superior performance).
A still-portable option, though it only works for binary files and is not succinct, is to copy the underlying code copyfile uses on Windows: a reusable bytearray buffer larger than copyfileobj's default (1 MiB rather than 64 KiB), which removes some allocation overhead that copyfileobj can't fully avoid for large buffers. You'd replace shutil.copyfileobj(infile, outfile) with the following code, adapted from CPython 3.10's implementation of shutil._copyfileobj_readinto (which you could always use directly if you don't mind using non-public APIs); it uses 3.8+'s walrus operator, :=, for brevity:
buf_length = 1 << 20  # 1 MiB buffer; tweak to preference

# Using a memoryview gets zero-copy performance when short reads occur
with memoryview(bytearray(buf_length)) as mv:
    while n := infile.readinto(mv):
        if n < buf_length:
            with mv[:n] as smv:
                outfile.write(smv)
        else:
            outfile.write(mv)
Non-portably, if you can determine (in any way you feel like) the precise length of the header, and you know it will not change by even a byte in any other file, you can write the header directly, then use OS-specific calls similar to what shutil.copyfile uses under the hood to copy the non-header portion of each file. These OS-specific APIs can do the work with a single system call (regardless of file size) and avoid extra data copies by pushing all the work to in-kernel or even within-file-system operations, removing copies to and from user space. For example:
a. On Linux kernel 2.6.33 and higher (and any other OS that allows the sendfile(2) system call to work between open files), you can replace the .readline() and copyfileobj calls with:
filesize = os.fstat(infile.fileno()).st_size # Get underlying file's size
os.sendfile(outfile.fileno(), infile.fileno(), header_len_bytes, filesize - header_len_bytes)
To make it signal-resilient, it may be necessary to check the return value from sendfile, and track the number of bytes sent + skipped and the number remaining, looping until you've copied them all (these are low-level system calls; they can be interrupted). A sketch of such a retry loop follows after item c below.
b. On Python 3.8+ built with glibc >= 2.27 (or running on Linux kernel 4.5+), where the files are all on the same filesystem, you can replace sendfile with copy_file_range:
filesize = os.fstat(infile.fileno()).st_size # Get underlying file's size
os.copy_file_range(infile.fileno(), outfile.fileno(), filesize - header_len_bytes, header_len_bytes)
With similar caveats about checking for copying fewer bytes than expected and retrying.
c. On OSX/macOS, you can use the completely undocumented, and therefore even less portable/stable API shutil.copyfile uses, posix._fcopyfile for a similar purpose, with something like (completely untested, and really, don't do this; it's likely to break across even minor Python releases):
infile.seek(header_len_bytes) # Skip past header
posix._fcopyfile(infile.fileno(), outfile.fileno(), posix._COPYFILE_DATA)
which assumes fcopyfile pays attention to the seek position (docs aren't 100% on this) and, as noted, is not only macOS-specific, but uses undocumented CPython internals that could change in any release.
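For illustration, a retry loop along the lines described in item a might look like this rough sketch (same Linux-only assumptions as above; header_len_bytes is whatever header length you determined, and the function wrapper is just for readability):
import os

def copy_after_header(infile, outfile, header_len_bytes):
    """Copy everything after a fixed-size header using os.sendfile, retrying on short copies."""
    in_fd, out_fd = infile.fileno(), outfile.fileno()
    offset = header_len_bytes
    remaining = os.fstat(in_fd).st_size - header_len_bytes
    while remaining > 0:
        sent = os.sendfile(out_fd, in_fd, offset, remaining)
        if sent == 0:
            raise OSError("sendfile copied fewer bytes than expected")  # avoid looping forever on unexpected EOF
        offset += sent
        remaining -= sent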
† An aside on sorting the results of glob: That allFiles.sort() call should not be omitted; glob imposes no ordering on the results, and for reproducible results, you'll want to impose some ordering (it wouldn't be great if the same files, with the same names and data, produced an output file in a different order simply because in-between runs, a file got moved out of the directory, then back in, and changed the native iteration order). Without the sort call, this code (and all other Python+glob module answers) will not reliably read from a directory containing a.csv and b.csv in alphabetical (or any other useful) order; it'll vary by OS, file system, and often the entire history of file creation/deletion in the directory in question. This has broken stuff before in the real world, see details at A Code Glitch May Have Caused Errors In More Than 100 Published Studies.
Are you required to do this in Python? If you are open to doing this entirely in shell, all you'd need to do is first cat the header row from a randomly selected input .csv file into merged.csv before running your one-liner:
cat a-randomly-selected-csv-file.csv | head -n1 > merged.csv
for f in *.csv; do cat "`pwd`/$f" | tail -n +2 >> merged.csv; done
You don't need pandas for this, just the simple csv module would work fine.
import csv

df_out_filename = 'df_out.csv'
write_headers = True
with open(df_out_filename, 'w', newline='') as fout:
    writer = csv.writer(fout)
    for filename in allFiles:
        with open(filename, newline='') as fin:
            reader = csv.reader(fin)
            headers = next(reader)
            if write_headers:
                write_headers = False  # Only write headers once.
                writer.writerow(headers)
            writer.writerows(reader)  # Write all remaining rows.
Here's a simpler approach using pandas (though I am not sure how it will help with RAM usage):
import pandas as pd
import glob

path = r'data/US/market/merged_data'
allFiles = glob.glob(path + "/*.csv")

stockstats_data = pd.DataFrame()
for file_ in allFiles:
    df = pd.read_csv(file_)
    stockstats_data = pd.concat((stockstats_data, df), axis=0)
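As an aside, concatenating inside the loop recopies the accumulated data on every iteration. If you stay with pandas at all, collecting the frames in a list and concatenating once at the end is usually much faster; a minimal sketch (the output filename is just an example):
import glob
import pandas as pd

path = r'data/US/market/merged_data'
frames = [pd.read_csv(f) for f in sorted(glob.glob(path + "/*.csv"))]
merged = pd.concat(frames, ignore_index=True)
merged.to_csv('merged.csv', index=False)  # example output name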
I'm updating an older application for Python 3, but trying to maintain compatibility if possible with Python 2.7. One of the issues I've encountered deals with inconsistencies in ast.literal_eval() between Python 2 & 3, when handling a UTF-8 string.
Specifically, one of the functions my application performs involves:
Reading a string from a UTF-8 encoded text file that represents a Python list of file names
Converting that UTF-8 string to a Python list via literal_eval()
Using that list to access those files and perform other processing.
My test .txt file has this string:
['FileName1.txt', 'CP1252-1-àlacrème.txt', 'dUTF8-1-木兰辞.txt']
I'm using this brief test script to emulate what the larger application does:
import io
from ast import literal_eval

with io.open('z.txt', 'r', encoding='utf_8') as inFile:
    inStr = inFile.read()
print('Input string is length ' + str(len(inStr)))
fileList = literal_eval(inStr)
print(fileList)
Now, when I run this test script on Python 3, I get the following (all OK and as expected) result:
Input string is length 61
['FileName1.txt', 'CP1252-1-àlacrème.txt', 'dUTF8-1-???.txt']
(The question marks are expected as this is a Windows CMD window; it doesn't handle non-latin-1 characters)
But anyway, when I run the same script with the same file on Python 2.7, I get this result:
Input string is length 61
['FileName1.txt', 'CP1252-1-\xc3\xa0lacr\xc3\xa8me.txt', 'dUTF8-1-\xe6\x9c\xa8\xe5\x85\xb0\xe8\xbe\x9e.txt']
So literal_eval() isn't maintaining the UTF-8 encoding in the resulting list. (Or, I guess, it's trying to maintain the encoding but the best it can do is represent the non-ASCII data as individual byte values.)
My question is: is there any way to make the Python 2 literal_eval() give the same result as the Python 3 version? Or am I stuck with this as a limitation?
As mentioned in the comments, ast.literal_eval of the input parses differently between Python 2 and 3. It's better not to write Python source as a data file; use a module like pandas with .csv files instead:
If the input is a UTF-8 file with content:
FileName1.txt,CP1252-1-àlacrème.txt,dUTF8-1-木兰辞.txt
Then pandas can read it with:
import pandas as pd
data = pd.read_csv('test.txt',encoding='utf8',header=None)
print(data)
Output (Windows terminal Python 3, need appropriate font):
0 1 2
0 FileName1.txt CP1252-1-àlacrème.txt dUTF8-1-木兰辞.txt
Output (Windows IDLE, Python 2 in console needs appropriate code page to view ideographs):
0 1 2
0 FileName1.txt CP1252-1-àlacrème.txt dUTF8-1-木兰辞.txt
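If the data really must stay as a Python-literal list rather than CSV, one possible workaround, sketched here on the assumption (borne out by the Python 2 output above) that the elements come back as UTF-8 byte strings, is to decode them after the call so both versions yield the same Unicode values:
import io
from ast import literal_eval

with io.open('z.txt', 'r', encoding='utf_8') as inFile:
    fileList = literal_eval(inFile.read())

# On Python 2 the non-ASCII names come back as UTF-8 byte strings; decode them.
# On Python 3 they are already str, so this is a no-op.
fileList = [name.decode('utf-8') if isinstance(name, bytes) else name
            for name in fileList]
print(fileList)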
I am a newbie in Python 3.
I have a question about writing a string into a file.
The string below is what I tried to write into a file.
ÀH \x10\x08\x81\x00 (in hex, c0482010088100)
When I checked the file using the xxd command, I saw that there is a difference between the string and the file contents.
00000000: c380 4820 1008 c281 00 ..H .....
This is the code I wrote.
s = 'ÀH \x10\x08\x81\x00'
with open('test', 'w') as f:
    f.write(s)
The question is: how can I write this string to the file exactly as I intended it, byte for byte?
It seems that you want to write binary data. In that case, you should use the bytes type instead of str as this gives you full control over the binary content of the sequence.
When dealing with strings, you have to take into account that Python 3 strings are Unicode text and that encoding to bytes defaults to UTF-8, so by the time you enter something like À, the encoding used when writing the file decides what is actually stored. You can always encode() a string to look at its bytes:
>>> 'ÀH \x10\x08\x81\x00'.encode()
b'\xc3\x80H \x10\x08\xc2\x81\x00'
You can convert this to hex using the binascii module for a more readable hex string of those bytes:
>>> binascii.hexlify('ÀH \x10\x08\x81\x00'.encode())
b'c38048201008c28100'
As you can see, this is the same as what was written to your file. So Python already does the correct thing; it's just that the input is not what you want it to be.
So instead, use a bytes string and write to the file in binary mode:
# use a bytes string
s = b'\xc0\x48\x20\x10\x88\x10'
# open the file in binary mode
with open('test', 'bw') as f:
f.write(s)
By the way, if you look at the encoded string from the beginning, you can already see that you had a different encoding in mind than Python when you entered that string. You expected À to be 0xc0 in binary, which is somewhat correct since that is its Latin-1 representation. But when you look up its other representations, you can see that in UTF-8, which is what Python uses by default, it is 0xc3 0x80 instead, which is again the value we got when encoding it in Python.
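To make that difference concrete, a quick interactive check of the two representations of À:
>>> 'À'.encode('latin-1')   # the single byte the question expected
b'\xc0'
>>> 'À'.encode('utf-8')     # what Python's default encoding produces
b'\xc3\x80'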
You have to set the source encoding to UTF-8 and also use a raw string, because your string contains \ escape characters. So add the coding declaration and put r before your string to make it raw.
# -*- coding: utf-8 -*-
s = r'ÀH \x10\x08\x81\x00'
with open('test.txt', 'w') as f:
    f.write(s)
The problem is that for some archives or files uploaded to the Python application, ZipFile's namelist() returns badly decoded strings.
from zipfile import ZipFile

for name in ZipFile('zipfile.zip').namelist():
    print('Listing zip files: %s' % name)
How can I fix that code so it always decodes file names to Unicode (so that Chinese, Russian and other languages are supported)?
I've seen some samples for Python 2, but since the nature of strings changed in Python 3, I have no clue how to re-encode the names or apply chardet to them.
How can I fix that code so it always decodes file names to Unicode (so that Chinese, Russian and other languages are supported)?
Automatically? You can't. Filenames in a basic ZIP file are strings of bytes with no attached encoding information, so unless you know what the encoding was on the machine that created the ZIP you can't reliably get a human-readable filename back out.
There is an extension to the flags on modern ZIP files to tell you that the filename is UTF-8. Unfortunately, files you receive from Windows users typically don't have it, so you're left guessing with inherently unreliable methods like chardet.
I've seen some samples for Python 2, but since the nature of strings changed in Python 3, I have no clue how to re-encode the names or apply chardet to them.
Python 2 would just give you raw bytes back. In Python 3 the new behaviour is:
if the UTF-8 flag is set, it decodes the filenames using UTF-8 and you get the correct string value back
otherwise, it decodes the filenames using DOS code page 437, which is pretty unlikely to be what was intended. However, you can re-encode the string back to the original bytes, and then try to decode again using the code page you actually want, e.g. name.encode('cp437').decode('cp1252').
Unfortunately (again, because the unfortunatelies never end where ZIP is concerned), ZipFile does this decoding silently without telling you what it did. So if you want to switch and only do the transcode step when the filename is suspect, you have to duplicate the logic for sniffing whether the UTF-8 flag was set:
import chardet
from zipfile import ZipFile

ZIP_FILENAME_UTF8_FLAG = 0x800

for info in ZipFile('zipfile.zip').infolist():
    filename = info.filename
    if info.flag_bits & ZIP_FILENAME_UTF8_FLAG == 0:
        filename_bytes = filename.encode('cp437')
        guessed_encoding = chardet.detect(filename_bytes)['encoding'] or 'cp1252'
        filename = filename_bytes.decode(guessed_encoding, 'replace')
    ...
Here's the code that decodes filenames in zipfile.py according to the zip spec that supports only cp437 and utf-8 character encodings:
if flags & 0x800:
    # UTF-8 file names extension
    filename = filename.decode('utf-8')
else:
    # Historical ZIP filename encoding
    filename = filename.decode('cp437')
As you can see, if the 0x800 flag is not set, i.e., if utf-8 is not used in your input zipfile.zip, then cp437 is used, and therefore the result for "Chinese, Russian and other languages" is likely to be incorrect.
In practice, ANSI or OEM Windows codepages may be used instead of cp437.
If you know the actual character encoding (e.g., cp866, the OEM (console) codepage, may be used on Russian Windows), then you could re-encode the filenames to get the original names:
filename = corrupted_filename.encode('cp437').decode('cp866')
The best option is to create the zip archive using utf-8 so that you can support multiple languages in the same archive:
c:\> 7z.exe a -tzip -mcu archive.zip <files>..
or
$ python -mzipfile -c archive.zip <files>..
I had the same problem, but with a known language (Russian).
The simplest solution is just to convert the archive with this utility: https://github.com/vlm/zip-fix-filename-encoding
For me it works on 98% of archives (it failed on 317 files out of a corpus of 11,388).
A more complex solution: use the Python module chardet together with zipfile. But it depends on which Python version (2 or 3) you use, since zipfile differs somewhat between them. For Python 3 I wrote this code:
import chardet

# name is one entry from ZipFile(...).namelist()
original_name = name
try:
    name = name.encode('cp437')
except UnicodeEncodeError:
    name = name.encode('utf8')
encoding = chardet.detect(name)['encoding']
name = name.decode(encoding)
This code tries to treat the name as an old-style zip entry (encoded as CP437 and simply decoded wrongly); if that fails, the archive appears to be new-style (UTF-8). After determining the proper encoding, you can extract the files with code like:
from shutil import copyfileobj
fp = archive.open(original_name)
fp_out = open(name, 'wb')
copyfileobj(fp, fp_out)
In my case, this resolved the last 2% of failed files.
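Pieced together, a minimal sketch of the whole loop might look like the following (the archive name and the chardet fallback encoding are just examples, and chardet's guess can still be wrong):
import chardet
from shutil import copyfileobj
from zipfile import ZipFile

with ZipFile('zipfile.zip') as archive:
    for original_name in archive.namelist():
        if original_name.endswith('/'):
            continue  # skip directory entries
        try:
            raw = original_name.encode('cp437')  # old-style zip: undo zipfile's cp437 decode
        except UnicodeEncodeError:
            raw = original_name.encode('utf8')   # new-style zip: the name really was UTF-8
        encoding = chardet.detect(raw)['encoding'] or 'utf8'
        fixed_name = raw.decode(encoding, 'replace')
        with archive.open(original_name) as fp, open(fixed_name, 'wb') as fp_out:
            copyfileobj(fp, fp_out)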