I have a .tgz file that was formatted as shellcode; it looks like this (hex):
"\x1F\x8B\x08\x00\x44\x7A\x91\x4F\x00\x03\xED\x59\xED\x72.."
It was generated this way (Python 3):
import os

def main():
    dump_src = "MyPlugin.tgz"
    fc = ""
    try:
        with open(dump_src, 'rb') as fd:
            fcr = fd.read()
            for byte in bytearray(fcr):
                fc += "\\x{:02x}".format(byte)
    except:
        fcr = dump_src
        for byte in bytearray(fcr):
            fc += "\\x{:02x}".format(byte)
    print(fc)
    # failed attempt:
    fcback = bytes(int(fc[i+2:i+4], 16) for i in range(0, len(fc), 4))
    print(fcback)

if __name__ == "__main__":
    main()
How can I convert this back to the original tgz archive?
Edit: the failed attempt in the last section outputs this:
b'\x8b\x00\x10]\x03\x93o0\x85%\xe2!\xa4H\xf1Fi\xa7\x15\xf61&\x13N\xd9[\xfag\x11V\x97\xd3\xfb%\xf7\xe3\\\xae\xc2\xff\xa4>\xaf\x11\xcc\x93\xf1\x0c\x93\xa4\x1b\xefxj\xc3?\xf9\xc1\xe8\xd1\xd9\x01\x97qB"\x1a\x08\x9cO\x7f\xe9\x19\xe3\x9c\x05\xf2\x04a\xaa\x00A,\x15"RN-\xb6\x18K\x85\xa1\x11\x83\xac/\xffR\x8a\xa19\xde\x10\x0b\x08\x85\x93\xfc]\x8a^\xd2-T\x92\x9a\xcc-W\xc7|\xba\x9c\xb3\xa6V0V H1\x98\xde\x03#\x14\'\n 1Y\xf7R\x14\xe2#\xbe*:\xe0\xc8\xbb\xc9\x0bo\x8bm\xed.\xfd\xae\xef\x9fT&\xa1\xf4\xcf\xa7F\xf4\xef\xbb"8"\xb5\xab,\x9c\xbb\xfc3\x8b\xf5\x88\xf4A\x0ek%5eO\xf4:f\x0b\xd6\x1bi\xb6\xf3\xbf\xf7\xf9\xad\xb5[\xdba7\xb8\xf9\xcd\xba\xdd,;c\x0b\xaaT"\xd4\x96\x17\xda\x07\x87& \xceH\xd6\xbf\xd2\xeb\xb4\xaf\xbd\xc2\xee\xfc\'3zU\x17>\xde\x06u\xe3G\x7f\x1e\xf3\xdf\xb6\x04\x10A\x04\x10A\x04\x10A\x04\x10A\xff\x9f\xab\xe8(\x00'
And when I redirect the output to a file (e.g. via python3 main.py > MyFile.tgz), the file is corrupted.
Since you know the format of the data (each byte is encoded as a string of 4 characters in the format "\xAB"), it's easy to reverse the conversion and get the original bytes again. It only takes one line of Python code:
data = bytes(int(fc[i+2:i+4], 16) for i in range(0, len(fc), 4))
This uses:
range(start, stop, step) with step 4 to iterate in groups of 4 characters through your string
slicing to get each group of 2 hexadecimal digits
int(x, base) to convert the hexadecimal string to an integer
a generator expression to immediately pass the converted elements to:
bytes() to create a bytes object with the data
The variable data is now of type bytes, and you could write it directly to a file (to unpack with an external tool such as tar or gzip), or pass it to gzip.decompress() (to process it further in Python; note that plain zlib.decompress() would reject the gzip header).
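For instance, a minimal sketch of processing the recovered bytes further in Python (assuming data holds the complete gzip stream recovered above):

import gzip
import io
import tarfile

# Decompress the gzip layer; the result is the raw tar archive.
tar_bytes = gzip.decompress(data)

# Or open the archive directly and list its members:
with tarfile.open(fileobj=io.BytesIO(data), mode='r:gz') as tar:
    print(tar.getnames())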
UPDATE (follow-up on the comments and updated question):
Firstly, I have tested the above code and it does result in the same bytes as the input. Are you really sure that the example output in your question is the actual result of the code in your question? Please try to be careful when copying code and/or output. A few remarks:
Your code is not properly formatted, so I cannot run it without making modifications. And when I have made modifications to the code, I might run different code than you do, yielding different results. So next time please copy-paste your exact (working, tested) code without modifications.
The format string in your code uses lowercase hexadecimal format, and your first example output uses uppercase. So that output cannot be from this code.
I don't have access to your file "MyPlugin.tgz", but when I test your code with another .tgz file (after fixing the IndentationErrors), my output is correct. It starts with \x1f\x8b as expected (this is the magic number in the gzip header). I can't explain why your output is different...
Secondly, it seems like you don't fully understand how bytes and string representations work. When you write print(fcback), a string representation of the Python object fcback (in this case a bytes object) is printed. The string representation of a bytes object is not the same as the binary data! When printing a bytes object, each byte that corresponds to a printable ASCII character is replaced by that character, other bytes are escaped (similar to the formatted string that your code generates). Also, it starts with b' and ends with '.
You cannot print binary data to your terminal and then pipe the output to a file. This will result in a different file. The correct way to write the data to a file is using file.write(data) in your Python code.
Here's a fully working example:
def binary_to_text(data):
    """Convert a bytes object to a formatted text string."""
    text = ""
    for byte in data:
        text += "\\x{:02x}".format(byte)
    return text

def text_to_binary(text):
    """Convert a formatted text string to a bytes object."""
    return bytes(int(text[i+2:i+4], 16) for i in range(0, len(text), 4))

def main():
    # Read the binary data from the input file:
    with open('MyPlugin.tgz', 'rb') as input_file:
        input_data = input_file.read()

    # Convert binary to text (based on your original code):
    text = binary_to_text(input_data)
    print(text[0:100])

    # Convert the text back to binary:
    output_data = text_to_binary(text)
    print(output_data[0:100])

    # Write the binary data back to a file:
    with open('MyPlugin-restored.tgz', 'wb') as output_file:
        output_file.write(output_data)

if __name__ == '__main__':
    main()
Note that I only print the first 100 elements to keep the output short. Also notice that the second print-statement prints a much longer text. This is because the first print gets 100 characters (which are printed "as is"), while the second print gets 100 bytes (of which most bytes are escaped, causing the output to be longer).
I have used tweepy to store the text of tweets in a csv file using Python csv.writer(), but I had to encode the text in utf-8 before storing, otherwise tweepy throws a weird error.
Now, the text data is stored like this:
"b'Lorem Ipsum\xc2\xa0Assignment '"
I tried to decode this using this code (there is more data in other columns, text is in 3rd column):
with open('data.csv', 'rt', encoding='utf-8') as f:
    reader = csv.reader(f, delimiter=',')
    for row in reader:
        print(row[3])
But it doesn't decode the text. I cannot use .decode('utf-8'), since the csv reader reads the data as strings (i.e. type(row[3]) is str) and I can't seem to convert it into bytes; the data just gets encoded once more!
How can I decode the text data?
Edit: Here's a sample line from the csv file:
67783591545656656999,3415844,1450443669.0,b'Virginia School District Closes After Backlash Over Arabic Assignment: The Augusta County school district in\xe2\x80\xa6 | #abcde',52,18
Note: If the solution is in the encoding process, please note that I cannot afford to download the entire data again.
The easiest way is as below. Try it out.
import csv
from io import StringIO
byte_content = b"iam byte content"
content = byte_content.decode()
file = StringIO(content)
csv_data = csv.reader(file, delimiter=",")
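csv_data is then a regular reader object, so you can iterate over the parsed rows as usual:

for row in csv_data:
    print(row)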
If your input file really contains strings with Python-syntax b prefixes on them, one way to work around it (even though it's not really a valid format for csv data to contain) would be to use Python's ast.literal_eval() function as #Ry suggested, although I would use it in a slightly different manner, as shown below.
This will provide a safe way to parse strings in the file which are prefixed with a b indicating they are byte-strings. The rest will be passed through unchanged.
Note that this doesn't require reading the entire CSV file into memory.
import ast
import csv

def _parse_bytes(field):
    """Convert a string represented in Python byte-string literal b'' syntax
    into a decoded character string - otherwise return it unchanged.
    """
    result = field
    try:
        result = ast.literal_eval(field)
    finally:
        return result.decode() if isinstance(result, bytes) else result

def my_csv_reader(filename, /, **kwargs):
    with open(filename, 'r', newline='') as file:
        for row in csv.reader(file, **kwargs):
            yield [_parse_bytes(field) for field in row]

reader = my_csv_reader('bytes_data.csv', delimiter=',')
for row in reader:
    print(row)
You can use ast.literal_eval to convert the incorrect fields back to bytes safely:
import ast

def _parse_bytes(bytes_repr):
    result = ast.literal_eval(bytes_repr)
    if not isinstance(result, bytes):
        raise ValueError("Malformed bytes repr")
    return result
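For example, a quick sketch using the sample value from the question (the extra .decode() is needed because this version returns bytes):

field = "b'Lorem Ipsum\\xc2\\xa0Assignment '"
print(_parse_bytes(field).decode('utf-8'))  # -> 'Lorem Ipsum\xa0Assignment ' (\xc2\xa0 decodes to a no-break space)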
I have a text file on my hard disk which is really big. It contains around 8 million JSON objects separated by commas, and I want to remove the last one; however, because the file is so big I cannot do it in regular editors (Notepad++, Sublime, Visual Studio Code, ...). So I decided to use Python, but I have no clue how to erase part of an existing file with it. Any kind of help would be appreciated.
P.S.: My file has this structure:
json1, json2, json3, ...
where each JSON object looks like {"a":"something", "b":"something", "c":"something"}
The easiest way would be to make the file content valid JSON by enclosing it in [ and ], so that it becomes a list of dicts. After removing the last item from the list, you can dump it back into a string and strip its first and last characters, which are the [ and ] that your original text file does not want:
import json

with open('file.txt', 'r') as r, open('newfile.txt', 'w') as w:
    w.write(json.dumps(json.loads('[%s]' % r.read())[:-1])[1:-1])
Since you only want the last JSON object removed from the file, a much more efficient method is to find the first valid JSON object at the end of the file and truncate the file at the position of that object's preceding comma.
This can be accomplished by seeking and reading backwards from the end of the file, one relatively small chunk at a time. Split each chunk on { (since it marks the beginning of a JSON object) and prepend the fragments one at a time to a buffer until the buffer is parsable as a JSON object (this makes the code able to handle nested dict structures). At that point, find the preceding comma in the preceding fragment and prepend it to the buffer, so that finally you can seek the file to where the buffer starts and truncate it:
import json

chunk_size = 1024

with open('file.txt', 'rb+') as f:
    f.seek(-chunk_size, 2)
    buffer = ''
    while True:
        fragments = f.read(chunk_size).decode().split('{')
        f.seek(-chunk_size * 2, 1)
        i = len(fragments)
        for fragment in fragments[:0:-1]:
            i -= 1
            buffer = '{%s%s' % (fragment, buffer)
            try:
                json.loads(buffer)
                break
            except ValueError:
                pass
        else:
            buffer = fragments[0] + buffer
            continue
        break
    next_fragment = fragments[i - 1]
    # if we don't have a comma in the preceding fragment and it is already the first
    # fragment, we need to read backwards a little more
    if i == 1 and ',' not in fragments[0]:
        f.seek(-2, 1)
        next_fragment = f.read(2).decode() + next_fragment
    buffer = next_fragment[next_fragment.rindex(','):] + buffer
    f.seek(-len(buffer.encode()), 2)
    f.truncate()
I have a program that I created with two sections.
The first one copies a text file, with an integer in the middle of the file name, in this format:
file = "Filename" + "str(int)" + ".txt"
The user can create as many copies of the file as they would like.
The second part of the program is what I am having the problem with. There is an integer at the very bottom of the file that is supposed to correspond with the integer in the file name. After the first part is done, I open each file one at a time in "r+" read/write mode, so I can file.seek(1000) to about where the integer is in the file.
Now, in my opinion, the next part should be easy: I should simply have to write str(int) into the file right there. But it wasn't that easy. It worked just fine that way in Linux at home, but at work on Windows it proved difficult. What I ended up having to do after file.seek(1000) is write to the file using Unicode UTF-8. I accomplished this with the code snippet below from the rest of the program; it is documented so that it can be understood what is going on.
Instead of having to write this in Unicode, I would love to be able to write it in good old regular ASCII characters. Eventually this program will be expanded to include a lot more data at the bottom of each file, and having to write the data in Unicode is going to make things extremely difficult. If I just write the data without turning it into Unicode, this is the result: the string is supposed to say #2 =1534, but instead it says #2 =ㄠ㌵433.
If someone can show me what I am doing wrong, that would be great. I would love to just use something like file.write('1534') to write the data to the file instead of having to do it in Unicode UTF-8.
while a1 < d1:
    file = "file" + str(a1) + ".par"
    f = open(file, "r+")
    f.seek(1011)
    data = f.read()  # reads the data from that point in the file into a variable
    numList = list(str(a1))  # "a1" is the integer in the file name. I had to turn the integer into a list to accomplish the next task.
    replaceData = '\x00' + numList[0] + '\x00' + numList[1] + '\x00' + numList[2] + '\x00' + numList[3] + '\x00'  # This line turns the integer into UTF-8 Unicode. I am by no means a Unicode expert.
    currentData = data  # probably didn't need to be done now that I'm looking at this
    data = data.replace(currentData, replaceData)  # replaces the UTF-8 string in the "data" variable with the new UTF-8 string in "replaceData"
    f.seek(1011)  # return to where I need to be in the file to write the data
    f.write(data)  # write the new Unicode data to the file
    f.close()  # close the file
    f.close()  # make sure the file is closed (sometimes it seems that this fails in Windows)
    a1 += 1  # advances the integer, and then return to the top of the loop
This is an example of writing to a file in ASCII. You need to open the file in binary mode, and using the .encode method on strings is a convenient way to get the end result you want.
s = '12345'
ascii = s.encode('ascii')

with open('somefile', 'wb') as f:
    f.write(ascii)
You can obviously also open in rb+ (read and write binary mode) in your case if the file already exists.
with open('somefile', 'rb+') as f:
    existing = f.read()
    f.write(b'ascii without encoding!')
You can also just pass string literals with the b prefix, and they will be encoded with ascii as shown in the second example.
I need to read a text file which contains csv data with headers separating individual blocks of data. The headers always start with the dollar sign $. So my text file looks like:
$Header1
2
1,2,3,4
2,4,5,8
$Header2
2
1,1,0,19,9,8
2,1,0,18,8,7
What I want to do is: if the program reaches $Header2, I want to read all the lines following it until it reaches, say, $Header3 or the end of the file. I think I can use cmp in Julia for this. I tried with a small file that contains the following text:
# file julia.txt
Julia
$Julia
and my code reads:
# test.jl
fname = "julia.txt"

# set some string values
str1 = "Julia";
str2 = "\$Julia";

# print the strings and check the length
println(length(str1), ",", str1);
println(length(str2), ",", str2);

# now read the text file to check if you are able to find the strings
# str1 and str2 above
println("Reading file...");
for ln in eachline(fname)
    println(length(ln), ",", ln);
    if (cmp(str1, ln) == 0)
        println("Julia match")
    end
    if (cmp(str2, ln) == 0)
        println("\$Julia match")
    end
end
what I get as output from the above code is:
5,Julia
6,$Julia
Reading file...
6,Julia
7,$Julia
I don't understand why I get a character length of 6 for the string Julia and 7 for the string $Julia when they are read from the file. I checked the text file with whitespace characters made visible, and there are none. What am I doing wrong?
The issue is that the strings returned by eachline contain a newline character at the end.
You can use chomp to remove it:
julia> first(eachline("julia.txt"))
"Julia\n"
julia> chomp(first(eachline("julia.txt")))
"Julia"
Also, you can simply use == instead of cmp to test whether two strings are equal. Both use a ccall to memcmp but == only does that for strings of equal length and is thus probably faster.
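Applied to the loop from the question, a minimal sketch (assuming each line is stripped of its trailing newline before comparing):

for ln in eachline(fname)
    line = chomp(ln)   # drop the trailing newline
    if line == str1
        println("Julia match")
    end
    if line == str2
        println("\$Julia match")
    end
end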
I have an array of integers which I want to dump into one binary file (a HEX file, to be specific) using a Python script.
I have written this code:
MemDump = Debug.readMemory(ic.IConnectDebug.fRealTime, 0, 0xB0009CC4, 0xCFF, 1)
MemData = MemDump[:3321]

# Create new file in binary mode and open for writing
fp = open("MON.dmp", 'w')
sys.stdout = fp

for byte in MemData:
    print(byte)
Here MemDump contains an array of integer values, and from this array I want to dump the first 3321 bytes into the file.
I am getting the output in the file MON.dmp, but in ASCII format.
And if I create the file in binary mode using
fp = open("MON.dmp", 'wb')
the print(byte) command gives me an error saying:
'str' does not support the buffer interface
Thank you in advance.
You need to convert the integers to bytes before you can write them to a file opened in 'wb' mode, and you should use the file's write() method rather than print() (so the sys.stdout redirection is not needed). This can be done with the bytearray() function, which accepts an iterable of integers in the range 0-255. So in this case you should use:
fp = open("MON.dmp", 'wb')
fp.write(bytearray(MemData))
fp.close()
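As a quick self-contained check of the idea (the values here are made-up stand-ins for the real dump):

MemData = [0x4D, 0x4F, 0x4E, 0x00, 0xFF]  # hypothetical sample data
with open("MON.dmp", "wb") as fp:
    fp.write(bytearray(MemData))  # writes the five raw bytes 4D 4F 4E 00 FF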