I've created the below sample code in ActionScript 3 to deflate-compress and then uncompress a string:
var byteArray = new ByteArray();
byteArray.position = 0;
byteArray.writeUTF("stackoverflow");
byteArray.compress("deflate");
trace("Compressed: " + byteArray.toString());
byteArray.uncompress("deflate");
trace("Uncompressed: " + byteArray.toString());
It seems that ActionScript 3's deflate output has no zlib header, i.e. it is raw RFC 1951 data. At this time, I am unable to replicate the code snippet in Python 3. I have tried using the Py3AMF library, but I did not find a method for deflate decompression.
Thanks!
Solved it! The solution is to pass
-zlib.MAX_WBITS
as the wbits argument to zlib.decompress. Here's a code snippet in Python 3:
import zlib

f = open("decode.txt", "rb")
data = f.readline()
print(data)
# Negative wbits tells zlib to expect a raw deflate stream with no header.
print(zlib.decompress(data, -zlib.MAX_WBITS).decode("utf-8"))
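For reference, here is a minimal round-trip sketch in plain Python (my own toy data, not the original ActionScript output) showing that a stream compressed with a negative wbits value is raw deflate and decompresses the same way:

import zlib

# A compressobj with negative wbits produces a raw deflate stream (no zlib header),
# which is what ActionScript's "deflate" mode emits.
co = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
raw = co.compress(b"stackoverflow") + co.flush()

# Decompress it with the same negative wbits value.
print(zlib.decompress(raw, -zlib.MAX_WBITS).decode("utf-8"))  # stackoverflow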
I am having some issues with some code. I have set about a project for creating Bitcoin wallets in an attempt to turn a hobby into a learning experience, whereby I can understand both Python and the Bitcoin protocol in more detail. I have posted here rather than on the Bitcoin site as the question relates to Python programming.
Below is some code I have written to turn a private key into a WIF key. I have written it out for clarity rather than in the most optimal way, so that I can see all the steps clearly and work on issues. This code was previously a series of lines which I have now refactored into a class with functions.
I am following the example from this page: https://en.bitcoin.it/wiki/Wallet_import_format
Here is my current code:
import hashlib
import codecs
class wif():
    def private_to_wif(private_key):
        extended_key = wif.create_extended(private_key)
        address = wif.create_wif_address(extended_key)
        return address

    def create_extended(private_key):
        private_key1 = bytes.fromhex(private_key)
        private_key2 = codecs.encode(private_key1, 'hex')
        mainnet = b'80'
        #testnet = b'ef'
        #compressed = b'01'
        extended_key = mainnet + private_key2
        return extended_key

    def create_wif_address(extended_key):
        first_hash = hashlib.sha256(extended_key)
        first_digest = first_hash.digest()
        second_hash = hashlib.sha256(first_digest)
        second_digest = second_hash.digest()
        second_digest_hex = codecs.encode(second_digest, 'hex')
        checksum = second_digest_hex[:8]
        extended_key_chksm = (extended_key + checksum).decode('utf-8')
        wif_address = wif.base58(extended_key_chksm)
        return wif_address

    def base58(extended_key_chksm):
        alphabet = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
        b58_string = ''
        leading_zeros = len(extended_key_chksm) - len(extended_key_chksm.lstrip('0'))
        address_int = int(extended_key_chksm, 16)
        while address_int > 0:
            digit = address_int % 58
            digit_char = alphabet[digit]
            b58_string = digit_char + b58_string
            address_int //= 58
        ones = leading_zeros // 2
        for one in range(ones):
            b58_string = '1' + b58_string
        return b58_string
I then use a few lines of code to get this working, using the example private key from the above guide, as follows:
key = '0C28FCA386C7A227600B2FE50B7CAE11EC86D3BF1FBE471BE89827E19D72AA1D'
address = wif.private_to_wif(key)
print(address)
I should be getting the output: 5HueCGU8rMjxEXxiPuD5BDku4MkFqeZyd4dZ1jvhTVqvbTLvyTJ
Instead I’m getting:
5HueCGU8rMjxEXxiPuD5BDku4MkFqeZyd4dZ1jvhTVqvbWs6eYX
It’s only the last 6 characters that differ!
Any suggestions and help would be greatly appreciated.
Thank you in advance.
Connor
I have managed to find the solution by adding a missing decoding step.
I am posting it for all those who run into a similar issue, so they can see the steps that brought resolution.
def create_wif_address(extended_key):
    # New step: decode the hex byte string back to raw bytes before hashing.
    extended_key_dec = codecs.decode(extended_key, 'hex')
    first_hash = hashlib.sha256(extended_key_dec)
    first_digest = first_hash.digest()
    second_hash = hashlib.sha256(first_digest)
    second_digest = second_hash.digest()
    second_digest_hex = codecs.encode(second_digest, 'hex')
    checksum = second_digest_hex[:8]
    extended_key_chksm = (extended_key + checksum).decode('utf-8')
    wif_address = wif.base58(extended_key_chksm)
    return wif_address
So above I added a step at the beginning of the function to decode the passed-in variable from a hexadecimal byte string to raw bytes, which is what the hashing functions require, and this produced the result I was hoping for.
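To illustrate the point with a hypothetical short value: hashing the hex text itself and hashing the raw bytes it represents give completely different digests.

import codecs
import hashlib

hex_key = b'80'  # hypothetical tiny hex byte string, standing in for extended_key

# Hashing the hex text itself (the original bug)
print(hashlib.sha256(hex_key).hexdigest())

# Hashing the raw bytes it represents (the fix)
print(hashlib.sha256(codecs.decode(hex_key, 'hex')).hexdigest())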
Connor
I have the following script:
hex_string = "c23dba5fcac1048b3c050266ceb6a0e870670021"
hex_bytes = bytearray.fromhex(hex_string)
print(hex_bytes.reverse())
The problem is that it prints/returns None. I was wondering why, because in this example it works.
Thanks in advance.
I found the issue. The .reverse() method doesn't return anything; it reverses the bytearray in place, so the code must be:
hex_string = "c23dba5fcac1048b3c050266ceb6a0e870670021"
hex_bytes = bytearray.fromhex(hex_string)
hex_bytes.reverse()
print(hex_bytes)
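As a side note (a small sketch, not required for the fix): if you prefer an expression that returns a value instead of mutating in place, slicing with a negative step gives you a reversed copy.

hex_string = "c23dba5fcac1048b3c050266ceb6a0e870670021"
hex_bytes = bytearray.fromhex(hex_string)
print(hex_bytes[::-1])  # reversed copy; the original bytearray is left untouched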
I am trying to load (if it exists), update, and write new input files in flopy. I have tried many things but I can't get it to work. Here is my code:
import os

import numpy as np
from flopy.modflow import ModflowRch

rchFile = os.path.join(modflowModel.model_ws, "hydrogeology.rch")
info = modflowModel.get_nrow_ncol_nlay_nper()
if "RCH" in modflowModel.get_package_list():
    rchObject = ModflowRch.load(rchFile, modflowModel)
    rchData = rchObject.rech
else:
    rchData = dict()
    for ts in range(info[3]):
        rchData[ts] = np.zeros((info[0], info[1]))
for feat in iterator:
    for ts in range(info[3]):
        currValue = "random value"
        rchData[ts][feat["row"]-1, feat["column"]-1] = currValue
rchObject = ModflowRch(modflowModel, nrchop=3, ipakcb=None, rech=rchData, irch=0, extension='rch', unitnumber=None, filenames="hydrogeology.rch")
rchPath = os.path.join(modflowModel.model_ws, 'rch.shp')
rchObject.export(f=rchPath)
# rchObject.write_file()
# modflowModel.add_package(rchObject)
modflowModel.write_input()
modflowModel is a flopy.modflow.Modflow object. The commented-out lines at the end of the code are my attempts to write the updated inputs, but they do not work.
What error(s) are you getting exactly when you say it doesn't work? I routinely modify forcing packages with flopy like this:
import flopy

m = flopy.modflow.Modflow.load()  # load arguments omitted here
for kper in range(m.nper):
    arr = m.rch.rech[kper].array
    arr *= 0.8  # or something
    m.rch.rech[kper] = arr
I found my error. modflowModel.write_input() works very well. The problem occurs while loading the model object with Modflow.load(...). Now I load the model wherever I need it, because if I load it in another .py file there can be model_ws confusion.
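A minimal sketch of that idea, assuming a name file called model.nam (the file name and workspace path are placeholders):

import flopy

# Load the model with the workspace made explicit, so there is no model_ws
# confusion when the script is run from another directory.
modflowModel = flopy.modflow.Modflow.load("model.nam", model_ws="path/to/workspace")

# ... build or update the RCH package as above ...
modflowModel.write_input()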
So I have a specific need to download and extract a cab file, but the size of each cab file is huge (> 200 MB). I want to selectively download files from the cab, as the rest of the data is useless to me.
What I have done so far:
Request 1% of the file from the server. Get the headers and parse them.
Get the list of files and their offsets according to This CAB Link.
Send a GET request to the server with the Range header set to the file offset and offset+size.
I am able to get the response, but it is "unreadable" because it is compressed (LZX:21, according to 7-Zip).
I am unable to decompress it using zlib. It throws invalid header.
Also, I could not quite understand or trace the CFFOLDER or CFDATA structures as shown in the example, because the example there is uncompressed.
import os
import struct

import requests

totalByteArray = b''
eofiles = 0

# Parse the CFHEADER and CFFILE entries from the partially downloaded bytes.
def GetCabMetaData(stream):
    global eofiles
    cabMetaData = {}
    try:
        cabMetaData["CabFormat"] = stream[0:4].decode('ANSI')
        cabMetaData["CabSize"] = struct.unpack("<L", stream[8:12])[0]
        cabMetaData["FilesOffset"] = struct.unpack("<L", stream[16:20])[0]
        cabMetaData["NoOfFolders"] = struct.unpack("<H", stream[26:28])[0]
        cabMetaData["NoOfFiles"] = struct.unpack("<H", stream[28:30])[0]
        # skip 30,32,34,35
        cabMetaData["Files"] = {}
        cabMetaData["Folders"] = {}
        baseOffset = cabMetaData["FilesOffset"]
        internalOffset = 0
        for i in range(0, cabMetaData["NoOfFiles"]):
            fileDetails = {}
            fileDetails["Size"] = struct.unpack("<L", stream[baseOffset+internalOffset:][:4])[0]
            fileDetails["UnpackedStartOffset"] = struct.unpack("<L", stream[baseOffset+internalOffset+4:][:4])[0]
            fileDetails["FolderIndex"] = struct.unpack("<H", stream[baseOffset+internalOffset+8:][:2])[0]
            fileDetails["Date"] = struct.unpack("<H", stream[baseOffset+internalOffset+10:][:2])[0]
            fileDetails["Time"] = struct.unpack("<H", stream[baseOffset+internalOffset+12:][:2])[0]
            fileDetails["Attrib"] = struct.unpack("<H", stream[baseOffset+internalOffset+14:][:2])[0]
            fileName = ''
            for j in range(0, len(stream)):
                if(chr(stream[baseOffset+internalOffset+16+j]) != '\x00'):
                    fileName += chr(stream[baseOffset+internalOffset+16+j])
                else:
                    break
            internalOffset += 16 + j + 1
            cabMetaData["Files"][fileName] = (fileDetails.copy())
        eofiles = baseOffset + internalOffset
    except Exception as e:
        print(e)
        pass
    print(cabMetaData["CabSize"])
    return cabMetaData

def GetFileSize(url):
    resp = requests.head(url)
    return int(resp.headers["Content-Length"])

# Download roughly the first 1% of the cab, which should contain the header
# and the file list.
def GetCABHeader(url):
    global totalByteArray
    size = GetFileSize(url)
    newSize = "bytes=0-" + str(int(0.01*size))
    totalByteArray = b''
    cabHeader = requests.get(url, headers={"Range": newSize}, stream=True)
    for chunk in cabHeader.iter_content(chunk_size=1024):
        totalByteArray += chunk

# Request only the byte range of one file and save it to disk.
def DownloadInfFile(baseUrl, InfFileData, InfFileName):
    global totalByteArray, eofiles
    if(not os.path.exists("infs")):
        os.mkdir("infs")
    baseCabName = baseUrl[baseUrl.rfind("/"):]
    baseCabName = baseCabName.replace(".", "_")
    if(not os.path.exists("infs\\" + baseCabName)):
        os.mkdir("infs\\" + baseCabName)
    fileBytes = b''
    newRange = "bytes=" + str(eofiles + InfFileData["UnpackedStartOffset"]) + "-" + str(eofiles + InfFileData["UnpackedStartOffset"] + InfFileData["Size"])
    data = requests.get(baseUrl, headers={"Range": newRange}, stream=True)
    with open("infs\\" + baseCabName + "\\" + InfFileName, "wb") as f:
        for chunk in data.iter_content(chunk_size=1024):
            fileBytes += chunk
        f.write(fileBytes)
        f.flush()
    print("Saved File " + InfFileName)
    pass

def main(url):
    GetCABHeader(url)
    cabMetaData = GetCabMetaData(totalByteArray)
    for fileName, data in cabMetaData["Files"].items():
        if(fileName.endswith(".txt")):
            DownloadInfFile(url, data, fileName)

main("http://path-to-some-cabinet.cab")
All the file details are correct. I have verified them.
Any guidance will be appreciated. Am I doing it wrong? Another way perhaps?
P.S.: I have already looked into This Post.
First, the data in the CAB is raw deflate, not zlib-wrapped deflate. So you need to ask zlib's inflate() to decode raw deflate with a negative windowBits value on initialization.
Second, the CAB format does not exactly use standard deflate, in that the 32K sliding window dictionary carries over from one block to the next. You'd need to use inflateSetDictionary() to set the dictionary at the start of each block, using the last 32K decompressed from the previous block.
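As a rough Python sketch of both points (my own toy data; this uses zlib's zdict support on a recent Python 3 rather than calling inflateSetDictionary() directly):

import zlib

# Toy example: build two raw-deflate blocks the way a CAB folder chains them,
# i.e. the second block's compressor starts with the first block's output as
# its dictionary.
data1 = b"hello hello hello "
data2 = b"hello again"

c = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS)
block1 = c.compress(data1) + c.flush()
c = zlib.compressobj(9, zlib.DEFLATED, -zlib.MAX_WBITS, zdict=data1[-32768:])
block2 = c.compress(data2) + c.flush()

# Decompress: negative wbits for raw deflate, carrying the previous block's
# last 32K of output forward as the preset dictionary.
d = zlib.decompressobj(-zlib.MAX_WBITS)
out1 = d.decompress(block1)
d = zlib.decompressobj(-zlib.MAX_WBITS, zdict=out1[-32768:])
out2 = d.decompress(block2)
print(out1 + out2)  # b'hello hello hello hello again'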
Here is my problem with urllib in Python 3.
I wrote a piece of code which works well in Python 2.7 using urllib2. It goes to a page on the Internet (which requires authorization) and grabs the info from that page.
The real problem for me is that I can't make my code work in Python 3.4, because there is no urllib2 and urllib works differently; even after a few hours of googling and reading I got nothing. So if somebody can help me solve this, I'd really appreciate it.
Here is my code:
import base64
import urllib2

request = urllib2.Request('http://mysite/admin/index.cgi?index=127')
base64string = base64.encodestring('%s:%s' % ('login', 'password')).replace('\n', '')
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)
resulttext = result.read()
Thanks to you guys, I finally figured out how it works.
Here is my code:
import base64
import urllib.request

request = urllib.request.Request('http://mysite/admin/index.cgi?index=127')
base64string = base64.b64encode(bytes('%s:%s' % ('login', 'password'), 'ascii'))
request.add_header("Authorization", "Basic %s" % base64string.decode('utf-8'))
result = urllib.request.urlopen(request)
resulttext = result.read()
One more difference with urllib: the resulttext variable in my case was of type bytes instead of str, so to do anything with the text inside it I had to decode it:
text = resulttext.decode(encoding='utf-8',errors='ignore')
What about urllib.request? It seems it has everything you need.
import base64
import urllib.request

request = urllib.request.Request('http://mysite/admin/index.cgi?index=127')
# The credentials must be base64-encoded, and the header value must be a str.
base64string = base64.b64encode(bytes('%s:%s' % ('login', 'password'), 'ascii'))
request.add_header("Authorization", "Basic %s" % base64string.decode('ascii'))
result = urllib.request.urlopen(request)
resulttext = result.read()
An alternative using OpenerDirector that installs the auth header for all future urllib requests:
import base64
import urllib.request

login_pass = base64.b64encode(f'{login}:{password}'.encode()).decode()

opener = urllib.request.build_opener()
opener.addheaders = [('Authorization', f'Basic {login_pass}')]
urllib.request.install_opener(opener)

response = urllib.request.urlopen(API_URL)
print(response.read().decode())
A further example using HTTPBasicAuthHandler, although a bit more work is required if you need to send credentials unconditionally:
import urllib.request

password_mgr = urllib.request.HTTPPasswordMgrWithPriorAuth()
password_mgr.add_password(None, API_URL, login, password, is_authenticated=True)

auth_handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(auth_handler)
urllib.request.install_opener(opener)

response = urllib.request.urlopen(API_URL)
print(response.read().decode())