AWS CloudFront Signed URL still valid after expiry time

To generate an AWS CloudFront signed URL, I enabled Restrict Viewer Access --> Yes --> Trusted Signer while creating the distribution.
from datetime import datetime, timedelta, timezone
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from botocore.signers import CloudFrontSigner
import base64

# Base64-encoded PEM private key of the CloudFront trusted signer
CLOUDFRONT_KEY_BASE64 = "*******"

def rsa_signer(message):
    private_key_string = base64.b64decode(CLOUDFRONT_KEY_BASE64)
    private_key_ascii = private_key_string.decode('ascii')
    private_key = serialization.load_pem_private_key(
        private_key_ascii.encode('UTF-8'),
        password=None,
        backend=default_backend()
    )
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

key_id = '*******'
url = 'https://*****.cloudfront.net/hello.pdf'
expire_date = datetime(2022, 4, 24, 11, 33)

cloudfront_signer = CloudFrontSigner(key_id, rsa_signer)
signed_url = cloudfront_signer.generate_presigned_url(url, date_less_than=expire_date)
print(signed_url)
The signed URL is generated:
https://****.cloudfront.net/hello.pdf?Expires=1650799980&Signature=******&Key-Pair-Id=*****
This URL still works after the expiry time 2022-04-24 11:33:00.
But when I generate a URL with an old date (2022-04-23), the URL does not work. I also checked with today's date, 2022-04-24, but an earlier time, 2022-04-24 07:33:00; that URL also keeps working after expiry.
How can I invalidate the signed URL after the expiry time?
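One thing worth checking, offered as a likely cause rather than a confirmed one: Expires=1650799980 in the generated URL is 2022-04-24 11:33:00 UTC, i.e. botocore interpreted the naive datetime as UTC. If 11:33 was meant as local time in a timezone ahead of UTC, the URL really does stay valid until 11:33 UTC and only then starts returning Access Denied. A minimal sketch that makes the expiry unambiguous, reusing cloudfront_signer and url from the snippet above:

from datetime import datetime, timedelta, timezone

# Timezone-aware expiry: no ambiguity about how the Expires value is derived
# (botocore treats naive datetimes as UTC).
expire_date = datetime.now(timezone.utc) + timedelta(minutes=30)

signed_url = cloudfront_signer.generate_presigned_url(url, date_less_than=expire_date)
print(signed_url)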

Related

Coinbase Advanced trade API connectivity using python3 is not working

I tried the code below based on the Coinbase documentation (coinbase doc).
The documentation is given for Python 2, but I modified it for Python 3 because I am trying to connect to the Advanced Trade API in Coinbase (Coinbase Advanced trade doc).
import datetime
import time
import hmac
import hashlib
import http.client

secret_key = '***'  # hidden
api_key = '***'     # hidden

date_time = datetime.datetime.utcnow()
timestamp = int(time.mktime(date_time.timetuple()))  # timestamp should be from UTC time and no decimal allowed
method = "GET"  # method can be GET or POST. Only capital is allowed
request_path = 'api/v3/brokerage/accounts'
body = ''
message = str(timestamp) + method + request_path + body
signature = hmac.new(secret_key.encode('utf-8'), message.encode('utf-8'), hashlib.sha256).hexdigest()

headers = {
    'accept': 'application/json',
    'CB-ACCESS-KEY': api_key,
    'CB-ACCESS-TIMESTAMP': timestamp,
    'CB-ACCESS-SIGN': signature
}

conn = http.client.HTTPSConnection("api.coinbase.com")
payload = ''
conn.request("GET", "/api/v3/brokerage/accounts", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
When executing this code I was expecting account details, but I am getting an unauthorized error with code 401 back from the API.
I was able to connect to the Coinbase Pro API earlier and everything was fine until the merger of Coinbase and Coinbase Pro. Now I am unable to figure out how to connect to the Advanced Trade feature in Coinbase.
OK, so I made two updates to get this running:
1. Insert a / before api in request_path = '/api/v3/brokerage/accounts' :)
2. Change the way you generate your timestamp to timestamp = str(int(time.time())).
I thought updating your timestamp to a string would fix it, but it didn't, so I reverted to the way I was generating it. Maybe someone can tell us why one works and one doesn't (one likely explanation is sketched after the code below). I'll certainly update this post if I figure it out.
Here's the full working code. I kept everything else the same but replaced your comments with mine.
import datetime
import time
import hmac
import hashlib
import http.client

secret_key = '***'
api_key = '***'

# This is the first part where you were getting time
#date_time = datetime.datetime.utcnow()

# And this is the second part where you format it as an integer
#timestamp = int(time.mktime(date_time.timetuple()))

# I cast your timestamp as a string, but it still doesn't work, and I'm having a hard time figuring out why.
#timestamp = str(int(time.mktime(date_time.timetuple())))

# So I reverted to the way that I'm getting the timestamp, and it works
timestamp = str(int(time.time()))

method = "GET"
request_path = '/api/v3/brokerage/accounts'  # Added a forward slash before 'api'
body = ''
message = str(timestamp) + method + request_path + body
signature = hmac.new(secret_key.encode('utf-8'), message.encode('utf-8'), hashlib.sha256).hexdigest()

headers = {
    'accept': 'application/json',
    'CB-ACCESS-KEY': api_key,
    'CB-ACCESS-TIMESTAMP': timestamp,
    'CB-ACCESS-SIGN': signature
}

conn = http.client.HTTPSConnection("api.coinbase.com")
payload = ''
conn.request("GET", "/api/v3/brokerage/accounts", payload, headers)
# You were probably troubleshooting, but the above line is redundant and can be simplified to:
# conn.request("GET", request_path, body, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
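As for why the original timestamp fails, my best guess (not verified against Coinbase's servers): time.mktime() interprets its struct_time argument as local time, so feeding it a UTC timetuple shifts the result by the machine's UTC offset, and the API appears to reject requests whose CB-ACCESS-TIMESTAMP is too far from server time. A small sketch that shows the skew:

import datetime
import time

# time.mktime() assumes a *local* struct_time, so passing a UTC timetuple
# produces an epoch value shifted by the machine's UTC offset.
utc_tuple_ts = int(time.mktime(datetime.datetime.utcnow().timetuple()))
epoch_ts = int(time.time())

print(utc_tuple_ts - epoch_ts)  # ~0 only if local time happens to equal UTC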
I see someone open-sourced a Python library for the Advanced Trade endpoints: https://github.com/KmiQ/coinbase-advanced-python

How do I pass an API Token into Cookie via Python Request?

I'm having issues authenticating to the pkgs.org API. A token was produced, but support mentioned it needs to be passed as a cookie. I've never worked with cookies before.
import requests
import json
import base64
import urllib3
import sys
import re
import os
token=('super-secret')
#s = requests.Session()
head = {'Accept':'application/json'}
r = requests.get('https://api.pkgs.org/v1/distributions', auth=(token), headers=head)
print(r)
print(r.connection)
print(r.cookies)
I tried to use requests.Session() to handle the cookie, but I honestly don't know the syntax to even create a cookie, let alone pass one.
If I read the API documentation correctly, you should set the access_token cookie:
import requests

token = "super-secret"
cookies = {"access_token": token}
headers = {"Accept": "application/json"}
r = requests.get(
    "https://api.pkgs.org/v1/distributions", cookies=cookies, headers=headers
)
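If the token has to accompany several calls, a requests.Session keeps the cookie attached automatically. A small sketch under the same assumption that the cookie is named access_token:

import requests

token = "super-secret"

s = requests.Session()
s.cookies.set("access_token", token)            # sent with every request on this session
s.headers.update({"Accept": "application/json"})

r = s.get("https://api.pkgs.org/v1/distributions")
print(r.status_code)
print(r.json())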

Access Denied Signed URL Python3 using cloud front

I am trying to create signed URLs for my S3 bucket so that only select people have access until the time expires.
I am not able to find the issue in my code. Please help.
import boto
from boto.cloudfront import CloudFrontConnection
from boto.cloudfront.distribution import Distribution
import base64
import json
import rsa
import time

def lambda_handler(event, context):
    url = "https://notYourUrl.com/example.html"
    expires = int(time.time() + 36000)
    pem = """-----BEGIN RSA PRIVATE KEY-----
myKey
-----END RSA PRIVATE KEY-----"""
    key_pair_id = 'myKey'
    policy = {
        "Statement": [
            {
                "Resource": url,
                "Condition": {
                    "DateLessThan": {"AWS:EpochTime": expires},
                }
            }
        ]
    }
    policy = json.dumps(policy)
    private_key = rsa.PrivateKey.load_pkcs1(pem)
    policy = policy.encode("utf-8")
    signed = rsa.sign(policy, private_key, 'SHA-1')
    policy = base64.b64encode(policy)
    policy = policy.decode("utf-8")
    signature = base64.urlsafe_b64encode(signed)
    signature = signature.decode("utf-8")
    policy = policy.replace("+", "-")
    policy = policy.replace("=", "_")
    policy = policy.replace("/", "~")
    signature = signature.replace("+", "-")
    signature = signature.replace("=", "_")
    signature = signature.replace("/", "~")
    print("%s?Expires=%s&Signature=%s&Key-Pair-Id=%s" % (url, expires, signature, key_pair_id))
When I test the function on Lambda I am able to produce and print a URL, but when I access the URL I receive an Access Denied error message in the XML response.
I am not sure what I am doing wrong at this point. To test whether I am able to generate any signed URL at all, I created a Node.js Lambda in which I am successfully able to generate the URL and even access my page.
<Error>
<Code>AccessDenied</Code>
<Message>Access denied</Message>
</Error>
After many failed attempts to make my code work, I decided to go with a different approach and used Node.js to fulfill my needs. The code below works perfectly and I am able to generate signed URLs.
For now I used a hardcoded time value to test my code and will later work on getting that dynamically using datetime.
var AWS = require('aws-sdk');

var keyPairId = 'myKeyPairId';
var privateKey = '-----BEGIN RSA PRIVATE KEY-----' + '\n' +
    '-----END RSA PRIVATE KEY-----';

var signer = new AWS.CloudFront.Signer(keyPairId, privateKey);

exports.handler = function(event, context) {
    var options = {url: "https://notYourUrl.com/example.html", expires: 1621987200, 'Content-Type': 'text/html'};
    //console.log(options);
    const cookies = signer.getSignedCookie(options);
    const url = signer.getSignedUrl(options);
    console.log("Printing URL " + url);
    console.log(cookies);
};
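For anyone who wants to stay in Python, the same kind of URL can be produced with botocore's CloudFrontSigner, the approach used in the first question above. A sketch, with a hypothetical key path and key pair ID:

from datetime import datetime, timedelta, timezone

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "myKeyPairId"              # hypothetical CloudFront key pair ID
PEM_PATH = "/path/to/private_key.pem"    # hypothetical path to the signer's private key

def rsa_signer(message):
    # CloudFront signed URLs use an RSA SHA-1 signature with PKCS#1 v1.5 padding
    with open(PEM_PATH, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None, backend=default_backend())
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
expire_date = datetime.now(timezone.utc) + timedelta(seconds=36000)  # same 36000 s window as above

signed_url = signer.generate_presigned_url(
    "https://notYourUrl.com/example.html", date_less_than=expire_date
)
print(signed_url)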

Hyperledger Sawtooth Supply Chain Send Transaction

I'm trying to add the first block to the supply chain, but the validator doesn't accept the batch.
[2019-11-05 17:40:31.594 DEBUG publisher] Batch c14df4b31bd7d52cf033a0eb1436b98be3d9ff6b06affbd73ae55f11a7cc0cc33aa6f160f7712d628e2ac644b4b1804e3156bc8190694cb9c468f4ec70b9eb05 invalid, not added to block.
[2019-11-05 17:40:31.595 DEBUG publisher] Abandoning block (1, S:, P:eb6af88e): no batches added
I installed Sawtooth and the Supply Chain on Ubuntu 16.04. I'm using the following Python code to send the transaction. I'm not sure about the payload; I got it from the sample data of the fish client example. Is it perhaps necessary to change the keys?
# Creating a Private Key and Signer
from sawtooth_signing import create_context
from sawtooth_signing import CryptoFactory
from hashlib import sha512

context = create_context('secp256k1')
private_key = context.new_random_private_key()
signer = CryptoFactory(context).new_signer(private_key)
public_key = signer.get_public_key()

# Encoding Your Payload
import cbor

payload = {
    "username": "ahab",
    "password": "ahab",
    "publicKey": public_key.as_hex(),
    "name": "Ahab",
    "email": "ahab@pequod.net",
    "privateKey": "063f9ca21d4ef4955f3e120374f7c22272f42106c466a91d01779efba22c2cb6",
    "encryptedKey": "{\"iv\":\"sKGty1gSvZGmCwzkGy0vvg==\",\"v\":1,\"iter\":10000,\"ks\":128,\"ts\":64,\"mode\":\"ccm\",\"adata\":\"\",\"cipher\":\"aes\",\"salt\":\"lYT7rTJpTV0=\",\"ct\":\"taU8UNB5oJrquzEiXBV+rTTnEq9XmaO9BKeLQQWWuyXJdZ6wR9G+FYrKPkYnXc30iS/9amSG272C8qqnPdM4OE0dvjIdWgSd\"}",
    "hashedPassword": "KNYr+guWkg77DbaWofgK72LrNdaQzzJGIkk2rEHqP9Y="
}
payload_bytes = cbor.dumps(payload)

# Create the Transaction Header
from hashlib import sha512
from sawtooth_sdk.protobuf.transaction_pb2 import TransactionHeader

txn_header_bytes = TransactionHeader(
    family_name='supply_chain',
    family_version='1.1',
    #inputs=[],
    #outputs=[],
    signer_public_key=signer.get_public_key().as_hex(),
    # In this example, we're signing the batch with the same private key,
    # but the batch can be signed by another party, in which case, the
    # public key will need to be associated with that key.
    batcher_public_key=signer.get_public_key().as_hex(),
    # In this example, there are no dependencies. This list should include
    # any previous transaction header signatures that must be applied for
    # this transaction to successfully commit.
    # For example,
    # dependencies=['540a6803971d1880ec73a96cb97815a95d374cbad5d865925e5aa0432fcf1931539afe10310c122c5eaae15df61236079abbf4f258889359c4d175516934484a'],
    dependencies=[],
    payload_sha512=sha512(payload_bytes).hexdigest()
).SerializeToString()

# Create the Transaction
from sawtooth_sdk.protobuf.transaction_pb2 import Transaction

signature = signer.sign(txn_header_bytes)
txn = Transaction(
    header=txn_header_bytes,
    header_signature=signature,
    payload=payload_bytes
)

# Create the BatchHeader
from sawtooth_sdk.protobuf.batch_pb2 import BatchHeader

txns = [txn]
batch_header_bytes = BatchHeader(
    signer_public_key=signer.get_public_key().as_hex(),
    transaction_ids=[txn.header_signature for txn in txns],
).SerializeToString()

# Create the Batch
from sawtooth_sdk.protobuf.batch_pb2 import Batch

signature = signer.sign(batch_header_bytes)
batch = Batch(
    header=batch_header_bytes,
    header_signature=signature,
    transactions=txns
)

# Encode the Batch(es) in a BatchList
from sawtooth_sdk.protobuf.batch_pb2 import BatchList

batch_list_bytes = BatchList(batches=[batch]).SerializeToString()

# Submitting Batches to the Validator
import urllib.request
from urllib.error import HTTPError

try:
    request = urllib.request.Request(
        'http://localhost:8008/batches',
        batch_list_bytes,
        method='POST',
        headers={'Content-Type': 'application/octet-stream'})
    response = urllib.request.urlopen(request)
except HTTPError as e:
    response = e.file
Your transaction is missing a nonce; that is required. I do not know what else is missing, if anything.
If the batch is invalid, it is because it is not well-formed (missing parameters or incorrectly set parameters).
You can enable verbose debugging by starting the client, REST API, validator, and your transaction processor with -vvv added to the command line for each of them. If that is too verbose, just pass -vv.
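As a minimal sketch of adding that nonce to the question's header (only the nonce line is new; signer and payload_bytes are the ones built above, and secrets is just one way to generate a random value):

import secrets
from hashlib import sha512
from sawtooth_sdk.protobuf.transaction_pb2 import TransactionHeader

txn_header_bytes = TransactionHeader(
    family_name='supply_chain',
    family_version='1.1',
    signer_public_key=signer.get_public_key().as_hex(),
    batcher_public_key=signer.get_public_key().as_hex(),
    dependencies=[],
    nonce=secrets.token_hex(16),  # random nonce so otherwise-identical transactions get unique headers
    payload_sha512=sha512(payload_bytes).hexdigest()
).SerializeToString()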
I found a solution for my case. I used the .proto files from the Supply Chain repo and compiled them to Python files. Then I took addressing.py and supply_chain_message_factory.py from the Supply Chain repo, changed the imports in supply_chain_message_factory.py, and now I can create an agent with main.py.
from sc_message_factory import SupplyChainMessageFactory
import urllib.request
from urllib.error import HTTPError

# Create new agent
new_message = SupplyChainMessageFactory()
transaction = new_message.create_agent('test')
batch = new_message.create_batch(transaction)

try:
    request = urllib.request.Request(
        'http://localhost:8008/batches',
        batch,
        method='POST',
        headers={'Content-Type': 'application/octet-stream'})
    response = urllib.request.urlopen(request)
except HTTPError as e:
    response = e.file

How to import firefox cookies to python requests

I'm logged in to a page in Firefox and I want to take the cookie and browse the web page with python-requests. The problem is that after importing the cookie into the requests session nothing happens (as if there were no cookie at all). The structure of the cookie made by requests also differs from the one from Firefox.
Is it possible to load a Firefox cookie and use it in a requests session?
My code so far:
import sys
import sqlite3
import http.cookiejar as cookielib
import requests
from requests.utils import dict_from_cookiejar

def get_cookies(final_cookie, firefox_cookies):
    con = sqlite3.connect(firefox_cookies)
    cur = con.cursor()
    cur.execute("SELECT host, path, isSecure, expiry, name, value FROM moz_cookies")
    for item in cur.fetchall():
        if item[0].find("mydomain.com") == -1:
            continue
        c = cookielib.Cookie(0, item[4], item[5],
                             None, False,
                             item[0], item[0].startswith('.'), item[0].startswith('.'),
                             item[1], False,
                             item[2],
                             item[3], item[3] == "",
                             None, None, {})
        final_cookie.set_cookie(c)

cookie = cookielib.CookieJar()
input_file = "~/.mozilla/firefox/myprofile.default/cookies.sqlite"
get_cookies(cookie, input_file)

# print cookies taken from Firefox
cookies = dict_from_cookiejar(cookie)
for key, value in cookies.items():
    print(key, value)

s = requests.Session()
payload = {
    "lang": "en",
    'destination': '/auth',
    'credential_0': sys.argv[1],
    'credential_1': sys.argv[2],
    'credential_2': '86400',
}
r = s.get("mydomain.com/login", data=payload)

# print cookies from requests
cookies = dict_from_cookiejar(s.cookies)
for key, value in cookies.items():
    print(key, value)
Structure of cookies from firefox is:
_gid GA1.3.2145214.241324
_ga GA1.3.125598754.422212
_gat_is4u 1
Structure of cookies from requests is:
UISTestAuth tesskMpA8JJ23V43a%2FoFtdesrtsszpw
In the end, when I try to assign the cookies from Firefox to session.cookies, requests behaves as if I had imported nothing.
It looks like there are two types of cookies in Firefox - request and response. They can be seen under Page Inspector > Network > login (POST) > Cookies:
Response cookies:
UISAuth
httpOnly true
path /
secure true
value tesskMpA8JJ23V43a%2FoFtdesrtsszpw
Request cookies:
_ga GA1.3.125598754.422212
_gat_is4u 1
_gid GA1.3.2145214.241324
The request cookies are stored in the cookies.sqlite file in
~/.mozilla/firefox/*.default/cookies.sqlite
and can be loaded into a Python object in several ways, for example:
import sqlite3
import http.cookiejar as cookielib

def get_cookies(cj, ff_cookies):
    con = sqlite3.connect(ff_cookies)
    cur = con.cursor()
    cur.execute("SELECT host, path, isSecure, expiry, name, value FROM moz_cookies")
    for item in cur.fetchall():
        c = cookielib.Cookie(0, item[4], item[5],
                             None, False,
                             item[0], item[0].startswith('.'), item[0].startswith('.'),
                             item[1], False,
                             item[2],
                             item[3], item[3] == "",
                             None, None, {})
        print(c)
        cj.set_cookie(c)
where cj is a CookieJar object and ff_cookies is the path to the Firefox cookies.sqlite. Taken from this page.
The whole code to load the cookies and import them into python-requests using a session would look like:
import http.cookiejar as cookielib
import requests
import sys

cj = cookielib.CookieJar()
ff_cookies = sys.argv[1]  # pass the path to cookies.sqlite as an argument to the script
get_cookies(cj, ff_cookies)

s = requests.Session()
s.cookies = cj
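From here, any request made through the session sends the imported Firefox cookies automatically; a quick check against a hypothetical URL:

r = s.get("https://mydomain.com/auth")  # hypothetical URL; cookies from cj are sent along
print(r.status_code)
print(r.cookies)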
The response cookie is basically a session ID, which usually expires at the end of the session (or after some timeout), so it is not stored.
There is a package on PyPI, browser-cookie3, which does exactly this.
import browser_cookie3
import requests
cookiejar = browser_cookie3.firefox(domain_name='signed-in-website.tld')
resp = requests.get('https://signed-in-website.tld/path/', cookies=cookiejar)
print(resp.content)
browser_cookie3.firefox() retrieves Firefox cookies as a cookie jar, with the domain name as an optional argument.
