I am looking for an answer at a conceptual level, so please refrain from simply providing a link to AWS documentation as an answer.
This is how a canned policy is generated by boto:
@staticmethod
def _canned_policy(resource, expires):
    """
    Creates a canned policy string.
    """
    policy = ('{"Statement":[{"Resource":"%(resource)s",'
              '"Condition":{"DateLessThan":{"AWS:EpochTime":'
              '%(expires)s}}}]}' % locals())
    return policy
And this is how a custom policy is generated by the same library:
@staticmethod
def _custom_policy(resource, expires=None, valid_after=None, ip_address=None):
    """
    Creates a custom policy string based on the supplied parameters.
    """
    condition = {}
    # SEE: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/RestrictingAccessPrivateContent.html#CustomPolicy
    # The 'DateLessThan' property is required.
    if not expires:
        # Defaults to ONE day
        expires = int(time.time()) + 86400
    condition["DateLessThan"] = {"AWS:EpochTime": expires}
    if valid_after:
        condition["DateGreaterThan"] = {"AWS:EpochTime": valid_after}
    if ip_address:
        if '/' not in ip_address:
            ip_address += "/32"
        condition["IpAddress"] = {"AWS:SourceIp": ip_address}
    policy = {"Statement": [{
        "Resource": resource,
        "Condition": condition}]}
    return json.dumps(policy, separators=(",", ":"))
To my mind, a canned policy is essentially a custom policy but with fewer attributes.
If that observation is correct, then why the need for two different kinds of policy?
Yes, a canned policy can convey only a specific subset of the attributes of a custom policy, but the distinction between the two is more significant.
When you use a canned (pre-defined) policy, the contents of the resulting policy document are so deterministic and predictable -- from the elements of the request itself -- that the policy document doesn't even need to be sent to CloudFront along with the request.
Instead, it's generated locally so that you can sign it, and then discarded. The server generates the identical document from the request parameters and validates the signature.
By contrast, with a custom policy, the policy document itself is sent with the request, base-64 encoded, in &Policy= in the URL. This makes the URL longer, but the policy document is now allowed to contain elements that can't be extrapolated from the request by simple examination.
Canned policies, then, are (at least to some extent) more "lightweight" -- shorter URLs mean fewer bytes included in the request, and somewhat less processing needed to use them, but they have less flexibility than custom policies.
Comparison matrix: see "Using signed URLs" (docs.aws.amazon.com).
I am trying to list all Parameters along with all their tags, without listing the values of the parameters.
My initial approach was to call describe_parameters and then loop through the key names and perform list_tags; while doing so I found out that ARNs are needed to perform list_tags, and these are not returned by describe_parameters.
Is there a way to get the parameters along with their tags without actually getting the parameter values?
You can do this with the Resource Groups Tagging API, IF THEY ARE ALREADY TAGGED. Here's a basic example without pagination.
import boto3

profile = "your_profile_name"
region = "us-east-1"

session = boto3.session.Session(profile_name=profile, region_name=region)
client = session.client('resourcegroupstaggingapi')

response = client.get_resources(
    ResourceTypeFilters=[
        'ssm',
    ],
)
print(response)
If you want to discover untagged parameters, this won't work. A better option would be to set up AWS Config rules to highlight these issues without having to search for them yourself.
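For accounts with many parameters, GetResources is paginated; a sketch using the boto3 paginator, plus a small helper (the helper name is mine) that flattens the pages into an ARN-to-tags mapping:

```python
def parameter_tags(pages):
    """Flatten get_resources pages into {resource ARN: {tag key: tag value}}."""
    tags_by_arn = {}
    for page in pages:
        for item in page.get("ResourceTagMappingList", []):
            tags_by_arn[item["ResourceARN"]] = {
                t["Key"]: t["Value"] for t in item.get("Tags", [])
            }
    return tags_by_arn

# With a live client (session/client set up as in the snippet above):
# paginator = client.get_paginator("get_resources")
# pages = paginator.paginate(ResourceTypeFilters=["ssm"])
# print(parameter_tags(pages))
```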
I called the below Azure API from Postman:
https://management.azure.com/subscriptions/{SubscriptionID}/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedstartTime=2019-12-29T00%3a00%3a00%2b00%3a00&reportedEndTime=2019-12-30T00%3a00%3a00%2b00%3a00&$top=1
I got a response with an empty value field along with a nextLink. When I called the API again with the nextLink URL, the response also had an empty value field.
{
    "value": [],
    "nextLink": "somelink"
}
I am able to get a proper response using the same API for a different subscription.
An empty array shows that there are no usage details available for the requested time frame.
Firstly, as suggested by Tony, check whether you have usage for the requested dates.
**Important**
Please note that the dateTime value must be URL encoded in ISO 8601 format, and non-numeric characters must use escape codes (i.e. a colon is escaped to %3a, a plus sign to %2b) so that the value is URI friendly. These refer to the start and end time ranges of your query. This dateTime parameter must also be specified in Coordinated Universal Time (UTC).
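That encoding can be reproduced with the standard library; a small sketch (the timestamp is just an example):

```python
from urllib.parse import quote

# Encode an ISO 8601 UTC timestamp so it is URI friendly:
# ':' becomes %3A and '+' becomes %2B.
encoded = quote("2019-12-29T00:00:00+00:00", safe="")
print(encoded)  # 2019-12-29T00%3A00%3A00%2B00%3A00
```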
Secondly, set {aggregationGranularity} to 'Hourly'. This is an optional parameter with two possible values: Daily and Hourly. As the names suggest, the former returns data at daily granularity and the latter at hourly resolution. Daily is the default.
Sample URI:
**https://management.azure.com/subscriptions/{subscription-Id}/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedStartTime=2014-05-01T00%3a00%3a00%2b00%3a00&reportedEndTime=2015-06-01T00%3a00%3a00%2b00%3a00&aggregationGranularity=Hourly&showDetails=f**
Hope it helps.
Make sure that you have enough permissions for the subscription in question.
Empty result [] can be returned when the permissions are missing. There is no error shown in such case.
The easiest way to ensure the permissions are available is to use Contributor or Owner role.
Additionally, Get-UsageAggregates -Debug (from the Az PowerShell module) can be used to check what the request URL should look like:
Get-UsageAggregates -ReportedStartTime "2022-05-24 10:00" -ReportedEndTime "2022-05-24 15:00" -Debug -AggregationGranularity Hourly
According to this answer, one can retrieve immediate "subdirectories" by querying with a prefix and then reading CommonPrefixes from the result of the Client.list_objects() method.
Unfortunately, Client is a part of so-called "low level" API.
I am using different API:
session = Session(aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key)
s3 = session.resource('s3')
my_bucket = s3.Bucket(bucket_name)
result = my_bucket.objects.filter(Prefix=prefix)
and this method does not return a dictionary.
Is it possible to obtain common prefixes with higher level API in boto3?
As noted in this answer, it seems that the Resource API doesn't handle Delimiter well. It is often annoying, when your entire stack relies on Resource, to be told that you should have instantiated a Client instead...
Fortunately, a Resource object, such as your Bucket above, contains a client as well.
So, instead of the last line in your code sample, do:
paginator = my_bucket.meta.client.get_paginator('list_objects')
for resp in paginator.paginate(Bucket=my_bucket.name, Prefix=prefix, Delimiter='/', ...):
    for x in resp.get('CommonPrefixes', []):
        print(x['Prefix'])
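If you want the prefixes as a list rather than printed, the pages can be flattened with a small helper (the helper name is mine; the stubbed page structure matches what list_objects returns):

```python
def collect_common_prefixes(pages):
    """Gather the 'CommonPrefixes' entries from every list_objects page."""
    prefixes = []
    for page in pages:
        for cp in page.get("CommonPrefixes", []):
            prefixes.append(cp["Prefix"])
    return prefixes

# With a real paginator (hypothetical bucket and prefix names):
# pages = my_bucket.meta.client.get_paginator("list_objects").paginate(
#     Bucket="my-bucket", Prefix="photos/", Delimiter="/")
# subdirs = collect_common_prefixes(pages)
```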
You can also access a client from the session:
session.client('s3').list_objects(Bucket=bucket_name, Prefix= prefix)
Here is what RFC 5789 says:
The PATCH method requests that a set of changes described in the request entity be applied to the resource identified by the Request-URI. The set of changes is represented in a format called a "patch document" identified by a media type. If the Request-URI does not point to an existing resource, the server MAY create a new resource, depending on the patch document type (whether it can logically modify a null resource) and permissions, etc.
The difference between the PUT and PATCH requests is reflected in the way the server processes the enclosed entity to modify the resource identified by the Request-URI. In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.
Let's say I have { "login": "x", "enabled": true }, and I want to disable it.
According to post "Please. Don't Patch Like An Idiot.", the proper PATCH request would be
[{ "op": "replace", "path": "/enabled", "value": false }]
However, let's take this request:
{ "enabled": false }
It also 'contains a set of instructions describing how a resource currently residing on the origin server should be modified'; the only difference is that a JSON property is used instead of a JSON object.
It seems less powerful, but array changes could have some other special syntax if required (e.g. {"a":{"add":[], "remove":[]}}), and the server logic might not be able to handle anything more powerful anyway.
Is it an improper PATCH request as per RFC? And if so, why?
And, on the other hand, would a { "op": "disable" } be a correct PATCH request?
the only difference is that a JSON property is used instead of a JSON object.
It's actually a bit deeper than that. The reference to RFC 6902 is important: the first request has a Content-Type of application/json-patch+json, but the second is application/json.
The important thing is that you use a 'diff media type,' one that's useful for this purpose. You don't have to use JSON Patch (I'm a big fan of json-merge-patch), but you can't just use anything you want. What you're asking about in the second part is basically 'can I make my own media type?' and the answer is 'yes' -- just please document it and register it with the IANA.
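For illustration, JSON Merge Patch (RFC 7396, media type application/merge-patch+json) is exactly the { "enabled": false } style of request. A sketch of its merge algorithm (the function name is mine):

```python
def merge_patch(target, patch):
    """RFC 7396: a JSON object patch merges member-by-member;
    any other value replaces the target wholesale."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null removes the member
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

print(merge_patch({"login": "x", "enabled": True}, {"enabled": False}))
# {'login': 'x', 'enabled': False}
```

Applied to the example resource above, the patch { "enabled": false } yields { "login": "x", "enabled": false } -- the semantics the questioner wanted, under a registered media type.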
I'm trying to provide 'read-only' access for my blobs via a Web Service. The Web Service has a method that takes in the Blob and Container information and then returns back a URL with the Shared Access Signature that the user can use to access the blob. Since these images (blobs) are cached on the Phone, I would like to keep the signatures valid for up to 1 day.
I am using the following code:
var blobClient = GetBlobClient();
var container = blobClient.GetContainerReference(containerName);
if (container != null)
{
    container.CreateIfNotExist();
}

var policy = new SharedAccessPolicy()
{
    SharedAccessStartTime = DateTime.Now,
    Permissions = SharedAccessPermissions.Read,
    SharedAccessExpiryTime = DateTime.Now.AddDays(days)
};

if (permissions.Contains("w"))
{
    policy.Permissions = policy.Permissions | SharedAccessPermissions.Write;
    policy.SharedAccessExpiryTime = DateTime.Now.AddMinutes(10);
}

// The shared access policy provides read access for the requested number
// of days (or read/write access for 10 minutes).
BlobContainerPermissions containerPerms = new BlobContainerPermissions();
// The public access setting explicitly specifies that the container is private,
// so that it can't be accessed anonymously.
containerPerms.PublicAccess = BlobContainerPublicAccessType.Off;
containerPerms.SharedAccessPolicies.Clear();
containerPerms.SharedAccessPolicies.Add("mypolicy", policy);
// Set the permission policy on the container.
container.SetPermissions(containerPerms);

var blob = container.GetBlobReference(blobName);

// Get the shared access signature to share with users.
var blobPolicy = new SharedAccessPolicy();
blobPolicy.SharedAccessExpiryTime = DateTime.Now.AddDays(days);
blobPolicy.Permissions = SharedAccessPermissions.Read;
string sas = blob.GetSharedAccessSignature(blobPolicy, "mypolicy");
return sas;
Every time I try to use this code, I get the following error:
Signature did not match. String to sign used was r
2012-01-03T08:38:52Z
/myContainer/12100/12409/29cae1b6-2955-4a33-ab27-ff99f0bb6470_m.jpg
mypolicy
Can anyone please guide me with this?
I suspect the issue lies in the "signature" component of your URL (the sig parameter).
The URL to access your BLOB needs to be in this form if you're using a 60 minute URL without a policy on it:
http://[storage account name].blob.core.windows.net/[top level container name]/[filename of BLOB]?sr=b&st=2012-01-19T12:21:40Z&se=2012-01-19T13:21:40Z&sp=r&sig=[Base-64 encoded signature]
Or in this form if you're using a policy:
http://[storage account name].blob.core.windows.net/[top level container name]/[filename of BLOB]?sr=b&st=2012-01-19T12:21:40Z&se=2012-01-19T13:21:40Z&si=[name of security policy]&sig=[Base-64 encoded signature]
About the signature (the sig parameter on the URL):
Microsoft's pseudocode showing how they want us to generate the signature:
Signature = Base64(HMAC-SHA256(UTF8(StringToSign)))
How do you make the string-to-sign? See http://msdn.microsoft.com/en-us/library/windowsazure/ee395415.aspx
StringToSign = signedpermissions + "\n"
             + signedstart + "\n"
             + signedexpiry + "\n"
             + canonicalizedresource + "\n"
             + signedidentifier
The linefeeds are crucial -- these are equivalent to the hex character 0xA. Standard Java "\n" linefeeds are fine. Don't leave them out or it won't work.
It's OK for the signedpermissions to be null -- as long as you still include the linefeed after signedpermissions if it's null.
If signedpermissions is populated then it's OK for signedidentifier to be null. You don't need to put a linefeed character after it.
You MUST make sure your string is converted to UTF-8 before you run the HMAC-SHA256 hash over it.
See http://msdn.microsoft.com/en-us/library/windowsazure/hh508996.aspx
The string-to-sign is a unique string constructed from the fields that must be verified in order to authenticate the request. The signature is an HMAC computed over the string-to-sign and key by using the SHA256 algorithm, and then encoded by using Base64 encoding.
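As a sketch of that computation in Python (the account key and string-to-sign here are made-up placeholders, not real values):

```python
import base64
import hashlib
import hmac

def sas_signature(account_key_b64, string_to_sign):
    # Decode the Base64 account key, HMAC-SHA256 the UTF-8 string-to-sign,
    # then Base64-encode the raw digest.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Placeholder inputs, for illustration only; note the literal linefeeds
# between fields, and the empty signedstart field.
string_to_sign = "r\n\n2012-01-03T08:38:52Z\n/myContainer/somefile.jpg\nmypolicy"
print(sas_signature(base64.b64encode(b"not-a-real-key").decode(), string_to_sign))
```

If any field or linefeed differs from what the server reconstructs, the signatures won't match -- which is exactly the "Signature did not match" error above.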
It looks like you are setting the SAS policy completely for 'mypolicy' on the container. Once you do that, they are not open to subsequent modification from query string params. It is a 'fill-in-the-blank' system. The only parts you can specify on query string are the parts not already specified and saved on container policy (i.e. filling in blanks). So, in this case, you have
blobPolicy.SharedAccessExpiryTime = DateTime.Now.AddDays(days);
blobPolicy.Permissions = SharedAccessPermissions.Read;
But those two options were already saved on the policy, so you cannot specify them again (they are added to the resulting query string). If you want to specify them per-request, you should not have them already saved in the initial SetPermissions() call.
You can prove this by commenting out those two lines and your resulting signature should be valid.
Maybe this is because a shared access signature cannot be valid for more than one hour on its own. In order to have a SAS valid for more than one hour, you need to use a container-level policy (which you can also revoke).
Excerpt from the article:
One way that you can manage a Shared Access Signature is to control its lifetime by ensuring that it expires within an hour. If you want to continue to grant a client access to the blob after that time period, you must issue a new signature. This is the behavior of a Shared Access Signature that is not associated with a container-level access policy. A Shared Access Signature not bound to a container-level access policy cannot be revoked. If a start time is specified, the expiration time must be 60 or fewer minutes from the start time, or the signature is invalid and cannot be used. If no start time is specified, the signature is valid only during the 60 minute period before the expiration time. This policy is intended to minimize risk to a storage account in the event that the signature is leaked.

Another way to manage a Shared Access Signature is to associate the signature with a container-level access policy. The container-level access policy is represented by the signedidentifier field on the URL. A container-level access policy provides an additional measure of control over one or more Shared Access Signatures, including the ability to revoke the signature if needed.