Is there a way to associate a web ACL with AWS API Gateway using Python?
I tried to test my code using the CLI command
aws waf-regional associate-web-acl --region us-east-1 --web-acl-id '**************' --resource-arn 'arn:aws:apigateway:us-east-1::/restapis/******/stages/test'
but it is not working. I even tried associating the web ACL with the API gateway from the AWS console, but my web ACLs are not listed in the dropdown, even though I have already created them.
You are calling "waf-regional" (the WAF Classic API) with a WAFv2 parameter.
For WAFv2, you need to call it as:
aws wafv2 associate-web-acl
Check out the documentation for more info: https://docs.aws.amazon.com/cli/latest/reference/wafv2/associate-web-acl.html
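For example, here is a minimal sketch of the full WAFv2 call (the ARNs below are illustrative placeholders for your own web ACL and API Gateway stage); in Python, the equivalent is the boto3 wafv2 client's associate_web_acl method:
# Substitute your own web ACL ARN and API Gateway stage ARN.
aws wafv2 associate-web-acl \
    --region us-east-1 \
    --web-acl-arn 'arn:aws:wafv2:us-east-1:111122223333:regional/webacl/MyWebACL/EXAMPLE-ID' \
    --resource-arn 'arn:aws:apigateway:us-east-1::/restapis/EXAMPLE/stages/test'
Note that WAFv2 takes --web-acl-arn rather than WAF Classic's --web-acl-id.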
Also, the console not showing the association is a known issue: https://forums.aws.amazon.com/thread.jspa?messageID=926212#926212
Is there any way to allow the ports from the CLI?
I have an instance in GCP, and I have installed a service which by default runs on port 8080. I know there is an option to change the firewall rules to allow ports from the GCP dashboard, but I'm wondering if there is any way to allow the required ports from the CLI.
In my case I'm using Git Bash rather than the native GCP Cloud Console.
I have seen the documentation for allowing ports from the command line (GCP firewall rules from the CLI), but it throws an error since I'm using Git Bash.
Here is the error log:
[mygcp#foo~]$ gcloud compute firewall-rules create FooService --allow=tcp:8080 --description="Allow incoming traffic on TCP port 8080" --direction=INGRESS
Creating firewall...failed.
ERROR: (gcloud.compute.firewall-rules.create) Could not fetch resource:
- Request had insufficient authentication scopes.
[mygcp#foo~]$ gcloud compute firewall-rules list
ERROR: (gcloud.compute.firewall-rules.list) Some requests did not succeed:
- Request had insufficient authentication scopes.
Is there any option to allow required ports directly from the Git Bash CLI?
By default, Compute Engine uses the default service account plus a set of default scopes to handle permissions.
The default scopes limit API access even if your default Compute Engine service account has the Editor role (by the way, far too broad a role; never use it!).
To solve your issue, there are 2 solutions:
Use a custom service account on your Compute Engine instance
Add the required scopes to your current Compute Engine instance, keeping the default Compute Engine service account on it
In both cases, you must stop the VM to update that security configuration.
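For example, here is a sketch of the second option using gcloud (the instance name and zone are placeholders; the scope shown grants access to the Compute Engine API):
# The VM must be stopped before its scopes can be changed.
gcloud compute instances stop my-instance --zone us-central1-a
# Keep the default service account, but add the Compute Engine scope.
gcloud compute instances set-service-account my-instance \
    --zone us-central1-a \
    --scopes https://www.googleapis.com/auth/compute
gcloud compute instances start my-instance --zone us-central1-a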
I am currently attempting to create a Redis cache for Azure via the CLI using the typical example:
az redis create --location westus2 --name MyRedisCache --resource-group MyResourceGroup --sku Basic --vm-size c0
However, what I'd love to do is use the --redis-configuration option to tell Redis I do NOT want to deal with security via the "requirepass" property.
No matter how I try to add this property, I'm given an error.
Has anyone successfully used --redis-configuration to pass in additional requirements for the deployment?
Considering Azure Redis is a fully managed service where Microsoft creates and manages the Redis instance(s) (updates, automatic failover etc.) on behalf of the customer, not all configuration settings (like requirepass) are exposed to users.
Looking at the REST API documentation for creating an Azure Redis instance, the few configuration settings that can be changed are:
rdb-backup-enabled, rdb-storage-connection-string, rdb-backup-frequency, maxmemory-delta, maxmemory-policy, notify-keyspace-events, maxmemory-samples, slowlog-log-slower-than, slowlog-max-len, list-max-ziplist-entries, list-max-ziplist-value, hash-max-ziplist-entries, hash-max-ziplist-value, set-max-intset-entries, zset-max-ziplist-entries, zset-max-ziplist-value
etc.
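For one of the supported settings, here is a sketch of how the flag can be passed (assuming your az version accepts inline JSON for --redis-configuration; newer releases may expect a JSON file instead):
az redis create --location westus2 --name MyRedisCache --resource-group MyResourceGroup \
    --sku Basic --vm-size c0 \
    --redis-configuration '{"maxmemory-policy": "allkeys-lru"}'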
I have written an AWS Lambda Node.js function for creating a stack in CloudFormation, using a CloudFormation template and input parameters given from a UI.
When I run my Lambda function with the respective inputs, a stack is successfully created, and resources such as EC2, RDS, and VPC are also created and working perfectly.
Now I want to make this function public and have users run it with their own AWS credentials.
When a public user runs my function with their AWS credentials, the resources should be created in their account, and the user should not be able to see my template code.
How can I achieve this?
You can leverage the AWS Cloud Development Kit (CDK) for this purpose rather than using CloudFormation directly. Although the CDK may not be directly usable within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file or they can reference a template that is stored in Amazon S3, which is accessible to their credentials (meaning that it is either public, or their credentials have been given permission to access the template in S3).
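As an illustration only, here is a minimal sketch of the user-side call with the AWS CLI (the stack name, bucket, and parameter are placeholders). Because it runs under the caller's configured credentials, the stack and its resources are created in the caller's account.
aws cloudformation create-stack \
    --stack-name my-public-stack \
    --template-url https://my-template-bucket.s3.amazonaws.com/template.yaml \
    --parameters ParameterKey=Environment,ParameterValue=test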
I created a thing in the AWS IoT console and assigned a certificate and policies. I tested aws iot list-things and I got:
{
"things": []
}
But, as I said, there is a thing created. The same occurs with list-certificates. Any help?
When getting inconsistent results between the CLI and what you see in the AWS web-based interfaces, always double-check:
The AWS credential you are using for your CLI
The default region configured for your CLI
Log out of your shell and log in again, double-check the above, and you should see the same results.
Remember that AWS IoT artifacts (things, certificates, policies, ...) are always created in a certain region.
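A quick way to verify both points from the CLI (the region shown is just an example):
# Confirm which identity the CLI is using.
aws sts get-caller-identity
# Confirm the default region.
aws configure get region
# Then list things in the region where they were created.
aws iot list-things --region us-east-1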
$ aws configure set region=CrossRegion-US
$ aws iam get-user
Could not connect to the endpoint URL: https://iam.CrossRegion-US.amazonaws.com/
Is this happening because I have set an incorrect region, or is SoftLayer in the process of improving the API support?
I have also tried the region from the authentication endpoints. Still, I get the same error.
Setting custom endpoints is not possible within the ~/.aws/config or ~/.aws/credentials files; instead, the endpoint must be passed as an argument to each command. In your example above, you were trying to connect to AWS itself because a custom endpoint was not provided to tell the CLI where to connect.
For example, to list the contents of bucket-1:
aws --endpoint-url=https://{endpoint} s3 ls s3://bucket-1/
In the case of IBM Cross-Region Object Storage, the default endpoint would be s3-api.us-geo.objectstorage.softlayer.net. (In this case, the region would be us-standard, although it is not necessary to declare this explicitly, as it is the only region currently offered.)
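Putting the two together, listing the contents of bucket-1 against the Cross-Region endpoint would look like:
aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net s3 ls s3://bucket-1/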
For more information, the documentation has information on both using the AWS CLI and connecting to endpoints.
All that said, user information is not accessible through this implementation of the S3 API. Some user information can be accessed using the SoftLayer API, but generally speaking, user information isn't directly used by the object storage system in this release, as permissions are issued at the storage-account level.