How to get user information using awscli for SoftLayer?

$ aws configure set region CrossRegion-US
$ aws iam get-user
Could not connect to the endpoint URL: https://iam.CrossRegion-US.amazonaws.com/
Is this happening because I have set an incorrect region, or is SoftLayer still in the process of improving its API support?
I have also tried the region from the authentication endpoints and still get the same error.

Setting custom endpoints is not possible within the ~/.aws/config or ~/.aws/credentials files; instead, the endpoint must be passed as an argument to each command. In your example, the CLI tried to connect to AWS itself because no custom endpoint was provided to tell it where to connect.
For example, to list the contents of bucket-1:
aws --endpoint-url=https://{endpoint} s3 ls s3://bucket-1/
In the case of IBM Cross-Region object storage, the default endpoint would be s3-api.us-geo.objectstorage.softlayer.net. (In this case the region would be us-standard, although it is not necessary to declare it explicitly, as it is the only region currently offered.)
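Putting those together, a listing against IBM Cross-Region storage would look like this (bucket-1 being a placeholder bucket name):
aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net s3 ls s3://bucket-1/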
For more information, the documentation has information on both using the AWS CLI and connecting to endpoints.
All that said, user information is not accessible through this implementation of the S3 API. Some user information can be accessed using the SoftLayer API, but generally speaking user information isn't directly used by the object storage system in this release, as permissions are issued at the storage-account level.

Related

Integrate AWS API Gateway and Web ACL

Is there a way to associate a web ACL with AWS API Gateway using Python?
I tried to test my code using the CLI command:
aws waf-regional associate-web-acl --region us-east-1 --web-acl-id '**************' --resource-arn 'arn:aws:apigateway:us-east-1::/restapis/******/stages/test'
but it is not working. I even tried associating the web ACL with the API Gateway stage from the AWS console, but my web ACLs are not listed in the dropdown, even though I have already created them.
You are calling waf-regional (aka the WAF Classic API) with WAFv2 parameters.
For WAFv2, you need to call it as:
aws wafv2 associate-web-acl
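Note that WAFv2 identifies web ACLs by ARN rather than by ID, so a full invocation would look roughly like this (both ARNs are placeholders):
aws wafv2 associate-web-acl --web-acl-arn 'arn:aws:wafv2:us-east-1:<account-id>:regional/webacl/<acl-name>/<acl-id>' --resource-arn 'arn:aws:apigateway:us-east-1::/restapis/<api-id>/stages/test' --region us-east-1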
Check out the documentation for more info: https://docs.aws.amazon.com/cli/latest/reference/wafv2/associate-web-acl.html
Also, the console not showing the association is a known issue: https://forums.aws.amazon.com/thread.jspa?messageID=926212#926212

Is it possible to create a stack in my AWS account with resources (EC2, VPC, RDS) created in a client's AWS account?

I have written an AWS Lambda Node.js function that creates a stack in CloudFormation, using a CloudFormation template and input parameters given through a UI.
When I run my Lambda function with the expected inputs, a stack is created successfully, and its resources (EC2, RDS, VPC, etc.) are created and work perfectly.
Now I want to make this function public, so that a user can run it with their own AWS credentials and have the resources created in their account, without the user being able to see my template code.
How can I achieve this?
You can leverage the AWS Cloud Development Kit (CDK) for this purpose rather than using CloudFormation directly. Although the CDK may not be usable directly within Lambda, a workaround is mentioned here.
AWS CloudFormation will create resources in the AWS Account that is associated with the credentials used to create the stack.
The person who creates the stack will need to provide (upload) a template file or they can reference a template that is stored in Amazon S3, which is accessible to their credentials (meaning that it is either public, or their credentials have been given permission to access the template in S3).
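As a minimal sketch of that flow (assuming the AWS SDK for JavaScript v2; the credential values, bucket URL, stack name, and parameter names below are all hypothetical), the stack is created in whichever account owns the credentials handed to the CloudFormation client:
// Minimal sketch, AWS SDK for JavaScript v2.
var AWS = require('aws-sdk');

// Hypothetical: credentials supplied by the end user, not the function owner.
var cloudformation = new AWS.CloudFormation({
  region: 'us-east-1',
  accessKeyId: 'END_USER_ACCESS_KEY_ID',
  secretAccessKey: 'END_USER_SECRET_ACCESS_KEY'
});

cloudformation.createStack({
  StackName: 'client-stack',
  // Template referenced from S3; as noted above, these credentials
  // must be allowed to read this object.
  TemplateURL: 'https://s3.amazonaws.com/my-template-bucket/template.json',
  Parameters: [
    { ParameterKey: 'InstanceType', ParameterValue: 't2.micro' }
  ]
}, function (err, data) {
  if (err) console.error(err);
  else console.log('StackId:', data.StackId);
});
One possible middle ground for the "user must not see the template" requirement is to have your function generate a short-lived pre-signed URL for the template object and pass that as the TemplateURL.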

DC/OS private registry with authentication fails

I have a running DC/OS cluster on Azure, and I'm trying to configure it to use private registry credentials.
I'm running an Azure private registry with the admin account; I can log in and use the images.
I followed the guide provided by DC/OS, but it recommends saving the credentials on the nodes themselves. I want to use Azure File Storage instead.
I saved the config.json file used to authenticate against the login server to a blob and provided its URI in the deployment configuration.
config.json:
{
  "auths": {
    "stageon.azurecr.io": {
      "auth": "..."
    }
  }
}
Now the deployment just keeps running without any output, so I assume it's hanging on pulling the image.
I am providing the direct URL to the file, and when I access it through a web browser it returns the JSON.
Has anyone done something similar before? I found this thread for Amazon, but I can't seem to get it working.
I've used a customization to acs-engine a few times to push registry credentials to the agent nodes.
This approach makes sure that the credentials will be present even when you add nodes later on.
The code is here: https://github.com/xtophs/acs-engine-1/tree/xtoph-registry. Example cluster API model is at: https://github.com/xtophs/acs-engine-1/blob/xtoph-registry/examples/privateregistry/dcos1.8.4.json
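As an alternative that avoids copying credentials onto each node, the DC/OS guide the asker mentions also describes packaging .docker/config.json into a docker.tar.gz archive that Marathon fetches per task via uris. A sketch of such an app definition, where the image name is taken from the question and the storage URL is a placeholder that must be reachable from the agent nodes:
{
  "id": "/my-private-app",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "stageon.azurecr.io/myimage:latest" }
  },
  "uris": [
    "https://<storage-account>.file.core.windows.net/<share>/docker.tar.gz"
  ]
}
Mesos fetches and extracts the archive into the task sandbox before pulling the image, so the credentials are picked up without any node customization.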

How can I check if the AWS SDK was provided with credentials?

There are many ways to provide the AWS SDK with credentials to perform operations.
I want to make sure any of the methods were successful in setting up the interface before I try my operation on our continuous deployment system.
How can I check if the AWS SDK was able to find credentials?
You can access them via the config.credentials property on the main client. All AWS service libraries included in the SDK have a config property.
Class: AWS.Config
The main configuration class used by all service objects to set the region, credentials, and other options for requests.
By default, credentials and region settings are left unconfigured. This should be configured by the application before using any AWS service APIs.
// Using S3
var s3 = new AWS.S3();
console.log(s3.config.credentials);
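If you need an explicit check rather than inspecting a property that may still be unresolved, the SDK can be asked to run its credential provider chain up front; a small sketch using AWS.config.getCredentials() from the AWS SDK for JavaScript v2:
var AWS = require('aws-sdk');

// Resolves the credential provider chain (environment variables, shared
// credentials file, instance profile, ...) and reports the outcome.
AWS.config.getCredentials(function (err) {
  if (err) {
    // No provider could supply credentials.
    console.error('Credentials not loaded:', err.message);
    process.exit(1);
  } else {
    console.log('Loaded credentials for key:', AWS.config.credentials.accessKeyId);
  }
});
This works well as a continuous-deployment preflight step, since the process exits non-zero when no credentials are found.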

How to use Cyberduck CLI with a custom endpoint URL

I am trying to use Cyberduck CLI to connect to an S3-compatible Ceph API by UKFast (https://www.ukfast.co.uk/cloud-storage.html). It works the same way as Amazon S3 but obviously uses a different URL/server. The connection uses an access key and secret key, the same as S3. Cyberduck CLI protocols are listed here: https://trac.cyberduck.io/wiki/help/en/howto/cli
I have tried the command below in the Windows command prompt. The problem is that Cyberduck automatically adds the Amazon AWS URL. So how do I use all the S3 options with a custom endpoint?
C:\> duck --list s3://<Host>/ -i <AccessKey> -p <SecretKey>
The s3:// scheme is reserved for AWS in Cyberduck CLI. If you want to connect to a third-party service compatible with the S3 protocol, you will need to create a custom connection profile. A connection profile is an XML property list (.cyberduckprofile) file that you install, which provides another connection scheme. An example of such a profile is the Rackspace profile shipped within the application bundle in Profiles/Rackspace US.cyberduckprofile, which adds the rackspace:// scheme to connect to the OpenStack Swift-compatible Rackspace Cloud. You can download one of the other S3 profiles available and use it as a template. Make sure to change at least the Vendor key to the protocol scheme you want to use, such as ukfast, and put in the UKFast service endpoint as the value for the Default Hostname key (which corresponds to s3.amazonaws.com; I cannot find any documentation for the S3 endpoint of UKFast).
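For illustration, a stripped-down profile might look like the following. This is a sketch only: the keys are the ones used by the stock S3 profiles, and the Default Hostname value is a placeholder because, as noted, the UKFast S3 endpoint is not documented:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Protocol</key>
    <string>s3</string>
    <key>Vendor</key>
    <string>ukfast</string>
    <key>Description</key>
    <string>UKFast Cloud Storage</string>
    <key>Default Hostname</key>
    <!-- Placeholder: replace with the real UKFast S3 endpoint -->
    <string>UKFAST-S3-ENDPOINT</string>
</dict>
</plist>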
When done, verify the new protocol is listed in duck --help. You can then use the command
duck --list ukfast://bucket/ --username <AccessKey> --password <Secret Key>
to list files in a bucket.
You might also want to request UKFast to provide such a profile file for you and other users to make setup simpler. The same connection profile can also be used with Cyberduck.
