I am trying to use Cyberduck CLI to connect to an S3-compatible CEPH API by UKFast (https://www.ukfast.co.uk/cloud-storage.html). It has the same function as Amazon S3 but obviously uses a different URL/server. The connection uses a secret key and passphrase, the same as S3. Cyberduck CLI protocols are listed here: https://trac.cyberduck.io/wiki/help/en/howto/cli
I have tried the command below in the Windows command prompt. The problem is that Cyberduck automatically adds the Amazon AWS URL. So how do I use all the S3 options with a custom endpoint?
C:\> duck --list s3://<Host>/ -i <AccessKey> -p <SecretKey>
The s3:// scheme is reserved for AWS in Cyberduck CLI. If you want to connect to a third-party service compatible with the S3 protocol, you will need to create a custom connection profile. A connection profile is an XML property list .cyberduckprofile file that you install, providing another connection scheme. An example of such a profile is the Rackspace profile shipped within the application bundle in Profiles/Rackspace US.cyberduckprofile, which adds the rackspace:// scheme to connect to the OpenStack Swift compatible Rackspace Cloud. You can download one of the other S3 profiles available and use it as a template. Make sure to change at least the Vendor key to the protocol scheme you want to use, such as ukfast, and put in the service endpoint of UKFast as the value for the Default Hostname key (which corresponds to s3.amazonaws.com for AWS; I cannot find any documentation for the S3 endpoint of UKFast).
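As a rough sketch, such a profile might look like the following. The plist keys are the ones used by the bundled S3 profiles; the Default Hostname value is a placeholder you must replace, since the real UKFast endpoint is not documented.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Protocol</key>
    <string>s3</string>
    <key>Vendor</key>
    <string>ukfast</string>
    <key>Description</key>
    <string>UKFast Cloud Storage</string>
    <key>Default Hostname</key>
    <!-- Placeholder: replace with the real UKFast S3 endpoint -->
    <string>REPLACE-WITH-UKFAST-ENDPOINT</string>
</dict>
</plist>

Save it as, for example, UKFast.cyberduckprofile and open it to install.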
When done, verify the new protocol is listed in duck --help. You can then use the command
duck --list ukfast://bucket/ --username <AccessKey> --password <Secret Key>
to list files in a bucket.
You might also want to ask UKFast to provide such a profile file for you and other users, to make setup simpler. The same connection profile can also be used with Cyberduck.
Can anyone help? This one is really driving me crazy... Thank you!
I tried to use the Google Cloud Platform Speech-to-Text API.
Tools: Windows 10, GCP, Python (PyCharm IDE)
I've created a service account as an Owner for my Speech-to-Text project and generated a key in JSON from the GCP console, then I set the environment variables.
Commands I ran in Windows 10 PowerShell and CMD:
$env:GOOGLE_APPLICATION_CREDENTIALS="D:\GCloud speech-to-text\Speech To Text Series-93e03f36bc9d.json"
set GOOGLE_APPLICATION_CREDENTIALS=D:\GCloud speech-to-text\Speech To Text Series-93e03f36bc9d.json
PS: the added environment variables disappear in CMD and PowerShell after I reboot my laptop, but they do show in the env list if added again.
I've enabled the Google Storage API and the Google Speech-to-Text API in the GCP console.
I've tried explicitly passing the credentials in Python; same problem.
I've installed the Google Cloud SDK shell and initialized it by logging in to my account.
PYTHON SPEECH-TO-TEXT CODE (from the GCP demo):
import io
import os

# Imports the Google Cloud client library
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

# Instantiates a client
client = speech.SpeechClient()

# The name of the audio file to transcribe
file_name = os.path.join(
    os.path.dirname(__file__),
    'test_cre.m4a')

# Loads the audio into memory
with io.open(file_name, 'rb') as audio_file:
    content = audio_file.read()
    audio = types.RecognitionAudio(content=content)

config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='en-US')

# Detects speech in the audio file
response = client.recognize(config, audio)

for result in response.results:
    print('Transcript: {}'.format(result.alternatives[0].transcript))
----Expected to receive a "200 OK" and the transcribed text when running the code above (a demo of the short-audio Speech-to-Text API from the GCP documentation)
----But got:
D:\Python\main program\lib\site-packages\google\auth\_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
google.api_core.exceptions.ResourceExhausted: 429 Quota exceeded for quota metric 'speech.googleapis.com/default_requests' and limit 'DefaultRequestsPerMinutePerProject' of service 'speech.googleapis.com' for consumer 'project_number:764086051850'.
ANOTHER WEIRD THING: the error info shows 'project_number:764086051850', which is different from my Speech-to-Text project number on GCP (I do distinguish project number and project ID). The project_number shown in the error info also varies every time the code runs. It seems I was sending the request to the wrong project?
My GOOGLE_APPLICATION_CREDENTIALS system environment variable disappears after I restart my laptop. After adding it again, it appears in the env list, but it isn't stored across reboots.
Appreciate it if someone can help, thank you!
Try to do this:
Run gcloud init -> authenticate with your account and choose your project
Run gcloud auth activate-service-account <service account email> --key-file=<JSON key file>
Run gcloud config list to validate your configuration.
Run your script and see if it's better.
Otherwise, try to do the same thing on a micro VM to validate your code, service account, and environment (and to confirm that the problem only exists on Windows).
As for the Windows issues: I'm on a Chromebook, so I can't test this and help you with it. However, I read up on environment variables on Windows, and persisting them updates the registry. Check that you don't have anything that protects against registry updates (antivirus, ...).
D:\Python\main program\lib\site-packages\google\auth\_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/
warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING)
This error means that your code is not using a service account. Your code is configured to use ADC (Application Default Credentials). Most likely your code is using the Google Cloud SDK credentials configured and stored by the CLI gcloud.
To determine what credentials the Cloud SDK is using, execute this command:
gcloud auth list
The IAM Member ID, displayed as ACCOUNT, with the asterisk is the account used by the CLI and any applications that do not specify credentials.
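For example, the output will look roughly like this (the accounts shown here are made up):

       Credentialed Accounts
ACTIVE  ACCOUNT
*       your-user@example.com
        my-service-account@my-project.iam.gserviceaccount.com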
To learn more about ADC, read this article that I wrote:
Google Cloud Application Default Credentials
google.api_core.exceptions.ResourceExhausted: 429 Quota exceeded for quota metric 'speech.googleapis.com/default_requests' and limit 'DefaultRequestsPerMinutePerProject' of service 'speech.googleapis.com' for consumer 'project_number:764086051850'.
The Cloud SDK has the concept of default values. Execute gcloud config list. This will display various items. Look for project. Most likely this project does not have the Cloud Speech-to-Text API enabled.
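If the default project is not the one you intend to use, you can switch it; REPLACE_WITH_PROJECT_ID below is a placeholder for your actual Project ID:

gcloud config set project REPLACE_WITH_PROJECT_ID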
ANOTHER WEIRD THING: the error info shows 'project_number:764086051850', which is different from my Speech-to-Text project number on GCP (I do distinguish project number and project ID). The project_number shown in the error info also varies every time the code runs. It seems I was sending the request to the wrong project?
To see the list of projects, Project IDs, and Project Numbers that your current credentials can see (access), execute:
gcloud projects list
This command will display the Project Number given a Project ID:
gcloud projects list --filter="REPLACE_WITH_PROJECT_ID" --format="value(PROJECT_NUMBER)"
My GOOGLE_APPLICATION_CREDENTIALS system environment variable disappears after I restart my laptop. After adding it again, it appears in the env list, but it isn't stored across reboots.
When you execute this command in a Command Prompt, it only persists for the life of the Command Prompt: set GOOGLE_APPLICATION_CREDENTIALS=D:\GCloud speech-to-text\Speech To Text Series-93e03f36bc9d.json. When you exit the Command Prompt, reboot, etc., the environment variable is destroyed.
To create persistent environment variables on Windows, edit the System Properties -> Environment Variables. You can launch this command as follows from a Command Prompt:
SystemPropertiesAdvanced.exe
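Alternatively, from a Command Prompt you can persist the variable with the built-in setx command (it writes to the registry; the value only becomes visible in newly opened shells). The path below assumes the C:\Config location suggested later in this answer:

setx GOOGLE_APPLICATION_CREDENTIALS "C:\Config\service-account.json"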
Suggestions to make your life easier:
Do NOT use long path names with spaces for your service account files. Create a directory such as C:\Config and place the file there with no spaces in the file name.
Do NOT use ADC (Application Default Credentials) when developing on your desktop. Specify the actual credentials that you want to use.
Change this line:
client = speech.SpeechClient()
To this:
client = speech.SpeechClient.from_service_account_json('c:/config/service-account.json')
Service accounts have a Project ID inside them. Create the service account in the same project in which you intend to use it (until you understand IAM and service accounts well).
I am trying to set up the AWS CLI locally using an IAM role and without using an access key/secret access key, but I am unable to get information from the metadata URL [http://169.256.169.256/latest/meta-data].
I am running an EC2 instance with Ubuntu Server 16.04 LTS (HVM), SSD Volume Type - ami-f3e5aa9c. I have tried to configure the AWS CLI on that instance. I am not sure what type of role/policy/user is needed to get the AWS CLI configured on my EC2 instance.
Please provide a step-by-step guide to achieve that. I just need direction, so useful links are also appreciated.
To read the Instance Metadata, you don't need to configure the AWS CLI. The problem in your case is that you are using the wrong URL to read the Instance Metadata. The correct URL to use is http://169.254.169.254/. For example, if you want to read the AMI ID of the instance, you can use the following command.
curl http://169.254.169.254/latest/meta-data/ami-id
However, if you would like to configure the AWS CLI without using access/secret keys, follow the steps below.
Create an IAM instance profile and Attach it to the EC2 instance
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create role.
On the Select role type page, choose EC2 and the EC2 use case. Choose Next: Permissions.
On the Attach permissions policy page, select an AWS managed policy that grants your instances access to the resources that they need.
On the Review page, type a name for the role and choose Create role.
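Once the role is attached, you can check from inside the instance that the metadata service is delivering temporary credentials for it. This command lists the role name; appending that name to the URL returns the credentials themselves:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/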
Install the AWS CLI (Ubuntu).
Install pip if it is not installed already.
`sudo apt-get install python-pip`
Install AWS CLI.
`pip install awscli --upgrade --user`
Configure the AWS CLI. Leave AWS Access Key ID and AWS Secret Access Key blank, as we want to use a role.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json
Modify the Region and Output Format values if required.
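To confirm that the CLI is actually picking up the role's temporary credentials, you can run the following; it should print the ARN of the assumed role rather than an error about missing credentials:

aws sts get-caller-identity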
I hope this helps you!
AWS Documentation on how to setup an IAM role for EC2
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
I have a running DC/OS cluster on Azure, and I'm trying to configure it to use private registry credentials.
I'm running an Azure private registry with admin enabled. I can log in and use the images.
I followed the guide provided by DC/OS, but it recommends saving the credentials on the nodes themselves. I want to use Azure File Storage instead.
I saved the config.json file (used to authenticate to the login server) on a blob and provide the URI in the deployment configuration.
config.json:
{
  "auths": {
    "stageon.azurecr.io": {
      "auth": "..."
    }
  }
}
Now the deployment just keeps running without any output, so I assume it's hanging on pulling the image.
I am providing the direct URL to the file, and when I access it through a web browser it returns the JSON.
Has anyone done something similar before? I found this thread for Amazon, but I can't seem to get it working.
I've used a customization to acs-engine a few times to push registry credentials to the agent nodes.
This approach makes sure that the credentials will be present even when you add nodes later on.
The code is here: https://github.com/xtophs/acs-engine-1/tree/xtoph-registry. Example cluster API model is at: https://github.com/xtophs/acs-engine-1/blob/xtoph-registry/examples/privateregistry/dcos1.8.4.json
$ aws configure set region=CrossRegion-US
$ aws iam get-user
Could not connect to the endpoint URL: https://iam.CrossRegion-US.amazonaws.com/
Is this happening because I have set an incorrect region, or is SoftLayer still in the process of improving the API support?
I have also tried using the region from the authentication endpoints. Still, I get the same error.
Setting custom endpoints is not possible within the ~/.aws/config or ~/.aws/credentials files; instead, the endpoint must be passed as an argument to each command. In your example above, you were trying to connect to AWS because a custom endpoint was not provided to let the CLI know where to connect.
For example, to list the contents of bucket-1:
aws --endpoint-url=https://{endpoint} s3 ls s3://bucket-1/
In the case of IBM Cross-Region object storage, the default endpoint would be s3-api.us-geo.objectstorage.softlayer.net. (In this case, the region would be us-standard, although this is not necessary to explicitly declare as it is the only region currently offered.)
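So, filling the default Cross-Region endpoint into the earlier example, the command becomes:

aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net s3 ls s3://bucket-1/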
For more information, the documentation has information on both using the AWS CLI and connecting to endpoints.
All that said, user information is not accessible using the implementation of the S3 API. Some user information can be accessed using the SoftLayer API, but generally speaking user information isn't directly used by the object storage system in this release, as permissions are issued at the storage account level.
In trying to automate some deploy tasks to S3, I noticed that the credentials I provided via aws configure are not picked up by the Node.js SDK. How can I get the shell and a gulp task to reference the same file?
After lots of searching, it was the excerpt from this article that caused a eureka moment.
If you've been using the AWS CLI, you might already have a credentials file, which is in the same location as the new credentials file, but is named config. If so, the CLI will continue to use that file. However, if you create a new credentials file, the CLI will use that one instead. (Be aware that the aws configure command that you can use to set credentials from the command line will put the credentials in the config file, not the credentials file.)
By moving ~/.aws/config to ~/.aws/credentials now both the CLI and SDK read from the same location. Sadly, I haven't found any interface for maintaining ~/.aws/credentials other than hand-editing just yet.
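For reference, the shared credentials file uses the standard INI layout; the values below are placeholders:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY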