I was trying to run the Node.js quickstart for Google Cloud Spanner. I started the emulator instance by running the command below on my development server:
docker run -p 9010:9010 -p 9020:9020 gcr.io/cloud-spanner-emulator/emulator
On the development server I could also create instances as follows:
# configuration first
gcloud config configurations create emulator
gcloud config set auth/disable_credentials true
gcloud config set project my-project
gcloud config set api_endpoint_overrides/spanner http://localhost:9020/
# creating instance
gcloud spanner instances create test-instance \
  --config=emulator-config --description="Test Instance" --nodes=1
The instance was created successfully.
Now I am trying to run the quickstart samples from a different machine on the same network. I made the following change in the schema.js file (line 30).
const spanner = new Spanner({
  projectId: projectId,
  apiEndpoint: 'http://dev-server-ip',
  port: 9020
});
I then run the program as follows using Node.js:
node schema.js createDatabase test-instance example-db my-project
I got the following error:
schema.js createDatabase <instanceName> <databaseName> <projectId>

Creates an example database with two tables in a Cloud Spanner instance.

Options:
  --version  Show version number  [boolean]
  --help     Show help  [boolean]
Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
    at GoogleAuth.getApplicationDefaultAsync (D:\work\gcloud-connectors\nodejs-spanner\samples\node_modules\google-auth-library\build\src\auth\googleauth.js:183:19)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async GoogleAuth.getClient (D:\work\gcloud-connectors\nodejs-spanner\samples\node_modules\google-auth-library\build\src\auth\googleauth.js:565:17)
    at async GrpcClient._getCredentials (D:\work\gcloud-connectors\nodejs-spanner\samples\node_modules\google-gax\build\src\grpc.js:145:24)
    at async GrpcClient.createStub (D:\work\gcloud-connectors\nodejs-spanner\samples\node_modules\google-gax\build\src\grpc.js:308:23)
EDIT
Issue resolved. You need to set the following environment variable:
export SPANNER_EMULATOR_HOST=dev-server-ip:9010
Note that the port is the gRPC port, 9010. The code change to the Spanner constructor is also not necessary.
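With that variable set, the quickstart's constructor can stay minimal. A sketch of the simplified schema.js setup, assuming nothing beyond the stock @google-cloud/spanner client:

// Minimal sketch: with SPANNER_EMULATOR_HOST=dev-server-ip:9010 exported,
// the client library routes its gRPC traffic to the emulator on its own.
const {Spanner} = require('@google-cloud/spanner');

const spanner = new Spanner({
  projectId: projectId, // no apiEndpoint or port override required
});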
I wanted to learn AWS and found the tutorial Build a Serverless Web Application. In my research, the closest Q&A I could find for my issue was Unable to locate credentials aws cli.
My process has been:
Created a repo in GitHub
Navigated to IAM and created a user trainer. The tutorial didn't specify policies, so I chose AdministratorAccess. Per the instructions, I went to Security credentials and Create access key, then downloaded the file locally.
Went to Configuration basics and did Importing a key pair via .CSV file with the command:
aws configure import --csv file:///Users/path/to/file/aws-training.csv
params:
User name: trainer
Access key ID: ****57
Secret access key: *****1b
but then found that the file didn't contain the region or output format, so I ran:
aws configure --profile trainer
and re-did all values based on the CSV (Quick Setup):
AWS Access Key ID: ****57
AWS Secret Access Key: *****1b
Default region name: us-east-1
Default output format: json
Made sure to restart my terminal, and then locally in a directory I ran the command:
aws s3 cp s3://wildrydes-us-east-1/WebApplication/1_StaticWebHosting/website ./ --recursive
The terminal hangs for a moment, then throws:
fatal error: Unable to locate credentials
Research
Q&As I've read through to try to diagnose the problem:
aws cli with shell script: upload failed: Unable to locate credentials
Bash with AWS CLI - unable to locate credentials
Unable to locate credentials aws cli
Unable to locate credentials in boto3 AWS
Get "fatal error: Unable to locate credentials" when I'm copying file from S3 to EC2 using aws cli
Unable to locate credentials when trying to copy files from s3-bucket to my ec2-instance
How can I resolve the Unable to locate credentials error, and what am I doing wrong or misunderstanding?
Per the comment:
Check the content of ~/.aws/credentials and ~/.aws/config
credentials
command:
nano ~/.aws/credentials
renders:
[training]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
[trainer]
aws_access_key_id = *****57
aws_secret_access_key = ***1b
config
command:
nano ~/.aws/config
renders:
[profile training]
region = us-east-1
output = json
[profile trainer]
region = us-east-1
output = json
You've configured the profile with the name trainer. You didn't create a default profile, you created a named profile. You're getting the current error because the CLI tool is looking for a default profile, and you don't have one configured.
In order to use the trainer profile you either have to add --profile trainer to every aws command you run in the command line, or you need to set the AWS_PROFILE environment variable inside your command line environment:
export AWS_PROFILE=trainer
It looks like you also tagged this with nodejs, so I recommend going the environment variable route, which will also work with the Node.js AWS SDK.
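For example, a minimal sketch with the AWS SDK for JavaScript v3 (the S3 call is just illustrative; package names are from the v3 docs):

// With AWS_PROFILE=trainer exported, the default credential provider chain
// resolves the named profile from ~/.aws/credentials automatically.
const { S3Client, ListBucketsCommand } = require('@aws-sdk/client-s3');
const { fromIni } = require('@aws-sdk/credential-providers');

// Option 1: rely on the AWS_PROFILE environment variable.
const s3 = new S3Client({ region: 'us-east-1' });

// Option 2: pin the named profile explicitly in code instead.
const s3Pinned = new S3Client({
  region: 'us-east-1',
  credentials: fromIni({ profile: 'trainer' }),
});

s3.send(new ListBucketsCommand({})).then((res) => console.log(res.Buckets));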
What I want to do
I want to create a Node.js server (built with Nest.js) in the infrastructure shown below:
infra-structure-image
GitHub repo is here.
Notice:
ECS sits in a private subnet.
I want to use PrivateLink to connect to AWS services (ECR and S3 in my case) rather than a NAT gateway in a public subnet.
The infrastructure is built from a CloudFormation stack via the AWS CDK Toolkit.
The Node.js server is a simple app that responds 'Hello World!'.
Current behavior
When I deploy the AWS CloudFormation stack with cdk deploy, it gets stuck during ECS service creation in the CREATE_IN_PROGRESS state. I can see ECS task execution error logs in the ECS management console as follows:
STOPPED (ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr.ap-northeast-1.amazonaws.com/: dial tcp 99.77.62.61:443: i/o timeout)
If I don't delete the stack or set the minimum number of tasks to 0, the ECS service continuously tries to execute tasks for hours and finally gets a timeout error.
I have already checked some points based on this official article.
Created VPC endpoints (com.amazonaws.region.ecr.dkr, com.amazonaws.region.ecr.api, S3)
Configured the VPC endpoints (security group, subnets to sit in, IAM policy)
Added permissions to the ECS task execution role so that ECS can pull images from ECR
Checked that the image exists in ECR
And I have verified the 'Hello World' response with this Docker image on my local machine.
Reproduction Steps
A minimal GitHub repo is here.
$ git clone https://github.com/Fanta335/cdk-ecs-nest-app
$ cd cdk-ecs-nest-app
$ npm install
The AWS CDK Toolkit is used in this project, so you need to run npm install -g aws-cdk if you have not installed it on your local machine.
And if you have not set a default IAM user configuration in the AWS CLI, you need to run aws configure so that credentials can be passed to the CloudFormation stack.
$ cdk deploy
Then the deployment should be stuck.
Versions
MacOS Monterey 12.6
AWS CDK cli 2.43.1 (build c1ebb85)
AWS cli aws-cli/2.7.28 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off
Docker version 20.10.17, build 100c701
Nest cli 9.1.3
The problem was that DNS resolution had not been enabled on the ECR VPC endpoints. I should have set privateDnsEnabled: true manually on the InterfaceVpcEndpoint instances in the cdk-ecs-nest-app-stack.ts file as follows:
const ECSPrivateLinkAPI = new ec2.InterfaceVpcEndpoint(this, "ECSPrivateLinkAPI", {
  vpc,
  service: new ec2.InterfaceVpcEndpointService(`com.amazonaws.${REGION}.ecr.api`),
  securityGroups: [securityGroupPrivateLink],
  privateDnsEnabled: true, // HERE
});

const ECSPrivateLinkDKR = new ec2.InterfaceVpcEndpoint(this, "ECSPrivateLinkDKR", {
  vpc,
  service: new ec2.InterfaceVpcEndpointService(`com.amazonaws.${REGION}.ecr.dkr`),
  securityGroups: [securityGroupPrivateLink],
  privateDnsEnabled: true, // HERE
});
According to the CDK docs, the default value of privateDnsEnabled is defined by the service which uses this VPC endpoint.
privateDnsEnabled?
Type: boolean (optional, default: set by the instance of IInterfaceVpcEndpointService, or true if not defined by the instance of IInterfaceVpcEndpointService)
I didn't check the default privateDnsEnabled values of com.amazonaws.${REGION}.ecr.api and com.amazonaws.${REGION}.ecr.dkr, but we have to set true manually in the CDK Toolkit.
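One related detail: the S3 endpoint from the checklist above is a gateway endpoint, which has no private DNS setting. A sketch of how it might be declared in the same stack, assuming the same vpc construct (ECR stores image layers in S3, so tasks in private subnets need it):

// Gateway endpoint so ECR image layer downloads from S3 work without a NAT gateway.
vpc.addGatewayEndpoint("S3GatewayEndpoint", {
  service: ec2.GatewayVpcEndpointAwsService.S3,
});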
I am using Python Google App Engine.
Could you tell me how I can run a Python 3 Google App Engine app with ndb on my local system?
https://cloud.google.com/appengine/docs/standard/python3
Please try this:
Go to the service account page: https://cloud.google.com/docs/authentication/getting-started
Create a JSON key file, then install this pip package:
$ pip install google-cloud-ndb
Now, on Linux, open a terminal and run:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
If on Windows, open a command prompt and run:
set GOOGLE_APPLICATION_CREDENTIALS=C:\path\to\credentials.json
Then run this code in Python 3 in your terminal/command prompt:
from google.cloud import ndb

# A simple model for the example
class Contact(ndb.Model):
    name = ndb.StringProperty()
    phone = ndb.StringProperty()
    email = ndb.StringProperty()

client = ndb.Client()
with client.context():
    contact1 = Contact(name="John Smith",
                       phone="555 617 8993",
                       email="john.smith@gmail.com")
    contact1.put()
See the result in your Datastore in the Google Cloud Console.
App Engine is a serverless service provided by Google Cloud Platform where you can deploy your applications and configure Cloud resources like instances' CPU, memory, scaling method, etc. This will provide you with the architecture to run your app.
This service is not meant to be used on local environments. Instead, it is a great option to host an application that (ideally) has been tested on local environments.
Put it this way: you don't run a Django application with Datastore dependencies on App Engine locally; you run a Django application with Datastore (and other) dependencies locally, and then deploy it to App Engine once it is ready.
Most GCP services have their own client libraries, so we can interact with them via code, even on local environments. The ndb library you asked about belongs to Google Cloud Datastore and can be installed in Python environments with:
pip install google-cloud-ndb
After installing it, you will be ready to interact with Datastore locally. Please find details about setting up credentials and code snippets in the Datastore Python Client Library reference.
Hope this is helpful! :)
You can simply create an emulator instance of Datastore on your local machine:
gcloud beta emulators datastore start --project test --host-port "0.0.0.0:8002" --no-store-on-disk --consistency=1
And then use it in the code in your main app file:
import google.auth.credentials
from google.cloud import ndb

def get_ndb_client(namespace):
    # config and ENVIRONMENTS come from the app's own settings module
    if config.ENVIRONMENT != ENVIRONMENTS.LOCAL:
        # production
        db = ndb.Client(namespace=namespace)
    else:
        # localhost: mock credentials; assumes DATASTORE_EMULATOR_HOST
        # (e.g. 0.0.0.0:8002) is exported so the client targets the emulator
        import mock
        credentials = mock.Mock(spec=google.auth.credentials.Credentials)
        db = ndb.Client(project="test", credentials=credentials, namespace=namespace)
    return db

ndb_client = get_ndb_client("ns1")
I have been trying to start an already-launched EC2 instance via Python. I have configured the AWS CLI from the command prompt using the command below:
aws configure
aws_access_key_id = MY_ACCESS_KEY
aws_secret_access_key = MY_SECRET_KEY
region=us-west-2b
output=Table
Now I used the following code from the Spyder IDE of Anaconda:
import boto3
instanceID = 'i-XXXXXXXXXXad'
ec2 = boto3.client('ec2', region_name='us-west-2b')
ec2.start_instances(InstanceIds=['i-XXXXXXXXXad'])
This gives the following error
EndpointConnectionError: Could not connect to the endpoint URL: "https://ec2.us-west-2b.amazonaws.com/"
I have been trying to debug the error for hours now; any kind of help will be useful. Also, I have a .pem as well as a .ppk file created to start the instance via PuTTY, and the .ppk file also has a passphrase. Do I need to do any additional steps for this?
region=us-west-2b
is not a region, it is an availability zone. Try:
region=us-west-2
You can test by:
$ host ec2.us-west-2b.amazonaws.com
Host ec2.us-west-2b.amazonaws.com not found: 3(NXDOMAIN)
$ host ec2.us-west-2.amazonaws.com
ec2.us-west-2.amazonaws.com has address 54.240.251.131
I'm using Windows 7 x64, with gcloud installed:
Google Cloud SDK 0.9.71
app 2015.07.24
app-engine-java 1.9.24
app-engine-python 1.9.24
app-engine-python-extras 1.9.21
bq 2.0.18
bq-win 2.0.18
core 2015.07.24
core-win 2015.07.24
gcloud 2015.07.24
gsutil 4.13
gsutil-win 4.13
preview 2015.07.24
windows-ssh-tools 2015.06.02
I'm trying to run in the preview and deploy the tutorial example from here. Note that the app.yaml from this example has "nodejs" set as the runtime.
After running command
gcloud preview app run --host localhost:8080 app.yaml
I get
RuntimeError: Unknown runtime 'nodejs'; supported runtimes are 'custom', 'go', 'java', 'java7', 'php', 'php55', 'python', 'python27', 'vm'.
If I put "vm" for runtime it wants to use docker, which doesn't work for me either and I wanted to use the option to do this without docker anyhow.
If I put "custom" for runtime in yaml file I get:
ValueError: The --custom_entrypoint flag must be set for custom runtimes
The example given in the help output for this switch is the following:
--custom_entrypoint="gunicorn -b localhost:{port} mymodule:application"
I tried this as a best guess:
gcloud preview app run --custom_entrypoint="nodejs -b localhost:{8080} mymodule:application" app.yaml
and got this
ERROR: Argument [--custom_entrypoint=nodejs -b localhost:{8080} mymodule:application] is not a valid deployable file.
ERROR: (gcloud.preview.app.run) Errors occurred while parsing the App Engine app configuration.
Thanks for your time.
The gcloud command seems to be undergoing some changes, so this question is likely no longer valid: we're now meant to run dev_appserver.py instead of gcloud to run devserver processes. You can also just run the Node server directly, or use Docker to build the image from your Dockerfile and run that as a container.
If running from dev_appserver.py, make sure you have runtime: custom and a Dockerfile sourcing FROM gcr.io/google_appengine/nodejs, since dev_appserver.py currently raises:
RuntimeError: Unknown runtime 'nodejs'; supported runtimes are 'custom', 'go', 'java', 'java-compat', 'java7', 'php55', 'python', 'python-compat', 'python27'.
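For reference, a minimal sketch of that pairing; the vm: true flag matches the Managed VMs era this question dates from, and the Dockerfile lines beyond FROM are assumptions, not from the tutorial:

# app.yaml
runtime: custom
vm: true

# Dockerfile
FROM gcr.io/google_appengine/nodejs
COPY . /app/
CMD ["npm", "start"]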