I'm trying to use DynamoDB Local. It works fine from the AWS CLI, but when I try to use it with the AWS SDK in Node, I keep getting a "Method Not Allowed" error. The same code works perfectly against the real DynamoDB, so I know it's not an issue with the code.
This is how I've set up the SDK. My understanding is that the region is ignored, so it shouldn't matter:
const { DocumentClient } = require('aws-sdk/clients/dynamodb');

const client = new DocumentClient({
  region: 'local',
  endpoint: 'http://localhost:8000',
  sslEnabled: false,
});
Node just gives me:
UnknownError: Method Not Allowed
at Request.extractError (/.../node_modules/aws-sdk/lib/protocol/json.js:51:27)
at Request.callListeners (/.../node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/.../node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/.../node_modules/aws-sdk/lib/request.js:683:14)
at Request.transition (/.../node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/.../node_modules/aws-sdk/lib/state_machine.js:14:12)
at /.../node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/.../node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/.../node_modules/aws-sdk/lib/request.js:685:12)
at Request.callListeners (/.../node_modules/aws-sdk/lib/sequential_executor.js:116:18)
I'm running DynamoDB Local on macOS 10.14.6 with Java:
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
But I also tried Amazon's Docker image and still got the same error.
The port was in use by another application, and Java didn't bother to mention it when starting the DynamoDB Local server...
But that doesn't explain why the AWS CLI was working. Now I'm confused...
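For anyone hitting the same thing, a quick way to check what is already listening on the port before starting DynamoDB Local (macOS/Linux; assumes the default port 8000):

lsof -nP -iTCP:8000 -sTCP:LISTEN

If this prints anything, stop that process or start DynamoDB Local on another port with the -port option.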
Put any valid region, like "us-east-1", instead of "local".
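That is, keeping the rest of the original config, something like:

const { DocumentClient } = require('aws-sdk/clients/dynamodb');

const client = new DocumentClient({
  region: 'us-east-1', // any valid region name
  endpoint: 'http://localhost:8000',
  sslEnabled: false,
});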
I'm working with the aws-cli and dynamodb-local; in docker-compose I have this entry:
volumes:
  dynamo-db:
    driver: local

services:
  dynamodb-local:
    container_name: local-db
    image: amazon/dynamodb-local
    restart: always
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal/
    volumes:
      - dynamo-db:/home/dynamodblocal
    ports:
      - '8000:8000'
    env_file:
      - ...
The error I receive during the application startup is
ResourceNotFoundException: Cannot do operations on a non-existent table
2023-02-06T23:13:33.829569752Z at Request.extractError (/srv/node_modules/aws-sdk/lib/protocol/json.js:52:27)
2023-02-06T23:13:33.829573169Z at Request.callListeners (/srv/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
2023-02-06T23:13:33.829576085Z at Request.emit (/srv/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
2023-02-06T23:13:33.829578835Z at Request.emit (/srv/node_modules/aws-sdk/lib/request.js:686:14)
2023-02-06T23:13:33.829581502Z at Request.transition (/srv/node_modules/aws-sdk/lib/request.js:22:10)
2023-02-06T23:13:33.829584252Z at AcceptorStateMachine.runTo (/srv/node_modules/aws-sdk/lib/state_machine.js:14:12)
2023-02-06T23:13:33.829587169Z at /srv/node_modules/aws-sdk/lib/state_machine.js:26:10
2023-02-06T23:13:33.829601794Z at Request.<anonymous> (/srv/node_modules/aws-sdk/lib/request.js:38:9)
2023-02-06T23:13:33.829605460Z at Request.<anonymous> (/srv/node_modules/aws-sdk/lib/request.js:688:12)
2023-02-06T23:13:33.829608252Z at Request.callListeners (/srv/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
2023-02-06T23:13:33.829610960Z message: Cannot do operations on a non-existent table
2023-02-06T23:13:33.829613585Z code: ResourceNotFoundException
2023-02-06T23:13:33.829616169Z requestId: 40aaa8a1-575f-45be-b8b1-e64fb28c9cb4
2023-02-06T23:13:33.829618710Z statusCode: 400
I found articles saying this can be caused by a missing -sharedDb flag, but as you can see I do pass it, along with a specific -dbPath. The app-level config is correct, because compared to other environments where everything works properly, the only thing I change is the URL. Any suggestions or articles explaining this image's behaviour are more than welcome.
If it helps, I'm using Node 16 and Dynamoose.
If you do not use -sharedDb, you get a separate database environment for every set of access keys provided. So if you spin up multiple environments with different keys, you will have to create the tables in each of those environments.
Either set -sharedDb or ensure you use the same access keys for all invocations.
From the docs:
The AWS SDKs for DynamoDB require that your application configuration specify an access key value and an AWS Region value. Unless you're using the -sharedDb or the -inMemory option, DynamoDB uses these values to name the local database file. These values don't have to be valid AWS values to run locally. However, you might find it convenient to use valid values so that you can run your code in the cloud later by changing the endpoint you're using.
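In practice, if you can't rely on -sharedDb, you can pin the same dummy credentials and region on every client that talks to the local endpoint, so they all resolve to the same database file. A minimal aws-sdk v2 sketch (the key values here are arbitrary; they just have to match everywhere, including whatever the CLI or other tools used when the tables were created; Dynamoose exposes equivalent settings through its own AWS config):

const AWS = require('aws-sdk');

const dynamodb = new AWS.DynamoDB({
  endpoint: 'http://localhost:8000',
  region: 'us-east-1',                                // part of the local db file name
  credentials: new AWS.Credentials('local', 'local'), // access key id / secret; arbitrary but fixed
});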
I have configured Hadoop and Spark in Docker through a k8s agent container, which we use to run Jenkins jobs on AWS EKS. While running the spark-submit job we get the error below:
py4j.protocol.Py4JJavaError: An error occurred while calling o40.exists.
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: xxxxxxxxx, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: xxxxxxxxxxxxxxx/xxxxxxxx
We have created a service account in k8s and annotated it with an IAM role (an IAM role created in AWS to access S3).
We can see that it copies files from S3, but the job still fails with this error and we cannot find the root cause.
Note: Spark version 2.2.1, Hadoop version 2.7.4.
Thanks
This is a five-year-old version of Spark built on an eight-year-old set of Hadoop binaries, including the s3a connector. Much of the binding logic to pick up IAM roles simply isn't there.
Upgrade to Spark 3.3.x with a full set of the hadoop-3.3.4 JARs and try again.
(Note that "use a recent release" is step one for any problem with an open source application; it would be the first action required if you ever filed a bug report.)
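As a rough sketch of where that lands you on EKS (the credentials-provider class comes from the AWS SDK v1 bundle that Hadoop 3.3.4 ships with and picks up the web-identity token that the service-account annotation injects; the job file name is a placeholder):

spark-submit \
  --conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider \
  your_job.py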
I am attempting to configure my AWS Amplify app, and am running into an error using amplify configure.
After properly installing and configuring the AWS CLI, and installing the @aws-amplify/cli module as per this answer, I attempted to use the amplify configure command as per this tutorial. However, I am met with the following error (assume 'user' is my valid username):
C:\Users\user\project>amplify configure
Follow these steps to set up access to your AWS account:
Sign in to your AWS administrator account:
https://console.aws.amazon.com/
Press Enter to continue
2020-02-16T02:12:08.705Z - error: uncaughtException: spawn cmd ENOENT date=Sat Feb 15 2020 18:12:08 GMT-0800 (Pacific Standard Time), pid=1820, uid=null, gid=null, cwd=C:\Users\user\CMAA, execPath=C:\Program Files\nodejs\node.exe, version=v12.16.0,
argv=[C:\Program Files\nodejs\node.exe, C:\Users\user\AppData\Roaming\npm\node_modules\@aws-amplify\cli\bin\amplify, configure], rss=253734912, heapTotal=211009536, heapUsed=180695704, external=13705474, loadavg=[0, 0, 0], uptime=232949, trace=[column=19,
file=internal/child_process.js, function=Process.ChildProcess._handle.onexit, line=267, method=onexit, native=false, column=16,
file=internal/child_process.js, function=onErrorNT, line=469, method=null, native=false, column=21,
file=internal/process/task_queues.js, function=processTicksAndRejections, line=84, method=null, native=false], stack=[Error: spawn cmd ENOENT,
at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19),
at onErrorNT (internal/child_process.js:469:16),
at processTicksAndRejections (internal/process/task_queues.js:84:21)]
I've tried deciphering this, but I can't find child_process.js anywhere, which makes me think it's internal to Node itself; that gives me even less of a clue about fixing it.
There is no difference in behavior between the Node.js Command Prompt and Windows PowerShell.
Has anybody else encountered a problem like this, and how did you fix it?
Also, let me know if this question needs to be moved to Super User; I just put it here after I found the aforementioned answer.
Be sure cmd.exe's directory is included in your %PATH% variable. Had me stuck, and that's all it was.
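A quick way to verify from a shell that still works (assuming a standard Windows install, where cmd.exe lives in C:\Windows\System32):

where cmd
echo %PATH%

If "where cmd" comes back empty, add C:\Windows\System32 to PATH via System Properties and open a fresh terminal before retrying amplify configure.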
serverless invoke local -f function_name -m POST
This command is not working locally for Azure. Everything works perfectly for AWS, but not for Azure.
I'm able to deploy these functions to Azure with Serverless, but I'm not able to invoke them locally.
Here is the response of this invocation:
Serverless: URL for invocation: http://localhost:7071/api/project
Serverless: Invoking function createProject with POST request
Error --------------------------------------------------
Error: connect ECONNREFUSED 127.0.0.1:7071
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 10.16.3
Framework Version: 1.53.0
Plugin Version: 3.1.0
SDK Version: 2.1.1
Components Core Version: 1.1.1
Components CLI Version: 1.2.3
I found the answer myself. I was confused: unlike AWS, in Azure we need to run the functions as a server. With AWS we just run the serverless invoke command and it runs the function and responds with an output, but that doesn't work for Azure. In Azure we need to start the server first, using the command

serverless offline

and then we can call our functions as we would a normal server, making requests the way we're used to with Express or any other ordinary server.
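Putting the two commands together (function name taken from the output above), start the host in one terminal:

serverless offline

then invoke from a second terminal:

serverless invoke local -f createProject -m POST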
I configured a .env file to hold my AWS credentials, but it doesn't work.
The docs say the config will automatically be loaded from the .env file, but it isn't.
I tried to add the following
const aws = require('aws-sdk');

aws.config.update({
  region: process.env.AWS_region,
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
});
and that worked.
Any idea why the AWS SDK doesn't load the options automatically?
"aws-sdk": "^2.288.0",
"dotenv": "^6.0.0",
Old question, but answering as I had this issue with a test.
This is due to the AWS SDK capturing the credentials when the SDK is first required or imported.
When you run dotenv.config(), it has already completed this and does not re-read the environment variables.
Updating the AWS config yourself sets the values and is a reasonable solution.
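Following that explanation, the other load-order fix is to call dotenv before the SDK is first required. A minimal sketch:

// must run before aws-sdk is required anywhere in the process
require('dotenv').config();

// the SDK now sees AWS_ACCESS_KEY_ID etc. in process.env when it loads
const AWS = require('aws-sdk');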
I had the same issue and then figured out that I had to export the env variables in my shell profile (~/.zshrc in my case, since I use zsh):

export AWS_ACCESS_KEY_ID=<key>

(and the same for the other AWS vars). After restarting the terminal, my Node AWS SDK was able to pick them up. If you are using the Node AWS SDK, I'd suggest printing process.env.AWS_ACCESS_KEY_ID in your code to verify that your Node code can read the env variable in the first place. Hope that helps.