I have a basic NodeJS web application which scans a DynamoDB table with only ~10 items. When I run the application locally on my machine, the operation takes less than 1 second.
However, when I deploy the application on an EC2 instance, the same operation takes almost 5 seconds. The EC2 instance (t2.micro) and the DynamoDB table are in the same region. I have also enabled a VPC gateway endpoint for DynamoDB, but the latency remains the same.
Here are the curl requests to test the performance:
curl -X POST http://localhost:9000/login -H 'Content-Type: application/json' -d '{ "email": "xyz#gmail.com", "password": "admin", "type": "talent" }' -s -o /dev/null -w "%{time_starttransfer}\n"
0.394470

curl -X POST http://EC2_IP_ADDRESS:9000/login -H 'Content-Type: application/json' -d '{ "email": "xyz#gmail.com", "password": "admin", "type": "talent" }' -s -o /dev/null -w "%{time_starttransfer}\n"
5.207561
Please help me understand what could be causing these latencies and how I can achieve low latency when querying DynamoDB from an EC2 instance.
You need to find out how much of the time is taken by the database and how much by the network. To do that you can check the following:
Run your query directly against DynamoDB to check how long the database itself takes, and whether the query itself is the bottleneck.
Check whether the turnaround time difference for your query between running the code on the EC2 server and on your local machine is the same with or without the VPC endpoint.
The extended latency is very likely not due to DynamoDB if you think about the network hops:
1. EC2 => EC2 => DynamoDB (curl http://localhost/...)
2. Your desktop => internet => EC2 => DynamoDB (curl http://EC2_IP/...)
It's possible you would experience a similarly high latency even if you bypass the DynamoDB call entirely, e.g. comment out the code that makes the DB call and return a mock response - I suspect you'll get a latency similar to #2.
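One quick way to separate those hops, for example, is to run the same timing request from a shell on the EC2 instance itself, and to time the table scan on its own. A rough sketch (the login endpoint and payload come from the question above; the AWS CLI scan, table name, and region are my own assumptions, added just to time DynamoDB in isolation):
# On the EC2 instance itself: time the request without the desktop => internet hop
curl -X POST http://localhost:9000/login \
-H 'Content-Type: application/json' \
-d '{ "email": "xyz#gmail.com", "password": "admin", "type": "talent" }' \
-s -o /dev/null -w "%{time_starttransfer}\n"
# Also on the EC2 instance: time the table scan alone (assumes credentials/role and region are configured)
time aws dynamodb scan --table-name YOUR_TABLE_NAME --region YOUR_REGION > /dev/null
If the localhost curl on EC2 is fast, the extra seconds are in the desktop => internet => EC2 leg rather than in DynamoDB.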
I have managed to deploy the application with back4app and got it web-hosted there. However, I can't access the routes of my API (BookReadingApp). Every time I try to access any route I get an error: {"error": "unauthorized"}
The 403 error persists no matter what type of request I send to the server (GET, POST, PUT, PATCH, DELETE). The same thing happens even with deliberately wrong requests to routes that don't exist (for testing purposes only).
The console logs show "Database was connected successfully" and the server launched with no errors (I log these in my app.js file to confirm that the API works as expected).
Please, help me to resolve this issue - any help is highly appreciated.
Kind Regards
Vladyslav
I tried testing the API (with Postman and in Chrome), intentionally sending wrong requests - the error does not change: it's always 'unauthorized' (403). The application console shows that the server started successfully and that the DB was connected.
'unauthorized' is always returned when something fails internally or when you call cloud functions without specifying the keys in the headers:
curl -X POST \
-H "X-Parse-Application-Id: key_here" \
-H "X-Parse-REST-API-Key: key_here" \
-H "Content-Type: application/json" \
-d "{}" \
https://parseapi.back4app.com/functions/hello
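The same headers apply to the plain REST routes, not just cloud functions. As a sketch for testing, assuming a class named Book exists in your app (substitute your own class name and the keys from the back4app dashboard):
curl -X GET \
-H "X-Parse-Application-Id: key_here" \
-H "X-Parse-REST-API-Key: key_here" \
https://parseapi.back4app.com/classes/Book
If this returns data while your own routes still return 403, the keys themselves are fine.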
TL;DR: What is the correct way to send a GET request to do a pull subscription from a Pub/Sub server? What is the correct URL to use?
I am running a local Google Pub/Sub fake using gcloud beta emulators pubsub start. I have been able to successfully publish to it using a Ruby script I wrote; however, I have been unable to do a pull subscription.
I am aiming to accomplish this just using a GET request, not a script. One of the issues I've found is that there's tons of documentation on doing pull subscriptions with a client or gcloud, but very little on how to access the server by URL. Perhaps I am misunderstanding what is possible - but I want to publish a message to Pub/Sub using a Ruby client, and then use Postman to do a GET request to the Pub/Sub server to retrieve the message.
I am fairly certain the issue is with how I am making the GET request, but I have reproduced everything else below for context.
Ruby Publishing Code
require "google/cloud/pubsub"
require 'json'
class Publisher
def publish(event)
puts "1==============>>>>>>>> publishing..."
pubsub = Google::Cloud::PubSub.new(
project_id: "grpc-demo-proj",
emulator_host: "localhost:8085"
)
topic_id = "event_topic"
topic = pubsub.topic topic_id
begin
topic.publish_async "receive_event#event",
event: JSON.generate(event) do |result|
raise "Failed to publish the message." unless result.succeeded?
puts "2==============>>>>>>>> Message published asynchronously."
end
# Stop the async_publisher to send all queued messages immediately.
topic.async_publisher.stop.wait!
rescue StandardError => e
puts "3==============>>>>>>>> Received error while publishing: #{e.message}"
end
end
end
This seems to work, as I get
1==============>>>>>>>> publishing...
DEBUG GRPC : calling localhost:8085:/google.pubsub.v1.Publisher/GetTopic
DEBUG GRPC : calling localhost:8085:/google.pubsub.v1.Publisher/Publish
2==============>>>>>>>> Message published asynchronously.
in my terminal.
I also have the Pub/Sub server running using the following shell scripts.
#!/bin/bash
# Kill the existing process if it's already running
if [ "$(lsof -i:8085)" ]; then
kill $(lsof -t -i:8085)
fi
# Kick off the new process
gcloud beta emulators pubsub start --project=grpc-demo-proj
# Connect to environment variables
$(gcloud beta emulators pubsub env-init)
PubSub Setup Script
#!/bin/bash
# Wait for the pubsub emulator to boot up
sleep 7
while [[ ! "$(lsof -i:8085)" ]]
do
  echo '#===> PUBSUB EMULATOR SETUP: Waiting for PubSub Emulator to start...'
  sleep 3
done
# Create topics
curl --header "Content-Type: application/json" \
--request PUT \
http://localhost:8085/v1/projects/grpc-demo-proj/topics/event_topic
# Create test subscriptions for each topic
curl --header "Content-Type: application/json" \
--request PUT \
--data '{"topic": "projects/grpc-demo-proj/topics/event_topic"}' \
http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1
Again, these seem to work well.
Where I have trouble...
is doing a pull subscription from the Pub/Sub server using a GET request (either from Postman, or just in the browser's URL bar)
http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:pull
Returns
{
  "error": {
    "code": 400,
    "message": "Invalid [subscriptions] name: (name=projects/grpc-demo-proj/subscriptions/event_topic.test_sub1:pull)",
    "status": "INVALID_ARGUMENT"
  }
}
But the subscription name is valid, as
http://localhost:8085/v1/projects/grpc-demo-proj/subscriptions/event_topic.test_sub1
returns
{
  "name": "projects/grpc-demo-proj/subscriptions/event_topic.test_sub1",
  "topic": "projects/grpc-demo-proj/topics/event_topic",
  "pushConfig": {},
  "ackDeadlineSeconds": 10,
  "messageRetentionDuration": "604800s"
}
Which would seem to confirm the server is working, and the topics and subscriptions have been successfully created.
Although -NOT- the solution I am looking for, I tried using gcloud on the command line:
bgc@jadzia:~$ gcloud beta pubsub subscriptions pull test_sub1
ERROR: (gcloud.beta.pubsub.subscriptions.pull) NOT_FOUND: Resource not found (resource=test_sub1).
Even though other sources seem to confirm this subscription does exist.
While it could possibly be an issue with Ruby incorrectly reporting that it successfully published the message, or something wrong with the server, I suspect I'm just not making the GET request correctly. I've tried several variations on the above GET request but won't list them all here.
So, without using a script - how can I get a message back from the pub/sub server? Ideally a URL for a GET request I can plug into PostMan, but command-line based solutions may also work here.
I reproduced your local fake Pub/Sub server using all the scripts you posted. As you commented, I used POST instead of GET and got a response. From the Pub/Sub subscriptions pull reference:
POST https://pubsub.googleapis.com/v1/{subscription}:pull
POST request for subscriptions pull:
curl --header "Content-Type: application/json" \
--request POST \
--data '{"maxMessages": 1}' \
http://localhost:8085/v1/projects/my-project/subscriptions/event_topic.test_sub1:pull
Output of subscriptions pull:
Pubsub Message is encoded in base 64. Take note that I ran everything (creation of pubsub server, topics, subscriber, publish of message, pulling of message) in the Google cloud shell.
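If you want to inspect the message from the pull response, the data field is base64-encoded, and you can then acknowledge it via the :acknowledge endpoint so it is not redelivered. A rough sketch (the encoded string and ackId below are placeholders - use the values your own pull returned):
# Decode the base64-encoded "data" field from the pull response (placeholder value)
echo "PLACEHOLDER_BASE64_DATA" | base64 --decode
# Acknowledge the message (placeholder ackId)
curl --header "Content-Type: application/json" \
--request POST \
--data '{"ackIds": ["PLACEHOLDER_ACK_ID"]}' \
http://localhost:8085/v1/projects/my-project/subscriptions/event_topic.test_sub1:acknowledge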
EDIT 1:
As Brian mentioned, this is the request that is working for him. It works in my testing as well!
curl --header "Content-Type: application/json" \
--request POST \
"localhost:8085/v1/projects/my-project/subscriptions/event_topic.test_sub1:pull?maxMessages=5"
Output: (screenshot of the pulled message omitted)
Using the AWS RDS console, it is very easy to see the number of connections to an instance in the "Current Activity" column.
How do I get this information from the aws cli? As far as I can tell, aws rds describe-db-instances does not seem to have this particular piece of information.
NOTE: For my purposes, it would be enough to know if there are any connections.
For metrics, you should be using the aws cloudwatch tool. To get the number of database connections currently, you could use something like this:
aws cloudwatch get-metric-statistics --namespace AWS/RDS \
--metric-name DatabaseConnections --start-time 2018-06-14T16:00:00Z \
--end-time 2018-06-14T16:01:00Z --period 60 --statistics "Maximum" \
--dimensions Name=DBInstanceIdentifier,Value=your-db-identifier
You'll need to combine it with code or a script to insert the correct --start-time and --end-time values.
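For example, a small sketch that computes a recent window with GNU date (the instance identifier is a placeholder; on macOS the date flags differ):
# Maximum concurrent connections seen over the last 5 minutes
START=$(date -u -d '5 minutes ago' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
aws cloudwatch get-metric-statistics --namespace AWS/RDS \
--metric-name DatabaseConnections --start-time "$START" \
--end-time "$END" --period 300 --statistics Maximum \
--dimensions Name=DBInstanceIdentifier,Value=your-db-identifier \
--query 'Datapoints[0].Maximum' --output text
A value of 0 would indicate there are currently no connections.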
mon-get-stats --region "your-region" --metric-name="DatabaseConnections" --namespace="AWS/RDS" --dimensions="DBInstanceIdentifier=your-db-name" --statistics Maximum > your-db-name.txt
awk 'END{printf "%.0f",$3}' your-db-name.txt > your-db-name-final.txt
echo "$(cat your-db-name-final.txt)"
The small script above shows your RDS instance's current connections.
After a CouchDB upgrade it was no longer possible to create new databases or update the _security of old databases.
I've just run into this with CouchDB 2.1.1. My problem was that the security object I was attempting to pass in was malformed.
From https://issues.apache.org/jira/browse/COUCHDB-2326,
Attempts to write security objects where the "admins" or "members" values are malformed will result in an HTTP 500 response with the following body:
{"error":"error","reason":"no_majority"}
This should really be an HTTP 400 response with a "bad_request" error value and a different error reason.
To reproduce:
$ curl -X PUT http://localhost:15984/test
{"ok":true}
$ curl -X PUT http://localhost:15984/test/_security -d '{"admins":[]}'
{"error":"error","reason":"no_majority"}
Yet another reason could be that old nodes were lingering in _membership configuration.
I.e., _membership showed:
{
  "all_nodes": [
    "couchdb@localhost"
  ],
  "cluster_nodes": [
    "couchdb@127.0.0.1",
    "couchdb@localhost"
  ]
}
when it should show
{
  "all_nodes": [
    "couchdb@localhost"
  ],
  "cluster_nodes": [
    "couchdb@localhost"
  ]
}
Deleting the bad cluster node as described in the docs helped.
Note that the _nodes database might not be available on port 5984, but only on 5986.
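A rough sketch of that cleanup, assuming a default local install where the node-local interface listens on 5986 and the stale node is the couchdb@127.0.0.1 entry shown above (REV_FROM_LISTING is a placeholder for the _rev returned by the listing):
# List the node documents to find the stale entry and its current revision
curl http://localhost:5986/_nodes/_all_docs
# Delete the stale node document by its revision
curl -X DELETE "http://localhost:5986/_nodes/couchdb@127.0.0.1?rev=REV_FROM_LISTING"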
For a PUT to /db/_security, if the user is not a db or server admin, the response is HTTP status 500 with {"error":"error","reason":"no_majority"}, but the server logs are more informative, including: {forbidden,<<"You are not a db or server admin.">>}
One reason could be that the couchdb process reached its maximum number of open files, which led to read errors and (wrongly) no_majority errors.
Another reason could be that the server switched from single node configuration to multiple node configuration (for example during an upgrade).
Changing the number of nodes back to 1 helped.
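One reading of that fix, as a sketch: set the replica count n back to 1 in the [cluster] section of local.ini and restart (the config path below is an assumption and varies by install; as far as I know this only affects databases created after the change):
# Append a single-node cluster setting, then restart CouchDB
cat >> /opt/couchdb/etc/local.ini <<'EOF'
[cluster]
n = 1
EOF
sudo systemctl restart couchdb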
We have a Spark standalone cluster with 2 masters. We are using Consul to discover all of our services, so that instead of writing in the worker configuration something like:
spark://172.40.101.1:7077,172.40.102.2:7077
we just write
spark://spark-master.service:7077
The problem is that if, for example, 172.40.101.1 is the standby and 172.40.102.2 is the active master, and the worker resolves 101.1 the first time, it will not try again. It seems to be static.
I can work around this using dig and some Linux parsing, but my questions are:
Is the worker config static?
Is there a best practice for this issue?
There are two parts to this problem. The first is: how do you identify an active (or standby) Spark master? The second is: how do you use that information to connect to the proper one?
If you can tell, either by a web URL GET or by inspecting a process, which one is active and which one(s) are standby, you can create a service / health check based on that. Googling around a bit, I see the Spark Consul service and its health check here:
{
  "service": {
    "name": "spark-master",
    "port": 7077,
    "checks": [
      {
        "script": "ps aux | grep -v grep | grep org.apache.spark.deploy.master.Master",
        "interval": "10s"
      }
    ]
  }
}
This health check finds a Java process via a script. If the process is found, then the health check succeeds. This particular health check doesn't care whether the master is active or standby; either matches. You would need a health check, under a service with a different name, that determines whether the Spark node is active. I don't know anything about Spark, but looking around on the net I found that the master's web UI on port 8080 reports whether its status is ALIVE. If that works as I imagine, a check like this might do the trick:
{
  "service": {
    "name": "spark-active",
    "port": 7077,
    "checks": [
      {
        "script": "curl --silent http://127.0.0.1:8080/ | grep '<li><strong>Status:</strong> ALIVE</li>' | wc -l | awk '{exit ($0 - 1)}'",
        "interval": "10s"
      }
    ]
  }
}
Then you would connect using:
spark://spark-active.service:7077
Your health check can also connect via HTTP. Consul service checks are documented here: https://www.consul.io/docs/agent/checks.html
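To see which address the workers would actually resolve, you can also query Consul's DNS interface directly; a small sketch, assuming the default agent DNS port 8600 and the .consul domain:
# Ask the local Consul agent which nodes currently back the spark-active service
dig @127.0.0.1 -p 8600 spark-active.service.consul SRV +short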
-g