I am a bit confused about the MinIO S3 gateway. Do we need the AWS SDK when running the MinIO server with the S3 gateway? My server is running and the browser shows me the S3 buckets, but I can't connect to the server through my Node app: it says that port 9000 is invalid. Is that related to the AWS SDK, or does something else need to be done here?
I have gone through the MinIO documentation but couldn't find anything that properly covers this. The docs are divided into different blocks, and none of them addresses it. I've been stuck on this for two days. I would be really grateful if someone could help me.
The error log is as below:
InvalidArgumentError: Invalid port : 9000,
at new Client (/var/www/html/learn-otter-api/node_modules/minio/dist/main/minio.js:97:13)
The error comes from the fact that minio validates the type of every option:
if (!(0, _helpers.isValidPort)(params.port)) {
throw new errors.InvalidArgumentError(`Invalid port : ${params.port}`);
}
function isValidPort(port) {
// verify if port is a number.
if (!isNumber(port)) return false;
...
Since it checks the port against the number type, you'll need to cast it to a number if you read the port from process.env, as I did.
After that you'll probably run into another, similar error, but this time the message is more self-explanatory:
if (!(0, _helpers.isBoolean)(params.useSSL)) {
throw new errors.InvalidArgumentError(`Invalid useSSL flag type : ${params.useSSL}, expected to be of type "boolean"`);
} // Validate region only if its set.
So if you read your options from process.env, cast them to the required types first:
const minioOptions = {
  "endPoint": process.env.MINIO_ENDPOINT,
  "port": Number(process.env.MINIO_PORT),          // must be a number, not a string
  "useSSL": "true" === process.env.MINIO_USE_SSL,  // must be a boolean, not a string
  "accessKey": process.env.MINIO_ACCESS_KEY,
  "secretKey": process.env.MINIO_SECRET_KEY
}
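With the options cast to the right types, the client should construct cleanly. A minimal sketch of wiring it up (the listBuckets call is just an illustrative smoke test, not from the original post):

const Minio = require('minio');

const minioClient = new Minio.Client(minioOptions);

// smoke test: list the buckets the browser UI already shows
minioClient.listBuckets()
  .then(buckets => console.log('connected, buckets:', buckets.map(b => b.name)))
  .catch(err => console.error('connection failed:', err));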
I'm using Firebase version 9 (modular) as the database for a web application in Node.js v18.
When I try to access data from localhost I get the error below, but from a Heroku deployment it works like a charm.
[2022-06-26T07:32:22.447Z] #firebase/firestore: Firestore (9.8.4): Connection GRPC stream error. Code: 7 Message: 7 PERMISSION_DENIED: Permission denied on resource project "my_project",.
[2022-06-26T07:32:22.447Z] #firebase/firestore: Firestore (9.8.4): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=permission-denied]: 7 PERMISSION_DENIED: Permission denied on resource project "my_project",.
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
[2022-06-26T07:32:22.466Z] #firebase/firestore: Firestore (9.8.4): Connection GRPC stream error. Code: 7 Message: 7 PERMISSION_DENIED: Permission denied on resource project "my_project",.
node:internal/process/promises:288
triggerUncaughtException(err, true /* fromPromise */);
^
[FirebaseError: Failed to get document because the client is offline.] {
code: 'unavailable',
customData: undefined,
toString: [Function (anonymous)]
}
Node.js v18.0.0
For context:
The web app is an internal tool, so I do my own authentication server-side and then the app makes some checks on data in the Firestore DB. So basically it's the app that makes the queries, not the user. This makes me think I should use a service account with the firestore-admin API.
The problem is that the code snippet provided in the docs does not work for me:
var admin = require("firebase-admin");
var serviceAccount = require("path/to/serviceAccountKey.json");
admin.initializeApp({
credential: admin.credential.cert(serviceAccount)
});
And I already changed the access rule to the horribly insecure one:
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
Any idea what I should do? Or why it doesn't work in local development?
Since the documentation is an incomplete maze of stuff, I'm probably missing something.
Thanks to anyone who can help (at least to understand why)!
As the GitHub link suggests, the issue might be due to useFetchStreams being enabled by default in the new modular build (v9).
Hence, modify your code accordingly:
import { initializeApp } from 'firebase/app'
import { initializeFirestore } from 'firebase/firestore'
const firebaseApp = initializeApp(firebaseConfig) // your existing config object
const db = initializeFirestore(firebaseApp, { useFetchStreams: false })
Also, the error might arise not because the Android device or emulator you are debugging is offline, but because the databaseURL setting is missing from the object that defines the Firebase configuration. I think the same error might occur if that setting is incorrect.
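For reference, that setting lives in the configuration object passed to initializeApp. A minimal sketch (all values are placeholders, not from the original post):

import { initializeApp } from 'firebase/app'

const firebaseApp = initializeApp({
  apiKey: '...',
  authDomain: 'my_project.firebaseapp.com',
  projectId: 'my_project',
  databaseURL: 'https://my_project.firebaseio.com' // the setting mentioned above
})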
Also, I suggest you have a look at this Stack Overflow case, which discusses in detail all the probable reasons behind the 'Client is offline' issue.
However, if the above does not help, I suggest you enable debug logging by calling setLogLevel:
firebase.firestore.setLogLevel('debug');
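Note that the snippet above uses the v8 namespaced syntax; in the v9 modular API the equivalent would be along these lines (a sketch; setLogLevel is exported from the firestore package):

import { setLogLevel } from 'firebase/firestore'

// 'debug' logs every Firestore operation, which helps pinpoint
// where the connection attempt actually fails
setLogLevel('debug')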
It might take a while to explain what I'm trying to do, but please bear with me.
I have the following infrastructure specified:
I have a job called questo-server-deployment (I know, confusing, but this was the only way to access the deployment without using an ingress on minikube).
This is how the parts should talk to one another:
And here you can find the entire Kubernetes/Terraform config file for the above setup
I have two endpoints exposed from the Node.js app (questo-server-deployment).
I'm making the requests using 10.97.189.215, which is the questo-server-service external IP address (as you can see in the first picture).
The two endpoints are:
health, which simply returns 200 OK from the Node.js app; this part is fine and confirms the Node app is working as expected.
dynamodb, which should be able to send a request to the questo-dynamodb-deployment (pod) and get a response back, but it can't.
When I print the env vars, I get the following:
➜ kubectl -n minikube-local-ns exec questo-server-deployment--1-7ptnz -- printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=questo-server-deployment--1-7ptnz
DB_DOCKER_URL=questo-dynamodb-service
DB_REGION=local
DB_SECRET_ACCESS_KEY=local
DB_TABLE_NAME=Questo
DB_ACCESS_KEY=local
QUESTO_SERVER_SERVICE_PORT_4000_TCP=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_PORT=8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_PORT=8000
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
QUESTO_SERVER_SERVICE_SERVICE_HOST=10.97.189.215
QUESTO_SERVER_SERVICE_PORT=tcp://10.97.189.215:4000
QUESTO_SERVER_SERVICE_PORT_4000_TCP_PROTO=tcp
QUESTO_SERVER_SERVICE_PORT_4000_TCP_ADDR=10.97.189.215
KUBERNETES_PORT_443_TCP_PROTO=tcp
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP_ADDR=10.107.45.125
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
QUESTO_SERVER_SERVICE_SERVICE_PORT=4000
QUESTO_DYNAMODB_SERVICE_SERVICE_HOST=10.107.45.125
QUESTO_DYNAMODB_SERVICE_PORT=tcp://10.107.45.125:8000
KUBERNETES_SERVICE_PORT_HTTPS=443
NODE_VERSION=12.22.7
YARN_VERSION=1.22.15
HOME=/root
So it looks like the configuration is aware of the dynamodb address and port:
QUESTO_DYNAMODB_SERVICE_PORT_8000_TCP=tcp://10.107.45.125:8000
You'll also notice in the above env variables that I specified:
DB_DOCKER_URL=questo-dynamodb-service
This is supposed to be the questo-dynamodb-service url:port, which I'm assigning to the config here (in the ConfigMap) and which is then used here in the questo-server-deployment (job).
Also, when I check the logs with:
kubectl logs -f questo-server-deployment--1-7ptnz -n minikube-local-ns
I'm getting the following results:
This indicates that the app (Node.js) tried to connect to the DB (dynamodb), but on the wrong port: 443 instead of 8000.
The DB_DOCKER_URL should contain the full address (with port) of the questo-dynamodb-service.
What am I doing wrong here?
Edit ----
I've explicitly assigned port 8000 to DB_DOCKER_URL as suggested in the answer, but now I'm getting the following error:
It seems to me there is some kind of default behaviour in Kubernetes where it tries to communicate between pods using https?
Any ideas what needs to be done here?
How about specifying the port in the ConfigMap:
...
data = {
  DB_DOCKER_URL = "${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
Otherwise it may default to 443.
Answering my own question, in case anyone has an equally brilliant idea of running a local dynamodb in a minikube cluster.
The issue was not only with the port but also with the protocol (the AWS SDK assumes https, and therefore port 443, when the endpoint URL has no scheme), so the final answer is to modify the ConfigMap as follows:
data = {
  DB_DOCKER_URL = "http://${kubernetes_service.questo_dynamodb_service.metadata.0.name}:8000"
  ...
}
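On the application side, DB_DOCKER_URL can then be passed straight to the DynamoDB client as the endpoint. A minimal sketch, assuming the AWS SDK v2 for Node.js and the env var names from the printenv output above (the key in the get call is purely illustrative):

const AWS = require('aws-sdk');

// DB_DOCKER_URL now carries protocol and port,
// e.g. http://questo-dynamodb-service:8000
const dynamo = new AWS.DynamoDB.DocumentClient({
  endpoint: process.env.DB_DOCKER_URL,
  region: process.env.DB_REGION,
  accessKeyId: process.env.DB_ACCESS_KEY,
  secretAccessKey: process.env.DB_SECRET_ACCESS_KEY
});

dynamo.get({ TableName: process.env.DB_TABLE_NAME, Key: { id: '1' } })
  .promise()
  .then(res => console.log(res.Item))
  .catch(err => console.error(err));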
As a side note:
Also, when you are running scripts to create a dynamodb table in your amazon/dynamodb-local container, make sure you use the same region both when creating the table, like so:
#!/bin/bash
aws dynamodb create-table \
--cli-input-json file://questo_db_definition.json \
--endpoint-url http://questo-dynamodb-service:8000 \
--region local
and when querying the data.
Even though this is just a local copy, where you can put anything you want as the value of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and AWS_REGION too), the region has to match: DynamoDB Local namespaces tables by access key and region unless it is started with the -sharedDb flag.
If you query the DB with a different region than the one it was created with, you get the Cannot do operations on a non-existent table error.
I have already checked this post, but even though I tried that method it didn't work, so I'm opening a new issue.
I use an AWS EC2 server and deploy with an AWS pipeline, so when I push to the GitHub repository, it automatically builds and deploys to the production server.
At first it worked fine, and there were no errors in the console.
But one day an error began to occur. When I checked the console, there was the error below.
[Error Message in console]
set greenlockOptions.notify to override the default logger
certificate_order (more info available: account subject altnames challengeTypes)
Error cert_issue:
[acme-v2.js] authorizations were not fetched for 'mydomain.com':
{"type":"urn:ietf:params:acme:error:rateLimited","detail":"Error creating new order :: too many certificates already issued for exact set of domains: mydomain.com: see https://letsencrypt.org/docs/rate-limits/","status":429,"_identifiers":[{"type":"dns","value":"mydomain.com"}]}
Error: [acme-v2.js] authorizations were not fetched for 'mydomain.com':
{"type":"urn:ietf:params:acme:error:rateLimited","detail":"Error creating new order :: too many certificates already issued for exact set of domains: mydomain.com: see https://letsencrypt.org/docs/rate-limits/","status":429,"_identifiers":[{"type":"dns","value":"mydomain.com"}]}
at Object.E.NO_AUTHORIZATIONS (/home/project/build/node_modules/@root/acme/errors.js:75:9)
at /home/project/build/node_modules/@root/acme/acme.js:1198:11
at processTicksAndRejections (internal/process/task_queues.js:97:5)
Error cert_issue:
[acme-v2.js] authorizations were not fetched for 'mydomain.com':
In my opinion, there is a rate limit being hit because the certificate is reissued every time I push code, but I can't see where the problem occurs, even after checking the code.
My code structure is written as below and developed with Express.
[server.js]
"use strict";
const app = require("./app.js");
require("greenlock-express")
.init({
packageRoot: __dirname,
configDir: "./greenlock.d",
// contact for security and critical bug notices
maintainerEmail: process.env.EMAIL,
// whether or not to run at cloudscale
cluster: false
})
// Serves on 80 and 443
// Get's SSL certificates magically!
.serve(app);
[greenlock.d/config.json]
{ "sites": [{ "subject": "mydomain.com", "altnames": ["mydomain.com"] }] }
[.greenlockrc]
{"configDir":"./greenlock.d"}
[package.json (scripts.start line)]
"scripts": {
"start": "node server.js"
},
I am aware of the seven-day limit from Let's Encrypt (five duplicate certificates per week for the exact same set of domains). So I want to find a way to solve this problem.
In my express folder, I ran
sudo chmod 775 ./greenlock.d
then I deleted greenlock.d (one time) and ran npm start.
I haven't had the issue since.
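A plausible explanation, though the answer above does not spell it out: if each build replaces ./greenlock.d, the previously issued certificate is lost and greenlock orders a fresh one on every deploy, which quickly exhausts the duplicate-certificate limit. One way to guard against that is to keep configDir outside the build directory so redeploys don't wipe it; a sketch below (the /home/project/shared path is an assumption, not from the original post):

"use strict";
const app = require("./app.js");

require("greenlock-express")
  .init({
    packageRoot: __dirname,
    // keep issued certificates outside the build output,
    // so a redeploy doesn't wipe them and force re-issuing
    configDir: "/home/project/shared/greenlock.d",
    maintainerEmail: process.env.EMAIL,
    cluster: false
  })
  .serve(app);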
When using the elasticsearch client (from the elasticsearch npm package, version 15.4.1), the AWS Elasticsearch Service complains about an Invalid Host Header. This happens for every request, even though the requests themselves work.
I double-checked the configuration for initializing the elasticsearch client, and the "host" parameter is correctly formed:
let test = require('elasticsearch').Client({
host: 'search-xxx.us-west-1.es.amazonaws.com',
connectionClass: require('http-aws-es')
});
I expected to get clean ElasticsearchRequests without corresponding InvalidHostHeaderRequests (I can see these logs on the cluster health dashboard of the Amazon Elasticsearch Service).
Found the problem.
When using the elasticsearch library to connect to an AWS ES cluster, the previous syntax can lead to problems, so the best way to initialize the client is to specify the entire 'host' object as follows:
let test = require('elasticsearch').Client({
  host: {
    protocol: 'https',
    host: 'search-xxx.us-west-1.es.amazonaws.com',
    port: '443',
    path: '/'
  },
  connectionClass: require('http-aws-es')
});
The problem is probably that the AWS ES cluster expects the host field inside the host object, and omitting it leads to the "Invalid Host Header" issue. Hope this helps the community write better code.
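A quick way to verify the fix (a sketch; ping is part of the legacy elasticsearch client's API):

// should log 'connection OK', and no new InvalidHostHeaderRequests
// should appear in the cluster health dashboard afterwards
test.ping({ requestTimeout: 3000 }, function (err) {
  console.log(err ? 'cluster is down' : 'connection OK');
});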
See https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/16.x/host-reference.html for details.
I have an ASP.NET MVC 5 application running in the Azure German cloud as an Azure Web App (single instance, Standard S3 size).
I'm calling a non-Azure-hosted REST/SOAP service on a particular host, and the web requests either succeed promptly or time out after 21 or 42 seconds.
I've load-tested the requests, and the percentage of requests timing out is between 20 and 80.
One particularly remarkable property of the timeouts is that they occur after exactly 21 or 42 seconds (seriously, no reference to The Hitchhiker's Guide to the Galaxy intended).
Calling a different service from the web app works just fine, at least temporarily.
We've already checked the firewall of the non-Azure service: when the timeout occurs, not a single packet reaches the host.
This issue occurred once before, a year ago, and support was unable to tell what the cause was until the issue suddenly went away roughly two weeks after it first occurred, so the ticket was closed as having fixed itself; but now it's back.
The code uses https://github.com/canton7/RestEase (which uses HttpClient underneath) and looks like this:
[Header("Content-Type", "application/json")]
public interface IApi
{
[Post("/Login")]
Task<LoginToken> Login([Body]LoginRequest request);
}
private static Dictionary<string, IApi> ApiClientsByHost = new Dictionary<string, IApi>();
private IApi GetApiForHost(string host)
{
if (!ApiClientsByHost.TryGetValue(host, out var client))
{
lock (ApiClientsByHost)
{
if (!ApiClientsByHost.TryGetValue(host, out client))
{
ApiClientsByHost[host] = client = RestClient.For<IApi>(host);
}
}
}
return client;
}
var client = GetApiForHost("https://production/");
var loginToken = await client.Login(new LoginRequest { Username = username, Password = password });
By a different service, I mean using "https://testserver/" instead of "https://production/" (testserver is located in a different data center, with a different IP and all).
The API authentication passes a token via the query string, but the request times out before it can even get a token.
The code caches the IApi instances to avoid the TCP starvation problems caused by disposing HttpClients (though I've never actually run into port exhaustion).
Restarting the app does not resolve the issue, and the issue currently only occurs with production (but a year ago, when it occurred on production, we switched to testserver, which worked initially but after some time ran into the same problem).
EDIT: Found some explanation in the last answer as to where those magical 21 seconds are coming from.
EDIT: One workaround I've found is to set up an Azure VM with a proxy on it and configure defaultProxy to pass requests through that VM.
That's TCP retransmission timing out. The 21 seconds matches the classic Windows TCP connect timeout: an initial SYN plus retransmissions after 3, 6, and 12 seconds. It's odd that you are getting two different values, though; 42 seconds would correspond to two such connect attempts in a row.