Hope you're doing well! I'm looking for some help and would be really grateful for any hints.
I'm setting up a developer portal based on [Backstage.io][1], following the Getting Started docs.
I've added @backstage/plugin-auth-backend to the project root and configured it as described [here][2].
As soon as I try to load the Backstage web app, I get the following error:
{"error":{"name":"AuthenticationError","message":"Refresh failed; caused by InputError: Missing session cookie"
It's followed by a list of .ts files with errors, including some scripts related to node_modules/@backstage/plugin-auth-backend. I really don't know what I'm missing.
Here are some key points in my app-config.yaml:
app:
  title: Scaffolded Backstage App
  baseUrl: http://0.0.0.0:3000

backend:
  baseUrl: http://<publicIP>:7007
  listen:
    port: 7007
  csp:
    default-src: ["'self'", 'http:', 'https:']
    connect-src: ["'self'", 'http:', 'https:']
    script-src: ["'self'", 'http:', 'https:', "'unsafe-inline'", 'http://www.github.com', 'http://api.github.com']
  cors:
    origin: http://*:3000
    methods: [GET, POST, PUT, DELETE]
    credentials: true
    headers: X-Requested-With

auth:
  environment: development
  providers:
    github:
      development:
        clientId: ${GITHUB_CLIENTID}
        clientSecret: ${GITHUB_CLIENT_SECRET}
I have exported those environment variables in the current terminal and ran yarn dev.
So when I try to log in to Backstage using GitHub, I get a 401 Unauthorized error response for this request URL: http://<publicIP>:7007/api/auth/github/refresh?optional&env=development
[1]: https://backstage.io/
[2]: https://backstage.io/docs/auth/add-auth-provider
I have a skeleton Google App Engine app running on the URL provided by GAE in the format "myproject.appspot.com". I have two services: one running a backend Node/Express server, and another for a React front-end. The backend service has the URL "api-dot-myproject.appspot.com" and the front-end "client-dot-myproject.appspot.com". I have independently deployed each service along with the root-level dispatch.yaml, and when I go to these GAE-provided URLs, my deployed services work as intended. However, I've also tried using a custom domain from Google Domains, which is causing me trouble. I've followed the instructions provided by Google: I first bought the domain from Google, then added it to Google App Engine. My GAE app then provided me with A, AAAA, and CNAME records to add to my Google Domains "Custom resource records", which I did.
I then waited over 24 hours before trying to access mydomain.app (the domain name I purchased). However, attempting to access the page results in a 404 error.
I've been trying to figure this out for a while now, and I've searched through every previous stackoverflow question on this topic but wasn't able to resolve it. Any help would be greatly appreciated.
dispatch.yaml:

dispatch:
  - url: "*client-dot-myproject.appspot.com/*"
    service: client
  - url: "*mydomain.app/*"
    service: client
  - url: "*api-dot-myproject.appspot.com/*"
    service: api
  - url: "*/*"
    service: client
api.yaml:

runtime: nodejs12
service: api

handlers:
  - url: /api/
    script: auto
  - url: /
    script: auto
client.yaml:

runtime: nodejs12
service: client

handlers:
  - url: /static/js/(.*)
    static_files: build/static/js/\1
    upload: build/static/js/(.*)
  - url: /static/css/(.*)
    static_files: build/static/css/\1
    upload: build/static/css/(.*)
  - url: /static/media/(.*)
    static_files: build/static/media/\1
    upload: build/static/media/(.*)
  - url: /(.*\.(json|ico|png))$
    static_files: build/\1
    upload: build/.*\.(json|ico|png)$
  - url: /
    static_files: build/index.html
    upload: build/index.html
    http_headers:
      Access-Control-Allow-Origin: "*"
  - url: /.*
    static_files: build/index.html
    upload: build/index.html
    http_headers:
      Access-Control-Allow-Origin: "*"
Truly one of the dumbest mistakes - I had switched over gcloud projects at some point and forgot to reinitialize with gcloud init.
I have a NodeJs application running inside a Kubernetes cluster (I am using microk8s). I've also followed the official steps to setup Elasticsearch on Kubernetes.
Issue
But I am unable to connect to the Elasticsearch cluster. I get this error:
ConnectionError: self signed certificate in certificate chain
This is a code snippet of my connection:
const client = new elasticsearch.Client({
  node: process.env.elasticsearch_node,
  // https://elasticsearch-es-http.default.svc.cluster.local:9200
});
Minimal reproduction
I've created a minimal reproduction of this issue here: https://github.com/flolu/elasticsearch-k8s-connection. (Setup instructions are in the README)
Basically, everything works fine when running Elasticsearch inside Docker compose, but I can't connect when running inside Kubernetes.
The reason for this is probably because I didn't setup TLS certificates correctly, but I haven't found any information about it. Do I configure it inside my NodeJs application when creating the ES client or at a cluster level?
The solution is to configure SSL and the elastic user when creating the Client:

const client = new elasticsearch.Client({
  node: process.env.elasticsearch_node,
  auth: {
    username: "elastic",
    password: process.env.elasticsearch_password || "changeme",
  },
  ssl: {
    ca: process.env.elasticsearch_certificate,
    rejectUnauthorized: false,
  },
});
The password and the certificate are provided by Elastic. They are stored inside Kubernetes secrets.
So I've just passed the password and the certificate into my NodeJs service via environment variables like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search-deployment
spec:
  selector:
    matchLabels:
      app: search
  replicas: 1
  template:
    metadata:
      labels:
        app: search
    spec:
      containers:
        - name: search
          image: search:placeholder_name
          imagePullPolicy: Always
          env:
            - name: elasticsearch_node
              value: https://elasticsearch-es-http.default.svc.cluster.local:9200
            - name: elasticsearch_certificate
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-es-http-ca-internal
                  key: tls.crt
            - name: elasticsearch_password
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-es-elastic-user
                  key: elastic
I'd like to build on top of @Florian Ludewig's answer on 2 points, since I struggled to get this working myself.
1. Don't turn off rejectUnauthorized
const client = new elasticsearch.Client({
  node: 'node httpS url here',
  ssl: {
    ca: process.env.elasticsearch_certificate,
    rejectUnauthorized: true, // <-- this is important
  },
});
If you set rejectUnauthorized to false, the underlying Node.js HTTPS agent will bypass the certificate check. Of course, if you are confident in the security of your cluster, you can disable it, but that makes providing the CA cert pointless in the first place.
2. Make sure you provide a PEM CA cert - not the base64 encoded version
Perhaps you are providing the CA cert from your own config file without using Kubernetes' secret injection - possibly because the ES client application is in a different namespace and thus cannot access the CA secret.
In this case you may find it useful to store the CA cert as a base64 string, in a config file, but you should not forget to provide a decoded string to your client:
const config = loadConfigFromFile('config.yml');
const caCertificate = Buffer.from(config.base64CaCertificate, 'base64').toString();

const client = new elasticsearch.Client({
  node: 'node httpS url here',
  ssl: {
    ca: caCertificate,
    rejectUnauthorized: true,
  },
});
This is an SSL problem. Please try disabling SSL verification, or obtain a certificate from a CA. Cloudflare SSL is also a good choice.
Here's how to disable SSL verification in Node.js:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
In order to resolve your issue, you will need to trust the CA; you should be able to do so using the following. I also found a related question here.
If you wish to import the CA as a env variable as discussed you might be able to do something like:
- name: NODE_EXTRA_CA_CERTS
  valueFrom:
    secretKeyRef:
      name: elasticsearch-ca
      key: tls.crt
Note: I've not tried the above; an alternative would be to mount the secret as a volume and import it that way :)
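A sketch of that volume-based alternative (the secret name elasticsearch-ca, the container name search, and the mount path /certs are assumptions for illustration). Since NODE_EXTRA_CA_CERTS expects a file path, mounting the secret as a file and pointing the variable at it could look like:

```yaml
# Pod spec fragment: mount the CA secret as a file and reference its path
spec:
  containers:
    - name: search
      env:
        - name: NODE_EXTRA_CA_CERTS
          value: /certs/tls.crt   # path of the mounted CA file
      volumeMounts:
        - name: es-ca
          mountPath: /certs
          readOnly: true
  volumes:
    - name: es-ca
      secret:
        secretName: elasticsearch-ca
```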
Please note that if you wish to disable TLS on your Elasticsearch deployment, you can do so as follows:
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true
Please note that disabling TLS isn't recommended.
I ran into this error with Minikube using the elasticsearch JS client:
getaddrinfo ENOTFOUND <CUSTOM_NAME>-es-default-0.<CUSTOM_NAME>-es-default.default.svc {"name":"ConnectionError","meta":{"body":null,"statusCode":null,"headers":null,"meta":{"context":null,"request":{"params":{"method":"HEAD","path":"/xxx","body":null,"querystring":"","headers":{"user-agent":"elasticsearch-js/7.12.0 (linux 5.11.16-arch1-1-x64; Node.js v16.1.0)","x-elastic-client-meta":"es=7.12.0,js=16.1.0,t=7.12.0,hc=16.1.0"},"timeout":30000},"options":{},"id":2}
The following solution doesn't require disabling TLS verification on the client side for a self-signed cert.
es-stack yaml:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: <CUSTOM_NAME>
  namespace: default
spec:
  version: 7.12.1
  auth:
    roles:
      - secretName: roles
    fileRealm:
      - secretName: filerealm
  http:
    tls:
      selfSignedCertificate:
        subjectAltNames:
          - ip: 127.0.0.1
Port forwarding to access the ES cluster:
kubectl port-forward service/<CUSTOM_NAME>-es-http 9200:9200
Settings in ES JS client:
const client = new Client({
  node: `https://127.0.0.1:9200`,
  sniffOnStart: true,
  ConnectionPool: MyConnectionPool,
  auth: {
    username,
    password,
  },
  ssl: {
    ca: fs.readFileSync("/tmp/ca.pem"),
    rejectUnauthorized: true,
  },
});
I use the ca.crt key in <CUSTOM_NAME>-es-http-certs-public for /tmp/ca.pem. You can also get rid of this line if you set the environment variable NODE_EXTRA_CA_CERTS to /tmp/ca.pem before you start Node.js, like this:
NODE_EXTRA_CA_CERTS=/tmp/ca.pem node index.js
If you aren't sure which CA cert you need, just ssh into the ES container and check the ca.crt file in config/http-certs/.
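For reference, one way to pull that CA out of the secret into /tmp/ca.pem without entering the container (assuming the <CUSTOM_NAME>-es-http-certs-public secret from above) is:

```shell
# Extract ca.crt from the public certs secret and decode it to a PEM file
kubectl get secret <CUSTOM_NAME>-es-http-certs-public \
  -o jsonpath="{.data['ca\.crt']}" | base64 -d > /tmp/ca.pem
```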
This is the most important step. Append this line to /etc/hosts on your local machine:
127.0.0.1 <CUSTOM_NAME>-es-default-0.<CUSTOM_NAME>-es-default.default.svc
You should be good to go. I came up with the idea after reading this tutorial about Ingress:
https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
My configurations are as below:
apiEndpoints:
  api:
    host: '*'
    paths: '/ip'
  approval-engine:
    host: '*'
    paths: '/app/*'

serviceEndpoints:
  httpbin:
    url: 'https://httpbin.org'
  approval-engine:
    url: 'http://localhost:8001/'
With the proxy configured as:

- proxy:
    - action:
        serviceEndpoint: approval-engine
        ignorePath: false
        prependPath: false
        autoRewrite: true
        changeOrigin: true
When I make a request to http://localhost:8080/app/category, the request is routed to localhost:8001/app/category.
My question is: can we route the request to http://localhost:8001/category instead? I want to drop the /app/ part of the path in the proxy.
To accomplish this, you'll need to use the express-gateway rewrite plugin.
You can use the eg CLI to install the plugin.
eg plugin install express-gateway-plugin-rewrite
Make sure rewrite is included in the gateway config's policies whitelist.
In the pipeline that's handling the request, you can use the rewrite plugin like so:
policies:
  - rewrite:
      - condition:
          name: regexpmatch
          match: ^/app/(.*)$
        action:
          rewrite: /$1
This should remove /app from the path before the request is routed to the Service Endpoint.
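Outside the gateway, you can sanity-check that regexpmatch pattern with plain JavaScript; the replace below mirrors the transformation the policy applies to the path:

```javascript
// Mirror of the rewrite policy: strip the leading /app segment
const pattern = /^\/app\/(.*)$/;
const incoming = '/app/category';
const rewritten = incoming.replace(pattern, '/$1');
console.log(rewritten); // → '/category'
```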
I recently updated from Node 0.8.x to 0.10.5. Suddenly, my https.request calls started failing. I believe 0.8.x was not verifying certificates, but 0.10.5 is. We have some in-house certificate authorities set up to verify internal HTTPS traffic. I would like to show the Node.js https.request client these certificates. How do I do this?
Currently, I'm getting:
Problem with request: UNABLE_TO_VERIFY_LEAF_SIGNATURE
I have tried:
var https_request_options = {
  host: 'myhost.com',
  path: '/thepath',
  port: 443,
  method: 'POST',
  headers: {
    'Content-type': "application/x-www-form-urlencoded",
    'Content-length': thebody.length,
  },
  // rejectUnauthorized: false,
  ca: [fs.readFileSync('/whatever/path/ca_certs.pem')]
};

var authRequest = https.request(https_request_options, function(response) {
....
The calls work if I set rejectUnauthorized: false, but I'd like to start taking advantage of the improved security in Node 0.10.5.
I believe my .pem file is good because it works with Python httplib2 (${python}/Lib/site-packages/httplib2/cacerts.txt) and cURL ("curl-ca-bundle.crt").
Well, maybe this helps: /whatever/path/ must be a path relative to your project folder.