When I run a Firebase function locally on the emulator, it works as expected on Windows, but the same code doesn't work on macOS (I tried on two Macs, an M1 and an Intel one, and neither works).
What I want to do is axios.post() to a specific API endpoint with:
query strings as authorization keys
access through a Squid proxy (SSL enabled) so that the source IP address is fixed
My Firebase Functions code is here:
import functions from 'firebase-functions'
import https from 'https'
import axios from 'axios'
import { createHash } from 'crypto'

export default functions.region('asia-northeast1').https.onRequest(async (req, res) => {
  const sharedKey = "abcdefghijklmn"
  const randomKey = "opqrstuvwxyz"
  const sha256Hex = createHash("sha256").update(sharedKey + randomKey).digest("hex")
  const base64Encoded = Buffer.from(sha256Hex).toString('base64')
  await axios.post(
    `https://my.target.api.com`,
    {},
    {
      params: {
        auth_cd: base64Encoded,
        auth_key: randomKey,
        companyId: "myCompany"
      },
      // Below is just to avoid axios's error "Hostname/IP does not match certificate's altnames"
      httpsAgent: new https.Agent({
        rejectUnauthorized: false,
      }),
      proxy: {
        protocol: "https",
        host: "my.squid.proxy.com",
        port: 8080,
        auth: {
          username: "username",
          password: "password"
        }
      }
    }
  )
    .then(result => {
      console.log(result)
    })
    .catch(error => {
      console.log(error)
    })
  res.status(200).end()
})
And this gives me the error "ERROR_LOGIN_AUTH_FAILED" on Mac, which is reproducible on Windows too, but only when one or more of the three params (auth_cd/auth_key/companyId) are wrong.
Also, I made the development environment the same on both Mac and Windows:
Node.js: 14.19.3 (actually, there is another problem: if it's v16, the code above doesn't work on Windows either...)
Java (used by firebase-functions): 11.0.16
all the npm package versions
axios/firebase-functions/firebase-tools are the latest
The rest of the code is the same too (freshly cloned from the remote git repo)
I suspect this is triggered by some difference in axios's internal handling (character encoding?), but I have no clues...
Any advice would be appreciated.
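To test the character-encoding suspicion directly, here is a minimal check (a sketch using the same placeholder keys as the code above) that can be run on both platforms; sha256 and base64 are byte-deterministic, so if the printed values match, the difference is more likely in how axios reaches the HTTPS proxy than in string handling:

import { createHash } from 'crypto'

const sharedKey = "abcdefghijklmn"
const randomKey = "opqrstuvwxyz"
const sha256Hex = createHash("sha256").update(sharedKey + randomKey).digest("hex")
const base64Encoded = Buffer.from(sha256Hex).toString('base64')

// Compare this output between Mac and Windows.
console.log('auth_cd :', base64Encoded)
console.log('auth_key:', randomKey)

// Roughly how the query string ends up encoded; diff this across platforms too.
const qs = new URLSearchParams({
  auth_cd: base64Encoded,
  auth_key: randomKey,
  companyId: "myCompany",
}).toString()
console.log('query   :', qs)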
Related
I am trying to connect to the Jira REST API using Deno. My library of choice is jira.js. I've tried both installing the node_modules locally and referencing the modules through the library link. To no avail: Deno gives me the same type of error either way.
This is my code.
//import { Version2Client } from "./node_modules/jira.js/src/index.ts";
import { Version2Client } from "https://deno.land/x/jira@v2.10.4/src/index.ts";
const client = new Version2Client({
  host: 'https://FFFFFF.atlassian.net',
  authentication: {
    basic: {
      email: 'FFFFFFF@gmail.com',
      apiToken: 'FFFFFFFF',
    },
  },
});
async function main() {
  const projects = await client.projects.getAllProjects();
  console.log(projects);
}

main();
jira.js does not support Deno directly, but you can run it with Deno's npm compatibility mode. For that, you'll need to replace your import with the npm: specifier npm:jira.js:
import { Version2Client } from 'npm:jira.js';

const client = new Version2Client({
  host: 'https://FFFFFF.atlassian.net',
  authentication: {
    basic: {
      email: 'FFFFFFF@gmail.com',
      apiToken: 'FFFFFFFF',
    },
  },
});
// ...
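Note that npm: specifiers require a reasonably recent Deno (npm compatibility became stable in Deno 1.28), and the script still needs the usual permissions when run, e.g. deno run --allow-net main.ts (the file name here is just a placeholder).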
I'm mocking the next/router dependency in my Jest + React Testing Library tests as I always have:
import * as nextRouter from 'next/router';

export const routerData = {
  pathname: '/users/create',
  route: '/users/create',
  query: { },
  asPath: '/users/create',
  isFallback: false,
  basePath: '',
  isReady: true,
  isPreview: false,
  isLocaleDomain: false,
  events: {},
};

// mock router
jest.mock('next/router');
nextRouter.useRouter.mockImplementation(() => (routerData));

describe('a component that requires next/router', () => ... );
This had been working correctly, but after updating to Next.js 12.2.0 I get this warning:
No router instance found.
You should only use "next/router" on the client side of your app.
This warning makes all my tests with the mocked router fail.
Any ideas on how to fix this?
Well, it appears that this is not related to 12.2.0. Somehow my last version of Next.js, 12.0.0, wasn't throwing this error, but other older versions did.
Thanks to bistacos for the response here.
const useRouter = jest.spyOn(require('next/router'), 'useRouter');

useRouter.mockImplementation(() => ({
  pathname: '/',
  ...moreRouterData
}));
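If the mock gets wiped by Jest's mock resets between tests, one variation (a sketch, not from the original answer; the router fields shown are illustrative) is to re-apply the implementation in a beforeEach:

const useRouter = jest.spyOn(require('next/router'), 'useRouter');

beforeEach(() => {
  // Re-apply before every test so each one sees a router instance,
  // even when resetMocks/restoreMocks is enabled in the Jest config.
  useRouter.mockImplementation(() => ({
    pathname: '/users/create',
    route: '/users/create',
    query: {},
    asPath: '/users/create',
    isReady: true,
    push: jest.fn(),
  }));
});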
I'm trying to connect a Node.js app (written in TS) to MongoDB at Yandex Cloud. I have successfully connected there via mongosh:
mongosh "mongodb://<user>:<pass>#<host>:<port>/?replicaSet=<rs>&authSource=<db>&ssl=true" \
--tls --tlsCAFile ./YandexInternalRootCA.crt
where YandexInternalRootCA.crt is the downloaded certificate. Now I'm trying to do the same via MongoClient like this (the code is adapted from their examples; node v15.14.0, mongodb ^4.1.2):
import { MongoClient, Db } from 'mongodb'
import fs from 'fs'

const connectionString = '<same connection string as the above argument of mongosh>'

const options = {
  useNewUrlParser: true,
  replSet: {
    sslCA: fs.readFileSync('./YandexInternalRootCA.crt')
  },
  //tlsInsecure: true,
}

const getStorage = async (): Promise<Db> => {
  // @ts-ignore here is due to some typing problem: once you use 2 arguments
  // in .connect, TS shows that it promises void (which is not true)
  // @ts-ignore
  return (await MongoClient.connect(connectionString, options)).db()
}
Unexpectedly, this results in
MongooseServerSelectionError: self signed certificate in certificate chain
I've tried adding tlsInsecure where it is shown commented out (following a suggestion for Mongoose), but it doesn't make a difference. What can be the cause, and how can I fix it?
PS I've also tried various things like
const getStorage = async (): Promise<Db> => {
  return (await MongoClient.connect(config.mongo.connectionUri, {
    tls: true,
    //sslCA: fs.readFileSync('./YandexInternalRootCA.crt'),
    tlsCertificateFile: './YandexInternalRootCA.crt',
    tlsInsecure: true,
  })).db()
}
which still gives the same result.
If you use the mongodb npm package version 4 or higher, you should pass the TLS options like this:
const options = {
  tls: true,
  tlsCAFile: './YandexInternalRootCA.crt'
}
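Put together, the connection code might look like this (a sketch assuming the same connection string that worked with mongosh):

import { MongoClient, Db } from 'mongodb'

const connectionString = '<same connection string as the above argument of mongosh>'

const getStorage = async (): Promise<Db> => {
  // tls/tlsCAFile replace the pre-v4 replSet.sslCA form
  const client = await MongoClient.connect(connectionString, {
    tls: true,
    tlsCAFile: './YandexInternalRootCA.crt',
  })
  return client.db()
}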
I use this script to connect Node.js to Azure PostgreSQL.
But the SSL verification of our firewall blocks the connection, so in the past I needed to use a proxy. Where in the code can I add the proxy settings, such as host and port?
That means that when I start the code, VS Code should connect through the proxy to PostgreSQL.
const pg = require('pg');

const config = {
  host: '<your-db-server-name>.postgres.database.azure.com',
  // Do not hard code your username and password.
  // Consider using Node environment variables.
  user: '<your-db-username>',
  password: '<your-password>',
  database: '<name-of-database>',
  port: 5432,
  ssl: true
};

const client = new pg.Client(config);

client.connect(err => {
  if (err) throw err;
  else { queryDatabase(); }
});

function queryDatabase() {
  console.log(`Running query to PostgreSQL server: ${config.host}`);
  const query = 'SELECT * FROM inventory;';
  client.query(query)
    .then(res => {
      const rows = res.rows;
      rows.map(row => {
        console.log(`Read: ${JSON.stringify(row)}`);
      });
      process.exit();
    })
    .catch(err => {
      console.log(err);
    });
}
To configure a proxy for Visual Studio Code:
Edit the settings.json file.
Depending on your platform, the user settings file is located here:
Windows: %APPDATA%\Code\User\settings.json
macOS: $HOME/Library/Application Support/Code/User/settings.json
Linux: $HOME/.config/Code/User/settings.json
Modify and add the lines below to configure your proxy:
"http.proxy": "http://user:pass#proxy.com:portnumber",
"https.proxy": "http://user:pass#proxy.com:portnumber",
"http.proxyStrictSSL": false
If your proxy doesn't require authentication, you can simply use:
"http.proxy": "http://proxy.com:portnumber",
"https.proxy": "http://proxy.com:portnumber"
"http.proxyStrictSSL": false
Restart VS Code
The documentation covering the settings and the schema of the settings.json file is available here for reference.
I am trying to connect to an Amazon PostgreSQL RDS instance using a Node.js Lambda.
The Lambda is in the same VPC as the RDS instance and, as far as I can tell, the security groups are set up to give the Lambda access to the RDS. The Lambda is called through API Gateway, and I'm using Knex.js as a query builder. When the Lambda attempts to connect to the database, it throws an "unable to get local issuer certificate" error, even though the connection parameters are what I expect them to be.
I know this connection is possible, as I've already implemented it in a different environment without receiving the certificate issue. I've compared the two environments but cannot find any immediate differences.
The connection code looks like this:
import AWS from 'aws-sdk';
import { types } from 'pg';
import { Moment } from 'moment';
import knex from 'knex';

const TIMESTAMP_OID = 1114;

// Example value string: "2018-10-04 12:30:21.199"
types.setTypeParser(TIMESTAMP_OID, (value) => value && new Date(`${value}+00`));

export default class Database {
  /**
   * Gets the connection information through AWS Secrets Manager
   */
  static getConnection = async () => {
    const client = new AWS.SecretsManager({
      region: '<region>',
    });
    if (process.env.databaseSecret == null) {
      throw 'Database secret not defined';
    }
    const response = await client
      .getSecretValue({ SecretId: process.env.databaseSecret })
      .promise();
    if (response.SecretString == undefined) {
      throw 'Cannot find secret string';
    }
    return JSON.parse(response.SecretString);
  };

  static knexConnection = knex({
    client: 'postgres',
    connection: async () => {
      const secret = await Database.getConnection();
      return {
        host: secret.host,
        port: secret.port,
        user: secret.username,
        password: secret.password,
        database: secret.dbname,
        ssl: true,
      };
    },
  });
}
Any guidance on how to solve this issue or even where to start looking would be greatly appreciated.
First of all, it is not a good idea to bypass SSL verification; doing so can make you vulnerable to various exploits, and it skips a critical step in the TLS handshake.
What you can do is programmatically download the CA certificate chain bundle from Amazon and place it in the root directory of the lambda alongside the handler.
wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem -P path/to/handler
Note: you can do this in your buildspec.yaml or in the script that packages the zip file that gets uploaded to AWS.
Then set the ssl configuration option to the contents of the PEM file in your Postgres client configuration, like this:
const fs = require('fs');
const path = require('path');
const postgres = require('pg');

const pgClient = new postgres.Client({
  user: 'postgres',
  host: 'rds-cluster.cluster-abc.us-west-2.rds.amazonaws.com',
  database: 'mydatabase',
  password: 'postgres',
  port: 5432,
  ssl: {
    ca: fs.readFileSync(path.resolve('rds-combined-ca-bundle.pem'), "utf-8")
  }
})
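Usage is then the usual pg connect/query flow, e.g. (a minimal sketch; the query is illustrative):

// inside the async lambda handler:
await pgClient.connect();
const result = await pgClient.query('SELECT NOW()');
console.log(result.rows);
await pgClient.end();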
I know this is old, but I just ran into this today. Running with Node 10 and an older version of the pg library worked just fine. Updating to Node 16 with pg version 8.x caused this error (simplified):
UNABLE_TO_GET_ISSUER_CERT_LOCALLY
In the past, you could indeed just set the ssl parameter to true or 'true' and it would work with the default AWS RDS certificate. Now, it seems we need to at least tell node/pg to skip the cert verification (since the cert is self-generated).
Using ssl: 'no-verify' works: it enables SSL and tells pg to skip verification of the cert chain.
UPDATE
For clarity, here's what the connection config would look like. With Knex, the same client info is passed to pg, so it should look similar to a pg client connection.
static knexConnection = knex({
  client: 'postgres',
  connection: async () => {
    const secret = await Database.getConnection();
    return {
      host: secret.host,
      port: secret.port,
      user: secret.username,
      password: secret.password,
      database: secret.dbname,
      ssl: 'no-verify',
    };
  },
});
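If the 'no-verify' string form doesn't work in your version, the explicit object form that pg documents expresses the same intent and can be swapped in (treating the two as equivalent is my assumption, based on pg's documented TLS options):

// Hedged alternative to ssl: 'no-verify': pg's documented object form,
// which enables TLS but skips certificate chain verification
ssl: { rejectUnauthorized: false },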