REST client proxy issue in VS Code - Node.js

When using the REST Client extension in VS Code, e.g.:

Send Request
GET http://localhost:4200/dashboard
###

I get the following error:

Connection is being rejected. The service isn’t running on the server,
or incorrect proxy settings in vscode, or a firewall is blocking requests.
Details: RequestError: connect ECONNREFUSED 127.0.0.1:4200

How can I change my http.proxy to 4200 instead of 127.0.0.1:4200?

The solution that works for me is to set the server hostname explicitly as a string (hostname: "127.0.0.1").
Deno/TS/Rest client Extension vscode
app.ts
import { Drash } from "https://deno.land/x/drash@v1.4.3/mod.ts";
import { Api } from "./api.ts";

const server = new Drash.Http.Server({
  response_output: "application/json",
  resources: [Api],
});

server.run({
  hostname: "127.0.0.1", // Just here!
  port: 1854,
});
console.log("Server running...");
api.ts
// @ts-ignore
import { Drash } from "https://deno.land/x/drash@v1.4.3/mod.ts";

export class Api extends Drash.Http.Resource {
  static paths = ["/"];

  public GET() {
    this.response.body = `Hello World! (on ${new Date()})`;
    return this.response;
  }
}
req.http
GET http://localhost:1854/ HTTP/1.1
OR
GET http://127.0.0.1:1854/ HTTP/1.1
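Why binding to "127.0.0.1" helps: "localhost" can resolve to the IPv6 loopback (::1) while the server listens only on the IPv4 one, or vice versa, and the REST Client then gets ECONNREFUSED. The tiny helper below is only an illustration of that normalization idea (it is not the extension's actual code):

```typescript
// Illustration only: map the ambiguous name "localhost" to the explicit
// IPv4 loopback address, so client and server agree on the interface.
const normalizeLoopback = (host: string): string =>
  host === "localhost" ? "127.0.0.1" : host;
```

With this mapping, a request aimed at "localhost" targets the same interface the server was explicitly bound to.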

Related

How to resolve socket.io Error "TransportError: xhr poll error"

I want to make a socket.io server using NestJS.
I'm testing communication between server and client on localhost.
However, the client (socket.io-client) throws the following error:

TransportError: xhr poll error

The server was running on localhost:3000, and the client was run via the node command (node index.js).
server (mock.gateway.ts)
import { SubscribeMessage, WebSocketGateway, MessageBody, ConnectedSocket, WebSocketServer } from '@nestjs/websockets';
import { Server, Socket } from 'socket.io';

@WebSocketGateway()
export class MockGateway {
  @WebSocketServer()
  server: Server;

  @SubscribeMessage('mock')
  mock(
    @MessageBody() data: string,
    @ConnectedSocket() socket: Socket
  ): void {
    console.log(data);
  }
}
server (main.ts)
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();
client
import { io } from 'socket.io-client';

const socket = io('ws://localhost:3000', {
  path: '/socket.io',
});
socket.emit('mock', 'mock');
I tried setting transports on the socket.io-client, but that produces a timeout error instead.

import { io } from 'socket.io-client';

const socket = io('ws://localhost:3000', {
  path: '/socket.io',
  transports: ['websocket'],
});
socket.emit('mock', 'mock');
Maybe there is a mistake in my module settings, so I used wscat as a debug tool. It throws the following error, and I'm left wondering whether wscat can connect to a Nest server at all.
$ wscat -c ws://localhost:3000/socket.io/\?transport=websocket
Connected (press CTRL+C to quit)
error: Invalid WebSocket frame: RSV1 must be clear
If anyone has any ideas, please comment.
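One possible cause worth checking (a guess, since no accepted answer is included here): NestJS 7's socket.io platform ships a v2 server, which speaks Engine.IO protocol 3, while socket.io-client v4 sends protocol 4. That version mismatch commonly surfaces as "xhr poll error", and a plain wscat connection then sees frames it cannot parse. The hypothetical helper below just makes the protocol number visible in the polling handshake URL that socket.io-client issues first:

```typescript
// Hypothetical illustration: the first request socket.io-client makes is an
// Engine.IO polling handshake. A v2 server expects EIO=3; a v4 client sends
// EIO=4 — comparing the two URLs shows where the mismatch lives.
const handshakeUrl = (base: string, eio: number): string =>
  `${base}/socket.io/?EIO=${eio}&transport=polling`;
```

If this is the cause, pinning the client to the same major version as the server (or upgrading both) would be the usual remedy.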

Websocket disconnecting immediately

I'm creating a test application to learn a little more about websockets; in this application I'm building a chat system between a client and a user.
I'm using:
SQL Server;
Nestjs;
Socket.io;
React;
In my backend I configured a WebSocket adapter to keep the default settings for all gateways:
import { IoAdapter } from '@nestjs/platform-socket.io';
import { ServerOptions } from 'socket.io';

export class SocketAdapter extends IoAdapter {
  createIOServer(
    port: number,
    options?: ServerOptions & {
      namespace?: string;
      server?: any;
    },
  ) {
    const server = super.createIOServer(port, {
      ...options,
      cors: {
        origin: '*',
      },
      pingInterval: 1000,
      pingTimeout: 10000,
      serveClient: true,
    } as ServerOptions);
    return server;
  }
}
On my front end, socket.io is configured as follows:
import socketIOClient from "socket.io-client";

export const socket = socketIOClient(import.meta.env.VITE_APP_URL_SOCKETIO, {
  auth: {
    token: ""
  },
  transports: ['websocket', 'polling'],
  forceNew: true,
  reconnection: true,
  reconnectionDelay: 1000,
  reconnectionDelayMax: 5000,
  reconnectionAttempts: 5
});
I tested several values for pingInterval, pingTimeout, reconnectionDelay, and forceNew, but they all lead to the same problem.
The problem is this: while the client is connected to the server, if the internet connection fluctuates for some reason (e.g. the cable is unplugged) and then comes back, the server should not disconnect the client's socket immediately; there should be a waiting period for it to reconnect.
Running the server on my own computer, I could see that the reconnection grace period works normally.
However, when deploying the server application to an instance on AWS, Heroku, or Railway, none of them keep the connection open long enough for the socket to reconnect: when the connection goes offline, the server immediately shows the socket as disconnected.
Because of this I thought it would be some wrong configuration in AWS with the Load Balancer, but the same thing happened when using Heroku.
I tried some settings I found in the following resources, but none of them worked either:
https://medium.com/containers-on-aws/scaling-a-realtime-chat-app-on-aws-using-socket-io-redis-and-aws-fargate-4ed63fb1b681
AWS application load balancer and socket.io
socket.io on aws application load balancer
What can be done so that the connection is kept open for a short time without disconnecting?
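A hedged observation, not a confirmed answer: managed front ends such as the AWS ALB (default idle timeout 60 s) and the Heroku router (55 s) kill TCP connections that stay silent longer than their idle window, so the server can see an abrupt disconnect instead of socket.io's own reconnection grace period. If that is what is happening here, the heartbeat has to fire well inside the proxy's idle timeout; the trivial check below just states that constraint:

```typescript
// Sketch: a socket.io heartbeat keeps a proxied connection alive only if a
// ping is sent strictly inside the proxy's idle-timeout window.
const keepsAlive = (pingIntervalMs: number, proxyIdleTimeoutMs: number): boolean =>
  pingIntervalMs < proxyIdleTimeoutMs;
```

The pingInterval of 1000 ms above already satisfies this for both defaults, so if the symptom persists the next things to inspect would be the load balancer's own timeout setting and whether it terminates the connection on network loss regardless of heartbeats.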

NestJS and IPFS - no connection on the server instance

I am struggling to bind an IPFS node to a NestJS instance on the server. Everything worked fine on my local machine, and on the server I have a working IPFS instance. I know it works because I can see connected peers, and I can see a file uploaded through the server console via the https://ipfs.io/ipfs gateway.
The IPFS service code is quite simple, and it does not produce any errors until I try to upload something.
import { Injectable } from '@nestjs/common';
import { create } from 'ipfs-http-client';

@Injectable()
export class IPFSClientService {
  private client = create({
    protocol: 'http',
    port: 5001
  });

  public async upload(file: Express.Multer.File): Promise<string> {
    const fileToAdd = { path: file.filename, content: file.buffer };
    try {
      const addedFile = await this.client.add(fileToAdd, { pin: true });
      return addedFile.path;
    } catch (err) {
      console.log('err', err);
    }
  }
}
Unfortunately, the error message is cryptic:
AbortController is not defined
at Client.fetch (/home/xxx_secret/node_modules/ipfs-utils/src/http.js:124:29)
at Client.fetch (/home/xxx_secret/node_modules/ipfs-http-client/cjs/src/lib/core.js:141:20)
at Client.post (/home/xxx_secret/node_modules/ipfs-utils/src/http.js:171:17)
at addAll (/home/xxx_secret/node_modules/ipfs-http-client/cjs/src/add-all.js:22:27)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at Object.last [as default] (/home/xxx_secret/node_modules/it-last/index.js:13:20)
at Object.add (/home/xxx_secret/node_modules/ipfs-http-client/cjs/src/add.js:18:14)
at IPFSClientService.upload (/home/xxx_secret/src/ipfs/ipfs-client.service.ts:20:25)
I would appreciate any help with this, as I am out of ideas on this issue.
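A likely explanation (an assumption, since no answer is included here): AbortController became a Node.js global only in Node 15, and the stack trace shows ipfs-utils reaching for it as a global. If the server runs an older Node than the local machine, the upload fails exactly like this. Upgrading Node is the clean fix; the stopgap below polyfills the global first (the "abort-controller" npm package is an assumption, not something from the original post):

```typescript
// Sketch, assuming the server runs Node < 15, where AbortController is not a
// global. Install the global before ipfs-http-client is used; on modern Node
// the guard is false and the native implementation is left untouched.
if (typeof (globalThis as any).AbortController === "undefined") {
  (globalThis as any).AbortController = require("abort-controller").AbortController;
}

// After the guard, this works on both old and new runtimes:
const controller = new AbortController();
controller.abort();
```

Checking `node --version` on the server against the local machine would confirm or rule this out quickly.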

AWS Lambda - Error: unable to get local issuer certificate

I am trying to connect to an Amazon PostgreSQL RDS instance from a Node.js Lambda.
The Lambda is in the same VPC as the RDS instance, and as far as I can tell the security groups are set up to give the Lambda access to RDS. The Lambda is called through API Gateway, and I'm using Knex.js as a query builder. When the Lambda attempts to connect to the database it throws an "unable to get local issuer certificate" error, even though the connection parameters are what I expect them to be.
I know this connection is possible, as I've already implemented it in a different environment without hitting the certificate issue. I've compared the two environments but cannot find any immediate differences.
The connection code looks like this:
import AWS from 'aws-sdk';
import { types } from 'pg';
import knex from 'knex';

const TIMESTAMP_OID = 1114;

// Example value string: "2018-10-04 12:30:21.199"
types.setTypeParser(TIMESTAMP_OID, (value) => value && new Date(`${value}+00`));

export default class Database {
  /**
   * Gets the connection information through AWS Secrets Manager
   */
  static getConnection = async () => {
    const client = new AWS.SecretsManager({
      region: '<region>',
    });
    if (process.env.databaseSecret == null) {
      throw new Error('Database secret not defined');
    }
    const response = await client
      .getSecretValue({ SecretId: process.env.databaseSecret })
      .promise();
    if (response.SecretString == undefined) {
      throw new Error('Cannot find secret string');
    }
    return JSON.parse(response.SecretString);
  };

  static knexConnection = knex({
    client: 'postgres',
    connection: async () => {
      const secret = await Database.getConnection();
      return {
        host: secret.host,
        port: secret.port,
        user: secret.username,
        password: secret.password,
        database: secret.dbname,
        ssl: true,
      };
    },
  });
}
Any guidance on how to solve this issue or even where to start looking would be greatly appreciated.
First of all, it is not a good idea to bypass SSL verification: doing so skips a critical step in the TLS handshake and can leave you vulnerable to various exploits.
What you can do is programmatically download the CA certificate bundle from Amazon and place it in the root directory of the Lambda, alongside the handler:

wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem -P path/to/handler

Note: you can do this in your buildspec.yaml or in the script that packages the zip file that gets uploaded to AWS.
Then set the ssl configuration option of your postgres client to the contents of the PEM file, like this:

import fs from 'fs';
import path from 'path';
import { Client } from 'pg';

const pgClient = new Client({
  user: 'postgres',
  host: 'rds-cluster.cluster-abc.us-west-2.rds.amazonaws.com',
  database: 'mydatabase',
  password: 'postgres',
  port: 5432,
  ssl: {
    ca: fs.readFileSync(path.resolve('rds-combined-ca-bundle.pem'), 'utf-8')
  }
});
I know this is old, but I just ran into this today. Running Node 10 with an older version of the pg library worked just fine; updating to Node 16 with pg 8.x caused this error (simplified):

UNABLE_TO_GET_ISSUER_CERT_LOCALLY

In the past you could indeed just set the ssl parameter to true or 'true' and it would work with the default AWS RDS certificate. Now it seems we need to at least tell node/pg to skip verification of the certificate (since it is self-signed).
Using ssl: 'no-verify' works: it enables SSL while telling pg to skip verification of the certificate chain.
UPDATE
For clarity, here's what the connection config would look like. With Knex, the same client info is passed down to pg, so a plain pg client connection should look similar.
static knexConnection = knex({
  client: 'postgres',
  connection: async () => {
    const secret = await Database.getConnection();
    return {
      host: secret.host,
      port: secret.port,
      user: secret.username,
      password: secret.password,
      database: secret.dbname,
      ssl: 'no-verify',
    };
  },
});
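For reference, 'no-verify' is a shorthand that pg expands internally to an ssl object with certificate verification disabled, so the two spellings below behave the same at the TLS layer (and both skip verification, with the security caveat noted earlier):

```typescript
// The string shorthand accepted by pg's connection config...
const sslShorthand = 'no-verify';

// ...and the explicit object form it is equivalent to.
const sslExplicit = { rejectUnauthorized: false };
```

The explicit object form is handy when you later want to upgrade to real verification by swapping in a `ca` field with the RDS bundle, as shown in the earlier answer.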

Setting up socket.io server on AWS EC2

I've got a problem running a server with the socket.io module on AWS EC2.

(error1 screenshot)

I tried editing the inbound rules, but it did not work.

(error2 screenshot)

The code works perfectly on localhost, but when I try it on AWS EC2 it does not work.

Frontend port: 80
Backend socket.io server: 8081
TCP socket server: 8124
FRONTEND CODE
import React, { Component } from "react";
import socketIOClient from "socket.io-client";
// Assumption: Marker comes from react-map-gl (the import is missing in the original snippet)
import { Marker } from "react-map-gl";

class Nokta extends Component {
  constructor() {
    super();
    this.state = {
      res_enlem: 41.018350,
      res_boylam: 28.978900, // assumed default longitude; missing from the original state
      endpoint: "http://172.31.36.93:8081",
    };
  }

  componentDidMount() {
    const { endpoint } = this.state;
    // Very simply connect to the socket
    const socket = socketIOClient(endpoint);
    socket.on("giden enlem", data => {
      this.setState({ res_enlem: data.enlem });
    });
  }

  render() {
    return (
      <Marker
        latitude={this.state.res_enlem}
        longitude={this.state.res_boylam}>
      </Marker>
    );
  }
}

export default Nokta;
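One thing that stands out (a guess at the root cause, not a confirmed answer): the endpoint "http://172.31.36.93:8081" is a private RFC 1918 VPC address, which a browser outside AWS cannot reach regardless of inbound rules. The client would need the instance's public IP or DNS name, with port 8081 opened in the security group. A small check for the private ranges:

```typescript
// Returns true for RFC 1918 private IPv4 addresses (10/8, 192.168/16,
// 172.16/12) — addresses only routable inside the VPC, not from a browser.
const isPrivateIp = (host: string): boolean =>
  /^10\./.test(host) ||
  /^192\.168\./.test(host) ||
  /^172\.(1[6-9]|2\d|3[01])\./.test(host);
```

Since 172.31.36.93 falls in the 172.16.0.0/12 range, a check like this flags the endpoint above as unreachable from the public internet.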
