I have implemented a reverse proxy with hyper-rustls and tokio. It works as intended when I use it against a non-https site, but when it hits a https host I get this from curl:
curl: (52) Empty reply from server
Structs
#[derive(Serialize, Deserialize, Debug, Clone)]
struct Hive {
    path: String,
    hive: Option<String>,
    host: Option<String>,
    route: Option<String>,
    java: Option<bool>,
}

impl Hive {
    fn new() -> Hive {
        Hive {
            path: String::new(),
            hive: Some(String::new()),
            host: Some(String::new()),
            route: Some(String::new()),
            java: Some(false),
        }
    }
}

#[derive(Serialize, Deserialize, Debug, Clone)]
struct Hives {
    local_hives: Vec<Hive>,
    remote_hives: Vec<Hive>,
    critical_hives: Vec<Hive>,
}

impl Hives {
    fn new() -> Hives {
        Hives {
            local_hives: Vec::new(),
            remote_hives: Vec::new(),
            critical_hives: Vec::new(),
        }
    }
}

#[derive(Serialize, Deserialize, Debug, Clone)]
struct ServerObject {
    https: Option<bool>,
    name: String,
    host: String,
    port: String,
    hives: Hives,
}

impl ServerObject {
    fn new() -> ServerObject {
        ServerObject {
            hives: Hives::new(),
            https: Some(false),
            name: String::new(),
            host: String::new(),
            port: String::new(),
        }
    }
}

#[derive(Serialize, Deserialize)]
struct List {
    servers: Vec<ServerObject>,
}
The code is pretty simple, but for some reason it does not work:
// current_server holds the JSON config with routes and the host URL
let connector = hyper_rustls::HttpsConnectorBuilder::new()
    .with_native_roots()
    .https_or_http()
    .enable_http1()
    .build();

let client: Client<_, hyper::Body> = Client::builder().build(connector);
let client: Arc<Client<_, hyper::Body>> = Arc::new(client);

let make_service = make_service_fn(move |_| {
    let current_server = current_server.clone();
    let client = Arc::clone(&client);
    async move {
        let client = Arc::clone(&client);
        Ok::<_, Infallible>(service_fn(move |req| {
            let client = Arc::clone(&client);
            handle(req, current_server.clone(), client)
        }))
    }
});
handle is a bit more complicated, but in essence this is the part that does the reverse proxying:
async fn handle(
    mut req: Request<Body>,
    server: Arc<ServerObject>,
    client: Arc<hyper::Client<hyper_rustls::HttpsConnector<hyper::client::HttpConnector>>>,
) -> Result<hyper::Response<hyper::Body>, Infallible> {
    let headers = req.headers_mut();
    headers.insert("cache-control", HeaderValue::from_static("no-store"));
    headers.insert("service-worker-allowed", HeaderValue::from_static("/"));

    let uri = req.uri();
    let uri_str = match uri.query() {
        None => format!("{}{}", server.host, uri.path()),
        Some(query) => format!("{}{}?{}", server.host, uri.path(), query),
    };
    *req.uri_mut() = uri_str.parse().unwrap();

    match client.request(req).await {
        Ok(response) => Ok(response),
        Err(err) => {
            println!("Error {}", err);
            Ok(hyper::Response::builder()
                .status(hyper::StatusCode::INTERNAL_SERVER_ERROR)
                .body(Body::empty())
                .unwrap())
        }
    }
}
Here are the request headers from the call:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate, br
Accept-Language: da,en;q=0.9,en-GB;q=0.8,en-US;q=0.7
Cache-Control: no-cache
Connection: keep-alive
Cookie: sys_sesid="2022-02-12-11.58.41.684888"
Host: localhost:4001
Pragma: no-cache
sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="98", "Microsoft Edge";v="98"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36 Edg/98.0.1108.43
Why does this happen and how can I fix it?
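One common cause of curl (52) in exactly this shape of proxy — a hypothesis, not something the snippets above confirm — is that the client's original `Host: localhost:4001` header is forwarded unchanged to the HTTPS upstream, which then rejects or drops the connection because the header does not match the TLS SNI name derived from the rewritten URI. A minimal sketch of the URI-rewrite logic together with the hypothesized fix:

```rust
// Sketch only: `build_upstream_uri` mirrors the `uri_str` logic in
// `handle` above; the commented `remove` call is the hypothesized change.
fn build_upstream_uri(host: &str, path: &str, query: Option<&str>) -> String {
    match query {
        None => format!("{}{}", host, path),
        Some(q) => format!("{}{}?{}", host, path, q),
    }
}

fn main() {
    // In handle(), before `client.request(req).await`, dropping the stale
    // header would let hyper derive `Host` from the rewritten URI:
    //
    //     req.headers_mut().remove(hyper::header::HOST);
    //
    let uri = build_upstream_uri("https://example.com", "/api/items", Some("page=2"));
    println!("{}", uri);
}
```

hyper's client fills in the `Host` header from the request URI when the header is absent, so removing it before forwarding keeps `Host` and SNI consistent.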
Introduction
I want to issue raw HTTPS requests through an HTTP proxy tunnel. I have had problems with malformed requests that the server could not understand.
These raw HTTPS requests involve first CONNECTing to example.com through the proxy, and then posting data to it.
var http = require("http");
var tls = require("tls");

let proxyhost, proxyport;

http.request({
  host: proxyhost,
  port: proxyport,
  method: 'CONNECT',
  path: `www.example.com:443`,
  headers: {}
}).on('connect', (res, socket) => {
  if (res.statusCode === 200) {
    const conn = tls.connect({
      socket: socket,
      rejectUnauthorized: false,
      servername: 'www.example.com'
    }, () => {
      conn.on('data', (data) => {
        console.log("chunk:", data.toString('ascii'));
      });
      const args = [
        'CONNECT www.example.com:443 HTTP/1.1',
        'Host: www.example.com:443',
        'Proxy-Connection: Keep-Alive',
        '',
        'POST / HTTP/1.1',
        'Host: www.example.com',
        'accept-encoding: deflate, gzip, br',
        'accept: application/json, text/javascript, */*; q=0.01',
        'content-type: application/text',
        'origin: https://www.example.com',
        'accept-language: en-US,en;q=0.9',
        'content-length: 10',
        'xxxxxxxxxx' // this is supposed to be the POST body
      ];
      // conn.write("GET / HTTP/1.1\r\nconnection: keep-alive\r\nhost: www.example.com\r\n\r\n", "ascii")
      conn.write(args.join("\r\n"));
    });
  } else {
    console.log('Could not connect to proxy!');
  }
}).end();
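For reference, once the `on('connect')` callback fires the tunnel is already established, so a CONNECT line should not be repeated inside the TLS stream, and in HTTP/1.1 the header block must be terminated by an empty line before the body. A sketch of a helper that builds a well-formed raw POST under those rules (the host, path, and body values are placeholders):

```javascript
// Builds a raw HTTP/1.1 POST request string to write to the TLS socket.
// The empty string before `body` produces the required blank line
// (\r\n\r\n) that separates the headers from the body.
function buildRawPost(host, path, body) {
  return [
    `POST ${path} HTTP/1.1`,
    `Host: ${host}`,
    'content-type: application/text',
    `content-length: ${Buffer.byteLength(body)}`,
    '',
    body,
  ].join('\r\n');
}

// Usage against the tunnel above:
// conn.write(buildRawPost('www.example.com', '/', 'xxxxxxxxxx'), 'ascii');
```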
I am trying to integrate the device-detector npm module in my application in order to detect browser details. For that I am using this module: npm i device-detector-js
I have integrated its code snippet as-is into my code.
Below is my code:
app.controller.ts
import { Controller, Get, Req } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(@Req() req): string {
    console.log(req.headers);
    return this.appService.getHello();
  }
}
app.service.ts
import { Inject, Injectable } from '@nestjs/common';
import DeviceDetector = require("device-detector-js");

@Injectable()
export class AppService {
  private readonly deviceDetector = new DeviceDetector();

  getHello(): string {
    const userAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81(Windows; Intel windows 8_8.1_10_11) Safari/537.36";
    const result = this.deviceDetector.parse(userAgent);
    console.log(JSON.stringify(result));
    return 'Hello World!';
  }
}
Output
[Nest] 23300 - 12/04/2022, 1:26:55 pm LOG [RouterExplorer] Mapped {/test, GET} route +2ms
[Nest] 23300 - 12/04/2022, 1:26:55 pm LOG [NestApplication] Nest application successfully started +4ms
{
  host: 'localhost:3000',
  connection: 'keep-alive',
  'cache-control': 'max-age=0',
  'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="98", "Google Chrome";v="98"',
  'sec-ch-ua-mobile': '?0',
  'sec-ch-ua-platform': '"Windows"',
  dnt: '1',
  'upgrade-insecure-requests': '1',
  'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36'
}
It's working, but it is not giving correct info: I am using Windows, but it's showing Macintosh. Why is this happening?
The service is parsing a hardcoded userAgent string (which contains "Macintosh") instead of the actual User-Agent of the incoming request. Just pass the headers from the controller into the service, something like this:
// controller
@Get()
getHello(@Req() req): string {
  console.log(req.headers);
  return this.appService.getHello(req.headers);
}

// service
getHello(headers: { 'user-agent': string }): string {
  const userAgent = headers['user-agent'];
  const result = this.deviceDetector.parse(userAgent);
  console.log(JSON.stringify(result));
  return 'Hello World!';
}
A year ago we implemented Apple Pay on the Web in our project and everything worked just fine. Now it is unstable, and a payment sometimes only succeeds on the third try. We are seeing an ECONNRESET error when requesting POST https://apple-pay-gateway.apple.com/paymentservices/startSession, and the error message “Payment not completed” on the client side:
Error: read ECONNRESET at TLSWrap.onStreamRead (internal/stream_base_commons.js:209:20) {
  errno: -104,
  code: 'ECONNRESET',
  config: {
    url: 'https://apple-pay-gateway.apple.com/paymentservices/startSession',
    method: 'post',
    data: '{"merchantIdentifier":"merchant.com.xxxxxxxx","displayName":"xxxxxxxxxx","initiative":"web","initiativeContext":"xxxxxxxx.xxx","domainName":"xxxxxxxx.xxx"}',
    headers: {
      Accept: 'application/json, text/plain, */*',
      'Content-Type': 'application/json',
      'User-Agent': 'axios/0.19.2',
      'Content-Length': 191
    }
  },
  ...
  response: undefined,
  isAxiosError: true,
Client side code:
applePaySession.onvalidatemerchant = async (event) => {
  try {
    const data = {
      url: event.validationURL,
      method: 'post',
      body: {
        merchantIdentifier,
        displayName,
        initiative: "web",
        initiativeContext: window.location.hostname
      },
      json: true,
    }
    const merchantSession = await this.$axios.$post('/apple_pay_session', data);
    if (merchantSession && merchantSession.merchantSessionIdentifier) {
      applePaySession.completeMerchantValidation(merchantSession)
    } else {
      applePaySession.abort();
    }
  } catch (error) {
    logReqError(error)
    applePaySession.abort();
  }
};
Server side:
const httpsAgent = new https.Agent({
  rejectUnauthorized: false,
  keepAlive: true,
  cert: fs.readFileSync(`uploads/apple_pay/${id}/certificate.pem`),
  key: fs.readFileSync(`uploads/apple_pay/${id}/private.key`)
});

const sessionRes = await axios({
  url: request.body.url,
  method: request.body.method.toLowerCase(),
  data: request.body.body,
  headers: {'Content-Type': 'application/json'},
  httpsAgent
})
We checked the status of the Merchant Domain (it is verified), and both the Apple Pay Payment Processing Certificate and the Apple Pay Merchant Identity Certificate are valid. I also checked the project code against the official documentation, as well as against other sources.
I hope someone has had a similar problem. Any code samples or guesses will be helpful.
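One guess, not confirmed by anything in the post: with keepAlive: true, the agent can reuse a pooled socket that Apple's gateway has already closed on its side, which surfaces as ECONNRESET on the next request. A bounded retry on exactly that error code is a common mitigation; here is a sketch where the hypothetical doPost callback stands in for the axios call above:

```javascript
// Retries the request a bounded number of times, but only for
// ECONNRESET; any other error is rethrown immediately.
async function postWithRetry(doPost, retries = 2) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doPost();
    } catch (err) {
      if (err.code !== 'ECONNRESET' || attempt >= retries) throw err;
    }
  }
}

// const sessionRes = await postWithRetry(() => axios({ /* config above */ }));
```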
I am quite desperate right now and am looking for any kind of help.
I am trying to set up a cache mechanism in my project using GraphQL and Redis.
This is how I configure the GraphQLModule:
GraphQLModule.forRoot({
  cache: new BaseRedisCache({
    client: new Redis({
      host: 'localhost',
      port: 6379,
      password: 'Zaq1xsw#',
    }),
    cacheControl: {
      defaultMaxAge: 10000,
    },
  }),
  plugins: [responseCachePlugin()],
  autoSchemaFile: path.resolve(__dirname, `../generated/schema.graphql`),
  installSubscriptionHandlers: true,
}),
This is how I’ve created queries and mutations:
@Resolver()
export class AuthResolver {
  constructor(
    private readonly prismaService: PrismaService,
    private readonly authService: AuthService,
  ) {}

  @Query(returns => String)
  async testowe(@Args(`input`) input: String, @Info() info: any) {
    info.cacheControl.setCacheHint({ maxAge: 5000, scope: 'PUBLIC' });
    return 'test';
  }
}
When I use GraphQL Playground and try this query, I get the response, and the header looks like this:
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
cache-control: max-age=5000, public
Content-Length: 28
ETag: W/"1c-2Df/lONPXcLzs1yVERHhOmONyns"
Date: Tue, 28 Dec 2021 21:35:11 GMT
Connection: keep-alive
Keep-Alive: timeout=5
As you can see, there is a “cache-control” part.
My problem is that I cannot see any keys or values stored in Redis. I am connected to the Redis server with the redis-cli tool and have tried the KEYS '*' command; there is nothing stored in Redis.
I also have a problem with more complex queries: I do not even get the “cache-control” header.
Do you have any idea what I am doing wrong here? Should I be able to see stored values in Redis with this approach?
Thank you in advance for any advice.
From what I can see, you don't tell your resolver to store its result in Redis. The Apollo Server docs are not super clear about this.
I did a research project around caching and GraphQL, so feel free to read my Medium post about it: https://medium.com/@niels.onderbeke.no/research-project-which-is-the-best-caching-strategy-with-graphql-for-a-big-relational-database-56fedb773b97
But to answer your question, I've implemented Redis with GraphQL this way:
Create a function that handles the caching, like so:
export const globalTTL: number = 90;

export const checkCache = async (
  redisClient: Redis,
  key: string,
  callback: Function,
  maxAge: number = globalTTL
): Promise<Object | Array<any> | number> => {
  return new Promise(async (resolve, reject) => {
    redisClient.get(key, async (err, data) => {
      if (err) return reject(err);
      if (data != null) {
        // logger.info("read from cache");
        return resolve(JSON.parse(data));
      } else {
        // logger.info("read from db");
        let newData = await callback();
        if (!newData) newData = null;
        redisClient.setex(key, maxAge, JSON.stringify(newData));
        resolve(newData);
      }
    });
  });
};
Then in your resolver, you can call this function like so:
@Query(() => [Post])
async PostsAll(@Ctx() ctx: any, @Info() info: any) {
  const posts = await checkCache(ctx.redisClient, "allposts", async () => {
    return await this.postService.all();
  });
  return posts;
}
You have to pass your Redis client into the context of GraphQL; that way you can access the client inside your resolver via ctx.redisClient.
This is how I've passed it:
const apolloServer = new ApolloServer({
  schema,
  context: ({ req, res }) => ({
    req,
    res,
    redisClient: new Redis({
      host: "redis",
      password: process.env.REDIS_PASSWORD,
    }),
  }),
});
This way you should be able to store your data in your Redis cache.
The info.cacheControl.setCacheHint({ maxAge: 5000, scope: 'PUBLIC' }) approach you are trying belongs to a different caching strategy within Apollo Server. Apollo is able to calculate the cache-control header from this information, but then you have to configure it like this:
const apolloServer = new ApolloServer({
  schema,
  plugins: [
    ApolloServerPluginCacheControl({
      // Cache everything for 1 hour by default.
      defaultMaxAge: 3600,
      // Send the `cache-control` response header.
      calculateHttpHeaders: true,
    }),
  ],
});
Note: You can set the default max-age to a value that suits your needs.
Hope this solves your problem!
You can find my implementation of it at my research repo: https://github.com/OnderbekeNiels/research-project-3mct/tree/redis-server-cache
I faced the same problem. Try the following:
GraphQLModule.forRoot({
  plugins: [
    responseCachePlugin({
      cache: new BaseRedisCache({
        client: new Redis({
          host: 'localhost',
          port: 6379,
          password: 'Zaq1xsw#',
        }),
      }),
    }),
  ],
  autoSchemaFile: path.resolve(__dirname, `../generated/schema.graphql`),
  installSubscriptionHandlers: true,
}),
This is peculiar. Socket.io version ~1.3
io.sockets.on('connection', function (socket) {
  console.log('Client connected from: ' + socket.handshake.address);
});
Returns
Client connected from: ::1
However
io.sockets.on('connection', function (socket) {
  console.log(socket.handshake);
  console.log('Client connected from: ' + socket.handshake.address);
});
Returns
{ headers:
   { host: 'localhost:8000',
     connection: 'keep-alive',
     origin: 'http://localhost:3000',
     'user-agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36',
     accept: '*/*',
     dnt: '1',
     referer: 'http://localhost:3000/dev.html',
     'accept-encoding': 'gzip, deflate, sdch',
     'accept-language': 'en-US;q=0.8,en;q=0.6,ko;q=0.4,de;q=0.2,ru;q=0.2,fr;q=0.2,ja;q=0.2,it;q=0.2',
     cookie: 'io=yhyuAabou3GufhzNAAAA' },
  time: 'Wed Jun 24 2015 22:50:19 GMT+0200 (Central European Daylight Time)',
  address: '::ffff:127.0.0.1',
  xdomain: true,
  secure: false,
  issued: 1435179019584,
  url: '/socket.io/?EIO=3&transport=polling&t=1435179017804-3',
  query: { EIO: '3', transport: 'polling', t: '1435179017804-3' } }
Client connected from: ::ffff:127.0.0.1
Why? Is there some ES6 proxy in the way? I thought maybe some weird JS conversion magic was in place, but it doesn't seem like it.
::ffff:127.0.0.1 is the IPv4-mapped IPv6 form of 127.0.0.1, and ::1 is the IPv6 loopback address; both refer to the local machine.
See Express.js req.ip is returning ::ffff:127.0.0.1 for a similar question.
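If a plain IPv4 string is wanted regardless of which form shows up, stripping the ::ffff: prefix is a small, safe normalization (a sketch, not part of Socket.io itself; the toIPv4 name is made up here):

```javascript
// Normalizes an IPv4-mapped IPv6 address (::ffff:a.b.c.d) to plain IPv4
// and maps the IPv6 loopback ::1 to 127.0.0.1; any other address is
// returned unchanged.
function toIPv4(address) {
  if (address === '::1') return '127.0.0.1';
  return address.startsWith('::ffff:') ? address.slice(7) : address;
}

// toIPv4(socket.handshake.address)
```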