Identify a terminal-services user by a simple HTTP request? (not authentication) - node.js

It's a little bit weird, but a 3rd-party application has an embedded IE with a configurable static URL. This IE calls my webserver (Node.js). The application has no parameter for the URL, so I have a simple Node.js app which starts with the login of the user and connects to the webserver (websocket). This app is able to see the dynamic parameter of the 3rd-party app by looking in the user registry. With both together I would have the information to bring the right data to the embedded IE. But how can the webserver relate the IE request to the websocket connection of the "registry app"? The websocket app knows the NT username and much more, but the IE only sends a plain HTTP request.
If it weren't terminal services, I would run the IE over a local webserver which would forward the request plus the registry data to the real webserver (like a proxy). But it is terminal services, so there is the port problem: several users share the machine, and only one process could bind a given local port. Or is there a way to bind a server to a kind of "virtual per-user IP" on the same port?
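Something like this is what I mean by the local proxy idea (just a sketch; the host, port, and header names are placeholders, and the registry lookup is stubbed out):

```js
// Per-user local proxy sketch: forward the embedded IE's request to the
// real webserver and attach the registry-derived user info as headers.
const http = require('http');

const USER = process.env.USERNAME;            // NT user of this session
const registryValue = 'read-from-registry';   // placeholder for the real lookup

http.createServer((clientReq, clientRes) => {
  const proxyReq = http.request({
    host: 'real-webserver.example',           // placeholder
    port: 80,
    path: clientReq.url,
    method: clientReq.method,
    headers: {
      ...clientReq.headers,
      'x-nt-user': USER,            // identify the terminal-services user
      'x-app-param': registryValue, // the 3rd-party app's dynamic parameter
    },
  }, (proxyRes) => {
    clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(clientRes);
  });
  clientReq.pipe(proxyReq);
}).listen(8245, '127.0.0.1'); // under terminal services, every user would fight over this port
```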
//Jonny

Related

Is it safe to use http (in nodejs) on a localhost service behind proxy_pass?

We have a web application which will use self-signed certificates, and after installing it on the server, the browser will open at "https://localhost" (for argument's sake, assume that we cannot use the actual machine name).
This will generate a browser error, because "localhost" is not the certificate's domain.
One option is to expose the application over HTTP only, on the loopback interface (localhost).
Our application's traffic should be encrypted whenever it passes outside of the server, so, the question:
Are there any security concerns around allowing HTTP access to our application on localhost (and only on localhost)? Does this expose the application to snooping from outside of the computer?
One can assume that if someone was able to access the machine's local user sessions, then we have bigger worries, and the lack of HTTPS would hence be insignificant.
There could be other processes sniffing the loopback interface. It could be a service running on your PC, sniffing the traffic and sending the data outside to a remote server.
You can still use HTTPS with a domain name, like https://www.myowndomain.com, and in the hosts file map this domain to 127.0.0.1.
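For example, something like this (a sketch: it assumes `127.0.0.1 www.myowndomain.com` in the hosts file, and key/cert files under these illustrative names):

```js
// Serve HTTPS on the loopback only, with a self-signed certificate
// issued for www.myowndomain.com (which the hosts file maps to 127.0.0.1).
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('myowndomain.key'),
  cert: fs.readFileSync('myowndomain.crt'),
};

https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello over TLS, without leaving the machine\n');
}).listen(443, '127.0.0.1'); // bound to loopback only
```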

Can we make our own web server to host websites and respond to HTTP requests?

A web server responds to an HTTP request and sends the client a set of HTML, CSS, and JS files.
After we build a website or a web application, we usually have to host it on a well-known web server like IIS or Apache to make it accessible all around the world (on the internet).
Can't we just make our own web server so that it can respond to all incoming HTTP requests that clients send to our computer, without having to use IIS?
As wazz and Lex say, you could build your own server that listens on a particular port and handles requests according to your requirements, but this carries performance and security issues.
If you still want to build it yourself, I suggest you refer to an existing server such as KestrelHttpServer.
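For illustration, a minimal hand-rolled server in Node.js could look like this (the port is arbitrary):

```js
// A hand-rolled web server that answers every HTTP request itself,
// with no IIS or Apache involved.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>Served without IIS or Apache</h1>');
}).listen(8080); // ports 80/443 would need elevated rights and an open firewall
```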

I run a timer for each HTTP request I get; I want to add a loadbalancer, but the second request may/will not be sent to the right backend server

I run a timer per user API request (over HTTP).
I want to grow horizontally (adding servers) but if I had some servers behind a loadbalancer the user may not be sent to the same backend server for the second request and my timing function wouldn't work.
If I could use cookies it would be easy with sticky sessions.
I can recognize the user using a parameter in the URL but I would prefer not to have to create my own loadbalancing scheme using Nginx or similar solutions.
If that helps:
- App is in Node.js
- Hosted at DigitalOcean.
Anyone struck by a great idea?
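For context, this is roughly the kind of Nginx scheme I would rather not maintain myself (the parameter name `user` and the backend addresses are placeholders):

```nginx
upstream timers {
    hash $arg_user consistent;  # the same ?user=... always maps to the same backend
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://timers;
    }
}
```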

how to use nginx 3rd party module when proxying connections to application servers

I developed an Nginx 3rd-party dynamic module and did the required configuration in nginx.conf. I am able to run the module and see it doing its processing. The module reads request headers, cookies, etc., executes some business logic, and modifies the response headers before the response is sent back to the client.
Problem: how do I use the nginx module when proxying connections to application servers?
I am using Nginx as the proxy server and Tomcat or Node as the application server, with my application hosted on the app server. I am able to route requests through both the web and app server and get a response back, but the module isn't getting invoked. I am not sure how to link/configure it so that I am able to intercept the request and modify the response headers as needed.
Flow: Browser <-> Web Server (module sits here) <-> Application Server
Has anybody explored this? If so, please help.
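For reference, the relevant part of my nginx.conf looks roughly like this (the module directive name and the upstream address are placeholders):

```nginx
http {
    server {
        listen 80;

        location / {
            my_module on;                      # hypothetical directive of the custom module
            # Suspicion: proxy_pass takes over content generation here, so a
            # content-phase handler would never run, while header/body filter
            # modules would still see the proxied response.
            proxy_pass http://127.0.0.1:8080;  # Tomcat/Node app server
        }
    }
}
```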

How to scrape socket.io updates from a third-party site?

I basically want to know if it's possible to use Socket.io on the server side only, with no client side. BUT I want to know if my server side can instead connect with a different site that I cannot use Socket.io to connect to.
Use PhantomJS to load the third-party site and then inject your own javascript into the page to catch events and send those events back to your own server.
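Roughly like this (a sketch: it assumes the page exposes its socket.io socket as a global named `socket`, and the URLs are placeholders):

```js
// PhantomJS script: load the third-party page, hook its socket.io
// events from inside the page, and POST them back to your own server.
var page = require('webpage').create();

page.open('http://third-party-site.example/', function (status) {
  if (status !== 'success') { phantom.exit(1); }
  page.evaluate(function () {
    // This function runs inside the loaded page.
    socket.on('update', function (data) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', 'http://my-server.example/events', true);
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify(data));
    });
  });
});
```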
socket.io is a two-way connection: Client <--> Server. You must have a socket.io endpoint at both ends to even establish a connection in the first place. And then, once you establish the connection, you must have agreed-upon messages that can be exchanged between the two ends for it to do anything useful.
It is not useful to have a server-side socket.io that doesn't actually connect to anything and nothing connects to it. It wouldn't be doing anything, just sitting there waiting for someone to connect to it.
It is possible to have two cooperating servers connect to one another with socket.io (one server just acts like a client in that case by initiating the connection to the other server). But, again both endpoints must participate in the connection for the connection to even be established and certainly for it to do anything useful.
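A sketch of that server-to-server case, with this server acting as the client (the host and the 'update' event name are placeholders and must match what the other end actually emits):

```js
// One server initiating a socket.io connection to another server.
const { io } = require('socket.io-client'); // v3+/v4 import style

const socket = io('http://other-server.example:3000');

socket.on('connect', () => {
  console.log('connected to the other server');
});

socket.on('update', (data) => {
  console.log('received update:', data); // process or forward it here
});
```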
If you just want to download the contents of a site for scraping purposes, then you would not use socket.io for that. You would just use the nodejs http module (or any of several other modules built on top of it). Your server would essentially pretend to be a browser. It would request a web page from any random web server using HTTP (not socket.io). That web server would return the web page via the normal HTTP request. Your receiving server can then do whatever it wants with that web page (scrape it, whatever).
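A sketch of that (the URL is a placeholder):

```js
// The server pretends to be a browser: fetch a page over plain HTTP,
// then the raw HTML is available for scraping.
const http = require('http');

http.get('http://example.com/', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    // 'body' now holds the raw HTML, ready to be scraped
    console.log(body.length, 'bytes downloaded');
  });
}).on('error', (err) => console.error(err));
```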
