Integrating ServiceStack into a Custom Server

I have a custom HTTP server written in C# that provides specialized application services in my application domain, such as rendering a dashboard with graphing. I am not using IIS; the server is completely written in C# using the HttpListener class to bind directly to the port(s) that it serves.
I would like to use ServiceStack to add in a RESTful API for pulling data to browser-side code to improve the user experience, using the same web server (so it can bind to the same HTTPS port, and share my security module).
All of the example code on the web that I have found assumes either that IIS owns the web port, or that ServiceStack owns the web port. In my use case, my custom application owns the web port and wants to delegate certain HTTP requests to ServiceStack.
Is there a simple way to pass a single HttpListenerContext off to ServiceStack to handle? I could not find documentation examples using Google, since all of the keywords seem to be too common.
In the context of my custom code, I will have already evaluated the URL, determined that the user has permission to access it, and just need to have the HttpListenerResponse generated.
Simple sample code or a pointer to a web article would be fantastic.

We've never tested doing this before, but the AppHostHttpListenerBase.ProcessRequest entry point accepts an HttpListenerContext.
You would still need to call new AppHost().Init(), but you shouldn't need to start it.
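A rough sketch of how that might be wired up, assuming a ServiceStack version where AppHostHttpListenerBase.ProcessRequest takes the raw HttpListenerContext; the AppHost name, the MyServices assembly, and the dispatch helper below are placeholders for your own code, not ServiceStack requirements:

    using System.Net;
    using Funq;
    using ServiceStack;   // namespaces differ across ServiceStack releases (v3 uses ServiceStack.WebHost.Endpoints)

    // Hypothetical AppHost: "Embedded API" and MyServices stand in for your own service assembly.
    public class AppHost : AppHostHttpListenerBase
    {
        public AppHost() : base("Embedded API", typeof(MyServices).Assembly) { }

        public override void Configure(Container container)
        {
            // register IoC dependencies, plugins, routes, etc.
        }

        // Surface the base ProcessRequest in case it is protected in the version you are using;
        // its exact signature varies between ServiceStack releases, so adjust as needed.
        public void Handle(HttpListenerContext context)
        {
            ProcessRequest(context);
        }
    }

    // Called from your existing HttpListener loop, after your own URL/permission checks.
    public static class ApiDispatcher
    {
        static readonly AppHost appHost = CreateAppHost();

        static AppHost CreateAppHost()
        {
            var host = new AppHost();
            host.Init();   // Init() is required, but Start() is not, since your server already owns the port
            return host;
        }

        public static void Dispatch(HttpListenerContext context)
        {
            // Hand the raw context to ServiceStack, which writes the HttpListenerResponse itself.
            appHost.Handle(context);
        }
    }

If the overload differs in your version, the same pattern should still apply: initialize the AppHost once, then hand each matching context to it from your own dispatch code.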

Microservices, API Gateways, and Frontends [closed]

I have recently begun exploring the concept of microservices and API gateways and am particularly confused about how frontend endpoints should be hosted.
If I have an API gateway that acts as the middleman for requests to all of my services, where exactly should the frontend be hosted? If I request /api/example, I understand that my API gateway should route that to the appropriate service and forward that service's response. I do not understand, however, how an API gateway should handle /home/ in a microservice context. In this case, we want to deliver the HTML/CSS/JavaScript corresponding to /home/ to the client making the GET request. Does this mean that we should have some sort of frontend service? Won't creating a service that just returns HTML/CSS/JS be redundant and add latency, since all we really need to do is immediately return the HTML/CSS/JS associated with our frontend?
An alternative I was considering is to have the API gateway itself provide endpoints that return the HTML/CSS/JS required for the client to render the frontend. In other words, the API gateway could just respond immediately with the HTML corresponding to /home/ when receiving a GET request for /home/, rather than calling a service. However, I read online that API gateways should not actually serve content themselves, but rather just proxy requests to services.
That is my main question: Where should frontend code go when your backend is built out using a microservice architecture?
I assume your frontend is a Single Page Application (SPA). An SPA consists of static content like HTML, CSS, images, fonts, etc. It is a perfect candidate to be deployed as a static website that gets its data from the backend using REST APIs.
In cloud environments like AWS and GCP it is recommended to host SPAs separately from the REST APIs. For example, on AWS the SPA can be deployed to Amazon S3, where it remains directly accessible without going through the API gateway.
The API gateway is supposed to only route REST API calls and handle cross-cutting concerns such as authentication. The problem with this approach is that you will get CORS errors when hitting the REST APIs from the frontend, because the SPA and the API gateway are hosted on different domains. To overcome this, CORS needs to be enabled at the API gateway.
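To make the CORS point concrete, this is roughly what "enabling CORS" amounts to, sketched here as a minimal ASP.NET Core gateway stub rather than the AWS API Gateway console setting the answer has in mind; the origin URL, policy name, and /api/example endpoint are all illustrative (assumes .NET 6+ minimal hosting with implicit usings):

    // Minimal "gateway" that only allows the SPA's origin to call the API cross-domain.
    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddCors(options =>
        options.AddPolicy("spa", policy =>
            policy.WithOrigins("https://spa.example.com")   // wherever the SPA is hosted (e.g. S3/CloudFront)
                  .AllowAnyHeader()
                  .AllowAnyMethod()));

    var app = builder.Build();

    app.UseCors("spa");

    // A real gateway would proxy this to a downstream service; a stub stands in here.
    app.MapGet("/api/example", () => Results.Json(new { ok = true }));

    app.Run();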
If you are looking for a simpler solution, the frontend can be served from a service through the API gateway. Since static content is only served once in an SPA, it should be okay to start with this approach for simpler applications.
https://aws.amazon.com/blogs/apn/how-to-integrate-rest-apis-with-single-page-apps-and-secure-them-using-auth0-part-1/
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
An API Gateway, as you have already mentioned, should act as a proxy/router with minimal logic. As the name says, its main purpose is to expose APIs, not GUI components/pages, to different types of clients, and to address some of the non-functional aspects, e.g. security. It can provide a high-level, use-case-driven API per requesting client (basically the BFF/Backend for Frontend approach).
When it comes to fitting the frontend/GUI into microservices, you can think of having a UI gateway/container that is backed by micro frontends. Also, refer to the micro frontends site for details.
So, following a microservices architecture for the frontend, your frontend code should reside in micro frontends alongside the other microservices.
Before answering your question let me go through the different rendering strategies:
Traditional Server Side Rendering
Quite uncommon nowadays. You don't need any API gateway; since everything is computed and rendered on the server, you can perform the necessary calls to other microservices there.
Pros: simplicity; instant interactivity for the user on page load; SEO
Cons: full reloads each time
Client Side Rendering
The most popular option in recent years. You provide a basic static HTML/CSS/JS bundle, perform requests to an API to retrieve data, and use some sort of template engine to render new pages and components.
Pros: no full reload; lazy loading; richer interactions; benefits from CDN
Cons: SEO; page not interactive on load; slower first load
Since the rendering happens in the browser, you don't need to serve the static HTML/CSS/JS files from your server. Instead, you should use a CDN to deliver them faster.
The other requests, to non-static resources, are performed against the API gateway, which is responsible for forwarding the request to other services (or for doing some sort of orchestration/aggregation).
Modern Server Side Rendering
This is gaining traction nowadays thanks to frameworks like Next.js. The page is initially rendered on the server using the same template engine that is going to be used in the browser, so you can send an interactive page faster while keeping the features and benefits of client-side rendering.
In this case all the static pages can be pre-rendered, cached and served through a CDN. For dynamic pages you can still send a partially cached/rendered page that will later fetch the additional necessary information from the API gateway.
TL;DR: You don't need to serve your static HTML/CSS/JS from your API gateway or your server. You should deliver them through a CDN to improve loading times.
Either way, the static files shouldn't be served through the API gateway, IMHO. You could make your /api/{resource} requests go through it and route all /{page} requests to the static resources.
Using API responses for a frontend app does not make sense if you are returning the WHOLE body of the frontend, as you point out.
However, you can initially load a frontend that contains, let's say, a header with a menu, a footer, and a main body section with a few elements, like articles.
Upon interacting with this frontend, an action can be triggered against the API (usually via JS Ajax calls), which requests specific portions of new data; once received, the JS updates only the relevant portions of the webpage rather than reloading or replacing (or refreshing) the WHOLE page.
This, if done correctly, can save a great deal of network traffic and makes the website feel more responsive, since it is not fully reloaded every time you need to visualize new data.
One simple example:
When you click a link in your menu to load a contact form, the API will return either the raw HTML for just the contact form, or (more usually) a JSON object/array that will be used to generate/replace a portion of your main body, which now becomes the contact form.
Where exactly should the frontend be hosted?
Where should frontend code go when your backend is built out using a microservice architecture?
Your front-end (web) application usually sits BEFORE the API Gateway. Requests for resources like HTML/CSS/JS are served right from the front-end application itself, hence no API Gateway involvement here whatsoever. If the page contains an (AJAX) call to a back-end (micro)service, it might (should) go through the API Gateway. From a hosting perspective, the front end is hosted as a separate web application.
how an API gateway should handle /home/ in a microservice context?
It won't (i.e. such requests are not intercepted by the API Gateway at all). Requests like /home are page requests served directly by the front-end application itself (which, of course, will have its own scaling mechanisms like clustering, caching, etc.).

Extending existing RESTful API using Node.js

I am currently running a web service on an Apache Tomcat servlet container. The web service has a base URL and exposes my applications data using the following structure:
http://[hostname]:[port]/path/to/root/[db_table_name]/[primary_key]?fields=name,...
An HTTP GET call to a URL like the one above would return a JSON formatted string.
Though the documentation for my application describes this as a RESTful API, I am confused because I was under the impression that true RESTful APIs do not use query strings. Rather, as I understand it, a true RESTful API provides a uniform structure in the form of resource endpoints.
My questions relate to how I can create a custom API to leverage the existing API using Node.js. I do not want to rewrite the application logic or database calls; I just need to know how I can create the API calls using Node.js (possibly using Express or some other framework) and let the existing API handle the request.
For example, I could write Node.js code using the Express module that defines several routes; these routes would handle client requests that in turn call the existing API (i.e. /path/to/root/[table_name]/[pk]...) and return the response.
If my Apache Tomcat server is listening on port 8080, how would I deploy my Node.js server to listen on another port and then redirect requests to the existing web service URL on port 8080?
Does the Express framework support explicitly specifying a root path (such as http://localhost:3000/path/to/root/[table_name]/[pk]) as the default root path?
Finally, I know REST APIs support CRUD operations. In the case of a POST method, does Express (or Node.js) have built-in logic to handle duplicate POST requests so that duplicate records don't get created in the database?
I'm reading through different articles and tutorials on REST, but I think I'm missing something. Any information or advice that can take me in the right direction would be much appreciated.
There's a lot to cover here, but I'll try to address your three questions. Since you have mentioned using Express, I will answer assuming that Express is the framework you are using.
If you are using Express, you can choose which port to listen to when you start the server, so you can choose any port that you like at that point (see here).
If you need to redirect a request you can do so easily with res.redirect() (see here). However, you could also call the other web service directly, retrieve the data and return it to the client instead of redirecting them, if you prefer. That would require some more code to make the HTTP requests in Node.js, though.
I am not 100% sure if this is the answer to your question, but there are ways to add a "base path" or namespace to all of your routes. I found this example where various namespaces are used but in your case you only need one which applies to all routes.
I don't think there is a built-in way to do this. The best I can think of is creating some kind of ID for each request, so that if the same request is sent twice you can use the ID to detect the duplicate, but it's far from ideal.
I would like to add that I'm not sure where the idea that query parameters are not RESTful comes from. I think query parameters are fine, because that is how you query! Otherwise you couldn't ask for the right data from your RESTful API. Let's say you have a /posts endpoint and you want to get the posts of a particular user (user ID = 1). The RESTful way to do this would be to issue a GET request to /posts?user=1.
Hope this helps!

Is there a default public Chromecast receiver we can use?

From Google's documentation we are asked to set up our own web server to host the receiver.html.
Is there a default public Chromecast receiver that can be used by anyone?
I just need the very basic receiver, the same as the receiver.html that is provided in the CastSample, but I don't want to run my own server to host this simple file.
Thanks!
There is another simple way to do this. There is a site called pastehtml.com, where you can host a simple HTML page for free and it stays up indefinitely. You can even edit it whenever you want. I just created mine, but I haven't yet submitted it to Google. Will keep you posted once I get an approval or rejection from Google on my device's whitelisting.
Chromecast receivers are tied to a URL, so I don't see how there could be a public one for a custom application, unless someone publishes a receiver that offers a service that your sender needs.
Bear in mind that the receiver needs to be publicly hosted on the web, but it does not need to be dynamically generated. You could, for example, upload your HTML/CSS/JS files to Amazon S3 and configure it to serve them very cheaply. There are probably other places where you can host files and have them served over HTTP for minimal cost.

How is a web application started? Where is the entry point (if there's one)?

I am using IIS to develop some web applications. I used to believe that every application should have an entry point. But it seems a web application doesn't have one.
I have read many books and articles addressing how to build an ASP.NET application under IIS, but they are just not addressing the most obvious and basic thing that I want to know.
So could anyone tell me how a web application is started? What's the difference between a traditional desktop application and a web application in terms of their working paradigm, such as their starting and terminating logic?
Many thanks.
Update - 1 - 23:14 2011/1/4
My current understanding is:
When a request arrives, the URL contained in the request is extracted by IIS. I guess IIS must maintain some kind of internal table that maps a URL to the corresponding physical directory on disk. Let's take the following URL as an example:
http://myhost/webapp/page1.aspx
With the help of the aforementioned internal table, IIS will locate the page1.aspx file on disk. Then this file is checked and the code-behind file is located. A proper page class instance is then constructed and the methods defined in the code-behind file are invoked in a pre-defined order. The output of this series of method invocations is the response sent to the client.
Update - 2 - 23:32 2011/1/4
The URL is nothing but an identifier that serves as an index into the aforementioned internal table. With this index, IIS (or any kind of web server technology) can find the physical location of the resource. Then, with some hint (such as the file extension, like *.aspx), the web server knows what handler (such as the ASP.NET ISAPI handler) should be used to process that resource. The chosen handler knows how to parse and execute the resource file.
So this also explains why a web server should be extensible.
It depends what language and framework you are using, but broadly there are a number of entry points that will be bound to HTTP requests (e.g. by URL). When the server receives a request that matches one of these bindings, the bound code is executed.
There may also be various filter chains and interceptors that are executed based on other conditions of the request. There will probably also be some set-up code that the server executes when it starts up. Ultimately, there is still a single entry-point - the main() function of the server - but from the web application's perspective it is the request bindings that matter.
Edit in response to question edits
I have never used IIS, but I would assume there is no "lookup table", but instead some lookup rules. I shall talk you through the invocation of a .jsp page on an Apache server, which should be basically the same process.
The webapp is written and placed in the file system - e.g. C:/www/mywebapp
The web server is given a configuration rule telling it that the URL path /webapp/ should be mapped to C:/www/mywebapp
The web server is also configured to recognise .jsp files as being JSP servlets
The web server receives a request for /webapp/page1.jsp, this is dispatched to a worker thread
The web server uses its mapping rules to locate C:/www/mywebapp/page1.jsp
The web server wraps the code in the JSP file in a class with a serveRequest(request, response) method and compiles it (if this has not already been done)
The web server calls the serveRequest function, which is now the entry point of the user code
When the user code is finished, the web server sends the response to the client, and the worker thread terminates
This is the most basic system - resource-based servlets (i.e. .jsp or .aspx files). The binding rules become much more complicated when using technologies like MVC frameworks, but the essential concepts are the same.
Similar to what OrangeDog mentioned in his answer, there is plenty that goes on when serving these pages before you even get to your code.
Not only in asp.net mvc, but in asp.net in general there are various pieces that come into play when you're executing a request.
There is code such as modules, handlers, etc. that again does processing before the request gets to the code of the page. Additionally, you can map the same page to handle different URLs.
The concept of a handler in ASP.NET is important, as there are various handlers responsible for processing requests that match extensions and/or HTTP verbs (GET, HEAD, POST). If you take a look into %systemroot%\Microsoft.NET\Framework64\v4.0.30319\Config\web.config, you can see the handlers section. You can also see the handlers in IIS (these can be changed per site).
For example, the HttpForbiddenHandler is one that just rejects the request. It is configured to be called for special files like the source files ("*.cs").
You can define your own handler, which is nothing more than a class that implements the IHttpHandler interface, so it has the ProcessRequest method and the IsReusable property. This is more similar to a classic CGI program, as the implementation is mainly a method that produces HTML or any other type of output based on the information in the request.
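For illustration, a bare-bones handler might look like the following; the class name and output are made up, and it would still need to be registered against a path/extension and verb in web.config or IIS:

    using System.Web;

    // Minimal custom handler, just to show the IHttpHandler shape (.NET Framework / System.Web).
    public class HelloHandler : IHttpHandler
    {
        // Called once per matching request; this is effectively the entry point of your code.
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("Hello from a custom handler at " + context.Request.Path);
        }

        // True tells ASP.NET it may reuse this instance for subsequent requests.
        public bool IsReusable
        {
            get { return true; }
        }
    }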
ASP.NET pages build on top of that, and have plenty of extra features meant to make it easier for you to develop pages. You implement a class that inherits from Page, and there are two code files associated with it (.aspx and .cs). The same can be said for ASP.NET MVC, but it is structured differently. There is much more to it; if you want to take advantage of it you'll need to learn about it.
The downside of those abstractions is that they make some developers lose track of the context they're in and of what's underneath. The context is still the same: you're producing an application that takes a request and produces an output. The difference is that there is plenty more code in place intended to make that easier.
In terms of a more detailed list of IIS's ASP.NET request lifecycle, there are quite a few stages in the HttpApplication pipeline. Fairly recently there was a good blog post that I thought summarized them very concisely and well: "HTTP Request Lifecycle Events in IIS Pipeline that every ASP.NET Developer Should Know" by Suprotim Agarwal.
For a more detailed explanation you should check out the MSDN article on the subject. This will also go into information on what happens before that pipeline.

Pitfalls of accessing a webserver on 127.0.0.1 from js with a public site

I'm thinking about exploring the idea of having our client software run as a service on a high port and listen for simple HTTP GET requests from 127.0.0.1. The theory is that I would be able to access this service via JS from a web page that is served from my site.
1) User installs client software that installs itself as a service and waits for authenticated requests on 127.0.0.1:8080
2) When the user hits my home page, JS on the page makes an XMLHttpRequest to 127.0.0.1:8080 and asks for the status
3) The home page then makes another JS request back to my web server, sending the status that it received.
This would allow my users to upload/download and edit files on a USB attached device in real-time from a browser. Polling could be the fallback method which is close to what we do today.
Has anyone done this and what potential pitfalls are there? Will this even work?
I can't see any potential pitfalls. I do have a couple of points however.
1/ You probably want to make sure your service only accepts incoming connections from the local machine (127.0.0.1); see the sketch after these points. Otherwise, anyone could look at your JavaScript and figure out that it's talking to [your-ip]:8080. They could then try that themselves from a remote site (security hole).
2/ I wouldn't use port 8080 as it's commonly used for other things (alternate HTTP servers, etc.). Make it configurable and choose a nice high random-type value.
3/ I'm not sure what you're trying to do with point 3, but I think you're trying to send the status back to the user. In which case, why wouldn't the JavaScript on your home page just get the status in a single session and output/update the HTML to be presented to the user? Your "another JS request back to my web server" doesn't make sense to me.
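To make point 1 concrete, here is a minimal sketch assuming the client software is .NET-based and uses HttpListener; the port, path, and status payload are placeholders. Binding the prefix to 127.0.0.1 rather than + or * means only local processes can connect:

    using System.Net;
    using System.Text;

    class LocalStatusService
    {
        static void Main()
        {
            int port = 18023;   // placeholder: read from configuration, pick a high, uncommon value

            var listener = new HttpListener();
            // Binding to 127.0.0.1 (not + or *) keeps remote hosts from connecting at all.
            listener.Prefixes.Add("http://127.0.0.1:" + port + "/status/");
            listener.Start();

            while (true)
            {
                HttpListenerContext context = listener.GetContext();

                // Illustrative status payload; a real service would also authenticate the request.
                byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"idle\"}");
                context.Response.ContentType = "application/json";
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }
    }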
You may not be able to make an XMLHttpRequest to 127.0.0.1, as XMLHttpRequest is usually limited to the same domain that the main content is being served from. I'm not sure if this restriction applies if the server is on the client's machine. That being said, you could still create a <script> tag whose src points to 127.0.0.1, and have the web server return some JavaScript to run. If you only need a simple response, this could work well.
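If you go the <script> tag route, the local service just returns JavaScript instead of plain JSON (a JSONP-style response). A rough sketch of that response, assuming the page defines a global callback named handleStatus; both the callback name and the payload are illustrative:

    using System.Net;
    using System.Text;

    static class JsonpResponder
    {
        // Writes JavaScript that invokes a callback defined by the calling page.
        // "handleStatus" is an assumed function name on that page, not anything standard.
        public static void WriteStatus(HttpListenerContext context)
        {
            byte[] script = Encoding.UTF8.GetBytes("handleStatus({\"status\":\"idle\"});");
            context.Response.ContentType = "application/javascript";
            context.Response.OutputStream.Write(script, 0, script.Length);
            context.Response.Close();
        }
    }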
I think it is much better for you to avoid implementing application logic in JavaScript and HTML. Once the user clicks a button on the web page, the JavaScript should send a request to your service and let it do the rest of the work.
You could have problems with step 1 (Client installs itself) depending on your target user base.
You will need a customised install for each supported environment (Win2K, Vista, Linux, Mac OS 9/10, etc.).
If your user is on a locked-down work PC this simply won't be allowed.
To some users this might look distressingly similar to a trojan unless you explicitly point out you will be installing software that runs as a service.
You didn't mention an uninstall procedure. Users resent "Adobe"-like software which installs itself and provides no sensible uninstall options.
Otherwise the approach is sound, and there are a couple of commercial products out there that use exactly this approach!
