I am doing research on client/server architecture and web applications. I've been reading different thoughts and suggestions around the web: some say that web applications are not considered client/server architecture apps, while others say the exact opposite. I was wondering which is actually right; an in-depth explanation would be highly appreciated.
It depends on the architecture/design of your web application(s). The rule of thumb would be: the client application has to be a separate piece of software from the (resource) server. There is no "one right way" to design a client/server architecture.
The most common implementations for web-based applications are MVC (Model View Controller) and SPAs (Single Page Applications).
MVC applications (like ASP.NET or ZendFramework) both render the client and handle the business logic in the backend, and are not based on a client/server model. (An action in a controller handles a request, loads some data and renders an HTML view as the response.)
But: if your MVC application acts as a proxy, internally calling a "remote" web service (via SOAP or whatever), it should be considered a client application.
As an example: a CRM system runs in an intranet and provides data services for desktop clients. You could write a web application that displays data from those services, which would then be another client application.
The SPA architecture requires separating the server from the frontend; the SPA is the frontend and therefore the client application. With this requirement you are basically already implementing a client/server architecture. For example, the frontend could be an AngularJS application and the backend a REST service (like ASP.NET WebAPI or Lumen).
Where you host the client application does not affect the client/server architecture, since the applications are still separated at execution time: the browser executes the JavaScript SPA on the visitor's device and calls the service in some data center.
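To make the separation concrete, here is a minimal sketch of the client side of such an architecture (TypeScript); the endpoint URL and the Customer shape are made up for illustration, not taken from any particular backend:

```typescript
// The SPA runs in the visitor's browser and talks to the backend only over HTTP.
interface Customer {
  id: number;
  name: string;
}

async function loadCustomers(): Promise<Customer[]> {
  // Hypothetical REST endpoint exposed by the backend service.
  const response = await fetch("https://api.example.com/customers", {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return (await response.json()) as Customer[];
}

loadCustomers().then((customers) => {
  // Rendering is entirely a client-side concern; the server only ships JSON.
  console.log(customers.map((c) => c.name).join(", "));
});
```

The backend never renders HTML for this client; it only answers data requests, which is exactly the separation that makes the SPA a client application.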
A web application is part of a client-server architecture. Any implementation always has two or more tiers, so two or more processes communicate with each other.
You may take a look at my old presentation, "Architecture of enterprise (automated) information system - Layers and levels", which shows different client-server architectures, including the web application case (the slide "Tiers are physical layers (examples)" shows examples).
I am in the process of moving my existing desktop application to the web. The GUI is developed using MFC/VC++ and the business logic is written in a COM-enabled VC++ DLL. This DLL has various responsibilities, and it is currently loaded into the desktop application's memory. I am now in the initial stage of moving this application to a modern web application. Below is the thought process for the design considered so far:
Converting the monolithic business logic into microservices.
Deploying the microservices on a server.
My VC++ COM business logic layer can interface with the microservices and get data.
Having an API gateway that communicates with the microservices and serves the web client.
In this process I want to reuse the VC++ COM business logic layer as much as possible. The current COM DLL does not support multithreading or multi-user sessions; this needs to be supported. The next step would be reusing the existing MFC GUI on the web.
What technologies can be considered for reusing my business logic?
For the most code-reuse I think you're on the right track.
You definitely want to separate out the business logic into its own service, and you can expose that via any communication protocol you prefer. The biggest downside to this is that every time functionality needs to be added, it needs to be added in two places, or in this case three: the business logic service, the MFC application, and the web server.
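As a rough sketch of that separation (the framework, route, and the calculateQuote stand-in for the ported COM logic are all illustrative assumptions):

```typescript
// Hypothetical business-logic service exposed over HTTP (TypeScript/Express
// used only for illustration; any protocol would do).
import express from "express";

// Placeholder standing in for rules currently implemented in the COM DLL.
function calculateQuote(productId: string, quantity: number): number {
  return quantity * 9.99;
}

const app = express();
app.use(express.json());

// Both the web client (via the API gateway) and a desktop client can call this
// endpoint instead of loading the DLL in-process.
app.post("/quotes", (req, res) => {
  const { productId, quantity } = req.body as { productId: string; quantity: number };
  res.json({ productId, total: calculateQuote(productId, quantity) });
});

app.listen(8080, () => console.log("business-logic service listening on :8080"));
```

Because the service owns its own process, the multithreading and multi-user concerns move behind one interface instead of living inside every client.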
As others have pointed out, MFC was really intended for desktop applications. A "modern web application" is one that is stateless and communicates via message-passing over http(s) with the web browser being the client. There's really no re-using the MFC GUI in the web. The architectures are just too disconnected.
Having said that (and I haven't looked into it too much), Blazor can compile .NET code to WebAssembly and run it in the browser. It has limited support for the full .NET Framework, and even less around the communication portion, but it might be able to compile your project. I'd bet against it, however.
I think you'd be better off just focusing on a decent web experience with a SPA and abandoning the MFC/Desktop portion. Maybe later you can circle back and build a GUI through MAUI or WPF that consumes the web API.
My company is big on SharePoint, but server-side controls have inherent performance problems. I want to move page-rendering responsibility to the client side with a concept similar to an SPA. What is the best framework or architectural style for this?
Single Page Applications are gaining immense popularity these days, mainly because of their fluidity and responsiveness. Clearly the framework and architectural style depend heavily upon the requirements.
Framework:-
There is a host of frameworks available that can be leveraged depending upon the complexity of the SPA you are planning - Backbone, Angular, Knockout, Ember, etc. I personally prefer the Angular and Knockout frameworks because of their simplicity and their data-binding and directive capabilities. Moreover, you can also efficiently handle REST calls to SharePoint using Breeze.js. Refer to this link for more details.
Architecture Styles:-
Typically SPAs use MVC or MVVM patterns to decouple the UI aspect from the business logic, but this again is requirement-driven. Regardless of the style/pattern, it is important to keep the code modular and to avoid exposing implementation details as much as possible.
Packaging:-
As far as SPAs for SharePoint are concerned, the best way to package and deploy them is in the form of SharePoint-hosted apps. SharePoint-hosted apps only allow client-side code and hence leverage the JavaScript object model and REST API for SharePoint, making them ideal for deploying an SPA on SharePoint.
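As a small illustration of what "client-side only" looks like in practice, a SharePoint-hosted app can read list data straight from the browser via the REST API; the list title below is only an example:

```typescript
// Minimal sketch of a client-side REST call from a SharePoint-hosted app page.
async function getTaskTitles(): Promise<string[]> {
  const url = "/_api/web/lists/getbytitle('Tasks')/items?$select=Title";
  const response = await fetch(url, {
    headers: { Accept: "application/json;odata=verbose" },
    credentials: "same-origin", // reuse the SharePoint authentication cookie
  });
  const data = await response.json();
  // With odata=verbose the items come back under d.results.
  return data.d.results.map((item: { Title: string }) => item.Title);
}

getTaskTitles().then((titles) => console.log(titles));
```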
References:-
https://www.pluralsight.com/courses/building-sharepoint-apps-spa-angularjs
I am implementing a product that will be accessible via web and mobile clients, and am doing thorough research to make sure that I have chosen a good set of tools before I begin. For front-end, I am using AngularJS (Angularjs + angular-ui on web, ionic + cordova on mobile), and because I want to have a single backend serving all types of clients, I plan on implementing a RESTful service (likely one that accepts and returns JSON data). I am leaning towards using Mongo, Node, and Express to create this RESTful API, but am open to suggestions on that front.
But the sticking point for me right now is this: certain parts of the application (including, for example, a live chat/messaging section) need to be real-time. I am aware of the various technologies and protocols for implementing real-time web services (webhooks, websockets, long polling, etc.) and the libraries and frameworks that implement them and expose that functionality (SockJS, Socket.io, etc.) and I want to be clear that I am not asking one of those "what is the best framework" types of questions.
My question is rather about the correct way to implement these two kinds of services side-by-side. Should I be serving the chat separately from the rest of the application? Or is there a clean way to integrate these two different protocols into the same application?
The Express framework is quite modular, so it can sit side by side with a websocket module if you wish. The most common reason for doing this is to share authentication routines across HTTP and websockets by using the same session store in both modules.
For example, you would authenticate a user over HTTP with the Express framework when they log in, which allows access to your chat application. From then on you take advantage of the fast, real-time websocket protocol, and in your server code you check the cookie that the client sends with the socket message and verify that the request corresponds to a previously authenticated session.
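A rough sketch of that pattern, assuming Express with express-session and Socket.IO v4 (the login route, secret, and event names are illustrative):

```typescript
import express from "express";
import session from "express-session";
import { createServer } from "http";
import { Server } from "socket.io";

// One session middleware instance, shared by HTTP routes and the websocket handshake.
const sessionMiddleware = session({
  secret: "replace-me",
  resave: false,
  saveUninitialized: false,
});

const app = express();
app.use(sessionMiddleware);

app.post("/login", (req, res) => {
  // ...verify credentials here, then mark the session as authenticated.
  (req.session as any).userId = "alice";
  res.sendStatus(204);
});

const httpServer = createServer(app);
const io = new Server(httpServer);

// Run the same session middleware during the websocket handshake so the socket
// sees the cookie that was set during the HTTP login.
io.engine.use(sessionMiddleware);

io.on("connection", (socket) => {
  const sess = (socket.request as any).session;
  if (!sess?.userId) {
    socket.disconnect(true); // reject sockets without an authenticated session
    return;
  }
  socket.on("chat", (msg) => io.emit("chat", { from: sess.userId, msg }));
});

httpServer.listen(3000);
```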
Many websites use websockets for chat or other push updates, and a separate RESTful API over AJAX, delivered to the same page. There are good reasons to leave the RESTful parts as they are, particularly if caching is an issue: websockets won't benefit from web caches outside your servers. Websockets are better suited to chat on any modern browser, trading a small keep-alive for a reconnecting long-poll. So two separate interfaces add a little complexity that you may benefit from when scaling and cost-per-user are considered.
If your app grows enough to require this scaling, you'll find this actually simplifies things greatly--clients in the same chat groups can map to the same server, and a load balancer can distribute RESTful calls appropriately.
If you are looking for one communication protocol to serve both needs (calling the server from the client, as well as pushing data from the server), you might have a look at WAMP.
WAMP is an open WebSocket subprotocol that provides two application messaging patterns in one unified protocol: Remote Procedure Calls + Publish & Subscribe.
If you want to dig a little deeper, this describes the why, the motivation and the design. WAMP has multiple implementations in different languages.
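To get a feel for how both patterns share one connection, here is a small sketch using the Autobahn|JS client; the router URL, realm, and topic/procedure names are made-up examples:

```typescript
import * as autobahn from "autobahn";

// Connect to a WAMP router (e.g. one running locally) on a chosen realm.
const connection = new autobahn.Connection({
  url: "ws://localhost:8080/ws",
  realm: "realm1",
});

connection.onopen = (session) => {
  // Pub/Sub: receive chat messages pushed through the router.
  session.subscribe("com.example.chat", (args) => {
    console.log("chat:", args && args[0]);
  });

  // RPC: call a procedure registered by some backend component.
  session.call("com.example.get_history", ["room-1"]).then((history) => {
    console.log("history:", history);
  });

  // Publishing uses the same connection as the calls above.
  session.publish("com.example.chat", ["hello from the browser"]);
};

connection.open();
```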
Now, if you want to stick to REST, then you cannot integrate push at the protocol level (since REST simply does not have that), but only at "framework level". You need a 2nd protocol. The options are:
WebSocket
Server Sent Events (SSE)
HTTP Long-Poll
SSE in a way could be a good complement to REST. However, it's unsupported on IE (not even IE11), and it's unclear if it ever will be.
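Where it is available, though, consuming SSE takes only a few lines on the client; the /api/events endpoint below is a hypothetical server route that keeps the response open and streams "data:" frames:

```typescript
// Sketch of receiving server-pushed events alongside a normal REST API.
const source = new EventSource("/api/events");

source.onmessage = (event: MessageEvent) => {
  const update = JSON.parse(event.data);
  console.log("push update:", update);
};

source.onerror = () => {
  // The browser reconnects automatically; this is just for visibility.
  console.warn("SSE connection lost, retrying...");
};
```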
WebSocket obviously works, but then why not have it all running over WebSocket? (This line of thinking leads to WAMP).
So IMO the natural complement for REST would be some HTTP long-poll based mechanism for simulating push. You can make HTTP long-poll work robustly; you'll just have to live with the inefficiencies and limitations of HTTP for use cases like this.
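A bare-bones version of such a long-poll loop on the client might look like this (the /api/poll endpoint and its cursor parameter are assumptions for illustration):

```typescript
// The server holds each request open until it has events to deliver or times out.
async function longPoll(): Promise<void> {
  let since = 0;
  while (true) {
    try {
      const response = await fetch(`/api/poll?since=${since}`);
      if (response.status === 200) {
        const { events, cursor } = await response.json();
        events.forEach((e: unknown) => console.log("event:", e));
        since = cursor; // remember where we left off
      }
      // A 204 (timeout with nothing new) simply loops around and polls again.
    } catch {
      // Back off briefly on network errors before retrying.
      await new Promise((resolve) => setTimeout(resolve, 2000));
    }
  }
}

longPoll();
```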
You could use a hosted real-time messaging (and even storage) service and integrate it into your frontend apps (web and mobile). These services leverage the websocket protocol and normally include HTTP Comet fallbacks.
The cool thing is that you don't need to manage the underlying infrastructure in terms of high availability and unlimited scalability, and can focus only on developing a great app.
I work for Realtime, so I'm a bit biased, but I think the Realtime Framework could help you. More at http://framework.realtime.co
I am starting a new project. This will be a Java EE web application. The application will consist of 3 parts, each one having different functionality, but they all belong to one application. I am thinking of using the following architecture:
There will be 4 separate projects (JSF web applications). The first one will be responsible for communication with database and will expose remote EJBs. Let's call the first project "DataLayerProject". The other 3 applications, which I have mentioned above, will consume the EJBs from the "DataLayerProject" in order to communicate with the database. They will represent the presentation layer of the application.
In my opinion this approach will allow the 3 parts to be maintained and developed independently of each other. It will also make the project more scalable (in case there is a need to add further sub-projects to the main application).
Is this a viable solution?
Should I use REST services instead of remote EJBs? (Sorry if I am misunderstanding something here.)
There will be a main page from which I will access the other 3 parts. The issue is that I need single sign-on across the applications, so that by logging in on the main page the user automatically gets logged in to the other 3 applications.
Should I use any portal solutions for making the separate web applications work together?
If all your applications are deployed on a single server, then you can use local interfaces for EJBs instead of remote ones and that would be the fastest implementation.
I'm reading about architecture and found the following expression:
For instance, in a 2-tier Windows Forms or ASP.NET application, the machine running the interface code must have credentials to access the database server. Switching to a 3-tier model, in which the data access code runs on an application server, means that the machine running the interface code no longer needs those credentials, making the system potentially more secure. (Rockford Lhotka)
I cannot see why I should use a 3-tier app.
In a three-tier application, the middle tier (the application server) controls all access to data, so it is possible to specify very fine-grained and specific access control rules (in code), much more than the database itself offers. Whatever an end-user wants to do has to go through your code (in a two-tier application, the end-user "directly" talks to the database).
OTOH, if you stop using the database access protections, securing the data is now entirely up to your application and coding errors can create huge security holes.
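To illustrate the first point, here is a tiny sketch of a middle tier enforcing a rule that a raw database connection could not easily express; the framework, route, and ownership check are illustrative assumptions:

```typescript
import express from "express";

const app = express();

// Stand-in for a data-access helper that uses credentials known only to this tier.
async function loadOrder(orderId: string): Promise<{ id: string; ownerId: string }> {
  return { id: orderId, ownerId: "alice" }; // placeholder for a real DB query
}

app.get("/orders/:id", async (req, res) => {
  // In practice the user identity would come from a verified session or token.
  const userId = req.header("X-User-Id");
  const order = await loadOrder(req.params.id);

  // Fine-grained, per-record rule enforced in code on the middle tier.
  if (order.ownerId !== userId) {
    res.sendStatus(403);
    return;
  }
  res.json(order);
});

app.listen(3000);
```

The end-user's machine never holds database credentials; it can only do what these routes allow.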