WCF and client communication on a self-hosted WCF service

I am new to WCF services. I have been working with WCF for over two months now and love its capabilities. I am using a self-hosted WCF service in a Windows Service. The binding is netTCP because the client and service are on the same machine. My communication is duplex and I am using a WCF session. With these features, one of the design needs for my application is that the UI should always be connected to the service - I am using a separate thread in my UI to continually poll the connection status and re-create and open the channel in case it goes into a faulted state. Since I have async callbacks from the service, the client should always be connected. Here are a couple of questions:
Is it OK to use the self-hosting technique, knowing that the client and service are on the same machine? I used WCF for ease of inter-process communication.
Does it make sense to keep this keep-alive thread on the client, or should I be using some other technique?
I want to get better at using and configuring WCF. Is there a good book or online reading material on self-hosted WCF services?
Please advise.
Thanks,
Subbu

I think it's absolutely fine to use self-hosting with WCF. I've implemented many services that are hosted in a Windows Service for example.
I'm assuming that you're talking about client and server being hosted in different processes on the same machine? If so, then ideally you should use binary over named pipes in your bindings.
If client and server are physically in the same process, then you might consider using something like Roman Kiss's Null Transport to reduce the serialization overhead. His CodeProject article can be found here: http://www.codeproject.com/KB/WCF/NullTransportForWCF.aspx
To answer point 2, I've suggested an alternative approach in my answer to another Stack Overflow question: WCF net.tcp server disconnects - how to handle properly on client side?
Hope this helps.
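On the keep-alive point, the monitor-and-reconnect pattern itself is language-agnostic: a background task periodically checks the channel and rebuilds it once it has faulted. A rough sketch of that loop, shown here in Java with a hypothetical Connection type standing in for the WCF duplex proxy (the .NET channel types are not shown in this thread, so treat this purely as an illustration of the pattern):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical connection handle standing in for the WCF duplex channel.
interface Connection {
    boolean isFaulted();
    void close();
}

public class KeepAlive {
    private volatile Connection connection;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public KeepAlive(Connection initial) {
        this.connection = initial;
    }

    // Check the channel every few seconds; if it has faulted,
    // dispose of it and create a fresh one so callbacks keep flowing.
    public void start() {
        scheduler.scheduleAtFixedRate(() -> {
            if (connection == null || connection.isFaulted()) {
                reconnect();
            }
        }, 5, 5, TimeUnit.SECONDS);
    }

    private void reconnect() {
        try {
            if (connection != null) {
                connection.close();
            }
        } catch (Exception ignored) {
            // a faulted channel often throws on close; ignore and rebuild
        }
        connection = createChannel();
    }

    private Connection createChannel() {
        // Placeholder: the real client would construct and open a new
        // duplex proxy here and re-register its callback instance.
        return new Connection() {
            public boolean isFaulted() { return false; }
            public void close() { }
        };
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}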

Related

Azure ServiceBus vs ServiceRemoting, HTTP and WCF

The documentation of Service Fabric recommends service remoting, ICommunicationClient or WcfCommunicationClient to implement the communication between the microservices.
Service Bus, which I have always used for inter-service communication, is not even mentioned. Why?
I think you misinterpreted the docs. It does not recommend any particular protocol or service (the word "recommends" does not even appear on the page). What it does do is list the built-in communication options and the situations in which each is appropriate.
There is nothing that prevents you from using Service Bus for inter-service communication. In fact, if you Google around you will find some projects, like this one.
The ability to plug in any desired service or protocol is one of the great things about SF, but they leave the implementation to you.
There are many approaches to service-to-service communication; if they had to document all of them, they would spend more time writing about possible approaches than building the actual communication.
They probably settled on the ones most closely related to the platform, but they could have written about any of them; it is just a matter of preference.
I could name a few of the many options, just to give an idea:
Http
Remoting
WCF
Service Bus
Event Hub
AMQP
MQTT
gRPC + protobuf
TCP
UDP
Pipes
And many more. Imagine if they had to document all of them.
The communication layer is flexible enough to let you implement it using any mechanism you like.
Regarding the ones you mentioned,
I always opt for HTTP because it is platform agnostic and widely implemented on most platforms; it does not matter whether it is .NET, Java, Node.js, Windows or Linux, they all talk the same language. The others are tightly bound to the .NET and Windows platform and force every other solution to be adapted to them. There is also the fact that some are synchronous while others, like Service Bus, are asynchronous.
Then, when performance is an issue, I evaluate the other options.

How to deploy a chat service?

I was doing a thought exercise about how I could deploy a chat service like WhatsApp or Slack (just wondering), so that people could really use it. You need two main parts: the client software (e.g. the app running on the smartphones) and the server software. So how would you develop the server-side code and make it work?
The first idea that came to me was the classic hosting service, but it cannot be a simple "web hosting service", probably because something like this should be programmed at a lower level and not work with HTTP requests and responses. Maybe using a specific server-side technology like Node.js (any other suggestions?) to manage different types of requests at a lower level, say at the layer where TCP lives, would be a better solution.
So I heard about Amazon Web Services (AWS), which is not classic hosting; it's a cloud computing service. The problem is that I don't know exactly how this works. Could I deploy a server-side application that works at that low level of networking and also makes requests to databases? Would it be difficult to offer this kind of service using AWS?
I would like to hear all your opinions about any aspect of this. Would you use another kind of technology on the server? What do you think about AWS, and if you think it's a good option, where can I get some info to learn how to use it?
Server Side Code
You can create a chat service backend using Node.js + Express (or Hapi) to handle incoming HTTP requests.
For hosting: cloud servers are readily available these days and allow you to scale if your app grows over time.
Database:
If you already have your DB available (cool), just use an ORM like Sequelize, which provides easy interaction between your Node.js service and your DB. (I have used MySQL + SQL Server + Oracle.)
If not, you can create a new DB (MySQL is free) on your hosting server (cloud?).
I used Microsoft Azure to host a Node.js (+ Hapi.js) backend service, to be consumed by my mobile application over the internet.
Azure gives you $200 of free credit, which is sufficient to try it out and get your hands dirty. There are numerous tutorials available for Azure API App hosting that will guide you to a successful deployment.
I have not explored AWS yet, but I trust the learning curve will be similar.

Spring Cloud: ZUUL + Eureka + NodeJS

I am new to Spring Boot (Cloud) and am going to work on a new project.
Our project architect has designed this new application like this:
One front-end Spring Boot application (which is also a microservice) with Angular 2.
One Eureka server, to which other microservices will connect.
A Zuul proxy server, which will connect to the front end and the microservices.
Now, the following are the things I am confused about, and I can't ask him as he is too senior to me:
Do I need to have a separate Zuul proxy server? I mean, what are the pros and cons of using the same front-end application as the Zuul server?
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar. But again, why? I can directly invoke the REST API of NodeJS-1 from MicroService-1.
(I know this is very tough to guess, but I am still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party API or retrieve data from a DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
Can anyone who has worked with a similar kind of scenario shed some light on my doubts?
I do not have the full context of the problem that you are trying to solve, therefore the answers below are quite general, but they may still be useful:
Do I need to have a separate Zuul proxy server? I mean, what are the pros and cons of using the same front-end application as the Zuul server?
Yes, you are going to need a separate API Gateway service, which may be Zuul (or another gateway, e.g. tyk.io).
The main idea here is that you can have hundreds or even thousands of microservices (like Amazon, Netflix, etc.) and they can be scattered across different machines or data centres. It would be really silly to force your API consumers (in your case it's Angular 2) to memorise all the possible locations of each microservice. Better to have one API Gateway that knows about all the services under it, so your clients can call your gateway and reach the underlying services through one single place. Also, having one in your system will decouple your clients from your services, so it's possible to evolve them independently.
Another benefit is that you can have access control, logging, security, etc. in one single place. And, by the way, I think you are missing one more thing in your architecture: an Authorization Server. A common approach to securing microservices is OAuth 2.0.
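To make the gateway idea concrete, a minimal Zuul gateway in Spring Cloud is little more than an annotated Spring Boot application. The route names mentioned in the comments are assumptions; real route mappings would normally be configured in application.yml:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

// Minimal API gateway: with @EnableZuulProxy this application forwards
// incoming requests (e.g. /microservice-1/**, /nodejs-1/**) to the matching
// services registered in Eureka. Route mappings normally live in
// application.yml under zuul.routes.
@SpringBootApplication
@EnableZuulProxy
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}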
How will MicroService-1 communicate with Node's MicroService-1? Some blogs suggest the Sidecar. But again, why? I can directly invoke the REST API of NodeJS-1 from MicroService-1.
I think you could use Sidecar, but I have never used it. I suppose that the question 'why' is related to the Discovery Service (Eureka in your architecture).
You can't call microservice NodeJS-1 directly because there may be several instances of NodeJS-1 - which one should you call? Furthermore, you can't know whether a service is down or alive at any given point in time. That's why we use discovery services like Eureka - they handle all of this. When any given service starts, it must register itself with Eureka. So if you have started several instances of NodeJS-1, they will all be registered in Eureka, and whenever MicroService-1 wants to call the NodeJS-1 service, it asks Eureka for the locations of the live instances of NodeJS-1 and then chooses which one to call.
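As an illustration, with Spring Cloud the caller never hard-codes a NodeJS-1 address; it resolves the logical service name through Eureka. A minimal sketch (the service name "nodejs-1" and the path "/api/data" are placeholders):

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
class RestConfig {
    // @LoadBalanced makes the RestTemplate resolve logical service names
    // (as registered in Eureka) to a live instance on every call.
    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@Service
class NodeJs1Client {
    private final RestTemplate restTemplate;

    NodeJs1Client(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // "nodejs-1" is the name NodeJS-1 registered under in Eureka,
    // not a fixed host and port; "/api/data" is just a placeholder path.
    String fetchData() {
        return restTemplate.getForObject("http://nodejs-1/api/data", String.class);
    }
}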
(I know this is very tough to guess, but I am still asking.) The NodeJS services (which are not legacy services) are supposed to call some third-party API or retrieve data from a DB.
Now, what I am not getting is why we need NodeJS code at all. Why can't we do the same in the microservices written in Java?
I can only assume that NodeJS was chosen because it has outstanding performance for IO operations, including HTTP requests, which may come in handy when calling third-party services. I do not have any other rational explanation for this.
In general, a microservices architecture gives you the possibility of writing your services in different languages, which is indeed cool, since each language solves some problems better than others. On the other hand, this decision should be made with caution and should answer the question: "do we really need a new language in our stack to solve problem X?"
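For what it's worth, non-blocking calls to third-party APIs are perfectly doable in plain Java as well, so the language choice is rarely forced by that alone. A minimal sketch using the JDK 11 HttpClient (the URL is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThirdPartyCall {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/data")) // placeholder URL
                .GET()
                .build();

        // sendAsync returns a CompletableFuture, so the calling thread
        // is not blocked while the third-party service responds.
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body)
                .thenAccept(System.out::println)
                .join(); // wait here only so the demo prints before exiting
    }
}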

Can I run a microservice which keeps a port open in the cloud?

I'm new to microservices. I envision them as a set of processes running on two or more machines (I suppose that for a given process two instances must run on separate machines for reliability). In that setup, depending on the kind of clients I have, there may be one process working as a TCP server, serving on a specific high port and speaking a non-HTTP protocol.
However, for my low-bandwidth testing purposes, I haven't found a free cloud service which provides that kind of environment (machines to run processes on – say, Java on Linux – while keeping a high port open).
Maybe the facilities I'm expecting are only available to paying customers, or maybe implementing a microservice architecture in the cloud goes beyond simply running processes on machines and sharing a database? Could someone clarify? (And if possible, direct me to one such free service.)
Yes, you are right when you say microservices are more about independent services (processes) that can be deployed on one or more cloud machines. Each service can communicate with the others using non-HTTP mechanisms like message brokers, Thrift, remote procedure calls (RPC), etc.
From an architecture point of view, services should be decoupled enough to handle the complexity of distributed computing. See the image at the Microservices Architecture link.
There's the concept of an API Gateway, which can be used for authentication and for service registration and discovery.
Coming back to your question, you can test microservices on a single cloud machine (by running each service on a different port) and use an API Gateway to discover the service paths. For reference, here are some links which are worth looking at:
For concepts, see: Microservices.io and this Stack Overflow question.
For implementation: ZooKeeper and Auth0 (this is what I'm using).
If you are a Java lover, it is worth looking at the InfoQ article.
Some of the free resources that might help in building and testing microservices are Google App Engine and hook.io.
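To make the "keeps a high port open" part of the question concrete, the process itself can be as small as a plain TCP listener. A minimal sketch in Java (the port number and the echo protocol are placeholders for a real non-HTTP protocol):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class HighPortServer {
    public static void main(String[] args) throws Exception {
        int port = 9500; // any free high port your host allows

        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println("Listening on port " + port);
            while (true) {
                // One thread per client keeps the example short;
                // a real service would use a pool or non-blocking IO.
                Socket client = server.accept();
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line); // stand-in for a real protocol
            }
        } catch (Exception e) {
            // client disconnected or IO error; try-with-resources cleans up
        }
    }
}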

Invoking a Worklight adapter from non-Worklight applications

When deploying adapters (be they HTTP, SQL, JMS or CastIron) to the Worklight Server in WebSphere Application Server, I believe we can invoke the adapters externally from any non-Worklight application, as below.
http://localhost:8080/invoke?adapter=ADAPTER_NAME&procedure=PROCEDURE_NAME&parameters=[PARAMETER1,PAREMETER2,...]
As noticed from this thread:
https://www.ibm.com/developerworks/forums/thread.jspa?threadID=453422
What are the pros and cons of using this approach? Is it really recommended?
Advantages:
It's easy to access from multiple applications: just hit the adapter's URL and pass parameters.
Disadvantages:
It is easy to get around the enabled authentication frameworks.
Workaround:
I faced the same situation and overcame it by injecting custom listeners on the server that listen to every request and then, based on my criteria, forward it to the adapter or the Worklight app. In this way I can prevent outside access.
Another way is to use a custom authentication model:
http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v506/08_04_Custom_Authenticator_and_Login_Module.pdf
http://www.ibm.com/developerworks/mobile/worklight/getting-started.html
Ease of use is the biggest pro and security is the biggest con.
To be able to invoke a procedure in that fashion, your adapter must be free of any security tests (wl_unprotected). If your Worklight host and port are open to the internet (which is very likely), anyone having a whiff of the adapter name, procedure name etc. can invoke your adapter.
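For illustration, such an external call is nothing more than a plain HTTP request. A sketch from Java (the host, port, adapter and procedure names are just the placeholders from the URL above):

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class AdapterCall {
    public static void main(String[] args) throws Exception {
        // Same shape as the invoke URL shown in the question.
        String params = URLEncoder.encode("[\"PARAMETER1\",\"PARAMETER2\"]",
                StandardCharsets.UTF_8);
        String url = "http://localhost:8080/invoke"
                + "?adapter=ADAPTER_NAME"
                + "&procedure=PROCEDURE_NAME"
                + "&parameters=" + params;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // This only works if the procedure is not protected by a security
        // test (wl_unprotected), which is exactly the concern raised above.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}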
