I had a SOAP service which was consumed by another service. Now our architecture has changed and the consumer no longer has access to the source SOAP service, so we have to adapt our public API, developed with node.js, so that it can connect the two services. Something like a bridge between them.
It has to be this way because we can't move the consumer service into the private environment where the SOAP service has been placed, and we also can't modify the consumer so that it could query REST services.
I've tried to make it work with different libraries, such as:
rest-to-soap-mapper: to make calls to the SOAP service from the API. I've been able to call the service, but I can't see how the original consumer could call this API as a SOAP service.
soap: this is supposed to be both a client and a server, and should help me reach what I want, but when I try to configure it as a server, as explained in this example, I get some errors... (I only have the WSDL URL to configure it).
soap-server: it seems that I need to build the service inside the API... if I could point it to the external SOAP service, I think it could work...
So, do you know of any way to build this bridge with any of these tools, or with any other?
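To illustrate, this is roughly the bridge I'm imagining with the soap package: a SOAP server in the node.js API that delegates each operation to a client of the private service. It's only a sketch; the service/port/operation names are placeholders, and since I only have the WSDL URL I'd have to download the WSDL content for the server part.

```javascript
// Rough sketch of the SOAP "bridge" idea (all names are placeholders).
const http = require('http');
const fs = require('fs');
const soap = require('soap');

const PRIVATE_WSDL = 'http://private-host/MyService?wsdl'; // the original SOAP service
const wsdlXml = fs.readFileSync('./MyService.wsdl', 'utf8'); // downloaded copy; soap.listen needs the XML

soap.createClient(PRIVATE_WSDL, (err, client) => {
  if (err) throw err;

  // Implement the same operations and simply delegate to the client.
  const bridgeService = {
    MyService: {
      MyPort: {
        DoSomething(args, callback) {
          client.DoSomething(args, (soapErr, result) => callback(result));
        },
      },
    },
  };

  const server = http.createServer((req, res) => res.end('SOAP bridge'));
  server.listen(8000);
  soap.listen(server, '/soap', bridgeService, wsdlXml); // the consumer would point here instead
});
```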
Please help me understand microservice authentication with an API Gateway.
Let's take an example: I have 10 different, independently deployed microservices, and I have implemented an API Gateway in front of all of them, meaning all requests pass through that gateway. Also, instead of adding authorization/JWT to every microservice, I added it in the API Gateway. With this approach everything is working fine, but my doubt and question is:
1. What if an end user has the URL of a deployed microservice and tries to connect to it without going through the gateway? As I don't have the authorization in place there, how do I stop this? Do I need to add the same authorization logic in every microservice as well? That would end up duplicating the code, so then what is the use of the API Gateway?
Let me know if any other input is required; I hope I explained my problem correctly.
Thanks
CP Variyani
Generally speaking: your microservice(s) will either be internal or public. In other words, they either are or are not reachable by the outside world. If they are internal, you can opt to leave them unprotected, since the protection is basically coming from your firewall. If they are public, then they should require authentication, regardless of whether they are used directly or not.
However, it's often best to just require authentication always, even if they are internal-only. It's easy enough to employ client auth and scopes to ensure that only your application(s) can access the service(s). Then, if there is some sort of misconfiguration where the service(s) are leaked to the external network (i.e. Internet at large) or a hole is opened in the firewall, you're still protected.
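As a minimal sketch of what that per-service check could look like in a Node/Express service, assuming the caller presents a JWT with a scope claim (the scope name, route and secret handling below are purely illustrative):

```javascript
// Sketch: reject any request whose bearer token lacks the expected scope.
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();

function requireScope(scope) {
  return (req, res, next) => {
    const auth = req.headers.authorization || '';
    const token = auth.startsWith('Bearer ') ? auth.slice(7) : null;
    try {
      const claims = jwt.verify(token, process.env.JWT_SECRET); // throws if missing/invalid/expired
      if (!(claims.scope || '').split(' ').includes(scope)) {
        return res.status(403).json({ error: 'insufficient scope' });
      }
      req.user = claims;
      next();
    } catch (e) {
      res.status(401).json({ error: 'invalid or missing token' });
    }
  };
}

// 'orders:read' is a made-up scope for illustration.
app.get('/orders', requireScope('orders:read'), (req, res) => res.json([]));

app.listen(3001);
```

Even if the service only ever sits behind the firewall, this check is cheap and saves you if the network boundary ever changes.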
An API gateway is used to handle cross-cutting concerns like authorization, TLS, etc., and also acts as a single point of entry to your services.
Coming to your question: if your API services are exposed for public access, then you have to secure them. Normally the API gateway is the only point exposed to the public; the rest of the services sit behind a firewall (virtual network) and can only be accessed by the API gateway, unless you have some reason to expose your services publicly.
E.g., if you are using Kubernetes for your service deployment, you can make your services accessible only inside the cluster (the services have private IPs), and the only way to reach them is the API gateway. You don't need to do anything special then.
However, if your services are exposed publicly (have public IPs) for any reason, then you have to secure them. So, in short, it depends on how you have deployed them and whether they have a public IP associated with them.
Based on your comments below: you should do the authentication in your API gateway and pass the token along with the request to your services. Your services will only validate the token, not redo the whole authentication. This way, if you want to update or change the authentication provider or flow, it's easier to do when it lives in the API gateway.
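A rough sketch of that split, assuming a Node/Express gateway with the jsonwebtoken and http-proxy-middleware packages (the service URL, route and secret handling are placeholders):

```javascript
// Sketch of the gateway side: authenticate once here, then forward the request
// (including the Authorization header) to the internal service.
const express = require('express');
const jwt = require('jsonwebtoken');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Full authentication happens only at the gateway.
app.use((req, res, next) => {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    jwt.verify(token, process.env.JWT_SECRET);
    next(); // valid token: let the request through
  } catch (e) {
    res.status(401).send('unauthenticated');
  }
});

// The Authorization header travels with the proxied request, so the downstream
// service can cheaply re-verify the signature instead of redoing the login.
app.use('/orders', createProxyMiddleware({ target: 'http://orders-service:3001', changeOrigin: true }));

app.listen(8080);
```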
I have some doubts about the most appropriate way to allow access to my company's backend services from public clouds like AWS or Azure, and vice versa. In our case, we need an AWS app to invoke some HTTP REST services exposed in our backend.
I came up with at least two options:
The first one is to set up an AWS Virtual Private Cloud between the app and our backend and route all traffic through it.
The second option is to expose the HTTP service through a reverse proxy and set up IP filtering in the proxy to allow only incoming connections from AWS. We don't want the HTTP service to be publicly accessible from the Internet, and I think this is satisfied whichever option we choose. We will also likely need to integrate more services (TCP/UDP) between AWS and our backend, like FTP transfers, monitoring, etc.
My main goal is to set up a standard way to accomplish this integration, so we don't need different configurations depending on the kind of service or application.
I think this is a very common need in hybrid cloud scenarios, so I would just like to follow the best practices.
I would very much appreciate any kind of advice from you.
Your option #2 seems good. Since you have an AWS VPC, you can get an IP for your reverse proxy to whitelist.
There is another approach: expose your backends as APIs secured with OAuth tokens. You need some sort of API management solution for this. Then your Node.js app can invoke those APIs with the token.
WSO2 API Cloud allows you to create these APIs in the cloud and run the API gateway in your own datacenter. The Node.js API calls will then hit the on-premises gateway, which will validate the token and let the request through to the backend. You will not need to expose the backend service to the Internet. See this blog post:
https://wso2.com/blogs/cloud/going-hybrid-on-premises-api-gateways/
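For instance, a minimal sketch of what the Node.js side could look like once the backend is fronted by such a gateway, assuming a standard OAuth2 client-credentials flow (the token endpoint, credentials and API URL are placeholders; requires Node 18+ for the global fetch):

```javascript
// Sketch: get an OAuth access token, then call the managed API with it.
async function getAccessToken() {
  const res = await fetch('https://gateway.example.com/token', {
    method: 'POST',
    headers: {
      Authorization: 'Basic ' + Buffer.from('CLIENT_ID:CLIENT_SECRET').toString('base64'),
      'Content-Type': 'application/x-www-form-urlencoded',
    },
    body: 'grant_type=client_credentials',
  });
  const { access_token } = await res.json();
  return access_token;
}

async function callBackendApi() {
  const token = await getAccessToken();
  const res = await fetch('https://gateway.example.com/backend/v1/orders', {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}

callBackendApi().then(console.log).catch(console.error);
```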
Assume you want to deploy two apps, of which one provides some API to the second application.
With services I'd just bind the service (or declare it as a dependency in my manifest) to my application and hence get the information about host, port and credentials passed to my application (e.g. via environment variables in node.js). Is there a similar mechanism for application-to-application "communication"?
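For context, this is the binding mechanism I mean — a sketch of how I read a bound service's credentials in node.js today (the "my-rabbitmq" instance name is made up, and the credential fields vary by service broker):

```javascript
// Cloud Foundry injects the credentials of bound services via VCAP_SERVICES.
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');

// Find the binding by its service instance name.
const binding = Object.values(vcap)
  .flat()
  .find((instance) => instance.name === 'my-rabbitmq');

if (binding) {
  const creds = binding.credentials; // typically a uri, or host/port/username/password
  console.log('connecting to', creds.uri || `${creds.host}:${creds.port}`);
}
```

I'm looking for something equivalent that would hand one app the coordinates (and ideally credentials) of the other app.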
So far my approach is to use a RabbitMQ service (or any message broker/queue) which both applications are bound to and which I then use for cross-app communication.
Thanks!
Using a message broker, as you do, is definitely a viable solution. It allows for asynchronous communication. However, you will have to take care of authentication yourself, as opposed to app <-> service communication, where authentication/authorization is established through the Cloud Foundry service binding.
Another way would be to use a service registry for this. Both apps would register with the service registry and be able to discover each other.
You could try the Spring Cloud service registry (Eureka) or Consul. As for your message broker solution: it will not generate credentials for your apps the way a Cloud Foundry service binding does.
From your use case, for microservice-to-microservice discovery you need Spring Cloud Services and Eureka.
I don't have much experience with Node.js, but some googling will turn up useful articles. Here's one that may help you: https://www.npmjs.com/package/eureka-js-client
This article will give you an overview from a Java and Spring perspective: https://spring.io/guides/gs/service-registration-and-discovery/
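To make that concrete on the Node.js side, here is a rough sketch of registering a service with Eureka using eureka-js-client (the hostnames, ports and app names are placeholders; check the package docs for the exact option names):

```javascript
// Sketch: register this Node.js app with a Eureka server so other services can find it.
const { Eureka } = require('eureka-js-client');

const client = new Eureka({
  instance: {
    app: 'node-api',                 // placeholder app name
    hostName: 'node-api.local',
    ipAddr: '10.0.0.10',
    port: { $: 3000, '@enabled': true },
    vipAddress: 'node-api',
    dataCenterInfo: {
      '@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name: 'MyOwn',
    },
  },
  eureka: {
    host: 'eureka.local',            // placeholder Eureka server
    port: 8761,
    servicePath: '/eureka/apps/',
  },
});

client.start((err) => {
  if (err) console.error('Eureka registration failed', err);
  // Later, discover instances of another registered service by app id, e.g.:
  // const instances = client.getInstancesByAppId('ORDERS-SERVICE');
});
```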
I was working on a side project and decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load balance requests if I have 2 nodes of the same microservice?
If one of the microservices is down, how should the discovery service know about it?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API Gateway approach. All the services, including the gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built with Node.js as well. Some topics to mention related to your question:
Authentication/Authorization: The GW service is the single entry point for the clients. All the authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has a Node.js library as well. We keep authorization information like the user's roles in the JWT token. Once the token is generated in the GW and returned to the client, the client sends the token in an HTTP header with each request; then we check whether the client has the required role to call the specific service and whether the token has expired. In this approach, you don't need to keep track of the user's session on the server side. Actually there is no session; the required information is in the JWT token.
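Since the question mentions Node.js: a condensed sketch of that token flow using the jsonwebtoken package (our own implementation is in Java; the secret handling, role names and expiry below are only illustrative):

```javascript
// Sketch: the GW issues a JWT carrying the user's roles at login, then checks
// role and expiry on every subsequent request. No server-side session needed.
const jwt = require('jsonwebtoken');

const SECRET = process.env.JWT_SECRET; // example only; key management is up to you

// 1) On successful login, embed the roles in the token.
function issueToken(user) {
  return jwt.sign({ sub: user.id, roles: user.roles }, SECRET, { expiresIn: '1h' });
}

// 2) On every request, verify signature + expiry and check the required role.
function authorize(requiredRole) {
  return (req, res, next) => {
    try {
      const token = (req.headers.authorization || '').replace('Bearer ', '');
      const claims = jwt.verify(token, SECRET); // throws if expired or tampered with
      if (!claims.roles.includes(requiredRole)) return res.status(403).end();
      req.user = claims;
      next();
    } catch (e) {
      res.status(401).end();
    }
  };
}

module.exports = { issueToken, authorize };
```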
Service Discovery / Load balancing: We use Docker and Docker Swarm, the clustering tool bundled with the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach with Docker makes it easy to deploy, maintain and scale the services. At the beginning of the project we used HAProxy, Registrator and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we don't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach you can easily create isolated environments for your services, like dev, beta and prod, on one or multiple machines by creating different networks for each environment. Once you create the network and deploy the services, service discovery and load balancing are no longer your concern. On the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm you can scale services with one command, and at each request to a service Docker distributes (load balances) the request to an instance of the service.
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), your API gateway could be written in Node, but it should sit behind HAProxy, which will take care of load balancing and HTTPS.
Discovery is in the correct position; if you're looking for one that fits your design, look no further than Consul.
You can use consul-template or your own micro discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to track requests by having the API gateway attach a request ID to each one, so its lifecycle can be traced within the "inner" system.
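A tiny sketch of what that could look like as Express-style middleware in the Node gateway (the x-request-id header name is just a common convention):

```javascript
// Sketch: attach a request ID at the gateway and propagate it downstream so a
// single user request can be traced across the "inner" services.
const crypto = require('crypto');

function requestId(req, res, next) {
  // Reuse an incoming ID (e.g. one set by HAProxy) or generate a fresh one.
  req.id = req.headers['x-request-id'] || crypto.randomUUID();
  res.setHeader('x-request-id', req.id);
  next();
}

// When the gateway calls an internal service, forward the same ID, e.g.:
// fetch(serviceUrl, { headers: { 'x-request-id': req.id } })

module.exports = requestId;
```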
I would add Redis for messaging/workers/queues and fast in-memory stuff like caching/cache invalidation (you can't run a whole microservice architecture without one), or use RabbitMQ if you have many more distributed transactions and a lot of messaging.
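For the cache part, a minimal cache-aside sketch with the node-redis client (key names, TTL and the loader function are arbitrary examples):

```javascript
// Sketch: read from the Redis cache, fall back to the service, then cache the
// result with a TTL so stale entries expire on their own. Uses the "redis" v4 API.
const { createClient } = require('redis');

const redis = createClient({ url: 'redis://localhost:6379' });
const ready = redis.connect(); // connect once, reuse the client

async function getProduct(id, loadFromService) {
  await ready;
  const key = `product:${id}`;

  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const fresh = await loadFromService(id);                  // hit the microservice
  await redis.set(key, JSON.stringify(fresh), { EX: 60 });  // expires after 60 seconds
  return fresh;
}

module.exports = { getProduct };
```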
Run all of this in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could use an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation and a huge big-data warehouse at once.
I have a Windows Azure Mobile Service up and running; however, that service needs to consume some data exposed through an external SOAP service and store that information in the mobile service database.
I would like to set up a worker in the mobile service so that the calls to the external SOAP service are executed at a fixed interval.
I've been looking for a solution to this problem but haven't found anything yet, so any help that would point me in the right direction would be appreciated.
Unfortunately there isn't an easy way to talk to a SOAP service from your Mobile Service backend. The backend is based on Node.js, and even though there are some Node modules for talking to SOAP services, they are currently not supported in Mobile Services. We are working on a solution that will enable you to use any Node module in your service, but it is not out yet.
If you control the SOAP service and it is written using WCF, you may be able to easily add a REST endpoint to the service with just a few config changes and then consume it from your Mobile Service via plain HTTP requests by using the "request" module.
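For example, if the SOAP service does grow a REST endpoint, a scheduled job script in the Mobile Service could poll it roughly like this (a sketch only; the URL, response shape and the 'externalData' table name are placeholders):

```javascript
// Sketch of a Mobile Services scheduled job: fetch data over plain HTTP with
// the "request" module and store it in a mobile service table.
function syncExternalData() {
    var request = require('request');

    request.get({ url: 'https://example.com/api/data', json: true }, function (err, response, body) {
        if (err || response.statusCode !== 200) {
            console.error('Failed to fetch external data', err);
            return;
        }

        var table = tables.getTable('externalData'); // 'tables' is provided to server scripts
        body.items.forEach(function (item) {          // assumes the endpoint returns { items: [...] }
            table.insert(item, {
                error: function (insertErr) { console.error(insertErr); }
            });
        });
    });
}
```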