Can somebody please explain how to implement service discovery with Node.js without any frameworks, only with Node.js and Express?
All I can find on YouTube is how to implement this with frameworks like Spring Boot (which I don't want to use).
What steps do I need to take to implement this? An example implementation would help a lot.
A service discovery service is a service that provides endpoints to manage the URLs or IPs of other services in the same environment. It's good practice because it decouples services from each other: with a service discovery system, services do not need to store each other's URLs or IPs. These values can simply be fetched from the discovery service, and they can also be updated at runtime (still with a grace period, of course).
A very basic service discovery service would provide endpoints to
register a service instance,
unregister a service instance, and
look up registered instances.
Translated to HTTP verbs:
POST /service/:name
DELETE /service/:name/:id
GET /service/:name
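A minimal in-memory sketch of those endpoints with Express (the `Map`-based storage, the id generation, and the payload shape are assumptions for illustration, not a production design):

```js
// service-discovery.js -- minimal in-memory registry, illustrative only
const express = require('express');
const crypto = require('crypto'); // randomUUID needs Node 14.17+

const app = express();
app.use(express.json());

// name -> Map<id, { id, host, port, registeredAt }>
const services = new Map();

// Register an instance: POST /service/:name  body: { host, port }
app.post('/service/:name', (req, res) => {
  const { host, port } = req.body;
  if (!host || !port) return res.status(400).json({ error: 'host and port required' });
  const id = crypto.randomUUID();
  if (!services.has(req.params.name)) services.set(req.params.name, new Map());
  const instance = { id, host, port, registeredAt: Date.now() };
  services.get(req.params.name).set(id, instance);
  res.status(201).json(instance);
});

// Unregister an instance: DELETE /service/:name/:id
app.delete('/service/:name/:id', (req, res) => {
  const instances = services.get(req.params.name);
  if (!instances || !instances.delete(req.params.id)) return res.sendStatus(404);
  res.sendStatus(204);
});

// Look up all instances of a service: GET /service/:name
app.get('/service/:name', (req, res) => {
  const instances = services.get(req.params.name);
  res.json(instances ? [...instances.values()] : []);
});

app.listen(3000, () => console.log('discovery listening on 3000'));
```

A consuming service would then call GET /service/some-name and pick one of the returned instances (round-robin, for example) before making its actual request.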
This sketch, of course, does not take into account:
authentication and authorization to prevent unwanted manipulation,
health checks to make sure that all registered services are actually alive and healthy,
and how to set up storage so that the service is fast and scalable.
Still, it should give you an idea of where to start. Generally, I'd advise using a proven solution like https://www.consul.io/
We are looking at using webhooks from various vendors outside our network. They would publish the events to us; we would be the webhook listener/receiver, not the one pushing events. We have done a proof of concept of creating an Azure Function to receive the events. From the research we have done, most vendors secure this by passing a SHA-1/SHA-256/SHA-512 hash for us to verify that they are who we expect to receive events from. This all worked as expected with the POC Azure Function.
From an enterprise network security standpoint, is there anything else available? The process above puts the security in the function. I'm sure our network security group would not want us to have 10 functions, one per vendor, each worrying about its own security. I've read about whitelisting the IPs that would be sending the events, but most of our vendors are cloud-based, so I'm not sure how readily those IPs would be available. Maybe one function to validate all incoming events and then pass them through? Would that be an acceptable solution? Could Azure API Gateway or API Management address this somehow? Is there any other network-type product that handles webhook security specifically?
Any insight or links to more information would be most appreciated.
Thanks.
Wow, that's a really open-ended question.
You can use Azure Front Door (AFD) with a Web Application Firewall (WAF) attached to it, so SQL injection, DDoS, and similar attacks can be prevented by AFD and the WAF before they reach your functions.
However, I would say the most secure approach is to add IP restrictions as well, so you need to ask each vendor for their IP addresses. There may be many, maybe hundreds, but that doesn't matter: you can use CIDR notation to cover whole networks. And you can easily set these IP restrictions during the CI/CD pipeline with an Azure PowerShell script.
You can also use API Management in front of Azure Functions and create access restriction policies. You can restrict access by IP or by JWT. APIM might be a little pricey, though.
https://learn.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies
You can also create advanced policies with APIM:
https://learn.microsoft.com/en-us/azure/api-management/api-management-advanced-policies
Apart from that, AFD & WAF and IP restrictions operate at the network layer, but you can also implement token-based authentication on the code side.
https://learn.microsoft.com/en-us/azure/app-service/overview-authentication-authorization
You can use Azure Active Directory, IdentityServer, or plain JWTs for this.
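Separately, for the hash check the original question describes, here is a minimal sketch of the verification in Node.js. The header name and the "sha256=" prefix are assumptions; each vendor documents its own scheme, and the same crypto calls work inside an Azure Function:

```js
const express = require('express');
const crypto = require('crypto');

const app = express();
// Keep the raw body: signatures are computed over the exact bytes sent.
app.use(express.json({ verify: (req, res, buf) => { req.rawBody = buf; } }));

const WEBHOOK_SECRET = process.env.WEBHOOK_SECRET; // shared secret from the vendor

app.post('/webhook', (req, res) => {
  // 'x-signature-256' is a placeholder; use the header your vendor specifies.
  const received = req.get('x-signature-256') || '';
  const expected = 'sha256=' + crypto
    .createHmac('sha256', WEBHOOK_SECRET)
    .update(req.rawBody)
    .digest('hex');
  const ok = received.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));
  if (!ok) return res.sendStatus(401);
  res.sendStatus(202); // accept quickly, process asynchronously
});

app.listen(3000);
```

Note the constant-time comparison (crypto.timingSafeEqual) rather than `===`, which avoids leaking information through timing differences.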
Good luck!
I have some doubts about the most appropriate way to allow access to my company's backend services from public clouds like AWS or Azure, and vice versa. In our case, we need an AWS app to invoke some HTTP REST services exposed in our backend.
I came up with at least two options:
The first one is to set up an AWS Virtual Private Cloud between the app and our backend and route all traffic through it.
The second option is to expose the HTTP service through a reverse proxy and set up IP filtering in the proxy to allow only incoming connections from AWS. We don't want the HTTP service to be publicly accessible from the internet, and I think this is satisfied whichever option we choose. We will also likely need to integrate more services (TCP/UDP) between AWS and our backend, like FTP transfers, monitoring, etc.
My main goal is to set up a standard way to accomplish this integration, so we don't need different configurations depending on the kind of service or application.
I think this is a very common need in hybrid cloud scenarios so I would just like to embrace the best practices.
I would very much appreciate any kind of advice.
Your option #2 seems good. Since you have an AWS VPC, you can get a fixed egress IP for your app's outbound traffic, which your reverse proxy can then whitelist.
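To make option #2 concrete, a sketch of the IP filter in Node.js/Express (the IP is a placeholder for your VPC's NAT/egress address; in practice the filter usually lives in the reverse proxy config itself, e.g. nginx or HAProxy):

```js
const express = require('express');
const app = express();

// Placeholder: your VPC's fixed egress IP(s).
const ALLOWED_IPS = new Set(['203.0.113.10']);

app.use((req, res, next) => {
  // Strip the IPv4-mapped prefix Node may report; req.ip honors
  // X-Forwarded-For only if 'trust proxy' is configured correctly.
  const ip = (req.ip || '').replace(/^::ffff:/, '');
  if (!ALLOWED_IPS.has(ip)) return res.sendStatus(403);
  next();
});

app.get('/internal/api', (req, res) => res.json({ ok: true }));
app.listen(8080);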
There is another approach: expose your backends as APIs secured with OAuth tokens. You need some sort of API management solution for this. Then your Node.js app can invoke those APIs with the token.
WSO2 API Cloud allows you to create these APIs in the cloud and run the API gateway in your own datacenter. The Node.js API calls will then hit the on-prem gateway, which will validate the token and let the request through to the backend. You will not need to expose the backend service to the internet. See this blog post:
https://wso2.com/blogs/cloud/going-hybrid-on-premises-api-gateways/
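A hedged sketch of what the token-based call could look like from Node.js, using the standard OAuth 2 client-credentials flow (all URLs and credentials are placeholders; the exact endpoints depend on your API management product; fetch is built in since Node 18):

```js
// Get a token via client credentials, then call the gateway-protected API.
async function getAccessToken() {
  const res = await fetch('https://gateway.example.com/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: process.env.CLIENT_ID,
      client_secret: process.env.CLIENT_SECRET,
    }),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  return (await res.json()).access_token;
}

async function callBackend() {
  const token = await getAccessToken();
  const res = await fetch('https://gateway.example.com/backend/orders', {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}

callBackend().then(console.log).catch(console.error);
```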
Assume you want to deploy two apps, one of which provides an API to the other.
With services, I'd just bind the service (or declare it as a dependency in my manifest) to my application and hence get the information regarding host, port, and credentials passed to my application (e.g. via environment variables in Node.js). Is there a similar mechanism for application-to-application "communication"?
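For concreteness, this is roughly what reading such a binding looks like in Node.js; Cloud Foundry injects the bound services into the VCAP_SERVICES environment variable as JSON (the 'p-rabbitmq' label is an example, the actual key depends on your service offering):

```js
// VCAP_SERVICES is a JSON blob Cloud Foundry injects into bound apps.
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');

// 'p-rabbitmq' is an example label; check your own marketplace offering.
const binding = (vcap['p-rabbitmq'] || [])[0];
if (!binding) throw new Error('no RabbitMQ binding found');

const { uri } = binding.credentials; // e.g. amqp://user:pass@host:5672/vhost
console.log('broker URI available for binding', binding.name);
```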
So far my approach is to use a RabbitMQ service (or any message broker/queue) which both applications are bound to and which I then use for cross-app communication.
Thanks!
Using a message broker, as you do, is definitely a viable solution, and it allows for asynchronous communication. However, you will have to take care of authentication yourself, as opposed to app <-> service communication, where authentication/authorization is established through the Cloud Foundry service binding.
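Since credentials are not injected between apps automatically, both apps would use the same binding's URI. A small publish sketch with the amqplib package (queue name and the URI's source are assumptions; see the VCAP_SERVICES snippet above):

```js
const amqp = require('amqplib');

async function publish(message) {
  // AMQP_URI would come from the shared service binding.
  const conn = await amqp.connect(process.env.AMQP_URI);
  const ch = await conn.createChannel();
  await ch.assertQueue('app-to-app', { durable: true });
  ch.sendToQueue('app-to-app', Buffer.from(JSON.stringify(message)), { persistent: true });
  await ch.close();
  await conn.close();
}

publish({ hello: 'from app A' }).catch(console.error);
```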
Another way would be to use a service registry for this. Both apps would register with the service registry and be able to discover each other.
You could try Spring Cloud Service Registry (Eureka) or Consul. As with your message broker solution, this will not generate credentials for your apps the way a Cloud Foundry service binding does.
From your use case, for microservice-to-microservice discovery, you need Spring Cloud Services and Eureka.
I don't have much experience with Node.js, but some googling will turn up articles. Here's one that may help you: https://www.npmjs.com/package/eureka-js-client
This article will give you an overview from a Java and Spring perspective: https://spring.io/guides/gs/service-registration-and-discovery/
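If you go the Eureka route from Node.js, a hedged sketch with the eureka-js-client package mentioned above (hostnames, ports, and app names are placeholders; see the package README for the full option set):

```js
const { Eureka } = require('eureka-js-client');

const client = new Eureka({
  // How this instance advertises itself to the registry.
  instance: {
    app: 'node-app-a',
    hostName: 'node-app-a.example.com',
    ipAddr: '10.0.0.10',
    port: { $: 8080, '@enabled': 'true' },
    vipAddress: 'node-app-a',
    dataCenterInfo: {
      '@class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name: 'MyOwn',
    },
  },
  // Where the Eureka server lives.
  eureka: { host: 'eureka.example.com', port: 8761, servicePath: '/eureka/apps/' },
});

client.start(err => {
  if (err) throw err;
  // Discover instances of the other app by its registered name.
  const instances = client.getInstancesByAppId('NODE-APP-B');
  console.log(instances.map(i => `${i.hostName}:${i.port.$}`));
});
```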
At the firm I am working currently, we have a lot of microservices, currently, most of them are deployed to Azure. In Azure, service to service authentication is simple: Azure Active Directory is an authorization server, and the service can request OAuth 2 tokens from it using either client credentials or client assertion (with JWT) flow. Then, the service can use this token to authenticate to other services.
In the last few months, we started moving some of our services to AWS, and this makes me wonder: is there an alternative to Azure Active Directory? I could not find anything myself, so I thought it better to ask: what is the recommended way to implement service-to-service authentication outside Azure? I know you can use Azure Active Directory outside Azure as well. I am asking because I guess there must be other tools out there, maybe with easier AWS integration.
I didn't mention any programming language (we mainly use C# here, and a little Node.js recently) because I feel this question is language-agnostic; I would prefer a solution that works well with many languages.
Thank you,
Omer
Amazon has AWS Directory Service.
It's Microsoft AD in the AWS cloud.
I wouldn't know of any AWS services that help with that exact use case. However, you could solve this by exposing two ports in the application: one for internal requests and one for external ones. You can use security groups to shield the internal port from requests coming from the internet.
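A sketch of the two-port idea in Node.js (the ports are arbitrary; the actual shielding happens in the security groups, not in the code):

```js
const express = require('express');

// Public API: reachable from the internet (security group allows it).
const publicApp = express();
publicApp.get('/api/orders', (req, res) => res.json({ orders: [] }));
publicApp.listen(8080, () => console.log('public API on 8080'));

// Internal API: same process, second port. A security group rule only
// allows traffic to 8081 from inside the VPC, so service-to-service
// calls use this port while the internet never reaches it.
const internalApp = express();
internalApp.get('/internal/orders/raw', (req, res) => res.json({ raw: true }));
internalApp.listen(8081, () => console.log('internal API on 8081'));
```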
Another option that might involve more changes to your setup is to use a gateway. This pattern is used a lot for microservices; an elaborate description can be found, e.g., here. The basic concept is that all outside (internet) requests go through a gateway service that allows certain routes and disallows certain other routes. In cases where users have to log in, the gateway will usually handle the authentication.
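A minimal sketch of that gateway idea with Express and the http-proxy-middleware package (service hostnames and routes are placeholders):

```js
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Allowed routes are forwarded to internal services...
app.use('/api/users', createProxyMiddleware({ target: 'http://users-service:3000', changeOrigin: true }));
app.use('/api/orders', createProxyMiddleware({ target: 'http://orders-service:3000', changeOrigin: true }));

// ...everything else is disallowed and never reaches the internal network.
app.use((req, res) => res.sendStatus(404));

app.listen(80);
```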
I was working on a side project and I decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load balance requests if I have two nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API gateway approach. All the services, including the gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built with Node.js as well. Some topics to mention related to your questions:
Authentication/Authorization: The GW service is the single entry point for the clients. All authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which has a Node.js library as well. We keep authorization information like the user's roles in the JWT token. Once the token is generated in the GW and returned to the client, the client sends the token in an HTTP header on each request; we then check whether the client has the required role to call the specific service and whether the token has expired. With this approach, you don't need to track the user's session on the server side. Actually, there is no session; the required information is in the JWT token.
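A condensed sketch of that flow in Node.js with the jsonwebtoken package (the secret, claims, and role names are examples):

```js
const jwt = require('jsonwebtoken');
const SECRET = process.env.JWT_SECRET; // known only to the gateway

// At login, the gateway issues a token carrying the user's roles.
function issueToken(user) {
  return jwt.sign({ sub: user.id, roles: user.roles }, SECRET, { expiresIn: '1h' });
}

// On every request, the gateway verifies the token and checks the role.
function requireRole(role) {
  return (req, res, next) => {
    try {
      const token = (req.get('Authorization') || '').replace(/^Bearer /, '');
      const claims = jwt.verify(token, SECRET); // throws if expired/invalid
      if (!claims.roles.includes(role)) return res.sendStatus(403);
      req.user = claims;
      next();
    } catch {
      res.sendStatus(401);
    }
  };
}

// Usage at the gateway: app.get('/billing', requireRole('admin'), handler);
```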
Service Discovery / Load balancing: We use Docker and Docker Swarm, a clustering tool bundled into the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach makes it easy to deploy, maintain, and scale the services. At the beginning of the project, we used HAProxy, Registrator, and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we don't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach, you can easily create isolated environments for your services, like dev, beta, and prod, on one or multiple machines by creating different networks for each environment. Once you create the network and deploy the services, service discovery and load balancing are no longer your concern. In the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm, you can easily scale services with one command. On each request to a service, Docker distributes (load balances) the request to an instance of the service.
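Calling another service in the same Docker network is then just an HTTP request to the service's name (the name and port below are whatever you chose in your stack file):

```js
// Inside the same overlay network, 'catalog' resolves via Docker's internal
// DNS to the service's virtual IP, and swarm load balances across replicas.
fetch('http://catalog:8080/items') // Node 18+ built-in fetch
  .then(res => res.json())
  .then(items => console.log(items))
  .catch(console.error);
```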
Your design is OK.
If your API gateway needs to implement CAS or some other kind of auth via one of the services (i.e., some kind of user service), and that's probably the case, and it should also track all requests and modify the headers to carry requester metadata (for internal ACL/scoping usage), then your API gateway should be done in Node, but it should sit behind HAProxy, which will take care of load balancing and HTTPS.
Discovery is in the correct position; if you're looking for one that fits your design, look no further than Consul.
You can use consul-template or your own micro discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to track requests by having the API gateway assign a request ID to each one, so its lifecycle can be traced within the "inner" system.
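For example, a tiny Express middleware at the gateway that attaches the ID (the x-request-id header name is a common convention, not a standard):

```js
const crypto = require('crypto');

// Assign a request ID at the edge; downstream services log and forward it.
function requestId(req, res, next) {
  req.id = req.get('x-request-id') || crypto.randomUUID();
  res.set('x-request-id', req.id); // echo back to the caller
  next();
}

// Usage at the gateway: app.use(requestId);
// When proxying, copy req.id into the outgoing 'x-request-id' header.
```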
I would add Redis for messaging, workers, queues, and fast in-memory stuff like caching and cache invalidation (you can't handle a full microservice architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
Spin all of this up in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.