Flower: running the server and clients on different machines - federated-learning

I use Flower API for federated learning applications. Is there any way to run the server and clients on different machines for a real-world benchmark?
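Flower itself only needs a network-reachable server address on both sides, so the server and clients can run on different machines as long as the server's port is open. Below is a minimal sketch assuming the classic Flower 1.x fl.server.start_server / fl.client.start_numpy_client API (newer releases also provide the flwr run / SuperLink workflow); the IP 203.0.113.10, port 8080, and the DummyClient logic are placeholders, not a real benchmark setup.

```python
import flwr as fl

# server.py -- run on the server machine.
# Bind to 0.0.0.0 so remote clients can connect, and open port 8080 in the
# machine's firewall / security group.
fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=3),
)
```

And on each client machine, point the client at the server's routable address instead of localhost:

```python
import flwr as fl
import numpy as np

# client.py -- run on each client machine.
class DummyClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [np.zeros(3)]           # stand-in for the real model weights
    def fit(self, parameters, config):
        return parameters, 1, {}       # updated weights, number of examples, metrics
    def evaluate(self, parameters, config):
        return 0.0, 1, {}              # loss, number of examples, metrics

fl.client.start_numpy_client(
    server_address="203.0.113.10:8080",  # placeholder: the server machine's IP
    client=DummyClient(),
)
```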

Related

How to host multiple databases for a micro-service application?

I have an application in mind that would be built using Node, MongoDB + another DB, Kubernetes, RabbitMQ, Docker, and React as a front end. The application will be built in a microservice architecture. We all know that for a monolith app all you need is one DB (MongoDB, MySQL, etc.), but for a microservice one you can have multiple databases. My question is: do I need to buy multiple, separate databases and connect each service to them, or how does it work in a microservices design?
At the moment I have a sample microservices app running on my local machine using Docker, connected to multiple databases (one database per service). I am just trying to get an idea of how this works with providers like DigitalOcean or AWS.
Any input on this would be great.
I am just trying to figure out how this is going to work when it comes to production later, so that I am aware of costs and deployments. I have done some research on DigitalOcean, AWS, etc., but I still can't figure out how they work.
Thanks in advance.
You don't need to run multiple instances of a DBMS. You can easily use one VM with a single MongoDB instance running on it.
When you scale, you might want separate machines running DB instances for your services, but at the start you can just separate them logically and make sure your services do not communicate with each other through the database.
Chris Richardson on his microservices.io website says:
There are a few different ways to keep a service’s persistent data private. You do not need to provision a database server for each service. For example, if you are using a relational database then the options are:
- Private-tables-per-service – each service owns a set of tables that must only be accessed by that service
- Schema-per-service – each service has a database schema that’s private to that service
- Database-server-per-service – each service has its own database server.
Source: https://microservices.io/patterns/data/database-per-service.html
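To make the "separate it logically" point concrete, here is a minimal sketch (using pymongo; the host, database, and collection names are made up for illustration) of two services sharing one MongoDB server while each owning its own database:

```python
from pymongo import MongoClient

# One MongoDB server for the whole system (e.g. a single VM or managed instance).
client = MongoClient("mongodb://db-host:27017")  # db-host is a placeholder

# Logical database-per-service: each service only ever touches its own database.
orders_db = client["orders_service"]
users_db = client["users_service"]

# The orders service writes only to its own collections...
orders_db.orders.insert_one({"order_id": 1, "status": "created"})

# ...and the users service to its own. Neither service reads the other's
# database directly; cross-service data flows over APIs or a message broker.
users_db.users.insert_one({"user_id": 42, "name": "Ada"})
```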

Deploy and secure microservices communication - best practices

I have two microservices, each connected to a database, and two APIs, and I was wondering what the best practices are for deploying them in production.
Firstly, I thought about putting all the code on the same server. Each microservice will be in a Docker container, and the same goes for the APIs. I will not use Docker for the databases (Mongo and MariaDB); instead, they will be installed directly on the server. Only the APIs' ports will be opened, and they will communicate with the microservices via TCP.
Secondly, I was wondering how I can secure the microservices, or whether that is needed at all. The validation logic will live only in the APIs.
Thank you!
Some food for thought:
Deploy several replicas of the same container to increase reliability and resilience. To manage load balancing and replicas you could use, for instance, Kubernetes.
If your containers communicate with each other, you could think about enabling mTLS (see the sketch after this list).
Expose only the needed ports.
If you deploy them on your own server, pay attention to the maintenance of the underlying operating system.
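As a rough illustration of the mTLS point above, here is a minimal sketch using Python's standard ssl module; the certificate/key file names, the service-a hostname, and the port are placeholders, and the two halves run as separate processes (one per container):

```python
import socket
import ssl

# --- Service A (server side): require clients to present a cert signed by our CA.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
server_ctx.load_verify_locations(cafile="ca.crt")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # this is what makes the TLS *mutual*

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with server_ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()   # handshake fails without a valid client cert
        print("verified peer:", conn.getpeercert())

# --- Service B (client side, separate process): present our cert, verify the server's.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
client_ctx.load_verify_locations(cafile="ca.crt")

with socket.create_connection(("service-a", 8443)) as raw:
    with client_ctx.wrap_socket(raw, server_hostname="service-a") as tls:
        tls.sendall(b"hello over mTLS")
```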

record load test of desktop app with load runner running on cloud

We have a Windows app running on AWS. We access the app via a Remote Desktop connection.
This app is supposed to be able to serve 50 concurrent users. Now I have to run a load test to verify whether it can actually serve 50 concurrent users.
How can I record a test with LoadRunner for this app running in the cloud?
Are you trying to test the app infrastructure or the remote desktop infrastructure of AWS? These are two different questions.
What do you know about the architecture of this application, the next upstream architectural component it communicates with, the communication method, etc.? You will need to understand this if you want to scrape away the interface and test at the application-layer protocol or developer API layer. Barring that, you either go to the source and use Visual Studio, or go up a tier and use Remote Desktop, a protocol supported by LoadRunner.

Making locally hosted server accessible ONLY by AWS hosted instances

Our system has 3 main components:
A set of microservices running in AWS that together comprise a webapp.
A very large monolithic application that is hosted within our network, comprises several other webapps, and exposes a public API that is consumed by the AWS instances.
A locally hosted (and very large) database.
This all works well in production.
We also have a testing version of the monolith that is inaccessible externally.
I would like to be able to spin up any number of copies of the AWS environment for testing or demo purposes that can access the testing version of the monolith. However, because it's a test system, it needs to remain inaccessible to the public. I know how to achieve this with AWS easily enough (security groups, etc.), but how can I secure the monolith so it can be accessed ONLY by any number of dynamically created instances running in AWS (given that the IP addresses are dynamic and can therefore not be whitelisted)?
The only idea I have right now is to use an access token, but I'm not sure how secure that is.
Edit - My microservices are each running on an EC2 instance.
Assuming you are running your microservices on EC2: if you want API calls from your application servers running in AWS to come from a known IP (or set of IPs), this can be accomplished by using a NAT instance or a proxy. This way, even though your application servers are dynamic, the apparent source of the requests is not.
For a NAT, you would run your EC2 instances in a private subnet and configure them to send all of their Internet traffic out over the NAT instance, which will have a constant IP. Using a proxy server or a fleet of proxy servers can be accomplished in much the same way, but would require your microservice applications to be configured to use it.
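As a sketch of the constant-egress-IP idea (shown here with a managed NAT gateway rather than a NAT instance), the AWS side might look roughly like this with boto3; the region, subnet ID, and route-table ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Allocate a fixed public IP and attach it to a NAT gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa0000000000000",   # placeholder: a public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Route all Internet-bound traffic from the private subnet through the NAT, so
# every (dynamic) microservice instance appears to the monolith as one fixed IP.
ec2.create_route(
    RouteTableId="rtb-0bbb0000000000000",  # placeholder: the private subnet's route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
print("Whitelist this IP on the monolith's firewall:", eip["PublicIp"])
```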
The better approach would be to simply not send the traffic to your microservices over the public Internet.
This can be accomplished by establishing a VPN from your company network to your VPC. Alternatively, you could establish a Direct Connect to bridge the networks.
Side note, if your microservices are actually running in AWS Lambda then this answer does not apply.

Load balancing for custom client server app in the cloud

I'm designing a custom client server tcp/ip app. The networking requirements for the app are:
Be able to speak a custom application layer protocol through a secure TCP/IP channel (opened at a designated port)
The client-server connection/channel needs to remain persistent.
If multiple instances of the server-side app are running, be able to dispatch the client connection to a specific instance of the server-side app (based on a server-side unique ID).
One of the design goals is to make the app scale so load balancing is particularly important. I've been researching the load-balancing capabilities of EC2 and Windows Azure. I believe requirement 1 is supported by most offerings today. However I'm not so sure about requirement 2 and 3. In particular:
Do any of these services (EC2, Azure) allow the app to influence the load-balancing policy by specifying additional application-layer requirements? Azure, for example, uses round-robin allocation for cloud services, but requirement 3 above clearly needs to be factored into the load-balancing decision, i.e. forward based on the unique ID, but fall back to round-robin allocation if the unique ID is not found on any of the server-side instances.
Does the load balancer work with persistent connections, per requirement 2? My understanding from Azure is that you can specify a public/private port pair as an endpoint, so the load balancer monitors the public port and forwards the connection request to the private port of some running instance; basically you can then do whatever you want with that connection. Is this the correct understanding?
Any help would be appreciated.
Windows Azure has input endpoints on a hosted service, which are public-facing ports. If you have one or more instances of a VM (Web or Worker role), the traffic will be distributed amongst the instances; you cannot choose which instance to route to (i.e. you must support a stateless app model).
If you wanted to enforce a sticky-session model, you'd need to run your own front-end load-balancer (in a Web / Worker role). For instance: You could use IIS + ARR (application request routing), or maybe nginx or other servers supporting this.
What I said above also applies to Windows Azure IaaS (Virtual Machines). In this case, you create load-balanced endpoints. But you also have the option of non-load-balanced endpoints: Maybe 3 servers, each with a unique port number. This bypasses any type of load balancing, but gives direct access to each Virtual Machine. You could also just run a single Virtual Machine running a server (again, nginx, IIS+ARR, etc.) which then routes traffic to one of several app-server Virtual Machines (accessed via direct communication between load-balancer Virtual Machine and app server Virtual Machine).
Note: The public-to-private-port mapping does not let you do any load-balancing. This is more of a convenience to you: Sometimes you'll run software that absolutely has to listen on a specific port, regardless of the port you want your clients to visit.
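To make the "run your own front-end load balancer" idea concrete for requirements 2 and 3, here is a rough asyncio sketch of a TCP front end that reads a unique ID from the first line of each connection, pins that ID to a backend instance, and falls back to round-robin for unknown IDs; the backend addresses, the port, and the first-line framing are assumptions for illustration:

```python
import asyncio
import itertools

# Hypothetical app-server instances behind the front end.
BACKENDS = [("10.0.0.11", 9000), ("10.0.0.12", 9000), ("10.0.0.13", 9000)]
round_robin = itertools.cycle(BACKENDS)
id_to_backend = {}  # unique ID -> backend that owns that session

async def pipe(reader, writer):
    # Copy bytes one way until the peer closes; the connection stays persistent.
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # Framing assumption: the client sends its unique ID as the first line.
    unique_id = (await client_reader.readline()).strip()
    # Sticky dispatch: reuse the backend already holding this ID, otherwise
    # pick the next one round-robin and remember the choice.
    backend = id_to_backend.setdefault(unique_id, next(round_robin))
    backend_reader, backend_writer = await asyncio.open_connection(*backend)
    backend_writer.write(unique_id + b"\n")  # replay the ID to the backend
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer),
    )

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 7000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```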
