Two servers in Perforce Swarm

We have two Perforce P4D instances running. Is it possible to integrate more than one P4D into Perforce Swarm? And if so, how? I cannot find anything in the documentation about it.

No. A Swarm instance only works with one P4D. We've heard this enhancement request before, though. Is there a reason you have two P4D servers? Just curious about your circumstances.

Related

Gitlab backup to different servers

I have a GitLab server and need to back it up to two other servers running in different locations, so that if the main GitLab server goes down we can bring up either of the two backup servers. Kindly help me if there is a solution to enable such a backup, as I am on the free GitLab version.
You have the choice between:
an automatic backup (but you would still need to restore that backup onto a server yourself), and
HA (High Availability), which is closer to what you are looking for;
it is available for all GitLab tiers, with instructions here.
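
To make the backup option concrete, here is a minimal Python sketch (not a definitive setup) that creates an Omnibus backup and copies it to two standby hosts over rsync+ssh. The hostnames are placeholders; /var/opt/gitlab/backups is the Omnibus default backup path, and the /etc/gitlab secrets files are copied separately because they are not part of the backup archive.

    # backup_and_copy.py -- hedged sketch: run a GitLab backup and push it to two standby hosts.
    # Assumes an Omnibus install, passwordless ssh to both targets, and rsync on all machines.
    import subprocess

    BACKUP_DIR = "/var/opt/gitlab/backups"          # Omnibus default backup location
    SECRETS = ["/etc/gitlab/gitlab-secrets.json",   # not included in the backup archive,
               "/etc/gitlab/gitlab.rb"]             # so copy them explicitly
    TARGETS = ["backup1.example.com", "backup2.example.com"]  # placeholder hostnames

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Create the backup archive (GitLab >= 12.1; older versions use
    #    "gitlab-rake gitlab:backup:create" instead).
    run(["sudo", "gitlab-backup", "create"])

    # 2. Copy the archives and the secrets to each standby location.
    for host in TARGETS:
        run(["rsync", "-az", "--delete", BACKUP_DIR + "/", f"root@{host}:{BACKUP_DIR}/"])
        run(["rsync", "-az", *SECRETS, f"root@{host}:/etc/gitlab/"])

Run on a schedule (e.g. cron), this gives you recent archives on both remote machines; bringing one of them up still means restoring the backup there, which is the gap that HA closes.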

Create a failover server, with all configuration files and everything from master server

I have a primary server where I'm running a couple of websites.
A friend of mine has configured everything there. I'm running Debian on my server, with:
ISPConfig (where I manage all my domains, mail, FTP)
Apache
MySQL
phpMyAdmin
I have some very important websites which need to be up and running all the time, and I want to purchase another server so that if this one fails the other one can take over.
I'm planning to use the DNS Made Easy service.
I know I can use rsync to clone all of this, but my question is:
How do I know what needs to be copied to the other server so that I get all the configuration files of all the different services I'm running?
Is there a way to clone one server to another, or what is the best approach here?
I'm very concerned that this server might go down, and I cannot afford to have my websites go down.
Any thoughts and ideas?
Your question is unclear, but here are a few basic technologies that you should consider:
(1) Set up another MySQL server which is a replication slave of the master. The two servers communicate so that the slave is always up to date.
(2) Use version control such as git to manage all the software that is installed on each server, and all versions and changes made to it. Commit the changes as they are made and push them to an independent repository, e.g. a private repository on a public service such as GitHub or Bitbucket.
(3) Arrange for all "asset files" to be similarly maintained, this time probably using rsync.
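
As a rough sketch of point (3), and of answering "what needs to be copied": record the installed package list, then mirror the configuration and asset directories to the standby with rsync. The standby hostname and the directory list below are assumptions for illustration; adjust them to wherever ISPConfig, Apache and MySQL keep their files on your system, and handle the databases with replication or dumps (point 1) rather than raw file copies.

    # sync_to_standby.py -- hedged sketch of mirroring config and asset files to a failover host.
    # Assumes passwordless ssh to the standby and rsync installed on both ends.
    import subprocess

    STANDBY = "root@standby.example.com"   # placeholder failover server
    PATHS = [                              # example paths only; adjust to your setup
        "/etc/",                           # service configuration (Apache, MySQL, ISPConfig, ...)
        "/var/www/",                       # web roots and uploaded assets
    ]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Record which packages are installed so the standby can be provisioned identically.
    with open("/root/package-selections.txt", "w") as f:
        subprocess.run(["dpkg", "--get-selections"], stdout=f, check=True)
    run(["rsync", "-az", "/root/package-selections.txt", STANDBY + ":/root/"])

    # Mirror configuration and assets. Be careful with --delete when targeting /etc/
    # on a machine that already has its own configuration.
    for path in PATHS:
        run(["rsync", "-az", "--delete", path, STANDBY + ":" + path])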

How to make a cluster of GitLab instances

Is it possible to create a cluster of multiple GitLab instances (multiple machines)? My instance is over-utilized and I would like to add other machines, but at the same time access should stay transparent for the user: he doesn't care which instance his project is hosted on.
What could be the best solution to help the users?
I'm on GitLab Community Edition 10.6.4
Thanks for your help,
Leonardo
I reckon you are talking about scaling the GitLab server, not GitLab Runners.
GitLab Omnibus is a fairly complex system with multiple components; some are stateless and some are stateful.
If you currently have everything on the same server, the easiest option is to scale up (move to a bigger machine).
If you can't, you can extract the stateful components and host them separately: PostgreSQL, Redis, and files on NFS.
Be aware that this can actually make performance worse.
As a next step, you can scale out the stateless side.
But it is in no way an easy task.
I'd suggest starting by setting up proper monitoring to see where your limitations are (CPU, RAM, IO) and which components are the bottlenecks.
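
For that monitoring step, even before setting up a full monitoring stack, a quick probe of CPU, RAM and disk IO on the box can tell you a lot. A minimal Python sketch using the psutil library (psutil is my assumption here; any monitoring tool will do):

    # quick_usage_probe.py -- hedged sketch: print CPU, RAM and disk IO once per interval
    # to get a first idea of where the GitLab host is limited. Requires "pip install psutil".
    import psutil

    INTERVAL = 5  # seconds between samples

    prev_io = psutil.disk_io_counters()
    while True:
        cpu = psutil.cpu_percent(interval=INTERVAL)   # averaged over the interval
        mem = psutil.virtual_memory().percent
        io = psutil.disk_io_counters()
        read_mb = (io.read_bytes - prev_io.read_bytes) / 1e6
        write_mb = (io.write_bytes - prev_io.write_bytes) / 1e6
        prev_io = io
        print(f"cpu={cpu:.0f}% mem={mem:.0f}% disk_read={read_mb:.1f}MB disk_write={write_mb:.1f}MB")

If CPU dominates, scaling up or out helps; if IO dominates, moving PostgreSQL or the repositories to faster or separate storage is usually the first win.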
See docs, including some examples of scaling:
https://docs.gitlab.com/ee/administration/high_availability/
https://about.gitlab.com/solutions/high-availability/
https://docs.gitlab.com/charts/
https://docs.gitlab.com/ee/development/architecture.html
https://docs.gitlab.com/ee/administration/high_availability/gitlab.html

How to share a file (data) across multiple Docker containers in Azure

I want to run several Docker containers in different regions (Asia, EU, US), each hosting an nginx server.
However, they should all have the same configuration because I need to update hostnames dynamically at runtime (one domain for every new tenant).
So I guess the easiest way would be to just share one config file among all containers and reload them...
So how can I share data/files among n containers on azure?
In general, unless you want to use proprietary solutions specific to the platform at hand, the best way to synchronise files between multiple systems is with the help of rsync.
For example, in DNS there exists a specialised protocol for transferring domain zones directly within the DNS software, called AXFR. One of the authors of a newer DNS implementation suggests that the AXFR protocol is crap and that rsync over ssh works much better (http://cr.yp.to/djbdns/tcp.html). The ssh part is a nice thing about rsync: it can work over the plain old ssh protocol as far as interconnection between the hosts goes, not requiring any special firewall considerations.
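
For the nginx case specifically, a minimal sketch of that approach in Python might look like this: push the shared configuration from one source-of-truth host to each Docker host over rsync+ssh, then reload nginx inside the container. The hostnames, container name and config path are placeholders, and the sketch assumes the config directory is bind-mounted into each container.

    # push_nginx_conf.py -- hedged sketch: fan out one nginx config to several Docker hosts
    # over rsync+ssh and reload nginx in the container. Assumes passwordless ssh and that
    # the config directory is bind-mounted into the container on each host.
    import subprocess

    CONFIG_DIR = "/srv/nginx/conf.d/"                  # placeholder path on every host
    HOSTS = ["eu.example.com", "us.example.com", "asia.example.com"]  # placeholder hosts
    CONTAINER = "nginx"                                # placeholder container name

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    for host in HOSTS:
        # Copy the shared configuration to the remote host.
        run(["rsync", "-az", "--delete", "-e", "ssh", CONFIG_DIR, f"root@{host}:{CONFIG_DIR}"])
        # Ask nginx inside the container to re-read its configuration.
        run(["ssh", f"root@{host}", "docker", "exec", CONTAINER, "nginx", "-s", "reload"])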
Have you considered using an Azure file share?

Keeping Multiple Servers in a Cluster In-Sync?

I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application I'm running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only changes it on a single instance (one of the app servers) rather than on all the machines in the cluster.
Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from machines to sync. In theory, the perfect application would have small clients running on each machine to be synced, talking to a master server that decides how and what to sync from each machine.
I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are dynamically created depending on the load of the site), simply setting up an rsync cron job is not efficient: the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers and ssh connections.
I'm dealing with setting up a similar solution, and I'm about halfway there. I would recommend you use lsyncd, which basically monitors the disk for changes and then immediately (or at whatever interval you want) automatically syncs the files to a list of servers using rsync.
The only issue I'm having is keeping the server list up to date: since I can spin up additional servers at any time, each machine in the cluster would need to be notified whenever a machine is added to or removed from the cluster.
I think lsyncd is a great solution that you should look into. The issue I'm having may turn out to be a problem for you as well, and it remains to be solved.
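
Since lsyncd reads its target list from its configuration file, one way around the stale server list is to regenerate that config from whatever source of truth knows your current machines and then restart lsyncd. A rough Python sketch, assuming a plain host file and lsyncd's rsync-over-ssh mode; the paths, host file and service name are placeholders, so check the generated settings against the lsyncd documentation:

    # regen_lsyncd.py -- hedged sketch: rebuild the lsyncd config from a host list and
    # restart lsyncd so newly created app servers start receiving changes.
    # The host file, source path and service name are assumptions for illustration.
    import subprocess

    SOURCE = "/var/www/magento"          # directory lsyncd should watch (placeholder)
    HOST_FILE = "/etc/cluster/hosts.txt" # one hostname or IP per line (placeholder)
    CONF = "/etc/lsyncd/lsyncd.conf.lua"

    with open(HOST_FILE) as f:
        hosts = [line.strip() for line in f if line.strip()]

    # Emit one "sync" block per target host using lsyncd's rsync-over-ssh mode.
    blocks = []
    for host in hosts:
        blocks.append(
            'sync {\n'
            '    default.rsyncssh,\n'
            f'    source    = "{SOURCE}",\n'
            f'    host      = "{host}",\n'
            f'    targetdir = "{SOURCE}",\n'
            '}\n'
        )

    with open(CONF, "w") as f:
        f.write('settings { logfile = "/var/log/lsyncd.log" }\n\n')
        f.write("\n".join(blocks))

    # Pick up the new target list (the service name may differ per distribution).
    subprocess.run(["systemctl", "restart", "lsyncd"], check=True)

Whatever spins your servers up or down (an autoscaling hook, for example) just needs to rewrite the host file and rerun this script.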
Instead of keeping tens or hundreds of servers cross-synchronized, it would be much more efficient, more reliable, and above all simpler to maintain just one "admin node" and replicate changes from it to all your "worker nodes".
For instance, at our company we use a Development server -> Staging server -> Live backends workflow, where all the changes are transferred across servers using a custom php+rsync front end. That allows developers to push updates to a Staging server in the live environment, test the changes, and roll them out to the Live backends incrementally.
A similar approach could very well work in your case as well. Obviously it's not a plug-and-play solution, but I see it as the easiest way to go - both in terms of maintainability and scalability.
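
As a small illustration of that push model, the core of such a front end can be little more than an rsync loop driven from the admin node: push to staging first, then roll the same files out to the live backends one at a time. A rough Python sketch with placeholder hostnames and paths (the front end mentioned above is php+rsync; this only shows the shape of the idea):

    # staged_push.py -- hedged sketch of the admin-node push model: deploy to staging,
    # wait for a manual go-ahead, then roll the same files to live backends one by one.
    import subprocess

    APP_DIR = "/var/www/magento/"                       # placeholder document root
    STAGING = "root@staging.example.com"
    BACKENDS = ["root@web1.example.com", "root@web2.example.com"]  # placeholder backends

    def push(target):
        cmd = ["rsync", "-az", "--delete", APP_DIR, f"{target}:{APP_DIR}"]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    push(STAGING)
    answer = input("Staging updated. Roll out to live backends? [y/N] ")
    if answer.lower() == "y":
        for backend in BACKENDS:
            push(backend)                               # incremental rollout, one node at a time
            print(f"{backend} updated")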
