I have a GitLab server and need to back it up to two other servers running in different locations, so that if the main GitLab server goes down we can bring up either of the two backup servers. Please let me know if there is any solution to enable this kind of backup, as I am on the free GitLab version.
You have the choice between:
automatic backups (but you would still need to restore said backup onto a server yourself; see the sketch below)
HA (High Availability), which is closer to what you are looking for.
The latter is available for all GitLab tiers; see the GitLab documentation for instructions.
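A minimal sketch of the automatic-backup route, assuming an Omnibus installation and two reachable standby hosts (the hostnames, schedule, and destination paths are placeholders; `gitlab-backup create` and the paths `/var/opt/gitlab/backups` and `/etc/gitlab` are the Omnibus defaults):

```
# /etc/cron.d/gitlab-backup -- runs as root on the primary GitLab server

# 1. Create an application backup (repositories, database, uploads, ...).
0 2 * * * root /usr/bin/gitlab-backup create CRON=1

# 2. Copy the backup archives plus the two files GitLab does NOT include in the
#    archive (gitlab.rb and gitlab-secrets.json) to both standby servers.
30 2 * * * root rsync -a /var/opt/gitlab/backups/ /etc/gitlab/gitlab.rb /etc/gitlab/gitlab-secrets.json backup1.example.com:/srv/gitlab-backup/
30 2 * * * root rsync -a /var/opt/gitlab/backups/ /etc/gitlab/gitlab.rb /etc/gitlab/gitlab-secrets.json backup2.example.com:/srv/gitlab-backup/
```

To fail over, a standby would need the same GitLab version installed; you would copy the archive into its backup directory, restore the two config files, and run `gitlab-backup restore`.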
I have a primary server where I'm running a couple of websites.
A friend of mine has configured everything there. I'm running Debian on my server.
ISPConfig (where I manage all my domains, mail, FTP)
Apache
MySQL
phpMyAdmin
Now, I have very important websites which need to be up and running all the time, and I want to purchase another server so that if this one fails the other one can take over.
I'm planning to use the DNSMadeEasy service.
I know I can use rsync to clone all of this, but my question is:
How do I know what needs to be copied to the other server so that I get all the configuration files of all the different services I'm running?
Is there a way to clone one server to another, or what is the best approach here?
I'm very concerned that this server might go down, and I cannot afford to have my websites go down.
Any thoughts and ideas?
Your question is unclear, but here are a couple of basic technologies that you should consider:
(1) Set up another MySQL server which is a replication slave of the master. The two servers communicate so that the slave is always up-to-date.
(2) Use version control such as git to manage all the software that is installed on any server, and all versions and changes made to it. Commit the changes as they are made and push them to an independent repository, e.g. a private repository on a public service such as GitHub or Bitbucket.
(3) Arrange for all "asset files" to be similarly maintained, this time probably using rsync (a sketch follows below).
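For points (2) and (3), a rough sketch of what a nightly sync to a standby machine might look like. The hostname and directory list are placeholders; which directories actually matter depends on what ISPConfig, Apache, Postfix, etc. use on your particular box:

```
#!/bin/sh
# Mirror configuration and web content to the standby server.
# Run from cron on the primary; requires SSH key access to the standby.
STANDBY=standby.example.com            # placeholder hostname

rsync -az --delete /etc/apache2/  "$STANDBY":/etc/apache2/
rsync -az --delete /etc/postfix/  "$STANDBY":/etc/postfix/
rsync -az --delete /var/www/      "$STANDBY":/var/www/

# Databases should not be rsynced from the live data directory; use MySQL
# replication (point 1) or at least ship a consistent dump across:
mysqldump --all-databases --single-transaction | gzip \
  | ssh "$STANDBY" 'cat > /var/backups/all-databases.sql.gz'
```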
We have our software hosted in Git on Azure DevOps and built using a build pipeline (which primarily uses a Cake script). We are now looking to deploy this software using the Azure DevOps release pipeline. However, all of our application servers are behind our firewall, inside of our network, and don't have any port open except for 80 and 443 for the web applications. We have dev, staging, and production servers for our apps (including some for load balancing). All I really need is to copy the artifact, backup the current code to a separate folder on the server, deploy and unzip the artifact file in the root deployment folder, and restart IIS on those servers.
My company is rather large and bureaucratic so there are some hoops we have to jump through for due diligence before we even attempt this new process. In that spirit, I am trying to find the best solution. If you can offer your advice, and in particular, offer any other solution we did not think of, that would be helpful:
The obvious solution would be to stand up servers on Azure cloud and move completely to the cloud. I know this is a solution, and this may be where we go, but my request is for non-cloud solution options so I can present this properly and make a recommendation.
Use a Hyper VPN tunnel to securely transfer the files and restart IIS. Probably the easiest and simplest method in regards to our already built build process on AzDO. Technically, this is the one I am least comfortable with.
Use build agents inside the network, connect to them from AzDO, have them build the software, and then have them (or other agents) deploy it. It's a lot of work to set up, but so far the least intrusive to our security. I'm also not a fan because I wanted AzDO to handle builds and deployments.
Open the SFTP and SSH ports for each server and transfer the files that way. Maybe the least secure way but very simple?
If you have a better or more common solution for this problem, let me know. If you think I should use one of the 4 solutions above, let me know. If you can expand on any of the options above, please do.
ADO agents only require outbound connectivity: they talk to ADO, not vice versa. So you only need 443 outbound to a couple of ADO URLs.
Reading: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops#communication
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#im-running-a-firewall-and-my-code-is-in-azure-repos-what-urls-does-the-agent-need-to-communicate-with
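For reference, registering a self-hosted agent inside the network is just the downloaded agent package plus a configuration step that only calls out to ADO; a rough sketch for a Linux agent (organisation URL, pool name, and the PAT are placeholders; check the docs above for the exact, current flags):

```
# On a machine inside the network, after downloading and extracting the agent package.
./config.sh --unattended \
  --url https://dev.azure.com/YourOrg \
  --auth pat --token "$AZP_TOKEN" \
  --pool Default \
  --agent "$(hostname)"

# Run it as a service; it only makes outbound HTTPS (443) calls to Azure DevOps.
sudo ./svc.sh install
sudo ./svc.sh start
```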
You could use Environments. Create an Environment for each VM (that includes registering an agent on the machine) and then use the environment parameter in a YAML pipeline's deployment job. The deployment job can then do whatever you need (deploy the web app, move files, back up, etc.) on your target machine, regardless of whether it's on a private network.
More reading - Azure DevOps Environments: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
Using deployment job: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
I'm evaluating the features of a full-fledged backup server for my NAS (Synology). I need:
FTP access (backup remote sites)
SSH/SCP access (backup remote server)
web interface (in order to monitor each backup job)
automatic mail alerting if jobs fail
lightweight software (no MySQL; SQLite is OK)
optional: S3/Glacier support (as target)
optional: automatic long-term storage after a given time (i.e. local disk for 3 months, Glacier after that)
It seems like the biggest players are Amanda, Bacula and duplicity (likewise).
Any suggestions?
Thanks a lot.
Before jumping into full server backups, please clarify these questions:
Backup software comes in agent-based and agentless flavours; which one do you want to use?
Do you want to go with open-source or proprietary software?
Determine whether your source and destination are on the same LAN or connected over the Internet. Try to get a picture of the bandwidth between source and destination and of the volume of data being backed up.
Also consider your GUI requirements and which other OS platforms the backup software needs to support.
Importantly, check what mail notification configuration is available.
I am presently setting one up for my project and so far have installed Bacula v7.0.5 with Webmin as the GUI. I'm trying the same configuration in the Amazon cloud, using S3 as storage by mounting s3fs into the EC2 instance.
My Bacula software is the free community version. I haven't explored mail notification yet.
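Since duplicity is one of the candidates named in the question, here is a minimal sketch of how it could cover the encryption, S3, and retention requirements (bucket name, paths, and retention periods are placeholders; it has no web interface, and failure mail is bolted on with a plain `mail` call):

```
#!/bin/sh
# Encrypted, incremental backup of a directory on the NAS to S3.
export PASSPHRASE='change-me'          # placeholder GPG passphrase

duplicity --full-if-older-than 1M \
    /volume1/backups/site1 \
    "s3://s3.amazonaws.com/my-backup-bucket/site1" \
  || echo "duplicity backup of site1 failed" | mail -s "backup failure" admin@example.com

# Long-term retention: drop backup chains older than 3 months from the target.
duplicity remove-older-than 3M --force \
    "s3://s3.amazonaws.com/my-backup-bucket/site1"
```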
I am looking for an enterprise Subversion setup that fits the following requirements:
I need at least 2 instances of the repository server for high availability reasons
Management of multiple repositories
The 2 repository servers need to be synchronized.
Easy administration and configuration
User & authorization management with LDAP integration (web-interface) - optional
Backup & restore features, that guarantee the recovery with not more than 1 day of lost data
Fast and easy setup.
Monitoring of the repository (traffic, data volume, hotspots, ...) - optional
good security
either open source or low price tag, if possible
some pricing range, if a commercial tool is recommended.
a VMWare appliance would be great.
I am interested in an appliance or a set of subversion tools, that support these requirements. The operating system should be Ubuntu.
The configuration and setup of the toolset should be doable in hours or at the most a few days...
Our development team is not huge (about 30 people), but grows continually.
I have been unable to find anything (with the exception of Subversion MultiSite, which seems too big, and perhaps too expensive, for our enterprise; they give no price information).
Can anyone recommend a solution? Could you also describe your experiences with the recommended tool?
The easier and faster installation and configuration are, the better... If it comes without a price tag, even better.
Thank you for any help.
I haven't seen a shrink-wrapped setup for this so far. If you want to build it from scratch, here are some pointers:
You can use the built-in svnsync command for mirroring the repo (see the sketch after this list).
For multiple repos, just create a huge one and then add paths below the root.
For me, the command line is "easy admin & config", so I can't help you there.
To get user management, let Subversion listen on localhost (127.0.0.1) and put an Apache web server in front. There are loads of user management tools for web servers.
For backup & restore, see your standard server backup tools.
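For the mirroring point, a rough sketch with svnsync (URLs and paths are placeholders; the mirror needs a pre-revprop-change hook that allows revision property changes, because svnsync copies revision properties):

```
# On the mirror host: create an empty repository and allow revprop changes.
svnadmin create /srv/svn/mirror
cat > /srv/svn/mirror/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /srv/svn/mirror/hooks/pre-revprop-change

# Point the mirror at the master and do the initial copy.
svnsync initialize file:///srv/svn/mirror https://svn-master.example.com/repos/main
svnsync synchronize file:///srv/svn/mirror

# Afterwards, run "svnsync synchronize" from cron (or from a post-commit hook
# on the master) to keep the mirror up to date.
```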
VisualSVN Server answers most of your requirements.
From the web promo page (my emphasis):
Zero Friction Setup and Maintenance
One package with the latest versions of all required components
Next-Next-Finish installation
Smooth upgrade to new version
Enterprise-ready Server for Windows Platform
Stable and secure Apache-based Windows service
Support for SSL connections
SSL certificate management
Active Directory authentication and authorization with groups support
Logging to the Windows Event Log
Access and operational logging (Enterprise edition only)
Based on open protocols and standards
Configured by Subversion committer to work correctly out-of-the-box
I can vouch for VisualSVN. I use the free version for our team of 4 developers, and it does everything it says on the tin, reliably. Installation also took all of 5 minutes. That said, it does require a Windows box.
Running a subversion server in a VMWare instance with one of VMWare's "High Availability" tools will give you most of what you need. There are pre-built VMWare Appliances that have a Subversion server built in. http://www.vmware.com/appliances/directory/308
VMWare's HA features will give you redundancy of the SVN server instance. (You're going to need multiple physical servers for true redundancy; if one server fails, VMWare will restart the instance on another server.)
I don't know of any VMWare appliances that have special backup features, but this is pretty trivial to script. Just run an 'svnadmin hotcopy' once a day, so you have a copy of the repository ready to go in case of a corruption. (On top of this, you really should be using a SAN RAID array with tape backups.)
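A rough sketch of such a nightly job, with placeholder paths; `svnadmin hotcopy` and `svnadmin verify` are standard Subversion commands:

```
#!/bin/sh
# Nightly hot backup of the repository to another directory, then verify it.
REPO=/san/svn/repo                        # live repository (placeholder path)
BACKUP=/san/svn/backup/repo-$(date +%F)   # dated copy (placeholder path)

svnadmin hotcopy "$REPO" "$BACKUP" \
  && svnadmin verify "$BACKUP" \
  || echo "SVN hot backup or verify failed" | mail -s "svn backup failure" admin@example.com
```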
Our setup:
Rack of Blade Servers
VMWare Infrastructure
Virtualized Windows 2003 Server
If Windows crashes or one of the blades goes down, VMWare re-starts the Windows instance.
CollabNet Subversion Server, running Apache with SSPI authentication
SVN repo lives on a SAN
Nightly svnadmin hotcopy and verify of the repo (to another directory on the SAN), so we have a "hot" backup of the repo ready to go in case of a corruption problem.
Nightly tape backups of everything
Tapes taken offsite regularly
The cost of the server hardware and VMWare is going to be your biggest issue (assuming you don't already have this). If you're not willing to make that kind of cash outlay, it may be worth looking at a hosted SVN provider.
We use svn for enterprise work. It is perfectly adequate. There are plenty of enterprise testimonials, including one from Fog Creek (Joel on Software, Stack Overflow).
I don't believe you need anything beyond the regular version.
I suppose you are aware that it is typical to use Subversion with TRAC, the issue tracking system.
What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (one content, plus admin, search and config) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet) and create a new application that uses this database. We just have to redeploy the solutions on the newly created SharePoint application.
However, we have to change the database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work, but we're investigating.
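For reference, the attach and redeploy steps on the test farm are roughly the following (web application URL, database name, and solution name are placeholders):

```
REM Attach the restored content database to a web application on the test farm.
stsadm -o addcontentdb -url http://test-sharepoint -databasename WSS_Content_Restored

REM Redeploy the solution package to that web application.
stsadm -o addsolution -filename oursolution.wsp
stsadm -o deploysolution -name oursolution.wsp -url http://test-sharepoint -immediate -allowgacdeployment
```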
Is this restore scenario reliable (as in, supported by Microsoft)?
You can't really back up / restore the config database and the search database:
restoring the config database only works if your new farm has exactly the same server names
when you restore the search database, the full-text index is not synchronized. However, this is not a problem, as you can just reindex.
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths, ...).
This does not include customizations; you want to keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up transaction logs regularly, as this will make it easier to restore to a given point in the day - it also means that your transaction logs don't grow too big!
I prefer to use the stsadm -o backup command "for catastrophic backup", as it says in the help. This can be scheduled, but it requires some maintenance of the backup metadata XML file when you start running out of disk space and need to archive older backups. It has the advantage of carrying over timer jobs (usually) and other configuration because, as Nico says, restoring the config database won't work in most situations.
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.
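A rough sketch of scheduling the catastrophic backup and of the matching restore (the share path is a placeholder; the directory accumulates the backup metadata XML file mentioned above):

```
REM Nightly full farm backup to a UNC share (schedule via Task Scheduler).
stsadm -o backup -directory \\backupserver\spbackups -backupmethod full

REM Restore the most recent backup from that share, overwriting existing content.
stsadm -o restore -directory \\backupserver\spbackups -restoremethod overwrite
```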