I would like to ask a question regarding disaster recovery. We used to have our own source code repo and build server, so our disaster recovery plan was to restore from a backup in case something failed. Since we moved to Azure DevOps, everything, including the repo, build pipelines, etc., is managed by Microsoft. In that case, what would be the recommended disaster recovery strategy?
The standard answer (most deletion operations in Azure DevOps are recoverable) is not valid in our case.
Backrightup also doesn't suit our situation.
When talking about DR, the fundamental question is what kinds of disasters you have in mind. Microsoft has its own DR, and I think they offer higher reliability than we could achieve with a local system.
One issue with a higher chance of happening is a connectivity problem that leaves you unable to reach the repository. The best protection against that is keeping the repositories checked out on a local system.
A less probable issue is being locked out of your DevOps account, but I imagine you would be able to rectify that quickly.
In a DR discussion about our repos a while back, we concluded it was less of a concern because of the way Git works: Git maintains a full local repository on every machine that has cloned the code, so you can recover most of the code if worst comes to worst.
But these situations are very subjective, so you will have to think about your own case.
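If you want to make the local-clone approach systematic rather than relying on whatever happens to be on developers' machines, a scheduled mirror job is usually enough. Here is a minimal sketch in Python, assuming the `az` CLI with the Azure DevOps extension is installed and signed in; the organization, project, and backup path are placeholders:

```python
import json
import subprocess
from pathlib import Path

# Placeholders: replace with your own organization, project, and backup location.
ORG = "https://dev.azure.com/your-org"
PROJECT = "your-project"
BACKUP_DIR = Path("/backups/azure-devops")

def list_repos():
    # 'az repos list' comes from the azure-devops CLI extension.
    out = subprocess.run(
        ["az", "repos", "list", "--organization", ORG, "--project", PROJECT, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

def mirror(repo):
    target = BACKUP_DIR / f"{repo['name']}.git"
    if target.exists():
        # Refresh an existing mirror, pruning branches deleted upstream.
        subprocess.run(["git", "-C", str(target), "remote", "update", "--prune"], check=True)
    else:
        # First run: take a full bare mirror, including all branches and tags.
        subprocess.run(["git", "clone", "--mirror", repo["remoteUrl"], str(target)], check=True)

if __name__ == "__main__":
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    for repo in list_repos():
        mirror(repo)
```

Run it from cron or a scheduled pipeline. Note that this only covers the Git data; work items, pipeline definitions, and artifacts still need their own export strategy.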
I have a doubt: suppose a company decides one day to store all the code for their project on GitHub. Even if it is a private repository, don't you risk sharing sensitive information that could compromise the entire project? (I am still referring to a free account without any upgrades or such things.)
If this is your perception of GitHub or any other provider, you will have to deploy the repository locally in your own infrastructure. That way you take on the extra effort of maintenance and updates, and you lose all the benefits that GitHub provides.
In any case, as a matter of practice, sensitive information such as credentials is not stored in Git.
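For example, the usual practice is to keep secrets in the environment (or a secret store) and keep the files that hold them out of version control via `.gitignore`. A minimal sketch, where the `DB_PASSWORD` variable and connection string are hypothetical:

```python
import os

# The password never appears in the repository; it is injected via the
# environment (a CI variable, a vault reference, a local .env file that is
# listed in .gitignore, etc.).
db_password = os.environ.get("DB_PASSWORD")
if db_password is None:
    raise RuntimeError("DB_PASSWORD is not set; refusing to fall back to a hardcoded secret")

# Hypothetical connection string built at runtime rather than committed.
connection_string = f"Server=db.example.internal;User=app;Password={db_password}"
```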
So I've come across this AzCopy tool, and multiple tutorials that say it's good for backing up my storage blobs and whatnot.
Isn't Azure Storage automatically backed up? Isn't that what locally redundant means?
I just want to make sure I'm not missing something and putting my application in jeopardy by not running some external backup.
Redundancy is different from backups. Redundancy means that all your changes are replicated to another location. In case of a failover, the secondary can theoretically function as the primary and serve the (hopefully) latest state of your file system. However, the fact that everything is replicated also means that your accidental deletes, file corruptions, etc. are replicated too. Backups are meant to prevent this: if you accidentally mess something up and issue some delete requests, you still have the backups and can usually go back to any point in time (provided you made a backup at that time, of course).
And of course it's not a bad idea to avoid being fully dependent on Azure.
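On that note, a scheduled copy to a second storage account (or another provider) is the simplest way to get an actual backup on top of the built-in redundancy. A minimal sketch, assuming AzCopy v10 is installed and that the SAS-protected URLs below are placeholders for your own accounts:

```python
import datetime
import subprocess

# Placeholders: container-level URLs with SAS tokens appended.
SOURCE = "https://prodaccount.blob.core.windows.net/app-data?<source-sas>"
DEST_ROOT = "https://backupaccount.blob.core.windows.net/backups"

def backup():
    # Copy into a dated folder so older backups are not overwritten by a bad
    # state, which is exactly the failure mode redundancy does not protect against.
    stamp = datetime.date.today().isoformat()
    dest = f"{DEST_ROOT}/{stamp}?<dest-sas>"
    subprocess.run(["azcopy", "copy", SOURCE, dest, "--recursive"], check=True)

if __name__ == "__main__":
    backup()
```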
The most important thing about any backup policy is that, before you create it, you decide what you are protecting against and what sort of data you are backing up.
If the data you are backing up is an offsite copy of working data, and access to that data is restricted to admin personnel who all know what the data is, then replication could well be all you need to protect against a hardware failure on Azure.
If, however, you are backing up customer data, or file data that Fred in accounts randomly deletes when he falls asleep at the keyboard, then you have a different threat model and should plan your backups accordingly.
Where you back it up is very much a matter of personal requirements and philosophy. I have known customers who keep backups on both Azure and AWS (even though their only compute workload was on Azure). If your threat model includes Microsoft going bust and selling all of their kit on eBay one morning, then it makes sense to back up elsewhere. Or you can decide that you trust Azure not to go bust and just split your data across multiple regions.
TL;DR
Understand what you are protecting your data from, and design your backup policy from that.
We are considering building a web application and relying on Azure. The main idea behind this application is that users can work together on specific tasks in the cloud. I'd love to go for the concept of instant releases, where users are not bothered by downtime, but I have no idea how I can achieve this (if it is possible at all). Let's say 10,000 users are currently working in this web application and I release software with database updates.
What happens when I publish a new release of my software into Azure?
What will happen to the brilliant work in progress of my poor users?
Should I bring the site down first before I publish a new release?
Can I "just release" and let users enjoy the "new" world as soon as they request a new page?
I am surprised that I can't find any information about release strategies in Azure; am I looking in the wrong places?
Windows Azure is a great platform with many features that can simplify lots of software management tasks. However, bear in mind that no matter how great a platform you use, your application depends on proper system architecture and code quality: a well-written application will work perfectly fine, while a poorly written application will fail. So do not expect Azure to solve all your issues (though it may help with many).
What happens when I publish a new release of my software into Azure?
Windows Azure Cloud Services has a concept of Production and Staging deployments. New code goes to the staging deployment first. There you can do a quick QA (and sometimes "warm up" the application to make sure all its caches are populated, depending on the application design) and then perform a "Swap": your staging deployment becomes production, and the production deployment becomes staging. That gives you the ability to roll back in case of any issues with the new code. The swap operation is relatively fast, as it is mostly an internal DNS switch.
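The same pattern exists today as deployment slots on Azure App Service. A hedged sketch of the deploy-then-swap flow, driving the Azure CLI from Python; the resource group, app name, and slot name are placeholders, and the QA/warm-up step is left as a comment:

```python
import subprocess

# Placeholders: replace with your own resource group, app, and slot names.
RESOURCE_GROUP = "my-rg"
WEBAPP = "my-webapp"
SLOT = "staging"

def deploy_and_swap(package_zip: str):
    # 1. Push the new build to the staging slot only; production keeps serving traffic.
    subprocess.run(
        ["az", "webapp", "deploy", "--resource-group", RESOURCE_GROUP,
         "--name", WEBAPP, "--slot", SLOT, "--src-path", package_zip, "--type", "zip"],
        check=True,
    )
    # 2. QA and warm up the staging slot here before exposing it to users.
    # 3. Swap staging into production; swapping back again is your rollback.
    subprocess.run(
        ["az", "webapp", "deployment", "slot", "swap", "--resource-group", RESOURCE_GROUP,
         "--name", WEBAPP, "--slot", SLOT, "--target-slot", "production"],
        check=True,
    )
```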
What will happen to the brilliant work in progress of my poor users?
It is always a good idea to perform code deployments during the period of lowest site load (night time). Sometimes that is not possible, e.g. if your application is used by a global organization; in that case you should pick the lowest-activity window you can.
To protect users you could implement features such as an "automatic draft save" that runs every X minutes. But if your application is designed for cloud systems, users should not see any functional failure during a new code release.
Should I bring the site down first before I publish a new release?
That depends on the architecture of your application. If the application is very well designed, then you should not need to. The Windows Azure application I work with has had a new code release once a month, and we have never had to bring the site down since the beginning (for the last two years).
I hope that gives you a better understanding of Azure Cloud Services.
Yes, you can.
I suggest you create one of the Visual Studio template applications and take a look at the "staging" and "production" environments, shown directly when you click your Azure site in the portal.
Say, for example, the users work on the "production" environment, which is connected to SqlServer1. You publish your new release to "staging", which is also connected to SqlServer1. Then you just switch the two using a swap, and staging becomes the "production" environment.
I'm not sure what happens to their work if something is stored in sessions or server-side caches; I guess it will be lost. But client-side state will carry over seamlessly.
"Should I bring the site down first before I publish a new release?"
I would bring up a warning (if the users' work consists of session state and so forth) announcing a brief downtime in 5 minutes, and then after the switch tell everyone it is over.
How safe is it to use an online SVN repository?
I want to develop collaboratively with some friends. I know you can create non-public accounts on some of those services, but I don't feel comfortable handing all of our intellectual output over to another company to manage. After all, if your idea works, those companies can easily find your source code!
Do you think this concern is important? If so, what is the best solution?
My question isn't "how good is it" or "which is better"; I just want to know whether you trust them and why (or why not).
Below are some examples of hosted SVN providers:
XP-Dev
Unfuddle
Assembla
Thank you all!
If you have something valuable enough to be stolen, it's time to get a lawyer anyway. Get him involved from the start, have him review whatever agreements the various hosting sites have to offer, and make sure they can be held accountable for breaches of security, including the value of your source code in the hands of competitors.
It is definitely important to be concerned about your source code in the cloud. At the end of the day you have to weigh the cost of installing, securing, maintaining, and backing up a server yourself against a $10/month plan with a hosted SVN service. There will always be a certain sector that will never upload code to a hosted repo (banks, military, etc.), but for the majority of us the risk is low compared with the benefits of not doing it yourself. Make sure the provider you choose enforces SSL, takes regular backups (with at least hourly granularity), uses a SAS 70 certified datacenter, has a policy allowing you to download your full SVN repo dump if you choose to leave or go elsewhere, has been in business long enough to have a good track record, and enforces a password policy.
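On the "download your full SVN repo dump" point, you don't have to rely solely on the provider's exit policy; you can pull a portable dump yourself on a schedule. A minimal sketch, assuming a Subversion client with `svnrdump` (Subversion 1.7+) and a placeholder repository URL and backup path:

```python
import datetime
import subprocess

# Placeholders: the hosted repository URL and a local backup directory.
REPO_URL = "https://svn.example-host.com/myproject"
DUMP_DIR = "/backups/svn"

def dump_repo():
    stamp = datetime.date.today().isoformat()
    dump_path = f"{DUMP_DIR}/myproject-{stamp}.dump"
    # svnrdump streams a full, portable dump of the remote repository to stdout;
    # it can be loaded into a fresh repository later with 'svnadmin load'.
    with open(dump_path, "wb") as out:
        subprocess.run(["svnrdump", "dump", REPO_URL], check=True, stdout=out)

if __name__ == "__main__":
    dump_repo()
```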
I am working at a small startup with approximately 12-15 developers. We recently had an issue with one of our servers whereby the entire server was "re-provisioned", i.e. completely wiped of all the code, databases, and information on it. Our hosting company informed us that only someone with access to the server account could have done something like this, and we believe it may have been a disgruntled employee (we have recently had to downsize). We had local backups of some of the data but are still trying to recover from the loss.
My question is this: we have recently begun using GitHub to manage source control on some of our other projects, and we have more than a few private repositories. Is there any way to ensure some sort of protection of our source code? What I mean is that I am aware you can delete an entire project on GitHub, or even a series of code updates, and I would like to prevent that from happening.
What I would like to do is create (perhaps as a separate repository) a complete replica of the project in Git and ensure that only a single individual has access to this replica. That way, if the original project is corrupted or destroyed for any reason, we can restore to where we were (with history intact) from the backup repository.
Is this possible? What is the best way to do it? GitHub has recently introduced "Company" accounts... is that the way to go?
Any help on this situation would be greatly appreciated.
Cheers!
Well, if a disgruntled employee leaves, you can easily remove them from all your repositories, especially if you are using Organizations: you just remove them from a team. In the event that someone who somehow still had access maliciously deletes a repository, we keep daily backups of all repositories that we will reconstitute if you ask, so at worst you would never lose more than one day of work. Likely someone on the team will have an up-to-date copy of that code anyhow. If you need more protection than that, then yes, you can set up a cron'd fetch or something similar that mirrors your code more often.
First, you should really consult GitHub support -- only they can tell you how they do backups, what options for permission control they have (especially now that they have introduced "organizations"), etc. Also, you have an agreement with them -- do read it.
Second, it's still very easy to run git fetch from cron, say, once an hour (on your local machine or on your server) -- and you're pretty safe.
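For instance, here is a minimal sketch of that cron job, assuming you have already taken a bare mirror clone of the GitHub repository; the paths and repository name are placeholders:

```python
import subprocess

# Placeholder: a bare mirror created once with
#   git clone --mirror git@github.com:your-org/your-repo.git /backups/your-repo.git
MIRROR_PATH = "/backups/your-repo.git"

def refresh_mirror():
    # Fetches every branch and tag from the remote; branches deleted upstream
    # are pruned locally, but earlier backup copies still let you go back.
    subprocess.run(
        ["git", "-C", MIRROR_PATH, "remote", "update", "--prune"],
        check=True,
    )

if __name__ == "__main__":
    refresh_mirror()
```

A crontab entry along the lines of `0 * * * * /usr/bin/python3 /backups/refresh_mirror.py` runs it hourly.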
Git is a distributed system, so your local copy is the same as your remote copy on GitHub! You should be OK to push it back up there.