Set up Continuous Deployment with Dropbox on Windows Azure Website

Where I work, our marketing team is looking for a "quick and easy" method of periodically updating some files on a website of ours. I opened my mouth and said "We can use Azure Websites with Dropbox!". It all works fine, except that with Dropbox, files only deploy if I log into the Azure Portal and click Sync. Needless to say, this is a deal breaker, because the users want to save a file and have everything appear magically.
Is there a way to set up continuous deployments via Dropbox on Azure? I don't mind setting up a job that runs every 15 minutes to perform a file upload if needed, but I would prefer to avoid that if possible.
Thanks In Advance

Currently we don't support continuous sync with Dropbox. The challenge is the noise and the reliability of the site given those changes. Imagine users naturally modifying file by file while Dropbox syncs them one at a time; you can get into a situation where your site is in a transient bad state.

This is not currently possible using the Dropbox integration in Azure Websites. The best option for this is the local Git integration, where Azure provides you with a remote Git location that you can push to, which triggers an update.
So that gives you continuous deployment, but not the Dropbox behavior you want, as someone would still need to commit and push.
To get that, you could look into implementing a Git hook to mimic the behavior, where you would auto-commit and push when a file changes.
Something like this would give you that behavior, but you'd need to translate it to a server-based model.
Git Repo Auto-commit and Push
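For illustration, here's a minimal polling sketch of that server-based model (this is not the linked gist; the folder path, remote name, branch, and interval are assumptions you'd adjust to your setup):

```python
# Poll a synced folder on a server and, whenever anything changes, auto-commit
# and push to the Azure Git remote so Kudu redeploys the site.
# Assumes a Dropbox client (or other sync tool) keeps WATCH_DIR up to date and
# that WATCH_DIR is already a Git working copy with the Azure remote configured.
import subprocess
import time

WATCH_DIR = r"C:\DropboxSync\site"   # assumed local folder synced by Dropbox
REMOTE = "azure"                      # assumed name of the Azure Git remote
BRANCH = "master"

def git(*args):
    return subprocess.run(["git", "-C", WATCH_DIR, *args],
                          capture_output=True, text=True)

def has_changes():
    # 'git status --porcelain' prints nothing when the working tree is clean
    return bool(git("status", "--porcelain").stdout.strip())

while True:
    if has_changes():
        git("add", "--all")
        git("commit", "-m", "Auto-commit of synced content changes")
        git("push", REMOTE, BRANCH)
    time.sleep(15 * 60)  # matches the 15-minute cadence mentioned in the question
```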
Alternatively, you can host the site on GitHub or Visual Studio Online, and I believe you get that hook automatically.

Related

Deploy two VSTS repositories to one Azure web app

Let's say I have an Azure App Service web app at foo.azurewebsites.net. The code for the web app (a simple Node.js server and React frontend) is hosted on VSTS, and a custom deployment script is configured to build and deploy the web app every time code is pushed to the repository's master branch. In other words, the standard web app configuration.
Now, all of my API code (just a Node.js server) is in another repository on VSTS. I'd like to be able to do the following:
Have all requests to foo.azurewebsites.net/api be handled by the API server (an implication of this, which I would nonetheless like to state explicitly, is that the server can ask the browser to set cookies that the web app can then read, and vice versa).
Set up similar continuous deployment for the API server, such that it gets redeployed whenever there are code changes in the API repo.
Be able to maintain the web app and API repositories completely separately.
This seems like a fairly standard scenario...is there an accepted solution? I came across this, but it seems like a pretty hacky way to do it, not to mention the fact that I have no idea what the correct URL is for the web hook for VSTS and can't seem to find any information on it. Also, that example doesn't cover how to deal with point (1) above.
EDIT: Additional clarification
Note that the accepted answer on this question is not what I'm looking for. It describes how to pull from a second repository at deployment time, but not how to have that second repository trigger deployments, or how to handle the fact that the second repository is its own server. Additionally, it introduces a dependency between the two repositories, since the deploy.cmd is presumably under source control in the first repository.
EDIT: Virtual Directories
Thanks to @CtrlDot for pointing out that Virtual Directories are the way to solve (1). Still seeking guidance on (2) and (3).
I think the concept you are referring to is called Virtual Directories
I'm not sure which VSTS task you are using to deploy, but based on the article provided, you should be able to configure it to target only the virtual directory you want to deploy to.
EDIT
Sorry for not being more clear. The AzureRmWebAppDeployment task has a parameter for virtual application name. You would simply set that in your deployment pipeline for the API project (/api) and for the main project (leave it blank)

push local gitlab site issues and comments to remote repo

I've been using git for a little while now in a new project I am working on.
I decided to use GitLab.com as I would like the opportunity to keep my repos private until I'm ready to share them (which GitHub doesn't allow me to do).
The whole beauty of git for me is that I have a copy of the whole repo on my local machine and on the remote site.
However, I make lots of comments on my 'local' GitLab instance.
I know that I can put the wiki into source control; is it possible to do the same thing with the comments and milestones (or in some other way share them between repositories)?
I feel that this should be possible.
Maybe using an RSS feed to push and pull the data to/from the various locations.
Or can I use the issues as a 'mailing list' somehow, with a 'mail into list'? (However, I would then need to get my local GitLab instance to mail any new issues to the remote; this could probably be set up using some form of 'auto forward' filter in my mail client / Gmail.)
Are any of these ideas even possible?
Is there a better solution? I'd prefer something that integrates into my GitLab instances (local and remote) rather than having to use a separate interface; I like everything to be in a single place if possible.
Remember also that I like to have access to my issues etc. when offline (and then have them 'sync' when I go back online).
Thanks for any help in advance.
David
You could build a script and make use of the API to sync your issues and notes. Maybe a script that pulls all of the new issues and notes and POSTs them to the equivalent projects on GitLab.com. You could run the script manually or create a cron job to post the new items periodically.
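For example, here is a minimal sketch of such a sync script using the GitLab REST API (v4 on current GitLab versions) with the requests library. The URLs, tokens, project IDs, and the created_after cutoff are placeholders you'd replace with your own, and pagination is omitted for brevity:

```python
import requests

LOCAL_API = "https://gitlab.local/api/v4"   # assumed URL of the local instance
REMOTE_API = "https://gitlab.com/api/v4"
LOCAL_TOKEN = "LOCAL_PERSONAL_ACCESS_TOKEN"   # placeholder tokens
REMOTE_TOKEN = "REMOTE_PERSONAL_ACCESS_TOKEN"
LOCAL_PROJECT = 42                            # assumed project IDs
REMOTE_PROJECT = 1234

def get(base, token, path, **params):
    r = requests.get(f"{base}{path}", headers={"PRIVATE-TOKEN": token}, params=params)
    r.raise_for_status()
    return r.json()

def post(base, token, path, payload):
    r = requests.post(f"{base}{path}", headers={"PRIVATE-TOKEN": token}, json=payload)
    r.raise_for_status()
    return r.json()

# Copy issues (and their notes) created since the last run to the remote project.
for issue in get(LOCAL_API, LOCAL_TOKEN, f"/projects/{LOCAL_PROJECT}/issues",
                 created_after="2016-01-01T00:00:00Z"):  # placeholder "last run" cutoff
    remote_issue = post(REMOTE_API, REMOTE_TOKEN,
                        f"/projects/{REMOTE_PROJECT}/issues",
                        {"title": issue["title"], "description": issue["description"]})
    for note in get(LOCAL_API, LOCAL_TOKEN,
                    f"/projects/{LOCAL_PROJECT}/issues/{issue['iid']}/notes"):
        post(REMOTE_API, REMOTE_TOKEN,
             f"/projects/{REMOTE_PROJECT}/issues/{remote_issue['iid']}/notes",
             {"body": note["body"]})
```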

Deploy the same Azure binaries to multiple subscriptions

We are trying to work out a good continuous deployment setup using TFS, Visual Studio and Azure. At our company, each developer has their own Azure subscription that we use for testing, as well as shared QA1/QA2/PROD subscriptions that we can deploy to. We have matching TFS XAML build definitions for each of these, running Powershell scripts with parameters and PublishSettings files.
This all gives us a set of .cspkg and .cscfg files, and in theory we can deploy the right cspkg with the correct cscfg to any Azure system.
The problem we are encountering now, though, is that we want to start using the Redis Cache service. Installing the NuGet package writes subscription-specific settings into the web.config, to point at the cache. This means that the cspkg is now compiled specifically for the Azure subscription.
We could use SlowCheetah to merge web.config files on build, but this means that we would have to compile the package for each build definition, and as the number of developers increases this is obviously going to become unsustainable.
I am looking for a way to keep our old generic packages and still use the Redis Cache. We can connect to the cache in code during app_start, but then we can't use it to store IIS session state. I understand that the Azure Load Balancer is meant to keep users on the same server, but I'm unsure how that will work as we swap servers in/out.
It feels like we are approaching the problem wrong and there should be a simple solution that we are overlooking.
We are using Azure Tools 2.6, Visual Studio 2013, TFS 2015r2.
I think there are always 3 ways of doing this.
The 1st is config during build, which means building one package per target environment, as you described; this is not desirable in most scenarios.
The 2nd is config during deployment, which means you open the cspkg file, change the config, then put it back together before upload, without recompiling.
The 3rd is config after deployment: have a configuration management tool adjust the config file for you on the fly.
We use Octopus Deploy to achieve #2 above: our CI tool feeds Octopus the cspkg and cscfg, and Octopus handles the rest. I would definitely not go after #1, but #3 is a valid option too.
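As a rough illustration of #2, here is a hedged sketch that treats the .cspkg as a zip archive and swaps in a per-environment web.config before upload. The entry path inside the package varies by SDK version (newer packages may nest the site content inside a further archive), so the suffix below is an assumption; inspect your own package first:

```python
# Hypothetical sketch: patch an environment-specific web.config into an
# existing .cspkg (a zip-based package) without rebuilding it.
import zipfile

WEBCONFIG_SUFFIX = "approot/web.config"  # assumed location inside the package

def patch_package(src_pkg, dst_pkg, new_webconfig_path):
    """Copy src_pkg to dst_pkg, replacing the entry ending in WEBCONFIG_SUFFIX."""
    with open(new_webconfig_path, "rb") as f:
        new_config = f.read()
    with zipfile.ZipFile(src_pkg) as src, \
         zipfile.ZipFile(dst_pkg, "w", zipfile.ZIP_DEFLATED) as dst:
        for entry in src.infolist():
            data = src.read(entry.filename)
            if entry.filename.lower().endswith(WEBCONFIG_SUFFIX):
                data = new_config  # swap in the per-subscription config
            dst.writestr(entry, data)

if __name__ == "__main__":
    # Hypothetical file names for a QA1 deployment
    patch_package("MyService.cspkg", "MyService.qa1.cspkg", "web.qa1.config")
```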
As of today we store all our connection settings in .cscfg files, although for security reasons we avoid storing any production connection strings in source control, only QA. We have CI for QA, but not for production. This works well for us; we just maintain a different .cscfg for each environment (subscription).
However, in the near future I think we will move to Key Vault for this.
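For what it's worth, a hedged sketch of that Key Vault direction using the current Python SDK (azure-identity and azure-keyvault-secrets); the vault URL and secret name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",   # assumed vault per environment
    credential=DefaultAzureCredential(),            # e.g. a managed identity in Azure
)

# Each environment/subscription gets its own vault (or secret), so the package
# itself stays generic and only the identity/vault binding differs.
redis_connection_string = client.get_secret("RedisConnectionString").value
```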

git deploy to an Azure website has stopped working

I've been successfully using Git deploy (via Kudu) to a couple of Azure websites (e.g., beta/prod) for several months, and it's worked quite well. Starting today, I noticed that when I push to the appropriate respective git branch, my Azure websites will supposedly deploy - i.e., the deploy kicks off, everything builds, all my tests run, and the Azure management portal swears up and down that it's deployed my website - but ... nothing happens. My websites don't change. (Beta and prod pull from different branches of the same git repo, but no matter which I push to, none of the changes included in the latest push show up on either website.)
There are no errors or any other indication of a problem in the logs. The Azure portal detects the git pushes, runs the deployments, and swears that they've happened successfully. But the changes - some very simple ones, i.e., text on a certain page - simply aren't there.
This is the sort of thing that I'd normally contact Azure support for, but my subscription doesn't include tech support :-(. The Azure site recommends asking here on SO, and hence my post.
Any suggestions for further troubleshooting this?
Well, I don't know what was triggering the problem, but resetting the website - by adding a bogus key/value pair to the configuration, and saving it - triggers the website(s) to pick up the changes. Apparently the underlying issue is that the Kudu deploy doesn't seem to be triggering the website to reset itself. I'll add more details in the future if I run into the problem again.
[Edit 2013-10-15 - Today, deploys seem to be working normally again. My guess is that it was some sort of transient Azure bug that's now fixed.]

Using hg repository as web site

This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical people (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These co-workers mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
Yes, it is a bad idea.
Do not have your repository as your website. It means that things checked in, but unworking, will immediately be available. And it means that accidental checkins (it happens) will be reflected live as well (i.e. documents that don't belong there, etc).
I actually address this "concept" (source control as deployment) with a tool I've written (a few other companies are addressing this topic now as well, so you'll see it more). Mine is for SVN (at the moment), so it's not particularly relevant; I mention it only to show that I've considered this previously. (Not on a repository though, but a working copy; in that scenario the answer is the same: better to have a non-versioned "free" area as the website directory, and automate (via user action) the copying of the 'versioned' data to that directory.)
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live-site you're fine. Have a staging/dev area where your non-revision control folks make their changes and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as it's the conscious action of a judging human doing the staging-area -> production-repo push you're fine. You can even put a hook into the production clone that automatically does a 'hg update' of the working directory within that production clone, so that 'push' is all it takes to deploy.
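A minimal sketch of such a hook, written as an external changegroup hook script; the hgrc wiring is shown in the comments and the paths are assumptions:

```python
#!/usr/bin/env python
# Sketch of an external changegroup hook for the production clone. Wire it up
# in that clone's .hg/hgrc (paths here are assumptions):
#   [hooks]
#   changegroup = python /path/to/deploy_hook.py
# Mercurial runs the hook in the repository root after each incoming push.
import subprocess
import sys

def main():
    # Update the working directory to the newly pushed tip so that a plain
    # 'hg push' from the staging area is all it takes to deploy.
    result = subprocess.run(["hg", "update", "--clean"])
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```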
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Me personally (I'm a team of 1), I quite like the idea of using source control as a live website, more so with hg than with svn.
The way I see it, you can update an entire site (add/remove files) with a single command,
which is much easier than 'ftp/ssh this, delete that', etc.
If you are using Apache (and probably IIS as well), you can make a simple .htaccess file that will block all .hg files (or .svn if you are using SVN).
My preferred structure is:
Development site is on the local machine, running directly out of a repository (no security is really required here; do what you like, commit as required).
Staging/test machine is a separate box or VM running a recent copy of the live database
(I have a script to push committed changes to the staging server and run tests; a rough sketch follows this list).
Live machine
(open an SSH connection, push changes to the live server, test again; this can all be scripted reasonably easily, Google for examples).
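A rough sketch of the push-to-staging-and-test step mentioned in the list above; the path alias, SSH host, remote paths, and test command are assumptions for illustration:

```python
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises if a step fails, stopping the deploy

def main():
    # 'hg push' exits non-zero when there is nothing new to push, so don't
    # treat that case as fatal.
    subprocess.run(["hg", "push", "staging"])            # assumed path alias in .hg/hgrc
    run(["ssh", "deploy@staging.example.com",
         "hg -R /var/www/site update --clean"])           # update staging working copy
    run(["ssh", "deploy@staging.example.com",
         "cd /var/www/site && ./run_tests.sh"])           # assumed test entry point
    print("Staging updated and tests passed.")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(err.returncode)
```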
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. Like you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
Why not have a repo also be an active web server (for dev or test/QA environment anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes working change sets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
BTW, we're using Mercurial. I believe this model would only work using a distributed source code management tool.
