I am getting ready to release a new web site in the coming weeks and would like the ability to run multivariate or A/B tests between two versions of the site.
The site is hosted on Azure, and I am using the Service Gateway to split traffic between the two instances of the site, both of which are deployed from Visual Studio Online: one from the main branch and the other from an "experimental" branch.
Can I configure Google Analytics to assist me in tracking the success of my tests? From what I have read, Google Analytics seems to focus on multiple versions of a page within the same site for running its experiments.
I have thought of perhaps using two separate tracking codes, but my customers are not overly technically savvy, so I would like to keep things as simple as possible. I have also considered collecting my own metrics inside the application, but I would prefer to use an existing tool as I don't really have the time to implement something like that.
Can this be done? Are there better options? Is there a good NuGet package that might fulfil my needs? Any advice is welcome.
I'd suggest setting a custom dimension that tells you which version of the site the user is on. Then in the reports you can segment and compare the data.
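A minimal sketch of the idea, assuming the Universal Analytics (analytics.js) snippet and a custom dimension slot ("dimension1") that you create in the GA admin; it's shown in Python/Flask for brevity, but the same one-line `ga('set', ...)` call works from an ASP.NET layout view. The `SITE_VERSION` setting and property ID are placeholders:

```python
# Hypothetical sketch: stamp the deployment's variant into the GA snippet.
# Assumes a custom dimension ("dimension1") created in the GA admin and a
# SITE_VERSION value (e.g. "main" or "experimental") set per deployment.
# The analytics.js loader snippet is omitted for brevity.
import os
from flask import Flask, render_template_string

app = Flask(__name__)
SITE_VERSION = os.environ.get("SITE_VERSION", "main")

PAGE = """
<script>
  ga('create', 'UA-XXXXXXX-1', 'auto');
  ga('set', 'dimension1', '{{ site_version }}');  // which build served this hit
  ga('send', 'pageview');
</script>
"""

@app.route("/")
def index():
    return render_template_string(PAGE, site_version=SITE_VERSION)
```

In the GA reports you can then build a segment or custom report keyed on that dimension and compare conversions between the two deployments.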
First of all, I'm not really sure if this question belongs here on Stack Overflow or if I should ask it somewhere else. If that's the case, please point me in the right direction :)
So, for context, this is an app that I was asked to develop for my job. At first I thought of building a web app and hosting it on the company's servers and domain (intranet), but that isn't possible due to external issues that I can't control.
Is there another way to achieve this? The app must have a database and should be accessible to a bunch of users at the same time.
Of course we want to spend the least amount of money possible to make this happen. Also, using a workstation of our own to host everything is not possible either.
Edit: I haven't finished developing it yet, but for now I'm building it with Python and Flask.
The number of users is really small, just up to five people.
OK - I guess a lot of what you'll get in response to this is that your description is too vague. Things such as scale, number of users, and the programming languages used to create the web app are important when talking about hosting.
However, for me, there are three very good options out there for free hosting, up to a certain amount of traffic.
1.) Heroku - Heroku.com
A well-known web hosting platform. You can publish code through GitHub, and it has extensive coverage for different types of web apps. Definitely worth a look.
2.) Netlify - netlify.com
Similar to Heroku, but used by some major companies. Allows you to host for free to a point, and is relatively simple to get started with.
3.) Vercel - vercel.com
A bit more technical in my opinion - but again, very similar to the above two and has a free tier.
All three are great options, and I'd recommend looking into them in more detail to see what option is best for you. Can't go wrong with any of them.
I had a similar problem: A Python-Flask-SQLite app for me and my office pals to use together.
The solution was creating a single .exe file with PyInstaller and hosting it, along with the database file, on a network drive (one that everyone who will use the app has access to). Since everybody (~10 people) sees the same database, things work fine!
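If you go this route, one thing worth guarding against is SQLite's file locking over a network share: with several people writing at once you can hit "database is locked" errors. A rough sketch of opening the shared database with a busy timeout (the UNC path is a placeholder):

```python
# Rough sketch: open a shared SQLite file on a network drive with a busy
# timeout so short write collisions wait instead of failing immediately.
import sqlite3

DB_PATH = r"\\fileserver\team-share\app.db"  # hypothetical UNC path

def get_connection():
    # timeout=15 makes writers wait up to 15 s for a lock to clear
    conn = sqlite3.connect(DB_PATH, timeout=15)
    conn.execute("PRAGMA busy_timeout = 15000")  # same idea at the SQLite level
    return conn

with get_connection() as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello from the shared drive",))
```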
We are considering building a web application and relying on Azure. The main idea behind this application is that users are able to work together on specific tasks in the cloud. I'd love to go for the concept of instant releases, where users are not bothered with downtime, but I have no idea how I can achieve this (if it is possible at all). Let's say 10,000 users are currently working in this web application, and I release software with database updates.
What happens when I publish a new release of my software into Azure?
What will happen to the brilliant work in progress of my poor users?
Should I bring the site down first before I publish a new release?
Can I "just release" and let users enjoy the "new" world as soon as they request a new page?
I am surprised that I can't find any information about releasing strategies in Azure, am I looking in the wrong places?
Windows Azure is a great platform with many different features which can simplify lots of software management tasks. However, bear in mind that no matter how great a platform you use, your application depends on proper system architecture and code quality - a well-written application will work perfectly fine; a poorly written one will fail. So do not expect that Azure will solve all your issues (but it may help with many).
What happens when I publish a new release of my software into Azure?
Windows Azure Cloud Services has a concept of Production and Staging deployments. New code is deployed to staging first. Then you can do a quick QA pass there (sometimes "warming up" the application to make sure all its caches are populated - but that depends on the application design) and perform a "Swap": your staging deployment becomes production and the production deployment becomes staging. That gives you the ability to roll back in case of any issues with the new code. The swap operation is relatively fast, as it is mostly an internal DNS switch.
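For reference, the swap can be driven from code as well as from the portal (PowerShell's Move-AzureDeployment cmdlet covers the same scenario). A hedged sketch using the legacy Python Service Management SDK, where the subscription id, certificate path and service/deployment names are all placeholders:

```python
# Rough sketch using the legacy azure-servicemanagement-legacy package.
# Everything below (subscription id, certificate, service/deployment names)
# is a placeholder - adjust to your own cloud service.
from azure.servicemanagement import ServiceManagementService

sms = ServiceManagementService(
    "<subscription-id>",           # your subscription id
    "path/to/management-cert.pem"  # management certificate uploaded to the subscription
)

# VIP swap: the staging deployment becomes production and vice versa.
result = sms.swap_deployment(
    service_name="my-cloud-service",
    production="currently-in-production",     # name of the current production deployment
    source_deployment="currently-in-staging"  # name of the staging deployment to promote
)
```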
What will happen to the brilliant work in progress of my poor users?
It is always a good idea to perform code deployments during the lowest site load (night time). Sometimes that is not possible, e.g. if your application is used by a global organization; then you should use the lowest-activity time.
To protect users you could implement solutions such as an "automatic draft save" that happens every X minutes. But if your application is designed to work with cloud systems, users should not see any functionality failure during a new code release.
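A minimal sketch of the "automatic draft save" idea, assuming a Flask-style backend (the route names and payload shape are made up for illustration); the client would POST its work-in-progress on a timer every few minutes and restore the latest snapshot on page load:

```python
# Hypothetical autosave endpoint: the browser posts the work-in-progress
# every few minutes, so a deployment (or crash) loses at most one interval.
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
drafts = {}  # in real life this would be durable storage, not a dict

@app.route("/drafts/<user_id>", methods=["POST"])
def save_draft(user_id):
    drafts[user_id] = {
        "content": request.get_json(force=True),
        "saved_at": datetime.now(timezone.utc).isoformat(),
    }
    return jsonify(status="saved", saved_at=drafts[user_id]["saved_at"])

@app.route("/drafts/<user_id>", methods=["GET"])
def load_draft(user_id):
    # after a release (or a crash) the client can restore the last snapshot
    return jsonify(drafts.get(user_id, {}))
```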
Should I bring the site down first before I publish a new release?
That depends on the architecture of your application. If the application is very well designed then you should not need to do that. The Windows Azure application I work with has a new code release once a month, and we have never had to bring the site down since the beginning (in the last two years).
I hope this gives you a better understanding of Azure Cloud Services.
Yes you can.
I suggest you create one of the Visual Studio template applications and take a look at the "staging" and "production" environments, which you can see directly when you click your Azure site in the portal.
Say, for example, your users work on the "production" environment, which is connected to SqlServer1. You publish your new release to "staging", which is also connected to SqlServer1. Then you just switch the two using a swap, and staging becomes the "production" environment.
I'm not sure what happens to their work if they have something stored in sessions or server caches; I guess it will be lost. But client-side stuff will work seamlessly.
"Should I bring the site down first before I publish a new release?"
I would bring up a warning (if the users' work consists of session state and so forth) announcing a brief downtime in 5 minutes, and then, after the switch, tell everyone it is over.
I was reading through these questions:
Scaling Orchard with Azure Web Sites
Orchard CMS Performance
How to deploy Orchard CMS in Windows Azure?
I started to think about an e-commerce project I am undertaking and would like to clarify a few things if possible.
Please forgive me, because I am finding it very difficult to articulate this question in a way that clearly communicates what I am thinking.
Firstly, what factors should prompt me to start thinking about scaling to handle the traffic of my web site, and when would those factors kick in? The factors I am aware of include:
Session handling
Caching
The amount of data being served in a request (though I am not sure of the full implications of request size)
Secondly, as with all things, there should be a certain level of up-front planning when trying to set up a web site that can handle traffic at certain levels. Would the Azure scaling need to be done up front, or is it a simple matter to make it work now for what is needed and then scale up at a later date when necessary?
Let me give a real-life scenario to illustrate where my fear lies:
A radio broadcast was put out for a certain web site trying to sell their wares. The web site was not planned very well. The web site started to receive visits from people listening to the radio show - so many visitors that the web site was not able to handle the traffic, and an error message was displayed telling the world that they should 'talk to the administrator', or words to that effect. You know the picture, I am sure, and I am also very certain it would be embarrassing for any web developer to be told that this was happening to a web site they had designed.
I would really like to distil a proper question out of this, but there are many things that I am just not aware of. To try and make this question less vague, I will try to summarise what I would like to achieve:
I want to have a web site that is able to handle a lot of traffic following successful advertising/marketing campaigns. I want to walk the tightrope of budget versus functionality, which is why I would like to be able to do the least amount possible to start with and be able to easily up-scale as demand dictates.
Bearing this in mind, what approach/considerations should I take to avoid nasty pitfalls with performance/availability/reliability when using an Orchard CMS/Azure combination to deliver my project?
Orchard on Azure Web Sites is working great for us, see http://nublr.pt
A few things to bear in mind with the site configuration are:
follow the guidelines in http://docs.orchardproject.net/Documentation/Optimizing-Performance-of-Orchard-with-Shared-Hosting
set up caching (module Contrib.Cache available in the gallery) which will use IIS's application cache.
set up the Warmup feature to keep the site alive,
also ensure that dynamic compilation is off by using the Config/HostComponents.config
We are currently in the "shared" mode of Azure Web Sites. We don't have much traffic yet, but our load testing with https://loadimpact.com has not taken the site down once. At any time we can move to the "reserved" mode (it does take up to 24h for that to happen).
Version 1.6 will bring a lot of improvements to Orchard, so try to start your development on it.
Hope this has helped.
Note: there are a few similar questions already asked here, but they are from 2009. Maybe something has changed since then.
I'm responsible for a bunch of websites hosted on different servers. I do not do any log analysis right now, but I would like to change this. First question: what is the best tool for viewing ISSUES with a website based on IIS logs (i.e. 404 and 500 responses, long page processing times, etc.), ideally with grouping/sorting options? I do not want to spend a lot of time on this; I just want to periodically check that all is good with the website.
Second question (and I know I'm most likely asking for too much): is there any way to expose the processed logs to the web, so I can review the things mentioned above without RDPing into the server?
Ideally I'm looking for a free/open source solution, but I'm ready to pay for a good software as well (but not a lot of $$).
Thank you.
You can take a look at our log monitoring solution EventSentry, which can monitor text-based logs like IIS logs. We have standard templates set up for IIS, and we can consolidate the logs in a database with web access, so that you can review the logs without using RDP.
It's a pretty flexible solution that allows you to pick the fields you are interested in, and ignore the ones you are not - and thus save space in your database.
You can also set up real-time alerts, so that you get an email when a critical error, like a 500, is encountered in a log file.
http://www.eventsentry.com/features/log-file-monitoring
Finally, you can also plug-in command line tools which can verify that a given web page is accessible, or get alerted when it changes: http://www.eventsentry.com/features/application-monitoring.
I'm biased of course, but I would say that our solution is pretty affordable. Since it offers additional functionality as well, such as service monitoring (to monitor your IIS services) and event log monitoring (IIS does log critical messages to the event log), you can setup comprehensive monitoring with a single product.
I'd look into @LuckyLuke's solution (or similar) - a classic "build vs. buy" decision. Based on your post, this isn't going to be your "full time" job, so IMHO it's best to leave it to those who do...
I don't know what "legacy" answers you are referring to, but if you want to tinker you can use Microsoft's own Log Parser, and depending on how far you want to go with it, you can use it (it's a COM DLL) to write your own "admin web pages" in .NET/ASP.NET and host them on each of your servers....
If you're very specific about which errors you want to be alerted about, another "hacky" way would be to provide your own custom error pages (either the default IIS error pages, or configure your ASP.NET apps to use specific error pages).
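To illustrate the roll-your-own route: instead of Log Parser, even a short script that scans the W3C logs and groups the error responses gets you a periodic health check. A hedged sketch (the log directory is a placeholder, and field positions are read from the `#Fields:` header rather than assumed):

```python
# Rough sketch: group 4xx/5xx responses in IIS W3C logs by status and URL.
# Field positions vary with the logging configuration, so the indices are
# taken from the "#Fields:" header line instead of being hard-coded.
from collections import Counter
from pathlib import Path

LOG_DIR = Path(r"C:\inetpub\logs\LogFiles\W3SVC1")  # placeholder path

errors = Counter()
for log_file in LOG_DIR.glob("*.log"):
    fields = []
    for line in log_file.read_text(errors="ignore").splitlines():
        if line.startswith("#Fields:"):
            fields = line.split()[1:]
            continue
        if line.startswith("#") or not fields:
            continue
        row = dict(zip(fields, line.split()))
        status = row.get("sc-status", "")
        if status.startswith(("4", "5")):
            errors[(status, row.get("cs-uri-stem", "?"))] += 1

for (status, url), count in errors.most_common(20):
    print(f"{count:6}  {status}  {url}")
```

The same script could render its output as a small HTML page if you want to expose the summary over the web instead of RDPing in.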
I've been working on a web application and finally published it to Azure. The application is not critical and currently I use only one role to keep costs down.
I would like to start trying to get a feel for who (if anyone) is using my site. Can anyone give me some suggestions on how I could do this? What I would really like is to avoid anything like the Google scripts that I see some web sites use for monitoring page hits. I would like to do as much as possible on the server.
Help and advice on where to start and what to look at would be much appreciated.
Katarina
Aside from things like Google Analytics and StatCounter, you'd want to set up some performance counters that you can watch externally. This requires you to use the Diagnostic Monitor:
Set up performance counters to track, and how often to poll for values
Set up frequency to upload to Table Storage
Diagnostic data is aggregated from all your instances, so then you can run queries against the diagnostic tables. Cerebrata has a page that details these table names (you can also use their Diagnostics Manager tool, other 3rd-party tools, or roll your own).
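As a rough illustration of "roll your own": the performance counter samples land in a table conventionally named WADPerformanceCountersTable, which you can query directly. A hedged sketch using the classic azure-storage Table SDK (the account name/key are placeholders, and newer SDKs use azure-data-tables instead, so treat this as a starting point):

```python
# Rough sketch: pull recent CPU counter samples out of the diagnostics table.
# Assumes the classic azure-storage package with TableService and the
# standard WAD schema (RoleInstance, CounterName, CounterValue columns).
from azure.storage.table import TableService

table = TableService(account_name="<storage-account>", account_key="<key>")

rows = table.query_entities(
    "WADPerformanceCountersTable",
    filter="CounterName eq '\\Processor(_Total)\\% Processor Time'",
)

for row in rows:
    print(row.Timestamp, row.RoleInstance, row.CounterValue)
```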
Igork posted this StackOverflow answer as well, which references some blog posts by Azure MVP Neil Mackenzie.
To add to Dave's answer, there are three levels of monitoring you can do:
If you want to know who is using your site, Google Analytics is the best option and is free... There are a few others, but all involve injecting a small piece of JavaScript into your pages.
If you want to know the load your site is under, inspecting performance counters via Cerebrata's tool is likely best: http://www.cerebrata.com
If you want to go one step further and be notified when the load on your site is outside your predefined conditions (active monitoring), or have your website automatically scale up when the load is too high, AzureWatch is probably the best option: http://www.paraleap.com
HTH