How do new features/changes in Azure Data Factory become available?

Suppose I start using Azure Data Factory today; at some point the tool is likely to see improvements or other changes. Note that I am not talking about what I do inside the tool, but about the Data Factory itself. How will these changes become available to me?
Will I be able to look at the changes before they happen (and for how long)?
Will I be able to stay on an old version if I do not like the new one or have not finished testing (e.g. security testing)?
Is there any indication of how often changes are rolled out? (Once a year? Ten times a day?)
Does any of the above depend on the type of change (big, small, feature/bug/vulnerability)?
I suspect that people have this question for many similar tools, so though I am specifically interested in Azure Data Factory at this time, an indication of whether the answer applies to other types of solutions (within Azure, or perhaps even for other vendors) would be useful.

Suppose I start using Azure Data Factory today; at some point the tool is likely to see improvements or other changes. Note that I am not talking about what I do inside the tool, but about the Data Factory itself. How will these changes become available to me?
Will I be able to look at the changes before they happen (and for how long)?
You are talking about a managed solution, so I would expect a continuous stream of (small) fixes and improvements. That said, changes to Azure products are generally announced; see the ADF updates page.
Big changes might first be accessible as an opt-in preview feature before becoming Generally Available.
Is there any indication of how often changes are rolled out? (Once a year? Ten times a day?)
Since it is a managed solution, why bother with such details? Rest assured that breaking changes are rare and announced well in advance.
Will I be able to stay on an old version if I do not like the new one or have not finished testing (e.g. security testing)?
Again, this is a managed cloud service we are talking about. It is not an installable product where you can decide to stay on an older version forever. Changes will be pushed, and you have to hope they are for the better ;-)
I suspect that people have this question for many similar tools, so though I am specifically interested in Azure Data Factory at this time, an indication of whether the answer applies to other types of solutions (within Azure, or perhaps even for other vendors) would be useful.
It will vary per company and per (type of) product. For most Azure services the answer will be the same.

Related

Has anyone used OpenAM/OpenDJ/OpenIDM suite without using ForgeRock's Support plans?

We are looking to implement an open source identity management system and have identified ForgeRock's stack as the best fit.
The high cost of ForgeRock support and its per-user pricing model, however, is a potential roadblock. Our current user base is ~45K, but we expect to ramp up to 1M in the next 2 years.
So we're looking into scenarios where we proceed without FR Support. The lack of FR Maintenance releases would seem to put a damper on that, so we're curious if others have gone that route.
What has been your experience?
What kind of projects have you done this for? Size, etc.
In the absence of FR's Maintenance releases, have you been able to easily create your own patches?
What are some potential pitfalls?
If there are blogs or other communities that deal with this topic, please point me in their general direction.
Thanks.
As a community user I have used OpenAM (formerly OpenSSO) and OpenDJ for the past 6 years or so, but it was a very small deployment (10k users and only 1 server instance of each product).
1) In the early stages we did have reliability issues with OpenAM, which we mostly resolved by restarting the server instances - clearly not preferred, but we didn't really spend too much development effort on actually trying to resolve them (plus we lacked the necessary knowledge for investigation back then). After spending some actual effort on learning the product, it turned out that most of our issues were either self-inflicted (badly written customizations or misconfigurations) or something that had recently been resolved in the OpenAM project and was relatively simple to backport to our version.
Of course, the experience itself largely depends on how often you want to make configuration changes in the deployment. Since we weren't changing a lot of things over the years, OpenAM just worked nicely for long intervals without requiring any kind of maintenance.
3) Since we didn't really run into new issues (the config barely changed), there weren't too many surprises after a while. The security patches were mostly simple to backport and didn't cause too much trouble. (It did help that after 1.5 years I became a FR employee and actively worked on OpenAM issues though :) )
4) I think running without a subscription has its risks, but they mostly relate to:
are you planning to roll out new features based on OpenAM functionality during those 2 years (i.e. are you planning to constantly make changes to the deployment)?
do you have good developers to work on these features? Working with OpenAM, for example, can quite easily require you to look at the source code to figure out how things work, though the quality of the documentation has improved a lot over the years. Regardless, backporting fixes is going to get more and more difficult over time, as the releases will differ a lot more (since the development team is getting bigger for each project) - and even then you can't just assume that all the issues you run into are by definition already resolved in trunk. The need to resolve some issues on your own is a cost/risk you need to take into account.
what kind of SLA do you want to have for your deployment? Is your business going bankrupt after a 1-minute outage? Is it acceptable to just restart your service frequently (in case you run into some weird issue)?
do you really need support for all 3 products? For example, my background would allow me to work easily without OpenAM support, but I would be in the deep end if something went wrong with my provisioning system...
And a generic remark:
User growth of 20x within two years sounds a bit unrealistic, or very hopeful at least. Maybe what you should look for is a 1-year subscription for a more reasonable target number, and then a renewal once you have a better understanding of customer growth in your business?

How to deal with Azure Outages, current one was a network drop between websites and SQL Database

We just suffered a SQL Database connectivity issue on Azure. Although very quick, around 1 minute, it did kick all users out and/or raised Elmah errors such as:
The wait operation timed out ...
at System.Data.ProviderBase.DbConnectionPool.TryGetConnection
Even glitches like this compromise confidence. I am trying to understand good approaches to handling these transient outages. Some thoughts that come to mind include:
a) Have some code that checks that all required services are running before using them, and keep checking (with a friendly error message) until they are. I think there is a tendency to assume all is available and working, and I wonder whether this is a dangerous assumption in the world of cloud. I suppose this is more an approach one would take when building a distributed application, although one may not for a database, which is usually close to the web application.
b) Use failover procedures such as Traffic Manager. However, this is expensive, as one now has more than one instance, and one also needs to take care of syncing data across more than one DB, etc. Associated link: Failover procedure in Azure
c) Make sure Custom Error pages are used so the Yellow Screen of Death (YSOD) is not seen:
<customErrors mode="RemoteOnly" defaultRedirect="~/Error/Error" />
Although the YSOD was seen by a colleague; not sure how, with the above in force. One criticism I have of Azure is that if Websites are down, one can get bad error pages, provided only by Azure and not customisable, although I was advised that using something like CloudFlare can sort this issue.
I think a) is the most interesting concept. Should we code Azure Web Apps as if they are WAN rather than LAN applications, and assume nodes could be down, and so check beforehand?
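For what it's worth on a): the usual alternative to checking availability up front is to treat every call as potentially failing and retry transient errors with backoff. The stack in question is ASP.NET, but the pattern is language-agnostic; below is a minimal sketch in Java/JDBC, where the connection string, attempt count, and delays are illustrative assumptions rather than recommendations.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TransientRetry {

    // Hypothetical connection string: replace with your own
    private static final String URL =
        "jdbc:sqlserver://example.database.windows.net;databaseName=mydb;user=app;password=secret";

    // Runs one query, retrying on SQLException with exponential backoff.
    // Illustrative policy: up to 4 attempts, waiting 1s, 2s, 4s between them.
    public static int countUsers() throws SQLException, InterruptedException {
        long delayMs = 1000;
        for (int attempt = 1; ; attempt++) {
            try (Connection con = DriverManager.getConnection(URL);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM Users")) {
                rs.next();
                return rs.getInt(1);
            } catch (SQLException e) {
                if (attempt >= 4) {
                    throw e; // give up and let the caller / error page handle it
                }
                Thread.sleep(delayMs); // back off before trying again
                delayMs *= 2;
            }
        }
    }
}

In production you would retry only errors you believe to be transient (timeouts, dropped connections) rather than every SQLException, but the shape stays the same.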
I would really appreciate thoughts on the above. Our feeling is that Azure is getting a few too many of these outage blips now, which may be due to increased customers... not sure, although it is no doubt within the 99.9% annual SLA.
EDIT1
A useful MSDN Azure Cloud Architecture article on this:
Resilient Azure Website Architectures

Recommendations for automatically logging unexpected errors/stack traces to bug tracker

We have been looking at automatically logging all unexpected client errors to our bug tracker. For reference, our application is written in Java/GWT/Guice/Hibernate/Jetty and our bug tracker is the hosted version of FogBugz, which can create bugs programmatically or via an email.
The biggest problem I see with doing this is that stack traces that happen in a loop can overload the bug tracker by creating thousands of cases. Does anybody have a suggested way to handle automatic bug creation like this?
If you're using FogBugz BugzScout (also see the up-to-date docs here), it has the ability to just increase the number of occurrences of the same problem, instead of creating a new case for the same exception again and again.
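If you end up rolling the aggregation yourself rather than relying on the tracker, the core idea is the same: fingerprint each stack trace and count occurrences instead of filing a new case every time. A minimal sketch in Java (the class name, frame count, and first-occurrence rule are made up for illustration):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class ErrorAggregator {

    // fingerprint -> number of times it has been seen
    private final ConcurrentMap<String, AtomicLong> seen = new ConcurrentHashMap<>();

    // Returns true only on the first occurrence of a fingerprint;
    // afterwards it just bumps the counter.
    public boolean shouldFileNewCase(Throwable t) {
        AtomicLong count = seen.computeIfAbsent(fingerprint(t), k -> new AtomicLong());
        return count.incrementAndGet() == 1;
    }

    // Exception class plus the first few stack frames is usually a
    // stable-enough identity for "the same problem".
    private static String fingerprint(Throwable t) {
        StringBuilder sb = new StringBuilder(t.getClass().getName());
        StackTraceElement[] frames = t.getStackTrace();
        for (int i = 0; i < Math.min(5, frames.length); i++) {
            sb.append('|').append(frames[i]);
        }
        return sb.toString();
    }
}

A scheduled job can then periodically flush the counters to the tracker as updates to existing cases, so a tight loop only ever costs you one case plus a number.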
Are you sure that you want to do that?
It obviously depends on your application, but even by carefully taking care of the cases that could generate lots of bug reports (because of loops), this approach could still end up filling the bug tracker.
How about this?
Code your app so that every time an exception is thrown, you gather info about the client (IP, login, app version, etc.) and send that plus the stack trace (or the whole exception object's .ToString()) by email to yourself (or the dev team).
Then in your email client, have a filter that sorts that incoming mail and puts it in a nice folder for you to look at later.
Thus you can have tons of emails about maybe one or more issues, but you don't really care, because you enter the issues yourself into the bug tracker and can easily delete that ton of mail.
That's what I did for my app (which is a client-server desktop app). It plays out well in this case.
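If the app is on the JVM (as the questioner's is), the emailing part is only a few lines with JavaMail; a rough sketch, where the SMTP host and addresses are placeholders:

import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class CrashMailer {

    // Emails one exception report with client context attached.
    public static void mailError(Throwable t, String clientInfo) throws Exception {
        // Render the full stack trace into a string
        StringWriter trace = new StringWriter();
        t.printStackTrace(new PrintWriter(trace));

        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com"); // placeholder SMTP relay

        MimeMessage msg = new MimeMessage(Session.getInstance(props));
        msg.setFrom(new InternetAddress("app@example.com"));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress("dev-team@example.com"));
        msg.setSubject("[crash] " + t.getClass().getSimpleName() + ": " + t.getMessage());
        msg.setText(clientInfo + "\n\n" + trace);
        Transport.send(msg);
    }
}

Combined with the mail-client filter described above, this keeps the tracker clean while losing no information.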
Hope that helped!
JIRA supports automated issue creation using so-called services: documentation.
Does anybody have a suggested way to handle automatic bug creation...?
Well, I have. Don't do that.
What are you going to gain from that? Saving the testers' effort? In my experience, whatever effort one can save that way is lost multiple times over in the overhead transferred to developers, who have to analyze and maintain the automatically created tickets anyway. Not to mention the overall frustration caused by that.
The least counterproductive way I can imagine would be something like establishing a dedicated bug category or issue tracker instance that only testers can see and use.
In that "sandbox", auto-created bugs could be assigned to testers who would later pass analyzed and aggregated bug reports to developers.
And even in that case, I'd recommend paying close attention to what the users (testers) say about the system. If they start complaining about it, consider trying a manual way of doing things instead.

SharePoint 2010: solution/feature upgrade recommended practices

This is kind of an open question: I'm trying to define for a team a set of recommended practices for delivering a SharePoint solution in stages. That means changes will happen at every level (solution, feature, web part, content types, etc.).
In your experience, which are the practices that really, really worked for you? For example, using upgrade custom actions? Placing upgrade logic entirely in FeatureUpgrading event handlers? Or in FeatureActivated handlers, and assuming features could already exist? Something else?
I'm asking because I know of projects that follow The Word on this from many MSDN articles and still find upgrades kind of nightmarish to manage, and those processes are sometimes difficult for average devs to grasp.
Thanks!
As no one else has ventured an answer, my current approach is:
Use the declarative approach for those cases where it works 100% of the time, e.g. NOT content types
Fall back to code for the rest
Always write the related code so that it can be run multiple times against either the pre- or post-upgrade state (see the sketch below)
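To make that last point concrete: "runnable multiple times" simply means every step is guarded by a check of the current state, so activating against an already-upgraded site is a no-op. Real SharePoint receivers are C#, but the shape is language-agnostic; here is a sketch in Java against a hypothetical provisioning API (all names invented):

// Sketch of idempotent provisioning: every step is guarded by a state
// check, so the method is safe to run against either the pre- or the
// post-upgrade state. Site is a hypothetical stand-in for the real
// (C#) SharePoint object model.
public class UpgradeStep {

    interface Site {
        boolean hasField(String name);
        void addField(String name, String type);
        String getProperty(String key);
        void setProperty(String key, String value);
    }

    public void activate(Site site) {
        // Guard every mutation: a no-op on re-runs
        if (!site.hasField("CustomerRegion")) {
            site.addField("CustomerRegion", "Text");
        }
        // Version-stamp the site so later upgrades know their starting point
        if (!"2".equals(site.getProperty("schemaVersion"))) {
            site.setProperty("schemaVersion", "2");
        }
    }
}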

Domain repository for requirements management - build or buy?

In my organisation, we have some very inefficient processes around managing requirements: tracking what was actually delivered in which versions, whether subsequent releases break previous functionality, etc. It is currently all managed manually. The requirements are spread over several documents and issue trackers, and the implementation details are in code in Subversion, Jira, and TestLink.
I'm trying to put together a system that consolidates the requirements info, so that it is sourced from a single, authoritative source, is accessible via standard interfaces (web services, browsers, etc.), and can be automatically validated against. The actual domain knowledge is not that complicated but is highly proprietary and non-standard (i.e., not just customers with addresses, emails, etc.), and is relational: customers have certain functionalities, features switched on/off, specific datasources hooked up - all on specific versions. So modelling this should be straightforward.
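For illustration, the kind of model described above really is only a handful of entities; a sketch in Java, with all names invented:

// Hypothetical sketch of the entities just described; all names invented.
public class RequirementsModel {

    record Customer(String id, String name) {}

    record Feature(String key, String description) {}

    // Which feature is on or off for which customer, per delivered version
    record FeatureAssignment(String customerId, String featureKey,
                             String version, boolean enabled) {}

    // Which datasource a customer has hooked up, again per version
    record DatasourceBinding(String customerId, String datasourceName,
                             String version) {}
}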
Can anyone advise on the best approach for this? I am certain that I could develop a system from scratch that matches the requirements exactly, in say Ruby on Rails, Grails, or some RAD framework. But I'm having difficulty getting management buy-in; they would feel safer with an off-the-shelf solution.
Can anyone recommend such a system? Or am I better off building it from scratch, as I feel I am? I'm afraid a bought system would take just as long to deploy, and would not meet our requirements.
Thanks for any advice.
I believe that you are describing two different problems. The first is getting everyone to standardize, and the second is selecting a good tool for requirements management. I wouldn't worry so much about the tool as I would about the process and the people. Having the best tool in the world won't help if your various project managers don't want to share.
So, my suggestion is to start simple. Grab Redmine or Trac and take on the challenge of getting everyone to standardize. Once you have everyone in the right mindset then you can improve the tools you use for storage.
{disclaimer - mentioning my employer's product}
The brief experiments I made with the commercial tool RequisitePro seemed pretty good to me. It allowed one to annotate existing Word docs and create a real-time linked database of the identified requisites, then perform lots of analysis and tracking of them.
Sometimes when I see a commercial product I think "Oh, well, nice glossy bits, but the fundamentals I could knock up in Perl in a weekend." That's not the case with this stuff. I would certainly look at commercial products in this space and experiment with a couple (ReqPro has a free trial; I guess the competition will too) before spending time on my own development.
Thanks a mill for the reply. I will take a look at RequisitePro; at least I'll be following the "Nobody ever got fired for buying IBM" strategy ;) You're right, and I kind of knew it: in these situations, buy is better. It is tempting when I can visualise throwing it together quickly, but there are other tradeoffs and risks with that approach.
Thanks,
Justin
While RequisitePro enforces a standard, and that can certainly help you in your task, I'd second Mark on trying to standardize the input by agreement with personnel and using a more flexible tool like Trac or Redmine (both of which have incredibly fast deploy and setup times, especially if you host them from a VM), or even a custom one if you can get management to endorse your project.
