I am trying to use Azure Functions and I see that there are Sample and Experimental types of templates.
Can I trust experimental templates in a production environment?
Basically, experimental templates are for languages and/or features that are still in preview (e.g. features like external files, and any language other than C#, F#, or Node).
It's possible that significant breaking changes will be introduced for these preview languages and features. However, you decide whether or not to upgrade your Functions runtime to a newer version, so the old version should still work in production.
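For example, you can pin the runtime with the FUNCTIONS_EXTENSION_VERSION app setting so a new runtime release is never picked up behind your back. A minimal sketch using the Azure CLI (the app and resource-group names are hypothetical):

```sh
# Pin the Functions runtime to the 1.x line; an exact version
# number can be used instead of "~1" for a hard pin.
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-resource-group \
  --settings FUNCTIONS_EXTENSION_VERSION=~1
```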
I am using ODM 8.10 and want to automate building RuleApp files. The code is currently organized in the old Classic Rule Project format, and we are trying to avoid migrating to Decision Services at this time. I have found build jars for Decision Services but nothing so far for Classic Rule Projects. There must be a way to do this, as the RuleApp jar files are created in the Eclipse IDE when you deploy/export a RuleApp. I am trying to find out which jar files the IDE uses and which commands it calls to execute the RuleApp builds.
Re: "There must be a way to do this"
But you will not necessarily have access to it. The ODM product developers have experience, source code, documentation, and other tools that you do not have access to.
Having said that, there is a build/deploy API that you may be able to access via Ant. I haven't used it since switching to Decision Services, which became feasible in ODM 8.7. Standard practice before then was to automate deployments via Ant and a "headless" version of Eclipse. If the latest online docs don't describe it, you might try the older docs.
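As a rough sketch of what that looked like, the Ant build was typically driven through Eclipse's headless Ant runner (the workspace path and build file are placeholders; the ODM-specific Ant tasks themselves would come from the product docs):

```sh
# Run an Ant build inside a headless Eclipse instance.
# org.eclipse.ant.core.antRunner is Eclipse's standard Ant application;
# build.xml would import the ODM rule-build/deploy tasks from the docs.
eclipse -nosplash -data /path/to/workspace \
  -application org.eclipse.ant.core.antRunner \
  -buildfile build.xml
```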
WARNING: Classic Rule Projects are a dead end! Not only will the effort you put into building them in a non-standard way be wasted; I believe it will likely be more trouble than simply migrating to Decision Services (which is usually not that difficult).
I'm trying to wrap my head around how we're supposed to build Azure functions.
I love the idea of building serverless, compact, single-function apps that respond to events.
Here are the problems I'm running into:
I have nice class libraries built in .NET Standard 2 that handle all my "backend needs", namely CRUD operations against Cosmos DB, Azure Table Storage, Azure SQL, Redis, and Azure Storage. No matter what I did, I couldn't integrate these class libraries into an Azure Functions project. More details below.
Also, getting dependency injection working in an Azure Functions project has proven to be quite a task, especially with the class libraries mentioned above.
At this point, the only option I'm seeing is to "copy and paste" code into a new Azure Functions project and use it without any DI (sketched below).
This seems to go against "best practices". So what's the solution, other than creating monolithic code or waiting until Azure Functions supports .NET Core and DI?
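Concretely, the best I can see doing today is composing everything by hand in a static field, along these lines (a sketch; the repository types are hypothetical stand-ins for my class libraries):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

// Hypothetical stand-ins for the .NET Standard class-library types:
public interface ICustomerRepository
{
    Task<string> GetAsync(string id);
}

public class CustomerRepository : ICustomerRepository
{
    public Task<string> GetAsync(string id) => Task.FromResult("customer:" + id);
}

public static class GetCustomerFunction
{
    // "Poor man's DI": the v1 runtime only invokes static methods,
    // so the dependency graph is composed once in a static field.
    private static readonly ICustomerRepository Repository = new CustomerRepository();

    [FunctionName("GetCustomer")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
        TraceWriter log)
    {
        var customer = await Repository.GetAsync("some-id");
        return req.CreateResponse(HttpStatusCode.OK, customer);
    }
}
```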
I thought I could use my .NET Standard class libraries from a regular Azure Functions project targeting .NET Framework. After all, the idea of .NET Standard is to "standardize" things. I opened a couple of posts here on SO. I'm providing the links so that you can see the issues I've run into:
Using .NET Core 2.0 Libraries in WebJob Targeting .NET Framework 4.7
No parameterless constructor error in WebJobs with .NET Core and Ninject
P.S. My previous posts refer to WebJobs. That was my plan-B approach, because WebJobs seem half a step ahead of Azure Functions when it comes to supporting things like .NET Core and DI. Ultimately, I'd like to build a few Azure Functions that can use my class libraries built in .NET Standard 2.
Also, my previous posts mention that my class libraries target .NET Core 2.0. Since then, I converted them to .NET Standard 2, which didn't take much at all. I did this so that I truly conform to .NET Standard 2.
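For reference, the retarget itself was essentially a one-line change in each project file:

```xml
<!-- Previously: <TargetFramework>netcoreapp2.0</TargetFramework> -->
<TargetFramework>netstandard2.0</TargetFramework>
```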
One issue is that Visual Studio ships an outdated version of the Azure Functions Core Tools. Until this is resolved, you can work around it as follows:
Install the latest via npm by running npm install -g azure-functions-core-tools
In your Function App in VS, go to the Properties
Go to Debug, and click New... under Profile
Name the new Profile something like FunctionsNpm
Set the executable to (replace [YourUserName]): C:\Users\[YourUserName]\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\func.exe
Set the arguments to host start
Set the working directory to $(TargetDir)
In the toolbar, look for the green triangle icon and change your current profile to the one you just created.
Now when you run from VS, you'll be using the npm tools instead of the older version that comes with the VS package.
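If you prefer to edit it by hand, the profile created by the steps above lands in Properties/launchSettings.json and looks roughly like this (the user name is a placeholder, as before):

```json
{
  "profiles": {
    "FunctionsNpm": {
      "commandName": "Executable",
      "executablePath": "C:\\Users\\[YourUserName]\\AppData\\Roaming\\npm\\node_modules\\azure-functions-core-tools\\bin\\func.exe",
      "commandLineArgs": "host start",
      "workingDirectory": "$(TargetDir)"
    }
  }
}
```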
.NET Standard 2 support is on its way; see this GitHub issue.
When developing front-end code for the browser, I often use the es2017 preset when transpiling down to a distribution bundle, which gives me all the conveniences of the included transforms. For conventional modules, I usually stick to whatever syntax the Node engine I've specified for that particular module supports.
I would like to start developing these "conventional" modules using babel transformers as well, but I can foresee drawbacks to this, including:
It might inhibit the debugging workflow (more specifically when working with an IDE)
The performance of the module might suffer
What's the current state on this matter - would you say it makes sense to use babel in conventional modules given the aforementioned and other trade-offs? What are the pros/cons for your preferred workflow?
Bonus question: What are some reputable modules and/or module authors out there that are already using this technique? I've seen Facebook do it for their react ecosystem but I guess that makes sense since those are mostly modules for the browser.
The code is transpiled back to vanilla JS (Babel does that part).
What you gain is the ability to use classes, which I have found useful.
Hopefully, with time, browsers will support ES6 natively and we will not need Babel.
The only drawback is that when debugging you have to produce a source map, but that is temporary; see above.
To answer your second question: I'm using React on one of my websites, and most of the modules I needed (from npm) are using ES6.
I believe that neither of the trade-offs or drawbacks you mention applies to developing Node.js code using Babel as an ES7 transpiler. Personally, I find using ES7 features with Node tremendously productive.
There is source-map support for debugging. I use Karma for testing, and it comes with excellent source-map support (I use IntelliJ, but I believe most IDEs will do). You can check out this REST-API repository on GitHub; it's a nice stack for building a Node.js data backend. It uses Karma for testing and even comes with code-coverage support. It also integrates with pm2 for scaling and availability.
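As a minimal sketch of that kind of setup (assuming babel-cli and babel-preset-es2017 from npm; the src/lib paths are hypothetical), the build just transpiles the sources with source maps enabled:

```json
{
  "scripts": {
    "build": "babel src --out-dir lib --source-maps"
  },
  "babel": {
    "presets": ["es2017"]
  }
}
```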
Regarding performance: I think transpiled code has been shown, in many scenarios, to run faster than the code a developer would write without advanced language features available. I will post some links later.
Are there any plans for the ServiceStack packages to start using the SemVer standard? We just had an unfortunate circumstance where we were broken by interface-breaking changes introduced in OrmLite between 4.0.43 and 4.0.44.
We are a sizable commercial customer and have a custom implementation of an OrmLiteDialectProvider for one of our DBMSs. Everything seemed fine upon the initial upgrade in our web application; however, as part of testing, the changes around type conversion broke our system. This wasn't initially evident because our custom implementation lives in a NuGet package that overrides OrmLiteDialectProvider.ConvertDbValue from version 4.0.38, a method which is now gone. There were no binding issues because it is only a minor-version difference.
NuGet adopted SemVer back in version 1.6.
Following the SemVer standard would make it a lot easier for us to know when interface-breaking changes have been made, without having to dig through the Release Notes page.
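For instance, if the packages followed SemVer we could express "take any non-breaking upgrade" directly in packages.config and let NuGet police it (a hypothetical fragment):

```xml
<!-- Under SemVer, anything below the next major version
     would be safe to take automatically. -->
<package id="ServiceStack.OrmLite" version="4.0.43"
         allowedVersions="[4.0.43,5.0)" />
```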
NOTE: The release notes also didn't indicate that the old method had been removed and that upgrading would break any custom implementations.
UPDATE FROM RESPONSE
Anyway, fair enough answer. I can appreciate that it would be difficult to track each package individually. In our case we wrote a custom dialect provider because we have a legacy DBMS that wasn't supported, and this appeared to be the way we were supposed to add that support. We wanted to use OrmLite because we use the rest of ServiceStack, and it's a fantastic product.
The new way of supporting types is a great improvement and actually made our implementation easier (see the sketch below).
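For anyone else hitting this, the change boiled down to replacing the old ConvertDbValue override with per-type converters registered on the dialect provider, along these lines (a sketch; the converter body is illustrative, not our real mapping):

```csharp
using System;
using ServiceStack.OrmLite;
using ServiceStack.OrmLite.Converters;

// The "new way": a per-type converter instead of overriding
// OrmLiteDialectProvider.ConvertDbValue.
public class LegacyDateTimeConverter : DateTimeConverter
{
    public override string ColumnDefinition => "DATETIME";

    public override object ToDbValue(Type fieldType, object value)
    {
        // Translate CLR values into the legacy DBMS representation here.
        return base.ToDbValue(fieldType, value);
    }

    public override object FromDbValue(Type fieldType, object value)
    {
        // Translate DBMS values back into CLR types here.
        return base.FromDbValue(fieldType, value);
    }
}

// Registered once against the custom dialect provider:
//   dialectProvider.RegisterConverter<DateTime>(new LegacyDateTimeConverter());
```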
We actually ran into this issue because we do always keep our ServiceStack package versions in line with each other: we were upgrading the ASP portion for some fixes to the WSDL generation, and this came along as part of that upgrade.
ServiceStack adopts a single rolling version for all of its NuGet packages, which all share the same version number. Across ServiceStack's 60 NuGet packages, it's likely that any given release contains a breaking change to at least one of them, so SemVer would be useless. You should also never mix and match different versions of ServiceStack: when you upgrade, upgrade all packages to reference the same package version. We do aim to keep user-facing breaking changes to a minimum by deprecating old APIs first, maintaining parallel API versions for a while, and then listing the new APIs in the release notes.
IOrmLiteDialectProvider is not a user-facing interface
However, IOrmLiteDialectProvider is not considered a user-facing interface, since it should be extremely rare that anyone implements their own custom provider. It's also the specialization interface for all RDBMSs, and it often changes with every release to support new features, internal refactoring, optimizations, etc. E.g. implementing Type Converters was a major internal refactor that required changes to IOrmLiteDialectProvider but did not affect OrmLite's external user-facing API; later releases included optimizations requiring further changes, which again didn't affect OrmLite's external user-facing API.
SemVer won't help here: every ServiceStack version potentially has a breaking change in some of the packages, and we have no intention of complicating each release by versioning each of the individual packages differently. The issue you're having comes from depending on an unstable interface that's not intended for customization. It's not treated as a user-facing API, so we don't try to maintain compatibility with existing versions or publish breaking changes, which happen nearly every time we add features or optimizations to OrmLite. You should instead check the commit history of IOrmLiteDialectProvider for any changes to this interface.
I'm currently using Visual Studio Web Essentials in order to bundle and minify my CSS and JavaScript files.
At present I'm manually creating the bundles with a version number (e.g. mybundle-1.0.0.css) in order to avoid caching issues when pushing out to production. I'm also having to manually change the bundle file's version number each time a change is made to the source.
Is there any sort of automatic versioning functionality in Web Essentials bundling that I may have overlooked?
The ideal workflow would be:
Developer updates a source file.
Web essentials updates the bundle automatically.
Web essentials increments the version number in the filename automatically.
Is this possible?
If not, I'd be happy to hear any suggestions for better developer workflows.
Web Essentials doesn't have any support for dynamic versioning. Instead, I always use a dynamic runtime feature to automatically append fingerprints to my JS and CSS references. This works better for me because it is completely independent of any build process or tooling support; it just looks at the actual files for changes, which makes it much more robust.
I just wrote it up in a blog post here
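In outline, the approach looks something like this (a sketch assuming classic ASP.NET/System.Web; the full details, including the matching URL rewrite rule that maps the fingerprinted path back to the real file, are in the post):

```csharp
using System;
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

public static class Fingerprint
{
    // Turns "/content/site.css" into "/content/v-<ticks>/site.css",
    // where <ticks> is the file's last-write time, so browsers
    // re-fetch a reference only when the file actually changes.
    public static string Tag(string rootRelativePath)
    {
        if (HttpRuntime.Cache[rootRelativePath] == null)
        {
            string absolute = HostingEnvironment.MapPath("~" + rootRelativePath);
            DateTime lastWrite = File.GetLastWriteTime(absolute);

            int index = rootRelativePath.LastIndexOf('/');
            string result = rootRelativePath.Insert(index, "/v-" + lastWrite.Ticks);

            // The cache entry is evicted automatically when the file changes.
            HttpRuntime.Cache.Insert(rootRelativePath, result,
                new CacheDependency(absolute));
        }

        return HttpRuntime.Cache[rootRelativePath] as string;
    }
}
```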