Web Essentials - Bundle versioning (visual-studio-2012)

I'm currently using Visual Studio Web Essentials in order to bundle and minify my CSS and JavaScript files.
At present I'm manually creating the bundles with a version number (e.g. mybundle-1.0.0.css) in order to avoid caching issues when pushed out to production. I'm also having to manually change the bundle files version number each time a change is made to the source.
Is there any sort of automatic versioning functionality in Web Essentials bundling that I may have overlooked?
The ideal workflow would be:
Developer updates a source file.
Web Essentials updates the bundle automatically.
Web Essentials increments the version number in the filename automatically.
Is this possible?
If not, I'd be happy to hear any suggestions for better developer workflows.

Web Essentials doesn't have any support for dynamic versioning. Instead, I always use a dynamic runtime feature to automatically append fingerprints to my JS and CSS references. This works better for me because it is completely independent of any build process or tooling support; it just looks at the actual files for changes, so it's much more robust.
I just wrote it up in a blog post here
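Since the post isn't reproduced here, below is a minimal sketch of that kind of runtime fingerprinting in ASP.NET. The Fingerprint class name is illustrative, and you would pair it with an IIS rewrite rule that strips the version segment so the fingerprinted URL still maps to the physical file:

using System;
using System.IO;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

public static class Fingerprint
{
    // Turns "/content/mybundle.css" into "/content/v-634936923553088815/mybundle.css".
    // The ticks come from the file's last write time, so the URL changes
    // whenever the file on disk changes and browsers re-download it.
    public static string Tag(string rootRelativePath)
    {
        if (HttpRuntime.Cache[rootRelativePath] == null)
        {
            string absolutePath = HostingEnvironment.MapPath("~" + rootRelativePath);
            DateTime lastWrite = File.GetLastWriteTime(absolutePath);
            int slash = rootRelativePath.LastIndexOf('/');

            string result = rootRelativePath.Insert(slash, "/v-" + lastWrite.Ticks);

            // Cache the fingerprinted URL; the CacheDependency evicts it
            // the moment the underlying file changes.
            HttpRuntime.Cache.Insert(rootRelativePath, result,
                new CacheDependency(absolutePath));
        }

        return HttpRuntime.Cache[rootRelativePath] as string;
    }
}

In a Razor view you would then reference the bundle with <link rel="stylesheet" href="@Fingerprint.Tag("/content/mybundle.css")" />.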

Related

How can I maintain a modern static website without transpiler or bundler tools?

We have a static website whose maintenance we outsource. We don't have a source code repository, so the contractor edits the code directly on the production server.
This has caused no problems, since our website was built decades ago with old-school HTML4 only: what is stored on the web server is the source code.
Today, a website may be composed with a UI framework, e.g. Vue, React, etc. Sometimes the HTML contains web components and other JS modules. A little googling taught me that building a website today involves npm, Node.js, Webpack, Gulp, and so on; they manage JS modules and bundle/build the production code.
My problem is that we would like to revamp our website with a modern UI (HTML5, CSS3, mobile friendly, ...). The tools I just mentioned "process" the source code and output production code, but we have no source code server (e.g. a Git server) where our contractor could store the source. (Our company management doesn't allow us to purchase private repository services on the internet, e.g. GitHub, GitLab, etc.)
Can I keep using the old-school way, where the source code on the production web server is always the only source code?
I have tried RequireJS myself; it loads JS modules in the browser, so I can handle module loading without Node.js and Webpack, writing the web components in vanilla JS. Is that my only option?
You certainly could continue to manage this site the "old school" way, but in doing so, you'll be ignoring the benefits that all the modern tools give you.
For example
no git (or other version control) means no rolling back changes (or errors)
using version control software also means you have a backup and you don't need to set up a backup scheme on the production server to save your files
editing on the production server means that if someone makes a typo, the site is messed up, etc.
I would strongly recommend modern tools; if cost is a concern, consider free tools:
Bitbucket has long offered free private repositories; GitHub has recently started offering them as well.
Tools such as Hugo, Jekyll, and others permit creation of static sites quickly and easily.
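For instance, scaffolding and previewing a Hugo site is just a few commands (assuming Hugo is installed; "mysite" is a placeholder name):

hugo new site mysite
cd mysite
hugo server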
Edit in answer to some of the comments...
Switching to a more modern development workflow (including version control) is not just about saving money, it's also about:
Does the employer/client want their developer(s) spending a lot of time managing the site - possibly including fixing problems - or do they want them working on something else?
Is the employer/client willing to have periods of time when the site does not work correctly? As @birdspider mentions in the comments above, if you have multiple people working on the website on the production server, they're going to be messing up each other's work. Note that the use of a VCS helps avoid some of the problems with people stepping on each other's toes, and it also makes fixing those conflicts much easier.
If you approach the employer/client with these points and their answer is "we just don't like it", then there's probably not much else you can do. If I were in your shoes, I'd be strongly tempted to either a) implement something on my own (just to preserve my own sanity, although really this is probably not a good idea) or b) find a new job.

GitVersion – selective versioning of multiple assemblies of the same project

I'm on a .NET C# project composed of a solution with several class library projects.
The source control is managed by git using gitflow as branching model.
We have decided that we wanted to implement semantic versioning (http://semver.org/) of the project in order to follow a standard way to communicate our releases.
For that we are using GitVersionTask (via NuGet) which works pretty well with gitflow.
Every time we tag a release and perform a build from the master branch, the versions of all assemblies are updated and a new release is out for delivery.
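(For reference, GitVersionTask can be driven by a GitVersion.yml at the repository root; a minimal sketch, where next-version and mode are real GitVersion settings and the values are illustrative:

next-version: 1.2.0
mode: ContinuousDelivery
)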
Only one of the assemblies has a public API; all the others are for internal consumption. I would like to know if this is the correct way to manage the versions of multiple assemblies of the same project. I mean, isn't it wrong to change the version of every assembly when only a couple (or even just one) changed? To make things more complicated, there is a strong possibility that some of the "internal" assemblies will be used by other projects, so I believe it's not very wise to increment the major version of an assembly that didn't suffer any change just because another assembly of the same project is introducing breaking changes. Should each assembly project be managed in its own repository?
Thanks in advance.
I know this is a bit of an old question, still:
I want to share a workaround that seems to be working:
GitVersion uses $(Build.SourcesDirectory) to see where the sources are located.
We can change this using logging commands.*
The workaround is to set Build.SourcesDirectory before the GitVersion task runs.
GitVersion then uses the GitVersion.yml from the project folder (the new Build.SourcesDirectory) and, voilà, it works.
After that you may want to roll the change back, depending on your needs. For me it is a nice way to scope GitVersion down to a single NuGet package out of the collection in our nugetPackages monorepo.
see GitVersion issue and comment
*Example PowerShell command (a standard PowerShell task, set to inline script):
Write-Host "##vso[task.setvariable variable=Build_SourcesDirectory;]$(Build.SourcesDirectory)\$(NugetProjectName)"
There is certainly nothing in GitVersion that would help with having separate projects within the same repository. The guidance that we would offer here is that you should use different repositories for the different parts of your application. That way they can be versioned/updated at their own cadence.

Securing the source code in a node-webkit desktop application

First things first: I have seen nwsnapshot, and it's not helping.
I am building an inventory management system as a desktop app using node-webkit. The project is built with CompoundJS (an MVC JavaScript library), which has a definite folder structure (you know, MVC) with multiple JavaScript files inside each folder.
The problem is that nwsnapshot allows the app to have only a single snapshot file, but the logic of the application is spread over all the folders in different JavaScript files.
So how do I secure my source code before shipping it to the client? Or is there any other workaround or smarter way (yes, I know about obfuscating)?
You can use the node-webkit command nwsnapshot to compile the JavaScript code into a binary that will be loaded into the app without shipping the plain .js file:
nwsnapshot --extra-code application.js application.bin
Then in your package.json add this:
"snapshot": "application.bin"
It really depends on what you mean by "secure".
You can obfuscate your JavaScript code fairly well (and potentially improve performance) by using the Google Closure Compiler.
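For example, a typical invocation of the standalone compiler JAR (ADVANCED_OPTIMIZATIONS is what renames most symbols and makes the output hard to read; file names are placeholders):

java -jar closure-compiler.jar --compilation_level ADVANCED_OPTIMIZATIONS --js app.js --js_output_file app.min.js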
I'm not aware of any off-the-shelf solutions to encrypt/decrypt your javascript, and honestly I would question the need for that.
Some people think they need to make it impossible to view their source code, because they're used to dealing with compiled languages where you only ship binaries to users. The fact is, reverse-engineering that binary code was never as difficult as some people think it is, so if there's any financial incentive, there is practically no difference between shipping source code and the traditional shipping of binaries.
Some languages have offered genuine encryption of deployed assets, such as Microsoft's SLPS. It seems to me that the market for this was so small that Microsoft gave it to a partner (just my view). The truth is that most customers are not interested in taking your source code; they're far more interested in your ability to service and support that code in an efficient manner, while they get on with their job.
You may consider merging the JS files into one during the build process and then compiling that.

Dojo load time extremely slow on IIS

I am currently working on a project that is using Dojo as the JS framework. It's a rather rich UI and as such uses (and thus loads) a lot of different .js files for the Dojo plug-ins.
When run on an Apache server on a Mac, the files (all around 1 KB) are served very quickly (1 or 2 ms) and the page loads pretty fast (< 5 seconds).
When run on IIS on Windows 7, the files are served at an unbelievably slow rate (150 ms - 1 s), causing the page to take up to 3 minutes to load.
I have searched the internet to try to find a solution and have come up empty.
Anyone have any ideas?
Why not let Google serve the Dojo files for you?
The AJAX Libraries API is a content distribution network and loading architecture for the most popular, open source JavaScript libraries. By using the google.load() method, your application has high speed, globally available access to a growing list of the most popular, open source JavaScript libraries.
What you need to do is build an optimized version of your code. That way you will have far fewer hits to your server (though I guess they'll still be slow until you discover the IIS problem). Dojo runs out of the box as individual files, which is great for development, but without running the build scripts to concatenate these files together, the experience is poor.
The CDN does provide build profiles for Dojo base and certain packages, like dijit.dijit. Doing a dojo.require on these profiles in addition to the individual requires would enable this after running a build. You would need to create layers for your own code as well. The build scripts can also concatenate CSS and inline template files, remove comments and whitespace, etc.
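As a sketch, a legacy Dojo 1.x build profile that rolls your application code into one layer might look like this (module and path names are illustrative):

dependencies = {
    layers: [
        {
            // everything app.main pulls in gets concatenated into this one file
            name: "../app/layer.js",
            dependencies: [ "app.main" ]
        }
    ],
    prefixes: [
        [ "app", "../app" ]
    ]
};

You would then run it through util/buildscripts (e.g. build.sh profile=app action=release), and dojo.require("app.main") would hit a single concatenated file instead of dozens of individual ones.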
Have you actually tried measuring the load times on the intended target production server?
If you're just testing this on local development environments (or in development/test VM's) then I think you're comparing apples with oranges (pardon the pun :) ).

Why can BuildProvider be used only with ASP.NET website projects?

I was going to try SubSonic; you can generate the DAL with a buildProvider element in an ASP.NET website project. But I got curious about why web application projects and Windows application projects don't support BuildProvider.
PS: I know that for SubSonic there is another option besides BuildProvider, but I'm just curious.
It doesn't work because of the different way things are compiled in web application projects vs. website projects. From what I read on MSDN, it has to do with the fact that in web app projects, all your code files are compiled into a single assembly using MSBuild before deployment, but Build Providers are used to generate code that is compiled at runtime (from your App_Code folder).
In website projects, all of your code is compiled at runtime so it all plays nicely together.
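For context, build providers are registered under the compilation element in web.config and keyed off a file extension; in a website project, ASP.NET then generates and compiles code for matching files (e.g. in App_Code) at runtime. A sketch, with a SubSonic-flavored extension and type as placeholders:

<configuration>
  <system.web>
    <compilation>
      <buildProviders>
        <add extension=".abp" type="SubSonic.BuildProvider, SubSonic" />
      </buildProviders>
    </compilation>
  </system.web>
</configuration>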
You could possibly hook it into your pre-build event and call sonic.exe with the proper command line.
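A hedged sketch of such a pre-build event; sonic.exe's switches vary by SubSonic version, so treat the paths and arguments as placeholders to adapt:

"$(SolutionDir)tools\sonic.exe" generate /out "$(ProjectDir)Generated"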
