Dojo load time extremely slow on IIS

I am currently working on a project that uses Dojo as its JavaScript framework. It is a rather rich UI and, as such, it loads a lot of different .js files for the Dojo plug-ins.
When run on an Apache server on a Mac, the files (all around 1 KB) are served very quickly (1 or 2 ms) and the page loads fairly fast (under 5 seconds).
When run on IIS on Windows 7, the files are served at an unbelievably slow rate (150 ms to 1 s), causing the page to take up to 3 minutes to load.
I have searched the internet for a solution and have come up empty.
Does anyone have any ideas?

Why not let Google serve the Dojo files for you?
The AJAX Libraries API is a content distribution network and loading architecture for the most popular, open source JavaScript libraries. By using the google.load() method, your application has high speed, globally available access to a growing list of the most popular, open source JavaScript libraries.
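For illustration, using the loader might look like the sketch below; the version number and module are just examples, and the page must first include the Google loader script from https://www.google.com/jsapi:

    // Sketch only: assumes <script src="https://www.google.com/jsapi"></script> is already on the page
    google.load("dojo", "1.6");                  // ask Google's CDN for Dojo (version is an example)
    google.setOnLoadCallback(function () {
        // Dojo is available here; proceed with your usual requires
        dojo.require("dijit.form.Button");
    });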

What you need to do is build an optimized version of your code. That way you will have far fewer hits to your server (though I suspect they will still be slow until you discover the IIS problem). Dojo runs out of the box as individual files, which is great for development, but without running the build scripts to concatenate those files together the experience is poor. The CDN does provide built profiles for Dojo base and certain other profiles, such as dijit.dijit. Doing a dojo.require on these profiles, in addition to the individual requires, would enable this after running a build. You would also need to create layers for your own code. The build scripts can also concatenate CSS and inline template files, remove comments and whitespace, and so on.
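For illustration, a rough sketch of a legacy (pre-1.7) Dojo build profile defining one layer; the app/app.main names are hypothetical placeholders for your own namespace:

    // app.profile.js -- sketch of a build profile; layer and prefix names are placeholders
    dependencies = {
        layers: [
            {
                name: "../app/main.js",          // the concatenated layer file produced by the build
                dependencies: [
                    "dijit.dijit",               // the common dijit base layer
                    "app.main"                   // your application's top-level module
                ]
            }
        ],
        prefixes: [
            [ "dijit", "../dijit" ],
            [ "dojox", "../dojox" ],
            [ "app", "../../app" ]               // path to your custom code
        ]
    };

After the build, a single dojo.require("app.main") (plus dijit.dijit) pulls in one concatenated file instead of dozens of individual modules.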

Have you actually tried measuring the load times on the intended target production server?
If you're just testing this on local development environments (or in development/test VMs), then I think you're comparing apples with oranges (pardon the pun :) ).

Related

How can I maintain a modern static website without transpiler or bundler tools?

We have a static website and we outsource its maintenance. We don't have a source code repository, so the contractor edits the code directly on the production server.
That has not been a problem, as our website was built decades ago with old-school HTML4 only. Whatever is stored on the web server is the source code.
Today, a website can be composed with a UI framework, e.g. Vue, React, etc. Sometimes the HTML file contains web components and other JS modules. A little googling tells me that building a website today needs npm, Node.js, Webpack, Gulp, etc.; these tools manage JS modules and bundle/build the production code.
My problem is that we would like to revamp our website with a modern UI (HTML5, CSS3, mobile friendly...). The tools I just mentioned "process" the source code and output production code, but we don't have a source code server (e.g. a Git server) where our contractor could store the source. (Our company management doesn't allow us to purchase private repository services on the internet, e.g. GitHub, GitLab, etc.)
Can I keep using the old-school way, where the source code on the production web server is always the only source code?
I have tried RequireJS myself; it loads JS modules in the browser, so I can handle module loading without Node.js and Webpack, and write the web components in vanilla JS. Is that the only solution available to me?
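For reference, a browser-only setup of the kind described above might look roughly like the sketch below, assuming the page includes <script data-main="js/main" src="js/require.js"></script>; all file and module names are hypothetical:

    // js/main.js -- entry point loaded by require.js, no bundler involved
    require(["widgets/greeting"], function (greeting) {
        greeting.mount(document.body);
    });

    // js/widgets/greeting.js -- an AMD module wrapping a vanilla web component
    define([], function () {
        class GreetingElement extends HTMLElement {
            connectedCallback() {
                this.textContent = "Hello from a vanilla web component";
            }
        }
        customElements.define("x-greeting", GreetingElement);
        return {
            mount: function (parent) {
                parent.appendChild(document.createElement("x-greeting"));
            }
        };
    });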
You certainly could continue to manage this site the "old school" way, but in doing so, you'll be ignoring the benefits that all the modern tools give you.
For example:
no Git (or other version control) means no rolling back changes (or errors);
using version control software also means you have a backup, so you don't need to set up a separate backup scheme on the production server to save your files;
editing on the production server means that if someone makes a typo, the site is broken until it is fixed; etc.
I would strongly recommend modern tools; if cost is a concern, consider free tools:
Bitbucket has long offered free private repositories; GitHub has recently also started offering them.
Tools such as Hugo, Jekyll, and others permit creation of static sites quickly and easily.
Edit in answer to some of the comments...
Switching to a more modern development workflow (including version control) is not just about saving money, it's also about:
Does the employer/client want their developer(s) spending a lot of time managing the site - possibly including fixing problems - or do they want them working on something else?
Is the employer/client willing to have periods of time when the site does not work correctly? As #birdspider mentions in the comments above, if you have multiple people working on the website on the production server, they're going to be messing up each other's work. Note that the use of a VCS helps avoid some of the problems with people stepping on each other's toes, and it also makes fixing those conflicts much easier.
If you approach the employer/client with these points and their answer is "we just don't like it", then there's probably not much else you can do. If I were in your shoes, I'd be strongly tempted to either a) implement something on my own (just to preserve my own sanity, although really this is probably not a good idea) or b) find a new job.

Is it possible to execute local .exe's from an Angular application running in the browser?

We are about to start a new project which should behave like a desktop app but still run inside a browser, for creating items in a system. After these items are created, an .exe file on the local machine must be called to do some code generation. Is this possible when using Angular to develop the application, or do we need third-party libs for executing local .exe's?
No, this is not possible out of the box. Browsers make very sure that local executables cannot be started. You would have to look for other solutions.
One possible idea, depending on how much effort you want to invest, would be to compile the WebKit engine yourself, i.e., create a binary "wrapper" which runs the browser engine itself. Then you are free to extend it in whatever fashion you need, including adding the possibility to start local .exe's (or if those .exe's are your own applications, you could compile them right into your WebKit wrapper).

Securing the source code in a node-webkit desktop application

First things first: I have seen nwsnapshot, and it's not helping.
I am building an inventory management system as a desktop app using node-webkit. The project uses CompoundJS (an MVC JavaScript library), which has a definite folder structure (you know MVC) and multiple JavaScript files inside those folders.
The problem is that nwsnapshot allows the app to have only a single snapshot file, but the application logic is spread across all those folders in different JavaScript files.
So how do I secure my source code before shipping it to the client? Or is there any other workaround or smarter way (yes, I know about obfuscation)?
You can use the node-webkit command nwsnapshot to compile the JavaScript code into a binary that is loaded into the app without shipping a plain .js file:
    nwsnapshot --extra-code application.js application.bin
Then, in your package.json, add:
    "snapshot": "application.bin"
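For context, a minimal node-webkit package.json using that snapshot might look like this sketch (the name and window settings are hypothetical):

    {
      "name": "inventory-app",
      "main": "index.html",
      "snapshot": "application.bin",
      "window": {
        "title": "Inventory Management",
        "width": 1024,
        "height": 768
      }
    }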
It really depends on what you mean by "secure".
You can obfuscate your JavaScript code fairly well (and potentially improve performance) by using the Google Closure Compiler.
I'm not aware of any off-the-shelf solutions to encrypt/decrypt your javascript, and honestly I would question the need for that.
Some people think they need to make it impossible to view their source code, because they're used to dealing with compiled languages where you only ship binaries to users. The fact is, reverse-engineering that binary code was never as difficult as some people think it is, so if there's any financial incentive, there is practically no difference between shipping source code and the traditional shipping of binaries.
Some languages have offered genuine encryption of deployed assets, such as Microsoft's SLPS. It seems to me that the market for this was so small that Microsoft gave it to a partner (just my view). The truth is that most customers are not interested in taking your source code; they're far more interested in your ability to service and support that code in an efficient manner, while they get on with their job.
You may consider merging the JS files into one during the build process and compiling that.
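As a rough sketch of that merge step, a plain Node.js script run before nwsnapshot could concatenate the sources; the file list below is hypothetical:

    // build.js -- concatenate the app's JS files into one file for snapshotting
    var fs = require("fs");

    var sources = [                       // hypothetical list; order matters for dependencies
        "app/models/item.js",
        "app/controllers/inventory.js",
        "app/init.js"
    ];

    var merged = sources.map(function (file) {
        return fs.readFileSync(file, "utf8");
    }).join("\n;\n");                     // defensive semicolon between files

    fs.writeFileSync("application.js", merged);
    // then: nwsnapshot --extra-code application.js application.bin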

How does CRM 2011 load and manage plugins in sandbox mode?

I have 2 plugin assemblies which share the proxy code generated by crmsvcutil. The proxy code file tends to be large (14+ MB) and it seems to bloat the plugin DLLs.
I am thinking that it might make sense to move the proxy code into a separate assembly and deploy it to the GAC on the CRM server. This would reduce the bloat in the plugin assemblies and also reduce the memory footprint, since only a single copy of the proxy code would be loaded into the process space.
The question is, how does CRM load individual plugin assemblies?
Are they all loaded into the same process space or are they loaded into separate app domains?
If they are loaded into separate app domains, then it defeats the purpose of having a separate assembly containing the generated proxy code, since it will be loaded separately into both app domains anyway.
Any thoughts are appreciated.
I can't answer your question directly, but if bloat is the problem, there are a number of extensions to crmsvcutil out there that will allow you to filter the generated class file to include only the entities you wish to work with. I've done so before (at a previous company, and have since lost the source. Grrr!) with success, achieving a class file of a few KB rather than MB.
A quick google search took me to... http://fourbusyxrmarchitects.wordpress.com/2012/08/09/filtering-the-list-of-early-bound-classes-generated-by-the-code-generation-tool-crmsvcutil-for-crm-2011-2/

Yeoman working very slowly loading pages

I'm working on a project that uses Yeoman.
It had been working great on my machine until recently, when some changes were made to the project (mainly introducing Angular) while I wasn't working on it for a month.
Since I came back, every page load has been taking around 2 minutes just to fetch the HTML and JS files!
The CPU is between 30% and 50%, physical memory around 60%; the computer is in good shape.
Other people working on the same project are getting very fast load times.
What can it be?
Thanks!
Igal
Sounds like the problem is on your side, so I would just delete the whole mess and check it out again; I assume you use some kind of version control.
If this does not work, it could be a caching problem, but it is hard to say with the current information.
Do you have anything that the others don't?
I would also make sure to update Yeoman, Node.js, etc. to the latest version, or at least to the same versions as your co-workers.
For extra performance you can disable the force option in the Gruntfile, but seeing as the other people on your project have no problems, this is unlikely to change your situation.
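For what it's worth, a force option typically shows up per task in the generated Gruntfile; exactly which task (if any) exposes it depends on your generator, so the snippet below is only an illustrative sketch using grunt-contrib-jshint:

    // Gruntfile.js (excerpt) -- illustrative only; task names depend on your project
    module.exports = function (grunt) {
        grunt.initConfig({
            jshint: {
                options: {
                    force: false          // report lint errors and fail, rather than continuing
                },
                all: ["app/scripts/**/*.js"]
            }
        });
        grunt.loadNpmTasks("grunt-contrib-jshint");
    };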
