YUI Compressor speed

I have been using YUI Compressor (called via a PHP script) to minify a combined file. At first, I combined the files and then minified the result, but now I am minifying the files individually and then combining them. This potentially saves time when not all of the ~40 scripts and CSS files have changed.
What I notice is that minifying the 40 files individually takes significantly longer (approx. 120+ seconds), whereas the single combined file took far less time to minify.
You can leave aside the philosophy of combining before vs. after minifying, but does anyone know the reason for this? Could it be due to the initialization time of the application?
I am using this code:
// Run once per file: a new JVM is started for every script being minified
$yui_jar = $this->fitango_root.'/js/yuicompressor-2.4.7.jar';
$command = "java -jar $yui_jar $filename -o $path_file";
echo "MINIFYING: $filename\n";
exec($command, $result);

It will be the JVM startup time. Although not documented on the YUI Compressor page, it does support wildcard expansion to minify multiple files into individual output files:
java -jar yuicompressor-2.4.8.jar -o ".js$:-min.js" *.js
I've just tested that locally and can confirm it's working. If you run the compressor with no arguments, you get a more up-to-date list of options, which is where I got the above from.
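To avoid paying the JVM startup cost once per file from PHP, the whole batch can be pushed through a single exec() call. This is only a minimal sketch, assuming all scripts sit in one directory and the "-min.js" output naming shown above is acceptable; the variable names and paths are illustrative, not from the original question:

// Minify every .js file in one JVM invocation (illustrative paths)
$yui_jar = '/path/to/yuicompressor-2.4.8.jar';
$js_dir  = '/path/to/js';
// The -o ".js$:-min.js" pattern writes each foo.js to foo-min.js
$command = 'java -jar ' . escapeshellarg($yui_jar)
         . ' -o ".js$:-min.js" '
         . escapeshellarg($js_dir) . '/*.js';
exec($command, $output, $status);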

Related

Garbage characters observed in JMeter cmdrunner output

I need some advice on how to resolve these characters, thanks.
command:
java -jar apache-jmeter-4.0/lib/cmdrunner-2.2.jar --tool Reporter --generate-png tps_L1.png --input-jtl jmeter.jtl --plugin-type TransactionsPerSecond --width 1024 --height 768
Environment:
CentOS v7.3
JMeter v4.0
It might be caused by your test using non-ASCII characters for Thread Group(s) and Sampler(s) labels.
First of all, double-check that JMeter doesn't have problems storing/interpreting the Thread Group and Sampler labels, i.e. try generating the HTML Reporting Dashboard (it includes a Transactions Per Second chart, by the way).
If the problem remains, you need to configure JMeter to handle national characters, e.g. try explicitly setting the file.encoding property to UTF-8. To do this, add the following line to the system.properties file (it lives in the "bin" folder of your JMeter installation):
file.encoding=UTF-8
and restart JMeter to pick the property up.
If that helps, apply the same trick to the JMeter Plugins CMDRunner tool, like:
java -Dfile.encoding=UTF-8 -jar apache-jmeter-4.0/lib/cmdrunner-2.2.jar --tool Reporter --generate-png tps_L1.png --input-jtl jmeter.jtl --plugin-type TransactionsPerSecond --width 1024 --height 768
Also be aware that, according to JMeter Best Practices, you should always use the latest version of JMeter, so consider upgrading to JMeter 5.0 (or whatever the latest version available on the JMeter Downloads page is) as soon as possible.

collada2gltf converter can't produce *.json file

I am reading the book Programming 3D Applications with HTML5 and WebGL, which uses the Vizi framework.
All the examples load *.json files instead of *.gltf files. Why?
When I load a *.gltf file, nothing shows up, and the collada2gltf converter only produces *.gltf, *.bin, *.glsl files and so on.
What should I do?
.gltf is a JSON file. Try opening it with a text editor and see for yourself. The .bin and .glsl files are just additional resources linked from the .gltf file; those are geometry buffers and shaders, respectively. So to make it work, you should make sure that all the files produced by the converter are also available to the web browser you are running your code in.
You can also try adding the -e CLI flag to collada2gltf, which will embed all the resources into the resulting .gltf file.
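For illustration, a glTF 1.x file produced by the converter contains references along these lines (a simplified, hypothetical excerpt; the names are made up):

{
  "buffers": {
    "model_buffer": { "uri": "model.bin", "byteLength": 102400, "type": "arraybuffer" }
  },
  "shaders": {
    "model0FS": { "uri": "model0FS.glsl", "type": 35632 },
    "model0VS": { "uri": "model0VS.glsl", "type": 35633 }
  }
}

The browser resolves model.bin and the .glsl files relative to the .gltf file, which is why everything the converter produced has to be served together (or embedded with -e).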

How do I write a SCons script with hard-to-predict dynamic sources?

I'm trying to set up a build system involving a code generator. The exact files generated are unknown until after the generator is run, but I'd like to be able to run further build steps by pattern matching (run some program on all files with some extension). Is this possible?
Some of the answers here involving code generation seem to assume that the output is known or a listing of generated files is created. This isn't impossible in my case, but I'd like to avoid it since it makes things more complicated.
https://bitbucket.org/scons/scons/wiki/DynamicSourceGenerator seems to indicate that it's possible to add additional targets during Builder actions, but while I could get the build to run and list the generated files, any build steps introduced don't run.
https://bitbucket.org/scons/scons/wiki/NonDeterministicDependencies uses Scanners to add build steps. I put a glob(...) in a scanner, and it succeeds in detecting the generated files, but the files are inexplicably deleted before it actually runs the dependent step.
Is this use case possible? And why is SCons deleting my generated files?
A toy example
source (the file referenced in SConscript)
An example generator; it constructs 3 files (not easily known to the build system) and puts them in the folder given as its argument:
echo "echo 1" > $1/gen1.txt
echo "echo 2" > $1/gen2.txt
echo "echo 3" > $1/gen3.txt
SConstruct
Just sets up a variant_dir
SConscript('SConscript', variant_dir='build')
SConscript
The goal is for it to:
"Compile" the generator (in this toy example, just copies a file called 'source' and adds execute permissions
Run the "compiled" generator ('source' is a script that generates files)
Perform some operation on each of those generated files by extension. This example just runs the "compile" copy operation on them (for simplicity).
env = Environment()
env.Append(BUILDERS={'ExampleCompiler':
    Builder(action=[Copy('$TARGET', '$SOURCE'),
                    Chmod('$TARGET', 0755)])})
generator = env.ExampleCompiler('generator', 'source')
env.Append(BUILDERS={'GeneratorRun':
    Builder(action=[Mkdir('$TARGET'),
                    '$SOURCE $TARGET'])})
generated_dir = env.GeneratorRun(Dir('generated'), generator)
Everything's fine up to here, where all the targets are explicitly known to the build system ahead of time.
Attempting to use this block of code to glob over the generated files causes SCons to delete (!!) the generated files:
for generated in generated_dir[0].glob('*.txt'):
    generated_run = env.ExampleCompiler(generated.abspath + '.sh', generated)
Attempting to use an action to update the build tree results in additional actions not being run:
def generated_scanner(target, source, env):
    for generated in source[0].glob('*.txt'):
        print "scanned " + generated.abspath
        generated_target = env.ExampleCompiler(generated.abspath + '.sh', generated)
        Alias('TopLevelAlias', generated_target)

env.Append(BUILDERS={'GeneratedOperation':
    Builder(action=[generated_scanner])})
dummy = env.GeneratedOperation(generated_dir[0].File('#dummy'), generated_dir)
Alias('TopLevelAlias', dummy)
The Alias operations are suggested in the dynamic source generator guide above, but they don't seem to do anything. The prints do execute and indicate that the action gets run.
Running some build pattern over particular file extensions is possible with SCons. For C/C++ files this is the preferred scheme, for example:
env = Environment()
env.Program('main', Glob('*.cpp'))
The main task of SCons, as a build system, is to do the minimum amount of work such that all your targets are up-to-date. This makes things complicated for the use case you've described above, because it's not clear how you can reach a "stable" situation where no generated files are added and all targets are built.
You're probably better off using a simple Python script directly... I really don't see how using SCons (or any other build system, for that matter) is mission-critical in this case.
Edit:
At some point you have to tell SCons about the created files (*.txt in your example above), and for tracking all dependencies properly, the list of *.txt files has to be complete. This is the task of the Emitter within SCons, which is responsible for returning the list of resulting target and source files for a Builder call. Note that these files don't have to exist physically during the "parse" phase of SCons. Please also have a look at my answer to "Scons: create late targets", which goes into some more detail.
Once you have a proper Emitter in place (see also https://bitbucket.org/scons/scons/wiki/ToolsForFools, "Using Emitters"), you should be able to use the Glob('*.txt') call, which will detect and track your created files automatically.
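As a rough sketch of what such an Emitter could look like for the toy example above (it assumes the generator always writes gen1.txt, gen2.txt and gen3.txt into the target directory; the file names and the range are illustrative only, not part of the original question):

def generator_emitter(target, source, env):
    # target[0] is the Dir('generated') node; declare the files the generator
    # will create inside it as additional targets so SCons can track them.
    gen_dir = target[0]
    generated = [gen_dir.File('gen%d.txt' % i) for i in range(1, 4)]
    return target + generated, source

env.Append(BUILDERS={'GeneratorRun':
    Builder(action=[Mkdir('$TARGET'), '$SOURCE $TARGET'],
            emitter=generator_emitter)})

With the generated nodes known up front, the later per-file ExampleCompiler calls (or a Glob('*.txt')) have stable targets to depend on.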
Finally, on our "Talks and Slides" page ( https://bitbucket.org/scons/scons/wiki/TalksAndSlides ) you can find my talk from PyCon FR 2014, "Why SCons is Not Slow", which briefly explains how SCons works internally. This might be helpful in understanding this problem better and coming up with a full solution.

Add a Timestamp to the End of Filenames with Grunt

During my Grunt tasks, I want to add a unique string to the end of my filenames. I have tried grunt-contrib-copy and grunt-filerev. Neither has been able to do what I need...
Currently my LESS files are automatically compiled on save in Sublime Text 3 (so this does not yet occur in my Grunt tasks). Then I open my terminal and run 'grunt', which concatenates (combines) my JS files. After this is done, Grunt should rename 'dist/css/main.css' and 'dist/js/main.js' with a "version" at the end of the filename.
I have tried:
grunt-contrib-copy ('clean:expired' deletes the concatenated JS before grunt-contrib-copy can rename the file)
grunt-filerev (this only worked on the CSS files for some reason, and it inserted the version number BEFORE the '.css'; not sure why it didn't work on the JS files)
Here's my Gruntfile.js
So, to be clear, I am not asking for "code review"; I simply need to know how I can incorporate a "rename" step so that, when the tasks are complete, I have 'dist/css/main.css12345' and 'dist/js/main.js12345', with no 'dist/css/main.css' or 'dist/js/main.js' left in their respective directories.
Thanks in advance for any help!
UPDATE: After experimenting with this, I ended up using grunt-contrib-rename and it works great! I believe the same results can be achieved via grunt-contrib-copy; in fact, I know it does the same thing, so either will work. As for regex support, I'm not sure whether both support it, so that may be something else worth looking into before choosing one of these plugins :)
Your rename:dist looks like it should do what you want; you just need to move clean:dist to be the first task that runs (so it deletes output from the prior build rather than the current build). The order of tasks is defined by the array on this last line:
grunt.registerTask('default', ['jshint:dev', 'concat:dist', 'less:dist', 'csslint:dist', 'uglify:dist', 'cssmin:dist', 'clean:dist', 'rename:dist']);
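Reordered, that line would look something like this (assuming the task names stay as they are in your Gruntfile):

grunt.registerTask('default', ['clean:dist', 'jshint:dev', 'concat:dist', 'less:dist', 'csslint:dist', 'uglify:dist', 'cssmin:dist', 'rename:dist']);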
That said, I'm not sure why you want this behavior. The more common thing to do is to insert a hash of the file into the filename before the file extension.
The difference between a hash and a timestamp is that the hash value will always be the same as long as the file contents don't change; so if you only change one file, the compiled output for just that file will be different, and browsers only need to re-download that one file while using cached versions of every other file.
The difference between putting this number before the file extension and after the extension is that a lot of tools (like your IDE) have behavior that changes based on the extension.
For this more standard goal, there are tons of ways to accomplish it, but one of the more common is to combine grunt-filerev with grunt-usemin, which will create properly named files and also update your HTML file(s) to reference the new file names.
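A minimal sketch of that combination is below; the paths, target names, and the 'rev' task name are assumptions, not taken from the question's Gruntfile:

module.exports = function (grunt) {
  grunt.initConfig({
    filerev: {
      dist: { src: ['dist/js/*.js', 'dist/css/*.css'] }  // main.js -> main.<hash>.js
    },
    usemin: {
      html: ['dist/index.html']  // rewrite references to the revved file names
    }
  });

  grunt.loadNpmTasks('grunt-filerev');
  grunt.loadNpmTasks('grunt-usemin');

  grunt.registerTask('rev', ['filerev', 'usemin']);
};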
I'm not sure I completely understand what result you want, but if you add a var timestamp = new Date().getTime(); at the beginning of your Gruntfile and concatenate it to your dest param, that should do the job.
dest: 'dist/js/main.min.js' + timestamp
Is that what you're looking for?
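For example, something along these lines (a sketch only; the uglify target and paths are assumed, not taken from the question's Gruntfile):

module.exports = function (grunt) {
  var timestamp = new Date().getTime();  // computed once per Grunt run

  grunt.initConfig({
    uglify: {
      dist: {
        src: 'dist/js/main.js',
        dest: 'dist/js/main.min.js' + timestamp  // e.g. main.min.js1510000000000
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-uglify');
};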

Reduce Socket.io script size

The socket.io client script is around 70 KB; a large part of it is comments, whitespace...
I need to reduce that script to a smaller size.
Some scripts don't even have spaces and the code is all together, which reduces the script's size.
Where is the socket.io script located, so that I can remove the comments and spaces?
Or is there already a socket.io build without comments and spaces, with a smaller size?
There is a setting in the socket.io configuration for this:
https://github.com/LearnBoost/Socket.IO/wiki/Configuring-Socket.IO
browser client minification defaults to false
Does Socket.IO need to send a minified build of socket.io.js.
You may also enable gzip compression on the library.
The client .js file is in *yourdir*/node_modules/socket.io/node_modules/socket.io-client/dist
There is one file called socket.io.min.js which is minified already.
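In the old 0.x configuration API that page describes, enabling both settings looks roughly like this (a sketch; adapt it to however you create your server):

// socket.io 0.x style configuration (illustrative port)
var io = require('socket.io').listen(8080);

io.configure(function () {
  io.enable('browser client minification');  // serve the minified client build
  io.enable('browser client gzip');          // gzip the served client script
});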
The OP fixed the problem by going to /node_modules/socket.io/lib and editing 'manager.js' to set both "minification" and "gzip compression" to true. They had to do it this way because they were using nowJS, which indirectly uses socket.io.
This reduced the file from 70k to about 4k!
It seems socket.io is either returning some other file or building it at run time. I replaced socket.io.js by renaming the minified one and cleared the browser cache, but I'm still getting the old file.
