Add a Timestamp to the End of Filenames with Grunt - node.js

During my Grunt tasks, I need to add a unique string to the end of my filenames. I have tried grunt-contrib-copy and grunt-filerev; neither has been able to do what I need them to...
Currently my LESS files are automatically compiled on save in Sublime Text 3 (so this does not yet occur in my Grunt tasks). Then I open my terminal and run 'grunt', which concatenates (combines) my JS files. After this is done, Grunt should rename 'dist/css/main.css' and 'dist/js/main.js' with a "version" at the end of the filename.
I have tried:
grunt-contrib-copy ('clean:expired' deletes the concatenated JS before grunt-contrib-copy can rename the file)
grunt-filerev (this only worked on the CSS files for some reason, and it inserted the version number BEFORE the '.css'; not sure why it didn't work on the JS files)
Here's my Gruntfile.js
So, to be clear, I am not asking for "code review"; I simply need to know how I can incorporate a "rename" process so that when the tasks are complete, I will have 'dist/css/main.css12345' & 'dist/js/main.js12345' with no 'dist/css/main.css' or 'dist/js/main.js' left in their respective directories.
Thanks in advance for any help!
UPDATE: After experimenting with this, I ended up using grunt-contrib-rename and it works great! I believe the same results can be achieved via grunt-contrib-copy; in fact, I know it does the same thing, so either will work. As for regex support, I'm not sure whether both plugins support it, so that may be something else worth looking into before choosing one of them :)

Your rename:dist looks like it should do what you want; you just need to move clean:dist to be the first task that runs (so it deletes things from the prior build rather than the current build). The order of tasks is defined by the array on this last line:
grunt.registerTask('default', ['jshint:dev', 'concat:dist', 'less:dist', 'csslint:dist', 'uglify:dist', 'cssmin:dist', 'clean:dist', 'rename:dist']);
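For illustration, a minimal sketch of what this could look like; the rename:dist target below is my guess at the relevant pieces, since the original Gruntfile isn't reproduced here, and the '12345' suffix stands in for whatever version string you generate:
grunt.initConfig({
  rename: {
    dist: {
      files: [
        // paths taken from the question; the suffix is illustrative
        { src: 'dist/css/main.css', dest: 'dist/css/main.css12345' },
        { src: 'dist/js/main.js', dest: 'dist/js/main.js12345' }
      ]
    }
  }
});
// clean:dist moved to the front so it only removes the previous build's output
grunt.registerTask('default', ['clean:dist', 'jshint:dev', 'concat:dist', 'less:dist', 'csslint:dist', 'uglify:dist', 'cssmin:dist', 'rename:dist']);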
That said, I'm not sure why you want this behavior. The more common thing to do is to insert a hash of the file into the filename before the file extension.
The difference between a hash and a timestamp is that the hash value will always be the same so long as the file contents don't change - so if you only change one file, the compiled output for just that file will be different, and thus browsers only need to re-download that one file while using cached versions of every other file.
The difference between putting this number before the file extension and after the extension is that a lot of tools (like your IDE) have behavior that changes based on the extension.
For this more standard goal there are tons of ways to accomplish it, but one of the more common is to combine grunt-filerev with grunt-usemin, which will create properly named files and also update your HTML file(s) to reference these new file names.
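As a rough sketch (hedged, since I don't know your file layout; the src and html paths are assumptions), a filerev plus usemin configuration might look like:
filerev: {
  dist: {
    // inserts a content hash before the extension, e.g. main.2a8b7c9d.css
    src: ['dist/js/main.js', 'dist/css/main.css']
  }
},
usemin: {
  // rewrites references in the HTML to the revved file names
  html: ['dist/index.html']
}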

I'm not sure I completely understand what end result you want, but if you add a var timestamp = new Date().getTime(); at the beginning of your Gruntfile and concatenate it to your dest param, that should do the job.
dest: 'dist/js/main.min.js' + timestamp
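In context, a minimal sketch (the uglify target here is illustrative, not taken from your Gruntfile):
module.exports = function (grunt) {
  // computed once per Grunt run, so every task sees the same value
  var timestamp = new Date().getTime();
  grunt.initConfig({
    uglify: {
      dist: {
        src: 'dist/js/main.js',
        dest: 'dist/js/main.min.js' + timestamp  // e.g. dist/js/main.min.js1455812345678
      }
    }
  });
};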
Is this what you're looking for?

Related

Reading whole directory with Spark except one file

I have the following directory containing these CSV files:
/data/one.csv
/data/two.csv
/data/three.csv
/data/four.csv
If I want to read everything, I can simply do:
/data/*.csv
but I cannot seem to find a way to read everything except four.csv.
This:
/data/*[^four]*.csv
seemed to work, but I think that if the list of files were bigger, this way of reading would probably break (because of the double wildcards).
Is there a good way to do this? I am also aware that:
/data/{one,two,three,^four}.csv
would solve this specific case, but I need the except method for future needs.
Thank you very much!
I am not 100% sure that this method will work, but you can try. You can use Bash/Python or whatever script to collect all the CSV files in the folder except the one named four.csv.
The input for Spark will then be (assuming you have files one.csv, two.csv, three.csv, four.csv, five.csv, ... up to n.csv):
PathToFiles=[/data/one.csv, /data/two.csv, /data/three.csv, /data/five.csv, ..., /data/n.csv]
Then you can use (the code is in Python):
filesRDD = spark.sparkContext.wholeTextFiles(",".join(PathToFiles))
I have written similar code in Java; in my experience it works, so you can give it a try.
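For example, a small sketch in Python (assuming a SparkSession named spark already exists; the paths mirror the question):
import glob
import os

# every CSV under /data except four.csv
paths = [p for p in glob.glob('/data/*.csv')
         if os.path.basename(p) != 'four.csv']
filesRDD = spark.sparkContext.wholeTextFiles(','.join(paths))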

PTC Integrity batch update member revision

Is there a way to update the member revision of a big list of files via command line?
I can't use :working or :head but have to specify a different revision for each file.
As far as I know --selectionFile only takes paths as input, but not the revision numbers.
edit: I wanted to set the member revision for a very big list of files and avoid writing the command si updaterevision ... for every file, as that takes ages to complete for so many files. Instead I wanted to know if there is a more advanced method to specify a list of files and their revisions, so that updaterevision only has to run once (as it does with :working) for the whole list of files.
But as it is said in the comment there is no such possibility.
edit2: I have used MKS for a couple of years now and, as I now know, there is no such possibility (at least up to MKS 11.6) to update many files to different revisions with one single command-line call. Using one call per member, as was proposed, made the whole operation take up to several hours, as I had many thousands of members in the sandbox and MKS needs some time to complete each si command.
Some time has passed since you asked this question; here is my answer in case it could still be useful for you in the future.
First, it is not completely clear what you want to achieve. Please be more descriptive and, if possible, provide an example.
What I understand as of now is that you need to set a bunch of listed files to a given member revision through the command line. This is fairly simple; the most complicated part is actually getting the list of files to be updated and the revision that you want to set as member for each one.
I recommend you create a batch file with the commands to update each file's member revision. You can use regex to do it quickly and without much trouble.
Here is an example for updating one file's member revision:
si updaterevision --hostname=servername --port=portnumber --user=username --changepackageid=5873763:2 --revision=:working myfile_a1.c
where
servername = the name of the server where your sandbox is located
portnumber = the port that provides access to the server for your sandbox
username = your login user id
changepackageid = here you change the number to use your defined TASK:ChangePackage for these changes
revision = if you have a working revision that you want to become the member revision, just use :working as the revision; otherwise you can define a specific revision number, e.g. --revision=1.2
At the end you define the name of the file you want to update.
Go to your sandbox root folder, open a CMD window, and run the batch file. It will execute each line, applying your changes.
If you have a list of files with the revisions you want as member, you can use regex to convert it into a batch file.
Example list of files in a text file:
file1.c 1.10
file3.c 1.19
sec_file1.c 1.1.2.1
support.h 1.7
Use Notepad++ or another text editor with regex support to do a simple search and replace:
Search = ^(\S+)\s+([\d.]+).*
Replace = si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=\2 \1
\1 => FileName
\2 => File revision
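Applied to the example list above, the replace should yield one command per file, along these lines:
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.10 file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.19 file3.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.1.2.1 sec_file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.7 support.h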
Finally, just save the document as a batch file and run it.
Just speculating: if you have a large list of members along with the member revisions you want to update to, then you also have a sandbox that served to generate this list.
If so, my approach would be:
c:\MySandbox> si updaterevision --recurse --revision=:working
If your member/revision list comes from a development path, you could first have a sandbox targeting that devpath, resync, (close the sandbox if opened in the GUI), retarget the sandbox to the destination devpath (or mainline) you want, and then issue the command above.
For a single-member approach I would use 'si rlog' to generate a list of si commands directly:
si rlog -R --noheaderformat --notrailerformat --revision=:working --format="si updaterevision {membername} --revision={revision}\r\n" > updaterevs.bat.txt
Review updaterevs.bat.txt, rename it to updaterevs.bat, and execute it.
(Be careful if using it on other sandboxes.)
Other interesting reading here might be the "snapshot sandbox" feature, checkpointing in general, and variants and devpaths. Using only these features might be politically more correct within the philosophy of Integrity.

Cannot zip files with the same name?

I could not believe this: it seems that the zip specification does not allow two different files with the same file name going into one zip file.
In my case I use an external file to specify all the files I want to zip.
This could look like this:
../Website1/favicon.ico
../Website2/favicon.ico
and there we are, that's not possible, despite keeping the directory structure. You would expect the stored name to be <../Website1/favicon.ico> rather than just <favicon.ico>, but that does not seem to be the case; I get:
"Invalid ZIP request (cannot repeat names in Zip file)"
with WinZip. I tried the same with 7Zip - same result.
Strangely, googling did not show many hits that really fit, but those I found seem to confirm my findings. That's hard to believe since this limitation is very severe. I actually struggle to understand why this did not hit me a couple of decades earlier.
Am I overlooking something very basic here?
To be precise:
Adding these two files:
C:\Temp\Website1\FavIcon
C:\Temp\Website2\FavIcon
results in a single file; the last Add wins...
This however:
Website1\FavIcon
Website2\FavIcon
results in a zip file that contains both files.
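For example, with 7-Zip's command line, running from C:\Temp and passing relative paths should store each entry under its own folder (a sketch; the archive name is arbitrary):
cd C:\Temp
7z a favicons.zip Website1\FavIcon Website2\FavIcon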

save MATLAB code file along with results in one folder?

I'm processing a data set and running into a problem - although I xlswrite all the relevant output variables to a big Excel file that is timestamped, I don't save the code that actually generated that result. So if I try to recreate a certain set of results, I can't do it without relying on memory (which is obviously not a good plan). I'd like to know if there's a command(s) that will help me save the m-files used to generate the output Excel file, as well as the Excel file itself, in a folder I can name and timestamp so I don't have to do this manually.
In my perfect world I would run the master code file that calls 4 or 5 other function m-files, then all those m-files would be saved along with the Excel output to a folder named results_YYYYMMDDTIME. Does this functionality exist? I can't seem to find it.
There's no such functionality built in.
You could build a dependency tree of your main function by using depfun with mfilename.
depfun(mfilename()) will return a list of all functions/m-files that are called by the currently executing m-file.
This will include all files that come as MATLAB builtins; you might want to remove those (and only record the MATLAB version in your Excel sheet).
As a sketch in MATLAB (your_folder stands in for your results directory):
% get all files the currently executing m-file depends on
dependencies = depfun(mfilename());
for k = 1:numel(dependencies)
    % skip MATLAB builtins and toolbox files, which live under matlabroot
    if isempty(strfind(dependencies{k}, matlabroot))
        copyfile(dependencies{k}, your_folder);
    end
end
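To get the results_YYYYMMDDTIME naming from the question, you could create the target folder with a timestamp first. A short sketch, where 'results.xlsx' is a placeholder for your Excel output:
% datestr format 30 gives an ISO 8601 style stamp, e.g. 20240131T154210
your_folder = ['results_' datestr(now, 30)];
mkdir(your_folder);
copyfile('results.xlsx', your_folder);  % keep the Excel output next to the copied code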
As a "long term" solution you might want to check if using a version control system like subversion, mercurial (or one of many others) would be applicable in your case.
In larger projects this is preferred way to record the version of source code used to produce a certain result.

Artefact folder structure does not contain empty directories

I'm trying to store the whole output of my build, which includes some empty folders. These aren't included by the artefact mechanism in TeamCity:
What doesn't work:
OAR\=> OAR.zip
OAR->OAR.zip
OAR
Inside of OAR I have a folder structure that needs to be stored. I know I could put a placeholder file in each folder, but that is not the answer I'm after. Otherwise I'll have to zip it myself?
Unfortunately TeamCity, by design, searches for files and uploads them as artifacts, which means that empty folders are never included. Given the open and very old issue in the TeamCity tracker, I doubt they are going to fix it any time soon.
I would recommend zipping the folder yourself; that is the approach we have taken. How you implement that depends on the build technology you are using. For example, if you are building using Nant you could add the zip task to your build; there are similar options for MSBuild and Ant.
If you don't want to rely on the build performing the zip, I would recommend installing 7zip on your build agents and using the command line to perform the zip. Just remember: if you want 7zip to include empty directories, use * as the wildcard rather than *.*, like so:
7z a -r OAR.zip *
Technically you could use PowerShell to do the zipping, which would be better than having to install something on your agents. I haven't tried this option myself.
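An untested sketch of that route, using the .NET ZipFile class from PowerShell (the paths are illustrative; unlike a file-by-file search, this should also carry over empty subdirectories):
Add-Type -AssemblyName System.IO.Compression.FileSystem
# zip the whole OAR folder, empty subfolders included
[System.IO.Compression.ZipFile]::CreateFromDirectory('C:\build\OAR', 'C:\build\OAR.zip')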
