save MATLAB code file along with results in one folder? - excel

I'm processing a data set and running into a problem - although I xlswrite all the relevant output variables to a big Excel file that is timestamped, I don't save the code that actually generated that result. So if I try to recreate a certain set of results, I can't do it without relying on memory (which is obviously not a good plan). I'd like to know if there is a command (or commands) that will help me save the m-files used to generate the output Excel file, along with the Excel file itself, in a folder I can name and timestamp so I don't have to do this manually.
In my perfect world I would run the master code file that calls 4 or 5 other function m-files, then all those m-files would be saved along with the Excel output to a folder named results_YYYYMMDDTIME. Does this functionality exist? I can't seem to find it.

There's no such functionality built in.
You could build a dependency tree of your main function by using depfun with mfilename.
depfun(mfilename()) will return a list of all functions/m-files that are called by the currently executing m-file.
This will include all files that ship as MATLAB builtins; you might want to filter those out (and only record the MATLAB version in your Excel sheet).
As a runnable sketch (filtering builtins by matlabroot is one common heuristic):
% get all files the currently executing m-file depends on:
dependencies = depfun(mfilename());
% create a timestamped results folder, e.g. results_20140101T120000:
result_folder = ['results_' datestr(now, 30)]; % format 30 = yyyymmddTHHMMSS
mkdir(result_folder);
% copy everything that is not shipped with MATLAB itself:
for k = 1:numel(dependencies)
    if ~strncmp(dependencies{k}, matlabroot, numel(matlabroot))
        copyfile(dependencies{k}, result_folder);
    end
end
As a long-term solution, you might want to check whether a version control system like Subversion or Mercurial (or one of many others) would be applicable in your case.
In larger projects this is the preferred way to record the version of the source code used to produce a certain result.

Related

How to reference the most current Physical Sequential (PS) file in JCL

I wanted to create a job where I need to consider the latest file available as input file.
File format is as below: FILE1.TEST.TYYMMDD
Is there any way to identify the latest file based on the date present in the file name via JCL?
P.S. GDG versions are not created in the existing process; only a PS file is created.
Thank you
I wanted to create a job where I need to consider the latest file available as input file. File [name] format is as below: FILE1.TEST.TYYMMDD is there any way to identify latest file based on date present in file name via JCL.
No.
You indicate that GDGs are not created in the existing process. GDGs would be the best way to accomplish your goal. Absent GDGs, you must write code.
You could accomplish your goal by writing (C, clist, COBOL, PL/I, Rexx) code using the LMDINIT and LMDLIST ISPF services. Then you would execute your code by running ISPF in batch. Many mainframe shops have a cataloged procedure to execute ISPF in batch.
I agree with @cschneid that there is no built-in platform way to handle this. However, I want to point out that GDGs are the platform's way of managing PS files for access in a relative form.
Your comment:
GDG versions are not created in existing process. Only PS file is created.
That statement didn't make sense to me. A GDG is not a file type like physical sequential (PS) or partitioned (PO); it is a convention that allows relative references to files created over time, which sounds like exactly what you want. I've only seen GDGs used for PS files.
Putting the date in the file name can have its uses, but to z/OS it is only part of the file name, not metadata that the system operates on (like the G0000V00 generation suffixes in GDGs).

Import data to OMNET++

I am trying to model a network in OMNET++. What I have is a text file (it can be in an Excel file format) with nodes' names, a list of interfaces, and interface connections. What I'd like to do is write a program (perhaps a plug-in) to feed this file to OMNET++ and automatically create the .ned and .cc files based on it. The rationale is that there is a long list of nodes/interfaces, which makes it difficult to do manually, and a change in the connections would make it difficult to recreate, unless it is done automatically. Could you point me to some links/websites/documents so that I can learn how to write a plugin to read the information and create the nodes and their connections automatically? Obviously, the node types and characteristics could be modified in the plugin as necessary later.
An example is like:
(some other information there)...
cr1.atl-cr1.hst cr1.atl cr1.hst 2488
cr1.kcy-cr1.wdc cr1.kcy cr1.wdc 2488
cr1.atl-cr2.atl cr1.atl cr2.atl 10000
cr2.atl-cr1.wdc cr1.wdc cr2.atl 2488
...
where the second column is the source node, the third column is the destination node, and the first column is the link name (firstNode-secondNode). The fourth column is the capacity, delay, or other information about the link.
If you want this to be as flexible as possible, I would recommend writing a small Python script that reads a .csv file and renders .ned files as needed.
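As a hedged sketch of that approach (the input file name "topology.txt", the Node module type, and its pppg[] gate vector are assumptions to adapt), reading the whitespace-separated link list from the question and emitting a flat .ned file:

links = []
nodes = set()
with open("topology.txt") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip the "(some other information there)" lines
        linkname, src, dst, capacity = parts
        nodes.update([src, dst])
        links.append((src, dst, capacity))

def ident(name):
    return name.replace(".", "_")  # '.' is not valid in NED identifiers

with open("Network.ned", "w") as out:
    out.write("network Network\n{\n    submodules:\n")
    for n in sorted(nodes):
        out.write("        %s: Node;\n" % ident(n))
    out.write("    connections:\n")
    for src, dst, cap in links:
        out.write("        %s.pppg++ <--> { datarate = %sMbps; } <--> %s.pppg++;\n"
                  % (ident(src), cap, ident(dst)))
    out.write("}\n")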
You might even consider using a templating engine like Mako. Quoting from its website, Mako is pretty straightforward to use:
from mako.template import Template
print(Template("hello ${data}!").render(data="world"))

Access test resources within Haskell tests

This is probably a basic question but I've been Googling for a while on it... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare them to a corresponding output file to make sure they match. (I've read about "golden" but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock; the source code is at https://github.com/haskell/haddock. They keep the test files under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref), which is referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). The test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) then uses the CPP macro __FILE__ to get the path of the current source file, and resolves the test files relative to that folder.

Add a Timestamp to the End of Filenames with Grunt

During my Grunt tasks, I need to add a unique string to the end of my filenames. I have tried grunt-contrib-copy and grunt-filerev. Neither has been able to do what I need...
Currently my LESS files are automatically compiled on save in Sublime Text 3 (so this does not yet occur in my Grunt tasks). Then I open my terminal and run 'grunt', which concatenates (combines) my JS files. After this is done, grunt should rename 'dist/css/main.css' and 'dist/js/main.js' with a "version" at the end of the filename.
I have tried:
grunt-contrib-copy ('clean:expired' deletes the concatenated JS before grunt-contrib-copy can rename the file)
grunt-filerev (this only worked on the CSS files for some reason, and it inserted the version number BEFORE the '.css'; not sure why it didn't work on the JS files)
Here's my Gruntfile.js
So, to be clear, I am not asking for a code review. I simply need to know how I can incorporate a "rename" process so that when the tasks are complete, I will have 'dist/css/main.css12345' and 'dist/js/main.js12345', with no 'dist/css/main.css' or 'dist/js/main.js' left in their respective directories.
Thanks in advance for any help!
UPDATE: After experimenting with this, I ended up using grunt-contrib-rename and it works great! I believe the same results can be achieved via grunt-contrib-copy; in fact I know it does the same thing, so either will work. As for regex support, I'm not sure whether both plugins offer it, so that may be something else worth looking into before choosing one of them :)
Your rename:dist looks like it should do what you want; you just need to move clean:dist to be the first task that runs (so it deletes things from the prior build rather than the current build). The order of tasks is defined by the array on this last line:
grunt.registerTask('default', ['jshint:dev', 'concat:dist', 'less:dist', 'csslint:dist', 'uglify:dist', 'cssmin:dist', 'clean:dist', 'rename:dist']);
That said, I'm not sure why you want this behavior. The more common thing to do is to insert a hash of the file into the filename before the file extension.
The difference between a hash and a timestamp is that the hash value will always be the same as long as the file contents don't change - so if you only change one file, the compiled output for just that file will be different, and browsers only need to re-download that one file while using cached versions of every other file.
The difference between putting this number before the file extension and after the extension is that a lot of tools (like your IDE) have behavior that changes based on the extension.
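To illustrate the naming scheme (a minimal sketch in Python, purely for illustration; hashed_name is a made-up helper):

import hashlib
import os

def hashed_name(path):
    # Hash the file contents: the digest stays stable until the
    # contents change, so unchanged files keep their cacheable name.
    with open(path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    root, ext = os.path.splitext(path)
    # Insert the hash before the extension: main.css -> main.1a2b3c4d.css
    return '%s.%s%s' % (root, digest, ext)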
For this more standard goal, there are tons of ways to accomplish it, but one of the more common is to combine grunt-filerev with grunt-usemin, which will create properly named files and also update your HTML file(s) to reference the new file names.
I'm not sure I completely understand what end result you want, but if you add a var timestamp = new Date().getTime(); at the beginning of your Gruntfile and concatenate it to your dest param, that should do the job.
dest: 'dist/js/main.min.js' + timestamp
Is that what you're looking for?

scons: How to deal with dynamic targets?

I'm trying to automate my work of converting PDF files to PNG with scons. The tool used for the conversion is convert from ImageMagick.
Here's the raw command line:
convert input.pdf temp/temp.png
convert temp/*.png -append output.png
The first command will generate one PNG file for each page in PDF file, so the target of the first command is a dynamic file list.
Here's the SConstruct file I'm working on:
convert = Builder(action=[
    Delete("${TARGET.dir}"),
    Mkdir("${TARGET.dir}"),
    "convert $SOURCE $TARGET"])
combine = Builder(action="convert $SOURCE -append $TARGET")
env = Environment(BUILDERS={"Convert": convert, "Combine": combine})
pdf = env.PDF("input.tex")
pngs = env.Convert("temp/temp.png", pdf) # I don't know how to specify target in this line
png = env.Combine('output.png', pngs)
Default(png)
The line pngs = env.Convert("temp/temp.png", pdf) is actually wrong, since the target is a list of files whose number I don't know before env.Convert is executed; as a result, the final output.png contains only the first page of the PDF.
Any hint is appreciated.
UPDATE:
I just found that I can use the command convert input.pdf -append output.png to avoid the two-step conversion.
Still I'm curious how to handle the scenario when the intermediate temporary file list is unknown beforehand and requires a dynamic target list.
If you want to know how to handle the original (convert and combine) situation you proposed, I would suggest creating a builder with an SCons emitter. The emitter allows you to modify the list of source and target files; this works nicely for generated files that don't exist yet on a clean build.
As you mentioned, the convert step will generate multiple targets, the trick is you need to be able to "calculate" those targets in the emitter based on the source. For example, recently I created a wsdl2java builder and was able to do some simple wsdl parsing in the emitter to calculate all of the target java files to be generated (the source being the wsdl).
Here is a general idea of what the build scripts should look like:
def convert_emitter(source, target, env):
    # Both source and target will be lists of nodes. In this case
    # the target list will be empty, and you need to calculate all
    # of the generated targets based on the source PDF file, opening
    # the source file with standard Python code. All of the targets
    # will be removed when cleaned (scons -c).
    target = []  # fill in accordingly
    return (target, source)
# Optionally, you could supply a function for the action,
# which would have the same signature as the emitter.
convert = env.Builder(emitter=convert_emitter,
                      action=[Delete("temp"),
                              Mkdir("temp"),
                              "convert $SOURCE $TARGET"])
env.Append(BUILDERS={'Convert': convert})
# The combine step needs no emitter, since its sources (the pngs)
# are known; note $SOURCES (plural) to pass all of them:
combine = env.Builder(action="convert $SOURCES -append $TARGET")
env.Append(BUILDERS={'Combine': combine})
pdf = env.PDF('input.tex')
# You can omit the target in this call, as it will be filled-in by the emitter
pngs = env.Convert(source=pdf)
png = env.Combine(target='output.png', source=pngs)
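To make the emitter concrete, here is a hedged sketch that computes the target list by counting the pages of the source PDF with PyPDF2 (an assumption; any PDF library, or shelling out to pdfinfo, would work just as well). Note this can only succeed if the PDF is readable at the time SCons constructs its build graph:

from PyPDF2 import PdfFileReader

def convert_emitter(source, target, env):
    # Count the pages in the source PDF:
    with open(str(source[0]), 'rb') as f:
        pages = PdfFileReader(f).getNumPages()
    # One target per page. ImageMagick writes multi-page output as
    # temp-0.png, temp-1.png, ... (an assumption here - verify the
    # naming against your ImageMagick version).
    target = ['temp/temp-%d.png' % i for i in range(pages)]
    return (target, source)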
Depending on what qualifies as "dynamic" for you, I believe the correct answer is: not possible.
As long as the source on which you would like to "dynamically" compute a target set is present when SCons is run, @Brady's solution should work fine. However, if the source in question is itself the target of some other command, it will not work. This is a fundamental limitation of SCons: it assumes that the set of build targets can be statically determined from the base set of input (non-intermediate) sources. It runs through and computes a build/target/dependency graph in one sweep, then executes it in the next. It has no ability to run through some known portion of the build graph, stop to introspect some intermediate targets to dynamically compute the rest of the build graph, and then continue. I'd frankly love to have this ability in the work that I do with SCons, but I'm afraid it is just a fundamental limitation.
The best you can do is set the build up so that on the first run, it stops at the construction of the PDF (if no PDF target exists when the build script is executed). Once the PDF has been built, you can rerun the build and set things up so the rest of the build steps execute based on the PDF built from the last run. This more or less works decently... except for one problem. If the PDF ends up changing (and producing some new pages for instance), you'll actually have to rerun the build twice in order to capture the changes to the PDF, since any page counts (etc) will be based on the old version of the PDF.
I'd love for someone to prove me wrong here, but such is the way of things.
Looking at this, there's no requirement for the individual temp/*.png files to be kept - if there were, you shouldn't be putting them in a temp directory, and in any case you'd have to do quite a bit of work to figure out which pages to generate.
So it looks more sensible to do this as one step. You'd have something like this:
png = env.Convert('output.png', 'input.pdf')
where the action function for convert was something like this:
Delete('temp'),
Mkdir('temp'),
'convert $SOURCE temp/page.png',
'convert temp/*.png -append $TARGET',
Delete('temp')
Though frankly you might do better writing that whole thing as a single callable script, to make sure you get the page sorting correct.
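For instance, a hedged sketch of such a script as a Python function action for SCons (the temp-directory layout and ImageMagick's page-numbering scheme are assumptions):

import glob
import os
import re
import shutil
import subprocess

def convert_pdf(target, source, env):
    tmp = 'temp'
    shutil.rmtree(tmp, ignore_errors=True)
    os.mkdir(tmp)
    # One PNG per PDF page: temp/page-0.png, temp/page-1.png, ...
    subprocess.check_call(['convert', str(source[0]),
                           os.path.join(tmp, 'page.png')])
    # Sort numerically so page-10.png does not sort before page-2.png:
    def page_no(path):
        m = re.search(r'(\d+)', os.path.basename(path))
        return int(m.group(1)) if m else 0
    pages = sorted(glob.glob(os.path.join(tmp, 'page*.png')), key=page_no)
    # Stack all pages vertically into the final output:
    subprocess.check_call(['convert'] + pages + ['-append', str(target[0])])
    shutil.rmtree(tmp)
    return 0  # tell SCons the action succeeded

You would then wire it up with png = env.Command('output.png', 'input.pdf', convert_pdf).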
