Is there an automated way to convert files from .pv to .log? - purify

Is there a way to convert a .pv file to a text file apart from opening each one with purify -view and then exporting it?
I have many .pv files generated by running a large number of tests with my executable instrumented with Rational Purify. I know that I can specify -log-file=logfilename.log to generate the text version of the output when I run the tests, but I have thousands of tests and don't want to update them all to change the -log-file parameter.

This sounds like a job for a script: iterate over all your .pv files and export the log file for each using purify. In bash it would look something like this:
#!/bin/bash
for f in *.pv; do
    purify -view "$f" -log-file="$f.log" -windows=no
done
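If the exports take a while, the same loop can be parallelised, for example with GNU xargs (a sketch only; -0 and -P are GNU extensions, and the purify flags are the ones from the loop above):
printf '%s\0' *.pv | xargs -0 -P 4 -I{} purify -view {} -log-file={}.log -windows=no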

Related

How can I run two bash scripts simultaneously and without repetition of the same action?

I'm trying to write a script that automatically runs a data analysis program. The data analysis takes a file, analyzes it, and puts all the outputs into a folder. The program can be run on two terminals simultaneously (each analyzing a different subject file).
I wrote a script that can do all the inputs automatically. However, I can only get my script to run one analysis automatically. If I run my script in two terminals simultaneously, it analyzes the same subject twice (useless).
Currently, my script looks like:
for name in `ls [file_directory]`
do
    [Data analysis commands]
done
If you run this in two terminals, each starts from the top of the directory containing all the data files. This is a problem, so I tried adding checks for duplicates, but they weren't very effective.
I tried a name comparison with the if command (it didn't work because all the output files except one had unique names, so it would check the first output folder at the top of the directory and say the name was different, even though an output folder further down had the same name). It looked something like:
for name in `ls <file_directory>`
do
    for output in `ls <output directory>`
    do
        if [ "$name" == "$output" ]
        then
            echo "This file has already been analyzed."
        else
            <data analysis commands>
        fi
    done
done
I thought this was the right method, but apparently not. I would need to check against all the names before making a decision (rather than one by one, which is what this does).
Then I tried moving completed data files with the mv command (didn't work because "name" in the for statement had already stored all the file names, so it went down the list regardless of what was in the folder at present). I remember reading that shell scripts don't do things in "real time", so it makes sense that this didn't work.
My thought was to find some modification to that if statement so it does all the name checks before making a decision (how?).
Also, are there any other commands I could be missing that I could try?
One pattern I use often is the split command.
ls <file_directory> > file_list
split -d -l 10 file_list file_list_part
This will create files like file_list_part00 to file_list_partnn
You can then feed these file lists to your script.
for file_part in file_list_part*
do
    for file_name in $(cat "$file_part")
    do
        data_analysis_command "$file_name"
    done
done
Never use "ls" in a "for" loop (http://mywiki.wooledge.org/ParsingLs).
I think you should use a fifo (see mkfifo)
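Building on those comments, here is a minimal sketch of the shared-queue idea. It uses a plain queue file plus flock(1) rather than a fifo, since handing each line to exactly one reader is easier to get right that way; the queue path, lock path, and data_analysis_command are placeholders:
#!/bin/bash
# Build the queue once, one file name per line (printf, not ls):
# printf '%s\n' /path/to/data/* > /tmp/analysis_queue
queue=/tmp/analysis_queue
lock=/tmp/analysis_queue.lock
# Run this same loop in each terminal; flock guarantees each
# name is popped from the queue by exactly one worker.
while :; do
    name=$(flock "$lock" sh -c 'head -n 1 "$1"; sed -i 1d "$1"' _ "$queue")
    [ -z "$name" ] && break          # queue is empty, we are done
    data_analysis_command "$name"    # placeholder for the real analysis
done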
As a follow-on from the comments, you can install GNU Parallel with homebrew:
brew install parallel
Then your command becomes:
parallel analyse ::: *.dat
and it will process all your files in parallel using as many CPU cores as you have in your Mac. You can also add in:
parallel --dry-run analyse ::: *.dat
to get it to show you the commands it would run without actually running anything.
You can also add --eta (Estimated Time of Arrival) for an estimate of when the jobs will be done, and -j 8 if you want to run, say, 8 jobs at a time. Of course, if you specifically want the 2 jobs at a time you asked for, use -j 2.
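Putting those options together (analyse and the .dat files are the example names from above):
parallel --eta -j 2 analyse ::: *.dat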
You can also have GNU Parallel simply distribute jobs and data to any other machines you may have available via ssh access.

How can I create a file in Linux such that opening it actually runs a process?

I have a set of .sph files which are actually audio .wav files plus some header.
I have a program called sph2pipe which converts these .sph files to normal audio .wav files.
I want to create some kind of symbolic link to these .sph files so that when I read the links, I am actually reading the converted version of them.
Something like this:
ln -s "sph2pipe a.sph |" "a.wav"
ln -s "sph2pipe b.sph |" "b.wav"
So this way, I don't have to convert all audio files to .wav files and instead I just create links to .sph files and I want them to get converted on the fly.
I hope I made myself clear. I was thinking that what I am looking for is a named pipe (https://en.wikipedia.org/wiki/Named_pipe), but this would not be useful in my case since I need to read the .wav files several times.
EDIT-1: I don't have to use named pipes. I just thought this could be the solution.
Actually, in my case, these .wav files need to be read several times.
EDIT-2: I was wondering how Samba (or gvfs-smb) works. The files are on the network, but there is also a path available for them on my system, like: /run/user/1000/gvfs/smb-share:server=10.100.98.54,share=db/4a0a010a.sph. Can I do something like this? (I read .sph files from a specific path and .wav files come out :) )
EDIT-3: I came up with this so far:
keep_running.py:
#!/usr/bin/python3
import subprocess

# Create the fifo once.
subprocess.call('mkfifo 4a0a010a.wav', shell=True)

# Restart the converter every time a reader drains the fifo,
# so the "file" can be played any number of times.
cmd = 'sph2pipe -f wav 4a0a010a.wv1 > 4a0a010a.wav'
while True:
    subprocess.call(cmd, shell=True)
And in shell:
./keep_running.py &
play 4a0a010a.wav
play 4a0a010a.wav
play 4a0a010a.wav
I can use the audio file as many times as I want.
What do you think would be the limitations of this implementation?
Would I be limited by the number of the processes that I can spawn? Because it looks like I need to spawn a process for each file.
Don't do it, it's a bad idea.
If you insist anyway, perhaps just out of curiosity, here's a proof of concept.
mkfifo a.wav
sph2pipe a.sph >a.wav &
Now, the results are available once in a.wav but once you have consumed them, they are gone, and a new instance of the background process has to be started if you need to do it again.
Sounds to me like a simple Makefile would serve your use case better (create missing files, recreate files which need to be updated, potentially remove temporary targets when the main target has successfully been made).
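A bash sketch of what such a Makefile rule would do (regenerate a .wav only when it is missing or older than its .sph source; sph2pipe as used elsewhere in this thread):
for f in *.sph; do
    wav="${f%.sph}.wav"
    # -nt is false when $wav does not exist, so missing files get built too
    [ "$wav" -nt "$f" ] || sph2pipe "$f" > "$wav"
done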
No, a named pipe, or fifo(7), wants some existing process to write to it (and another to read it). There is no magic that will start the writing process when some other process opens that fifo for reading.
You could write your own FUSE filesystem (whose actions would produce the sound data), but I am not sure it is worth the effort in your case.
Remember that several processes can read or write the same file at once.
EDITED ANSWER
Or, if you don't have more than a couple of thousand files, you can spawn a process for each fifo that keeps re-sending the file to it, like this:
for f in *.sph; do
    mkfifo "${f}.wav"
    (while :; do sph2pipe "$f" > "${f}.wav"; done) &
done
ORIGINAL ANSWER
I am not at my computer, but can you generate the .wav files automatically, use them, then delete them?
for f in *.sph; do
    sph2pipe "$f" > "${f}.wav"
done
Then use them, and afterwards delete them:
rm *.wav

Manually Running Shell Script for sending new files notification emails

This is just for learning purposes (don't consider inotify).
What if we want to develop a bash shell script that, whenever we run it manually, compares the file list of the previous run with that of the current run, and emails the name, size, and time of the new files only?
The best that I can suggest is to find the tools that you need to do your specific work.
e.g. ls -l combined with awk, use mail or any other mailing tool, etc.
The idea is to use standard tools to accomplish your mission.
Don't compile your own code, just use standard tools in your script. Most of the things that you need are already there.
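A minimal sketch of that approach (the watched directory, state file, and recipient are assumptions; GNU ls is assumed for --time-style, and mail is the mailx-style tool mentioned above):
#!/bin/bash
watch_dir=/path/to/watched/dir     # directory to monitor (assumption)
state=/var/tmp/filelist.prev       # listing saved by the previous run
current=$(mktemp)
# name, size, date and time of every file, sorted for comm
ls -l --time-style=long-iso "$watch_dir" |
    awk 'NR > 1 {print $8, $5, $6, $7}' | sort > "$current"
if [ -f "$state" ]; then
    # lines present now but not last time = new files
    new_files=$(comm -13 "$state" "$current")
    if [ -n "$new_files" ]; then
        printf '%s\n' "$new_files" |
            mail -s "New files in $watch_dir" user@example.com
    fi
fi
mv "$current" "$state"             # current listing becomes the new baseline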

multiple file view like DB-view

Is it possible, using bash, to create a view/virtual file that, when opened, combines 2 files into 1?
example:
FILE_META_1.txt
FILE_META_2.txt
combines into
FILE_META.txt
In general, this is not possible. I assume you mean you want to logically link 2 files without creating a 3rd file that is the sum of the 2 files. I've often wanted this feature also. It would have to be done at the kernel level or via a special file system, maybe use FUSE. UnionFS provides this for directories, but not for files. FuseFile looks like it does what you want. Also take a look at the Logic File System.
You can open them, stream-wise, with process substitution:
cat <(cat FILE_META_1.txt; cat FILE_META_2.txt;)
<(...) here expands to a named pipe path, which you can open and access like a file for input.
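If the combined view needs a fixed path that another program can open, the same idea can be pinned to an explicit named pipe. A sketch (note the writer must be restarted after every read, since each reader drains the pipe):
mkfifo FILE_META.txt
# re-feed the concatenation for every reader that opens the pipe
while :; do cat FILE_META_1.txt FILE_META_2.txt > FILE_META.txt; done &
cat FILE_META.txt    # any reader now sees the two files as one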

Getting Linux terminal output from my application

I am developing a Qt application on Linux. I wanted to pass Linux commands to a terminal. That worked, but now I also want to get the response from the terminal for a specific command.
For example,
ls -a
As you know this command lists the directories and files of the current working directory. I now want to pass the returned values from the ls call to my application. What is a correct way to do this?
QProcess is the Qt class that will let you spawn a process and read the result. There's an example on that page of reading the result of a command.
popen(), a Linux/POSIX API, returns a FILE * that you can read like a file; it may help you, perhaps.
Parsing ls(1) output is dangerous -- make a few files with funny names in a directory and test it out:
touch "one file"
touch "`printf "\x0a\x0a\x0ahello\x0a world"`"
That creates two files in the current working directory. I expect your attempts to parse ls(1) output won't work. This might be alright if you're showing the results to a human, (though a human will be immensely confused if a filename includes output that looks just like ls(1) output!) but if you're trying to present something like an explorer.exe or Finder.app representation of files in the filesystem, this is horribly broken.
Instead, use opendir(3), readdir(3), and closedir(3) to read directory entries yourself. This will be safer, more portable, and (as a side benefit) slightly better performing.
