How can I create a file in Linux so that opening it actually runs a process?

I have a set of .sph files which are actually audio .wav files plus some header.
I have a program called sph2pipe which converts these .sph files to normal audio .wav files.
I want to create some kind of symbolic link to these .sph files so that when I read the link, I am actually reading the converted version of the file.
Something like this:
ln -s "sph2pipe a.sph |" "a.wav"
ln -s "sph2pipe b.sph |" "b.wav"
This way, I don't have to convert all the audio files to .wav files; instead, I just create links to the .sph files and have them converted on the fly.
I hope I made myself clear. I was thinking that what I am looking for is a named pipe (https://en.wikipedia.org/wiki/Named_pipe), but that would not be useful in my case, since I need to read the .wav files several times.
EDIT-1: I don't have to use named pipes; I just thought they could be the solution. In my case, these .wav files actually need to be read several times.
EDIT-2: I was wondering how Samba (or gvfs-smb) works. The files are on the network, but there is also a local path available for them on my system, like /run/user/1000/gvfs/smb-share:server=10.100.98.54,share=db/4a0a010a.sph. Can I do something like this? (I read .sph files from a specific path, and .wav files come out.)
EDIT-3: I came up with this so far:
keep_running.py:
#!/usr/bin/python3
import subprocess

# Create the FIFO once (an existing FIFO just yields a harmless error).
subprocess.call('mkfifo 4a0a010a.wav', shell=True)

# Each pass blocks until a reader opens the FIFO, feeds it one full
# conversion, then loops so the next reader gets a fresh copy.
cmd = 'sph2pipe -f wav 4a0a010a.wv1 > 4a0a010a.wav'
while True:
    subprocess.call(cmd, shell=True)
And in shell:
./keep_running.py &
play 4a0a010a.wav
play 4a0a010a.wav
play 4a0a010a.wav
I can use the audio file as many times as I want.
What do you think the limitations of this implementation would be?
Would I be limited by the number of processes I can spawn? Because it looks like I need to spawn a process for each file.

Don't do it, it's a bad idea.
If you insist anyway, perhaps just out of curiosity, here's a proof of concept.
mkfifo a.wav
sph2pipe a.sph >a.wav &
Now the results are available in a.wav, but only once: after you have consumed them they are gone, and a new instance of the background process has to be started if you need them again.
Sounds to me like a simple Makefile would serve your use case better (create missing files, recreate files which need to be updated, and potentially remove temporary targets once the main target has been made successfully).
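For illustration, a minimal sketch of such a Makefile, assuming sph2pipe's -f wav flag (it appears in the question's EDIT-3): each .wav is rebuilt from its .sph only when missing or out of date, and make clean removes the generated files.

# Build x.wav from x.sph for every .sph file in the directory.
SPH := $(wildcard *.sph)
WAV := $(SPH:.sph=.wav)

all: $(WAV)

%.wav: %.sph
	sph2pipe -f wav $< > $@

clean:
	rm -f $(WAV)

.PHONY: all clean

Run make to convert whatever is missing, and make clean once the converted files are no longer needed.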

No: a named pipe, or fifo(7), needs some existing process to write to it (and another to read from it). There is no magic that will start the writing process when some other process opens the fifo for reading.
You could write your own FUSE filesystem (whose read actions would produce the sound data). I am not sure it is worth the effort in your case.
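For the curious, a rough sketch of what such a filesystem could look like, using the third-party fusepy package (pip install fusepy). Everything here is illustrative rather than tested: the whole-file in-memory cache, the .wav/.sph name mapping, and the -f wav flag (borrowed from the question's EDIT-3).

#!/usr/bin/python3
import errno
import os
import stat
import subprocess
import sys

from fuse import FUSE, FuseOSError, Operations

class SphAsWav(Operations):
    """Expose a directory of .sph files as a directory of virtual .wav files."""

    def __init__(self, source_dir):
        self.source_dir = source_dir
        self.cache = {}  # virtual path -> converted WAV bytes

    def _convert(self, path):
        # Map /a.wav back to <source_dir>/a.sph and convert on first access.
        name = os.path.basename(path)
        if not name.endswith('.wav'):
            raise FuseOSError(errno.ENOENT)
        if path not in self.cache:
            sph = os.path.join(self.source_dir, name[:-4] + '.sph')
            if not os.path.exists(sph):
                raise FuseOSError(errno.ENOENT)
            done = subprocess.run(['sph2pipe', '-f', 'wav', sph],
                                  stdout=subprocess.PIPE, check=True)
            self.cache[path] = done.stdout
        return self.cache[path]

    def getattr(self, path, fh=None):
        if path == '/':
            return {'st_mode': stat.S_IFDIR | 0o755, 'st_nlink': 2}
        return {'st_mode': stat.S_IFREG | 0o444, 'st_nlink': 1,
                'st_size': len(self._convert(path))}

    def readdir(self, path, fh):
        yield '.'
        yield '..'
        for f in os.listdir(self.source_dir):
            if f.endswith('.sph'):
                yield f[:-4] + '.wav'

    def read(self, path, size, offset, fh):
        data = self._convert(path)
        return data[offset:offset + size]

if __name__ == '__main__':
    # usage: ./sphfs.py <directory with .sph files> <mount point>
    FUSE(SphAsWav(sys.argv[1]), sys.argv[2], foreground=True)

Unlike the FIFO trick, the virtual files here can be opened, seeked, and re-read any number of times.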
Remember that several processes can read or write the same file at once.

EDITED ANSWER
Or, if you don't have more than a couple of thousand files, you can spawn a process for each FIFO that keeps re-sending the file to it, like this:
for f in *.sph; do
    mkfifo "${f}.wav"
    (while :; do sph2pipe "$f" > "${f}.wav"; done) &
done
ORIGINAL ANSWER
I am not at my computer, but can you generate the WAV files automatically, use them, then delete them?
for f in *.sph; do
    sph2pipe "$f" > "${f}.wav"
done
Then use them, and when you are done, delete them:
rm *.wav

Related

Unix create multiple files with same name in a directory

I am looking for some kind of logic on Linux where I can place files with the same name in a directory or file system.
For example, if I create a file abc.txt, then the next time any process creates abc.txt it should automatically check and create the file as abc.txt.1, the next time abc.txt.2, and so on...
Is there a way to achieve this?
Any logic or third-party tools are also welcome.
You ask,
For example, if I create a file abc.txt, then the next time any process creates abc.txt it should automatically check and create the file as abc.txt.1
(emphasis added). To obtain such an effect automatically, for every process, without explicit provision by processes, it would have to be implemented as a feature of the filesystem containing the files. Such filesystems are called versioning filesystems, though typically the details are slightly different from what you describe. Most importantly, however, although such filesystems exist for Linux, none of them are mainstream. To the best of my knowledge, none of the major Linux distributions even offers one as a distribution-supported option.
Although it's a bit dated, see also Linux file versioning?
You might be able to approximate that for many programs via a customized version of the C standard library, but that's not foolproof, and you should not expect it to have universal effect.
It would be an altogether different matter for an individual process to be coded for such behavior. It would need to check for existing files and choose an appropriate name when opening each new file. In doing so, some care needs to be taken to avoid related race conditions, but it can be done. Details would depend on the language in which you are writing.
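For instance, a minimal Python sketch of that per-process approach: opening with O_CREAT|O_EXCL makes the existence check and the creation a single atomic step, so two processes can never claim the same name.

import os

def create_versioned(basename):
    """Open basename, or basename.1, basename.2, ... whichever is free."""
    name, n = basename, 0
    while True:
        try:
            # O_EXCL with O_CREAT fails if the file already exists,
            # so the check and the creation cannot race.
            fd = os.open(name, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
            return os.fdopen(fd, 'w'), name
        except FileExistsError:
            n += 1
            name = '%s.%d' % (basename, n)

fh, name = create_versioned('abc.txt')  # abc.txt, then abc.txt.1, abc.txt.2, ...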
You can use a Bash brace expansion to achieve this. For example, if I wanted to make 10 files, all with the same name but each with a unique number, I would do the following:
# touch my_file{01..10}.txt
This would create 10 files, numbered from 01 all the way to 10. This method is also handy for looping over files in a sequence, or if you are also creating directories.
Now, if I am reading your question right, you are asking that when you move or create a file in a directory, a script should automatically pick a new name for you? If that is the case, then just use a test, and if the file is already there, move it aside and mark it. Personally, I use timestamps to do so.
Logic:
# The [ -f ] test checks whether the file is already present
if [ -f "$MY_FILE_NAME" ]; then
    # If it is, move the old file aside, appending the shell's PID ($$)
    # so the parked name is unique
    mv "$MY_FILE_NAME" "${MY_FILE_NAME}_$$"
    mv "$MY_NEW_FILE" .
else
    # Otherwise just move or create the file here
    mv "$MY_NEW_FILE" .
fi
As you can see the logic is very simple. Hope this helps.
Cheers
I don't know about your particular use case, but you may try looking at logrotate:
https://wiki.archlinux.org/index.php/Logrotate

How to stream log file content when the file names are constantly changing, in Perl?

I have a series of applications on Linux systems whose logs I need to constantly 'stream' out, or even just 'tail', but the challenge is that the file names are constantly rolling over and changing.
They are all date-encoded (the dates being in different formats), and each then has its own incrementing scheme.
Most of them start at one and count up, but one has no extension on its first file and adds an extension from the second file on, and another increments a number but, on hitting 99, rolls over to an alpha character and resets the number to 01, then climbs again as it rolls over so quickly.
I just have OS-level shell scripting, OS command-line utilities, and Perl available to handle this situation for another application to pick up and read these logs.
The new files are always created right when the app starts writing to them, and groups of different logs (some I am reading, some I am not) are written to the same directory, so I cannot just pick up anything hitting the directory.
If I simply 'tail -n 1000000 -f |' them, this works fine today for the reader application I am using, until the file changes; I cannot set up file-list ranges within the reader application, but I can pre-process the logs so they appear as one continuous stream to the reader, rather than having the reader invoke commands to read them directly. A simple Perl log reader like this also works fine for a static file name, but not for dynamic ones. It is critical that I don't re-process any log lines and only capture new lines being written to the logs.
I admit I am not any kind of Perl guru, and the best answers/clues I have found so far involve Perl's glob function, but the examples I've found basically reprocess all of the files on each run and then seem to stop.
Example file names I am dealing with across the multiple apps I am trying to handle:
appA_YYMMDD.log
appA_YYMMDD_0001.log
appA_YYMMDD_0002.log
WS01APPB_YYMMDD.log
WS02APPB_YYMMDD.log
WS03AppB_YYMMDD.log
APPCMMDD_A01.log
APPCMMDD_B01.log
YYYYMMDD_001_APPD.log
As noted above, the files do not have the same inode, and simply monitoring the directory for changes is not possible, as a lot of things are written there. On the dev system more than 50 logs are being written to the directory, which holds thousands of files, and I am only trying to retrieve 5 of them. I am checking whether multitail can be made available to try that suggestion, but it is not currently available, and installing any additional RPMs in this environment is generally a multi-month battle.
ls -i
24792 APPA_180901.log
24805 APPA__180902.log
17011 APPA__180903.log
17072 APPA__180904.log
24644 APPA__180905.log
17081 APPA__180906.log
17115 APPA__180907.log
So, really, the root of what I am trying to do is get a continuous stream regardless of file-name changes, without running the extract command repeatedly and without big breaks in the data feed while some script figures out that the file being logged to has changed. I don't need to parse the contents (my other app does that). Is there an easy way of handling this changing file name?
How about monitoring the log directory for changes with Linux inotify, e.g. Linux::Inotify2? Then you could detect when new log files are created, stop reading from the old log file, and start reading from the new one.
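To sketch the event flow (shown in Python with the third-party inotify_simple package purely for illustration; the Perl module named above exposes the same inotify calls; the directory and the pattern are made up):

#!/usr/bin/python3
from fnmatch import fnmatch
from inotify_simple import INotify, flags

inotify = INotify()
inotify.add_watch('/var/log/myapp', flags.CREATE)  # the log directory

while True:
    for event in inotify.read():          # blocks until something happens
        if fnmatch(event.name, 'appA_*.log'):
            print('new log file:', event.name)
            # here: drain what is left of the old file, then tail the new one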
Try tailswitch. I created this script to tail log files that are rotated daily and have YYYY-MM-DD in their names. To use the script, you just say:
% tailswitch '*.log'
The quoting prevents the shell from expanding the glob pattern. The script re-evaluates the glob from time to time and switches to a newer file based on its name.
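A rough Python sketch of the same glob-and-switch idea, assuming the newest log sorts last by name (true for the YYMMDD-stamped names above); quote the pattern, as with tailswitch:

#!/usr/bin/python3
# usage: ./follow.py 'appA_*.log'
import glob
import sys
import time

def follow(pattern):
    current, fh = None, None
    while True:
        names = sorted(glob.glob(pattern))
        if names and names[-1] != current:
            if fh is None:
                fh = open(names[-1])
                fh.seek(0, 2)                # first run: skip old lines
            else:
                sys.stdout.write(fh.read())  # drain the old file, then switch
                fh.close()
                fh = open(names[-1])
            current = names[-1]
        line = fh.readline() if fh else ''
        if line:
            sys.stdout.write(line)
            sys.stdout.flush()
        else:
            time.sleep(0.5)                  # nothing new yet; poll shortly

if __name__ == '__main__':
    follow(sys.argv[1])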

Creating Batch Spectrograms Using FFMPEG?

So I want to create spectrograms using FFMPEG for thousands of FLAC files in batch.
I am using the following for just one file.
ffmpeg -i audio-in.wav -lavfi showspectrumpic image-out.png
However, I would like to do this for all the files in a certain folder (\Desktop\FLACfiles) and don't want to keep changing the file name and the image output name.
I would like to somehow create a batch script in Windows 10 that automatically creates a spectrogram based on the filename.
I was trying to make it work, but I don't have much experience with the command line or programming in general, so I'm not sure how to achieve this.
Simply put, I would like to run a command from a working directory containing FLAC files and create a spectrogram for each file, named after that file.
This worked for me using mp3 files, but it wasn't fast. Someone may have a better solution. Try this as a .bat file:
for %%a in ("*.mp3") do ffmpeg -i "%%a" -lavfi showspectrumpic "%%~na.png"
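(When typed directly at the command prompt rather than in a .bat file, use a single % instead of %%.) If a batch file is awkward, the same loop is easy from Python; a small sketch, assuming ffmpeg is on the PATH and the FLAC files are in the current directory:

#!/usr/bin/python3
import pathlib
import subprocess

# Create one spectrogram PNG per FLAC file, named after the file.
for flac in pathlib.Path('.').glob('*.flac'):
    subprocess.run(['ffmpeg', '-i', str(flac), '-lavfi', 'showspectrumpic',
                    str(flac.with_suffix('.png'))], check=True)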

Getting linux terminal value from my application

I am developing a Qt application on Linux. I want to pass Linux commands to a terminal. That worked, but now I also want to get the response from the terminal for the specific command.
For example,
ls -a
As you know, this command lists the directories and files in the current working directory. I now want to pass the values returned by the ls call to my application. What is the correct way to do this?
QProcess is the Qt class that lets you spawn a process and read the result. There is an example on that page of reading the output of a command.
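A minimal sketch of that approach, written with PyQt5 for brevity (the C++ QProcess API uses the same calls: start, waitForFinished, readAllStandardOutput):

#!/usr/bin/python3
import sys
from PyQt5.QtCore import QCoreApplication, QProcess

app = QCoreApplication(sys.argv)   # QProcess wants a Qt application object
proc = QProcess()
proc.start('ls', ['-a'])           # command plus arguments as a list
proc.waitForFinished()             # fine for a demo; connect to the
                                   # finished signal in a real GUI
print(bytes(proc.readAllStandardOutput()).decode())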
popen(), a C API available on Linux, returns a FILE * that you can read like an ordinary file; it may help you, perhaps.
Parsing ls(1) output is dangerous -- make a few files with funny names in a directory and test it out:
touch "one file"
touch "`printf "\x0a\x0a\x0ahello\x0a world"`"
That creates two files in the current working directory. I expect your attempts to parse ls(1) output won't work. This might be alright if you're showing the results to a human (though a human will be immensely confused if a filename includes output that looks just like ls(1) output!), but if you're trying to present something like an explorer.exe or Finder.app representation of files in the filesystem, this is horribly broken.
Instead, use opendir(3), readdir(3), and closedir(3) to read directory entries yourself. This will be safer, more portable, and (as a side benefit) slightly better performing.
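If Python is easier to experiment with than the raw C calls, os.scandir wraps the same opendir/readdir machinery (Qt's own QDir::entryList is the equivalent inside a Qt application); a tiny sketch:

#!/usr/bin/python3
import os

# Read the directory directly instead of parsing ls output, so newlines
# or spaces in file names cannot corrupt the result.
with os.scandir('.') as entries:
    for entry in entries:
        print('dir ' if entry.is_dir() else 'file', entry.name)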

How can you tell what files are currently open by any user?

I am trying to write a script or a piece of code to archive files, but I do not want to archive anything that is currently open. I need to find a way to determine which files in a directory are open. I want to use either Perl or a shell script, but can try other languages if needed. It will be in a Linux environment, and I do not have the option to use lsof. I have also had inconsistent results with fuser. Thanks for any help.
I am trying to take log files in a directory and move them to another directory. If the files are open, however, I do not want to do anything with them.
You are approaching the problem incorrectly. You wish to keep files from being modified underneath you while you are reading them, and you cannot do that without operating-system support. The best you can hope for on a multi-user system is to keep your archive metadata consistent.
For example, if you are creating the archive directory, make sure that the number of bytes stored in the archive matches the directory. You can checksum the file contents before and after reading from the filesystem, compare that with what you wrote to the archive, and perhaps flag the entry as "inconsistent".
What are you trying to accomplish?
Added in response to comment:
Look at logrotate to steal ideas about how to handle this consistently, or just have it do the work for you. If you are concerned that renaming files will break processes that are currently writing to them, take a look at man 2 rename:
rename() renames a file, moving it between directories if required. Any other hard links to the file (as created using link(2)) are unaffected. Open file descriptors for oldpath are also unaffected.
If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing.
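A quick Python illustration of that guarantee, with made-up file names: the writer keeps its descriptor across the rename, so rotating a log out from under it loses nothing.

#!/usr/bin/python3
import os

with open('app.log', 'w') as log:
    log.write('before rotation\n')
    log.flush()
    os.rename('app.log', 'app.log.1')  # atomic; open descriptors unaffected
    log.write('after rotation\n')      # still lands in (what is now) app.log.1

print(open('app.log.1').read())        # shows both lines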
Try ls -l /proc/*/fd/* as root.
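Since lsof is ruled out, the same /proc information is easy to collect from a short script; a sketch (Linux-only; run as root to see other users' processes):

#!/usr/bin/python3
import os

def open_files():
    # A file is "open" if any /proc/<pid>/fd entry resolves to its path.
    found = set()
    for pid in filter(str.isdigit, os.listdir('/proc')):
        fd_dir = os.path.join('/proc', pid, 'fd')
        try:
            fds = os.listdir(fd_dir)
        except OSError:
            continue                   # process exited, or permission denied
        for fd in fds:
            try:
                found.add(os.readlink(os.path.join(fd_dir, fd)))
            except OSError:
                pass                   # descriptor closed while we looked
    return found

for path in sorted(open_files()):
    print(path)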
msw has answered the question correctly, but if you want a list of the open files, the lsof command will give it to you.
