How to avoid infinite loop with --json and --watch using jest?

I'm trying to create a simple test script that, when run, enters watch mode and rewrites a file called jest-lock.json:
"test:output:watch": "jest --json --outputFile=jest-lock.json --watch"
When this runs, it simply enters an infinite loop, and I'm not sure what I'm doing wrong.
I have a simple test and I'm trying to do this in order to use the storybook jest-addon.
Any thoughts? All is appreciated.
Thanks

It appears this wasn't really solved anywhere (a good challenge! If I could add my reputation to your bounty, I would), but this source does give good information on why watch loops: a watcher re-runs whenever a watched file changes, so if each run writes a file inside the watched tree, every run triggers the next one. The classic illustration uses the Unix watch command:
watch -n1 'wc -l my.log | tee -a statistics.log'
This executes wc every second, appends its output to statistics.log, and also shows it on the screen. So you'll end up with a file filled with numbers, representing the successive line counts of my.log.
Your script is stuck in the same kind of cycle: --watch re-runs the tests whenever a file in the project changes, and --outputFile=jest-lock.json changes a file in the project on every run, which immediately triggers the next run. (I'm not a JSON expert and this isn't entirely my own solution, but it's a good start at least.)
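One way to break the cycle, assuming your version of Jest supports the watchPathIgnorePatterns configuration option (an assumption worth verifying against your Jest version), is to tell the watcher to ignore the report file, e.g. in package.json alongside your script:
"jest": {
  "watchPathIgnorePatterns": ["<rootDir>/jest-lock.json"]
}
With that in place, writing jest-lock.json should no longer count as a change that re-runs the tests.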

Related

Redirecting input to script running in background

I wrote a script for resizing windows, which requires an orientation and a value in the form of a fraction, like so:
resize.sh -h 1/2
and it works as expected.
I also added a -k flag, which means that the script requires user input, like so:
resize.sh -k -h
and in the script:
read -rsn 2 fraction
which I parse to get values for numerator and denominator.
This works great from the command line, but the idea behind this is to bind resize.sh -k -h to some key combination and pass the following two keystrokes as input. When I run the script from a keyboard shortcut, though, it runs as a background process not associated with any tty, so read cannot get its input. Is there any way to redirect global input to a background process after launching it from a keyboard shortcut?
What I tried so far:
Redirection to /proc/$$/fd/0, which didn't work.
Redirecting the currently active tty's stdin to read, like so:
read -rsn 2 fraction < /dev/pts/0
which actually worked, but the problem is that not all windows are terminals, e.g. a web browser.
If my question is unclear, please feel free to ask for additional clarifications or details, and thanks in advance :)
You can use a named pipe for the interprocess communication.
I made an example script where the background process is a function:
#!/bin/bash
pipe_name=/tmp/mypipe$$            # unique fifo name based on this shell's PID
mkfifo "${pipe_name}"

resize()
{
    # Opening the fifo blocks until a writer opens it; read then gets the line
    read fraction < "${pipe_name}"
    echo "Resize window to fraction=${fraction}"
}

resize &                           # stands in for the real background script
read -p "Enter your fraction: "    # the answer lands in $REPLY
echo "${REPLY}" > "${pipe_name}"   # hands the input to the background read
rm "${pipe_name}"
Thank you both for providing very useful information. The solution is actually a combination of both.
First I modified the read command in resize.sh to get its input from a named pipe, as Walter suggested. Then I wrote a new, kind of "wrapper" script, which executes resize.sh in the background and then, since Barmar pointed out that I need a GUI window, starts a very small terminal window running read and passing the input to the named pipe. Furthermore, using wmctrl I managed to place the small terminal window right where the currently active window begins, and hide it below (thanks to openbox per-application properties), so it's technically not visible at all :)
It's really too hacky for my liking, but it was the only option I could think of at the moment, so until I find a better way, this gets the job done.
Once again, thank you both for directing me toward the solution, I really appreciate it, cheers :)
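For the record, a rough sketch of what that wrapper looks like (the terminal emulator, geometry, and pipe path are illustrative assumptions, not the exact code; resize.sh here has been modified to read its two keys from the pipe instead of stdin):
#!/bin/bash
pipe=/tmp/resize_pipe
mkfifo "$pipe" 2>/dev/null
resize.sh -k -h &                  # reads its keys from $pipe internally
# Tiny terminal window grabs the next two keystrokes and forwards them
xterm -geometry 1x1+0+0 -e bash -c 'read -rsn 2 keys; printf "%s" "$keys" > /tmp/resize_pipe'
wait
rm -f "$pipe"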

How can I run two bash scripts simultaneously and without repetition of the same action?

I'm trying to write a script that automatically runs a data analysis program. The data analysis takes a file, analyzes it, and puts all the outputs into a folder. The program can be run in two terminals simultaneously (each analyzing a different subject file).
I wrote a script that does all the inputs automatically. However, if I run the script in two terminals simultaneously, it will analyze the same subject twice (useless), so I can only run one instance at a time.
Currently, my script looks like:
for name in `ls [file_directory]`
do
[Data analysis commands]
done
If you run this in two terminals, both start from the top of the directory containing all the data files, which is the problem. I tried adding checks for duplicates, but they weren't very effective.
I tried a name comparison with the if command. That didn't work because almost all of the output files have unique names, so the inner loop would compare against the first output folder at the top of the directory and conclude the name was different, even though an output folder further down had the same name. It looked something like:
for name in `ls <file_directory>`
do
    for output in `ls <output_directory>`
    do
        if [ "$name" == "$output" ]
        then
            echo "This file has already been analyzed."
        else
            <data analysis commands>
        fi
    done
done
I thought this was the right method, but apparently not: I would need to check all the names before any decision is made, rather than one by one, which is what this does.
Then I tried moving completed data files out of the way with the mv command. That didn't work because name in the for statement had already captured all the file names, so the loop marched down the original list regardless of what was in the folder at present. I remember reading that shell scripts don't re-evaluate things in "real time", so it makes sense that this didn't work.
My thought was to find some modification of that if statement so it does all the name checks before making a decision (how?).
Also, are there any other commands I might be missing that I could try?
One pattern I use often is the split command:
ls <file_directory> > file_list
split -d -l 10 file_list file_list_part
This will create files named file_list_part00 through file_list_partnn.
You can then feed these file names to your script:
for file_part in file_list_part*
do
    while IFS= read -r file_name
    do
        data_analysis_command "$file_name"
    done < "$file_part"
done
Never use "ls" in a "for" (http://mywiki.wooledge.org/ParsingLs)
I think you should use a fifo (see mkfifo)
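In the same spirit as the comments, here's a lightweight pattern of my own (a sketch; the paths and the marker-directory name are assumptions): let each running instance atomically "claim" a file before analyzing it. mkdir fails if the directory already exists, and the test-and-create is a single atomic operation, so two instances can never claim the same file.
#!/bin/bash
claims=/path/to/output/.claims      # hypothetical marker directory
mkdir -p "$claims"
for name in /path/to/data/*
do
    # mkdir is atomic: exactly one competing instance succeeds here
    if mkdir "$claims/$(basename "$name")" 2>/dev/null
    then
        data_analysis_command "$name"    # stand-in for the analysis commands
    fi
done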
As a follow-on from the comments, you can install GNU Parallel with homebrew:
brew install parallel
Then your command becomes:
parallel analyse ::: *.dat
and it will process all your files in parallel using as many CPU cores as you have in your Mac. You can also add in:
parallel --dry-run analyse ::: *.dat
to get it to show you the commands it would run without actually running anything.
You can also add in --eta (Estimated Time of Arrival) for an estimate of when the jobs will be done, and -j 8 if you want to run, say, 8 jobs at a time. Of course, if you specifically want the 2 jobs at a time you asked for, use -j 2.
You can also have GNU Parallel simply distribute jobs and data to any other machines you may have available via ssh access.
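Putting those flags together for the exact two-at-a-time case asked about (analyse and *.dat are the placeholder names from above):
parallel -j 2 --eta analyse ::: *.dat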

Interactive quiz in Bash (Multiple Q's)

I'm teaching an introductory Linux course and have abandoned the paper-based multiple-choice quizzes and have created interactive quizzes in Bash. My quiz script is functional, but kind of quick-and-dirty, and now I'm in the improvement phase and looking for suggestions.
First off, I'm not looking to automate the grading, which certainly simplifies things.
Currently, I have a different script file for each quiz, and the questions are hard-coded. That's obviously terrible, so I created a .txt file holding the questions, delimited by lines with "question 01" etc. I can loop through and use sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/p", but this prints the delimiter lines. I can pipe through sed "/^q/d" or head -n-1|tail -n+2 to get rid of them, but is there a better way?
Second issue: for questions where the answer is an actual command, I'm printing a [user]$ prompt, but for short answers, I'm using a >. In my text file, the last line of each question is the prompt to use. Initially, I was thinking I could store the question in a variable and pipe it through tail -1 to get the prompt, but, duh, when you store it, it strips newlines. I want the cursor to immediately follow the prompt, so I either need to pass it to read -p or strip the final newline from the output. (Or create some marker in the file to differentiate between the $ and > prompts.) One thought I had was to store each question in a separate file and just cat it to display it, making sure there was no newline at the end. That might be kind of a pain to maintain, but it would solve both problems. Thoughts?
Now to how I'm actually running the quiz. This is a Fedora 20 box, and I tried copying bash and setuid-ing it to me so that it would be able to read the quiz script that the students couldn't normally read, but I couldn't get that to work. After some trial and error, I ended up copying touch and setuid-ing it to me, then using that to create their answer file in a "submit" directory with an ACL, so new files get o=w: the students can write to their answer file (in the quiz, via echo with >>) but not read it back or access the directory. The only major loophole I see is that they can delete their file by name and start the quiz over with no record of having done so. Since I'm not doing any automatic grading, I'm not terribly concerned with the students being able to read the script file, although if I'm storing the questions separately, I suppose I could make a copy of cat and setuid it to read in files that they can't access.
Also, I realize that Bash is not the best choice for this, and learning the required simple input/output for Python or something better would not take much effort. Perhaps that's my next step.
1) You could use
sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }"
Here // repeats the last attempted pattern, which is the opening pattern in the first line of the range and the closing pattern for the rest.
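For example, with a questions.txt laid out the way the question describes (the contents here are made up):
$ cat questions.txt
question 01
What does the pwd command print?
[user]$
question 02
Name the key combination that sends EOF.
>
$ i=1; sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }" questions.txt
What does the pwd command print?
[user]$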
...by the way, if you really want to do this with sed, you better be damn sure that i is a number, or you'll run into code injection problems.
2) You can store multiline command output in a variable without problems. You just have to make sure you quote the variable ever after, to avoid shell expansion on it. For example,
QUESTION=$(sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }" questions.txt)
echo -n "$QUESTION" # <-- the double quotes are important here.
The -n option to echo tells echo to not append a newline at the end, which should take care of your prompt problem.
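And if you'd rather hand the prompt straight to read, as the question contemplates, a sketch building on the above (note head -n -1 is GNU-specific; command substitution strips only trailing newlines, so the inner newlines survive):
QUESTION=$(sed -n "/^quest.*$i\$/,/^quest.*$(($i+1))\$/ { //!p }" questions.txt)
PROMPT=$(printf '%s' "$QUESTION" | tail -n 1)   # last line of the block is the prompt
printf '%s\n' "$QUESTION" | head -n -1          # print the question body only
read -rp "$PROMPT" answer                       # cursor sits right after the prompt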
3) Yes, well, hackery breeds more hackery. If you want to lock this down, the first order of business would be to not give students a shell on the test machine. You could put your script behind inetd and have the students fill it out with telnet or something, I suppose, but...really, why bash? If it were me, I'd knock something together with a web server and one of the several gazillion php web quiz frameworks. Although I also have to wonder why it's a problem if students can see the questions and the answers they gave. It's not like all students use the same account and can see each other's answers, is it? (is it?) Don't store an answer key on the same machine and you shouldn't have a problem.

Redirect program output without changing directory

Problem
I'm writing a set of scripts to help with automated batch job execution on a cluster.
The specific thing I have is a $OUTPUT_DIR, and an arbitrary $COMMAND.
I would like to execute the $COMMAND such that its output ends up in $OUTPUT_DIR.
For example, if COMMAND='cp ./foo ./bar; mv ./bar ./baz', I would like to run it such that the end result is equivalent to cp ./foo ./$OUTPUT_DIR/baz.
Ideally, the solution would look something like eval PWD="./$OUTPUT_DIR" $COMMAND, but that doesn't work.
Known solutions [and their problems]:
Editing $COMMAND: In most cases the command will be a script, or a compiled C or FORTRAN executable. Changing the internals of these isn't an option.
unionfs, aufs, etc.: While this is basically perfect, users running this won't have root, and causing thousands+ of arbitrary mounts seems like a questionable choice.
Copying / hard or soft links: This might be the solution I will have to use: some variety of actually duplicating the entire contents of ./ into ./$OUTPUT_DIR.
cd $OUTPUT_DIR; ../$COMMAND: Fails if $COMMAND ever reads files by a relative path, since those paths now resolve inside $OUTPUT_DIR.
Pipes: Only work if $COMMAND doesn't directly work with files, which it usually does.
Is there another solution that I'm missing, or is this request actually impossible?
[EDIT:] Chosen solution
I'm going to go with something where each object in the directory is symbolically linked into the output directory, and the command is then run from there.
This has the downside of creating a lot of symbolic links, but it shouldn't be too bad.
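A sketch of that approach (the names are assumptions, readlink -f is GNU-specific, and hidden files would need extra handling):
mkdir -p "$OUTPUT_DIR"
for f in ./*
do
    [ "$f" = "./$OUTPUT_DIR" ] && continue      # don't link the output dir into itself
    ln -s "$(readlink -f "$f")" "$OUTPUT_DIR/"
done
( cd "$OUTPUT_DIR" && bash -c "$COMMAND" )      # relative reads now resolve to the links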
You can't solve this without making some assumptions about the interface of $COMMAND. There is no single definition of what "output ends up in $OUTPUT_DIR" means. For one program this may be some files, but another program might just print something to stdout and yet another might try sending some data over the internet using some protocol or display something in a GUI and there isn't an obvious way of mapping all of these to "output goes to $OUTPUT_DIR".
So, you need to invent some assumptions and require any $COMMAND implementation to follow them. Then it may be as simple as requiring the command to accept a parameter such as --target=<DIR>. If a given command doesn't support that, you would have to create a wrapper script around it to translate the parameter into what the app accepts. cp, mv and a few more utils already accept the --target parameter, so that may be a good starting point.
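A wrapper in that spirit might look like this (my_tool and its --output-dir flag are hypothetical placeholders; the --target convention is the one proposed above):
#!/bin/bash
# Translate the agreed-upon --target=<DIR> into whatever the tool accepts
target=.
args=()
for a in "$@"
do
    case "$a" in
        --target=*) target=${a#--target=} ;;
        *)          args+=("$a") ;;
    esac
done
my_tool "${args[@]}" --output-dir "$target"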
You cannot set the output directory; you can only set the working directory.
The problem is that once you set the working directory, other relative references become invalid. For example, the reference to ./foo in your command:
cp ./foo ./bar
now resolves inside the new working directory, where foo doesn't exist. If you have a specific command, there are workarounds (creating a script that alters the arguments, prepending the directory to specific arguments), but in general this is not possible.

How to tell if a process has ended?

Besides using top, is there a more precise way of identifying whether the last executed command has finished, if I have to check from a separate session over PuTTY?
pgrep
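To expand on that: pgrep exits with status 1 when nothing matches, so (the process name here is a placeholder) you can test for completion directly:
if pgrep -x my_analysis > /dev/null
then
    echo "still running"
else
    echo "finished"
fi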
How about getting it to run another command immediately afterwards that sets a flag?
$ do_command ; touch I_FINISHED
Then, when the command finishes, it'll create a file called I_FINISHED that you can look for.
Or do something more sophisticated that writes to a log file, if you're doing it multiple times.
I agree that it may be a faster option in the long run to have your program write to a log file or create a notification. Just put it at the end of the executed code, past the part that you suspect may cause it to hang.
ps -eo cmd
This lists all processes and displays the command line, as 'typed' when each command started, so you will be able to tell your script apart from anything else that's running.
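For a quick check from the second session, the usual companion is grep with the bracket trick, which keeps grep from matching its own entry (the script name is a placeholder):
ps -eo pid,etime,cmd | grep '[m]y_script.sh'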
