Why does Shake think this file has changed? - incremental-build

Full Shakefile.hs:
import Development.Shake
import Development.Shake.FilePath

outDir = "_build"

expensiveProcess :: IO ()
expensiveProcess = do
    putStrLn "expensiveProcess"
    writeFile (outDir </> "b") "b"

main = shakeArgs shakeOptions{ shakeFiles = outDir } $ do
    outDir </> "a" %> \out -> do
        alwaysRerun
        writeFileChanged out "a"
    outDir </> "b" %> \out -> do
        need [outDir </> "a"]
        liftIO expensiveProcess
Let's say I start with an empty _build directory, and shake _build/b. In that case, _build/a has changed (since it's a new file), so we run some expensive process to generate _build/b:
$ rm -rf _build; stack exec -- shake --trace --trace _build/b
% Starting run
% Number of actions = 1
% Number of builtin rules = 9 [FilesQ,DoesDirectoryExistQ,GetDirectoryContentsQ,AlwaysRerunQ,GetDirectoryDirsQ,DoesFileExistQ,GetDirectoryFilesQ,GetEnvQ,FileQ]
% Number of user rule types = 1
% Number of user rules = 2
% Before usingLockFile on _build/.shake.lock
% After usingLockFile
% Missing -> Running, _build/b
# _build/b
% Missing -> Running, _build/a
# _build/a
% Missing -> Running, alwaysRerun
% Running -> Ready, alwaysRerun
= ((),"") (changed)
% Running -> Ready, _build/a
= ((Just File {mod=0x5F12F628,size=0x1,digest=NEQ},"")) (changed)
expensiveProcess
% Running -> Ready, _build/b
= ((Just File {mod=0x4D7E9358,size=0x1,digest=NEQ},"")) (changed)
Build completed in 0.00s
If I now shake _build/b again, then expensiveProcess doesn't run (good!), since its input _build/a hasn't changed. However, its output is marked as changed, which can cause problems further downstream if anything else depends on _build/b:
$ stack exec -- shake --trace --trace _build/b
% Starting run
% Number of actions = 1
% Number of builtin rules = 9 [FilesQ,DoesDirectoryExistQ,GetDirectoryContentsQ,AlwaysRerunQ,GetDirectoryDirsQ,DoesFileExistQ,GetDirectoryFilesQ,GetEnvQ,FileQ]
% Number of user rule types = 1
% Number of user rules = 2
% Before usingLockFile on _build/.shake.lock
% After usingLockFile
% Chunk 0 [len 34] 01000100000000000000040000000100000001000000010000000000000000000000 Id 1 = (StepKey (),Loaded (Result {result = "\SOH\NUL\NUL\NUL", built = Step 1, changed = Step 1, depends = [], execution = 0.0, traces = []}))
% Chunk 1 [len 30] 0a000400000000000000000000000100000001000000728ad03600000000 Id 4 = (alwaysRerun,Loaded (Result {result = "", built = Step 1, changed = Step 1, depends = [], execution = 6.215e-6, traces = []}))
% Chunk 2 [len 66] 080003000000080000005f6275696c642f611400000000000000000000002af6125f03000000010000000100000001000000fd957b39080000000400000004000000 Id 3 = (_build/a,Loaded (Result {result = "\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL*\246\DC2_\ETX\NUL\NUL\NUL\SOH\NUL\NUL\NUL", built = Step 1, changed = Step 1, depends = [[Id 4]], execution = 2.39931e-4, traces = []}))
% Chunk 3 [len 66] 080002000000080000005f6275696c642f621400000000000000000000005a937e4d03000000010000000100000001000000a1942b39080000000400000003000000 Id 2 = (_build/b,Loaded (Result {result = "\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NULZ\147~M\ETX\NUL\NUL\NUL\SOH\NUL\NUL\NUL", built = Step 1, changed = Step 1, depends = [[Id 3]], execution = 1.63632e-4, traces = []}))
% Chunk 4 [len 50] 000005000000000000000000000001000000010000000000000008000000040000000200000008000000ed872a3aed872a3a Id 5 = (Root,Loaded (Result {result = "", built = Step 1, changed = Step 1, depends = [[Id 2]], execution = 0.0, traces = [Trace {traceMessage = "", traceStart = 6.50524e-4, traceEnd = 6.50524e-4}]}))
% Read 5 chunks, plus 0 slop
% Found at most 6 distinct entries out of 5
% Loaded -> Running, _build/b
% Loaded -> Running, _build/a
% Loaded -> Running, alwaysRerun
% Running -> Ready, alwaysRerun
= ((),"") (changed)
# _build/a
% Running -> Ready, _build/a
= ((Just File {mod=0x5F12F628,size=0x1,digest=NEQ},"")) (unchanged)
% Running -> Ready, _build/b
= ((Just File {mod=0x4D7E9358,size=0x1,digest=NEQ},"")) (changed)
Build completed in 0.00s
Why is _build/b marked as changed in this second run?

Shake v0.19.6 had a bug (fixed in 90a52d9): if a rule didn't rebuild at all, it was still reported as (changed) whenever the last run of that rule had changed. In your case, the very first build caused _build/b to change, so as long as _build/b never ran again it would keep saying (changed). That's not very useful, so Shake now reports (didn't run) for rules that didn't run, and that's what it now reports for _build/b the second time around:
% Running -> Ready, _build/a
= ((Just File {mod=0xE205992F,size=0x1,digest=NEQ},"")) (unchanged)
% Running -> Ready, _build/b
= ((Just File {mod=0xE208A5E4,size=0x1,digest=NEQ},"")) (didn't run)
This bug only affected the output with --trace turned on; internally, Shake already knew not to continue building, even before this fix.
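To check that this really is only a reporting issue, here is a minimal sketch. It assumes a hypothetical extra target _build/c added to the Shakefile above, and replaces the expensiveProcess call with writeFileChanged just to keep the sketch self-contained. If _build/b were genuinely treated as changed on the second run, the _build/c rule would run again and print "downstream"; in practice it should only print on the first build.

import Development.Shake
import Development.Shake.FilePath

outDir :: FilePath
outDir = "_build"

main :: IO ()
main = shakeArgs shakeOptions{ shakeFiles = outDir } $ do
    outDir </> "a" %> \out -> do
        alwaysRerun
        writeFileChanged out "a"
    outDir </> "b" %> \out -> do
        need [outDir </> "a"]
        -- stand-in for expensiveProcess: only rewrites the file when the
        -- contents actually differ
        writeFileChanged out "b"
    -- hypothetical downstream rule: "downstream" should be printed on the
    -- first build only, because _build/b does not rebuild afterwards
    outDir </> "c" %> \out -> do
        need [outDir </> "b"]
        liftIO $ putStrLn "downstream"
        copyFileChanged (outDir </> "b") out

Building _build/c twice should print "downstream" once, and on a Shake version with the fix the second run's trace should report (didn't run) for the rules that were skipped.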

Related

Is there a way to handle a variable number of inputs/outputs in Nextflow?

Is there a way to handle a varying number of inputs/outputs in Nextflow? Sometimes in the example below process 'foo' will have three inputs (and therefore create three PNGs that need to be stitched together by 'bar'), but other times there will be two or four. I'd like process 'bar' to be able to combine all existing files in 'foo.out.files', regardless of how many there are. As it stands, this only handles things properly if there are exactly three inputs in params.input, not two or four.
Thanks!
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

process foo {
    input:
    path input_file

    output:
    path '*.png', emit: files

    """
    script that creates variable number of png files
    """
}

process bar {
    input:
    tuple path(file_1), path(file_2), path(file_3)

    """
    script that combines png files ${file_1} ${file_2} ${file_3}
    """
}

workflow {
    foo(params.input)
    bar(foo.out.files.collect())
}
UPDATE:
I'm getting 'Input tuple does not match input set cardinality' errors for this, for example:
params.num_files = 3

process foo {
    input:
    val num_files

    output:
    path '*.png', emit: files

    """
    touch \$(seq -f "%g.png" 1 ${num_files})
    """
}

process bar {
    debug true

    input:
    tuple val(word), path(png_files)

    """
    echo "${word} ${png_files}"
    """
}

workflow {
    foo( params.num_files )

    words = Channel.from('a','b','c')

    words
        .combine(foo.out.files)
        .set { combined }

    bar(combined)
}
You don't actually need the tuple qualifier here: the path input qualifier can also handle a collection of files. If you use a variable name or the * wildcard, the original filenames will be preserved. In the example below, the variable refers to all files in the list, but if you need to, you can also access specific entries; for example:
params.num_files = 3

process foo {
    input:
    val num_files

    output:
    path '*.png', emit: files

    """
    touch \$(seq -f "%g.png" 1 ${num_files})
    """
}

process bar {
    debug true

    input:
    path png_files

    script:
    def first_png = png_files.first()
    def last_png = png_files.last()

    """
    echo ${png_files}
    echo "${png_files[0]}"
    echo "${first_png}, ${last_png}"
    """
}

workflow {
    bar( foo( params.num_files ) )
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [mighty_solvay] DSL2 - revision: 662b108e42
executor > local (2)
[e0/a619c4] process > foo [100%] 1 of 1 ✔
[ba/5f8032] process > bar [100%] 1 of 1 ✔
1.png 2.png 3.png
1.png
1.png, 3.png
If you need to avoid potential filename collisions, you can have Nextflow rewrite the input filenames using a name pattern. If the name pattern is a simple string and a collection of files is received, the filenames will be appended with a numerical suffix representing the ordinal position in the list. For example, if we change the 'bar' process definition to:
process bar {
    debug true

    input:
    path 'png_file'

    """
    echo png_file*
    """
}
We get:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [marvelous_bhaskara] DSL2 - revision: 980a2d067f
executor > local (2)
[f8/190e2c] process > foo [100%] 1 of 1 ✔
[71/e53b05] process > bar [100%] 1 of 1 ✔
png_file1 png_file2 png_file3
$ nextflow run main.nf --num_files 1
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [nauseous_brattain] DSL2 - revision: 980a2d067f
executor > local (2)
[ce/2ba1b1] process > foo [100%] 1 of 1 ✔
[a2/7b867e] process > bar [100%] 1 of 1 ✔
png_file
Note that the * and ? wildcards can be used to control the names of the staged files. There is a table in the documentation that describes how the wildcards are to be replaced depending on the cardinality of the collection. For example, if we again change the 'bar' process definition to:
process bar {
    debug true

    input:
    path 'file*.png'

    """
    echo file*.png
    """
}
We get:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [small_poincare] DSL2 - revision: b106710bc6
executor > local (2)
[7c/cf38b8] process > foo [100%] 1 of 1 ✔
[a6/8cb817] process > bar [100%] 1 of 1 ✔
file1.png file2.png file3.png
$ nextflow run main.nf --num_files 1
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [friendly_pasteur] DSL2 - revision: b106710bc6
executor > local (2)
[59/2b235b] process > foo [100%] 1 of 1 ✔
[2f/76e4e2] process > bar [100%] 1 of 1 ✔
file.png

How does the syntax of an if/then/else within a do block work in Haskell

I'm trying to make the following function:
repcountIORIban :: IORef -> Int -> Int -> Int -> Int -> Lock -> IORef -> Lock -> Int -> Int -> IO ()
repcountIORIban count number lower modulus amountthreads lock done lock2 difference rest = do
    if rest > number
        then let extra = 1
        else let extra = 0
    if number + 1 < amountthreads
        then
            forkIO $ realcountIORIban(count lower (lower + difference + extra - 1) modulus lock done lock2)
            repcountIORIban (count (number + 1) (lower + difference + extra) modulus amountthreads lock done lock2 difference rest)
        else
            forkIO $ realcountIORIban(count lower (lower + difference + extra - 1) modulus lock done lock2)
But I can't run the program that this function is part of. It gives me the error:
error: parse error on input `else'
|
113 | else let extra = 0
| ^^^^
I've got this error a lot of times within my program, but I don't know what I'm doing wrong.
This is incorrect: you can't put a let after then/else and expect those lets to define bindings that are visible below.
do if rest > number
     then let extra = 1 -- wrong, needs a "do", or should be "let .. in .."
     else let extra = 0
   ... -- In any case, extra is not visible here
Try this instead
do let extra = if rest > number
                 then 1
                 else 0
   ...
Further, you need to use then do if you need to perform two or more actions after it.
if number + 1 < amountthreads
    then do
        something
        somethingElse
    else -- use do here if you have two or more actions
        ...
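Putting both fixes together, a minimal sketch of how the whole function could be laid out is below. The Lock type, the IORef element types, and realcountIORIban are hypothetical stand-ins (in the real program they come from the rest of the question's code), so this only illustrates the let/if/do layout, not the counting logic:

import Control.Concurrent (forkIO)
import Control.Monad (void)
import Data.IORef (IORef)

-- Hypothetical stand-ins so the sketch is self-contained; in the real
-- program these come from the rest of the code.
type Lock = ()

realcountIORIban :: IORef Int -> Int -> Int -> Int -> Lock -> IORef Bool -> Lock -> IO ()
realcountIORIban _count _lower _upper _modulus _lock _done _lock2 = pure ()

repcountIORIban :: IORef Int -> Int -> Int -> Int -> Int -> Lock -> IORef Bool
                -> Lock -> Int -> Int -> IO ()
repcountIORIban count number lower modulus amountthreads lock done lock2 difference rest = do
    -- bind extra once, before it is used, instead of inside then/else
    let extra = if rest > number then 1 else 0
    if number + 1 < amountthreads
        then do  -- "do" is needed because this branch performs two actions
            void $ forkIO $ realcountIORIban count lower (lower + difference + extra - 1) modulus lock done lock2
            repcountIORIban count (number + 1) (lower + difference + extra) modulus amountthreads lock done lock2 difference rest
        else
            void $ forkIO $ realcountIORIban count lower (lower + difference + extra - 1) modulus lock done lock2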

How to create a watchdog on a program in python?

I want to know whether it is even possible to create a watchdog on a program.
I am doing discrete event simulation to simulate a functioning machine.
The problem is: if I start inspecting my machine at, say, time = 12 (with an inspection duration of, say, 2 hours) and the failure event is at 13 time units, there is no way it can happen, because I am "busy inspecting".
So is there a sort of "watchdog" that constantly tests whether the value of a variable has reached a certain limit, so that the program stops what it is doing?
Here is my inspection function:
def machine_inspection(tt, R, Dpmi, Dinv, FA, TRF0, Tswitch, Trfn):
    End = 0
    TPM = 0
    logging.debug(' cycle time %f' % tt)
    TRF0 = TRF0 - Dinv
    Tswitch = Tswitch - Dinv
    Trfn = Trfn - Dinv
    if R == 0:
        if falsealarm == 1:
            FA += 1
        else:
            tt = tt + Dpmi
            TPM = 1
            End = 1
    return (tt, End, R, TPM, FA, TRF0, Trfn, Tswitch)
Thank you very much!
Basically, you can't be inspecting for a duration x if tt + x would be greater than the time to failure TRF0 or Trfn; before starting an inspection, check that condition and cap or pre-empt the inspection if it fails.

SPIN assert not triggered

I am trying to understand why the assert in this model isn't triggered.
ltl { !A#wa U B#sb && !B#wb U A#sa }

byte p = 0
byte q = 0
int x = 0

inline signal(sem) { sem++ }
inline wait (sem) { atomic { sem > 0 ; sem-- } }

proctype A() {
    x = 10*x + 1
    signal(p)
    sa: wait(q)
    wa: x = 10*x + 2
}

proctype B() {
    x = 10*x + 3
    signal(q)
    sb: wait(p)
    wb: x = 10*x + 4
}

init {
    atomic { run A(); run B() }
    _nr_pr == 1
    assert(x != 1324)
}
Clearly, there is an order of operations that produces the final value x = 1324:
Initially x = 0
A sets x = 10*0 + 1 = 1
B sets x = 10*1 + 3 = 13
A and B allow each other to proceed
A sets x = 10*13 + 2 = 132
B sets x = 10*132 + 4 = 1324
The assertion isn't triggered because it is "never reached" when the solver proves that the property
ltl { !A#wa U B#sb && !B#wb U A#sa }
is true.
Take a look at the output given by the solver; it clearly states that it checks assertion violations, but only if they are within the scope of the claim:
Full statespace search for:
never claim + (ltl_0)
assertion violations + (if within scope of claim)
and that the assertion isn't reached:
unreached in init
t.pml:27, state 5, "assert((x!=1324))"
t.pml:28, state 6, "-end-"
(2 of 6 states)
You can use the option -noclaim to check the model only for the assertion, which is then easily proven false:
~$ spin -search -noclaim t.pml
ltl ltl_0: ((! ((A#wa))) U ((B#sb))) && ((! ((B#wb))) U ((A#sa)))
pan:1: assertion violated (x!=1324) (at depth 13)
pan: wrote t.pml.trail
(Spin Version 6.4.8 -- 2 March 2018)
Warning: Search not completed
+ Partial Order Reduction
Full statespace search for:
never claim - (not selected)
assertion violations +
cycle checks - (disabled by -DSAFETY)
invalid end states +
State-vector 36 byte, depth reached 15, errors: 1
48 states, stored
6 states, matched
54 transitions (= stored+matched)
1 atomic steps
hash conflicts: 0 (resolved)
Stats on memory usage (in Megabytes):
0.003 equivalent memory usage for states (stored*(State-vector + overhead))
0.286 actual memory usage for states
128.000 memory used for hash table (-w24)
0.534 memory used for DFS stack (-m10000)
128.730 total actual memory usage
pan: elapsed time 0 seconds

Understanding an Error Trail from Spin Modelchecker

I am trying to use the Spin model checker to model-check a game between two objects (A and B). The objects move on a board, and each location is defined by its (x,y) coordinates. The two objects are not supposed to collide. I have three processes: init, A Model, and B Model. I am model checking an LTL property (a liveness property to check whether the two objects ever occupy the same location):
ltl prop1 { [] (!(x_a == x_b) && !(y_a == y_b)) }
The error trail that I get is:
init -> A Model -> B Model -> init
However, I should not get an error trail (counterexample) based on the data that is shown: x_a=2, x_b=1, y_a=1, y_b=1.
Also, the first init goes through all the lines of the init process, but the second one only shows the last line of it.
Also, my A Model and B Model only consist of guards and actions in a 'do' block, as shown below. However, they are more complex and have if blocks to the right of '->':
active proctype AModel(){
    do
    :: someValue == 1 -> go North
    :: someValue == 2 -> go South
    :: someValue == 3 -> go East
    :: someValue == 4 -> go West
    :: else -> skip;
    od
}
Do I need to put anything in an atomic block? The reason I am asking is that the line the error trail shows does not even go into the 'do' block; it is just the first line of the two models.
EDIT:
The LTL property was wrong. I changed that to:
ltl prop1 { [] (!((x_a == x_b) && (y_a == y_b))) }
However, I am still getting the exact same error trail.
Your LTL property is implemented incorrectly. Essentially, the counterexample that SPIN found is a true counterexample for the LTL as stated:
[] ( !(x_a == x_b) && !(y_a == y_b) ) =>
[] ( !(2 == 1) && !(1 == 1) ) =>
[] ( !0 && !1) =>
[] ( 1 && 0) =>
[] 0 =>
false
The LTL should be:
always not (same location) =>
[] (! ((x_a == x_b) && (y_a == y_b))) =>
[] (! ((2 == 1) && (1 == 1))) =>
[] (! (0 && 1)) =>
[] (! 0) =>
[] 1 =>
true
Regarding your init and tasks: when starting your tasks, you want to be sure that initialization is complete before the tasks run. I'd use one of two approaches:
init { ... ; atomic { run taskA(); run taskB() } }, where the tasks are spawned only once all initialization is complete
or
bool init_complete = false;
init { ...; init_complete = true }
proctype taskA () { /* init local stuff */ ...; init_complete -> /* begin real work */ ... }
Your LTL may be failing during the initialization.
And based on your problem, if you ever change x or y you'd better change both at once in an atomic{}.
