Alright, I've been pulling my hair out for hours.
Liquidsoap just isn't working for me, and I know this should work, save for one apparently obvious error...
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
podcasts = playlist("/home/user/icecast/ham.txt")
# This function turns a fallible
# source into an infallible source
# by playing a static single when
# the original song is not available
def my_safe(radio) =
# We assume that festival is installed and
# functional in liquidsoap
security = single("say:Hello, we are currently having some technical difficulties but we'll be back soon so stay tuned!")
# We return a fallback where the original
# source has priority over the security
# single. We set track_sensitive to false
# to return immediately to the original source
# when it becomes available again.
fallback(track_sensitive=false,[radio, security])
end
radio= podcasts
radio= my_safe(podcasts)
# A function that contains all the output
# we want to create with the final stream
def outputs(s) =
# First, we partially apply output.icecast
# with common parameters. The resulting function
# is stored in a new definition of output.icecast,
# but this could be my_icecast or anything.
output.icecast = output.icecast(host="localhost", password="foobar")
# An output in mp3 to the "live" mountpoint:
output.icecast(%mp3, mount="live",radio)
end
And here is the error:
At line 23, character 6: The variable radio defined here is not used anywhere
in its scope. Use ignore(...) instead of radio = ... if you meant
to not use it. Otherwise, this may be a typo or a sign that your script
does not do what you intend.
Could someone also help with another issue I'm having? I would like to find out how to run two sources to two separate mountpoints.
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
podcasts = playlist("/home/user/icecast/ham.txt")
songlist = playlist("/home/user/icecast/otherplaylist.txt")
# This function turns a fallible
# source into an infallible source
# by playing a static single when
# the original song is not available
def my_safe(radio) =
# We assume that festival is installed and
# functional in liquidsoap
security = single("say:Hello, we are currently having some technical difficulties but we'll be back soon so stay tuned!")
# We return a fallback where the original
# source has priority over the security
# single. We set track_sensitive to false
# to return immediately to the original source
# when it becomes available again.
fallback(track_sensitive=false,[radio, security])
end
radio= podcasts
radio= my_safe(podcasts)
def my_safe(songlist) =
# We assume that festival is installed and
# functional in liquidsoap
security = single("say:Hello, we are currently having some technical difficulties but we'll be back soon so stay tuned!")
# We return a fallback where the original
# source has priority over the security
# single. We set track_sensitive to false
# to return immediately to the original source
# when it becomes available again.
fallback(track_sensitive=false,[songlist, security])
end
moarradio= songlist
moarradio= my_safe(songlist)
# A function that contains all the output
# we want to create with the final stream
def outputs(s) =
# First, we partially apply output.icecast
# with common parameters. The resulting function
# is stored in a new definition of output.icecast,
# but this could be my_icecast or anything.
output.icecast = output.icecast(host="localhost", password="foobar")
# An output in mp3 to the "live" mountpoint:
output.icecast(%mp3, mount="live",radio)
output.icecast(%mp3, mount="otherlive",moarmusic)
end
And I get the same error, but this time it tells me the second variable (moarradio) isn't used.
The Liquidsoap configuration file is basically a script, which means it is run from top to bottom.
Upon reading the configuration file, Liquidsoap is complaining that you are not using the radio variable defined on line 23, the reason being that you are using the one defined on line 24. Unlike many other languages, a Liquidsoap script has no assignments, only definitions.
Other than that, you seem to be missing the difference between global and local variables and their respective scopes.
You are declaring a my_safe function which takes an argument. Within the scope of the function, refer to that argument by its name. Then call the my_safe function with your global variables as the argument, and what you are expecting will actually happen.
Your sources will not get registered if you do not call the outputs function. Simply drop the function construct.
Here is how I rewrote your second example:
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
podcasts = playlist("/home/user/icecast/ham.txt")
songlist = playlist("/home/user/icecast/otherplaylist.txt")
# This function turns a fallible
# source into an infallible source
# by playing a static single when
# the original song is not available
def my_safe(stream) = # <- used a different variable name here
# We assume that festival is installed and
# functional in liquidsoap
security = single("say:Hello, we are currently having some technical difficulties but we'll be back soon so stay tuned!")
# We return a fallback where the original
# source has priority over the security
# single. We set track_sensitive to false
# to return immediately to the original source
# when it becomes available again.
fallback(track_sensitive=false,[stream, security]) # <- used a different variable name here
end
radio= my_safe(podcasts) # <- fix here
# <- got rid of redeclaration of my_safe
moarradio= my_safe(songlist) # <- fix here
# <- got rid of function construct
# First, we partially apply output.icecast
# with common parameters. The resulting function
# is stored in a new definition of output.icecast,
# but this could be my_icecast or anything.
output.icecast = output.icecast(host="localhost", password="foobar")
# An output in mp3 to the "live" mountpoint:
output.icecast(%mp3, mount="live",radio)
output.icecast(%mp3, mount="otherlive",moarradio) # <- fix here
I'm fairly new to snakemake and inherited a kind of huge workflow that consists of a sequence of 17 rules that run serially.
Each rule takes outputs from the previous rules and uses them to run a python script. Everything has worked great so far, except that now I'm trying to improve the workflow since some of the rules can be run in parallel.
Here is a rough example of what I'm trying to achieve; my understanding is that wildcards should allow me to solve this.
grid = [ 10 , 20 ]
rule all:
    input:
        expand("path/to/C/{grid}/file_C", grid=grid)

rule process_A:
    input:
        path_A = "path/to/A/file_A",
        path_B = "path/to/B/{grid}/file_B" # A rule further in the workflow could need a file from a previous rule saved with this structure
    params:
        grid = lambda wc: wc.get(grid)
    output:
        path_C = "path/to/C/{grid}/file_C"
    script:
        "script_A.py"
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
In the end, the whole rule process_A should be rerun with grid = 10 and with grid = 20, saving each result to a folder whose path also depends on grid.
I know there are several things wrong with this, but I can't seem to find where to start to figure this out. The error I'm getting now is:
name 'params' is not defined
Any help as to where to start?
It would be useful to post the error stack trace of name 'params' is not defined to know exactly what is causing it. For now...
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
I suspect you are mixing the script directive with the shell directive. Probably you want something like:
rule process_A:
    input: ...
    output: ...
    params: ...
    script:
        "script_A.py"
Inside script_A.py, Snakemake makes the snakemake object available, so snakemake.params.grid will hold the actual param value.
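For instance, with the script directive, script_A.py could look roughly like this (the named input/output below assume the rule from the question):
# script_A.py, run via the script directive:
# the snakemake object is injected automatically by Snakemake
grid = snakemake.params.grid
infile = snakemake.input.path_B
outfile = snakemake.output.path_C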
Alternatively, write a standalone Python script that parses command-line arguments and execute it like any other program using the shell directive. (I tend to prefer this solution as it makes things more explicit and easier to debug, but it also means more boilerplate code to write a standalone script.)
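A rough sketch of that alternative (the rule layout and argument names here are only illustrative, not taken from the original workflow):
rule process_A:
    input:
        path_B = "path/to/B/{grid}/file_B"
    output:
        path_C = "path/to/C/{grid}/file_C"
    params:
        grid = lambda wc: wc.grid
    shell:
        "python script_A.py --grid {params.grid} --input {input.path_B} --output {output.path_C}"
and script_A.py would then parse its own arguments, e.g. with argparse:
# script_A.py as a standalone program
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--grid", type=int)
parser.add_argument("--input")
parser.add_argument("--output")
args = parser.parse_args()
grid = args.grid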
Let's say an environment redirects stdout, ignores the output, and then uses its own IO object to write to the screen:
const supersecretstdoutname = Ref{IO}()
# ...
function environment_init()
    # ...
    supersecretstdoutname[] = stdout
    stdout = DevNull
    # ...
end
Would I have any way of finding supersecretstdoutname (or if they did something similar with stdin) without digging through source code?
Said another way, can I get a list of open IOs, or at least the ones that have access to the screen/user input?
I'm in a situation in which I have several commands of the same class pushed onto a QUndoStack and depending on user input another command of a different class might be pushed on top of those. What I would now like to achieve is to remove a fixed number of these previous commands of the first type from the undo stack (either by undoing them or just removing them, doesn't matter in my case) when the topmost command's undo is executed. E.g. like this:
class CommandA(QUndoCommand):
    # ...

class CommandB(QUndoCommand):
    def undo(self):
        # ...
        # somehow remove last N commands of class A from undostack

stack = QUndoStack()
stack.push(CommandA())
# ...
stack.push(CommandA())
stack.push(CommandB())
Just removing the last N commands regardless of which class they belong to would also be helpful as a starting point. This seems to me like it would be a common requirement but I don't see if/how this would be possible.
In PyQt5, commands on the QUndoStack are stored by index and accessed via the command() method. You can loop over the commands in the stack using the count() method:
for ndx in range(stack.count()):
    command = stack.command(ndx)
The command will have the type of your specific QUndoCommand class (i.e. either CommandA or CommandB), so you can use isinstance to check which one it is.
Hence, you can make a QWidget (e.g. a QtWidgets.QPushButton) or a QAction whose clicked/triggered signal is connected to a function (slot) that undoes the last n CommandA commands:
def undo_last_n_CommandAs(n):
    # Walk the stack from the newest command back to the oldest
    for ndx in reversed(range(stack.count())):
        if n <= 0:
            break
        command = stack.command(ndx)
        # isinstance expects the class itself, not a string
        if isinstance(command, CommandA):
            command.undo()
            n -= 1
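For example, the wiring could look roughly like this (the action text and the main_window parent are placeholders, not part of the original code):
# Hypothetical wiring: a QAction that undoes the last 3 CommandA commands
undo_action = QtWidgets.QAction("Undo last 3 CommandA steps", main_window)
undo_action.triggered.connect(lambda: undo_last_n_CommandAs(3))
main_window.addAction(undo_action)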
I have a spreadsheet with a list of directories and an associated variable number of 'accounts' that need to be assigned to each directory (could easily be converted to csv), i.e.:
directory              # of accounts needed
/usr/src/Mon-Carlton/  110
/usr/src/Mon-CoalMtn/  50
/usr/src/Mon-Cumming/  90
etc...
I also have a 'master_account_list.csv' file that contains the full list of all possible accounts available to be distributed to each area, i.e.:
account_1,password,type
account_2,password,type
account_3,password,type
etc...
I would like to be able to script the splitting of the master_account_list.csv into a separate accounts.csv file for each unique directory, with the listed # of accounts needed.
The master file gets updated with fresh accounts often, and the accounts then need to be redistributed to all the directories again. (The resulting accounts.csv files have the same formatting as the master_account_list.)
What is the best way to accomplish this in Linux?
Edit: When the script is complete, it would be ideal if the remainder of unassigned accounts from the master_account_list.csv became the new master_account_list.csv.
Assuming you've converted the accounts spreadsheet to a comma(!) separated csv file, without headers(!), you can use the following awk program:
split_accounts.awk:
# True as long as we are reading the first file, which
# is the converted spreadsheet
NR==FNR {
    # Store the directories and counts in arrays
    dir[NR]=$1
    cnt[NR]=$2
    next
}

# This block runs for every line of master_account_list.csv
{
    # Get the directory and count from the arrays we've
    # created above. 'i' will get initialized automatically
    # with 0 on its first use
    d=dir[i+1]
    c=cnt[i+1]

    # Append the current line to the output file
    f=d"account_list.csv"
    print >> f

    # 'n' holds the number of accounts placed into the current
    # directory. Check if it has reached the desired count
    if(++n==c) {
        # Step to the next folder / count
        i++
        # Reset the number of accounts placed
        n=0
        # Close the previous output file
        close(f)
    }
}
Call it like this:
awk -F, -f split_accounts.awk accounts.csv master_account_list.csv
Note: 0 is not allowed for the count in the current implementation.
I have an input file where some variables are defined. For each iteration in a loop, I would like to read the file, update the values of some of the variables, then run calculations.
I have an input file called input.jl with
myval=1
Then I have a file myscript.jl with the following commands
for i=1:2
    include("input.jl")
    println(myval)
    myval=2
end
If I run the file (julia myscript.jl), I get an error that myval is not defined. If I comment out the third or fourth lines, then it runs with no problem. If I remove the for loop, the three lines run with no problem. How can I read myval from input.jl, use it, then update its value during each iteration of the loop?
Unfortunately, it seems that the include function executes things at global scope, and then continues from where it left off. So if you're trying to dynamically include new variables into local scope, this is not the way to do it.
You can either introduce the variable at global scope first so that the loop body has access to it, and the assignment will therefore work (but be aware that the variable will be updated at global scope).
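For example, a minimal sketch of this first option (assuming Julia 1.x scoping rules, where the global annotation inside the loop is needed in a script):
# in the main file
myval = 0            # introduce the variable at global scope first
for i=1:2
    global myval     # make the loop body use the global binding
    include("input.jl")
    println(myval)
    myval = 2
end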
or
you can cheat by wrapping your input file into a module first. You still need to call the variable by its name, and you will get warnings about updating the module, but this way you can update your local variable dynamically at least, without needing that variable at global scope:
# in input.jl
module Input
    myval = 1;
end

# in your main file
for i=1:2
    include("input.jl")
    myval = Input.myval;
    println(myval)
    myval=2
end
or
you could add a separate process and offload the calculation to its global scope, and retrieve it to your current process locally, e.g.
# in file input.jl
myval = 1

# in main file
using Distributed   # needed on Julia 1.0 and later for addprocs / remotecall_fetch
addprocs(1);
for i=1:2
    myval = remotecall_fetch(() -> (global myval; include("input.jl"); myval), 2);
    println(myval)
    myval=2
end