How do I add a LADSPA plugin to the PipeWire configuration so it can be used for audio post-processing?
There are a number of existing LADSPA plugins.
The LADSPA plugin must work on stereo (two-channel) audio.
There is an existing PipeWire module called filter-chain that can encapsulate any number of LADSPA plugins.
First we need to add the filter-chain module to our build system. In a Yocto BitBake recipe it is added like this:
RDEPENDS_libpipewire += " \
    ${PN}-modules-client-node \
    ${PN}-modules-filter-chain \
    .....
Then add an appropriate block to the context.modules section of pipewire.conf so the filter-chain loads the specific LADSPA plugin when PipeWire starts:
{ name = libpipewire-module-filter-chain
    args = {
        node.name        = "processing_name"
        node.description = "Specific postprocessing"
        media.name       = "Specific postprocessing"
        filter.graph = {
            nodes = [
                {
                    type   = ladspa
                    name   = plugin_name
                    plugin = libplugin_name
                    label  = plugin_name   # this needs to correspond to the LADSPA plugin code
                    control = {
                        "Some control" = 1234
                    }
                }
            ]
        }
        capture.props = {
            node.passive   = true
            media.class    = Audio/Sink
            audio.channels = 2
            audio.position = [ FL, FR ]
        }
        playback.props = {
            media.class    = Audio/Source
            audio.channels = 2
            audio.position = [ FL, FR ]
        }
    }
}
The main point of integration is the label entry in the node block. It must correspond to the label defined in the LADSPA plugin code. I think the LADSPA plugin ID can be used instead.
The capture.props and playback.props then determine whether the LADSPA plugins get stereo channels for processing, and they describe the type of nodes that are created for input and output.
Every post-processing filter implicitly gets two nodes: one for input and one for output.
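As a quick sanity check (a sketch, not part of the original configuration; the exact names depend on node.name and the capture/playback props above), the two nodes should show up after restarting PipeWire:

# list all node names; the filter-chain should appear as both a sink and a source
pw-cli ls Node | grep node.name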
Afterwards the filter nodes need to be connected by the session manager of choice. In the case of WirePlumber we can use a Lua script to detect the plugin nodes and connect them to the appropriate sinks (an ALSA sink, for example) and client nodes, as sketched below.
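A minimal WirePlumber Lua sketch might look like the following. It only detects the filter's nodes and logs them; the commented-out Link properties are recalled from WirePlumber's stock policy scripts, so treat the exact keys, the glob pattern, and the target node id as assumptions to verify against your WirePlumber version.

-- Sketch: watch for the filter-chain's nodes; names and properties are illustrative assumptions.
filter_om = ObjectManager {
  Interest {
    type = "node",
    Constraint { "node.name", "matches", "*processing_name*" },
  }
}

filter_om:connect("object-added", function (om, node)
  Log.info(node, "filter-chain node appeared: " .. tostring(node.properties["node.name"]))
  -- Connecting could then be done by creating a Link object, e.g.:
  -- local link = Link("link-factory", {
  --   ["link.output.node"] = node.properties["object.id"],
  --   ["link.input.node"]  = alsa_sink_id, -- id of the ALSA sink node, found via another ObjectManager
  --   ["object.linger"]    = true,
  -- })
  -- link:activate(1)
end)

filter_om:activate()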
Example graph: (image not included.)
I'm running ARKit 2.0 with an ARSCNView on iOS 12.
The application uses multithreading, which is why these functions are performed on the main thread (just to be sure). I also tried without explicitly dispatching to the main thread, to no avail.
I'm using an .aiff sound file but have also tried a .wav. No joy.
I even tried removing audioNode_alarm from the node hierarchy and the sound still plays. I even removed the ARSCNView from the view hierarchy and the sound STILL plays.
From what I can see, I'm doing things exactly as I'm supposed to in order to stop the audio from playing, but it simply will not stop no matter what I try. Can anyone think why?
weak var audioNode_alarm: SCNNode!
weak var audioPlayer_alarm: SCNAudioPlayer?
func setupAudioNode() {
    let audioNode_alarm = SCNNode()
    addChildNode(audioNode_alarm)
    self.audioNode_alarm = audioNode_alarm
}

func playAlarm() {
    DispatchQueue.main.async { [unowned self] in
        self.audioNode_alarm.removeAllAudioPlayers()
        if let audioSource_alarm = SCNAudioSource(fileNamed: "PATH_TO_MY_ALARM_SOUND.aiff") {
            audioSource_alarm.loops = true
            audioSource_alarm.load()
            audioSource_alarm.isPositional = true
            let audioPlayer_alarm = SCNAudioPlayer(source: audioSource_alarm)
            self.audioNode_alarm.addAudioPlayer(audioPlayer_alarm)
            self.audioPlayer_alarm = audioPlayer_alarm
        }
    }
}

func stopAlarm() {
    DispatchQueue.main.async { [unowned self] in
        self.audioNode_alarm?.removeAudioPlayer(self.audioPlayer_alarm!)
        self.audioNode_alarm?.removeAllAudioPlayers()
    }
}
What I ended up doing to stop the sound and remove the player was:
yourNode.audioPlayers.forEach { audioLocalPlayer in
    audioLocalPlayer.audioNode?.engine?.stop()
    yourNode.removeAudioPlayer(audioLocalPlayer)
}
According to the documentation, SCNAudioPlayer has an audioNode property, which is supposed to be used "to vary parameters such as volume and reverb in real time during playback".
audioNode is of type AVAudioNode, so if we jump to its engine property and that type's definition (AVAudioEngine), we'll find all the controls we need.
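Combining this with the stopAlarm() from the question, an untested sketch based on this answer could look like this:

func stopAlarm() {
    DispatchQueue.main.async { [weak self] in
        guard let audioNode = self?.audioNode_alarm else { return }
        // Stop each player's underlying AVAudioEngine, then detach the player from the node
        audioNode.audioPlayers.forEach { player in
            player.audioNode?.engine?.stop()
            audioNode.removeAudioPlayer(player)
        }
    }
}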
As a step in switching from NLog to Serilog, I want to redirect the standard wiring behind NLog's LogManager.GetLogger(name) so that any code logging to NLog forwards immediately to the ambient Serilog Log.Logger - i.e. I want a single piece of configuration that simply forwards the message, without buffering the way Log4net.Appender.Serilog does for log4net.
Can anyone concoct or point me to a canonical snippet that does this correctly and efficiently, please? Requirements I can think of:
Maintain the level, i.e. nlog.Warn should be equivalent to serilog.Warning
It's ok for Serilog to generate the time anew
It's fine to materialize the message in the target - i.e. there's no need to maintain any properties associated with the 'message' (the LogEvent in Serilog terms)
no buffering
I don't need any other NLog Target to be used (i.e. mutating/deleting the message would be fine)
I think the best option is indeed a custom NLog target. Something like this (C#):
using System;
using System.Linq;
using NLog;
using NLog.Targets;
using Serilog;
using Serilog.Events;
namespace MyNamespace
{
    [Target("SerilogTarget")]
    public sealed class SerilogTarget : TargetWithLayout
    {
        protected override void Write(LogEventInfo logEvent)
        {
            var log = Log.ForContext(Serilog.Core.Constants.SourceContextPropertyName, logEvent.LoggerName);
            var logEventLevel = ConvertLevel(logEvent.Level);
            if ((logEvent.Parameters?.Length ?? 0) == 0)
            {
                // NLog treats a single string as a verbatim string; Serilog treats it as a String.Format format and hence collapses doubled braces
                // This is the most direct way to emit this without it being re-processed by Serilog (via @nblumhardt)
                var template = new Serilog.Events.MessageTemplate(new[] { new Serilog.Parsing.TextToken(logEvent.FormattedMessage) });
                log.Write(new Serilog.Events.LogEvent(DateTimeOffset.Now, logEventLevel, logEvent.Exception, template, Enumerable.Empty<Serilog.Events.LogEventProperty>()));
            }
            else
            {
                // Risk: tunneling an NLog format and assuming it will Just Work as a Serilog format
#pragma warning disable Serilog004 // Constant MessageTemplate verifier
                log.Write(logEventLevel, logEvent.Exception, logEvent.Message, logEvent.Parameters);
#pragma warning restore Serilog004
            }
        }

        static Serilog.Events.LogEventLevel ConvertLevel(LogLevel logEventLevel)
        {
            if (logEventLevel == LogLevel.Info)
                return Serilog.Events.LogEventLevel.Information;
            else if (logEventLevel == LogLevel.Trace)
                return Serilog.Events.LogEventLevel.Verbose;
            else if (logEventLevel == LogLevel.Debug)
                return Serilog.Events.LogEventLevel.Debug;
            else if (logEventLevel == LogLevel.Warn)
                return Serilog.Events.LogEventLevel.Warning;
            else if (logEventLevel == LogLevel.Error)
                return Serilog.Events.LogEventLevel.Error;
            return Serilog.Events.LogEventLevel.Fatal;
        }
    }
}
Register it in your Main() or application start-up:
// Register so it can be used by config file parsing etc
Target.Register<MyNamespace.SerilogTarget>("SerilogTarget");
Before any logging takes place, the target needs to be wired in, so that loggers obtained via LogManager.GetLogger() actually trigger calls to SerilogTarget.Write:
public static void ReplaceAllNLogTargetsWithSingleSerilogForwarder()
{
    // sic: blindly overwrite the forwarding rules every time
    var target = new SerilogTarget();
    var cfg = new NLog.Config.LoggingConfiguration();
    cfg.AddTarget(nameof(SerilogTarget), target);
    cfg.LoggingRules.Add(new NLog.Config.LoggingRule("*", LogLevel.Trace, target));
    // NB assignment must happen last; rules get ingested upon assignment
    LogManager.Configuration = cfg;
}
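For context, a minimal bootstrap might look like the sketch below (illustrative only: it assumes the Serilog console sink package is referenced, and the logger name Legacy.Component is made up):

// Configure the ambient Serilog pipeline first, then redirect NLog into it.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Verbose()
    .WriteTo.Console()
    .CreateLogger();

ReplaceAllNLogTargetsWithSingleSerilogForwarder();

// Existing NLog call sites now surface through Serilog with the mapped level.
var legacyLogger = NLog.LogManager.GetLogger("Legacy.Component");
legacyLogger.Warn("Free disk space is at {0}%", 5); // arrives in Serilog as a Warning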
See also: https://github.com/nlog/nlog/wiki/How-to-write-a-custom-target
"the optimal way to do this without inducing any avoidable perf impact etc."
This is the optimal way in NLog and has no performance impact on NLog's side.
"what does the TargetAttribute buy me?"
Well in this case you don't need it. The TargetAttribute is used when registering a full assembly, but because we register manually, it's not needed. I think it's best practice, but you could leave it out.
"Also, what does the Register buy me?"
That is indeed not needed when configuring programmatically. But if you have XML config, you do need the registration.
I'm in the habit of writing targets that work in all scenarios (manual registration, registration by assembly, config from code, config from XML). I can understand that could be confusing.
F# port:
module SerilogHelpers =
    open System
    open Serilog
    open Serilog.Events

    let private mapLevel = function
        | x when x = NLog.LogLevel.Info -> LogEventLevel.Information
        | x when x = NLog.LogLevel.Off || x = NLog.LogLevel.Trace -> LogEventLevel.Verbose
        | x when x = NLog.LogLevel.Debug -> LogEventLevel.Debug
        | x when x = NLog.LogLevel.Warn -> LogEventLevel.Warning
        | x when x = NLog.LogLevel.Error -> LogEventLevel.Error
        | _ -> LogEventLevel.Fatal

    // via https://stackoverflow.com/a/49639001/11635
    [<NLog.Targets.Target("SerilogTarget")>]
    type SerilogTarget() =
        inherit NLog.Targets.Target()

        static member InitializeAsGlobalTarget() =
            // sic: blindly overwrite the forwarding rules every time
            // necessary as Azure Startup establishes a different config as a bootstrapping step
            // see: LogModule.To.target("rollingFile", create, "*", LogLevel.Trace)
            let cfg, target = NLog.Config.LoggingConfiguration(), SerilogTarget()
            cfg.AddTarget("SerilogTarget", target)
            cfg.LoggingRules.Add(NLog.Config.LoggingRule("*", NLog.LogLevel.Trace, target))
            // NB assignment must happen last; rules get ingested upon assignment
            NLog.LogManager.Configuration <- cfg

        override __.Write(logEvent : NLog.LogEventInfo) =
            let log = Log.ForContext(Serilog.Core.Constants.SourceContextPropertyName, logEvent.LoggerName)
            match logEvent.Parameters with
            | xs when isNull xs || xs.Length = 0 ->
                // NLog treats a single string as a verbatim string; Serilog treats it as a String.Format format and hence collapses doubled braces
                // This is the most direct way to emit this without it being re-processed by Serilog (via @nblumhardt)
                let template = MessageTemplate [| Serilog.Parsing.TextToken(logEvent.FormattedMessage) |]
                log.Write(new LogEvent(DateTimeOffset.Now, mapLevel logEvent.Level, logEvent.Exception, template, Seq.empty<LogEventProperty>))
            | _ ->
                // Risk: tunneling an NLog format and assuming it will Just Work as a Serilog format
                log.Write(mapLevel logEvent.Level, logEvent.Exception, logEvent.Message, logEvent.Parameters)
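A corresponding startup sketch in F# (illustrative; assumes a Serilog console sink is referenced and that Serilog is configured before the NLog redirection):

// Configure Serilog, then make NLog forward everything into it.
Serilog.Log.Logger <- Serilog.LoggerConfiguration().WriteTo.Console().CreateLogger()
SerilogHelpers.SerilogTarget.InitializeAsGlobalTarget()
NLog.LogManager.GetLogger("Legacy").Warn("surfaces in Serilog as a Warning")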
I am not a native English speaker, but I will try to express my question as clearly as possible.
I have encountered a problem which has confused me for two days, and I still can't find the solution.
I have built a stream which runs in Spring Cloud Data Flow on Hadoop YARN.
The stream is composed of an HTTP source, a processor and a file sink.
1. HTTP source
The HTTP source component has two output channels bound to two different destinations, dest1 and dest2, defined in application.properties.
spring.cloud.stream.bindings.output.destination=dest1
spring.cloud.stream.bindings.output2.destination=dest2
Below is the code snippet for the HTTP source, for your reference:
@Autowired
private EssSource channels; // EssSource is the interface for multiple output channels

// output channel 1:
@RequestMapping(path = "/file", method = POST, consumes = {"text/*", "application/json"})
@ResponseStatus(HttpStatus.ACCEPTED)
public void handleRequest(@RequestBody byte[] body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
    logger.info("enter ... handleRequest1...");
    channels.output().send(MessageBuilder.createMessage(body,
            new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}

// output channel 2:
@RequestMapping(path = "/test", method = POST, consumes = {"text/*", "application/json"})
@ResponseStatus(HttpStatus.ACCEPTED)
public void handleRequest2(@RequestBody byte[] body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
    logger.info("enter ... handleRequest2...");
    channels.output2().send(MessageBuilder.createMessage(body,
            new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
}
2. Processor
The processor has two input channels and two output channels bound to different destinations.
The destination bindings are defined in application.properties in the processor component's project.
# input channel binding
spring.cloud.stream.bindings.input.destination=dest1
spring.cloud.stream.bindings.input2.destination=dest2
# output channel binding
spring.cloud.stream.bindings.output.destination=hdfsSink
spring.cloud.stream.bindings.output2.destination=fileSink
Below is the code snippet for Processor.
@Transformer(inputChannel = EssProcessor.INPUT, outputChannel = EssProcessor.OUTPUT)
public Object transform(Message<?> message) {
    logger.info("enter ...transform...");
    return "processed by transform1";
}

@Transformer(inputChannel = EssProcessor.INPUT_2, outputChannel = EssProcessor.OUTPUT_2)
public Object transform2(Message<?> message) {
    logger.info("enter ... transform2...");
    return "processed by transform2";
}
3. The file sink component.
I use the official file sink component from Spring:
maven://org.springframework.cloud.stream.app:file-sink-kafka:1.0.0.BUILD-SNAPSHOT
And I just add the destination binding in its application.properties file:
spring.cloud.stream.bindings.input.destination=fileSink
4. Findings
The data flow I expected looks like this:
Source.handleRequest() --> Processor.transform()
Source.handleRequest2() --> Processor.transform2() --> Sink.fileWritingMessageHandler();
Only the string "processed by transform2" should be saved to the file.
But after my testing, the data flow actually looks like this:
Source.handleRequest() --> Processor.transform() --> Sink.fileWritingMessageHandler();
Source.handleRequest2() --> Processor.transform2() --> Sink.fileWritingMessageHandler();
Both the "processed by transform1" and "processed by transform2" strings are saved to the file.
5. Question
Although the destination for the output channel in Processor.transform() is bound to hdfsSink instead of fileSink, the data still flows to the file sink. I can't understand this, and it is not what I want.
I only want the data from Processor.transform2() to flow to the file sink, not both.
If I'm not doing this right, could anyone tell me how to do it and what the solution is?
It has confused me for two days.
Thank you for your kind help.
Alex
Is your stream definition something like this (where the '-2' versions are the ones with multiple channels)?
http-source-2 | processor-2 | file-sink
Note that Spring Cloud Data Flow will override the destinations defined in application.properties, which is why, even though spring.cloud.stream.bindings.output.destination for the processor is set to hdfsSink, it will actually match the input of file-sink.
The way destinations are configured from a stream definition is explained here (in the context of taps): http://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#spring-cloud-dataflow-stream-tap-dsl
What you can do is simply swap the meaning of channels 1 and 2 - use the side channel for HDFS. This is a bit brittle, though - the input/output channels of the stream will be configured automatically and the other channels will be configured via application.properties - in this case it may be better to configure the side-channel destinations via the stream definition or at deployment time - see http://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_application_properties.
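For example, the side-channel destination could be overridden at deployment time with a deployment property along these lines (a sketch: the app name processor-2 and the destination name are illustrative, and the app. prefix convention depends on your Data Flow version):

# passed e.g. via the Data Flow shell's stream deploy ... --properties option
app.processor-2.spring.cloud.stream.bindings.output2.destination=hdfsSink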
It seems to me that these could just as well be two streams listening on separate endpoints, using regular components - given that the data is supposed to flow side by side.
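If you take the two-streams route instead, the definitions could be as simple as the sketch below (stream names, app names and ports are illustrative; my-processor stands for whatever custom processor app you register):

dataflow:> stream create --name http-to-hdfs --definition "http --server.port=9000 | my-processor | hdfs" --deploy
dataflow:> stream create --name http-to-file --definition "http --server.port=9001 | my-processor | file" --deploy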
I am trying to structure my data processing pipeline using uimaFIT as follows:
[annotatorA] => [Consumer to dump annotatorA's annotations from the CAS into a DB]
[annotatorB (should take annotatorA's annotations from the DB as input)] => [Consumer for annotatorB]
The driver code:
/* Step 0: Create a reader */
CollectionReader readerInstance = CollectionReaderFactory.createCollectionReader(
        FilePathReader.class, typeSystem,
        FilePathReader.PARAM_INPUT_FILE, "/path/to/file/to/be/processed");

AggregateBuilder builder = new AggregateBuilder(); // assumed: the builder is not declared in the original snippet

/* Step 1: Define annotator A */
AnalysisEngineDescription annotatorAInstance =
        AnalysisEngineFactory.createPrimitiveDescription(
                annotatorADbConsumer.class, typeSystem,
                annotatorADbConsumer.PARAM_DB_URL, "localhost",
                annotatorADbConsumer.PARAM_DB_NAME, "xyz",
                annotatorADbConsumer.PARAM_DB_USER_NAME, "name",
                annotatorADbConsumer.PARAM_DB_USER_PWD, "pw");
builder.add(annotatorAInstance);

/* Step 2: Define binding for annotatorB to take
   what annotator A put in the DB above as input */

/* Step 3: Define annotator B */
AnalysisEngineDescription annotatorBInstance =
        AnalysisEngineFactory.createPrimitiveDescription(
                GateDateTimeLengthAnnotator.class, typeSystem);
builder.add(annotatorBInstance);

/* Step 4: Run the pipeline */
SimplePipeline.runPipeline(readerInstance, builder.createAggregate());
Questions I have are:
Is the above approach correct?
How do we define the dependency of annotatorB on annotatorA's output in step 2?
Is the approach suggested at https://code.google.com/p/uimafit/wiki/ExternalResources#Resource_injection the right direction to achieve this?
You can define the dependency with @TypeCapability, like this:
@TypeCapability(inputs = { "com.myproject.types.MyType", ... }, outputs = { ... })
public class MyAnnotator extends JCasAnnotator_ImplBase {
    ....
}
Note that it defines a contract at the annotation level, not the engine level (meaning that any Engine could create com.myproject.types.MyType).
I don't think there are ways to enforce it.
I did create some code to check that an engine is provided with the required annotations upstream in a pipeline, and print an error log otherwise (see Pipeline.checkAndAddCapabilities() and Pipeline.addCapabilities()). Note, however, that it will only work if all engines define their TypeCapabilities, which is often not the case when one uses external engines/libraries.
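Applied to the pipeline in the question, the declarations could look like the sketch below (class and type names are made up; the imports assume the Apache uimaFIT namespace, while the older org.uimafit releases use org.uimafit.* packages instead):

import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.fit.component.JCasAnnotator_ImplBase;
import org.apache.uima.fit.descriptor.TypeCapability;
import org.apache.uima.jcas.JCas;

// Annotator A declares what it produces ...
@TypeCapability(outputs = { "com.myproject.types.MyType" })
class AnnotatorA extends JCasAnnotator_ImplBase {
    @Override
    public void process(JCas jcas) throws AnalysisEngineProcessException {
        // create com.myproject.types.MyType annotations and add them to the CAS
    }
}

// ... and annotator B declares that it expects those annotations as input.
@TypeCapability(inputs = { "com.myproject.types.MyType" })
class AnnotatorB extends JCasAnnotator_ImplBase {
    @Override
    public void process(JCas jcas) throws AnalysisEngineProcessException {
        // consume the com.myproject.types.MyType annotations produced upstream
    }
}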
So I am using the https://forge.puppetlabs.com/pdxcat/nrpe module to try to figure out automation of NRPE across hosts.
One of the available usages is:
nrpe::command { 'check_users':
  ensure  => present,
  command => 'check_users -w 5 -c 10',
}
Is there any way to make a "group" of these commands and have them applied to specific nodes?
For example:
you have 5 different nrpe::command resources, each defining a different check, and then call those specific checks?
I am basically trying to figure out whether I can group certain checks/commands together instead of setting up a ton of text in the main sites.pp file. This would also allow for customized templates/configurations across numerous nodes.
Thanks!
EDIT:
This is the nrpe::command defined type and what it's supposed to do when called with 'check_users'. If I could have a class with a set of nrpe::command resources and just call on that class through the module, it should work. Sorry, though - still new at Puppet. Thanks again.
define nrpe::command (
  $command,
  $ensure       = present,
  $include_dir  = $nrpe::params::nrpe_include_dir,
  $libdir       = $nrpe::params::libdir,
  $package_name = $nrpe::params::nrpe_packages,
  $service_name = $nrpe::params::nrpe_service,
  $file_group   = $nrpe::params::nrpe_files_group,
) {
  file { "${include_dir}/${title}.cfg":
    ensure  => $ensure,
    content => template('nrpe/command.cfg.erb'),
    owner   => root,
    group   => $file_group,
    mode    => '0644',
    require => Package[$package_name],
    notify  => Service[$service_name],
  }
}
What version are you talking about? In recent Puppet versions node inheritance is deprecated, so you shouldn't use it.
The easiest way would be to use "baselines".
Assuming you are using a manifests directory (manifest = $confdir/manifests inside your puppet.conf), simply create $confdir/manifests/minimal.pp (or $confdir/manifests/nrpe_config.pp, or whatever class name you want to use) with the content below:
class minimal {
  nrpe::command { 'check_users':
    ensure  => present,
    command => 'check_users -w 5 -c 10',
  }
}
Then just call this class inside your node definitions (let's say in $confdir/manifests/my_node.pp):
node 'my_node.foo.bar' {
  include minimal
}
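Tying this back to the original question about grouping: the same pattern extends to a class holding several nrpe::command resources, which you then include on whichever nodes need that set of checks (the check names and thresholds below are only examples):

class nrpe_baseline {
  nrpe::command { 'check_users':
    ensure  => present,
    command => 'check_users -w 5 -c 10',
  }

  nrpe::command { 'check_load':
    ensure  => present,
    command => 'check_load -w 15,10,5 -c 30,25,20',
  }

  nrpe::command { 'check_root_disk':
    ensure  => present,
    command => 'check_disk -w 20% -c 10% -p /',
  }
}

node /^web\d+\.foo\.bar$/ {
  include nrpe_baseline
}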