Accessing audio samples from linphone - Linux

I'm using linphonec (command line only, without the GTK interface) on Linux, and I want to access the incoming and outgoing sound samples, but I don't know which files I should edit to access them.
Can anybody give me a clue, please?

Assuming the outgoing and incoming sound samples belong to a call,
e.g. you typed linphonec> call sip:usernumber@someproxy.net.
The call stack for placing a call is:
lpc_cmd_call from linphone-version/console/commands.c
linphone_core_invite_with_params from linphone-version/coreapi/linphonecore.c
linphone_core_invite_address_with_params from linphone-version/coreapi/linphonecore.c
linphone_core_start_invite from linphone-version/coreapi/linphonecore.c
linphone_call_init_media_streams from linphone-version/coreapi/linphonecall.c
audio_stream_new from linphone-version/mediastreamer2/src/audiostream.c
The media stream is initialised on the host and can be accessed using the mediastreamer2 API (see the sketch below).
For accepting a call, see the following call stack:
linphone_core_accept_call from linphone-version/coreapi/linphonecore.c
linphone_core_update_streams from linphone-version/coreapi/callbacks.c
linphone_call_start_media_streams from linphone-version/coreapi/linphonecall.c
linphone_call_start_audio_stream from linphone-version/coreapi/linphonecall.c
audio_stream_start_full from linphone-version/mediastreamer2/src/audiostream.c
The media stream is initialised on the client and can be accessed using the mediastreamer2 API.
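As a concrete sketch of that access (untested; the filter name SampleTap and the dump comment are illustrative): mediastreamer2 lets you register your own filter and splice it into the stream's processing graph, so every audio buffer passes through your code.

#include <mediastreamer2/msfilter.h>

/* Pass-through filter that sees every audio buffer flowing through it. */
static void tap_process(MSFilter *f){
    mblk_t *m;
    while ((m = ms_queue_get(f->inputs[0])) != NULL){
        /* m->b_rptr .. m->b_wptr is the raw audio payload at this point
           of the graph; copy it out here, e.g.
           fwrite(m->b_rptr, 1, m->b_wptr - m->b_rptr, dump_file); */
        ms_queue_put(f->outputs[0], m); /* forward unchanged */
    }
}

static MSFilterDesc tap_desc = {
    .id = MS_FILTER_PLUGIN_ID,
    .name = "SampleTap",
    .text = "Pass-through tap for audio samples",
    .category = MS_FILTER_OTHER,
    .ninputs = 1,
    .noutputs = 1,
    .process = tap_process
};

/* Register once at startup with ms_filter_register(&tap_desc), then splice
   the filter into the stream's graph with ms_filter_unlink()/ms_filter_link(),
   e.g. between the decoder and the sound writer that
   audio_stream_start_full() sets up. */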
mediastreamer2 API documentation is available here.
The linphone source code is available here or here.

Related

Cannot get ONVIF GetStreamUri from Media2

I want to get the media stream URL via ONVIF, but the camera responds with "Method 'GetStreamUri' not implemented: method name or namespace not recognized" Detail: [no detail].
I don't understand why the method GetStreamUri is not implemented.
I downloaded the WSDL "http://www.onvif.org/ver20/media/wsdl" and generated the code using gSOAP.
Looking at TEST.log, it shows that it cannot find ns3:GetStreamUri or ns1:GetStreamUri.
Is your camera Profile T compliant, or does it at least implement Media Service 2?
If not, then you should use Media Service 1.
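If you do fall back to Media Service 1, the request goes through the trt namespace instead of tr2. With a gSOAP C binding generated from the ver10 media WSDL it looks roughly like this (a sketch only: the exact struct and function names depend on your wsdl2h/soapcpp2 run, and the StreamSetup enum values are left as a comment):

#include "soapH.h"  /* generated from the ver10 (Media Service 1) WSDL */

/* Ask the camera for a stream URI via Media Service 1 ("trt" namespace). */
int get_stream_uri(struct soap *soap, const char *xaddr, char *profile_token)
{
    struct _trt__GetStreamUri req;
    struct _trt__GetStreamUriResponse resp;
    struct tt__StreamSetup setup;

    memset(&req, 0, sizeof req);
    memset(&setup, 0, sizeof setup);
    /* fill setup.Stream / setup.Transport with your generated enum
       values for RTP-Unicast over RTSP */
    req.StreamSetup = &setup;
    req.ProfileToken = profile_token;

    if (soap_call___trt__GetStreamUri(soap, xaddr, NULL, &req, &resp) != SOAP_OK)
        return soap->error;  /* inspect the fault for namespace problems */
    printf("stream URI: %s\n", resp.MediaUri->Uri);
    return SOAP_OK;
}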

Snap7 + NI LabWindows/CVI

Hi guys, I am working on a project where my client wants to use a Siemens S7-1200 to control some pneumatic tools, with an interface built in LabWindows/CVI.
I downloaded Snap7 to try to communicate with my PLC, but I found myself blocked: the downloaded package contains only a DLL file and a .lib file, with no .h (header) file.
Could anyone tell me how to use Snap7 properly with LabWindows?
Thanks
AFAIK, if you want to use Snap7 you'll have to download the source (C++) and then adapt it to work with LabWindows.
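One thing worth checking before you rewrite anything: the Snap7 source package ships a plain-C wrapper (snap7.h plus snap7.c, under the wrappers directory) that sits on top of the DLL, and plain C is exactly what LabWindows/CVI compiles. A minimal sketch with that wrapper (the IP, rack, and slot values are illustrative; rack 0 / slot 1 is typical for an S7-1200):

#include "snap7.h"  /* C wrapper from the Snap7 package; links against the DLL */

/* Read 4 bytes from DB1. Note that on an S7-1200 the DB must be
   non-optimized, and "Permit access with PUT/GET communication" must be
   enabled in TIA Portal for Snap7 to reach it. */
int ReadDbExample(void)
{
    S7Object client = Cli_Create();
    unsigned char buffer[4];
    int rc = Cli_ConnectTo(client, "192.168.0.10", 0, 1);
    if (rc == 0)
        rc = Cli_DBRead(client, 1, 0, sizeof buffer, buffer); /* DB1, offset 0 */
    Cli_Disconnect(client);
    Cli_Destroy(&client);
    return rc; /* 0 = OK, anything else is a Snap7 error code */
}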
S7-1200s also have the following communication options.
Free-of-charge:
Modbus/TCP
Socket services ("Open user communication")
MQTT unencrypted
For a fee:
OPC UA
More here:
Communication methods for S7-1200/1500 PLCs
Personal opinion: if you don't have many data points to monitor, I would go with Modbus/TCP.
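To give an idea of how small the PC side of the Modbus/TCP route can be, here is a sketch using the open-source libmodbus library (not NI-specific; on the PLC side you would add the MB_SERVER instruction in TIA Portal):

#include <modbus.h>  /* libmodbus */

/* Read 10 holding registers starting at address 0. */
int ReadRegistersExample(void)
{
    uint16_t regs[10];
    modbus_t *ctx = modbus_new_tcp("192.168.0.10", 502);
    if (ctx == NULL || modbus_connect(ctx) == -1)
        return -1;
    int rc = modbus_read_registers(ctx, 0, 10, regs); /* -1 on error */
    modbus_close(ctx);
    modbus_free(ctx);
    return rc;
}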

Sending C2D message to Azure IoT Edge

I know C2D is not supported in Azure IoT Edge, and one option is to use direct methods.
What I mean is: can I use ModuleClient code to send a message to a module?
I have a ModuleA which has an output1, and a ModuleB which has a handler on input1.
I have a route as below:
"ModuleAToModuleB": "FROM /messages/modules/ModuleA/outputs/output1 INTO BrokeredEndpoint(\"/modules/ModuleB/inputs/input1\")",
And I use the code below from a console app to send a message to a specific module, based on that module's connection string (ModuleA's connection string):
string dataString = JsonConvert.SerializeObject(jData);
byte[] dataBytes = Encoding.UTF8.GetBytes(dataString);
var pipeMessage = new Message(dataBytes);
var moduleClient = ModuleClient.CreateFromConnectionString("HostName=xxx.azure-devices.net;DeviceId=xxx-01;ModuleId=ModuleA;SharedAccessKey=XXXXXXX", TransportType.Mqtt);
await moduleClient.SendEventAsync("output1", pipeMessage);
Will this code work? Will it send the message from ModuleA to ModuleB?
If you want to send anything from your laptop/PC in a console app to your IoT Edge device, you will need to use direct methods, as you mentioned in your question. To do that, you can use the Service SDK and the following method:
InvokeDeviceMethodAsync(string deviceId, string moduleId, CloudToDeviceMethod cloudToDeviceMethod);
In your sample, you suggested using the ModuleClient to send a message to your module. This will not work: ModuleClient is designed to be used only within the Azure IoT Edge runtime, and the method you are using (ModuleClient.CreateFromConnectionString) is one the runtime uses to set up a connection from the environment variables available on the device.
With the Service SDK, you can send a direct method to your ModuleA, and nothing is stopping you from forwarding the payload of that method on to ModuleB. You have already set up your route correctly.
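For completeness, here is roughly what that invocation looks like with the C service SDK (azure-iot-sdk-c). This is a sketch under the assumption that your SDK version has the module-targeted IoTHubDeviceMethod_InvokeModule call; the method name ForwardToModuleB is made up for illustration:

#include <stdio.h>
#include <stdlib.h>
#include "iothub_service_client_auth.h"
#include "iothub_devicemethod.h"

/* Invoke a direct method on ModuleA of device xxx-01 from a console app.
   Note this uses the *service* connection string, not a module one. */
int invoke_module_method(const char *service_conn_str)
{
    IOTHUB_SERVICE_CLIENT_AUTH_HANDLE auth =
        IoTHubServiceClientAuth_CreateFromConnectionString(service_conn_str);
    IOTHUB_SERVICE_CLIENT_DEVICE_METHOD_HANDLE method = IoTHubDeviceMethod_Create(auth);

    int status = 0;
    unsigned char *payload = NULL;
    size_t size = 0;
    IOTHUB_DEVICE_METHOD_RESULT res = IoTHubDeviceMethod_InvokeModule(
        method, "xxx-01", "ModuleA", "ForwardToModuleB",
        "{\"text\":\"hello\"}", 30 /* timeout, s */, &status, &payload, &size);
    if (res == IOTHUB_DEVICE_METHOD_OK)
        printf("status %d: %.*s\n", status, (int)size, payload);

    free(payload);
    IoTHubDeviceMethod_Destroy(method);
    IoTHubServiceClientAuth_Destroy(auth);
    return res;
}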
You need to call a function like InvokeMethodAsync, through which a direct method from ModuleA to ModuleB can be called. In the example you showed, you are calling SendEventAsync, which might not work. An example in C# is here.
Also, please go through this link, which suggests another method for module-to-module communication.
In addition to using direct methods, it's also possible for two modules to communicate directly with each other, bypassing the Edge Hub. The runtime, via Docker's networking capabilities, manages the DNS entries for each module (container). This allows one module to resolve the IP address of another module by its name.
For an example of this in action, you can follow the SQL tutorial here:
https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-store-data-sql-server
This tutorial uses a module to read data out of the Edge Hub and write it into another module hosting SQL Server, using the SQL Server client SDK. This interaction with SQL Server does not use the Edge Hub for communication.

GET / POST using Clarion

I have a Clarion 9 app that I want to be able to communicate with HTTP servers. I come from a PHP background and have no idea where to start.
What I wish to be able to do:
Parse JSON data and convert QUEUE data to JSON [Done]
Have a global variable like 'baseURL' that points to e.g. http://localhost.com [Done]
Call functions such that apiConnection.get('/users') would return the contents of the page. [I'm stuck here]
apiConnection.post('/users', myQueueData) would POST myQueueData's contents.
I tried using winhttp.dll by reading it with LibMaker, but it couldn't read it. Instead, I'm now using wininet.dll, for which LibMaker successfully created a .lib file.
I'm currently using the procedure prototypes from this code on GitHub: https://gist.github.com/ddur/34033ed1392cdce1253c
What I did was include them like this:
SimpleApi.clw
PROGRAM
INCLUDE('winInet.equ')
ApiLog QUEUE, PRE(log)
LogTitle STRING(10)
LogMessage STRING(50)
END
MAP
INCLUDE('winInetMap.clw')
END
INCLUDE('equates.clw'),ONCE
INCLUDE('DreamyConnection.inc'),ONCE
ApiConnection DreamyConnection
CODE
IF DreamyConnection.initiateConnection('http://localhost')
ELSE
log:LogTitle = 'Info'
log:LogMessage = 'Failed'
ADD(apiLog)
END
But the buffer that WinInet uses always returns 0.
I have created a GitHub repository https://github.com/spacemudd/clarion-api with all the code to look at.
I'm really lost here because I can't find proper documentation for Clarion.
I do not want a paid solution.
It kind of depends on which version of Clarion you have.
Starting around v9, they added ClaRunExt, which provides this kind of functionality via .NET interop.
From the help:
Use HTTP or HTTPS to download web pages, or any other type of file. You can also post form data to web servers. Very easy way to send HTTP web requests (and receive responses) to Web Servers, REST Web Services, or standard Web Services, with the most commonly used HTTP verbs; POST, GET, PUT, and DELETE.
Otherwise, search the LibSrc\ directory for "http" and you will get an idea of what is already there. abapi.inc, for example, appears to provide a wrapper around wininet.lib.
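For reference, the call sequence those prototypes have to map onto is small; here it is in plain C (the wrapper name HttpGetExample is made up). If any of the four WinInet calls is declared in the Clarion MAP with the wrong calling convention or parameter sizes, the reads can fail and leave the buffer empty, which may explain the symptom described above.

#include <windows.h>
#include <wininet.h>  /* link with wininet.lib */

/* Fetch url into buf; returns the number of bytes read, or -1 on failure. */
int HttpGetExample(const char *url, char *buf, DWORD bufSize)
{
    DWORD total = 0, got = 0;
    HINTERNET net = InternetOpenA("clarion-api", INTERNET_OPEN_TYPE_PRECONFIG,
                                  NULL, NULL, 0);
    HINTERNET req = net ? InternetOpenUrlA(net, url, NULL, 0,
                                           INTERNET_FLAG_RELOAD, 0) : NULL;
    if (req) {
        while (InternetReadFile(req, buf + total, bufSize - 1 - total, &got) && got)
            total += got;
        buf[total] = '\0';
        InternetCloseHandle(req);
    }
    if (net) InternetCloseHandle(net);
    return req ? (int)total : -1;
}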

Speech recognition, nodeJS

I'm currently working on a tool that lets me read all my notifications by connecting to different APIs.
It's working great, but now I would like to put some vocal commands to do some actions.
Like when the software is saying "One mail from Bob", I would like to say "Read it", or "Archive it".
My software is running through a Node server; currently I don't have any browser implementation, but that could be an option later.
What is the best way in Node.js to enable speech-to-text?
I've seen a lot of threads on it, but mostly they use the browser, and if possible I would like to avoid that at the beginning. Is that possible?
Another issue is that some software requires a WAV file as input. I don't have any file; I just want my software to be always listening so it can react when I say a command.
Do you have any information on how I could do that?
Cheers
Both of the answers here already are good, but what I think you're looking for is Sonus. It takes care of audio encoding and streaming for you. It's always listening offline for a customizable hotword (like Siri or Alexa). You can also trigger listening programmatically. In combination with a module like say, you could enable your example by doing something like:
say.speak('One mail from Bob', function(err) {
Sonus.trigger(sonus, 1) //start listening
});
You can also use different hotwords to handle the subsequent recognized speech in a different way. For instance:
"Notifications. Most recent." and "Send message. How are you today"
Throw that onto a Pi or a CHIP with a microphone on your desk and you have a personal assistant that reads your notifications and reacts to commands.
Simple Example:
https://twitter.com/_evnc/status/811290460174041090
Something a bit more complex:
https://youtu.be/pm0F_WNoe9k?t=20s
Full documentation:
https://github.com/evancohen/sonus/blob/master/docs/API.md
Disclaimer: This is my project :)
To recognize a few commands without streaming them to a server, you can use the node-pocketsphinx module, available on NPM.
The code to recognize a few commands in a continuous stream should look like this:
var fs = require('fs');
var ps = require('pocketsphinx').ps;

var modeldir = "../../pocketsphinx/model/en-us/";

// Configure the decoder for keyword spotting
var config = new ps.Decoder.defaultConfig();
config.setString("-hmm", modeldir + "en-us");               // acoustic model
config.setString("-dict", modeldir + "cmudict-en-us.dict"); // pronunciation dictionary
config.setString("-kws", "keyword_list.txt");               // file listing the keyphrases (format below)
var decoder = new ps.Decoder(config);

fs.readFile("../../pocketsphinx/test/data/goforward.raw", function(err, data) {
    if (err) throw err;
    decoder.startUtt();
    decoder.processRaw(data, false, false);
    decoder.endUtt();
    console.log(decoder.hyp()); // non-null when a keyphrase was spotted
});
Instead of readFile you just read the data from the microphone and pass it to the recognizer. The list of keywords to detect should look like this:
read it /1e-20/
archive it /1e-20/
For more details on keyword spotting with PocketSphinx, see Keyword Spotting in Speech and Recognizing multiple keywords using PocketSphinx.
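The node module above is a thin binding over the C library, so if it helps to see the always-listening microphone loop spelled out, here it is at the C level (a sketch: the model paths and keyword file name are illustrative, and audio capture uses sphinxbase's ad_* layer):

#include <stdio.h>
#include <pocketsphinx.h>
#include <sphinxbase/ad.h>

/* Listen continuously and print every spotted keyphrase. */
int listen_loop(void)
{
    cmd_ln_t *config = cmd_ln_init(NULL, ps_args(), TRUE,
        "-hmm",  "model/en-us/en-us",
        "-dict", "model/en-us/cmudict-en-us.dict",
        "-kws",  "keyword_list.txt",  /* same format as shown above */
        NULL);
    ps_decoder_t *ps = ps_init(config);
    ad_rec_t *ad = ad_open_dev(NULL, 16000);  /* default mic, 16 kHz */
    int16 buf[2048];

    ad_start_rec(ad);
    ps_start_utt(ps);
    for (;;) {
        int32 n = ad_read(ad, buf, 2048);
        ps_process_raw(ps, buf, n, FALSE, FALSE);
        if (ps_get_hyp(ps, NULL) != NULL) {    /* non-NULL = keyphrase heard */
            printf("heard: %s\n", ps_get_hyp(ps, NULL));
            ps_end_utt(ps);                    /* reset and keep listening */
            ps_start_utt(ps);
        }
    }
}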
To get audio data into your application, you could try a module like microphone, which I haven't used but which looks promising. This could be a way to avoid having to use the browser for audio input.
To do actual speech recognition, you could use the Speech to Text service of IBM Watson Developer Cloud. This service supports a websocket interface, so that you can have a full duplex service, piping audio data to the cloud and getting back the resulting transcription. You may want to consider implementing a form of onset detection in order to avoid transmitting a lot of (relative) silence to the service - that way, you can stay within the free tier.
There is also a text-to-speech service, but it sounds like you have a solution already for that part of your tool.
Disclosure: I am an evangelist for IBM Watson.
