How to set channel gain in OpenAL?

I tried
alBufferf (myChannelId, AL_MAX_GAIN (and AL_GAIN), volumeValue);
and got error 0xA002.

As Isaac has said, you probably want to be setting gain on your sources:
alSourcef (sourceID, AL_GAIN, volume);
To avoid receiving mysterious error codes in the future, get into the habit of polling for errors after calls you think may fail or calls you are trying to debug.
That way you'd know immediately that 0xA002 is AL_INVALID_ENUM.
To do this with OpenAL you call alGetError(), which clears and returns the most recent error:
ALenum ALerror = AL_NO_ERROR;
ALerror = alGetError();
std::cout << getALErrorString(ALerror) << std::endl;
You'll need to write something like this to take an error code and return/print a string:
std::string getALErrorString(ALenum err) {
    switch (err) {
        case AL_NO_ERROR:          return std::string("AL_NO_ERROR - (No error).");
        case AL_INVALID_NAME:      return std::string("AL_INVALID_NAME - Invalid name (ID) parameter passed to AL call.");
        case AL_INVALID_ENUM:      return std::string("AL_INVALID_ENUM - Invalid enum parameter passed to AL call.");
        case AL_INVALID_VALUE:     return std::string("AL_INVALID_VALUE - Invalid value parameter passed to AL call.");
        case AL_INVALID_OPERATION: return std::string("AL_INVALID_OPERATION - Requested operation is not valid.");
        case AL_OUT_OF_MEMORY:     return std::string("AL_OUT_OF_MEMORY - OpenAL ran out of memory.");
        default:                   return std::string("AL Unknown Error.");
    }
}
You can look up exactly what an error code means for a specific function call in the OpenAL Programmer's Guide.
For example, on page 39 you can see that AL_INVALID_ENUM on alSourcef means "The specified parameter is not valid".

0xA002 is an ILLEGAL ENUM ERROR (AL_INVALID_ENUM) on Linux.
You got it because it's impossible to modify the gain of a buffer; buffers don't have a gain.
What you can do is set the AL_GAIN attribute either on the listener (applying it to all sources in the current context) or on a particular source.
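For reference, a minimal sketch of both options (it assumes sourceID is a valid source generated with alGenSources, that a context is current, and it reuses the getALErrorString helper from above):
// Per-source gain: affects only this source.
alSourcef (sourceID, AL_GAIN, 0.5f);

// Listener gain: scales the output of every source in the current context.
alListenerf (AL_GAIN, 0.8f);

// Poll for errors afterwards, as suggested above.
ALenum err = alGetError();
if (err != AL_NO_ERROR)
    std::cerr << getALErrorString(err) << std::endl;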

Related

Problem with named event of length 4*n-1 using CreateEvent() and OpenEvent()

I have two applications: one creates a named event using CreateEvent() and the other opens the same event using OpenEvent(), as follows:
Application A.exe:
DLSRemoteConnnectionRqstEvent = CreateEvent(NULL, FALSE, FALSE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
if (GetLastError() != 0)
{
    DLSRemoteConnnectionRqstEvent = OpenEvent(EVENT_ALL_ACCESS, TRUE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
}
else
{
    cout << "DLS_REMOTE_CONNECTION_RQST_EVENT created" << endl;
}
Application B.exe:
DLSRemoteConnnectionRqstEvent = OpenEvent(EVENT_ALL_ACCESS, TRUE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
if (GetLastError() != 0 && INVALID_HANDLE_VALUE == m_DLSRemoteConnnectionRqstEvent)
{
    DLSRemoteConnnectionRqstEvent = CreateEvent(NULL, FALSE, FALSE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
}
else
{
    cout << "m_DLSRemoteConnnectionRqstEvent opened" << endl;
}
The event name is defined in a common header file as below:
#define DLS_REMOTE_CONNECTION_RQST_EVENT "DLS_REMOTE_CONNECTION_RQST_MSG_TYPE"
Application A is able to create the event successfully, but Application B is not able to open the event, getting a handle value of NULL.
I have tested a few scenarios and found that if the event name has a length of 4*n-1, OpenEvent() always gives me a NULL handle, while if the event name is shorter or longer than 4*n-1 my application works fine.
Please help me understand why my application behaves like this when the event name length is 4*n-1.
Other events created and opened similarly in these applications work fine, as their name lengths are not 4*n-1.
CreateEvent() and OpenEvent() should work for all event name lengths.
Your LPCWSTR typecasts are wrong. You can't cast narrow strings into wide strings like you are doing. Use a wide string literal to begin with, e.g.:
#define DLS_REMOTE_CONNECTION_RQST_EVENT L"DLS_REMOTE_CONNECTION_RQST_MSG_TYPE"
...
CreateEventW(..., DLS_REMOTE_CONNECTION_RQST_EVENT);
...
OpenEventW(..., DLS_REMOTE_CONNECTION_RQST_EVENT);
That being said, using CreateEvent() and OpenEvent() the way you are is causing a race condition. Since it is clear that either application can create the event if it doesn't already exist, you should just use CreateEvent() in both applications and let it avoid the race condition for you. There is no reason to use OpenEvent() in this code at all.
Also, your error checking is wrong. Application A is not checking that a failure actually occurred before looking for a failure error code. CreateEvent() can report non-zero error codes in both success and failure conditions. Application B is at least trying to check if OpenEvent() failed, but it is doing so incorrectly since OpenEvent() returns NULL on failure, not INVALID_HANDLE_VALUE.
Try this instead:
Application A.exe and B.exe:
#define DLS_REMOTE_CONNECTION_RQST_EVENT L"DLS_REMOTE_CONNECTION_RQST_MSG_TYPE"
...
DLSRemoteConnnectionRqstEvent = CreateEventW(NULL, FALSE, FALSE, DLS_REMOTE_CONNECTION_RQST_EVENT);
if (NULL == DLSRemoteConnnectionRqstEvent)
{
    // error handling...
}
else
{
    if (GetLastError() == ERROR_ALREADY_EXISTS)
        cout << "DLS_REMOTE_CONNECTION_RQST_EVENT opened" << endl;
    else
        cout << "DLS_REMOTE_CONNECTION_RQST_EVENT created" << endl;
}
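Once both processes have obtained a handle this way, one can signal the event and the other can wait on it. A minimal usage sketch (the 5-second timeout and the choice of which process signals are illustrative, not taken from the question):
// In the process that signals:
SetEvent(DLSRemoteConnnectionRqstEvent);

// In the process that waits:
DWORD waitResult = WaitForSingleObject(DLSRemoteConnnectionRqstEvent, 5000);
if (waitResult == WAIT_OBJECT_0)
    cout << "DLS_REMOTE_CONNECTION_RQST_EVENT signaled" << endl;
else if (waitResult == WAIT_TIMEOUT)
    cout << "timed out waiting for DLS_REMOTE_CONNECTION_RQST_EVENT" << endl;

// Both processes should close the handle when they are done with it:
CloseHandle(DLSRemoteConnnectionRqstEvent);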

How should I handle Perl 6 $*ARGFILES that can't be read by lines()?

I'm playing around with lines which reads lines from the files you specify on the command line:
for lines() { put $_ }
If it can't read one of the filenames it throws X::AdHoc (one day maybe it will have better exception types so we can grab the filename with a .path method). Fine, so catch that:
try {
CATCH { default { put .^name } }
for lines() { put $_ }
}
So this catches the X::AdHoc error but that's it. The try block is done at that point. It can't .resume and try the next file:
try {
CATCH { default { put .^name; .resume } } # Nope
for lines() { put $_ }
}
Back in Perl 5 land you get a warning about the bad filename and the program moves on to the next thing.
I could filter @*ARGS first then reconstruct $*ARGFILES if there are some arguments:
$*ARGFILES = IO::CatHandle.new:
    @*ARGS.grep( { $^a.IO.e and $^a.IO.r } ) if +@*ARGS;
for lines() { put $_ }
That works although it silently ignores bad files. I could handle that but it's a bit tedious to handle the argument list myself, including - for standard input as a filename and the default with no arguments:
my $code := { put $_ };
@*ARGS = '-' unless +@*ARGS;
for @*ARGS -> $arg {
    given $arg {
        when '-'     { $code.($_) for $*IN.lines(); next }
        when ! .IO.e { note "$_ does not exist"; next }
        when ! .IO.r { note "$_ is not readable"; next }
        default      { $code.($_) for $arg.IO.lines() }
    }
}
But that's a lot of work. Is there a simpler way to handle this?
To warn on bad open and move on, you could use something like this:
$*ARGFILES does role { method next-handle { loop {
try return self.IO::CatHandle::next-handle;
warn "WARNING: $!.message"
}}}
.say for lines
Simply mix in a role that makes the IO::CatHandle.next-handle method retry getting the next handle. (You can also use the but operator to mix into a copy instead.)
If it can't read one of the filenames it throws X::AdHoc
The X::AdHoc is from .open call; there's a somewhat moldy PR to make those exceptions typed, so once that's fixed, IO::CatHandle would throw typed exceptions as well.
It can't .resume
Yeah, you can only resume from a CATCH block that caught it, but in this case it's caught inside the .open call and turned into a Failure, which is then received by IO::CatHandle.next-handle and whose .exception is re-.thrown.
However, even if it were resumable here, it would simply resume at the point where the exception was thrown, not retry with another handle, so it wouldn't help. (I looked into making it resumable, but that adds vagueness to the on-switch, and I'm not comfortable speccing that resuming exceptions from certain places must be able to meaningfully continue; we currently don't offer such a guarantee for any place in core.)
including - for standard input as a filename
Note that that special meaning is going away in the 6.d language as far as IO::Handle.open (and by extension IO::CatHandle.new) goes. It might get special treatment in IO::ArgFiles, but I've not seen that proposed.
Back in Perl 5 land you get a warning about the bad filename and the program moves on to the next thing.
In Perl 6, it's implemented as a generalized IO::CatHandle type users can use for anything, not just file arguments, so warning and moving on by default feels too lax to me.
IO::ArgFiles could be special-cased to offer such behaviour. Personally, I'm against special casing stuff all over the place and I think that is the biggest flaw in Perl 5, but you could open an Issue proposing that and see if anyone backs it.

Catching thrown Enum Values

Haxe permits the throwing of pretty much anything, but seems to be a bit limited in its catching ability. For example, I have a static error function that throws values of an ErrorType enum:
class Error
{
    public static var CATCH_ALL:Bool = false;

    public static function Throw(aError:ErrorType, ?ignore:Bool=false, ?inf:PosInfos):Void
    {
        trace('Error: $aError at ' + inf.className + ':' + inf.methodName + ':' + inf.lineNumber);
        if (!CATCH_ALL && !ignore)
        {
            throw aError;
        }
    }
}

enum ErrorType
{
    NULL_PARAM(msg:String);
    NOT_FOUND(msg:String);
}
While I can catch pretty much anything, I am limited to basic types, class types and enum types. This means that I can catch every string, but not specifically a string containing "potato", for example. If I create multiple error classes, I can catch a specific class type while ignoring the others, but the same does not seem to be possible with enums. Is there an alternative to the following code that would compile?
try
{
    Error.Throw(ErrorType.NULL_PARAM('Potato'));
}
catch (e:ErrorType.NULL_PARAM) trace(e); // does not compile
catch (e:ErrorType) trace(e);            // works, but catches every error
Selection of catch-expressions is limited to types; it doesn't provide pattern-matching capabilities like switch does:
Catch blocks are checked from top to bottom with the first one whose type is compatible with the thrown value being picked.
All values of the ErrorType enum are compatible with the ErrorType type. This means that, unfortunately, I think the best you can do is to catch ErrorType and then do the selection inside the catch-block, using a switch and potentially re-throwing. However, note that a simple throw e would currently cause the stack trace to be lost, as discussed in #4159.

"this" argument in boost bind

I am writing a multi-threaded server that handles async reads from many TCP sockets. Here is the section of code that bothers me.
void data_recv (void) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)),
        boost::bind ( &RPC::on_data_recv, this,
                      boost::asio::placeholders::error,
                      boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if ( rawDataW[bytesRx-1] == ENDMARKER ) { // <-- this code is fine
        process_and_write_rawdata_to_file
    }
    else {
        read_socket_until_endmarker // <-- HELP REQUIRED!!
        process_and_write_rawdata_to_file
    }
}
Nearly always, async_read_some reads in data including the endmarker, so it works fine. Rarely, the endmarker's arrival in the stream is delayed, and that's when my program fails. I think it fails because I have not understood how boost bind works.
My first question:
I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
My second question:
In the on_data_recv() method, how do I read data from the same socket that was read in the data_recv() method? In other words, how do I pass the socket as an argument from the calling method to the handler when the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" will be appreciated.
My first question: I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well) start_accept() is a member function. bind is conveniently designed so that when its first argument is a pointer to a member function (note the &), it treats its second argument as the object to invoke that member function on.
So while code like this:
void foo(int x) { ... }
bind(foo, 3)();
is equivalent to just calling foo(3),
code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- note the &Bar::, a pointer to member function
would be equivalent to calling bar.foo(3).
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice &RPC:: again
              boost::asio::placeholders::error,
              boost::asio::placeholders::bytes_transferred)
When this object is invoked inside Asio, it is equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
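If it helps, here is a small self-contained sketch (plain boost::bind, no Asio; Printer and its print method are made-up names for illustration) showing how bound arguments and placeholders mix:
#include <boost/bind.hpp>
#include <iostream>

struct Printer {
    void print(int fixed, int a, int b) {
        std::cout << fixed << " " << a << " " << b << std::endl;
    }
};

int main() {
    Printer p;
    // 42 is bound up front; _1 and _2 are filled in at call time,
    // just as Asio fills in the error code and bytes_transferred.
    boost::bind(&Printer::print, &p, 42, _1, _2)(1, 2); // prints "42 1 2"
    return 0;
}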
For the second part, it is not clear to me how you're working with multiple threads. Do you run io_service.run() from more than one thread (possible, but I think beyond your experience level)? It might be the case that you're confusing async IO with multithreading. I'm gonna assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
        boost::bind ( &RPC::on_data_recv, this,
                      startPos,
                      boost::asio::placeholders::error,
                      boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW

void on_data_recv (size_t startPos,
                   boost::system::error_code ec,
                   std::size_t bytesRx) {
    // TODO: Check ec
    if (rawDataW[startPos + bytesRx-1] == ENDMARKER) {
        process_and_write_rawdata_to_file
    }
    else {
        // TODO: Error if startPos + bytesRx == 648*2
        data_recv(startPos + bytesRx);
    }
}
Notice though that the above code still has a problem: if the other side sends two messages quickly one after another, we could receive (in one async_read_some call) the full first message plus part of the second message, and thus miss the ENDMARKER of the first one. So it is not enough to test only whether the last received byte is the ENDMARKER.
I could go on and modify this function further (I think you get the idea of how), but you'd be better off using async_read_until, which is meant exactly for this purpose.
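For reference, a rough sketch of what that could look like. It assumes ENDMARKER is a single char and adds a boost::asio::streambuf member (here called readBuffer, an illustrative name, not something from the question):
void data_recv (void) {
    // Keep reading until ENDMARKER shows up in readBuffer, however many
    // TCP segments that takes.
    boost::asio::async_read_until (
        socket, readBuffer, ENDMARKER,
        boost::bind ( &RPC::on_data_recv, this,
                      boost::asio::placeholders::error,
                      boost::asio::placeholders::bytes_transferred));
}

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if (ec) { /* handle error */ return; }
    // bytesRx counts up to and including the delimiter; readBuffer may
    // already contain the start of the next message beyond it.
    std::istream is (&readBuffer);
    std::string message;
    std::getline (is, message, ENDMARKER); // extracts exactly one message
    // process and write `message` to file, then start the next read:
    data_recv ();
}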

Best pattern for simulating "continue" in Groovy closure

It seems that Groovy does not support break and continue from within a closure. What is the best way to simulate this?
revs.eachLine { line ->
if (line ==~ /-{28}/) {
// continue to next line...
}
}
You can only support continue cleanly, not break, especially with stuff like eachLine and each. The inability to support break has to do with how those methods are evaluated; there is no way to communicate to the method that the loop should not be finished. Here's how to support continue:
The best approach (assuming you don't need the resulting value):
revs.eachLine { line ->
    if (line ==~ /-{28}/) {
        return // returns from the closure
    }
}
If your sample really is that simple, this is good for readability.
revs.eachLine { line ->
    if (!(line ==~ /-{28}/)) {
        // do what you would normally do
    }
}
Another option, which simulates what a continue would normally do at the bytecode level:
revs.eachLine { line ->
    while (true) {
        if (line ==~ /-{28}/) {
            break
        }
        // rest of normal code
        break
    }
}
One possible way to support break is via exceptions:
try {
    revs.eachLine { line ->
        if (line ==~ /-{28}/) {
            throw new Exception("Break")
        }
    }
} catch (Exception e) { } // just drop the exception
You may want to use a custom exception type to avoid masking other real exceptions, especially if you have other processing going on in that class that could throw real exceptions, like NumberFormatExceptions or IOExceptions.
Closures cannot break or continue because they are not loop/iteration constructs; they are tools used to process/interpret/handle iterative logic. You can skip a given iteration by simply returning from the closure without processing it, as in:
revs.eachLine { line ->
    if (line ==~ /-{28}/) {
        return
    }
}
Break support does not happen at the closure level but instead is implied by the semantics of the method call accepting the closure. In short, that means instead of calling each on something like a collection, which is intended to process the entire collection, you should call find, which will process until a certain condition is met. Most (all?) of the time you feel the need to break from a closure, what you really want to do is find a specific condition during your iteration, which makes the find method match not only your logical needs but also your intention.

Sadly, some of the API lacks support for a find method... File, for example. It's possible that all the time spent arguing whether the language should include break/continue could have been well spent adding the find method to these neglected areas. Something like firstDirMatching(Closure c) or findLineMatching(Closure c) would go a long way and answer 99+% of the "why can't I break from...?" questions that pop up in the mailing lists. That said, it is trivial to add these methods yourself via MetaClass or Categories.
class FileSupport {
    public static String findLineMatching(File f, Closure c) {
        f.withInputStream {
            def r = new BufferedReader(new InputStreamReader(it))
            for (def l = r.readLine(); null != l; l = r.readLine())
                if (c.call(l)) return l
            return null
        }
    }
}
use(FileSupport) { new File("/home/me/some.txt").findLineMatching { it ==~ /-{28}/ } }
Other hacks involving exceptions and other magic may work but introduce extra overhead in some situations and convolute the readability in others. The true answer is to look at your code and ask if you are truly iterating or searching instead.
If you pre-create a static Exception object in Java and then throw the (static) exception from inside a closure, the run-time cost is minimal. The real cost is incurred in creating the exception, not in throwing it. According to Martin Odersky (inventor of Scala), many JVMs can actually optimize throw instructions to single jumps.
This can be used to simulate a break:
final static BREAK = new Exception();
//...
try {
... { throw BREAK; }
} catch (Exception ex) { /* ignored */ }
Use return to simulate continue, and use the any method (whose closure stops iterating once it returns true) to simulate break.
Example
File content:
1
2
----------------------------
3
4
5
Groovy code:
new FileReader('myfile.txt').any { line ->
    if (line =~ /-+/)
        return // continue
    println line
    if (line == "3")
        true // break
}
Output:
1
2
3
In this case, you should probably think of the find() method. It stops after the first time the closure passed to it returns true.
With rx-java you can transform an iterable into an observable.
Then you can replace continue with a filter and break with takeWhile.
Here is an example:
import rx.Observable
Observable.from(1..100000000000000000)
.filter { it % 2 != 1}
.takeWhile { it<10 }
.forEach {println it}
