I want to play Windows system sounds like those for Error and Information dialog boxes.
I tried the Beep API:
Public Declare Function Beep Lib "kernel32" _
(ByVal dwFreq As Long, ByVal dwDuration As Long) As Long
Beep 200, 2000
The problem is that it works on XP/2000 only, but not on Windows 7. I also want to be able to play different sounds.
You can play the standard Windows alert sounds by calling the MessageBeep function. To call it from VB 6, you'll need to write a declaration like so:
Public Declare Function MessageBeep Lib "user32" (ByVal wType As Long) As Long
And then you'll need the constants that specify the type of beep to play:
Public Const MB_DEFAULTBEEP As Long = -1 ' the default beep sound
Public Const MB_ERROR As Long = 16 ' for critical errors/problems
Public Const MB_WARNING As Long = 48 ' for conditions that might cause problems in the future
Public Const MB_INFORMATION As Long = 64 ' for informative messages only
Public Const MB_QUESTION As Long = 32 ' (no longer recommended to be used)
Notice that they match up perfectly with the icons displayed by a message box (MsgBox). Each of the available icons has a different default alert sound associated with it, so calling, for example, MessageBeep MB_ERROR plays the same sound the user hears when a critical-error message box appears. The same guidance that applies to the correct use of these icons in a message box also applies to their use as isolated, independent alert sounds.
And of course, because these are standard system sounds, they're not guaranteed to sound the same on every machine: the exact sounds used are configurable by the user. But that's probably what you want.
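If you just want to hear the different sounds quickly without setting up a VB 6 project, the same user32 call can be exercised from any language that can reach the Windows API. Here's a minimal Python sketch via ctypes, purely as an illustration (the constant values mirror the ones declared above):
import ctypes

MB_ICONERROR = 0x10          # same value as MB_ERROR above
MB_ICONINFORMATION = 0x40    # same value as MB_INFORMATION above

ctypes.windll.user32.MessageBeep(MB_ICONERROR)        # plays the user's configured "critical error" sound
ctypes.windll.user32.MessageBeep(MB_ICONINFORMATION)  # plays the "information" sound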
As for why Beep doesn't work, it's a rather sad and complicated story. The function documentation contains most of the details:
A long time ago, all PC computers shared a common 8254 programmable interval timer chip for the generation of primitive sounds. The Beep function was written specifically to emit a beep on that piece of hardware.
[...]
Since then, sound cards have become standard equipment on almost all PC computers. As sound cards became more common, manufacturers began to remove the old timer chip from computers. The chips were also excluded from the design of server computers. The result was that Beep did not work on computers without the chip. This was okay because most developers had moved on to calling the MessageBeep function, which uses whatever the default sound device is instead of the 8254 chip.
Eventually because of the lack of hardware to communicate with, support for Beep was dropped in Windows Vista and Windows XP 64-Bit Edition.
In Windows 7, Beep was rewritten to pass the beep to the default sound device for the session. This is normally the sound card, except when run under Terminal Services, in which case the beep is rendered on the client.
You can find even more information on Larry Osterman's blog: What's up with the Beep driver in Windows 7?
So Beep should be working on Windows 7, but it requires that your computer has a sound device installed, that speakers are connected, and that they're turned on. Of course, the same is true of the MessageBeep function.
After performing a bleScan and letting the user select their device, I get that device via the getRemoteDevice(address) call.
Once I have that BluetoothDevice object, I call createBond(). Since createBond() triggers an async operation, I have a BroadcastReceiver listening for the results, and I confirm that the device has paired/bonded when I receive a BONDED result in the receiver.
This is pretty standard procedure for bonding with a BT device.
The issue I'm running into is that, under seemingly random conditions, the built-in pair/connect dialog does not appear.
Through some testing I found that if I swipe down on the phone, long-press Bluetooth, tap "Pair a New Device", and the device shows up in the list... I can then return to my app, call createBond(), and the PIN dialog appears.
This tells me there's something iffy with some kind of Bluetooth cache or something along those lines.
I'm trying to determine why this might be, and whether there's something I should do BEFORE calling createBond() to ensure the PIN dialog appears.
I can post relevant code, but it's really just a one-liner:
boolean bondInitiated = getDevice().createBond();
After I call createBond() there's typically a 1-2 second pause and then the PIN dialog appears.
Can someone offer some insight here? Is there a better way to pair with a BT device from Android other than calling createBond()?
I'd LOVE to just give the user a PIN text box, let them enter the pin shown on the BT device (it's a glucometer) and then pair that way but I've not seen a way to do that.
Unfortunately, there isn't a standard way to always show the pairing dialog to the user. This is because the pairing process is dependent on the hardware, and some OEMs have modified how it works at the OS level. As such, there are variations depending on the hardware that is being used.
However, there might be some "hacks" that you can implement to get this to work. Have a look at this link, as it includes details on the bonding process and the pairing popup. It's a bit outdated (3 years old), but it includes the following paragraph:
So if you want you can try to make the popup always appear in the foreground by doing discovery for 1 second before connecting to a device. It is a bit of a hack but it works.
I am working with an application that uses OpenAL API quite extensively. In particular, there are multiple sound sources, non-trivial listener filters, etc.
I want to be able to run this application significantly faster than real-time. At the same time, the sound must be saved for later postprocessing. Is there a way to access the OpenAL output programmatically (virtually) without ever playing the sound on the real playback device?
Ideally, I'd like access to the audio that would be played during every tick of the main loop of my application. Normally one tick corresponds to one rendered frame (e.g. 1/30th of a second), but in this case we would be running the app as fast as possible.
We ended up using OpenAL Soft to do this. Example:
#include "alext.h"
LPALCLOOPBACKOPENDEVICESOFT alcLoopbackOpenDeviceSOFT;
alcLoopbackOpenDeviceSOFT = alcGetProcAddress(NULL,"alcLoopbackOpenDeviceSOFT");
replace your default device with this device
ALCcontext *context = alcCreateContext(device, attrs);
Set the attrs as you would for your default device
Then in the main loop use:
LPALCRENDERSAMPLESSOFT alcRenderSamplesSOFT;
alcRenderSamplesSOFT = alcGetProcAddress(NULL, "alcRenderSamplesSOFT");
alcRenderSamplesSOFT(device, buffer, 1024);
Here buffer receives 1024 sample frames in the format you specified in the context attributes. Rendering happens as fast as you call it rather than in real time, so you can render exactly the audio that corresponds to each tick of your main loop and save it for postprocessing.
Are you able to do the processing you need on the audio data before it is shipped to OpenAL? I've done a lot with javax.sound.sampled when it is untethered from the blocking write() method in SourceDataLine, especially when saving to file rather than playing back.
From what little I know about OpenAL, there is also a blocking process that occurs when data is shipped, with a queue of arrays that are managed. I've been meaning to look into this further...
(Probably not being very helpful here. Apologies.)
I'm working on some home automation programs and one of the things I want to be able to do is detect when my 4th generation Apple TV has woken from sleep. This will generally only ever happen when someone pressed a button on its Siri remote to wake it up.
I have a PC (connected to the same TV as the Apple TV) that has a Pulse-Eight USB-CEC adapter, so naturally the first thing I tried was using CEC to determine when the Apple TV is awake. Unfortunately it's not reliable, since monitoring the Apple TV's power status to see when it wakes up produces false positives. (I should note that I do not have "Control TVs and Receivers" enabled on the Apple TV, and can't turn it on for the particular project I'm working on because I need the Apple TV to not change the TV's input.)
I'm trying to think of some other way to do this. I'm open to any possibilities, including things like:
Making use of private APIs on the Apple TV
Running an 'always on' program in the background of the Apple TV that sends a signal when the Apple TV wakes up, if that's even possible. (I suspect that it isn't.)
Monitoring the Bluetooth communication between the Siri Remote and the Apple TV, if that's possible
Somehow filtering HDMI-CEC commands so that I can turn on 'Control TVs and Receivers', allow the Apple TV's CEC commands for turning on and off the TV, and exclude commands for changing the TV's input.
Any other method, no matter how hacky or ridiculous, as long as it works!
Does anyone have any suggestions? I'm running out of things to try!
I tried to post the text below on the Apple discussion/support communities but was told I don't have the right to post this content. Maybe someone in this group can succeed in doing it:
Apple TV 4 CEC integration is great when it works, but it doesn't work all the time or with all the various equipment out there; do a search across forums and you will see lots of unhappy users. I would like to use a Raspberry Pi to detect when my Apple TV goes to sleep and wakes up, and programmatically turn my TV on or off using its RS232C port or custom CEC commands.
I used a Bonjour services explorer and compared every single result between the sleep and on states, and there are no differences whatsoever. I would have expected Apple to welcome such automation projects and make this information readily available with a variable such as status: sleep or status: on.
Is there a way I could tell the two states apart via the network connection?
If not, could one build a TvOS app which runs on the background and makes this information available to clients somehow?
I finally found a method that seems to work consistently. This method is incredibly hacky and not at all the sort of way I'd prefer to do this, but it's the only one I've found so far that works consistently.
I have taken an old USB webcam and affixed it to the front of my Apple TV so that its lens is directly in front of the Apple TV's front-facing light. Whenever the Apple TV is asleep, I simply check for the light turning on by taking images from the camera and analyzing their average luminosity. Since the lens is right next to the light, when it turns on it creates a huge blown-out white circle in the image that's incredibly easy to detect.
As long as the Apple TV is asleep, the light turning on seems to indicate 100% of the time that it has woken up. I have yet to find a single incident of either a false positive or false negative.
Since pressing buttons on the Siri remote causes this light to blink, this also means that I can detect buttons being pressed by looking for changes in the light while the Apple TV is awake. It's not 100% accurate, since some button presses are faster than the frame rate of my crappy old USB webcam, but it works well enough.
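For anyone who wants to replicate this, here is a minimal sketch of the luminosity check in Python with OpenCV; the camera index, brightness threshold, and polling interval are assumptions you would need to tune for your own webcam and lighting:
import time
import cv2

cap = cv2.VideoCapture(0)       # the USB webcam taped to the Apple TV (index is an assumption)
THRESHOLD = 200                 # assumed brightness cutoff (0-255); tune for your setup

while True:
    ok, frame = cap.read()
    if not ok:
        time.sleep(0.1)
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if gray.mean() > THRESHOLD:  # average luminosity of the whole frame
        print("Status light is on: the Apple TV just woke up (or a remote button was pressed)")
    time.sleep(0.2)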
I would vastly prefer to find a better method of doing this, like making a request over the LAN to the Apple TV where the response clearly indicates it being awake or asleep, but so far it doesn't look like that's possible.
Here I am, six and a half years later, and I've finally found a better way to get the power state of my Apple TV.
I can simply use pyatv, which has a function named power_state that returns the Apple TV's current power state.
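For reference, a rough sketch of querying that power state with pyatv (assuming a recent pyatv version; the IP address is a placeholder, and depending on your tvOS version you may need to pair the protocols with atvremote first):
import asyncio
import pyatv

async def main():
    loop = asyncio.get_running_loop()
    confs = await pyatv.scan(loop, hosts=["192.168.1.50"])   # placeholder address
    if not confs:
        print("Apple TV not found")
        return
    atv = await pyatv.connect(confs[0], loop)
    try:
        print(atv.power.power_state)   # PowerState.On while awake, PowerState.Off while asleep
    finally:
        atv.close()

asyncio.run(main())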
So if anyone has been following Twitch Plays Pokemon for the last week or so (http://www.twitch.tv/twitchplayspokemon), you'll know what I'm talking about. They are streaming an emulated version of Pokemon Red and allowing members to type controls into the chat. The controls they type correspond to those on an actual Game Boy and are somehow 'sent' to the emulator as controls. For example, if someone types 'start', it pops up the start menu in the game.
Is there any documentation online which could show me how to do something like this (albeit smaller scale)?
Thanks!
It's actually quite simple once you get the hang of emulating keystrokes.
On Windows you can use the keybd_event WinAPI function to simulate keystrokes. Here's some example C++ code that holds down the A key for 150 milliseconds:
#include <windows.h> // for keybd_event, Sleep and KEYEVENTF_KEYUP

keybd_event(0x41, 0, 0, 0); // starts holding down key 0x41 (A)
Sleep(150);
keybd_event(0x41, 0, KEYEVENTF_KEYUP, 0); // releases key 0x41 (A)
(you can find values for other keys here)
Once you get keystroke emulation working, you just need to get the chat into your program. Either make your software connect to the IRC channel of your Twitch chat directly, or run HexChat (or any other IRC client), make it connect to the Twitch chat by following this guide, enable logging, and have your software parse the chat log by simply reading the file line by line and waiting for new lines once you get to the end of it.
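The log-following part is language-agnostic; here's a rough sketch of the idea in Python (the log path, the line format, and the command-to-key mapping are placeholders you'd adapt to your IRC client and game):
import time

COMMANDS = {"up": "w", "down": "s", "left": "a", "right": "d", "a": "z", "b": "x", "start": "enter"}

def follow(path):
    # Yield new lines appended to the log file, like tail -f.
    with open(path, "r", encoding="utf-8", errors="ignore") as log:
        log.seek(0, 2)              # jump to the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.1)     # wait for the IRC client to write more
                continue
            yield line

for line in follow("twitch_chat.log"):                # placeholder path to your IRC client's log
    word = line.rsplit(">", 1)[-1].strip().lower()    # crude parse of "<nick> command"
    if word in COMMANDS:
        print("would press:", COMMANDS[word])         # hook your keystroke emulation in here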
I wrote my own Twitch Plays ... software on Windows in C# in a matter of minutes, and then released a polished, configurable version that should work on any game here.
Does anyone know of a good way to get a bi-directional dump of MIDI SysEx data on Linux? (between a Yamaha PSR-E413 MIDI keyboard and a copy of the Yamaha MusicSoft Downloader running in Wine)
I'd like to reverse-engineer the protocol used to copy MIDI files to and from my keyboard's internal memory and, to do that, I need to do some recording of valid exchanges between the two.
The utility does work in Wine (with a little nudging) but I don't want to have to rely on a cheap, un-scriptable app in Wine when I could be using a FUSE filesystem.
Here's the current state of things:
My keyboard connects to my PC via a built-in USB-MIDI bridge. USB dumpers/snoopers are a possibility, but I'd prefer to avoid them if possible. I don't want to have to decode yet another layer of protocol encoding before I even get started.
I run only Linux. However, if there really is no other option than a Windows-based dumper/snooper, I can try getting the USB 1.1 pass-through working on my WinXP VirtualBox VM.
I run bare ALSA for my audio system with dmix for waveform audio mixing.
If a sound server is necessary, I'm willing to experiment with JACK.
No PulseAudio please. It took long enough to excise it from my system.
If the process involves ALSA MIDI routing:
a virtual pass-through device I can select from inside the Downloader is preferred, because the Downloader's own port often only appears in an ALSA patch bay GUI like patchage an instant before it starts communicating with the keyboard.
Neither KMIDIMon nor GMIDIMonitor support snooping bi-directionally as far as I can tell.
virmidi isn't relevant and I haven't managed to get snd-seq-dummy working.
I suppose I could patch ALSA to get dumps if I really must, but it's really an option of last resort.
The vast majority of my programming experience is in Python, PHP, Javascript, and shell script.
I have almost no experience programming in C.
I've never even seen a glimpse of kernel-mode code.
I'd prefer to keep my system stable and my uptime high.
This question has been unanswered for some time, and while I do not have an exact answer to your problem, I may have something that can push you (or others with similar problems) in the right direction.
I had a similar, albeit less complex, problem when I wanted to sniff the data used to set and read presets on an Akai LPK25 MIDI keyboard. Similar to your setup, the software to configure the keyboard runs in Wine, but I also had no luck finding a sniffer setup for Linux.
For lack of an existing solution I rolled my own using ALSA MIDI routing over virmidi ports. I understand why you see them as useless, because without additional software they cannot help with sniffing MIDI traffic.
My solution was to program a MIDI relay/bridge in Java, where I read input from a virmidi port, display the data, and send it on to the keyboard. The answer from the keyboard (if any) is also read, displayed, and finally transmitted back to the virmidi port. The application in Wine can be set up to use the virmidi port for communication, and in theory this process is completely transparent (except for potential latency issues). The application is written in a generic way and is not hardcoded to my problem.
I was only dealing with SysEx messages of about 20 bytes length so I am not sure how well the software works for sniffing the transfer of large amounts of data. But maybe you can modify it / write your own program following the example.
Sources available here: https://github.com/hiben/MIDISpy
(Java 1.6, ant build file included, source is under BSD license)
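Since the question mentions Python experience, the same relay-and-log idea could also be sketched in Python with the mido library (python-rtmidi backend). This is only a sketch: the port names are placeholders for your actual virmidi and keyboard ports (list them with mido.get_input_names()), and I haven't tested it against the PSR-E413:
import mido

# Placeholder port names; substitute the ALSA names reported by mido.get_input_names().
to_keyboard = mido.open_output('PSR-E413 MIDI 1')
to_app      = mido.open_output('VirMIDI 2-0')

def relay(tag, out_port):
    def handle(msg):
        # Print SysEx as hex so the protocol bytes are easy to read, then forward unchanged.
        print(tag, msg.hex() if msg.type == 'sysex' else msg)
        out_port.send(msg)
    return handle

from_app = mido.open_input('VirMIDI 2-0', callback=relay('app->kbd', to_keyboard))
from_kbd = mido.open_input('PSR-E413 MIDI 1', callback=relay('kbd->app', to_app))

input('Relaying and logging; press Enter to quit\n')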
I like using aseqdump for that.
http://www.linuxcommand.org/man_pages/aseqdump1.html
You could use virtual MIDI devices for this purpose. Load snd_seq_dummy so that it creates at least two ports:
$ sudo modprobe -r snd_seq_dummy
$ sudo modprobe snd_seq_dummy ports=1 duplex=1
Then you should have a device named Midi Through:
$ aconnect -i -o -l
client 0: 'System' [type=kernel]
0 'Timer '
1 'Announce '
client 14: 'Midi Through' [type=kernel]
0 'Midi Through Port-0:A'
1 'Midi Through Port-0:B'
client 131: 'VMPK Input' [type=user,pid=50369]
0 'in '
client 132: 'VMPK Output' [type=user,pid=50369]
0 'out '
I will take the port and device numbers from this example. You have to inspect them yourself according to your setup.
Now you plug your favourite MIDI device into the Midi Through ports:
$ aconnect 132:0 14:0
$ aconnect 14:0 131:0
At this point you have a connection where you can spy on both devices at the same time. You could use aseqdump to spy on the MIDI conversation. There are different possibilities; I suggest spying on the connection between the loopback devices and the real device. This still allows rawmidi connections to the loopback devices.
$ aseqdump -p 14:0,132:0 | tee dump.log
Now everything is set up for use. You just have to be careful about port names in your MIDI application. It should read MIDI data from Midi Through Port-0:B and write data to Midi Through Port-0:B.
An additional hint: you could use the graphical frontend patchage for connecting and inspecting the MIDI connections via drag and drop. If you do this, you will see that every Midi Through port appears twice, once as an input and once as an output. Both have to be connected in order to make this setup work.
If you want to use GMidiMonitor or some other application to spy on both streams intermixed (without direction information), use aconnect. Suppose 129:0 is the MIDI monitor port:
$ aconnect 14:0 129:0
$ aconnect 132:0 129:0
If you want to have exact direction information you could add another GMidiMonitor instance that you connect only to one of the ports. The missing messages come from the other port.
What about using gmidimonitor? See http://home.gna.org/gmidimonitor/