How long will a timer run in the background? - audio

In my application I am running a timer in the background that plays a custom sound every 8 seconds. It works fine, but after some time it stops. How can I play the sound continuously in the background?
Currently I am using the code below to play the sound in the background:
#import <AudioToolbox/AudioToolbox.h>

// filePath is an NSURL pointing at the sound file
SystemSoundID soundID;
AudioServicesCreateSystemSoundID((__bridge CFURLRef)filePath, &soundID);
AudioServicesPlaySystemSound(soundID);
Please let me know a good solution for playing the sound continuously in the background.

Short answer: your app is simply suspended.
Long answer: you are missing key parts of a background-savvy implementation.
You need to tell iOS that yours is an audio app, and request extra cycles when suspended.
Note that use of the UIBackgroundModes key is subject to App Store approval.
From the documentation:
Background modes for apps:
UIBackgroundModes value = audio
The app plays audible content to the user or records audio while in the background. (This content includes streaming audio or video content using AirPlay.)
If your app does not fall into any of the categories below, then your only option for extending backgrounding beyond the typical 5 seconds is to invoke -beginBackgroundTaskWithName:expirationHandler: (see the sketch after the list below). Even then, you will likely be suspended within 30 seconds or so.
audio
location
voip
newsstand-content
external-accessory
bluetooth-central & bluetooth-peripheral
fetch
remote-notification
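
For the audio background mode to actually keep the app running, there has to be an active, playing audio session; a one-shot system sound will not hold it open. Here is a minimal sketch, not a drop-in implementation, assuming the same filePath NSURL as above: declare the audio mode in Info.plist, then play through AVAudioPlayer with the playback category.

// Info.plist (subject to review, as noted above):
// <key>UIBackgroundModes</key>
// <array><string>audio</string></array>

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:&error];
[session setActive:YES error:&error];

// self.player must be a strong property, or the player is deallocated
// before it finishes playing.
self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:filePath error:&error];
[self.player play];

And if all you need is to outlive the 5-second grace period once, the task API mentioned above is used like this (the task name is made up):

__block UIBackgroundTaskIdentifier taskID;
taskID = [[UIApplication sharedApplication] beginBackgroundTaskWithName:@"PlaySound"
                                                      expirationHandler:^{
    [[UIApplication sharedApplication] endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];
// ... play the sound, then:
[[UIApplication sharedApplication] endBackgroundTask:taskID];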

Related

Godot 3.4.4: How to stop audio in HTML export?

There is a problem with audio playback in the exported instance of the game. The sound is played using the AudioStreamPlayer node. I export the game to HTML5 and run it either on a local server or on any service that executes HTML code. If during the game you minimize the browser, change the tab, or make the game window NOT active, then all game processes are suspended: no objects move and all timers are paused. BUT the music continues to play. I tried handling a focus Notification and watching cursor input to turn off the music, but this is not an option, because if you minimize the browser via Alt+Tab while on the tab with the game, the focus does not go away and input is still awaited (while, as I said earlier, all other processes related to movement and the timers are suspended).
How can I stop this audio stream in this case?
Thanks

One Flutter screen should keep working the entire time, regardless of whether I am on that screen or another

I am making an app with 3 main screens: a reminders list, a home screen, and a chatbot. The chatbot I will integrate using Dialogflow, and I've completed the to-do list. What the home screen does is display pulse and temperature taken from an Arduino over a Bluetooth connection; if either exceeds or falls below a certain threshold, a call should be initiated. I have currently implemented the call initiation, and the pulse and temperature values are hardcoded for now. But if I go to another screen, the home screen obviously doesn't do its job. How can I make the home screen behave like the main thread? Basically, even if I am on the chatbot screen, the home screen should be checking whether the temperature and pulse are abnormal and initiating calls if required.
I am not sure what concept this falls under, so any help is appreciated.
You can use the Isolate class to achieve this.
An isolate has its own memory and a single thread of execution that keeps running an event loop, even in the background, until you kill the isolate.
An example of how to use an isolate in Flutter is sketched below.
There is also a Flutter package available for isolates: flutter_isolate.
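
A minimal sketch of the idea; readPulse() and initiateCall() are hypothetical stand-ins for your Bluetooth read and your existing call logic. Note that many plugins cannot be used from a background isolate, so in practice you may keep the Bluetooth stream on the main isolate and move only pure Dart work out.

import 'dart:async';
import 'dart:isolate';

// Entry point for the monitoring isolate: checks vitals every few seconds
// and pings the main isolate whenever a threshold is crossed.
void vitalsMonitor(SendPort sendPort) {
  Timer.periodic(const Duration(seconds: 5), (_) {
    final pulse = readPulse(); // hypothetical sensor read
    if (pulse > 120 || pulse < 40) {
      sendPort.send('abnormal');
    }
  });
}

Future<void> startMonitoring() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(vitalsMonitor, receivePort.sendPort);
  receivePort.listen((message) {
    if (message == 'abnormal') {
      initiateCall(); // hypothetical: your existing call-initiation code
    }
  });
}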

J2ME Manager.createPlayer() freezes the application for a split second the first time it is used

When I call Manager.createPlayer() for the first time, it freezes my application for a split second. This is a problem because I'm writing a game and the pause is noticeable. What can I do to fix it?
As far as I know, the common logic for games is:
Display a loading screen
Here, all heavy operations are prepared/preloaded and cached, so the game can run smoothly later.
Methods usually called here are Manager.createPlayer() and Player.prefetch().
All images & sounds are prepared first, so they can be used quickly once the game has started.
Start the game (loop)
As the resources have been prepared/preloaded, you can now use (draw/play) them here.
Use the Player instances that have been created & prefetched (on the loading screen).
You can call the Player.start() method here to play the sound.
You can read about the Player states (especially the prefetched state) in the Player API documentation.
Notice that you can reuse the same Player instance and call its start() method multiple times to play the same sound; there is no need to call createPlayer() again.
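
A minimal sketch of that preload pattern (the resource name and content type are assumptions; adjust them to your asset):

import java.io.InputStream;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;

Player sfx;

// Called once from the loading screen, so the cost of createPlayer()
// and prefetch() is paid before the game loop starts.
void loadSound() throws Exception {
    InputStream in = getClass().getResourceAsStream("/shoot.wav");
    sfx = Manager.createPlayer(in, "audio/x-wav");
    sfx.prefetch(); // PREFETCHED state: buffers filled, start() is now cheap
}

// Called from the game loop whenever the sound is needed.
void playShot() throws Exception {
    sfx.stop();
    sfx.setMediaTime(0); // rewind so repeated calls restart the clip
    sfx.start();         // no createPlayer() at play time, no freeze
}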

How to play background sound (music) in GW-Basic

I am developing a game in GW-Basic. I want to add music to it, but the problem is that I am unable to play it in the background: when I add sound, first the sound plays and only then does the game execution start (or vice versa), while I want both things to happen at the same time. Any idea how to do it?
This is actually possible. Use the PLAY "MB" statement to enable "music background" mode. In this mode, all PLAY notes and SOUND statements are queued and executed in the background while your program keeps running. You can queue up to 32 notes. Look up PLAY in this manual.
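A minimal sketch of the idea (the tune and the loop body are placeholders):

10 PLAY "MB"                       ' music background: notes are queued
20 PLAY "T120 O2 CDEFGAB O3 C"     ' returns immediately; the tune plays on
30 FOR I = 1 TO 1000               ' ...while the game loop keeps running
40   REM game logic goes here
50 NEXT I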

Realtime MIDI input and synchronisation with audio

I have built a standalone app version of a project that until now was just a VST/audiounit. I am providing audio support via rtaudio.
I would like to add MIDI support using rtmidi but it's not clear to me how to synchronise the audio and MIDI parts.
In VST/audiounit land, I am used to MIDI events that have a timestamp indicating their offset in samples from the start of the audio block.
rtmidi provides a delta time in seconds since the previous event, but I am not sure how I should grab those events and how I can work out their time in relation to the current sample in the audio thread.
How do plugin hosts do this?
I can understand how events can be sample accurate on playback, but it's not clear how they could be sample accurate when using realtime input.
rtaudio gives me a callback function. I will run at a low block size (32 samples). I guess I will pass a pointer to an rtmidi instance as the userdata part of the callback and then call midiin->getMessage( &message ); inside the audio callback, but I am not sure if this is thread-sensible.
Many thanks for any tips you can give me.
In your case, you don't need to worry about it. Your program should send the MIDI events to the plugin with a timestamp of zero as soon as they arrive. I think you have perhaps misunderstood the idea behind what it means to be "sample accurate".
As @Brad noted in his comment to your question, MIDI is indeed very slow. But that's only part of the problem... when you are working in a block-based environment, incoming MIDI events cannot be processed by the plugin until the start of a block. When computers were slower and block sizes of 512 (or god forbid, >1024) were common, this introduced a non-trivial amount of latency which resulted in the arrangement not sounding as "tight". Therefore sequencers came up with a clever way to get around this problem. Since the MIDI events are already known ahead of time, these events can be sent to the instrument one block early with an offset in sample frames. The plugin then receives these events at the start of the block, and knows not to start actually processing them until N samples have passed. This is what "sample accurate" means in sequencers.
However, if you are dealing with live input from a keyboard or some sort of other MIDI device, there is no way to "schedule" these events. In fact, by the time you receive them, the clock is already ticking! Therefore these events should just be sent to the plugin at the start of the very next block with an offset of 0. Sequencers such as Ableton Live, which allow a plugin to simultaneously receive both pre-sequenced and live events, simply send any live events with an offset of 0 frames.
Since you are using a very small block size, the worst-case scenario is a latency of .7ms, which isn't too bad at all. In the case of rtmidi, the timestamp does not represent an offset which you need to schedule around, but rather the time which the event was captured. But since you only intend to receive live events (you aren't writing a sequencer, are you?), you can simply pass any incoming MIDI to the plugin right away.
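
A minimal sketch of that polling approach with RtAudio/RtMidi, assuming the stream and MIDI port are opened elsewhere; the synth calls are hypothetical placeholders:

#include <vector>
#include "RtAudio.h"
#include "RtMidi.h"

// Audio callback at a small block size: drain pending MIDI at the top of
// the block and hand every event to the instrument with a frame offset of 0.
int audioCallback(void *outputBuffer, void * /*inputBuffer*/,
                  unsigned int nFrames, double /*streamTime*/,
                  RtAudioStreamStatus /*status*/, void *userData)
{
    RtMidiIn *midiIn = static_cast<RtMidiIn *>(userData);

    std::vector<unsigned char> message;
    for (;;) {
        midiIn->getMessage(&message); // non-blocking; leaves it empty when drained
        if (message.empty())
            break;
        // Hypothetical: forward to the instrument, offset 0 as described above.
        // synth->handleMidi(message, /*frameOffset=*/0);
    }

    // Hypothetical: render this block.
    // synth->render(static_cast<float *>(outputBuffer), nFrames);
    return 0;
}

RtMidiIn queues incoming messages internally when no MIDI callback is installed, so polling getMessage() from the audio thread in this way does not block.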
