How does the Web Audio API affect game development? - audio

I'm trying to understand what the introduction of the Web Audio API has meant for the development of web-based games.
Flash games can of course do some quite advanced audio processing, and for simpler games the audio element was perhaps enough. But how has the Web Audio API changed the game dev scene, in terms of what can be done, supported platforms and so on?

Supported platforms are Chrome, Safari (with some prefixing caveats) and Firefox across all supported hardware/OS platforms; IE support is in development, though the long tail of older versions will take a while to fall out of use.
Web Audio enables very complex processing, but also very precise timing and multiple sounds; sound management is far, far easier than previously possible in HTML5. In short, Web Audio dramatically improves the story for game audio development on the Web - which, of course, was one of its goals.
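As a quick illustration of the timing and polyphony point, here is a minimal sketch (the file name is illustrative): one decoded buffer can be scheduled repeatedly, with overlap, on the audio clock rather than with setTimeout.

```js
const ctx = new AudioContext();

async function loadSound(url) {
  // Decode once, reuse the buffer for every playback.
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  return ctx.decodeAudioData(data);
}

function playAt(buffer, when) {
  // Each playback gets its own source node, so sounds can overlap freely.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(when); // scheduled on the audio clock, sample-accurate
}

loadSound("laser.wav").then((buffer) => {
  // Fire four shots 250 ms apart.
  for (let i = 0; i < 4; i++) {
    playAt(buffer, ctx.currentTime + i * 0.25);
  }
});
```

Doing the same with audio elements would mean juggling multiple element instances and living with their imprecise, event-loop-bound timing.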

Related

What benefits does howler.js bring for a basic audio player compared to the `audio` element?

Preconditions:
Developing an audio player for a web application.
All target browsers fully support the audio tag.
No need for sprites, multiple simultaneous sounds, etc.; just one audio track plays at a time.
The audio file has to be streamed from the server, not downloaded all at once, which rules out the Web Audio API.
Why would I want to use howler.js or a similar library instead of relying on the built-in audio tag in this scenario?
The only howler.js feature that is intriguing is “Handles edge cases and bugs across environments”.
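For reference, the baseline I would otherwise ship is only a few lines (the stream URL is hypothetical):

```js
// A single streamed track through the built-in audio element.
const player = new Audio("/stream/track.mp3");
player.preload = "none"; // don't fetch until the user asks for playback
player.addEventListener("error", () => {
  // Cross-browser failures like this are the kind of edge case a library
  // such as howler.js promises to smooth over.
  console.error("Playback failed", player.error);
});
player.play(); // note: may require a prior user gesture in some browsers
```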

What's the best protocol for live audio (radio) streaming for mobile and web?

I am trying to build a website and mobile app (iOS, Android) for the internet radio station.
Website users broadcast their music or radio, and mobile users just listen to the stations and chat with other listeners.
I spent a week researching and made a prototype with the Wowza engine (using HLS and RTMP) and a SHOUTcast server on Amazon EC2.
HLS has a delay of about 5 seconds, but RTMP and SHOUTcast have a 2-second delay.
With this result I think I should choose RTMP or SHOUTcast.
But I am not sure RTMP and SHOUTcast are the best protocols. :(
What protocol should I choose?
Do I need to provide various protocols to cover all platforms?
This is a very broad question. Let's start with the distribution protocol.
Streaming Protocol
HLS has the advantage of allowing users to get the stream in the bitrate that is best for their connection. Clients can scale up/down seamlessly without stopping playback. This is particularly important for video, but for audio even mobile clients are capable of playing 128kbit streams in most areas. If you intend to have a variety of bitrates available and want to change quality mid-stream, then HLS is a good protocol for you.
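For illustration, that adaptive switching is driven by a master playlist that lists the same stream at several bitrates; the client picks a variant and can move between them mid-stream. (Paths and bitrates here are made up.)

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
high/playlist.m3u8
```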
The downside of HLS is compatibility. iOS supports it, but that's about it. Android has HLS support but it is still buggy. (Maybe in another year or two once all the Android 3.0 folks are gone, this won't be as much of an issue.) JWPlayer has some hacks to make HLS work in Flash for desktop users.
I wouldn't bother with RTMP unless you're only concerned with Flash users.
Pure progressive streaming with HTTP is the route I almost always choose to go. Everything can play it. (Even my Palm Pilot's default media player from 12 years ago.) It's simple to implement and well understood.
SHOUTcast is effectively HTTP, but a poorly implemented version that has compatibility issues, particularly on mobile devices. It has a non-standard status line in its response which breaks a lot of clients. Icecast is a good alternative, and is what I would recommend for production use today. As another option, I have created my own streaming service called AudioPump which is HTTP as well, and has been specifically built to fix compatibility with oddball mobile clients, such as native Android players on old hardware. It isn't generally available yet, but you can contact me at brad#audiopump.co if you want to try it.
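To see the compatibility problem concretely, compare the first line each server sends back (hostnames hypothetical):

```
$ curl -sD - -o /dev/null http://icecast.example.com/stream
HTTP/1.0 200 OK
Content-Type: audio/mpeg

$ curl -sD - -o /dev/null http://shoutcast.example.com:8000/
ICY 200 OK
content-type: audio/mpeg
```

A strict HTTP client that expects a status line beginning with `HTTP/` will reject the `ICY` response outright, which is exactly the breakage described above.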
Latency
You mentioned a latency of 2 seconds being desirable. If you're getting 2-second latency with SHOUTcast, something is wrong. You don't want latency that low, particularly if you're streaming to mobile clients. I usually start with a 20-second buffer at a minimum, which is flushed to the client as fast as it can receive it. This enables immediate starting of the stream playback (as it fills up the client-side buffer so it can begin decoding) while providing some protection against buffer underruns due to network conditions. It's not uncommon for mobile users to walk around the corner of a building and lose their nice signal quality. You want your stream to survive that as best as possible, so if you have already sent the data to cover the drop out, the user doesn't have to know or care that their connection became mediocre for a short period of time.
If you do require low latency, you're looking at the wrong technology entirely. For low latency, check out WebRTC.
You certainly can tweak your traditional internet radio setup to reduce latency, but rarely is that a good idea.
Codec
Codec choice is what will dictate your compatibility more than anything else. MP3 is easily the most compatible, and AAC isn't far behind. If you go with AAC, you get better quality audio for a given bitrate. Most folks use this to reduce their bandwidth bill.
There are licensing fees with MP3, and there may be with AAC depending on what you're using for a codec. Check with a lawyer. I am not one, and the licensing is extremely complicated.
Other codecs include Vorbis and Opus. If you can use Opus, do so: the licensing is wide open and you get good quality for the bandwidth. Client compatibility, though, is the killer for Opus. (Maybe in a few years it will be better.) Vorbis is a mediocre codec, but it is free and clear.
On the extreme end, I have some stations doing their streaming in FLAC. This is lossless audio quality, but you're paying for 8x the bandwidth you would with a medium-quality MP3 station. FLAC-over-HTTP streaming compatibility is not good at the moment, but it works alright in VLC.
It is very common to support multiple codecs for your streams. Depending on your budget, if you can't do that, you're best off with MP3.
Finally on encoding, don't go from a lossy codec to another lossy codec if you can help it. Try to get the output stream as close to the input as possible. If you re-encode audio, you lose quality every time.
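A concrete illustration of that advice (filenames and bitrates are arbitrary), assuming a lossless master is available: encode every public stream directly from it, so each output is only one lossy generation away from the source.

```
# One lossy generation per output, both encoded from the lossless master:
ffmpeg -i master.flac -c:a libmp3lame -b:a 128k stream.mp3
ffmpeg -i master.flac -c:a aac -b:a 96k stream.aac
```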
Recording from Browser
You mentioned users streaming from a browser. I built something like this a couple years ago with the Web Audio API where the audio is captured and then encoded and sent off to Icecast/SHOUTcast servers. Check it out here: http://demo.audiopump.co:3000/ A brief explanation of how it works is here: https://stackoverflow.com/a/20850467/362536
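If you want to sketch the capture side yourself, something along these lines works in current browsers. This is a rough illustration, not the actual AudioPump implementation, and the WebSocket endpoint is hypothetical.

```js
// Hypothetical ingest endpoint; the server would encode the PCM it receives
// and feed it to Icecast/SHOUTcast.
const socket = new WebSocket("wss://ingest.example.com/source");

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const ctx = new AudioContext();
  const mic = ctx.createMediaStreamSource(stream);
  // ScriptProcessorNode is deprecated in favor of AudioWorklet, but it is
  // the simplest way to sketch raw-PCM access.
  const tap = ctx.createScriptProcessor(4096, 1, 1);
  tap.onaudioprocess = (e) => {
    const pcm = e.inputBuffer.getChannelData(0); // Float32 samples
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(pcm.buffer.slice(0)); // copy, since the buffer is reused
    }
  };
  mic.connect(tap);
  tap.connect(ctx.destination); // keeps the node pulling audio
});
```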
Anyway, I hope this helps you get started.
Streaming straight audio/mpeg (mp3 packets) has worked everywhere I've tried.
If you are developing an app, go with AAC; if you are simply playing via a web browser, you need an HTML5 implementation, which means MP3. Custom protocols like RTMP or SHOUTcast require additional UI to be built. There are some third-party players available in the open source world; you can either use them or stick to HTML5 MP3/OGG, as most people nowadays are using Chrome or other HTML5-compliant browsers.

In which language is www.audiotool.com programmed?

I'm learning to code web stuff: Ruby, JavaScript...
I would like to do something that makes noise like www.audiotool.com.
The app is basically a DAW, a digital audio workstation; it is fast and sounds good... you can even use samples and save projects in the cloud.
But my main question is: which languages or tools make this kind of app possible?
Is it creating the sound in the browser, or on a server and sending it back?
Any guesses?
Audiotool.com uses Flash to synthesize audio. Their FAQ says that you should update your Flash player if you're having trouble, so that seems like a pretty strong indication that they use Flash.
However, if you want to make music apps, I would advise against using Flash. Newer devices and operating systems are dropping support for Flash (iPhones/iPads already don't support it, I believe).
If you want a future-proofed music-making solution, you can do that all client-side in javascript with the web-audio api.
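To give a feel for it, here is a minimal sketch of client-side synthesis: a quiet two-second sawtooth tone, nothing audiotool-specific.

```js
// Make noise entirely in the browser: a sawtooth oscillator through a gain.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();

osc.type = "sawtooth";
osc.frequency.value = 220;     // A3
gain.gain.value = 0.2;         // keep the volume tame

osc.connect(gain);
gain.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 2); // stop after two seconds
```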
I have authored and actively maintain a JavaScript library that aims to simplify the process of building complex apps with the Web Audio API. If you're just getting started with making music on the web, you might want to check it out. The Web Audio API is not terribly beginner-friendly, in my opinion. https://github.com/rserota/wad

Will TideKit be able to stream live video & audio from Android & iOS cameras & mics to a server?

I need to know if TideKit will be able to stream live video and audio from device cameras and microphones. The Android and iOS APIs allow for this. I think Flex can do it. I asked about this on the Twitter page, but I'm looking for a more definitive answer. The one I got was "TideKit is a development, not a streaming platform but you could develop an app for that! That's where TideKit comes into play", which doesn't fully answer the question.
The goal is to stream video from Android & iOS cameras and audio from the device microphones to a media streaming server such as Flash Media Server or a Wowza streaming server, using either RTMP or HTTP streaming from the app to the server. Alternatively, it would work if the stream were sent live in any other way to a server socket and then encoded for redistribution via a streaming server.
The key here, though, is "live", rather than having to wait for a video or audio file to be complete before sending it off to the server. I know it's possible with the APIs, and I really hope TideKit will be able to do this, because no other platform similar to TideKit (and there are MANY) can do this besides Flex. I've pored through countless SDK documents. If TideKit can do this, it will attract a lot more customers.
Eagerly awaiting a response,
Thanks
#xendi Thank you for your question. TideKit is an app development platform. You can use it for any type of app development for mobile, desktop and web. We've purposefully kept the core of TideKit small. This is to ensure its core is extremely stable and that most functionality can come through modules.
Out of the box, TideKit has core AV functionality on all platforms. Extension of this functionality is through TideKit modules that have operating system implementations or from pure JavaScript modules. There are almost 100,000 modules of pure JavaScript functionality now available to you through existing repositories including NPM, Bower and Component that can simply be consumed in CommonJS.
When a TideKit or JavaScript module is installed, it exposes its APIs alongside those already available. Either way, those APIs become available to you in JavaScript.
You already have access to the camera with TideKit. The rest is handling the streaming protocol, i.e. RTSP, RTMP, HTTP, etc. So there are a few ways to accomplish what you want with TideKit:
Using a TideKit module that supports the streaming protocols by interacting with its APIs in JavaScript.
Using a pure JavaScript solution from a repository together with TideKit that supports the protocols.
Writing your own TideKit module that ties together with APIs of the operating systems.
Writing the solution in pure javascript using TideKit's camera and network APIs.
TideKit is new and has not yet formally launched. We are currently in a reservations mode. We will be delivering it first to those with reservations and it will be gradually rolled out. Demos are currently being prepared to demonstrate the speed and low barrier to development. When TideKit formally launches, I would check for the availability of modules at that point (for both TideKit and JavaScript implementations). Note that not all possible functionality in TideKit modules will be available with the launch. New modules will be releasing over time.
As an aside, TideKit also supports WebRTC in HTML5 so this could work together with TideKit's other capabilities for interesting possibilities.

J2ME app Vs browser on a handset

Recently I started developing a J2ME app prototype. I noticed how difficult it is to develop a good looking UI. Consider developing an app in J2ME for booking flights interacting with webservice.
A website to book flights would be easy to develop with a nice UI and could be accessed by the browser on a handset. I understand not all handsets have a browser, but all the new and upcoming ones do, and have big screens as well.
Is it a good idea to develop such an application in J2ME, which needs to talk to a web service to work? Or is J2ME only suitable for standalone apps?
Advantages of J2ME:
Can access phone resources, like file system, phone book and GPS. The last is very important in map applications.
You can build richer user interfaces. It may be difficult, as you say, but there are many GUI libraries that can assist you. In contrast, the UI of a mobile browser (where you can't rely on CSS and JavaScript working) would be very poor.
Greater flexibility in the communication logic. You can encrypt/decrypt data, compress it, and use SOAP web services. With the browser, your best bet would be to develop REST services.
Disadvantages of J2ME:
Midlets need to be signed. This has some cost, and there are situations where even a signed app won't run properly on specific phones.
Developing a midlet to run on all types of phones is a nightmare. In contrast, a well-designed mobile web application will display properly on all recent phones.
You need to have a channel for distributing your application. People would need to download it and get charged for the required bandwidth. You would need to care for angry customers having problems with the application. Things are easier with a web site.
J2ME apps are inevitably compared with native applications (iPhone, Windows Mobile, Symbian). Compared to these, they are very poor and many would find that paying for them or even using them isn't justified.
My conclusion: Nowadays real smartphones are becoming more popular and are winning an ever-growing market share. Under these circumstances, the advantages of J2ME can't really overcome its restrictions. The only exception I can think of is if you have to develop a GPS application. For all other cases, a mobile website is a better idea.
There are a lot of misunderstandings and plain wrong statements in the previous answers.
I advise you to just do your research yourself. Nowadays you CAN develop really good-looking apps with J2ME without writing your own GUI framework. Take a look at LWUIT, really. For example, they have a virtual keyboard as one of their touch-screen features, which you then get even on devices like the N97 that don't have a virtual keyboard of their own. BTW, LWUIT includes BlackBerry and Android ports, if anyone cares.
Also, apps are now becoming center stage on many platforms, not just the iPhone. Look at the recent developments in this area: OVI, RIM, Samsung, SE, Orange World all start with app shops.
"Getting people to use a website on their mobile phone is easier than getting them to download an application" is just a claim without proof; you cannot say it like that. It depends on a lot of other factors. Why should users type your mobile URL into the rather small screen again?
Anyway, this answer is probably too late, so I'm not gonna write much more. The mobile industry is changing fast right now, but there is not yet an alternative to J2ME for cross-platform development. Maybe in the future, with better browsers and widget technologies.
Just a short note: applications like Google Maps or Gmail mobile probably don't use web services to talk to their server side. A web service has a lot of overhead, especially considering that mobile users are usually billed by the amount of data they transmit. The best way to handle communication between your client app and its server side is to use binary data over a socket connection.
I personally think it's really hard to make a consistent and reliable J2ME application that will run across a large set of mobile phones. Based on my experience, I would only develop a J2ME application (instead of a web application) if it were a strict requirement - for example, to be able to view your bookings without being connected to the network. There are other costs associated with J2ME applications: the application must be downloaded; the user will be asked whether the application is allowed to connect to the network whenever it attempts to (there are exceptions, but I believe the application has to be signed by a third-party company - more $$$ involved); you will have to maintain different versions of the application running on a variety of mobile phones (more complexity); and so on...
Think about it this way - if you were developing a similar thing for a computer, would you build a desktop application or a web application? With the cellphones of today (many of which can access full-html sites with javascript - which means ajax), the proposition of the question is valid.
I think a good rule of thumb should be: if what you're trying to achieve can be done with a mobile website, go for the website.
IMHO, apps should only be used if they take advantage of the mobile hardware - location, sound, video, 3D, pictures, etc...
Even if the dev costs for the app were insignificant (they usually aren't), you'd have to offer some really amazing capabilities to make the users go through the trouble of downloading it.
(All of this is essentially true for J2ME/BREW. The iPhone is a little different as apps take the center stage)
One thing worth highlighting: the only standard way of deploying a MIDlet is via OTA download, so you can expect any J2ME-capable phone to have a web browser.
Mobile web browsers like WebKit and Opera are getting better faster than J2ME is (at least until MIDP 3.0 starts shipping, if ever).
No matter which platform you choose, you will need to test your service on many devices. I don't think switching from J2ME to a webapp makes a huge difference in that regard, because phone manufacturers keep changing the binaries that go into the phones' firmwares.
Getting people to use a website on their mobile phone is easier than getting them to download an application. Unless that application is already installed when they buy the phone, that is.
You might want to look at LWUIT for better and easier J2ME GUI.
One thing that J2ME will accomplish for a flight-booking service is saving battery life by not requiring constant network data transfer, thanks to its local storage mechanisms.
There are many great J2ME apps that (need to) talk to web services; just think of the Google apps, like Gmail mobile and Maps for mobile. They are faster and easier to use than the same services via the phone's browser. So if you can design a good app, it's definitely worth it.
EDIT: Also, a J2ME app makes possible features that a web application can't provide: integration with phone features (address book, calendar), "call this number", the location API, etc.
I think for business apps, or more text/data oriented things, a mobile web/wap site might be easier to maintain, since you won't have to deal with pushing client updates out to handsets.
For UI-intensive apps (maps, games, etc.), client apps are probably the way to go, so you can handle the more of the processing and rendering on the client side.
Both options are difficult though, since there are so many compatibility issues with phones. You might be best served by narrowing down what types of phones you want to support for your app. If you think most of your customers will be iPhone or Android phones, you can target those platforms (with either client apps or web apps) and avoid old-school j2me completely.
I hate WebApps on phones. They are slow and they don't work in a semi-connected environment.
J2ME apps can do local backups, Bluetooth backups, Bluetooth data sharing between two phones, and a more responsive UI. However, that requires money, skill, time, etc.
My main gripes with MIDP, though, are pushing software updates and real-time WAV mixing. Technically both are possible within the scope of MIDP, but the goons at the wheel are not very creative.
