State of libspotify in 2017 - spotify

I know that libspotify will no longer get any support and is being replaced by the Web API.
I don't have a problem with this, because libspotify's usability is just bad and there are a lot of unfixed bugs as well as lots of missing features.
However, the Web API offers no way to get a stream URL, and there is no small library for stream decoding.
We have been promised a small replacement library "soon" for almost four years now, and parts of libspotify are already shut down. On top of that, for some time there has been a note that libspotify will be shut down entirely in 2017.
I don't care about anything that is provided by the Web API, which is clearly superior to libspotify in almost every regard. All I care about is a simple player API where I can input a Spotify track URI and get raw PCM frames out (which can be output to speakers, streamed to non-supported devices, etc.).
You already have this type of library inside the Android SDK, which provides everything we need. What is the reason for not providing a standard C library for non-Android platforms (Win32, Linux, macOS)?
Or provide a way to get an RTMP or HTTP stream URL, the way it's done in the Spotify web player, which would be even better.
I really like Spotify and hope to be able to continue developing apps that use it.

Related

In which language is www.audiotool.com programmed?

I'm learning to code web stuff: Ruby, JavaScript...
I would like to do something that makes noise, like www.audiotool.com.
The app is basically a DAW (digital audio workstation); it's fast and sounds good... you can even use samples and save projects in the cloud.
But my main question is: which languages or tools can make an app like this?
I don't know which languages make this kind of app possible.
Is it creating the sound in the browser, or on a server and sending it back?
Any guesses?
Audiotool.com uses Flash to synthesize audio. Their FAQ says that you should update your Flash player if you're having trouble, which is a pretty strong indication that they use Flash.
However, if you want to make music apps, I would advise against using Flash. Newer devices and operating systems are dropping support for Flash (iPhones/iPads already don't support it, I believe).
If you want a future-proofed music-making solution, you can do it all client-side in JavaScript with the Web Audio API.
I have authored, and actively maintain, a JavaScript library that aims to simplify the process of building complex apps with the Web Audio API. If you're just getting started with making music on the web, you might want to check it out; the Web Audio API is not terribly beginner-friendly, in my opinion. https://github.com/rserota/wad
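For anyone curious what the raw Web Audio API looks like before reaching for a library, here is a minimal sketch in plain browser JavaScript that plays a short sine tone (the note and duration values are arbitrary):

```javascript
// Minimal Web Audio API sketch: play a sine tone.
// Browser-only; AudioContext is not available outside the browser.

// Convert a MIDI note number to a frequency in Hz (A4 = note 69 = 440 Hz).
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

function playTone(note, durationSec) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = 'sine';
  osc.frequency.value = midiToFreq(note);
  gain.gain.value = 0.2; // keep the volume gentle
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationSec);
}

// Browsers require a user gesture before audio can start:
if (typeof document !== 'undefined') {
  document.addEventListener('click', () => playTone(69, 1), { once: true });
}
```

Wiring oscillator → gain → destination is the same node-graph pattern every larger Web Audio app builds on.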

Will TideKit be able to stream live video & audio from Android & iOS cameras & mics to a server?

I need to know if TideKit will be able to stream live video and audio from device cameras and microphones. The Android and iOS APIs allow for this. I think Flex can do it. I asked about this on the Twitter page, but I'm looking for a more definitive answer. The one I got was "TideKit is a development, not a streaming platform but you could develop an app for that! That’s where TideKit comes into play", which doesn't fully answer the question.
The goal is to stream video from Android and iOS cameras, and audio from the device microphones, to a media streaming server such as Flash Media Server or a Wowza streaming server, using either RTMP or HTTP streaming from the app to the server. Alternatively, it would work if the stream were sent live in any other way to a server socket and then encoded for redistribution via a streaming server.
The key here is "live", rather than having to wait for a video or audio file to be complete before sending it off to the server. I know it's possible with the native APIs, and I really hope TideKit will be able to do this, because no other platform similar to TideKit (and there are MANY) can do this besides Flex. I've pored through countless SDK documents. If TideKit can do this, it will attract a lot more customers.
Eagerly awaiting a response,
Thanks
#xendi Thank you for your question. TideKit is an app development platform. You can use it for any type of app development for mobile, desktop and web. We've purposefully kept the core of TideKit small. This is to ensure its core is extremely stable and that most functionality can come through modules.
Out of the box, TideKit has core AV functionality on all platforms. Extension of this functionality is through TideKit modules that have operating system implementations or from pure JavaScript modules. There are almost 100,000 modules of pure JavaScript functionality now available to you through existing repositories including NPM, Bower and Component that can simply be consumed in CommonJS.
When a TideKit or JavaScript module is installed it offers its APIs. This extends the APIs with those already available. Either way those APIs become available to you in JavaScript.
You already have access to the camera with TideKit. The rest is handling the streaming protocol, i.e. RTSP, RTMP, HTTP, etc. So there are a few ways to accomplish what you want with TideKit:
Using a TideKit module that supports the streaming protocols by interacting with its APIs in JavaScript.
Using a pure JavaScript solution from a repository together with TideKit that supports the protocols.
Writing your own TideKit module that ties together with APIs of the operating systems.
Writing the solution in pure JavaScript using TideKit's camera and network APIs.
TideKit is new and has not yet formally launched. We are currently in a reservations mode. We will be delivering it first to those with reservations and it will be gradually rolled out. Demos are currently being prepared to demonstrate the speed and low barrier to development. When TideKit formally launches, I would check for the availability of modules at that point (for both TideKit and JavaScript implementations). Note that not all possible functionality in TideKit modules will be available with the launch. New modules will be releasing over time.
As an aside, TideKit also supports WebRTC in HTML5 so this could work together with TideKit's other capabilities for interesting possibilities.
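Since TideKit's own module APIs aren't public yet, only the standard HTML5/WebRTC capture side can be sketched here. This is plain browser JavaScript using getUserMedia and RTCPeerConnection; the signaling exchange (sending the offer to the remote end) is omitted because it depends entirely on your server:

```javascript
// Sketch of the HTML5/WebRTC capture side (browser JavaScript).
// TideKit-specific APIs are not shown; this is the standard capture
// layer that a WebRTC-based approach would build on.

// Build a getUserMedia constraints object from simple flags.
function buildConstraints(wantVideo, wantAudio) {
  return { video: !!wantVideo, audio: !!wantAudio };
}

async function startCapture() {
  // Ask the user for camera and microphone access.
  const stream = await navigator.mediaDevices.getUserMedia(
    buildConstraints(true, true)
  );
  // Feed the live tracks into a peer connection; in a real app the
  // offer/answer exchange would go through your signaling server.
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach(track => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return pc;
}

if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  startCapture().catch(console.error);
}
```

Note this gives you WebRTC's own transport rather than RTMP; pushing to an RTMP server would still need a server-side gateway or a native module.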

Web App with Microphone Input

I'm working on a C++ application which takes microphone input, processes it, and plays back some audio. The processing will incorporate a database located on a server. For ease of creating UI and for maximum portability, I'm thinking it would be nice to have the front end be done in HTML. Essentially, I want to record audio in a browser, send that audio to the server for processing, and then receive audio from the server which will then be played back inside the browser.
Obviously, it would be nice if HTML5 supported microphone input, but it does not. So, I will need to create a plugin of some kind in order to make this happen. NPAPI scares me because of the security issues involved, so I was looking into PPAPI and Native Client. Native Client does not yet support microphone input, and I believe that the PPAPI audio input API would be limited to a dev build of Chrome. FireBreath doesn't look like it supports any microphone function either. So, I believe my options are:
Write my own NPAPI plugin to record the audio
Use Flash to get microphone input
Bail on browsers altogether and just make a native application
The target audience for this is young children and people who aren't computer-adept. I'd like to make it as portable and simple to use as possible. Any suggestions?
If you can do it all in Flash and have the relevant knowledge, that would probably be the best solution:
You can avoid writing platform-specific code, delivery/updating is easy and Flash has broad coverage so users don't need to install any custom plugins.
FireBreath doesn't look like it supports any microphone function either.
You can write your own (platform-dependent) code for audio recording with FireBreath, just like you could in a plain NPAPI plugin. FireBreath just makes it easier for you to write the plugin; the result is still an NPAPI (and ActiveX) plugin with access to native APIs etc.
You can use the audio and video capture features in HTML5; see this link for more information.
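As a rough sketch of that HTML5 capture route, assuming a modern browser with getUserMedia and MediaRecorder support (the `/upload` endpoint is a placeholder for your own backend):

```javascript
// Sketch: record microphone audio in the browser and post it to a server.
// The '/upload' endpoint is hypothetical; adapt it to your backend.

// Pick the first supported container format, falling back to webm.
function pickMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return 'audio/webm';
}

async function recordAndUpload(durationMs) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mimeType = pickMimeType(
    ['audio/webm;codecs=opus', 'audio/ogg'],
    t => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = async () => {
    const blob = new Blob(chunks, { type: mimeType });
    await fetch('/upload', { method: 'POST', body: blob });
  };
  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
}

if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  recordAndUpload(5000).catch(console.error);
}
```

This sidesteps plugins entirely, which fits the stated goal of something simple for non-technical users, at the cost of requiring a recent browser.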

Record audio from various internal devices in Android (via undocumented API)

I was wondering whether it is possible to capture audio data from other sources like the system out, FM radio, bluetooth headset, etc. I'm particularly interested in capturing audio from the FM radio and already investigated all possibilities including trying to sniff the raw bluetooth communication between the phone and the radio device with no luck. It's too bad Android only allows recording audio from the MIC.
I've looked at the Android source code and couldn't find a backdoor that would allow me to do that without rooting the device. Do you, at least, have any idea how to use other devices (maybe by somehow accessing /dev/audio), say via the NDK or, even better, Java (maybe reflection?), to trick the system into capturing the audio stream from, say, the FM radio? (In my case I'm trying to develop the app for the HTC Desire.)
PS. And for those of you who are against using undocumented APIs, please don't post here - I'm writing an app that will be for my personal use or even if I ever publish it I will warn the user of possible incompatibilities.
I've spent quite some time deciphering the audio stack, and I think you may try to hijack libaudio. You'll have trouble speaking directly to the hardware (/dev/*) because many devices use proprietary audio drivers. There's no rule in this regard.
However, the audio hardware abstraction layer (HAL) provided by /system/lib/libaudio.so should expose the API described at http://source.android.com/porting/audio.html
The Android system, and especially audioflinger, uses this libaudio HAL to find available devices, deal with routing, and of course to read/write PCM data.
So, you could hijack the interaction between audioflinger and libaudio by renaming the latter and providing your own libaudio which decorates the real one. Doing so, you should be able to log what happens, and very possibly intercept FM radio output, provided that it is not handled directly by the hardware.
Of course, all this requires rooting. Please comment if you manage to do this, that interests me.

Streaming audio to a browser

I have a large amount of audio stored on my web server in a very custom format that can't be replayed by anything other than my own application. That application is a Win32 app that can connect to my web server and stream and replay that audio.
I'd really like to be able to do the streaming and replaying from within a browser, but don't know where to start. Ideally I'd like the technology to be cross-platform (unlike my current Win32 app) and cross-browser (IE 6 and above and Firefox).
My current thoughts are to look at things like:
Flash, but doesn't that only replay mp3 audio?
Java, are VMs freely available still?
Converting the audio to a WAV file on the web server and then using someone else's plugin to replay that file. I'd rather keep the conversion off the web server for performance reasons, but it's still an option.
Writing my own custom plugin to do the complete stream and replay operation.
Any guidance would be most useful.
Please note that the audio is not music, and simply converting it to another audio format is not trivial. The stored audio also changes frequently (every minute) and would need constant conversion.
Why are you using a proprietary music format? I'd probably not even bother downloading a program to listen to it.
I would suggest you convert it to MP3 and then use Flash.
Building your own plugin would probably be hard; there are so many different platforms you'd have to cater for, and something like Flash is written for them already.
Apart from converting server-side: implement a decoder for your format in ActionScript or Java. Then you can write a Flash movie or Java applet that plays it. Both languages/runtimes should be fast enough to decode in realtime unless your format is very complex. Flash would be the more accessible of the two, since nearly everyone has the plugin installed. (It's possible that playing a raw sound buffer isn't supported by Flash versions older than 10; I'm no expert on that.) The Java plugin is definitely free, but you'd require the users to install it.
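If the custom decoder were written in JavaScript instead of ActionScript or Java, the decoded raw PCM could be played directly in the browser with the Web Audio API, with no plugin at all. A minimal sketch, assuming the decoder yields 16-bit signed mono samples (the decoder itself is not shown):

```javascript
// Sketch: play decoded raw PCM samples with the Web Audio API.
// Assumes the custom decoder produces 16-bit signed mono PCM.

// Convert 16-bit signed integer samples to Float32 in [-1, 1].
function int16ToFloat32(int16Samples) {
  const out = new Float32Array(int16Samples.length);
  for (let i = 0; i < int16Samples.length; i++) {
    out[i] = int16Samples[i] / 32768;
  }
  return out;
}

// ctx is an AudioContext created elsewhere (after a user gesture).
function playPcm(ctx, int16Samples, sampleRate) {
  const floats = int16ToFloat32(int16Samples);
  const buffer = ctx.createBuffer(1, floats.length, sampleRate);
  buffer.copyToChannel(floats, 0);
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start();
}
```

The same split applies as in the Flash/Java approach: the hard part is porting the decoder, and playback of the resulting buffer is routine.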
I'd go with converting the audio to WAV (or MP3) on the server. Writing your own cross-platform browser component would be a lot of work, thanks to the different ways the major OSes handle their audio APIs.
Try taking a look at SHOUTcast.
Basically, it's a server app that will stream music to any client that connects to it through a browser (effectively your own radio station). I've never used it myself, but it should be straightforward.
Another idea is Winamp Remote. Again, you install the app on the server, but this time you can browse your music collection on their website and play individual songs.
