React Native Audio Visualization

So I am using the react-native-audio package to play preloaded audio files and capture the user's recorded audio. What I would like to do is convert the audio into some sort of data for visualization and analysis. There seem to be several options for the web, but not much in this direction specifically for React Native. How would I achieve this? Thank you.

I've just bumped into this post. I am building a React Native waveform visualiser; it's still a work in progress on the Android side, but it's working on the iOS side.
It's pretty much a port of WaveForm on iOS, using Igor Shubin's solution.
You are very welcome to check out the code at https://github.com/juananime/react-native-audiowaveform
To try it straight away:
npm install react-native-audiowaveform --save
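A minimal usage sketch follows; the WaveForm component and its source/waveFormStyle props are based on the repo's README, so treat the exact prop names as assumptions and check the repository for the current API:

    // Hypothetical usage sketch for react-native-audiowaveform (verify against the README).
    import React from 'react';
    import WaveForm from 'react-native-audiowaveform';

    export default function TrackWave() {
      return (
        <WaveForm
          source={{ uri: 'https://example.com/track.mp3' }} // placeholder URL
          waveFormStyle={{ waveColor: 'blue', scrubColor: 'red' }}
          play={true}
        />
      );
    }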
Cheers!

Related

Can't import design in Flutter app using API

I want to develop an invitation card maker app in Flutter. The issue is that I don't know what kind of data I will need to import from the backend to make my cards editable. I have designed cards in Photoshop, but I don't know how to make them editable in a mobile app. If anyone has a suggestion, please share it.
I'm glad to be able to help you.
In those cases you do the following:
Receive the image to edit.
Use an external library made for editing photos (https://pub.dev/packages/image_editor_pro).
Send the edited image to the backend and replace the old one.
Hi all, I am very happy that I finally found the solution.
We have to first import the background image from the API and load the text data onto it; then we can use Flutter packages to edit those texts.
But there is one thing you need to do before you import designs into the mobile app: the font and background sizes must be pixel-perfect, otherwise they will overlap.
My images have dimensions of 3000x5000, so I have used the aspect ratio for responsiveness on every device.
I used Figma to design the cards, so I can get the CSS for every line of text very quickly, and then I convert it to JSON using CSS-to-JS and JS-to-JSON converters.
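To make that data concrete, here is a hypothetical shape for the per-card layout JSON described above, written as a TypeScript interface purely for illustration; every field name is an assumption, not a fixed schema:

    // Hypothetical layout schema for an editable card (field names are assumptions).
    interface TextLayer {
      text: string;       // e.g. the invitee's name
      fontFamily: string; // taken from the Figma text style
      fontSize: number;   // px at the native 3000x5000 design size
      color: string;      // e.g. '#B8860B'
      x: number;          // left offset at the native design size
      y: number;          // top offset at the native design size
    }

    interface CardDesign {
      backgroundUrl: string; // background image served by the backend API
      width: number;         // native design width, e.g. 3000
      height: number;        // native design height, e.g. 5000
      layers: TextLayer[];   // editable text drawn over the background
    }

Scaling x, y, and fontSize by the on-screen width divided by the native width is what keeps the text aligned on every device.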

Implementing Audio chat with Socket.IO and NodeJS

I have created a chat application using Sails.js (Node.js) and Socket.IO.
I need to implement audio chat and file transfers along with it.
Could anyone point me to basic tutorials for integrating WebRTC with Socket.IO?
Thanks in advance.
If I were you, I would use a WebRTC library that provides both the client and the server side. Check out EasyRTC, SimpleWebRTC, PeerJS, or others. Most of these libraries are implemented in JavaScript and run on Node.js.
You will find tutorials on their respective websites.
I personally use PeerJS; the code and documentation are both very good, and it fully supports data channels (useful for file transfer). The only catch is that there are only two founders, and the community seems quite small.
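For instance, a minimal sketch of a two-way audio call with PeerJS, assuming the default public PeerServer and that peer IDs are exchanged out of band (e.g. over your existing Socket.IO channel):

    // Minimal PeerJS audio-call sketch (browser side); IDs are placeholders.
    import Peer, { MediaConnection } from 'peerjs';

    const peer = new Peer(); // connects to the free public PeerServer by default

    peer.on('open', (id: string) => {
      console.log('My peer ID:', id); // share this with the other user, e.g. via Socket.IO
    });

    // Caller: grab the microphone and call the remote peer.
    async function callPeer(remoteId: string): Promise<void> {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const call = peer.call(remoteId, stream);
      call.on('stream', playRemote);
    }

    // Callee: answer incoming calls with our own audio stream.
    peer.on('call', async (call: MediaConnection) => {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      call.answer(stream);
      call.on('stream', playRemote);
    });

    function playRemote(remoteStream: MediaStream): void {
      const audio = new Audio();
      audio.srcObject = remoteStream;
      audio.play();
    }

A file transfer would go over a data channel instead, created with peer.connect(remoteId) rather than peer.call().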
I am also planning to build this kind of app on Node.js. During my research I found that WebRTC support for mobile browsers is limited. In today's world, whenever we build a web app, we assume that a major portion of our users will use it on a mobile phone. WebRTC is supported in Android browsers such as Chrome, Firefox, and Opera, but on the iPhone it is not supported in Safari, nor in Windows Phone browsers.
You should take a look at Wowza Streaming Cloud at https://www.wowza.com/docs/wowza-streaming-cloud-free-trial

Can I drag a file out of a node-webkit app and drop it on the Desktop?

The client is trying to decide whether they'd like to go with a node-webkit app using AngularJS, but their one sticking point is that they'd like their users to be able to drag a file out of the app and onto the desktop or into an email client (such as Outlook or Lotus Notes), like you can do with an applet (which I'm desperately trying to avoid).
As far as I can tell, this doesn't look possible, but I'm not well versed yet in the latest things you can do with HTML5 and Chrome specifically. Any guidance is greatly appreciated.
Edit: I've also never used Node.js.
http://www.html5rocks.com/en/tutorials/dnd/basics/#toc-dnd-files
http://www.thecssninja.com/javascript/gmail-dragout
The answer is yes: the second link above shows the trick, which is Chrome's proprietary DownloadURL drag type; node-webkit inherits it from Chromium.
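A minimal sketch of that technique, with a placeholder file name and URL:

    // Chromium-only drag-out; works in node-webkit because it embeds Chromium.
    const el = document.getElementById('dragout')!;

    el.addEventListener('dragstart', (e: DragEvent) => {
      // The format is '<mime type>:<file name>:<absolute URL to the data>'.
      e.dataTransfer!.setData(
        'DownloadURL',
        'audio/wav:sample.wav:' + location.origin + '/files/sample.wav' // placeholders
      );
    });

Dropping the element on the desktop (or on an email client) then downloads the file to the drop target.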
@tjb1982 Using a demo like this one:
http://www.thecssninja.com/javascript/gmail-dragout worked for simple files, but when dragging a WAV or MP3 file into audio software (like Logic, Pro Tools, or Ableton), it isn't recognised as an audio file.

Android Native Audio development

I'm trying to develop an application based on native audio in Gingerbread. I ran the native-audio sample program that ships with the NDK, but I'm not clear on it. I need an example to learn how to use the OpenSL library.
Can anyone suggest an example of OpenSL ES-based code?
The OpenSL ES documentation and that sample app are the best resources out there. Not to say that they're great, but they are definitely sufficient, provided that you have some knowledge of object-oriented programming and audio. If you don't, those are the things you should look into first.

ZXing and LWUIT

I am using LWUIT to develop a ZXing application that stops capturing video when the video stream comes across a QR code. I have seen J2ME code for ZXing; unfortunately, I found out that I cannot use some of that code because it relies heavily on Canvas, and LWUIT has no Canvas. Do you have sample code that lets LWUIT users stop capturing video when the camera sees a QR code? I would appreciate any sample code or tips. Thank you so much in advance.
To integrate ZXing you need to connect the LWUIT MediaComponent (or VideoComponent in the latest version) to the ZXing framework; I haven't done this myself, though:
http://www.java.net/node/706015?force=786
The other approach is to show a native Canvas and go back to LWUIT when you are done by using Form.show(), which automatically reinstalls the LWUIT canvas. This has the major drawback of not working on RIM devices.
