I have to send large text to the server, but each time only a single packet arrives at the server. I need to send commands and data between the server and client; I can send commands, but not the data. I have already looked through several threads but am still unable to send large text. I am using Android devices as both server and client.
Can someone please share a working example?
A few more details:
1) The large data ranges from 0 to 10 KB.
2) The minimum Android OS is 4.4.
Let me add a possible example:
Suppose I have a standalone music player with a remote app on Android.
I start a playlist of 100 songs (all songs are on the player, not on the Android device, so I am not using RFCOMM for streaming).
I can remotely issue commands like play/pause/next/prev/vol+/vol- etc.
But the list of songs has to be sent to the player from the remote (this is the part where the data can be large).
Once the player has started on the playlist, my phone may become undiscoverable, go out of range, or otherwise stop communicating with the player. Meanwhile, I can operate the player manually, but those actions should be recorded somewhere locally.
When I connect to the player again, both devices will sync and share their logs.
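What usually goes wrong in this scenario is treating the Bluetooth socket as message-oriented: RFCOMM is a byte stream, so one large write arrives as several reads, and each read may return only part of the data. A common fix is to frame each message with a length prefix and keep reading until that many bytes have arrived. Below is a minimal sketch of that framing in Python; the function names are mine, and the same loop ports directly to the streams you get from an Android BluetoothSocket.

```python
import struct

def send_message(sock, payload: bytes):
    # Prefix the payload with its length as a 4-byte big-endian integer,
    # so the receiver knows exactly how many bytes to expect.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock, n: int) -> bytes:
    # A stream socket may return fewer bytes than requested per read;
    # keep reading until all n bytes have arrived.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)
```

On Android, DataOutputStream.writeInt() on the sender and DataInputStream.readFully() on the receiver give the same behavior with the streams returned by BluetoothSocket.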
Related
I am trying to record audio on a Raspberry Pi Zero and want to transfer the audio data in real time over Bluetooth (Classic or BLE) to an Android application. I created a GATT server based on the BlueZ example code, but given the amount of data (32-bit audio at 16000 Hz), the transfer over BLE is really slow, and I am not able to record and send the data simultaneously, only one after the other.
So I want to switch to Bluetooth Classic. What would be the preferable protocol that lets me send the data as soon as I record it? Is there a PyBluez API that would allow me to achieve this?
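For Bluetooth Classic, RFCOMM (the Serial Port Profile) is the usual choice, and PyBluez does expose it; it behaves like a plain byte stream, so audio can be pushed continuously. To record and send at the same time, decouple the two with a queue and a thread. A rough sketch, assuming the pybluez and pyaudio packages; the service name and buffer sizes are placeholders:

```python
import queue
import threading

import bluetooth  # pip install pybluez
import pyaudio    # pip install pyaudio

RATE = 16000
CHUNK = 1024  # frames per buffer
SPP_UUID = "00001101-0000-1000-8000-00805F9B34FB"  # standard Serial Port Profile UUID

def record_loop(q: queue.Queue):
    # Capture 32-bit mono audio continuously and hand chunks to the sender.
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt32, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    while True:
        q.put(stream.read(CHUNK, exception_on_overflow=False))

def main():
    server = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    server.bind(("", bluetooth.PORT_ANY))
    server.listen(1)
    bluetooth.advertise_service(server, "pi-audio", service_id=SPP_UUID,
                                service_classes=[SPP_UUID, bluetooth.SERIAL_PORT_CLASS],
                                profiles=[bluetooth.SERIAL_PORT_PROFILE])
    client, addr = server.accept()

    q = queue.Queue()
    threading.Thread(target=record_loop, args=(q,), daemon=True).start()

    # Sending runs independently of recording, so neither blocks the other.
    while True:
        client.send(q.get())

if __name__ == "__main__":
    main()
```

On the Android side you would connect with createRfcommSocketToServiceRecord() using the Serial Port UUID and read from the socket's InputStream.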
I want to build a video security infrastructure with Raspberry Pis.
Please take a look at the rough layout I have in mind:
What the system should be capable of:
The RPis need to stream low-latency video to the webserver, which displays it to all clients visiting the website.
If a client authenticates, they can control one RPi by sending commands that get translated into GPIO commands (see the sketch after this list).
All RPis should be controllable simultaneously by different clients in real time.
Some kind of scalability (clients + RPis).
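For illustration, here is roughly what the RPi end of the command path (client → webserver → RPi → GPIO) could look like, sketched in Python with the websockets and RPi.GPIO packages; the pin numbers, command names, and port are placeholders, and the same pattern applies with Socket.IO in Node.js:

```python
import asyncio
import json

import websockets    # pip install websockets
import RPi.GPIO as GPIO

# Hypothetical mapping from command targets to BCM pin numbers.
PINS = {"light": 17, "motor": 27}

GPIO.setmode(GPIO.BCM)
for pin in PINS.values():
    GPIO.setup(pin, GPIO.OUT)

async def handle(ws):
    # With the modern websockets API the handler takes one connection argument.
    # Each message is a JSON command like {"target": "light", "state": true}.
    async for message in ws:
        cmd = json.loads(message)
        pin = PINS.get(cmd.get("target"))
        if pin is not None:
            GPIO.output(pin, GPIO.HIGH if cmd.get("state") else GPIO.LOW)
            await ws.send(json.dumps({"ok": True, "target": cmd["target"]}))

async def main():
    # The webserver relays authenticated client commands to this socket.
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```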
My Questions:
I want to program everything in Node.js. Is that a good idea?
Could WebRTC and Socket.IO help me with this project? If not, is there another library that would help me out?
How many clients could a VPS (8 GB RAM, 4 vCores) handle in this setup?
Is it possible to bring the latency down to under 2 seconds, or even lower?
Anything helps! Thanks!
I'm trying to create an interactive voice tree for an art project. Think of something like a choose-your-own-adventure, but on the phone and with voice commands. I already have a fair amount of experience working with Construct 2 (game-making software), and can easily build a branching, voice-controlled interaction loadable through a modern browser with it. For reasons relevant to the overall story, I need players to connect to the interaction through a Google Voice number they will call.
I already have a GV number and have written an AutoHotKey script to auto-answer the Hangouts call, but I'm stuck trying to route the audio from the caller in Hangouts to the browser AND the audio response output of the browser back to the caller.
I know of an extremely primitive way to accomplish this, which I've illustrated with this diagram:
Unfortunately, this is rather cumbersome, and I suspect I can achieve my goal through virtualization, or at the very least with some sort of attenuation cable between two physical machines (I tried running a generic AUX cable between two laptops, but couldn't get the speaker audio of one to register as microphone audio on the other).
I've been experimenting on Parallels running Windows 8.1 with Virtual Audio Cable (no luck), JACK (too robust), CheVolume (too limited), and IndieVolume (too limited).
I suspect VAC would be the best bet, but I can't seem to find a way to route Firefox's audio output to a microphone input that feeds Chrome, and vice versa. If I try to accomplish it all through just one virtual machine, I have to use two different browsers for the voice-tree webpage and the Hangouts call, since Hangouts pushes its audio through Chrome (even in the stand-alone application).
Is there any way to route microphone input and speaker output separately between two virtual machines? If not, could I accomplish this with a specific type of cable between two laptops running Windows 7/8 that have generic audio jacks?
Need Help!
Let's assume I have a robot in a room with a camera on it. I need the video feed to be available live on a website. I want as little latency as possible (assuming I have a good internet connection). Also, if a user presses any key while on the website, the robot needs to detect it and act accordingly. I can handle all the actions the robot needs to perform once I receive the key presses. There's a Raspberry Pi on the robot.
What would be the easiest way to achieve bi-directional communication (one direction being video, the other plain text) between a browser and my robot, keeping the communication as fast as possible?
PS: I tried initiating a Google Hangout and embedding the video, but there's a latency of at least 1 minute.
Simple to do. Get the camera for the Raspberry Pi from here:
http://www.adafruit.com/products/1367
You can use Motion JPEG to transmit the video. Follow the instructions below:
http://blog.miguelgrinberg.com/post/stream-video-from-the-raspberry-pi-camera-to-web-browsers-even-on-ios-and-android
Once you have the IP of your video stream, you can display it on a website.
As for sending commands to the Raspberry Pi, what's the complication? If your Raspberry Pi has an internet connection (it must, for the video stream), you can write a program that reads commands from your browser.
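To make that concrete, here is a minimal sketch in the spirit of the linked blog post: a Flask app that serves the camera as a Motion JPEG stream and accepts key presses from the browser on a second endpoint. get_frame() is a placeholder for the actual camera capture, and the route names are made up:

```python
from flask import Flask, Response, request

app = Flask(__name__)

def get_frame() -> bytes:
    # Stand-in for the Pi camera capture; with the picamera module you
    # would grab JPEG frames from the video port here.
    raise NotImplementedError

def mjpeg_stream():
    # Motion JPEG: an endless multipart response, one JPEG per part.
    while True:
        frame = get_frame()
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + frame + b"\r\n")

@app.route("/video")
def video():
    return Response(mjpeg_stream(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

@app.route("/command", methods=["POST"])
def command():
    # The browser POSTs the pressed key; translate it into robot actions.
    key = request.form.get("key", "")
    print("received key:", key)
    return {"ok": True}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, threaded=True)
```

In the page, the stream is just an img tag pointing at /video, and key presses can be POSTed to /command from a keydown handler with fetch().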
I am working on a project that will involve HTTP live media streaming from a variety of devices like Android phones/tablets, iPhone, iPad, browsers, etc. It will be two-way communication for all the devices, with multiple devices connected to a conversation. I have implemented it partially, i.e. one way, by capturing audio from an Android phone (native app) and streaming it to a web browser (HTML5 app) with a PHP server using ffmpeg and cvlc. I want to know the best way to go about it, e.g. whether there are any standards to follow.

Also, what kind of server should I use? I don't want to use a streaming server like Red5. I would like to implement the streaming logic similar to Apple's HTTP Live Streaming. I have come across MPEG-DASH, which seems to be a standard for HTTP streaming; I still have to look deeper into it. I was also thinking of using Node.js for its popularity with streaming.

Another worry is how to capture media from the devices. Should I use the devices' native capability to encode media into MP4 (or whatever container they support) and then stream it to the server, or capture audio and images for a particular period of time, send them to the server, and create a common output there (I am not really sure about this idea)? The separate capture is basically meant to simplify streaming video from the server end to any device. I was also wondering if I could bypass the server completely in some cases, like a phone-to-phone or phone-to-tablet connection.
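Since ffmpeg is already in the pipeline, one way to get HLS-style behavior without a dedicated streaming server is ffmpeg's built-in hls muxer, which writes short MPEG-TS segments plus a rolling .m3u8 playlist that any plain HTTP server can host. A sketch, with the input source and segment parameters as placeholders:

```python
import subprocess

def start_hls(input_url: str, out_playlist: str = "stream.m3u8"):
    # ffmpeg's hls muxer cuts the input into short MPEG-TS segments and
    # maintains a rolling .m3u8 playlist that any HLS client can poll.
    cmd = [
        "ffmpeg",
        "-i", input_url,            # e.g. an RTMP/RTSP/device source
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "4",           # target segment length in seconds
        "-hls_list_size", "5",      # keep only the 5 newest segments listed
        "-hls_flags", "delete_segments",
        out_playlist,
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    proc = start_hls("rtmp://localhost/live/phone")  # placeholder source
    proc.wait()
```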
I just want to be sure of the things I will be using/implementing so that I won't have to make drastic changes later on. Any help is deeply appreciated. Thank you.