I have been trying to figure out whether it is possible to send files between a WP8 device and a Windows RT device (a Surface). Some people write that this is possible, but they never explain how to do it.
So what I want to do is:

1. Record a video with my app on the WP8 device and save it to isolated storage (this is where I'm at at the moment).
2. Send the video (approx. 20 min recording time) to my Windows RT device.
3. Play the video on the RT device.

Steps 1 and 3 are simple, but step 2 is driving me crazy. I have been thinking about using Bluetooth, but as the speed is only around 700 kbit/s it would take forever to transfer. USB is a no-go, as the file is in isolated storage. SkyDrive needs 3G. So what I am thinking is to start internet sharing on my WP8 device, connect my Windows RT device to it, and when it's done use Wi-Fi to send the video from WP8 to Win RT.
Is there any way this could work or is this impossible?
If your devices are in the same Wi-Fi network, you can use it to send files. Glossing over the details, this could be achieved in two steps:
Make the devices discover each other on the network (they need to know each other's IP addresses).
Implement file sending over a TCP socket. The simplest approach is to split the file into chunks of some arbitrary but small size and send those chunks one after another (see the sketch below).
Of course, this is a high-level description, so if you need further help with the topics mentioned above, feel free to ask.
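To make the chunking idea concrete, here is a minimal sketch in TypeScript using Node's net module. This is not the WP8 API (there you would use StreamSocket and DataWriter, as in the example below); it just illustrates the protocol: send the file length first, then fixed-size chunks until everything is written.

import * as fs from "fs";
import * as net from "net";

const CHUNK_SIZE = 64 * 1024; // arbitrary but small chunk size

function sendFile(path: string, host: string, port: number): void {
  const socket = net.createConnection(port, host, () => {
    // Send the total file length first so the receiver knows when to stop reading.
    const header = Buffer.alloc(8);
    header.writeBigUInt64BE(BigInt(fs.statSync(path).size));
    socket.write(header);
    // Stream the file in chunks; pipe() handles backpressure and closes the socket.
    fs.createReadStream(path, { highWaterMark: CHUNK_SIZE }).pipe(socket);
  });
}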
EDIT: This URL says that it is possible to listen for incoming network connections, because the relevant class (StreamSocketListener) is available both for Windows Store apps and for Windows Phone 8. You can use it as a starting point.
EDIT 1: I've quickly put together an example for you, to prove it works. Just tested it on my Lumia 920.
Windows.Networking.Sockets.StreamSocketListener listener = new Windows.Networking.Sockets.StreamSocketListener();
listener.ConnectionReceived += async (_, args) =>
{
    // Server side: write a single Int32 to whoever connects.
    var w = new Windows.Storage.Streams.DataWriter(args.Socket.OutputStream);
    w.WriteInt32(42);
    await w.StoreAsync();
};
// Listen on the loopback address for this self-contained test; on a real
// network you would bind to the device's local IP instead.
await listener.BindEndpointAsync(new Windows.Networking.HostName("127.0.0.1"), "55555");

// Client side: connect, read the Int32 back, and show it.
var clientSocket = new Windows.Networking.Sockets.StreamSocket();
await clientSocket.ConnectAsync(new Windows.Networking.HostName("127.0.0.1"), "55555");
var r = new Windows.Storage.Streams.DataReader(clientSocket.InputStream);
await r.LoadAsync(4); // an Int32 is 4 bytes
var res = r.ReadInt32();
clientSocket.Dispose();
System.Windows.MessageBox.Show(res.ToString(), "The Ultimate Question of Life, the Universe, and Everything", System.Windows.MessageBoxButton.OK);
Is this something you are trying to do in code?
What is your average file size - are we talking low-res 320x480 or HD-quality 720p+ video...?
What are your limitations? (Time, Connectivity, etc)
You could set up Dropbox to do the transfer. The free version is limited in space (more if you share), but if you moved the files into and out of Dropbox as necessary, you'd at least be able to set it and forget it. This would still require a network connection, so if you need to do this on the go it may not be a good answer.
If this is something you need to do while on vacation at Disney World or out camping, it may not be a viable option.
First of all: what I am trying to do is purely for private interest.
I'd like to connect an AT-09/HM-10 BLE module with firmware 6.01 to another device that also provides a BLE module, one that is not based on the CC254x chip.
I am able to communicate with this device using my laptop with integrated Bluetooth, Linux, and bluepy-helper. I am also able to make a connection using the HM-10 through a USB-RS232 module and HTerm, but after that I am quite stuck.
By "reverse engineering" the Android application for controlling this particular device, I found a set of commands stored as strings in hex format. The Java application sends out the particular command combined with a CRC16-Modbus value, together with a request (whatever that is), to a particular service and characteristic UUID.
I also have a Wireshark capture pulled from my Android phone while the application was connected to the particular device, but I am unable to find the commands extracted from the .apk in this capture.
This is where I get stuck. After making a connection and sending out the command + CRC16 value, I get no response at all, so I am thinking my approach is wrong. I am also not quite sure how the HM-10 firmware handles / maps the service and characteristic UUIDs of the destination device.
Are there perhaps any special AT commands that would fit my needs?
I am not at all into the technical depths of Bluetooth and its communication layers. The only thing I know is that the HM-10 connects to a selected BLE device and after that provides serial I/O, with data flowing between the endpoints.
I have no clue how, or if, it can handle data flow to certain service/characteristic UUIDs of the destination endpoint, although it seems to have GATT, L2CAP services and so on built in. Surely it handles all the necessary communication by itself, but I don't know where to get access to the "front end" at all.
Best regards!
I am using react-native-webrtc to handle the WebRTC portion of this.
I am using Websockets to signal and using ICE trickling to keep track of the ICE candidates.
I queue my ICE candidates until setLocalDescription has been called on the callee side. Then I addIceCandidate for each candidate in the queue.
On the caller side I am doing the same thing and not processing my ICE candidates until setRemoteDescription has been called.
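In code, the queueing logic looks roughly like this (a simplified sketch; pendingCandidates and remoteDescriptionSet are illustrative names, not react-native-webrtc API):

const pendingCandidates: RTCIceCandidateInit[] = [];
let remoteDescriptionSet = false;

async function onRemoteCandidate(pc: RTCPeerConnection, candidate: RTCIceCandidateInit) {
  if (!remoteDescriptionSet) {
    // Hold candidates until the description is in place.
    pendingCandidates.push(candidate);
    return;
  }
  await pc.addIceCandidate(candidate);
}

async function onRemoteDescription(pc: RTCPeerConnection, desc: RTCSessionDescriptionInit) {
  await pc.setRemoteDescription(desc);
  remoteDescriptionSet = true;
  // Flush everything that arrived early.
  for (const c of pendingCandidates.splice(0)) {
    await pc.addIceCandidate(c);
  }
}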
I am only doing audio, so no video is being used.
When I test this with two mobile devices on the same network I have no issues.
But if I disconnect one device from the Wi-Fi, the call still connects just fine, except the audio cannot be heard on either device.
The onConnectionStateChange handler still reports "connected" and onIceGatheringStateChanged still reports "complete".
I thought maybe I needed to use a TURN server to get this working, so I started using Twilio's paid TURN/STUN service, but the issue still persists.
Any ideas what to look into?
BACKGROUND
OK, so you need some background on P2P connections on RTC platforms. So here it is (in a very short version):
In order to establish a connection, you have to establish a direct connection between the two clients (obvious, I know). In order to find these routes, you need help from network servers.
That's why you set up the local SDP with settings saying which servers we can access: ICE, TURN, STUN (you can find plenty of information, for example this one). Host ICE candidates are the most obvious ones, because those endpoints are within your local network, and that's why your version does not work across different networks.
Right, you have to use TURN/STUN to traverse NAT and find correct routes between the peers. Most TURN servers are private and paid, but for a lightly loaded application public STUN servers may be more than enough.
You can find many available out there. One example is here (a config sketch follows the list below):
stun.l.google.com:19302
stun1.l.google.com:19302
stun2.l.google.com:19302
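For reference, wiring one of those public STUN servers into the connection config would look something like this sketch (in TypeScript; the URLs are the ones listed above, and a paid TURN server entry would be added the same way):

// Minimal sketch: pass STUN (and optionally TURN) servers when creating the peer connection.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },
    // A TURN entry would also carry credentials, e.g. (hypothetical host):
    // { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" },
  ],
});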
SOLUTION
Now, coming to your problem. Just because your signaling connected the devices does not mean the devices themselves are connected. (Just to clarify: if you don't have media flowing between your devices, your RTC connection failed to establish, and it's not just audio.)
The problem is in how the TURN/STUN servers are used on your devices: you have to trace the SDP that is established during setRemoteDescription and check that the servers were included. Furthermore, there is always the Google demo, which works perfectly.
UPDATE
In order to trace how the remote SDP is set and the connection established, you have to print the candidates that will be used for the setup. To do that, print the information on which candidates are gathered during setLocalDescription and setRemoteDescription.
In the place where you are gathering candidates, add logging to print the information. You should see that STUN and TURN candidates are there. Below is an example in JavaScript. The word ICE shouldn't bother you; it just means that these are the candidates found after ICE traversal.
// Listen for local ICE candidates on the local RTCPeerConnection
peerConnection.addEventListener('icecandidate', event => {
  if (event.candidate) {
    // Here should be your part where you send this candidate to your signaling channel.
    // Log the entire candidate; you should see "srflx" (STUN) and "relay" (TURN) entries.
    console.log('Local ICE candidate:', event.candidate.candidate);
  }
});
I am currently looking into the possibility of integrating an announcement system with sonos but have yet to find a reasonable approach and have started wondering if it is currently possible at all.
My initial approach was having Sonos subscribe to a radio station that would send a constant stream of announcements. After testing this setup, I have been unable to get the delay below 3 seconds (which is too long).
I then began looking into the Sonos API. Looking at the documentation and the graph below, I came to the conclusion that what I was trying to achieve was indeed possible with Sonos.
It does seem, however, that it will require substantial effort to implement a service where I can stream audio to Sonos directly, so I was hoping I could get some things cleared up before I proceed with a rather costly (in time) implementation.
Is it possible to get the audio delay below 3 seconds when streaming directly?
Am I correct in understanding that I will need to write an app on the Sonos platform to handle my requests?
If the answer to the above is no, what other options are available?
MSB,
Our current public APIs are really not well designed for what you are trying to do. There are some open-source projects working off reverse-engineered APIs that may work a little better for you.
Just use my library to play a notification.
const SonosDevice = require('@svrooij/sonos').SonosDevice
const sonos = new SonosDevice(process.env.SONOS_HOST || '192.168.96.56')
sonos.PlayNotification({
  trackUri: 'https://cdn.smartersoft-group.com/various/pull-bell-short.mp3', // Can be any uri sonos understands
  // trackUri: 'https://cdn.smartersoft-group.com/various/someone-at-the-door.mp3', // Cached text-to-speech file.
  onlyWhenPlaying: false, // set to true to only play the notification when you're already listening to music, so it won't play while you're sleeping.
  timeout: 10, // If the events don't work (to see when it stops playing), or if you turned on a stream, it will revert back after this number of seconds.
  volume: 15, // Set the volume for the notification (and revert back afterwards)
  delayMs: 700 // Pause between commands in ms (for when sonos fails to play short notification sounds).
})
  .then(played => {
    console.log('Played notification %o', played)
  })
Works in Node/TypeScript and has a lot of possibilities.
I'd like my Fenix 3 to do the following:
Trigger = hold down start button (i.e. shortcut)
Send message via BT or WiFi to a server (Linux or Windows or Arduino or whatever)
I'll take care of the message and open/close my garage door.
After a bike tour I'd like to easily and safely open my garage door. I have a VMware server running at home. I could use one of the machines on this server to listen for the messages, or I could set up an Arduino or similar.
The main question is: can I write a Connect IQ app that utilizes the shortcut concept on the watch, i.e. one triggered by a long press on the start or lap button?
Clarification: there seem to be some kind of global actions for long presses. I can, for example, assign "Save position" to a long press on start/stop. This works even from inside other apps.
Can the watch communicate with sensors (i.e. an Arduino or another BT client) even when not in training mode?
Clarification: I need to communicate directly with my Arduino via Bluetooth, i.e. not via my iPhone.
Thanks in advance.
Short answer: Yes
Long answer: If you record the time a keydown event comes in, and then check for a "long" press when the key is let up based on the time difference, you can fake it. There is not an event for a long press of a physical key though. I am also pretty sure your app needs to be the current one for this to work.
Link to the InputDelegate event options: http://developer.garmin.com/downloads/connect-iq/monkey-c/doc/Toybox/WatchUi/InputDelegate.html
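The watch itself is programmed in Monkey C, but the timing trick is language-independent; here is a sketch of the idea in TypeScript, with made-up handler and action names standing in for the InputDelegate callbacks:

// Fake a long press by timing the gap between key-down and key-up.
const LONG_PRESS_MS = 1000; // threshold is an assumption; tune to taste
let keyDownAt: number | null = null;

function onKeyPressed(): void {
  keyDownAt = Date.now(); // remember when the key went down
}

function onKeyReleased(): void {
  if (keyDownAt !== null && Date.now() - keyDownAt >= LONG_PRESS_MS) {
    openGarageDoor(); // hypothetical action to run on a detected long press
  }
  keyDownAt = null;
}

function openGarageDoor(): void {
  console.log("long press detected - send the message to the server");
}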
As for the sensors question, I am not sure exactly what you are asking. Your app can do whatever you want, and it is my understanding that only one app will be running at a time.
Disclaimer: thus far I have only been working with the emulator; I'm still waiting for my watch to arrive.
You cannot write anything that hijacks user input events from another active application (including the watch face). You could make your own watch face, but it wouldn't have the ability to send network messages and it has only one way to accept user input (the look-at-watch gesture).
This is something that you can do pretty easily from a watch app or a widget. Assuming that your Fenix 3 is connected to your phone via Bluetooth, you can send HTTP GET requests as you see fit.
I've written a simple app that I call GIFTTT that uses the IFTTT Maker channel to open/close my garage door (and all sorts of other things).
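For what it's worth, triggering a Maker-channel event is just an HTTP request to IFTTT's webhook endpoint; here is a sketch in TypeScript (the event name and key are placeholders for your own):

// Hypothetical sketch: fire an IFTTT Maker (webhooks) event.
const EVENT = "garage_door"; // placeholder event name
const KEY = "your-ifttt-key"; // placeholder key from the Maker channel settings

fetch(`https://maker.ifttt.com/trigger/${EVENT}/with/key/${KEY}`, { method: "POST" })
  .then(res => console.log("IFTTT responded with status", res.status))
  .catch(err => console.error("request failed", err));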
I am new to J2ME technology, and I am making an application that transfers text and an image (downloaded through HTTP and stored into an ImageItem of a form) from a client mobile to a server mobile using Bluetooth. The connection used is SPP. I have succeeded in transferring the text message, but I am unable to transfer the image.
Can anyone help me transfer the image to the server mobile directly through Bluetooth, without saving it into the phone memory or onto a memory card?
I would be thankful to you.
javax.microedition.lcdui.Image.getRGB() is the method you are looking for.
If myImageItem is your ImageItem object, the code would look like this:
------------
Image myImage = myImageItem.getImage();
int[] myImageInts = new int[myImage.getHeight() * myImage.getWidth()];
// Beware of OutOfMemoryError here.
// Note: the third argument (scanlength) must be the image width, not the
// length of the array, or you'll get an ArrayIndexOutOfBoundsException.
myImage.getRGB(myImageInts, 0, myImage.getWidth(), 0, 0,
               myImage.getWidth(), myImage.getHeight());
------------
You can then convert each int in the array into 4 bytes (in the correct order, please) and feed these to your Connection's OutputStream. Alternatively, DataOutputStream.writeInt() does the conversion for you.
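For reference, the "correct order" is big-endian (most significant byte first), which is what DataOutputStream.writeInt() produces; the per-int conversion, sketched here in TypeScript:

// Convert a 32-bit int into 4 bytes, most significant byte first (big-endian),
// i.e. the same order DataOutputStream.writeInt() uses.
function intToBytes(value: number): Uint8Array {
  const bytes = new Uint8Array(4);
  bytes[0] = (value >>> 24) & 0xff;
  bytes[1] = (value >>> 16) & 0xff;
  bytes[2] = (value >>> 8) & 0xff;
  bytes[3] = value & 0xff;
  return bytes;
}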
Well, if your server mobile is using Bluetooth and is also running an application written by you, then you can create your own protocol to do this.
For image transfer, it is best to send the bytes that were downloaded over HTTP (and used to create the ImageItem), then receive them at the server end and display them the same way.
What is the specific problem you're encountering while doing this?
funkybro
As funkybro suggested, you can use the bytes to transfer the image to the server mobile. For that, you can just open the output stream of the connection you have made to the Bluetooth server mobile and write the byte contents onto it.