I am working on an IoT project on Google Cloud. I use Pub/Sub to let devices contact each other. I developed the backend in Node.js, and next I will develop a mobile app that uses a Google library to publish/subscribe.
The problem I face now is this: does Google have any C/C++ library for contacting the Pub/Sub/Google Cloud API? If not, is there an alternative way to keep embedded devices (programmed in C/C++) updated with the mobile application's actions?
Note: I need real-time control between the mobile app and the embedded device.
Thanks
Google Cloud Pub/Sub has an HTTP/JSON-based API (under the API Reference tab in the sidebar), so you can roll your own client library in this case.
The client APIs that Google currently supports are listed here. If you can run Go or Java on your embedded device (both a lot less common than C/C++ on embedded devices, and usually only supported if you stretch the definition of "embedded"), you can have a fully-supported client library.
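To give a flavor of the roll-your-own route, here is a minimal sketch of publishing one message through the HTTP/JSON API. It is written in Java for brevity, but the same POST can be issued from C/C++ with any HTTP client (libcurl, for example). This assumes the v1 REST endpoint; the project/topic names and the OAuth2 access token are placeholders, and obtaining the token (e.g. from a service account) is a separate step.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Publishes one message by hand against the Pub/Sub v1 REST API.
// The token, project and topic below are placeholders.
public class PubSubPublish {
    static final String ACCESS_TOKEN = "ya29.your-oauth2-token";   // obtained out of band
    static final String TOPIC = "projects/my-project/topics/device-events";

    public static void main(String[] args) throws Exception {
        // Pub/Sub expects the payload base64-encoded inside a JSON envelope.
        String data = Base64.getEncoder()
                .encodeToString("hello from device".getBytes(StandardCharsets.UTF_8));
        String body = "{\"messages\":[{\"data\":\"" + data + "\"}]}";

        URL url = new URL("https://pubsub.googleapis.com/v1/" + TOPIC + ":publish");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + ACCESS_TOKEN);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // Success returns 200 with a JSON body listing the assigned messageIds.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```

The device side would consume messages the same way, by polling a pull subscription; given the real-time requirement, it would be worth measuring that polling latency early before committing to this design.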
I am a developer and I have a Bluetooth lamp with RGB and day/warm white modes, controlled by a third-party app.
My goal is to do some automation with my lamp.
Is there a way to read what this app is sending to my lamp in order to simulate its functionality? The thing is, this app cannot be integrated with my Google Assistant, so I am trying to find a way to do it myself by making my own mobile application to control my lamp.
Or maybe my question should be something like: is there a generic app that can control generic Bluetooth lamps?
Any information is greatly appreciated.
You could either hack the hardware itself, or you could simply snoop on the communications and hack into the lamp's controls.
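If the lamp is Bluetooth Low Energy, a concrete way to snoop is to enable Android's "Bluetooth HCI snoop log" developer option while driving the lamp from the vendor app, open the captured log in Wireshark, and note which GATT characteristic the app writes and with what payloads. Below is a sketch of replaying such a write from your own Android app; the MAC address, the service/characteristic UUIDs and the payload bytes are hypothetical placeholders you would substitute with the sniffed values.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothGatt;
import android.bluetooth.BluetoothGattCallback;
import android.bluetooth.BluetoothGattCharacteristic;
import android.bluetooth.BluetoothProfile;
import android.content.Context;
import java.util.UUID;

// Replays a sniffed GATT write to a BLE lamp. Requires the Bluetooth
// permissions in the manifest. All identifiers below are placeholders:
// substitute the MAC, UUIDs and payload you captured in Wireshark.
public class LampControl extends BluetoothGattCallback {
    static final String LAMP_MAC = "AA:BB:CC:DD:EE:FF";
    static final UUID SERVICE = UUID.fromString("0000ffe5-0000-1000-8000-00805f9b34fb");
    static final UUID CONTROL = UUID.fromString("0000ffe9-0000-1000-8000-00805f9b34fb");
    static final byte[] COMMAND = {0x56, (byte) 0xFF, 0x00, 0x00, 0x00, (byte) 0xF0, (byte) 0xAA};

    public void sendCommand(Context context) {
        BluetoothAdapter.getDefaultAdapter()
                .getRemoteDevice(LAMP_MAC)
                .connectGatt(context, false, this);
    }

    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            gatt.discoverServices();   // services must be discovered before any write
        }
    }

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        BluetoothGattCharacteristic c =
                gatt.getService(SERVICE).getCharacteristic(CONTROL);
        c.setValue(COMMAND);           // the sniffed payload
        gatt.writeCharacteristic(c);   // replay it at the lamp
    }
}
```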
Otherwise, I would check out integration platforms like IFTTT, which allows some customization of control with proprietary systems.
I'm looking for tutorials or developer guideline docs for building Chromecast built-in devices. Specifically, I want to understand the software structure and how to get the required libraries and sample code for Chromecast built-in on a Linux device with a screen.
I've already looked at the Google Cast SDK documentation at https://developers.google.com/cast/. However, it covers application development and streaming services; I have not found what I'm looking for yet.
For example:
required DRM (widevine or playready)
media pipeline integration
device discovery (DIAL? see the SSDP sketch after this list)
application lifecycle management (if needed)
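On the discovery point, my understanding so far is that DIAL rides on plain SSDP: you multicast an M-SEARCH with the DIAL search target, and DIAL-capable devices answer with a LOCATION header pointing at their device description. A minimal probe sketch follows (standard SSDP multicast address and the DIAL search target; nothing here is Chromecast-specific, and I may be missing Cast's own mDNS-based path):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Sends an SSDP M-SEARCH for the DIAL service and prints each responder's
// reply, which should include a LOCATION header for the device description.
public class DialProbe {
    public static void main(String[] args) throws Exception {
        String msearch = "M-SEARCH * HTTP/1.1\r\n"
                + "HOST: 239.255.255.250:1900\r\n"
                + "MAN: \"ssdp:discover\"\r\n"
                + "MX: 2\r\n"
                + "ST: urn:dial-multiscreen-org:service:dial:1\r\n\r\n";
        byte[] data = msearch.getBytes(StandardCharsets.US_ASCII);

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(3000);
            socket.send(new DatagramPacket(data, data.length,
                    InetAddress.getByName("239.255.255.250"), 1900));
            byte[] buf = new byte[2048];
            while (true) {
                DatagramPacket resp = new DatagramPacket(buf, buf.length);
                try {
                    socket.receive(resp);          // one datagram per responder
                } catch (SocketTimeoutException e) {
                    break;                         // no more replies within the timeout
                }
                System.out.println(new String(resp.getData(), 0, resp.getLength(),
                        StandardCharsets.US_ASCII));
            }
        }
    }
}
```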
I'm hoping to learn how to get documentation and resources on what I need to understand for "Google Cast built-in" device development. Thank you.
I need to know if TideKit will be able to stream live video and audio from device cameras and microphones. The Android and iOS APIs allow for this. I think Flex can do it. I asked about this on the Twitter page but I'm looking for a more definitive answer. The one I got was "TideKit is a development, not a streaming platform but you could develop an app for that! That’s where TideKit comes into play", which doesn't fully answer the question.
The goal is to stream video from Android & iOS cameras, and audio from the device microphones, to a media streaming server such as Flash Media Server or a Wowza streaming server, using either RTMP or HTTP streaming from the app to the server. Alternatively, it would work if the stream were sent live in any other way to a server socket and then encoded for redistribution via a streaming server.
The key here, though, is "live", rather than having to wait for a video or audio file to become complete before sending it off to the server. I know it's possible with the native APIs, and I really hope TideKit will be able to do this, because no other platform similar to TideKit (and there are MANY) can do this besides Flex. I've pored over countless SDK documents. If TideKit can do this, it will attract a lot more customers.
Eagerly awaiting a response,
Thanks
#xendi Thank you for your question. TideKit is an app development platform. You can use it for any type of app development for mobile, desktop and web. We've purposefully kept the core of TideKit small. This is to ensure its core is extremely stable and that most functionality can come through modules.
Out of the box, TideKit has core AV functionality on all platforms. This functionality is extended through TideKit modules backed by operating-system implementations, or through pure JavaScript modules. There are almost 100,000 modules of pure JavaScript functionality now available to you through existing repositories, including NPM, Bower and Component, that can simply be consumed as CommonJS.
When a TideKit or JavaScript module is installed, it exposes its APIs, extending those already available. Either way, those APIs become available to you in JavaScript.
You already have access to the camera with TideKit. The rest is handling the streaming protocol, i.e. RTSP, RTMP, HTTP, etc. So there are a few ways to accomplish what you want with TideKit:
Using a TideKit module that supports the streaming protocols by interacting with its APIs in JavaScript.
Using a pure JavaScript solution that supports the protocols, from a repository, together with TideKit.
Writing your own TideKit module that ties into the operating systems' APIs.
Writing the solution in pure JavaScript using TideKit's camera and network APIs.
TideKit is new and has not yet formally launched. We are currently in reservations mode. We will deliver it first to those with reservations, and it will be gradually rolled out. Demos are currently being prepared to demonstrate the speed and low barrier to development. When TideKit formally launches, check for the availability of modules at that point (for both TideKit and JavaScript implementations). Note that not all possible functionality in TideKit modules will be available at launch. New modules will be released over time.
As an aside, TideKit also supports WebRTC in HTML5 so this could work together with TideKit's other capabilities for interesting possibilities.
I need to develop a mobile project based on Bluetooth. Since I am new to J2ME, I studied some articles and got the project running as far as discovering devices and services. I need to communicate between devices and transfer files. I searched for client-server Bluetooth communication code and found some, but I don't know how to run that code or take it further.
I have gone through the articles and I can run client-server communication. Now I need to transfer a file to, and communicate with, a user who is beyond my phone's Bluetooth range, by relaying through another phone that is within my range.
JSR82.com has many articles and tutorials about how to use Bluetooth from J2ME.
It's also worth referring to the book "Bluetooth Application Programming with the Java APIs" by C Bala Kumar; it will be helpful.
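To make that concrete, here is a minimal JSR-82 sketch of the usual pattern: the server registers a Serial Port Profile (btspp) service and receives bytes, and the client opens the service's connection URL and streams the file. These are helpers you would call from your MIDlet. The UUID is an arbitrary placeholder (both sides must agree on it), and in a real app the client obtains the URL from a service search (DiscoveryAgent.searchServices plus ServiceRecord.getConnectionURL) rather than hard-coding it.

```java
import java.io.InputStream;
import java.io.OutputStream;
import javax.bluetooth.UUID;
import javax.microedition.io.Connector;
import javax.microedition.io.StreamConnection;
import javax.microedition.io.StreamConnectionNotifier;

// Minimal JSR-82 SPP file transfer sketch. The UUID is an arbitrary
// placeholder; both sides must use the same one.
public class BtFileTransfer {
    static final UUID SPP_UUID = new UUID("86b4d249fb8844d6a756ec265dd1f6a3", false);

    // Server side: advertise the service and copy the incoming bytes to sink.
    public static void receive(OutputStream sink) throws Exception {
        StreamConnectionNotifier notifier = (StreamConnectionNotifier) Connector.open(
                "btspp://localhost:" + SPP_UUID + ";name=FileTransfer");
        StreamConnection conn = notifier.acceptAndOpen();   // blocks until a client connects
        InputStream in = conn.openInputStream();
        byte[] buf = new byte[512];
        int n;
        while ((n = in.read(buf)) != -1) {
            sink.write(buf, 0, n);
        }
        conn.close();
        notifier.close();
    }

    // Client side: connect to the discovered service URL and push the bytes.
    public static void send(String serviceUrl, byte[] fileBytes) throws Exception {
        StreamConnection conn = (StreamConnection) Connector.open(serviceUrl);
        OutputStream out = conn.openOutputStream();
        out.write(fileBytes);
        out.close();
        conn.close();
    }
}
```

Relaying to a phone beyond your range is then a matter of running both roles on the intermediate phone: receive the file over one connection and send it onward over another.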
I am trying to use the new OTA enrollment and device management capabilities in iOS 4 to provide wireless app distribution for the enterprise. So far, I have come across a lot of third-party MDM providers that seem to charge by the device. I don't believe this is something very hard to do on our own, especially as a prototype.
My search has led me to some open source software for SCEP. Together with the OTA configuration reference from Apple, I want to believe that the next step would be to actually implement an MDM server. Now, the WWDC talk had slides on various MDM queries supported by iOS 4, including installing and removing provisioning profiles, but there's no reference implementation or even exposed API that I could find.
Does anyone have any experience trying to fully develop an enterprise distribution and management system without third party software?
MDM providers that I've seen are acting as SCEP proxies so that you don't have to expose your certificate server to the internet.
The best open-source SCEP server I've found so far is Dogtag (http://pki.fedoraproject.org/wiki/PKI_Main_Page).
Whoops, I was meaning to comment, not answer.