Do SmartApps run remotely even when all interactions are local?

I am trying to have my SmartApp talk to my local REST server at my company. This REST server is not externally accessible. In an attempt to narrow down the issue, I have created a groovy program that interacts with the REST server. I have executed this on my own computer and coworkers' computers and they are all able to access the REST server as expected. When I try to access the REST server from my SmartApp (using the SmartThings httpGet() function), I only get ConnectionTimeoutExceptions. Is my SmartApp executing from an external perspective?

From the SmartThings documentation, all apps except Smart Home Monitor and Smart Lights run remotely (https://support.smartthings.com/hc/en-us/articles/209979766-Local-processing):
Smart Home Monitor and Smart Lights are the only
SmartApps with local processing capabilities at this time. We are
working on additional local SmartApp options.
That's why you cannot access your local server from your SmartApp.
But what you can do is go the other way: instead of having your SmartApp call your local server, have your local server call your SmartApp (by using a Web Services SmartApp).
Perhaps it does not fit your need exactly, but you can imagine the following workflow:
Your local server makes a call to your SmartApp every minute on GET /needs.
Your SmartApp returns what it needs.
Your local server does the work and sends the result back with a POST /result request.
You can imagine a better flow; this is just an example.
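As a rough illustration, here is a minimal sketch (plain Node.js, using the built-in fetch of Node 18+) of the local-server side of that workflow. The endpoint base URL, OAuth token and fulfilLocally() helper are placeholders: a real Web Services SmartApp exposes its own endpoint URL and requires OAuth setup.

```js
// Minimal sketch of the local-server side of the polling workflow (Node 18+).
// ENDPOINT, TOKEN and fulfilLocally() are placeholders, not real SmartThings values.
const ENDPOINT = 'https://graph.api.smartthings.com/api/smartapps/installations/<installation-id>';
const TOKEN = '<oauth-access-token>';
const headers = { Authorization: `Bearer ${TOKEN}`, 'Content-Type': 'application/json' };

// Placeholder: do the actual work against the internal REST server here.
async function fulfilLocally(needs) {
  return { status: 'done', needs };
}

setInterval(async () => {
  // 1. Ask the SmartApp what it needs.
  const needs = await (await fetch(`${ENDPOINT}/needs`, { headers })).json();
  // 2. Do the work against the internal REST server (which only this machine can reach).
  const result = await fulfilLocally(needs);
  // 3. Post the result back to the SmartApp.
  await fetch(`${ENDPOINT}/result`, {
    method: 'POST',
    headers,
    body: JSON.stringify(result),
  });
}, 60 * 1000);
```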

Related

Is there any way to improve service availability over the ping test?

To check service availability we have added a ping test, but it does not check the actual core functionality of the application; it just pings the server and returns the response.
Is there any way to check that the service's core functionality is working, beyond the ping test?
In most cases you need to check:
Database
APIs
Servers
A ping test generally just tests the servers.
The most comprehensive way to test the backend is to add an API endpoint that reads a value from the database (without caching); this way you test all three main cores.
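As a sketch of such an endpoint, assuming Express and PostgreSQL via the pg package (substitute your own framework and database client):

```js
// Health-check endpoint that exercises server, API layer and database in one call.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool(); // connection settings come from the PG* environment variables

app.get('/health', async (req, res) => {
  try {
    // A trivial, uncached read proves the database connection is alive.
    await pool.query('SELECT 1');
    res.json({ status: 'ok', checkedAt: new Date().toISOString() });
  } catch (err) {
    res.status(503).json({ status: 'down', error: err.message });
  }
});

app.listen(3000);
```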
BUT this approach is heavy on the backend, especially if you have a lot of users (for example, if 100K users are on your app at the same moment, that means 100K DB connections and 100K API requests/responses, which could make the server unavailable for other users).
The way I overcome this is the following:
There is a very small public file on the server (not on DNS) that holds the last date/time the backend was checked and found functional.
Every time a user opens the app, the app reads this file.
If it cannot read the file, then the servers are down for sure.
If the app can read the file, it checks whether (current time - last check time) > 1 minute; if so, it calls a CheckBackend API, which checks everything and updates the small file.
This method ensures that at most one full check is done per minute, which is not that heavy on the server.
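A minimal sketch of the app-side logic, assuming the status file lives at /status.json and the full check is exposed as /api/CheckBackend (both names are placeholders):

```js
// App-side availability check: read the small status file first, and only
// trigger a full backend check if the last one is older than a minute.
// The URLs are placeholders for your own status file and check endpoint.
async function checkServiceAvailability(baseUrl) {
  let status;
  try {
    // Cache-busting query parameter so we always read the current file.
    const res = await fetch(`${baseUrl}/status.json?t=${Date.now()}`);
    status = await res.json();
  } catch (err) {
    return false; // cannot even read the static file: the servers are down
  }

  const age = Date.now() - new Date(status.lastCheckedAt).getTime();
  if (age > 60 * 1000) {
    // More than a minute since the last full check: run CheckBackend,
    // which verifies DB + APIs + servers and rewrites status.json.
    const res = await fetch(`${baseUrl}/api/CheckBackend`);
    return res.ok;
  }
  return true; // a full check succeeded within the last minute
}
```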
Usually, applications listen on specific ports. Instead of ping, you can send a telnet request to IP:port, like this:
telnet test.netbeez.net 20011
Also, see https://netbeez.net/blog/telnet-to-test-connectivity-to-tcp/ for more information.
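If you want to run the same check programmatically rather than from a shell, here is a small sketch using Node's built-in net module (host and port taken from the telnet example above):

```js
// TCP connectivity check: equivalent to the telnet command above.
const net = require('net');

function checkPort(host, port, timeoutMs = 5000) {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once('connect', () => { socket.destroy(); resolve(true); });
    socket.once('timeout', () => { socket.destroy(); resolve(false); });
    socket.once('error', () => resolve(false));
  });
}

checkPort('test.netbeez.net', 20011).then((open) =>
  console.log(open ? 'port is reachable' : 'port is unreachable'));
```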

Can I run a front-end and back-end on Netlify?

I want to practice creating my own RESTful API service to go along with a client-side application that I've created. My plan is to use Node and Express to create a server. On my local machine, I know how to set up a local server, but I would like to be able to host my application (client and server) online as part of my portfolio.
The data that my client application would send to the server would not be significant in size, so there wouldn't be a need for a database. It would be sufficient to just have my server save received data dynamically in an array, and I wouldn't care about having that data persist if the user exits the webpage.
Is it possible to use a service like Netlify in order to host both a client and server for my purposes? I'm picturing something similar to how I can start up a local dev server on my computer so that the front-end can interface with it. Except now I want everything hosted online for others to view. I plan to create the Express server in the same repo as the front-end code.
No, Netlify doesn't allow you to run a server or backend. However, they do allow you to run serverless functions in the cloud. These can run for up to 10 seconds at a time. Netlify also has a beta feature called "background functions" that can run for up to 15 minutes. But honestly, for a RESTful API there are surely better solutions out there.
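For reference, a Netlify Function is just a file under netlify/functions/ that exports a handler; a minimal sketch is below. Note that functions are stateless, so the "save received data in an in-memory array" idea is not guaranteed to persist between invocations; real persistence needs some kind of store.

```js
// netlify/functions/messages.js — minimal Netlify Function sketch.
// The in-memory array is only for illustration: it will be reset whenever
// the function is cold-started, so do not rely on it for persistence.
let messages = [];

exports.handler = async (event) => {
  if (event.httpMethod === 'POST') {
    messages.push(JSON.parse(event.body));
    return { statusCode: 201, body: JSON.stringify({ count: messages.length }) };
  }
  return { statusCode: 200, body: JSON.stringify(messages) };
};
```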
If you are still looking for a "Netlify for the backend", you can consider Qovery. They have explained why it is a good fit for their users.

Getting access to Mesibo video and audio stream from outside a browser (i.e on a server)

I would like to process audio and video from a Mesibo conference on the server side and then, if possible, feed the processed stream back in as a new publisher (participant) in a different group (conference).
My current best guess would be something like this:
Run the Mesibo Javascript API in a virtual browser using node browser-run and Xvfb
Connect to the conference in the browser and somehow extract the necessary WebRTC connection details and feed this back to the node process controlling the virtual browser
Connect to the conference using node webrtc-client
Having to run a virtual browser every time seems like overkill. Also, I have no idea where I would get the WebRTC connection details (step 2) from in the virtual browser. Does the Mesibo Javascript API expose these anywhere?
Presumably, if I could get the above working, I could use the same webrtc-client instance to feed the processed stream back into the conference, but if I wanted to feed it into a different conference I'd have to create another virtual browser.
Anybody got any ideas?
The mesibo on-premise conference server exposes an RTP API, which may help here. However, the on-premise conference server will only become publicly available in Feb '21, so you will have to wait.
How would you expect step 2 to work? Are you looking to access the underlying peer connection?

Best way to connect 2 separate node processes with socket.io communicating to a client

I'm new to working with sockets and have a small system design question:
I have 2 separate Node processes for a web app: one is a simulator that is constantly running, and the second is an API server. Both share the same MongoDB database, and we have a React app running for the client, served by the API server.
I'm looking to implement socket.io for real-time notifications, so I've set up a simple connection between the API and the client.
My problem is that while the simulator runs, there are some events that I also want to trigger push notifications for, so my question is how to hook that into everything.
The file hierarchy is like:
app/
simulator/
api/
client/
I saw this article for communication between node processes and I currently have 3 solutions in mind:
Leave hierarchy as it is and install socket.io package inside simulator as well. I'm not sure if sockets work this way but can both simulator and api connect to the same socket?
Move simulator file into api file to fork as a child process so that the 2 processes can communicate via child/parent messaging. simulator will message api which will then emit updates through the socket to client
Leave hierarchy as is and communicate via node-ipc. Same situation as above with simulator messaging api first before api emits that to client
If 1 is possible, that seems like the best solution in my impression. It seems like extra work to add an additional layer of messaging for 2 and 3.
Leave hierarchy as it is and install socket.io package inside simulator as well. I'm not sure if sockets work this way but can both simulator and api connect to the same socket?
The client would have to create a separate socket.io connection to the simulator process. Then, the client can receive data from the API server over one connection and from the simulator over another connection. You would need two separate, independent socket.io connections from the client, one to the API server and one to the simulator. Simulator and API server cannot share the same socket unless they are in the same process.
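For illustration, this is what that two-connection setup looks like on the client with socket.io-client (the ports and event name are placeholders for wherever the API server and the simulator are listening):

```js
// Client opens two independent socket.io connections: one to the API server,
// one to the simulator. Ports/URLs are placeholders.
const { io } = require('socket.io-client');

const apiSocket = io('http://localhost:3000');       // API server
const simulatorSocket = io('http://localhost:3001'); // simulator process

apiSocket.on('notification', (msg) => console.log('from API server:', msg));
simulatorSocket.on('notification', (msg) => console.log('from simulator:', msg));
```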
Move simulator file into api file to fork as a child process so that the 2 processes can communicate via child/parent messaging. simulator will message api which will then emit updates through the socket to client
This is really part of a broader option that the simulator communicates with the API server and sends it data that the API server can then send to the client over the single socket.io connection that the client made to the API server.
There are lots of different ways for the simulator process to communicate with the API server.
Since it's already an API server, you can just make an API for this (probably non-public). The simulator calls an API to send data to the client. The API server receives that data and sends it to the client.
As you suggest, if the simulator is run from the API server as a child process, then you can use parent/child communication messaging built into node.js. Note, you don't have to move the simulator files into the API file at all. You can just use child_process to launch the simulator as another nodejs app from another project. You just have to know the path to that other project.
You can use any other communication mechanism you want between the simulator process and the API server process. There could be a socket.io connection between them. You could use several forms of IPC, etc.
If 1 is possible, that seems like the best solution in my impression.
Your #1 option is not possible as separate processes can't use the same socket.io connection.
It seems like extra work to add an additional layer of messaging for 2 and 3.
My options #1 and #2 are not much code in each server. You're doing interprocess communication. You should expect to use some code to enable that. But, it's not hard at all.
If the lifetime of the simulator server and the API server are always together (they have no independent uses), then I'd probably do the child process thing where the API server launches the simulator and then use parent/child messaging to communicate between them. You do NOT have to combine sources to do this.
The child_process module can run the simulator process by just knowing what directory it is located in.
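A sketch of that child-process approach, where the API server forks the simulator from its own project directory and relays its messages to clients over socket.io (the simulator path, port and event names are placeholders):

```js
// API server: fork the simulator as a child process and forward its messages
// to connected clients.
const path = require('path');
const { fork } = require('child_process');
const { Server } = require('socket.io');

const io = new Server(3000, { cors: { origin: '*' } });

// Launch the simulator from its own project; an IPC channel is set up automatically.
const simulator = fork(path.join(__dirname, '../simulator/index.js'));

// Anything the simulator reports is pushed to all connected clients.
simulator.on('message', (update) => io.emit('notification', update));

// Inside simulator/index.js, the child simply calls:
//   process.send({ type: 'someEvent', payload: { /* ... */ } });
```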
Otherwise, I'd probably make a small web server on a non-public port in the API server and have the simulator just send data to that other web server. I often refer to this as a control port. It's a way of "controlling or diagnosing" the API server internals and can only be accessed from within the private network and/or with credentials. The reason I'd use a separate web server (in the same nodejs app as the API server) is to make it easy to secure so it can't be accessed from the outside world like the regular public APIs can. You just put the internal web server on a port that is not exposed to the outside world.
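A sketch of that control-port idea, assuming Express inside the API server process (the port and route are placeholders; the key point is that it listens only on an internal interface):

```js
// Internal-only "control" server inside the API server process: the simulator
// POSTs updates here and they are relayed to clients via socket.io.
const express = require('express');

function startControlServer(io) {
  const control = express();
  control.use(express.json());

  control.post('/internal/notify', (req, res) => {
    io.emit('notification', req.body); // relay to connected clients
    res.sendStatus(204);
  });

  // Listen only on localhost so the outside world can never reach it.
  control.listen(4000, '127.0.0.1');
}

module.exports = startControlServer;
```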
You should check the Socket.IO docs on adapters and emitters. These let you emit to connected sockets from different Node processes and also help with scaling.
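As a sketch of that approach, assuming Redis and the @socket.io/redis-adapter and @socket.io/redis-emitter packages (Socket.IO v4; check the docs for the current package names): the API server attaches the adapter, and the simulator emits through a Redis-backed emitter without holding any client connections itself.

```js
// --- API server process: attach the Redis adapter ---
const { Server } = require('socket.io');
const { createClient } = require('redis');
const { createAdapter } = require('@socket.io/redis-adapter');

async function startApiServer() {
  const pubClient = createClient({ url: 'redis://localhost:6379' });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  const io = new Server(3000);
  io.adapter(createAdapter(pubClient, subClient));
  return io;
}

// --- simulator process: emit through Redis, no client connections needed ---
const { Emitter } = require('@socket.io/redis-emitter');

async function notifyFromSimulator(update) {
  const redisClient = createClient({ url: 'redis://localhost:6379' });
  await redisClient.connect();
  new Emitter(redisClient).emit('notification', update);
}
```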

nodejs local agent functionality

I have a website hosted on Heroku and Firebase (frontend in React, backend in Node.js), and I have some "long-running scripts" that I need to run. I had the idea to deploy a Node process to my Raspberry Pi to execute these (because I need resources from inside my network).
How would I set this up securely?
I think I need to create a Node.js process that regularly checks the central server for jobs to be done. Can I use sockets for this? What technology would you use?
I think the design would be:
1. Local agent starts and connects to server
2. Server sends messages to agent, or local agent polls with time interval
EDIT: I have multiple users that I would like to serve. The user should be able to "download" the agent and set it up so that it connects to the remote server.
You could just use Firebase for this, right? Create a new Firebase DB for "tasks" or whatever that is only accessible to you. When the central server (whatever that is) determines there's a job to be done, it adds it to your tasks DB.
Then you write a simple node app you can run on your raspberry pi that starts up, authenticates with firebase, and listens for updates on your tasks database. When one is added, it runs your long running task, then removes that task from the database.
Wrap it up in a bash script that'll automatically run it again if it crashes, and you've got a super simple pubsub setup without needing to expose anything on your local network.
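A minimal sketch of that worker, assuming the Firebase Realtime Database and the firebase-admin SDK on the Raspberry Pi (the database URL, credential setup and runLongTask() are placeholders):

```js
// Raspberry Pi worker: listen for new tasks, run them, then remove them.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault(), // or a service-account key file
  databaseURL: 'https://<your-project>.firebaseio.com',
});

// Placeholder for the actual long-running work that needs local resources.
async function runLongTask(task) {
  console.log('running task', task);
}

admin.database().ref('tasks').on('child_added', async (snapshot) => {
  await runLongTask(snapshot.val());
  await snapshot.ref.remove(); // task done: take it off the queue
});
```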
