Triggering a command on HTTP(S) connection/disconnection? - linux

I'm running a streaming radio on my Raspberry Pi 3 using MPD (Music Player Daemon).
Only two of us use this radio, and most of the time neither of us is connected to listen to it, which means that the service is running for nothing.
MPD idles at around 5% CPU and climbs to close to 40% when someone connects, but more generally, I was wondering whether there is some interrupt, system notification or similar mechanism that could be used to trigger a command when, say, someone connects to a specific port, or disconnects from it.
The goal would be to only run a service when needed.
I am using nginx as server, if that's of any help.
The only thing I could find is constant polling, like what is described here: https://www.tecmint.com/find-all-clients-connected-to-http-or-https-ports/
But, of course, I would need to react "instantly" and not wait between two polls.
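One event-driven alternative to polling is an inetd / systemd-socket-activation style setup: something owns the public port, starts the real service when the first client connects, and stops it when the last client disconnects. Below is a minimal Python sketch of that idea; the port numbers, the "mpd" systemd unit name and the use of systemctl are assumptions, not taken from the question, and a production setup would more likely use systemd socket activation or an nginx stream proxy directly.
# On-demand proxy sketch (assumed ports and service name; needs permission
# to run systemctl). Starts the backend on the first connection, forwards
# bytes both ways, stops the backend when the last client leaves.
import socket
import subprocess
import threading
import time

LISTEN_PORT = 8000      # public port clients connect to (assumption)
BACKEND_PORT = 8001     # port the real service listens on (assumption)
SERVICE = "mpd"         # systemd unit to start/stop (assumption)

clients = 0
lock = threading.Lock()

def pump(src, dst):
    # Copy bytes from src to dst until either side goes away.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client_sock):
    global clients
    with lock:
        clients += 1
        if clients == 1:
            subprocess.run(["systemctl", "start", SERVICE], check=False)
            time.sleep(1)  # crude wait for the backend to come up
    try:
        backend = socket.create_connection(("127.0.0.1", BACKEND_PORT))
    except OSError:
        client_sock.close()
    else:
        t = threading.Thread(target=pump, args=(backend, client_sock), daemon=True)
        t.start()
        pump(client_sock, backend)
        t.join()
    finally:
        with lock:
            clients -= 1
            if clients == 0:
                subprocess.run(["systemctl", "stop", SERVICE], check=False)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", LISTEN_PORT))
srv.listen()
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()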

Related

Does X11 have a lifesign or constant stream?

I have a fault-tolerant application where an X server requests (by some other mechanism) that an application be started on a remote client, then receives and displays its X window. Fault tolerance means that the server needs to detect loss of the connection to the client, call a different back-up client, start the application there and show its window.
My question is whether there exists a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken.
Experiments show that when unplugging a cable it takes some TCP timeout to detect the connection loss at socket level. This is very OS-dependent. In our case it was about 30 minutes, after which the X server eventually closed the window.
So another assumption could be that the X11-stream constantly delivers some commands and the server could implement some logic like this: If the X11-stream does not deliver any X11 traffic for a timeout y (e.g. 3 seconds), we assume the connection is lost and actively close the window and establish the connection to the fall-back-client.
Is the assumption true? I did not see any such statement in the X11-protocol about how to detect connection loss. Is there any explicit lifesign that is regularly transmitted? Or is the assumption valid that there is constant traffic? Or could there be longer periods of inactivity where nothing is transmitted at all while the connection is perfectly up and running?
There is a NoOperation command from the client that could be used for such purpose. But do clients usually implement something like that as a lifesign?
I have a fault tolerant application, where an X Server needs to start an Application...
I don't think that an X server can "start an application". Maybe some setup allows something similar to that, but normally it is not so.
...whether there exists a mechanism in the X11 protocol that allows an X11 server to reliably detect whether the connection has been broken.
No, it does not exist. The X11 protocol is based on TCP/IP, which does not directly provide this kind of "heartbeat". I think the assumption is that, if you click or otherwise stimulate an X11 window, the TCP layer will time out or raise some other error if the client application is gone.
I did not see any statement in the X11-protocol about how to detect connection loss.
There is a NoOperation command from the client that could be used for such purpose. But do clients usually implement something like that as a lifesign?
Maybe some application uses that NoOperation, but the purpose would be different from what you need. I mean, the X11 server is like an extension from the point of view of an application; the application may have an interest in knowing whether the server is up and working, but the contrary is not true. And anyway, even if the server could detect that the application is gone, there is probably no way to tell the server to launch another application.
Probably a special proxy could be deployed; it could launch the application, monitor the connection (in both directions) and take the required steps in case the application goes away. But then again, who would monitor the proxy application?
First of all, the X protocol relies completely on TCP to send and receive information.
You cannot safely build a timeout-capable transaction on top of it to detect a dead peer. TCP is designed to retransmit only those segments that have already been sent but not acknowledged. The X protocol is completely asynchronous, in the sense that you send a command and can receive many responses or events unrelated to that command before you receive its response. There is no heartbeat mechanism in the X protocol (the NoOperation command can be used to synchronize operations with the server, but you cannot overuse it, as that slows down the X connection severely; just launch any client with the -synchronous option to see it, see X(7)). You can even have TCP connections alive for years without exchanging a single packet. There is a mechanism, activated by the SO_KEEPALIVE socket option, that makes TCP employ such a heartbeat on a connection that has no data to transmit, but the X11 protocol normally doesn't make use of it.
You don't post any code, or a description of how the system is configured. The standard X server never starts a connection by itself, except when launched specifically to negotiate with an XDMCP server (and this is done over UDP) in order to serve as an X terminal.
From your words, you probably don't know that the roles of server and client are exchanged in the X protocol (the client is the remote application that connects to the server to display its output, and the server is the application that controls your display, mouse and keyboard). There is no means for the server to create a new client, so you must be creating this connection by some other means (probably through SSH, but you don't describe it).
By the way, when you say:
Experiments show that when unplugging a cable it takes some TCP timeout to detect the connection loss at socket level. This is very OS-dependent. In our case it was about 30 minutes, after which the X server eventually closed the window.
That is not OS-dependent. It is precisely the standard behaviour: when there is no traffic to send, no packet is exchanged, so no detection is made (unless your client, which, remember, is the remote application program that wants to show its data on your local server, activates the SO_KEEPALIVE option, and even then it takes several lost probes before the connection is declared dead). In your case the amount of time varies because the timers don't start until some data is sent over the unplugged connection, and that is what makes it variable (not OS-dependent).
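As a side note on the SO_KEEPALIVE mechanism mentioned above: on Linux the keepalive timers can be tuned per socket. The following is only a small illustrative Python sketch of those socket options (the host name, port and timer values are arbitrary placeholders); whether you can apply it depends on whether you control the socket in question.
# Illustrative only: enable TCP keepalive on a socket you own (Linux).
import socket

s = socket.create_connection(("remote-host.example", 6000))  # placeholder endpoint
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Linux-specific knobs: start probing after 30 s of idle, probe every 10 s,
# declare the peer dead after 3 failed probes (roughly 1 minute instead of ~30).
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)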
On the other hand, you cannot expect the server to turn your monitor on if you leave the office and switch it off by mistake or by accident. What is the fault-tolerance specification in that case?
IMHO, as far as the presentation protocol is concerned, the application should be ready to show you as much information about the system as possible as soon as you activate the connection (but the connection must be something that is allowed to fail). What matters is the means you develop for the application to be fault tolerant, even when you are not there to see the display. Will somebody be notified that no one is looking at the screen? Are you going to detect the absence of operators in that case? Don't take this as a flame, but common sense should prevail here.
In case you need to ensure that connectivity to the remote host is available, you need to use another means to check for it. I recommend having a simple application ping the remote host and alert you in case you don't get a positive result. Or you can open a connection to the server and close it as soon as you get a positive response (the first packet, for example). This leads us to the next step, which is to ensure that some human is looking at the (turned-on) screen of the display :)
For example, you can run a client in parallel to the one you are interested in, and force a heartbeat by asking for some server atom name (or a root window property value) in a loop with some delay. This will make the connection fail, or your client can raise an alert if it doesn't receive the answer within some configurable time.
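A minimal sketch of the "open a connection and close it on the first positive response" idea from the paragraph above, in Python (host, port, interval and timeout are placeholders); it only proves that the remote host still accepts connections, not that the original X11 session itself is healthy.
# Simple reachability probe: alert when the remote host stops accepting
# connections on the given port. Values below are placeholders.
import socket
import time

HOST, PORT = "remote-client.example", 6000
CHECK_EVERY = 3      # seconds between probes
TIMEOUT = 2          # seconds to wait for the TCP handshake

while True:
    try:
        conn = socket.create_connection((HOST, PORT), timeout=TIMEOUT)
        conn.close()                             # positive response: close at once
    except OSError as err:
        print("connection check failed:", err)   # trigger fail-over here
    time.sleep(CHECK_EVERY)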

Continuous audio download stream

I'm looking to set up a server which will read from some audio input device and serve that audio continuously to clients.
I don't necessarily need the audio to be played by the client in real time; I just want the client to be able to start downloading from the point at which they join, and then leave again.
So say the server broadcasts 30 seconds of audio data, a client could connect 5 seconds in and download 10 seconds of it (giving them 0:05 - 0:15).
Can you do this kind of partial download over TCP, starting whenever the client connects, and end up with a playable audio file?
Sorry if this question is a bit too broad and not a 'how do I set variable x to y' kind of question. Let me know if there's a better forum to post this in.
Disconnect the concepts of file and connection. They're not related. A TCP connection simply supports the reliable transfer of data, nothing more. What your application chooses to send over that connection is its business, so you need to set up your application so that it sends the data you want.
It sounds like what you want is a simple progressive HTTP internet radio stream, which is commonly provided by SHOUTcast and Icecast servers. I recommend Icecast to get started. The user connects, they optionally get a small buffer of a few seconds up front to get them started, and when they disconnect, that's it.
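To make the "you get the stream from the moment you connect" behaviour concrete, here is a rough Python sketch of a progressive HTTP audio stream in the spirit of what Icecast/SHOUTcast do. The capture command (arecord) and its parameters are assumptions, and each client spawns its own capture process, which only works if your audio source allows multiple readers; in practice you would point a real source client at Icecast instead.
# Progressive stream sketch: each client starts receiving audio captured
# from the moment it connects. Capture command and port are assumptions.
import subprocess
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CAPTURE_CMD = ["arecord", "-f", "cd", "-t", "wav"]  # assumed ALSA capture

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "audio/wav")
        self.end_headers()
        # Start capturing at the moment this client connects.
        proc = subprocess.Popen(CAPTURE_CMD, stdout=subprocess.PIPE)
        try:
            while True:
                chunk = proc.stdout.read(4096)
                if not chunk:
                    break
                self.wfile.write(chunk)
        except (BrokenPipeError, ConnectionResetError):
            pass  # client went away
        finally:
            proc.kill()

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8000), StreamHandler).serve_forever()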

Possible to control garage door with Garmin IQ?

I'd like my Fenix 3 to do the following:
Trigger = hold down start button (i.e. shortcut)
Send message via BT or WiFi to a server (Linux or Windows or Arduino or whatever)
I'll take care of the message and open/close my garage door.
After a bike tour I'd like to easily and safely open my garage door. I have a VmWare server running at home. I could use one of the machines on this server to listen to the messages or I could set up an Arduino or similar.
The main question is: Can I write an IQ app that utilizes the shortcut concept on the clock, i.e. triggered by long click on start or lap button?
Clarification: There seems to be some kind of global action for long presses. I can, for example, assign "Save position" to a long press on start/stop. This works even from inside other apps.
Can the clock communicate with sensors (i.e. Arduino or other BT client) even if not in training mode?
Clarification: I need to communicate directly with my Arduino via Bluetooth, i.e. not via my iPhone.
Thanks in advance.
Short answer: Yes
Long answer: If you record the time a keydown event comes in, and then check for a "long" press when the key is released based on the time difference, you can fake it. There is no event for a long press of a physical key, though. I am also pretty sure your app needs to be the current one for this to work.
Link to the InputDelegate event options: http://developer.garmin.com/downloads/connect-iq/monkey-c/doc/Toybox/WatchUi/InputDelegate.html
As for the sensors question, I am not sure exactly what you are asking. Your app can do whatever you want, and it is my understanding that only one app will be running at a time.
Disclaimer: Thus far I have only been working with the emulator, I'm still waiting for my watch to get here.
You cannot write anything that hijacks user input events from another active application (including the watch face). You could make your own watch face, but it wouldn't have the ability to send network messages and it has only one way to accept user input (the look-at-watch gesture).
This is something that you can do pretty easily from a watch app or a widget. Assuming that your fenix3 is connected to your phone via Bluetooth, you can send HTTP GET requests as you see fit.
I've written a simple app that I call GIFTTT that uses the IFTTT Maker channel to open/close my garage door (and all sorts of other things).
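For the receiving side the asker mentions ("I'll take care of the message"), here is a hedged sketch of a tiny HTTP endpoint on the Linux box; the path, port, token and the command that actually toggles the door are all made-up placeholders, and you would want proper authentication (at least HTTPS plus a secret) before exposing anything like this.
# Hypothetical receiver for the watch/phone HTTP GET request.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET_TOKEN = "change-me"                     # shared secret (placeholder)
TOGGLE_CMD = ["/usr/local/bin/toggle-garage"]  # your GPIO/Arduino script (placeholder)

class GarageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/garage/toggle?token=" + SECRET_TOKEN:
            subprocess.run(TOGGLE_CMD, check=False)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"toggled\n")
        else:
            self.send_response(403)
            self.end_headers()

HTTPServer(("0.0.0.0", 8080), GarageHandler).serve_forever()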

Multiple BLE Connections using Linux and Bluez 5.0

I am currently attempting to connect to multiple BLE devices using BlueZ 5.0 and Linux. I have one host BLE adapter and I have modified the gatttool to connect and perform this function. If I run an instance of the modified gatttool, I successfully connect and receive notification data from the BLE device. If I run another instance of the modified gatttool and connect to another BLE device, this application starts receiving notification data from both BLE devices and the initial application no longer receives any data. I believe this is due to the socket setup, where both applications are configuring their sockets to the same address and PSM (the newest instance receives the data whereas the other is starved). Is there a way to prevent this condition? Ideally, I want one application to connect to multiple devices. I assume that the application can only have one socket for the reason that multiple sockets will have the same issue as the multiple instances above. My BLE device is a TI CC2540 keyfob acting as a heartrate monitor.
I started an answer so I could have more space...
I'm using a combination of Python and C to get my code to work, so my "code" may look funny because it could be from either. Also, I used BlueZ 4, as 5 didn't support the kernel I was using. Let me know if there's an issue and I can clarify.
It seems like there are several ways of doing things, but I ended up opening separate sockets for different tasks. You can open a single socket and then set the socket options to turn filtering off, and you should get all the packets in one place. However, that was my initial way of doing it and I found that my connections would die within seconds.
To scan for connections I opened a socket(AF_BLUETOOTH, SOCK_RAW, BTPROTO_HCI) and then did a bind on device 0 (there's a function called hci_get_route to get an available device number). You can then call hci_le_set_scan_parameters to set options, setsockopt(SOL_HCI, HCI_FILTER, filter) to get just the LE scan events, and then hci_le_set_scan_enable to turn on scanning.
Each device connection was made with a socket(AF_BLUETOOTH, SOCK_SEQPACKET, BTPROTO_L2CAP), which you then tell to connect to a particular device by calling connect on the socket with a struct sockaddr_l2 that has that device's address in it. On that socket you should only get packets from that device. (One caveat: I found that my dongle wouldn't allow a connection while active scanning was taking place. I had to temporarily shut scanning off just before connecting and then turn it back on; otherwise I got a BUSY error from errno.)
After saying all that, though... I think the way you're supposed to do everything in Bluez 5 is to use DBUS. Unfortunately that wasn't really an option for what I was doing. The functions I mentioned are in the shared lib that apparently isn't installed by default in 5 (you have to explicitly ask for it to be installed with configure). They stopped installing the shared lib by default because they wanted to encourage people to use DBUS instead.
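Since the answer above mentions using Python as well, here is a rough sketch of just the scanning-socket part it describes (raw HCI socket plus an HCI_FILTER for LE meta events, with the same "subevent != 0x02" check as in the next answer). It assumes LE scanning has already been enabled by other means (hcitool lescan, the C calls named above, or bluetoothctl), requires root/CAP_NET_RAW, and packs the BlueZ struct hci_filter layout for a little-endian machine.
# Rough sketch: receive LE advertising reports on a raw HCI socket.
# Assumes scanning was already enabled elsewhere; needs root/CAP_NET_RAW.
import socket
import struct

HCI_EVENT_PKT = 0x04
EVT_LE_META_EVENT = 0x3E
LE_ADVERTISING_REPORT = 0x02   # the "subevent != 0x02" check

sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_RAW, socket.BTPROTO_HCI)
sock.bind((0,))                # hci0

# struct hci_filter: u32 type_mask, u32 event_mask[2], u16 opcode
flt = struct.pack("<IQH", 1 << HCI_EVENT_PKT, 1 << EVT_LE_META_EVENT, 0)
sock.setsockopt(socket.SOL_HCI, socket.HCI_FILTER, flt)

while True:
    pkt = sock.recv(260)
    # pkt[0]=packet type, pkt[1]=event code, pkt[2]=param len, pkt[3]=subevent
    if len(pkt) < 14 or pkt[1] != EVT_LE_META_EVENT:
        continue
    if pkt[3] != LE_ADVERTISING_REPORT:
        continue               # ignore non-advertisement LE meta events
    addr = ":".join("%02X" % b for b in pkt[12:6:-1])  # BD_ADDR comes reversed
    print("advertisement from", addr)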
We have combined the code from hcitool and gatttool. The code works well for 2 devices (scan, hci_le_create_conn and gatt_connect). I believe there is no limitation on the number of devices used.
1. Start cmd_lescan (from hcitool.c)
2. For each device scanned:
   - cmd_lecc (from hcitool.c)
   - gatt_connect (from gatttool.c)
This way one process can manage multiple BLE devices. We do not have to turn off scanning; we just ignore non-advertisement messages:
if (meta->subevent != 0x02)
    continue;
Thanks and looking forward to comments.

Where should I place input/output console for server?

I'm developing a simple 2D online game and now I'm designing my server. The server will run on a Linux VPS and I need a way to communicate with it (for example to shut it down; as it runs on a VPS, simply closing a terminal won't work). So I think there are 2 options:
1) Write 2 applications: a server which doesn't print anything and doesn't accept console input, and a second application, a console, which sends commands to the server (like exit, get online players, etc.).
2) Write 1 application with 2 threads: one is the real server, and the second thread is used for cin and cout. However, I'm not sure whether this will work on a VPS...
Or maybe there is a better approach? What is the usual way of doing this?
Remember that it must work on a VPS (I only have SSH access to it).
Thanks
I would go for a "daemon" (server) for the main server function and then use a secondary application that can connect to the server and send it commands.
Or just use regular signals, like most other servers do - when you reconfigure your Apache server, for example, you send it a SIGHUP signal that restarts the server. That way, you don't need a second application at all - just "kill -SIGHUP your_server_pid".
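A small sketch of that signal approach, shown in Python for brevity (the handler bodies are placeholders; the same pattern works in C/C++ with sigaction). With this in place, "kill -SIGHUP <pid>" over SSH is all the "console" you need.
# Minimal signal-driven control sketch (handler bodies are placeholders).
import signal
import time

running = True

def on_sighup(signum, frame):
    print("SIGHUP received: reloading configuration")   # placeholder action

def on_sigterm(signum, frame):
    global running
    print("SIGTERM received: shutting down cleanly")
    running = False

signal.signal(signal.SIGHUP, on_sighup)
signal.signal(signal.SIGTERM, on_sigterm)

print("server running; send signals with: kill -SIGHUP <pid>")
while running:           # stand-in for the real game loop
    time.sleep(1)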
