I know uwsgi is the protocol implemented by the uWSGI server. But how does it differ from WSGI?
The uwsgi protocol is a wire protocol used over the socket between uWSGI processes. It cannot be compared to WSGI, which is a programmatic API for Python. The uwsgi protocol is more akin to FastCGI or SCGI: it is language agnostic, and from memory there is very little difference between it and SCGI.
In short, putting 'wsgi' in the name uwsgi was a bad idea, as the protocol is actually unrelated to WSGI. You still need an adapter to get from uwsgi to Python WSGI; in the case of uWSGI that adapter is written in C and embedded in the server. One could just as well write an adapter between uwsgi and other languages' web application APIs, and uWSGI internally has such things.
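To make the distinction concrete: WSGI is nothing more than a Python calling convention between server and application. A minimal sketch (`application` is the conventional entry-point name that uWSGI looks for by default):

```python
# Minimal WSGI application. WSGI is a Python API, not a wire protocol:
# the server (e.g. uWSGI) calls this function once per request, however
# the request reached it (plain HTTP, the uwsgi protocol, FastCGI, ...).
def application(environ, start_response):
    body = b"Hello, WSGI!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

The uwsgi wire protocol, by contrast, is only concerned with how the request bytes travel from a front-end web server to the process that eventually calls this function.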
Ubuntu 20.04.1 LTS
Erlang/OTP 23 [erts-11.1] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]
Elixir 1.10.3 (compiled with Erlang/OTP 22)
Our Elixir project requires instantiating supervised communications with a long-running, data-streaming Python process. Data will be pushed to its Elixir counterpart once every second. Both processes run on the same machine. (Exile doesn't seem ready for production environments, Porcelain/Erlport appear to have been abandoned, Rambo is apparently only suitable for transient jobs, while Ports suffer from this fatal flaw.)
Any stable libraries ideally suited for this? If so, where can we find their recipes for this use case?
while Ports suffers from this fatal flaw.
Is it really a problem for your use case?
I think you don't need any library. Ports give you everything you need and they are simple enough to be used without 3rd party library.
These are some key points if it's your first time using Ports:
they are great because the processes run outside the Erlang VM. A crash in the Python script doesn't affect your Elixir processes.
they are an easy way to run long-running processes (in any language).
when a port is closed, it doesn't kill the Python process; it just closes the in/out file descriptors. In Python you need to detect when the in/out fds are closed (just check whether you receive an EOF).
don't use stdin/stdout for the port <-> Python communication. Use file descriptors 3 and 4 (the :nouse_stdio option when opening a port).
take a look at the {:packet, N} option, it will make your life much easier!
only the process that opens the port can send and receive messages (to the Python process via the port). It's usually better to open the port in a dedicated GenServer, which becomes the interface to your Python program. That way the GenServer takes care of the port, it can be supervised, and many Elixir processes can use the port by sending messages to the GenServer.
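As a sketch of what the Python side looks like under those options — file descriptors 3/4 from :nouse_stdio, a 4-byte big-endian length prefix from {:packet, 4} (the helper names here are my own):

```python
import os
import struct

def read_packet(fd=3):
    """Read one {:packet, 4}-framed message from the port.
    Returns None on EOF, i.e. when the Elixir side closed the port."""
    header = os.read(fd, 4)
    if len(header) < 4:
        return None                       # EOF: time to exit cleanly
    (length,) = struct.unpack(">I", header)
    payload = b""
    while len(payload) < length:
        chunk = os.read(fd, length - len(payload))
        if not chunk:
            return None
        payload += chunk
    return payload

def write_packet(payload, fd=4):
    """Send one length-prefixed message back to the Elixir side."""
    os.write(fd, struct.pack(">I", len(payload)) + payload)
```

The main loop then just calls `read_packet()` repeatedly and exits when it returns None, which is exactly the EOF detection mentioned above.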
I've used Ports + Python to run YOLO, detecting objects in real time. I wrote a detailed article on how to use Ports with a long-running Python program, which I think could be useful: Real-time Object Detection with Phoenix and Python. I describe how to start and manage ports with Python, how to define a binary protocol, how to detect crashes, and how to wrap the port with a GenServer so it can be supervised.
And here is a great article written by Saša Jurić: Outside Elixir
What is the difference between a daemon and a service? (On Windows or Linux.)
A daemon is a background, non-interactive program. It is detached from the keyboard and display of any interactive user. The word daemon for denoting a background program is from the Unix culture; it is not universal.
A service is a program which responds to requests from other programs over some inter-process communication mechanism (usually over a network). A service is what a server provides. For example, the NFS port mapping service is provided as a separate portmap service, which is implemented as the portmapd daemon.
A service doesn't have to be a daemon, but usually is. A user application with a GUI could have a service built into it: for instance, a file-sharing application.
For more details: https://askubuntu.com/questions/192058/what-is-technical-difference-between-daemon-service-and-process
Daemons are processes running in the background; they are not in your face. They do certain tasks at set times or respond to certain events.
In Windows, daemons are called services.
Daemon
From wikipedia:
A daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.
For example, say you want to ping google.com. Something in your OS must know how to handle the domain name resolution, and that something is a daemon.
More to read : Berkeley Internet Name Daemon (BIND)
Service
That name comes from the client-server model: an application runs as a service on a server, and a client version of the application is used to access the service. For example, an Apache HTTP server is a service on a server, and a Chrome browser is a client on a PC.
More to read: Client Server Model
A daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.
Daemons are the subset of services that stay resident in memory, waiting to service a request.
For example: crond, ftpd, etc.
A service, on the other hand, is a server application (or set of applications) that runs in the background waiting to be used or carrying out an essential task. Services are typically invoked via inter-process communication.
For example: httpd
I want to communicate with my raspberry pi, through my webserver.
I want to use a web interface on my server, with which I can control an LED on my PI.
Can I use Node.js for this? Or does anyone have a good idea or examples?
regards
You can indeed use just about any web server to communicate with the pi and thereby control its GPIO pins.
I wrote a web server specifically to interface with the pi's GPIO capabilities complete with utilities and examples if you want to try it out. It's a very lightweight native-code (C++) web server that you can use to control your LEDs (or what have you) with about 5 mins of setup:
OliWeb on GitHub
You can install it using git with:
sudo git clone https://github.com/m2ware/OliWeb.git
You could also install just about any other web server out there (Nginx, Node.js, take your pick) and set up CGI scripts that call command-line utilities to drive the LED pins. Gordon's WiringPi utilities are easy to install and use; installation and usage instructions are below.
Gordon's WiringPi Utility
Each web server will have its own particulars in terms of invoking command line interfaces via CGI. If you're interested in NodeJS specifically, this describes how to invoke command-line functionality from Node:
How to invoke external scripts programs from node js
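The same pattern works from any language; here is a hedged Python sketch that shells out to the WiringPi `gpio` utility (the pin numbers use WiringPi numbering, and the `runner` parameter is my own addition, just to make the sketch testable without hardware):

```python
import subprocess

def set_led(pin, on, runner=subprocess.run):
    """Drive an LED by shelling out to the WiringPi `gpio` utility,
    e.g. `gpio write 0 1` switches WiringPi pin 0 on."""
    runner(["gpio", "write", str(pin), "1" if on else "0"], check=True)
```

A CGI script behind any web server could call `set_led(0, True)` in response to a request, which is all the "web interface controls an LED" plumbing really amounts to.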
I know it's a little late, but for those who may still be interested, I recently developed a school project that does exactly this job.
I've used WebSocket and GPIO handling.
Here's the repo for the client: https://github.com/jkomyno/material-led-controller/
Here's the one for the server (you gotta put this on your RPI):
Is there a (working) example of how to create an RPC connection from Windows to Linux?
The client should be a Windows NT application; the server is Linux.
It needs to be MSRPC.
No CORBA, no XML-RPC, no SUN-RPC, etc.
MSDN says this:
RPC can be used in all client/server applications based on Windows
operating systems. It can also be used to create client and server
programs for heterogeneous network environments that include such
operating systems as Unix and Apple.
Unfortunately, after spending a few hours on Google, I'm giving up.
My expectation:
The Linux node should have Samba installed, because its MSRPC implementation works.
Using an IDL file, I generate stubs for both client and server.
The client is built using MSVC.
The server is built using gcc, with some includes/libraries from Samba (or other libs).
The Linux node must have an RPC port mapper.
Can someone point me in the right direction?
I think you have 2 possible ways to deal with this:
1- You can try using DCOM with Wine, which means you will actually write your code for Windows, but at the same time you can test your results as you go and avoid WinAPI calls that Wine is not able to handle properly. This approach will let you generate stub code from your IDL files.
2- You can try using Samba RPC Pluggable Modules, but I am afraid in this case the RPC communication will be more primitive.
Edit:
It seems there are many other ways. I found a list of libraries in the DCOM Wikipedia article; j-Interop, for example, looks particularly promising.
I have come across a couple of proprietary applications on Linux platform which are administered via telnet. Remote telnet is not enabled but on the host you do a telnet session. You get an interface where you enter commands to make the application work. I was wondering how a telnet interface is built for any particular application. Not looking for a step-by-step, just a basic/general/big-picture answer of how one can approach building a telnet interface for an application.
telnet is based on the TCP/IP protocol. To "do" telnet from a C program, you'd start messing with sockets, accept()-ing connections and reading and writing to them from fork()-ed child processes (that's VERY briefly it).
If the app is already there and already communicates on the console via stdin/stdout, you can rig a telnet interface onto it using (a) some configuration in your Internet daemon, (x)inetd, or (b) by misusing the Swiss Army knife of TCP/IP, netcat.
The docs for both those programs describe how to set things up, vaguely. If you need more help, you know where to ask!
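If you'd rather not lean on inetd or netcat, rolling a small line-oriented TCP command loop by hand isn't much code either. A Python sketch (the `status`/`quit` command set is invented purely for illustration):

```python
import socketserver

class CommandHandler(socketserver.StreamRequestHandler):
    """A minimal telnet-style command loop: prompt, read a line,
    answer; 'quit' ends the session."""
    def handle(self):
        self.wfile.write(b"app> ")
        for raw in self.rfile:
            cmd = raw.strip().decode(errors="replace")
            if cmd == "quit":
                self.wfile.write(b"bye\r\n")
                break
            elif cmd == "status":
                self.wfile.write(b"running\r\n")
            else:
                self.wfile.write(b"unknown command\r\n")
            self.wfile.write(b"app> ")

def serve(port=2323):
    """Blocks forever; try it with `telnet localhost 2323`."""
    with socketserver.TCPServer(("localhost", port), CommandHandler) as srv:
        srv.serve_forever()
```

A plain telnet client works against this because telnet over a clean connection is essentially just a raw TCP stream of lines; a production interface would additionally want to filter telnet IAC option-negotiation bytes.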