How do I connect external ports on applications running on different hosts using Python? (redhawksdr)

I have been successfully connecting external ports on applications running in different domains and nodes on a single host using Python scripts: uses_component.connect(provides_component, providesPortName="portName").
Now I want to deploy one application on a different host, but I get an error. I launch the remote domain and node using nodeBooter, and I can use Python locally on that host to control it, launch the waveform, and start it. But if I run Python on the uses-port host, it cannot redhawk.attach() to the domain on the provides-port host. The error is StandardError: Did not find domain . The domain is running on the other host, and nameclt list sees it, so the Naming Service is connected properly. Is this supposed to be possible and I am just missing something, or is there a problem with making external connections between domains on different hosts?

I'm going to use Docker to emulate your environment; hopefully I understood your situation correctly. I have three machines: A, B, and C. A and B each have their own domain, GPP, and running waveform. In my case, A and B are Docker containers. C will be used to reach out, interact with A and B, and make the connections.
These images are public so feel free to follow along if you have docker installed.
Machine A (IP address 172.17.0.3)
# Launch our 2.0.2 container
[ylb@axios]$ docker run -it --rm axios/redhawk:2.0.2 bash -l
# Install a test waveform
[redhawk@6b0701e76e74 ~]$ sudo yum install -y rh.FM_mono_demo
# Start the omni services
[redhawk@6b0701e76e74 ~]$ sudo $OSSIEHOME/bin/cleanomni
# Start domain and dev manager
[redhawk@6b0701e76e74 ~]$ nodeBooter --daemon -D
[redhawk@6b0701e76e74 ~]$ nodeBooter --daemon -d $SDRROOT/dev/nodes/DevMgr_12ef887a9000/DeviceManager.dcd.xml
# Launch the waveform via python
[redhawk@6b0701e76e74 ~]$ python
>>> from ossie.utils import redhawk
>>> dom = redhawk.attach()
>>> app = dom.createApplication('/waveforms/rh/FM_mono_demo/FM_mono_demo.sad.xml')
We do the exact same steps for Machine B, whose IP is 172.17.0.2. Make sure not to close or exit these terminals; leave them up and in the Python shell.
Now on Host C we can hop into python, connect to each domain, and make the connections.
[ylb@axios]$ python
>>> from ossie.utils import redhawk
>>> dom1 = redhawk.attach('REDHAWK_DEV', '172.17.0.3')
>>> dom2 = redhawk.attach('REDHAWK_DEV', '172.17.0.2')
>>> app1 = dom1.apps[0]
>>> app2 = dom2.apps[0]
>>> app1.comps[0].name
'rh.TuneFilterDecimate'
>>> tfd1 = app1.comps[0]
>>> app2.comps[1].name
'rh.psd'
>>> psd2 = app2.comps[1]
>>> tfd1.connect(psd2)
So we had three machines: A, B, and C. A and B each ran a waveform, and from machine C we connected the TFD component running on machine A to the PSD component running on machine B.
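As in the original question, connect() also accepts an explicit providesPortName keyword when the automatic port matching is ambiguous. A minimal sketch against the session above; the port name here is hypothetical, so substitute your component's actual provides-port name:
>>> tfd1.connect(psd2, providesPortName='psd_dataFloat_in')  # hypothetical port name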

Related

Docker Container Attached to Network, Network Inspect Shows No Containers

I am simply trying to connect a ROS2 node from my Ubuntu 22.04 VM on my laptop to another ROS2 node on another machine running Ubuntu 18.04. Ideally, I would only have Docker on the second machine (the first machine runs a trivial node that will never change), but I have been trying with a separate container on each.
Here is what I am doing and what I am seeing when I inspect:
(ssh into machine 2 from VM 1.)
A: start up network from machine 2.
sudo docker network create -d overlay --attachable my-attachable-ovrlay
B: start up container 1.
sudo docker run -it --rm test1
C: successfully attach container 1 to the network.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu bold_murdock
D: Confirm the container lists network.
sudo docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}}{{$key}} {{end}}' bold_murdock
prints:
bridge my-attachable-ovrlay
E: Check the network to see container.
sudo docker network inspect my-attachable-ovrlay
prints (among other things):
"Containers": null,
I am new to Docker AND networking, so I could be missing something huge, but I have tried all of the standard suggestions I found online, including disabling my firewall, opening a ton of ports using ufw allow on both machines, making sure nodes are active, etc.
I tried joining the network from machine 2 and that works, and the container is displayed when using network inspect. But when I do that, machine 1 simply refuses to connect to the network.
F: In this situation it gives an error.
sudo docker network connect dwgyau64pvpenxoj2edu4liqu objective_mendel
prints:
Error response from daemon: attaching to network failed, make sure your network options are correct and check manager logs: context deadline exceeded
Also, before trying any Docker networking, I have tried plainly pinging from VM 1 to machine 2, and that works both ways. I have tried using netcat to open an old-timey chat window on port 1234 (a random port, as per this resource), and that works one way only: I can communicate both ways, but only when machine 1 sends the initial netcat request and machine 2 listens. When machine 2 sends the request and machine 1 listens, nothing happens.
I have been struggling to get this to work for 3 weeks now. I know it’s something stupid, I just know it. Any advice would be incredibly appreciated. Please explain like I know nothing about networking, because I just about do.
EDIT: I converted images (still hyperlinked) into code blocks.
If both PCs are on the same LAN, you could skip the whole network configuration entirely and use ROS2 auto-discovery.
E.g.
PC1:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py talker
PC2:
docker run -it --rm --net=host -v /dev/shm:/dev/shm osrf/ros:foxy-desktop
export ROS_DOMAIN_ID=1
ros2 run demo_nodes_py listener
If the PCs are not on the same network, I usually use ZeroTier to create a virtual LAN between PC1, PC2, and PC(N), then repeat the above example.
The issue was that the router's clock was set to 1969. When we updated the time by connecting to the internet for 15 seconds and then disconnected, it started working.

PyZMQ Dockerized pub sub - sub won't receive messages

I want to build a modularized system with modules communicating over ZeroMQ. To improve usability, I want to Dockerize (some) of these modules, so that users don't have to setup an environment.
However, I cannot get a dockerized publisher to have its messages received by a non-dockerized subscriber.
System
Ubuntu 18.04
Python 3.7
libzmq version 4.2.5
pyzmq version 17.1.2
Docker version 18.09.0, build 4d60db4
Minimal test case
zmq_sub.py
# CC0
import zmq


def main():
    # ZMQ connection
    url = "tcp://127.0.0.1:5550"
    ctx = zmq.Context()
    socket = ctx.socket(zmq.SUB)
    socket.bind(url)  # subscriber creates ZeroMQ socket
    socket.setsockopt(zmq.SUBSCRIBE, ''.encode('ascii'))  # any topic
    print("Sub bound to: {}\nWaiting for data...".format(url))

    while True:
        # wait for publisher data
        topic, msg = socket.recv_multipart()
        print("On topic {}, received data: {}".format(topic, msg))


if __name__ == "__main__":
    main()
zmq_pub.py
# CC0
import zmq
import time


def main():
    # ZMQ connection
    url = "tcp://127.0.0.1:5550"
    ctx = zmq.Context()
    socket = ctx.socket(zmq.PUB)
    socket.connect(url)  # publisher connects to subscriber
    print("Pub connected to: {}\nSending data...".format(url))

    i = 0
    while True:
        topic = 'foo'.encode('ascii')
        msg = 'test {}'.format(i).encode('ascii')
        # publish data
        socket.send_multipart([topic, msg])
        print("On topic {}, sent data: {}".format(topic, msg))
        time.sleep(.5)
        i += 1


if __name__ == "__main__":
    main()
When I open 2 terminals and run:
python zmq_sub.py
python zmq_pub.py
The subscriber receives the data without error (On topic b'foo', received data: b'test 1').
Dockerfile
I've created the following Dockerfile:
FROM python:3.7.1-slim
MAINTAINER foo bar <foo@spam.eggs>

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc

WORKDIR /app
COPY requirements.txt /app
RUN pip install -r requirements.txt
COPY zmq_pub.py /app/zmq_pub.py

EXPOSE 5550

CMD ["python", "zmq_pub.py"]
and then I successfully build a Dockerized publisher with the command: sudo docker build . -t foo/bar
Attempts
Attempt 1
Now I have my docker container with publisher, I'm trying to have my non-dockerized subscriber receive the data. I run the following 2 commands:
python zmq_sub.py
sudo docker run -it foo/bar
I see my publisher inside the container publishing data, but my subscriber receives nothing.
Attempt 2
With the idea I have to map the internal port of my dockerized publisher to my machine's port, I run the following 2 commands:
python zmq_sub.py
sudo docker run -p 5550:5550 -it foo/bar
However, then I receive the following error: docker: Error response from daemon: driver failed programming external connectivity on endpoint objective_shaw (09b5226d89a815ce5d29842df775836766471aba90b95f2e593cf5ceae0cf174): Error starting userland proxy: listen tcp 0.0.0.0:5550: bind: address already in use.
It seems to me that my subscriber has already bound to 127.0.0.1:5550 and therefore Docker cannot do this anymore when I try to map it. If I change it to -p 5549:5550, Docker doesn't give an error, but then it's the same situation as with Attempt 1.
Question
How do I get my dockerized publisher to publish data to my non-dockerized subscriber?
Code
Edit 1: Code updated to also give an example of how to use docker-compose for automatic IP inference.
GitHub: https://github.com/NumesSanguis/pyzmq-docker
This is mostly a docker networking question, and isn't specific to pyzmq or zeromq. You would have the same issues with anything trying to connect from a container to the host.
To be clear, in this example, you have a server running on the host (zmq_sub.py, which calls bind), and you want to connect to it from a client running inside a docker container (zmq_pub.py).
Since the docker container is the one connecting, you don't need to do any docker port exposing or forwarding. EXPOSE and forwarding ports are only for making it possible to connect to a container (i.e. bind is called in the container), not to make outbound connections from a container, which is what's going on here.
The main thing here is that when it comes to networking with docker, you should think of each container as a separate machine on a local network. In order to be connectable from other containers or the host, a service should bind onto all interfaces (or at least an accessible interface). Binding on localhost in a container means that only other processes in that container should be able to talk to it. Similarly, binding localhost on the host means that docker containers shouldn't be able to connect.
So the first change is that your bind url should be:
url = 'tcp://0.0.0.0:5550'
...
socket.bind(url)
or pick the ip address that corresponds to your docker virtual network.
Then, your connect url needs to be the IP of your host as seen from the container. This can be found via ifconfig. Typically any of the host's IP addresses will do, but if you have a docker0 interface, its address is the logical choice.
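Putting both changes together, a minimal sketch of the corrected endpoints (172.17.0.1 is an assumption: it is commonly the docker0 address on the host, so substitute whatever ifconfig reports):
import zmq

ctx = zmq.Context()

# On the host (zmq_sub.py): bind on all interfaces, not 127.0.0.1,
# so processes inside containers can reach the socket.
sub = ctx.socket(zmq.SUB)
sub.bind("tcp://0.0.0.0:5550")
sub.setsockopt(zmq.SUBSCRIBE, b"")  # any topic

# In the container (zmq_pub.py): connect to the host as seen from the
# container; 172.17.0.1 is an assumption (commonly the docker0 address).
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://172.17.0.1:5550")
With this arrangement no -p flag is needed at all, since the connection is outbound from the container.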
I had the same problem in this post.
Your problem is that localhost (127.0.0.1) inside the container is not the same as localhost in another container or on the host machine.
To overcome that, use tcp://*:5550 in .bind() instead of 127.0.0.1 or the machine IP.
Then, you should expose the port and map it between the container and the host machine (I used docker-compose to do that in the SO post mentioned above). I think in your case it will be as follows, as you tried it:
EXPOSE 5550
and
sudo docker run -p 5550:5550 -it foo/bar

Configure the npm to start in the background on Mac OS X

Description
I am on a Mac OS X.
Right now, I have almost 10 Laravel/LAMP projects locally that I run using vhosts configured with Apache. The awesome part about them is that even when I restart my Mac, move between networks, or close the terminal app/tab of my projects, Apache is still running and all my local sites are still accessible.
Goal
Now, I am looking to do the same things with my MEAN apps.
How would one configure something like that?
Let's say I have 3 MEAN apps.
Example
App1
FE running at http://localhost:4201
BE running at http://localhost:3001
App2
FE running at http://localhost:4202
BE running at http://localhost:3002
App3
FE running at http://localhost:4203
BE running at http://localhost:3003
I'm open to any suggestions at this moment.
Can we configure npm to start in the background for both the BE/API and the FE?
You can use macOS's launchd to run services in the background. There are a couple of good GUI apps that make it easier to create launch services:
LaunchControl ($10)
Lingon ($10) - If you go with Lingon, get Lingon X 5 from the official website instead of Lingon 3 from the Mac App Store; Lingon X 5 is more powerful because it is not limited by Apple's sandboxing.
There's also launched.zerowidth.com, an interactive online tool for creating the .plist files that launchd uses.
launchd.info is also a good resource if you want to set them up manually. Apple's documentation is available too.
If you are having problems with commands not working, I recommend trying these troubleshooting steps:
Convert all your commands to use absolute paths (e.g. npm -> /usr/local/bin/npm). You can find the absolute path of a command by running which with the name of the command (e.g. which npm)
Run your commands from within bash using /bin/bash -c (e.g. /bin/bash -c "/usr/local/bin/npm start"), as in the sketch below
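If you'd rather not hand-write the XML, here is a minimal sketch that generates a launch agent for one app's backend using Python's plistlib; the label, npm path, and project directory are hypothetical placeholders:
import plistlib
from pathlib import Path

# Hypothetical label and paths -- substitute your own project's values.
job = {
    "Label": "com.example.app1-backend",
    # Absolute paths, run via bash, per the troubleshooting tips above.
    "ProgramArguments": ["/bin/bash", "-c", "/usr/local/bin/npm start"],
    "WorkingDirectory": "/Users/you/projects/app1",
    "RunAtLoad": True,   # start when you log in
    "KeepAlive": True,   # restart if it exits
}

plist_path = Path.home() / "Library/LaunchAgents/com.example.app1-backend.plist"
with open(plist_path, "wb") as f:
    plistlib.dump(job, f)
Then launchctl load ~/Library/LaunchAgents/com.example.app1-backend.plist starts it, and it will come back after reboots, just like Apache.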
One thing you can do is dockerize your applications.
With Docker you can run your applications on your computer in lightweight virtual-machine-like environments known as containers.
This has some advantages: for example, you can run your app on port 80 inside the container and expose another port to your machine, and you can start or stop the container, and so forth.
Go to https://www.docker.com/what-docker for more information.

Couchdb cartridge not responding in docker image

I successfully deployed a CouchDB cartridge to WSO2 Stratos, and the member gets activated successfully. For the implementation of the Dockerfile I used this git code, which includes the lines below, and I have no idea why they are there! Can someone explain the code below?
RUN printf "[httpd]\nport = 8101\nbind_address = 0.0.0.0" > /usr/local/etc/couchdb/local.d/docker.ini
EXPOSE 8101
CMD ["/usr/local/bin/couchdb"]
I tried pointing to the http://127.0.0.1:5984/_utils/spec/run.html URL and it's working perfectly.
I just SSH into the Docker container and start CouchDB:
root@instance-00000001:/usr/local/etc/couchdb/local.d# couchdb
Apache CouchDB 1.6.1 (LogLevel=info) is starting.
Apache CouchDB has started. Time to relax.
[info] [<0.32.0>] Apache CouchDB has started on http://0.0.0.0:8101/
Then I try pointing the browser to http://0.0.0.0:8101/ and http://127.0.0.1:5984/_utils/index.html, but neither of them works.
Can someone tell me why I can't view my databases and the create-database window?
For your first question about what those lines do:
# Set port and address for couchdb to bind to.
# Remember these are addresses inside the container
# and not necessarily publicly available.
# See http://docs.couchdb.org/en/latest/config/http.html
RUN printf "[httpd]\nport = 8101\nbind_address = 0.0.0.0" > /usr/local/etc/couchdb/local.d/docker.ini

# Tell docker that this port needs to be exposed.
# You still need to run -P when running the container
EXPOSE 8101

# This is the command which is run automatically when the container is run
CMD ["/usr/local/bin/couchdb"]
As for why you cannot access it: what does your docker run command look like? Did you publish the port? i.e.
docker run -p 8101:8101 ....
Are you by any chance testing on OS X? If so, try http://192.168.59.103:8101/. On OS X, Docker runs inside a VirtualBox VM, since Docker cannot run natively on OS X. The IP of the virtual machine can be looked up using boot2docker ip and is normally 192.168.59.103.
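Once the port is published, a quick sanity check from the host is to fetch CouchDB's welcome message; a sketch, assuming the container was started with -p 8101:8101:
import json
from urllib.request import urlopen

# Fetch CouchDB's root endpoint through the published port.
with urlopen("http://127.0.0.1:8101/") as resp:
    print(json.load(resp))  # e.g. {"couchdb": "Welcome", "version": "1.6.1", ...}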

Run 2 copies of JBoss 6.1.0 on the same machine, at the same time, with the same application

Hi, can someone provide me a step-by-step guide to running two JBoss 6.1.0 Final instances on the same machine with different ports? My requirement is that I want to run the same web application at the same time in two different JBoss 6.1.0 Final instances; one should be running on port 8080 and the other on 8180.
You can find all the information here
Assuming you have two profiles (node1 and node2), you can run these two commands to start the two instances. The ports-01 binding set offsets the default ports by 100, so the second instance serves HTTP on 8180:
./run.sh -c node1
./run.sh -c node2 -Djboss.service.binding.set=ports-01 -Djboss.messaging.ServerPeerID=1
