Use TRex to test VPP performance, memif can't connect - Cisco

The goal is to connect the TRex packet generator to a VPP memif interface on the same host. The memif interface fails to connect, with the following logs.
Failed with info:
Apr 28 15:12:24 xx[370159]: memif_connect_client(): Failed to connect socket: /run/vpp/memif.sock.
Apr 28 15:12:25 xx[370159]: EAL: Error - exiting with code: 1#012 Cause:
Apr 28 15:12:25 xx[370159]: rte_eth_dev_start: err=-1, port=0
vpp version: 19.08.3
trex version: v2.89
vpp# show memif
sockets
id listener filename
0 yes (2) /run/vpp/memif.sock
interface memif0/0
socket-id 0 id 0 mode ethernet
flags admin-up
listener-fd 49 conn-fd 0
num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
interface memif0/1
socket-id 0 id 1 mode ethernet
flags admin-up
listener-fd 49 conn-fd 0
num-s2m-rings 0 num-m2s-rings 0 buffer-size 0 num-regions 0
# cat /etc/trex_cfg.yaml
- version: 2
interfaces: ["--vdev=net_memif0,role=slave,id=0,socket=/run/vpp/memif.sock",
"--vdev=net_memif1,role=slave,id=1,socket=/run/vpp/memif.sock"]
port_info:
- ip: 172.21.0.253
default_gw: 172.21.0.254
- ip: 192.168.1.254
default_gw: 192.168.1.253
platform:
master_thread_id: 16
latency_thread_id: 17
dual_if:
- socket: 0
threads: [18,19]

The DPDK memif PMD has two modes of operation:
Client mode
Server mode
The startup sequence also affects how memif communicates with and connects to the listener (server).
So, to make VPP the server and TRex the client, use the following (always start VPP first and create the memif interface there):
create interface memif id 0 master
set interface state memif0/0 up
set interface ip address memif0/0 12.12.5.1/24
and
--vdev=net_memif0,role=client,id=0,socket=/run/vpp/memif.sock
To make TRex the server and VPP the client, use the following (always start TRex first):
--vdev=net_memif0,role=server,id=0,socket=/run/vpp/memif.sock
and
create interface memif id 0 slave
set interface state memif0/0 up
set interface ip address memif0/0 12.12.5.2/24
Note: if more than one interface is required, ensure VPP creates memif interfaces 0 and 1 on separate sockets (see the sketch below).
The same setup was tested with dpdk-pktgen and VPP.
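A minimal VPP CLI sketch of that two-socket setup, assuming the default socket (id 0, /run/vpp/memif.sock) for the first interface and an illustrative second socket file /run/vpp/memif1.sock:
create memif socket id 1 filename /run/vpp/memif1.sock
create interface memif socket-id 0 id 0 master
create interface memif socket-id 1 id 1 master
set interface state memif0/0 up
set interface state memif1/1 up
The matching TREX vdevs then point at the two socket files (role spelled as in the client/server examples above; older builds use role=slave as in the trex_cfg.yaml shown earlier):
--vdev=net_memif0,role=client,id=0,socket=/run/vpp/memif.sock
--vdev=net_memif1,role=client,id=1,socket=/run/vpp/memif1.sock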

Since TRex 2.92 has been upgraded to DPDK 21.02, the interfaces entry in /etc/trex_cfg.yaml changes. Instead of:
interfaces: ["--vdev=net_memif0,role=slave,id=0,socket=/run/vpp/memif.sock",
it should be:
interfaces: ["--vdev=net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock",
because by default the newer memif PMD treats the socket as an abstract socket rather than a filesystem path.
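Applied to both ports of the config shown earlier, the interfaces entry becomes:
interfaces: ["--vdev=net_memif0,role=slave,id=0,socket-abstract=no,socket=/run/vpp/memif.sock",
"--vdev=net_memif1,role=slave,id=1,socket-abstract=no,socket=/run/vpp/memif.sock"]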

Related

RDMA InfiniBand cannot open hosts (iberror: discovery failed), port state: Down

I am facing an issue while configuring RDMA and InfiniBand on my two nodes. Both nodes are connected, and I have installed the recommended software libraries and packages.
But my port status is down and the physical state is Disabled. I tried to enable the port but I get a "can't open MAD port" error:
:~# ibportstate -L 1 3 enable
ibwarn: [6772] mad_rpc_open_port: can't open UMAD port ((null):0)
ibportstate: iberror: failed: Failed to open '(null)' port '0'
Infiniband ibstatus returns this:
Infiniband device ‘mlx5_0’ port 1 status:
default gid: fe80:0000:0000:0000:1270:fdff:fe6e:43e0
base lid: 0x0
sm lid: 0x0
state: 1: DOWN
phys state: 3: Disabled
rate: 100 Gb/sec (4X EDR)
link_layer: Ethernet
I don't understand what the issue is here; I also upgraded the firmware but the problem still persists.
I figured it out and I am sharing the answer for others to see. The issue was the network interface: you need to check which network interface the InfiniBand adapter shows up as and check its status.
root@dtn0:~# /etc/init.d/openibd status
HCA driver loaded
Configured Mellanox EN devices:
ens11np0
Currently active Mellanox devices:
The following OFED modules are loaded:
rdma_ucm
rdma_cm
ib_ipoib
mlx5_core
mlx5_ib
ib_uverbs
ib_umad
ib_cm
ib_core
mlxfw
After that, I just assigned an IP address and netmask on the interface and I was able to use the interface and reach the network.
root@dtn0:~# ifconfig ens11np0 10.0.0.50/24
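To keep that address across reboots, one option is an ifupdown stanza; this is only a sketch and assumes the node uses /etc/network/interfaces (the distribution isn't stated above):
# /etc/network/interfaces.d/ens11np0 (assumed location; adjust to your setup)
auto ens11np0
iface ens11np0 inet static
    address 10.0.0.50
    netmask 255.255.255.0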

Access Oracle APEX from a remote machine

Hi all, I have successfully installed Oracle 11g Express Edition on a Linux VM (Google Cloud Compute Engine).
I have sqlplus working and I can query data.
The listener is also working.
But as there is no GUI on the Linux server I cannot try localhost, and external machines get the connection refused.
My Questions are:
1) Does APEX come pre-installed on Oracle XE? It used to, but it is not mentioned anywhere.
2) If the IP address of the server is 123.123.123, what URL would I use to get to APEX from a remote machine? I have tried
http://123.123.123:8080/apex/
http://123.123.123/apex/
https://123.123.123:8080/apex/
3) How can I tell if it is the server or Oracle refusing the connection?
Firewall
$ netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 10.128.0.3:50776 169.254.169.254:80 ESTABLISHED
tcp 0 0 10.128.0.3:43548 10.128.0.3:1521 ESTABLISHED
tcp 0 0 10.128.0.3:50722 169.254.169.254:80 CLOSE_WAIT
tcp 0 0 10.128.0.3:50814 169.254.169.254:80 ESTABLISHED
tcp 0 0 10.128.0.3:50774 169.254.169.254:80 ESTABLISHED
tcp 0 64 10.128.0.3:22 74.125.41.105:38312 ESTABLISHED
tcp6 0 0 :::40070 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN
tcp6 0 0 :::1521 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 ::1:25 :::* LISTEN
tcp6 0 0 10.128.0.3:1521 10.128.0.3:43548 ESTABLISHED
Listener
$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 22-AUG-2017 02:59:51
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 22-AUG-2017 02:00:17
Uptime 0 days 0 hr. 59 min. 33 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Default Service XE
Listener Parameter File /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/centossmallblockpro/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC_FOR_XE)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=centossmallblockpro.c.sincere-destiny-176110.internal)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=centossmallblockpro.c.sincere-destiny-176110.internal)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully
SQLPLUS working
sqlplus
SQL*Plus: Release 11.2.0.2.0 Production on Tue Aug 22 03:03:51 2017
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Enter user-name: system
Enter password:
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
SQL> select * from dual;
D
-
X
.
Telnet XX.XXX.XXX.XX 8080
Telnet: Unable to connect to remote host: Connection timed out
1) Oracle XE (11g) comes with APEX version 3.2, I think. This is a very old APEX release. Follow the instructions on how to drop this old version and get the latest from otn.oracle.com. The latest version should also work with 11g XE.
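A rough sketch of that upgrade path (script names as in the APEX installation guide; run from the unzipped apex directory as SYS, and adjust the tablespace/images arguments, shown here with their usual defaults, to your database):
SQL> @apxremov.sql                        -- remove the old bundled APEX
SQL> @apexins.sql SYSAUX SYSAUX TEMP /i/  -- install the downloaded release
SQL> @apxchpwd.sql                        -- set the APEX ADMIN password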
2) Tunnel
You can create an SSH tunnel from your desktop machine to the endpoint server where your services are running. Then you can access services on the remote machine from your desktop environment, e.g. sqlplus, SQL Developer, Firefox, etc.
# Access Your Database Remotely Through an SSH Tunnel
# ssh -L [local port]:[database host]:[remote port] [username]@[remote host]
# console 1: 9998 is just an arbitrary port > 1024. Can be anything.
ssh -N -L 9998:10.128.0.3:1521 -i ~/.ssh/id_rsa user@35.184.136.98
# console 2:
sqlplus user/pwd@localhost:9998/XE
# firefox:
http://localhost:9998/apex
Great answers to 1 and 2 from Bjarte, but the actual problem was not the Linux firewall but the Compute Engine firewall.
I didn't know it even existed; when you select the checkbox to allow HTTP it creates a rule for TCP:80, but I needed TCP:8080.
Here is the article that solved it for me: "cant open port on google compute engine" ...
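For reference, the same Compute Engine rule can be created from the command line; a sketch, assuming the gcloud SDK is configured and the rule name allow-apex-8080 is arbitrary:
gcloud compute firewall-rules create allow-apex-8080 \
    --allow=tcp:8080 \
    --source-ranges=0.0.0.0/0 \
    --description="Allow remote access to Oracle APEX on port 8080"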

YAF terminating on error (couldn't create connected TCP socket)

I've installed and configured YAF (v. 2.8.4) + SiLK (v. 3.12.1) on Debian 8.2, and I'm facing two problems:
1st: Every time I start yaf, as soon as a TCP connection is established, the yaf process terminates with this error:
[2016-05-25 08:13:36] yaf terminating on error: couldn't create connected TCP socket to 127.0.0.1:18000 Connection refused
2nd: Also when yaf is running (and there is no TCP connection on eth0), the output of the rwfilter --proto=0- --type=all --pass=stdout | rwcut | head command is empty.
I have some flow information from two days ago in the /data/ directory, and I'm able to filter it with
rwfilter --start-date=2016/05/22 --end-date=2017/05/23 --proto=0- --type=all --pass=stdout | rwstats --fields=protocol --bottom --count=10
That shows that yaf and SiLK worked correctly on 23rd May (but only for a few minutes!). Unfortunately I only have today's logs, and the logs for the 23rd are truncated.
Configs and Logs:
ps ax |grep "yaf\|rwflowpack":
58984 ? Ssl 0:00 /usr/local/sbin/rwflowpack --sensor-configuration=/data/sensor.conf --site-config-file=/ata/silk.conf --archive-directory=/var/lib/rwflowpack/archive --output-mode=local-storage --root-directory=/data --pidfile=/var/lib/rwflowpack/log/rwflowpack.pid --log-level=info --log-destination=syslog
84140 ? Ss 0:00 /usr/local/bin/yaf -d --live pcap --in eth0 --ipfix tcp --out localhost --ipfix-port 18000 --log /var/log/yaf/log/yaf.log --verbose --silk --applabel --max-payload=2048 --plugin-name=/usr/local/lib/yaf/dpacketplugin.la --pidfile /var/log/yaf/run/yaf.pid
iptables rules:
ACCEPT udp -- anywhere anywhere udp spt:18000
ACCEPT udp -- anywhere anywhere udp dpt:18000
ACCEPT tcp -- anywhere anywhere tcp spt:18000
ACCEPT tcp -- anywhere anywhere tcp dpt:18000
yaf.log:
[2016-05-25 08:48:02] yaf starting
[2016-05-25 08:48:02] Initializing Rules From File: /usr/local/etc/yafApplabelRules.conf
[2016-05-25 08:48:02] Application Labeler accepted 44 rules.
[2016-05-25 08:48:02] Application Labeler accepted 0 signatures.
[2016-05-25 08:48:02] DPI Running for ALL Protocols
[2016-05-25 08:48:02] Initializing Rules from DPI File /usr/local/etc/yafDPIRules.conf
[2016-05-25 08:48:02] DPI rule scanner accepted 63 rules from the DPI Rule File
[2016-05-25 08:48:02] DPI regular expressions cover 7 protocols
[2016-05-25 08:48:02] Forked child 82020. Parent exiting
[2016-05-25 08:48:02] running as root in --live mode, but not dropping privilege
[2016-05-25 08:50:48] Processed 814 packets into 0 flows:
[2016-05-25 08:50:48] Mean flow rate 0.00/s.
[2016-05-25 08:50:48] Mean packet rate 4.90/s.
[2016-05-25 08:50:48] Virtual bandwidth 0.0032 Mbps.
[2016-05-25 08:50:48] Maximum flow table size 36.
[2016-05-25 08:50:48] 29 flush events.
[2016-05-25 08:50:48] 0 asymmetric/unidirectional flows detected (-nan%)
[2016-05-25 08:50:48] YAF read 1643 total packets
[2016-05-25 08:50:48] Assembled 0 fragments into 0 packets:
[2016-05-25 08:50:48] Expired 0 incomplete fragmented packets. (0.00%)
[2016-05-25 08:50:48] Maximum fragment table size 0.
[2016-05-25 08:50:48] Rejected 829 packets during decode: (33.54%)
[2016-05-25 08:50:48] 829 due to unsupported/rejected packet type: (33.54%)
[2016-05-25 08:50:48] 829 unsupported/rejected Layer 3 headers. (33.54%)
[2016-05-25 08:50:48] 729 ARP packets. (29.49%)
[2016-05-25 08:50:48] 83 802.3 packets. (3.36%)
[2016-05-25 08:50:48] yaf terminating on error: couldn't create connected TCP socket to localhost:18000 Connection refused
rwflowpack logs:
May 25 13:17:54 XXX rwflowpack[58984]: 'S0': forward 0, reverse 0, ignored 0
May 25 13:19:54 XXX rwflowpack[58984]: Flushing files after 120 seconds.
May 25 13:19:54 XXX rwflowpack[58984]: 'S0': forward 0, reverse 0, ignored 0
May 25 13:21:54 XXX rwflowpack[58984]: Flushing files after 120 seconds.
May 25 13:21:54 XXX rwflowpack[58984]: 'S0': forward 0, reverse 0, ignored 0
usr/local/etc/yaf.conf:
ENABLED=1
YAF_CAP_TYPE=pcap
YAF_CAP_IF=eth0
YAF_IPFIX_PROTO=tcp
YAF_IPFIX_HOST=localhost
YAF_IPFIX_PORT=18000
YAF_STATEDIR=/var/log/yaf
YAF_EXTRAFLAGS="--silk --applabel --max-payload=2048 --plugin-name=/usr/local/lib/yaf/dpacketplugin.la"
/data/silk.conf:
version 2
sensor 0 S0 "Description for sensor S0"
sensor 1 S1
sensor 2 S2 "Optional description for sensor S2"
sensor 3 S3
sensor 4 S4
sensor 5 S5
sensor 6 S6
sensor 7 S7
sensor 8 S8
sensor 9 S9
sensor 10 S10
sensor 11 S11
sensor 12 S12
sensor 13 S13
sensor 14 S14
class all
sensors S0 S1 S2 S3 S4 S5 S6 S7 S8 S9 S10 S11 S12 S13 S14
end class
class all
type 0 in in
type 1 out out
type 2 inweb iw
type 3 outweb ow
type 4 innull innull
type 5 outnull outnull
type 6 int2int int2int
type 7 ext2ext ext2ext
type 8 inicmp inicmp
type 9 outicmp outicmp
type 10 other other
default-types in inweb inicmp
end class
default-class all
packing-logic "packlogic-twoway.so"
/data/sensor.conf:
probe S0 ipfix
listen-on-port 18001
protocol tcp
end probe
sensor S0
ipfix-probes S0
internal-ipblocks 192.168.1.0/24 10.10.10.0/24
external-ipblocks remainder
end sensor
/usr/local/etc/rwflowpack.conf:
ENABLED=1
statedirectory=/var/lib/rwflowpack
CREATE_DIRECTORIES=yes
BIN_DIR=/usr/local/sbin
SENSOR_CONFIG=/data/sensor.conf
DATA_ROOTDIR=/data
SITE_CONFIG=/data/silk.conf
PACKING_LOGIC=
INPUT_MODE=stream
INCOMING_DIR=${statedirectory}/incoming
ARCHIVE_DIR=${statedirectory}/archive
FLAT_ARCHIVE=0
ERROR_DIR=
OUTPUT_MODE=local
SENDER_DIR=${statedirectory}/sender-incoming
INCREMENTAL_DIR=${statedirectory}/sender-incoming
COMPRESSION_TYPE=
POLLING_INTERVAL=
FLUSH_TIMEOUT=
FILE_CACHE_SIZE=
FILE_LOCKING=1
PACK_INTERFACES=0
SILK_IPFIX_PRINT_TEMPLATES=
LOG_TYPE=syslog
LOG_LEVEL=info
LOG_DIR=${statedirectory}/log
PID_DIR=${LOG_DIR}
USER=root
EXTRA_OPTIONS=
EXTRA_ENVVAR=
yaf --version:
yaf version 2.8.4 Build Configuration:
* Timezone support: UTC
* Fixbuf version: 1.7.1
* DAG support: NO
* Napatech support: NO
* Netronome support: NO
* Bivio support: NO
* PFRING support: NO
* Compact IPv4 support: NO
* Plugin support: YES
* Application Labeling: YES
* Payload Processing Support: YES
* Entropy support: NO
* Fingerprint Export Support: NO
* P0F Support: NO
* Spread Support: NO
* MPLS Support: NO
* Non-IP Support: NO
* Separate Interface Support: NO
SiLK version:
SiLK 3.12.1; configuration settings:
* Root of packed data tree: /data
* Packing logic: Run-time plug-in
* Timezone support: UTC
* Available compression methods: none [default], zlib
* IPv6 network connections: yes
* IPv6 flow record support: yes
* IPFIX/NetFlow9/sFlow collection: ipfix,netflow9,sflow
* Transport encryption: no
* PySiLK support: no
* Enable assert(): no
I'm new to YAF and SiLK.
I used the link below for building and configuring YAF+SiLK:
https://tools.netsa.cert.org/yaf/libyaf/yaf_silk.html
so all parameters were derived from that tutorial.
YAF uses the port given in YAF_IPFIX_PORT to connect to the IPFIX collector on that port; YAF itself does not open or listen on that port.
So I changed the value of YAF_IPFIX_PORT in yaf.conf from 18000 to 18001 (the port defined for listen-on-port in sensor.conf).
Now it's working and I'm able to filter traffic.
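For reference, the relevant /usr/local/etc/yaf.conf lines after the change (only YAF_IPFIX_PORT differs from the config shown above):
YAF_IPFIX_PROTO=tcp
YAF_IPFIX_HOST=localhost
YAF_IPFIX_PORT=18001   # must match listen-on-port in /data/sensor.conf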

Debian: cannot create rfcomm0

On Debian Jessie 8.2:
I'm trying to create the device /dev/rfcomm0 in order to connect my Arduino via the HC-05 Bluetooth module, but with no success.
Here are the steps I'm following:
1) I guess my HC-05 called FOO is recognised and properly configured, because
hcitool scan
reports
98:D3:31:xx:xx:xx FOO
xx are just a mask I use here for privacy.
2) I added the file /etc/bluetooth/rfcomm.conf
rfcomm0 {
# Automatically bind the device at startup
bind yes;
# Bluetooth address of the device
device 98:D3:31:xx:xx:xx;
# RFCOMM channel for the connection
channel 1;
# Description of the connection
comment "FOO";
}
3) I restarted bluetooth service
sudo /etc/init.d/bluetooth restart
response is:
[ ok ] Restarting bluetooth (via systemctl): bluetooth.service.
Nevertheless, the device rfcomm0 is not created.
I'm following the instructions here:
Bluetooth serial communication with HC-05
I did this operation months ago on another Linux system (it was Ubuntu) and I can remember everything went well: the port was created. Probably I'm missing some important step!
Thanks a lot,
Valerio
UPDATE:
this command
sdptool records 98:D3:31:xx:xx:xx
reports
Service Name: Dev B
Service RecHandle: 0x10000
Service Class ID List:
"Serial Port" (0x1101)
Protocol Descriptor List:
"L2CAP" (0x0100)
"RFCOMM" (0x0003)
Channel: 1
Language Base Attr List:
code_ISO639: 0x656e
encoding: 0x6a
base_offset: 0x100
I think this confirms that the channel in rfcomm.conf is 1
OK, thanks to Kaylum this is solved!
Manually binding creates the device rfcomm0:
sudo rfcomm bind 0 98:D3:31:xx:xx:xx 1
Then, in order to let Processing write/read on the created port, I needed to run Processing with sudo; otherwise Processing says that the port exists but is busy. Running with sudo, I can confirm that the port correctly sends data back and forth between the Arduino and Processing!
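Since the rfcomm.conf entry above did not create the device automatically at boot, one way to make the manual bind persistent is a small systemd unit (Jessie runs systemd); this is a sketch, and the unit name and rfcomm path are assumptions to adapt:
# /etc/systemd/system/rfcomm0-bind.service (hypothetical unit name)
[Unit]
Description=Bind /dev/rfcomm0 to the HC-05 (FOO)
After=bluetooth.service
Requires=bluetooth.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/rfcomm bind 0 98:D3:31:xx:xx:xx 1
ExecStop=/usr/bin/rfcomm release 0

[Install]
WantedBy=multi-user.target
Enable it with: sudo systemctl enable rfcomm0-bind && sudo systemctl start rfcomm0-bind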

Python program cannot launch RedHawkSDR component with external network connected

I am a new user of RedHawkSDR and have written a Python program under CentOS that controls a "SigGen" component. It works if there is no network connection except the loopback, but fails if I connect a wired network (listed as System eth0).
I do not specify any IP address in the Python program, and the omniORB.cfg explicitly lists the loopback address as shown below, since there have been comments in other posts warning against using "localhost":
traceLevel=10
InitRef = NameService=corbaname::127.0.0.1:2809
supportBootstrapAgent = 1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents
Comparing the omniORB data that prints to the screen from the two cases:
Last identical step ==> "omniORB: AsyncInvoker: thread id=2 has started. Total threads=1
Next step:
for working (no network) ==> "omniORB: Adding root<0> (activating) to object table
for nonworking (network connected) ==> "omniORB: Removing root<0> (etherealizing) from object table
Full message stream for the network-connected case ==>
[aecom@crancentos1 Desktop]$ python pTrigger.py keyboard 5555 5050
omniORB: Version: 4.1.6
omniORB: Distribution date: Fri Jul 1 15:57:00 BST 2011 dgrisby
omniORB: Information: the omniDynamic library is not linked.
omniORB: omniORBpy distribution date: Fri Jul 1 14:52:31 BST 2011 dgrisby
omniORB: Initialising incoming endpoints.
omniORB: Attempt to set socket to listen on IPv4 and IPv6.
omniORB: Starting serving incoming endpoints.
omniORB: AsyncInvoker: thread id = 2 has started. Total threads = 1
/usr/local/redhawk/core/lib/python/ossie/utils/sb/domainless.py:863:
DeprecationWarning: Component class is deprecated. Use launch() method instead.
warnings.warn('Component class is deprecated. Use launch() method instead.', DeprecationWarning)
omniORB: Adding root<0> (activating) to object table.
omniORB: Creating ref to local: root<0>
target id : IDL:omg.org/CORBA/Object:1.0
most derived id: IDL:omg.org/CosNaming/NamingContextExt:1.0
omniORB: Creating Python ref to local: root<0>
target id : IDL:omg.org/CosNaming/NamingContextExt:1.0
most derived id: IDL:omg.org/CosNaming/NamingContextExt:1.0
omniORB: Version: 4.1.6
omniORB: Distribution date: Fri Jul 1 15:57:00 BST 2011 dgrisby
omniORB: Information: the omniDynamic library is not linked.
omniORB: omniORBpy distribution date: Fri Jul 1 14:52:31 BST 2011 dgrisby
omniORB: Initialising incoming endpoints.
omniORB: Attempt to set socket to listen on IPv4 and IPv6.
omniORB: Starting serving incoming endpoints.
omniORB: AsyncInvoker: thread id = 2 has started. Total threads = 1
omniORB: Removing root<0> (etherealising) from object table
Traceback (most recent call last):
File "pTrigger.py", line 118, in <module>
sigGen=sb.Component("SigGen")
File "/usr/local/redhawk/core/lib/python/ossie/utils/sb/domainless.py", line 872, in __new__
raise AssertionError, "Unable to launch component: '%s'" % e
AssertionError: Unable to launch component: 'resource 'SigGen_2' did not register with virtual environment'
Is there a system variable/token/thing that is "127.0.0.1" in the loopback case that switches to the network IP address when the system makes the network connection, which then confuses omniORB?
Any constructive guidance would be appreciated...
Best Regards,
Brad Meyer
ADDITIONAL DATA
// Firewall is off=============================
// Smoking gun ?===============================
omniORB: omniORBpy distribution date: Fri Jul 1 14:52:31 BST 2011 dgrisby
omniORB: Python thread state scavenger start.
omniORB: Initialising incoming endpoints.
omniORB: Instantiate endpoint 'giop:tcp:127.0.0.1:'
omniORB: Explicit bind to host 127.0.0.1.
omniORB: Bind to address 127.0.0.1 ephemeral port.
omniORB: Publish specification: 'addr'
omniORB: Try to publish 'addr' for endpoint giop:tcp:127.0.0.1:46877
omniORB: Publish endpoint 'giop:tcp:127.0.0.1:46877'
omniORB: Starting serving incoming endpoints.
omniORB: AsyncInvoker: thread id = 2 has started. Total threads = 1
omniORB: giopRendezvouser task execute for giop:tcp:127.0.0.1:46877
==>omniORB: SocketCollection idle. Sleeping.
omniORB: State root<0> (active) -> deactivating
// ifconfig shows loopback running ==================
Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:302 errors:0 dropped:0 overruns:0 frame:0
TX packets:302 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:26197 (25.5 KiB) TX bytes:26197 (25.5 KiB)
// ping 127.0.0.1 works ===========================================
// netstat -tulpn SHOWS OMNI PROCESSES LISTENING ON SOME PORTS=========================================
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:42451 0.0.0.0:* LISTEN 2409/omniEvents
tcp 0 0 REDACTED FOR POST 0.0.0.0:* LISTEN 2617/dnsmasq
tcp 0 0 0.0.0.0:50517 0.0.0.0:* LISTEN 2067/rpc.statd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2254/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2098/cupsd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2346/master
tcp 0 0 127.0.0.1:42251 0.0.0.0:* LISTEN 2022/omniNames
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1902/rpcbind
tcp 0 0 :::39409 :::* LISTEN 2067/rpc.statd
tcp 0 0 :::22 :::* LISTEN 2254/sshd
tcp 0 0 ::1:631 :::* LISTEN 2098/cupsd
tcp 0 0 ::1:25 :::* LISTEN 2346/master
tcp 0 0 :::2809 :::* LISTEN 2022/omniNames
tcp 0 0 :::11169 :::* LISTEN 2409/omniEvents
tcp 0 0 :::111 :::* LISTEN 1902/rpcbind
When multiple network interfaces are available, omniORB arbitrarily chooses one of them for publishing object references (see part 5 in http://omniorb.sourceforge.net/omni41/omniNames.html). In your case, it seems to be grabbing eth0 when you are network-connected, which is not playing well with omniNames for whatever reason (could be a firewall setting).
To get around this, I recommend adding the following line to your /etc/omniORB.cfg file:
endPoint = giop:tcp:127.0.0.1:
This will force omniNames to always use the local loopback instead of eth0. Given your current omniORB.cfg settings, I am assuming using localhost is acceptable for your application. If this is not the case (i.e., you really need to use eth0 instead of localhost), we will need to find the root cause of why omniNames is having trouble with your eth0 interface.
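With that line added, the /etc/omniORB.cfg from the question would read:
traceLevel=10
InitRef = NameService=corbaname::127.0.0.1:2809
supportBootstrapAgent = 1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents
endPoint = giop:tcp:127.0.0.1: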
Clarification (since I can't use line breaks in the comments section):
Try turning the log level up to 40 and see if anything useful shows up between these log lines:
omniORB: AsyncInvoker: thread id = 2 has started. Total threads = 1
omniORB: Removing root<0> (etherealising) from object table
I'm having trouble reproducing your problem. In my working case, I get something like this:
omniORB: AsyncInvoker: thread id = 2 has started. Total threads = 1
omniORB: giopRendezvouser task execute for giop:tcp:127.0.0.1:60625
omniORB: Adding root<0> (activating) to object table.
I'm curious to see if the IP on the second line looks suspicious to you.
I worked on this for a week, but finally found the solution, as DrewC pointed me in new directions to look.
In the /etc/hosts file, I added a line "127.0.0.1 ComputerName" and the problem went away.
Brad
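For illustration, using the hostname visible in the shell prompt earlier (crancentos1; substitute your own machine name), the added /etc/hosts entry looks like:
127.0.0.1   crancentos1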
