I am trying to run a waveform with components operating on distinct machines. That is, I want A->B where component A runs on the GPP on machine 1 and component B runs on the GPP on machine 2. The CORBA nameserver on system A is visible in REDHAWK on system B, but I cannot access remote devices or components when I run a waveform.
How can I make the devices on one machine available to REDHAWK running on another?
Thanks for your assistance!
-jerhill
The essential thing for spreading REDHAWK components and devices across multiple machines is making sure that CORBA communication works correctly between the machines. This usually amounts to configuring /etc/omniORB.cfg correctly. First, on one machine, you should have omniNames and omniEvents running, and set up your config per section 2.6 of the documentation. For reference:
InitRef = NameService=corbaname::127.0.0.1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents
On the second machine, your InitRefs must point to the first machine. If the first machine is 192.168.1.100, then your second machine's config could contain:
InitRef = NameService=corbaname::192.168.1.100
InitRef = EventService=corbaloc::192.168.1.100:11169/omniEvents
You should be able to verify this is working correctly on the second machine with:
$ nameclt list
The next issue you need to tackle is making sure that CORBA objects are listening on the appropriate network interfaces, and are publishing information in their IORs that allows them to be reached. In each of your config files, I recommend you add endPoint lines to tell omniORB where CORBA objects created on that machine should listen. For example, on your first machine:
endPoint = giop:tcp:192.168.1.100:
endPoint = giop:unix:
This tells omniORB that CORBA objects should listen on a TCP port of their choosing on 192.168.1.100. It also adds a Unix domain socket for fast access by objects on the same machine. omniORB will publish this information in the IOR for each object. What you choose here is important: if you use an IP that other machines can't reach, or a hostname that other machines can't resolve, then CORBA connections will fail.
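Putting the two settings together, the second machine's /etc/omniORB.cfg might look like this (192.168.1.101 is a hypothetical address for the second machine; substitute its real, externally reachable IP):
InitRef = NameService=corbaname::192.168.1.100
InitRef = EventService=corbaloc::192.168.1.100:11169/omniEvents
endPoint = giop:tcp:192.168.1.101:
endPoint = giop:unix: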
After you've configured the endPoint setting on both machines, you may find it useful to inspect the information contained in your IORs. If you can access the naming service, then you can retrieve IORs for your objects. For example, if you have a domain named 'REDHAWK_DEV' running, you can get the domain manager's IOR via:
$ nameclt resolve REDHAWK_DEV/REDHAWK_DEV
Then, feed the IOR to catior:
$ catior IOR:012345...
catior will decipher the IOR for you and show you what address and port a client would connect to.
Since programs on B can see the name service on A, I assume that the problem relates to Device/Device Manager configuration.
Make sure that the Device Manager on B meets these criteria (a skeletal dcd.xml follows the list):
the id attribute of the deviceconfiguration element of the dcd.xml file is unique
the id attribute of the GPP's componentinstantiation element in the dcd.xml file is unique
the name attribute of the namingservice element in the dcd.xml file is the Domain you are trying to connect to (of the form DomainName/DomainName)
you do not have a Domain Manager running on B whose name collides with the Domain Manager on A (an error should occur if you do that)
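For reference, here is a skeletal dcd.xml showing where those attributes live (the IDs and names are hypothetical placeholders, and a real generated file contains more elements than shown):
<deviceconfiguration id="DCE:unique-node-uuid" name="NodeB">
  <partitioning>
    <componentplacement>
      <componentfileref refid="GPP_file"/>
      <componentinstantiation id="DCE:unique-gpp-uuid">
        <usagename>GPP</usagename>
      </componentinstantiation>
    </componentplacement>
  </partitioning>
  <domainmanager>
    <namingservice name="REDHAWK_DEV/REDHAWK_DEV"/>
  </domainmanager>
</deviceconfiguration>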
If these criteria are met and your system still does not work, please post the stdout from running nodeBooter at the command line for both the Device Manager that's not registering and the Domain Manager you're trying to register with.
I'm working on a system with three parts that communicate over HTTP. The parts are the Service, the ServiceRegistry, and the Client. The Service and the ServiceRegistry are self-hosted OWIN applications. The nature of the client doesn't matter.
In my design, the Service POSTs to the ServiceRegistry to "register" itself. The ServiceRegistry reads Request.GetOwinContext().Request.RemoteIpAddress to determine where the Service is located and GETs back to the Service to perform some handshaking (the port for this GET is supplied in the original POST). Finally, the Client comes along and performs a GET to the ServiceRegistry asking for the location of the Service and receives back the IP address and port on which it can directly interact with the Service.
This works well when all three parts are running on different machines.
However, when the Service and the ServiceRegistry are running on MACHINE01 and the Client is running on MACHINE02, the system fails. What appears to be happening is that, when both parts are located on one machine, RemoteIpAddress receives a link-local version of the IPv6 address. I strip off the Scope ID from the IPv6 address and return the address and port to the Client. But, to the Client running on a different machine, this is an unreachable address.
Can anybody suggest how I can read the remote IP address from the OWIN request in such a way that it will be reachable from another machine on my network?
Once you are connected via one address, I don't think there is a way to get the peer's other addresses.
You could either implement and use some registry of address mappings between link-local addresses and global addresses (always in the hope that the peer accepts requests on its global address as well).
Or, if you have access to it, I'd propose modifying the requesting peer so that the request originates from its global address. This can normally be achieved with source address selection, but I have no idea how you do this on the .NET platform, as I work on Unix systems.
I have set up a CentOS 6.4 server (minimal install) which is connected to the network through an ethernet cable. The problem is that status changes are not automatically detected when the network link goes down; if I type "ifconfig", the interface still keeps its IP address (which is assigned by a DHCP server). After the link has been down for some time, the interface loses the address, but when the link comes up again the network connection is not automatically restored like it would be on a desktop computer. Even the command "dhclient eth0" does not always restore things, and I have to restart the whole network service with "/etc/init.d/network restart".
Is there any way to automatically detect network status changes like it happens in desktop installations? I'm thinking about a cron script that pings a server outside my network every 5 minutes and restarts the network service if it doesn't get any response, but this does not sound very efficient... is there another way?
EDIT: I realized I have not explained the situation correctly. My network topology is: server --> switch --> router --> external network (the router is another CentOS server running DHCPD).
For some reason (that I'm not getting), when the router goes down and reboots, the other server becomes unreachable, and I have to manually restart the network service on it. So the link does not effectively go down (the switch keeps it up); the status change is at the IP level.
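For what it's worth, something like the following is what I had in mind for the cron script (192.0.2.1 stands in for a host outside my network; the path is the usual CentOS init script):
#!/bin/sh
# crude watchdog: if a host outside the network stops answering, restart networking
ping -c 3 -W 5 192.0.2.1 > /dev/null 2>&1 || /etc/init.d/network restart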
Check whether you have NetworkManager enabled. I usually don't use it on servers, but it can help you in this case because it will automatically monitor the connections (it is quite common in desktop installations).
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/ch-NetworkManager.html
To look for messages indicating an issue with the NIC, just check the kernel ring buffer using the dmesg command.
For example, when the cable is disconnected from a given interface, this is what I get:
igb: eth1 NIC Link is Down
The first word will depend on the name of your network driver.
You could also configure the system to log these messages to /var/log/messages (I am not sure whether they appear there by default). Then it would just be a matter of monitoring the log for similar messages and restarting the network service.
In any case, NetworkManager, if it is not already enabled, should be the easier solution.
The kernel can also monitor an interface's link state via MII (the miimon facility, as used by the bonding driver), and ethtool will give you the link status.
$ ethtool eth0
...
Link detected: yes
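If you do end up polling, a cron-able check along these lines (assuming eth0 and the init script mentioned in the question) would restart networking whenever the link reads as down:
ethtool eth0 | grep -q 'Link detected: yes' || /etc/init.d/network restart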
I am writing an application for deployment on desktop computers and using ServiceStack to expose json services to a central application which will consume them. I'm using ServiceStack self hosting and I've pretty much just followed along with the ServiceStack wiki examples to get basic connectivity up and running.
This is used to start the server and bind it:
appHost.Init();
appHost.Start("http://server_ip:port/");
Windows also needs to be configured so a non-administrator can accept incoming requests, with this command from an administrator command prompt:
netsh http add urlacl url=http://server_ip:port/ user=desktopMachineName\desktopUserName listen=yes
Note: this seems to be fragile. If the urlBase in the appHost.Start(urlBase) call differs from the url parameter in the netsh command, then connections are refused by Windows. BTW, I've only tried this with Windows 8.
Is there an alternate approach so that the application can withstand changes to the desktop computers ip address (e.g. caused by DHCP)?
This is a desktop environment, so I don't expect users to have hostnames set up or static IP addresses for their computers. I'm also trying not to require them to type commands at a command prompt.
It kinda depends on how the central application locates the apps on the desktop computers.
If you only have a list of static IP addresses, you will need the desktops to have static IP addresses.
If somehow your central app knows what the current IP address is (or maybe this is in a network where you can use the machine's name), then try a urlacl of url=http://+:port/.
This basically sets a wild card saying that anything coming in on this port via http goes to this app.
Then appHost.Start("http://+:port/") should work; the wildcard listens on every address the machine has, whereas binding only to localhost would accept only local connections.
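Concretely, the matching reservation would be something like this, run once from an administrator prompt with your actual port and user:
netsh http add urlacl url=http://+:port/ user=desktopMachineName\desktopUserName listen=yes
Because the reservation no longer names a specific IP, DHCP-driven address changes stop mattering.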
We're having an issue with sending an MSMQ message to the second DNS name on a server. If we send to the IP for that same server, we're fine, but that's not where we are going architecturally. Any ideas as to why MSMQ would care about which name it receives?
Server Information:
The physical server load-int-01 has a second IP and DNS name associated with it.
First IP/DNS: load-int-01, with IP 10.0.10.10
Second IP/DNS: load-intv, with IP 10.0.10.20
Queue Path Formats Used:
FormatName:DIRECT=OS:load-int-01\private$\MyQueue → Works Fine
FormatName:DIRECT=OS:load-intv\private$\MyQueue → Returns the error…
The queue does not exist or you do not have sufficient permissions to perform this operation
We have also tried using the IP addresses instead, and both sets of IPs work fine.
FormatName:DIRECT=TCP:10.0.10.10\private$\MyQueue → Works Fine
FormatName:DIRECT=TCP:10.0.10.20\private$\MyQueue → Works Fine
We just got off the phone with Microsoft. This is a limitation of MSMQ. You cannot receive on queues with a DNS name different from the server's NetBIOS name. You can SEND to queues with an alternate DNS name provided you use the two registry keys mentioned below, OptionalNames and IgnoreOSNameValidation.
Back to virtual IPs for us, or we might keep the virtual name for the sending connection strings (with the registry settings) and use .\ for the receiving server name... that works.
Thanks for the help.
From:
http://support.microsoft.com/default.aspx?scid=kb;EN-US;899611
By default, Message Queuing verifies the message that it receives to determine whether the message is intended for the local computer. If the message is not intended for the local computer, the message is rejected.
So follow the section on "IgnoreOSNameValidation" in this article and I hope it will help.
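For convenience, the change the article describes can be expressed as a .reg file roughly like this (I'm quoting the key path from memory of KB 899611, so verify it against the article before applying it):
Windows Registry Editor Version 5.00

; allow MSMQ to accept messages addressed to a DNS alias (per KB 899611; verify path before use)
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSMQ\Parameters]
"IgnoreOSNameValidation"=dword:00000001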
Very frustrating. I'm trying to migrate some MSMQ targets (web services) and I guess I will have to configure them to use virtual IPs, and migrate the virtual IPs, since migrating the NetBIOS name will be a mission.
MSMQ should be re-christened MSMQ-1982, since it appears to predate the invention of a cunning and useful abstraction layer called "DNS" in 1983.
I had the same issue and got it working. The trick for me was that, after setting the IgnoreOSNameValidation registry key, you have to restart the Message Queuing service.
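From an elevated command prompt, that restart is just the following (MSMQ is the service name behind "Message Queuing"; dependent MSMQ services may be stopped along with it):
net stop msmq
net start msmq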
I know this is an old post, but it comes up in Google when searching for a solution to this issue.
This did work for me:
FormatName:DIRECT=TCP:HOST.TLD\PRIVATE$\MyQueue
Note that this uses TCP instead of OS. This is the relevant documentation:
Non-transactional messaging by using Direct=TCP: This configuration functions without any particular configuration changes.
I have a Linux server with multiple IPs (so multiple interfaces: eth0, eth0:0, eth0:1, etc.).
The script I'm trying to start is a PHP CLI script which downloads stuff from another server's API, and I would like to choose the outgoing IP based on different parameters. Once the script is started, I no longer need to change the IP of that specific script until it ends.
Do you have any clue whether it is possible to achieve this?
My other solution was to install Xen or OpenVZ and create N different VPSes, one per IP, but as you can see that is definitely a PITA :-)
You don't specify how you connect to the other server, but with sockets you can try socket_bind.
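A minimal sketch of that approach, assuming a plain TCP connection (api.example.com and 192.0.2.10 are hypothetical placeholders for the remote API host and one of your local addresses):
<?php
// create the socket, then bind it to the desired local IP before connecting;
// port 0 lets the kernel pick any free local port
$sock = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_bind($sock, '192.0.2.10', 0);
// socket_connect() wants a dotted-quad address for AF_INET, so resolve first
socket_connect($sock, gethostbyname('api.example.com'), 80);
socket_write($sock, "GET / HTTP/1.0\r\nHost: api.example.com\r\n\r\n");
echo socket_read($sock, 8192);
socket_close($sock);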
EDIT:
With curl you can try curl_setopt.
CURLOPT_INTERFACE The name of the outgoing network interface to use. This can be an interface name, an IP address or a host name.
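In PHP that would look roughly like this (the URL and local address are hypothetical placeholders):
<?php
$ch = curl_init('http://api.example.com/endpoint');
// send this request from a specific local address; an interface name like 'eth0:1' also works
curl_setopt($ch, CURLOPT_INTERFACE, '192.0.2.10');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of echoing it
$body = curl_exec($ch);
curl_close($ch);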
I know how to do it in C: you use bind() on your socket before you call connect(), binding to the IP address assigned to the desired interface and passing 0 for the port. I don't know how to do it in PHP.