GameServer ob1 = new GameServer();
GameServer ob2 = new GameServer();
GameServer ob3 = new GameServer();
// All three objects can share port 2026; RMI multiplexes exported objects on one port.
Remote objNA = UnicastRemoteObject.exportObject(ob1, 2026);
Remote objEU = UnicastRemoteObject.exportObject(ob2, 2026);
Remote objAS = UnicastRemoteObject.exportObject(ob3, 2026);
Registry r = LocateRegistry.createRegistry(2026);
r.rebind("NA", objNA);
r.rebind("EU", objEU);
r.rebind("AS", objAS);
I am creating three different remote objects on the server and binding them to the same registry. The purpose is to have three different hash tables, one for each server. Now, when I call one server from the client based upon its IP address, using Naming.lookup("NA"), I am not able to access the Hashtables of the other two servers.
Any suggestions on how to access the other two?
r.rebind("NA", objNA);
r.rebind("EU", objEU);
r.rebind("AS", objAS);
That means you can retrieve stubs for those objects via:
GameServerInterface ifNA = (GameServerInterface)r.lookup("NA");
GameServerInterface ifEU = (GameServerInterface)r.lookup("EU");
GameServerInterface ifAS = (GameServerInterface)r.lookup("AS");
Currently I'm doing a Node.js project where I don't want to use any third-party packages or database systems. I'm almost done with the project, but here's the problem: I have some authentication-related functionality that needs "sticky load balancing", meaning all authentication-related tasks must be done by the primary node.
I know I can send a message to the primary node from the child nodes using process.send({ --- msg object --- }), and from the primary I can do worker.send({ -- response object -- }) to send a message to a child node.
But I need to do something like this from the child nodes:
process.send(msgObject, callback);
where I'll get the response in the callback. But all the child nodes are separate Node.js instances. I've tried to include the callback function in the msgObject, but the primary node strips all functions from the msgObject (functions cannot survive IPC message serialization). I read the docs and found this:
process.send(message[, sendHandle[, options]][, callback])
But there's no example of how to implement it. It says sendHandle is of type net.Server | net.Socket. I know how to set up a basic socket server and communicate with it, but I'm not sure it's a good idea to use a socket server to implement this communication.
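For what it's worth, the usual workaround is to correlate requests and replies with an id instead of sending a callback. A minimal sketch of that pattern in Python, with multiprocessing standing in for the cluster IPC channel (all message fields here are invented for illustration):

import multiprocessing as mp

def child(conn):
    # Child side: tag the request with an id instead of passing a callback
    # (functions cannot survive IPC message serialization).
    conn.send({"id": 1, "cmd": "auth", "user": "alice"})  # hypothetical message
    reply = conn.recv()            # blocks until the primary answers
    assert reply["id"] == 1        # match the reply to the request
    print("child received:", reply)

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=child, args=(child_conn,))
    p.start()
    request = parent_conn.recv()                         # primary handles the request...
    parent_conn.send({"id": request["id"], "ok": True})  # ...and echoes the id back
    p.join()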
I have some problems changing the database service through the cx_Oracle module.
It seems that somehow the first connection "persists", even if I delete the object or create a new connection in a child process. So, when I attempt to make a connection to another service, it raises an "ORA-01017: invalid username/password; logon denied".
I use a wallet to set up the connection.
class Connection(object):
    def __init__(self, oracle_user, instance=os.environ["ORACLESRV"], env=os.environ["ENVPURPOSE"]):
        self.oracle_user = oracle_user
        self.instance = instance
        self.env = env
        wallet_path = "$SCRIPTS/oracle/wallets/{env}/{oracle_user}".format(
            env=self.env.upper(), oracle_user=self.oracle_user.upper())
        # Point the Oracle client at the wallet for this user/environment.
        os.environ["TNS_ADMIN"] = os.path.expandvars(wallet_path)
        os.environ["NLS_LANG"] = "Italian_Italy.UTF8"
        # External authentication via the wallet: "/@ALIAS".
        self.connection = cx_Oracle.connect("/@" + self.instance.upper())
The first connection is made without errors, but when I try to change service (i.e. the instance argument of the Connection class) the connection is refused. Parameters are passed correctly to the constructor, but it's as if the script keeps seeing the first wallet, which obviously does not hold the user/password for the other service.
How can I overcome this "persistence"?
Oracle reads its environment variables (including ones like TNS_ADMIN and NLS_LANG) only once. Once a connection has been established, the environment variables are not consulted again. This is likely the source of the "persistence" that you are seeing. You'll need to make sure that the environment variables are all defined before the connection is made and are suitable for all of the connections you intend to make; otherwise, you'll need to use a child process of some sort (but not one created using a fork).
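For instance, a minimal sketch of the child-process route, assuming the Connection class from the question. Using multiprocessing's "spawn" start method (rather than fork, per the note above) gives each worker a fresh interpreter, so TNS_ADMIN is read before the first connect; the user and service names here are illustrative:

import multiprocessing as mp

def query_service(oracle_user, instance):
    # Fresh process: Connection sets TNS_ADMIN before cx_Oracle's first connect.
    conn = Connection(oracle_user, instance=instance)
    cur = conn.connection.cursor()
    cur.execute("SELECT sys_context('userenv', 'service_name') FROM dual")
    return cur.fetchone()[0]

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # "spawn", not "fork": no inherited Oracle state
    for service in ("SRV1", "SRV2"):  # illustrative service names
        with ctx.Pool(1) as pool:
            print(pool.apply(query_service, ("myuser", service)))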
I'm trying to add Zabbix server support for a service that is written in Python. The service should send metrics to a Zabbix server in active mode, i.e. the service connects to the server periodically, not the other way around. (The service may be operated behind firewalls, so the only option is to use active mode.)
In the host.create API call, I'm required to give the interfaces for the host. Here is the documentation for that: https://www.zabbix.com/documentation/3.4/manual/api/reference/host/create - the interfaces parameter is required. If I try to give an empty list:
zapi = ZabbixAPI(cfg.url)
zapi.login(cfg.user, cfg.password)  # I'm using an administrator user here!
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[host_group_id],
    interfaces=[],     # active agent, no interface???
)
Then I get this error:
pyzabbix.ZabbixAPIException: ('Error -32500: Application error., No permissions to referred object or it does not exist!', -32500)
I can create hosts using the same user in the Zabbix web interface, so I guess the problem is with the interfaces. I have therefore tried to create an interface first. However, the hostinterface.create method requires a hostid parameter.
See here: https://www.zabbix.com/documentation/3.4/manual/api/reference/hostinterface/create - I must give a hostid.
This is a catch-22: in order to create a host, I need to have a host interface, but to create a host interface, I need to have a host.
What am I missing? Maybe I was wrong and the host.create API call was rejected for a different reason. How can I figure out what it was?
The host.create API will create the host interface as well; you need to populate interfaces with the correct fields according to the documentation.
For instance, add this before calling the API:
interfaces = []
interfaces.append({
    'type': 2,    # 2 = SNMP interface
    'main': 1,
    'useip': 1,
    'ip': '1.2.3.4',
    'dns': "",
    'port': '161'
})
Then pass it to the host.create API.
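A hedged sketch of that final call, reusing the zapi session and the cfg names from the question:

host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    groups=[host_group_id],
    interfaces=interfaces,    # the list built above
)
print(host["hostids"][0])     # id of the newly created host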
The referenced documentation does not show it explicitly, but in Zabbix a host needs to have:
- One or more interfaces (active hosts need them too)
- One or more host groups
So for your code to work, you will need to change it to something like this:
zapi = ZabbixAPI(cfg.url)
zapi.login(cfg.user, cfg.password)  # I'm using an administrator user here!
host = zapi.host.create(
    host=cfg.host_name,
    description=cfg.host_description,
    inventory_mode=1,  # auto host inventory population
    status=0,          # monitored host
    groups=[host_group_id],
    interfaces=[{
        "type": "1",     # Zabbix agent interface
        "main": "1",
        "useip": "1",
        "ip": "127.0.0.1",
        "dns": "mydns",  # can be blank
        "port": "10051",
    }],
)
In your case it is an "active host", but in Zabbix the active/passive concept applies to items, not to hosts. So it is possible (and not unusual) to have hosts with passive and active items at the same time.
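To illustrate that last point, a sketch (not part of the original answer) of creating an active-agent item on the new host; the item name, key, and delay are illustrative:

host_id = host["hostids"][0]  # id returned by host.create above
item = zapi.item.create(
    hostid=host_id,
    name="Agent ping (active)",
    key_="agent.ping",   # stock agent key, shown for illustration
    type=7,              # 7 = Zabbix agent (active)
    value_type=3,        # 3 = numeric unsigned
    delay="60s",
)
print(item["itemids"][0])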
I have a script that retrieves a login for ECR, authenticates a DockerClient instance with the login credentials (reauth set to True), and then attempts to pull a nominated container image.
The code seems to work perfectly when running on my local machine and interacting with the Docker daemon on an EC2 instance, but when running from the EC2 instance I constantly get
404 Client Error: Not Found ("repository XXXXXXXX.dkr.ecr.eu-west-2.amazonaws.com/autohld-runner not found: does not exist or no pull access")
The same repo is being used both when executing the code locally and remotely on the EC2 instance. I have tried setting access to the image within ECR to allow pulls for both everyone and my AWS ID. I have also granted the role assigned to the EC2 instance full admin access. All with no joy.
If I perform the same tasks on the EC2 instance via the command line, with the exact same repo URI (copied from the error), it works with no issue.
Is there something I am missing within docker-py?
url = "tcp://127.0.0.1:2375"
dockerd = docker.DockerClient(base_url=url, version='auto')
dockerd.login(username=ecr.username, password=ecr.password, email='none', registry=ecr.registry, reauth=True)
dockerd.images.pull(ecr.get_repo(instance.tags['Container']), tag='latest')
get_repo returns the full URI as reported in the error message; the Container tag holds the name 'autohld-runner'.
Thanks
It seems that if the registry has been accessed via the CLI, an auth token (or something similar) is cached, and Docker remembers it, allowing subsequent calls to work. However, in this case the instance is starting up completely fresh and using the login method within docker-py.
That login doesn't seem to pass the credentials on to the pull. I have found that using the auth_config named argument and passing in a dictionary of auth parameters works:
auth_creds = {'username': ecr.username, 'password': ecr.password}
dockerd.images.pull(ecr.get_repo(instance.tags['Container']), tag='latest', auth_config=auth_creds)
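For a completely fresh instance with no cached CLI login, the whole flow can be sketched as follows. This is illustrative only: boto3 is assumed for fetching the ECR token, and the repository URI placeholder stands in for whatever ecr.get_repo(...) returns.

import base64
import boto3
import docker

# Fetch a fresh ECR token with boto3 (region assumed; adjust to yours).
token = boto3.client("ecr", region_name="eu-west-2") \
             .get_authorization_token()["authorizationData"][0]
# The token decodes to "AWS:<password>".
username, password = base64.b64decode(token["authorizationToken"]).decode().split(":")

dockerd = docker.DockerClient(base_url="tcp://127.0.0.1:2375", version="auto")
auth_creds = {"username": username, "password": password}

# Full repository URI, as returned by ecr.get_repo(...) in the question.
repo_uri = "<account>.dkr.ecr.eu-west-2.amazonaws.com/<repo>"
image = dockerd.images.pull(repo_uri, tag="latest", auth_config=auth_creds)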
HTH
I am trying to access a remote ArangoDB install (on a Windows server).
I've tried changing the endpoint in arangod.conf as mentioned in another post here, but as soon as I do, the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
- Connect to the server in my application code (during development).
- Connect to the server from a local arangosh shell.
- Connect to the ArangoDB web dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html).
It has been a long time since I came back to this. Thanks to the previous comments I was able to sort it out.
The file to edit is arangod.conf. On a Windows machine it is located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to the IP address of my server (the bottom line below):
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
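To check the new endpoint from application code, a minimal sketch in Python with requests (illustrative only; /_api/version needs no database context, and you would add credentials if authentication is enabled on the server):

import requests

# Hit ArangoDB's HTTP API at the endpoint configured above.
resp = requests.get("http://192.168.0.14:8529/_api/version")
resp.raise_for_status()
print(resp.json())  # e.g. {"server": "arango", "version": "2.6.9"}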
Please have a look at the managing endpoints documentation. It explains how to bind an endpoint and how to check whether it worked.