I am trying to access a remote ArangoDB install (on a Windows server).
I've tried changing the endpoint in arangod.conf as mentioned in another post here, but as soon as I do, the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
Connect to the server in my application code (during development).
Connect to the server from a local arangosh shell.
Connect to the ArangoDB web dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html).
It's been a long time since I came back to this. Thanks to the previous comments I was able to sort it out.
The file to edit is arangod.conf. On a Windows machine it is located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to the IP address of my server (see the bottom line below):
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
Please have a look at the managing endpoints documentation. It explains how to bind an endpoint and how to check whether it worked.
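As a quick sanity check from another machine, you can hit ArangoDB's HTTP API directly. A minimal sketch in Python using requests against the version endpoint, assuming the address above (adjust or drop the credentials to match your authentication setup):

import requests

# Hypothetical credentials; replace with your own.
resp = requests.get("http://192.168.0.14:8529/_api/version", auth=("root", "password"))
resp.raise_for_status()
print(resp.json())  # server name and version if the endpoint is reachable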
How to get the server name in Python Django? We host a Django application on AWS behind Apache, with more than one server, scaled up based on load.
Now I need to find out which server handled a particular request and which server returned the response.
I am using Python 3.6, Django, Django REST framework, and Apache on an AWS machine.
I assume by server name you mean hostname.
To get your hostname you can use:
import socket
socket.gethostname()  # returns the local machine's hostname
https://docs.python.org/3/library/socket.html#socket.gethostname
Or you can query your instance metadata; there you'll get a richer result, such as:
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
iam/
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
My suggestion is to use instance-id.
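A minimal sketch of reading the metadata from Python with requests; 169.254.169.254 is the standard instance metadata address (note that instances enforcing IMDSv2 additionally require a session token header):

import requests

# Standard EC2 instance metadata endpoint.
METADATA_URL = "http://169.254.169.254/latest/meta-data/"

instance_id = requests.get(METADATA_URL + "instance-id", timeout=2).text
public_hostname = requests.get(METADATA_URL + "public-hostname", timeout=2).text
print(instance_id, public_hostname)  # identifies which server handled the request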
So, I am trying to write a Python script using pyvmomi to control the power state of a virtual machine running on my ESXi server. Basically, I tried using connection.content.searchIndex.FindByIp(ip="the IP of the VM", vmSearch=True) to grab my VM and then power it on, but of course I cannot get the IP of the VM when it's off. So I was wondering if there is any way I could get the VM by its name or ID instead? I searched around quite a bit but couldn't really find a solution. Either way, here's my code so far:
from pyVim import connect
# Connect to ESXi host
connection = connect.Connect("192.168.182.130", 443, "root", "password")
# Get a searchIndex object
searcher = connection.content.searchIndex
# Find a VM
vm = searcher.FindByIp(ip="192.168.182.134", vmSearch=True)
# Print out vm name
print(vm.config.name)
# Disconnect from cluster or host
connect.Disconnect(connection)
The SearchIndex doesn't have any 'find by name' method, so you'll probably have to resort to pulling back all of the VMs and filtering through them client side.
Here's an example of returning all the VMs: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
Another option: if you're using vCenter 6.5+, there's the vSphere Automation SDK for Python, where you can interact with the REST APIs to do a server-side filter. More info: https://github.com/vmware/vsphere-automation-sdk-python
This code might prove helpful:
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl

# Skip certificate verification (common for lab hosts with self-signed certs).
s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="192.168.100.10", user="admin", pwd="admin123", sslContext=s)
content = si.content

def get_all_objs(content, vimtype):
    # Build a dict mapping each managed object reference to its name.
    obj = {}
    container = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    for managed_object_ref in container.view:
        obj.update({managed_object_ref: managed_object_ref.name})
    container.Destroy()
    return obj

vmToScan = [vm for vm in get_all_objs(content, [vim.VirtualMachine]) if "ubuntu-16.04.4" == vm.name]
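From there, a minimal sketch of powering on the matched VM; PowerOnVM_Task returns a task object you can wait on (the VM name above is just an example):

if vmToScan:
    vm = vmToScan[0]
    # Only power on if the VM isn't already running.
    if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        task = vm.PowerOnVM_Task()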
I have two machines on a LAN, and I'd like to connect to an ArangoDB server on one of them from the other.
The first one has the address 192.168.0.105 and this arangod.conf:
[server]
endpoint = tcp://0.0.0.0:8529
storage-engine = auto
The other one has the address 192.168.0.100 and this arangod.conf:
[server]
endpoint = tcp://192.168.0.105:8529
storage-engine = auto
ArangoDB on the first machine is working. When I try to start ArangoDB on the second machine, I see the following error:
2018-08-21T09:46:15Z [2724] INFO {authentication} Jwt secret not specified, generating...
2018-08-21T09:46:15Z [2724] INFO ArangoDB 3.3.12 [win64] 64bit, using build tags/v3.3.12-0-g225095d762, VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.0.2a 19 Mar 2015
2018-08-21T09:46:15Z [2724] INFO using storage engine mmfiles
2018-08-21T09:46:15Z [2724] INFO {cluster} Starting up with role SINGLE
2018-08-21T09:46:15Z [2724] INFO {authentication} Authentication is turned on (system only)
2018-08-21T09:46:18Z [2724] INFO using endpoint 'http+tcp://192.168.0.105:8529' for non-encrypted requests
2018-08-21T09:46:18Z [2724] ERROR {communication} unable to bind to endpoint 'http+tcp://192.168.0.105:8529': The requested address is not valid in its context
2018-08-21T09:46:18Z [2724] WARNING {communication} failed to open endpoint 'http+tcp://192.168.0.105:8529' with error: The requested address is not valid in its context
2018-08-21T09:46:18Z [2724] FATAL failed to bind to endpoint 'http+tcp://192.168.0.105:8529'. Please check whether another instance is already running using this endpoint and review your endpoints configuration.
I've already created rules in the Windows firewall and in the router.
Test-NetConnection results are:
PS C:\Users\> Test-NetConnection -ComputerName 192.168.0.105 -Port 8529
ComputerName : 192.168.0.105
RemoteAddress : 192.168.0.105
RemotePort : 8529
SourceAddress : 192.168.0.100
TcpTestSucceeded : True
What else should I do?
Not sure what you are trying to do here... connect with one server to another server? This is bound to fail. Don't you want to run a server on one machine and connect to it from another computer on the local network using arangosh? Or simply use the web interface?
The endpoint must be an address used by a network interface of your local computer. It can't be the address of another machine.
Setups like clusters require a lot more configuration (if done bare-metal).
For an overview of deployment modes including multi-machine setups you may want to check the work-in-progress documentation: https://docs.arangodb.com/devel/Manual/Deployment/
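In other words, the endpoint on the second machine has to be one of its own addresses. A sketch of what that could look like, assuming the second machine should also run its own server bound to all local interfaces, and that you connect to the first machine's server with arangosh as the root user:

[server]
endpoint = tcp://0.0.0.0:8529
storage-engine = auto

# From 192.168.0.100, connect to the server on 192.168.0.105:
arangosh --server.endpoint tcp://192.168.0.105:8529 --server.username root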
I'm using the sample code from the documentation and I'm trying to connect to the server using the Prosys OPC UA Client. I have tried opcua-commander and the Integration Objects OPC UA client, and it looks like the server works just fine.
Here's what is happening:
1. After entering the endpoint URL, the client appends urn:NodeOPCUA-Server-default to it.
2. The client asks to specify security settings.
3. The client asks to choose a server; the only option is urn:NodeOPCUA-Server-default.
Then it goes back to steps 2 and 3 over and over.
If I just minimize the Prosys client without closing the configuration, after some time I get this info in the terminal:
Server: closing SESSION new ProsysOpcUaClient Session15 because of timeout = 300000 has expired without a keep alive
channel = ::ffff:10.10.13.2 port = 51824
I have tried this project and it works -> node-opcua-htmlpanel. What's missing in the sample code then?
After opening the debugger I noticed that each time I select security settings and hit OK, server_publish_engine reports:
server_publish_engine:179 Cencelling pending PublishRequest with statusCode BadSecureChannelClosed (0x80860000) length = 0
This is due to a specific interoperability issue that was introduced in node-opcua 0.2.2. This will be fixed in the next version of node-opcua. The resolution can be tracked here: https://github.com/node-opcua/node-opcua/issues/464
The issue has been handled at the Prosys OPC Forum:
The error happens because the server sends different EndpointDescriptions in GetEndpointsResponse and CreateSessionResponse.
In GetEndpoints, the returned EndpointDescriptions contain TransportProfileUri=http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary. In CreateSessionResponse, the corresponding TransportProfileUri is empty.
In principle, the server application is not working according to specification. Part 4 of the OPC UA specification states that “The Server shall return a set of EndpointDescriptions available for the serverUri specified in the request. … The Client shall verify this list with the list from a DiscoveryEndpoint if it used a DiscoveryEndpoint to fetch the EndpointDescriptions. It is recommended that Servers only include the server.applicationUri, endpointUrl, securityMode, securityPolicyUri, userIdentityTokens, transportProfileUri and securityLevel with all other parameters set to null. Only the recommended parameters shall be verified by the client.”
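To see what a server actually returns from GetEndpoints, you can dump the endpoint descriptions yourself. A minimal sketch using the python-opcua package for illustration (the URL is a placeholder for your server's endpoint):

from opcua import Client

client = Client("opc.tcp://localhost:4840")
# Connects, performs GetEndpoints, and disconnects again.
endpoints = client.connect_and_get_server_endpoints()
for ep in endpoints:
    # Compare these fields against what CreateSessionResponse later reports.
    print(ep.EndpointUrl, ep.SecurityMode, ep.TransportProfileUri)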
I am using HAProxy for AWS RDS (MySQL) load balancing in my app, which is written using Flask.
The HAProxy.cfg file has the following configuration for the DB:
listen mysql
bind 127.0.0.1:3306
mode tcp
balance roundrobin
option mysql-check user haproxy_check
option log-health-checks
server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLAlchemy, and its URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
but when I run an API in my test environment, the APIs that just read from the DB execute fine, while the APIs that write something to the DB mostly give me errors like:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use two URLs in this scenario, one for read-only operations and one for writes.
How does this work with Flask and SQLAlchemy behind HAProxy?
How do I tell my app to use one URL for write operations and the other HAProxy URL for read-only operations?
I didn't find any help in the SQLAlchemy documentation.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve
that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to Binds, you can then target a specific bind by name:
db.create_all(bind='read')  # against the read replica
db.create_all(bind='master')  # against the master
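For routing actual queries rather than just create_all, a minimal sketch, assuming Flask-SQLAlchemy 2.x's db.get_engine and a hypothetical User model: open read-only sessions against the 'read' bind, while writes keep using the default session pointed at the master.

from sqlalchemy.orm import sessionmaker

# Engine for the 'read' bind defined in SQLALCHEMY_BINDS above.
read_engine = db.get_engine(app, bind='read')
ReadSession = sessionmaker(bind=read_engine)

def get_users_readonly():
    session = ReadSession()  # reads go to the replica
    try:
        return session.query(User).all()  # User is a hypothetical model
    finally:
        session.close()

# Writes keep using the default db.session, which points at the master:
# db.session.add(user); db.session.commit()

Alternatively, a model can declare __bind_key__ = 'read' to pin that whole model to one bind.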