How to get the server name in Python Django? - python-3.x

How do I get the server name in Python Django? We host a Django application on AWS behind Apache, with more than one server; the number of servers is increased based on load.
Now I need to find out which server handled a particular request and which server produced the response.
I am using Python 3.6, Django, Django REST framework, and Apache on AWS.

I assume that by server name you mean the hostname.
To get the hostname you can use:
import socket
socket.gethostname()
https://docs.python.org/3/library/socket.html#socket.gethostname
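If the goal is to see which instance served a given request, one option is to expose that hostname in a response header from Django middleware. A minimal sketch, assuming new-style middleware and a header name (X-Served-By) of my own choosing:

import socket

class ServerNameMiddleware:
    # Stamps every response with the hostname of the instance that handled it.
    def __init__(self, get_response):
        self.get_response = get_response
        self.hostname = socket.gethostname()  # resolved once at startup

    def __call__(self, request):
        response = self.get_response(request)
        response['X-Served-By'] = self.hostname  # illustrative header name
        return response

Register the class in your MIDDLEWARE setting and inspect the header of any response to see which server answered.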
Or you can query the instance metadata, which gives you a richer result, such as:
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
iam/
instance-action
instance-id
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html
My suggestion is to use instance-id.
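For example, a minimal sketch that reads the instance-id from the metadata endpoint using only the standard library (IMDSv1 shown; if your instances enforce IMDSv2 you would first have to request a session token):

from urllib.request import urlopen

# 169.254.169.254 is the EC2 instance metadata service
METADATA_URL = 'http://169.254.169.254/latest/meta-data/instance-id'

with urlopen(METADATA_URL, timeout=2) as resp:
    instance_id = resp.read().decode()

print(instance_id)  # e.g. i-0123456789abcdef0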

Related

Pyads connection refused with Beckhoff running Twincat 3

I am trying to make a connection from a server running Ubuntu to a Beckhoff PLC with TwinCAT 3. On Windows everything works fine, but from the same server on Linux I can't get a connection.
The Linux server has a static IP, and in the route manager on the PLC I can find the route and see the server. I have tried adding the route via the route manager on the PLC and with "add_route_to_plc", but both ways my connection is refused. I have already turned off all firewalls. Does anyone have an idea what goes wrong here? In the attachment I have added some pictures of my settings and the code that I try to run.
Python error: "connection closed by remote"
Python code:
import pyads
SENDER_AMS = '192.168.1.180.1.1'
PLC_IP = '192.168.1.100'
PLC_USERNAME = 'Administrator'
PLC_PASSWORD = '1'
ROUTE_NAME = 'GID_TEST_ROUTE'
HOSTNAME = 'Grid-stabilizer'
pyads.open_port()
pyads.set_local_address(SENDER_AMS)
pyads.add_route_to_plc(SENDER_AMS, HOSTNAME, PLC_IP, PLC_USERNAME, PLC_PASSWORD, route_name=ROUTE_NAME)
pyads.close_port()
plc=pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1)
plc.open()
plc.read_state()
If you are running Python on Linux and the PLC on Windows, try
plc = pyads.Connection('192.168.1.100.1.1', pyads.PORT_TC3PLC1, PLC_IP)
This will create a route on the Linux system. In your code the IP address is missing, so a proper route is never created.
Also check the ADS port of your PLC: it should be 851.
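Putting that together, a sketch of the adjusted connection code (assuming the route has already been added once, as in the question):

import pyads

PLC_AMS_ID = '192.168.1.100.1.1'
PLC_IP = '192.168.1.100'

# Passing the PLC IP as the third argument lets pyads create the route
# on the Linux side; PORT_TC3PLC1 is the default TwinCAT 3 PLC port (851).
plc = pyads.Connection(PLC_AMS_ID, pyads.PORT_TC3PLC1, PLC_IP)
plc.open()
print(plc.read_state())
plc.close()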

Connecting to oracle db from python

We are connecting to Oracle from Python using the cx_Oracle package,
but the user id, password and SID details are hardcoded in it.
My question is: is there any way to create a data-source kind of thing? And how do we deploy such Python scripts in production?
The database is on a Linux box and Python is installed on another Linux box (a WebLogic server is also installed on that box).
import cx_Oracle
con = cx_Oracle.connect('pythonhol/welcome@127.0.0.1/orcl')
print(con.version)
The expectation is:
Can we deploy Python on a production instance?
If yes, how can we connect to the database while hiding the DB credentials?
Use some kind of 'external authentication', for example a wallet. See the cx_Oracle documentation https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-using-external-authentication
In summary:
create a wallet with mkstore which contains the username/password credentials.
copy the wallet to the machines that are running Python
make sure no bad people can access the wallet
configure Oracle Net files to point to the wallet
Your scripts would then connect like:
connection = cx_Oracle.connect(dsn="mynetalias", encoding="UTF-8")
or
pool = cx_Oracle.SessionPool(externalauth=True, homogeneous=False,
                             dsn="mynetalias", encoding="UTF-8")
pool.acquire()
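A sketch of what such a script could look like; "mynetalias" must match an alias in your tnsnames.ora, and TNS_ADMIN (or the default network/admin location) must point at the directory containing the wallet and the Oracle Net files:

import cx_Oracle

# No username/password in the code: the credentials come from the wallet
# referenced by the Oracle Net configuration (sqlnet.ora / tnsnames.ora).
connection = cx_Oracle.connect(dsn="mynetalias", encoding="UTF-8")

cursor = connection.cursor()
cursor.execute("select sysdate from dual")
print(cursor.fetchone())
connection.close()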

Start VM from powered off state using pyvmomi

So, I am trying to make a Python script using pyvmomi to control the state of a virtual machine I'm running on my ESXi server. Basically, I tried using connection.content.searchIndex.FindByIp(ip="the ip of the VM", vmSearch=True) to grab my VM and then power it on, but of course I cannot get the IP of the VM when it's off. So, I was wondering if there was any way I could get the VM, maybe by name or its ID? I searched around quite a bit but couldn't really find a solution. Either way, here's my code so far:
from pyVim import connect
# Connect to ESXi host
connection = connect.Connect("192.168.182.130", 443, "root", "password")
# Get a searchIndex object
searcher = connection.content.searchIndex
# Find a VM
vm = searcher.FindByIp(ip="192.168.182.134", vmSearch=True)
# Print out vm name
print (vm.config.name)
# Disconnect from cluster or host
connect.Disconnect(connection)
The SearchIndex doesn't have a method to find a VM by name, so you'll probably have to resort to pulling back all of the VMs and filtering through them client side.
Here's an example of returning all the VMs: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
Another option, if you're using vCenter 6.5+, there's the vSphere Automation SDK for Python where you can interact with the REST APIs to do a server side filter. More info: https://github.com/vmware/vsphere-automation-sdk-python
This code might prove helpful:
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl
s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="192.168.100.10", user="admin", pwd="admin123", sslContext=s)
content = si.content

def get_all_objs(content, vimtype):
    obj = {}
    container = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    for managed_object_ref in container.view:
        obj.update({managed_object_ref: managed_object_ref.name})
    return obj

vmToScan = [vm for vm in get_all_objs(content, [vim.VirtualMachine]) if "ubuntu-16.04.4" == vm.name]
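Once the VM has been found by name, powering it on is a separate call; a sketch, assuming the si connection from the snippet above (PowerOnVM_Task returns a task that WaitForTask from pyVim.task can block on):

from pyVim.task import WaitForTask

if vmToScan:
    vm = vmToScan[0]  # take the first match
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        task = vm.PowerOnVM_Task()   # asynchronous power-on request
        WaitForTask(task, si=si)     # block until the task completes
        print("Powered on", vm.name)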

Read/Read-Write URIs for Amazon Web Services RDS

I am using HAProxy for AWS RDS (MySQL) load balancing for my app, which is written using Flask.
The HAProxy.cfg file has following configuration for the DB
listen mysql
bind 127.0.0.1:3306
mode tcp
balance roundrobin
option mysql-check user haproxy_check
option log-health-checks
server db01 MASTER_DATABASE_ENDPOINT.rds.amazonaws.com
server db02 READ_REPLICA_ENDPOINT.rds.amazonaws.com
I am using SQLAlchemy and its URI is:
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@127.0.0.1:3306/DATABASE'
When I run the APIs in my test environment, the ones that only read from the DB execute just fine, but the APIs that write something to the DB give me errors, mostly this one:
(pymysql.err.InternalError) (1290, 'The MySQL server is running with the --read-only option so it cannot execute this statement')
I think I need to use two URLs in this scenario, one for read-only operations and one for writes.
How does this work with Flask and SQLAlchemy behind HAProxy?
How do I tell my app to use one URL for write operations and the other HAProxy URL for read-only operations?
I didn't find any help in the SQLAlchemy documentation.
Binds
Flask-SQLAlchemy can easily connect to multiple databases. To achieve
that it preconfigures SQLAlchemy to support multiple “binds”.
SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://USER:PASSWORD@DEFAULT:3306/DATABASE'
SQLALCHEMY_BINDS = {
    'master': 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE',
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE'
}
Referring to Binds:
db.create_all(bind='read') # from read only
db.create_all(bind='master') # from master
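A sketch of how the explicit routing can look in the app itself; the endpoints and credentials are placeholders, and db.get_engine(bind=...) is the Flask-SQLAlchemy 2.x API (3.x exposes db.engines['read'] instead):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import text

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql+pymysql://USER:PASSWORD@MASTER_DATABASE_ENDPOINT:3306/DATABASE'
app.config['SQLALCHEMY_BINDS'] = {
    'read': 'mysql+pymysql://USER:PASSWORD@READ_REPLICA_ENDPOINT:3306/DATABASE',
}
db = SQLAlchemy(app)

# Writes keep using the default session, which is backed by the master URI.
# Read-only statements can be sent to the replica by asking for that bind's engine:
with db.get_engine(bind='read').connect() as conn:
    rows = conn.execute(text("SELECT 1")).fetchall()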

Remote ArangoDB access

I am trying to access a remote ArangoDB install (on a Windows server).
I've tried changing the endpoint in the arangod.conf as mentioned in another post here but as soon as I do the database stops responding both remotely and locally.
I would like to be able to do the following remotely:
Connect to the server in my application code (during development).
Connect to the server from a local arangosh shell.
Connect to the Arango server dashboard (http://127.0.0.1:8529/_db/_system/_admin/aardvark/standalone.html)
It has been a long time since I came back to this. Thanks to the previous comments I was able to sort it out.
The file to edit is arangod.conf. On a windows machine located at:
C:\Program Files\ArangoDB 2.6.9\etc\arangodb\arangod.conf
The comments under the [server] section helped. I changed the endpoint to be the IP address of my server (bottom line):
[server]
# Specify the endpoint for HTTP requests by clients.
# tcp://ipv4-address:port
# tcp://[ipv6-address]:port
# ssl://ipv4-address:port
# ssl://[ipv6-address]:port
# unix:///path/to/socket
#
# Examples:
# endpoint = tcp://0.0.0.0:8529
# endpoint = tcp://127.0.0.1:8529
# endpoint = tcp://localhost:8529
# endpoint = tcp://myserver.arangodb.com:8529
# endpoint = tcp://[::]:8529
# endpoint = tcp://[fe80::21a:5df1:aede:98cf]:8529
#
endpoint = tcp://192.168.0.14:8529
Now I am able to access the server from my client using the above address.
Please have a look at the managing endpoints documentation. It explains how to bind to an endpoint and how to check whether it worked.
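For the application-code part of the question, a sketch that talks to the remote endpoint over plain HTTP using only the standard library (the credentials are placeholders; /_api/version exists in both ArangoDB 2.x and 3.x):

import base64
from urllib.request import Request, urlopen

# The endpoint configured in arangod.conf above
URL = 'http://192.168.0.14:8529/_api/version'

# Basic auth; replace with a real user on your server
auth = base64.b64encode(b'root:password').decode()
req = Request(URL, headers={'Authorization': 'Basic ' + auth})

with urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # e.g. {"server":"arango","version":"2.6.9"}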
