I have an ENC set up which determines which environment a node will be placed in during check-in.
Currently I'm keeping track of node types using the host name in an external database.
When a node checks in for the first time, I'd like to determine the environment it should be in based on a fact. For example, say I want to use the OS fact to determine whether a new node should be sent a Windows or Linux configuration.
It seems I only have access to the node's host name, which I could potentially send to PuppetDB to retrieve the facts, but that wouldn't work on the initial check-in of a new node being registered with the Puppet Server.
Does anyone have a practical solution for this?
I found that if I access PuppetDB directly from my ENC, I could access all the facts for my node even on the very first check-in.
Here is an example ENC using a Python library for PuppetDB (pypuppetdb):
#!/usr/bin/env python
import sys

from pypuppetdb import connect

# Connect to PuppetDB; adjust the host, port and SSL settings for your deployment.
db = connect(host='puppetdb', port=8080, ssl_verify=False, ssl_key=None,
             ssl_cert=None, timeout=20)

certname = sys.argv[1]

try:
    node = db.node(certname)
    # The structured 'os' fact returns a hash, so the scalar 'kernel' fact
    # ('Linux', 'windows', ...) is easier to map onto an environment name.
    print('environment: ' + node.fact('kernel').value.lower())
except Exception:
    # Fall back to a default environment if the node or the fact is missing.
    print('environment: default')
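Invoked the way Puppet calls an ENC, with the agent's certname as its only argument (the certname below is hypothetical), it prints the YAML node definition:
$ ./enc.py agent01.example.com
environment: linux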
I had followed the steps given in https://docs.datastax.com/en/developer/python-driver/3.25/getting_started/ to connect to a Cassandra database using Python code, but after running the code snippet I am still getting
NoHostAvailable: ('Unable to connect to any servers', {'host:port': OperationTimedOut('errors=None, last_host=None'),
Python versions 2.7 and 3 (classpath is set for both Python versions)
Java 1.8 (classpath has been set)
Apache Cassandra 3.11.6 (Apache home classpath has been set)
I tend to use a very simple app to test connectivity to a Cassandra cluster:
from cassandra.cluster import Cluster
cluster = Cluster(['10.1.2.3'], port=45678)
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
if row:
    print(row[0])
Then run it:
$ python HelloCassandra.py
4.0.6
In your comment you mentioned that you're getting OperationTimedOut which indicates that the driver never got a response back from the node within the client timeout period. This usually means (a) you're connecting to the wrong IP, (b) you're connecting to the wrong CQL port, or (c) there's a network connectivity issue between your app and the cluster.
Make sure that you're using the IP address that you've set in rpc_address in cassandra.yaml. Also make sure that the node is listening for CQL clients on the right port. You can easily verify this with a Linux utility like netstat or lsof, for example:
$ sudo lsof -nPi -sTCP:LISTEN
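If you'd rather check from the Python side, here is a hedged sketch using only the standard library (the address and port are the same hypothetical values as above); a plain TCP connection to the CQL port rules out most wrong-IP, wrong-port, and basic connectivity problems before the driver is involved:
import socket

# Hypothetical address/port: replace with your node's rpc_address and CQL port.
try:
    with socket.create_connection(("10.1.2.3", 9042), timeout=5):
        print("TCP connection to the CQL port succeeded")
except OSError as exc:
    print("Cannot reach the CQL port:", exc)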
Cheers!
So that error message suggests that the host/port combination either does not have Cassandra running on it or is under heavy load and unable to respond.
Can you edit your question to include the Cassandra connection portion of your code, as well as maybe how you're calling it? I have a test script which I use (and you're welcome to check it out), and here is the connection portion:
import sys

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

protocol = 4
hostname = sys.argv[1]
username = sys.argv[2]
password = sys.argv[3]

nodes = [hostname]

auth_provider = PlainTextAuthProvider(username=username, password=password)
cluster = Cluster(nodes, auth_provider=auth_provider, protocol_version=protocol)
session = cluster.connect()
I call it like this:
$ python3 testCassandra.py 127.0.0.1 aaron notReallyMyPassword
local
One thing you might try too, would be to run a nodetool status on the cluster just to make sure it's running ok.
Edit
local variable 'session' referenced before assignment
So this sounds to me like you're attempting a session.execute before session = cluster.connect(). Have a look at my Git repo (linked above) to see the correct order for instantiating session.
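In other words, the session has to be created from the cluster before anything executes against it. A minimal sketch of the correct order, assuming the same placeholder host and credentials as earlier:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Placeholders: substitute your own node address and credentials.
auth_provider = PlainTextAuthProvider(username="aaron", password="notReallyMyPassword")
cluster = Cluster(["127.0.0.1"], auth_provider=auth_provider)
session = cluster.connect()  # create the session first...
row = session.execute("SELECT cluster_name FROM system.local").one()  # ...then execute
print(row[0] if row else "no row returned")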
I am not using default port
In that case, make sure the port is being set in the cluster definition. Ex:
port = 19099
cluster = Cluster(nodes, auth_provider=auth_provider, port=port)
We are connecting to Oracle from Python using the cx_Oracle package.
But the user ID, password and SID details are hardcoded in it.
My question is, is there any way to create a data-source kind of thing? Or how would we deploy such Python scripts in production?
The database is on a Linux box, and Python is installed on another Linux box (a WebLogic server is also installed on this Linux box).
import cx_Oracle

con = cx_Oracle.connect('pythonhol/welcome@127.0.0.1/orcl')
print(con.version)
Expectation is:
Can we deploy Python in a production instance?
If yes, how can we connect to the database while hiding the DB credentials?
Use some kind of 'external authentication', for example a wallet. See the cx_Oracle documentation: https://cx-oracle.readthedocs.io/en/latest/user_guide/connection_handling.html#connecting-using-external-authentication
In summary:
create a wallet with mkstore which contains the username/password credentials.
copy the wallet to the machines that are running Python
make sure no bad people can access the wallet
configure Oracle Net files to point to the wallet
your scripts would connect like
connection = cx_Oracle.connect(dsn="mynetalias", encoding="UTF-8")
or
pool = cx_Oracle.SessionPool(externalauth=True, homogeneous=False,
                             dsn="mynetalias", encoding="UTF-8")
pool.acquire()
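A hedged sketch of what using that pool could look like (the query is only illustrative):
# Illustrative use of the externally authenticated pool defined above.
connection = pool.acquire()
cursor = connection.cursor()
cursor.execute("SELECT user FROM dual")  # confirms which account the wallet logged you in as
print(cursor.fetchone()[0])
cursor.close()
pool.release(connection)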
I have an HDFS cluster and Python on the same Google Cloud Platform. I want to access the files present in the HDFS cluster from Python. I found that using pydoop one can do that, but I am struggling with giving it the right parameters, maybe. Below is the code that I have tried so far:
import pydoop.hdfs as hdfs
import pydoop
pydoop.hdfs.hdfs(host='url of the file system goes here',
port=9864, user=None, groups=None)
"""
class pydoop.hdfs.hdfs(host='default', port=0, user=None, groups=None)
A handle to an HDFS instance.
Parameters
host (str) – hostname or IP address of the HDFS NameNode. Set to an empty string (and port to 0) to connect to the local file system; set to 'default' (and port to 0) to connect to the default (i.e., the one defined in the Hadoop configuration files) file system.
port (int) – the port on which the NameNode is listening
user (str) – the Hadoop domain user name. Defaults to the current UNIX user. Note that, in MapReduce applications, since tasks are spawned by the JobTracker, the default user will be the one that started the JobTracker itself.
groups (list) – ignored. Included for backwards compatibility.
"""
#print (hdfs.ls("/vs_co2_all_2019_v1.csv"))
It gives this error:
RuntimeError: Hadoop config not found, try setting HADOOP_CONF_DIR
And if I execute this line of code:
print (hdfs.ls("/vs_co2_all_2019_v1.csv"))
nothing happens. But this "vs_co2_all_2019_v1.csv" file does exist in the cluster; it just wasn't available at the moment I took the screenshot.
My hdfs screenshot is shown below:
and the credentials that I have are shown below:
Can anybody tell me what I am doing wrong? Which credentials do I need to put where in the pydoop API? Or maybe there is another, simpler way around this problem. Any help will be much appreciated!
Have you tried the following?
import pydoop.hdfs as hdfs
import pydoop
hdfs_object = pydoop.hdfs.hdfs(host='url of the file system goes here',
                               port=9864, user=None, groups=None)
hdfs_object.list_directory("/vs_co2_all_2019_v1.csv")
or simply:
hdfs_object.list_directory("/")
Keep in mind that the pydoop.hdfs module is not directly related to the hdfs class (hdfs_object). Thus, the connection that you established in the first command is not used by hdfs.ls("/vs_co2_all_2019_v1.csv").
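If the goal is to read the CSV rather than just list it, a hedged sketch that reuses the same hdfs_object (the NameNode host and port are whatever you passed above):
# Read through the hdfs_object created above so the established connection is reused.
f = hdfs_object.open_file("/vs_co2_all_2019_v1.csv", "r")
print(f.read(200))  # first 200 bytes, just to confirm access
f.close()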
I am using Flask-SQLAlchemy in my Google App Engine standard environment project to try to connect to my GCP PostgreSQL database.
According to the Google docs, the URL can be created in this format:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_socket=/cloudsql/<cloud_sql_instance_name>
and below is my code
from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy

import constants

app = Flask(__name__)

# Database configuration from GCP postgres+pg8000
# (user, password, dbname and instance_name are presumably defined via constants)
DB_URL = 'postgres+pg8000://{user}:{pw}@/{db}?unix_socket=/cloudsql/{instance_name}'.format(
    user=user, pw=password, db=dbname, instance_name=instance_name)

app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False  # silence the deprecation warning

sqldb = SQLAlchemy(app)
This is the error I keep getting:
File "/env/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 412, in connect return self.dbapi.connect(*cargs, **cparams) TypeError: connect() got an unexpected keyword argument 'unix_socket'
The argument to specify a unix socket varies depending on what driver you use. According to the pg8000 docs, you need to use unix_sock instead of unix_socket.
To see this in the context of an application, you can take a look at this sample application.
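For instance, a hedged version of the URL from the question with the parameter renamed (same placeholders as before; some drivers also need the .s.PGSQL.5432 suffix on the socket path):
# Same placeholders as in the question, but pg8000 expects 'unix_sock' rather than 'unix_socket'.
DB_URL = (
    'postgres+pg8000://{user}:{pw}@/{db}'
    '?unix_sock=/cloudsql/{instance_name}/.s.PGSQL.5432'
).format(user=user, pw=password, db=dbname, instance_name=instance_name)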
It's been more than 1.5 years and no one has posted the solution yet :)
Anyway, just use the below URI
postgres+psycopg2://<db_user>:<db_pass>@<public_ip>/<db_name>?host=/cloudsql/<cloud_sql_instance_name>
And yes, don't forget to add your system's public IP address to the authorized networks.
Example from the docs
As you can read in the gcloud guides, an example connection string is
postgres+pg8000://<db_user>:<db_pass>#/<db_name>?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432
Varying engine and socket part
Be aware that the engine part postgres+pg8000 varies depending on your database and used driver. Also, depending on your database client library, the socket part ?unix_sock=<socket_path>/<cloud_sql_instance_name>/.s.PGSQL.5432 may be needed or can be omitted, as per:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
PostgreSQL and flask_sqlalchemy
For instance, I am using PostgreSQL with flask_sqlalchemy as the database client and pg8000 as the driver, and my working connection string is only postgres+pg8000://<db_user>:<db_pass>@/<db_name>.
So, I am trying to make a Python script using pyvmomi to control the state of a virtual machine I'm running on my ESXi server. Basically, I tried using connection.content.searchIndex.FindByIp(ip="the ip of the VM", vmSearch=True) to grab my VM and then power it on, but of course I cannot get the IP of the VM when it's off. So, I was wondering if there was any way I could get the VM, maybe by name or its ID? I searched around quite a bit but couldn't really find a solution. Either way, here's my code so far:
from pyVim import connect
# Connect to ESXi host
connection = connect.Connect("192.168.182.130", 443, "root", "password")
# Get a searchIndex object
searcher = connection.content.searchIndex
# Find a VM
vm = searcher.FindByIp(ip="192.168.182.134", vmSearch=True)
# Print out vm name
print(vm.config.name)
# Disconnect from cluster or host
connect.Disconnect(connection)
The searchIndex doesn't have any method to do a 'find by name', so you'll probably have to resort to pulling back all of the VMs and filtering through them client-side.
Here's an example of returning all the VMs: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/getallvms.py
Another option, if you're using vCenter 6.5+, there's the vSphere Automation SDK for Python where you can interact with the REST APIs to do a server side filter. More info: https://github.com/vmware/vsphere-automation-sdk-python
This code might prove helpful:
from pyVim.connect import SmartConnect
from pyVmomi import vim
import ssl
s = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
s.verify_mode = ssl.CERT_NONE
si = SmartConnect(host="192.168.100.10", user="admin", pwd="admin123", sslContext=s)
content = si.content

def get_all_objs(content, vimtype):
    obj = {}
    container = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    for managed_object_ref in container.view:
        obj.update({managed_object_ref: managed_object_ref.name})
    return obj

vmToScan = [vm for vm in get_all_objs(content, [vim.VirtualMachine]) if "ubuntu-16.04.4" == vm.name]
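Since the original goal was to power the VM on once it's found, here is a hedged follow-up sketch (it assumes vmToScan found exactly the machine you want):
# Hypothetical follow-up: power on the first match if it is currently off.
if vmToScan:
    vm = vmToScan[0]
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        task = vm.PowerOnVM_Task()  # returns a vim.Task; poll task.info.state if you need to wait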