Ansible inventory plugin for Azure throws encoding error

From what I understand, using Ansible inventory plugins rather than dynamic inventory scripts is the new way of handling dynamic hosts, such as those of cloud providers and so on.
So, first I set the Azure credentials in my environment:
± env | grep AZ
AZURE_SECRET=asdf
AZURE_TENANT=asdf
AZURE_SUBSCRIPTION_ID=asdf
AZURE_CLIENT_ID=asdf
Next, I've written an ansible.cfg with the following content:
± cat ansible.cfg
[inventory]
enable_plugins = azure_rm
Finally, I wrote the YAML file with the minimal settings shown on the Ansible inventory plugin page:
± cat foo.azure_rm.yaml
---
plugin: azure_rm
When I run the ansible-inventory binary on that file, I get:
± ansible-inventory -i foo.azure_rm.yaml --list
[WARNING]: * Failed to parse /path/to/foo.azure_rm.yaml with azure_rm plugin: Unicode-objects must be encoded before hashing
[WARNING]: Unable to parse /path/to/foo.azure_rm.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {}
}
Summing up: The main problem seems to be the line:
[WARNING]: * Failed to parse /path/to/foo.azure_rm.yaml with azure_rm plugin: Unicode-objects must be encoded before hashing
Help, anyone?

I think this is an error in the script. Adding the debug flag to Ansible gives me the following stacktrace:
File "/usr/local/lib/python3.6/site-packages/ansible/inventory/manager.py", line 273, in parse_source
plugin.parse(self._inventory, self._loader, source, cache=cache)
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 235, in parse
self._get_hosts()
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 292, in _get_hosts
self._process_queue_batch()
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 412, in _process_queue_batch
result.handler(r['content'], **result.handler_args)
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 357, in _on_vm_page_response
self._hosts.append(AzureHost(h, self, vmss=vmss))
File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 466, in __init__
self.default_inventory_hostname = '{0}_{1}'.format(vm_model['name'], hashlib.sha1(vm_model['id']).hexdigest()[0:4])
It seems this was only recently fixed: https://github.com/ansible/ansible/pull/46608. So either you'll have to wait for 2.8 or use the development version.
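For reference, the failing line passes the VM's id string straight to hashlib, and under Python 3 hashlib.sha1() only accepts bytes, so the fix boils down to encoding first. A minimal sketch of the idea (vm_model here is just a stand-in dict for illustration, not the plugin's actual object):

import hashlib

# Python 3: sha1() rejects str, so encode the resource id before hashing.
vm_model = {"name": "myvm", "id": "/subscriptions/asdf/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/myvm"}

suffix = hashlib.sha1(vm_model["id"].encode("utf-8")).hexdigest()[0:4]
default_inventory_hostname = "{0}_{1}".format(vm_model["name"], suffix)
print(default_inventory_hostname)  # something like myvm_1a2b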

I've fixed it in a GitHub fork and use pipenv to include that version in my environment. Strictly speaking it should be a backport from devel, where the problem is already fixed. Maybe I'll clean this up during the coming days and open a PR against Ansible to get it into stable-2.7, but the better option is probably to wait for 2.8 in May.

I have had the same issue and solved it by using Python 3.
You can check your Ansible Python version with the following command:
ansible --version | grep "python version"
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
Install all the Python 3 packages:
pip3 install ansible azure azure-cli
If needed, export the env variable for the authentication:
export ANSIBLE_AZURE_AUTH_SOURCE=cli
Then run ansible-inventory with:
python3 $(which ansible-inventory) -i my.azure_rm.yaml --graph
The my.azure_rm.yaml file looks like this:
plugin: azure_rm
include_vm_resource_groups:
- my_resource_group_rg
auth_source: cli

Related

pyflink Unsupported Python SqlFunction CAST when working with amazon-kinesis-sql-connector and udtf function

I am currently trying to get PyFlink running with the AWS Kinesis SQL connector.
I use the Table API and can read from Kinesis and also write back to another Kinesis stream. As soon as I use a udtf-decorated function I get the following exception:
File "/home/user/anaconda3/envs/flink-env/lib/python3.8/site-packages/pyflink/table/table_environment.py", line 828, in execute_sql
return TableResult(self._j_tenv.executeSql(stmt))
File "/home/user/anaconda3/envs/flink-env/lib/python3.8/site-packages/py4j/java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "/home/user/anaconda3/envs/flink-env/lib/python3.8/site-packages/pyflink/util/exceptions.py", line 158, in deco
raise java_exception
pyflink.util.exceptions.TableException: org.apache.flink.table.api.TableException: Unsupported Python SqlFunction CAST.
I'll try to sum up the core snippets of the script:
@udtf(result_types=[DataTypes.STRING(), DataTypes.INT()])
def flatten_row(row: Row) -> Row:
    for s in row["members"]:
        yield Row(str(s["id"]), s["name"])

result_table = input_table.flat_map(flatten_row).alias("id", "name")
table_env.create_temporary_view("result_table", result_table)
As soon as I execute it on the stream, the exception gets raised:
table_result = table_env.execute_sql(f"INSERT INTO {output_table_name} SELECT * FROM result_table")
The output_table and input_table are connected to Kinesis streams, and without the udtf function it works.
Environment
Used apache-flink==1.16.0 and Python 3.8; tried both Conda and pip environments.
Thank you!
I already tried different versions of apache-flink and the amazon-kinesis-sql-connector, with Conda and pip environments on Python 3.8.
Finally I found out that the problem was the JDK version pre-installed on my macOS. I downgraded from 15.0.2 until I reached 11.0.16, which finally worked without any error. So it seems that the Python apache-flink package needs an older JDK version.
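If you run into the same thing, it may be worth confirming which JDK PyFlink actually launches before downgrading blindly. A rough check from Python (this assumes the java binary found via PATH/JAVA_HOME is the one the py4j gateway starts, which is the usual case):

import os
import subprocess

# Show which JDK the PyFlink/py4j gateway will most likely pick up.
print("JAVA_HOME =", os.environ.get("JAVA_HOME"))
subprocess.run(["java", "-version"])  # Flink 1.16 targets JDK 8/11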

ElasticSearch error: 'The client noticed that the server is not a supported distribution of Elasticsearch'

New to ElasticSearch. I was following this guide to get things set up: https://john.soban.ski/boto3-ec2-to-amazon-elasticsearch.html
I ran the "connect_to_es.py" script there, and oddly it worked the first time, but in a subsequent runs, it started throwing this error:
Traceback (most recent call last):
File "../connect_to_es.py", line 21, in <module>
print(json.dumps(es.info(), indent=4, sort_keys=True))
File "/home/ubuntu/projects/.venv/lib/python3.8/site-packages/elasticsearch/client/utils.py", line 168, in _wrapped
return func(*args, params=params, headers=headers, **kwargs)
File "/home/ubuntu/projects/.venv/lib/python3.8/site-packages/elasticsearch/client/__init__.py", line 294, in info
return self.transport.perform_request(
File "/home/ubuntu/projects/.venv/lib/python3.8/site-packages/elasticsearch/transport.py", line 413, in perform_request
_ProductChecker.raise_error(self._verified_elasticsearch)
File "/home/ubuntu/projects/.venv/lib/python3.8/site-packages/elasticsearch/transport.py", line 630, in raise_error
raise UnsupportedProductError(message)
elasticsearch.exceptions.UnsupportedProductError: The client noticed that the server is not a supported distribution of Elasticsearch
The elasticsearch Python library version I have is 7.14, and my Elasticsearch on AWS is running 7.10. Any thoughts on what's going on here?
Copy of code:
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3
import json
host = '<url>.us-east-1.es.amazonaws.com'
region = 'us-east-1'
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
es = Elasticsearch(
    hosts = [{'host': host, 'port': 443}],
    http_auth = awsauth,
    use_ssl = True,
    verify_certs = True,
    connection_class = RequestsHttpConnection
)
print(json.dumps(es.info(), indent=4, sort_keys=True))
Seems like downgrading fixed it: pip3 install 'elasticsearch<7.14.0'
The new elasticsearch-js has an issue:
The new product version check rejects OSS distributions?
Downgrading it to a lower version (e.g. 7.13) should help.
As some of the other answers indicate, you can downgrade right now, but opensearch-py is a better long-term solution.
It should be a drop-in replacement for elasticsearch-py and it will be updated and patched over time. It supports OSS Elasticsearch and OpenSearch.
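For the snippet in the question, the switch is mostly a matter of swapping the import and the client class; a rough sketch with opensearch-py (same AWS4Auth setup and host placeholder as above):

from opensearchpy import OpenSearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3

host = '<url>.us-east-1.es.amazonaws.com'
region = 'us-east-1'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, 'es',
                   session_token=credentials.token)

# OpenSearch mirrors the old Elasticsearch constructor arguments.
client = OpenSearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)
print(client.info())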
This error occurs because of a version conflict: the version of the elasticsearch Python library and of Elasticsearch itself should be the same.
In my case, the Elasticsearch version on AWS was 7.10 and I was using elasticsearch Python library version 7.15 with my Django project. I removed it and installed the library at version 7.10 in the Django project, and it worked fine for me.
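If you just want to confirm the mismatch before pinning anything, a quick check along these lines works; note that es.info() itself is blocked by the product check here, so ask the endpoint directly. This is a sketch reusing the host and awsauth values from the question, with attribute names per elasticsearch-py 7.x:

import elasticsearch
import requests

# Installed client version: a tuple like (7, 15, 0) in elasticsearch-py 7.x.
print("client:", elasticsearch.VERSION)

# Ask the cluster for its version directly, bypassing the client's check.
resp = requests.get("https://{0}".format(host), auth=awsauth)
print("server:", resp.json()["version"]["number"])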
I fixed the error by making the following change in the Gemfile. I changed:
gem 'elasticsearch'
to:
gem 'elasticsearch', '~> 7.1'
Effectively, I downgraded from 7.18 (the current version as of today) to 7.1.

Flask unable to read Authorization header on ElasticBeanstalk

I have deployed a Flask app to AWS ElasticBeanstalk. The app is unable to read the 'Authorization' header in requests.
Error log reports:
KeyError: 'HTTP_AUTHORIZATION'
Error traced to:
@application.before_request
def before_request():
    try:
        token = request.headers['Authorization'].split(' ')[-1]
        user = User.get(token=token)
        g.user = user
    except ValueError as e:
        abort(401)
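Independently of the WSGI setting, note that the handler above only catches ValueError, while a missing header raises KeyError (hence the HTTP_AUTHORIZATION error in the log). A slightly more defensive sketch, reusing the application/User/g names from the question, avoids the 500 while you sort out the proxy side:

from flask import abort, g, request  # application and User come from the question's app

@application.before_request
def before_request():
    auth = request.headers.get('Authorization', '')  # no KeyError if the header is absent
    token = auth.split(' ')[-1]
    if not token:
        abort(401)
    try:
        g.user = User.get(token=token)
    except ValueError:
        abort(401)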
Application directory:
app/
    .elasticbeanstalk
    application.py
    virt
    .ebignore
    requirements.txt
Environment configuration sets the WSGIPath to application.py:
aws:elasticbeanstalk:container:python:
  NumProcesses: '1'
  NumThreads: '15'
  StaticFiles: /static/=static/
  WSGIPath: application.py
Environment runs Python 3.6 and the following components:
Click==7.0
Flask==1.0.2
Flask-RESTful==0.3.7
itsdangerous==1.1.0
Jinja2==2.10
MarkupSafe==1.1.1
peewee==3.9.2
psycopg2==2.7.7
python-dotenv==0.10.1
pytz==2018.9
six==1.12.0
Werkzeug==0.14.1
Is anything else required?
Attempted (unsuccessful) solution:
I have burnt many hours on this and have attempted to configure WSGIPassAuthorization (as per advice here and elsewhere); however, I have not been successful.
Application directory containing workaround:
app/
    .elasticbeanstalk
    .ebextensions/
        wsgi_custom.config
    application.py
    virt
    .ebignore
    requirements.txt
When I attempt to create the eb environment containing .ebextensions/wsgi_custom.config, the EB CLI reports an error saying that the YAML is not valid:
ERROR: InvalidParameterValueError - The configuration file .ebextensions/wsgi_custom.config in application version app-190310_100513 contains invalid YAML or JSON. YAML exception: Invalid Yaml: while scanning a simple key
in "<reader>", line 7, column 5:
WSGIPassAuthorization On
^
could not found expected ':'
in "<reader>", line 7, column 29:
On
^
, JSON exception: Invalid JSON: Unexpected character (f) at position 0.. Update the configuration file.
Contents of .ebextensions/wsgi_custom.config:
files:
  "/etc/httpd/conf.d/wsgi_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
My YAML validation tool reports valid YAML.
Note: editor is set to use spaces, as per AWS YAML advice.
Set WSGIPassAuthorization on ElasticBeanstalk using container_commands.
Step 1. Create .ebextensions/wsgi_custom.config:
app/
    .elasticbeanstalk
    .ebextensions/
        wsgi_custom.config
    application.py
    virt
    .ebignore
    requirements.txt
wsgi_custom.config:
container_commands:
  01wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
Step 2. Restart EB environment.
Flask app can now read 'Authorization' header in requests.
I solved the YAML validation errors in the example above. I believe the validation errors were a red herring; the .conf file mentioned in the config now has a valid name.
Contents of .ebextensions/wsgi_custom.config:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On

Puppet: how to add a line to an existing file

I am trying to add a line to an existing file, /etc/fuse.conf. I tried this:
I created two folders under the modules directory:
sudo mkdir /etc/puppet/modules/test
sudo mkdir /etc/puppet/modules/test/manifests
Then I created a test.pp file and added the following lines:
sudo vim /etc/puppet/modules/test/manifests/test.pp
file { '/etc/fuse.conf':
  ensure => present,
}->
file_line { 'Append a line to /etc/fuse.conf':
  path => '/etc/fuse.conf',
  line => 'Want to add this line as a test',
}
After that I ran this command:
puppet apply /etc/puppet/modules/test/manifests/test.pp
Then I opened /etc/fuse.conf and there was no change; the line was not added to the file. I don't understand what I am missing here. How can I do this?
Interesting. I ran the same test you did without an issue, and as long as you have stdlib installed in your environment you should be fine.
https://forge.puppet.com/puppetlabs/stdlib
The results of running the same steps you outlined were successful for me:
[root@foreman-staging tmp]# puppet apply /etc/puppet/modules/test/manifests/test.pp
Notice: Compiled catalog for foreman-staging.kapsch.local in environment production in 0.18 seconds
Notice: /Stage[main]/Main/File[/etc/fuse.conf]/ensure: created
Notice: /Stage[main]/Main/File_line[Append a line to /etc/fuse.conf]/ensure: created
Notice: Finished catalog run in 0.24 seconds
What did your puppet run output?
You should use templates (ERB) to handle file configuration. It's easier and cleaner.
Check the Puppet docs for it at:
https://docs.puppetlabs.com/puppet/latest/reference/lang_template.html
There are other options, though, e.g. Augeas, which is an API for file configuration and integrates very well with Puppet: http://augeas.net/index.html
There are a few ways to handle this. If it's an INI file you can use ini_setting. If it's supported by Augeas you can use that. Otherwise try specifying the after parameter to file_line.

How to list the packages installed in a target rootfs built using oe-core?

For documentation purposes, I am looking for efficient ways to list the packages installed in a target rootfs built using oe-core.
The list of packages installed in your image is stored in the manifest file (besides the build history, which was already mentioned).
The content of the manifest file looks like:
alsa-conf cortexa7hf-neon-vfpv4 1.1.2-r0.1
alsa-conf-base cortexa7hf-neon-vfpv4 1.1.2-r0.1
alsa-lib cortexa7hf-neon-vfpv4 1.1.2-r0.1
alsa-states cortexa7hf-neon-vfpv4 0.2.0-r5.1
alsa-utils-alsactl cortexa7hf-neon-vfpv4 1.1.2-r0.5
alsa-utils-alsamixer cortexa7hf-neon-vfpv4 1.1.2-r0.5
...
The list consists of the package name, architecture and version.
The manifest is located in the deploy directory (i.e. deploy/images/${MACHINE}/). Here is an example of the directory listing (there are target images and the manifest file):
example-image-genericx86.ext3
example-image-genericx86.manifest
example-image-genericx86.tar.bz2
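Since each manifest line is just whitespace-separated name, architecture and version, pulling a plain package list out of it is straightforward; a throwaway sketch (the path is only an example, adjust to your image and machine):

# List package names and versions from an image .manifest file.
manifest = "deploy/images/genericx86/example-image-genericx86.manifest"

with open(manifest) as f:
    for line in f:
        fields = line.split()
        if len(fields) == 3:
            name, arch, version = fields
            print("{0} {1}".format(name, version))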
USER_CLASSES ?= "buildname image-mklibs image-prelink buildhistory"
ERROR: Error executing a python function in /opt/apps_proc/oe-core/meta/recipes-core/eglibc/eglibc_2.17.bb:
The stack trace of python calls that resulted in this exception/failure was:
File: 'buildhistory_emit_pkghistory', lineno: 216, function:
0212:
0213: write_pkghistory(pkginfo, d)
0214:
0215:
***0216:buildhistory_emit_pkghistory(d)
0217:
File: 'buildhistory_emit_pkghistory', lineno: 207, function: buildhistory_emit_pkghistory
0203: filelist = []
0204: pkginfo.size = 0
0205: for f in pkgfiles[pkg]:
0206: relpth = os.path.relpath(f, pkgdestpkg)
***0207: fstat = os.lstat(f)
0208: pkginfo.size += fstat.st_size
0209: filelist.append(os.sep + relpth)
0210: filelist.sort()
0211: pkginfo.filelist = " ".join(filelist)
Exception: OSError: [Errno 2] No such file or directory: '/opt/apps_proc/oe-core/build/tmp-eglibc/work/armv7a-vfp-neon-oe-linux-gnueabi/eglibc/2.17-r3/packages-split/eglibc-thread-db/lib/libthread_db-1.0.so'
ERROR: Function failed: buildhistory_emit_pkghistory
Add buildhistory to your USER_CLASSES variable in local.conf:
USER_CLASSES ?= "buildhistory"
After you rerun the build, look in build/buildhistory for more info.
You may need to force rebuilds to properly populate the directory.
