Flask unable to read Authorization header on ElasticBeanstalk - python-3.x

I have deployed a Flask app to AWS ElasticBeanstalk. The app is unable to read the 'Authorization' header in requests.
Error log reports:
KeyError: 'HTTP_AUTHORIZATION'
Error traced to:
@application.before_request
def before_request():
    try:
        token = request.headers['Authorization'].split(' ')[-1]
        user = User.get(token=token)
        g.user = user
    except ValueError as e:
        abort(401)
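As an aside, the KeyError itself can be avoided by reading the header defensively, so a missing header yields a clean 401 instead of a 500. A minimal sketch, assuming the same application object and User model as above (this does not make the header appear, it only changes the failure mode):
from flask import abort, g, request

@application.before_request
def before_request():
    # headers.get() returns None instead of raising KeyError when the header is absent
    auth_header = request.headers.get('Authorization')
    if auth_header is None:
        abort(401)
    token = auth_header.split(' ')[-1]
    g.user = User.get(token=token)  # same hypothetical token lookup as in the original snippet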
Application directory:
app/
    .elasticbeanstalk
    application.py
    virt
    .ebignore
    requirements.txt
Environment configuration sets the WSGIPath to application.py:
aws:elasticbeanstalk:container:python:
  NumProcesses: '1'
  NumThreads: '15'
  StaticFiles: /static/=static/
  WSGIPath: application.py
Environment runs Python 3.6 and the following components:
Click==7.0
Flask==1.0.2
Flask-RESTful==0.3.7
itsdangerous==1.1.0
Jinja2==2.10
MarkupSafe==1.1.1
peewee==3.9.2
psycopg2==2.7.7
python-dotenv==0.10.1
pytz==2018.9
six==1.12.0
Werkzeug==0.14.1
Is anything else required?
Attempted (unsuccessful) solution:
I have burnt many hours on this and have attempted to configure WSGIPassAuthorization (as per advice here and elsewhere), but I have not been successful.
Application directory containing workaround:
app/
    .elasticbeanstalk
    .ebextensions/
        wsgi_custom.config
    application.py
    virt
    .ebignore
    requirements.txt
When I attempt to create the EB environment containing .ebextensions/wsgi_custom.config, the EB CLI reports an error saying that the YAML is not valid:
ERROR: InvalidParameterValueError - The configuration file .ebextensions/wsgi_custom.config in application version app-190310_100513 contains invalid YAML or JSON. YAML exception: Invalid Yaml: while scanning a simple key
in "<reader>", line 7, column 5:
WSGIPassAuthorization On
^
could not found expected ':'
in "<reader>", line 7, column 29:
On
^
, JSON exception: Invalid JSON: Unexpected character (f) at position 0.. Update the configuration file.
Contents of .ebextensions/wsgi_custom.config:
files:
  "/etc/httpd/conf.d/wsgi_custom.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
My YAML validation tool reports valid YAML.
Note: editor is set to use spaces, as per AWS YAML advice.

Set WSGIPassAuthorization on ElasticBeanstalk using container_commands.
Step 1. Create .ebextensions/wsgi_custom.config:
app/
    .elasticbeanstalk
    .ebextensions/
        wsgi_custom.config
    application.py
    virt
    .ebignore
    requirements.txt
wsgi_custom.config:
container_commands:
  01wsgipass:
    command: 'echo "WSGIPassAuthorization On" >> ../wsgi.conf'
Step 2. Restart EB environment.
Flask app can now read 'Authorization' header in requests.

The YAML validation errors in the example above are now solved. I believe the validation errors were a red herring; the .conf file mentioned in the script now has a valid name.
Contents of .ebextensions/wsgi_custom.config:
files:
  "/etc/httpd/conf.d/wsgihacks.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      WSGIPassAuthorization On
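To confirm that WSGIPassAuthorization took effect, a throwaway route that echoes the incoming headers is handy. A minimal sketch, assuming the usual application = Flask(__name__) in application.py (remove the route once verified):
from flask import Flask, jsonify, request

application = Flask(__name__)

@application.route('/debug/headers')
def debug_headers():
    # With WSGIPassAuthorization On, 'Authorization' should appear in this mapping
    return jsonify(dict(request.headers))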

Related

How to install spark-xml library using dbx

I am trying to install the library spark-xml_2.12-0.15.0 using dbx.
The documentation I found says to include it in the conf/deployment.yml file like this:
custom:
  basic-cluster-props: &basic-cluster-props
    spark_version: "10.4.x-cpu-ml-scala2.12"

  basic-static-cluster: &basic-static-cluster
    new_cluster:
      <<: *basic-cluster-props
      num_workers: 2

build:
  commands:
    - "mvn clean package" #

environments:
  default:
    workflows:
      - name: "charming-aurora-sample-jvm"
        libraries:
          - jar: "{{ 'file://' + dbx.get_last_modified_file('target/scala-2.12', 'jar') }}" #
        tasks:
          - task_key: "main"
            <<: *basic-static-cluster
            deployment_config: #
              no_package: true
            spark_jar_task:
              main_class_name: "org.some.main.ClassName"
You may see documentation page here: https://dbx.readthedocs.io/en/latest/guides/jvm/jvm_devops/?h=maven
I have installed the library on the cluster using Maven (https://mvnrepository.com/artifact/com.databricks/spark-xml_2.13/0.15.0):
<!-- https://mvnrepository.com/artifact/com.databricks/spark-xml -->
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-xml_2.13</artifactId>
    <version>0.15.0</version>
</dependency>
I can use it on a notebook level but not from a job deployed using dbx.
Edit
I am using PySpark.
So I included it like this in conf/deployment.yml:
libraries:
  - maven: "com.databricks:spark-xml_2.12:0.15.0"
In the file conf/deployment.yml:
- name: "my-job"
  libraries:
    - maven:
      - coordinates:"com.databricks:spark-xml_2.12:0.15.0"
  tasks:
    - task_key: "first_task"
      <<: *basic-static-cluster
      python_wheel_task:
        package_name: "project_name"
        entry_point: "jl" # take a look at the setup.py entry_points section for details on how to define an entrypoint
        parameters: ["--conf-file", "file:fuse://conf/tasks/my_job_config.yml"]
Then I go with
dbx deploy my-job
This throws the following error:
HTTPError: 400 Client Error: Bad Request for url: https://adb-xxxx.azuredatabricks.net/api/2.0/jobs/reset
Response from server:
{ 'error_code': 'MALFORMED_REQUEST',
  'message': "Could not parse request object: Expected 'START_OBJECT' not "
             "'START_ARRAY'\n"
             ' at [Source: (ByteArrayInputStream); line: 1, column: 91]\n'
             ' at [Source: java.io.ByteArrayInputStream@37fda06f; line: 1, '
             'column: 91]'}
You were pretty close, and the error you've run into doesn't really say much.
We plan to introduce structure verification so that such checks are more understandable.
The correct deployment file structure should look as follows:
- name: "my-job"
  tasks:
    - task_key: "first_task"
      <<: *basic-static-cluster
      # please note that the libraries section is on the task level
      libraries:
        - maven:
            coordinates: "com.databricks:spark-xml_2.12:0.15.0"
      python_wheel_task:
        package_name: "project_name"
        entry_point: "jl" # take a look at the setup.py entry_points section for details on how to define an entrypoint
        parameters: ["--conf-file", "file:fuse://conf/tasks/my_job_config.yml"]
Two important points here:
The libraries section is on the task level.
The maven section expects an object, not a list, so this will not work:
# THIS IS INCORRECT, DON'T DO THIS
libraries:
  - maven:
    - coordinates: "com.databricks:spark-xml_2.12:0.15.0"
But this will:
# correct structure
libraries:
  - maven:
      coordinates: "com.databricks:spark-xml_2.12:0.15.0"
I've summarized these details in this new documentation section.
The documentation says the following:
The workflows section of the deployment file fully follows the Databricks Jobs API structures.
If you look into the API documentation, you will see that you need to use maven instead of file, and provide the Maven coordinates as a string. Something like this (please note that you need to use Scala 2.12, not 2.13):
libraries:
  - maven:
      coordinates: "com.databricks:spark-xml_2.12:0.15.0"
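Once the coordinates resolve, the deployed job can use spark-xml the same way a notebook does. A minimal PySpark sketch, where the /data/books.xml path and the "book" row tag are purely illustrative:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# spark-xml is available because the Maven coordinates above are attached to the task
df = (
    spark.read.format("com.databricks.spark.xml")
    .option("rowTag", "book")    # XML element that maps to one DataFrame row
    .load("/data/books.xml")     # hypothetical input path
)
df.printSchema()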

Not able to look up class parameter in hiera

I have looked at other questions like Using hiera to set class parameters? and others which discuss Hiera 3. I am using Hiera 5.
Here is my hiera.yaml
[root@e64a2e5c7c79 fisherman]# cat /fisherman/fisherman/hiera/hiera.yaml
---
version: 5
defaults:  # Used for any hierarchy level that omits these keys.
  datadir: data         # This path is relative to hiera.yaml's directory.
  data_hash: yaml_data  # Use the built-in YAML backend.
hierarchy:
  - name: "Apps" # Uses custom facts.
    path: "apps/%{facts.appname}.yaml"
I also have this hiera data file:
[root@e64a2e5c7c79 fisherman]# cat /fisherman/fisherman/hiera/apps/HelloWorld.yaml
---
fisherman::create_new_component::component_name: 'HelloWord'
But when I run puppet apply like so ...
export FACTER_appname=HelloWorld
hiera_config=/fisherman/fisherman/hiera/hiera.yaml
modulepath=/fisherman/fisherman/modules
puppet apply --modulepath=$modulepath --hiera_config=$hiera_config -e 'include fisherman'
... I get this error ...
Error: Evaluation Error: Error while evaluating a Function Call, Class[Fisherman::Create_new_component]: expects a value for parameter $component_name (file: /fisherman/fisherman/modules/fisherman/manifests/init.pp, line: 12, column: 9) on node e64a2e5c7c79
I tried debugging hiera with puppet lookup like so:
[root@e64a2e5c7c79 /]# export FACTER_appname=HelloWorld
[root@e64a2e5c7c79 /]# hiera_config=/fisherman/fisherman/hiera/hiera.yaml
[root@e64a2e5c7c79 /]# modulepath=/fisherman/fisherman/modules
[root@e64a2e5c7c79 /]# puppet lookup --modulepath=$modulepath --hiera_config=$hiera_config --node agent.local --explain fisherman::create_new_component::component_name
Searching for "lookup_options"
Global Data Provider (hiera configuration version 5)
Using configuration "/fisherman/fisherman/hiera/hiera.yaml"
Hierarchy entry "Apps"
Path "/fisherman/fisherman/hiera/data/apps/.yaml"
Original path: "apps/%{facts.appname}.yaml"
Path not found
Environment Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/production/hiera.yaml"
Merge strategy hash
Hierarchy entry "Per-node data (yaml version)"
Path "/etc/puppetlabs/code/environments/production/data/nodes/.yaml"
Original path: "nodes/%{::trusted.certname}.yaml"
Path not found
Hierarchy entry "Other YAML hierarchy levels"
Path "/etc/puppetlabs/code/environments/production/data/common.yaml"
Original path: "common.yaml"
Path not found
Module data provider for module "fisherman" not found
Searching for "fisherman::create_new_component::component_name"
Global Data Provider (hiera configuration version 5)
Using configuration "/fisherman/fisherman/hiera/hiera.yaml"
Hierarchy entry "Apps"
Path "/fisherman/fisherman/hiera/data/apps/.yaml"
Original path: "apps/%{facts.appname}.yaml"
Path not found
Environment Data Provider (hiera configuration version 5)
Using configuration "/etc/puppetlabs/code/environments/production/hiera.yaml"
Hierarchy entry "Per-node data (yaml version)"
Path "/etc/puppetlabs/code/environments/production/data/nodes/.yaml"
Original path: "nodes/%{::trusted.certname}.yaml"
Path not found
Hierarchy entry "Other YAML hierarchy levels"
Path "/etc/puppetlabs/code/environments/production/data/common.yaml"
Original path: "common.yaml"
Path not found
Module data provider for module "fisherman" not found
Function lookup() did not find a value for the name 'fisherman::create_new_component::component_name'
I noticed this in the above output:
Hierarchy entry "Apps"
Path "/fisherman/fisherman/hiera/data/apps/.yaml"
Original path: "apps/%{facts.appname}.yaml"
Path not found
It looks like facts.appname is empty and not HelloWorld as I had expected.
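Just to illustrate the symptom: %{facts.appname} in the hierarchy path is a plain interpolation, so an unset or empty fact collapses the path to apps/.yaml. A rough Python analogy of the effect (not how Hiera is implemented):
facts = {}  # what Hiera effectively saw: no 'appname' fact

template = "apps/%{facts.appname}.yaml"
resolved = template.replace("%{facts.appname}", facts.get("appname", ""))
print(resolved)  # -> "apps/.yaml", matching the "Path not found" entries above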
What am I doing wrong here?
Thanks
Based on the information in the question I can't reproduce this. Here is my setup if it helps:
# init.pp
class test (
  String $component_name,
) {
  notify { $facts['appname']:
    message => "Component name: $component_name for fact appname of ${facts['appname']}"
  }
}
# hiera.yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Apps" # Uses custom facts.
    path: "apps/%{facts.appname}.yaml"
# data/apps/HelloWorld.yaml
---
test::component_name: 'MyComponentName'
# spec/classes/test_spec.rb
require 'spec_helper'

describe 'test' do
  let(:hiera_config) { 'spec/fixtures/hiera/hiera.yaml' }
  let(:facts) {{ 'appname' => 'HelloWorld' }}

  it {
    is_expected.to contain_notify("HelloWorld")
      .with({
        'message' => "Component name: MyComponentName for fact appname of HelloWorld"
      })
  }
end
Tested on Puppet version:
▶ bundle exec puppet -V
6.6.0
Output:
▶ bundle exec rake spec
I, [2019-07-07T16:42:51.219559 #22140] INFO -- : Creating symlink from spec/fixtures/modules/test to /Users/alexharvey/git/home/puppet-test
/Users/alexharvey/.rvm/rubies/ruby-2.4.1/bin/ruby -I/Users/alexharvey/.rvm/gems/ruby-2.4.1/gems/rspec-core-3.8.2/lib:/Users/alexharvey/.rvm/gems/ruby-2.4.1/gems/rspec-support-3.8.2/lib /Users/alexharvey/.rvm/gems/ruby-2.4.1/gems/rspec-core-3.8.2/exe/rspec --pattern spec/\{aliases,classes,defines,functions,hosts,integration,plans,tasks,type_aliases,types,unit\}/\*\*/\*_spec.rb
test
should contain Notify[HelloWorld] with message => "Component name: MyComponentName for fact appname of HelloWorld"
Finished in 0.1444 seconds (files took 0.9699 seconds to load)
1 example, 0 failures
You can also query the Hiera hierarchy directly using puppet lookup like this:
▶ FACTER_appname=HelloWorld bundle exec puppet lookup \
--hiera_config=spec/fixtures/hiera/hiera.yaml test::component_name
--- MyComponentName

Serverless cannot import local files (in same directory) into Python file

I have serverless code in Python. I am using serverless-python-requirements:^4.3.0 to deploy it to AWS Lambda.
My code imports another Python file in the same directory as itself, which throws an error.
serverless.yml:
functions:
  hello:
    handler: functions/pleasework.handle_event
    memorySize: 128
    tags:
      Name: HelloWorld
      Environment: Ops
    package:
      include:
        - functions/pleasework
        - functions/__init__.py
        - functions/config
(venv) ➜ functions git:(master) ✗ ls
__init__.py boto_client_provider.py config.py handler.py sns_publish.py
__pycache__ cloudtrail_handler.py glue_handler.py pleasework.py
As you can see, pleasework.py and config.py are in the same folder, but when I do import config in pleasework.py I get an error:
{
  "errorMessage": "Unable to import module 'functions/pleasework': No module named 'config'",
  "errorType": "Runtime.ImportModuleError"
}
I have been struggling with this for a few days and think I am missing something basic.
import boto3
import config


def handle_event(event, context):
    print('lol: ')
OK, so I found out my issue. The way I was importing the file was wrong.
Instead of
import config
I should be doing
import functions.config
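For reference, a minimal sketch of the corrected handler, assuming the Lambda task root is the project root so that functions/ is importable as a package:
# functions/pleasework.py
import boto3  # bundled by serverless-python-requirements / the Lambda runtime

import functions.config as config  # import relative to the task root, not this file's folder


def handle_event(event, context):
    print('lol: ', config)  # config values are now available via the package-qualified import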
@Pranay Sharma's answer worked for me.
An alternate way is creating and setting the PYTHONPATH environment variable to the directory where your handler function and config exist (see the sketch after the steps below).
To set environment variables in the Lambda console
Open the Functions page of the Lambda console.
Choose a function.
Under Environment variables, choose Edit.
Choose Add environment variable.
Enter a key and value.
In our case the key is "PYTHONPATH" and the value is "functions".
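With PYTHONPATH set to "functions", Python adds that directory to sys.path when the interpreter starts, so the original import config resolves without a package prefix. A small illustration of the mechanism, meaningful only inside the Lambda environment where the variable is set:
import os
import sys

print(os.environ.get("PYTHONPATH"))               # e.g. "functions"
print([p for p in sys.path if "functions" in p])  # the extra entry Python added at startup

import config  # found via the extra sys.path entry, no package prefix needed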

Fatal Python error: Py_Initialize: unable to get the locale encoding; ImportError: no module named 'encodings'

I am getting the following error; the important part of the message is:
starting uWSGI 2.0.18
setting pythonHome to /var/www/demo/venv
python version :3.5.3
Fatal Python error :unable to get the locale encoding
import error : no module named 'encodings'
It shows Python version 3.5.3; however, inside my venv/lib folder there is only one package, python2.7.
Does this have something to do with my error?
Please help me out with this.
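A quick diagnostic to check which interpreter a virtualenv actually provides (illustrative only; run it with the venv activated):
import sys

print(sys.executable)  # the interpreter binary in use
print(sys.version)     # should match the Python that created the venv
print(sys.prefix)      # points inside the venv while it is active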
This is my demo_uwsgi.ini file:
# application's base folder
base = /var/www/demo
# python module to import (flaskfile is my Flask file)
app = flaskfile
module = %(app)
home = %(base)/venv
pythonpath = %(base)
# socket file's location
socket = /var/www/demo/%n.sock
# permissions for the socket file
chmod-socket = 666
# the variable that holds a flask application inside the module imported at line #6
callable = app
# location of log files
logto = /var/log/uwsgi/%n.log
Am I missing plugins or something? I added plugins = python32 to my demo_uwsgi.ini file and it says "no such file or directory". Do I need to change or unset the Python path or something?
Figured it out myself. Delete the default Nginx configuration file and add your new configuration file under /etc/nginx. Then follow the instructions in this link https://vladikk.com/20.13/09/12/serving-flask-with-nginx-on-ubuntu/ step by step. Change the ownership from root to user. It works perfectly.

Ansible inventory plugin for azure throws encoding error

From what I understand, using Ansible inventory plugins over dynamic inventory scripts is the new way of handling dynamic hosts, e.g. for cloud providers and so on.
So, first I set the Azure credentials in my environment:
± env | grep AZ
AZURE_SECRET=asdf
AZURE_TENANT=asdf
AZURE_SUBSCRIPTION_ID=asdf
AZURE_CLIENT_ID=asdf
Next, I've written an ansible.cfg with the following content:
± cat ansible.cfg
[inventory]
enable_plugins = azure_rm
Finally, I wrote the YAML file with the minimum settings, as shown on the Ansible inventory plugin page:
± cat foo.azure_rm.yaml
---
plugin: azure_rm
When I run the ansible-inventory binary on that file, I get:
± ansible-inventory -i foo.azure_rm.yaml --list
[WARNING]: * Failed to parse /path/to/foo.azure_rm.yaml with azure_rm plugin: Unicode-objects must be encoded before hashing
[WARNING]: Unable to parse /path/to/foo.azure_rm.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {}
}
Summing up: The main problem seems to be the line:
[WARNING]: * Failed to parse /path/to/foo.azure_rm.yaml with azure_rm plugin: Unicode-objects must be encoded before hashing
Help, anyone?
I think this is an error in the script. Adding the debug flag to Ansible gives me the following stacktrace:
  File "/usr/local/lib/python3.6/site-packages/ansible/inventory/manager.py", line 273, in parse_source
    plugin.parse(self._inventory, self._loader, source, cache=cache)
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 235, in parse
    self._get_hosts()
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 292, in _get_hosts
    self._process_queue_batch()
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 412, in _process_queue_batch
    result.handler(r['content'], **result.handler_args)
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 357, in _on_vm_page_response
    self._hosts.append(AzureHost(h, self, vmss=vmss))
  File "/usr/local/lib/python3.6/site-packages/ansible/plugins/inventory/azure_rm.py", line 466, in __init__
    self.default_inventory_hostname = '{0}_{1}'.format(vm_model['name'], hashlib.sha1(vm_model['id']).hexdigest()[0:4])
It seems this was only recently fixed: https://github.com/ansible/ansible/pull/46608. So you'll either have to wait for 2.8 or use the development version.
I've fixed it in a GitHub fork and use pipenv to include this version in my environment. Actually it should be a backport from devel, where the problem is already fixed. Maybe I'll fix this during the coming days and open a PR against Ansible to include it in stable-2.7, but maybe the better option is to wait for 2.8 in May.
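The failing line in the traceback hashes the VM id as a text string, which works on Python 2 but raises exactly this error on Python 3. A minimal illustration of the difference (the vm_id value is made up; this is not the actual Ansible patch):
import hashlib

vm_id = "/subscriptions/xxx/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/vm1"

try:
    hashlib.sha1(vm_id).hexdigest()  # Python 3: TypeError, Unicode-objects must be encoded before hashing
except TypeError as exc:
    print(exc)

print(hashlib.sha1(vm_id.encode("utf-8")).hexdigest()[0:4])  # encoding first works on both 2 and 3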
I have had the same issue and solved it by using Python 3.
You can check your Ansible Python version with the following command:
ansible --version | grep "python version"
python version = 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0]
Install all the Python 3 packages:
pip3 install ansible azure azure-cli
If needed, export the env variable for authentication:
export ANSIBLE_AZURE_AUTH_SOURCE=cli
Then run ansible-inventory with:
python3 $(which ansible-inventory) -i my.azure_rm.yaml --graph
The my.azure_rm.yaml file looks like this one:
plugin: azure_rm
include_vm_resource_groups:
  - my_resource_group_rg
auth_source: cli
