Puppet Postgresql Error - puppet

I apply an incremental change to my postgresql install using puppet.
sudo puppet apply --modulepath=/vagrant/puppet/modules -e "include iwd-postgresql"
This results in the following error:
Error: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class postgresql::globals at /vagrant/puppet/modules/iwd-postgresql/manifests/init.pp:6 on node target.intware.com
Wrapped exception:
Could not find declared class postgresql::globals
I have installed the puppetlabs/postgresql module.
If I do a puppet module list I see the following:
[vagrant@target ~]$ puppet module list
/home/vagrant/.puppet/modules
├── puppetlabs-apt (v2.2.0)
├── puppetlabs-concat (v1.2.4)
├── puppetlabs-postgresql (v4.6.0)
└── puppetlabs-stdlib (v4.9.0)
/usr/share/puppet/modules (no modules installed)
Any ideas? I am running the apply command on a Vagrant virtual machine, in vagrant's home folder.

@ChrisPitman's comments pointed me in the right direction. I needed to set up the correct modulepath to include both our custom modules and the pre-built ones.
The following worked for me:
sudo puppet apply --modulepath=/vagrant/puppet/modules:/etc/puppet/modules -e "include iwd-postgresql"

When you ran puppet apply you were using the modulepath /vagrant/puppet/modules, but your module is installed under /home/vagrant/.puppet/modules. In any case, if you are using Vagrant, I think it is better to use the Vagrantfile to run Puppet and customize your VM.
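For reference, the Vagrantfile route would look roughly like this. This is a minimal sketch, not the asker's actual setup; the paths assume the manifests live under puppet/manifests and the modules live under the two directories listed in module_path:
# Vagrantfile -- minimal sketch; all paths below are assumptions about the project layout
Vagrant.configure("2") do |config|
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "puppet/manifests"   # host directory containing site.pp
    puppet.manifest_file  = "site.pp"            # site.pp would hold: include iwd-postgresql
    # every entry here ends up on --modulepath, so custom and Forge modules both resolve
    puppet.module_path    = ["puppet/modules", "puppet/forge-modules"]
  end
end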

Related

Getting Pytest to include prod-dir in path when running tests isn't obvious to me [duplicate]

I used easy_install to install pytest on a Mac and started writing tests for a project with a file structure like so:
repo/
|--app.py
|--settings.py
|--models.py
|--tests/
    |--test_app.py
Run py.test while in the repo directory, and everything behaves as you would expect.
But when I try that same thing on either Linux or Windows (both have pytest 2.2.3 on them), it barks whenever it hits its first import of something from my application path. For instance, from app import some_def_in_app.
Do I need to be editing my PATH to run py.test on these systems?
I'm not sure why py.test does not add the current directory in the PYTHONPATH itself, but here's a workaround (to be executed from the root of your repository):
python -m pytest tests/
It works because Python adds the current directory in the PYTHONPATH for you.
Recommended approach for pytest>=7: use the pythonpath setting
Recently, pytest has added a new core plugin that supports sys.path modifications via the pythonpath configuration value. The solution is thus much simpler now and doesn't require any workarounds anymore:
pyproject.toml example:
[tool.pytest.ini_options]
pythonpath = [
"."
]
pytest.ini example:
[pytest]
pythonpath = .
The path entries are calculated relative to the rootdir, thus . adds the repo directory to sys.path in this case.
Multiple path entries are also allowed: for a layout
repo/
├── src/
│   └── lib.py
├── app.py
└── tests
    ├── test_app.py
    └── test_lib.py
the configuration
[tool.pytest.ini_options]
pythonpath = [
".", "src",
]
or
[pytest]
pythonpath = . src
will add both app and lib modules to sys.path, so
import app
import lib
will both work.
Original answer (not recommended for recent pytest versions; use for pytest<7 only): conftest solution
The least invasive solution is adding an empty file named conftest.py in the repo/ directory:
$ touch repo/conftest.py
That's it. No need to write custom code to mangle sys.path, remember to drag PYTHONPATH along, or place __init__.py into dirs where it doesn't belong (using python -m pytest as suggested in Apteryx's answer is a good solution though!).
The project directory afterwards:
repo
├── conftest.py
├── app.py
├── settings.py
├── models.py
└── tests
    └── test_app.py
Explanation
pytest looks for conftest modules on test collection to gather custom hooks and fixtures, and in order to import the custom objects from them, pytest adds the parent directory of the conftest.py to sys.path (in this case the repo directory).
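For instance, with that conftest.py in place the import from the question works unchanged in the test module. A minimal sketch (some_def_in_app is the name used in the question):
# tests/test_app.py
from app import some_def_in_app  # resolves because repo/ is now on sys.path

def test_import_works():
    assert some_def_in_app is not None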
Other project structures
If you have a different project structure, place the conftest.py in the package root dir (the one that contains packages but is not a package itself, i.e. does not contain an __init__.py), for example:
repo
├── conftest.py
├── spam
│   ├── __init__.py
│   ├── bacon.py
│   └── egg.py
├── eggs
│   ├── __init__.py
│   └── sausage.py
└── tests
    ├── test_bacon.py
    └── test_egg.py
src layout
Although this approach can be used with the src layout (place conftest.py in the src dir):
repo
├── src
│   ├── conftest.py
│   ├── spam
│   │   ├── __init__.py
│   │   ├── bacon.py
│   │   └── egg.py
│   └── eggs
│       ├── __init__.py
│       └── sausage.py
└── tests
    ├── test_bacon.py
    └── test_egg.py
beware that adding src to PYTHONPATH undermines the point and benefits of the src layout! You will end up testing the code from the repository and not the installed package. If you need to do it, maybe you don't need the src dir at all.
Where to go from here
Of course, conftest modules are not just some files to help with source code discovery; they are where all the project-specific enhancements of the pytest framework and the customization of your test suite happen. pytest has a lot of information on conftest modules scattered throughout their docs; start with conftest.py: local per-directory plugins.
Also, SO has an excellent question on conftest modules: In py.test, what is the use of conftest.py files?
I had the same problem. I fixed it by adding an empty __init__.py file to my tests directory.
Yes, the source folder is not in Python's path if you cd to the tests directory.
You have two choices:
Add the path manually to the test files. Something like this:
import sys, os
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../')
Run the tests with the env var PYTHONPATH=../.
Run pytest itself as a module with:
python -m pytest tests
This happens when the project hierarchy is, for example, package/src and package/tests, and in tests you import from src. Executing as a module treats imports as absolute rather than relative to the execution location.
You can run with PYTHONPATH set to the project root:
PYTHONPATH=. py.test
Or install your package with pip in editable mode:
pip install -e . # install package using setup.py in editable mode
I had the same problem in Flask.
When I added:
__init__.py
to the tests folder, the problem disappeared :)
Probably the application couldn't recognize the tests folder as a module.
I created this as an answer to your question and my own confusion. I hope it helps. Pay attention to PYTHONPATH in both the py.test command line and in the tox.ini.
https://github.com/jeffmacdonald/pytest_test
Specifically: You have to tell py.test and tox where to find the modules you are including.
With py.test you can do this:
PYTHONPATH=. py.test
And with tox, add this to your tox.ini:
[testenv]
deps = -r{toxinidir}/requirements.txt
commands = py.test
setenv =
    PYTHONPATH = {toxinidir}
I fixed it by removing the top-level __init__.py in the parent folder of my sources.
I started getting weird ConftestImportFailure: ImportError('No module named ... errors when I had accidentally added an __init__.py file to my src directory (which was not supposed to be a Python package, just a container for all the source).
It is a bit of a shame that this is an issue in Python... But just adding this environment variable is the most comfortable way, IMO:
export PYTHONPATH=$PYTHONPATH:.
You can put this line in your .zshrc or .bashrc file.
I was having the same problem when following the Flask tutorial and I found the answer on the official Pytest documentation.
It's a little shift from the way I (and I think many others) are used to doing things.
You have to create a setup.py file in your project's root directory with at least the following two lines:
from setuptools import setup, find_packages
setup(name="PACKAGENAME", packages=find_packages())
where PACKAGENAME is your app's name. Then you have to install it with pip:
pip install -e .
The -e flag tells pip to install the package in editable or "develop" mode. So the next time you run pytest it should find your app in the standard PYTHONPATH.
I had a similar issue. pytest did not recognize a module installed in the environment I was working in.
I resolved it by also installing pytest into the same environment.
Also, if you run pytest within your virtual environment, make sure the pytest module is installed within that environment. Activate your virtual environment and run pip install pytest.
For me the problem was the tests.py generated by Django alongside the tests directory. Removing tests.py solved the problem.
I got this error because I was using relative imports incorrectly. In the OP's example, test_app.py should import functions using e.g.
from repo.app import *
rather than
from app import *
However liberally __init__.py files are scattered around the file structure, the latter does not work and creates the kind of ImportError seen, unless the files and test files happen to be in the same directory.
Here's an example of what I had to do with one of my projects; this is the project structure:
microbit/
microbit/activity_indicator/activity_indicator.py
microbit/tests/test_activity_indicator.py
To be able to access activity_indicator.py from test_activity_indicator.py I needed to:
start test_activity_indicator.py with the correct import:
from microbit.activity_indicator.activity_indicator import *
put __init__.py files throughout the project structure:
microbit/
microbit/__init__.py
microbit/activity_indicator/__init__.py
microbit/activity_indicator/activity_indicator.py
microbit/tests/__init__.py
microbit/tests/test_activity_indicator.py
According to a post on Medium by Dirk Avery (and supported by my personal experience) if you're using a virtual environment for your project then you can't use a system-wide install of pytest; you have to install it in the virtual environment and use that install.
In particular, if you have it installed in both places then simply running the pytest command won't work because it will be using the system install. As the other answers have described, one simple solution is to run python -m pytest instead of pytest; this works because it uses the environment's version of pytest. Alternatively, you can just uninstall the system's version of pytest; after reactivating the virtual environment the pytest command should work.
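A quick way to check which copy you are about to run (a sketch for a POSIX shell, with the virtual environment activated):
which pytest                  # does this point into the venv's bin/ or into the system path?
python -m pytest --version    # always resolves pytest through the currently active interpreter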
I was getting this error due to something even simpler (you could even say trivial). I hadn't installed the pytest module. So a simple apt install python-pytest fixed it for me.
'pytest' would have been listed in setup.py as a test dependency. Make sure you install the test requirements as well.
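One common way to declare that is with an extras_require entry in setup.py. A sketch only; the package name and extra below are illustrative, not taken from the question:
# setup.py
from setuptools import setup, find_packages

setup(
    name="mypackage",                       # illustrative name
    packages=find_packages(),
    extras_require={"test": ["pytest"]},    # then: pip install -e ".[test]"
)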
Since no one has suggested it, you could also pass the path to the tests in your pytest.ini file:
[pytest]
...
testpaths = repo/tests
See documentation: https://docs.pytest.org/en/6.2.x/customize.html#pytest-ini
Side effect for Visual Studio Code: it should pick up the unit tests in the UI.
We have fixed the issue by adding the following environment variable.
PYTHONPATH=${PYTHONPATH}:${PWD}/src:${PWD}/test
As pointed out by Luiz Lezcano Arialdi, the correct solution is to install your package as an editable package.
Since I am using Pipenv, I thought about adding to his answer a step-by-step guide on how to install the current path as an editable package with Pipenv, allowing you to run pytest without any path-mangling code or loose files.
You will need to have the following minimal folder structure (documentation):
package/
    package/
        __init__.py
        module.py
    tests/
        module_test.py
    setup.py
setup.py mostly has the following minimum code (documentation):
import setuptools

setuptools.setup(name='package',  # Change to your package name
                 packages=setuptools.find_packages())
Then you just need to run pipenv install --dev -e . and Pipenv will install the current path as an editable package (the --dev flag is optional) (documentation).
Now you should be able to run pytest without problems.
If this pytest error appears not for your own package, but for a Git-installed package in your package's requirements.txt, the solution is to switch to editable installation mode.
For example, suppose your package's requirements.txt had the following line:
git+https://github.com/foo/bar.git
You would instead replace it with the following:
-e git+https://github.com/foo/bar.git#egg=bar
If nothing works, make sure your test_module.py is listed under the correct src directory.
Sometimes pytest will give ModuleNotFoundError not because modules are misplaced or because export PYTHONPATH="${PWD}:${PYTHONPATH}" is not working, but because test_module.py is placed in the wrong directory under the tests folder.
The test tree should map 1-to-1 onto the source tree, recursively; in addition, the root folder should be named "tests" and the names of files containing test code should start with "test_".
For example,
./nlu_service/models/transformers.py
./tests/models/test_transformers.py
This was my experience: very often the tests were interrupted because a module could not be imported.
After research, I found out that the system was looking for the file in the wrong place, and that we can easily overcome the problem by copying the file containing the module into the same folder as stated, so that it is imported properly.
Another solution would be to change the import declaration and show MutPy the correct path of the unit. However, because multiple units can have this dependency, meaning we would need to commit changes in their declarations as well, we prefer to simply move the unit to the folder.
My solution:
Create the conftest.py file in the test directory containing:
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + "/relative/path/to/code/")
This will add the folder of interest to the Python interpreter path without modifying every test file, setting an environment variable, or messing with absolute/relative paths.

AWS codedeploy deployment failure due to node modules names with special characters

We are using AWS CodeDeploy with Bitbucket to deploy our applications on our EC2 instances. This is a new issue that we faced for our Angular project repository. This repo has node modules checked in, as we are using Angular with Node and these dependencies are needed. These dependencies have directory names starting with the special character @. We found a thread on Stack Overflow which said names with special characters might cause a failure with an error similar to the one we encountered.
The error we receive is shown in the attached screenshot.
We are unable to resolve this. When we removed the node_modules directory the deployment worked fine, so we are sure that the issue has to be with the names. We cannot change or remove them as these dependencies are used by Angular. We believe there must be a way to tackle this and hence are looking for suggestions. The appspec.yml file helps to filter out files; can that be helpful in this case?
Details of deployment:
We use the standard Bitbucket CodeDeploy plugin to communicate with AWS. The Bitbucket repository branch to be deployed is set and the deployment group is selected to initiate the deployment.
The above image shows the node modules bundled with the app in the same branch. We are using Angular 7 with Node, hence these dependencies are needed. Now if we remove the node_modules directory, the deployment works fine. Hence we concluded that it is these special characters that are causing the failure. Here's another question which describes a similar issue due to special characters.
For node modules it is always advised to pack them with your code rather than downloading them at deployment time.
Try cleaning the destination directory yourself before installation, using a 'BeforeInstall' hook in the AppSpec file as follows:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/app/myapp
hooks:
  BeforeInstall:
    - location: ./cleanup.sh
and the content of cleanup.sh is similar to this:
#!/bin/bash -xe
rm -rf /var/app/myapp/
In the above, make sure to update the destination of your application deployment.
Edit 1:
Did a simple test with Repo:
https://github.com/shariqmus/codedeploy-special-char-test
(Recursively zipped the repo and uploaded to S3 and tested from there)
... and no dramas during extraction:
[root@ip-172-31-27-170 codedeploy-agent]# tree /var/app/myapp
/var/app/myapp
├── appspec.yml
├── cleanup.sh
├── node_modules
│   └── @agm
│       └── file.txt
└── README.md

puppet module not getting installed in agent

I am trying to install a Puppet module on the master and the agent node. The installation on the master is successful; the new module is visible in the module list. Then I changed the site.pp file and included the new module. After that I ran the puppet agent -t command on the agent and expected the module to be installed on the agent. The command runs without any issues but the module is not getting installed.
Following is the sequence of steps that were executed on the master:
puppet module install puppetlabs-ntp --version 6.2.0
puppet agent -t
puppet module list
Output:
/etc/puppetlabs/code/environments/production/modules
├── puppetlabs-ntp (v6.2.0)
└── puppetlabs-stdlib (v4.17.1)
Updated the site.pp file with the following content:
node default {
  include ntp
}
And the following are the steps executed on the agent:
puppet agent -t
puppet module list
Output:
/etc/puppetlabs/code/environments/production/modules
└── puppetlabs-stdlib (v4.17.1)
I have even compared the output of puppet agent -t --debug from both the master and the agent but did not see any specific errors that might be causing this issue.
What am I missing here?
You are misunderstanding how Puppet works. If everything is set up correctly, the result you should expect when you run puppet agent -t on the agent is for Puppet on that node to configure the ntp service for you. It will not transfer the ntp module to the agent. The Puppet code itself is supposed to remain on the master.
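If you want to confirm that the run actually configured the node, ask Puppet on the agent about the resource it now manages. A sketch; the service name depends on the OS (ntpd on RedHat-family systems, ntp on Debian-family ones):
puppet resource service ntpd     # shows the state Puppet sees for the service
puppet config print modulepath   # where the agent would look for local modules (the ntp module is not there, and does not need to be)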

Configuring http_backend in puppet 4

I have a problem while configuring Puppet 4 master to work with HTTP requests so I can use CouchDB for hiera.
These are the steps I did so far:
created new CouchDB with test database
created new document called common
gem install hiera-http-1.0.0
put the http_backend.rb file in /opt/puppetlabs/code/environments/production/modules/hiera_http and /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/hiera-http
When I run gem list I can see:
hiera (3.2.0)
hiera-http (1.0.0)
Now, when I try running hiera common or anything else I get this ERROR:
'require' : cannot load such file -- lookup_http (LoadError)
My hiera.yaml looks like:
:backends:
  - http
And, of course, all the required settings (host, port, ...).
When I run puppet agent -t on the agent I get
cannot load backend http: no such file to load -- hiera/backend/http_backend at site.pp
Your steps 3 and 4:
gem install hiera-http-1.0.0
put the http_backend.rb file in /opt/puppetlabs/code/environments/production/modules/hiera_http and /opt/puppetlabs/puppet/lib/ruby/gems/2.1.0/gems/hiera-http
need to be replaced by:
/opt/puppetlabs/puppet/bin/gem install hiera-http
That will ensure the hiera-http backend gem is properly installed and configured for the Ruby that Puppet uses.
If you want Puppet to use the system Ruby, so that it recognizes the hiera-http installed from the system gem, then you need to install puppet, facter, and hiera with gem rather than with your OS package manager.
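You can verify the result against Puppet's own Ruby. A quick sketch using the AIO paths already mentioned above:
/opt/puppetlabs/puppet/bin/gem list hiera-http
/opt/puppetlabs/puppet/bin/ruby -e "require 'lookup_http'"   # should exit cleanly, without the LoadError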

Can I install puppet modules through puppet manifest?

The primary goal is to add all Puppet modules automatically, so that all dev environments and the prod environment can be brought up with one command. How can I install Puppet modules through a Puppet manifest?
We've been happily using librarian-puppet to sync all 3rd-party modules; it supports setting the modules' locations and versions, so production and dev run the exact same code.
The usage is a one-liner:
librarian-puppet install
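The modules themselves are pinned in a Puppetfile at the root of the repo. A minimal sketch; the module names and versions here are only examples:
forge "https://forgeapi.puppetlabs.com"

mod "puppetlabs-stdlib", "4.9.0"
mod "puppetlabs-postgresql", "4.6.0"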
In other cases we have a shell script that runs Puppet twice: first with a minimal module that is only responsible for fetching the required modules, and then the full-blown Puppet flow once all modules are available.
The "Puppet module" type and provider does exactly that:
module { 'author/mymodule':
  ensure => present,
}

module { 'puppetlabs/stdlib':
  ensure => '2.6.0',
}

module { 'puppetlabs/stdlib':
  ensure     => present,
  modulepath => '/etc/puppet/modules',
}
Recent versions of Puppet also have a puppet module command that allows you to install arbitrary Puppet modules from the Puppet forge, example:
$ puppet module install rcoleman/puppet_module
Notice: Preparing to install into /home/anarcat/.puppet/modules ...
Notice: Created target directory /home/anarcat/.puppet/modules
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/home/anarcat/.puppet/modules
└── rcoleman-puppet_module (v0.0.3)
Other alternatives include:
librarian
r10k (cited as a replacement for librarian), which supports dynamic environments and much more
r10k is now part of Puppet Enterprise 2015.03, so it can certainly be considered best practice.
Here is an example of mine:
$module_stdlib = 'puppetlabs-stdlib'

exec { 'puppet_module_stdlib':
  command => "puppet module install ${module_stdlib}",
  unless  => "puppet module list | grep ${module_stdlib}",
  path    => ['/bin', '/opt/puppetlabs/bin'],
}
Here $module_stdlib is the module I want to install.
The /bin path is where grep comes from, and /opt/puppetlabs/bin is where the puppet binary lives in my installation.
It seems that writing a module for installing Puppet modules is possible; it would just be a wrapper for the puppet module tool. However, I haven't heard of such a module yet.
I suppose this mechanism of installation is not popular because you often need to modify an installed module and do some customizations. A practical tool for managing such modifications is a version control system. For example, in our team we keep the /etc/puppetlabs/puppet directory in a git repository. After installing any module from the Puppet Forge we add its files to version control and push them to the remote git server. Our internally developed modules are also kept in this repository. This way, several Puppet masters (dev and prod environments) are synchronized with this central repository and always have up-to-date versions of all modules.
I did this and it seemed to work
exec { 'puppet-fstab':
  path    => '/bin:/usr/bin',
  command => 'puppet module install -i /usr/share/puppet/modules/AlexCline-fstab >>/tmp/err.log',
}
