Given that I have added the following piece of code in conftest.py:
def pytest_addoption(parser):
    parser.addoption('--environment', action='store')
I have created a Staging.py file that has:
URL = "Some URL"
And then I launch the tests using:
pytest --environment "Staging"
How do I then get, in a separate file, the environment option's value?
I tried in my test file:
from simple_settings import settings
and then:
print(settings.URL)
But when executed it raised:
Settings are not configured
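For reference, one common way to read that option from any test file (without simple_settings) is through pytest's built-in request fixture; a minimal sketch, with the fixture name environment chosen here just as an example:
# conftest.py -- expose the command-line option as a fixture
import pytest

def pytest_addoption(parser):
    parser.addoption('--environment', action='store')

@pytest.fixture
def environment(request):
    # returns whatever was passed via --environment, e.g. "Staging"
    return request.config.getoption('--environment')

# test_something.py -- any test can now take the fixture as an argument
def test_url(environment):
    print(environment)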
I am using python-decouple 3.4 for setting up environment variables for my Django application. My .env file is in the same directory as manage.py. Except for SECRET_KEY (in settings.py), loading the other environment variables in either settings.py or views.py directly fails, stating that they have not been defined. The other environment variables, which give the error, will be used in views.py.
Here is my .env file:-
SECRET_KEY=<django_app_secret_key>
file_path=<path_to_the file>
If I try to define them in settings.py like:-
from decouple import config
FILE_PATH = config('file_path')
and then use them in views.py,
from django.conf.settings import FILE_PATH
print(FILE_PATH)
then also I get the same error. How can I define an environment variable for my views.py specifically?
[Edit: This is the error which I get:-
raise UndefinedValueError('{} not found. Declare it as envvar or define a default value.'.format(option))
decouple.UndefinedValueError: file_path not found. Declare it as envvar or define a default value.
whether I used this
from decouple import config
FILE_PATH = config('file_path')
in settings.py directly or views.py directly or first in settings.py and then in views.py like the example shown above]
I reproduced the same code you explained and it is working for me. The .env file should be in the root folder where manage.py exists. Make sure you are referencing the same settings file:
python manage.py runserver --settings=yourproj.settings.production
In the .env file:
file_path='/my/path'
In the settings.py file:
from decouple import config
FILE_PATH = config('file_path')
Also, in the views.py file the import should be like this:
from django.conf import settings
print(settings.FILE_PATH)
In the .env file, the values assigned to the variables were not enclosed in quotes, and that was why it was giving the error that it was unable to find the file_path variable.
The .env file should be like this:-
SECRET_KEY='<django_app_secret_key>'
file_path='<path_to_the file>'
Anyways thanks to #Iain Shelvington and #Prakash S for your help.
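For completeness, if the .env ever has to live somewhere other than the folder where manage.py is, python-decouple can also be pointed at an explicit path; a minimal sketch (the path is just a placeholder, adjust it to your project):
# settings.py -- load the .env from an explicit location instead of relying on auto-discovery
from decouple import Config, RepositoryEnv

DOTENV_FILE = '/path/to/your/project/.env'  # placeholder path
env_config = Config(RepositoryEnv(DOTENV_FILE))

SECRET_KEY = env_config('SECRET_KEY')
FILE_PATH = env_config('file_path')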
I have a crontab entry which is running a python3 script. This python script uses a config.ini file to get some tokens for use in the script.
The crontab entry is:
*/15 * * * * /usr/bin/python3 /opt/scripts/tf_state_backup/tf_state_backup.py >> ~/cron.out 2>&1
The config.ini file has the following:
[terraform]
token = <base64 encoded API key>
[gitlab]
token = <base64 encoded API key>
The relevant part of the python script is as follows:
import base64
import configparser
import os
## read config file and decode api keys
config = configparser.ConfigParser()
config.read(os.path.abspath('config.ini'))
tfc_token = base64.b64decode(config['terraform']['token']).decode('utf-8')
gitlab_token = base64.b64decode(config['gitlab']['token']).decode('utf-8')
When this runs I can check the cron.out file for any errors. I get the following error every time it runs.
SyntaxError: invalid syntax
Traceback (most recent call last):
File "/opt/scripts/tf_state_backup/tf_state_backup.py", line 17, in <module>
tfc_token = base64.b64decode(config['terraform']['token']).decode('utf-8')
File "/usr/lib64/python3.6/configparser.py", line 959, in __getitem__
raise KeyError(key)
KeyError: 'terraform'
I have checked the following:
Ensured the script and config have the correct permissions & the +x permission
Ran the script exactly as it is in the crontab; it runs fine without any problems
Ensured the config.ini was being referenced by its absolute path, not a relative path
Any help on this would be excellent.
You should use the get method of the ConfigParser object. The first parameter is the section name, the second is the option name. If the raw parameter is set to True, then special characters (e.g. %) are read as plain strings.
I have written a working version.
test.ini:
[terraform]
token = aGVsbG93b3JsZA==
[gitlab]
token = bm90X2hlbGxvd29ybGQ=
test.py:
import configparser
import base64
config = configparser.ConfigParser()
config.read("test.ini")
tfc_token = base64.b64decode(config.get('terraform', 'token', raw=True)).decode('utf-8')
gitlab_token = base64.b64decode(config.get('gitlab', 'token', raw=True)).decode('utf-8')
print(tfc_token)
print(gitlab_token)
Output:
>>> python3 test.py
helloworld
not_helloworld
FYI:
I have used Python3.6.6 and Linux OS for testing.
I have generated the base64s on this site: https://www.base64encode.org/
I managed to get this running by updating the cron job to the following:
*/15 * * * * cd /opt/scripts/tf_state_backup/ && /usr/bin/python3 /opt/scripts/tf_state_backup/tf_state_backup.py
Perhaps I was not getting the path to the config file correctly?
Regardless, it is now working.
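That is consistent with how cron works: it starts the script from a different working directory, so os.path.abspath('config.ini') resolves against that directory instead of the script's folder. An alternative sketch that avoids the cd in the crontab is to build the path from the script file itself:
# tf_state_backup.py -- resolve config.ini relative to the script, not the current working directory
import base64
import configparser
import os

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))

config = configparser.ConfigParser()
config.read(os.path.join(SCRIPT_DIR, 'config.ini'))

tfc_token = base64.b64decode(config['terraform']['token']).decode('utf-8')
gitlab_token = base64.b64decode(config['gitlab']['token']).decode('utf-8')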
So I tried many things (from SO and more) to get my tests running, but nothing worked. This is my current code:
test.py which I call to run the tests: python3 ./src/preprocess/python/test.py
import unittest

if __name__ == '__main__':
    testsuite = unittest.TestLoader().discover('.')
    unittest.TextTestRunner(verbosity=2).run(testsuite)
the test file looks like this:
import unittest

from scrapes.pdf import full_path_to_destination_txt_file

print(full_path_to_destination_txt_file)


class PreprocessingTest(unittest.TestCase):
    def path_txt_appending(self):
        self.assertEqual(full_path_to_destination_txt_file(
            "test", "/usr/test"), "/usr/test/test.txt")


if __name__ == '__main__':
    unittest.main(verbosity=2)
But the output is always like this:
python3 ./src/preprocess/python/test.py
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Additional Information:
As you can see, I do not call this from my root directory. The test folder is in ./src/preprocess/python/test/ and has an __init__.py file included (there is also an __init__.py file on the level of test.py).
It would be okay for me if I have to write out the calls for all the tests manually; I just want to finish this.
Automatic discovery with -t does not work either, so I thought the more robust method here with test.py would work...
using this framework is a requirement I have to follow
test_preprocessing.py is in the test folder and does from scrapes.pdf import full_path_to_destination_txt_file; scrapes is a module folder on the same level as test.
When I call the single unit test directly in the command line it fails because of the relative import. But using the test.py (obviously) finds the modules.
What is wrong?
By default, unittest will only execute methods whose name starts with test:
testMethodPrefix
String giving the prefix of method names which will be interpreted as test methods. The default value is 'test'.
This affects getTestCaseNames() and all the loadTestsFrom*() methods.
from the docs.
Either change that attribute or (preferably) prefix your method name with test_.
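Applied to the test case in the question, the fix is just the method name; roughly:
import unittest

from scrapes.pdf import full_path_to_destination_txt_file


class PreprocessingTest(unittest.TestCase):
    # renamed from path_txt_appending so the default loader discovers it
    def test_path_txt_appending(self):
        self.assertEqual(full_path_to_destination_txt_file(
            "test", "/usr/test"), "/usr/test/test.txt")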
I have serverless code in Python. I am using serverless-python-requirements:^4.3.0 to deploy this to AWS Lambda.
My code imports another Python file in the same directory as itself, which is throwing an error.
serverless.yml:
functions:
  hello:
    handler: functions/pleasework.handle_event
    memorySize: 128
    tags:
      Name: HelloWorld
      Environment: Ops

package:
  include:
    - functions/pleasework
    - functions/__init__.py
    - functions/config
(venv) ➜ functions git:(master) ✗ ls
__init__.py boto_client_provider.py config.py handler.py sns_publish.py
__pycache__ cloudtrail_handler.py glue_handler.py pleasework.py
As you can see, pleasework.py and config.py are in the same folder, but when I do import config in pleasework.py I get an error:
{
"errorMessage": "Unable to import module 'functions/pleasework': No module named 'config'",
"errorType": "Runtime.ImportModuleError"
}
I have been struggling with this for a few days and think I am missing something basic.
import boto3
import config


def handle_event(event, context):
    print('lol: ')
OK, so I found out my issue. The way I was importing the file was wrong.
Instead of
import config
I should be doing
import functions.config
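So the top of functions/pleasework.py ends up roughly like this (a sketch; aliasing the import keeps the rest of the code unchanged):
# functions/pleasework.py -- Lambda runs from the project root, so import via the functions package
import boto3
import functions.config as config


def handle_event(event, context):
    print('lol: ')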
#Pranay Sharma's answer worked for me.
An alternate way is creating and setting PYTHONPATH environment variable to the directory where your handler function and config exist.
To set environment variables in the Lambda console
Open the Functions page of the Lambda console.
Choose a function.
Under Environment variables, choose Edit.
Choose Add environment variable.
Enter a key and value.
In our case the key is "PYTHONPATH" and the value is "functions".
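If you would rather not click through the console, the same variable can also be set from serverless.yml at deploy time; a sketch, assuming it goes under the provider section (it can equally be set per function):
# serverless.yml -- set PYTHONPATH for every function in the service
provider:
  environment:
    PYTHONPATH: functions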
I use scrapy 1.1.0, and I have 5 spiders in the "spiders" folder.
In every spider, I try to use the python3 logging module. And the code structure is like this:
# import other modules
import logging

import scrapy


class ExampleSpider(scrapy.Spider):
    name = 'special'

    def __init__(self):
        # other initializations

        # set log
        self.log = logging.getLogger('special')
        self.log.setLevel(logging.DEBUG)
        logFormatter = logging.Formatter('%(asctime)s %(levelname)s: %(message)s')
        # file handler
        fileHandler = logging.FileHandler(LOG_PATH)  # LOG_PATH has been defined
        fileHandler.setLevel(logging.DEBUG)
        fileHandler.setFormatter(logFormatter)
        self.log.addHandler(fileHandler)

    # other functions
Every spider has the same structure. When I run these spiders, I check the log files; they do exist, but their size is always 0 bytes.
And the other question is that when I run one spider, it always generates two or more log files. For example, I run one spider and it generates a.log and b.log.
Any answers would be appreciated.
You can set the log file via the LOG_FILE setting in settings.py or via the command-line argument --logfile FILE, i.e. scrapy crawl myspider --logfile myspider.log
As described in the official docs
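A minimal sketch of the settings-based variant (the file name is just an example), which avoids wiring up handlers in every spider:
# settings.py -- let Scrapy manage logging instead of per-spider handlers
LOG_FILE = 'special.log'   # example file name
LOG_LEVEL = 'DEBUG'
LOG_FORMAT = '%(asctime)s %(levelname)s: %(message)s'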