I'm a total newbie and this is my first project; it's almost done. I tried every method mentioned in this SO thread to move the secret key out of settings, and with every method I got some kind of error, even with the method from the official Django docs. I couldn't find where I'm making a mistake.
When the secret key is inside settings.py everything works smoothly, but I need to push my code to git, so I have to keep the key out of settings.py.
Below are the details of my attempt with django-environ to keep the secret key outside settings.py.
I'm putting the contents inside the root project folder.
I'm using miniconda 4.10.1; here is my requirements.txt:
# platform: linux-64
_libgcc_mutex=0.1=main
_openmp_mutex=4.5=1_gnu
appdirs=1.4.4=py_0
asgiref=3.3.4=pyhd3eb1b0_0
attrs=21.2.0=pyhd3eb1b0_0
black=19.10b0=py_0
ca-certificates=2021.5.30=ha878542_0
certifi=2021.5.30=py39hf3d152e_0
click=8.0.1=pyhd3eb1b0_0
django=3.2.4=pyhd3eb1b0_0
django-environ=0.4.5=py_1
importlib-metadata=3.10.0=py39h06a4308_0
krb5=1.17.1=h173b8e3_0
ld_impl_linux-64=2.35.1=h7274673_9
libedit=3.1.20210216=h27cfd23_1
libffi=3.3=he6710b0_2
libgcc-ng=9.3.0=h5101ec6_17
libgomp=9.3.0=h5101ec6_17
libpq=12.2=h20c2e04_0
libstdcxx-ng=9.3.0=hd4cf53a_17
mypy_extensions=0.4.1=py39h06a4308_0
ncurses=6.2=he6710b0_1
openssl=1.1.1k=h7f98852_0
pathspec=0.7.0=py_0
pip=21.1.2=py39h06a4308_0
psycopg2=2.8.6=py39h3c74f83_1
python=3.9.5=h12debd9_4
python_abi=3.9=1_cp39
pytz=2021.1=pyhd3eb1b0_0
readline=8.1=h27cfd23_0
regex=2021.4.4=py39h27cfd23_0
setuptools=52.0.0=py39h06a4308_0
six=1.16.0=pyh6c4a22f_0
sqlite=3.35.4=hdfb4753_0
sqlparse=0.4.1=py_0
tk=8.6.10=hbc83047_0
toml=0.10.2=pyhd3eb1b0_0
typed-ast=1.4.2=py39h27cfd23_1
typing_extensions=3.7.4.3=pyha847dfd_0
tzdata=2020f=h52ac0ba_0
wheel=0.36.2=pyhd3eb1b0_0
xz=5.2.5=h7b6447c_0
zipp=3.4.1=pyhd3eb1b0_0
zlib=1.2.11=h7b6447c_3
settings.py
import os
import environ
from pathlib import Path

env = environ.Env(
    # set casting, default value
    DEBUG=(bool, False)
)

# reading .env file
environ.Env.read_env()

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')

# False if not in os.environ
DEBUG = env('DEBUG')
I'm not adding the rest of the settings; I don't think it's important. If needed, please mention it and I'll update.
I placed the .env file in the root of the project, where manage.py and db.sqlite3 are.
.env
#env file
DEBUG=on
#copied the entire line from settings.py
SECRET_KEY='xxxx django secret key here xxxx'
While running "python manage.py runserver", I got this error.
I'm not sure what I'm missing. I got some kind of error with each method I tried, and the errors were not the same; sorry that I cannot explain every method and error here.
There are several similar questions on this forum, but most are unanswered and some don't accurately describe my situation. Please mention if anything else is needed for more clarification.
First, check that you have installed django-environ. You may also have a typo in your requirements.txt: it should be django-environ=0.4.5 instead of django-environ=0.4.5=py_1.
You can pass the path of your .env via read_env(env_file="relative_path_of_your_env_file"); it reads the .env file into os.environ. Per the docstring, if not given a path to a dotenv file, it "does filthy magic stack backtracking to find manage.py and then find the dotenv."
Go through this code: https://github.com/joke2k/django-environ/blob/master/environ/environ.py#L614
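For example, a minimal sketch (assuming the usual layout where settings.py sits one level below the project root that contains manage.py and .env):

import environ
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

env = environ.Env(DEBUG=(bool, False))
# Point read_env at the project-root .env explicitly instead of
# relying on the default search next to settings.py.
environ.Env.read_env(env_file=str(BASE_DIR / '.env'))

SECRET_KEY = env('SECRET_KEY')
DEBUG = env('DEBUG')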
From the structure of the file tree, it's clear that the .env file is placed in the root folder of the project. Checking the error message, it's visible that whatever is searching for the .env file is looking in the same place as settings.py.
So the short answer is: if you are using django-environ to keep the secret key outside, place the .env file together with settings.py in the same directory.
For somewhat more elaborate content you can refer to this link; I felt it is suitable for newbies.
I set up Neovim LSP using nvim-lspconfig and nvim-lsp-installer, where I also installed the pyright server.
Without any further configuration it worked out of the box. However, when I have a class in a subfolder and add a new method, pyright does not recognize the method when I access it from a different file. When I restart Neovim, or close and reopen the file, pyright suddenly recognizes the newly added method.
I also tried :LspRestart, with no effect.
I tried to add some settings to the pyright server:
return {
    settings = {
        python = {
            analysis = {
                autoSearchPaths = true,
                diagnosticMode = "workspace",
                useLibraryCodeForTypes = true,
            },
        },
    },
}
But this also had no effect.
:LspLog also does not show anything which could point to the issue:
[START][2022-07-15 11:11:05] LSP logging initiated
[WARN][2022-07-15 11:11:09] ...lsp/handlers.lua:109 "The language server pyright triggers a registerCapability handler despite dynamicRegistration set to false. Report upstream, this warning is harmless"
[WARN][2022-07-15 11:11:09] ...lsp/handlers.lua:456 "stubPath typings is not a valid directory."
[WARN][2022-07-15 11:11:20] ...lsp/handlers.lua:109 "The language server pyright triggers a registerCapability handler despite dynamicRegistration set to false. Report upstream, this warning is harmless"
I also could not find any setting regarding this issue here which could solve it.
Since I am new to Python, the way I import and structure classes might not be common and might be the cause of this problem.
As the main entry point I have main.py in the root folder.
All other source files are in a program/ folder, which does not have an __init__.py.
Inside program/ there are folders which each have an __init__.py file, e.g. core/.
core/__init__.py:
from .myClass import myClass
and in main.py I import it like this:
from subfolder.core import myClass
myClass.newMethod() # this is only recognized by lsp/pyright after the file is closed and reopen
Is the issue a bug in pyright (not likely, I guess), a missing setting, or my strange folder/import structure?
Can you try this: create (or modify) pyproject.toml and put it in the project root directory. Inside pyproject.toml, add the following lines:
[tool.pyright]
extraPaths = ["program/core", "program/directory_2", "program/directory_3"]
The idea is that you have to add the subdirectories manually, which is really tedious, but at least it works in my case.
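Alternatively, my understanding from pyright's configuration docs is that the same option can go in a pyrightconfig.json in the project root (treat this as a sketch, not verified in this exact setup):

{
    "extraPaths": ["program/core", "program/directory_2", "program/directory_3"]
}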
As far as I'm aware I'm using best practices to define paths (using raw strings) and how I go about joining them (using os.path.join()), e.g.
import os
fdir = r'C:\Code\...\samples'
fpath = os.path.join(fdir, 'fname.ext')
and doing so has not caused me any problems when running my code within a Python or command shell. If I print fpath to the console I get consistent use of backslashes in the path:
C:\Code\...\samples\fname.ext
But when I run a Docker containerized version of the code and run the image I get the error:
FileNotFoundError: [Errno 2] No such file or directory:
'C:\Code\...\samples/fname.ext'
I don't understand why os.path.join() used a / to join fdir and fname.ext when the rest of the path uses \. It doesn't do this when I run the code outside of the container.
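A minimal sketch of what is likely happening: os.path.join() always uses the separator of the OS the interpreter is currently running on, so a hard-coded Windows path joined inside a Linux-based container yields exactly this kind of mixed path:

import os

# A Windows-style directory; on Linux the backslashes are ordinary
# characters, not separators.
fdir = r'C:\Code\samples'

# On Windows os.path.join() inserts '\'; on Linux it inserts '/',
# reproducing the mixed separators from the traceback.
print(os.path.join(fdir, 'fname.ext'))
# Windows: C:\Code\samples\fname.ext
# Linux:   C:\Code\samples/fname.ext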
I have tried using os.path.normpath():
fpath = os.path.join(fdir, 'fname.ext')
fpath = os.path.normpath(fpath)
as discussed here, and os.sep.join():
fpath = os.sep.join([fdir, 'fname.ext'])
as covered here, and Path().joinpath():
from pathlib import Path
fpath = Path(fdir).joinpath('fname.ext')
as well as Path() / 'path_to_add':
fpath = Path(fdir) / 'fname.ext'
as discussed here, but in every case I end up with the same result as with os.path.join().
Can someone please help me to understand what is going on and how to create consistent paths that will work whether I run the code in Python in a Windows environment, or in a Docker container?
Update Nov. 16:
In trying to keep my question brief I think I've left out details that are crucial. Apologies to those who have kindly taken the time to offer suggestions based on my incomplete description of the problem.
My code needs to import/export files from/to directories that are defined within a user-specified configuration file.
So the configuration file has a section of code where the user defines variables and paths, e.g.
samplesDir = r"path-to-samples-directory"
The variables are stored in a dictionary of dictionaries and saved as a .json.
At the start of the code the user defines the key that selects the dictionary of interest, so that at the various points in my code where a file needs to be imported/exported, the paths are at hand.
So back to my example, samplesDir is stored in the configuration dictionary, cfgDict, so all I need to do is append the file name:
sampleFpath = os.path.join(sampleDir, sampleFname)
and sampleFname is determined based on other variables.
Because of the dynamic nature of the variables (including directory paths and file paths), I think this rules out the use of static paths defined in a .yml with Docker Compose.
Update Nov. 18:
It may help to include a few more details and some screenshots.
The above screenshot shows the file and folder structure of the src directory containing the source code, the main app.py script for command-line use, the Dockerfile, etc.
The configs folder contains JSON files that include variables and paths to directories and files. The user can create configuration files either by copying an existing one and modifying the entries, or by generating them with config.py.
Within config.py I have pre-set variables and paths, so that the directory path to the configuration files (configs), sample files (sample_DROs) and others (e.g. fiducials) are all within src.
I don't anticipate any reason why the user would want to store the config files anywhere else, nor do I expect them to want to use different sample files (or move them elsewhere). However, they will undoubtedly create their own fiducials and may decide not to store them in the fiducials directory (i.e. somewhere not within the src directory).
Likewise I have pre-set the download directory (based on the parameters stored within the configuration files, files are fetched from a server and downloaded) to be the default Downloads directory:
rootDownloadDir = os.path.join(Path.home(), "Downloads", "xnat_downloads")
Those files are later imported, processed, and the outputs are (by default) exported into sub-directories within rootDownloadDir.
Within Dockerfile I set the working directory of the container to be that of the source code and copy all of the contents of src (with the exception of some directories defined in .dockerignore):
WORKDIR C:/Code/WP1.3_multiple_modalities/src
...
COPY . .
so that the structure of the container mimics that of WORKDIR.
Hence I have allowed for flexibility in import/export directories, and they are by default a combination of paths within and outside of the src directory. And so, the code executed within the container will need to access files both within and outside of src.
That said, I don't know what rootDownloadDir will look like when os.path.join(Path.home(), "Downloads", "xnat_downloads") is run within the container.
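For what it's worth, a quick way to check is to run the expression inside the container (assuming a Linux image running as root, where Path.home() is /root):

import os
from pathlib import Path

# Inside a typical Linux container this prints /root/Downloads/xnat_downloads,
# a directory that exists only inside the container unless it is mounted as a volume.
print(os.path.join(Path.home(), "Downloads", "xnat_downloads"))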
This has got me thinking - Is it bad practice to set the download directory outside of src?
Returning to the original error: the sample file is present in the container.
From the actual behavior I can suppose that the container is based on a Unix-like image; the path separator is / on such systems.
To build an environment-independent path which works both inside and outside of the container, you need the following:
Mounting of the host folder to a container directory.
An environment variable set inside and outside the container.
I can show an example of how this is achievable via the docker-compose tool and its configuration file, docker-compose.yml:
# docker-compose.yml file
version: '3'
services:
  <service_name>:  # your service name here
    image: <image_name>  # name of the image your container is built on
    environment:
      - SAMPLES_PATH=/samples
    volumes:
      - C:\Code\somepath\samples:/samples
In your Python code you can use the following structure:
import os
fdir = os.getenv('SAMPLES_PATH', r'C:\Code\...\samples')
fpath = os.path.join(fdir, 'fname.ext')
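With this pattern the same code runs in both environments: inside the container, docker-compose injects SAMPLES_PATH=/samples and mounts the host folder there, so the code sees a native Linux path; on the host the variable is unset, and os.getenv() falls back to the hard-coded Windows path.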
I am using python-decouple 3.4 to set up environment variables for my Django application. My .env file is in the same directory as manage.py. Except for SECRET_KEY (in settings.py), loading any other environment variable in either settings.py or views.py fails, stating that it has not been defined. The other environment variables that give the error will be used in views.py.
Here is my .env file:-
SECRET_KEY=<django_app_secret_key>
file_path=<path_to_the file>
If I try to define them in settings.py like:-
from decouple import config
FILE_PATH = config('file_path')
and then use them in views.py,
from django.conf.settings import FILE_PATH
print(FILE_PATH)
I still get the same error. How can I define environment variables for my views.py specifically?
[Edit: This is the error which I get:-
raise UndefinedValueError('{} not found. Declare it as envvar or define a default value.'.format(option))
decouple.UndefinedValueError: file_path not found. Declare it as envvar or define a default value.
whether I used this
from decouple import config
FILE_PATH = config('file_path')
in settings.py directly, or in views.py directly, or first in settings.py and then in views.py as in the example shown above]
I reproduced the code you described and it works for me. The .env file should be in the root folder, where manage.py is. Make sure you are referencing the correct settings file:
python manage.py runserver --settings=yourproj.settings.production
In the .env file:
file_path='/my/path'
In the settings.py file:
from decouple import config
FILE_PATH = config('file_path')
Also, in the views.py file the import should be like this:
from django.conf import settings
print(settings.FILE_PATH)
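As the error message itself suggests, another option is to give config() a default so the lookup never raises when the variable is missing; a small sketch (the default value is just an illustration):

from decouple import config

# Returns the value from .env / the environment if present,
# otherwise the supplied default.
FILE_PATH = config('file_path', default='/tmp/example.txt')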
In the .env file, the values assigned to the variables were not enclosed in quotes, and that was why it was giving the error that it was unable to find the file_path variable.
The .env file should be like this:-
SECRET_KEY='<django_app_secret_key>'
file_path='<path_to_the file>'
Anyway, thanks to @Iain Shelvington and @Prakash S for your help.
Is there anything that offers replay-type functionality, by pointing at a predefined prompt-answer file?
What works and what I'd like to achieve.
Let's take an example, using a cookiecutter to prep a Python package for pypi
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git
You've downloaded /Users/jluc/.cookiecutters/cookiecutter-pypackage before. Is it okay to delete and re-download it? [yes]:
full_name [Audrey Roy Greenfeld]: Spartacus 👈 constant for me/my organization
email [audreyr@example.com]: spartacus@example.com 👈 constant for me/my organization
...
project_name [Python Boilerplate]: GladiatorRevolt 👈 this will vary.
project_slug [q]: gladiator-revolt 👈 this too
...
OK, done.
Now, I can easily redo this, for this project, via:
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --replay
This is great!
What I want:
Say I create another project, UnleashHell.
I want to prep a file somehow that has my developer info and project-level info for Unleash. And I want to be able to run it multiple times against this template, without having to deal with prompts. This particular pypi template gets regular updates; for example, Python 2.7 support has been dropped.
The problem:
A --replay will just inject the last run for this cookiecutter template. If it was run against a different pypi project, too bad.
I'm good with my developer-level info, but I need to vary all the project level info.
I tried copying the replay file via:
cp ~/.cookiecutter_replay/cookiecutter-pypackage.json unleash.json
Edit unleash.json to reflect necessary changes.
Then specify it via the --config-file flag:
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --config-file unleash.json
I get an ugly error; it wants YAML, apparently.
cookiecutter.exceptions.InvalidConfiguration: Unable to parse YAML file .../000.packaging/unleash.json. Error: None of the known patterns match for {
"cookiecutter": {
"full_name": "Spartacus",
No problem, json2yaml to the rescue.
That doesn't work either.
cookiecutter.exceptions.InvalidConfiguration: Unable to parse YAML file ./cookie.yaml. Error: Unable to determine type for "
full_name: "Spartacus"
I also tried a < stdin redirect:
cookiecutter.prompts.txt:
yes
Spartacus
...
It doesn't seem to use it and just aborts.
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git < ./cookiecutter.prompts.txt
You've downloaded ~/.cookiecutters/cookiecutter-pypackage before. Is it okay to delete and re-download it? [yes]
: full_name [Audrey Roy Greenfeld]
: email [audreyr@example.com]
: Aborted
I suspect I am missing something obvious, but I'm not sure what. To start with, what is the intent and expected format for the --config file?
Debrief: how I got it working, based on the accepted answer.
I took the accepted answer but adjusted it for ~/.cookiecutterrc usage. It works, but the format is not super clear, especially the fact that the rc has to be YAML, which is not always/often the case with rc files.
This ended up working:
file ~/.cookiecutterrc (note: without nesting the values under default_context I got tons of unhelpful YAML parse errors on a valid YAML doc):
default_context:
  # ... cut out for privacy
  add_pyup_badge: y
  command_line_interface: "Click"
  create_author_file: "y"
  open_source_license: "MIT license"
  # the names to use here are:
  # full_name:
  # email:
  # github_username:
  # project_name:
  # project_slug:
  # project_short_description:
  # pypi_username:
  # version:
  # use_pytest:
  # use_pypi_deployment_with_travis:
  # add_pyup_badge:
  # command_line_interface:
  # create_author_file:
  # open_source_license:
I still could not get a combination of ~/.cookiecutterrc and a project-specific config.yaml to work. Too bad the expected configuration format is so lightly documented.
So I will use the rc but enter the project name, slug and description each time. Oh well, good enough for now.
You are close.
Try this: cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --no-input --config-file config.yaml
The --no-input parameter suppresses the terminal prompts; it is optional of course.
The config.yaml file could look like this:
default_context:
  full_name: "Audrey Roy"
  email: "audreyr@example.com"
  github_username: "audreyr"
cookiecutters_dir: "/home/audreyr/my-custom-cookiecutters-dir/"
replay_dir: "/home/audreyr/my-custom-replay-dir/"
abbreviations:
  pp: https://github.com/audreyr/cookiecutter-pypackage.git
  gh: https://github.com/{0}.git
  bb: https://bitbucket.org/{0}
Reference for this example file: https://cookiecutter.readthedocs.io/en/1.7.0/advanced/user_config.html
You probably just need the default_context block, since that is where the user input goes.
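For the project-level values that change each run (name, slug, description), you can also pass them as extra context on the command line; these key=value pairs override both the config file and the template defaults (the values here are illustrative):

cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --no-input \
    project_name="UnleashHell" \
    project_slug="unleash-hell"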
I'm using Tornado with Python 3 on a Linux server. When I edit and save some text or XML files I want Tornado to restart itself. I checked the documentation and found the autoreload module and the watch function here.
It seems to work only for .py files. What can I do if I want it to reload when a certain file, such as an XML config, is modified?
Setting the debug flag to True in the settings forces Tornado to reload whenever a file is modified or whenever a URI is changed in app.py (or wherever you have defined your handlers). Tornado also automatically reloads template files, so any changes in there will be seen instantly.
settings = {
    'debug': True,
    # other stuff
}

tornado.web.Application.__init__(self, handlers, **settings)
The file added must be an absolute path:

import os
from tornado import autoreload

def addwatchfiles(*paths):
    for p in paths:
        autoreload.watch(os.path.abspath(p))

addwatchfiles('config.xml')

config.xml is in the same directory the server's Python file is started from.
You need to turn autoreload on:
tornado.autoreload.start()
tornado.autoreload.watch('myfile')
Complete example at https://gist.github.com/renaud/10356841
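Putting the two answers together, a minimal runnable sketch (the handler and the watched file name are illustrative):

import os

import tornado.autoreload
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello")

if __name__ == "__main__":
    # debug=True already enables autoreload for Python sources and templates.
    app = tornado.web.Application([(r"/", MainHandler)], debug=True)
    app.listen(8888)
    # Additionally watch a non-Python file; the server restarts when it changes.
    tornado.autoreload.watch(os.path.abspath("config.xml"))
    tornado.ioloop.IOLoop.current().start()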