Python configparser remote file in Gitlab - python-3.x

I have a requirement to refactor a K8s Python app so that it gets some of its configuration from a remote GitLab project, because for various reasons we want to decouple application settings from our pipeline/deployment environment.
In my functional testing, this works:
import configparser

config = configparser.ConfigParser()
config_file = "config.ini"  # local file for testing
config.read(config_file)
# ['config.ini']  <- read() returns the list of files it successfully parsed
However, when I attempt to read the configuration from a remote file (our requirement), this DOES NOT work:
import os
import configparser
import requests

token = os.environ.get('GITLAB_TOKEN')
headers = {'PRIVATE-TOKEN': token}  # GitLab's token header uses a hyphen
params = {'ref': 'master'}
response = requests.get('https://path/to/corp/gitlab/file/raw', params=params,
                        headers=headers)
config = configparser.ConfigParser()
configfile = response.content.decode('utf-8')
print(configfile)        # this is good!
config.read(configfile)  # this fails to load the contents into configparser
# []
I get an empty list. I can write the response to a file, or print the contents of the configfile string from the requests.get call, and the INI data looks good. But config.read() seems unable to load it as an in-memory object; it only seems to work by reading a file from disk. Writing the contents of the requests.get response to a local .ini file would defeat the whole purpose of using the remote configuration repo.
Is there a good way to read that configuration file from the remote and have configparser access it at container runtime?

I got this working with:
config.read_string(configfile)
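
For completeness, a minimal end-to-end sketch (the URL and token handling are taken from the question; read_string() parses INI text from memory instead of a file path):

import os
import configparser
import requests

token = os.environ.get('GITLAB_TOKEN')
response = requests.get('https://path/to/corp/gitlab/file/raw',
                        params={'ref': 'master'},
                        headers={'PRIVATE-TOKEN': token})
response.raise_for_status()  # fail fast if the fetch did not succeed

config = configparser.ConfigParser()
config.read_string(response.content.decode('utf-8'))  # parse in memory, no file on disk
print(config.sections())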

Related

How to access the Hydra config object at runtime

I need to change the output/working directory of the Hydra config framework in such a way that it lies outside of my project directory. According to my understanding and the docs, config.yaml would need to look like this:
exp_nr: 0.0.0.0
condition: something
hydra:
  run:
    dir: /absolute/path/to/folder/${exp_nr}/${condition}/
In my code, I then tried to access and set the path like this:
import os
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="../../config", config_name="config", version_base="1.3")
def main(cfg: DictConfig):
    print(cfg)
    cwd = os.getcwd()
    print(f"The current working directory is {cwd}")
    owd = hydra.utils.get_original_cwd()
    print(f"The Hydra original working directory is {owd}")
    work_dir = cfg.hydra.run.dir
    print(f"The work directory should be {work_dir}")
But I get the following output and error:
{'exp_nr': '0.0.0.0', 'condition': 'something'}
The current working directory is /project/path/subdir/subsubdir
The Hydra original working directory is /project/path/subdir/subsubdir
Error executing job with overrides: ['exp_nr=1.0.0.0', 'condition=somethingelse']
Traceback (most recent call last):
  File "/project/path/subdir/subsubdir/model.py", line 13, in main
    work_dir = cfg.hydra.run.dir
omegaconf.errors.ConfigAttributeError: Key 'hydra' is not in struct
    full_key: hydra
    object_type=dict
I see that hydra.run.dir doesn't appear in the cfg dict printed first, but how can I access the path through the config if os.getcwd() isn't set to it already? Or what did I do wrong?
The path itself is correct: I was already saving files to the folder before integrating Hydra, and if the process isn't killed by the error, the folder does get created; but Hydra doesn't save any files to it, not even the log file with the parameters it should save by default. I also tried setting the path relative to the standard output path, and adding an extra config parameter work_dir: ${hydra.run.dir} (which returns an interpolation error).
You can access the Hydra config via the HydraConfig singleton, as documented in the Hydra docs.
import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig

@hydra.main()
def my_app(cfg: DictConfig) -> None:
    print(HydraConfig.get().job.name)
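
Applied to the question above, the run directory the asker was after can be read the same way. A short sketch (assuming the same config.yaml as in the question):

import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig

@hydra.main(config_path="../../config", config_name="config", version_base="1.3")
def main(cfg: DictConfig) -> None:
    # hydra.run.dir lives on the HydraConfig singleton, not on cfg itself
    work_dir = HydraConfig.get().run.dir
    print(f"The work directory is {work_dir}")

if __name__ == "__main__":
    main()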

Unable to read config file using configparser in Databricks

I want to read some values as parameters using configparser in Databricks.
I can import the configparser module in Databricks, but I am unable to read the parameters from the config file; it fails with a KeyError.
(The original post showed the error and the config file in screenshots.)
The problem is that your file is located on DBFS (the /FileStore/...), and this file system isn't understood by configparser, which works with the "local" file system. To get this working, you need to prepend the /dbfs prefix to the file path: /dbfs/FileStore/....
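For example (a minimal sketch; the file name is hypothetical):

import configparser

config = configparser.ConfigParser()
# the /dbfs mount exposes DBFS through the driver's local POSIX file system,
# which is what configparser expects
config.read("/dbfs/FileStore/config.ini")  # hypothetical path
print(config.sections())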
P.S. It may not work on Community Edition with DBR 7.x. In that case, just copy the config file to local storage before reading it, using dbutils.fs.cp, like this:
dbutils.fs.cp("/FileStore/...", "file:///tmp/config.ini")
config.read("/tmp/config.ini")

Flask app config from YAML file is not loaded

My app needs configuration like app.config['LDAP_BASE_DN'] = 'OU=users,dc=example,dc=org'. I want to move this configuration to a YAML file and then make the app use it. I can load the file using PyYAML or config_with_yaml; the problem is that I can't get the app to use it as its configuration.
It should work based on https://exploreflask.com/en/latest/configuration.html.
I load my config with cfg = config.load("/Users/pjose/Project/dev_maintenance/backend/config.yaml"), then I set my app config to get the data from the YAML file with app.config.from_object(cfg). Calling app.config["LDAP_USERNAME"] should then return the value, but it does not work.
YAML file:
LDAP_USERNAME: 'CN=Hermes Conrad,ou=people,dc=planetexpress,dc=com'
I get this error:
File "/Users/pjose/Project/dev_maintenance/backend/dev_maintenance/__init__.py", line 32, in <module>
app.config["LDAP_USERNAME"]
KeyError: 'LDAP_USERNAME'
I finally got this working.
The problem was that I was not actually passing the values from the YAML file into app.config["LDAP_USERNAME"]; the documentation I used doesn't cover this, and I had thought Flask would fetch the values just by declaring it like that.
So, an example of how you can use a YAML file to set your app configuration:
config.yaml
SQLALCHEMY_DATABASE_URI: "sqlite://"
SQLALCHEMY_TRACK_MODIFICATIONS : False
Then, to get the values, you need to parse the YAML using the PyYAML library (or another parser) and load the resulting dict with from_mapping; note that from_object() only reads uppercase attributes of an object, so it silently does nothing with a plain dict:

import yaml

with open("/Users/pjose/Project/dev_maintenance/backend/config.yaml") as f:
    data = yaml.safe_load(f)   # parse the YAML into a plain dict
app.config.from_mapping(data)  # the dict-aware counterpart of from_object()
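
With that in place, the values are available as usual:

print(app.config["SQLALCHEMY_DATABASE_URI"])         # "sqlite://"
print(app.config["SQLALCHEMY_TRACK_MODIFICATIONS"])  # False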

How to add a module folder /tar.gz to nodes in Pyspark

I am running pyspark in an IPython Notebook after doing the following configuration:
export PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=8880"
export PYSPARK_PYTHON=/usr/bin/python
I have a custom UDF which makes use of a module called mzgeohash, but I am getting a module-not-found error. I guess this module might be missing on the workers/nodes. I tried sc.addPyFile and so on. What is an effective way to add a cloned folder or tar.gz Python module in this case, from IPython?
Here is how I do it. Basically, the idea is to create a zip archive of all the files in your module and pass it to sc.addPyFile():

import os
import random
import string
import zipfile

def rand_str(n):
    # random suffix so concurrent jobs don't clobber each other's archives
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(n))

def ziplib():
    libpath = os.path.dirname(__file__)             # this should point to your package's directory
    zippath = '/tmp/mylib-' + rand_str(6) + '.zip'  # some random filename in a writable directory
    zf = zipfile.PyZipFile(zippath, mode='w')
    try:
        zf.debug = 3         # make it verbose, good for debugging
        zf.writepy(libpath)  # add the package's modules to the archive
        return zippath       # return the path to the generated zip archive
    finally:
        zf.close()

...
zip_path = ziplib()     # generate a zip archive containing your lib
sc.addPyFile(zip_path)  # ship the entire archive to the SparkContext
...
os.remove(zip_path)     # don't forget to remove the temporary file, preferably in a "finally" clause
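
Note that sc.addPyFile() accepts plain .py files as well as .zip and .egg archives, but not tar.gz; a tarball needs to be repacked as a zip (as above) before it can be shipped this way.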

scons environment setup

Is there a way to tell scons to use a particular file to set up the default environment? I am using TI DSPs and the compiler is something other than cc; I'd like to have one "environment file" that defines where the compiler is and what the default flags are, and then be able to use this for several projects.
Any suggestions?
You can use the normal Python utilities to read a file or process XML and then import it into your env. If you don't have an external file that you need to import into SCons, you can simply encode the environment in the SCons file. If, for some reason, your environment is defined in a Perl dictionary (as in my case...), you can either try to use PyPerl or convert the Perl dictionary into YAML and then read the YAML into Python. (I was able to do the latter, but not the former.)
Let's say you simply have a file that you need to read which has environment variables in the form:
ENV_VAR1 ENV_VAL1
ENV_VAR2 ENV_VAL2
...
You could import this into your SConstruct.py file like:
import os
import re

env_file = open('PATH_TO_ENV_FILE', 'r')
lines = env_file.readlines()
env_file.close()
split_regex = re.compile(r'^(?P<env_var>[\w_]+) *(?P<env_val>.*)')
for line in lines:
    regex_search = split_regex.search(line)
    if regex_search:
        env_var = regex_search.group('env_var')
        env_val = regex_search.group('env_val').strip()
        os.environ[env_var] = env_val
base_env = Environment(ENV=os.environ)
# even though the lines below seem redundant, they were necessary in my build flow...
for key in os.environ:
    base_env[key] = os.environ[key]
If you want to stick this ugliness inside a different file and then import it from your main SConstruct.py file, you can add the following to enable access to the 'Environment' class from your other file:
from SCons.Environment import *
Then in your main SConstruct.py file, import the env file like:
from env_loader import *
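
Putting it together, a sketch of what such an env_loader.py might contain (the file name and helper function are hypothetical; the parsing mirrors the snippet above):

# env_loader.py -- hypothetical helper module
import os
import re
from SCons.Environment import *

def load_env_file(path):
    # parse "ENV_VAR ENV_VAL" lines into os.environ, then build an Environment from it
    split_regex = re.compile(r'^(?P<env_var>[\w_]+) *(?P<env_val>.*)')
    with open(path) as env_file:
        for line in env_file:
            match = split_regex.search(line)
            if match:
                os.environ[match.group('env_var')] = match.group('env_val').strip()
    return Environment(ENV=os.environ)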
SInclusion file:
...
myenv = Environment(...)
...
SConstruct file:
...
execfile('SInclusion')  # Python 2; under Python 3, use exec(open('SInclusion').read())
...
myenv.Object(...)
...