Running Brightway with project dir on user-defined directory - brightway

The default directory in which Brightway stores projects and all associated components is determined by appdirs. Indeed, in bw2data.projects, the project directory is set as:
data_dir = appdirs.user_data_dir(LABEL, "pylca")
For example, on my Windows install this is C:\users\me\AppData\Local\pylca\Brightway3.
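For reference, you can reproduce that default path directly (assuming LABEL is "Brightway3", which is consistent with the path above):
import appdirs

# Mirrors the call in bw2data.projects shown above
print(appdirs.user_data_dir("Brightway3", "pylca"))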
I would like one of my projects to live on an external network disk. This is for an actively used project, not just cold storage. Is there functionality within Brightway to change the location of a project?

Yes, and the best way to do this is in the activation script for your project-specific virtual environment. See the FAQs (and please report an issue if more detail is needed or something is wrong):
https://docs.brightwaylca.org/faq.html#how-do-i-find-where-my-data-is-saved
https://docs.brightwaylca.org/faq.html#setting-brightway2-dir-in-a-virtual-environment

As an alternative, if you want to change BRIGHTWAY2_DIR from Python, this works:
import os
os.environ['BRIGHTWAY2_DIR'] = 'path/to/my/other/dir'  # must be set before importing brightway2
from brightway2 import *
Despite interesting leads (such as this one on using reload), I have been unable to make this work if brightway2 was imported before setting the BRIGHTWAY2_DIR environment variable.
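A quick way to confirm the override was picked up (a sketch, assuming bw2data exposes projects.dir and projects.output_dir, as the FAQ linked above describes):
from brightway2 import projects  # already in scope after the star import above
print(projects.dir)         # should resolve inside the BRIGHTWAY2_DIR set earlier
print(projects.output_dir)  # output location, also covered in the FAQ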

Related

Azure ML release bug AZUREML_COMPUTE_USE_COMMON_RUNTIME

On 2021-10-13 our application on the Azure ML platform started hitting a new error that causes failures in pipeline steps: Python module import failures, with a warning stack that leads to a pipeline runtime error.
We needed to set AZUREML_COMPUTE_USE_COMMON_RUNTIME to false to work around it. Why is it failing? What are the exact (and long-term) consequences of opting out? Also, Azure ML users: do you think this change was rolled out appropriately?
Try adding a new variable to your environment like this:
environment.environment_variables = {"AZUREML_COMPUTE_USE_COMMON_RUNTIME":"false"}
Long term (throughout 2022), AzureML will be fully migrating to the new Common Runtime on AmlCompute. Short term, this change is a large undertaking, and we're on the lookout for tricky functionality of the old Compute Runtime we're not yet handling correctly.
One small note on disabling Common Runtime: it can be more efficient (it avoids an Environment rebuild) to add the environment variable directly to the RunConfig:
run_config.environment_variables["AZUREML_COMPUTE_USE_COMMON_RUNTIME"] = "false"
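Putting the two options above together, a minimal sketch using the azureml-core v1 SDK (class and attribute names assume that SDK version; the environment name is a placeholder):
from azureml.core import Environment
from azureml.core.runconfig import RunConfiguration

# Option 1: set it on the Environment (triggers an Environment rebuild)
env = Environment(name="my-env")  # placeholder environment name
env.environment_variables = {"AZUREML_COMPUTE_USE_COMMON_RUNTIME": "false"}

# Option 2: set it on the RunConfiguration directly (avoids the rebuild)
run_config = RunConfiguration()
run_config.environment = env
run_config.environment_variables["AZUREML_COMPUTE_USE_COMMON_RUNTIME"] = "false"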
We'd like to get more details about the import failures, so we can fix the regression. Are you setting the PYTHONPATH environment variable to make your custom scripts importable? If so, this is something we're aware isn't working as expected and are looking to fix it within the next two weeks.
We identified the issue and have rolled out a hotfix on our end addressing the issue. There are two problems that could've caused the import issue. One is that we are overwriting the PYTHONPATH environment variable. The second is that we are not adding the python script's containing directory to python's module search path if the containing directory is not the current working directory.
It would be great if you could try again without setting the AZUREML_COMPUTE_USE_COMMON_RUNTIME environment variable and see if the problem is still there. If it is, please reply to either Lucas's thread or mine with a minimal repro or a description of where the module you are trying to import is located in relation to the script being run and the root of the snapshot (which is the current working directory).

How to download the pretrained dataset of huggingface RagRetriever to a custom directory [duplicate]

The default cache directory lacks disk capacity, so I need to change the default cache directory's configuration.
You can specify the cache directory every time you load a model with .from_pretrained by setting the cache_dir parameter. You can define a default location by exporting the TRANSFORMERS_CACHE environment variable before you use (i.e. before importing!) the library.
Example for python:
import os
os.environ['TRANSFORMERS_CACHE'] = '/blabla/cache/'
Example for bash:
export TRANSFORMERS_CACHE=/blabla/cache/
As cronoik mentioned, as an alternative to changing the cache path in the terminal, you can set the cache directory directly in your code. Here is the actual code in case you have any difficulty looking it up on Hugging Face:
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("roberta-base", cache_dir="new_cache_dir/")
model = AutoModelForMaskedLM.from_pretrained("roberta-base", cache_dir="new_cache_dir/")
I'm writing this answer because there are other Hugging Face cache directories besides the model cache that also eat space in the home directory, and the previous answers and comments did not make this clear.
The Transformers documentation describes how the default cache directory is determined:
Cache setup
Pretrained models are downloaded and locally cached at: ~/.cache/huggingface/transformers/. This is the default directory given by the shell environment variable TRANSFORMERS_CACHE. On Windows, the default directory is given by C:\Users\username\.cache\huggingface\transformers. You can change the shell environment variables shown below - in order of priority - to specify a different cache directory:
Shell environment variable (default): TRANSFORMERS_CACHE.
Shell environment variable: HF_HOME + transformers/.
Shell environment variable: XDG_CACHE_HOME + /huggingface/transformers.
What this piece of documentation doesn't explicitly mention is that HF_HOME defaults to $XDG_CACHE_HOME/huggingface and is used for other Hugging Face caches, e.g. the datasets cache, which is separate from the transformers cache. The value of XDG_CACHE_HOME is machine dependent, but usually it is $HOME/.cache (and HF defaults to this value if XDG_CACHE_HOME is not set), hence the usual default of $HOME/.cache/huggingface.
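To make that precedence concrete, here is a small sketch of how the cache location resolves (it mirrors the priority list above; the library's real logic has additional fallbacks):
import os
from os.path import expanduser, join

xdg_cache = os.environ.get("XDG_CACHE_HOME", expanduser("~/.cache"))
hf_home = os.environ.get("HF_HOME", join(xdg_cache, "huggingface"))
transformers_cache = os.environ.get("TRANSFORMERS_CACHE", join(hf_home, "transformers"))
print(transformers_cache)  # typically $HOME/.cache/huggingface/transformers when nothing is set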
So you probably will want to set the HF_HOME environment variable (and possibly set a symlink to catch cases where the environment variable is not set).
export HF_HOME=/path/to/cache/directory
This environment variable is also respected by the Hugging Face datasets library, although the documentation does not explicitly state this.
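If you prefer to do this from Python rather than the shell, the same rule applies as for TRANSFORMERS_CACHE above: set the variable before any Hugging Face import (a sketch; the path is a placeholder):
import os
os.environ["HF_HOME"] = "/path/to/cache/directory"  # must come before the imports below

from transformers import AutoTokenizer  # transformers cache now resolves under HF_HOME
from datasets import load_dataset       # the datasets cache respects HF_HOME as well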

After adding libraries to Dymola, I can't connect it to Python via buildingspy

I have made some libraries load directly when I open Dymola by modifying the file "c:/program files (x86)/dymola 2016 fd01/insert/dymola.mos" and adding these lines:
Utilities.setenv("MODELICAPATH","C:/Users/hrameh/Desktop/EnergySystems_V2-73p/ModelicaLibraries/ExternalMedia-master/Modelica/ExternalMedia 3.2.1");
openModel("C:\Users\hrameh\Desktop\EnergySystems_V2-73p\ModelicaLibraries\ExternalMedia-master\Modelica\ExternalMedia 3.2.1\package.mo");
Utilities.setenv("MODELICAPATH","C:\Users\hrameh\Desktop\EnergySystems_V2-73p\ModelicaLibraries\EnergySystems");
openModel("C:\Users\hrameh\Desktop\EnergySystems_V2-73p\ModelicaLibraries\EnergySystems\package.mo");
Utilities.setenv("MODELICAPATH","\illuin\users$\hrameh\Mes documents\Dymola");
The model works completely fine in Dymola. But, when trying to simulate the model via Python using the buildingspy library, the simulation fails. Any suggestions to help me find a solution?
I guess your problem is that buildingspy relies on the default working directory Dymola gets when dymola.exe is called by buildingspy, but the openModel commands in your dymola.mos change the working directory.
Use
openModel("<path-to-package.mo>", changeDirectory=false);
to avoid this.
Additionally, with newer Dymola versions, you must ensure that no saved startup directory is used, by selecting: Edit -> Options -> Settings -> Do not save Startup directory.
As you currently use Dymola 2016 FD01 this is not a problem for you at the moment.
Such problems can be detected by showing the Dymola window when buildingspy simulates a model. You can do this with showGUI, as shown in this minimal example:
import os
from buildingspy.simulate.Simulator import Simulator

# Make dymola.exe findable for buildingspy
os.environ["PATH"] += os.pathsep + "C:/Program Files (x86)/Dymola 2016 FD01/bin"

s = Simulator("Modelica.Blocks.Examples.PID_Controller", "dymola")
s.showGUI(show=True)  # show the Dymola window during the simulation
s.simulate()
Some further hints for your example:
You do not need the Utilities.setenv() calls to open a library; openModel is sufficient.
I would not use Utilities.setenv, as it is an undocumented and apparently very old package (its creation date is 2004). Use Modelica.Utilities.System.setEnvironmentVariable instead. This way you also don't get an extra package loaded in the package browser.
Using the file /insert/dymola.mos has some disadvantages:
it is used system-wide by every user, so it should not contain paths to user directories
if you install a new Dymola version, you must edit the insert/dymola.mos file of that installation again
Alternatives to dymola.mos
In Dymola 2016 FD01 use the file setup.mos instead to add the openModel commands (located in C:\Users\<username>\AppData\Roaming\Dynasim)
Newer Dymola versions don't have setup.mos anymore, but setup.dymx for the settings and startup.mos for the user commands (to open libraries, etc.)

Where/How to save a preferences file in a *nix command line utility?

I am writing a small command line utility. It should hopefully be able to run on OSX, UNIX and Linux.
It needs to save a few preferences somewhere, like in a small YAML config file.
Where would one save such a file?
Language: Python 2.7
OS: *nix
Commonly, these files go somewhere like ~/.<appname>rc (e.g. ~/.hgrc). This could be the path to a file, or to a directory if you need lots of configuration settings.
For a nice description see http://www.linuxtopia.org/online_books/programming_books/art_of_unix_programming/ch10s03.html
I would avoid putting the file in the ~ directory only because it has gotten totally flooded with crap. The recent trend, at least on Ubuntu, is to use ~/.config/<appname>/ for whatever dot files you need. I really like that convention.
If your application is named "someapp" you save the configuration in a file such as $HOME/.someapp. You can give the config file an extension if you like. If you think your app may have more than one config file you can use the directory $HOME/.someapp and create regular-named (not hidden) files in there.
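Pulling the two conventions above together, a minimal Python 2.7-compatible sketch (someapp, config.yaml, and the variable names are placeholders):
import os

APP_NAME = "someapp"  # placeholder application name

# XDG-style location: ~/.config/someapp/config.yaml (honors XDG_CONFIG_HOME if set)
config_home = os.environ.get("XDG_CONFIG_HOME", os.path.expanduser("~/.config"))
config_dir = os.path.join(config_home, APP_NAME)
config_path = os.path.join(config_dir, "config.yaml")

# Classic dotfile alternative: ~/.someapp
legacy_path = os.path.expanduser("~/." + APP_NAME)

if not os.path.isdir(config_dir):
    os.makedirs(config_dir)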
Many cross-platform tools use the same path on OS X as on linux (and other POSIX/non-Windows platforms). The main advantage of using the POSIX locations isn't saving a few lines of code, but saving the need for Mac-specific instructions, and allowing Mac users to get help from the linux users in the community (without any need to translate their suggestions).
The other alternative is to put them in the "Mac-friendly" locations under ~/Library instead. The main advantage of using the Mac locations is basically "Apple says so"—unless you plan to sandbox your code, in which case the main advantage is that you can do so.
If you choose to use the Library locations, you should read About the OS X File System and OS X Library Directory Details in the File System Programming Guide, but here's the short version:
Almost everything: Create a subdirectory with your app's name or bundle ID (unless you're going out of your way to set a bundle ID, you'll get org.python.python, which you don't want…) under ~/Library/Application Support. Ideally you should use APIs like -[NSFileManager URLForDirectory:inDomain:appropriateForURL:create:error:] to get the path; if not, you have to deal with things like localization, sandbox containers, etc. manually.
Anything that can be easily re-created (so it doesn't need to be backed up, migrated, etc.): An identically-named subdirectory of ~/Library/Caches.
Preferences: Use the NSUserDefaults or CFPreferences APIs instead. If you use your own format, the "old" way of doing things is to create a subdirectory under ~/Library/Preferences named with your app's name or bundle ID, and put your files in that. Apple no longer recommends that, but doesn't really recommend an alternative (short of "use CFPreferences, damnit!"); many apps (e.g., Aquamacs) still do it the old way, but others instead pretend they're not preferences and store them under Application Support.
In Python, this works as follows (leaving out the error handling, and assuming you're going by name instead of setting a bundle ID for yourself):
from Foundation import *

fm = NSFileManager.defaultManager()

# ~/Library/Application Support/<appname> (created if it doesn't exist)
appsupport = (fm.URLForDirectory_inDomain_appropriateForURL_create_error_(
                  NSApplicationSupportDirectory, NSUserDomainMask, None, True, None)[0]
              .URLByAppendingPathComponent_isDirectory_(appname, True))

# ~/Library/Caches/<appname>
caches = (fm.URLForDirectory_inDomain_appropriateForURL_create_error_(
              NSCachesDirectory, NSUserDomainMask, None, True, None)[0]
          .URLByAppendingPathComponent_isDirectory_(appname, True))

# Preferences stored in the app's defaults domain (None if nothing has been saved yet)
prefs = NSUserDefaults.standardUserDefaults().persistentDomainForName_(appname)

InstallShield: How can single custom actions be tested?

(I'm using InstallShield2012 V.18)
In setup.rul I defined a function via a prototype declaration, included the file with the function definition, and compiled it successfully (InstallShield compile).
Now I'd like to test this function (only).
I don't want to run the whole installation, or even a test run (Ctrl-T), because I want to avoid a complete re-build, which takes too long to do often.
Is there a way to test only the custom function in InstallShield or per command line?
Not really, although I can give you some tips.
Create a dummy feature with a release flag of DEVONLY.
Create a dummy component for that feature.
Create a ProductConfiguration that builds a single MSI with no EXE and a release flag of DEVONLY.
Building this product configuration will be very fast: a couple of seconds on my laptop with an SSD. You can selectively include other features through release flags if you need certain components in order to set up the test environment for your CA.
Another strategy is to develop your CA in a test harness project and then transplant the code into your real installer when you know it all works.
Christopher, thanks for this fast reply. I have to put my response here as an answer because it was too long for a comment.
I also thought about using such a workaround but first wanted to avoid it if possible.
But OK, now I tried these steps: 1 and 2 were no problem, but at 3, InstallShield didn't allow me to configure a Product Configuration without a Setup.exe in my .ism file (although we have IS2012 Pro).
Then I tried to do it in a Basic MSI project (is that what you meant?), which really does build in a very short time. And now I can see my scripting during a test release, yeah :-)
To "transplant" my script now to the main ism I'm missing an export function for .rul files as it exists for custom actions, but there is only a import. So I will have to copy-paste while switching between ism files, but never mind.
