Python ImportError when executing through Windows Scheduler - python-3.x

I did some searches for this topic and found some prior threads, but I did not understand any of them as I am still a total beginner in Python.
I have a Python script which has some long string variables stored in various .py files in a sub-directory. I'm importing the .py files from that sub-directory when I run the script. There is an __init__.py file in the sub-directory. The only reason I'm using this setup is that the long string variables stored in those other files would make the code very difficult to read: they are SQL strings and can span 50-100 lines each.
Everything works perfectly when I run this script through PyCharm.
However, when I run the script through Windows Scheduler or a batch file, I get an ImportError for all of the .py files in the sub-directory. The problem is definitely related to the python script not knowing where to look for those .py files when it's run through Windows Scheduler. But I'm not sure how to fix it.
The action for the scheduler task is to run the python exe
D:\Python35\python.exe
with the argument as the script
D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0.py
So the full action looks like:
D:\Python35\python.exe "D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0.py"
The subdirectory which stores the long string variables .py files is:
D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0\
The imports look like:
from dcitechnicalperformance.dcitechnicalperformance0.dciquer import nzsqldciwk
Does anyone know how to address this problem? Any help is much appreciated.

Good afternoon,
First of all, I don't know how much sense it makes to store long SQL queries in a module. I'm not by any means an expert, but something like a JSON file (or even storing them in a table inside the database) seems like a better approach.
As for your problem, I think it comes down to the current directory the task is launched from. Let me explain:
When you run the code in PyCharm, it launches from the location of the file, and so it's able to find the directory with the module.
When the scheduled task runs it, it may be launching from another directory, and so it's unable to find the module because that directory is not on the path.
If you decide to stick with your approach, a plausible solution would be to create a .bat file that changes to the project location first:
@ECHO OFF
D:
cd D:\python\tableaudatasourcebuilds\dcitechnicalperformance\
D:\Python35\python.exe dcitechnicalperformance0.py
And that should work.
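Alternatively, you can make the script independent of the launch directory altogether by putting the project root on sys.path before the imports. This is only a minimal sketch, assuming PyCharm was making the import work by adding the folder above dcitechnicalperformance to the path:
import os
import sys

# Folder that contains the "dcitechnicalperformance" package, i.e. the parent
# of the folder this script lives in, regardless of where it was launched from.
project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if project_root not in sys.path:
    sys.path.insert(0, project_root)

from dcitechnicalperformance.dcitechnicalperformance0.dciquer import nzsqldciwk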

Related

How to share a variable between 2 pyRevit scripts?

I am using the latest version of pyRevit, v45.
I'm writing some info in temporary files with
myTempFile = script.get_instance_data_file("id")
This creates a file named pyRevit_2018_xxxx_id.tmp in which I store useful info. If I'm not mistaken, the "xxxx" part is changing every time I reload Revit. Now, I need to get access to this information from another pyRevit script.
How can I retrieve the name of the temp file I need to read? In other words, how do I access "myTempFile" from within the second script, which has no idea of the name of "myTempFile"?
I guess I can somehow share that variable between my scripts, but what's the proper way to do this? I know this must be a very basic programming question, but I'm indeed not a programmer ;)
Thanks a lot,
Arnaud.
Ok, I realise now that my variables in the 1st script cease to exist after its execution.
So for now I wrote the file name into another file, whose name I do know. That works.
But if there's a cleaner way to do this, I'd be glad to learn ;)
Arnaud
The pyrevit.script module provides 4 different methods for creating temporary files, based on their use case:
get_instance_data_file:
For data files marked with the Revit instance pid. This means that scripts running on another Revit instance will not see this temp file.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_instance_data_file
get_universal_data_file:
For temp files accessible to all Revit instances and versions.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_universal_data_file
get_data_file:
Base method to get a standard temp file for the current Revit version.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_data_file
get_document_data_file:
For temp files marked with the active document (so scripts working on another document will not see this).
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_document_data_file
Each method uses a pattern to create the temp file name, so as long as the call to the method is the same in different scripts, the method generates the same file name.
Example:
Script 1:
from pyrevit import script
tfile = script.get_data_file('mydata')
Script 2:
from pyrevit import script
tempfile = script.get_data_file('mydata')
In this example tempfile and tfile point to the same file, since the file id ('mydata') is the same.
There is documentation on each so make sure you take a look at those and pick the flavor that serves your purpose.
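To actually share a value between the two scripts, you can then write to and read from that path yourself. A minimal sketch, mirroring the get_data_file('mydata') call from the example above and assuming JSON is an acceptable format for your data:
Script 1 (writer):
import json
from pyrevit import script

tfile = script.get_data_file('mydata')
# store whatever value the second script needs to pick up
with open(tfile, 'w') as f:
    json.dump({'shared_value': 'value to pass along'}, f)
Script 2 (reader):
import json
from pyrevit import script

tfile = script.get_data_file('mydata')
# same file id, so this resolves to the same file written above
with open(tfile, 'r') as f:
    shared_value = json.load(f)['shared_value']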

Python cx_freeze Create dirs for included files in build

Is it possible to create dirs (folders) in the cx_freeze build output? I include (include_files) many database files and I want them to end up in a specific folder, etc. I can take them easily from my folders.
"include_files": ["databases/nations.txt","databases/newafrica.txt",
"databases/neweeurope.txt","databases/neweurope.txt","databases/newmeast.txt","graph.py",
"databases/newnamerica.txt","databases/plates.txt",
"databases/ACN/rigidA.txt","databases/ACN/rigidB.txt",
"databases/ACN/rigidC.txt","databases/ACN/rigidD.txt","databases/ACN/flexibleA.txt",
"databases/ACN/flexibleB.txt","databases/ACN/flexibleC.txt",
"databases/ACN/flexibleD.txt","alternates.xlsx",
but this will just copy all of them into the exe build dir, and it's a mess.
Thanks in advance.
There are several ways you can go around solving the problem.
Method 1 - Using include_files
Rather than asking for each individual text file, you can just put the folder name in the setup script and leave out the individual text files. In your case it would be like this:
"include_files": ["databases"]
This would copy the entire databases folder, with everything in it, into your build folder.
Absolute file paths work as well.
If you are going to use the installer feature (bdist_msi) this is the method to use.
You can also copy only a sub-folder, using "include_files": ["databases/ACN"].
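For reference, here is a minimal sketch of where that option sits in a cx_Freeze setup.py (the application name and main.py script are placeholders, not taken from your project):
from cx_Freeze import setup, Executable

build_exe_options = {
    # copies the whole databases folder, sub-folders included, into the build dir
    "include_files": ["databases"],
}

setup(
    name="myapp",
    version="0.1",
    options={"build_exe": build_exe_options},
    executables=[Executable("main.py")],
)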
Method 2 - Manually
OK, it's rather un-pythonic, but one way to do it is to copy the folder manually into the build folder.
Method 3 - Using the os module
Much the same as method two, this would copy the folder into your build folder, but instead of copying it manually you would use Python. You also have the option of using additional Python features along the way.
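A minimal sketch of that idea using shutil (the build path is a guess; adjust it to whatever folder cx_freeze actually produced):
import os
import shutil

# adjust to the folder cx_freeze produced, e.g. build/exe.win-amd64-3.5
build_dir = os.path.join("build", "exe.win-amd64-3.5")

# copy the databases tree into the build output, keeping its sub-folders
# (copytree fails if the destination folder already exists)
shutil.copytree("databases", os.path.join(build_dir, "databases"))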
Hope I was helpful.

Finding and saving present directory of a SAS-Studio-program in a Linux server

I am trying to create a macro variable in SAS Studio which saves the "present working directory".
The SAS program is run from a "CPF" process flow file in SAS Studio, and the whole thing is saved and run on a Linux server.
In SAS Studio, the CPF process flow file appears to live in the directory /sasdata/model_v1, so when I run a Linux command like X "pwd" I expect the result to be /sasdata/model_v1. Instead I get a different directory, /sasinstall/sasconfig/Lev1/SASApp; I guess the process flow file with the CPF suffix is run from that directory.
So the question is how I can find the working directory of my CPF file and save it as a macro variable, and perhaps do the same for my other SAS files too; I may need the solution for both SAS files and CPF files.
If I find the directory, then I guess it should be enough to save it as a macro variable using %let macrovariable = "/directory"
I don't think SAS will show you the path of the process file. It doesn't in SAS/Studio 3.5.
It will set the path for a normal program file (as long as you have saved it) in the _SASPROGRAMFILE macro variable.

Access test resources within Haskell tests

This is probably a basic question but I've been Googling for a while on it... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare them to a corresponding output file to make sure they match. (I've read about "golden" but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock; the source code is at https://github.com/haskell/haddock. The test files live under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). The test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) then uses the CPP macro __FILE__ to get the directory of the current source file, and resolves the test files relative to that folder.

executing script file from azure blob and write its results to file

I'll explain the task requested from me:
I have two containers in Azure, one called "data" and one called "script". In the "data" container there's a txt file with data, and in the "script" container there's a script file.
Now, I need to programmatically (from a WorkerRole) execute the script file with the content of the data file as its parameters (example: a script file that accepts a string 's' and prints "Hello, 's'", where 's' is the string given and the data file contains that string), and save the result of the run into another file, which needs to be stored in another container called "result".
How do I do all this? I've already uploaded the files and created the blobs programmatically, but I can't figure out how to execute the script file or how to save its result to another file.
Can I please have some help?
Thanks in advance
Here are the steps in pseudo code:
1. Retrieve the script from the blob (using DownloadToStream()).
2. Compile the script (I will leave this to you, as I have no idea what format your script is in).
3. Load the parameters from the blob (same as step 1).
4. Execute the script with those parameters.
If your scripts can be written as lambda expressions, this becomes a lot easier, as you can turn them into Actions.
Edit based on your questions:
DownloadText() is no longer included in Azure Storage 2.0; you only have access to DownloadToStream(). Even if you are using an older version (say 1.7), I would recommend using DownloadToStream() in the event you ever upgrade in the future. This will prevent having to refactor your code.
In terms of executing your script, it depends on what type of script it is. If it is C# code, you can use this example: Is it possible to dynamically compile and execute C# code fragments?. If you need to execute a different type of script, you would need to run it using Process.Start, and you can look at this example: http://www.dotnetperls.com/process-start
I do not have much experience with point number 2 but those are the processes I have heard and seen used.
