I want to change directory to d:\apps\documents, then create a folder 'money' there, and then change again to d:\apps\documents\money. I am trying the code below but am unable to get the desired output. Any help would be great, as I am a beginner.
def proc = ['cmd', '/c', 'cd', '/d', 'd:\\apps\\documents']
Process process = proc.execute(null, new File('C:/'))
process.waitForOrKill(2000)
println process.text
def proc1 = ['cmd', '/c', 'mkdir', 'money']
Process process1 = proc1.execute(null, new File('C:/'))
process1.waitForOrKill(2000)
println process1.text
This creates a folder 'money' on the C: drive, but I want it to be created in d:\apps\documents.
def proc2 = ['cmd', '/c', 'cd', '/d', 'd:\\apps\\documents\\money']
Process process2 = proc2.execute(null, new File('C:/'))
process2.waitForOrKill(2000)
Your code does not modify the working directory of the current Groovy process; that is not possible. What it actually does is open a new process in C:\ and change the working directory there, and then open another new process, again in C:\, and create the directory there.
I guess your code is not your real use case, because using native command execution to create a directory is pointless in Groovy, where you can do it much more easily and portably. As the use case is not clear, I'm not going to suggest a concrete solution.
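For reference, the portable approach hinted at above might look roughly like the sketch below (an illustration only, since the real use case is unknown; the paths are taken from the question):
// Create the directory with plain Groovy/Java file I/O; no shell process is needed.
def moneyDir = new File('d:/apps/documents/money')
moneyDir.mkdirs()   // creates 'money' and any missing parent folders

// If an external command still has to run inside that folder, pass the working
// directory to execute() instead of trying to 'cd' in a separate cmd process.
def proc = ['cmd', '/c', 'dir'].execute(null, moneyDir)
proc.waitForOrKill(2000)
println proc.text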
I need help with this problem. Searching on Google, I found a way to run the R script without errors: create a .bat file with the paths to Rscript.exe and to the script I want to run.
My script is very simple: it creates a data frame and saves it to an Excel file.
library(xlsx)
employee <- c('John Doe','Peter Gynn','Jolie Hope')
salary <- c(21000, 23400, 26800)
startdate <- as.Date(c('2010-11-1','2008-3-25','2007-3-14'))
employ.data <- data.frame(employee, salary, startdate)
write.xlsx(employ.data, 'prueba_r_excel.xlsx')
print('final script')
When I run the file manually, it works without problems and creates the Excel file.
But when I run it from the Windows Task Scheduler, it seems to execute the whole script without problems (it shows the print output), yet it doesn't create the file. Does anyone know what the problem could be? Do I have to grant some kind of special permission to create new files from the Task Scheduler?
At first I thought the Task Scheduler could not create new files, because I saw that it doesn't have write permissions. But later I found the real error: in the part of the script where the file is saved, you have to give the entire path (e.g. pass the full path to write.xlsx instead of just 'prueba_r_excel.xlsx').
The problem was that when I ran the R script manually, the Excel file was created in the script's own folder, but when it was executed from the Task Scheduler the file was also created, just in another folder (System32).
I am using the latest version of pyRevit, v45.
I'm writing some info in temporary files with
myTempFile = script.get_instance_data_file("id")
This creates a file named pyRevit_2018_xxxx_id.tmp in which I store useful info. If I'm not mistaken, the "xxxx" part changes every time I reload Revit. Now I need to access this information from another pyRevit script.
How can I retrieve the name of the temp file I need to read? In other words, how do I access "myTempFile" from within the second script, which has no idea of the name of "myTempFile"?
I guess I can somehow share that variable between my scripts, but what's the proper way to do this? I know this must be a very basic programming question, but I'm really not a programmer ;)
Thanks a lot,
Arnaud.
Ok, I realise now that my variables in the 1st script cease to exist after its execution.
So for now I wrote the file name into another file whose name I do know. That works.
But if there's a cleaner way to do this, I'd be glad to learn ;)
Arnaud
The pyrevit.script module provides 4 different methods for creating temporary files, based on their use case:
get_instance_data_file:
for data files marked with the Revit instance PID. This means that scripts running in another Revit instance will not see this temp file.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_instance_data_file
get_universal_data_file:
for temp files accessible to all Revit instances and versions.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_universal_data_file
get_data_file:
the base method, for a standard temp file tied to the current Revit version.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_data_file
get_document_data_file:
for temp files marked with the active document (so scripts working on another document will not see this).
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_document_data_file
Each method uses a pattern to create the temp file name, so as long as the call to the method is the same across different scripts, the method generates the same file name.
Example:
Script 1:
from pyrevit import script
tfile = script.get_data_file('mydata')
Script 2:
from pyrevit import script
tempfile = script.get_data_file('mydata')
In this example, tempfile and tfile point to the same file, since the file id is the same.
There is documentation on each method, so make sure you take a look and pick the flavor that serves your purpose.
I did some searches for this topic and found some prior threads, but I did not understand any of them as I am still a total beginner in Python.
I have a Python script which has some long string variables stored in various .py files in a sub-directory. I'm importing the .py files from that sub-directory when I run the script. There is a __init__.py file in the sub-directory. The only reason I'm using this setup is that the long string variables which I'm storing in those other files would make the code very difficult to read as they are SQL strings and can span 50-100 lines each.
Everything works perfectly when I run this script through PyCharm.
However, when I run the script through Windows Scheduler or a batch file, I get an ImportError for all of the .py files in the sub-directory. The problem is definitely related to the python script not knowing where to look for those .py files when it's run through Windows Scheduler. But I'm not sure how to fix it.
The action for the scheduler task is to run the python exe
D:\Python35\python.exe
with the argument as the script
D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0.py
So the full action looks like:
D:\Python35\python.exe "D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0.py"
The subdirectory that stores the .py files with the long string variables is:
D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0\
The imports look like:
from dcitechnicalperformance.dcitechnicalperformance0.dciquer import nzsqldciwk
Does anyone know how to address this problem? Any help is much appreciated.
Good afternoon,
First of all, I don't know how much sense it makes to store long SQL queries in a module. I'm not by any means an expert, but something like a JSON file (or even storing them in a table inside the database) seems like a better approach.
As for your problem, I think it lies in the current directory the task is launched from. Let me explain:
When you run the code in PyCharm, it launches from the location of the file, and so it is able to find the directory containing the module.
With the scheduled task it is likely launching from another directory, and so it is unable to find the module because that directory is not present on its path.
If you decide to stick with your approach, a plausible solution would be to create a .bat file that first changes to the project location:
@ECHO OFF
D:
cd D:\python\tableaudatasourcebuilds\dcitechnicalperformance\
D:\Python35\python.exe dcitechnicalperformance0.py
And that should work.
I am writing a Groovy script to invoke someone else's interface, but I need to change my current working path while the script runs. I know it is not possible in Java. Is it possible in Groovy?
If you can run the other script as a separate process, you can give ProcessBuilder a working directory:
def processBuilder = new ProcessBuilder(command)
processBuilder.directory(new File("Working dir"))
def process = processBuilder.start()
or
command.execute(null, new File("Working dir"))
so the process will start in your new folder and execute there.
As Groovy runs on the JVM, the same restrictions apply. Unfortunately, it is not possible.
Changing the current working directory in Java?
JDK bug
Java/Groovy doesn't really "have" a working directory, as far as I can tell. The shell that launched Groovy has one, and any child "commands" inherit it from that shell directly.
Java also reads the shell's current directory at startup and stores it in the "user.dir" system property. This is used as the base for relative "File" paths, so if you call System.setProperty("user.dir", "c:/windows") it will change how future invocations of new File(".") resolve, but it will not change the parent shell's directory (and therefore not the directory that child processes inherit).
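A tiny sketch of that behaviour (the exact details vary by JVM version, so treat this as illustrative only):
println new File('.').absolutePath            // resolved against the original user.dir

System.setProperty('user.dir', 'c:/windows')  // only affects relative File resolution
println new File('.').absolutePath            // may now resolve against c:/windows

// A child process still inherits the JVM's real working directory,
// not the value of user.dir:
println 'cmd /c cd'.execute().text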
Here are three "Work-Arounds" that may work for different scenarios:
1) I KIND OF overcame this for a very specific task... I wanted to implement "cd" as a groovy script. It was only possible because all my scripts were already being "wrapped" in a batch file. I made it so that my script could create a file called "afterburner.cmd" that, if it existed, would be executed when the script exits. There was some batch file trickery to make this work.
A startup cmd file could also "Set" the current directory before invoking your groovy script/app.
By the way, having a startup cmd has been much more helpful than I thought it would be: it makes your environment constant and allows you to more easily deploy your "scripts" to other machines. I even have mine compile my scripts to .class files, because it turned out to be faster to compile a .groovy to a .class and start the .class with "java" than to just run the script with "groovy", and usually you can skip the compile step, which makes it a LOT faster!
2) For a few small commands, you might write a method like this:
import groovy.transform.Field

// Script-level field so exec() can see it from inside the method
@Field def currentDir = "C:\\"

def exec(command, dir = null) {
    "cmd /c cd /d ${dir ?: currentDir} && $command".execute().text
}

// Default dir is currentDir
assert exec("dir").contains("Directory of C:\\")

// Different dir for this command only
assert exec("dir", "C:\\Users").contains("Directory of C:\\Users")

// Change the default dir
currentDir = "C:\\Windows"
assert exec("dir").contains("Directory of C:\\Windows")
It will be slower than a plain "".execute() when "cmd" is not actually required.
3) Code a small class that maintains an "Open" command shell (I did this once, there is a bit of complexity), but the idea is:
def process="cmd".execute()
def in=process.in
def out=process.out
def err=process.err
Now "in" is an input stream that you could spin off/read from and "out" is an output stream that you can write commands to, keep an eye on "err" to detect errors.
The class should write a command to the shell's input, read the output until the command has completed, and then return that output to the caller.
The hard part is detecting when the output of any given command is complete. In general you can detect a "C:\...>" prompt and assume that the command has finished executing, or you could use a timeout; both are pretty fallible. You can set that shell's prompt to something unique to make it much less fallible.
The advantage is that this shell can remain open for the entire life of your app and can significantly increase speed since you aren't repeatedly creating "cmd" shells. If you create a class (let's call it "CommandShell") that wraps your Process object then it should be really easy to use:
def cmd = new CommandShell()
println cmd.execute("cd /d c:\\")
println cmd.execute("dir") // will be the dir of c:\
I wrote a Groovy class like this once. It takes a fair amount of experimenting, and your instance can be trashed by commands like "exit", but it's possible.
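For what it's worth, a rough, minimal sketch of that idea might look like the code below. This is an assumption-heavy illustration, not the class described above: it expects Windows cmd.exe, starts it with /q to turn echo off, and spots the end of each command's output with an echoed sentinel line instead of prompt detection (the names CommandShell and __CMD_DONE__ are made up here):
class CommandShell {
    private static final String SENTINEL = '__CMD_DONE__'
    private final Process shell
    private final BufferedReader reader
    private final Writer writer

    CommandShell() {
        shell = new ProcessBuilder('cmd', '/q', '/k')
                .redirectErrorStream(true)
                .start()
        reader = new BufferedReader(new InputStreamReader(shell.inputStream))
        writer = new OutputStreamWriter(shell.outputStream)
        execute('rem sync')                // swallow cmd's startup banner
    }

    String execute(String command) {
        // Send the command, then echo a sentinel so we know where its output ends.
        writer.write("${command}\r\necho ${SENTINEL}\r\n")
        writer.flush()
        def output = new StringBuilder()
        String line
        while ((line = reader.readLine()) != null && !line.contains(SENTINEL)) {
            output << line << '\n'
        }
        output.toString()
    }

    void close() {
        writer.write('exit\r\n')
        writer.flush()
        shell.waitFor()
    }
}

def cmd = new CommandShell()
println cmd.execute('cd /d c:\\')
println cmd.execute('dir')             // runs in c:\ because the same shell is still open
cmd.close()
A real implementation would also want to watch stderr separately, guard against commands like "exit", and add timeouts, as noted above.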
If you are in a Jenkins Pipeline script, you can wrap it in a dir block, e.g.:
dir('yourdirectory') {
    // code block to run in that directory
}
I am having a hard time with a seemingly simple Azure program.
My exercise is to create a WorkerRole that spawns "hello.exe", which does just that: it prints "hello world" and exits.
I used Visual Studio to create a project, then added a new folder "bin2" to the project, where I put hello.exe using the "Add Existing Item" menu option.
Then I created local storage "bin2" in ServiceDefinition.csdef so that I can find my executable with RoleEnvironment:
string baseDir = RoleEnvironment.GetLocalResource("bin2").RootPath.Replace('\\', '/');
string command = Path.Combine(baseDir, @"hello.exe");
Then I ran cspack.exe to create the .csx directory.
The resulting .csx package has hello.exe in the correct location:
WorkerRole1.csx\roles\WorkerRole1\approot\bin2\hello.exe
Then I started the local development fabric with csrun.exe and got an error from the parent process saying that bin2/hello.exe is missing.
Do I need to do something else to make csrun copy hello.exe into "bin2"?
Any ideas?
Thank you in advance,
Ivgard
I'm pretty sure I answered this question already (probably on the MSDN forum). The local resource you declare will give you a path entirely different from where you're putting your hello.exe. When you add the file to your project, it gets included with the rest of the code for your role. When you look up the local resource, you get a path to an empty directory you can use to write and read data. Those two are completely separate and unrelated locations.
If you want to find your hello.exe that's under bin2, just look for the relative path, or use %RoleRoot%\approot\bin2 (or maybe it's %RoleRoot%\approot\bin\bin2?).