Controlling .exe parameters in C# .NET (c#-4.0)

I want to check the parameters of an .exe when it is run a second time. For example:
DocumentViewer.exe -DocID=1097911 -DBName "TEST.WORLD" -DBUname "DB1" -DBPwd "DB1" -AccessUserName "user1" -AccessPassword "pass1" -AccessPath "C:\..."
While this .exe is running, if the user starts a new instance with a different DocID, I want to inform the user: "You should complete your work with DocID:1097911 to open a new document."
What I don't understand is where I should get the second instance's parameters so I can compare them against the first instance's.

You'll need some kind of data store in which to save the parameters, and you need to clear that store when the instance terminates. In this example I'm going to use a text file for simplicity.
When an instance of your .exe starts, first read from a centralized text file. If the file is empty, write the instance's parameters to it and continue starting up. If it's not empty, notify the user with the DocID read from the file.
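As a minimal sketch of that approach, assuming C# (the question's tag); the lock-file path and the exact message text are illustrative, not from the original question:

using System;
using System.IO;

static class InstanceGuard
{
    // Centralized "data store": a small text file holding the active DocID.
    static readonly string LockPath =
        Path.Combine(Path.GetTempPath(), "DocumentViewer.lock");

    // Returns true if this instance may start; otherwise tells the user
    // which DocID is still open.
    public static bool TryStart(string docId)
    {
        if (File.Exists(LockPath))
        {
            string activeDocId = File.ReadAllText(LockPath).Trim();
            if (activeDocId.Length > 0 && activeDocId != docId)
            {
                Console.WriteLine(
                    "You should complete your work with DocID:{0} to open a new document.",
                    activeDocId);
                return false;
            }
        }
        File.WriteAllText(LockPath, docId); // claim the store for this instance
        return true;
    }

    // Call this on exit so the next instance sees an empty store.
    public static void Release()
    {
        if (File.Exists(LockPath))
            File.Delete(LockPath);
    }
}

Note that two instances starting at exactly the same moment can race on the file; wrapping the read/write in a named System.Threading.Mutex would close that gap.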

Azure Logic App FTP Get File Content fails

I have set up a logic app with the FTP trigger [When a file is added or modified (properties only)]. This works just fine when I upload a 50+ MB file to that FTP server. I have a [Get File Content] action set up right after the trigger. For the File input of the [Get File Content] action, I used the [List of Files Name] dynamic content from the trigger, AND I have also tried just filling in the path using the available 'File Picker' (which connects to the FTP just fine). When I test this out, it fails on the [Get File Content] action with BadRequest and this body:
{
"status": 400,
"message": "An invalid request was made. Inspect the passed parameters and actions.\r\nclientRequestId: 7d9f2ff3-62d0-4f69-8cc5-f41c35297882",
"source": "ftp-eus.azconn-eus.p.azurewebsites.net"
}
The inputs that come into the action show the correct file name and path, so I am confused about what "Inspect the passed parameters and actions" means. Can someone point me in the right direction on how to solve this?
EDIT
I don't get [File Name] as a dynamic option from my trigger. It doesn't even matter, though; I can pick the exact file I want downloaded from the FTP picker and it still fails. (Screenshots omitted: selecting the file dynamically, selecting it statically, and the same error result from both.)
If you use Get file content and pick the file, you will find that the File input it expects is the path of the file you want, so you cannot get the file content from the file name alone. You can use the File path, or the File name, but if you want to use the File name you also have to know the path.
If you want to use the File name, you have to assemble the full path around it, which is a little inconvenient.
Or just use the File path directly. The resulting inputs are the same either way, so both approaches retrieve the file.

How to share a variable between 2 pyRevit scripts?

I am using the latest version of pyRevit, v45.
I'm writing some info in temporary files with
myTempFile = script.get_instance_data_file("id")
This creates a file named pyRevit_2018_xxxx_id.tmp in which I store useful info. If I'm not mistaken, the "xxxx" part changes every time I reload Revit. Now, I need to get access to this information from another pyRevit script.
How can I retrieve the name of the temp file I need to read? In other words, how do I access "myTempFile" from within the second script, which has no idea of the name of "myTempFile"?
I guess I can somehow share that variable between my scripts, but what's the proper way to do this? I know this must be a very basic programming question, but I'm indeed not a programmer ;)
Thanks a lot,
Arnaud.
Ok, I realise now that my variables in the 1st script cease to exist after its execution.
So for now I've written the file name into another file, whose name I do know... That works.
But if there's a cleaner way to do this, I'd be glad to learn ;)
Arnaud
The pyrevit.script module provides four methods for creating temporary data files, each for a different use case:
get_instance_data_file:
for data files marked with the Revit instance PID; scripts running in another Revit instance will not see this temp file.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_instance_data_file
get_universal_data_file:
for temp files accessible to all Revit instances and versions.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_universal_data_file
get_data_file:
the base method; gets a standard temp file for the current Revit version.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_data_file
get_document_data_file:
for temp files marked with the active document, so scripts working on another document will not see them.
http://pyrevit.readthedocs.io/en/latest/pyrevit/script.html#pyrevit.script.get_document_data_file
Each method uses a pattern to build the temp file name, so as long as the call is the same across different scripts, the method generates the same file name.
Example (get_data_file only returns the shared file path; the reading and writing shown here is an illustration of how two scripts can use it):
Script 1:
from pyrevit import script
tfile = script.get_data_file('mydata')
# store something for the other script (illustrative value)
with open(tfile, 'w') as f:
    f.write('useful info')
Script 2:
from pyrevit import script
tempfile = script.get_data_file('mydata')
# same file id ('mydata'), so this opens the very same file
with open(tempfile) as f:
    value = f.read()
In this example tempfile == tfile, since the file id is the same.
There is documentation on each so make sure you take a look at those and pick the flavor that serves your purpose.

Inno Setup - How to replace UserName with string

My first attempt at posting this question was made with severe jet lag. For this re-edit of the question I only have the associated sleep deprivation...
I want to know the best way to use the installer to capture path information and then remove any reference to the user profile.
My program is designed to run from a central network location. To reduce network traffic, some files are copied to a local drive, e.g. AppData\MyProg\.
I use a file browse dialogue so the person installing can specify the location:
pg_LocalPaths := CreateInputDirPage(pg_CentralPaths.ID,
  'Confirm File Locations For User Settings', 'To improve user experience these locations should be off the network.',
  'Default Locations are:', False, 'New Folder');
// Add file browser items
pg_LocalPaths.Add('Supporting files will be copied here by the application:');
pg_LocalPaths.Add('User settings (for a single user) will be saved here:');
// Set initial values, defaulting to the user's AppData folder
pg_LocalPaths.Values[0] := GetPreviousData('pg_LocalPaths0', ExpandConstant('{userappdata}\{#pFolder}'));
pg_LocalPaths.Values[1] := GetPreviousData('pg_LocalPaths1', ExpandConstant('{userappdata}\{#pFolder}'));
During install the local drive location is specified and stored in a config file.
An issue that I failed to predict is that when the program is installed, the path in the config file is defined for a single user, namely the user performing the install. In practice any user should be able to run the program...
To get the correct path for any given user, my software looks for the string "{UserName}" and replaces it with Environ("UserName"):
MyPath = Replace(MyPath, "{UserName}", Environ("UserName"))
So, for example, in the installer I need to replace:
C:\Users\My Name\My Program OR C:\Users\My Admin\My Program
with something like:
C:\Users\{UserName}\My Program
I already have this working in My Program, but I am not sure of the best way to get it working in Inno... My initial thought was to write the config file like this:
StringChangeEx(MyPath, "My Name", "{UserName}", True)
I would like some perspective on this; I'm not sure how to get it working in all situations.
Cheers,
You're doing it the wrong way.
Don't store the complete path in the config file; merely store a value that indicates that the user wishes to store data in the usual per-user location. (Or don't store any value, since that should be the default anyway.)
Then, on every run of your application, use the Shell API to fetch the current AppData path for the current user and append your app's unique subfolder to it.
Note that it is perfectly valid for the user's AppData path to not contain their username, and not even be on C:. Don't make assumptions; use the Shell API. That's what it's for.
(Exactly which one to use and how to call it varies depending on target OS and your programming language of choice, which you haven't specified.)
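For instance, if the application happens to be written in .NET (an assumption; the question doesn't name the application's language), the per-user data path could be resolved like this, with "MyProg" as an illustrative subfolder name:

using System;
using System.IO;

static class UserPaths
{
    // Environment.GetFolderPath wraps the Shell API, so it returns the
    // correct AppData path even when it contains no username or is not on C:.
    public static string GetLocalDataPath()
    {
        string appData = Environment.GetFolderPath(
            Environment.SpecialFolder.ApplicationData);
        string path = Path.Combine(appData, "MyProg"); // illustrative subfolder
        Directory.CreateDirectory(path); // no-op if it already exists
        return path;
    }
}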

Download a file from an FTP server and rename it on the host

I have one file delivered to an FTP server daily. This file doesn't have the same name every day; the name contains the date and the hour of creation. For example, today the file is named 20130814_XX_YY_20130814152345, created at 15:23:45, and tomorrow the file could be named 20130815_XX_YY_20130815152421. The _XX_YY_ part is always the same, but the timestamp changes every day.
I want to create host JCL that gets this file with its variable name and renames it to a host file. How can I do this?
Thank you
Regards
Chuchito
STEP 1: You can use LS in FTP and write its output to disk, so you end up with a file containing the server file name. GET that listing.
STEP 2: Process the contents of that file to generate the FTP control cards (at least for the GET). The generated GET will be of the form GET 20130814_XX_YY_20130814152345 'HLQ.MAINFRAM.DATASET', where the server name comes from the file obtained in STEP 1 and the local (mainframe) dataset can be hard-coded, or supplied to the generation step if flexibility is required.
STEP 3: Run FTP again with the generated control card(s).
Isn't there anything in the Spec?
Sometimes we create complexities where an "out of the box" solution simplifies life considerably.
After the post updated, I now understand the problem a bit better.
If the name is required to be so specific, then the other suggested solution (if I understand it) is to have a fixed file name on the server that contains a list of file names to be uploaded.
In fact, the server could create a fixed-name file that is really the JCL to run on the mainframe! This file would include the //SYSIN DD * and GET commands. The mainframe uploads this file and submits it as-is to the job reader, and it then runs on the mainframe. The last step of this job (created by the server, but run on the mainframe) is to FTP an empty JCL file back to the server; that is how the server "knows" that the mainframe has uploaded the files.
Alternatively, why does the non-z/OS system need to include time information in the file name? If the mainframe processes the file daily, the date should be sufficient.
With this change the mainframe can reliably predict the file name for the day, generate the appropriate GET command and run.
With a job scheduler it would be easy to run the upload to the mainframe twice a day. This might address any concerns that are expressed in the desire to include a time in the file's name.
Run a REXX step via a background TSO step.
You can then run a LISTCAT to get all the files. You could either write the LISTCAT output to a file and read it in, or trap the output via the Address command or the OUTTRAP function.
Then use the standard TSO RENAME command.
Alternatively, you could run a background ISPF REXX program and use the ISPF equivalents to get the file name.
(1) The real solution to this should be through a scheduling tool for Mainframe jobs. These tools provide capabilities to take care of formatting like the one you described.
(2) Alternatives: REXX and COBOL
(3) If you would rather not use REXX, here's a brief outline of how you could create the JCL dynamically using COBOL:
A COBOL program reads a "template" JCL.
Using INSPECT ... REPLACING, you substitute the placeholders with a string built from the date of your choice (you could supply this as a simple SYSIN parm too, if you want the COBOL code to be flexible about the date selection).
Now that your formatted JCL is ready, you can write it to the internal reader:
//OUTFILE DD SYSOUT=(*,INTRDR)
Anything written to INTRDR (the internal reader) goes straight to JES, which submits your job!
Hope this helps.

Executing a script file from an Azure blob and writing its results to a file

I'll explain the task that was requested of me:
I have two containers in Azure, one called "data" and one called "script". In the "data" container there's a txt file with data, and in the "script" container there's a script file.
Now, I need to programmatically (with a WorkerRole) execute the script file, with the content of the data file as its parameters (example: a script file that accepts a string 's' and prints "Hello, 's'" to the screen, where 's' is the string stored in the data file), and save the result of the run into another file, which needs to go in a third container called "result".
How do I do all this? I've already uploaded the files and created the blobs programmatically, but I can't seem to work out how to execute the script file, or how to save its result to another file.
Can I please have some help?
Thanks in advance
Here are the steps in pseudo code:
Retrieve the script from the blob (using DownloadToStream()).
Compile the script (I will leave this to you, as I have no idea what format your script is in).
Load the parameters from the blob (same as step 1).
Execute the script with those parameters.
If your scripts can be written as lambda expressions, this becomes a lot easier, as you can turn them into Actions.
Edit based on your questions:
DownloadText() is no longer included in Azure Storage 2.0; you only have access to DownloadToStream(). Even if you are using an older version (say 1.7), I would recommend using DownloadToStream() in case you ever upgrade in the future. This will prevent having to refactor your code.
In terms of executing your script, it depends on what type of script it is. If it is C# code, you can use this example: Is it possible to dynamically compile and execute C# code fragments?. If you need to execute a different type of script, you would need to run it using Process.Start, and you can look at this example: http://www.dotnetperls.com/process-start
I do not have much experience with point number 2, but those are the approaches I have heard of and seen used.
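Putting those steps together, a rough sketch might look like the following. It assumes the Azure Storage 2.0 client library (Microsoft.WindowsAzure.Storage), a Windows WorkerRole, and a script runnable as an external batch process; the blob names and the single-argument convention are illustrative assumptions:

using System.Diagnostics;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ScriptRunner
{
    public static void RunScriptFromBlob(string connectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        // 1. Download the script and the data file to local temp files.
        string scriptPath = Path.ChangeExtension(Path.GetTempFileName(), ".cmd");
        string dataPath = Path.GetTempFileName();
        Download(client, "script", "myscript.cmd", scriptPath); // illustrative name
        Download(client, "data", "data.txt", dataPath);         // illustrative name

        // 2. Run the script with the data file's content as its argument,
        //    capturing standard output as the "result".
        string arg = File.ReadAllText(dataPath).Trim();
        ProcessStartInfo psi = new ProcessStartInfo("cmd.exe",
            "/c \"" + scriptPath + "\" \"" + arg + "\"");
        psi.UseShellExecute = false;
        psi.RedirectStandardOutput = true;
        string output;
        using (Process p = Process.Start(psi))
        {
            output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
        }

        // 3. Save the captured output into the "result" container.
        CloudBlobContainer result = client.GetContainerReference("result");
        result.CreateIfNotExists();
        using (MemoryStream ms = new MemoryStream(Encoding.UTF8.GetBytes(output)))
        {
            result.GetBlockBlobReference("result.txt").UploadFromStream(ms);
        }
    }

    static void Download(CloudBlobClient client, string container, string blob,
                         string localPath)
    {
        using (FileStream fs = File.Create(localPath))
        {
            client.GetContainerReference(container)
                  .GetBlockBlobReference(blob)
                  .DownloadToStream(fs);
        }
    }
}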
