I've recently become "reacquainted" with Windows and I'm also new to .NET & C#. I'm trying to figure out a way to run a program on a Windows 2003 machine at all times (i.e. it runs when no one is logged in and automatically starts on server boot). I think I'm overcomplicating the problem and getting myself stuck.
This program, called Job.exe, normally runs in a GUI, but I do have the option of running it from the command line with parameters.
Because of the "always on" part, the first thing that comes to mind is to create a service. Embarrassingly, I'm getting stuck on how exactly to run the executable (Job.exe) from within my Service1.cs file (did I mention I'm new to C#?).
A couple of other points I'm stuck on regarding creating a service: how and where do I configure desktop interaction, since I want Job.exe to run totally in the background? Also, since OnStart is supposed to return to the OS when finished, I'm a little confused about where I should put the code that executes the program: do I place it in my OnStart method, or create a separate method that I then call from OnStart?
Last question on creating a service is about the parameters. Job.exe accepts two parameters in total, one static and one dynamic (i.e. could be defined via the service properties dialog in the services management console). I'd like to be able to create multiple instances of the service specifying a different dynamic parameter for each one. Also, the dynamic parameter should be able to accept a string array.
I'm sure there are options outside of creating a service, so I will take any and all suggestions.
Since you mentioned that you may be over-complicating the problem, you may consider using the Task Scheduler to run your application.
Using the Task Scheduler will allow you to make a "regular" desktop application which is arguably a simpler approach than creating your own service. Also, the Task Scheduler has many options that fit the requirements you touched on.
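For illustration, registering a boot-time task can be done from an elevated command prompt; the task name, path, and parameters below are made-up placeholders:

```bat
:: Illustrative only - the task name, path, and parameters are assumptions.
:: Runs Job.exe at system startup under the SYSTEM account, so no user
:: needs to be logged in, and the process has no desktop interaction.
schtasks /Create /TN "JobRunner" ^
    /TR "C:\apps\Job.exe staticParam dynamicParam" ^
    /SC ONSTART /RU SYSTEM
```

You could register several such tasks with different names to cover the multiple-instances-with-different-parameters requirement.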
The simplest approach might be to create a service, reference the application's assembly, and call its Main() method to start the application. The application itself could use Environment.UserInteractive to detect whether it is running as a service or as a desktop application.
One thing to watch out for, as you mention, is that the Start method of a service (and the other control methods) is expected to return immediately-ish (in the timespan of the "Starting Service..." dialog) so you'll need to spin up a thread to run the main method. Something like this:
// Run the application's entry point on a separate thread so OnStart can return promptly.
Thread t = new Thread(() => { MyApplication.Application.Main("firstParam", "secondParam"); });
t.Start();
The params could come from a file and the service can be configured with the file name as a parameter (see this article for one of many examples on how to do that), or the service could be configured as you mentioned to take the parameters and pass them along to the application's main method. These are both viable approaches. Either way, only one instance of a service can be running at a time, so you'd need to register the service multiple times with different names to configure different parameters.
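For the multiple-registration case, a sketch using sc.exe (service names, path, and arguments here are assumptions, not a prescribed layout):

```bat
:: Illustrative only - names, path, and arguments are assumptions.
:: Note: sc.exe requires a space after each option's equals sign.
sc create JobServiceA binPath= "C:\apps\JobService.exe staticParam dynamicA" start= auto
sc create JobServiceB binPath= "C:\apps\JobService.exe staticParam dynamicB" start= auto
```

Each registration gets its own name and its own command-line arguments, which the service can parse and pass along to Job.exe.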
I'm trying to write a script that will create a new google cloud project, then provision firebase resources to it, all using the Node SDK.
The first step is calling google.cloudresourcemanager("v1").projects.create, and I use an organization-level service account for that with the proper permissions which returns the proper Operation object on success.
The issue is that after this call, there's often a delay of up to several hours before a call to google.cloudresourcemanager("v1").projects.get or google.firebase({version: "v1beta1"}).projects.addFirebase works; if you check in the console, the project isn't there. The issue is not with permissions (authentication/authorization): when I manually verify that a project exists and then call those two functions, they work as expected.
Has anybody else experienced something like this?
thanks!
The official API documentation mentions this:
Request that a new Project be created. The result is an Operation which can be used to track the creation process. This process usually takes a few seconds, but can sometimes take much longer. The tracking Operation is automatically deleted after a few hours, so there is no need to call operations.delete.
This means that the method works asynchronously: the response can be used to track the creation of the project, but the project hasn't been fully created when the API answers.
The documentation also mentions that it can take a long time.
Since this is how the API itself works, all the SDKs, including the Node.js one, share this behavior.
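A minimal sketch of dealing with this is to poll until the project becomes visible before calling addFirebase. Here `getProject` is a stand-in for a call like google.cloudresourcemanager("v1").projects.get; the helper name, retry count, and delay are illustrative assumptions, not SDK API:

```javascript
// Poll an eventually consistent resource until it appears (or give up).
// getProject: async function that resolves once the project is visible,
//             and rejects (e.g. 403/404) while it is still provisioning.
async function waitForProject(getProject, projectId, { retries = 30, delayMs = 10000 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      // Succeeds only once the project has become visible to the API.
      return await getProject(projectId);
    } catch (err) {
      lastError = err;
      // Back off before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

In practice you would call this after projects.create resolves, then proceed with addFirebase once it returns.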
I want to create an application with a feature that allows users to submit code which the server will compile and run, similar to Ideone & SPOJ. How do I do this securely, in a scalable manner?
Partial Solutions I'm aware of:
IDEA 1 - 3rd Party Services
The Sphere Engine. However this costs a LOT of money!
I'm not aware of any open-source application I can run on my server to achieve this, or of a cheaper alternative. Please correct me if I'm wrong.
IDEA 2 - VM
This would be the next most sensible choice. However, I'm unsure how to implement it. For example, let's say I created a VM and started to run the user's code. This would restrict damage to MY system, but not damage to the VM itself, which other users would have to use. Does that mean I have to create a new VM each and every time I want to compile and run a user's code (which clearly is not scalable)? Correct me if I'm wrong.
Having not set up a thing, I assumed that services like TravisCI (which compiles code and runs it under test cases you provide), have a base virtual machine image, which boots up and processes your code. The next user to come along gets a separate VM booted from the same base image, your changes aren't stored.
So inside the VM, the user code can do whatever it wants. All of its effects, except stuff written to the console, will be erased at the end of the time limit.
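A lighter-weight variant of the same throwaway-environment idea is a container per submission rather than a full VM. This is a sketch, not the services' actual setup; the image name and every limit below are illustrative assumptions:

```shell
# Illustrative sketch - image name and limits are assumptions.
# Run an untrusted C submission in a disposable container:
# no network, capped memory/CPU/process count, read-only image,
# writable /tmp only, and a hard wall-clock time limit.
timeout 10s docker run --rm \
    --network none \
    --memory 128m --cpus 1 --pids-limit 64 \
    --read-only --tmpfs /tmp:size=16m \
    -v "$PWD/submission:/code:ro" \
    gcc:latest sh -c "gcc /code/main.c -o /tmp/a.out && /tmp/a.out"
```

Because the container is destroyed after each run (`--rm`), the next submission starts from the same clean base image, which is the property the VM approach was buying you.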
For my system, I have a back-end process that uses a 3rd party command line tool to do some occasional processing. This tool writes to and reads from the file system (I point it at some files, it works its magic, and then it writes out the results to another file).
This is obviously easy to do with an Azure Virtual Machine. Just write a Windows Service to manage this command line tool and have it read from a Queue to get the processing jobs.
To this point, however, I've been able to do everything in Azure without having to resort to a full blown VM. I like that. I like not having to worry about applying patches and other maintenance, downtime and the like.
So, my question is, is there something in Azure that would let me have this service without resorting to a VM? Would a "Worker Role" be able to accomplish this? Can it read and write to/from the file system? Can it handle 3rd party tools with a bunch of arbitrary dependencies? Can I launch another process from C# code within the worker role?
Would a "Worker Role" be able to accomplish this?
Absolutely! Remember that a Worker Role is a full-blown VM too (running the same OS that powers Azure Virtual Machines).
Can it read and write to/from the file system?
Yes. However, there's a catch: you can't read/write to any arbitrary location on the VM. You do have full access to a special folder on that VM called Local Storage. You can read more about it here: http://msdn.microsoft.com/en-us/library/azure/ee758708.aspx
Can it handle 3rd party tools with a bunch of arbitrary dependencies?
Yes again! But again, there's a catch. Since these are stateless VMs, anything you install after the VM is stood up for you by Microsoft is not guaranteed to be there if Microsoft decides to tear down that VM for whatever reason. If you need to install any additional software, you must install it via a mechanism called Startup Tasks. You can read about them here: http://msdn.microsoft.com/en-us/library/azure/hh180155.aspx.
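For illustration, a startup task is declared in the role's ServiceDefinition.csdef; the role name, script name, and vmsize below are assumptions:

```xml
<!-- Illustrative ServiceDefinition.csdef fragment; names are assumptions.
     The startup task re-runs whenever the role instance is (re)provisioned,
     so the 3rd party tool gets reinstalled even if the VM is rebuilt. -->
<WorkerRole name="ProcessingWorkerRole" vmsize="Small">
  <Startup>
    <Task commandLine="InstallTool.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WorkerRole>
```

The referenced InstallTool.cmd would be packaged with the role and would copy or silently install the command line tool and its dependencies.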
Can I launch another process from C# code within the worker role?
Though I have not tried it personally, I think it is possible, because you get a VM running the latest version of Windows Server.
We have built a project in Enterprise Guide for the purpose of creating easily understandable and maintainable code. The project contains a set of process flows which should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
Basic idea is to extract this project into SAS code, which we would be able to run from command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for this to be done on a Linux server, an installation of WINE is required, which is almost completely out of the question on a production machine that already runs a number of other important applications.
Is there a way to specify a complete project export into SAS code, provided that I give the specific order in which to run the process flows? I have tried an ordered list, which lets you make a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately this ordered list itself cannot later be exported as SAS code.
Current solution we do is the following:
We export each single process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines to run all the extracted code files in the order we want. This is of course a possible solution, but definitely not the most elegant one.
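For illustration, the wrapper we create looks roughly like this (the paths and file names here are made up):

```sas
/* run_all.sas - illustrative driver; paths and file names are assumptions */
%include "/sas/jobs/flow1_extract.sas";    /* exported process flow 1 */
%include "/sas/jobs/flow2_transform.sas";  /* exported process flow 2 */
%include "/sas/jobs/flow3_report.sas";     /* exported process flow 3 */
```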
Question 2:
Since I don't know exactly how the code is exported afterwards, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling the server directly, you schedule Windows to launch an EG process that connects to the server and does its magic. That's how I largely interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you: the SAS Platform Suite, which is designed in part for scheduling, and using SAS Management Console to schedule via operating system tools. Both are very helpful.
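On the operating-system-tools side, the simplest sketch on a Linux server is a plain cron entry that runs the exported driver program in batch; the SAS install path and file names here are assumptions:

```shell
# Illustrative crontab entry (edit with: crontab -e).
# Runs the exported driver program in batch mode nightly at 02:00.
0 2 * * * /opt/sas/SASFoundation/9.4/sas -sysin /home/sasuser/jobs/run_all.sas -log /home/sasuser/logs/run_all.log -batch -noterminal
```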
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to be in creation order by default, which is sub-optimal.) The project export does group process flows together, but you don't have the option of manipulating the order of process flows - you have to move each program around individually, which would be tedious. It also, of course, gives you less flexibility if you need to run some programs multiple times.
Question 2: As Stig Eide points out in the comments, make sure your system option LRECL is greater than 256 (the default), or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.
I'm going to start developing a Java web app that I believe I will be deploying to CloudBees, but am concerned about what JRE/sandbox restrictions may apply.
For instance, with Google App Engine, you're not allowed to call methods of java.io.File or the java.net classes. You're not allowed to start threads without using their custom ThreadFactory. You're not allowed to use JNDI or JMX, or to make calls to remote RDBMSes hosted on 3rd-party machines. You're not allowed to use reflection. With GAE, there's a lot you're not allowed to do.
Do these same restrictions hold true for CloudBees? I'm guessing no, as I just read their entire developer docs and didn't run across anything of the sort.
However, what happens if my app tries to write to the local file system when deployed to their servers? They must have certain restrictions as to what can run on their machines, if for no other reason than security!
So I ask: what are these restrictions, or where can I find them listed in their docs? Thanks in advance!
Last I checked (a) there is no sandbox; (b) you can write to the local filesystem, but any files you write there may be discarded if the application is reprovisioned for any reason, i.e. use it for temporary files only. (An optional permanent file store service has been considered as a feature useful for certain applications.)