We have built a project in Enterprise Guide with the aim of creating easily understandable and maintainable code. The project contains a set of process flows which should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
The basic idea is to extract this project into SAS code, which we would then be able to run from the command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for this to work on a Linux server, an installation of WINE is required, which is almost completely out of the question on a production machine that already runs a number of other important applications.
Is there a way to specify a complete project export into SAS code, provided that I give the specific order in which the process flows run? I have tried the ordered list, which lets you build a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately the ordered list itself cannot later be exported as SAS code.
Our current solution is the following:
We export each individual process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines that runs all the extracted code files in the order we want. This is of course a workable solution, but definitely not the most elegant one.
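For illustration, the driver could equally be a small Python script that calls the Linux sas command-line executable on each exported file in order; the file names and the assumption that sas is on the PATH are invented:

    # Hypothetical driver: file names and the assumption that the 'sas'
    # executable is on the PATH are invented for illustration.
    import subprocess
    import sys

    # Exported process-flow programs, in the order they must run.
    PROGRAMS = [
        "/opt/project/flow1_extract.sas",
        "/opt/project/flow2_transform.sas",
        "/opt/project/flow3_report.sas",
    ]

    for program in PROGRAMS:
        # Run sas in batch mode; it writes a .log (and possibly .lst) per program.
        result = subprocess.run(["sas", program])
        if result.returncode != 0:
            # Stop on the first failure so later flows don't run on half-built data.
            sys.exit(f"{program} ended with return code {result.returncode}")

A wrapper like this (or the %include master program itself) can then be scheduled with cron like any other Linux batch job.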
Question 2:
Since I don't know exactly how the code gets exported, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling on the server directly; you schedule Windows to launch an EG process that connects to the server and does its magic. That's how I largely interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you: the SAS Platform Suite, which is designed in part for scheduling, and scheduling via operating system tools from SAS Management Console.
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to default to creation order, which is sub-optimal.) The project export does group process flows together, but you don't have the option of reordering whole process flows - you have to move each program around individually, which would be tedious. It also, of course, gives you less flexibility if you need to run some programs multiple times.
Question 2: As Stig Eide points out in comments, make sure your System Option LRECL is > 256 (the default) or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.
Related
I have a small single-board computer which will be running a Linux distribution and some programs, and which has a specific user configuration, directory structure, permission settings, etc.
My question is: what is the best way to maintain the system configuration for release? I've thought of a few ideas, but each has its downsides.
Configure the system and burn the image to an ISO file for distribution.
This one has the advantage that the system will be configured precisely the way I want it, but committing an ISO file to a repository is less than desirable, since it is quite large, and checking out a new revision means reflashing the system.
Install a base OS (which is version-locked) and write a shell script to configure the settings from scratch.
This one has the advantage that I could maintain the script in a repository and roll out configuration changes by pulling the latest script and running it again; however, now I have to maintain a shell script to configure a system, and it's another place where something can go wrong (a rough sketch of what I mean follows below).
I'm wondering what the best practices are in embedded systems in general, so that I can maybe implement a good deployment and maintenance strategy.
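To illustrate option 2, here is a rough sketch of the kind of configuration script I have in mind (in Python rather than shell; every user, path, and package name is an invented placeholder, and apt-get assumes a Debian-based distribution):

    #!/usr/bin/env python3
    # Invented example of an idempotent configuration script;
    # all package names, users and paths are placeholders.
    import os
    import pwd
    import subprocess

    def ensure_user(name):
        """Create the user only if it does not already exist."""
        try:
            pwd.getpwnam(name)
        except KeyError:
            subprocess.run(["useradd", "--create-home", name], check=True)

    def ensure_dir(path, mode):
        """Create the directory if needed and enforce its permissions."""
        os.makedirs(path, exist_ok=True)
        os.chmod(path, mode)

    ensure_user("appuser")
    ensure_dir("/opt/app/data", 0o750)
    # check=True makes a failed install raise instead of passing silently.
    subprocess.run(["apt-get", "install", "-y", "some-package"], check=True)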
Embedded systems tend to have a long lifetime. Do not be surprised if you need to refer to something released today in ten years' time. Make an ISO of the whole setup (source code, diagrams, everything) and store it away redundantly. Someone will be glad you did a decade from now. Just pretend it's going to last forever and that you'll have to answer a question or research a defect in ten years.
I want to create an application with a feature that allows users to submit code which the server will compile and run, similar to Ideone & Spoj. How do I do this securely and in a scalable manner?
Partial Solutions I'm aware of:
IDEA 1 - 3rd Party Services
The Sphere Engine. However, this costs a LOT of money!
I'm not aware of any open-source application I can run on my server to achieve this, or of a cheaper alternative. Please correct me if I'm wrong.
IDEA 2 - VM
This would be the next most sensible choice. However, I'm unsure how to implement it. For example, let's say I created a VM and started to run the user's code. This would limit the damage to MY system, but not the damage to the VM, which other users would have to use. Does that mean I have to create a new VM each and every time I want to compile and run a user's code (which clearly is not scalable; correct me if I'm wrong)?
Having not set up such a thing myself, I assume that services like TravisCI (which compiles your code and runs it under test cases you provide) have a base virtual machine image which boots up and processes your code. The next user to come along gets a separate VM booted from the same base image; your changes aren't stored.
So inside the VM, the user code can do whatever it likes. All of its effects, except stuff written to the console, will be erased at the end of the time limit.
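As a rough sketch of that lifecycle (not a security recommendation: the image name and limits are arbitrary choices, and a container is a weaker boundary than a real VM), a throwaway Docker container per submission might look like:

    import subprocess

    # Sketch only: the image name and resource limits are arbitrary,
    # and a container is a weaker boundary than a real VM.
    def run_untrusted(source_path, time_limit=10):
        cmd = [
            "docker", "run",
            "--rm",               # discard the container and all its changes afterwards
            "--network", "none",  # no network access for the user's code
            "--memory", "128m",   # cap memory
            "--cpus", "0.5",      # cap CPU
            "-v", f"{source_path}:/sandbox/main.py:ro",
            "python:3.12-alpine",
            "python", "/sandbox/main.py",
        ]
        try:
            result = subprocess.run(cmd, capture_output=True, text=True,
                                    timeout=time_limit)
            return result.stdout
        except subprocess.TimeoutExpired:
            # Killing the client doesn't always kill the container;
            # a production setup would also reap it by name.
            return "time limit exceeded"

Everything except the captured console output disappears when the container is removed, which mirrors the behaviour described above.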
I'm maintaining the servers of a web game. Whenever we add a new server to our game, I have to configure many environment details and install software on the new machine (for example, testing whether some ports of the new machine can be reached from other places, installing mysql-client, pv..., copying the game server files from another machine, and changing the mysql server connection URL).
So my question is: "How can I automate the whole process of setting up a new server?" Most of the work I do is repetitive; I don't want to redo this kind of job whenever a new machine comes in.
Is there a tool that allows me to save the state of a Linux machine, so that the next time we buy a new server I can copy the state of an old machine onto the new one? I think this is one way to automate the process of setting up a new game server.
I've also tried using some *.sh scripts to automate the process, but it's not always possible to get the return value of every command I execute. This is why I've come here to ask for help.
Have you looked at Docker, Ansible, Chef, or Puppet?
In Docker you can build a new image by describing the required operations in a Dockerfile, and you can easily move containers between machines.
Ansible, Chef, and Puppet are systems-management automation tools.
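For example, carrying an image to a new machine can be scripted against the docker CLI (all names below are invented placeholders):

    import subprocess

    # All names are invented; this just shows the build / save / load
    # cycle for moving an image between machines.
    def run(args):
        subprocess.run(args, check=True)

    # On the machine where the Dockerfile lives:
    run(["docker", "build", "-t", "gameserver:1.0", "."])
    run(["docker", "save", "-o", "gameserver.tar", "gameserver:1.0"])

    # After copying gameserver.tar to the new machine:
    # run(["docker", "load", "-i", "gameserver.tar"])
    # run(["docker", "run", "-d", "--name", "gameserver", "gameserver:1.0"])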
I doubt you'll find such a tool to automate an entire customisation process, because it's rather difficult to define/obtain a one-size-fits-all Linux machine state, especially if the customisation includes logical/functional sequences.
But with good scripting you can obtain a possibly more reliable customisation from scratch (rather than copying it from another machine). I'd recommend a higher-level scripting language, tho; IMHO regular bash/zsh/csh scripting is not good/convenient enough. I prefer Python, which gives easy access to every command's return code, stdout, and stderr, and with the pexpect module it can drive interactive commands.
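For example (every command, host, and password below is a placeholder):

    import subprocess
    import pexpect  # third-party module: pip install pexpect

    # Return code, stdout and stderr of a non-interactive command:
    result = subprocess.run(["mysql", "--version"],
                            capture_output=True, text=True)
    print(result.returncode, result.stdout, result.stderr)

    # Driving an interactive command (host and password are placeholders;
    # first-time host-key prompts etc. are ignored for brevity):
    child = pexpect.spawn("ssh gameadmin@new-server")
    child.expect("password:")
    child.sendline("not-a-real-password")
    child.expect(r"\$")           # wait for a shell prompt
    child.sendline("hostname")
    child.expect(r"\$")
    print(child.before.decode())  # output of the hostname command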
There are tools to handle specific types of customisation (software package installations, config files), but not all that I needed, so I didn't bother and went straight for custom scripts (more work, but total control). Personal preference, tho; others will advise against that.
I have an application written based on the Lotus Notes client. I want to check whether Lotus Notes is running before starting my application, so that I can skip asking the user for a password if "Don't prompt for password from other Notes-based programs" is checked.
One method is to get all the running processes and look for the nlnotes.exe and notes2.exe processes to confirm.
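In Python, for instance, that scan could look like this sketch (it assumes the third-party psutil package):

    import psutil  # third-party module: pip install psutil

    # Process names taken from the question.
    NOTES_PROCESSES = {"nlnotes.exe", "notes2.exe"}

    def notes_is_running():
        """Return True if a Notes client process is found."""
        for proc in psutil.process_iter(["name"]):
            if (proc.info["name"] or "").lower() in NOTES_PROCESSES:
                return True
        return False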
Is there any other method to achieve the same?
To be more specific, I want to know whether any registry entries are made to indicate that Notes is currently running. We can't open two instances of the Notes client, which made me think IBM might have used a registry entry to check for a running instance.
Kindly correct me if I'm wrong.
The registry would not be a good place for info like that, because if the client crashed the registry data would need to be cleaned up. The same is true for lock files. So while I can't say for sure, I believe IBM detects whether the client is already running by looking for in-memory objects - e.g., shared memory sections, mutexes, etc. Using Process Explorer, I see several shared memory sections associated with the Notes processes. One likely candidate is a section called -LTSCS-22275429-MEM9, but I don't know how that name is generated, if it ever changes with reinstall, reboot, etc. It would take a fair amount of experimentation to determine that - and then of course one would have to figure out how to write the code to detect it, but that's my best guess as to how it's done.
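If you want to experiment with that guess, a Windows-only Python sketch using ctypes could probe for the named section; bear in mind the name was simply observed on my machine and may differ on yours:

    import ctypes

    FILE_MAP_READ = 0x0004
    # Section name as observed in Process Explorer; it may well differ
    # between installations, reboots or versions, so this is experimental.
    SECTION_NAME = "-LTSCS-22275429-MEM9"

    kernel32 = ctypes.windll.kernel32
    kernel32.OpenFileMappingW.restype = ctypes.c_void_p

    handle = kernel32.OpenFileMappingW(FILE_MAP_READ, False, SECTION_NAME)
    if handle:
        print("Section exists; Notes appears to be running")
        kernel32.CloseHandle(handle)
    else:
        print("Section not found; Notes probably is not running")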
I downloaded this shell script from this site.
It's suspiciously large for a bash script, so I opened it with a text editor and noticed that after the code there are a lot of nonsense characters.
I'm afraid to give the script execution rights with chmod +x jd.sh. Can you advise me how to recognize whether it's safe, or how to run it with limited rights on the system?
Thank you.
The "non-sense characters" indicate binary files that are included directly into the SH file. The script will use the file itself as a file archive and copy/extract files as needed. That's nothing unusual for an SH installer. (edit: for example, makeself)
As with other software, it's virtually impossible to decide whether or not running the script is "safe".
Don't run it! That site is blocked where I work, because it's known to serve malware.
Now, as to verifying code, it's not really possible without isolating it completely (technically difficult, but a VM might serve if it has no known vulnerabilities) and running it to observe what it actually does. A healthy dose of mistrust is always useful when using third-party software, but of course nobody has time to verify all the software they run, or even a tiny fraction of it. It would take thousands (more likely millions) of work years, and would find enough bugs to keep developers busy for another thousand years. The best you can usually do is run only software which has been created or at least recommended by someone you trust at least somewhat. Trust has to be determined according to your own criteria, but here are some which would count in the software's favor for me:
Part of a major operating system/distribution. That means some larger organization has decided to trust it.
Source code is publicly available. At least any malware caused by company policy (see Sony CD debacle) would have a bigger chance of being discovered.
Source code is distributed on an appropriate platform. Sites like GitHub enable you to gauge the popularity of software and keep track of what's happening to it, while a random web site without any commenting features, version control, or bug database is an awful place to keep useful code.
While the source of the script does not seem trustworthy (IP address?), this might still be legit. With shell scripts it is possible to append binary content at the end and thus build a type of installer. Years ago, Sun would ship the JDK for Solaris in exactly that form. I don't know if that's still the case, though.
If you want to test it without risk, I'd install Linux in VirtualBox (free virtual-machine software), run the script there, and see what it does.
Addendum on "see what it does": there's a variety of tools on UNIX that you can use to analyze a binary program, like strace, ptrace, and ltrace. What might also be interesting is running the script using chroot; that way you can easily find all the files that it installs.
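For example, a small Python sketch wrapping strace to log every file the script touches (assuming strace is installed; jd.sh is the script from the question):

    import subprocess

    # Run the script under strace, following child processes and logging
    # only file-related system calls; no chmod +x needed when run via sh.
    subprocess.run([
        "strace",
        "-f",                # follow forks so child processes are traced too
        "-e", "trace=file",  # only file-related syscalls (open, unlink, ...)
        "-o", "jd.trace",    # write the trace here instead of stderr
        "sh", "jd.sh",
    ])

    # Afterwards, skim the trace for writes to surprising places:
    for line in open("jd.trace"):
        if "/etc/" in line or "/home/" in line:
            print(line, end="")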
But at the end of the day this will probably yield more binary files which are not easy to examine (as probably any developer of anti-virus software will tell you). Therefore, if you don't trust the source at all, don't run it. Or if you must run it, do it in a VM where at least it won't be able to do too much damage or access any of your data.