SAS EG: how to extract, edit and insert a program in an .egp (Windows/Linux)

I have a scenario where .egp's are created in a Windows environment. As part of a migration, these need to be moved to a UNIX/Linux server and upgraded from EG 4.1 to 4.2, and we have to make the programs comply with Linux/Unix conventions (such as case sensitivity) and point the directory paths at the Linux/Unix environment.
As we have around 300 .egp's to be migrated, say in the first pass we use the migration wizard in EG 4.2 to automatically convert the .egp's to the 4.2 format; the biggest question is how to incorporate changes into the SAS programs. Is there any automated way to extract the program from the respective node in an .egp, edit it, and insert it back at the same node?
Thanks in advance.

If the code exists purely in EG: not that I'm aware of via SAS itself; EG is not programmable.
If the code objects are stored as physical files outside of EG, they could conceivably be imported into EG (by looping over the folders involved) with some text substitution done along the way.
Alternatively, it's a job for a full-on scripting language. EG files are zip files, and once uncompressed they contain .sas text files in subfolders. It should be possible to iterate over them all and make the required changes; see the sketch below.
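Here's a rough shape of that approach as a shell sketch. It assumes Info-ZIP's unzip/zip are available and that a simple path substitution is all the programs need; the folder names and the sed pattern are made up for illustration, and you'd want to verify that EG still opens a re-zipped project.

```bash
#!/bin/bash
# For each project: unpack, rewrite the embedded .sas programs, repack.
for egp in /migration/*.egp; do
  work=$(mktemp -d)
  unzip -q "$egp" -d "$work"
  # Example substitution only: map a Windows drive path to a Linux mount.
  find "$work" -name '*.sas' -exec sed -i 's|C:\\data\\|/data/|g' {} +
  # Repack the tree into a new .egp next to the original.
  (cd "$work" && zip -q -r "${egp%.egp}_migrated.egp" .)
  rm -rf "$work"
done
```

Run it against a copy of one project first and diff the programs inside EG before doing all 300.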
In neither case will it be much fun. (Though doing it manually doesn't sound great either.)
Talk to SAS - they may have a tool they've put together for someone else they can let you have.

Related

Creating custom documents (PDFs) on the fly on a website

I want to create custom documents on my website, which runs on a Linux-based server. The website has user login capability to access specific details on the site.
What I want to do is:
Use a default .tex file where the contents of the main document are stored. This would be available on the server (on the admin side);
Get a few user-specific inputs (like login name, and the day and date the request was made) plus their custom inputs, i.e. what specific details they want (this makes it possible to include or exclude certain chapters and sections from the document);
Using the inputs received above (in point 2), customize the document on the fly by running the LaTeX compiler, and share the output of the compilation with the user.
My questions are:
Has someone tried this before? Any suggestions or alternatives they can point to? If there is a better solution than LaTeX, I am open to hearing about that as well.
Are there any specific settings we need, either on the server or in the LaTeX installation, to enable this?
Are any additional packages or programs required to get this working?
Any help and insights would be appreciated.
You can generate PDFs using an appropriate library for whatever programming language you use on your back end. This is definitely safer than injecting user input into a TeX file, and it would probably be faster too.
PHP: Best way to create a PDF with PHP
Ruby: https://github.com/prawnpdf/prawn
anything else: google for "$LANGUAGE generate pdf".
The first and second points can be handled in any programming language you choose: read the .tex template, add or omit the data, then save the result to a temporary .tex file. After compilation you can remove this file. If you are working on a Linux server, you can use a service (cron, systemd) to automate cleaning up these files.
To compile and get the file you use the pdflatex command-line program, which is what any LaTeX editor invokes under the hood. I compile my LaTeX documents this way on Linux. It is quite fast, unless the document contains images or TikZ pictures.
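As a concrete sketch of that flow, something like the following could sit behind the request handler; the placeholder tokens (__USERNAME__, __DATE__) and the file names are assumptions for illustration, not part of any standard:

```bash
#!/bin/bash
# Usage: ./build_pdf.sh <username>  -- fills template.tex and compiles it.
tmp=$(mktemp -d)
sed -e "s/__USERNAME__/$1/" -e "s/__DATE__/$(date +%F)/" \
    template.tex > "$tmp/doc.tex"
# Non-interactive compile; run twice if the document uses cross-references.
pdflatex -interaction=nonstopmode -output-directory "$tmp" "$tmp/doc.tex"
cp "$tmp/doc.pdf" "output-$1.pdf"
rm -rf "$tmp"        # nothing temporary left over for cron to clean
```

Note that any user input substituted this way must be sanitized first, since TeX will happily execute commands embedded in it.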
I know I am suggesting the old way of doing this work, but usually it is the best way.
And finally, I think PHD Comics uses something like this for the emergency button (bottom right), except that on that site the PDF is already generated for each specific comic: http://www.phdcomics.com/

How can Visual Studio 2012 be set to use a custom tool to customise the reading/writing of existing editors?

Update: It appears that VS doesn't have the hooks needed for my use case. However, there are a couple of options that could work for other people, so I'm marking the question as answered, but I would love to find a solution that works for me.
We have encrypted files that are routinely kept in encrypted form within source control (TFS). When I want to compare versions I use Beyond Compare and have added the encryption/decryption tool as filtering on the read/write process to allow plain text viewing and editing.
However if I just want to open the file for reading/editing it's a bit tedious using a dummy comparison just to view/edit the file.
As such, I was wondering if there is a configuration setting or mechanism in Visual Studio that would allow me to insert a filter on the read/write so that it could display/edit/save files that would otherwise be unreadable.
Edit:
NB: the encryption aspect is just one use case. I'm actually looking for a generic answer that doesn't require writing an editor to replace the editors that already exist within VS, such as the MS-supplied XML editor or custom third-party ones.
I have both custom and non-custom files that are encrypted. Each file type already has an editor, and we have no access to the source for any of these editors. The problem is that the file is encrypted in TFS, and all I need is filtering on the read and write for all files, regardless of editor.
I want to use all the existing features of the installed editors without change. Only the reading and writing need to be customised.
Here's a potentially hacky way to achieve what you are trying to do, if there is no other easy option.
TFS stores data in a SQL database. Therefore you can theoretically modify the read/edit command that is used to extract the data from TFS and send it to the editor/viewer. This might involve modifying a stored procedure, or putting a trigger in place to modify the data before it is presented to the editor.
You would need to run a Profiler trace on the TFS database when you click edit/view or browse to the node in the source-control tree. This will help you figure out what data TFS is accessing and which functions/stored procs/tables it uses to extract that data.
The same in reverse; you'd need to modify the 'writing' of the data to use your custom tool before putting it in the DB.
SQL Server can call CLR code, so you could use your tool if it's written in .NET.
The easiest way would be to download the VS 2012 SDK; Microsoft already provides a nice walkthrough on how to implement a custom editor HERE.
The process is:
Install the SDK
Fire up VS2012; select New Project -> Other Project Types -> Visual Studio Package
Visual C#, company name, etc...
Tick the "Custom Editor" tickbox
Fill in the rest of the details
Now you're presented with all the source of a vanilla text editor. The parts you want to hook into are the IPersistFileFormat::Load() and IPersistFileFormat::Save() functions found in EditorPane.cs; put your encryption/decryption routines in there, and you'll be left with a text editor with a custom encrypted file format.
This may not do what you need, since you have to call a third-party exe. However, this answer may be useful for others who have access to source code (or a DLL or library).
You could write a file system filter driver that encrypts/decrypts the data to and from disk. Note that the driver sits at the OS level, outside of Visual Studio.
From the MSDN article File Systems and File System Filter Drivers:
A file system filter driver intercepts requests targeted at a file system or another file system filter driver. By intercepting the request before it reaches its intended target, the filter driver can extend or replace functionality provided by the original target of the request. Examples of File Systems and File System Filter Drivers include anti-virus filters, backup agents, and encryption products.
See this Code Project article for a tutorial: File System Filter Driver Tutorial. The article does not show how to do encryption/decryption, but it shows how to get a simple driver up and running.
There are extensions that will capture events such as the current window's save and, as it turns out, document load. Note: this is not a custom editor.
check out the following two links:
http://msdn.microsoft.com/en-us/library/dd885244.aspx
and a fairly complete open-source add-in that works with files when they are saved (regardless of type):
https://bitbucket.org/s_cadwallader/codemaid/src/7cf1bf6108801f48b85e30d85e1646fbc73ba889/CodeMaid/Integration/Events/RunningDocumentTableEventListener.cs?at=default
which hooks the RDT (Running Document Table) to extend the current environment. You would need to adapt it from there, of course, but this should get you going in the right direction.

Could I have any issue using the @ (at sign) in a *nix directory?

I'm developing a website and want to create a directory named after the user's username, which is going to be their email address (so I don't have to generate new IDs, etc.).
I've made some tests and it seems to work fine. Also, I didn't find any documentation against using the "@" in a directory name, but could I run into problems in the future with this approach?
I mean, might some browser not be able to load images from this directory, or some other problem?
If you plan to run Perl scripts (and possibly other languages) against those files, you will need to remember to escape the @ sign. It's not a huge problem, but I personally would not do it.
More importantly if the path is visible to the browser you would be disclosing the user's email address to the whole world.
I would suggest using something like an MD5 hash of the user's email instead. It is (relatively) unique, and you can recalculate it very easily if you need to. Gravatar uses this approach for instance. See: http://en.gravatar.com/site/implement/hash/
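For instance, Gravatar's recipe (trim, lowercase, then MD5-hash the address) is easy to reproduce in a shell; the email value and the target path here are made up for illustration:

```bash
# Derive a directory name from the email without disclosing the address.
email="  User@Example.com "                        # hypothetical input
hash=$(printf '%s' "$email" | xargs \
       | tr '[:upper:]' '[:lower:]' | md5sum | cut -d' ' -f1)
mkdir -p "/var/www/userdata/$hash"
```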
No, there should be no problems. Browsers just read the file; they don't care much about the name, only the file content (the headers are what matter).
Historically, some remote filesystems have used the @ to "escape" from normal path processing and do "interesting" stuff.
Some version control systems use @ to denote a certain version of a path (e.g. Subversion, ClearCase).
Some other tools use @ for "user@remote_host" syntax; AFAIK rsync is one of them, which might bite you. You should check whether such a tool is used somewhere on your site for backup, syncing, or the like.
So I would not use that character in filenames.

make swf from fla without ever opening it

Is it possible to change text and images in a .fla file without ever opening it, and then make the .swf via the command line? I want to make a Flash template and save the .fla, then be able to update my text and image names and convert it to a .swf. I have one template but tons of different text options and background images. It would be nice to be able to copy the master.fla twenty times, just change the source code (I'll do this from the command line), and then convert to .swf (via the command line).
Any help would be appreciated.
With CS5, you can do half of what you're asking today, by using the XFL file format instead of FLA. Instead of a binary blob, you get an editable XML file and a tree of separate asset files: PNGs, AS3 files, etc. You can then modify the XML or AS3 files programmatically to get your variants.
(A CS5 FLA file is really just a zipped up version of the XFL, but there's no advantage to using that instead of an XFL. In CS4 and previous, FLA was a proprietary binary format.)
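If you go the XFL route, stamping out the variants can be scripted. The directory layout and placeholder string below are assumptions for illustration, so check where your text actually lives in DOMDocument.xml before relying on this:

```bash
#!/bin/bash
# Clone an uncompressed XFL authoring tree once per variant.
for variant in blue red green; do
  cp -r master.xfl "variant-$variant.xfl"
  # Swap a known placeholder string in the XML for this variant's text.
  find "variant-$variant.xfl" -name '*.xml' -exec \
      sed -i "s/PLACEHOLDER_TEXT/Text for $variant/g" {} +
  # Replace the background asset, keeping its library name intact.
  cp "backgrounds/$variant.png" "variant-$variant.xfl/LIBRARY/background.png"
done
```

Compiling the resulting trees still requires driving Flash Professional itself, as noted below.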
The missing piece is an XFL compiler. Adobe currently provides no such thing, and the third party market hasn't yet produced one.
You could use a systems automation tool to drive the Flash Professional environment through the compilation steps. On OS X, for example, either Automator or AppleScript should be able to do what you want. It'll just have more overhead than the command line compiler you were hoping for.
I agree with Jason: there are a lot of alternatives to what you suggest. Keeping content out of the SWF is actually good practice; it's a good way to avoid large files!
Depending on what you're looking to achieve, there are a lot of solutions available. XML is an option, JSON another.
If you're looking to build a template, any of the above would seem appropriate.
It sounds like you're working from the Flash IDE. As Jason suggests, you may want to have a look at another IDE, such as FlashDevelop, FDT or Flash Builder, as they make coding in AS3 a lot easier.

linux script, standard directory locations

I am trying to write a bash script to do a task. I have done pretty well so far and have it working to an extent, but I want to set it up so it's distributable to other people, and I will be releasing it as open source, so I want to start doing things the "conventional" way. Unfortunately I'm not all that sure what the conventional way is.
Ideally I want a link to an in-depth online resource that discusses this and surrounding topics, but I'm having difficulty finding keywords that will locate it on Google.
At the start of my script I set a bunch of global variables that store the names of the directories it will be accessing. This means I can modify the directories quickly, but these are programming shortcuts, not user settings; I can't tell users they have to fiddle with this stuff. Also, individual users' settings must not get wiped out on every upgrade.
Questions:
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate? Tweak the reference in the source of the dev version?
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be? The user will need to access some of this data, but it's mostly for internal use. I personally work in Cygwin and like to keep this data on a separate drive, so my path is weird, and I suspect many users will have similarly unusual setups. As a default, however, I'm thinking ~/gpsdata/ -- would this be normal, or should I hard-code a system that asks the user at first run where to put it and stores the answer in the settings folder? Whatever happens, I'm going to have to store the directory reference in a file in the settings folder.
The program needs a data "inbox": a folder where the user can dump files, then run the script to process them. I was thinking ~/gpsdata/in/? There will always be an option to pass a file or folder on the command line as well (it processes files from all listed locations, including the "inbox").
Where should the script itself go? It's already smart enough to create all of its ancillary/settings files (once I figure out the "correct" directory) if run with "./foo --setup". I could put it in /usr/bin/, /bin, or ~/.foo/bin (and add that to the path). What's normal?
I need to store login details for a web service that it connects to (using curl -u, if it matters). I plan on including a setting whereby it asks for a username and password on every execution, but it currently stores them in plain text in a file in ~/.foo/ -- I know, this is not good. The web service (osm.org) does support OAuth, but I have no idea how to get curl to use it; getting curl to speak to the service in the first place was a hack. Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with Notepad?
Sorry for the list of questions; I believe they are closely related enough for a single post. This is all stuff I am stabbing at, and I would like clarification/confirmation.
Name of settings folder: ~/.foo/ -- this is well and good, but how do I keep my working copy and my development copy separate?
Have a default of ~/.foo, and an option (for example --config-directory) that you can use to override the default while developing.
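A minimal way to wire that up inside the script, assuming the option name --config-directory suggested above:

```bash
#!/bin/bash
# Default settings folder, overridable for development or testing.
config_dir="$HOME/.foo"
while [ $# -gt 0 ]; do
  case "$1" in
    --config-directory) config_dir="$2"; shift 2 ;;
    *) break ;;                 # remaining args are files to process
  esac
done
mkdir -p "$config_dir"
```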
If my program needs to maintain and update a library of data (GPS tracklog data in this case), where should this directory be?
If your script is running under a normal user account, this will have to be somewhere in the user's home directory; elsewhere, you'll have no write permissions. Perhaps ~/.foo/tracklog or something? Again, add a command line option, and also an option in the configuration file, to override this.
I'm not a fan of your ~/gpsdata default; I don't want my home directory cluttered with all sorts of directories that programs created without my consent. You see this happen on Windows a lot, and it's really annoying. (Saved games in My Documents? Get out of here!)
The program needs a data "inbox": a folder where the user can dump files, then run the script to process them. I was thinking ~/gpsdata/in/?
As stated above, I'd prefer ~/.foo/inbox. Also with command-line option and configuration file option to change this.
But do you really need an inbox? If the user needs to run the script manually over some files, it might be better just to accept those file names on the command line. They could just be processed wherever, without having to move them to a "magic" location.
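That could look like the following, reusing the config_dir variable from the earlier sketch; process_tracklog is a hypothetical stand-in for the real work:

```bash
# Prefer files named on the command line; else fall back to the inbox.
inbox="$config_dir/inbox"
if [ $# -gt 0 ]; then
  files=("$@")
else
  files=("$inbox"/*)
fi
for f in "${files[@]}"; do
  process_tracklog "$f"
done
```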
Where should the script itself go?
This is usually up to the packaging system of the particular OS you're running on. When installing from source, /usr/local/bin is a sensible default that won't interfere with package managers.
Is there a simple way to do a really basic encryption on a file like this to deter idiots armed with notepad?
Yes, there is. But it's better not to, because it creates a false sense of security. Without a master password or something similar, secure storage is not possible! Pidgin, for example, deliberately stores passwords in plain text so that users won't make any false assumptions about their passwords being stored "securely". So it's best just to store them in plain text, complain if the file is world-readable, and add a clear note to the manual to warn the user what's going on.
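Checking the file mode is a one-liner on GNU systems (stat takes different flags on BSD/macOS, e.g. stat -f '%Lp'); the file name here is an assumption:

```bash
# Complain if anyone besides the owner can read the credentials file.
creds="$HOME/.foo/credentials"
perms=$(stat -c '%a' "$creds")
if [ "${perms: -2}" != "00" ]; then
  echo "warning: $creds is not private; consider: chmod 600 $creds" >&2
fi
```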
Bottom line: don't try to reinvent the wheel. Thousands of scripts and programs have faced the same issues; most of them ended up adopting the same conventions, and for good reasons. Look at what they do, and mimic them.
You can start with the Filesystem Hierarchy Standard. I'm not sure how well followed it is, but it does provide some guidance. In general, I try to use the following:
$HOME/.foo/ is used for user-specific settings - it is hidden
$PREFIX/etc/foo/ is for system-wide configuration
$PREFIX/foo/bin/ is for system-wide binaries
symlinks from $PREFIX/foo/bin are added to $PREFIX/bin/ for ease of use
$PREFIX/foo/var/ is where variable data lives - this is where your input spools and log files go
$PREFIX should default to /opt/foo even though almost everyone seems to plop stuff in /usr/local by default (thanks GNU!). If someone wants to install the package in their home directory, then substitute $HOME for $PREFIX. At least that is my take on how this should all work.
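Taken literally, that layout could be created at install time like so (foo standing in for the package name; a sketch, not a packaging script):

```bash
#!/bin/bash
# Create the directory skeleton described in the list above.
PREFIX=${PREFIX:-/opt/foo}
mkdir -p "$PREFIX/etc/foo" "$PREFIX/foo/bin" "$PREFIX/foo/var" "$PREFIX/bin"
install -m 755 foo "$PREFIX/foo/bin/foo"
# Symlink from $PREFIX/foo/bin into $PREFIX/bin for ease of use.
ln -sf "$PREFIX/foo/bin/foo" "$PREFIX/bin/foo"
```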
