Getting custom code out of CGI.pm - linux

We have a custom site written in Perl that uses a slightly modified version of CGI.pm. When the server is updated, the modified CGI.pm may be overwritten. My goal is to be able to update the server without affecting the site, i.e. to change the code so that it no longer relies on the modified CGI.pm. Unfortunately I am not very familiar with Perl syntax or best practices.
Here is a diff of the modified CGI.pm (line numbers are offset from the most recent version of CGI.pm):
--- CGI.pm.orig Tue Nov 7 12:14:09 2006
+++ CGI.pm Tue Nov 7 12:13:29 2006
@@ -3386,7 +3386,7 @@ sub read_multipart {
       if (defined $self->{'.upload_hook'})
       {
           $totalbytes += length($data);
-          &{$self->{'.upload_hook'}}($filename ,$data, $totalbytes, $self->{'.upload_data'});
+          &{$self->{'.upload_hook'}}($filename ,$data, $totalbytes, $self->{'.upload_data'}, $param, $header{'Content-Type'});
       }
       print $filehandle $data if ($self->{'use_tempfile'});
The code that relies on this diff follows:
my %file_hash = ();
my $page = new CGI(\&file_reader, \%file_hash, 0);
my $session = &get_session($page);

foreach my $param_name (keys %file_hash) {
    my $notes_param = $param_name . "_notes";
    &store_file($file_hash{$param_name}, $page->param($notes_param),
                &get_session_name($session));
}
Without the diff, when the file is stored, the $param_name variable appears to be empty.
What is the best way to handle this? As I said before, my main goal is to simplify updates to the server. Is there some way to either
a) (preferably) get %file_hash to work properly without a modified version of CGI.pm, or
b) prevent updates to CGI.pm (and is that a bad idea)?

Have you tried submitting this to the CGI RT as a desired improvement? It seems reasonable to pass $param and \%header (rather than $header{'Content-Type'}) to the callback.
To answer your question, you could install CGI in a local directory. The directories in the PERL5LIB env var are searched first, so your CGI version would be found instead of the one installed by Ubuntu.
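A minimal sketch of that route, assuming the cpanminus client is available (the paths are illustrative):

# Install CGI.pm into a site-local directory instead of the system perl:
cpanm --local-lib=/opt/mysite/perl5 CGI

# Make sure the web server's environment searches it first:
export PERL5LIB=/opt/mysite/perl5/lib/perl5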

I'm the CGI.pm maintainer. I think there is a better option than using a local copy of CGI.pm. Instead, create a subclass, and use that subclass in your application. It would look like this:
package My::CGI;
use parent 'CGI';

sub my_sub_that_modified {
    ...
}

1;
So, you just need to copy in the subroutine that you modified. Then, when you upgrade CGI.pm, you likely don't need to touch your application, unless your modified subroutine changes.
With the "local lib" option, you would still need to manually merge in security updates and other changes you might want to into your local copy. The sub-class route minimizes that.
An example of this approach is Bugzilla::CGI which makes several changes in their sub-class.
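Applied to the code in the question, the application side might then look like this (a sketch; My::CGI is the subclass name from the example above):

use My::CGI;   # loads the subclass that carries your patched subroutine

my %file_hash = ();
my $page = My::CGI->new(\&file_reader, \%file_hash, 0);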

Related

uninitialized constant OrigenTesters

I am working on migrating an application from rgen to Origen.
I added environment/j750.rb to my application, with the code below:
# environment/j750.rb
$tester = OrigenTesters::J750.new
In the target folder, I also added $tester as below:
$tester = OrigenTesters::J750.new
However, when I try to generate a pattern, it still fails with 'uninitialized constant OrigenTesters'.
When and how should it be initialized?
Thanks a lot in advance!
Normally this is something that Origen users don't particularly need to worry about. If you add:
gem 'origen_testers'
to your application's Gemfile, it will be required automatically, and a reference like OrigenTesters in your environment file (or anywhere else) will just work.
However, I note that you mention upgrading from an rgen app, which means that your application must be very old and in fact may not even have a Gemfile.
If you contact me internally I can send you the link to our intranet page which has a guide on how to update these really old apps.
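For reference, a minimal Gemfile along the lines described above might look like this (the source line is an assumption):

# Gemfile
source 'https://rubygems.org'

gem 'origen'
gem 'origen_testers'   # makes OrigenTesters::J750 available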

How to get the filename, along with the absolute path to the file, whenever a new file is created using inodes in Linux?

I am doing some experiments with my Linux OS (CentOS) and I want to track all the tool logs created under the same environment; the tool generates its logs with a .log extension. To track these changes I wrote a Perl watcher which monitors the directory I set and reports whenever a new file is created. But this consumes a lot of memory and CPU, as I have set a 2-second sleep period.
My question: is there a better way of doing this? I thought of using the inode table for tracking all the changes in the system. Can this solve my issue? And if yes, could you please suggest a solution along those lines?
It seems that you want to monitor a directory for changes. This is a complex job, but there are good modules for it. The easiest one to recommend is probably Linux::Inotify2:
This module implements an interface to the Linux 2.6.13 and later Inotify file/directory change notification system.
This seems to be along the lines of what you wanted.
Any such monitor needs additional event handling. This example uses AnyEvent.
use warnings;
use strict;
use feature 'say';

use AnyEvent;
use Linux::Inotify2;

my $dir = 'dir_to_watch';

my $inotify = Linux::Inotify2->new or die "Can't create inotify object: $!";

$inotify->watch( $dir, IN_MODIFY | IN_CREATE, sub {
    my $e    = shift;
    my $name = $e->fullname;
    say "$name modified" if $e->IN_MODIFY;  # Both show the new file
    say "$name created"  if $e->IN_CREATE;  # but see comments below
});

my $inotify_w = AnyEvent->io(
    fh => $inotify->fileno, poll => 'r', cb => sub { $inotify->poll }
);

1 while $inotify->poll;
If you only care about new files then you only need one constant above.
For both types of events the $name has the name of the new file. From man inotify on my system
... the name field in the returned inotify_event structure identifies the name of the file within the directory.
On the Perl side, that structure is represented by the Linux::Inotify2::Event object passed to the callback ($e above).
Using IN_CREATE seems to be an obvious solution for your purpose. I tested by creating two files, with two redirected echo commands separated by semi-colon on the same command line, and also by touch-ing a file. The written files are detected as separate events, and so is the touch-ed file.
Using IN_MODIFY may also work since it monitors (in $dir)
... any file system object in the watched object (always a directory), that is files, directories, symlinks, device nodes etc. ...
As for tests, both files written by echo as above are reported, as separate events. But a touch-ed file is not reported, since data didn't change (the file wasn't written to).
Which is better suited for your need depends on details. For example, a tool may open a log file as it starts, only to write to it much later. The two ways above will behave differently in that case. All this should be investigated carefully under your specific conditions.
One might suspect a race condition, since other files could slip in while the handler executes. But the module is better than that: it reports such changes once the handler completes. I tested by creating files while that code runs (and sleeps), and they are reported.
Some other notable frameworks for event-driven programming are POE and IO::Async.
The File::Monitor module does this kind of work, too.
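If polling is acceptable after all, a minimal File::Monitor sketch might look like this (the directory name and the 2-second interval are carried over from the question):

use warnings;
use strict;
use feature 'say';

use File::Monitor;

my $monitor = File::Monitor->new;

# files => 1 also tracks files appearing in (or vanishing from) the directory
$monitor->watch( {
    name     => 'dir_to_watch',
    files    => 1,
    callback => {
        files_created => sub {
            my ($name, $event, $change) = @_;
            say "$_ created" for $change->files_created;
        },
    },
} );

$monitor->scan;  # the first scan only establishes a baseline

while (1) {
    $monitor->scan;  # subsequent scans fire the callbacks
    sleep 2;
}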

Lua allows deleting files in CS2D game

Linux permissions can't stop Lua from going outside its own directory and into another.
Also, setting these values beforehand:
os.execute = nil
io.popen = nil
actually solves the problem, but it disables many functions; for example, my clients can no longer use their other saved scripts.
You could create a new Linux user with permissions only for the needed directory, and run the server as that user.
Otherwise there really is no solution: you can sandbox as much as you want, but if those functions are needed then you're out of luck.
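A minimal sketch of that setup (the account name, paths, and server binary name are assumptions):

useradd --system --home /srv/cs2d cs2d     # dedicated, unprivileged account
chown -R cs2d:cs2d /srv/cs2d/scripts       # the only directory it may write
sudo -u cs2d ./cs2d_dedicated              # run the game server as that user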
You could always create wrapper functions around the original os.execute before setting it to nil, so that you have more control over the actions taken. For example:
local osExecute = os.execute
os.execute = nil

os.createDir = function(dir)
    osExecute("mkdir " .. dir)
end

os.saveScript = function(script)
    osExecute("command to save script goes here")
end
Of course, this example is prone to attacks; you would need to validate the parameters to make sure the argument really is a folder name and not something like folder;rm -rf /.
The same goes for popen: you can always create your own restricted set of commands to execute.
You can then, of course, also create a user with restricted permissions for extra security.
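For example, os.createDir from the snippet above could reject anything but a plain directory name (the pattern here is an assumption; tighten it to whatever names you actually allow):

os.createDir = function(dir)
    -- accept only simple names: letters, digits, underscore, hyphen
    if type(dir) ~= "string" or not dir:match("^[%w_%-]+$") then
        error("invalid directory name: " .. tostring(dir))
    end
    osExecute("mkdir " .. dir)
end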

Define oh my zsh aliases per folder

I have a different name for my heroku remote depending on the project, and I would like to have an alias in each of my projects for pushing to heroku. Is there any way to define an alias per folder, so that inside each project folder the alias points to the right heroku remote?
I would suggest using a function instead of aliases.
# paths to projects
# - must be absolute
# - must not contain symlinks
# - must not end with "/"
# run `pwd -P` in the project root to get the correct value
PROJ_A="/path/to/project/a"
PROJ_B="/path/to/project/b"
pushit () {
case $(pwd -P) in
( $PROJ_A | $PROJ_A/* )
# Put command(s) needed to push A here
;;
( $PROJ_B | $PROJ_B/* )
# Put command(s) needed to push B here
;;
( * )
echo "You are not in a project directory"
;;
esac
}
Reasoning:
While it is possible to modify aliases every time the current working directory changes by utilizing the chpwd hook, you only need to know which command to use when you are actually pushing to heroku. So most of the time the aliases would be modified for nothing.
Using a function, the appropriate decision is made only when you actually need it. And as all of the logic used in the function would be needed anyway when going the chpwd route, you actually reduce the lines of code by using the function directly.
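For comparison, a sketch of the chpwd-hook approach (the remote names heroku-a and heroku-b are assumptions; PROJ_A and PROJ_B are the paths defined above):

autoload -Uz add-zsh-hook

set-pushit-alias () {
    case $PWD in
        ( $PROJ_A | $PROJ_A/* ) alias pushit='git push heroku-a master' ;;
        ( $PROJ_B | $PROJ_B/* ) alias pushit='git push heroku-b master' ;;
        ( * )                   unalias pushit 2> /dev/null ;;
    esac
}

add-zsh-hook chpwd set-pushit-alias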
Just activate the dotenv plugin:
plugins+=(dotenv)
ZSH_DOTENV_FILE=.any-name-you-like-for-the-per-directory-alias-file
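The plugin sources that file whenever you cd into the directory, so a per-project alias can simply live there (the remote name is an assumption):

# Contents of the file named by ZSH_DOTENV_FILE, in the project root:
alias pushit='git push heroku-a master'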
Also consider direnv, which lets you export environment variables in a per-directory .envrc, and unset them when you move away (which dotenv cannot do). direnv is compatible with all the popular shells. However, it doesn't currently let you define aliases or functions.
You can still combine Adaephon's approach with direnv, by creating a function (rather than an alias) that decides what to do based on one (or several) environment variables, which are in turn set by direnv.
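A sketch of that combination (the HEROKU_REMOTE variable name is an assumption):

# .envrc in a project directory, managed by direnv:
#   export HEROKU_REMOTE=heroku-a

# Function defined once in .zshrc:
pushit () {
    if [[ -n $HEROKU_REMOTE ]]; then
        git push "$HEROKU_REMOTE" master
    else
        echo "You are not in a project directory"
    fi
}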
Both approaches deal with the security aspect by having you manually whitelist directories in which the dot files will be sourced.
I'm over two years late, but for anyone looking for a similar solution, I recently finished a script that should enable you to do just what you are trying to do. Check it out: localalias.
It allows you to create and use pseudo-aliases and functions that are local to the directory they are defined in. Perhaps the best part is that it accomplishes this without performing any dicey source commands; source-based solutions are easier and faster, but they are a big security concern.

sync files between two VOBs (by clearfsimport) without checking in the updated files

I am using the following command to sync the B VOB's files from the A VOB:
clearfsimport -master -follow -nsetevent -comment $2 /vobs/A/xxx/*.h /vobs/B/xxx/
It works fine, but it checks in all the changes automatically. Is there a way to do the same task but leave the updated files in a checked-out state?
I want to update the files in B from A, build my program, and then recover the branch. If the updated files were left checked out, I could do an uncheckout (unco) later. With my command above everything is checked in, so I can't recover my branch.
Thanks.
As VonC said, it's impossible to prevent clearfsimport from doing the check-in, and he suggested using a label to recover afterwards.
In my case, the branch where I did the clearfsimport is branched from a label; let's call it LABEL_01. So I guess I can use that label for the recovery. Is there an easy way (one command) to recover the files under /vobs/B/xxx/ to label LABEL_01? I want to do it in my bash script, so the simpler the command, the better.
Thanks.
After having a look at the man page for clearfsimport: no, it isn't possible to prevent the checkins.
I would set a label before the clearfsimport, and modify the config spec so that the new versions are created in a branch (similar to this config spec).
That way, recovering the initial branch would be easy: none of the new versions would have been created in it.
