I created a Perl script that reads information from an XLSX sheet. It worked well on one machine but not on another, so I included a short debug section:
$sheetdate = ($sheet -> {Cells} [0] [$sheet->{MaxCol}]) -> value();
print "value: $sheetdate\n";
$sheetdate = ($sheet -> {Cells} [0] [$sheet->{MaxCol}]) -> get_format();
print "getformat: $sheetdate\n";
On one machine it printed:
value: 2016-01-18
getformat: yyyy-mm-dd
While on the other:
value: 1-18-16
getformat: m-d-yy
Same script, same worksheet, different results. I believe that something in the environment makes the difference, but I do not know what exactly.
Any hints?
"Same script, same worksheet, different results. I believe that something in the environment makes the difference, but I don not know what exactly."
You sort of indicate here yourself that you're not really seeking the solution to a Perl or XLSX problem so much as some assistance with troubleshooting your environment.
Without access to the environment it's difficult to offer a solution per se, but I can say this: you need to
1) Re-arrange things so that you do get the same result from both environments;
2) Identify a list of differences between the original, problem environment and the one that now "works"; and
3) Modify one thing on the list at a time - moving towards the environment that works - checking each time until it becomes clear what the key variable (not in a programming sense) is.
With regard to (1), take a look at Strawberry Perl. Using Strawberry, it's relatively easy to set up what some call Perl on a stick (see the Portable ZIP edition) - a complete Perl environment on a USB stick. Put your document on the same USB stick and then try it on the two machines - this time with absolute certainty of having the same environment. If different results persist, try booting from a "live environment" DVD (Linux or Windows as appropriate), and then using the USB stick.
Ultimately, I'd suggest there's something (such as a spreadsheet template) at play that differs between the environments. You just need to go through a process of elimination to find out what it is.
With the benefit of hindsight, I think it's worth revisiting this to produce a succinct answer for those who come across this problem in the future.
The original question was how a Perl script could produce two different results when the Excel data file fed into it is identical (which was confirmed with MD5 checksums). As programmers, our focus tends to be on the scripts we write and the data that goes into them. What slips to the back of the mind is the myriad of ways that Perl itself can be installed and configured.
The three things that should assist in determining where the difference between two installs lies are:
(1) Use Strawberry Perl on a stick as described above to take the environment out of the equation and thereby (if the problem "disappears") confirm that the problem is something to do with the environment.
(2) Use Data::Dumper liberally throughout to find where the flow of execution "forks."
(3) Compare the output of perl -V (note the capital V) to find out if there are differences in how the respective Perl installations were built and configured.
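As an illustration of (2), a minimal sketch (assuming Spreadsheet::XLSX and the first-row/last-column cell from the question; the workbook file name is just a placeholder) would be to dump the whole cell object on each machine and diff the two outputs:
use Data::Dumper;
use Spreadsheet::XLSX;

my $excel = Spreadsheet::XLSX->new('workbook.xlsx');   # placeholder file name
my $sheet = $excel->{Worksheet}[0];
my $cell  = $sheet->{Cells}[0][ $sheet->{MaxCol} ];

# Dump the entire cell object (value, format, type) so the two machines can be compared.
print Dumper($cell);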
The root cause of the problem was an outdated Spreadsheet::XLSX CPAN module installed as an RPM from the distribution repository. The RPM included version 0.13 of the module, while CPAN already had version 0.15, and the module's behaviour in this particular respect had changed in version 0.14. Once I replaced the pre-compiled module with the version downloaded directly from CPAN and built locally, the problem was solved.
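For anyone tracking down the same thing, two quick checks to run on each machine (assuming the module sits on the default @INC; the output file name is just a placeholder) are:
# Print the version of the module that this Perl actually loads.
perl -MSpreadsheet::XLSX -e 'print "$Spreadsheet::XLSX::VERSION\n"'

# Capture the build configuration for comparison between machines (point 3 above).
perl -V > perl-config.txt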
Related
I created an expect script for a customer, and I am afraid he will customize it however he wants without coming back to me, so I tried to encrypt it, but I didn't find a way to do that.
Then I tried to convert it to an executable, but some commands, such as "send", were not recognized by ActiveTcl, even though the script works perfectly on Red Hat.
So is there a way to protect my script from being read?
Thanks
It's usually enough to just package the code in a form that the user can't directly look inside. Even the smallest of speed bumps stops them.
You can use sdx qwrap to parcel your script up into a starkit. Those are reasonably resistant to random user poking, while being still technically open (the sdx tool is freely available, after all). You can convert the .kit file it creates into an executable by merging it with a packaged runtime.
In short, it's basically like this (with some complexity glossed over):
tclkit sdx.kit qwrap myapp.tcl
tclkit sdx.kit unwrap myapp.kit
# Copy additional assets into myapp.vfs if you need to
tclkit sdx.kit wrap myapp.exe -runtime C:\path\to\tclkit.exe
More discussion is here, the tclkit runtimes are here, and sdx itself can be obtained in .kit-packaged form here. Note that the runtime you use to run sdx does not need to be the same that you package; you can deploy code for other platforms than the one you are running from. This is a packaging phase action, not a compilation or linking.
Against more sophisticated users (i.e., not Joe Ordinary User) you'll want the Tcl Compiler out of the ActiveState TclDevKit. It's a code-obscurer formally (it doesn't actually improve the performance of anything) and the TDK isn't particularly well supported any more, but it's the main current solution for commercial protection of Tcl code. I'm on a small team working on a true compiler that will effectively offer much stronger protection, but that's not yet released (and really isn't ready yet).
One way is to keep the essential code running on your server as a back end. Just give the user a front-end application that makes requests to it. This way the essential processes are under your control, and the user cannot access that code.
I work from 2 different machines. One is Windows and the other is Linux. If I alternately work on the same project but switch between both OSes, will I eventually run into compiling errors? I ask because maybe there are standards supported by one but not by the other.
That question is a pretty broad one and it depends, strictly speaking, on your tool chain. If you were to use the same tool chain (e.g. GCC/MinGW or Clang), you'd be minimizing the chance of this class of errors. If you were to use Visual Studio on Windows and GCC or Clang on the Linux side, you'd run into more issues, if only because some of the headers differ. So once your program leaves the realm of strict ANSI C (C89), you'll be on your own.
However, if you aren't careful you may run into a lot of other more mundane errors, such as the compiler on Linux choking on Windows line endings if you didn't tell your editor on the Windows side to use Unix-style endings.
Ah, and also keep in mind that if you want to actually cross-compile, GCC may be the best choice and therefore the first part I mentioned in my answer becomes a moot point. GCC is a proven choice on both ends. And given your question it's unlikely that you are trying to write something like a kernel mode driver - which would be fundamentally different.
That may only be a problem if your application uses some platform-specific API.
It is entirely possible to write code that compiles and works on both platforms with no issues. It is, however, not without some difficulties. Compilers allow you to use non-standard features, and it's often hard to do fancier user interfaces (even if they are still just text), because as soon as you start wanting to do more than "read a line of text as it is entered in a shell", you're into "non-standard" land.
If you do find yourself needing to do more than what the standard C library can do, make sure you isolate those parts of the code into a separate file (or a couple of files, one for Linux/Unix style systems and one for Windows systems).
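For example, a minimal sketch of that isolation (the file and function names here are placeholders, not anything from the question) might look like this:
/* console.h - the common interface both platforms share */
#ifndef CONSOLE_H
#define CONSOLE_H
void clear_screen(void);
#endif

/* console_posix.c - only compiled into the Linux/Unix build */
#include <stdlib.h>
#include "console.h"

void clear_screen(void)
{
    system("clear");
}

/* console_win32.c - only compiled into the Windows build */
#include <stdlib.h>
#include "console.h"

void clear_screen(void)
{
    system("cls");
}
The rest of the program only ever includes console.h, so the platform difference never leaks outside those two files.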
Using the same compiler (gcc) would help avoid problems of the form "compiler B doesn't compile code that works fine in compiler A".
But it's far from an absolute necessity - just make sure you compile the code on both platforms, and with all of your "supported" compilers, often enough that you haven't dug a very deep hole before you discover that "it's not working on the other system". It certainly helps to have (at least) a virtual machine running the other OS, so you can easily try both variants.
Ideally, you want to set up an automated system, such that when you change the code [and feel that the changes are "complete"], it automatically gets built on both platforms and all compilers you want to use. And if possible, also automatically tested!
I would also seriously consider using version control - that way, when something breaks on one or the other side, you can go back and look at what the code looked like before it stopped working, and (hopefully) find the reason it broke much quicker than "Hmm, I think it's the change I made to foo.c, let's take that out... No, not that one, OK, how about the change here..." - at least with version control you can say "OK, so version 1234 doesn't work, let's try version 1220 - OK, that works. Now try 1228, still works - so the change is between 1229 and 1234 - try 1232, ah, it's broken..." No editing files, and you can still go to any other version you like with very little difficulty. I have used Mercurial quite a bit, git a little bit, some Subversion, and worked on a project in Perforce for a few years. All of these are good - personally, I think I prefer Mercurial.
As a side effect: most version control systems also deal with filenames and line endings in a saner way than doing this manually.
If you combine your version control system with an automated build and test system such as Jenkins, you can get everything very automated. Jenkins is free and runs on both Windows and Linux, and you can use it to automatically build and test your code as and when you submit it to the version control system.
It will not create a problem as long as you recompile the source code on the respective OS. If you want to run a compiled file generated on Windows (.exe or .obj) on Linux, or vice versa, that will definitely be a problem and won't be possible. But you can move your source code (files with extension .c/.cpp) to either OS. Different header files can also sometimes cause problems, so take care of that as well. Best practice is to use a single OS for your entire project and to avoid multiple OSes unless it is absolutely necessary.
I wonder if we can reduce, just a little bit, the effort around packaging
under Linux/Unix OS environments and software installation.
It is my stance that there is too much redundant effort about $subject.
I have been pondering ways to connect the build systems of $subject
with some "next stage" build tools, like easybuild (1) & openbuildservice (2);
read below for more details.
To be more specific, last week I was able to take pkgsrc's repository,
process the Makefiles via a tiny "pkg2eb" script and produce *.eb files
for easybuild, and then fed many parallel GCC compilations with them.
That "blindly-driven" process ended up with >600 successful builds,
i.e. these were packages that simply needed 'wget/configure/make/make install'.
That's not bad for a first run; I just wonder if it can be done any better.
So:
According to your experience, which OS has the cleanest/leanest
pkgsrc/port structure to be sourced & fed to other external tools?
This is NOT the same as which has the most available packages!
Have you heard of any similar efforts trying to massively produce
packages from e.g. a common source list in a structured manner?
(I mean, in a transferable way across different build systems)
So,
much relevant information is visible here:
http://www.mancoosi.org/edos/packages/ # lengthy description of various packaging formats
this one shows the higher level picture:
http://www.mancoosi.org/edos/suggestions/ (esp. 2.1.1 Expressivity shortcomings)
Anyway, to answer the original question, the best bets as of now are:
RPM's .spec files
DEB control files
pkgsrc; possible but some hackery is still needed
portage; quite clean, distinguishes between DEPEND and RDEPEND
macports; easy to parse; very detailed dependencies aspects
ports; like pkgsrc; multiple dependencies defined
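For a feel of why the .spec format is comparatively easy to source, here is a heavily trimmed, purely illustrative skeleton (the package name, dependencies and file list are made up):
Name:           example
Version:        1.0
Release:        1
Summary:        Illustrative example
License:        MIT
Source0:        %{name}-%{version}.tar.gz
BuildRequires:  gcc
Requires:       zlib

%description
Illustrative example.

%prep
%setup -q

%build
%configure
make

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/example
The declarative header (name, version, sources, build and runtime dependencies) is cleanly separated from the imperative %build/%install steps, which is what makes this format relatively easy for external tools to harvest mechanically.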
I'm still new to the Ada programming world so forgive me if this question is obvious.
I am looking at developing an application (in Ada, using the features in the 2005 revision) that reads from the serial port and basically performs manipulation of the strings and numbers it receives from an external device.
Now my intention was to likely use Florist and the POSIX terminal interfaces to do all the serial work on Linux first....I'll get to Windows/MacOS/etc... some other time but I want to leave that option open.
I would like to follow Ada best practices in whatever I do with this. So instead of a hack like conditional compilation under C (which I know Ada does not have anyway), I would like to find out how you are supposed to specify a change in package files from the command line (with gnatmake, for example).
The only thing I can think of right now is that I could name all platform packages exactly the same (i.e. package name Serial.Connector with the same filenames), place them in different folders in the project archive, and then at compile time specify the directories/libraries to look in for the files with the -I argument, changing directory names for different platforms.
This is the way I was shown for GCC using C/C++... is this still the best way with Ada using GNAT?
Thanks,
-Josh
That's a perfectly acceptable way of handling this kind of situation. If at all possible you should have a common package specification (or specifications if more than one is appropriate), with all the platform-specific stuff strictly confined to the corresponding package body variations.
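As a sketch of what that looks like on disk (the directory names are only an example, reusing the Serial.Connector name from the question):
src/common/serial-connector.ads     -- shared specification, identical on every platform
src/linux/serial-connector.adb      -- body implemented with POSIX/Florist
src/windows/serial-connector.adb    -- body implemented with the Win32 API

gnatmake -Isrc/common -Isrc/linux   main.adb    -- Linux build
gnatmake -Isrc/common -Isrc/windows main.adb    -- Windows build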
(If you did want to go down the preprocessor path, there's a GNAT preprocessor called gnatprep that can be used, but I don't like conditional compilation either, so I'd recommend staying with the separate subdirectories approach.)
You could use the GNAT Project file package Naming: an extract from a real example, where I wanted to choose between two versions of a package in the same directory, one with debug additions, is
...
type Debug_Code is ("no", "yes");
Debug : Debug_Code := External ("DEBUG", "no");
...
package Naming is
   case Debug is
      when "yes" =>
         for Spec ("BC.Support.Managed_Storage")
           use "bc-support-managed_storage.ads-debug";
         for Body ("BC.Support.Managed_Storage")
           use "bc-support-managed_storage.adb-debug";
      when "no" =>
         null;
   end case;
end Naming;
To select the special naming, either set the environment variable DEBUG to yes or build with gnatmake -XDEBUG=yes.
Yes, the generally accepted way to handle this in Ada is to do it with different files, selected by your build system. GNU make is about as multiplatform as it gets, and can allow you to build different files (with different names and/or directories and everything) under different configurations.
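A small sketch of how that selection could look in a makefile (the variable and directory names are only illustrative, following the layout sketched above):
# Pick the platform-specific source directory based on the host OS.
ifeq ($(OS),Windows_NT)
    PLATFORM_DIR := src/windows
else
    PLATFORM_DIR := src/linux
endif

all:
	gnatmake -Isrc/common -I$(PLATFORM_DIR) main.adb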
As a matter of fact, I find this a superior way (over #ifdefs) to do it in C as well.
Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
#dF comments that the technique might prevent you passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command line arguments that I type, including options ('-u' on my home-brew version) and targets 'someprogram' and macros CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
Another problem is that by using the Makefile this way you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic for building Makefile-based projects. Imagine if, for every project you built, you had to search for the differently named makefile someone created instead of just typing "make && make install".
You could use a shell alias for this too.
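For example (assuming a Bourne-style shell and the rebuild.mk name used above):
alias rebuild='make -f $HOME/bin/rebuild.mk'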
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed filename if you don't give it one? What if python looked for Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.