Our Jenkins process builds and produces an MSI. When it first builds, it produces an MSI (say, 500kb.msi), which is already digitally signed. After that it re-signs it (not sure why) and generates another MSI (say, 496kb.msi); the second MSI is 4 KB smaller. I have extracted both MSIs using the lessmsi tool, compared the extracted contents, and found that both are exactly the same.
The problem is that when the client tries to install 496kb.msi, it does not produce the expected behaviour. The only difference between them I can see is the digital signature, and both have been signed with SHA-1.
Any help with this would be appreciated; however, my question is: what is an installer bundled with apart from files and folders? Before extraction there was a 4 KB difference, and after extraction the sizes of both are exactly the same. Where are these 4 KB consumed in the first MSI?
An MSI can be very complex, not just a simple file/folder manager. It can have custom actions (code) that modify your system, or predefined MSI tables that also modify the system.
The easiest way to find the difference is to diff the MSIs with SuperOrca, a free tool; it should highlight the tables that differ.
I created an Expect script for a customer, and I fear he will customize it however he wants without coming back to me, so I tried to encrypt it, but I didn't find a way to do that.
Then I tried to convert it to an executable, but some commands, like the "send" command, were not recognized by ActiveTcl, even though the script works perfectly on Red Hat.
So is there a way to protect my script from being read?
Thanks
It's usually enough to just package the code in a form that the user can't directly look inside. Even the smallest of speed bumps stops them.
You can use sdx qwrap to parcel your script up into a starkit. Those are reasonably resistant to random user poking, while being still technically open (the sdx tool is freely available, after all). You can convert the .kit file it creates into an executable by merging it with a packaged runtime.
In short, it's basically like this (with some complexity glossed over):
tclkit sdx.kit qwrap myapp.tcl
tclkit sdx.kit unwrap myapp.kit
# Copy additional assets into myapp.vfs if you need to
tclkit sdx.kit wrap myapp.exe -runtime C:\path\to\tclkit.exe
More discussion is here, the tclkit runtimes are here, and sdx itself can be obtained in .kit-packaged form here. Note that the runtime you use to run sdx does not need to be the same that you package; you can deploy code for other platforms than the one you are running from. This is a packaging phase action, not a compilation or linking.
Against more sophisticated users (i.e., not Joe Ordinary User) you'll want the Tcl Compiler out of the ActiveState TclDevKit. It's a code-obscurer formally (it doesn't actually improve the performance of anything) and the TDK isn't particularly well supported any more, but it's the main current solution for commercial protection of Tcl code. I'm on a small team working on a true compiler that will effectively offer much stronger protection, but that's not yet released (and really isn't ready yet).
One way is to keep the essential code running on your server as a back-end and just give the user a front-end application that makes requests to it. This way the essential processes are under your control, and the user cannot access that code.
InstallShield allows two project file types in a Basic MSI project: XML and Binary.
What should discourage me from using the XML project file type over the Binary type?
I have the two reasons listed below, but are these correct?
Slower opening and closing times on big project files
May slow down the InstallShield compile/build process
Build time is not affected.
Loading time is also not that different. I just checked with one of my projects, which has about 6,000 files in it, and the time to open it seems almost identical (maybe 1-2 seconds slower for the XML project).
There are two potentially significant differences, depending on how you use your projects:
As a poorly kept secret, the Binary format is compatible with Windows Installer automation and editing tools, but the XML format is generally not.
However the XML format is more amenable to text-based viewing, editing, source control, or XML-based automation.
There are several additional differences I know about that I consider insignificant:
The representations differ, so you will see size differences in the resulting project files.
It is a conversion: InstallShield works internally with the Binary format, so it takes some (minimal) extra time to load and save an XML project file.
The code that converts internally between XML and Binary representation is well-used, but it's still a conversion. That's additional complexity, so loading and saving as XML has more points of potential failure.
There are some slight behavioral differences after deleting a record "in the middle." Saving and loading through XML will normalize this and add a new record at the end; working in Binary may reuse the deleted record's spot for the next record. However, record order is not semantically significant.
Let's say I have 2 features, both use abc.dll, and both reference it from their respective current directories.
So the output will look like this:
Feature1
  abc.dll
Feature2
  abc.dll
I've created 2 components for this. In reality I have many features and many DLLs that are shared, and my installer size is nearly 1 GB.
What I am looking for is a smarter way to do this, using InstallShield 2015 Professional.
What I've looked at so far:
Merge modules: not sure if this would work; it also means I would need to maintain the merge modules manually should files be upgraded.
The DuplicateFile table, via the Direct Editor, but this wouldn't work because there is no way to bind it to a feature, only to a component.
A hidden feature that installs the shared files to the target system, followed by a post-install script that copies these files to their respective features and deletes the hidden feature's folder.
Is there a best practice method to implement what I need?
The most suitable approach in this case is, indeed, merge modules. I am not sure why you are concerned about maintaining them - you should have an automated build process that creates all the merge modules and then builds your main installer with the newly created modules.
However, in my opinion, merge modules are a bit cumbersome to use if you have a lot of custom actions.
An alternative to merge modules - assuming you are using a Windows Installer project - is using small MSI packages which you "chain" to your main installer (you can chain multiple packages with different conditions and supply different properties). Here too, you should have a build process which builds all those small MSI packages and then builds the main installer.
If you don't want this kind of 'sub-project', then the option of a hidden feature with a post action is acceptable; I've seen it, and done it, a few times. Note that if you target Windows 7 or later, instead of physically copying the files and deleting them, you can use symbolic links (using the mklink command), which helps reduce the installation's footprint on the target system (and makes patching easier - you replace the original file, and all its links are updated automatically).
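For example, a per-file link might look like this (hypothetical paths; mklink creates a file symbolic link by default and needs an elevated prompt):
mklink "C:\Program Files\MyApp\Feature2\abc.dll" "C:\Program Files\MyApp\Shared\abc.dll"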
I'm working on a project now in which I configure the cabal file to build several executables which share the library built by the same cabal file. The cabal project is structured much like this one, with one library section followed by several executable sections that include this library in their build-depends sections.
I'm using this approach so I can make common functions available to any number of executables, and create more executables easily as needed.
Yet in his Monad Reader article on Hoogle (p. 33), Neil Mitchell advocates bundling Haskell projects into a single executable with multiple modes (e.g. by using his CmdArgs library). So there might be one mode to start a web server, another mode to query the database from the command line, etc. Quote:
Provide one executable
Version 3 had four executable programs – one to generate ranking information, one to do command line searching, one to do web searching, and one to do regression testing. Version 4 has one executable, which does all the above and more, controlled by flags.
There are many advantages to providing only one end program – it reduces the chance of code breaking without noticing it, it makes the total file size smaller by not duplicating the Haskell run-time system, and it decreases the number of commands users need to learn. The move to one multipurpose executable seems to be a common theme, with tools such as darcs and hpc both being based on one command with multiple modes.
Is a single multimode executable really the better way to go? Are there countervailing reasons to stick with separate executables sharing the same library?
Personally, I'm more of a fan of the Unix philosophy: "write programs that do one thing and do it well". However, there are reasons for going either way, so the only reasonable answer here is: it depends.
One example where it makes sense to bundle everything into the same executable is when you're targeting a platform that is very limited in resources (e.g., an embedded system). This is the approach taken by BusyBox.
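As a rough illustration of that pattern - a C sketch rather than Haskell, with made-up program and mode names - a BusyBox-style multi-call binary dispatches on the name it was invoked as, falling back to a subcommand argument:

#include <stdio.h>
#include <string.h>

/* Hypothetical modes of the single bundled executable. */
static int do_server(void) { puts("starting web server..."); return 0; }
static int do_query(void)  { puts("querying database...");   return 0; }

int main(int argc, char **argv) {
    /* BusyBox keys off argv[0]: symlinks named "server" or "query"
       pointing at this one binary select the mode. When invoked
       directly, fall back to the first command-line argument. */
    const char *slash = strrchr(argv[0], '/');
    const char *name = slash ? slash + 1 : argv[0];
    if (strcmp(name, "server") != 0 && strcmp(name, "query") != 0)
        name = (argc > 1) ? argv[1] : "";

    if (strcmp(name, "server") == 0) return do_server();
    if (strcmp(name, "query")  == 0) return do_query();
    fprintf(stderr, "usage: %s {server|query}\n", argv[0]);
    return 1;
}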
On the other hand, if you divide things into multiple executables, you give your clients the option of using just those that matter to them. With a single executable, even if your client really wants just one piece of functionality, there is no way to get rid of the extra baggage.
I'm sure there are a lot more reasons for going either way, but this just goes to show that there's no definitive answer. It depends on the use case.
I'm in the process of switching to Linux for development, and I'm puzzled about how to maintain a good FHS compliancy in my programs.
For example, under Windows, I know that all the resources (bitmaps, audio data, etc.) that my program needs can be found via relative paths from the executable, so it's the same whether I'm running the program from my development directory or from an installation (under "Program Files", for example); the program will be able to locate all its files.
Now, under Linux, I see that usually the executable goes under /usr/local/bin and its resources under /usr/local/share. (And the truth is that I'm not even sure of this.)
For convenience reasons (such as version control) I'd like to have all the files pertaining to the project under the same path, say, for example, project/src for the source and project/data for resource files.
Is there any standard or recommended way to let me just rebuild the binary for testing and use the files in the project/data directory, while still being able to locate the files when they are under /usr/local/share?
I thought, for example, of setting a symlink under /usr/local/share pointing to my resources dir and then hardcoding that path inside my program, but that feels quite hackish and not very portable.
I also thought of running an install script that copies all the resources to /usr/local/share every time I change or add resources, but that doesn't feel like a good way to do it either.
Could anyone tell me, or point me to a resource explaining, how this issue is usually resolved?
Thanks!
For convenience reasons (such as version control) I'd like to have all the files pertaining to the project under the same path, say, for example, project/src for the source and project/data for resource files.
You can organize your source tree as you wish — it need not bear any resemblance to the FHS layout desired of installed software.
I see that usually the executable goes under /usr/local/bin and its resources under /usr/local/share. (And the truth is that I'm not even sure of this.)
The standard prefix is /usr. /usr/local is for, well, "local installations" as the FHS spec reiterates.
Is there any standard or recommended way to let me just rebuild the binary for testing and use the files on the project/data directory
Definitely. Running, for example, ./configure --datadir=$PWD/share is the way to point your build at the data files in the source tree (substitute the proper path), and using something like -DDATADIR="'${datadir}'" in AM_CFLAGS makes the value known to the (presumably C) code. (All of that provided you are using autoconf/automake; similar options may be available in other build systems.)
This sort of hardcoding is what is used in practice, and it suffices. For a development build within your own working copy, having a hardcoded path should not be a problem, and final builds (those done by a packager) will simply use the standard FHS paths.
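For instance, a minimal sketch of how the C code might consume that macro (the fallback value here is an assumption for ad-hoc builds without the define):

#include <stdio.h>

/* Normally supplied by the build system, e.g. via
   -DDATADIR="'${datadir}'" in AM_CFLAGS. */
#ifndef DATADIR
#define DATADIR "data"   /* assumed fallback for ad-hoc builds */
#endif

int main(void) {
    /* Adjacent string literals concatenate into one path. */
    FILE *f = fopen(DATADIR "/blabla.conf", "r");
    if (!f) {
        perror(DATADIR "/blabla.conf");
        return 1;
    }
    fclose(f);
    return 0;
}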
You could just test a few locations. For example, first check if you have a data directory within the directory you're currently running the program from. If so, just go ahead and use it. If not, try /usr/local/share/yourproject/data, and so on.
For developing/testing, you can use the data directory within your project folder, and for deploying, use the stuff in /usr/local/share/. Of course, you can test for even more locations (e.g. /usr/share).
Basically, the requirement for this method is that you have a function that builds the correct path for every filesystem access. Instead of fopen("data/blabla.conf", "w"), use something like fopen(path("blabla.conf"), "w"). path() constructs the correct path from the directory determined by the tests performed when the program started. E.g., if that directory was /usr/local/share/yourproject/data/, the string returned by path("blabla.conf") would be "/usr/local/share/yourproject/data/blabla.conf" - and there is your nice absolute path.
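A minimal C sketch of that idea, combining the directory probing described above with the path() helper (names and buffer sizes are illustrative):

#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Set once at startup by the directory tests described above. */
static const char *data_root;

/* Hypothetical helper: prefix a file name with the detected data
   directory. Uses a static buffer, so it is not reentrant - fine
   for a sketch. */
static const char *path(const char *name) {
    static char buf[1024];
    snprintf(buf, sizeof buf, "%s%s", data_root, name);
    return buf;
}

static void detect_data_root(void) {
    struct stat st;
    /* Prefer a local data/ directory (development build), else
       fall back to the installed location. */
    if (stat("data", &st) == 0 && S_ISDIR(st.st_mode))
        data_root = "data/";
    else
        data_root = "/usr/local/share/yourproject/data/";
}

int main(void) {
    detect_data_root();
    FILE *f = fopen(path("blabla.conf"), "w");
    if (f) fclose(f);
    return 0;
}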
That's how I'd do it. HTH.
My preferred solution in cases like this is to use a configuration file, along with a command-line option that overrides its location.
For example, a configuration file for a fully deployed application named myapp could reside in /etc/myapp/settings.conf and a part of it could look like this:
...
confdir=/etc/myapp/
bindir=/usr/bin/
datadir=/usr/share/myapp/
docdir=/usr/share/doc/myapp/
...
Your application (or a launcher script) can parse this file to determine where to find the rest of the needed files.
I believe that you can reasonably assume in your code that the location of the configuration file is fixed under /etc/myapp - or any other location specified at compile time. Then you provide a command line option to allow that location to be overridden:
myapp --configfile=/opt/myapp/etc/settings.conf ...
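A minimal C sketch of that pattern, using the file format and option shown above (error handling and storage of the parsed values are trimmed):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    /* Compile-time default; --configfile=... overrides it. */
    const char *conf = "/etc/myapp/settings.conf";
    for (int i = 1; i < argc; i++)
        if (strncmp(argv[i], "--configfile=", 13) == 0)
            conf = argv[i] + 13;

    FILE *f = fopen(conf, "r");
    if (!f) {
        /* One well-documented location; abort with an informative
           message if it cannot be found. */
        fprintf(stderr, "myapp: cannot open %s\n", conf);
        return 1;
    }

    char line[512], key[64], val[448];
    while (fgets(line, sizeof line, f))
        if (sscanf(line, " %63[^=\n]=%447s", key, val) == 2)
            printf("%s -> %s\n", key, val);  /* a real app would store these */
    fclose(f);
    return 0;
}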
It might also make sense to have options for some of the directory paths as well, so that the user can easily override any of the configuration file settings. This approach has a couple of advantages:
Your users can relocate the application very easily - just by moving the files, modifying the paths in the configuration file and then using e.g. a wrapper script to call the main application with the proper --configfile option.
You can easily support FHS, as well as any other scheme you need to.
While developing, you can have your testsuite use a specially crafted configuration file with the paths being wherever you need them to be.
Some people advocate probing the system at runtime to resolve issues like this. I usually suggest avoiding such solutions for at least the following reasons:
It makes your program non-deterministic. You can never tell at first glance which configuration file it picks up - especially if you have multiple versions of the application on your system.
In the event of an installation mix-up, the application will remain fat and happy - and so will the user. In my opinion, the application should look at one specific and well-documented location and abort with an informative message if it cannot find what it is looking for.
It's highly unlikely that you will always get everything right. There will always be unexpected rare environments or corner cases that the application will not handle.
Such behaviour is against the Unix philosophy. Even command shells that probe multiple locations do so because each of those locations can hold a file that should be parsed.
EDIT:
This method is not mandated by any formal standard that I know of, but it is the prevalent solution in the Unix world. Most major daemons (e.g. BIND, sendmail, postfix, INN, Apache) will look for a configuration file at a certain location, but will allow you to override that location and - through the file - any other path.
This is mostly to allow the system administrator to implement whatever scheme they want or to set up multiple concurrent installations, but it does help during testing as well. This flexibility is what makes it a best practice, if not a proper standard.