I'm trying to update our installer so a user can simply double-click a file and have all the dependencies and our software installed easily. This is a suite of applications that will be deployed on a clean Ubuntu 8.04 (Hardy Heron) installation. I have investigated making a .deb file, but listing the dependencies doesn't work, because there is no Internet access available. And any script that would set up a local APT repository would still need to be run from the command line. Is there a way to put a .deb file inside of a .deb file?
I know many companies ship shell scripts that you have to chmod +x and then execute. This is not acceptable. It is ridiculous that this isn't possible, especially considering that the distribution and architecture are fixed.
If you are totally confident that it will be installed on the same system every time, you can work out the list of package dependencies yourself, fetch them from the Ubuntu repositories, and package them up with your software. You just have to be clear that your software is for a specific version, and you'll probably have to deal with things like keeping up with maintenance releases.
You can also easily install with a script. As for your complaint about making scripts executable: I don't know how you're shipping your product, but since you say it's going somewhere without Internet access, I assume it's going to be copied from some kind of media. If you make the script executable when you put it on that media, you're done.
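For example, a minimal sketch of mastering such media, assuming the installer files are staged in a directory named staging/ and the disc is built with genisoimage (Rock Ridge extensions preserve the permission bits):
chmod +x staging/install.sh
genisoimage -rock -o installer.iso staging/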
If you'd like to do this using packages, you can create a CD-ROM that contains a package repository. You can find all kinds of information on this online. For starters, try APTonCD - it's a GUI for doing it: http://aptoncd.sourceforge.net/
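If you build the repository by hand instead, a minimal on-disc repository can be generated with dpkg-scanpackages (from the dpkg-dev package); the directory name repo/ here is just an assumption:
mkdir -p repo
cp *.deb repo/
cd repo
dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz
The target machine then needs a line like "deb file:/media/cdrom/repo ./" in its /etc/apt/sources.list before running apt-get update.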
A makeself self-extracting executable that starts the install script using sudo will work.
The user can either run it from a terminal (after chmod-ing it) or double-click it and choose "Run" at the prompt.
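As a rough sketch (the payload/ directory and install.sh script are assumptions; install.sh would invoke sudo itself):
makeself.sh ./payload mysuite-installer.run "My Suite installer" ./install.sh
This produces a single mysuite-installer.run file that extracts itself to a temporary directory and runs ./install.sh from there.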
It's possible to put .deb files inside a .deb file. The only thing you need to do is configure the appropriate maintainer scripts.
A .deb file consists of:
1x control.tar.gz: contains a file "control" (which describes the package) and optional files such as "postinst" (a script executed right after extraction). There are other files you might include, and a web search should turn up information about the available maintainer scripts.
1x data.tar.gz: contains a partial root-filesystem structure with the files/folders that need to be placed (or replaced). Additionally, you may configure the behaviour in the scripts just mentioned.
1x debian-binary: as far as I remember, this is simply a version number in a file - the version of the .deb format itself, which in most cases is 2.0.
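Since a .deb is just an ar archive, you can verify this structure yourself:
ar t yourpackage.deb
which should list debian-binary, control.tar.gz and data.tar.gz.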
So you can now put your .deb files in the data package. Those are extracted by your script... and installed using:
# dpkg -i yourpackage1.deb yourpackage2.deb
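Note that dpkg keeps its database locked while a package is being installed, so the inner packages cannot be installed from the outer package's own postinst; a top-level script has to drive it. A hedged sketch, assuming the outer package stores the bundled .debs under opt/mysuite/debs in its data.tar.gz:
dpkg-deb -x outer.deb /tmp/extracted
dpkg -i /tmp/extracted/opt/mysuite/debs/*.deb
Passing all the inner packages to a single dpkg -i invocation lets dpkg sort out the dependency ordering among them.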
On my website I need to give users the ability to download and easily start my executable (an AppImage binary) on Linux.
For the Windows version it is just an .exe, which works right after downloading and clicking:
Download file
Click on file in browser downloads panel to start it
For Linux, a user currently needs to do the following:
Download file
Open folder containing file from browser downloads panel
Right-click on it to add the exec permission
Click on "Allow execution of this file" checkbox
Press Ok
Click on file to start it
It is hard to explain this flow to a regular user; it drives users away.
Is it possible to reduce it to as few clicks as on Windows?
Any advice on achieving minimal clicks is appreciated. I can compile the app into any format (it's built on Electron, but I can process it before upload).
I thought about using .deb. That would limit the app to Debian-based distros only, but the main problem is that I did not find a way to run a post-installation step that starts the app, and I don't want to ask the user to go through the start menu.
Executable bits are a basic UNIX security measure, so it is not really easy to work around this (for good reason). Thinking about it, for the specific case of downloaded files, Windows also applies some restrictions (a special NTFS stream that tells Windows Explorer to warn about the dangers of an executable file from the Internet).
You can of course provide your application as a .tar.something archive and store executable files in there. After extraction, they will normally have the correct execution bits set.
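A minimal sketch, assuming the project folder is myapp/ and the binary inside it is also called myapp:
chmod +x myapp/myapp
tar -czf myapp-1.0.tar.gz myapp/
On the user's machine, tar -xzf myapp-1.0.tar.gz restores the executable bit, so ./myapp/myapp runs immediately.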
The option with the .deb package can also solve your problem (for some users) but is a little more complicated:
The user downloads the .deb package
The user clicks on the .deb package; some installed program (such as gdebi) provides a GUI for installing packages. As on Windows, there will be a "security check" in the form of a dialog box where the user needs to enter a (sudo) password. Afterwards, apt will install the package
If the package is created correctly, it transports the executable bit, so that no explicit permission change is needed afterwards. If for some reason there is a need to do something post-installation, Debian packages can provide postinst scripts, which run (as root!) at the end of the package's installation.
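For illustration, a hedged sketch of building such a package by hand with dpkg-deb; all names (myapp, the /opt/myapp install path, the version) are assumptions:
mkdir -p pkg/DEBIAN pkg/opt/myapp
cp myapp pkg/opt/myapp/
chmod 755 pkg/opt/myapp/myapp
printf 'Package: myapp\nVersion: 1.0\nArchitecture: amd64\nMaintainer: You <you@example.com>\nDescription: Example application\n' > pkg/DEBIAN/control
dpkg-deb --build pkg myapp_1.0_amd64.deb
The mode set with chmod here is recorded in data.tar.gz, so the binary is executable as soon as the package is installed; a pkg/DEBIAN/postinst file, if present, would run at the end of installation.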
In any case, as dealing with executable files is a common procedure on Linux, it might not scare away as many users as you expect. If you want to make it comfortable for users, provide the package the way they expect/like it. On Windows I would expect that to be an .msi package, and on Linux I prefer a package matching my distribution (.deb, .rpm).
If you want users to update their package regularly (good for security), it is helpful to provide a "repository" that users can add and install your package from. Of course, "the best" is having the package as part of the distribution, but that is quite some effort and needs to pass a lot of "quality assurance gates" :)
After building the sources I have a compiled executable and a data directory with images it needs. What should I do in the "make install" phase to correctly install these files onto a Linux system? And how can the application then find its installed data (given that the binary and data are placed in different directories)?
Are there any standards for this?
There are many ways to install packages on a Linux or Unix system, much like on any other operating system. The normal method of installing software is through your distribution's package manager. Package managers differ between distributions, but in general they take a package (a file containing binaries, source code, or other files required for the piece of software to work) and place its contents into the corresponding locations defined by the Filesystem Hierarchy Standard (FHS).
When you do a "make install", you are bypassing the package manager and placing the binaries into that hierarchy directly, making it nearly impossible for the package manager to handle or account for that program's existence. This is not a good thing for anyone, as it is hard to keep a system secure or stable with many unknown files placed throughout it.
If you want to install something manually, please take a look at the Filesystem Hierarchy Standard and place the files under the appropriate folder: either under /opt (with a symlink in an area covered by your PATH variable) or under /usr/local.
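As a concrete sketch of the second option (all names are assumptions), the install phase for a binary plus its data could be as simple as:
install -d /usr/local/bin /usr/local/share/myapp
install -m 755 myapp /usr/local/bin/
cp -R data/. /usr/local/share/myapp/
The conventional answer to "how does the binary find its data" is that the data directory (here /usr/local/share/myapp) is either baked in at compile time via the chosen prefix, or resolved relative to the binary's own location at run time.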
I have a standalone server running Cygwin - I did not set up this server; it was inherited. Anyway, I'd like to know what options the installing admin selected in the setup program.
I've read that I could look in /etc/setup, /etc/postinstall, or /etc/preremove, but there are a lot of packages in those directories... and the same goes for the output of cygcheck -c.
I don't want to know every single library on the system... just how to duplicate the install. Is there a way to determine which packages were selected in the GUI setup program?
Thanks!
Cygwin is pretty standalone. You should be able to archive up the entire Cygwin directory (and subdirectories) and move it to the same location on another system.
If you archive it up, I recommend 7-Zip, which you can get for free. The built-in Windows archiver can create permission problems when an archive is extracted on a destination system, so I recommend 7-Zip for both archiving and unarchiving. If you use the built-in Windows archiver, move the archive to the new system, and extract it, it will extract without errors; however, you may find things don't actually work right when using some Cygwin applications.
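For instance (a hypothetical invocation, assuming Cygwin lives in C:\cygwin and 7-Zip's command-line 7z.exe is on the PATH):
7z a cygwin-backup.7z C:\cygwin
Extracting with 7z x cygwin-backup.7z on the destination machine then recreates the tree without the permission mangling described above.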
If you don't copy everything, you won't carry over any of the original admin's custom changes.
The need
Recently I've started flirting with the idea of making my own customized Debian live distro. My aim is to have a USB stick with Debian, specific packages, custom scripts, and files installed inside. That way, I can take my OS with everything I need to work with, without taking my laptop with me. Furthermore, it will be especially useful whenever I just want to replicate the OS without the hassle of installing every single package and redoing the customizations all over again.
The research
So I decided to go for it and educate myself on the subject. I found the Linux From Scratch (LFS) project, but to be honest, it would take lots of time that I currently cannot afford to invest (though I'm seriously considering it for the future).
I decided to use the live-build project scripts, based on the instructions and examples in their manual: http://live.debian.net/manual/3.x/html/live-manual.en.html
The problem
So far, I've built a hybrid.iso image with a custom selection of packages by specifying them in the /config/packages-list/mylist.list.chroot.
Then I tried to copy my custom scripts, files, and software into specific folders under the newly created chroot directory,
e.g.
mkdir chroot/etc/skel/<custom dir here>
or
cp <some file or script> chroot/usr/local/bin/
and then run
lb build binary
The problem is that the ISO doesn't get built again after the first time I run lb build, and the customizations made in the chroot directory are deleted every time I try to build it again.
I've tried...
lb clean --binary
lb clean --stage
lb build binary
or
lb build binary iso
So what am I missing? How can I add custom files, folders, and scripts to my custom live Debian without downloading every single package all over again?
Why isn't the ISO image built again after the first time I run lb build?
Thanks in advance...
P.S.: I decided to be very detailed in the write-up so anyone could understand, especially those who want to try the same...
I am aware of LFS too. But this:
My aim is to have a USB stick with Debian, specific packages, custom
scripts, and files installed inside.
and this:
it would take lots of time that I currently cannot afford to invest
led me to my answer.
I have two suggestions. The easy one: use tools like Remastersys or live-magic.
Follow this link.
The difficult one: follow the official documentation on how to create a custom Debian CD.
Debian official doc
This answer comes a year late for the original poster, but for future searchers: don't add files directly to the chroot. Instead, make a folder structure in config/includes.chroot. Then your customizations will be retained when you rebuild the image.
See the section "Live/chroot local includes" in the debian-live manual: http://live.debian.net/manual/4.x/html/live-manual.en.html#506
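A hedged sketch of the layout (directory and script names are assumptions); everything under config/includes.chroot is overlaid onto the live filesystem's root at build time:
mkdir -p config/includes.chroot/etc/skel/mycustomdir
mkdir -p config/includes.chroot/usr/local/bin
cp myscript config/includes.chroot/usr/local/bin/
lb clean
lb build
After the includes are in place, lb clean followed by lb build regenerates the image with your files included, without touching the chroot directory by hand (and lb clean keeps the package cache, so nothing is re-downloaded).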
I have written an open source (GPL) application for Linux and OSX and now wish to distribute it. Is it normal to distribute the source code along with the binaries by default, or just provide a link to where it can be obtained?
If I include the source files, where is the normal location to put them on the user's system for Linux and OS X? (I thought /usr/local/src, but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty.)
It is usual to distribute the sources and binaries separately. Binaries would normally be distributed in distro-specific package formats whilst sources would be a simple .tar.gz containing a project folder. The user could unpack it to /usr/local/src if they wanted but it should build anywhere. It's not up to your program to drop its sources in any particular location.
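For example, a typical source release is nothing more than (names are assumptions):
tar -czf myapp-1.0.tar.gz myapp-1.0/
where the myapp-1.0/ folder carries the sources plus whatever build machinery (Makefile, configure script) the project uses.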
I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty
It will be empty if you are only using the Ubuntu repos. The OS is in charge of /usr and will drop any sources you install into /usr/src. But /usr/local is left for you to play with; that's where you install stuff that the distro doesn't provide.
About /usr/local/src
/usr/local and its subdirectories are always going to be empty on your machine unless YOU have specifically put something there. It's a section of the filesystem reserved for software installed by the user of that specific machine. Ubuntu (or any distribution) is never supposed to touch it.
Your distro will have separate places for its own source code, if any. Most Ubuntu installations won't need source code anyway (though you can download it if you want), but if they do, it'll go somewhere like /usr/src. If you want to place your own source code somewhere and don't want your distro to mess with it, then:
If it's just for developing/compiling in your own user account, you can just put it somewhere in your home directory.
If it's a piece of software you'll be installing on the system, /usr/local/src is the suggested spot, and your distro won't mess with it there.
The FHS is the standard that says where things go in the filesystem, and it includes distinctions such as the ones I've discussed above.
Your software should be able to be compiled no matter which directory it's in because, as you can see, the location can vary.
It's worth looking at a few projects on SourceForge (http://www.sf.net). As mentioned by bobince, it's normal to distribute binaries and source separately. It's certainly kind to users not to require compilation, so they can just download and run.