Distributing source files with an open source app - linux

I have written an open source (GPL) application for Linux and OSX and now wish to distribute it. Is it normal to distribute the source code along with the binaries by default, or just provide a link to where it can be obtained?
If I include the source files, where is the normal location for putting them on the user's system for Linux and OSX? (I thought /usr/local/src, but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty.)

It is usual to distribute the sources and binaries separately. Binaries would normally be distributed in distro-specific package formats whilst sources would be a simple .tar.gz containing a project folder. The user could unpack it to /usr/local/src if they wanted but it should build anywhere. It's not up to your program to drop its sources in any particular location.
I thought /usr/local/src but on my Ubuntu machine, supposedly chock-full of open source apps, this directory is empty
It will be empty if you are only using the Ubuntu repos. The OS is in charge of /usr and will drop any sources you install into /usr/src. But /usr/local is left for you to play with; that's where you install stuff that the distro doesn't provide.
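As a rough sketch of rolling the source release (the project name myapp-1.0 is just a placeholder, and the build step assumes a plain Makefile-based project):
# roll a versioned source tarball of the project directory
tar czf myapp-1.0.tar.gz myapp-1.0/
# a user can then unpack and build it anywhere they like
tar xzf myapp-1.0.tar.gz
cd myapp-1.0 && make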

About /usr/local/src
/usr/local and any subdirectories are always going to be empty on your machine unless YOU have specifically put something in there. It's a section of the filesystem that is reserved for user-installed software for that specific machine. Ubuntu (or any distribution) is not ever supposed to touch it.
Your distro will have separate places for its own source code, if any. Most Ubuntu installations won't need source code anyway (though you can download it if you want to), but if they do it'll go somewhere like /usr/src. But if you want to place your own source code somewhere and don't want your distro to mess with it, then just:
If it's just for developing/compiling in your own user account, you can just put it somewhere in your home directory.
If it's a piece of software you'll be installing on the system, /usr/local/src is the suggested spot and your distro won't mess with it there.
FHS is the standard that says where things go in the filesystem, and it includes distinctions such as the ones I've discussed above.
Your software should be able to be compiled no matter which directory it's in, because, as you can see, the location can vary.
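For instance, a user who wants to keep the sources on the system might do something like the following (a sketch assuming an autotools-style build; adjust for whatever build system you ship):
sudo tar xzf myapp-1.0.tar.gz -C /usr/local/src   # unpack where the distro won't touch it
sudo chown -R "$USER" /usr/local/src/myapp-1.0    # so you can build as a normal user
cd /usr/local/src/myapp-1.0
./configure --prefix=/usr/local                   # install under /usr/local, not /usr
make
sudo make install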

It's worth looking at a few projects on Sourceforge (http://www.sf.net). As mentioned by bobince, it's normal to distribute binaries and source separately. It's certainly kind to users not to require compilation, so they can just download and run.

Related

Where should user-specific programs be installed on Windows 7?

I'm probably just being very thick here, but it's not clear to me where I'm supposed to install 'new' user-specific programs on Windows 7 (and presumably Vista too, though I've not specifically looked at that scenario yet).
Under Windows XP (rightly or wrongly) we always installed our programs into folders under 'Program Files' and accepted that they'd be kind-of available to everyone. From what I can gather under Windows 7 I'm supposed to install my software under the user's AppData folder (possibly AppData\Local\MyApp). That makes a degree of sense, but the fact that this folder is 'hidden' by default means that we're going to have 'fun' talking our users through support stuff.
I want to install our software so that it's user specific (the Users bit in Windows 7 makes perfect sense) but I do want the user to be able to access it if required. Our program also includes a 'data' subdirectory which it needs to write into while it's running (embedded database), but as the program is intended to be single-user/standalone, the data folder being inside a user-specific folder isn't going to be a problem.
My problem is just that whole 'hidden folder' aspect of AppData. As much as I've trawled the MSDN, I can't work out where else I'm supposed to install user-specific programs. Taken one way it would seem to be something like AppData\Local\MyApp, and another way it would seem to be just as valid under the user's My Documents\MyApp equivalent.
Has anyone got a clear guide for where all this stuff goes? I found the MSDN docs confusing. :-)
Not really.
The directory that serves as a common repository for application-specific data for the current roaming user.
AppData is, surprisingly, for application data, not for installation (ClickOnce/Silverlight applications aside). You can, and should, still install into Program Files; just don't expect to write into that folder.
You can install software into AppData if you want it to follow a user about in an Active Directory environment, which happens if you put it in AppData\Roaming (the SpecialFolder.ApplicationData location).
You can also install into AppData if you want the software to be available to just the user that installs it. This can be useful if, for example, you have multiple users on the same machine, who all want to run different versions of the software in complete isolation.
If you want settings to only apply on the local machine then you use AppData\Local, which is SpecialFolder.LocalApplicationData - this will make AD administrators very happy as the roaming profile size won't suddenly jump up by 50 MB or whatever the size of your software is.
If you want to create settings which apply to all users then you're looking at SpecialFolder.CommonApplicationData.
You should remember never to rely on the actual name of the directory - localisation issues mean this can change, and the location does change with OS versions too. You should be using the special folder enumeration in your software, or the equivalent in your installer.
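As a rough illustration of those locations (read here from a Cygwin or Git Bash shell via the standard Windows environment variables; in .NET code you would call Environment.GetFolderPath with the SpecialFolder values above rather than hard-coding anything):
echo "Roaming per-user data:  $APPDATA"        # SpecialFolder.ApplicationData
echo "Local per-machine data: $LOCALAPPDATA"   # SpecialFolder.LocalApplicationData
# the all-users location (SpecialFolder.CommonApplicationData) is typically C:\ProgramData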
Could you not install into Program Files, but use AppData as it's supposed to be used, and store your database in there?
Windows 7 added the FOLDERID_UserProgramFiles known folder and by default this maps to %LOCALAPPDATA%\Programs. This is used by MSI when ALLUSERS=2 & MSIINSTALLPERUSER=1.
On Vista and earlier there is no canonical per-user application folder but just using %LOCALAPPDATA% is pretty common. Sadly MSI will just use %ProgramFiles% on these systems.
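For illustration, a hedged sketch of requesting that per-user behaviour when launching an install (myapp.msi is a placeholder, and the package must itself be authored to support per-user installs):
rem run from a Windows command prompt; per-user install where the package allows it
msiexec /i myapp.msi ALLUSERS=2 MSIINSTALLPERUSER=1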
It's 2019, and I just installed Visual Studio Code (a Microsoft product) in the default folder of
%userprofile%\AppData\Local\Programs\Microsoft VS Code
This is probably for getting around the requirement to have an administrator or UAC prompt authorise the installation.
Windows 7 folder structure is deeply inspired by the Unix structure:
/usr/ -> C:\Program Files\ -> binaries: executables and dynamically linked libraries
/etc/ -> C:\ProgramData\ -> global settings
/home/ -> C:\Users\ -> a folder for each user
~/.* -> C:\Users\Hikari\AppData\Roaming\ -> settings for each user
Windows has more folders, like My Documents for files with content produced by the user, and AppData Local and Roaming (which Unix usually handles with NFS).
It's about time for us developers to start using these structures. We must separate at least the binary files that don't need to be replicated, global settings, and user settings.
When a setup is installing an app, the setup should expect to have permission to write to Program Files. Once the setup is finished, Program Files should be writable only by other setups aiming to update the binaries to other versions.
Please install executable files to the %programfiles% folder in Windows - a simple MSI based install package can perform an active setup for any new user who logs onto the machine to create the user specific files and folders in their profiles %appdata% folder. You see this behaviour for Internet Explorer, Adobe reader, etc. - It's the little MSI installer window that pops up the first time you log onto a machine which has those applications installed. - Thanks - a system admin :)
My opinion, for what it's worth, is that user-specific program files is just asking for trouble and is a damn stupid thing to do.
A much more sensible approach is to install different versions of your program to:
\Program Files\Your Program\Program_v0.1\Program.exe
\Program Files\Your Program\Program_v0.2\Program.exe
\Program Files\Your Program\Program_v0.3\Program.exe
\Program Files\Your Program\Program_v0.4\Program.exe
I would then place a bootstrapping launcher at:
\Program Files\Your Program\ProgramLauncher.exe
Then, the user application data folder will only contain data, including an INI/XML/Settings file that indicates the version of the program that this user is working with.
Such an approach satisfies the core tenet of keeping data and executable code separate, allows every user to run a specific version of the code, and offers a small amount of de-duplication by ensuring the same executable code is not copied multiple times across user folders.
Otherwise, go right ahead with installing programs to AppData and undoing the years it has taken us to achieve clean separation of code and data. I found this thread because I noticed that Chromium and DropBox are installing code to AppData. I'm going to uninstall those programs, and change the permissions on my AppData folder to exclude execution to ensure I can easily spot other programs attempting the same BS.
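To make the launcher idea above concrete, here is a rough shell-style sketch of the logic only (the file name version.txt and the Git Bash style paths are purely illustrative; a real launcher would be a small native executable doing the same thing):
# read the version this user has chosen, falling back to a default
ver=$(cat "$APPDATA/Your Program/version.txt" 2>/dev/null || echo "v0.4")
# hand off to the matching per-version executable under Program Files
exec "/c/Program Files/Your Program/Program_$ver/Program.exe"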

How to correctly install binaries and data after compiling on Linux?

After running make on the sources I have a compiled executable and a data directory with images for it. What should I do in the "make install" phase to correctly install these files on the Linux system? And how can the application then find the installed data (in the case where the binary and data are placed in different directories)?
Are there any standards for this?
There are many ways to install packages on a Linux or Unix system, much like any other operating system. The normal method of installing software is through your distribution's package manager. Package managers differ between distributions, but in general they take a package (a file containing binaries, source code, or other files required for the piece of software to work) and place its contents into the corresponding places as defined by the Filesystem Hierarchy Standard (FHS).
When you do a make install, you are bypassing the package manager and placing the binaries into that hierarchy directly, making it nearly impossible for the package manager to handle or account for that program's existence. This is not a good thing for anyone, as it is hard to keep a system secure or stable with many unknown files placed throughout the system.
If you want to install something manually, please take a look at the Filesystem Hierarchy Standard and place the files under the appropriate folder: either under /opt (with a symlink in an area covered by your PATH variable) or under /usr/local/.
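As a minimal sketch of that manual route (myapp and its data directory are placeholder names):
sudo mkdir -p /opt/myapp                            # keep the whole application together under /opt
sudo cp myapp /opt/myapp/
sudo cp -r data /opt/myapp/
sudo ln -s /opt/myapp/myapp /usr/local/bin/myapp    # put the binary on the default PATH
The program itself can then locate its data either relative to its own binary or via an install prefix chosen at build time; either way, keep that path configurable rather than hard-coded.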

Setting up Cygwin via the GUI

I have a standalone server running Cygwin -- I did not set up this server, it was inherited. Anyway, I'd like to know what options the installing admin selected in the setup program.
I've read that I could look in /etc/setup, /etc/postinstall, or /etc/preremove but there are a lot of packages in those directories... same goes for the output of cygcheck -c.
I don't want to know every single library on the system... just how to duplicate the install. Is there a way to determine which packages were selected in the GUI setup program?
Thanks!
Cygwin is pretty standalone. You should be able to archive up the entire Cygwin directory (and subdirectories) and move it to the same location on another system.
If you archive it up I recommend 7-zip, which you can get for free. The built-in Windows archiver can create permission problems when an archive is extracted on a destination system, so I recommend 7-zip for both archiving and unarchiving. If you use the built-in Windows archiver, then move the archive to the new system and extract it, it will extract without errors; however, you may find things don't actually work right while using some Cygwin applications.
If you don't copy everything you won't move any of the original admin's custom changes.
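As a sketch of the archiving step with 7-Zip's command-line tool, run from a Windows command prompt (the paths assume a default C:\cygwin install and are only illustrative):
rem pack the whole Cygwin tree into a single archive
7z a cygwin-backup.7z C:\cygwin
rem unpack it to the same location on the destination system
7z x cygwin-backup.7z -oC:\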

Are there any reasons to not install binaries, etc in a user's home dir?

I'm a user on a shared computing environment. More often than not, the system doesn't have most of the libraries I need, or the binaries and programs are at least 4-5 versions old. It's so cumbersome to email the sysadmins each time to update packages, etc., that I've started installing them to a folder in my home dir.
My question is: are there any negatives to doing this? Can I also install the latest version of my shell to my home dir and chsh to use that? Certain packages have a lot of files. Will this affect login times (I presume the system has to stat() my entire home dir and check with quota)?
Typical practice is to have a ~/.bin directory with symlinks to your local executables. That way, you don't have to update your PATH for each new app, just the links.
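A minimal sketch of that layout (the ~/apps prefix and package name are placeholders, and the configure line assumes an autotools-style package):
mkdir -p ~/.bin                                        # one-time: a personal bin directory
echo 'export PATH="$HOME/.bin:$PATH"' >> ~/.bashrc     # put it on the PATH for future logins
# per package: install under your home directory, then link the executable
./configure --prefix="$HOME/apps/foo-2.1" && make && make install
ln -s ~/apps/foo-2.1/bin/foo ~/.bin/foo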
Yes, what you describe is a common practice, though usually one doesn't need the number of packages/libraries you seem to. Do be careful if applications or libraries make assumptions about where they're installed...
There should not be any significant effect on login times.

Double-click installer in Ubuntu?

I'm trying to update our installer so a user can simply double-click on a file and have all the dependencies and our software installed easily. This is a suite of applications that will be deployed on a clean Ubuntu 8.04 (Hardy Heron) installation. I have investigated making a .deb file, but listing the dependencies doesn't work, because there isn't any Internet access available. And, any script that would set up a local APT repository would still need to be run from the command line. Is there a way to put a .deb file inside of a .deb file?
I know many companies ship shell scripts that you have to chmod +x and then execute. This is not acceptable. It is ridiculous that this isn't possible, especially considering the distribution and architecture are fixed.
If you are totally confident that it will be installed on the same system every time, you can find the list of package dependencies yourself, fetch them from the Ubuntu repositories, and package them up with your software. You just have to be clear that your software is for a specific version, and probably deal with things like keeping up with maintenance releases.
You can also easily install with a script. As for your complaint about making scripts executable, well, I don't know how you're shipping your product, but since you say it's going somewhere without Internet access, I assume it's going to be copied from some kind of media. If you make the script executable when you put it on that media, you're done.
If you'd like to do this using packages, you can create a CD-ROM which contains a package repository. You can find all kinds of information on this with a Google search. For starters, try APTonCD - it's a GUI for doing it: http://aptoncd.sourceforge.net/
A makeself self-extracting executable that starts the install script using sudo will work.
The user can either run it from a terminal (after chmod-ing it) or can double-click it and tell it to "Run" from the prompt.
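A rough sketch of building such an installer with makeself (the directory and script names are illustrative; payload/ would contain your .deb files plus an install.sh that installs them, e.g. with the dpkg -i command shown further down):
# bundle the payload directory into a single self-extracting installer
makeself.sh payload/ myapp-installer.run "MyApp installer" ./install.sh
# the result unpacks itself to a temporary directory and runs ./install.sh from there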
It's possible to put deb-files into deb-files. The only thing you need to do is to configure the appropriate scripts.
A .deb-file consists of:
1x control.tar.gz: contains a file "control" (describes the package) and optional files like "postinst" (script executed right after extraction). There are other files you might include, and Google Search should deliver information about the available scripts.
1x data.tar.gz: contains the part of the root filesystem structure with the files/folders that need to be placed (or replaced). Additionally, you may configure the behaviour in the mentioned scripts.
1x debian-binary: as far as I remember, this is simply a version number in a file. I don't know exactly what it means; just remember that in most cases this is 2.0.
So you may now put your .deb files in the data package. They are extracted by your script... and installed using:
# dpkg -i yourpackage1.deb yourpackage2.deb
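For completeness, a rough sketch of assembling such a wrapper package with dpkg-deb (all names and paths are illustrative; the control and postinst files are the ones described above):
mkdir -p wrapper/DEBIAN wrapper/opt/myapp/bundled
cp control postinst wrapper/DEBIAN/              # package description and maintainer script
chmod 755 wrapper/DEBIAN/postinst                # maintainer scripts must be executable
cp deps/*.deb wrapper/opt/myapp/bundled/         # the .deb files you want to ship inside
dpkg-deb --build wrapper myapp-bundle.deb        # produces the wrapper .deb
The bundled packages are then installed with the dpkg -i line above; note that dpkg generally cannot be invoked from within another package's maintainer scripts (its database is locked while it runs), so an outer install script is the safer place for that call.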
