Currently, I'm using Node to write a script that checks whether files meet our standards; one requirement is that audio files must not exceed a maximum loudness. For this I'm using ffmpeg via child_process's exec function. The problem is that this would require every user to install and configure ffmpeg on their own machine. Is there any way to bundle ffmpeg with the app?
The ffmpeg-cli module (https://www.npmjs.com/package/ffmpeg-cli) claims to do precisely this. From the documentation, it "will download and extract necessary binaries for your OS!".
Hans's suggestion was fantastic and may be helpful to others, but I found this one, which better suited my problem: https://www.npmjs.com/package/ffmpeg-static
It simply downloads the proper binary on npm install and gives you the path to it.
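As a minimal sketch of the loudness check itself, assuming ffmpeg-static and ffmpeg's ebur128 filter (the -23 LUFS threshold and the file name are placeholders for your own standard):

    const { execFile } = require("child_process");
    const ffmpegPath = require("ffmpeg-static"); // absolute path to the bundled binary

    function measureLoudness(file, callback) {
      // ebur128 prints an EBU R128 summary (integrated loudness included)
      // to stderr; "-f null -" discards the decoded output itself
      execFile(ffmpegPath, ["-i", file, "-af", "ebur128", "-f", "null", "-"],
        (err, stdout, stderr) => {
          if (err) return callback(err);
          const match = stderr.match(/I:\s*(-?[\d.]+)\s*LUFS/);
          if (!match) return callback(new Error("could not parse loudness"));
          callback(null, parseFloat(match[1]));
        });
    }

    measureLoudness("song.wav", (err, lufs) => {
      if (err) throw err;
      console.log(lufs <= -23 ? "OK" : "too loud: " + lufs + " LUFS");
    });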
I'm looking for the proper way to distribute/deploy a node.js app that would run as a small webserver on the user's machine.
Is there a stub method, install script, or "install wizard" that would download all node_modules dependencies, download the latest Node.js binary, set up the environment, etc., or do I have to distribute it in bulk with everything packed in? Is there any guide for that purpose?
Edited:
You could install Node and npm, then download your dependencies by running npm install on the command line (after declaring them in your package.json); only then can users run your script. This is how you do development in Node.js, or deploy to a development server. See using npm. You could automate that with a shell script if that is what you are after.
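For example, a minimal package.json might look like this (the names and the express dependency are illustrative):

    {
      "name": "my-server",
      "version": "1.0.0",
      "dependencies": {
        "express": "^4.18.0"
      },
      "scripts": {
        "start": "node server.js"
      }
    }

After npm install, npm start launches the app with its dependencies in place.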
However, when distributing programs to end-users that might not be the best approach. Linux users are used to a package (.deb for instance) and Windows users are used to an .exe or a setup wizard.
That is why I recommended the tools below. I also assumed you were targeting Windows, as this is less of a problem in unix-like environments.
If you want a single file (.exe), pkg and nexe are made for that purpose. These Node.js tools are used by the developer to compile JavaScript code into a single executable binary that is convenient for end users and for Windows deployment. The resulting .exe file is very light and does not require Node to be installed on the end user's computer.
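The pkg workflow is roughly this (the target triple and file names are examples; check pkg's documentation for the current target list):

    npm install -g pkg
    pkg index.js --targets node16-win-x64 --output myapp.exe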
Electron, along with electron-packager, can produce setup wizards, but it installs a lot of files even for the smallest program. Your program will include all of Node and Chromium, which is why it produces heavy installs.
NSIS can also create a setup wizard; it is simple and does common stuff well (copying files, running shell scripts).
Original answer:
Short answer is: not really.
You have to keep in mind that JavaScript is, and has always been, an interpreted language, so until recently it was never compiled to a binary the way you might with other languages. Some exploration has been going on, but essentially you won't get a "good practice" answer.
The long answer is, maybe, for some limited use cases:
There is the fresh new pkg, which does exactly this, and it looks promising.
nexe has been around for a while; it works great for some use cases (maybe yours). Native/compiled modules are still an issue, however.
Electron might work for a full-blown app with a significant user interface, but it is not light or compact.
You could always use browserify to concatenate and uglify all your code along with the modules you use, and then make an installer with something like NSIS to set up Node and your script. Native modules would still be a problem, however.
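That browserify step might look like this (file names are illustrative; --node turns off the browser-oriented defaults):

    npm install -g browserify
    browserify main.js --node -o bundle.js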
I am still new to the Electron ecosystem and desktop development in general, but what I wish to do is interface with a third-party, open-source application that comes bundled with my software. First, I am unsure what the packaging options for distribution should be. Is it customary to have two downloads, one for users that already have the third-party binary installed, and another that includes it? Also, how do I go about actually packaging and installing the binary? Should this be an option in my package.json? What kind of script should I execute? Are there any npm modules to facilitate this?
Edit: is it possible to invoke npm from my main.js even though a user has not previously installed Node? I know Node is bundled with the Electron package, but is npm too?
The binary in this case is PostgreSQL.
There are a couple of options coming to my mind.
Bundle a 3rd-party installer with your app. This is what I did recently. On first run I check whether the service I need is installed/running, and if not I launch the bundled 3rd-party installer / start it. When the installer quits I simply app.relaunch() and start consuming it. Of course you'll need installers for each platform you plan to support, and you'll have to figure out how to check whether the software is (properly) installed on each platform. (See the first sketch after this list.)
Bundle binaries with your app. Of course you can bundle pretty much anything with your Electron app. Again, you'll need binaries for each platform you plan to support, and they shouldn't link against anything the average user won't have on their machine, like SDKs or additional headers. (See the second sketch after this list.)
Less comfortable, but you can always add a start-up or pre-download message telling the user that they need software XY in order to run your application.
A derivative of 1/2: download the required stuff on demand. For your example, this would mean checking the user's OS and arch and then downloading only the required installers or binaries, if available. You could also build the stuff on the user's machine, although that is probably the worst/biggest/most complex solution.
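A sketch of option 1's first-run check in the Electron main process, assuming a Windows service check via sc and an installer bundled under process.resourcesPath (the service name and paths are placeholders):

    const { app } = require("electron");
    const { execFileSync } = require("child_process");
    const path = require("path");

    function serviceInstalled() {
      try {
        // placeholder check: ask the Windows service manager about the service
        execFileSync("sc", ["query", "postgresql"]);
        return true;
      } catch (e) {
        return false; // sc exits non-zero if the service does not exist
      }
    }

    app.whenReady().then(() => {
      if (!serviceInstalled()) {
        const installer = path.join(process.resourcesPath,
          "installers", "postgresql-setup.exe");
        execFileSync(installer); // blocks until the installer quits
        app.relaunch();
        app.exit(0);
        return;
      }
      // ... the service is available, start consuming it
    });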
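And a sketch of option 2, spawning a binary shipped in the app's resources directory (the pgsql layout and the data directory are assumptions):

    const { app } = require("electron");
    const { spawn } = require("child_process");
    const path = require("path");

    // files declared as extraResources (see electron-builder below)
    // end up under process.resourcesPath at runtime
    const pgCtl = path.join(process.resourcesPath, "pgsql", "bin", "pg_ctl");
    const dataDir = path.join(app.getPath("userData"), "pgdata");

    // start the bundled PostgreSQL against a per-user data directory
    const child = spawn(pgCtl, ["start", "-D", dataDir]);
    child.on("error", (err) => console.error("failed to start:", err));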
Then there are things like https://www.npmjs.com/package/pg - you should always check npm to see if someone has already built what you need ;)
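With pg you talk to an already-running PostgreSQL directly from Node instead of shelling out; a minimal sketch (connection details are placeholders):

    const { Client } = require("pg");

    async function main() {
      const client = new Client({ host: "localhost", port: 5432, database: "myapp" });
      await client.connect();
      const res = await client.query("SELECT version()");
      console.log(res.rows[0]);
      await client.end();
    }

    main().catch(console.error);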
I'd recommend using the great electron-builder which makes bundling stuff w/ your app a piece of cake.
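For example, electron-builder's extraResources option in package.json copies arbitrary files next to your app, which is what makes the process.resourcesPath lookups above work (paths are illustrative):

    {
      "build": {
        "extraResources": [
          { "from": "third-party/pgsql", "to": "pgsql" }
        ]
      }
    }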
Feel free to comment if you need more intel.
I am getting into a position where I have to use other people's code for projects, for example OpenTLD. I want to change some of the code to give it more functionality and use it in a different way. What I have found is that many people have packaged their files in such a way that you are supposed to use
cmake
and then
make
and sometimes after that
make install
I don't want to install the software on my system. What I am looking to do is get these people's code to a point where I can add to it in Eclipse, or even just using Nano, and then compile it.
At what point is the code in a workable/usable state? Can I use it after running cmake, or do I need to also call make? Is my thinking correct that it would be better to edit the code after calling cmake as opposed to before? I don't need my finished code to be cross-platform; it will only run on Linux. Is it easier to learn cmake and edit the code before running cmake, as opposed to not learning cmake and editing the code afterwards, if that is even possible?
Your question is a little open-ended.
Looking at the OpenTLD project, there is a binary and a library available for use. If you are interested in using the binary, you need to download the executables (Linux executables are not posted). If you are planning to use the library, you have two options: either use the pre-built library or build it during your own build process. You would include the header files in your custom application and link against the library.
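On the cmake/make part of the question: cmake only generates the build system (Makefiles, on Linux); nothing is compiled until you run make, and make install is optional, since you can run binaries straight from the build tree. A typical out-of-source build looks like this (the binary name is illustrative):

    mkdir build && cd build
    cmake ..        # generate Makefiles from the source tree above
    make            # actually compile; outputs land under build/
    ./opentld       # run from the build tree, no "make install" needed

This also means you normally edit the sources themselves, not anything cmake generated; rerunning make picks the changes up.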
If you add more details, probably others can pitch in with new answers or refine the older ones.
I have packaged my application into an RPM package, say, myapp.rpm. While installing this application, I would like to receive some input from the user (for example, the environment where the app is being installed: "dev", "qa", "uat", "prod"). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application?
P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.
In general, RPM packages should not require user interaction. Time and time again, the RPM folks have stated that it is an explicit design goal of RPM not to have interactive installs. For packages that need some sort of input before first use, you typically ask for this information on first use, or you put it all in config files with macros or the like and tell your users that they will have to configure the application before it is usable.
Even passing a parameter of some sort counts as end-user interaction. I think what you want is to have your pre- or post-install scripts auto-detect the environment somehow, maybe by having a file somewhere that they can examine. I'll also point out that, from an RPM user's perspective, having a package named *-qa.rpm is a lot more intuitive than passing some random parameter.
For your exact problem, if you are installing different content, you should create different packages. If you try to do things differently, you're going to end up fighting the RPM system more and more.
It isn't hard to create a build system that can spit out 20+ packages that are all mostly similar. I've done it with a template-ish spec file and some scripts, run by make, that create the various spec files and build the RPMs. Without knowing the specifics, it sounds like you might even have a core package that all 20+ environment packages depend on, with each environment-specific package installing whatever is specific to its target environment.
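As a sketch, stamping environment packages out of one template can be as simple as this (the file names and the @ENV@ placeholder are assumptions):

    # generate and build one spec per environment from a common template
    for env in dev qa uat prod; do
        sed "s/@ENV@/$env/g" myapp.spec.in > myapp-$env.spec
        rpmbuild -bb myapp-$env.spec
    done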
You could use the relocate option, e.g.
rpm -i --relocate /env=/uat somepkg.rpm
and have your script look up the variable data from a file located in the "env" directory
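For that to work the package must declare the prefix, and the scriptlets can then read where it landed; a sketch (the prefix and file name are assumptions):

    # in the spec file
    Prefix: /env

    %post
    # rpm exposes the relocated path to scriptlets, e.g. /uat
    # (as $RPM_INSTALL_PREFIX0 in current rpm, $RPM_INSTALL_PREFIX in older ones)
    . "$RPM_INSTALL_PREFIX0/settings.sh"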
I think this is a very valid question, especially as soon as you move into the application-development realm. There, configuring the application for different target systems is your daily bread: you need configurations for Development, Integration Test, Acceptance Test, Production, etc. I sure don't think building a separate package for each environment is the solution; basically, it should be the same code running in different environments.
I know that this requirement is not supported by rpm. But what you can do as a workaround is to use a simple config file that the %pre script knows to look for. The config file could be a simple shell script that, for example, sets environment variables, and then the various pre and post scripts can use those.
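A sketch of that workaround in the spec file (the config path and variable name are assumptions):

    %pre
    # source a config file the admin drops on the box beforehand
    if [ -f /etc/myapp/environment.conf ]; then
        . /etc/myapp/environment.conf    # e.g. sets APP_ENV=uat
    fi
    # fall back to a sane default so the install never prompts
    [ -n "$APP_ENV" ] || APP_ENV=dev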
I am looking for a good way to install an application I developed, with all its dependencies, in a fancy way. Currently I have a big makefile that downloads, unpacks, compiles, and installs all dependencies. This, however, is a little tedious, since there are quite a few dependencies and the makefile is getting larger and larger, which will eventually make it hard to maintain. Therefore I am looking for a packaging tool with the following features:
It should be a lightweight package manager that is very easy to install (or even installs itself and afterwards all my dependencies)
The destination of the installed binaries, libraries, etc. should be customizable
Each dependency's installation process should be easily configurable
It should be possible to include self-written scripts that get executed at a specific point during the installation process (in order to manipulate makefiles, flags, etc.)
No admin rights should be necessary, since the clients that install my application will not have admin rights and cannot use an already-installed package manager
I do not know if this kind of software exists; I myself don't have much experience with packaging tools.
Thanks in advance for any link, hint, or suggestion!
opkg is based on ipkg (now defunct) and originally on dpkg. It's used in embedded systems. Lightweight for sure.
Ports from CRUX Linux (www.crux.nu)?
A quick search turns up InstallJammer. I would propose making debs, rpms, and tarballs and sticking with the standard installation process (root privileges and such), but if you can't do that, then, well, you can't.
I'm sure you know how suspicious that would look to the user.