Finding relevant info when choosing Node Alpine Docker image version

Some might view this question as an opinion-based one, but please consider that I am just asking for information sources.
I am working on a Docker based project, using a Node Alpine image. Right now I'm using the node:16.13-alpine image.
When I start updating images to the latest version, I'm always at a loss as to which version to pick.
In my example, the node image page https://hub.docker.com/_/node?tab=description&page=1&name=alpine lists the following available image versions:
18-alpine3.15, 18.10-alpine3.15, 18.10.0-alpine3.15, alpine3.15, current-alpine3.15
18-alpine, 18-alpine3.16, 18.10-alpine, 18.10-alpine3.16, 18.10.0-alpine, 18.10.0-alpine3.16, alpine, alpine3.16, current-alpine, current-alpine3.16
16-alpine3.15, 16.17-alpine3.15, 16.17.1-alpine3.15, gallium-alpine3.15, lts-alpine3.15
16-alpine, 16-alpine3.16, 16.17-alpine, 16.17-alpine3.16, 16.17.1-alpine, 16.17.1-alpine3.16, gallium-alpine, gallium-alpine3.16, lts-alpine, lts-alpine3.16
14-alpine3.15, 14.20-alpine3.15, 14.20.1-alpine3.15, fermium-alpine3.15
14-alpine, 14-alpine3.16, 14.20-alpine, 14.20-alpine3.16, 14.20.1-alpine, 14.20.1-alpine3.16, fermium-alpine, fermium-alpine3.16
This list is, of course, an ever-moving target.
Now, when picking a version out of all of these, what criteria can I take into consideration (short of reading every single release note for each image)?
Is there a page somewhere offering a high-level view of these images and of their known issues? Are some of these images designed to be "safe bets", unlikely to introduce fresh bugs? I run npm audit on the packages used inside my image from time to time, but is there some equivalent tool which might alert me that it is time to update the Node image itself, because a new bug or security vulnerability has been found?
I know this is a pretty broad question, but I am sure there are some good practice guidelines to follow here; any pointer is appreciated.
Thanks!

The two most important things to do here are
Have good integration tests; and
Check your Dockerfile into source control.
If you have both of these things then trying out any of the images you list isn't a huge risk. Update the Dockerfile FROM line, build an image, and run it; if the integration tests pass, check in the change; and if not, revert it. If you can set up your continuous-integration system to run the tests for you then this becomes "open a pull request and wait for a passing build".
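A minimal sketch of that loop, assuming the image builds from a Dockerfile in the current directory and the tests are wired to npm test (myapp and the tag names are placeholders, substitute your own):

git checkout -b try-node-16.17                    # experiment on a branch
sed -i 's/^FROM node:16.13-alpine/FROM node:16.17-alpine/' Dockerfile
docker build -t myapp:candidate .                 # rebuild on the new base
docker run --rm myapp:candidate npm test          # run the suite inside the image
# pass: commit and merge; fail: git checkout -- Dockerfile and try another tag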
The other trade-off is how much you want an image you know works, versus an image that gets regular updates. Most Docker images are built on some underlying Linux distribution. The node:16.13-alpine image you have currently isn't in the list of images you show, which means that, if there is some vulnerability in the underlying Alpine base, that particular image isn't getting rebuilt. But, conversely, your build might automatically update from Node 16.13.0 to 16.13.2 without you being aware of it.
It also helps to understand your language's update and versioning strategy. Node, for example, puts out a major release roughly annually with even major version numbers (14, 16, 18, ...), but Python's annual releases have minor version numbers (3.8, 3.9, 3.10, ...).
I'd suggest:
If you can tolerate not knowing the exact version, then use a release-version image like node:16-alpine or python:3.10. Make sure to docker build --pull to get the updates in the base image (see the sketch after this list).
If you've pinned to an exact version like node:16.13.0-alpine, updating to the most recent patch release node:16.13.2-alpine is most likely safe.
If your base image uses semantic versioning, then upgrading to a new minor release like node:16.17-alpine is supposed to be safe.
It is worth reading the release notes for major-version upgrades.
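As a concrete sketch of those choices (myapp is a placeholder image name): the tag in the FROM line decides how much can change underneath you, and --pull makes sure floating tags actually float:

# In the Dockerfile:
FROM node:16-alpine          # release pin: receives new minor and patch releases
# FROM node:16.17-alpine     # minor pin: receives patch releases only
# FROM node:16.17.1-alpine   # exact pin: never changes until you edit it

# When building:
docker build --pull -t myapp .   # --pull re-fetches the base tag from the registry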

Related

Convert PDF to images using a Node package or open-source tool

I'm looking for an open-source tool or an NPM package which can be run using Node (for example by spawning a process and calling the command line).
The result I need is a PDF file converted/broken into images, where each page of the PDF becomes an image file.
I checked
https://npmjs.com/package/pdf-image -- seems to be last maintained 3 years ago.
same for https://npmjs.com/package/pdf-img-convert
Please advise which package/tool I can use?
Thanks in advance.
Be aware that, generally, https://npmjs.com/package/pdf-img-convert is frequently updated and thus the better of the two, but it has 3 pending pull requests, so review whether they impact your usage. (Note that https://npmjs.com/package/pdf-image has a significantly heavier set of dependencies to break, and also a much bigger list of pending pull requests, which supports your correct assumption about how long it has gone unmaintained.)
However, the current pdf-img-convert 1.0.3 has a breaking dependency that needs a manual correction, due to a change in Mozilla's naming earlier this year from es5 to legacy;
see https://github.com/olliet88/pdf-img-convert.js/issues/10
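For reference, a minimal usage sketch of pdf-img-convert; the convert() call and its return shape follow the package README, and the file names are placeholders, so double-check against the version you install:

// Sketch only: pdf2img.convert() resolves to an array of PNG image data,
// one entry per page, per the pdf-img-convert README.
const fs = require('fs');
const pdf2img = require('pdf-img-convert');

(async () => {
  const pages = await pdf2img.convert('input.pdf');
  pages.forEach((png, i) => {
    fs.writeFileSync(`page-${i + 1}.png`, png);   // write each page out
  });
})();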
For a cross-platform open-source CLI tool I would suggest Artifex MuTool (AGPL, so not free for commercial use, but you're getting quality support); it has continuous daily commits, and it can be programmed via mutool run with ECMAScript (ecma.js) scripts.
Out of the box, a simple convert in.pdf out%4d.png will attempt to fix broken PDFs, but it may reject some that need a more forgiving secondary approach such as the one above.
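A sketch of that command in full; exact option syntax varies between MuPDF releases, so check mutool convert -h against your install:

mutool convert -o out%4d.png in.pdf       # one numbered PNG per page
mutool draw -r 150 -o out%4d.png in.pdf   # alternative renderer, explicit 150 DPI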
Go ahead with the second one.
https://npmjs.com/package/pdf-img-convert

How can I check if my code will run in a new (or old) version of Node?

I have code running in Node 9.8.
Node 9 will reach end-of-life soon.
If I switch to Node 10, how can I check if my code will run in Node 10 without having to execute all paths of the code?
Or if I go down to 8.11, how can I check if my code will run in Node 8.11?
There are no test cases written for the code.
This is a good example of why solid unit/integration tests are critical to long-term maintainability. That said, there are a few steps you can take to reduce the risk of breaking things:
Take a look at the change logs pertaining to the versions you're moving to/from. The NodeJS team kindly includes a Notable Changes section in each change log, though I wouldn't rely on that alone as being 100% inclusive of the potentially breaking changes you may be up against.
Consider writing unit/integration tests, both as part of your assurance that things won't break from this version change, as well as that things won't break from later version changes (or everyday changes for that matter).
As much as I hate to say it, Googling around for guides on upgrading (or downgrading?) NodeJS versions may help you identify potential danger zones.
Generally, I'd consider it safer and better practice to upgrade the version than downgrade. For one, you're moving forward to the newer and greater experience the NodeJS team wants you to work with, and secondly, future versions are probably more likely to be backwards compatible, whereas the old version may be missing features you're using.
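If you do start adding even a thin test suite, a version manager such as nvm lets you run it under each candidate Node version without touching your system install. A sketch, assuming the tests are wired to npm test:

# Run the same suite under each Node version under consideration.
nvm install 8.11 && nvm use 8.11 && npm test
nvm install 10 && nvm use 10 && npm test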

How can one make a private copy of Hackage

I'd like to snapshot the global Hackage database into a frozen, smaller one for my company's deploys. How can one most easily copy out some segment of Hackage onto a private server?
Here's one script that does it in just about the simplest way possible: https://github.com/jamwt/mirror-hackage
You can also use the MirrorClient directly from the hackage2 repo: http://code.haskell.org/hackage-server/
This is not an answer to the question in the title, but an answer to my interpretation of what the OP wishes to achieve.
Depending on what level of stability you want in your production cycle, you can approach the problem in several ways.
I have split the dependencies into two parts: things I can use that are in the Haskell Platform (keep every platform version used in production), and a small number of packages outside that. Don't let anyone (including yourself) add more packages into your dependency tree just out of developer laziness. For these extra packages, use some kind of script to collect them from Hackage (locked to a version) using cabal fetch, and keep them safe. Create an install script that uses your safe packages, and if a new machine (or developer) is added to your team, use that script.
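A sketch of that kind of fetch script; cabal fetch downloads the pinned tarballs into ~/.cabal/packages, and the package list below is purely illustrative:

#!/bin/sh
# Fetch exact, pinned versions from Hackage so they can be copied somewhere safe.
for pkg in aeson-1.5.6.0 text-1.2.4.1 conduit-1.3.4; do
  cabal fetch "$pkg"
done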
yackage is great, but it all comes down to how you ship your product. If you have older versions in production, you need to have a yackage setup for every version, and that could be quite annoying after a couple of years.
You can download Hackage with Voker57's hackage-mirror.sh. You'll need curl for it to run. If you're using a Debian-based Linux distribution, you can install curl by typing apt-get install curl.
Though it's not a segment of Hackage, I've written a bash script that downloads the whole of Hackage, which can then easily be set up as a mirror using an HTTP server. It also downloads all required extras like GHC compilers, ready to be used with Stack.
Currently, a complete Hackage mirror occupies ~10GiB (~100000 packages of all versions), and the Stack-related stuff like GHC compilers ~21GiB (~200 files). Subsequent runs of the script skip already-downloaded content and fetch only what is new, so it's a pretty convenient way to "live offline" and sync up to date when online.

How do you go about getting fixes put into Linux packages?

I need group4 decode in the Python Imaging Library, but in order to build it, I need to get some changes put into the distros' libtiff-dev packages.
Having never done this kind of thing before, I'm curious about where to start. The changes I need in libtiff are the placement of the header files once libtiff is installed. Right now, libtiff drops its header files into /usr/include, but it only drops in
/usr/include/tiffconf.h
/usr/include/tiff.h
/usr/include/tiffio.h
/usr/include/tiffio.hxx
/usr/include/tiffvers.h
I need to add:
/usr/include/tif_config.h
/usr/include/tif_dir.h
/usr/include/tiffiop.h
The patch in PIL I had to use to get all this going is from 2006 and is made against the 1.1.6 PIL library (PIL is now at 1.1.7), but I'm pretty sure I can't get these patches for PIL into the PyPI distribution if it won't build in the distros.
So, how do you get changes into the distros? I don't need to change anything in libtiff itself, just in the way it gets delivered. I need to get those 3 files added to /usr/include.
After that's done, I can push to get the fix into PIL.
There are two routes to getting fixes into Linux distributions. If the issue is distribution-specific, then the best place to start is the bug tracker for that distribution. You mentioned missing files, which is likely to be a distribution issue. (It's not quite clear from what you wrote why those files would be missing everywhere; are you sure they're not deprecated or something?)
Redhat Bugzilla
Debian bug tracker
If it's not distribution specific you could still go via the bug tracker for the distribution you use, but you could also go directly to the original author. Author details are normally available somewhere within each distribution.

How to build a Linux system from kernel to UI layer

I have been looking into the MeeGo, Maemo, and Android architectures.
They all have a Linux kernel, build some libraries on it, then build middle-layer libraries [e.g. telephony, media, etc.].
Suppose I want to build my own system: say, a Linux kernel with some binaries like glibc and D-Bus, plus a UI toolkit like GTK+ and its binaries.
I want to compile every project from source to customize my own Linux system for desktop, netbook and handheld devices. [starting from netbook first :)]
How can I build my own customized system from kernel to UI?
I apologize in advance for a very long winded answer to what you thought would be a very simple question. Unfortunately, piecing together an entire operating system from many different bits in a coherent and unified manner is not exactly a trivial task. I'm currently working on my own Xen based distribution, I'll share my experience thus far (beyond Linux From Scratch):
1 - Decide on a scope and stick to it
If you have any hope of actually completing this project, you need to write an explanation of what your new OS will be and do once it's completed, in a single paragraph. Print that out and tape it to your wall, directly in front of you. Read it, chant it, practice saying it backwards, and do whatever else may help you to keep it directly in front of any urge to succumb to feature creep.
2 - Decide on a package manager
This may be the single most important decision that you will make. You need to decide how you will maintain your operating system in regards to updates and new releases, even if you are the only subscriber. Anyone, including you, who uses the new OS will surely find a need to install something that was not included in the base distribution. Even if you are pushing out an OS to power a kiosk, it's critical for all deployments to keep themselves up to date in a sane and consistent manner.
I ended up going with apt-rpm because it offered the flexibility of the popular .rpm package format while leveraging apt's known sanity when it comes to dependencies. You may prefer using yum, apt with .deb packages, slackware style .tgz packages or your own format.
Decide on this quickly, because it's going to dictate how you structure your build. Keep track of dependencies in each component so that it's easy to roll packages later.
3 - Re-read your scope then configure your kernel
Avoid the kitchen sink syndrome when making a kernel. Look at what you want to accomplish and then decide what the kernel has to support. You will probably want full gadget support, compatibility with file systems from other popular operating systems, security hooks appropriate for people who do a lot of browsing, etc. You don't need to support crazy RAID configurations, advanced netfilter targets and minixfs, but wifi better work. You don't need 10GBE or infiniband support. Go through the kernel configuration carefully. If you can't justify including a module by its potential use, don't check it.
Avoid pulling in out of tree patches unless you absolutely need them. From time to time, people come up with new scheduling algorithms, experimental file systems, etc. It is very, very difficult to maintain a kernel that consumes from anything else but mainline.
There are exceptions, of course, if going out of tree is the only way to meet one of the goals stated in your scope. Just remain conscious of how much additional work you'll be making for yourself in the future.
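As a sketch, the configuration pass itself is short; the discipline is in what you enable:

make defconfig                 # sane per-architecture baseline
make menuconfig                # walk the options; justify every y/m you set
make -j"$(nproc)"              # build the kernel and modules
make modules_install install   # install into the target (adjust for your packaging)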
4 - Re-read your scope then select your base userland
At the very minimum, you'll need a shell, the core utilities and an editor that works without a window manager. Paying attention to dependencies will tell you that you also need a C library and whatever else is needed to make the base commands work. As Eli answered, Linux From Scratch is a good resource to check. I also strongly suggest looking at the LSB (Linux Standard Base); this is a specification that lists common packages and components that are 'expected' to be included with any distribution. Don't follow the LSB as a standard; compare its suggestions against your scope. If the purpose of your OS does not necessitate inclusion of something and nothing you install will depend on it, don't include it.
5 - Re-read your scope and decide on a window system
Again, referring to the everything-including-the-kitchen-sink syndrome, try and resist the urge to just slap a stock install of KDE or GNOME on top of your base OS and call it done. Another common pitfall is to install a full-blown version of either and work backwards by removing things that aren't needed. For the sake of sane dependencies, it's really better to work on this from the bottom up rather than the top down.
Decide quickly on the UI toolkit that your distribution is going to favor and get it (with supporting libraries) in place. Define consistency in UIs quickly and stick to it. Nothing is more annoying than having 10 windows open that behave completely differently as far as controls go. When I see this, I diagnose the OS with multiple personality disorder and want to medicate its developer. There was just an uproar regarding Ubuntu moving window controls around, and they were doing it consistently; the inconsistency was the behavior changing between versions. People get very upset if they can't immediately find a button or have to increase their mouse mileage.
6 - Re-read your scope and pick your applications
Avoid kitchen sink syndrome here as well. Choose your applications not only based on your scope and their popularity, but on how easy they will be for you to maintain. It's very likely that you will be applying your own patches to them (even simple ones, like messengers updating a blinking light on the toolbar).
It's important to keep every architecture that you want to support in mind as you select what you want to include. For instance, if Valgrind is your best friend, be aware that you won't be able to use it to debug issues on certain ARM platforms.
Pretend you are a company and will be an employee there. Does your company pass the Joel test? Consider a continuous integration system like Hudson, as well. It will save you lots of hair pulling as you progress.
As you begin unifying all of these components, you'll naturally be establishing your own SDK. Document it as you go, and avoid breaking it on a whim (refer to your scope, always). It's perfectly acceptable to just let Linux be Linux, which turns your SDK more into formal guidelines than anything else.
In my case, I'm rather fortunate to be working on something that is designed strictly as a server OS. I don't have to deal with desktop caveats and I don't envy anyone who does.
7 - Additional suggestions
These are in random order, but noting them might save you some time:
Maintain patch sets for every line of upstream code that you modify, in numbered sequence. An example might be 00-make-bash-clairvoyant.patch; this allows you to maintain patches instead of entire forked repositories of upstream code (see the sketch after this list). You'll thank yourself for this later.
If a component has a testing suite, make sure you add tests for anything that you introduce. It's easy to just say "great, it works!" and leave it at that; keep in mind that you'll likely be adding even more later, which may break what you added previously.
Use whatever version control system is in use by the authors when pulling in upstream code. This makes merging of new code much, much simpler and shaves hours off of re-basing your patches.
Even if you think upstream authors won't be interested in your changes, at least alert them to the fact that they exist. Coordination is essential, even if you simply learn that a feature you just put in is already in planning and will be implemented differently in the future.
You may be convinced that you will be the only person to ever use your OS. Design it as though millions will use it, you never know. This kind of thinking helps avoid kludges.
Don't pull upstream alpha code, no matter what the temptation may be. Red Hat tried that, it did not work out well. Stick to stable releases unless you are pulling in bug fixes. Major bug fixes usually result in upstream releases, so make sure you watch and coordinate.
Remember that it's supposed to be fun.
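A sketch of the numbered-patch-series workflow from the first suggestion, using plain diff and patch (the bash-5.1 tree is just an example name):

# Keep a pristine copy of upstream next to your working tree.
tar xf bash-5.1.tar.gz
cp -a bash-5.1 bash-5.1.orig
# ...edit files under bash-5.1/...
diff -ruN bash-5.1.orig bash-5.1 > 00-make-bash-clairvoyant.patch
# To rebuild later: unpack pristine upstream and replay the series in order.
for p in [0-9][0-9]-*.patch; do patch -p1 -d bash-5.1 < "$p"; done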
Finally, realize that rolling an entire from-scratch distribution is exponentially more complex than forking an existing distribution and simply adding whatever you feel it lacks. You need to reward yourself often by booting your OS and actually using it productively. If you get too frustrated, consistently confused or find yourself putting off work on it, consider making a lightweight fork of Debian or Ubuntu. You can then go back and duplicate it entirely from scratch. It's no different from prototyping an application in a simpler / rapid language first before writing it for real in something more difficult. If you want to go this route (first), gNewSense offers utilities to fork your own OS directly from Ubuntu. Note that, by default, their utilities will strip any non-free bits (including binary kernel blobs) from the resulting distro.
I strongly suggest going the completely-from-scratch route (first) because the experience that you will gain is far greater than making yet another fork. However, it's also important that you actually complete your project. Best is subjective; do what works for you.
Good luck on your project, see you on distrowatch.
Check out Linux From Scratch:
Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own customized Linux system entirely from source.
Use Gentoo Linux. It is a compile-from-source distribution, very customizable. I like it a lot.
