First, I installed and ran a Docker container using the command below.
docker run -i -t ubuntu /bin/bash
Then I executed the commands below.
root@d444a77039e7:/# apt-get update
0% [Connecting to archive.ubuntu.com (91.189.92.200)]
It hung at this point indefinitely.
Then I ran the command below, but hit issues.
root@d444a77039e7:/# apt-get install nodejs
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package nodejs
Then I set the HTTP and HTTPS proxy variables as below, but it failed as well.
root@d444a77039e7:/# export HTTP_PROXY=http://proxy.xxx.com
root@d444a77039e7:/# export HTTPS_PROXY=http://proxy.xxx.com
Could you tell me how I can fix this issue? Thanks. My host machine runs Red Hat 5.9, which does not support the latest version of Node.js, so I plan to install it in a Docker container instead.
That means your docker build was not started with the build-arg option introduced in Docker 1.9+. Using it avoids putting the full proxy URL (which can sometimes include your credentials) in the Dockerfile:
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image. However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile. Also, these values don’t persist in the intermediate or final images like ENV values do.
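For example, a minimal sketch (the proxy URL, image tag and nodejs package are placeholders): the proxy variables are among Docker's predefined build args, so the Dockerfile needs no ARG line for them, and they won't show up in docker history:
FROM ubuntu
# http_proxy/https_proxy are predefined build args: visible to RUN at
# build time, but not baked into the resulting image layers
RUN apt-get update && apt-get install -y nodejs
$ docker build --build-arg http_proxy=http://proxy.xxx.com --build-arg https_proxy=http://proxy.xxx.com -t node-ubuntu .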
Try lowercase variable names; apt-get (and many other tools) only honors the lowercase http_proxy/https_proxy forms.
export http_proxy=http://proxy.xxx.com
export https_proxy=http://proxy.xxx.com
The proper way to get past this issue is to create a new image with the Dockerfile below; with that, you won't need to set the proxy manually any more.
FROM ubuntu
ENV http_proxy=http://proxy.xxx.com
ENV https_proxy=http://proxy.xxx.com
I have a Dockerfile that is currently using amazonlinux as the base image.
The purpose of the image is to run two binaries in the container. Consequently, the CMD instruction of the Dockerfile currently looks like this:
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
I am looking to modify this Dockerfile to migrate it to a "distroless" image. This entails modifying the Dockerfile FROM to be built on top of a stripped-down base image (which will itself be Linux-based).
My problem is that this new stripped-down base image will no longer contain the "&" that previously came with the shell in the prior Linux image. It does not have "&&" either, or for that matter any operator that would enable me to run both binaries from within the Dockerfile.
I am wondering if there is some way to run multiple binaries in a stripped-down image like this?
For example, perhaps I can install the files containing "&", "&&", or some similar command in my Dockerfile to accomplish this, since the new "distroless" image will still be Linux based? If so, how can I determine which specific files I would need, and how can I install them?
Any pointers would be appreciated, as I am quite new to Docker.
In general, don't try running multiple binaries in a single container like this. In almost all cases, it is more flexible and manageable to run two separate containers: so if you were to build a "distroless" image containing your two binaries, you would start two containers from the same image (e.g. docker run myimage binary1 and docker run myimage binary2).
When you do something like...
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
...you have made failures of binary1 invisible to Docker: if the command fails, your container will merrily keep running, and you can't use a restart policy to restart it for you automatically.
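With separate containers, by contrast, Docker supervises each process and can restart either one on failure. A sketch (image and binary names are from the question; the restart policy is just illustrative):
docker run -d --restart=on-failure --name binary1 myimage /binary1
docker run -d --restart=on-failure --name binary2 myimage /binary2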
Alternatively, if you really want to do the thing you're trying to do, rather than using a "distroless" base image, consider instead using a minimal image like busybox or alpine: these will provide you with a shell and common Unix utilities for debugging work, but are still quite small.
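A minimal sketch of that variant, assuming your two binaries are statically linked or ship the libraries they need:
FROM busybox
COPY binary1 binary2 /
# busybox provides /bin/sh, so the original CMD keeps working;
# exec-ing binary2 makes its exit status the container's exit status
CMD ["/bin/sh", "-c", "/binary1 & exec /binary2"]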
I have an ubuntu:20.04 image with my software installed by Dockerfile RUN commands, so the script I want to execute is built by a Dockerfile RUN call to buildmyscripts.sh.
The program installs perfectly, and if I then run the container (with the default entrypoint of /bin/sh or /bin/bash)
and execute /root/build/script.sh -i arg1 -m arg2 manually, it works.
However, the same doesn't work with ENTRYPOINT set to /root/build/script.sh followed by CMD set to the arguments. I get the following error when running the image:
Error: cannot load shared library xyz.so
xyz.so is a common shared library that an earlier RUN step had already installed.
Please assist, thanks.
Note: I run as USER root because I have a self-hosted runner on a hardened server, so security is not an issue.
Apparently we need to source the script that sets the environment variables, by prepending it to the ENTRYPOINT/CMD in the Dockerfile. Since the variables were set by sourcing another script, setting them with ENV alone wasn't working.
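A minimal sketch of that fix, assuming a hypothetical /root/build/env.sh that exports the variables the script needs:
# source the environment first, then exec the real script with the CMD args
ENTRYPOINT ["/bin/bash", "-c", "source /root/build/env.sh && exec /root/build/script.sh \"$@\"", "--"]
CMD ["-i", "arg1", "-m", "arg2"]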
Environment
I use GitLab CI/CD to bundle my application.
I use node:14-alpine as the image and run yarn to build my app.
After the build is finished, I deploy my app via rsync to the target server, which runs Ubuntu 20.04.
On this server, I use pm2 to start the app and keep it running.
Issue
If I look into the logs, I see an error like this:
I've searched a bit and found that the issue might be caused by musl-dev missing.
I've installed it on my server and in the Docker container, but with the same result.
BUT, if I delete the node_modules directory from the server and run yarn install right on the server, the app runs as expected.
Question
So why does this issue happen here? Must I have the same distribution and version of Linux in my Docker container to fit all dependencies?
Don't use an Alpine image if you're deploying on Ubuntu.
So why does this issue happen here?
The fundamental C standard library implementation is different on the two (Alpine uses musl libc; Ubuntu and more or less all other distros use GNU C Library (glibc)).
Trying to move binaries (such as those that might appear in node_modules for native modules) built against one libc implementation to a system using the other will likely be painful or not work at all (as you noticed).
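A quick way to confirm this on the server is to run ldd against one of the native addons in node_modules (the module path here is hypothetical):
ldd node_modules/some-native-module/build/Release/binding.node
# an Alpine-built addon on Ubuntu will typically report something like:
#   libc.musl-x86_64.so.1 => not found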
Must I have the same distribution and version of Linux in my Docker container to fit all dependencies?
If none of the dependencies use native code, then you should be able to just move things over without issues, but otherwise it'll be easiest (e.g. considering the versions of other libraries your dependencies may link against) to just use the same version as your target OS – or, if you don't want to think about that, just deploy your application as a Docker container.
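For the first option, a hypothetical .gitlab-ci.yml job that builds on a Debian-based (glibc) image instead of Alpine, so the produced node_modules matches an Ubuntu 20.04 target:
build:
  image: node:14   # Debian-based, glibc, unlike node:14-alpine
  script:
    - yarn install
    - yarn build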
Even though @AKX's suggestion is a good answer, I've played around a bit to figure out how to solve this special case.
Here is my solution:
Install musl-dev on the server
Link it into /lib
apt-get install musl-dev
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
In my case, it's only this single dependency that caused the trouble. If I run into more of these, I will follow AKX's suggestion and choose a Debian/Ubuntu-like distribution to bundle it.
I'd like to run a command line script using the coffee executable, but I'd like to call that executable through npx.
Something like #!/usr/bin/env npx coffee does not work, because only one argument is supported via env.
So, is there a way to run an npx executable via env?
Here is a solution using ts-node.
Any single OS, ts-node installed globally
#!/usr/bin/env ts-node
// TypeScript code
You probably also need to install @swc/core and @swc/cli globally unless you do further configuration or tweaking (see the notes at the end). If you have any issues with those, be sure to install the latest versions.
macOS, ts-node not installed globally
#!/usr/bin/env npx ts-node
// TypeScript code
Whether this always works in macOS is unknown. There could be some magic with node installing a shell command shim (thanks to @DaMaxContext for commenting about this).
This doesn't work in Linux because Linux distros treat all the characters after env as the command, instead of considering spaces as delimiting separate arguments. Or it doesn't work in Linux if the node command shim isn't present (not confirmed that's how it works, but in any case, in my testing, it doesn't work in Linux Docker containers).
This means that npx ts-node will be treated as a single executable name that has a space in it, which obviously won't work, as that's not an executable.
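One aside: on systems where env comes from GNU coreutils 8.30 or newer (and on macOS/BSD), env supports -S, which splits a single argument on spaces and sidesteps this limitation:
#!/usr/bin/env -S npx ts-node
// TypeScript code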
See the notes at the bottom about npx slowness.
Cross-platform with ts-node not installed globally in macOS, and some setup in Linux
Creating a shebang that will work in both macOS and Linux (or macOS using Docker running a Linux image), without having to globally install ts-node and other dependencies in macOS, can be accomplished if one is willing to do a little bit of setup on the Linux/Docker side. Obviously, Linux must have node installed.
Use the #!/usr/bin/env npx ts-node shebang. We just have to fool Linux into thinking that npx ts-node with the space is actually a valid executable name.
Build a named Docker image that has the required dependencies globally installed and a symbolic link making npx ts-node resolve to just ts-node.
Here is an example all-in-one command line on macOS that will both build this image and run it:
docker buildx build -t node-ts - << EOF
FROM node:16-alpine
RUN \
npm install -g @swc/cli @swc/core ts-node \
&& ln -s /usr/local/bin/ts-node '/usr/local/bin/npx ts-node'
ENV SWC_BINARY_PATH=/usr/local/lib/node_modules/@swc/core/binding
WORKDIR /app
EOF
docker run -it --rm \
-v "$(pwd):/app" \
node-ts \
sh
Note that for this example script to work, the above line containing EOF must not have any other characters on the line, before or after it, including spaces.
Inside of the running container, all .ts scripts that have been made executable (chmod +x script.ts) can be run directly from the command line, e.g., ./test-script.ts. You can replace the above sh with the name of the script as well (but be sure to precede it with ./ so Docker knows to run it as an executable instead of passing it as an argument to node).
Additional Thoughts & Considerations
There are other ways to achieve the desired functionality.
The docker run command can mount files into the image, including mounting executables in various directories. Some creative use of this could avoid needing to install anything or build a docker image first.
The install commands could be part of the docker run instead of pre-building an image, but then would be performed on each execution, taking much longer.
The PATH could be modified in macOS, linux, and in the docker build to add the folder containing ts-node's bin.js from any ts-node dist directory, then a shebang of #!/usr/bin/env bin.js should theoretically work (and can try bin-esm.js to avoid needing SWC, though this enters experimental node territory and may not be suitable for production scripts). This works in macOS, and in Docker outside of an npm project, and in Docker inside of an npm project configured to use TS & swc by passing the --skipProject flag to ts-node or setting environment variable TS_NODE_SKIP_PROJECT=true. A working test command line example: docker run -it --rm -v "$(pwd):/app" -e TS_NODE_SKIP_PROJECT=true -w /app --entrypoint sh node:16-alpine -c 'PATH="$PATH:/app/node_modules/ts-node/dist" ./test.ts'.
Any named executable that can be found in the PATH and run via direct command can be a shebang (using #!/usr/bin/env executable). It can be a shell script, a binary file, anything. A shell script can easily be put at a known location, added to the PATH, and then call whatever you like. It could be multi-statement, compiling the file to .js, then running that. Whatever your needs are.
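For example, a sketch of such a wrapper (the name tsrun and its location are arbitrary), saved as /usr/local/bin/tsrun and made executable with chmod +x:
#!/bin/sh
# resolve ts-node via npx so nothing needs to be installed globally
exec npx ts-node "$@"
Scripts can then use #!/usr/bin/env tsrun as their shebang on any OS where the wrapper is on the PATH.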
In some special cases you might want to simply use node as your shebang executable, setting node options through environment variables to force ts-node as your loader. See ts-node Recipes:Other for more info on this.
Notes:
The SWC_BINARY_PATH environment variable ensures that ts-node can find the architecture-specific swc compiler (to avoid the error "Bindings not found"). If you're running on only one architecture, you won't need it. Or, if you are mounting node_modules that have these @swc packages already installed for the correct architecture, you won't need it.
It is possible to install node_modules binaries for multiple architectures. The way to do this varies between the different package managers. For example, yarn 3 lets you define which binaries to install all at once in .yarnrc.yml. There are other options for npm, and possibly yarn 1 (and 2?), using environment variables.
ts-node does offer options for running without swc (though this is slower). You could try shebangs with ts-node-esm instead of ts-node. Look at all the symlinks in the /usr/local/bin folder, or consult the ts-node documentation for more information.
It is possible to run .ts files directly using node and setting node options in environment variables. node --loader=ts-node does work, in recent versions (16+?). The experimental mode warnings can be suppressed.
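A sketch of that approach (assuming ts-node is installed in the project; ts-node/esm is the ESM loader entry point that ts-node ships):
# run a .ts file directly; NODE_OPTIONS hooks ts-node in as the loader
NODE_OPTIONS='--loader ts-node/esm' node ./test-script.ts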
There are some crazy ways to trick the shell to run JavaScript instead of a unix shell. Check out this answer that uses a normal sh shebang, but a clever shell statement to transfer execution over to node, that is basically ignored by JavaScript. This isn't great as it requires extra lines of trickery, but could help some people. Other answers on the page are also instructive and it's worth reviewing to get the full picture.
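The core of that trick looks something like this (just a sketch; see the linked answer for the full picture). The second line is a harmless string expression plus a comment to TypeScript, but a valid exec to the shell:
#!/bin/sh
':' //; exec npx ts-node "$0" "$@"
console.log('now running as TypeScript');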
Some of the complexity here might go away if running .ts files outside of an npm project. In my own testing in Docker, the context was always in a project having its own tsconfig.json and swc installed, so with a different setup you might have different results. It proved to be difficult to get ts-node to ignore npm project context found with the executed .ts file.
The difference between ESM and CommonJS module handling has not been explained here. This is a complicated topic and beyond the scope of this answer.
Suffice it to say that if you can figure out how to run your scripts from the command line in the form executable [options] [file], then you should be able to figure out how to run ./[file] with an appropriate shebang, by mixing and matching all the ideas presented here. You don't have to use ts-node. You can directly use node, swc, tsc itself (by first compiling and then running any .js file or set of .js files in the found context), or any utility or tool that is able to compile or run TypeScript.
Note that using npx is significantly slower than running ts-node directly, because it may need to download the ts-node package and its dependencies every time it runs.
Some various random tips on possible strategies for SWC architecture support:
https://socket.dev/npm/package/@rnw-community/nestjs-webpack-swc
https://github.com/yarnpkg/yarn/issues/2221
First of all, I think this is more of a Linux issue, as the problem seems to be on a Linux-flavoured Docker container, but I'm happy to accept that I can do something to the TeamCity config to overcome this.
I'm also not very experienced with Linux, Docker or node/npm, though I do have a lot of development experience and am very comfortable with command line interfaces in general.
Background
We currently have TeamCity set up as a build server, for building a variety of projects:
.Net Framework
.Net Core
Angular CLI
A couple of simple websites which use node packages to generate HTML from Markdown.
The server is running as a Docker container using Docker for Windows on a Windows Server box, and this is working well.
We have one Windows 10 Build agent (a VM) which is also working fine, and builds all the .Net and .Net Core stuff fine.
The simple docs site stuff primarily uses the markdown-to-html node package, so its build steps simply get all the source .md files and compile to html with markdown-to-html, plus use some other npm packages for SASS compilation and minification of js etc. No actual node code as such, just some jQuery. In order to not tie up the other agent, and because this stuff can run happily on Linux, I want to have this running on a small docker image rather than a full VM build agent somewhere.
I previously successfully used a Node.js TeamCity agent Docker image (either jacobpeddk/teamcity-agent-nodejs or omez/teamcity-agent-nodejs - can't recall) which did work for a time, though I had issues with being able to install some npm packages globally in build scripts, which meant I had to get a Bash terminal into the container and run some manual npm commands. I think I also had to run apt-get install zip to get a zipping step to work. This worked fine for a while (weeks).
I added some extra JS stuff to one of these simple projects, and suddenly I was getting errors when trying to build. I (perhaps stupidly) decided that this was probably due to the container having older versions of node and/or npm etc, so I attempted to update this by getting a bash shell into the container, installing nvm and updating node.js & npm.
This ended up with a rather broken container (node errors), so I thought I'd start again, this time from the jetbrains/minimal-build-agent Docker image, with the aim of ending up with a nice bespoke image for our needs specifically (as I couldn't find a very up-to-date pre-existing one).
I've been running a Bash shell directly on the build agent container by executing this on the host:
docker exec -it basicagent /bin/bash
then from there I've installed nvm, Python (required for the node install step) and node:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
apt-get update
apt-get install python 3.6
nvm install v8.11.1 (matching version on my dev machine)
npm install -g markdown-folder-to-html (npm package I previously found I had to install globally)
apt-get install zip (just used for a build step to zip up artifacts)
If I now run npm -version via the Bash shell, I get back 5.6.
If I try to get a build to run that uses npm in a command line step, then I get this error in the build log:
/opt/buildagent/temp/agentTmp/custom_script2764770419520852926: npm: not found
I wondered if it was an issue with the user/path that the TeamCity agent process is using vs. the one I'm using in Bash, so I added the following to the build script:
echo PATH = $PATH
echo user var = $USER
echo user via 'id':
id -u -n
the output of which is:
PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
user var =
user via id:
root
So it's running the agent as root, and doesn't appear to have node in the $PATH at all.
If I run the above directly from Bash however, I can see that I am root, but my $PATH is different:
PATH = /root/.nvm/versions/node/v8.11.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root
So I'm now confused: I've re-started the container and this has had no effect - it seems that when I'm logged in as root manually I have a certain path set, but when the build agent service is running as root it's different.
I have no idea why this happens, but I've basically worked around the problem by adding:
export PATH=$PATH:/root/.nvm/versions/node/v8.11.1/bin
to the top of every build step that uses npm in a script. To my mind this seems a rather daft thing to have to do, considering this used to work without it, and the only real difference is possibly a slightly different flavour of Linux container. AFAIK the original build-agent container was based on the JetBrains minimal-build-agent one, so unless they've changed what they base that on, it should be roughly the same...
I also had to change the compressor being used in a node-minify build step from gcc (Google Closure Compiler) to babel-minify, as the former was basically hanging indefinitely, but that is a separate problem (though also something that was fine and now isn't...).
Thanks to anyone who took the time to read... though I do wonder if one day I'll exhaust my own research options, finally go ask the internet, and actually get someone to respond. For some reason, whenever I get to the point where I have to ask, it always seems no one else has the answer either and I end up having to work it out myself. It's probably character-building though, I suppose... (this isn't just SO - I've found this to be the case for over 15 years on various forums about various things...)