Haskell Stack: execute container image automatically on docker run

I am following the tutorials from Stackage and Docker to run a Haskell build via Docker.
Building and image creation work well, and I can run the app via docker run -p 5000:5000 {imagename} {app-exe}.
I am using the built-in features of the latest Stack to create the Docker image with this minimal configuration:
image:
  container:
    base: "fpco/ubuntu-with-libgmp"
How can I make the image launch the executable automatically, so that I can just type docker run -p 5000:5000 {imagename}? I know how to do it in a Dockerfile, but not with Stack. I was thinking that I have to use:
entrypoints:
  - appname-exe
No success, no matter whether I use just the name of the executable or the absolute path to it. Maybe I don't understand what the entrypoint is for.
I am using Docker for Mac.
Any suggestions appreciated.
Cheers
Bjorn

I figured it out myself. Everything is working correctly; I just didn't understand that Stack creates two separate images: one for the build environment alone, and one with the entrypoint.
So I checked docker images and indeed found two images. I was simply running the wrong one. This is the correct command:
docker run -p 5000:5000 {imagename-app-exe}
Sometimes you can't see the forest for the trees.
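For reference, here is a minimal sketch of what the relevant stack.yaml section and workflow can look like. The name key and appname-exe are illustrative placeholders; in the Stack versions I have seen, the entrypoint image is named by suffixing the entrypoint onto the base image name:
image:
  container:
    name: myapp
    base: "fpco/ubuntu-with-libgmp"
    entrypoints:
      - appname-exe
stack image container                       # builds both images
docker images                               # lists myapp and myapp-appname-exe
docker run -p 5000:5000 myapp-appname-exe   # runs the entrypoint image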

Related

Docker: how to run multiple binaries in a stripped-down "distroless" image?

I have a Dockerfile that is currently using amazonlinux as the base image.
The purpose of the image is to run two binaries in the container. Consequently, the CMD instruction of the Dockerfile currently looks like this:
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
I am looking to modify this Dockerfile to migrate it to a "distroless" image. This entails modifying the Dockerfile FROM to be built on top of a stripped-down base image (which will itself be Linux-based).
My problem is that this new stripped-down base image will no longer contain the "&" that previously came with the shell in the prior Linux image. It does not have "&&" either, or for that matter any operator that would enable me to run both binaries from within the Dockerfile.
I am wondering if there is some way to run multiple binaries in a stripped down image like this?
For example, perhaps I can install the files containing "&", "&&", or some similar command in my Dockerfile to accomplish this, since the new "distroless" image will still be Linux based? If so, how can I determine which specific files I would need, and how can I install them?
Any pointers would be appreciated, as I am quite new to Docker.
In general, don't try to run multiple binaries in a single container like this. In almost all cases it is more flexible and manageable to run two separate containers: if you were to build a "distroless" image containing your two binaries, you would start two containers from the same image (e.g. docker run myimage binary1 and docker run myimage binary2), as sketched below.
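With docker-compose, that two-container pattern could look like this (a sketch; the service names and the myimage tag are placeholders, not from the question):
services:
  binary1:
    image: myimage
    command: /binary1
  binary2:
    image: myimage
    command: /binary2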
When you do something like...
CMD [ "/bin/sh", "-c", "/binary1 & /binary2"]
...you have made failures of binary1 invisible to Docker: if that command fails, your container will merrily keep running, and you can't use a restart policy to restart it automatically.
Alternatively, if you really want to do what you're trying to do, then rather than using a "distroless" base image, consider a minimal image like busybox or alpine: these still provide a shell and common Unix utilities for debugging, but are quite small.
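If you take that route, a Dockerfile along these lines could work. This is a sketch, assuming binary1 and binary2 are statically linked so they can run against busybox's userland:
FROM busybox
COPY binary1 binary2 /
# exec replaces the shell with binary2, so at least that process's exit status reaches Docker
CMD ["/bin/sh", "-c", "/binary1 & exec /binary2"]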

Some questions on Boot2docker setup for build and run

I'm a beginner in bioinformatics. Recently I started learning it with the book "Bioinformatics with Python Cookbook" by Tiago Antao. I met some issues while setting up Docker for Linux. Please see below for the issues:
I was trying to set up the Docker files following the author's instructions, but I found some files "failed to download":
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
Then I still went ahead and set up the container following the instruction:
“Now, you are ready to run the container, as follows: docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio”
I typed: docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
However, it gave me an error saying "Unable to find image 'bio:latest' locally".
Can anyone give me any suggestions on this? My guess is that in the first step I failed to download some files needed to set up the Docker image, but I am not sure how to fetch them.
Thank you so much for any comments!
Best regards
Johnny
I tried downloading the Docker files a few times, but the error still appears:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
In the first issue, some files "failed to download".
In the second issue, the error "Unable to find image 'bio:latest' locally" appears.
Here you have a couple of problems:
1) It looks like you did not download that Dockerfile and build the required Docker image locally.
2) You are getting the error about not finding the image locally because of the previous problem.
So you should do the following:
1) Download the Dockerfile (https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile). If you can't download the file for some reason, just open it on GitHub, select all the content, copy it, then create a new file named "Dockerfile" in some folder on your computer and paste the content.
2) Build the image locally: go to the folder where you downloaded the Dockerfile and execute the following command:
docker build -t bio .
3) Run your container with the docker run ... command, as sketched below.
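Put together, the whole sequence could look like this (a sketch; it assumes curl is available on your machine, and note that the -v flag needs a host:container mapping, which the command in the question was missing):
curl -o Dockerfile https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker build -t bio .
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation:/data bio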

Docker: What's available in the parent image?

In docker how do I know what packages are available in the parent image I am using?
I am trying something like:
docker search python
but I get some (network?) error.
I have the image locally. How would I search what packages I can use?
docker search is the command you use to search for Docker images on Docker Hub.
If I understood your question correctly, given an image (say alpine:latest) you want to know what's inside it.
If that's the case, the only thing you can do IMHO is run the container and explore it.
You can do that for the alpine:latest image with the following command:
docker run --rm -ti alpine:latest ash
You'll get a prompt inside an instance of the mentioned image and you can dig around to check what's available.
There is no place where you can access information about the content of an image in a structured way.
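That said, a few one-off commands can help you poke at an image's contents without an interactive session (a sketch; the image names are examples, and the package-manager commands assume the image is Debian- or Alpine-based respectively):
docker run --rm python:3 dpkg -l          # list packages in a Debian-based image
docker run --rm alpine:latest apk info    # list packages in an Alpine-based image
docker history alpine:latest              # show the layers the image was built from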

Running Docker Image

The user guide states that an image should be run as follows:
docker run -t -i ubuntu /bin/bash
I get that -t creates the pseudo-terminal and -i makes it interactive. But it seems that the /bin/bash part is unnecessary: whether I run it with or without /bin/bash, I'm given an interactive prompt that I can read from and write to.
root@77eeb1f4ac2a:/#
Why do we need /bin/bash?
Part 2
I'm running on Docker for Mac. When I download the hello-world binary and run it, it's only 1kb. Obviously a Linux image wasn't downloaded with it. Is the small hello-world binary running off my Mac kernel or off of a small Linux kernel that comes with Docker for Mac?
Why do we need /bin/bash?
Because while the ubuntu image may be configured to run /bin/bash by default, that's not going to be true of every image. If you have an image that starts a webserver by default and you want to run bash, you need to make that explicit. Some images don't specify any default command at all, leading to:
$ docker run -it alpine
docker: Error response from daemon: No command specified.
It never hurts to be explicit when starting a container, especially with an image that you didn't build yourself.
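If you want to see what a given image runs by default, docker inspect can show you (ubuntu here is just an example image):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' ubuntu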
When I download the hello-world binary and run it...
Which hello-world binary?
"but is a VM of Linux executing it or is my mac executing it?"
Docker only runs under Linux. When you use Docker under OS X or Windows, you are running containers inside a Linux VM spawned for that purpose by docker-machine (or, previously, boot2docker). Under Windows, Docker uses Hyper-V; on OS X it previously used VirtualBox, and more recent versions may use something else (it's been a while since I've run Docker under OS X).
Part 1:
Whatever you pass after docker run -t -i ubuntu is the first command that your container will run. You can try /bin/bash, /bin/sh, or even echo hello and see it in action. Ubuntu uses bash by default, but other images run other commands based on their Dockerfiles.
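For example, both of these are safe to try (the second prints hello and exits immediately):
docker run -t -i ubuntu /bin/sh      # start a different shell
docker run -t -i ubuntu echo hello   # run a one-off command and exit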
Part 2:
When you run hello-world, a Docker container is created from the hello-world image. Containers "include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system."
hello-world specifically is built from scratch (https://hub.docker.com/_/scratch/).
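For a sense of how small such an image can be, a scratch-based Dockerfile is just a couple of lines (a sketch modeled on the hello-world image; hello stands in for any statically linked binary):
FROM scratch
COPY hello /
CMD ["/hello"]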

Docker Compose In Production

I'm trying to use docker-compose to build and bring up a simple Node.js application. I ran into the same problem with a Django application too, so I think I'm just missing some vital step. Here is my Dockerfile:
FROM node:4.2.1
CMD mkdir -p /var/app
COPY . /var/app
EXPOSE 3000
CMD node /var/app/index.js
When I run docker-compose up pointed at a DigitalOcean machine, it throws a Node error suggesting it can't find the code in /var/app. Is there some other mechanism I'm supposed to use to get my code onto the machine, other than Docker?
The line CMD mkdir -p /var/app is wrong: there should be only one CMD in a Dockerfile, usually at the end.
Only the last CMD directive in a chain of inherited Docker images will be executed.
You should use RUN instead.
From the Dockerfile reference:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The main purpose of a CMD is to provide defaults for an executing container.
Try taking out the mkdir step. You also need to set the working directory.
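Putting both suggestions together, a corrected Dockerfile could look like this (a sketch; WORKDIR creates the directory if it doesn't exist, so the mkdir step is unnecessary):
FROM node:4.2.1
WORKDIR /var/app
COPY . /var/app
EXPOSE 3000
CMD ["node", "index.js"]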
