I'm trying to deploy a Yesod app to an Ubuntu server using Keter. So far this is what I've done:
Install Keter on the server using the provided setup script
wget -O - https://raw.github.com/snoyberg/keter/master/setup-keter.sh | bash
Run yesod keter to create a bundle on my dev machine (running OS X Mavericks)
scp the *.keter file into /opt/keter/incoming on the server
At this point, I think I should be able to go to my domain and have the app working, but I'm seeing a "Welcome to nginx" page instead. Additionally, all I have in /opt/keter/log/keter/current.log is:
2014-05-10 18:21:01.48: Unpacking bundle '/opt/keter/etc/../incoming/DoDeployTest.keter'
And I think I should have lines about starting a process and loading an app.
What do I need to do to deploy Yesod with Keter? Is there a good tutorial covering this? So far, a lot of the ones I'm reading seem somewhat outdated, since they don't mention useful things like yesod keter (hard to say for sure, though).
I'm pretty new to Haskell/Yesod/Keter/Sysadmin work so any help is appreciated.
Appendix:
GitHub repo of the Yesod project (it's a vanilla yesod init with postgres, plus a configured keter.yaml file)
Keter.yaml file:
exec: ../dist/build/DoDeployTest/DoDeployTest
args:
- production
host: "http://www.yesodonrails.com"
postgres: true
root: ../static
To maximize your chances of success, I would strongly advise you to compile both Keter and your Yesod application on the same platform you deploy to. It's also recommended to compile your application on a different machine from the one you're deploying to, since GHC compilation is very resource intensive. It looks like you're already doing the latter, albeit compiling on OS X and deploying to an Ubuntu server, which is not going to work, as described in your own answer below.
My recommendation would be to use Docker containers to ensure consistent environments. I have a GitHub project containing a number of Dockerfiles I've been working on to address this and I'll describe roughly what they do here. Note that this GitHub project is still a work in progress and I don't have everything absolutely perfect yet. This is also similar to the answer I gave to this question.
keter-build:
FROM haskell:latest
# git is needed to fetch the Keter sources
RUN apt-get update && apt-get install -y \
    git
RUN mkdir /src
# Pin Keter to a known revision
RUN cd /src && \
    git clone https://github.com/snoyberg/keter && \
    cd keter && \
    git checkout e8b5a3fd5e14dfca466f8acff2a02f0415fceeb0
WORKDIR /src/keter
RUN cabal update
RUN cabal install keter
This configures a container that can be used to build the keter binary at a given revision from the Keter GitHub project.
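For reference, a minimal sketch of how this image might be built (the keter-build tag is my own naming, not anything the project mandates):
# run from the directory containing the keter-build Dockerfile
docker build -t keter-build .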
keter-host:
FROM debian
# Runtime dependencies for Keter and the apps it hosts
RUN apt-get update && apt-get install -y \
    ca-certificates \
    libgmp-dev \
    nano \
    postgresql
# The keter binary and global config are supplied as build artifacts
COPY artifacts/keter /opt/keter/bin/
COPY artifacts/keter-config.yaml /opt/keter/etc/
EXPOSE 80
CMD ["/opt/keter/bin/keter", "/opt/keter/etc/keter-config.yaml"]
This container is a self-contained Keter host. You should ensure that the keter binary built in the keter-build container is available in the artifacts directory so that the COPY artifacts/keter /opt/keter/bin/ instruction copies it into the image.
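One possible way to get the binary out of the build container and start the host, assuming the keter-build and keter-host tags from above and cabal's default install location of /root/.cabal/bin inside the build image:
# Create a stopped container from the build image and copy the binary out
docker create --name keter-extract keter-build
docker cp keter-extract:/root/.cabal/bin/keter artifacts/keter
docker rm keter-extract
# Build the host image and run it with port 80 published
docker build -t keter-host .
docker run -d -p 80:80 keter-host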
yesod-env:
FROM haskell:latest
RUN apt-get update && apt-get install -y \
ca-certificates \
git \
nano \
wget
RUN echo 'deb http://download.fpcomplete.com/debian/jessie stable main' > /etc/apt/sources.list.d/fpco.list
RUN wget -q -O- https://s3.amazonaws.com/download.fpcomplete.com/debian/fpco.key | apt-key add -
RUN apt-get update && apt-get install -y \
stack
This is a container for building the Yesod app. Note that this Dockerfile is incomplete: I haven't got it to clone the app's source code and build it yet. However, this might get you started.
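For illustration only, the missing steps might look roughly like the lines below appended to yesod-env; the repository URL and paths are placeholders and I haven't verified this end to end:
# Fetch the application sources and build them with stack
RUN git clone https://github.com/youruser/yourapp /src/yourapp
WORKDIR /src/yourapp
RUN stack setup && stack build
# yesod-bin provides the `yesod keter` command used to produce the bundle
RUN stack install yesod-bin
RUN stack exec -- yesod keter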
Note that all three containers are ultimately based on the same debian Docker base image, so binaries produced in one container stand a good chance of being valid in the others. Some of my work was inspired by Dockerfiles designed by thoughtbot.
Ah, so based on the advice from the blog post introducing Keter, I tried to run the executable inside the *.keter file manually. Doing so yielded the message "cannot execute binary file". I suspect this is because I was originally compiling on a Mac and deploying to an Ubuntu instance (I had the same problem trying to deploy to Heroku).
Process for discovering this (might be slightly inaccurate):
cp /opt/keter/incoming/DoDeployTest.keter /tmp
cd /tmp
mv DoDeployTest.keter DoDeployTest.tar.gz
gunzip DoDeployTest.tar.gz
tar xvf DoDeployTest.tar
# run the executable (path is relative to the unpacked bundle)
./dist/build/appname/appname
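A quicker way to confirm an architecture mismatch, assuming the file utility is available on the server, is to inspect the binary directly; a Mac-built executable shows up as Mach-O rather than ELF:
# Reports the binary format and target architecture
file dist/build/appname/appname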
Related
I need to deploy Node-RED to Fargate, but on each rebuild Node-RED uses the container's hostname when creating flow.json, which makes it hard to load the old config into the new instance.
Running with docker run -h works, but that doesn't work in Fargate. What can I do?
Of course, the released Node-RED Docker image solves this problem, but I don't know how to call CLI tools from it: if I base my image on node-red, how can I install aws-cli2 and call it from the Node-RED dashboard? This is what I tried:
FROM nodered/node-red:latest
#USER root
RUN curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip
RUN unzip awscliv2.zip
RUN ./aws/install
CMD ["node-red"]
The correct Dockerfile would be:
FROM nodered/node-red:latest
USER root
RUN curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip
RUN unzip awscliv2.zip
RUN ./aws/install
RUN rm -rf ./aws
USER node-red
But the problem is that the image is based on Alpine Linux, which uses the musl standard libraries instead of glibc, and the AWS tools will not work with that runtime.
The easiest solution is to use the Debian-based build that I mentioned in the first comment. The Dockerfile for that can be found here; follow the instructions there to use docker-debian.sh, which will create an image called testing:node-red-build that you can then use as the base for the Dockerfile I showed earlier:
FROM testing:node-red-build
USER root
RUN curl https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o awscliv2.zip
RUN unzip awscliv2.zip
RUN ./aws/install
RUN rm -rf ./aws
USER node-red
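To sanity-check the result (node-red-aws is just an assumed tag name), you can build the image and invoke the CLI directly, bypassing the image's normal entrypoint:
docker build -t node-red-aws .
# Run the aws binary instead of Node-RED to verify the install
docker run --rm --entrypoint aws node-red-aws --version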
I'm planning to use Docker to deploy a node.js app. The app has several dependencies that require node-gyp. Node-gyp builds these modules (e.g. canvas, lwip, qrcode) against compiled libraries on the delivery platform, and in my experience these builds are highly dependent on the OS version and installed libraries, and they often break a simple npm install.
So is building my Dockerfile FROM node:version the correct approach? This seems to be the approach shown in every Docker/Node tutorial I've found so far. But if I build from a node image, what will happen when I deploy the container? How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version. But I think this would mean installing nodeJS into the Ubuntu image and the whole thing would be much larger.
Are there other ways of handling this?
How can I ensure the target host will have the libraries needed to compile the node-gyp modules?
The target host is running docker as well. As long as the dependencies are in your image then your server has them as well. That's the entire point with docker if you ask me. If it runs locally, then it runs on the server as well.
I'd go with node-alpine (FROM node:8-alpine) for even smaller files. I struggled with node-gyp before I wrapped my head around it, but now I don't even see how I ever thought it was a problem. As long as you add build tools RUN apk add python make gcc g++ you are good to go (this adds some 100-200mb to the size however).
Also, if builds ever get time consuming (say you find yourself rebuilding your image with --no-cache every now and then), it can be a good idea to split it up into a base image of your own and another image FROM my-base-image:latest which contains the things that you change more often, as sketched below.
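A minimal sketch of that split, with my-base-image as a placeholder name:
# Dockerfile.base: rarely changes, rebuilt only when build deps change
# build it first with: docker build -f Dockerfile.base -t my-base-image:latest .
FROM node:8-alpine
RUN apk add --no-cache python make gcc g++

# Dockerfile: changes often, rebuilds quickly on top of the cached base
FROM my-base-image:latest
WORKDIR /app
COPY package.json .
RUN npm install --production
COPY . .
CMD ["node", "index.js"]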
There is some learning curve for sure, but I didn't find it that steep. At least not if you have touched docker before.
The other way I'm looking at is to build the Dockerfile FROM ubuntu:version.
I had only used CentOS before jumping on docker, and I run CentOS on my servers. So I thought it would be a good idea to run CentOS-images as well, but I found that to be just silly. There is absolutely zero gain unless you need something very OS-specific. Now I've only used alpine for maybe half a year, and so far the only alpine-specific command I've needed to learn is apk add/del.
And you probably know this already, but don't spend too much time optimizing Docker image size in the beginning. You can reduce layer size a lot by combining commands on one line (adding packages, running a command, removing packages), but that defeats the Docker image cache if you make any small changes in big layers. Better to leave that out until it matters.
If you need to build stuff using node-gyp, you can add the lines below in place of your plain npm install or yarn install; the build tools are installed only for the duration of the install and then removed:
RUN apk add --no-cache --virtual .build-deps make gcc g++ python && \
    npm install --production --silent && \
    apk del .build-deps
Or, even simpler, you can install alpine-sdk, which is similar to Debian's build-essential:
RUN apk add --no-cache --virtual .build-deps alpine-sdk python && \
    npm install --production --silent && \
    apk del .build-deps
Source: https://github.com/mhart/alpine-node/issues/27#issuecomment-390187978
Looking back (2 years later), managing node dependencies in a container is still a challenge. What I do now is:
Build the Docker image FROM node:10.16.0-alpine (or another node version). These are official node images on hub.docker.com. Docker recommends alpine, and Node.js builds on top of that, including node-gyp, so it's a good starting point;
Include a RUN apk add --no-cache with all the libraries needed to build the dependent module, e.g. canvas (see the example below);
Include a RUN npm install canvas in the Dockerfile; this builds the node module (e.g. canvas) into the Docker image, so it gets loaded into any container run from that image.
But this can get ugly. Alpine uses different libraries from heavier-weight OSes: notably, it uses musl in place of glibc. A dependent module may need to link against glibc, in which case you have to add it to the image. Sasha Gerrand offers one way to do it with alpine-pkg-glibc.
Example installing node-canvas v2.5, which links to glibc:
# geo_core layer
# build on a node image, in turn built on Alpine Linux, pulled from hub.docker.com
FROM node:10.16.0-alpine

# add libraries needed to build canvas
RUN apk add --no-cache \
    build-base \
    g++ \
    libpng \
    libpng-dev \
    jpeg-dev \
    pango-dev \
    cairo-dev \
    giflib-dev \
    python

# add glibc, then install canvas
RUN apk add --no-cache ca-certificates wget && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk && \
    apk add glibc-2.29-r0.apk && \
    npm install canvas@2.5.0
I have a nodejs app that connects to a blockchain on the same server. Normally I use 127.0.0.1 + the port number (each chain gets a different port).
I decided to put the chain and the app in the same container, so that the frontend developers don't have to bother with setting up the chain.
However, while the chain starts when I build the image, it isn't running when I run the image. Furthermore, when I go into the container and try to start it manually, it says "besluitChain2#xxx.xx.x.2:PORT", so I thought that instead of 127.0.0.1 I needed to connect to the port on 127.0.0.2, but that doesn't seem to work.
I'm sure connecting like this isn't new, and it should work the same way with a database. Can anyone help? The first piece of advice I need is how to debug these images, because I have no idea where things go wrong.
Here is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y apt-utils
RUN apt-get install -y build-essential
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
ADD workfolder/app /root/applications/app
ADD .multichain /root/.multichain
RUN npm install \
&& apt-get upgrade -q -y \
&& apt-get dist-upgrade -q -y \
&& apt-get install -q -y wget curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& cd /tmp \
&& wget http://www.multichain.com/download/multichain-1.0-beta-1.tar.gz \
&& tar -xvzf multichain-1.0-beta-1.tar.gz \
&& cd multichain-1.0-beta-1 \
&& mv multichaind multichain-cli multichain-util /usr/local/bin \
&& cd /tmp \
&& rm -Rf multichain*
RUN multichaind Chain -daemon
RUN cd /root/applications/app && npm install
CMD cd /root/applications/app && npm start
EXPOSE 8080
By the way, due to policies I can only connect to the server on port 80 to check whether it works. When I run the Docker image I can reach /api-docs, but not any of the endpoints that interact with the blockchain.
I decided to put the chain and the app in the same container
That was a mistake, I think.
Docker is not a virtual machine. It's a virtual application or process instance.
A Docker container runs a Linux distro under the hood, but this is a detail that should be ignored when thinking about the purpose of Docker.
You should think of a Docker container as a single application process, not as a full virtual machine that generally runs multiple processes. This is evidenced by the way Docker shuts the container down once the main process exits (the process with PID 1).
I've got a longer post about this, here: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
Additionally, the RUN multichaind instruction in your Dockerfile doesn't run the chain in your image / container. It tells Docker to execute that command once, during the build process.
A Dockerfile is a list of instructions for building an image. The wording here is important. An image is not executed, it is built. An image is a static, immutable template from which a Container is executed.
RUN multichaind Chain -daemon
By putting this RUN instruction in your image, you are temporarily starting the chain, but it is immediately halted (forcefully) when the image layer is done building. It will not remain running, because an image is not executed, it is built.
My advice is to put the chain in a separate image.
You'll have one image for the chain, and one for the node.js app.
You can use docker-compose to make it easier to run containers from both of these at the same time. Or you can run containers manually from them. Either way, you need two images.
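A rough sketch of what that could look like with docker-compose; the image names and port mapping are placeholders and I haven't tested this against multichain:
version: "2"
services:
  chain:
    image: my-multichain-image
    # run the chain in the foreground so the container stays alive
    command: multichaind Chain
  app:
    image: my-node-app-image
    # the app connects to the chain at hostname "chain", not 127.0.0.1
    ports:
      - "80:8080"
    depends_on:
      - chain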
I'm currently rebuilding our build server, and creating a set of Docker images for our various projects, as each has rather different toolchain and library requirements. Since Docker currently only runs on 64-bit hosts, the build server will be a x86_64 Fedora 22 machine.
These images must be able to build historical/tagged releases of our projects without modification; we can make changes to the build process for each project if needed, but only for current trunk and future releases.
Now, one of my build environments needs to reproduce an old i686 build server. For executing 32-bit programs I can simply install i686 support libraries (yum install glibc.i686 ncurses-libs.i686), but that doesn't help me build 32-bit programs without modifying Makefiles to pass -m32 to GCC... and, as stated above, I do not wish to alter historical codebases at all.
So, my current idea is to basically fake a i686 version of CentOS in a Docker container by installing all i686 packages, including GCC. That way, although uname -a will report the host's x86_64 architecture, everything else within the container should be pretty consistent. I took the idea (and centos6.tar.gz) from the "centos-i386" base image which, in essence, I'm trying to reproduce for my own local image.
Sadly, it's not going very well.
Here's a minimal-ish Dockerfile:
FROM scratch
# Inspiration from https://hub.docker.com/r/toopher/centos-i386/~/dockerfile/
ADD centos6.tar.gz /
RUN echo "i686" > /etc/yum/vars/arch && \
echo "i386" > /etc/yum/vars/basearch
ENTRYPOINT ["linux32"]
# Base packages
RUN yum update -y && yum -y install epel-release patch sed subversion bzip zip
# AT91SAM9260 ARM compiler
ADD arm-2009q1-203-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2 /usr/local/
ENV PATH $PATH:/usr/local/arm-2009q1/bin
# AT91SAM9260 & native cxxtest
ADD cxxtest-3.10.1.tar.gz /staging/
WORKDIR /staging/cxxtest/
RUN cp -r cxxtest /usr/local/arm-2009q1/arm-none-linux-gnueabi/include/
RUN cp -r cxxtest /usr/local/include/
RUN cp cxxtestgen.pl /usr/bin/
RUN ln -s /usr/bin/cxxtestgen.pl /usr/bin/cxxtestgen
WORKDIR /
RUN rm -rf /staging/
The build fails on the first "RUN" in the cxxtest installation step:
/bin/sh: cp: command not found
The command '/bin/sh -c cp -r cxxtest /usr/local/arm-2009q1/arm-none-linux-gnueabi/include/' returned a non-zero code: 127
What's wrong?
Because your image is being built from "scratch", not from the "centos6" base image (as is the case with the published "centos6-i686" image), your shell context has no meaningful PATH set, even though you unpacked CentOS 6 into the filesystem as your first step. Adding the following after your "ENTRYPOINT" will make all the usual binaries accessible again for the duration of the build process:
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Containers created from your image (had it built successfully; say, by not trying to build cxxtest) would never have been affected, as fresh Bash instances would have had the PATH set correctly through /etc/profile.
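In context, the top of the Dockerfile would then look like this (everything else unchanged):
FROM scratch
ADD centos6.tar.gz /
RUN echo "i686" > /etc/yum/vars/arch && \
    echo "i386" > /etc/yum/vars/basearch
ENTRYPOINT ["linux32"]
# Restore a usable PATH for all subsequent RUN instructions
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin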
I'm working on a Haskell web app using Yesod that I eventually want to deploy to EC2. Can someone recommend an AMI that has a recent Haskell Platform and a git client installable from the repositories?
If you look at Michael Snoyman's setup script here, it contains the steps he used to get an EC2 instance going on an Ubuntu AMI.
https://github.com/yesodweb/benchmarks/blob/master/setup.sh
I also have Yesod running from source on Amazon Linux. It takes a few hours to build everything, but I think any of the standard boxes with at least 8 GB of memory should do it (otherwise GHC can't link). This is how I did it:
# install what packages are available
sudo yum --enablerepo=epel install haskell-platform git make ncurses-devel patch
# make and install ghc
wget http://www.haskell.org/ghc/dist/7.0.4/ghc-7.0.4-src.tar.bz2
tar jxf ghc-7.0.4-src.tar.bz2
rm ghc-7.0.4-src.tar.bz2
cd ghc-7.0.4
./configure
make -j 4
# wait a few hours
sudo make install
cd
rm -rf ghc-7.0.4
# make and install haskell-platform
wget http://lambda.haskell.org/platform/download/2011.4.0.0/haskell-platform-2011.4.0.0.tar.gz
tar zxf haskell-platform-2011.4.0.0.tar.gz
cd haskell-platform-2011.4.0.0
./configure
make -j 4
sudo make install
cd
rm -rf haskell-platform-2011.4.0.0
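Once the install finishes, a quick sanity check (assuming everything landed in the default /usr/local locations):
# Both should report the freshly installed versions
ghc --version
cabal --version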
You shouldn't compile on an EC2 instance. Choose a generic AMI like Ubuntu, perform the compile on a local machine running the same OS and architecture, then upload the static binary to EC2.
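For example (hypothetical paths and hostname), after building on a local machine running the same 64-bit Ubuntu as the AMI:
# Copy the compiled binary to the instance
scp dist/build/myapp/myapp ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:~/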