Don't publish sub modules to docker - sbt-native-packager

I have two projects:

val common = Project("common", file("common"))
  .enablePlugins(PlayScala)

val frontend = Project("frontend", file("frontend"))
  .enablePlugins(PlayScala)
  .dependsOn(common)
  .aggregate(common)
Now I want to do a Docker build, which works out of the box, but when I run
sbt "project frontend" docker:publish
it publishes both modules to Docker. How can I prevent the common module from being pushed to my Docker registry?

You can either remove the aggregate(common) or override the docker:publishLocal task like this:
...
  .settings(
    publishLocal in Docker := {}
  )
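For context, a minimal sketch (sbt 0.13-style syntax, as in the question) of the common project with the Docker publish tasks stubbed out; the extra publish in Docker override is an assumption, added in case the remote docker:publish task is also aggregated:

val common = Project("common", file("common"))
  .enablePlugins(PlayScala)
  .settings(
    // turn the Docker publish tasks into no-ops for this sub project
    publishLocal in Docker := {},
    // assumption: also stub docker:publish so running it on frontend
    // does not push common to the registry
    publish in Docker := {}
  )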

Related

Is it possible to install and run docker inside node container in Jenkins?

This is a somewhat complicated situation: I have Jenkins installed inside a Docker container, and I'm trying to run tests for a Node.js app, but the test environment requires docker and docker-compose to be available. At the moment, the Jenkins configuration is done through pipeline code.
So far, I've tried pulling the docker image inside a stage, as follows:
pipeline {
    agent {
        docker {
            image 'node'
        }
    }
    stages {
        stage("Checkout") {
            steps {
                git url: ....
            }
        }
        stage("Docker") {
            steps {
                script {
                    def image = docker.image('docker')
                    image.pull()
                    image.inside() {
                        sh 'docker --version'
                        sh 'docker-compose --version'
                    }
                }
            }
        }
    }
}
This fails with 'docker: not found'. I was expecting the script to succeed because exactly the same steps work with 'agent any'; inside the node image, though, it doesn't seem to work.
I'm also not sure if this is the right way to do it, because as I understand it, running Docker inside Docker is not recommended. One method I have found is to run Docker with -v /var/run/docker.sock:/var/run/docker.sock ..., but I am currently running through docker-compose, with the installation steps from https://www.jenkins.io/doc/book/installing/docker/ (instead of individual docker commands, I've combined both jenkins and jenkins-blueocean into a docker-compose file), and that did not work.
At this point I'm out of ideas, and any solutions or other suggestions as to how to run both Node.js and Docker in the same environment would be greatly appreciated.
You can try the docker-in-docker image: https://hub.docker.com/_/docker
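A rough sketch of how that could be wired into the compose-based setup from the jenkins.io guide linked above (service, volume, and network names follow that guide and may need adjusting): a docker:dind sidecar gives pipeline steps a Docker daemon to talk to.

# Sketch only: docker:dind sidecar next to the Jenkins service from
# https://www.jenkins.io/doc/book/installing/docker/ (names taken from that guide).
services:
  docker:
    image: docker:dind
    privileged: true                 # dind requires privileged mode
    environment:
      DOCKER_TLS_CERTDIR: /certs
    volumes:
      - jenkins-docker-certs:/certs/client
      - jenkins-data:/var/jenkins_home
    networks:
      jenkins:
        aliases:
          - docker                   # reachable from the Jenkins container as tcp://docker:2376

volumes:
  jenkins-docker-certs:
  jenkins-data:

networks:
  jenkins:

The Jenkins container then needs DOCKER_HOST=tcp://docker:2376, DOCKER_CERT_PATH=/certs/client, and DOCKER_TLS_VERIFY=1 in its environment, and whichever agent image runs your steps still needs the docker CLI installed (which is why the docker image above is a reasonable choice for that stage).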

Which plugin is required to have docker.build work in Jenkins?

I am using Docker on Mac and have Jenkins running in a Docker container.
The client is interacting with the Docker daemon on the host machine.
I have the following plugins installed:
docker-plugin
workflow-aggregator
The docker client / command works in the container. I have also checked it using sh, and even the Docker cloud can spin up agents.
But the Jenkinsfile below constantly throws an error.
def image
pipeline {
    agent {
        label "container"
    }
    stages {
        stage('Build') {
            steps {
                script {
                    image = docker.build("username/image:$BUILD")
                }
            }
        }
    }
}
This is the error message:
groovy.lang.MissingPropertyException: No such property: docker for class: groovy.lang.Binding
Error: No such property: docker for class: groovy.lang.Binding
'No such property: docker' indicates that the Docker Pipeline plugin is not installed.
It's a little confusing because the names of these three plugins are quite similar while their IDs differ:
the Docker plugin has the ID docker-plugin
the Pipeline plugin has the ID workflow-aggregator
the Docker Pipeline plugin has the ID docker-workflow
If you have this issue:
groovy.lang.MissingPropertyException: No such property: docker for class: groovy.lang.Binding.
We most likely encountered the same issue. To fix it, I only had to install the Docker Pipeline plugin in Jenkins, so all you have to do is go to:
Jenkins Homepage > Manage Jenkins > Manage Plugins > Available
Search for Docker Pipeline, install it, restart Jenkins, and you are ready to go.
For more info, see the Docker Pipeline plugin documentation on scripted usage.
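Once docker-workflow is installed, the global docker variable becomes available in script blocks and scripted pipelines. A minimal sketch (the image name and tag are placeholders, not from the question):

// Requires the Docker Pipeline (docker-workflow) plugin.
node {
    checkout scm
    // builds the Dockerfile in the workspace and tags the result
    def image = docker.build("username/image:${env.BUILD_NUMBER}")
    // image.push()   // optional: push to a registry (needs configured credentials)
}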

pass filePath to dockerfile as variable - nodeJS dockerode Docker

In my case, I am creating a config.json that I need to copy from the host to my container.
I figured out that there is an option to pass args to my Dockerfile.
So the first step is to create the Dockerfile:
FROM golang
WORKDIR /go/src/app
# here we have a /foo directory
COPY . .
COPY $CONFIG_PATH ./foo/
EXPOSE $PORT
CMD ["./foo/run", "-config", "./foo/config.json"]
As you can see, I have two variables ["$CONFIG_PATH", "$PORT"].
These two variables are dynamic and come from my docker run command.
Here I need to copy my config file from my host into my container, and I need to run my project with that config.json file.
After building the image, the second step is to get the config file path from the user and run the Docker image with these variables:
let configFilePath = '/home/baazz/baaaf/config.json'
let port = "8080"

docker.run('my_image', null, process.stdout, { Env: [`$CONFIG_PATH=${configFilePath}`, `$PORT=${port}`] })
  .then(data => { })
  .catch(err => { console.log(err) })
I am getting this error message when I try to execute my code:
Error opening JSON configuration (./foo/config.json): open
./foo/config.json: no such file or directory . Terminating.
You generally don’t want to COPY configuration files like this in your Docker image. You should be able to docker run the same image in multiple environments without modification.
Instead, you can use the docker run -v option to inject the correct config file when you run the image:
docker run -v $PWD/config-dev.json:/go/src/app/foo/config.json my_image
(The Dockerode home page shows an equivalent Binds option. In Docker Compose, this goes into the per-container volumes:. There’s no requirement that the two paths or file names match.)
Since file paths like this become part of the external interface to how people run your container, you generally don’t want to make them configurable at build time. Pick a fixed path and document that that’s the place to mount your custom config file.
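With dockerode specifically, a hedged sketch of that bind mount (the paths are the ones from the question and purely illustrative):

// Sketch only: mount the host config file onto the fixed path the binary expects.
const Docker = require('dockerode');
const docker = new Docker();

docker.run('my_image', [], process.stdout, {
  HostConfig: {
    // host path : container path (read-only)
    Binds: ['/home/baazz/baaaf/config.json:/go/src/app/foo/config.json:ro']
  }
}).catch(err => console.log(err));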

Concourse CI - Build Artifacts inside source, pass all to next task

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.
The plan is to do something like this:
                                        ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                        `-> build website -> dockerize
The problem is, after npm install a new container is created, so the node_modules directory is lost. I want to pass node_modules to the later tasks, but because it is "inside" the source code, Concourse doesn't like it and gives me:
invalid task configuration:
you may not have more than one input or output when one of them has a path of '.'
Here's my job setup:
jobs:
- name: test
  serial: true
  disable_manual_trigger: false
  plan:
  - get: source-code
    trigger: true
  - task: npm-install
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: node, tag: "6" }
      inputs:
      - name: source-code
        path: .
      outputs:
      - name: node_modules
      run:
        path: npm
        args: [ install ]
  - task: npm-test
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: node, tag: "6" }
      inputs:
      - name: source-code
        path: .
      - name: node_modules
      run:
        path: npm
        args: [ test ]
Update 2016-06-14
Inputs and outputs are just directories. So you put whatever you want to output into an output directory, and you can then pass it to another task in the same job. Inputs and outputs cannot overlap, so to do this with npm you'd have to copy either node_modules or the entire source folder from the input folder to an output folder, then use that in the next task.
This doesn't work between jobs, though. The best suggestion I've seen so far is to use a temporary git repository or bucket to push everything up. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.
There is a resource specifically designed for this use case of npm between jobs. I have been using it for a couple of weeks now:
https://github.com/ymedlop/npm-cache-resource
It basically allows you to cache the first npm install and inject it as a folder into the next job of your pipeline. You could quite easily set up your own caching resources by reading the source of that one as well, if you want to cache more than node_modules.
I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.
Be aware that some npm packages have native bindings that need to be built against standard libraries matching the container's Linux version, so if you move between different types of containers a lot you may run into issues with musl libc and the like. In that case I recommend either standardizing on the same container type throughout the pipeline or rebuilding the node_modules in question...
There is a similar resource for Gradle (on which the npm one is based):
https://github.com/projectfalcon/gradle-cache-resource
This doesn't work between jobs though.
This is by design. Each step (get, task, put) in a Job is run in an isolated container. Inputs and outputs are only valid inside a single job.
What connects jobs is resources. Pushing to git is one way, but it'd almost certainly be faster and easier to use a blob store (e.g. S3) or file store (e.g. FTP).
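To illustrate the within-one-job approach, here is a hedged sketch of an npm-install task that copies the installed modules into its declared output (the output name installed-modules and the shell wrapper are illustrative, not from the original pipeline):

# Sketch only: run npm install, then copy node_modules into an output
# directory that the next task in the same job can declare as an input.
- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: node, tag: "6"}
    inputs:
    - name: source-code
    outputs:
    - name: installed-modules
    run:
      path: sh
      args:
      - -exc
      - |
        cd source-code
        npm install
        cp -R node_modules ../installed-modules/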

Golang Mac OSX build for Docker machine

I need to run a Golang application on a Docker machine.
I'm working on Mac OS X, and Docker runs on top of a Linux virtual machine, so binaries built on the Mac are not runnable in Docker.
I see two ways here:
cross-compile the binaries on the Mac for Linux
copy the project sources into Docker and run 'go get' and 'go build' there
The first is hard because of CGO (it is used by some imported libraries).
The second is very slow because of the 'go get' step.
Can you please tell me which way is most common in this situation? Or maybe I'm doing something wrong?
Here is a solution that makes cross-compiling super easy, even with CGO.
I stumbled upon it recently after wasting a lot of time getting a new Windows build server to build my Go app.
Now I just compile it on my Mac and will create a Linux build server with it:
https://github.com/karalabe/xgo
Many thanks to Péter Szilágyi aka karalabe for this really great package!
How to use:
have Docker running
go get github.com/karalabe/xgo
xgo --targets=windows/amd64 ./
There are lots more options!
-- edit --
Almost 3 years later I'm not using this any more, but the Docker image I use to build my application in a Linux-based CD pipeline is still based on the Docker images used by xgo.
I use the first approach. Here is a gulp task that builds the Go code. If the production flag is set, it runs GOOS=linux CGO_ENABLED=0 go build instead of a plain go build, so the binary will work inside a Docker container:
gulp.task('server:build', function () {
    var build;
    let options = {
        env: {
            'PATH': process.env.PATH,
            'GOPATH': process.env.GOPATH
        }
    }
    if (argv.prod) {
        options.env['GOOS'] = 'linux'
        options.env['CGO_ENABLED'] = '0'
        console.log("Compiling go binarie to run inside Docker container")
    }
    var output = argv.prod ? conf.paths.build + '/prod/bin' : conf.paths.build + '/dev/bin';
    build = child.spawnSync('go', ['build', '-o', output, "src/backend/main.go"], options);
    if (build.stderr.length) {
        var lines = build.stderr.toString()
            .split('\n').filter(function (line) {
                return line.length
            });
        for (var l in lines)
            util.log(util.colors.red(
                'Error (go install): ' + lines[l]
            ));
        notifier.notify({
            title: 'Error (go install)',
            message: lines
        });
    }
    return build;
});
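Stripped of the gulp wrapping, the production branch of that task boils down to something like this (the output path is illustrative):

# Cross-compile a Linux binary on macOS; CGO is disabled here, so
# cgo-using dependencies would need a Linux C toolchain instead.
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o build/prod/bin/main src/backend/main.go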
You could create a Docker container for the distinct OS you need for your executable and map a volume to your src directory. Run the container and build the executable from within it; you end up with a binary that you can run on that OS.
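A hedged sketch of that approach using the official golang image (the mount paths and binary name are illustrative):

# Build inside a Linux container; the binary lands back in the mounted
# source directory on the host and will run on Linux-based Docker hosts.
docker run --rm \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  golang \
  go build -v -o myapp .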
