Golang Mac OS X build for Docker machine - Linux

I need to run a Golang application on a Docker machine.
I'm working on Mac OS X, and Docker runs on top of a Linux virtual machine, so binaries built on the Mac are not runnable in Docker.
I see two ways here:
cross-compile the binaries on the Mac for Linux
copy the project sources into Docker, then run 'go get' and 'go build' there
The first is hard because of CGO (it is used in some imported libraries).
The second is very slow because of the 'go get' step.
Can you please tell me which way is the most common in this situation? Or am I doing something wrong?

Here is a solution that makes cross-compiling super easy, even with CGO.
I stumbled upon it recently, after wasting a lot of time getting a new Windows build server to build my Go app.
Now I just compile it on my Mac, and will create a Linux build server with it:
https://github.com/karalabe/xgo
Many thanks to Péter Szilágyi, alias karalabe, for this really great package!
How to use:
have Docker running
go get github.com/karalabe/xgo
xgo --targets=windows/amd64 ./
There are lots more options!
-- edit --
Almost 3 years later I'm no longer using this, but the Docker image I use to build my application in a Linux-based CD pipeline is still based on the Docker images used in xgo.

I use the first approach. Here is a gulp task that builds the Go code. If the production flag is set, it runs GOOS=linux CGO_ENABLED=0 go build instead of go build, so the binary will work inside a Docker container:
// Assumed requires for this task (not shown in the original answer);
// the 'conf' object with the build paths comes from the project's own config:
var gulp = require('gulp');
var child = require('child_process');
var util = require('gulp-util');
var notifier = require('node-notifier');
var argv = require('yargs').argv;

gulp.task('server:build', function () {
    var build;
    var options = {
        env: {
            'PATH': process.env.PATH,
            'GOPATH': process.env.GOPATH
        }
    };
    if (argv.prod) {
        options.env['GOOS'] = 'linux';
        options.env['CGO_ENABLED'] = '0';
        console.log("Compiling go binary to run inside Docker container");
    }
    var output = argv.prod ? conf.paths.build + '/prod/bin' : conf.paths.build + '/dev/bin';
    build = child.spawnSync('go', ['build', '-o', output, 'src/backend/main.go'], options);
    if (build.stderr.length) {
        var lines = build.stderr.toString()
            .split('\n').filter(function (line) {
                return line.length;
            });
        for (var l in lines) {
            util.log(util.colors.red(
                'Error (go build): ' + lines[l]
            ));
        }
        notifier.notify({
            title: 'Error (go build)',
            message: lines.join('\n') // notify expects a string message
        });
    }
    return build;
});

You could create a Docker container from the target OS you need for your executable, and map a volume to your src directory. Run the container and build the executable from within it. You end up with a binary that runs on that OS.

Related

Is it possible to install and run docker inside node container in Jenkins?

This is a somewhat complicated situation: I have Jenkins installed inside a Docker container. I'm trying to run some tests for a node.js app, but the test environment requires docker and docker-compose to be available. At the moment, the Jenkins configuration is done through pipeline code.
So far, I've tried pulling docker inside a stage, as follows:
pipeline {
    agent {
        docker {
            image 'node'
        }
    }
    stages {
        stage("Checkout") {
            steps {
                git url: ....
            }
        }
        stage("Docker") {
            steps {
                script {
                    def image = docker.image('docker')
                    image.pull()
                    image.inside() {
                        sh 'docker --version'
                        sh 'docker-compose --version'
                    }
                }
            }
        }
    }
}
The error returned is 'docker: not found'. I was expecting the script to succeed, because exactly the same steps worked with 'agent any'; inside the node image they don't seem to work.
I'm also not sure this is the right way to do it, because as I understand it, running Docker inside Docker is not recommended. One method I have found is to mount the host's socket, as in docker run -v /var/run/docker.sock:/var/run/docker.sock ..., but I'm currently running through docker-compose, with installation steps from https://www.jenkins.io/doc/book/installing/docker/ (instead of individual docker commands, I've combined both jenkins and jenkins-blueocean into one docker-compose file), and that did not work.
At this moment, I'm out of idea and any solutions or other suggestions as to how to run both node.js and docker in the same environment, would be greatly appreciated.
You can try the docker-in-docker image: https://hub.docker.com/_/docker

How to build docker image without having to use the sudo keyword

I'm building a node.js app which allows people to run code on my server. I'm using Docker to containerise each user's code so that it can't steal data or, in general, do anything it shouldn't. I have a Docker image template that is copied into the user's personal app directory, and I want to build the image using this function I've written:
const util = require("util");
const exec = util.promisify(require("child_process").exec);
async function buildContainer(path, dockerUser) {
    return await exec(`sudo docker build -t user_app_${dockerUser} ${path}`);
}
However, when I go to use it, it asks for my sudo password, as if I were executing it manually in a terminal window.
Is there any way I can run this function without having to include the sudo keyword?
Thanks in advance.
You can use podman instead of docker; with podman you don't need sudo.
It supports most of the same commands as docker, for example:
podman build
podman run
and so on...
Hope that helps :)
Regards

Dotnet Core - Get the application's launch path

Question - Is there a better/right way to get the application's launch path?
Setup -
I have a console application that runs in a Linux Debian Docker image. I am building the application using the --runtime linux-x64 command-line switch, and have all the runtime identifiers set appropriately. I was expecting the application to behave the same whether launched via dotnet MyApplication.dll or ./MyApplication, but it does not.
Culprit Code -
I have deployed files in a folder below the application directory that I reference, so I do the following to get what I consider my launch path. I have read various articles saying this is the correct way to get what I want, but the result depends on how the application is launched.
using var processModule = Process.GetCurrentProcess().MainModule;
var basePath = Path.GetDirectoryName(processModule?.FileName);
When launching with the command dotnet MyApplication.dll, the path from the code above is /usr/share/dotnet.
When launching with the command ./MyApplication, the path is /app.
I understand why using dotnet would be different, since it is the process that is running my code, but it was still unexpected.
Any help here to what I should use given the current environment would be appreciated. Ultimately I need the path where the console application started from as gathered by the application when it starts up.
Thanks for your help.
This code should work:
// Assumed usings (not shown here): System.IO, System.Reflection,
// and Microsoft.Extensions.Configuration for IConfiguration.
public static IConfiguration LoadConfiguration()
{
    var assemblyDirectory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    .....
}

How a docker container can use another docker container built as an executable and not as a service

Objective:
I'm running container "A", which is basically a nodejs server. This server should run an executable "E" that is exposed in another running container.
In simplified code, "A" contains this snippet that uses "E":
const spawn = require('child_process').spawn;
const someArgsForE = {
    arg1: "some_string",
    arg2: 123
};
// E is the executable that would normally be run as: docker run E '{ arg1:"some_string", arg2:123 }' (ignore the correct escaping)
let childProcess = spawn("E", [JSON.stringify(someArgsForE)]);
childProcess.on('close', (code, signal) => {
    // do whatever with the result... maybe write to a volume
});
Ideally, "A" could implement some logic so that it is aware of the existence of "E":
If(serviceExists("E")){ ... do whatever ...}
since another executable "E_b" might exist and be used by the same server "A".
I cannot figure out how to achieve this with docker-compose without wrapping "E" (and possibly "E_b") in other nodejs services, rather than accessing them as executables.
Having Docker inside Docker and then using something like
let childProcess = spawn("docker", ["run", "E", args]);
is not ideal either.
Any clean solution?
This is impossible without giving the service unlimited root-level access over the host. This is not a privilege you usually want to give to processes with network-facing services.
The best approach for what you're describing is to make the "A" image self-contained by adding the "E" executable to it. Depending on what kind of executable it is, you might be able to install it with a package manager, or otherwise make it available:
FROM node
# Some things are installable via APT
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install -y --no-install-recommends \
e-executable
# Or sometimes you have an executable or tarball available locally
ADD f-executable.tar.gz /usr/local
# Routine stuff for a Node app
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
The alternative approach is to bind-mount the host's Docker socket into the container. As previously mentioned, this gives the service unlimited root-level access over the host. Common pitfalls of this approach include shell-injection attacks that let a caller docker run -v /:/host ..., filesystem permission problems, and directory-mapping issues, since the left-hand side of a docker run -v option is always a host path even when it's launched from a container. I'd pretty strongly suggest avoiding this path.

Execute OS specific script in node / Grunt

I have a Grunt task which executes a .cmd file on the local machine to do its thing. I need to use this task on the CI server, which is a Linux machine, and I have the equivalent .sh (shell script for Linux). I need a way to execute both without changing my Gruntfile.
Currently I have to change my Gruntfile to make it work locally on Windows, while the remote version uses .sh.
Any solution to do the same is welcome. Detecting the underlying OS? Or a way to call the same command, which internally calls the OS-specific one?
You could take advantage of node's process.platform:
process.platform
What platform you're running on: 'darwin', 'freebsd', 'linux', 'sunos' or 'win32'
console.log('This platform is ' + process.platform);
Then within the code, pick the file extension based on that:
var ext;
if (process.platform === "win32") {
    ext = ".cmd";
} else {
    ext = ".sh";
}
Alternatively, using grunt-shell or any other shell-command tool, take advantage of the similarities between the Windows and *nix shells, particularly ||. The first command will fail on *nix and fall back to the sh version:
shell: {
    options: {
        stderr: false,
        failOnError: true
    },
    command: 'cmd command.cmd || sh command.sh'
}