How do I run Linux tasks without Docker (on the underlying system)? - linux

The task's image_resource property is marked as optional in the documentation, but GNU/Linux tasks fail without it.
Also, the docs for the type property of image_resource say:
Required. The type of the resource. Usually docker-image
But I couldn't find any information about other supported types.
How can I run tasks on the underlying system without any container technology, like in my Windows and macOS workers?

In Concourse, you really are not supposed to do anything outside of Docker; that is one of its main features. Concourse runs in Docker containers and starts new containers for each build. If you want to run one or more Linux commands in sh or bash inside the container, you can try something like the task config below.
- task: linux
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: ubuntu, tag: '18.04'}
    run:
      dir: /<path-to-dir>
      path: sh
      user: root
      args:
        - -exc
        - |
          echo "Running in Linux!"
          ls
          scp <you#your-host-machine:file> .
          telnet <your-host-machine>
          <whatever>
          ...
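If you just want to experiment with such a task outside of a pipeline, one option (a sketch, assuming the fly CLI is installed and logged in to a target, and that the config: portion above is saved on its own as task.yml) is:

fly -t <your-target> execute --config task.yml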

Related

ibmcom/db2 docker image fails on m1

I'm having trouble setting up DB2 on macOS via Docker on my M1-Max MacBook Pro (32 GB RAM). I already had a look at this question, which might be related; however, there is not a lot of information there, and I cannot say exactly whether it is about the same thing.
I set up the following docker-compose.yml:
version: '3.8'
services:
  db2:
    image: ibmcom/db2
    platform: linux/amd64
    container_name: db2-test
    privileged: true
    environment:
      LICENSE: "accept"
      DB2INSTANCE: "db2dude"
      DB2INST1_PASSWORD: "db2pw"
      DBNAME: "RC1DBA"
      BLU: "false"
      ENABLE_ORACLE_COMPATIBILITY: "false"
      UPDATEVAIL: "NO"
      TO_CREATE_SAMPLEDB: "false"
      REPODB: "false"
      IS_OSXFS: "true"
      PERSISTENT_HOME: "true"
      HADR_ENABLED: "false"
      ETCD_ENDPOINT: ""
      ETCD_USERNAME: ""
      ETCD_PASSWORD: ""
    volumes:
      - ~/workspace/docker/db2-error/db2/database:/database
      - ~/workspace/docker/db2-error/db2/db2_data:/db2_data
    ports:
      - 50000:50000
On my Intel MacBook this spins up without any issue; on my M1 MacBook, however, after Task #4 has finished I see the following portion in the STDOUT:
DBI1446I The db2icrt command is running.
DBI1070I Program db2icrt completed successfully.
(*) Fixing /etc/services file for DB2 ...
/bin/bash: db2stop: command not found
From what I could figure out, the presence of (*) Fixing /etc/services file for DB2 ... already seems to be wrong (it does not appear in my Intel log and does not sound like everything is fine), and the /bin/bash: db2stop: command not found appears due to line 81 of /var/db2_setup/include/db2_common_functions, which states su - ${DB2INSTANCE?} -c 'db2stop force'.
As far as I understand, su - should run with the path of the target user. In every single .profile or .bashrc in the home directory, ~/sqllib/db2profile is being sourced (via . /database/config/db2dude/sqllib/db2profile).
However, when I am root inside the container (docker exec -it db2-test bash) and call su - db2dude -c 'echo $PATH', it prints /usr/local/bin:/bin:/usr/bin. The PATH is therefore obviously not as expected.
Maybe someone can figure out what's happening at this point. I also tried running Docker with the "new Virtualization framework", which did not change anything. I assume Docker's compatibility magic might not be perfect; however, I'm looking forward to finding some kind of workaround, maybe by building an image on top of ibmcom/db2.
I highly appreciate your time and advice. Thanks a lot in advance.
As stated in #mshabou's answer, there is no support yet. One way you can still make it work is by prepending your Docker command with DOCKER_DEFAULT_PLATFORM=linux/amd64 or executing export DOCKER_DEFAULT_PLATFORM=linux/amd64 in your shell before starting the container.
Alternatively, you can also use colima. Install colima as described on their GitHub page and then start it in emulated mode like colima start --arch x86_64. Now you will be able to use your ibmcom/db2 image the way you're used to (albeit with decreased performance).
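For reference, a rough sketch of both workarounds on the command line (assuming Docker or colima is installed and the docker-compose.yml from the question is in the current directory):

# Option 1: force amd64 emulation for everything started from this shell
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker compose up -d

# Option 2: run the whole Docker runtime emulated through colima
colima start --arch x86_64
docker compose up -d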
Db2 is not supported on the ARM architecture; only these architectures are supported: amd64, ppc64le, s390x.
https://hub.docker.com/r/ibmcom/db2

Docker - unable to run script

What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but am having issues running the required commands, for reasons that are beyond my level of understanding of Docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I submit the job to AWS Batch; the service pulls the image from ECR (Amazon Elastic Container Registry) and spins up a compute environment. The problem occurs when I try to run the command I pass in; I will go through it below.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not however able to run any of these commands that I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
set the permissions for logging
run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated in the python3 application
Questions
Why can I not create a directory called "logging" here? Where am I?
Am I running these commands properly as defined by AWS Batch, or by Docker?
What am I missing or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
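For illustration, a rough sketch of what the container properties of such a job definition could look like, with a single tokenized command and the runtime configuration passed in as environment variables (all names and values below are placeholders, not taken from the question):

{
  "containerProperties": {
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-application:latest",
    "vcpus": 2,
    "memory": 4096,
    "command": ["python", "my_app.py", "-i", "/src/in", "-o", "/src/out"],
    "environment": [
      { "name": "APIKEY",  "value": "..." },
      { "name": "BASEURI", "value": "..." }
    ]
  }
}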

Packer failed when executed on Gitlab-runner

I have a Packer file to deploy CentOS 7 using the vsphere-iso builder. It works OK when executed directly on a Linux server, but when I try to run the same Packer file using a gitlab-runner it fails, as it does not wait until the OS is installed. It fails after waiting for 1 minute, but if I run the packer command with -on-error=run-cleanup-provisioner the OS install finishes successfully, so clearly the issue is that Packer is just not waiting.
2021/07/20 12:02:40 packer.io plugin: [INFO] Waiting for IP, up to total timeout: 30m0s, settle timeout: 5m0s
==> vsphere-iso.autogenerated_1: Waiting for IP...
==> vsphere-iso.autogenerated_1: Clear boot order...
==> vsphere-iso.autogenerated_1: Power off VM...
==> vsphere-iso.autogenerated_1: Destroying VM...
2021/07/20 12:03:12 [INFO] (telemetry) ending
==> Wait completed after 1 minute 2 seconds
2021/07/20 12:03:12 machine readable: error-count []string{"1"}
==> Some builds didn't complete successfully and had errors:
My boot command is the following as I do not use DHCP.
boot_command = ["<up><tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/vmware-ks.cfg ip=10.118.12.117::10.118.12.1:255.255.255.0:{{ .Name }}.localhost:ens192:none<enter><wait>"]
I have tested using options like ssh_host, ip_wait_address, ip_settle_timeout, ssh_wait_timeout, pause_before_connecting but nothing seems to work.
As I said, the same Packer pkr.hcl file works OK if I run it manually on a regular Linux server, but not on my gitlab-runner, which is installed directly on my GitLab server (yes, I know that is not best practice, but I only use the runner for this task).
Packer versions 1.7.2 and 1.7.3 tested, gitlab-runner 14.0.0 and 14.0.1 tested.
I managed to make it work by changing the last <wait> in my boot command to <wait5m>. This gives the OS enough time to get installed and the VM rebooted.
The new boot command: boot_command = ["<up><tab> text inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/vmware-ks.cfg ip=10.118.12.117::10.118.12.1:255.255.255.0:{{ .Name }}.localhost:ens192:none<enter><wait5m>"]
All the other wait options from packer are no longer needed with this boot command.
Doing some tests, I also managed to make it work by creating a firewall drop rule for the VM just after the kickstart file was loaded and removing the rule once the OS was installed. Definitely, Packer is just ignoring all of its native wait mechanisms when running on the gitlab-runner.
EDIT: After having the same issue with my Windows templates, I tested with a different gitlab-runner installed on a different server, instead of the one on the GitLab server itself, and it worked perfectly with my initial configuration for both Windows and CentOS.
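For context, a minimal sketch of a CI job that pins the Packer build to a specific runner via tags (the job name, stage, tag, and the use of the current directory as the template path are hypothetical, not taken from the question):

# .gitlab-ci.yml
build-centos7-template:
  stage: build
  tags:
    - packer-runner   # pin the job to the runner that behaves correctly
  script:
    - packer validate .
    - packer build -on-error=run-cleanup-provisioner .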

Elastic Beanstalk: log task customization on Amazon Linux 2 platforms

I'm wondering how to do log task customization in the new Elastic Beanstalk platform (the one based on Amazon Linux 2). Specifically, I'm comparing:
Old: Single-container Docker running on 64bit Amazon Linux/2.14.3
New: Single-container Docker running on 64bit Amazon Linux 2/3.0.0
(My question actually has nothing to do with Docker as such; I suspect the problem exists for any of the new Elastic Beanstalk platforms.)
Previously I could follow Amazon's recipe, meaning put a file into /opt/elasticbeanstalk/tasks/bundlelogs.d/ and it would then be acted upon. This is no longer true.
Has this changed? I can't find it documented. Anyone been successful in doing log task customization on the newer Elastic Beanstalk platform? If so, how?
Minimal working example
I've created a minimal working example and deployed on both platforms.
Dockerfile:
FROM ubuntu
COPY daemon-run.sh /daemon-run.sh
RUN chmod +x /daemon-run.sh
EXPOSE 80
ENTRYPOINT ["/daemon-run.sh"]
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Logging": "/var/mydaemon"
}
daemon-run.sh:
#!/bin/bash
echo "Starting daemon" # output to stdout
mkdir -p /var/mydaemon/deeperlogs
while true; do
  echo "$(date '+%Y-%m-%dT%H:%M:%S%:z') Hello World" >> /var/mydaemon/deeperlogs/app_$$.log
  sleep 5
done
.ebextensions/mydaemon-logfiles.config:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log
If I do the "Full Logs" action on the old platform I get a ZIP with my deeperlogs included inside var/log/eb-docker/containers/eb-current-app. On the new platform I don't.
Investigation
If you look on the disk you'll see that the new Elastic Beanstalk doesn't have a /opt/elasticbeanstalk/tasks folder at all, unlike the old one. Hmm.
On Amazon Linux 2 the folder is:
/opt/elasticbeanstalk/config/private/logtasks/bundle
The .ebextensions/mydaemon-logfiles.config should be:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/mydaemon-logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/mydaemon/deeperlogs/*.log

container_commands:
  append_deeperlogs_to_applogs:
    command: echo -e "\n/var/log/eb-docker/containers/eb-current-app/deeperlogs/*" >> /opt/elasticbeanstalk/config/private/logtasks/bundle/applogs
The mydaemon-logfiles.config also adds deeperlogs to the applogs file. Without this, deeperlogs will not be included in the downloaded log ZIP bundle. This is interesting, because the folder will be in the correct location, i.e. /var/log/eb-docker/containers/eb-current-app/deeperlogs/, but without being explicitly listed in applogs it will be skipped when the ZIP bundle is generated.
I tested it with the single Docker environment (3.0.1). The full log bundle successfully contained deeperlogs with the correct log data.
Hope that this helps. I haven't found any references for this; the AWS documentation does not cover it, as it is mostly based on Amazon Linux 1, not Amazon Linux 2.
Amazon has fixed this problem in the versions of the Elastic Beanstalk AL2 platforms released on 04-AUG-2020.
It has been fixed so that log task customization on AL2-based platforms now works the way it has always worked (i.e. on the previous-generation AL2018 platforms), and you can therefore follow the official documentation in order to make this happen.
Successfully tested with the platform "Docker running on 64bit Amazon Linux 2/3.1.0". If you (still) use "Docker running on 64bit Amazon Linux 2/3.0.x" then you must use the undocumented workaround described in Marcin's answer, but you are probably better off upgrading your platform version.
As of 2021/11/05, I tried the accepted answer and various other examples including the latest official documentation on using the .ebextensions folder with *.config files without success.
Most likely something I was doing wrong but here's what worked for me.
The version I'm using: Docker running on 64bit Amazon Linux 2/3.4.8
Simply add a volume to your docker-compose.yml file to share your application logs with the Elastic Beanstalk log directory.
Example docker-compose.yml:
version: "3.9"
services:
app:
build: .
ports:
- "80:80"
user: root
volumes:
- ./:/var/www/html
# "${EB_LOG_BASE_DIR}/<service name>:<log directory inside container>
- "${EB_LOG_BASE_DIR}/app:/var/www/html/application/logs" # ADD THIS LINE
env_file:
- .env
For more info, here's the documentation I followed.
Hopefully, this helps future readers like myself 👍

Concourse CI - Build Artifacts inside source, pass all to next task

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.
The plan is to do something like this:
                                        ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                        `-> build website -> dockerize
The problem is that after npm install a new container is created, so the node_modules directory is lost. I want to pass node_modules into the later tasks, but because it is "inside" the source code, Concourse doesn't like it and gives me:
invalid task configuration:
you may not have more than one input or output when one of them has a path of '.'
Here's my job setup:
jobs:
  - name: test
    serial: true
    disable_manual_trigger: false
    plan:
      - get: source-code
        trigger: true
      - task: npm-install
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
          outputs:
            - name: node_modules
          run:
            path: npm
            args: [ install ]
      - task: npm-test
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: node, tag: "6" }
          inputs:
            - name: source-code
              path: .
            - name: node_modules
          run:
            path: npm
            args: [ test ]
Update 2016-06-14
Inputs and outputs are just directories: you put whatever you want to output into an output directory, and you can then pass it to another task in the same job. Inputs and outputs cannot overlap, so in order to do it with npm you'd have to copy either node_modules or the entire source folder from the input folder to an output folder, then use that in the next task (see the sketch after the next paragraph).
This doesn't work between jobs though. The best suggestion I've seen so far is to use a temporary git repository or bucket to push everything up. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.
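For illustration, a rough sketch of the copy-to-output approach described above (the built-source output name and the sh wrapper are assumptions, not part of the original pipeline):

- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: node, tag: "6"}
    inputs:
      - name: source-code
    outputs:
      - name: built-source
    run:
      path: sh
      args:
        - -exc
        - |
          cp -R source-code/. built-source/
          cd built-source
          npm install

Later tasks in the same job would then take built-source as an input instead of source-code.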
There is a resource specifically designed for this use case of npm between jobs. I have been using it for a couple of weeks now:
https://github.com/ymedlop/npm-cache-resource
It basically allows you to cache the first npm install and just inject it as a folder into the next job of your pipeline. You could quite easily set up your own caching resources by reading the source of that one as well, if you want to cache more than node_modules.
I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install further.
Be aware that some npm packages have native bindings that need to be built against standard libraries matching the container's Linux version, so if you move between different types of containers a lot you may experience some issues with musl libc etc. In that case I recommend either standardizing on the same container type throughout the pipeline or rebuilding the node_modules in question...
There is a similar resource for Gradle (on which the npm one is based):
https://github.com/projectfalcon/gradle-cache-resource
This doesn't work between jobs though.
This is by design. Each step (get, task, put) in a Job is run in an isolated container. Inputs and outputs are only valid inside a single job.
What connects Jobs is Resources. Pushing to git is one way. It'd almost certainly be faster and easier to use a blob store (e.g. S3) or file store (e.g. FTP); a rough sketch with the S3 resource follows below.
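A rough sketch of the blob store idea using the community s3 resource (resource name, bucket, file pattern, and credential variables are all placeholders):

resources:
  - name: node-modules-cache
    type: s3
    source:
      bucket: my-ci-cache
      regexp: node_modules-(.*)\.tgz
      access_key_id: ((aws_access_key_id))
      secret_access_key: ((aws_secret_access_key))

# In the producing job: tar up node_modules into a task output, then
#   - put: node-modules-cache
#     params: {file: cache-output/node_modules-*.tgz}
# and `get: node-modules-cache` in downstream jobs.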
