Flannel-Wrapper unable to resolve UUID: no such file - coreos

After installing CoreOS (stable, beta, or alpha) I can't start flanneld.service because its dependency, flannel-docker-opts.service, fails. It gives an error about:
rm: unable to resolve the UUID from file: open
/var/lib/coreos/flannel-wrapper2.uuid: no such file or directory
I'm new to CoreOS and am trying to install Kubernetes on it; for that I have a separate etcd cluster stood up with SSL certs. I have an etcd proxy up and running on the image, but flannel won't start, so Docker can't run.
I'm not sure whether I'm supposed to include more configuration in my cloud-config to fix this. I can't seem to find anything on flannel-wrapper or flannel-docker-opts.service.
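For context, the failing helper unit and its logs can be inspected with standard systemd tooling on the CoreOS host (just a diagnostic sketch, nothing assumed beyond the unit names in the error above):
# show what flanneld and its docker-opts helper actually run
systemctl cat flanneld.service flannel-docker-opts.service
# the flannel-wrapper2.uuid error shows up in the helper's journal
journalctl -u flannel-docker-opts.service -u flanneld.service --no-pager | tail -n 50
# check whether the uuid file the wrapper complains about exists at all
ls -l /var/lib/coreos/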
Here's my cloud-config.yaml
#cloud-config
write_files:
- path: /run/systemd/system/etcd2.service.d/30-certificates.conf
permissions: 0644
content: |
[Service]
Environment="ETCD_CERT_FILE=/etc/ssl/etcd/client.pem"
Environment="ETCD_KEY_FILE=/etc/ssl/etcd/client-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
Environment="ETCD_PEER_CERT_FILE=/etc/ssl/etcd/client.pem"
Environment="ETCD_PEER_KEY_FILE=/etc/ssl/etcd/client-key.pem"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.pem"
# Listen only on loopback interface.
Environment="ETCD_LISTEN_CLIENT_URLS=http://127.0.0.1:2379,http://127.0.0.1:4001"
hostname: "Kube-MST1"
ssh_authorized_keys:
- "ssh-rsa AAAAB3N....
coreos:
etcd2:
proxy: on
listen-client-urls: "http://127.0.0.1:2379"
initial-cluster: "ETCD1=http://192.168.1.7:2380,ETCD2=http://192.168.1.8:2380,ETCD3=http://192.168.1.9:2380"
fleet:
public-ip: "192.168.1.10"
metadata: "region=us-east"
etcd_servers: "http://127.0.0.1:2379"
etcd_cafile: /etc/ssl/etcd/ca.pem
etcd_certfile: /etc/ssl/etcd/client.pem
etcd_keyfile: /etc/ssl/etcd/client-key.pem
flannel:
etcd_prefix: "/coreos.com/network"
etcd_endpoints: "http://127.0.0.1:2379"
public-ip: "192.168.1.10"
interface: "192.168.1.10"
etcd_cafile: /etc/ssl/etcd/ca.pem
etcd_certfile: /etc/ssl/etcd/client.pem
etcd_keyfile: /etc/ssl/etcd/client-key.pem
update:
reboot-strategy: "etcd-lock"
units:
- name: 00-ens192.network
runtime: true
content: |
[Match]
Name=ens192
[Network]
DNS=192.168.1.100
DNS=192.168.1.101
Address=192.168.1.10/24
Gateway=192.168.1.1
- name: flanneld.service
command: start
drop-ins:
- name: 50-network-config.conf
content: |
[Service]
ExecStartPre=/usr/bin/etcdctl --endpoints http://127.0.0.1:2379 --ca-file /etc/ssl/etcd/ca.pem --cert-file /etc/ssl/etcd/client.pem --key-file /etc/ssl/etcd/client-key.pem set /coreos.com/network/config '{ "Network": "10.0.0.0/16" }'
- name: etcd2.service
command: start
- name: fleet.service
command: start
- name: docker.service
drop-ins:
- name: "50-insecure-registry.conf"
content: |
[Service]
Environment=DOCKER_OPTS='--insecure-registry="proxy.test.lab:8081"'
- name: docker.service
drop-ins:
- name: 51-docker-mirror.conf
content: |
[Unit]
Requires=flanneld.service
After=flanneld.service
Restart=always
command: start
- name: kubelet-unit.service
command: start
content: |
[Unit]
Requires=flanneld.service
After=flanneld.service
[Service]
Environment=KUBELET_VERSION=v1.5.3_coreos.0
Environment="RKT_OPTS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log \
--volume dns,kind=host,source=/etc/resolv.conf \
--mount volume=dns,target=/etc/resolv.conf"
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--api-servers=http://127.0.0.1:8080 \
--register-schedulable=false \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--container-runtime=docker \
--allow-privileged=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--hostname-override=192.168.1.10 \
--cluster_dns=10.9.0.100 \
--cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
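For reference, once the node is up, the network config written by the ExecStartPre drop-in above and the state of the services can be checked with the same flags used in the cloud-config (a quick sketch):
etcdctl --endpoints http://127.0.0.1:2379 --ca-file /etc/ssl/etcd/ca.pem --cert-file /etc/ssl/etcd/client.pem --key-file /etc/ssl/etcd/client-key.pem get /coreos.com/network/config
systemctl status flanneld.service flannel-docker-opts.service docker.service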

Related

Error while loading rust contract while creating sawtooth network

I was trying to load the Hyperledger Grid smart contracts into a Sawtooth network.
After successfully running the docker-compose file with 4 active validators, I am getting the following error.
I have only logged one contract's details; the other 5 give me the same error.
I have also created an issue on the Sawtooth git repo (click HERE).
➜ ~ docker logs tnt-contract-builder
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=958c6a9e7577f16c812a66fbdf0d860b74c36237808cfaabc7910548fb5c8c81451d6ad41a59a50d14af9a81b7569511443dd4dab6067a262a43a3517f52b270" }
Response Body:
StatusResponse {"data":[{"id": "958c6a9e7577f16c812a66fbdf0d860b74c36237808cfaabc7910548fb5c8c81451d6ad41a59a50d14af9a81b7569511443dd4dab6067a262a43a3517f52b270", "status": "PENDING", "invalid_transactions": []}], "link": "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=958c6a9e7577f16c812a66fbdf0d860b74c36237808cfaabc7910548fb5c8c81451d6ad41a59a50d14af9a81b7569511443dd4dab6067a262a43a3517f52b270&wait=30"}
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=dc410ca4b86c35a7ce37f5b82a0c3c7a8209b980ad4638e6120b47703ba0fe9f6b02f717b68c89fb640b6cfda0d3f2e81f80f8750163d417e61141c09fa5bd53" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=eb421c54bb740a278fbd65ef37879a2fa193b897828cac1f250c8b8060899ce516b19c13fa63b1f081222402f2563dba7d707aace5c33594f587e7c8d5a051d9" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=a6d84ff7056902908026264bc224c046290f99337770d2b6913a5f76fe8ed5814e5277dc5244b9564d4d12823dc56a2ae1f7c4986f6ac7344a19a20fc382e639" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=a33bd2e40a34aac8399a8c39bfe073129a8abcaf23575247a521106af8eba43104aff6743efe26300a8ff3854dac20ec74492722d1f321aa02cf320131e83ef2" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 11, kind: WouldBlock, message: "Resource temporarily unavailable" }', src/libcore/result.rs:1188:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Response Body:
Link { link: "http://sawtooth-rest-api-default-0:8008/batch_statuses?id=9e6140c04c1bd0cf39e281754bf4f8508fc07ab40887250f75e2dd13f9eab3cd4bf7ccacb8d3de88e0f52258627adc709dceaa501d80a18268b9fac8c869fcfb" }
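Side note: the PENDING statuses above can also be checked by hand, since the compose file below publishes the REST API on host port 8008; a quick sketch (the batch id is taken from one of the Link lines above):
# is the REST API reachable / is there a head block at all?
curl -s http://localhost:8008/state | head -c 200; echo
# poll the status of a specific batch
curl -s "http://localhost:8008/batch_statuses?id=<batch-id>&wait=30"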
This is my docker-compose file for creating the Sawtooth network and loading the Grid smart contracts:
version: "3.6"
volumes:
contracts-shared:
grid-shared:
pbft-shared:
gridd-alpha:
templates-shared:
cache-shared:
services:
# ---== shared services ==---
sabre-cli:
image: hyperledger/sawtooth-sabre-cli:latest
volumes:
- contracts-shared:/usr/share/scar
- pbft-shared:/pbft-shared
container_name: sabre-cli
stop_signal: SIGKILL
tnt-contract-builder:
image: piashtanjin/tnt-contract-builder:latest
container_name: tnt-contract-builder
volumes:
- pbft-shared:/pbft-shared
entrypoint: |
bash -c "
while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
sabre cr --create grid_track_and_trace --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre upload --filename /tmp/track_and_trace.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create a43b46 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm a43b46 grid_track_and_trace --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee01 grid_track_and_trace --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee05 grid_track_and_trace --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
echo '---------========= grid schema contract is loaded =========---------'
"
schema-contract-builder:
image: piashtanjin/schema-contract-builder:latest
container_name: schema-contract-builder
volumes:
- pbft-shared:/pbft-shared
entrypoint: |
bash -c "
while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
sabre cr --create grid_schema --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre upload --filename /tmp/schema.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee01 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee01 grid_schema --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee05 grid_schema --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
echo '---------========= grid schema contract is loaded =========---------'
"
# pike-contract-builder:
pike-contract-builder:
image: piashtanjin/pike-contract-builder:latest
container_name: pike-contract-builder
volumes:
- pbft-shared:/pbft-shared
entrypoint: |
bash -c "
while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
sabre cr --create grid_pike --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre upload --filename /tmp/pike.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee05 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee05 grid_pike --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
echo '---------========= pike contract is loaded =========---------'
"
product-contract-builder:
image: piashtanjin/product-contract-builder:latest
container_name: product-contract-builder
volumes:
- pbft-shared:/pbft-shared
entrypoint: |
bash -c "
while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
sabre cr --create grid_product --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre upload --filename /tmp/product.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee05 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee01 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee02 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee05 grid_product --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee01 grid_product --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee02 grid_product --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
echo '---------========= grid_product contract is loaded =========---------'
"
location-contract-builder:
image: piashtanjin/location-contract-builder:latest
container_name: location-contract-builder
volumes:
- pbft-shared:/pbft-shared
entrypoint: |
bash -c "
while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
sabre cr --create grid_location --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre upload --filename /tmp/location.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee04 --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee05 grid_location --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee01 grid_location --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee04 grid_location --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
echo '---------========= grid_location contract is loaded =========---------'
"
purchase-order-contract-builder:
image: piashtanjin/purchase-order-contract-builder:latest
container_name: purchase-order-contract-builder
volumes:
- pbft-shared:/pbft-shared
entrypoint: |
bash -c "
while true; do curl -s http://sawtooth-rest-api-default-0:8008/state | grep -q head; if [ $$? -eq 0 ]; then break; fi; sleep 0.5; done;
sabre cr --create grid_purchase_order --key /pbft-shared/validators/validator-0 --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre upload --filename /tmp/purchase_order.yaml --key /pbft-shared/validators/validator-0 --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre ns --create 621dee06 --key /pbft-shared/validators/validator --owner $$(cat /pbft-shared/validators/validator-0.pub) --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee05 grid_purchase_order --key /pbft-shared/validators/validator-0 --read --url http://sawtooth-rest-api-default-0:8008 --wait 30
sabre perm 621dee06 grid_purchase_order --key /pbft-shared/validators/validator-0 --read --write --url http://sawtooth-rest-api-default-0:8008 --wait 30
echo '---------========= grid_purchase_order contract is loaded =========---------'
"
# if [ ! -e sabre-admin.batch ]; then
# sawset proposal create \
# -k /root/.sawtooth/keys/my_key.priv \
# sawtooth.swa.administrators=$$(cat /pbft-shared/validators/validator-0.pub) \
# -o sabre-admin.batch
# sawadm genesis sabre-admin.batch
validator-0:
image: hyperledger/sawtooth-validator:nightly
container_name: sawtooth-validator-default-0
expose:
- 4004
- 5050
- 8800
ports:
- "4004:4004"
volumes:
- pbft-shared:/pbft-shared
command: |
bash -c "
if [ -e /pbft-shared/validators/validator-0.priv ]; then
cp /pbft-shared/validators/validator-0.pub /etc/sawtooth/keys/validator.pub
cp /pbft-shared/validators/validator-0.priv /etc/sawtooth/keys/validator.priv
fi &&
if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
sawadm keygen
mkdir -p /pbft-shared/validators || true
cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-0.pub
cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-0.priv
fi &&
if [ ! -e config-genesis.batch ]; then
sawset genesis -k /etc/sawtooth/keys/validator.priv -o config-genesis.batch
fi &&
while [[ ! -f /pbft-shared/validators/validator-1.pub || \
! -f /pbft-shared/validators/validator-2.pub || \
! -f /pbft-shared/validators/validator-3.pub || \
! -f /pbft-shared/validators/validator-4.pub ]];
do sleep 1; done
echo sawtooth.consensus.pbft.members=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] &&
if [ ! -e /root/.sawtooth/keys/my_key.priv ]; then
sawtooth keygen my_key
fi &&
if [ ! -e config.batch ]; then
sawset proposal create \
-k /etc/sawtooth/keys/validator.priv \
sawtooth.consensus.algorithm.name=pbft \
sawtooth.consensus.algorithm.version=1.0 \
sawtooth.validator.transaction_families='[{"family": "sabre", "version": "0.5"}, {"family":"sawtoo", "version":"1.0"}, {"family":"xo", "version":"1.0"}]' \
sawtooth.identity.allowed_keys=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] \
sawtooth.swa.administrators=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] \
sawtooth.consensus.pbft.members=\\['\"'$$(cat /pbft-shared/validators/validator-0.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-1.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-2.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-3.pub)'\"','\"'$$(cat /pbft-shared/validators/validator-4.pub)'\"'\\] \
sawtooth.publisher.max_batches_per_block=1200 \
-o config.batch
fi &&
if [ ! -e /var/lib/sawtooth/genesis.batch ]; then
sawadm genesis config-genesis.batch config.batch
fi &&
sawtooth-validator -vv \
--endpoint tcp://validator-0:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000
"
validator-1:
image: hyperledger/sawtooth-validator:nightly
container_name: sawtooth-validator-default-1
expose:
- 4004
- 5050
- 8800
volumes:
- pbft-shared:/pbft-shared
command: |
bash -c "
if [ -e /pbft-shared/validators/validator-1.priv ]; then
cp /pbft-shared/validators/validator-1.pub /etc/sawtooth/keys/validator.pub
cp /pbft-shared/validators/validator-1.priv /etc/sawtooth/keys/validator.priv
fi &&
if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
sawadm keygen
mkdir -p /pbft-shared/validators || true
cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-1.pub
cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-1.priv
fi &&
sawtooth keygen my_key &&
sawtooth-validator -vv \
--endpoint tcp://validator-1:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000 \
--peers tcp://validator-0:8800
"
validator-2:
image: hyperledger/sawtooth-validator:nightly
container_name: sawtooth-validator-default-2
expose:
- 4004
- 5050
- 8800
volumes:
- pbft-shared:/pbft-shared
command: |
bash -c "
if [ -e /pbft-shared/validators/validator-2.priv ]; then
cp /pbft-shared/validators/validator-2.pub /etc/sawtooth/keys/validator.pub
cp /pbft-shared/validators/validator-2.priv /etc/sawtooth/keys/validator.priv
fi &&
if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
sawadm keygen
mkdir -p /pbft-shared/validators || true
cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-2.pub
cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-2.priv
fi &&
sawtooth keygen my_key &&
sawtooth-validator -vv \
--endpoint tcp://validator-2:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000 \
--peers tcp://validator-0:8800 \
--peers tcp://validator-1:8800
"
validator-3:
image: hyperledger/sawtooth-validator:nightly
container_name: sawtooth-validator-default-3
expose:
- 4004
- 5050
- 8800
volumes:
- pbft-shared:/pbft-shared
command: |
bash -c "
if [ -e /pbft-shared/validators/validator-3.priv ]; then
cp /pbft-shared/validators/validator-3.pub /etc/sawtooth/keys/validator.pub
cp /pbft-shared/validators/validator-3.priv /etc/sawtooth/keys/validator.priv
fi &&
if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
sawadm keygen
mkdir -p /pbft-shared/validators || true
cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-3.pub
cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-3.priv
fi &&
sawtooth keygen my_key &&
sawtooth-validator -vv \
--endpoint tcp://validator-3:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000 \
--peers tcp://validator-0:8800 \
--peers tcp://validator-1:8800 \
--peers tcp://validator-2:8800
"
validator-4:
image: hyperledger/sawtooth-validator:nightly
container_name: sawtooth-validator-default-4
expose:
- 4004
- 5050
- 8800
volumes:
- pbft-shared:/pbft-shared
command: |
bash -c "
if [ -e /pbft-shared/validators/validator-4.priv ]; then
cp /pbft-shared/validators/validator-4.pub /etc/sawtooth/keys/validator.pub
cp /pbft-shared/validators/validator-4.priv /etc/sawtooth/keys/validator.priv
fi &&
if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
sawadm keygen
mkdir -p /pbft-shared/validators || true
cp /etc/sawtooth/keys/validator.pub /pbft-shared/validators/validator-4.pub
cp /etc/sawtooth/keys/validator.priv /pbft-shared/validators/validator-4.priv
fi &&
sawtooth keygen my_key &&
sawtooth-validator -vv \
--endpoint tcp://validator-4:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000 \
--peers tcp://validator-0:8800 \
--peers tcp://validator-1:8800 \
--peers tcp://validator-2:8800 \
--peers tcp://validator-3:8800
"
sawtooth-rest-api:
image: hyperledger/sawtooth-rest-api:latest
container_name: sawtooth-rest-api-default-0
expose:
- 8008
ports:
- "8008:8008"
depends_on:
- validator-0
command: |
bash -c "
sawtooth-rest-api \
--connect tcp://validator-0:4004 \
--bind sawtooth-rest-api:8008
"
stop_signal: SIGKILL
sawtooth-rest-api1:
image: hyperledger/sawtooth-rest-api:nightly
container_name: sawtooth-rest-api-1
expose:
- 8008
depends_on:
- validator-0
command: |
bash -c "
sawtooth-rest-api -v --connect tcp://validator-1:4004 --bind sawtooth-rest-api1:8008
"
stop_signal: SIGKILL
sawtooth-rest-api2:
image: hyperledger/sawtooth-rest-api:nightly
container_name: sawtooth-rest-api-2
expose:
- 8008
depends_on:
- validator-0
command: |
bash -c "
sawtooth-rest-api -v --connect tcp://validator-2:4004 --bind sawtooth-rest-api2:8008
"
stop_signal: SIGKILL
sawtooth-rest-api3:
image: hyperledger/sawtooth-rest-api:nightly
container_name: sawtooth-rest-api-3
expose:
- 8008
depends_on:
- validator-0
command: |
bash -c "
sawtooth-rest-api -v --connect tcp://validator-3:4004 --bind sawtooth-rest-api3:8008
"
stop_signal: SIGKILL
sawtooth-rest-api4:
image: hyperledger/sawtooth-rest-api:nightly
container_name: sawtooth-rest-api-4
expose:
- 8008
depends_on:
- validator-0
command: |
bash -c "
sawtooth-rest-api -v --connect tcp://validator-4:4004 --bind sawtooth-rest-api4:8008
"
stop_signal: SIGKILL
sawtooth-settings-tp:
image: hyperledger/sawtooth-settings-tp:latest
container_name: sawtooth-settings-tp
expose:
- 4004
command: settings-tp -v -C tcp://validator-0:4004
stop_signal: SIGKILL
sawtooth-settings-tp-1:
image: hyperledger/sawtooth-settings-tp:latest
container_name: sawtooth-settings-tp-1
expose:
- 4004
command: settings-tp -v -C tcp://validator-1:4004
stop_signal: SIGKILL
sawtooth-settings-tp-2:
image: hyperledger/sawtooth-settings-tp:latest
container_name: sawtooth-settings-tp-2
expose:
- 4004
command: settings-tp -v -C tcp://validator-2:4004
stop_signal: SIGKILL
sawtooth-settings-tp-3:
image: hyperledger/sawtooth-settings-tp:latest
container_name: sawtooth-settings-tp-3
expose:
- 4004
command: settings-tp -v -C tcp://validator-3:4004
stop_signal: SIGKILL
sawtooth-settings-tp-4:
image: hyperledger/sawtooth-settings-tp:latest
container_name: sawtooth-settings-tp-4
expose:
- 4004
command: settings-tp -v -C tcp://validator-4:4004
stop_signal: SIGKILL
sabre-tp:
image: hyperledger/sawtooth-sabre-tp:0.8
container_name: sawtooth-sabre-tp
depends_on:
- validator-0
entrypoint: sawtooth-sabre -vv --connect tcp://validator-0:4004
sawtooth-client:
image: hyperledger/sawtooth-shell:nightly
container_name: sawtooth-shell
volumes:
- pbft-shared:/pbft-shared
depends_on:
- validator-0
command: |
bash -c "
sawtooth keygen &&
tail -f /dev/null
"
stop_signal: SIGKILL
pbft-0:
image: hyperledger/sawtooth-pbft-engine:nightly
container_name: sawtooth-pbft-engine-default-0
command: pbft-engine -vv --connect tcp://validator-0:5050
stop_signal: SIGKILL
pbft-1:
image: hyperledger/sawtooth-pbft-engine:nightly
container_name: sawtooth-pbft-engine-default-1
command: pbft-engine -vv --connect tcp://validator-1:5050
stop_signal: SIGKILL
pbft-2:
image: hyperledger/sawtooth-pbft-engine:nightly
container_name: sawtooth-pbft-engine-default-2
command: pbft-engine -vv --connect tcp://validator-2:5050
stop_signal: SIGKILL
pbft-3:
image: hyperledger/sawtooth-pbft-engine:nightly
container_name: sawtooth-pbft-engine-default-3
command: pbft-engine -vv --connect tcp://validator-3:5050
stop_signal: SIGKILL
pbft-4:
image: hyperledger/sawtooth-pbft-engine:nightly
container_name: sawtooth-pbft-engine-default-4
command: pbft-engine -vv --connect tcp://validator-4:5050
stop_signal: SIGKILL
# ---== alpha node ==---
db-alpha:
image: postgres
container_name: db-alpha
hostname: db-alpha
restart: always
expose:
- 5432
environment:
POSTGRES_USER: grid
POSTGRES_PASSWORD: grid_example
POSTGRES_DB: grid
gridd-alpha:
image: gridd
container_name: gridd-alpha
hostname: gridd-alpha
build:
context: ../..
dockerfile: daemon/Dockerfile
args:
- REPO_VERSION=${REPO_VERSION}
- CARGO_ARGS= --features experimental
volumes:
- contracts-shared:/usr/share/scar
- pbft-shared:/pbft-shared
- gridd-alpha:/etc/grid/keys
- cache-shared:/var/cache/grid
expose:
- 8080
ports:
- "8080:8080"
environment:
GRID_DAEMON_KEY: "alpha-agent"
GRID_DAEMON_ENDPOINT: "http://gridd-alpha:8080"
entrypoint: |
bash -c "
# we need to wait for the db to have started.
until PGPASSWORD=grid_example psql -h db-alpha -U grid -c '\q' > /dev/null 2>&1; do
>&2 echo \"Database is unavailable - sleeping\"
sleep 1
done
grid keygen --skip && \
grid keygen --system --skip && \
grid -vv database migrate \
-C postgres://grid:grid_example#db-alpha/grid &&
gridd -vv -b 0.0.0.0:8080 -k root -C tcp://validator-0:4004 \
--database-url postgres://grid:grid_example#db-alpha/grid
"

Elastic Search upgrade to v8 on Kubernetes

I have an Elasticsearch deployment on a Microsoft Kubernetes cluster that was deployed with a 7.x chart, and I changed the image to 8.x. The upgrade worked and both Elasticsearch and Kibana were accessible, but now I need to enable the new security features, which are included in the basic license from now on. The security requirement originally came from the need to enable the APM Server/Agents.
I have the following values:
- name: cluster.initial_master_nodes
value: elasticsearch-master-0,
- name: discovery.seed_hosts
value: elasticsearch-master-headless
- name: cluster.name
value: elasticsearch
- name: network.host
value: 0.0.0.0
- name: cluster.deprecation_indexing.enabled
value: 'false'
- name: node.roles
value: data,ingest,master,ml,remote_cluster_client
The Elasticsearch and Kibana pods are able to start, but I am unable to set up the APM integration due to security. So I am enabling security using the below values:
- name: xpack.security.enabled
value: 'true'
Then I am getting an error log from the Elasticsearch pod: "Transport SSL must be enabled if security is enabled. Please set [xpack.security.transport.ssl.enabled] to [true] or disable security by setting [xpack.security.enabled] to [false]". So I am enabling SSL using the below values:
- name: xpack.security.transport.ssl.enabled
value: 'true'
Then I am getting an error log from the Elasticsearch pod: "invalid SSL configuration for xpack.security.transport.ssl - server ssl configuration requires a key and certificate, but these have not been configured; you must set either [xpack.security.transport.ssl.keystore.path] (p12 file), or both [xpack.security.transport.ssl.key] (pem file) and [xpack.security.transport.ssl.certificate] (pem key file)".
I start with Option 1: I create the keys using the below commands (no password; enter, enter / enter, enter, enter) and copy them to a persistent folder:
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
cp elastic-stack-ca.p12 data/elastic-stack-ca.p12
cp elastic-certificates.p12 data/elastic-certificates.p12
In addition I am also configuring the below values:
- name: xpack.security.transport.ssl.truststore.path
value: '/usr/share/elasticsearch/data/elastic-certificates.p12'
- name: xpack.security.transport.ssl.keystore.path
value: '/usr/share/elasticsearch/data/elastic-certificates.p12'
But the pod is still initializing. If I generate the certificates with a password, then I get an error log from the Elasticsearch pod: "cannot read configured [PKCS12] keystore (as a truststore) [/usr/share/elasticsearch/data/elastic-certificates.p12] - this is usually caused by an incorrect password; (no password was provided)"
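As an aside: if the PKCS#12 files are generated with a password, that password normally has to be supplied through the Elasticsearch keystore rather than as a plain setting; a sketch of that, run inside the Elasticsearch container (or baked into the image) before Elasticsearch starts:
# create the keystore if it does not exist yet, then store the PKCS#12 passwords
bin/elasticsearch-keystore create
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password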
Then I go to Option 2: I create the keys using the below commands and copy them to a persistent folder:
./bin/elasticsearch-certutil ca --pem
unzip elastic-stack-ca.zip -d
cp ca.crt data/ca.crt
cp ca.key data/ca.key
In addition I am also configuring the below values:
- name: xpack.security.transport.ssl.key
value: '/usr/share/elasticsearch/data/ca.key'
- name: xpack.security.transport.ssl.certificate
value: '/usr/share/elasticsearch/data/ca.crt'
But the pod is still in the initializing state without providing any logs; as far as I know, while a pod is in the initializing state it does not produce any container logs. On the portal side the events all look OK, except that the Elasticsearch pod is not in a ready state.
Finally, I raised the same issue on the Elasticsearch community forum, without any response: https://discuss.elastic.co/t/elasticsearch-pods-are-not-ready-when-xpack-security-enabled-is-configured/281709?u=s19k15
Here is my StatefulSet:
status:
observedGeneration: 169
replicas: 1
updatedReplicas: 1
currentRevision: elasticsearch-master-7449d7bd69
updateRevision: elasticsearch-master-7d8c7b6997
collisionCount: 0
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch-master
template:
metadata:
name: elasticsearch-master
creationTimestamp: null
labels:
app: elasticsearch-master
chart: elasticsearch
release: platform
spec:
initContainers:
- name: configure-sysctl
image: docker.elastic.co/elasticsearch/elasticsearch:8.1.2
command:
- sysctl
- '-w'
- vm.max_map_count=262144
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
runAsUser: 0
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:8.1.2
ports:
- name: http
containerPort: 9200
protocol: TCP
- name: transport
containerPort: 9300
protocol: TCP
env:
- name: node.name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: cluster.initial_master_nodes
value: elasticsearch-master-0,
- name: discovery.seed_hosts
value: elasticsearch-master-headless
- name: cluster.name
value: elasticsearch
- name: cluster.deprecation_indexing.enabled
value: 'false'
- name: ES_JAVA_OPTS
value: '-Xmx512m -Xms512m'
- name: node.roles
value: data,ingest,master,ml,remote_cluster_client
- name: xpack.license.self_generated.type
value: basic
- name: xpack.security.enabled
value: 'true'
- name: xpack.security.transport.ssl.enabled
value: 'true'
- name: xpack.security.transport.ssl.truststore.path
value: /usr/share/elasticsearch/data/elastic-certificates.p12
- name: xpack.security.transport.ssl.keystore.path
value: /usr/share/elasticsearch/data/elastic-certificates.p12
- name: xpack.security.http.ssl.enabled
value: 'true'
- name: xpack.security.http.ssl.truststore.path
value: /usr/share/elasticsearch/data/elastic-certificates.p12
- name: xpack.security.http.ssl.keystore.path
value: /usr/share/elasticsearch/data/elastic-certificates.p12
- name: logger.org.elasticsearch.discovery
value: debug
- name: path.logs
value: /usr/share/elasticsearch/data
- name: xpack.security.enrollment.enabled
value: 'true'
resources:
limits:
cpu: '1'
memory: 2Gi
requests:
cpu: 100m
memory: 512Mi
volumeMounts:
- name: elasticsearch-master
mountPath: /usr/share/elasticsearch/data
readinessProbe:
exec:
command:
- bash
- '-c'
- >
set -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$@" $args
fi
if [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "elastic:${ELASTIC_PASSWORD}"
fi
curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "8" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
initialDelaySeconds: 10
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 3
failureThreshold: 3
lifecycle:
postStart:
exec:
command:
- bash
- '-c'
- >
#!/bin/bash
# Create the dev.general.logcreation.elasticsearchlogobject.v1.json index
ES_URL=http://localhost:9200
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
curl --request PUT --header 'Content-Type: application/json' "$ES_URL/dev.general.logcreation.elasticsearchlogobject.v1.json/" --data '{"mappings":{"properties":{"Properties":{"properties":{"StatusCode":{"type":"text"}}}}},"settings":{"index":{"number_of_shards":"1","number_of_replicas":"0"}}}'
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
drop:
- ALL
runAsUser: 1000
runAsNonRoot: true
restartPolicy: Always
terminationGracePeriodSeconds: 120
dnsPolicy: ClusterFirst
automountServiceAccountToken: true
securityContext:
runAsUser: 1000
fsGroup: 1000
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- elasticsearch-master
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
enableServiceLinks: true
volumeClaimTemplates:
- kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: elasticsearch-master
creationTimestamp: null
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
volumeMode: Filesystem
status:
phase: Pending
serviceName: elasticsearch-master-headless
podManagementPolicy: Parallel
updateStrategy:
type: RollingUpdate
revisionHistoryLimit: 10
Any ideas?
I finally found the answer; maybe it helps a lot of people who face something similar. When the pod is initializing endlessly, it is effectively sleeping. In my case, a piece of code inside my chart's StatefulSet started causing this issue once security was enabled.
while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
This will never return 200, because the HTTP endpoint now also expects a user and a password for authentication, so the loop just keeps sleeping.
So if your pods are stuck in the initializing state and stay there, make sure there is no such while/sleep loop.
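One way to make that wait loop security-aware is to pass credentials (a sketch, assuming the elastic user's password is exposed to the container as ELASTIC_PASSWORD, as the readiness probe above already assumes):
ES_URL=http://localhost:9200
# authenticate the probe request so a healthy, secured node can actually return 200
while [[ "$(curl -s -o /dev/null -w '%{http_code}' -u "elastic:${ELASTIC_PASSWORD}" -k "$ES_URL")" != "200" ]]; do sleep 1; done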

Sharing volumes inside docker container on gitlab runner

So, I am trying to mount a working directory with project files into a child instance on a GitLab runner in sort of a DinD setup. I want to be able to mount a volume in a Docker instance, which would let me muck around and test stuff, like e2e testing and such, without compiling a new container just to inject the files I need. Ideally I could share data in a DinD environment without having to build a new container for each job that runs.
I tried following (Docker volumes not mounted when using docker:dind (#41227) · Issues · GitLab.org / GitLab FOSS · GitLab) and I have some directories being mounted, but it is not the project data I am looking for.
So, for the test jobs, I created a dummy file, and I wish to mount the directory in a container and view the files.
I have a test CI yml which sort of does what I am looking for. I make test files in the volume I wish to mount, which I would like to see in a directory listing, but sadly do not. In my second attempt at this, I couldn't get the container ID because the labels don't exist on the runner and it always comes up blank. However, the first stage shows promise, as it works perfectly on a "shell" runner outside of k8s. But as soon as I change the tag to use a k8s runner, it craps out. I can see old directory files /web and the directory I am mounting, but not the files within it. Weird?
ci.yml
image: docker:stable
services:
- docker:dind
stages:
- compile
variables:
SHARED_PATH: /builds/$CI_PROJECT_PATH/shared/
DOCKER_DRIVER: overlay2
.test: &test
stage: compile
tags:
- k8s-vols
script:
- docker version
- 'export TESTED_IMAGE=$(echo ${CI_JOB_NAME} | sed "s/test //")'
- docker pull ${TESTED_IMAGE}
- 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})/shared"'
- echo ${SHARED_PATH}
- echo ${CI_PROJECT_DIR}
- mkdir -p ${SHARED_PATH}
- touch ${SHARED_PATH}/test_file
- touch ${CI_PROJECT_DIR}/test_file2
- find ${SHARED_PATH}
#- find ${CI_PROJECT_DIR}
- docker run --rm -v ${CI_PROJECT_DIR}:/mnt ${TESTED_IMAGE} find /mnt
- docker run --rm -v ${CI_PROJECT_DIR}:/mnt ${TESTED_IMAGE} ls -lR /mnt
- docker run --rm -v ${SHARED_PATH}:/mnt ${TESTED_IMAGE} find /mnt
- docker run --rm -v ${SHARED_PATH}:/mnt ${TESTED_IMAGE} ls -lR /mnt
test alpine: *test
test ubuntu: *test
test centos: *test
testing:
stage: compile
tags:
- k8s-vols
image:
name: docker:stable
entrypoint: ["/bin/sh", "-c"]
script:
# get id of container
- export CONTAINER_ID=$(docker ps -q -f "label=com.gitlab.gitlab-runner.job.id=$CI_JOB_ID" -f "label=com.gitlab.gitlab-runner.type=build")
# get mount name
- export MOUNT_NAME=$(docker inspect $CONTAINER_ID -f "{{ range .Mounts }}{{ if eq .Destination \"/builds/${CI_PROJECT_NAMESPACE}\" }}{{ .Source }}{{end}}{{end}}" | cut -d "/" -f 6)
# run container
- docker run -v $MOUNT_NAME:/builds -w /builds/$CI_PROJECT_NAME --entrypoint=/bin/sh busybox -c "ls -la"
This is the values file I am working with:
image: docker-registry.corp.com/base-images/gitlab-runner:alpine-v13.3.1
imagePullPolicy: IfNotPresent
gitlabUrl: http://gitlab.corp.com
runnerRegistrationToken: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
runnerToken: ""
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 5
checkInterval: 10
rbac:
create: true
resources: ["pods", "pods/exec", "secrets"]
verbs: ["get", "list", "watch","update", "create", "delete"]
clusterWideAccess: false
metrics:
enabled: true
runners:
image: docker-registry.corp.com/base-images/docker-dind:v1
imagePullPolicy: "if-not-present"
requestConcurrency: 5
locked: true
tags: "k8s-vols"
privileged: true
secret: gitlab-runner-vols
namespace: gitlab-runner-k8s-vols
pollTimeout: 180
outputLimit: 4096
kubernetes:
volumes:
- type: host_path
volume:
name: docker
host_path: /var/run/docker.sock
mount_path: /var/run/docker.sock
read_only: false
cache: {}
builds: {}
services: {}
helpers:
cpuLimit: 200m
memoryLimit: 256Mi
cpuRequests: 100m
memoryRequests: 128Mi
image: docker-registry.corp.com/base-images/gitlab-runner-helper:x86_64-latest
env:
NAME: VALUE
CI_SERVER_URL: http://gitlab.corp.com
CLONE_URL:
RUNNER_REQUEST_CONCURRENCY: '1'
RUNNER_EXECUTOR: kubernetes
REGISTER_LOCKED: 'true'
RUNNER_TAG_LIST: k8s-vols
RUNNER_OUTPUT_LIMIT: '4096'
KUBERNETES_IMAGE: ubuntu:18.04
KUBERNETES_PRIVILEGED: 'true'
KUBERNETES_NAMESPACE: gitlab-runners-k8s-vols
KUBERNETES_POLL_TIMEOUT: '180'
KUBERNETES_CPU_LIMIT:
KUBERNETES_MEMORY_LIMIT:
KUBERNETES_CPU_REQUEST:
KUBERNETES_MEMORY_REQUEST:
KUBERNETES_SERVICE_ACCOUNT:
KUBERNETES_SERVICE_CPU_LIMIT:
KUBERNETES_SERVICE_MEMORY_LIMIT:
KUBERNETES_SERVICE_CPU_REQUEST:
KUBERNETES_SERVICE_MEMORY_REQUEST:
KUBERNETES_HELPER_CPU_LIMIT:
KUBERNETES_HELPER_MEMORY_LIMIT:
KUBERNETES_HELPER_CPU_REQUEST:
KUBERNETES_HELPER_MEMORY_REQUEST:
KUBERNETES_HELPER_IMAGE:
KUBERNETES_PULL_POLICY:
securityContext:
fsGroup: 65533
runAsUser: 100
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
envVars:
- name: CI_SERVER_URL
value: http://gitlab.corp.com
- name: CLONE_URL
- name: RUNNER_REQUEST_CONCURRENCY
value: '1'
- name: RUNNER_EXECUTOR
value: kubernetes
- name: REGISTER_LOCKED
value: 'true'
- name: RUNNER_TAG_LIST
value: k8s-vols
- name: RUNNER_OUTPUT_LIMIT
value: '4096'
- name: KUBERNETES_IMAGE
value: ubuntu:18.04
- name: KUBERNETES_PRIVILEGED
value: 'true'
- name: KUBERNETES_NAMESPACE
value: gitlab-runner-k8s-vols
- name: KUBERNETES_POLL_TIMEOUT
value: '180'
- name: KUBERNETES_CPU_LIMIT
- name: KUBERNETES_MEMORY_LIMIT
- name: KUBERNETES_CPU_REQUEST
- name: KUBERNETES_MEMORY_REQUEST
- name: KUBERNETES_SERVICE_ACCOUNT
- name: KUBERNETES_SERVICE_CPU_LIMIT
- name: KUBERNETES_SERVICE_MEMORY_LIMIT
- name: KUBERNETES_SERVICE_CPU_REQUEST
- name: KUBERNETES_SERVICE_MEMORY_REQUEST
- name: KUBERNETES_HELPER_CPU_LIMIT
- name: KUBERNETES_HELPER_MEMORY_LIMIT
- name: KUBERNETES_HELPER_CPU_REQUEST
- name: KUBERNETES_HELPER_MEMORY_REQUEST
- name: KUBERNETES_HELPER_IMAGE
- name: KUBERNETES_PULL_POLICY
hostAliases:
- ip: "10.10.x.x"
hostnames:
- "ch01"
podAnnotations:
prometheus.io/path: "/metrics"
prometheus.io/scrape: "true"
prometheus.io/port: "9252"
podLabels: {}
So, I have made a couple of tweaks to the helm chart. I have added a volumes section in the config map:
config.toml: |
concurrent = {{ .Values.concurrent }}
check_interval = {{ .Values.checkInterval }}
log_level = {{ default "info" .Values.logLevel | quote }}
{{- if .Values.metrics.enabled }}
listen_address = '[::]:9252'
{{- end }}
volumes = ["/builds:/builds"]
#volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache", "/builds:/builds"]
I tried using the last line, which includes the docker sock mount, but when it ran it complained that it could not find the docker.sock mount (file not found), so I kept only the builds directory in this section and added the docker.sock mount in the values file instead. That seems to work fine for everything else but this mounting thing.
I also saw examples of setting the runner to privileged, but that didn't seem to do much for me.
When I run the pipeline, this is the output…
So as you can see, no files…
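One workaround I have seen suggested for the Kubernetes executor, where the docker:dind daemon cannot see the build pod's filesystem, is to copy the project files into the test container instead of bind-mounting them; roughly (the container name here is just an example):
# instead of: docker run -v ${CI_PROJECT_DIR}:/mnt ...
docker create --name testbox ${TESTED_IMAGE} sleep 300
docker cp ${CI_PROJECT_DIR}/. testbox:/mnt
docker start testbox
docker exec testbox ls -lR /mnt
docker rm -f testbox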
Thanks for taking the time to be thorough in your request, it really helps!

Azure AKS: Create a Kubeconfig from service account [duplicate]

I have a kubernetes cluster on Azure and I created 2 namespaces and 2 service accounts because I have two teams deploying on the cluster.
I want to give each team their own kubeconfig file for the serviceaccount I created.
I am pretty new to Kubernetes and haven't been able to find clear instructions on the Kubernetes website. How do I create a kubeconfig file for a serviceaccount?
Hopefully someone can help me out :). I'd rather not give the default kubeconfig file to the teams.
With kind regards,
Bram
# your server name goes here
server=https://localhost:8443
# the name of the secret containing the service account token goes here
name=default-token-sg96k
ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)
echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
cluster:
certificate-authority-data: ${ca}
server: ${server}
contexts:
- name: default-context
context:
cluster: default-cluster
namespace: default
user: default-user
current-context: default-context
users:
- name: default-user
user:
token: ${token}
" > sa.kubeconfig
I cleaned up Jordan Liggitt's script a little.
Unfortunately I am not yet allowed to comment so this is an extra answer:
Be aware that starting with Kubernetes 1.24 you will need to create the Secret with the token yourself and reference that
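A minimal example of that manual Secret (the namespace and serviceaccount names match the variables used in the script below; the secretName lookup then has to point at this Secret instead of .secrets[0].name):
kubectl --namespace=kube-system apply -f - <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: developer-token
  annotations:
    kubernetes.io/service-account.name: developer
EOF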
# The script returns a kubeconfig for the ServiceAccount given
# you need to have kubectl on PATH with the context set to the cluster you want to create the config for
# Cosmetics for the created config
clusterName='some-cluster'
# your server address goes here get it via `kubectl cluster-info`
server='https://157.90.17.72:6443'
# the Namespace and ServiceAccount name that is used for the config
namespace='kube-system'
serviceAccount='developer'
# The following automation does not work from Kubernetes 1.24 and up.
# You might need to
# define a Secret, reference the ServiceAccount there and set the secretName by hand!
# See https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-long-lived-api-token-for-a-serviceaccount for details
secretName=$(kubectl --namespace="$namespace" get serviceAccount "$serviceAccount" -o=jsonpath='{.secrets[0].name}')
######################
# actual script starts
set -o errexit
ca=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.ca\.crt}')
token=$(kubectl --namespace="$namespace" get secret/"$secretName" -o=jsonpath='{.data.token}' | base64 --decode)
echo "
---
apiVersion: v1
kind: Config
clusters:
- name: ${clusterName}
cluster:
certificate-authority-data: ${ca}
server: ${server}
contexts:
- name: ${serviceAccount}#${clusterName}
context:
cluster: ${clusterName}
namespace: ${namespace}
user: ${serviceAccount}
users:
- name: ${serviceAccount}
user:
token: ${token}
current-context: ${serviceAccount}#${clusterName}
"
Look to https://github.com/superbrothers/kubectl-view-serviceaccount-kubeconfig-plugin
This plugin helps to get service account config via
kubectl view-serviceaccount-kubeconfig <service_account> -n <namespace>
Kubectl can be initialized to use a cluster account. To do so, get the cluster url, cluster certificate and account token.
KUBE_API_EP='URL+PORT'
KUBE_API_TOKEN='TOKEN'
KUBE_CERT='REDACTED'
echo $KUBE_CERT >deploy.crt
kubectl config set-cluster k8s --server=https://$KUBE_API_EP \
--certificate-authority=deploy.crt \
--embed-certs=true
kubectl config set-credentials gitlab-deployer --token=$KUBE_API_TOKEN
kubectl config set-context k8s --cluster k8s --user gitlab-deployer
kubectl config use-context k8s
The cluster file is stored under: ~/.kube/config. Now the cluster can be accessed using:
kubectl --context=k8s get pods -n test-namespace
add this flag --insecure-skip-tls-verify if you are using self signed certificate.
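To confirm that the resulting context really has the permissions you expect, you can ask the API server directly, for example:
kubectl --context=k8s auth can-i --list -n test-namespace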
Revisiting this, as I was looking for a way to create a serviceaccount from the command line instead of repetitive point-and-click tasks through the Lens IDE. I came across this thread, took the original author's ideas, and expanded on the capabilities, as well as supporting serviceaccount creation for Kubernetes 1.24+.
#!/bin/sh
# This shell script is intended for Kubernetes clusters running 1.24+ as secrets are no longer auto-generated with serviceaccount creations
# The script does a few things: creates a serviceaccount, creates a secret for that serviceaccount (and annotates accordingly), creates a clusterrolebinding or rolebinding
# provides a kubeconfig output to the screen as well as writing to a file that can be included in the KUBECONFIG or PATH
# Feed variables to kubectl commands (modify as needed). crb and rb can not both be true
# ------------------------------------------- #
clustername=some_cluster
name=some_user
ns=some_ns # namespace
server=https://some.server.com:6443
crb=false # clusterrolebinding
crb_name=some_binding # clusterrolebindingname_name
rb=true # rolebinding
rb_name=some_binding # rolebinding_name
# ------------------------------------------- #
# Check for existing serviceaccount first
sa_precheck=$(kubectl get sa $name -o jsonpath='{.metadata.name}' -n $ns) > /dev/null 2>&1
if [ -z "$sa_precheck" ]
then
kubectl create serviceaccount $name -n $ns
else
echo "serviceaccount/"$sa_precheck" already exists"
fi
sa_name=$(kubectl get sa $name -o jsonpath='{.metadata.name}' -n $ns)
sa_uid=$(kubectl get sa $name -o jsonpath='{.metadata.uid}' -n $ns)
# Check for existing secret/service-account-token, if one does not exist create one but do not output to external file
secret_precheck=$(kubectl get secret $sa_name-token-$sa_uid -o jsonpath='{.metadata.name}' -n $ns) > /dev/null 2>&1
if [ -z "$secret_precheck" ]
then
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: $sa_name-token-$sa_uid
namespace: $ns
annotations:
kubernetes.io/service-account.name: $sa_name
EOF
else
echo "secret/"$secret_precheck" already exists"
fi
# Check for adding clusterrolebinding or rolebinding (both can not be true)
if [ "$crb" = "true" ] && [ "$rb" = "true" ]
then
echo "Both clusterrolebinding and rolebinding can not be true, please fix"
exit
elif [ "$crb" = "true" ]
then
crb_test=$(kubectl get clusterrolebinding $crb_name -o jsonpath='{.metadata.name}') > /dev/null 2>&1
if [ "$crb_name" = "$crb_test" ]
then
kubectl patch clusterrolebinding $crb_name --type='json' -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "'$sa_name'", "namespace": "'$ns'" } }]'
else
echo "clusterrolebinding/"$crb_name" does not exist, please fix"
exit
fi
elif [ "$rb" = "true" ]
then
rb_test=$(kubectl get rolebinding $rb_name -n $ns -o jsonpath='{.metadata.name}') > /dev/null 2>&1
if [ "$rb_name" = "$rb_test" ]
then
kubectl patch rolebinding $rb_name -n $ns --type='json' -p='[{"op": "add", "path": "/subjects/-", "value": {"kind": "ServiceAccount", "name": "'$sa_name'", "namespace": "'$ns'" } }]'
else
echo "rolebinding/"$rb_name" does not exist in "$ns" namespace, please fix"
exit
fi
fi
# Create Kube Config and output to config file
ca=$(kubectl get secret $sa_name-token-$sa_uid -o jsonpath='{.data.ca\.crt}' -n $ns)
token=$(kubectl get secret $sa_name-token-$sa_uid -o jsonpath='{.data.token}' -n $ns | base64 --decode)
echo "
apiVersion: v1
kind: Config
clusters:
- name: ${clustername}
cluster:
certificate-authority-data: ${ca}
server: ${server}
contexts:
- name: ${sa_name}#${clustername}
context:
cluster: ${clustername}
namespace: ${ns}
user: ${sa_name}
users:
- name: ${sa_name}
user:
token: ${token}
current-context: ${sa_name}#${clustername}
" | tee $sa_name#${clustername}

What is the correct way to mount a hostpath volume with gitlab runner?

I need to create a volume to expose the Maven .m2 folder so it can be reused in all my projects, but I can't get that to work at all.
My GitLab runner is running inside my Kubernetes cluster as a container.
The Deployment and ConfigMap follow:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: default
spec:
template:
metadata:
labels:
name: gitlab-runner
spec:
serviceAccountName: gitlab-sa
nodeName: 140.6.254.244
containers:
- name: gitlab-runner
image: gitlab/gitlab-runner
securityContext:
privileged: true
command: ["/bin/bash", "/scripts/entrypoint"]
env:
- name: KUBERNETES_NAMESPACE
value: default
- name: KUBERNETES_SERVICE_ACCOUNT
value: gitlab-sa
# This references the previously specified configmap and mounts it as a file
volumeMounts:
- mountPath: /scripts
name: configmap
livenessProbe:
exec:
command: ["/usr/bin/pgrep","gitlab.*runner"]
initialDelaySeconds: 60
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
exec:
command: ["/usr/bin/pgrep","gitlab.*runner"]
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
volumes:
- configMap:
name: gitlab-runner-cm
name: configmap
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner-cm
namespace: default
data:
entrypoint: |
#!/bin/bash
set -xe
cp /scripts/config.toml /etc/gitlab-runner/
# Register the runner
/entrypoint register --non-interactive --registration-token ###### --url http://gitlab.######.net --clone-url http://gitlab.######.net --executor "kubernetes" --name "Kubernetes Runner" --config "/etc/gitlab-runner/config.toml"
# Start the runner
/entrypoint run --user=gitlab-runner \
--working-directory=/home/gitlab-runner \
--config "/etc/gitlab-runner/config.toml"
config.toml: |
concurrent = 50
check_interval = 10
[[runners]]
name = "PC-CVO"
url = "http://gitlab.######.net"
token = "######"
executor = "kubernetes"
cache_dir = "/tmp/gitlab/cache"
[runners.kubernetes]
[runners.kubernetes.volumes]
[[runners.kubernetes.volumes.host_path]]
name = "maven"
mount_path = "/.m2/"
host_path = "/mnt/dados/volumes/maven-gitlab-ci"
read_only = false
[[runners.kubernetes.volumes.host_path]]
name = "gitlab-cache"
mount_path = "/tmp/gitlab/cache"
host_path = "/mnt/dados/volumes/maven-gitlab-ci-cache"
read_only = false
But even with [[runners.kubernetes.volumes.host_path]] configured as described in the documentation, my volume is not mounted on the host. I also tried using a PV and a PVC, but nothing worked. Does anyone have any insight into how to expose this .m2 folder on the host so all my jobs can share it without caching?
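A useful sanity check at this point is to dump the config the runner pod is actually running with, since the registration step can rewrite config.toml (labels and namespace taken from the Deployment above):
RUNNER_POD=$(kubectl get pods -n default -l name=gitlab-runner -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n default "$RUNNER_POD" -- cat /etc/gitlab-runner/config.toml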
After beating my head against name resolution issues with the internal DNS, volumes for my .m2, and using the docker daemon instead of docker:dind, I finally got a configuration that solves my problem. Below are the final configuration files in case anyone runs into the same problem.
The main problem was that when the runner registered, the config.toml file was modified by the registration process and this overwrote my settings; to solve it, I append the extra sections with a cat after the container registration.
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: gitlab-runner
namespace: default
spec:
template:
metadata:
labels:
name: gitlab-runner
spec:
serviceAccountName: gitlab-sa
nodeName: 140.6.254.244
containers:
- name: gitlab-runner
image: gitlab/gitlab-runner
securityContext:
privileged: true
command: ["/bin/bash", "/scripts/entrypoint"]
env:
- name: KUBERNETES_NAMESPACE
value: default
- name: KUBERNETES_SERVICE_ACCOUNT
value: gitlab-sa
# This references the previously specified configmap and mounts it as a file
volumeMounts:
- mountPath: /scripts
name: configmap
livenessProbe:
exec:
command: ["/usr/bin/pgrep","gitlab.*runner"]
initialDelaySeconds: 60
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
exec:
command: ["/usr/bin/pgrep","gitlab.*runner"]
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
volumes:
- configMap:
name: gitlab-runner-cm
name: configmap
Config Map (Here is the solution!)
---
apiVersion: v1
kind: ConfigMap
metadata:
name: gitlab-runner-cm
namespace: default
data:
entrypoint: |
#!/bin/bash
set -xe
cp /scripts/config.toml /etc/gitlab-runner/
# Register the runner
/entrypoint register --non-interactive --registration-token ############ --url http://gitlab.######.net --clone-url http://gitlab.######.net --executor "kubernetes" --name "Kubernetes Runner" --config "/etc/gitlab-runner/config.toml"
cat >> /etc/gitlab-runner/config.toml << EOF
[[runners.kubernetes.volumes.host_path]]
name = "docker"
path = "/var/run/docker.sock"
mount_path = "/var/run/docker.sock"
read_only = false
[[runners.kubernetes.volumes.host_path]]
name = "maven"
mount_path = "/.m2/"
host_path = "/mnt/dados/volumes/maven-gitlab-ci"
read_only = false
[[runners.kubernetes.volumes.host_path]]
name = "resolvedns"
mount_path = "/etc/resolv.conf"
read_only = true
host_path = "/etc/resolv.conf"
EOF
# Start the runner
/entrypoint run --user=gitlab-runner \
--working-directory=/home/gitlab-runner \
--config "/etc/gitlab-runner/config.toml"
config.toml: |
concurrent = 50
check_interval = 10
[[runners]]
name = "PC-CVO"
url = "http://gitlab.########.###"
token = "##############"
executor = "kubernetes"
cache_dir = "/tmp/gitlab/cache"
[runners.kubernetes]
Check if GitLab 15.6 (November 2022) can help:
Mount ConfigMap to volumes with the Auto Deploy chart
The default Auto Deploy Helm chart now supports the extraVolumes and extraVolumeMounts options.
In past releases, you could specify only Persistent Volumes for Kubernetes.
Among other use cases, you can now mount:
Secrets and ConfigMaps as files to Deployments, CronJobs, and Workers.
Existing or external Persistent Volumes Claims to Deployments, CronJobs, and Workers.
Private PKI CA certificates with hostPath mounts to achieve trust with the PKI.
Thanks to Maik Boltze for this community contribution.
See Documentation and Issue.
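A rough sketch of what those options could look like, assuming values are supplied through a .gitlab/auto-deploy-values.yaml file and that extraVolumes/extraVolumeMounts accept standard Kubernetes volume definitions (the volume name and mount path below are illustrative):
mkdir -p .gitlab
cat > .gitlab/auto-deploy-values.yaml <<'EOF'
extraVolumes:
  - name: m2-cache
    hostPath:
      path: /mnt/dados/volumes/maven-gitlab-ci
extraVolumeMounts:
  - name: m2-cache
    mountPath: /root/.m2
EOF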
