Neo4j HA mode doesn't work in Docker on Azure

I'm trying to run Neo4j in HA mode using Azure Container Service + Docker. HA mode requires 3 instances within the same network.
I created the network with this command:
docker network create --driver=bridge cluster
But when I try to attach instances to this network, I get the following error:
docker: Error response from daemon: network cluster not found.
I've also tried using the network ID, but that does not work either.
I'm following this tutorial: https://neo4j.com/developer/docker-3.x/ but without success. Any tips?
P.S.: Running in single mode works.
Here are the commands and the results I get:
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker network create --driver=bridge cluster
result: d9fb3dd121ded5bfe01765ce4d276b75ad4e66ef1f2bd62b858a2cea86ccc1ec
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker run --name=instance1 --detach --publish=7474:7474 --publish=7687:7687 --net=cluster --hostname=instance1 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=1 \
--env=NEO4J_ha_host_coordination=instance1:5001 --env=NEO4J_ha_host_data=instance1:6001 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
result: b57ca9a895535b07ef97d956a780b9687e7384b33f389e2470e0ed743c79ef11
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker run --name=instance2 --detach --publish 7475:7474 --publish=7688:7687 --net=cluster --hostname=instance2 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=2 \
--env=NEO4J_ha_host_coordination=instance2:5001 --env=NEO4J_ha_host_data=instance2:6001 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
docker: Error response from daemon: network cluster not found.
See 'docker run --help'.
jefersonanjos@swarm-master-21858A81-0:~/neo4j/data$ docker run --name=instance3 --detach --publish 7476:7474 --publish=7689:7687 --net=cluster --hostname=instance3 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=3 \
--env=NEO4J_ha_host_coordination=instance3:5001 --env=NEO4J_ha_host_data=instance3:6001 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
08c4c5156dc8bb589f4c876de3a2bf0170450ae640606d505e1851da94220d7e

The problem in Azure with Docker turned out to be that I was testing on a cluster of machines.
So the command:
docker network create --driver=bridge cluster
does not serve this purpose, because a bridge network only spans a single host.
We must use --driver=overlay instead so that the network works across multiple hosts.
More info: https://docs.docker.com/engine/userguide/networking/get-started-overlay/
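For example, with the hosts joined into a Docker swarm, a minimal sketch is to create the network as an attachable overlay and then start the instances exactly as before (the remaining NEO4J_ha_* settings are the ones from the question):
docker network create --driver=overlay --attachable cluster
docker run --name=instance1 --detach --publish=7474:7474 --publish=7687:7687 \
--net=cluster --hostname=instance1 \
--env=NEO4J_dbms_mode=HA --env=NEO4J_ha_serverId=1 \
--env=NEO4J_ha_initialHosts=instance1:5001,instance2:5001,instance3:5001 \
neo4j:enterprise
The overlay driver spans all swarm nodes, so instance2 and instance3 started on other hosts can resolve instance1 by name.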

Related

Use blobfuse inside an Azure Kubernetes (AKS) container

We wanted to configure blobfuse inside an Azure Kubernetes container to access the Azure storage service.
I created the storage account and a blob container.
I installed blobfuse on the docker image (I tried with alpine and with ubuntu:22.04 images).
I start my application through a Jenkins pipeline with this configuration:
pipeline {
    agent {
        kubernetes {
            yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: test
    image: my_ubuntu:20.04
    command: ['cat']
    securityContext:
      allowPrivilegeEscalation: true
    devices:
      - /dev/fuse
"""
        }
    }
}
I ran this command inside my docker container:
blobfuse /path/to/my/buckett --container-name=${AZURE_BLOB_CONTAINER} --tmp-path=/tmp/path --log-level=LOG_DEBUG --basic-remount-check=true
I got
fuse: device not found, try 'modprobe fuse' first
Running modprobe fuse returns modprobe: FATAL: Module fuse not found in directory /lib/modules/5.4.0-1068-azure
All answers I googled mentioned using --privileged and /dev/fuse device, which I did, with no results.
The same procedure works fine on my linux desktop, but not from inside a docker container on the AKS cluster.
Is this even the right approach to access the Azure Storage service from inside Kubernetes?
Is it possible to fix the error fuse: device not found ?
fuse: device not found, try 'modprobe fuse' first
I have also researched fuse issues; this error usually means one of two things:
Either the fuse kernel module isn't loaded on your host machine (very unlikely),
or the container you're using to perform the build doesn't have enough privileges.
--privileged gives the container too many permissions; instead, you should be able to get things working by replacing it with --cap-add SYS_ADMIN, like below.
docker run -d --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
<image_id/name>
and, if that is not enough, also run
docker run -d --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
<image_id/name>
Try running these commands; if they still fail, check your setup, the versions involved, and the blobfuse installation.
For reference, I also suggest this article:
Mounting Azure Files and Blobs using Non-traditional options in Kubernetes - by Arun Kumar Singh
kubernetes-sigs/blob-csi-driver: Azure Blob Storage CSI driver (github.com)
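As a quick sanity check outside of Kubernetes, something like the following can confirm that the fuse device is actually visible to a container started with those flags (a sketch; the image name is the one from the question):
docker run --rm \
--device /dev/fuse \
--cap-add SYS_ADMIN \
--security-opt apparmor:unconfined \
my_ubuntu:20.04 ls -l /dev/fuse
If the device shows up there, blobfuse should get past the fuse: device not found error. In the Jenkins pod definition, the rough equivalent of --cap-add SYS_ADMIN is a container securityContext with capabilities: add: ["SYS_ADMIN"] (the devices: field from the question is not a standard securityContext field), although exposing /dev/fuse to a pod usually still requires privileged: true or a fuse device plugin on the node.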

How to connect Pyspark to Datastax Cassandra running on Docker?

I am running Datastax Cassandra on Docker and have created my table there. Now I want to run a Pyspark container, but I don't know how to set up the network in the docker-compose.yml file so that the Datastax Cassandra and Pyspark containers can reach each other.
This is the docker-compose.yml for running pyspark:
spark:
  image: jupyter/pyspark-notebook
  container_name: pyspark
  ports:
    - "8888:8888"
    - "4040:4040"
    - "4041:4041"
    - "4042:4042"
  expose:
    - "8888"
    - "4040"
    - "4041"
    - "4042"
  environment:
    CHOWN_HOME: "yes"
    GRANT_SUDO: "yes"
    NB_UID: 1000
    NB_GID: 100
  deploy:
    replicas: 1
    restart_policy:
      condition: on-failure
  volumes:
    - ./Documents:/home/jovyan/work
And this is the docker command for creating the Datastax Cassandra container:
docker run \
-e \
DS_LICENSE=accept \
--memory 4g \
--name my-dse \
-d \
-v /Documents/datastax/cassandra:/lib/cassandra \
-v /Documents/datastax/spark:/lib/spark \
-v /Documents/datastax/dsefs:/lib/dsefs \
-v /Documents/datastax/log/cassandra:/log/cassandra \
-v /Documents/datastax/log/spark:/log/spark \
-v /Documents/datastax/config:/config \
-v /Documents/datastax/opscenter:/lib/opscenter \
-v /Documents/datastax/datastax-studio:/lib/datastax-studio \
datastax/dse-server:6.8.4 \
-g \
-s \
-k
Please help me write the docker-compose.yml so that Pyspark can connect to Datastax Cassandra and read data from it.
By default, docker-compose sets up a common network if both containers are started by it, so you can just use the DSE container name for the spark.cassandra.connection.host parameter.
If both containers aren't managed by docker-compose, then you have a few options (in each case you'll need to set the spark.cassandra.connection.host parameter correctly):
just use the internal IP of the DSE container: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-dse
use legacy Docker links (not recommended, really) and use the DSE container name for the connection
use docker network connect (see the documentation), again with the DSE container name; a sketch follows below
start the DSE Docker image with port 9042 exposed to the outside, and use the host's IP for the connection
P.S. If you run pyspark in the Jupyter container, then you don't need to pass the -k flag, because that flag also starts Spark on the DSE node, and that does not work well with only 4 GB of RAM. Also, if you don't need DSE Graph, remove the -g switch.
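For example, if the pyspark and DSE containers from the question are already running separately, a minimal sketch of the docker network connect option looks like this (the network name dse-net is arbitrary; the container names are the ones from the question):
docker network create dse-net
docker network connect dse-net my-dse
docker network connect dse-net pyspark
After that, the notebook can reach Cassandra by container name, e.g. by setting spark.cassandra.connection.host=my-dse in the Spark configuration.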

Running Gitlab Runner in Azure Container Instances (ACI)

I would like to run Gitlab-Runner in Azure Container Instances (ACI).
For this I have the gitlab/gitlab-runner Docker container running in Azure ACI.
I register this runner with my Gitlab server using the following command:
gitlab-runner register \
--non-interactive \
--run-untagged=true \
--locked=false \
--executor "docker" \
--docker-image docker:latest \
--url "https://gitlab.com/" \
--registration-token "MyTokenYYYYYYYY" \
--description "my-own-runner" \
--tag-list "frontend, runner" \
--docker-volumes /var/run/docker.sock:/var/run/docker.sock
The new runner is recognized by Gitlab. However, when I run a job, I get the following error:
Preparing the "docker" executor
ERROR: Failed to remove network for build
ERROR: Preparation failed: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? (docker.go:960:0s)
If I run the runner with the identical configuration locally on my laptop, everything works. How do I get it to work in Azure ACI?
How can I mount the Docker socket in Azure ACI when registering the runner?
Many thanks in advance for your help.
You're not going to be able to run another Docker container inside the container you created in Azure ACI. In order to achieve "docker-in-docker" (dind), the daemon instance (your ACI container in this case) needs to run in privileged mode, which would allow escalated access to the host machine that you share with other ACI users. You can read more about this on Docker Hub, where it says:
Note: --privileged is required for Docker-in-Docker to function
properly, but it should be used with care as it provides full access
to the host environment, as explained in the relevant section of the
Docker documentation.
The common solution for this is to use an auto-scale group of 0 or more VMs to provide compute resources to your gitlab runners and the containers they spawn.
Gitlab docs - docker-machine autoscaler
blog post re: doing this on Azure https://www.n0r1sk.com/post/on-premise-gitlab-with-autoscale-docker-machine-microsoft-azure-runners/
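A minimal sketch of registering such an autoscaling runner (run from a small management VM rather than from ACI; the URL and token are the ones from the question, and the Azure docker-machine driver settings would then go into config.toml under [runners.machine]):
gitlab-runner register \
--non-interactive \
--url "https://gitlab.com/" \
--registration-token "MyTokenYYYYYYYY" \
--description "autoscaling-runner" \
--executor "docker+machine" \
--docker-image docker:latest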

How do I set the Docker disk size in Ubuntu CLI?

Docker is taking up disk space (screenshot here)
Is there a way to set the Docker disk image size being used, via the CLI?
Yes, there is. You can create a volume and attach it to your container.
As shown in the Docker documentation, you can create a volume like this:
docker volume create --driver local \
--opt type=tmpfs \
--opt device=tmpfs \
--opt o=size=100m,uid=1000 \
foo
And attach it to your container using the -v option of the docker run command:
docker run -d -v foo:/world busybox
You can find a description of each option in the Docker documentation; it is one of the best references you will find: https://docs.docker.com/engine/reference/commandline/volume_create/
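To confirm the size limit from inside a container, a quick sketch (the container name sized is arbitrary):
docker run -d --name sized -v foo:/world busybox sleep 3600
docker exec sized df -h /world
docker rm -f sized
The df output should report a filesystem of roughly 100M mounted at /world.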

Connecting from one Docker container to another using port forwarding and network=host fails

I wrote an app in python3.7.5 that connects to RabbitMQ:
Using Ubuntu as the docker-machine
I am running rabbitmq with docker:
docker run --name rabbitmq -p 5671:5671 -p 5672:5672 -p 15672:15672 --hostname rabbitmq rabbitmq:3.6.6-management
TEST:
My Python app connects to it via 127.0.0.1:5672.
Expected: connects and works
Actual: connects and works
I then put the app inside Docker and build and run it:
docker build
--build-arg ENVIRONMENT_NAME=develop
-t pdf-svc-image:latest .
&& docker run
-P
--env ENVIRONMENT_NAME=local
--name html-to-pdf
-v /home/mickey/dev/core/components/pdf-svc/:/html-to-pdf
--privileged
--network host
pdf-svc-image:latest bash
(This command line is created with pycharm)
When running this code (inside the Docker container), I get an exception:
return await aio_pika.connect_robust(
"amqp://guest:guest@{host}".format(host=consts.MESSAGE_QUEUE_HOST)
)
[Errno 111] Connect call failed ('127.0.0.1', 5672)
[Errno 99] Cannot assign requested address
Help?
According to https://docs.docker.com/network/host/,
Note: Given that the container does not have its own IP-address when using host mode networking, port-mapping does not take effect, and the -p, --publish, -P, and --publish-all option are ignored, producing a warning instead:
I am not sure this is your case. You could log into the container and run ping or nslookup to check the network connection, as sketched below.
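For example (the container name is the one from the question; ping and nslookup may need to be installed in the image):
docker exec -it html-to-pdf bash
ping -c 3 rabbitmq
nslookup rabbitmq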
RabbitMQ container
docker run --name rabbitmq \
-p 5671:5671 -p 5672:5672 -p 15672:15672 \
--hostname rabbitmq \
--network host \ # <-- add this line, now both containers can see each other
rabbitmq:3.6.6-management
App container
docker run \
-P \
--env ENVIRONMENT_NAME=local \
--name html-to-pdf \
-v /home/mickey/dev/core/components/pdf-svc/:/html-to-pdf \
--privileged \
--network host \
pdf-svc-image:latest bash
Then, in your code, set the host variable to rabbitmq instead of 127.0.0.1.
