How to persist CouchDB data? - hyperledger-fabric

I created a Composer network and persisted my CouchDB volume.
How can I recreate the network using the previously persisted CouchDB data?

It's actually a question about Docker. You need to create a Docker volume and use only that volume for your container. You can back up that volume, and whenever you want to start a new container you can mount the volume into it at start time. See the Docker documentation on volumes for details.
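For example, a minimal sketch with a named volume (the names couchdb_data and couchdb and the image tag are illustrative; the data path /opt/couchdb/data matches the official CouchDB image, so adjust it for the image you actually run):

    # create a named volume once; it survives container removal
    docker volume create couchdb_data

    # start CouchDB with the volume mounted at its data directory
    docker run -d --name couchdb -p 5984:5984 \
        -v couchdb_data:/opt/couchdb/data couchdb:3

    # optional: back up the volume to a tarball in the current directory
    docker run --rm -v couchdb_data:/data -v "$PWD":/backup busybox \
        tar czf /backup/couchdb_data.tar.gz -C /data .

    # recreate the container later against the same (or a restored) volume
    docker rm -f couchdb
    docker run -d --name couchdb -p 5984:5984 \
        -v couchdb_data:/opt/couchdb/data couchdb:3

As long as the network's CouchDB container is started with the same named volume, the previously persisted state data is picked up again.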

Related

How to create a container image and download it from an Azure Container Instance

I want to pull an Azure container to my local system from an Azure Container Instance.
My goal is to create an image from this container; I just don't know how. I have tried running the ARM script in Azure and creating the container. Now I have the container, but I don't know how to create an image from it and then download it.
If you have made changes to the ACI container's filesystem that you want to persist in a new image then, I'm afraid, without access to the underlying infrastructure hosting the container runtime, you will not be able to perform a docker commit or a similar operation to build an image from the running container.
At the time of writing, since Azure Container Groups are a Container-as-a-Service offering, Azure does not give customers access to the underlying infrastructure that supports them.
Instead, you can write your own Dockerfile [How-to] with atmoz/sftp:latest as the base image and add layers for the desired modifications. Refer to atmoz/sftp on Docker Hub for usage and guidelines.
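A hedged sketch of that approach, written as shell so the build steps stay together (the users.conf path follows the atmoz/sftp README; the registry name and the copied files are placeholders for whatever you changed inside the ACI container):

    # Dockerfile layering your modifications on top of atmoz/sftp
    cat > Dockerfile <<'EOF'
    FROM atmoz/sftp:latest
    # illustrative only: re-apply the changes you made in the running container
    COPY users.conf /etc/sftp/users.conf
    EOF

    # build locally, then (optionally) push to a registry that ACI can pull from
    docker build -t myregistry.azurecr.io/custom-sftp:1.0 .
    docker push myregistry.azurecr.io/custom-sftp:1.0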

Is it possible to attach an NFS (NetApp Files) volume to an Azure Container Instance?

I have an NFS server (NetApp Files) which I wish to mount as a volume on an Azure Container Instance. Is this possible?
I need to do this programmatically and the az CLI appears not to have any options for it.
I can launch the container in the same vnet as the NFS server but I can't then mount the volume as far as I can tell.
I am able to access the volume from within an AKS cluster I run but I was hoping to keep this bit of functionality outside of the cluster.
I also haven't found any workaround (such as mounting the volume inside the container).
Any suggestions? Is this impossible?
Thanks.
The short answer is yes, it's possible to mount an Azure NetApp Files volume in an Azure Container Instance. You don't do this programmatically or with the Azure CLI; instead, you create a custom Docker image with drivers such as SAMBA/CIFS (or an NFS client) that let you mount the NFS volume as a Docker volume.
As references, you can follow the steps and related documents below.
The blog Using a SAMBA/CIFS mount as a docker volume introduces the basic idea, and the blog Mount AWS EFS, NFS or CIFS/Samba volumes in Docker is also useful here.
Follow the official document Tutorial: Create a container image for deployment to Azure Container Instances to learn how to create a container image.
The official document Mount or unmount a volume for Windows or Linux virtual machines gives the mount instructions for Azure NetApp Files; they help you mount a volume on a Linux instance, which here is the Docker host running the custom image you need to build.
Then, once the image has been built, you can deploy it as an Azure Container Instance. For this, refer to the two official documents Tutorial: Deploy an Azure container registry and push a container image and Tutorial: Deploy a container application to Azure Container Instances.
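As a rough sketch of the NFS piece on a Linux Docker host (the IP address, export name, mount point, and image are placeholders; the mount options mirror the Azure NetApp Files NFSv3 instructions, so verify them against that document):

    # install an NFS client on the host (Ubuntu/Debian shown)
    sudo apt-get update && sudo apt-get install -y nfs-common

    # mount the Azure NetApp Files volume on the host
    sudo mkdir -p /mnt/anf
    sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp \
        10.0.0.4:/myanfvolume /mnt/anf

    # hand the mounted path to a container as a bind mount
    docker run -d --name myapp -v /mnt/anf:/data myimage:latest

Inside ACI itself there is no host to prepare, which is why the answer above builds the client tooling into a custom image instead.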

How to transfer docker volumes to kubernetes volumes in azure?

I'm trying to apply Kubernetes to one of my applications, and the app uses Docker volumes to save its data.
When I apply Kubernetes to that app it obviously won't see any data from the Docker volumes, since it needs Kubernetes volumes. The thing is, I have my data inside the Docker volumes and I need to transfer it to a Kubernetes volume. All of this is running in Azure, and since Kubernetes integrates with Azure I figured there should be a way to automate this, but I couldn't find how to do it.
If someone can help with this I'll be very thankful.
To transfer the data in the Docker volumes to Azure Kubernetes volumes, the approach I can think of is to copy the data in the Docker volumes to an Azure file share and then mount that file share into the Kubernetes volumes.
The best way is to mount the Azure file share on the machine where the Docker daemon runs from the beginning, create the volumes in the mount path, and use those volumes for your application in Docker. When you deploy the application in Azure Kubernetes Service, mount the same file share into its volumes; a rough sketch follows below.
I do not think Azure Kubernetes Service has an interface to automate transferring data from Docker volumes to AKS volumes.
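A hedged sketch of the copy step with the Azure CLI (myrg, mystorageacct, myshare, and myvolume are placeholders; the CIFS mount options follow the usual Azure Files instructions):

    # create a storage account and a file share to receive the data
    az storage account create -g myrg -n mystorageacct --sku Standard_LRS
    STORAGE_KEY=$(az storage account keys list -g myrg -n mystorageacct \
        --query "[0].value" -o tsv)
    az storage share create --account-name mystorageacct \
        --account-key "$STORAGE_KEY" -n myshare

    # mount the share on the Docker host and copy the volume's contents into it
    sudo apt-get install -y cifs-utils
    sudo mkdir -p /mnt/myshare
    sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
        -o vers=3.0,username=mystorageacct,password="$STORAGE_KEY",serverino
    sudo cp -a /var/lib/docker/volumes/myvolume/_data/. /mnt/myshare/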
When I apply Kubernetes to that app it obviously won't see any data from the Docker volumes, since it needs Kubernetes volumes.
This is not fully true. As @Charles Xu mentioned, as long as your Docker containers use Azure File Storage-backed volumes, you can seamlessly mount the same data volumes (Azure File Storage) as Persistent Volumes in Azure Kubernetes Service (one of the two types of data storage supported by AKS).
Taking into account that you are currently running your Docker containers in an on-premises environment, you can use the Docker Volume Driver for Azure File Storage to start pushing your data to Azure (check the demo here), or, for a Swarm cluster in Azure, use Cloudstor. A sketch of the AKS side is shown below.
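For the AKS side, a minimal sketch of mounting that same share as a volume (the names reuse the placeholders above; newer clusters may prefer the Azure Files CSI driver or a PersistentVolumeClaim over this inline azureFile volume):

    # store the storage account credentials as a Kubernetes secret
    kubectl create secret generic azure-share-secret \
        --from-literal=azurestorageaccountname=mystorageacct \
        --from-literal=azurestorageaccountkey="$STORAGE_KEY"

    # run a pod that mounts the share through the azureFile volume type
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp
    spec:
      containers:
      - name: myapp
        image: myimage:latest
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        azureFile:
          secretName: azure-share-secret
          shareName: myshare
          readOnly: false
    EOF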
According to the kubernetes documentation:
https://kubernetes.io/docs/concepts/storage/volumes/
Docker volumes and Kubernetes volumes are different and work differently.
When you create your PVC or it is dynamically created for you, mount it into your pod and copy the data into it using:
kubectl cp
Your app will automatically start storing data in the volume.
Docker volumes are just directories on disk, and you can't simply map them to a Kubernetes volume.
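A minimal sketch of that copy step (the claim name, pod name, size, and source path are illustrative; kubectl cp needs tar in the target container, which busybox provides):

    # create a PVC plus a throwaway pod that mounts it
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: data-loader
    spec:
      containers:
      - name: shell
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data
    EOF

    # copy the docker volume's contents into the claim
    # (check the resulting layout; kubectl cp may nest the source directory)
    kubectl cp /var/lib/docker/volumes/myvolume/_data data-loader:/data

Once the data is in the claim, delete the loader pod and mount the same PVC from your application's pod.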

Mount Azure storage in Docker Swarm

I have successfully set up a Docker swarm on an Azure scale set, all good. Now I need to mount Azure storage in my container for my app. This is of course simple with "docker run", since I can add capabilities, but it cannot be achieved through "docker service create". What are my options for mounting storage shares (CIFS/Samba) on the autoscaling nodes for my container?
Thanks in advance.
There is a volume plugin for Docker to mount Azure storage entities as a volume using a volume driver: https://github.com/Azure/azurefile-dockervolumedriver
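A hedged usage sketch, assuming the plugin is installed and registered as azurefile on every swarm node and that an Azure Files share named myshare already exists (the volume, service, and image names are illustrative):

    # single container: create a volume with the driver and mount it
    docker volume create -d azurefile -o share=myshare mydata
    docker run -d -v mydata:/data myimage:latest

    # swarm service: declare the driver in the --mount flag so each node can resolve it
    docker service create --name myapp \
        --mount type=volume,volume-driver=azurefile,volume-opt=share=myshare,source=mydata,target=/data \
        myimage:latest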

Running Docker Oracle Container with different databases

I want to realize the following scenario:
create a basic Oracle XE Docker container (no database inside)
connect this container to a database file backup container
remove the database file container and connect to another file container
It's not very useful if I create an Oracle XE container with a non-exchangeable database, because then I'd have 20 containers, each 2.5 GB in size.
Is there a way to do this? As I understand it, you can link containers.
So I would have to create a container for each database version I have and link the Oracle container with the container holding the database backup file. But how can I tell Docker that I want to use the backup file from one container in the Oracle container, and how can I easily exchange it?
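One hedged way to sketch the exchangeable-database idea uses named volumes instead of linked backup containers (the image tag oracle/database:18.4.0-xe and the oradata path come from Oracle's docker-images build scripts; treat both as placeholders for the image you actually use):

    # one data volume per database you want to keep around
    docker volume create oradata_v1
    docker volume create oradata_v2

    # run the single, shared Oracle XE image against one of the data volumes
    docker run -d --name xe -p 1521:1521 \
        -v oradata_v1:/opt/oracle/oradata oracle/database:18.4.0-xe

    # "exchange" the database: remove the container, start it on another volume
    docker rm -f xe
    docker run -d --name xe -p 1521:1521 \
        -v oradata_v2:/opt/oracle/oradata oracle/database:18.4.0-xe

This keeps one roughly 2.5 GB Oracle XE image and swaps only the data directories, rather than one full container per database.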
