I have a Sawtooth 1.1 dockerized network, and I'm trying to back up the database from the validators so I can take down every container and then recover in case of a disaster.
Trying to achieve this, I proceeded as follows (a condensed sketch of the same commands is shown after the list):
Stopped all the containers;
Backed up all the files of one of the validators in /var/lib/sawtooth/ using
cp --sparse=always [file] [file_backup]
Removed all the containers using docker-compose down
Started a fresh network with docker-compose up
Stopped all containers using docker-compose stop
Copied the files backed up in step 2 to the new validators, using the command from that same step;
Restarted the whole network using docker-compose restart
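Condensed, the commands I ran look roughly like this (./validator-0-data and ./backup are placeholder paths; it assumes the validator's /var/lib/sawtooth is bind-mounted to the host):
docker-compose stop                               # stop every container
for f in ./validator-0-data/*; do                 # back up each database file
  cp --sparse=always "$f" ./backup/
done
docker-compose down                               # remove the containers
docker-compose up -d                              # start a fresh network (detached)
docker-compose stop                               # stop it again
for f in ./backup/*; do                           # restore the backed-up files
  cp --sparse=always "$f" ./validator-0-data/
done
docker-compose restart                            # bring the network back up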
After this I could see that the state was correct: users on the blockchain have the same balances as before the docker-compose down. But the blockchain doesn't process new transactions.
The only error that I've found in the logs was in sawtooth-poet-engine, I believe during consensus, as shown in this image:
ERROR_IMAGE.
So my question is: has anybody tried to do this successfully, or does anyone have an idea of what I'm doing wrong?
I just tried the same thing and it worked for me.
One possible problem is file permissions and ownership.
Use the cp -p option to preserve ownership and permissions:
cp -p --sparse=always [file] [file_backup]
Also verify that the ownership and permissions are correct with ls -l /var/lib/sawtooth.
They should all be readable/writable by the owner and owned by user/group sawtooth. If not, fix the ownership and permissions with something like
chown sawtooth:sawtooth /var/lib/sawtooth /var/lib/sawtooth/*
chmod u+rw /var/lib/sawtooth /var/lib/sawtooth/*
chmod ugo+r /var/lib/sawtooth/block-* /var/lib/sawtooth/txn_receipts-00.lmdb*
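If /var/lib/sawtooth lives in a Docker volume rather than on the host, you can run the same checks and fixes from inside the running container; a rough sketch, assuming a compose service named validator-0 and an image that has a sawtooth user (adjust both to your setup):
docker-compose exec validator-0 ls -l /var/lib/sawtooth
docker-compose exec validator-0 chown -R sawtooth:sawtooth /var/lib/sawtooth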
I am running a default t2.nano ec2 linux ami. Nothing is changed on it. I am trying to rsync my local changes to the server. There is a permissions issue that I don't know enough about to fix.
My structure is as follows. I'm trying to push my work to the technology directory. The technology directory is mapped to a staging domain. i.e. technology.staging.com
:/var/www/html/technology
this is from the root, and it does work fine; it's the rsync that is failing.
when I push locally to that directory I get a "failed: Permission denied (13)" error.
I'm running an nginx server and assigned permissions to the www directory as follows:
sudo chown -R nginx:nginx /var/www
My user is ec2-user, which is the normal default. Here is where I am tripped up: you can see the var directory is given root access.
You can see that the www directory then has permissions set to nginx so our server can access the files. I believe I need to add the ec2-user to this directory as well as the nginx user, so that I can rsync my files there and the server will still have access; I'm just unsure of how to do that.
As a test, I created a test directory at this location and it worked successfully.
:/home/ec2-user/test
You can see the permissions here are set for the ec2-user, which is why it works, I'm sure.
Here's the command I'm running on my local machine to rsync my files, which fails.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Here's the command that was working.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/home/ec2-user/test
I have done enough research and testing to know that it's a permissions error, I just can't figure out the right way to solve it. Do I need to create a group, assign both nginx and ec2-user to it, and then give that group the same permission level on the :/var directory?
Side note: what permission level do I set for the chown to recreate the permissions that are currently set?
I have server config files in the :/etc/nginx/conf.d/ directory that map to the directories I create inside of :/var/www/html directory so I can have multiple sites hosted on the server.
So in this example, I have a config file at :/etc/nginx/conf.d/technology.conf which maps to the directory at :/var/www/html/technology
Thank you in advance. Again, I do feel like I have put forth the research and effort to show that I've gone as far as I know how to go.
The answer made sense after I spent roughly a day playing around. You have to give access to both the ec2-user and the nginx group. I believe you never want to put a user in a group that involves the server itself; I think things would go south.
After changing the owner to both the ec2-user and the nginx group, it still didn't work exactly the way I wanted it to. The reason was that I needed the nginx permissions to be updated to what they had when they were assigned the user role.
Basically, the ec2-user had write permissions and the server did not. We wanted the user to have write permissions so they could rsync my local files to the directory on the server, and the nginx group needed the same level of permissions to display the pages. Now that I think about it, the nginx group may only have needed read permissions to display things, but this at least solved the problem for now.
Here is the command I ran on the server to update the ownership and the permissions, as well as the output.
modify ownership
sudo chown -R ec2-user:nginx /var/www/html/technology
modify permissions
sudo chmod -R o=rwx,g+rwx,o-w technology
The end result looks like this
You can see the permissions match, and the ownership is as we expected. The only thing I have to figure out is that after I rsync new files to the server, I need to run the previous commands to update the permissions again. I'm sure that will come to me later, but I hope this helps anyone in the same situation.
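One possible way to avoid re-running the chmod after every sync (an untested sketch, using the same paths and key as in the examples above) is to set the setgid bit on the directory once and let rsync apply the modes itself:
# run once on the server: new files inherit the nginx group
sudo chmod g+s /var/www/html/technology
# then sync while telling rsync which modes to apply (D = directories, F = files)
rsync -azP --chmod=D2775,F664 \
  -e "ssh -i /Users/username/devwork/company/comp.pem" \
  company_technology/ ec2-user@1.2.3.4:/var/www/html/technology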
I'm having a problem because I've installed & started Docker as "bad_user". The problem is that this container generates static files (it's a jekyll/jekyll image), and those files are owned by "bad_user", so I cannot edit them (I know I could add myself to the bad_user group or take ownership of the directory with chown -R, but it would be painful to do that every time, and it just bugs me :).
I have tried reinstalling Docker & removing the /etc/docker directory without any effect. Every time I reinstall it, the Docker service/daemon runs as "bad_user" and overwrites the directory owner.
My question is:
Would it be possible to make Docker run under a "docker" user? I have already created that user with that group (yes, I have reinstalled docker-ce under that user already).
I'm working on a Debian-based distro.
I guess in my case it's a Docker daemon issue: somehow, when it's synchronizing shared volume files, it assigns ownership to bad_user instead of the user who is running the container.
PS: This is the command I run, if that matters:
docker run --rm -p 8000:8000 \
--volume="/home/docker/blog:/srv/jekyll" \
-it tocttou/jekyll:3.5 \
jekyll serve --watch --port 8000
Okay, I figured it out. It turns out that when you run a Linux container that creates files on the shared volume (the -v argument makes a shared volume), those files end up owned by the user with group id = 1000 and user id = 1000. In my case the user with id=1000 was "bad_user". If you want to work around that, you can use --user and specify the user id that you're running under.
The key is to remember that Linux permissions are just numbers: for the host filesystem, number 1000 is (in my case) "bad_user" and 10001 is "docker_user". If you check permissions from inside the container, you might see that user id = 1000 means a very different user than on your host system.
I hope the next people who encounter this issue will find this useful.
You can find more information here: https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/
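For example, a minimal sketch of that workaround using the same image and paths as above (depending on the image, running under a non-default uid may need extra tweaks, e.g. for gem caches):
docker run --rm -p 8000:8000 \
  --user "$(id -u):$(id -g)" \
  --volume="/home/docker/blog:/srv/jekyll" \
  -it tocttou/jekyll:3.5 \
  jekyll serve --watch --port 8000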
I need any files that are created in a specific Linux directory to have 777 permissions.
I would like all users to be able to read, write and execute all files under this folder. So what is the best way, or Linux command, to make that happen?
What I am doing is spinning up two separate containers, one for the Nginx web server and one for the PHP-FPM app server, to host a Laravel 5.4 app.
Please consider the following scenario: I have a Docker application container A (PHP-FPM) which serves the web application files to Docker container B (Nginx). When I access the website, the web pages are delivered through the web container. Both containers are on the same network, and I share the volumes from my app container to my web container. But when the web container tries to read the files on the app container, I get an error like the one below:
The stream or file "/var/www/storage/logs/laravel.log" could not be
opened: failed to open stream: Permission denied
So I added RUN chmod -R 777 storage in my Dockerfile.
However, it is not solving the issue.
So I also tried using SGID to fix the issue by adding one more line to my Dockerfile: RUN chmod -R ug+rwxs storage. Still, it does not solve the permission issue.
On a separate note, the funny thing is that on my Mac this works in Docker without any issue (I mean without adding chmod -R 777 to the folder or using SGID to set permissions on a folder in my Dockerfile). But when the same code is run on a Linux EC2 instance (Amazon Linux AMI EC2), the permission issues start to occur.
So how do I fix this?
The solution is to launch both containers with the same user, identified by the same uid. For instance, you can choose root or any other uid when running the containers:
docker run --user root ...
Alternatively, you can switch to another user before startup, inside your Dockerfile, by adding the following before the CMD or ENTRYPOINT:
USER root
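As a minimal sketch of the first option (the image names and the shared path are placeholders, not the poster's exact setup), both containers are started as the same user:
docker run -d --name app --user root \
  -v "$(pwd)/src:/var/www" my-php-fpm-image
docker run -d --name web --user root \
  -p 80:80 -v "$(pwd)/src:/var/www" my-nginx-image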
I solved it by figuring out the user name under which the cache files are created when someone accesses the application URL, and then updating my Dockerfile to include an SGID ownership statement for that user on the root of the app folder where all the source code resides (so all subfolders and files added later, in whatever way, at run time, are accessible from the web container for that user), and then using chmod 777 on the specific folders that need it.
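Roughly, the Dockerfile lines for that approach could look like the following sketch (www-data and the storage path are assumptions; substitute the user that actually creates the cache files and the folders that apply to your app):
# hypothetical lines in the app (PHP-FPM) Dockerfile
RUN chown -R www-data:www-data /var/www \
 && chmod -R g+rwxs /var/www \
 && chmod -R 777 /var/www/storage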
I have accidentally executed sudo rm -rf */ on my Amazon EC2 Linux instance.
As soon as I saw the files being deleted, I terminated the command with Ctrl+C.
My website on the instance is still working fine, but I am unable to connect to the instance through SSH; it says the connection has been closed. On another PC there is an already existing session, but when I run ls or sudo commands it reports that usr/bin/ls is not found. cd works fine for switching directories.
How can I connect through SSH and restore the deleted directories? Or how can I solve this issue?
I'm sorry to tell you, but I don't think there is much you can do now. Setting it up completely fresh would be the easiest and, I think, most feasible way to get it working again. And there's this one rule: never use "sudo rm -rf" without checking your location twice and a third time. You can delete your whole system if you run this command in your root directory. So good luck.
Nico
Let us say you deleted / on instance I1, which has a volume V1 attached. Now create another instance with the same configuration, call it I2 with volume V2.
Now stop I1, detach V1 from it, and attach it to I2.
Mount V1 on I2.
Since both instances are from the same AMI, their filesystems have the same UUID, so mount with the nouuid option: mount -o nouuid <V1 device> <mount point>.
Copy the files under / on V2 to the corresponding places under / on V1.
Detach V1 from I2 and re-attach it to I1.
I hope it works.
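A rough sketch of that flow with the AWS CLI (every instance ID, volume ID, device name and path below is a placeholder; the nouuid option only applies to XFS filesystems):
aws ec2 stop-instances --instance-ids i-11111111111111111          # stop I1
aws ec2 detach-volume --volume-id vol-11111111                     # detach V1 from I1
aws ec2 attach-volume --volume-id vol-11111111 \
  --instance-id i-22222222222222222 --device /dev/sdf              # attach V1 to I2

# on I2: mount V1 and copy the missing files back
sudo mkdir -p /mnt/v1
sudo mount -o nouuid /dev/xvdf1 /mnt/v1
sudo cp -a /usr /mnt/v1/                                           # adjust to whatever was actually deleted
sudo umount /mnt/v1

aws ec2 detach-volume --volume-id vol-11111111                     # detach V1 from I2
aws ec2 attach-volume --volume-id vol-11111111 \
  --instance-id i-11111111111111111 --device /dev/xvda             # re-attach V1 to I1 as its root device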
I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMware or boot2docker). I've built my image and have a running container that has one shared (mounted) directory from my host and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to either the directory (e.g. adding a new file or folder) or edit the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or the parent directory requires root permissions, which impedes my workflow (since I'm working from Sublime Text, not the terminal, I'm presented with a popup for admin privileges).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that instead of mounting /components and /index.php, I should instead mount their parent directory. Sounds great in theory, but based on the behavior of the -v option and how it interacts with a directory, it would seem that every file in my parent directory would end up owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container: things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory seems wasteful.
Running chown user:group on /components and /index.php on my host machine lets me work around this, and they seem to keep syncing with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing there is a more efficient way to do this, and I'm just not finding an explanation for my particular use case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
The Dockerfile and /setup script are on Pastebin (never expires), to avoid an even longer post.
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like your chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files to be owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation on how file ownership bits work between your host and your docker containers.
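A quick sketch (not from the post above) of how to see that mapping yourself; the container name matches the run command earlier, and the uid numbers are examples only:
docker exec my-container id nginx          # e.g. uid=101(nginx) gid=101(nginx) inside the container
getent passwd 101                          # on the host, the same number may belong to libuuid
# the workaround mentioned in the question: give the mounted paths back to your own uid on the host
sudo chown -R "$(id -u):$(id -g)" components index.php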