I am trying to open a PDF file that I created inside a Docker container. I tried using xdg-open and Firefox, but I'm getting the following errors:
www-browser: not found
links2: not found
elinks: not found
links: not found
lynx: not found
w3m: not found
xdg-open: no method available for opening '1.pdf'
I don't know what to do. Please help.
Copy the pdf out of the alpine container with docker cp alpine:/path/to/pdf . and open it on the host.
What you need is to mount a volume; see "Use volumes" in the Docker documentation.
Then:
If you want to open the file from within your container, you can use an X server like Xming and forward your display from the container by passing the DISPLAY variable to it.
If you want to open it on the host, just go to the mounted folder and open it with any PDF viewer application.
The second option also lets you check whether the file was created correctly, i.e. a convenient way to verify that the problem isn't coming from the file itself.
So in order to set up the amazon-ecr-credential-helper I need to add some lines to the .docker/config.json file in my EC2. When I try to run the script
echo "{ \"credHelpers\": { \"acc_id.dkr.ecr.acc_region.amazonaws.com\": \"ecr-login\" } }" > ~/.docker/config.json
I get an error -bash: /root/.docker/config.json: No such file or directory
Docker is installed, I'm using root user. This is an Amazon Linux EC2. Can someone please tell me what is wrong here? Doesn't Docker already create the .docker folder? Or is this something I need to do?
Some context: I intend to run this script as part of the EC2 Userdata, but have been facing issues, so trying to debug within the container first.
Any hints in the right direction would be highly appreciated.
Thanks!
"Doesn't Docker already create the .docker folder?"
No, Docker doesn't create a .docker folder in every user's home directory. You need to create that folder yourself.
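A minimal sketch of what the user-data script could do, creating the folder before writing the file (the account ID and region in the JSON are the question's own placeholders, to be substituted with real values):

```shell
# Create the .docker directory first; -p makes this a no-op if it already exists
mkdir -p ~/.docker

# Now the redirect has a directory to write into
echo '{ "credHelpers": { "acc_id.dkr.ecr.acc_region.amazonaws.com": "ecr-login" } }' > ~/.docker/config.json

# Verify the result
cat ~/.docker/config.json
```

Single quotes around the JSON also avoid having to escape every double quote, as the original `echo` did.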
I'm trying to demonstrate an ROP attack and keep getting a "Read-only file system" error on my LXC container.
I'm trying to execute the command:
echo "0" > /proc/sys/kernel/randomize_va_space
The following is returned:
bash: /proc/sys/kernel/randomize_va_space: Read-only file system
Any help is appreciated.
If this is still relevant: you can't change such settings inside a container.
Docker does not support changing sysctls from inside a container when they would also modify the host system.
Therefore you have to change the setting outside of your container, on your host system.
Just run the same command on the host and then create your container; the file randomize_va_space inside your container will then automatically contain the value 0 you set.
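On the host that would look like the following sketch; writing the value needs root, so this only attempts the write when run as root:

```shell
# Show the current ASLR setting (0 = disabled, 1 = conservative, 2 = full randomization)
cat /proc/sys/kernel/randomize_va_space

# Disable ASLR on the host; requires root
if [ "$(id -u)" -eq 0 ]; then
    echo 0 > /proc/sys/kernel/randomize_va_space
else
    echo "re-run as root (or via sudo) to change the setting"
fi
```

Containers started afterwards will see the host's value, since /proc/sys/kernel is shared with the host kernel.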
I've created a simple example where I'm trying to achieve a simple task: I need to mount a specific folder inside my container to a specific folder on the host machine.
The Dockerfile that creates the image I use in my docker-compose is the following:
FROM ubuntu:bionic
COPY ./file.txt /vol/file.txt
Into this folder, on the container side, I copy a file called file.txt, and I need this behaviour on the host machine:
If the host machine folder is empty or doesn't exist, or the file is missing, file.txt should be copied into it.
If the host machine folder already contains a file with that name, the file is preserved and not overridden.
This is exactly what happens when a named volume is used in docker-compose. The problem is that you can't choose the folder on the host machine: you can only assign a name to the volume, and Docker will create it inside one of its own folders.
A bind mount, instead, lets you choose the folder on the host machine, but the behaviour is different: even if the folder on the host machine is empty or doesn't exist, file.txt disappears from the container.
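To make the two options concrete, here is a hypothetical docker-compose.yml sketch showing both forms (the service, image, and volume names are made up for illustration):

```yaml
services:
  app:
    image: myimage            # built from the Dockerfile above (hypothetical name)
    volumes:
      - mydata:/vol           # named volume: gets pre-populated with file.txt on first use,
                              # but the host location is chosen by Docker
      # - ./host-folder:/vol  # bind mount: the host path is yours to choose,
      #                       # but it shadows whatever the image put in /vol

volumes:
  mydata:
```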
A practical example of this is a common application distributed with Docker. I start from a dotnet image and copy my application into it, and the application starts on container load. This application has a configuration file that I need to preserve across image updates, so I need to copy it to the host machine, and this file has to be editable by the user of the host machine, so it can't sit in a randomly named subfolder of the Docker installation.
Is there a way to achieve this?
I need some help referencing an image present on my Ubuntu server. The image path on the server is "/home/Ubuntu/Chat/public/images/directions-icon.jpg". When I try to open the saved HTML, it doesn't display anything; I think it's referencing my laptop's path. How do I explicitly reference the Ubuntu path here? Please help me. Thank you all.
My Code:
<html>
<img src="/home/Ubuntu/Chat-BOT/public/images/directions-icon.jpg">
</html>
Error: not displaying anything
Just check the following things for displaying the image.
First, Ubuntu is a Linux-based operating system and its paths are case-sensitive, so make sure you give the exact path.
Secondly, there might be a permission issue with accessing the directory. You must at least have read permission for the image to be shown; run the "sudo chmod" command to change the permissions of the desired directory.
Thirdly, make sure that the path you are giving is inside the www or public web directory of your web server, because if you are serving your pages from localhost, relative paths start from the folder within the www or web directory. For example, if the file is in
/www/images/image.png
The relative path will be
src="/images/image.png"
which means the path according to the local web address is
http://localhost/images/image.png
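Applied to the question's file, the img tag would then use the web-relative path rather than the filesystem path (assuming the images folder sits under the web root):

```html
<html>
<img src="/images/directions-icon.jpg">
</html>
```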
Thanks.
Perhaps you should start by checking your path again:
/home/Ubuntu/Chat/public/images/directions-icon.jpg
and
/home/Ubuntu/Chat-BOT/public/images/directions-icon.jpg
are not exactly the same.
You seem to be referencing Chat-BOT while you probably meant to reference Chat (or perhaps vice versa).
I've got a situation where I've got a .json file that I want to persist between runs of a given container. In addition this file needs to be appended to by the container as part of it running.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm it's the same). I have ensured that the root user (yes not best practice) who is running docker owns all of the files in that folder and has full rwx.
What DOES work is if I bind at the folder level eg:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise and also causes structural changes in the app (eg. source control with multiple files (plus the .json file mentioned) in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file will change the inode when writing a new copy of the file, and this new inode is stored in the directory as the new pointer to that filename. When the directory is mounted, you see the change on your host; but when only the file is mounted, you see the change only inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
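You can see this inode replacement outside of Docker too; here is a small sketch using GNU sed, which rewrites a file by writing a temporary copy and renaming it over the original:

```shell
# Create a scratch file and record its inode number
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/database.json"
before=$(stat -c %i "$tmpdir/database.json")

# sed -i writes a new file and renames it over the old one...
sed -i 's/hello/world/' "$tmpdir/database.json"

# ...so the filename now points at a different inode
after=$(stat -c %i "$tmpdir/database.json")
echo "inode before: $before, after: $after"
```

A single-file bind mount keeps pointing at the old inode, which is why the edit never appears on the host side.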