My aim is to change the directory of the NGINX installation that runs as the web server. The motive: a custom-compiled NGINX, with features that don't come with the standard package.
I've compiled NGINX from source and, as suggested on this page, pointed all configuration at the new location /usr/local/nginx at compile time. The default installation is at /usr/share/nginx.
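For context, this is the sort of source build implied here (a sketch; the exact flags from the original build aren't shown, and --prefix/--conf-path are illustrative):
./configure --prefix=/usr/local/nginx --conf-path=/usr/local/nginx/nginx.conf
make
sudo make install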
After starting the service, NGINX still runs from the default installation.
I've tried to load NGINX with the new configuration, nginx -c /usr/local/nginx/nginx.conf, which breaks everything, returning a 404 error for index.html.
Multiple searches only turn up results about changing the site directory.
Is there a solid way to specify where NGINX loads from?
Edit:
As suggested by John Ankanna below, the following fixed it:
sudo mv /usr/share/nginx /usr/share/nginx.bkp - just renaming the directory so the current setup can be recovered.
sudo ln -s /usr/local/nginx /usr/share/nginx - create a symlink in place of the original.
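A quick way to verify the result (a sketch, using the paths above):
ls -ld /usr/share/nginx       # should now show a link to /usr/local/nginx
sudo nginx -t                 # confirm the configuration parses
sudo systemctl restart nginx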
Debian/Ubuntu use a standard directory hierarchy. The command man hier will describe this for you. It is common for packages to create symlinks to place files in the correct place when the program expects them elsewhere.
Try creating a symlink:
sudo ln -s /usr/share/nginx /usr/local/nginx
Related
I'm doing a tutorial to learn more about sysadmin work and services on Linux. I arrived at a chapter about Tomcat and Jenkins; it's about installing Jenkins as a Tomcat servlet. I was following the instructions and ran into trouble when I tried to change Jenkins' default configuration directory, as advised in the tutorial.
OK, so I first installed Tomcat alone, and the default web page showed up as expected at http://www.example.com:8080/
I downloaded Jenkins using: wget https://get.jenkins.io/war-stable/2.361.2/jenkins.war
I moved the .war file into /var/lib/tomcat9/webapps using: sudo mv jenkins.war /var/lib/tomcat9/webapps
Now here's where it gets tricky: the tutorial says that Jenkins puts its configuration, log, and build files in /root/.jenkins/ by default, and advises changing that to put them in /var/lib/jenkins/.
To do that, I first created the directory: sudo mkdir /var/lib/jenkins
I changed the permissions so that Tomcat can access it, using: sudo chown tomcat:tomcat /var/lib/jenkins
I went into /etc/tomcat9/context.xml and added, inside the <Context> tags:
<Context>
...
<Environment name="JENKINS_HOME" value="/var/lib/jenkins" type="java.lang.String" />
</Context>
I edited the Tomcat service file /lib/systemd/system/tomcat9.service to avoid read and write problems for Jenkins, adding under the # Security sub-section of the [Service] section:
ReadWritePaths=/var/lib/jenkins/
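(As an aside, the same setting can be granted through a drop-in override instead of editing the packaged unit file; a sketch, where the drop-in file name is arbitrary:
sudo mkdir -p /etc/systemd/system/tomcat9.service.d
sudo tee /etc/systemd/system/tomcat9.service.d/jenkins-home.conf <<'EOF'
[Service]
ReadWritePaths=/var/lib/jenkins/
EOF
The daemon-reload in the next step applies it either way.)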
I reloaded the systemd daemon to pick up the new service file configuration: sudo systemctl daemon-reload
I restarted Tomcat: sudo systemctl restart tomcat9
I went to http://www.example.com:8080/jenkins to access the Jenkins installation. I see the Jenkins logo, but I get an error:
Error
Unable to create the home directory ‘/var/lib/tomcat/.jenkins’. This is most likely a permission problem.
To change the home directory, use JENKINS_HOME environment variable or set the JENKINS_HOME system property. See Container-specific documentation for more details of how to do this.
Obviously there is a permission problem, but I can't pin it down; my knowledge of these technologies and of Linux is too limited. In the video tutorial, the teacher does exactly what I did and everything works perfectly. I searched a lot on Stack Overflow and Google but couldn't find anything quite like this.
Still, it's odd that Jenkins wants to create the home directory at /var/lib/tomcat/.jenkins when I specified /var/lib/jenkins. So it looks like, even though I reloaded and restarted everything, my change hasn't been taken into account.
Thank you for the help :)
I found a way around the problem, thanks to the error message indicating that .jenkins was being created in the /var/lib/tomcat directory instead of the /var/lib/jenkins specified in the context.xml configuration file.
When you look in the /etc/passwd file, a tomcat user is present with /var/lib/tomcat as its home directory. I simply changed this path to /var/lib/jenkins.
Then I ran sudo systemctl daemon-reload and sudo systemctl restart tomcat9, and it worked for me.
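For reference, the same change can be made without editing /etc/passwd by hand; a sketch, assuming the service is stopped first:
sudo systemctl stop tomcat9
sudo usermod -d /var/lib/jenkins tomcat   # equivalent to the manual /etc/passwd edit
sudo systemctl start tomcat9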
I found no other solution on Stack Overflow or Google either.
I still don't understand why the Tomcat daemon doesn't take the context.xml file into account and prefers its own home directory for installing Jenkins.
If someone has a cleaner solution I am interested.
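For anyone looking for that cleaner route: one commonly suggested option (a sketch, untested here; the path assumes Debian's tomcat9 layout) is a per-application context fragment, so the setting lives next to the webapp rather than in the global context.xml:
sudo tee /etc/tomcat9/Catalina/localhost/jenkins.xml <<'EOF'
<Context>
  <Environment name="JENKINS_HOME" value="/var/lib/jenkins" type="java.lang.String" />
</Context>
EOF
sudo systemctl restart tomcat9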
When I start minishift, it automatically updates its cache in the home directory:
/home/abc/.minishift/cache/.....
However, I want minishift to use a custom directory instead of the default home directory, as I am running out of space.
Can this be achieved by changing any parameters during ./minishift start?
I tried CodeReady Containers too, but it copies into the default home directory:
FATA Failed to copy embedded 'crc_libvirt_4.5.1.crcbundle' from /opt/data/crc-linux-1.13.0-amd64/crc to /home/abc/.crc/cache/crc_libvirt_4.5.1.crcbundle: write /home/abc/.crc/cache/crc_libvirt_4.5.1.crcbundle: no space left on device
I found the answer to this :
export MINISHIFT_HOME=/opt/softwares/
echo 'export MINISHIFT_HOME=/opt/softwares/' >> ~/.bashrc
This solved the problem by redirecting the installation from the home directory to the custom directory.
Something similar worked for the crc installation as well.
As noted, you should use Code Ready Containers (CRC) and not minishift. This is a known issue and is being tracked here: code-ready/crc/issues/817.
The current workaround seems to be to create the directory where you want it and then symlink ~/.crc to it:
mkdir -p /opt/crc
ln -s /opt/crc/ ~/.crc
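A quick sanity check before running crc again (a sketch; /opt/crc as above):
ls -ld ~/.crc     # should show ~/.crc -> /opt/crc/
df -h /opt/crc    # confirm the target filesystem actually has free space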
So I configured my free Amazon EC2 (Linux) server with PHP and MySQL using SSH, then installed WordPress and set it up accordingly. That went smoothly, and I could access the site perfectly through my "IP/directoryname". After that I had a lot of issues with writing to directories and transferring files, which I took care of using these commands:
sudo chown -R apache:apache /var/www/html
sudo chown -R ec2-user:ec2-user /var/www/html
sudo chmod -R 755 /var/www/html
I increased the PHP upload limit to 256M using SSH. When my test site was running perfectly, I decided to install a prebuilt site of mine using the Duplicator plugin, which provides an installer.php file along with a package of the old site's files and database. It requires an empty directory, so I deleted all my test site's files over FTP and uploaded the Duplicator package and installer. I ran the installer, entered my EC2 MySQL info, and chose to overwrite my old database and delete all existing tables. It installed successfully (at least the installer said so!), but I can't seem to access my site anymore; it now says "This page isn't working". I don't know what to do here. Can you spare me a little bit of your time? Any info you need, just ask. Could this be a WordPress permalink issue? I don't know how to update the permalink settings using SSH. Or maybe I deleted an important file when I deleted the old directory files?
Your Friendly Neighborhood Sidekick,
Ratul
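In case the permalink hunch is right: the WordPress site URLs can be updated over SSH directly in MySQL. A sketch, where the database name wpdb, the wp_ table prefix, and YOUR_IP are assumptions (the Duplicator installer normally rewrites these itself):
mysql -u root -p -e "UPDATE wpdb.wp_options SET option_value = 'http://YOUR_IP/directoryname' WHERE option_name IN ('siteurl', 'home');"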
UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (for half a day :P) to execute a binary extracted during a docker build.
My Dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b there is a binary file, imcl.
The error I was getting:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during the build) before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (the host's root being different from the container's root or something) but, after checking, the UID was 0 for both, so it got even weirder.
Docker asks you not to use sudo, so I tried su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker's recommendations and changed my base image from debian:jessie to the bloated ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling, I happened upon a piece of the Docker docs which I believed was the reason for all this head-bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround for this?
Is there a way to add extracted files to the docker build context during a build (within the Dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are used exactly as they were during the build command?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave the No such file or directory error, but trying centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question now becomes:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686 and possibly libstdc++
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
UPDATE:
So the question now becomes:
What would be the best way to achieve the same result without repos or an internet connection?
You could use one of the various non-official 32-bit images available on DockerHub; search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself, and you can find instructions on DockerHub too, e.g.:
on the f69m/ubuntu32 home page, there is a link to the GitHub repo used to generate the images;
on the hugodby/fedora32 home page, there is an example of the commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or the mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it to a tar archive using the docker save command and import it using the docker load command.
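A sketch of that hand-off (the image name is an example):
docker save -o ubuntu32.tar local/ubuntu32:latest   # run on the machine with internet access
# copy ubuntu32.tar to the offline build machine, then:
docker load -i ubuntu32.tar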
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
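A minimal sketch of that behavior; note the quoted passage covers tar formats, so this assumes the payload is repacked as a tarball (a plain .zip is copied as-is, not unpacked):
# y.tar.gz is a hypothetical tar repack of y.zip
ADD setup/x/y.tar.gz /tmp/setup/a/b/
RUN /tmp/setup/a/b/imcl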
I have gitweb on localhost and a sample project for which I've run git init, git add, and so on. I created a symlink with sudo ln -s /media/dir/project/.git/ /var/cache/git/project.git, but it doesn't work and I still get 404 - no project found at localhost/gitweb.
The only way I can get the project to show up at all (just the title fields such as description, ..., and the 4 sorting options, without the project info) is to physically copy the git directory to /var/cache/git/project.git/, though some files won't copy. That is the only way I could avoid the not-found error.
I tweaked /etc/gitweb.conf and /etc/apache2/conf.d/gitweb every way I could think of, but it didn't help.
(I'm using apache 2.2 under Kubuntu 11.10)
Thanks so much for your help!
Check the permissions on all of the directories in the symlink path. Whatever user your CGI runs as needs at least +x on all the parent dirs and +r on the .git directory and files.
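A quick way to audit that (a sketch; www-data is the usual Apache user on Kubuntu):
namei -l /var/cache/git/project.git            # lists permissions on every path component
sudo -u www-data ls /media/dir/project/.git    # confirms the CGI user can read the target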