Building WebLogic Docker Image in Ubuntu is not working

I have installed Ubuntu 20.04 in a virtual machine. The Docker version is 19.03.8.
I cloned the Oracle repository with the command below:
git clone https://github.com/oracle/docker-images.git
After that I downloaded the Oracle WebLogic Server 12.2.1.3 - Generic installer.
Then I went to the WebLogic Docker build directory and placed the installer there with the commands below:
cd ./docker-images/OracleWebLogic/dockerfiles
mv ./path/to/fmw_12.2.1.3.0_wls_Disk1_1of1.zip ./12.2.1.3
Finally, I ran the build as below:
./buildDockerImage.sh -v 12.2.1.3 -g -s
In theory everything should work, but that is not the case.
The build fails with the error below:
pull access denied for oracle/serverjre, repository does not exist or may require 'docker login'
To fix the issue I did the following:
I logged in successfully to https://container-registry.oracle.com/, selected serverjre, and accepted the license.
After that I made the following change in the Dockerfile:
#FROM oracle/serverjre:8
FROM container-registry.oracle.com/java/serverjre:8
Then I logged in from the console as below:
docker login container-registry.oracle.com
username:<SSO USERNAME>
password:<SSO PASSWORD>
Finally I ran the build again as below, but it still throws the same error.
./buildDockerImage.sh -v 12.2.1.3 -g -s
Please help with some guidance. Thank you in advance.
P.S. This is my first question and I am new here, so please don't be hard on me.

From the command below:
./buildDockerImage.sh -v 12.2.1.3 -g -s
I see you are running the build with the -g option, which creates the image based on the generic distribution.
Check the link below for more information about the build options:
https://github.com/oracle/docker-images/blob/master/OracleWebLogic/dockerfiles/12.2.1.3/README.md
This means the build uses Dockerfile.generic, so you need to make the modification in that file rather than in the Dockerfile you already edited.
You need to make the following replacements:
# Line 30
#FROM oracle/serverjre:8 as builder
FROM container-registry.oracle.com/java/serverjre:8 as builder
# Line 69
#FROM oracle/serverjre:8
FROM container-registry.oracle.com/java/serverjre:8
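If you prefer, both edits can be applied in one step with sed (a sketch; it assumes your working directory is still docker-images/OracleWebLogic/dockerfiles and that these are the only FROM lines referencing oracle/serverjre):
sed -i 's|^FROM oracle/serverjre:8|FROM container-registry.oracle.com/java/serverjre:8|' ./12.2.1.3/Dockerfile.generic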

Related

Some questions on Boot2docker setup for build and run

I'm a complete beginner in bioinformatics. Recently, I started learning it with the book "Bioinformatics with Python Cookbook" (by Tiago Antao). I ran into some issues while setting up Docker for Linux. Please see below:
I was trying to set up the Docker files following the author's instructions, but I found some files "failed to download".
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
Then I still went ahead and set up the container following the instruction:
“Now, you are ready to run the container, as follows: docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio”
I typed: docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
However, it gave me an error saying "Unable to find image 'bio:latest' locally".
Can anyone give me any suggestions on this? My thought is that in the first step I missed downloading some files needed to set up Docker, but I am not sure how I can fetch these files.
Thank you so much for any comments!
Best regards
Johnny
I tried downloading the Docker files a few times, but the error still appears:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
In the first issue, I found some files "failed to download".
In the second issue, an error saying "Unable to find image 'bio:latest' locally" appears.
Here you have a couple of problems:
1) It looks like you did not download that Dockerfile and build the required Docker image locally.
2) You are getting that error about not finding the image locally because of the previous problem.
So, you should do the following:
1) Download that Dockerfile (https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile). If you can't download the file for some reason, just open it on GitHub, select all the content, copy it, then make a new file in some folder on your computer, name it "Dockerfile", and paste the content.
2) Build the image locally - go to the folder where you downloaded that Dockerfile and execute the following command:
docker build -t bio .
3) Run your container with the docker run ... command.
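Putting the three steps together, the whole sequence might look like this (a sketch; curl -O saves the file under its remote name, Dockerfile, and the -v flag maps the Windows folder from the question to /data, as the book's instruction shows):
curl -O https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker build -t bio .
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data:/data bio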

Team City "minimal build agent" Docker image - "npm: not found" Linux issue?

First of all, I think this is more of a Linux issue as the problem seems to be on a linux-flavoured Docker container, but I'm happy to accept that I can do something to the team city config to overcome this.
I'm also not very experienced with Linux, Docker or node/npm, though I do have a lot of development experience and am very comfortable with command line interfaces in general.
Background
We currently have Team City set up as a build server, for building a variety of projects:
.Net Framework,
.Net Core
Angular CLI
A couple of simple websites which use node packages to generate HTML from Markdown.
The server is running as a Docker container using Docker for Windows on a Windows Server box, and this is working well.
We have one Windows 10 Build agent (a VM) which is also working fine, and builds all the .Net and .Net Core stuff fine.
The simple docs site stuff primarily uses the markdown-to-html node package, so its build steps simply get all the source .md files and compile to html with markdown-to-html, plus use some other npm packages for SASS compilation and minification of js etc. No actual node code as such, just some jQuery. In order to not tie up the other agent, and because this stuff can run happily on Linux, I want to have this running on a small docker image rather than a full VM build agent somewhere.
I previously successfully used a Node.js TeamCity agent Docker image (either jacobpeddk/teamcity-agent-nodejs or omez/teamcity-agent-nodejs - can't recall), which did work for a time, though I had issues with being able to install some npm packages globally in build scripts, which meant I had to get a Bash terminal into the container and run some manual npm commands. I also think I had to run apt-get install zip to get a zipping step to work. This worked fine for a while (weeks).
I added some extra JS stuff to one of these simple projects, and suddenly I was getting errors when trying to build. I (perhaps stupidly) decided that this was probably due to the container having older versions of node and/or npm etc, so I attempted to update this by getting a bash shell into the container, installing nvm and updating node.js & npm.
This ended up with a rather broken container (node errors), so I thought I'd start again, but this time from the jetbrains/minimal-build-agent Docker image, with the aim of ending up with a nice bespoke image for our needs specifically (as I couldn't find a very up-to-date pre-existing one).
I've been running a Bash shell directly on the build agent container by executing this on the host:
docker exec -it basicagent /bin/bash
then from there I've installed nvm, Python (required for node install step) and node:
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
apt-get update
apt-get install python 3.6
nvm install v8.11.1    # matching version on my dev machine
npm install -g markdown-folder-to-html    # npm package I previously found I had to install globally
apt-get install zip    # just used for a build step to zip up artifacts
If I now run (via the Bash shell) npm -version, I get back 5.6.
If I try to get a build to run that uses npm in a command line step, then I get this error in the build log:
/opt/buildagent/temp/agentTmp/custom_script2764770419520852926: npm: not found
I wondered if it was an issue with the user/path that the team city agent process is using vs. the one I'm using in Bash, so I added the following to the build script:
echo PATH = $PATH
echo user var = $USER
echo user via 'id':
id -u -n
the output of which is:
PATH = /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
user var =
user via id:
root
So it's running the agent as root, and doesn't appear to have node in the $PATH at all.
If I run the above directly from Bash however, I can see that I am root, but my $PATH is different:
PATH = /root/.nvm/versions/node/v8.11.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
root
So I'm now confused: I've re-started the container and this has had no effect - it seems that when I'm logged in as root manually I have a certain path set, but when the build agent service is running as root it's different.
I have no idea why this happens, but I've basically worked around the problem by adding:
export PATH=$PATH:/root/.nvm/versions/node/v8.11.1/bin
to the top of every build step that uses npm in a script. To my mind this seems a rather daft thing to have to do - considering this used to work without it, and the only real difference is possibly a slightly different flavour of Linux container. AFAIK the original build agent container was based on the jetbrains minimal-build-agent one, so unless they've changed what they base that on, it should be roughly the same...
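For what it's worth, a likely explanation is that nvm adds its PATH entries to ~/.bashrc, which only interactive shells source, so the agent process never sees the nvm-managed node. A less fragile alternative (a sketch, assuming a bespoke image as described above; the NodeSource setup script is one common way to get a system-wide Node 8) would be to bake node into the image itself:
FROM jetbrains/minimal-build-agent
USER root
# Install node system-wide so non-interactive processes (like the agent) find it on the default PATH
RUN apt-get update && apt-get install -y curl zip \
 && curl -fsSL https://deb.nodesource.com/setup_8.x | bash - \
 && apt-get install -y nodejs \
 && npm install -g markdown-folder-to-html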
I also had to change the compressor being used in a node-minify build step from gcc (google closure compiler) to babel-minify as the former was basically hanging indefinitely, but that is a separate problem (though also something that was fine and now isn't...)
Thanks to anyone who took the time to read... though I do wonder if one day I'll exhaust my own research options, finally go ask the internet, and actually get someone to respond - for some reason, whenever I get to the point where I have to ask, it always seems no-one else has the answer either, and I end up having to work it out myself. It's probably character-building though, I suppose... (this isn't just SO - I've found this to be the case for over 15 years on various forums about various things...)

Unable to deploy a Ceph manager daemon with ceph-deploy: Error EACCES: access denied

I am trying to set up a Ceph storage cluster using the quick start guide found here: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
When I try to deploy a manager daemon using this command:
ceph-deploy mgr create enickel7
I get this error:
[ceph_deploy.mgr][ERROR ] OSError: [Errno 2] No such file or directory: '/var/lib/ceph/mgr/ceph-enickel7'
[ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs
(enickel7 is the name of the node I'm using - the Ceph documentation calls the nodes node1, node2, and node3.) I tried to manually create the directory /var/lib/ceph/mgr, then ran the command again. Then I got this error:
[enickel7][ERROR ] Error EACCES: access denied
[enickel7][ERROR ] exit code from command was: 13
[ceph_deploy.mgr][ERROR ] could not create mgr
[ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs
Does anyone know what this error means, or how to fix it? ceph-deploy definitely has sudo permissions, and the mgr directory has the same permissions as other directories in /var/lib/ceph.
Thank you for your time!
This is because your Ceph version is not Luminous (>= 12.2.0). If you used ceph-deploy to install Ceph as the document says, note that the default version installed by ceph-deploy is currently 10.2.10 (Jewel).
If you want to create a manager daemon process, you need to upgrade your Ceph to Luminous 12.2.1. The doc is here: http://docs.ceph.com/docs/master/release-notes/#v12-2-1-luminous
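Before upgrading, you can confirm which version ceph-deploy actually installed on the node (ceph --version is part of the standard Ceph CLI):
ceph --version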
I just ran into this same issue on Ubuntu 16.04 trying to deploy Kraken with ceph-deploy version 1.5.39.
ceph-deploy automatically created the directories for me, but they were not owned correctly. It looks like the keyring it created in /var/lib/ceph/bootstrap-mgr, along with that directory, is owned by root. I chowned it to ceph, and that got me past that error.
In your case I would guess that the directory is owned by your user instead of "ceph". I hope this helps.
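For reference, the chown described above would be something like this (a sketch; ceph:ceph is the user and group the Ceph packages create):
sudo chown -R ceph:ceph /var/lib/ceph/bootstrap-mgr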
Please try the command below:
chown ceph:ceph /var/lib/ceph
Also, which Ceph version are you using? Please use the latest version (Mimic 13.2) and ceph-deploy 2.
I faced the same issue. As Michael Meepo said, it was a version problem.
On the admin node I registered the Ceph repo for Luminous and installed ceph-deploy.
But when I tried to use it, ceph-deploy installed the default version (Jewel) on the remote node.
To install a specific version you should ask for it:
ceph-deploy install master --release luminous
To use the ceph-deploy version matching your distribution's, as described on the https://github.com/ceph/ceph-deploy page, use the Ceph repositories. For instance, as Debian stretch provides Jewel (Ceph v10), use the http://ceph.com/debian-jewel repository by creating an /etc/apt/sources.list.d/ceph-deploy.list file containing:
deb http://download.ceph.com/debian-jewel/ stretch main
Install the keys:
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
Then proceed with
apt-get install ceph-deploy
From there it should work as expected.

Installing Jenkins Plugins to Docker Jenkins

I have the following Dockerfile with jenkins as the base image:
FROM jenkins
USER root
ENV JENKINS_MIRROR http://mirrors.jenkins-ci.org
RUN for plugin in git-client git ws-cleanup ; do wget -O $JENKINS_HOME/plugins/${plugin}.hpi $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done
EXPOSE 8080
I'm trying to install some additional plugins, but it gives me an error saying "no such file or directory".
I then started and connected to the container of this build step in order to "debug" the error.
However, I could not find out the cause, because every directory seems to exist. Furthermore, if I then run the for-loop manually in Bash, all plugins are installed correctly...
I further noticed that the installation of the plugins works if I install them in the root directory as follows:
RUN for plugin in git-client git ws-cleanup ; do wget -O ${plugin}.hpi $JENKINS_MIRROR/plugins/${plugin}/latest/${plugin}.hpi ; done
However, this is the wrong place, as they have to be placed in the directory $JENKINS_HOME/plugins.
Why am I not able to install the plugins in $JENKINS_HOME/plugins?
I can't read your screenshots, but you don't seem to be following the official instructions. See https://github.com/cloudbees/jenkins-ci.org-docker under "Installing more tools". Note:
You should save the plugins to /usr/share/jenkins/ref/plugins
You could use a plugins.txt file instead, which contains the names of your plugins and can be processed with the provided script. This looks like:
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/plugins.txt
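For the plugins from the question, plugins.txt would simply list one plugin ID per line (a sketch; install-plugins.sh also accepts pinned versions in the form plugin:version):
git-client
git
ws-cleanup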
I think the reason your approach wasn't working was to do with some processing in the start-up script.
install-plugins.sh is deprecated. I had to switch to jenkins-plugin-cli:
FROM jenkins/jenkins
...
RUN jenkins-plugin-cli \
    --plugins \
    git \
    workflow-aggregator \
    blueocean \
    other-plugins
jenkins-plugin-cli also supports the -f parameter, which takes the list of plugins from a file.
See the official Jenkins documentation for details.
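For example, the file-based variant of the Dockerfile above might look like this (a sketch; /usr/share/jenkins/ref is the directory the official image documentation uses for plugin lists):
FROM jenkins/jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli -f /usr/share/jenkins/ref/plugins.txt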

Permission denied error while installing gitlab-ci

While installing GitLab CI (continuous integration) on Ubuntu 12.04 LTS, I get the following error in step 5 (Setup application)
from: https://github.com/gitlabhq/gitlab-ci/blob/master/doc/installation.md
root@s2:~# cd /home/gitlab_ci/gitlab-ci/
root@s2:/home/gitlab_ci/gitlab-ci# sudo -u gitlab_ci -H gem install bundler
ERROR: While executing gem ... (Gem::FilePermissionError)
You don't have write permissions into the /usr/local/lib/ruby/gems/1.9.1 directory.
It seems these gems try to install outside /home/gitlab_ci, which indeed would fail as user gitlab_ci.
My question is: are these instructions wrong, or am I an edge case?
And of course, how would I safely solve this problem? Just running the command as root might give me more trouble later on...
Extra information: Ruby was originally installed for GitLab itself, and that works fine.
Considering that GitLab installation step 2 proposes to recompile Ruby, I usually compile it with a --prefix=/home/gitlab/ruby1.9.3 argument, in order to use a Ruby into which I have full rights to write/add any gem I want without using sudo.
The $PATH used by the gitlab_ci account should then include /home/gitlab/ruby1.9.3/bin, and any gem installed by that account will go into the locally compiled Ruby.
If both accounts are part of the same group, they should both be able to write into /home/gitlab/ruby1.9.3/lib/ruby/gems/1.9.1.
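The recompile the answer refers to would look roughly like this (a sketch; it assumes you are in an unpacked Ruby 1.9.3 source tree and running as a user who can write to the prefix):
./configure --prefix=/home/gitlab/ruby1.9.3
make
make install
# Then make sure the gitlab_ci account's PATH prefers this Ruby:
export PATH=/home/gitlab/ruby1.9.3/bin:$PATH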
