I've been trying to install Jenkins on Ubuntu using Vagrant. Even though I am not getting any errors along the way, I am not able to open http://localhost:8080.
Here are my steps:
Install Vagrant and VirtualBox on macOS
Create a folder for the Vagrant project
vagrant init bento/ubuntu-16.04
nano Vagrantfile - uncomment (delete the hashtag from) the port-forwarding line for port 8080 (the exact line is shown after the steps below)
vagrant up
vagrant ssh
Install git:
sudo apt-get install git
git --version
Installing Java:
sudo apt update
sudo apt-get upgrade
sudo apt install default-jdk
sudo apt install default-jre
Install Jenkins:
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt-get install -y jenkins --allow-unauthenticated
To check the initial admin password: vi /var/log/jenkins/jenkins.log
To start Jenkins: sudo systemctl start jenkins
systemctl status jenkins
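For reference, the port-forwarding entry in the Vagrantfile has to map the guest's port 8080 to a host port; the line I am assuming here forwards guest 8080 to host 8080 (adjust the host port if it is already in use):
config.vm.network "forwarded_port", guest: 8080, host: 8080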
When I start the Jenkins service I can telnet 127.0.0.1 8080, and when I stop it the telnet fails, so Jenkins seems to be listening. Any idea why I cannot access the GUI using the browser?
In the Ubuntu guest, try curl http://localhost:8080. If the response is not an error page (4xx/5xx), check the firewall and allow traffic on port 8080, then try to access Jenkins from the host machine again.
Make sure you don't have a service running on port 8080 on your guest machine.
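A quick way to run those checks inside the guest (a rough sketch, assuming ufw is the firewall in use):
sudo systemctl status jenkins     # confirm the service is running
curl -I http://localhost:8080     # any HTTP response (even 403 before setup) means Jenkins is answering
sudo ss -tlnp | grep 8080         # confirm what is listening on port 8080
sudo ufw status                   # if ufw is active, open the port:
sudo ufw allow 8080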
Related
I'm trying to install microk8s, using Ansible.
I get the error : "No snap matching 'microk8s' available"
I'm using WSL 2 (Ubuntu 20.04), and snap version 2.44.3+20.04.
My configuration:
- name: Install microk8s
  snap:
    name:
      - microk8s
    classic: yes
  become: true
Does anyone know how to fix this?
On the WSL terminal, what happens if you type snap version?
It seems that snap is broken on WSL2 with Ubuntu 20.04.
You could try to:
sudo apt-get update && sudo apt-get install -yqq daemonize dbus-user-session fontconfig
# start a systemd instance in its own namespace (WSL2 does not boot systemd by default)
sudo daemonize /usr/bin/unshare --fork --pid --mount-proc /lib/systemd/systemd --system-unit=basic.target
# re-enter the session inside that namespace so snapd (a systemd service) becomes reachable
exec sudo nsenter -t $(pidof systemd) -a su - $LOGNAME
It could also be a network/firewall issue. What happens if you try to install other packages?
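If snapd comes up after those commands, you can retry the install by hand before re-running the playbook (a quick check using the snap CLI; microk8s is a classic snap, hence the flag):
snap version
sudo snap install microk8s --classic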
On my Linux server, Jenkins was already installed, and I tried to install GitLab on the same server with the following commands:
sudo yum install -y curl policycoreutils-python openssh-server cronie
sudo lokkit -s http -s ssh
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.rpm.sh | sudo bash
sudo EXTERNAL_URL="http://gitlab.fabcd.com" yum -y install gitlab-ee
It says that GitLab is now installed, but when I open the URL in the browser it does not load, and my Jenkins URL isn't working either.
Please help me roll back this GitLab installation, as I don't want to mess with my Jenkins.
Both services are exposed on port 8080; you should change one of them to use a different port.
To change the GitLab port, edit the file /etc/gitlab/gitlab.rb and include the port in the external URL, for example:
external_url 'http://gitlab.fabcd.com:8888'
Then run reconfigure:
gitlab-ctl reconfigure
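If you would rather roll the GitLab installation back entirely, as asked, something along these lines should work (a rough sketch for the omnibus package installed above; double-check the directories before deleting them, as they contain GitLab's data):
sudo gitlab-ctl stop
sudo yum remove -y gitlab-ee
sudo rm -rf /opt/gitlab /var/opt/gitlab /etc/gitlab /var/log/gitlab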
I tried installing as mentioned here: https://about.gitlab.com/downloads/
sudo yum install openssh-server
sudo yum install postfix
sudo yum install cronie
sudo service postfix start
sudo chkconfig postfix on
sudo lokkit -s http -s ssh
curl -O https://downloads-packages.s3.amazonaws.com/centos-6.6/gitlab-7.7.2_omnibus.5.4.2.ci-1.el6.x86_64.rpm
sudo rpm -i gitlab-7.7.2_omnibus.5.4.2.ci-1.el6.x86_64.rpm
sudo gitlab-ctl reconfigure
It looks like everything went successfully, but I don't get anything at my hostname. Is there anything I missed?
GitLab will try to set up the FQDN using the hostname of your machine. To set it manually, open /etc/gitlab/gitlab.rb and edit the external URL according to the documentation, which, if you haven't already read it, I suggest that you do!
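For example (gitlab.example.com is a placeholder; use your own hostname), set in /etc/gitlab/gitlab.rb:
external_url 'http://gitlab.example.com'
and then apply the change with:
sudo gitlab-ctl reconfigure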
When I try to connect over SSH to the freshly installed GitLab, it asks for a password. HTTP is working, as is the web interface.
I have already added my RSA key to GitLab, but it looks like the OpenSSH server does not use GitLab's authorized_keys file.
GitLab version 7.0
I installed a fresh CentOS 6.5 and ran these commands:
wget https://downloads-packages.s3.amazonaws.com/centos-6.5/gitlab-7.0.0_omnibus-1.el6.x86_64.rpm
sudo yum install openssh-server
sudo yum install postfix # Select 'Internet Site', using sendmail or exim is also OK
sudo rpm -i gitlab-7.0.0_omnibus-1.el6.x86_64.rpm
sudo -e /etc/gitlab/gitlab.rb
(added my hostname)
sudo gitlab-ctl reconfigure
sudo lokkit -s http -s ssh
I had the same issue on GitLab 7 omnibus on CentOS 6.5: after a fresh install, when I ran git push git@... it was asking for a password. I fixed it by changing the permissions on the .ssh folder and .ssh/authorized_keys:
yum install policycoreutils-python -y
chmod 700 /var/opt/gitlab/.ssh/
chmod 600 /var/opt/gitlab/.ssh/authorized_keys
semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh"
semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys"
restorecon -R -v /var/opt/gitlab/.ssh/
You will probably need the policycoreutils-python package to run semanage; the first command above installs it with yum.
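To verify the fix from a client machine, a connection test should no longer prompt for a password (gitlab.example.com is a placeholder for your server):
ssh -T git@gitlab.example.com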
I'm running a virtual machine in Windows Azure with the prebuilt image for Ubuntu 14.04 LTS.
I installed docker.io as described here:
http://blog.docker.io/2014/04/docker-in-ubuntu-ubuntu-in-docker/
The installation works, but when I run:
sudo docker.io pull ubuntu
An error will be thrown:
Cannot connect to the Docker daemon. Is docker -d running on this host?
Can anyone help, or has anyone had a similar problem?
P.S.: Can anyone with a high reputation create a Tag for Ubuntu-14.04?
Evidently the Docker daemon is not running. You want to check /etc/default/docker.conf for proper configuration and then issue
sudo service docker.io start
or
sudo service docker start
depending on what the service is called.
Adding myself to the docker group:
sudo usermod -a -G docker myuser
and rebooting the machine worked for me. This solution is discussed in: https://github.com/docker/docker/issues/5314
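After the reboot (or after logging out and back in), group membership and daemon access without sudo can be confirmed with a quick check (use docker.io instead of docker if only the docker.io binary is installed):
groups
docker ps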
On Ubuntu 14.04, the docker.io package installs Docker 0.9.1.
According to the documentation, to install the current version use these commands:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker
There is also a simple script available to help with this process:
$ curl -s https://get.docker.io/ubuntu/ | sudo sh
Alternatively, check the azure-docker-registry project for an example of how to automate Azure provisioning and Docker container deployment. For instance, this Ansible playbook:
- name: create docker data directory
  file: path=/mnt/data/docker state=directory

- name: store docker files in data disk
  file: src=/mnt/data/docker dest=/var/lib/docker state=link

- name: add repository key
  command: creates=/etc/apt/sources.list.d/docker.list apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

- name: copy repository source file
  copy: src=docker.list dest=/etc/apt/sources.list.d/docker.list

- name: install docker package
  apt: name=lxc-docker update_cache=yes state=present
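The docker.list file referenced by the copy task would contain the same repository line as the manual steps above (my assumption, mirroring the echo command shown earlier):
deb https://get.docker.io/ubuntu docker main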
Also make sure to symlink the docker.io binary to docker to use the tutorials/documentation without rewriting every command.
ln -s /usr/bin/docker.io /usr/bin/docker
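With the daemon installed and started by any of these routes, a quick sanity check is to query it and retry the original pull:
sudo docker info          # errors out if the daemon is not running
sudo docker pull ubuntu   # the pull from the question should now succeed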
Run docker -d to see if it shows any error messages.
If AppArmor is missing, install it with sudo apt-get install apparmor
Then run sudo service docker start
Hard to say, but sometimes the official Docker installation procedure fails on Ubuntu 14.04.
You can simply install Docker with the commands below [quick and dirty]:
sudo apt-get update
sudo apt-get -y install docker.io