Oracle 12c Ubuntu 17.04 Installation error - linux

I'm trying to install the Oracle 12c database on Ubuntu 17.04, but I get error ORA-27104.
My /etc/sysctl.conf file:
#Added for fresh Oracle 12cR1 Installation
kernel.sem = 250 32000 100 128
# Note: this machine has 5120 MB of RAM, but the values below are sized for 32 GB
kernel.shmall = 8388608       # in 4 KB pages (= 32*1024*1024*1024 / 4096)
kernel.shmmax = 34359738368   # in bytes (= 32*1024*1024*1024)
kernel.shmmni = 4096
kernel.panic_on_oops = 1
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
Any idea what may help to fix this?
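ORA-27104 ("system-defined limits for shared memory was misconfigured") points at the kernel.shm* values, so a sensible first step is to confirm that the file above has actually been loaded and that the limits fit the machine. A generic check, nothing here is Oracle-specific:
sudo sysctl -p /etc/sysctl.conf                    # reload the settings (or reboot)
sysctl kernel.shmall kernel.shmmax kernel.shmmni   # confirm the kernel is using them
free -m                                            # compare the limits against the RAM actually installed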

If you check the Oracle docs, you will see that Oracle doesn't officially support Ubuntu.
I used a workaround described in Install Oracle 12c in Ubuntu 16.04. The document and the script were originally written for installing Oracle 12c on Ubuntu 16.04; the script had to be tweaked a bit to install Oracle 12c (version 12.1.0.2.0) on Ubuntu 16.04.2.
I downloaded the script mandela.sh, made the two changes below, and otherwise followed the instructions in Install Oracle 12c in Ubuntu 16.04 as-is.
Change line 81 from ver=16.04 to ver=16.04.2
Change line 122 from if [ "$VERCHECK" != "Description: Ubuntu 16.04 LTS" ]; then to if [ "$VERCHECK" != "Description: Ubuntu 16.04.02 LTS" ]; then
Make sure that in step 2 you give the Description exactly as it appears in the output of the command
lsb_release -a
Try your luck on Ubuntu 17.04 with the above steps; the equivalent edits for 17.04 are sketched below.
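Hedged and untested: the script and line numbers come from the guide above, and the Description string must match your own lsb_release -a output exactly (17.04 is not an LTS release, so it will not end in "LTS"):
# Hypothetical edits for Ubuntu 17.04; verify the strings with `lsb_release -a` first
sed -i 's/^ver=16\.04/ver=17.04/' mandela.sh
sed -i 's/Description: Ubuntu 16\.04 LTS/Description: Ubuntu 17.04/' mandela.sh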

Related

system service is not started on fedora

Using the code below, I created a service.
Code snippet from agentInstaller.sh
fileAgentController="agent_controller.sh"
# Register the init script with the distribution's init system
if [[ "$os" = "debian" ]]; then
    update-rc.d $fileAgentController defaults
else
    chkconfig --add /etc/init.d/$fileAgentController
fi
# Start the service by invoking the init script directly
export start="start"
export command="/etc/init.d/$fileAgentController"
sh $command ${start}
The above code successfully starts the 'agent_controller.sh' service on Amazon Linux AMI 2017.03 (amzn / rhel / fedora) and on Ubuntu 16.04.2 LTS, but it gives an error on a machine with the following details:
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.3 (Maipo)"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
Red Hat Enterprise Linux Server release 7.3 (Maipo)
I encountered the following error on that machine:
Reloading systemd: [ OK ]
Starting agent_controller.sh (via systemctl): Failed to start
agent_controller.sh.service: Unit not found.
[FAILED]
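RHEL 7.3 uses systemd, and "Unit not found" suggests systemd never generated a unit for the SysV script. A hedged troubleshooting sketch for that machine (the paths and the unit name are taken from the error output above, not verified):
# Hypothetical steps on the RHEL 7 host; chkconfig expects a chkconfig/LSB header inside the script
cp agent_controller.sh /etc/init.d/agent_controller.sh
chmod +x /etc/init.d/agent_controller.sh
chkconfig --add agent_controller.sh              # register with systemd's SysV compatibility layer
systemctl daemon-reload                          # regenerate units from /etc/init.d
systemctl status agent_controller.sh.service     # the generated unit keeps the script's full name
systemctl start agent_controller.sh.service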

Missing "kernel: Firewall" messages

Where are my iptables "Blocked" log messages? I wonder if this is an OpenVZ issue or something from the scripted install. Note: I'm highly technical, but not a server admin. Could the OpenVZ host be blocking and logging outside of my VPS?
I have two newly installed machines running text-mode CentOS 7 x64, with packages kept up to date via yum, and with iptables/CSF.
Also, I ensured machine #2 has all the packages that are on machine #1, though #2 has some extras.
OpenVZ VPS (installed with their image of CentOS 7 x64)
VMware VM (installed with official CentOS 7 x64 minimal mode)
I performed my extra installs/configs exactly the same on both machines, and I have these lines in /etc/csf/csf.conf
TESTING = "0"
TCP_IN = "22,80,443"
UDP_IN = ""
On the VM, I'm getting these /var/log/messages when I nmap scan it:
Apr 12 17:25:23 mach kernel: Firewall: *UDP_IN Blocked* IN=ens192 OUT= ...
Apr 12 17:25:55 mach kernel: Firewall: *TCP_IN Blocked* IN=ens192 OUT= ...
On the VPS, I'm NOT getting any Firewall lines in /var/log/messages when I nmap scan it... but I think it is properly blocking traffic.
How do I even proceed/diagnose this?
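A few hedged things to check first; on OpenVZ, messages from the netfilter LOG target reportedly end up in the host node's kernel log rather than the container's, which would explain seeing nothing inside the VPS even though blocking works:
# Confirm CSF actually installed LOG rules inside the container
iptables -L -n -v | grep -i LOG
# See whether the container's kernel ring buffer ever receives the messages
dmesg | grep -i Firewall
# Check rsyslog is running and see how /var/log/messages is fed
systemctl status rsyslog
grep -rn '/var/log/messages' /etc/rsyslog.conf /etc/rsyslog.d/ 2>/dev/null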

UWSGI https configuration for ubuntu

I have a Django app that runs with the following uWSGI configuration on Red Hat 7.3:
[uwsgi]
project = helloworld
base = %d
chdir=%(base)
module=helloworld.wsgi:application
plugins = router_redirect
route-if = equal:${HTTPS};on addheader:Strict-Transport-Security: max-age=31536000
master = true
processes = 1
enable-threads = true
threads = 1
max-requests = 2000
shared-socket = 0.0.0.0:443
https = =0,cert/hello.crt,cert/hello.key,HIGH
pidfile = hello_uwsgi.pid
vacuum = true
die-on-term = true
However, when I run it on Ubuntu 16.04.1 LTS, I get the following error:
your processes number limit is 31283
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
Python version: 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609]
Python main interpreter initialized at 0x1dfabe0
python threads support enabled
The -s/--socket option is missing and stdin is not a socket.
VACUUM: pidfile removed.
Does the error mean that uWSGI failed to bind the port?
Is there a special way of using "shared-socket" on Ubuntu?
I need to have this running on both port 443 and port 8443. I have tried the above configuration for both ports without success.
Thanks in advance.
I got this to work on Ubuntu by reinstalling Python 3.5.2 from source.
I am guessing that there is some issue/incompatibility with the Python 3.5.2 installed via apt-get.
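If rebuilding Python is not an option, another approach worth trying (hedged, not verified on this exact setup) is building uWSGI itself via pip so that SSL support is compiled against the local OpenSSL; the "-s/--socket option is missing" fallback can occur when the binary in use never applies the https/shared-socket options. The ini file name below is a placeholder for the [uwsgi] block above:
# Hedged sketch; package names are the usual Ubuntu ones
sudo apt-get install -y build-essential python3-dev libssl-dev
pip3 install --user uwsgi
~/.local/bin/uwsgi --ini hello_uwsgi.ini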

Xen HVM domU VNC not refreshing screen

On one of our hypervisors running Xen (v.4.6.0 on top of Debian Jessie on a Dell R420), when we configure a domU for HVM and connect to the console via VNC, the connection displays a static image and appears to not accept mouse or keyboard input (leading you to think that the VM is frozen/not responsive). The behavior persists after closing and reconnecting over VNC, but the mouse/keyboard input from the previous session is now reflected (so if you tab three times, you can see that the appropriate radio or input button is highlighted after closing/opening the VNC connection, but you need to close the window again to see where the next input is, making it unusable).
We have Xen running smoothly on three other physical machines with HVM-configured domUs (2x Debian Jessie, 1x Ubuntu Xenial, all with v.4.6.0). While comparing what could be different, we noticed that QEMU could be updated on the troublesome Xen host. After upgrading QEMU from 1.2.2 to 1.2.5 (matching the version on the working hosts) and rebooting, the issue still persisted. We have copied the VM config to another host with successful results, leading us to believe there is something isolated to this machine.
Results of cat /sys/hypervisor/properties/capabilities:
xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
Results of xl info:
host : vm-host
release : 3.16.0-4-amd64
version : #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02)
machine : x86_64
nr_cpus : 16
max_cpu_id : 47
nr_nodes : 1
cores_per_socket : 8
threads_per_core : 2
cpu_mhz : 2500
hw_caps : bfebfbff:2c100800:00000000:00007f00:77bee3ff:00000000:00000001:00000281
virt_caps : hvm hvm_directio
total_memory : 32704
free_memory : 17945
sharing_freed_memory : 0
sharing_used_memory : 0
outstanding_claims : 0
free_cpus : 0
xen_major : 4
xen_minor : 6
xen_extra : .0
xen_version : 4.6.0
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset :
xen_commandline : placeholder dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin no-real-mode edd=off
cc_compiler : gcc (Debian 5.3.1-8) 5.3.1 20160205
cc_compile_by : ijc
cc_compile_domain : debian.org
cc_compile_date : Tue Feb 9 17:46:27 UTC 2016
xend_config_format : 4
Sample domU config:
name="VM1"
uuid="91f4c306-101b-431b-bf73-2146b2a137fb"
vcpus=2
memory=2048
disk = [ "phy:/dev/vg1/centos,xvda2,w",
"file:/path/folder/images/CentOS-7-x86_64-Minimal-511.iso,xvdb:cdrom,r" ]
builder = "hvm"
boot = "dc"
vnc = "1"
vnclisten = "0.0.0.0"
vncdisplay = "0"
vncpasswd = "password"
vga ="stdvga"
videoram = 64
Any and all advice on how to get VNC working smoothly and properly would be greatly appreciated!
Try adding GRUB_GFXPAYLOAD_LINUX="keep" or GRUB_GFXPAYLOAD_LINUX="640x480" (or another resolution) to /etc/default/grub on the domU, then run update-grub2 (on the domU) and reboot; a minimal sketch of those steps is below. This helped me with the same issue.
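Run these inside the domU; the "keep" value is just one option, and GRUB2 is assumed to be the bootloader:
echo 'GRUB_GFXPAYLOAD_LINUX="keep"' | sudo tee -a /etc/default/grub
sudo update-grub2
sudo reboot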
Thanks for the recommendation. It turned out that we had mixed versions of Xen and its dependencies installed (some 4.4, some 4.6). We ended up removing Xen and all related packages and reinstalling. During the reinstall, we noticed that xen-hypervisor-4.6-amd64 was coming from the stretch repo (expected), but its dependencies were coming from the jessie main repo with older versions (e.g., libxen-4.4 instead of libxen-4.6). To solve it, we ran
apt-get -t stretch install xen-hypervisor-4.6-amd64
which properly installed all dependencies from stretch, and after a reboot, VNC connections to HVM domUs were working as expected. A quick way to spot the same version mismatch is sketched below.
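Package names here are the ones mentioned above and purely illustrative:
# List every installed Xen-related package with its version
dpkg -l | grep -Ei 'xen|libxen'
apt-cache policy xen-hypervisor-4.6-amd64 libxen-4.6
# Then pull the hypervisor and its dependencies explicitly from the same release
sudo apt-get -t stretch install xen-hypervisor-4.6-amd64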

How do I access a USB drive on a OSX host from inside a docker container?

I have an application that I eventually want to run on a cloud computing service (e.g., such as AWS or Google Cloud) packaged inside a docker image. The reason the application will need to run in the cloud is because it's designed to process large data files, but before I actually deploy, I'd like to test it first on a local laptop, using a single large data file that I've stored (for test and development purposes) on an external USB drive.
My development machine is an OSX laptop, and I'm using a recent version of docker:
stachyra> uname -a
Darwin Andrews-MacBook-Pro-76.local 14.5.0 Darwin Kernel Version 14.5.0: Tue Sep 1 21:23:09 PDT 2015; root:xnu-2782.50.1~1/RELEASE_X86_64 x86_64
stachyra> docker --version
Docker version 1.10.2, build c3959b1
OSX has mounted my external USB drive, device /dev/disk2s2, as /Volumes/MGR DATA:
stachyra> df
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1 974770480 435721376 538537104 45% 54529170 67317138 45% /
devfs 375 375 0 100% 650 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
/dev/disk2s2 3906291632 3869523640 36767992 100% 483690453 4595999 99% /Volumes/MGR DATA
/dev/disk3s1 196608 193160 3448 99% 24143 431 98% /Volumes/VirtualBox
stachyra> diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.3 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_CoreStorage 499.4 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: Apple_HFS Macintosh HD *499.1 GB disk1
Logical Volume on disk0s2
DB70B91A-3B57-4C82-A758-C4BDEA4160FD
Unlocked Encrypted
/dev/disk2
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *2.0 TB disk2
1: EFI EFI 209.7 MB disk2s1
2: Apple_HFS MGR DATA 2.0 TB disk2s2
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *100.7 MB disk3
1: Apple_HFS VirtualBox 100.7 MB disk3s1
It should also be noted that the drive has several directories and files inside it, which are visible at least when viewed directly through OSX:
stachyra> ls -l /Volumes/MGR\ DATA
total 0
drwxr-xr-x 6 stachyra staff 204 Apr 14 2015 1000genomes
drwxr-xr-x 5 stachyra staff 170 Oct 12 17:41 GIAB
drwxr-xr-x 4 stachyra staff 136 Apr 28 2015 genome_browser_tracks
drwxr-xr-x 24 stachyra staff 816 Oct 6 14:00 mitty
I have tried to follow the advice from this question, which describes how to mount a USB drive in docker when docker is running within a linux host. But my local laptop is OSX, not linux, so it doesn't seem to work.
Explicitly, when attempting to follow the advice of the accepted answer, I obtain the following result:
stachyra> docker run -i -t --privileged -v /dev/disk2s2:/dev/foo ubuntu bash
root@8da7b492a707:/# uname -a
Linux 8da7b492a707 4.1.18-boot2docker #1 SMP Sat Feb 20 08:24:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@8da7b492a707:/# ls -l /dev/foo
total 0
root@8da7b492a707:/#
Based upon the response, one can see that docker does indeed launch a linux container correctly, and it also creates a volume /dev/foo inside the container as requested, but the actual contents of the USB drive are not accessible at that location: the ls -l command claims there are no files or directories there.
I also tried the second method described in an alternate response to the same question, and that fails even worse:
stachyra> docker run -i -t --device=/dev/disk2s2 ubuntu bash
docker: Error response from daemon: error gathering device information while adding custom device "/dev/disk2s2": not a device node.
stachyra>
I have found another discussion thread on stackoverflow which suggests that raw USB access is handled quite differently in OSX than in linux, which I suspect is the reason why both of the above attempts at USB access are failing.
But what should I actually do about it? That is to say, what is the correct sequence of actions or commands to allow docker to access a USB device mounted on an OSX host, rather than a linux one?
I was finally able to access my USB drive from /var/media inside my container by using the machine-diskutil.sh script mentioned in warmoverflow's comment like so
machine-diskutil.sh mount my-machine-name /Volumes/my-usb-drive
and then starting the container like so
docker run -v /Volumes/my-usb-drive:/var/media -it my/image:latest bash
Because I had previously tried to add /Volumes/my-usb-drive as a shared folder manually in VirtualBox, I first got this error:
Error: The shared folder /Volumes/Seagate already exists on the
docker machine, please unmount it first.
So I removed it manually (the VirtualBox commands are sketched below) and re-ran the machine-diskutil.sh mount command without any problems. Great stuff!
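For reference, removing the stale shared folder by hand looks roughly like this; the VM name "default" and the folder name "Seagate" (taken from the error above) are assumptions, so adjust them to your setup:
# Permanent shared folders can only be removed while the VM is powered off
docker-machine stop default
VBoxManage sharedfolder remove default --name Seagate
docker-machine start default
machine-diskutil.sh mount default /Volumes/my-usb-drive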
As per @pgayvallet's comment on GitHub:
As the daemon runs inside a VM in Docker Desktop, it is not possible to actually share a mac host device with the container inside the VM, and this will most definitely never be possible.
