Openshift disable build quota - linux

I'm trying to use BuildConfigs/Builds in OpenShift. The machine runs CentOS 7.3 with kernel 4.5.7-std-3.
Unfortunately, the kernel I'm using doesn't have CONFIG_CFS_BANDWIDTH enabled:
gunzip < /proc/config.gz | grep CFS
# CONFIG_CFS_BANDWIDTH is not set
Therefore every build I try instantly fails with:
error: failed to retrieve cgroup limits: cannot determine cgroup limits: open /sys/fs/cgroup/cpu/cpu.cfs_quota_us: no such file or directory
Is there a way to bypass this?
I have already disabled the quotas in the kubelet section of the node config file, without success.
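For reference, this is the kind of change I made (a sketch of the kubeletArguments block; I'm assuming the usual /etc/origin/node/node-config.yaml location, which may differ on your install):
kubeletArguments:
  cpu-cfs-quota:
    - "false"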

Unfortunately, cgroup support (including CFS bandwidth) is currently required for builds to work properly. You can see additional discussion of this issue here:
https://github.com/openshift/origin/issues/8074
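If you move to a distribution kernel, you can verify the option is present before retrying builds (a sketch; the config file path varies by distro):
grep CONFIG_CFS_BANDWIDTH /boot/config-$(uname -r)
# CONFIG_CFS_BANDWIDTH=y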

Related

Node.js/gcloud/kubectl: any command we run from WSL2 is deadly slow

I have tried many solutions, yet no luck. I have a Linux automation script that runs a few gcloud commands with some conditions. I wrote this script with Node.js, but it is so slow that I finish the task manually before the script completes the run.
The same happens with gcloud commands when I connect to a cluster, and with kubectl commands when I query something.
Please help!!
It could be a DNS config error on the WSL side. I had the same issue today; here's how I fixed it!
1. Checking the (deadly slow) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m10.530s
user 0m0.087s
sys 0m0.043s
2. Checking the WSL/DNS configuration
[tbg@~] cat /etc/wsl.conf
[network]
generateResolvConf=false
[tbg@~] cat /etc/resolv.conf
nameserver XX.XXX.XXX.X
nameserver YYY.YY.YY.YY
nameserver 1.1.1.1
If you see that, remove these lines (both the generateResolvConf=false setting and the static nameservers) to get back to automatic resolv.conf generation, then restart WSL (wsl --shutdown).
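For reference, after the cleanup the files look like this (a sketch; WSL writes the explanatory header itself, and the regenerated nameserver address is whatever your WSL virtual gateway happens to be):
[tbg@~] cat /etc/wsl.conf
[tbg@~] cat /etc/resolv.conf
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateResolvConf = false
nameserver 172.22.32.1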
3. Checking the (fixed!) response time
[tbg@~] time kubectl get deployments
No resources found in default namespace.
real 0m1.212s
user 0m0.151s
sys 0m0.050s
I found out that my resolv.conf configuration was causing the latency by trying to reinstall kubectl with apt and noticing that apt was really slow too.
Right now, access to the /mnt folders in WSL2 is slow, and by default the entire Windows PATH is appended to the Linux $PATH at launch, so any Linux binary that scans $PATH will be unbearably slow.
To disable this feature, edit /etc/wsl.conf and add the following section:
[interop]
appendWindowsPath = false
This avoids adding the Windows PATH to the Linux $PATH; for now, it's best to add the folders you need to $PATH manually (see the sketch below).
Terminate the WSL distro (wsl.exe --terminate <distro_name>) to make the change effective immediately, or run wsl.exe --shutdown, and start the terminal again.
Refer to the stack link for more information.
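For example, a minimal ~/.bashrc addition after disabling Windows PATH appending (the folders below are illustrative; pick only the Windows tools you actually need):
# Re-add selected Windows folders to the Linux PATH manually
export PATH="$PATH:/mnt/c/Windows/System32"
export PATH="$PATH:/mnt/c/Users/<you>/AppData/Local/Programs/Microsoft VS Code/bin"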

PhpStorm (Re)Index NFS-mounted Project from VM

Setup:
Virtual machine: VMware Fusion running CentOS 7.4.1708, with this NFS server export:
"/dev/ServerPath" 10.20.0.104(rw,fsid=0,sync,crossmnt,no_subtree_check,all_squash,anonuid=1111,anongid=1111)
Local machine: latest macOS
Mount:
sudo mount -t nfs -o resvport,rw 10.20.0.136:/dev/LocalPath /Users/USERNAME/dev/ServerPath
Everything works great except when opening the project (directory) in PhpStorm: roughly every 500 ms it (re)indexes, and a loading bar shows the operation (Updating Indices). Aside from the danger of an epileptic seizure, I am worried about the write load on the SSD, and therefore I want to ask the community whether this issue can be fixed, and how. The Synchronization setting was already disabled. Maybe this has something to do with the way the NFS share is exported/mounted?
PhpStorm mentions:
"External file changes sync may be slow: Project files cannot be watched (are they under network mount?)"
Any tips are appreciated, thank you in advance!
As far as I can tell, the problem is not with the NFS mount or the infrastructure, but with how PhpStorm renews its indexes. One quick but short-lived fix is to invalidate the indices and caches by going to:
File > Invalidate Caches / Restart
After that, there is no more rapid reindexing of directories, and until some unknown change occurs, the filesystem is handled properly by PhpStorm.
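If the churn is related to the native file watcher failing on the network mount (PhpStorm's warning above hints at that), another thing worth trying is disabling the file watcher via Help > Edit Custom Properties (a sketch; I haven't verified that it fixes this particular setup):
# idea.properties
idea.filewatcher.disabled=true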

How to enable Linux namespaces on a system based on kernel 2.6.38 and init?

I want to run LXC 2.0 on Linux kernel 2.6.38 with classic init; both the kernel version and init are mandatory for my system.
I have recompiled the kernel with namespace support as follows:
# Kernel parameters
CONFIG_NAMESPACES=y
CONFIG_CGROUP_NS=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
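I rebuilt and installed the kernel in the usual way (a sketch; exact steps depend on your build tree and bootloader):
make oldconfig                     # answer the prompts for the new namespace options
make && make modules_install && make install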
[root@ts ~]# CONFIG=$(pwd)/.config lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: missing
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
Everything runs successfully until I hit the following issue with lxc-start -n ts1 --logfile=ts1.log:
lxc_start - start.c:preserve_ns:138 - No such file or directory - Kernel does not support attaching to namespaces.
start.c:138 in LXC says that my parent process does not have a /proc/<PID>/ns directory, and when I checked, that was true for every process on the system, including init.
I assume that init does not take namespaces into account as the initial process.
What do I need to do to get init attached to a namespace?
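For reference, this is how I checked (the error shown is the standard coreutils message for a missing path):
[root@ts ~]# ls /proc/1/ns
ls: cannot access /proc/1/ns: No such file or directory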
Edit: I misread the question; correcting:
It seems that having CONFIG_PID_NS (https://cateee.net/lkddb/web-lkddb/PID_NS.html) is not enough; there is probably another option required (CONFIG_EXPERIMENTAL?).
I do remember seeing a howto for Debian Squeeze (6, the 2.6 kernel line) with LXC containers somewhere, so it should be doable; maybe try to grab the kernel config from there and compare (see the sketch below).
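A minimal comparison sketch (the path to the Squeeze kernel config is hypothetical):
diff <(grep -E '_NS=|CGROUP' /path/to/squeeze-kernel-config) \
     <(grep -E '_NS=|CGROUP' .config)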
I also found this patch set; try to compare, maybe:
http://lxc.sourceforge.net/patches/linux/2.6.38/2.6.38.2-lxc1/patches/
Also, consider old LXC (v1). I wouldn't expect compatibility with kernels from ~2009 to be a high (if any) priority, so chances are there will be many more caveats and traps running LXC on such an ancient kernel.

Kubernetes unable to mount NFS FS on Google Container Engine

I am following the basic NFS server tutorial here; however, when I try to create the test busybox replication controller, I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume
"kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs"
(spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID:
"4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit
status 32 Mounting arguments: 10.63.243.192:/exports
/var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs
nfs [] Output: mount: wrong fs type, bad option, bad superblock on
10.63.243.192:/exports, missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might need a
/sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, just to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On the 18th of October, Google announced a new container image that doesn't support NFS yet. Since Kubernetes 1.4, this image (called GCI) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
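If GCI is the culprit, one possible workaround at the time was to pin nodes to the older image type (a sketch; the cluster and pool names are made up, and the container_vm image type is historical and may no longer be accepted):
# Create a cluster, or a new node pool, on the older container-vm image
gcloud container clusters create my-nfs-cluster --image-type=container_vm
gcloud container node-pools create nfs-pool --cluster=my-nfs-cluster --image-type=container_vm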

Can't Write to /sys/kernel/ to disable Transparent Huge Pages (THP) for MongoDB on OVH CentOS 7

My Issue
I am having trouble removing MongoDB warnings about Transparent Huge Pages (THP) on an OVH CentOS 7 installation, and the issue appears to be the inability to write to /sys/kernel/mm as root.
First, I realize the OVH kernel is customized, and I know many of you will say to go with a fresh non-customized kernel, but that's not an option right now. I need to solve this problem for the current OS.
MongoDB Warnings:
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
MongoDB is trying to read the transparent_hugepage files (below), but they do not exist:
/sys/kernel/mm/transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/defrag
Cannot Create the Files
All of the solutions I've seen involve creating the files and populating them with "never", including the script in the MongoDB documentation. In all of the solutions, this is the key part:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
However, the files do not exist, and I cannot create anything under /sys/kernel/mm as root.
root@myhost [~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: No such file or directory
root@myhost [~]# mkdir -p /sys/kernel/mm/transparent_hugepage
mkdir: cannot create directory ‘/sys/kernel/mm/transparent_hugepage’: Operation not permitted
The owner and group of directory /sys/kernel/mm are root, and I have temporarily changed the permissions from 700 to 777, yet I still cannot create the directory as root.
Tuned Profile Also Doesn't Help
To be thorough, I have also created the custom Tuned profile (per the instructions in the MongoDB link above) and activated it, but it generates the error WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
Tuned Profile (/etc/tuned/no-thp/tuned.conf):
[main]
include=virtual-guest
[vm]
transparent_hugepages=never
Error in Tuned log:
WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
Some Solution in MongoDB Itself?
It seems like the best solution would be to somehow explicitly configure MongoDB not to use THP so that it wouldn't have to check for the missing files, but I've seen nothing like this. If there is a way, even if it involves customizing MongoDB (and repeating after every update), I'm willing to do it.
Right now I've installed CentOS 7 on OVH. They use /boot/bzImage-3.14.32-xxxx-grs-ipv6-64, which implements grsecurity (https://grsecurity.net) and precludes access to some folders.
The warnings from MongoDB about huge pages can be solved very simply by replacing the kernel. The procedure for CentOS 7 is as follows:
Download the required kernel from the OVH FTP server (ftp://ftp.ovh.net/made-in-ovh/bzImage2) into the /boot folder.
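For example (a sketch; the exact filename under that FTP directory may differ):
cd /boot
wget ftp://ftp.ovh.net/made-in-ovh/bzImage2/bzImage-4.8.17-xxxx-std-ipv6-64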
Edit /etc/grub2.cfg:
# linux /boot/bzImage-3.14.32-xxxx-grs-ipv6-64 root=/dev/md1 ro net.ifnames=0
linux /boot/bzImage-4.8.17-xxxx-std-ipv6-64 root=/dev/md1 ro net.ifnames=0
Here I replaced the default bzImage-3.14.32-xxxx-grs-ipv6-64 with bzImage-4.8.17-xxxx-std-ipv6-64, which has no grs.
Now reboot and check that the new kernel is running:
[root@ns506846 ~]# uname -r
4.8.17-xxxx-std-ipv6-64
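After booting the std kernel, the transparent_hugepage files exist, and the standard fix from the question applies (the bracketed value in the output marks the active setting):
cat /sys/kernel/mm/transparent_hugepage/enabled
# [always] madvise never
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag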
