Docker inside LXC unprivileged container - linux

I am trying to run Docker containers inside an unprivileged LXC container. Can anyone suggest what I am missing?
If I remove AppArmor from the LXC container, it works fine. It seems I need some AppArmor configuration to make this work without disabling AppArmor entirely.
This is my current LXC container config:
lxc.include = /usr/share/lxc/config/nesting.conf
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
# For Ubuntu 14.04
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none bind,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir,optional 0 0
lxc.include = /usr/share/lxc/config/userns.conf
# For Ubuntu 14.04
lxc.mount.entry = /sys/firmware/efi/efivars sys/firmware/efi/efivars none bind,optional 0 0
lxc.mount.entry = /proc/sys/fs/binfmt_misc proc/sys/fs/binfmt_misc none bind,optional 0 0
lxc.arch = linux64
# Container specific configuration
lxc.idmap = u 0 1258512 65536
lxc.idmap = g 0 1258512 65536
lxc.rootfs.path = dir:/var/lib/lxc/ubuntu/rootfs
lxc.uts.name = ubuntu
# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:3e:3f:77
lxc.net.0.ipv4.address = 10.0.3.242/24
lxc.net.0.ipv4.gateway = auto
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.cpuset.cpus = 0-31
lxc.start.auto = 1

Does adding the following to the config help?
lxc.aa_profile = unconfined
It may weaken your security profile, but it should get you started in the right direction.
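If you would rather not drop AppArmor completely, a less drastic option is to keep the container confined but allow nesting. This is a hedged sketch: the key names below are the LXC 3.x spellings (your config already uses the 3.x lxc.net.0.* / lxc.idmap syntax), while lxc.aa_profile is the older 2.x name.
# Keep the container confined but permit nested containers (Docker) inside it
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
# Or, as a quick test only (weakens isolation), run the container unconfined:
# lxc.apparmor.profile = unconfined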

Related

python code runs well; when installed using `pip`, it runs twice

I've written myself a program in Python, which you can find here. It's a tool to compare logs. When I run it as a Python script, it runs fine. See an example:
$ python3 src/logChecker/logChecker.py -pre logs_pre/ -post logs_pos/ -tf templ/
##### Successfully Loaded Templates from folder templ/ #####
##### Successfully Loaded Templates from folder templ/ #####
##### Logs Loaded Successfully from folder logs_pre/ #####
##### Logs Loaded Successfully from folder logs_pos/ #####
0 0 sh_rtr_opsf_op_db.template agg01.cpe_rx.json
1 0 sh_rtr_bgp_neigh.template agg01.cpe_rx.json
2 0 sh_rtr_rt_tbl_summ.template agg01.cpe_rx.json
3 0 sh_rtr_ospf_neigh.template agg01.cpe_rx.json
0 0 sh_rtr_opsf_op_db.template agg01.cpe_rx.json
1 0 sh_rtr_bgp_neigh.template agg01.cpe_rx.json
2 0 sh_rtr_rt_tbl_summ.template agg01.cpe_rx.json
3 0 sh_rtr_ospf_neigh.template agg01.cpe_rx.json
Saving Excel
# 0 sh_rtr_opsf_op_db.template
# 1 sh_rtr_bgp_neigh.template
# 2 sh_rtr_rt_tbl_summ.template
# 3 sh_rtr_ospf_neigh.template
However, if I install it using pip (pip3 install logChecker), the program runs twice when invoked.
$ pip3 show logChecker
Name: logChecker
Version: 3.5.6
Summary: A simple log analysis tool
Home-page: https://github.com/laimaretto/logChecker
Author: Lucas Aimaretto
Author-email: laimaretto@gmail.com
License: BSD 3-clause
Location: /home/lucas/.local/lib/python3.8/site-packages
Requires: pandas, textfsm, ttp, XlsxWriter
Required-by:
$ logChecker -pre logs_pre/ -post logs_pos/ -tf templ/
##### Successfully Loaded Templates from folder templ/ #####
##### Successfully Loaded Templates from folder templ/ #####
##### Logs Loaded Successfully from folder logs_pre/ #####
##### Logs Loaded Successfully from folder logs_pos/ #####
0 0 sh_rtr_opsf_op_db.template agg01.cpe_rx.json
1 0 sh_rtr_bgp_neigh.template agg01.cpe_rx.json
2 0 sh_rtr_rt_tbl_summ.template agg01.cpe_rx.json
3 0 sh_rtr_ospf_neigh.template agg01.cpe_rx.json
0 0 sh_rtr_opsf_op_db.template agg01.cpe_rx.json
1 0 sh_rtr_bgp_neigh.template agg01.cpe_rx.json
2 0 sh_rtr_rt_tbl_summ.template agg01.cpe_rx.json
3 0 sh_rtr_ospf_neigh.template agg01.cpe_rx.json
Saving Excel
# 0 sh_rtr_opsf_op_db.template
# 1 sh_rtr_bgp_neigh.template
# 2 sh_rtr_rt_tbl_summ.template
# 3 sh_rtr_ospf_neigh.template
##### Successfully Loaded Templates from folder templ/ #####
##### Successfully Loaded Templates from folder templ/ #####
##### Logs Loaded Successfully from folder logs_pre/ #####
##### Logs Loaded Successfully from folder logs_pos/ #####
0 0 sh_rtr_opsf_op_db.template agg01.cpe_rx.json
1 0 sh_rtr_bgp_neigh.template agg01.cpe_rx.json
2 0 sh_rtr_rt_tbl_summ.template agg01.cpe_rx.json
3 0 sh_rtr_ospf_neigh.template agg01.cpe_rx.json
0 0 sh_rtr_opsf_op_db.template agg01.cpe_rx.json
1 0 sh_rtr_bgp_neigh.template agg01.cpe_rx.json
2 0 sh_rtr_rt_tbl_summ.template agg01.cpe_rx.json
3 0 sh_rtr_ospf_neigh.template agg01.cpe_rx.json
Saving Excel
# 0 sh_rtr_opsf_op_db.template
# 1 sh_rtr_bgp_neigh.template
# 2 sh_rtr_rt_tbl_summ.template
# 3 sh_rtr_ospf_neigh.template
I'm clueless. If the program ran twice when invoked from within Python, I would at least have a starting point. But it only runs twice after being installed with pip and used as a normal program from the CLI.
I've already checked the setup.py (which is available in the git repo), but it looks rather standard.
Unfortunately I don't have a minimal example to share; only the original code linked here. But if someone has faced something similar, a hint or shared experience would be very helpful.
Thanks.
OK, I've found the solution. It's twofold.
The first part is in the code itself. When I run it as a Python script, this is what the code contains that allows it to run:
def main():
    [...my code..]

main()
That's why it runs when invoked from within Python.
However, when installing it using pip, I have the following inside setup.py.
entry_points={
'console_scripts': ['logChecker=src.logChecker.logChecker:main'],
},
The console_scripts entry point targets the main function, but the module also calls main() at the top level. So when the entry point imports the module (running the top-level call) and then calls main() itself, the program runs twice.
I've removed the top-level call to main(), and now it runs once when invoked from the CLI. The downside is that I can no longer run it as a plain Python script. I'll find a solution for that.
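For reference, the usual way to keep both invocation styles working is to guard the module-level call, so the console_scripts wrapper (which imports the module and then calls main() itself) does not trigger a second run. A minimal sketch:
def main():
    # ... actual program logic ...
    ...

# Runs only when the file is executed directly (python3 logChecker.py ...),
# not when the module is imported by the pip-installed entry point.
if __name__ == '__main__':
    main()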

snakemake allocates memory twice

I am noticing that all my rules request memory twice: once with a lower maximum than what I requested (mem_mb), and then with what I actually requested (mem_gb). If I run the rules as localrules, they run faster. How can I make sure the default settings do not interfere?
resources: mem_mb=100, disk_mb=8620, tmpdir=/tmp/pop071.54835, partition=h24, qos=normal, mem_gb=100, time=120:00:00
The rules are as follows:
rule bwa_mem2_mem:
    input:
        R1 = "data/results/qc/{species}.{population}.{individual}_1.fq.gz",
        R2 = "data/results/qc/{species}.{population}.{individual}_2.fq.gz",
        R1_unp = "data/results/qc/{species}.{population}.{individual}_1_unp.fq.gz",
        R2_unp = "data/results/qc/{species}.{population}.{individual}_2_unp.fq.gz",
        idx = "data/results/genome/genome",
        ref = "data/results/genome/genome.fa"
    output:
        bam = "data/results/mapped_reads/{species}.{population}.{individual}.bam",
    log:
        bwa = "logs/bwa_mem2/{species}.{population}.{individual}.log",
        sam = "logs/samtools_view/{species}.{population}.{individual}.log",
    benchmark:
        "benchmark/bwa_mem2_mem/{species}.{population}.{individual}.tsv",
    resources:
        time = parameters["bwa_mem2"]["time"],
        mem_gb = parameters["bwa_mem2"]["mem_gb"],
    params:
        extra = parameters["bwa_mem2"]["extra"],
        tag = compose_rg_tag,
    threads:
        parameters["bwa_mem2"]["threads"],
    shell:
        "bwa-mem2 mem -t {threads} -R '{params.tag}' {params.extra} {input.idx} {input.R1} {input.R2} | "
        "samtools sort -l 9 -o {output.bam} --reference {input.ref} --output-fmt CRAM -# {threads} /dev/stdin 2> {log.sam}"
and the config is:
cluster:
  mkdir -p logs/{rule} && # change the log file to logs/slurm/{rule}
  sbatch
    --partition={resources.partition}
    --time={resources.time}
    --qos={resources.qos}
    --cpus-per-task={threads}
    --mem={resources.mem_gb}
    --job-name=smk-{rule}-{wildcards}
    --output=logs/{rule}/{rule}-{wildcards}-%j.out
    --parsable # Required to pass job IDs to scancel
default-resources:
  - partition=h24
  - qos=normal
  - mem_gb=100
  - time="04:00:00"
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True # Required to run with local conda environment
cluster-status: status-sacct.sh # Required to monitor the status of the submitted jobs
cluster-cancel: scancel # Required to cancel the jobs with Ctrl + C
cluster-cancel-nargs: 50
Cheers,
Angel
Right now there are two separate memory resource requirements:
mem_mb
mem_gb
From snakemake's perspective these are different resources, so both will be passed to the cluster. A quick fix is to use the same units everywhere; e.g. if the resource really requires only 100 MB, then the default resource should be changed to:
default-resources:
  - partition=h24
  - qos=normal
  - mem_mb=100
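If the rules really do need values on the order of gigabytes, keep everything in mem_mb instead and drop mem_gb entirely. A sketch, assuming parameters["bwa_mem2"]["mem_gb"] is an integer number of gigabytes:
# In the rule: convert the configured GB value to MB so only mem_mb is requested
resources:
    time = parameters["bwa_mem2"]["time"],
    mem_mb = parameters["bwa_mem2"]["mem_gb"] * 1024,

# And submit with the same resource in the cluster profile
# (sbatch interprets a bare --mem value as megabytes by default):
#   --mem={resources.mem_mb}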

Perf record hanging on armv7

I have a device with embedded Linux. The base image is built using ptxdist 2019.01 with the OSELAS toolchain build 2018.02 and gcc 7.3.1. Ptxdist has a native option to enable perf support, so I enabled it and installed perf on the device. It is running Linux 4.19.72.
However, when I run perf record -g (with a process to trace) without explicitly specifying events, it seems to hang, using a lot of CPU and not responding to SIGINT. I am not sure what the default event is; it does not seem to be documented anywhere. How can I find out what it is hanging on and/or which events to specify so that it actually works?
Update #1: Running strace on perf record -g app… shows:
openat(AT_FDCWD, "/proc/sys/kernel/kptr_restrict", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
read(3, "0\n", 1024) = 2
geteuid32() = 0
getuid32() = 0
close(3) = 0
statfs64("/sys", 88, 0xbef2f388) = 0
stat64("/sys/bus/event_source/devices/cs_etm/format", 0xbef2f440) = -1 ENOENT (No such file or directory)
stat64("/sys/bus/event_source/devices/cs_etm/type", 0xbef2f440) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/sys/devices/system/cpu", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_CLOEXEC|O_DIRECTORY) = 3
fstat64(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
getdents64(3, /* 12 entries */, 32768) = 360
getdents64(3, /* 0 entries */, 32768) = 0
close(3) = 0
stat64("/sys/bus/event_source/devices/arm_spe_0/format", 0xbef2f440) = -1 ENOENT (No such file or directory)
stat64("/sys/bus/event_source/devices/arm_spe_0/type", 0xbef2f440) = -1 ENOENT (No such file or directory)
geteuid32() = 0
perf_event_open(
Unfortunately, the arguments of the perf_event_open call never get written out. Listing with ps shows the process in the R (running) state.
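(Not from the original thread, but a possible next step: perf record defaults to the cycles hardware PMU event, and on ARM boards where the PMU is absent or not described in the device tree that default can misbehave. Listing what perf actually sees and forcing a software event helps narrow it down.)
# Show which events this perf/kernel combination exposes
perf list

# Check whether a CPU PMU was registered at all
ls /sys/bus/event_source/devices/

# Try an explicit software event instead of the default hardware event
perf record -e cpu-clock -g -- <your-app>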

snmpwalk failed with authorizationError

I tried to execute:
snmpwalk -v 3 -u snmpv3username -A <passphrase> -a MD5 -l authNoPriv localhost .1.3.6.1.4.1.334.72.1.1.6.2.1.0
However, I got the following error:
Error in packet.
Reason: authorizationError (access denied to that object)
I have already defined the following in /etc/snmp/snmpd.conf:
createUser snmpv3username MD5 <passphrase> AES <passphrase>
My questions are:
1. What does this error mean? I thought I had defined the user in the config file.
2. How do I solve this issue?
If I execute:
snmpwalk -v 1 -c public -O e 127.0.0.1
I got this result:
SNMPv2-MIB::sysDescr.0 = STRING: Linux ip-10-251-138-141 2.6.32-358.14.1.el6.x86_64 #1 SMP Mon Jun 17 15:54:20 EDT 2013 x86_64
SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (615023) 1:42:30.23
SNMPv2-MIB::sysContact.0 = STRING: Root <root@localhost>
SNMPv2-MIB::sysName.0 = STRING: ip-10-251-138-141
SNMPv2-MIB::sysLocation.0 = STRING: aws-west
SNMPv2-MIB::sysORLastChange.0 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORID.1 = OID: SNMP-MPD-MIB::snmpMPDMIBObjects.3.1.1
SNMPv2-MIB::sysORID.2 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance
SNMPv2-MIB::sysORID.3 = OID: SNMP-FRAMEWORK-MIB::snmpFrameworkMIBCompliance
SNMPv2-MIB::sysORID.4 = OID: SNMPv2-MIB::snmpMIB
SNMPv2-MIB::sysORID.5 = OID: TCP-MIB::tcpMIB
SNMPv2-MIB::sysORID.6 = OID: IP-MIB::ip
SNMPv2-MIB::sysORID.7 = OID: UDP-MIB::udpMIB
SNMPv2-MIB::sysORID.8 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup
SNMPv2-MIB::sysORDescr.1 = STRING: The MIB for Message Processing and Dispatching.
SNMPv2-MIB::sysORDescr.2 = STRING: The MIB for Message Processing and Dispatching.
SNMPv2-MIB::sysORDescr.3 = STRING: The SNMP Management Architecture MIB.
SNMPv2-MIB::sysORDescr.4 = STRING: The MIB module for SNMPv2 entities
SNMPv2-MIB::sysORDescr.5 = STRING: The MIB module for managing TCP implementations
SNMPv2-MIB::sysORDescr.6 = STRING: The MIB module for managing IP and ICMP implementations
SNMPv2-MIB::sysORDescr.7 = STRING: The MIB module for managing UDP implementations
SNMPv2-MIB::sysORDescr.8 = STRING: View-based Access Control Model for SNMP.
SNMPv2-MIB::sysORUpTime.1 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.2 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.3 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.4 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.5 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.6 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.7 = Timeticks: (2) 0:00:00.02
SNMPv2-MIB::sysORUpTime.8 = Timeticks: (2) 0:00:00.02
HOST-RESOURCES-MIB::hrSystemUptime.0 = Timeticks: (562693901) 65 days, 3:02:19.01
End of MIB
Thanks in advance
You are doing the snmpwalk with security level authNoPriv, but your user is configured with security level authPriv.
Try:
snmpwalk -v 3 -u snmpv3username -A <passphrase> -a MD5 -x AES -X <passphrase> -l authPriv localhost .1.3.6.1.4.1.334.72.1.1.6.2.1.0
Besides creating the user, you must also "authorize" it to see data. Users can exist without any permissions to see data (it's part of the SNMPv3 specifications).
For Net-SNMP, you can do this easily by granting it read-only access using this line in your snmpd.conf file:
rouser snmpv3username
or for write access to everything:
rwuser snmpv3username
Edit: Additionally, you should put the createUser line in /var/net-snmp/snmpd.conf instead, so that it gets replaced by a private, localized key that can't be stolen and used on other devices.
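Putting it together, a typical net-snmp setup looks roughly like this (a sketch; the persistent file path varies by distribution, e.g. /var/net-snmp/snmpd.conf or /var/lib/net-snmp/snmpd.conf):
# Stop the agent first so it does not overwrite the persistent file
service snmpd stop

# In the persistent config (e.g. /var/net-snmp/snmpd.conf), create the user;
# the agent rewrites this line into a localized key on the next start:
#   createUser snmpv3username MD5 <passphrase> AES <passphrase>

# In /etc/snmp/snmpd.conf, grant the user access (read-only here):
#   rouser snmpv3username

service snmpd start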

Mysql seconds_behind master very high

Hi, we have MySQL master-slave replication; the master is MySQL 5.6 and the slave is MySQL 5.7. Seconds_Behind_Master is 245000. How do I make it catch up faster? Right now it is taking more than 6 hours to catch up 100,000 seconds.
My slave has 128 GB of RAM. Below is my my.cnf:
[mysqld]
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
innodb_buffer_pool_size = 110G
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
# These are commonly set, remove the # and set as required.
basedir = /usr/local/mysql
datadir = /disk1/mysqldata
port = 3306
#server_id = 3
socket = /var/run/mysqld/mysqld.sock
user=mysql
log_error = /var/log/mysql/error.log
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
join_buffer_size = 256M
sort_buffer_size = 128M
read_rnd_buffer_size = 2M
#copied from old config
#key_buffer = 16M
max_allowed_packet = 256M
thread_stack = 192K
thread_cache_size = 8
query_cache_limit = 1M
#disabling query_cache_size and type, for replication purpose, need to enable it when going live
query_cache_size = 0
#query_cache_size = 64M
#query_cache_type = 1
query_cache_type = OFF
#GroupBy
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
enforce-gtid-consistency
gtid-mode = ON
log_slave_updates=0
slave_transaction_retries = 100
#replication related changes
server-id = 2
relay-log = /disk1/mysqllog/mysql-relay-bin.log
log_bin = /disk1/mysqllog/binlog/mysql-bin.log
binlog_do_db = brandmanagement
#replicate_wild_do_table=brandmanagement.%
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_recent
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data
replicate-wild-ignore-table=brandmanagement.t\_fb\_rt\_data
replicate-wild-ignore-table=brandmanagement.t\_keyword\_tweets
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_old
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_new
binlog_format=row
report-host=10.125.133.220
report-port=3306
#sync-master-info=1
read-only=1
net_read_timeout = 7200
net_write_timeout = 7200
innodb_flush_log_at_trx_commit = 2
sync_binlog=0
sync_relay_log_info=0
max_relay_log_size=268435456
There are lots of possible solutions, but I'll go with the simplest one. Do you have enough network bandwidth to send all the changes over the network? You're using "row" binlog format, which may be good in the case of random, unindexed updates. But if you're changing a lot of data through indexed statements, "mixed" binlog format may be better.
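If you want to try the binlog-format change, note that it is set on the master, not in the slave configuration shown above. A sketch, only worth doing if row-based events are what saturates the link:
# On the master, in my.cnf (requires a restart, or use SET GLOBAL at runtime):
[mysqld]
binlog_format = MIXED

# Then watch the slave catch up:
#   SHOW SLAVE STATUS\G   -- check Seconds_Behind_Master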

Resources