cloud-init-output.log increases its size very quickly on RHEL 8 - linux

I use a Linux machine on an AWS EC2 instance with Red Hat Enterprise Linux 8.6, and my cloud-init-output.log is increasing in size so quickly that my app logs stop writing within one or two days, even though I have 20 GB of storage.
user$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.8G 0 1.8G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 195M 1.7G 11% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/xvda2 20G 17G 3.5G 83% /
tmpfs 373M 0 373M 0% /run/user/1000
user$ ls -ltr /var/log
total 10499168
drwxr-x---. 2 chrony chrony 6 Jun 15 2021 chrony
drwxr-xr-x. 2 root root 6 Apr 5 18:43 qemu-ga
drwx------. 2 root root 6 Apr 8 04:50 insights-client
drwx------. 2 root root 6 May 3 08:58 private
-rw-rw----. 1 root utmp 0 May 3 08:58 btmp
-rw-------. 1 root root 0 May 3 08:59 maillog
-rw-------. 1 root root 0 May 3 08:59 spooler
drwxr-x---. 2 sssd sssd 73 Jun 3 11:15 sssd
drwx------. 2 root root 23 Jul 12 14:43 audit
drwxr-xr-x. 2 root root 23 Jul 12 14:44 tuned
-rw-r--r--. 1 root root 128263 Jul 12 14:44 cloud-init.log
drwxr-xr-x. 2 root root 43 Jul 12 14:44 rhsm
-rw-r--r--. 1 root root 806 Jul 12 14:44 kdump.log
-rw-r--r--. 1 root root 1017 Jul 12 14:45 choose_repo.log
drwxr-xr-x. 2 root root 67 Jul 12 14:47 amazon
-rw-r--r--. 1 root root 1560 Jul 14 05:04 hawkey.log
-rw-------. 1 root root 26318 Jul 14 06:39 secure
-rw-rw-r--. 1 root utmp 4224 Jul 14 06:58 wtmp
-rw-rw-r--. 1 root utmp 292292 Jul 14 06:58 lastlog
-rw-r--r--. 1 root root 10752 Jul 14 07:00 dnf.rpm.log
-rw-r--r--. 1 root root 48816 Jul 14 07:00 dnf.librepo.log
-rw-r--r--. 1 root root 97219 Jul 14 07:00 dnf.log
-rw-------. 1 root root 12402 Jul 14 07:01 cron
-rw-------. 1 root root 2160833934 Jul 14 07:02 messages
-rw-r-----. 1 root adm 5257112056 Jul 14 07:03 cloud-init-output.log
I changed the logging level from the default DEBUG to ERROR in /etc/cloud/cloud.cfg.d, but it didn't help. /var/log/messages is also filling up fast.
Is this log file even supposed to keep filling up after the EC2 instance is up?
Is there something I can do to stop the growth?
I also tried to run logrotate manually with logrotate --force /etc/logrotate.d/ but it didn't do much.
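For reference, a minimal sketch of the kind of logrotate rule that can cap this file; the drop-in file name and the size/rotation thresholds below are assumptions, not something taken from the instance:
$ sudo tee /etc/logrotate.d/cloud-init-output <<'EOF' >/dev/null
# rotate once the file passes 50 MB, keep four compressed copies,
# and truncate in place so the writing process keeps its file handle
/var/log/cloud-init-output.log {
    size 50M
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
EOF
$ sudo logrotate --force /etc/logrotate.d/cloud-init-output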

Related

How to view logs of container before restart

I have a container which was restarted 14 hours ago. The container has been running for 7 weeks. I want to inspect the container's logs during a certain interval. When I run the command below, I see there is no output:
docker container logs pg-connect --until 168h --since 288h
When I run the command below, I only see logs since the container was restarted:
docker logs pg-connect
Any idea how to retrieve older logs for the container?
More info, if it helps:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f08fb6fb0fb kosta709/alpine-plus:0.0.2 "/connectors-restart…" 7 weeks ago Up 14 hours connectors-monitor
7e919a253a29 debezium/connect:1.2.3.Final "/docker-entrypoint.…" 7 weeks ago Up 14 hours pg-connect
>
>
> docker logs 7e919a253a29 -n 2
2022-08-26 06:37:10,878 INFO || WorkerSourceTask{id=relations-0} Committing offsets [org.apache.kafka.connect.runtime.WorkerSourceTask]
2022-08-26 06:37:10,878 INFO || WorkerSourceTask{id=relations-0} flushing 0 outstanding messages for offset commit [org.apache.kafka.connect.runtime.WorkerSourceTask]
> docker logs 7e919a253a29 |head
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
at java.base/java.lang.Thread.run(Thread.java:834)
2022-08-24 16:13:06,567 ERROR || WorkerSourceTask{id=session-0} failed to send record to barclays.public.session: [org.apache.kafka.connect.runtime.WorkerSourceTask]
org.apache.kafka.common.KafkaException: Producer is closed forcefully.
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortBatches(RecordAccumulator.java:766)
at org.apache.kafka.clients.producer.internals.RecordAccumulator.abortIncompleteBatches(RecordAccumulator.java:753)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:279)
>
> ls -lart /var/lib/docker/containers/7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1/
total 90720
drwx------ 2 root root 6 Jul 1 10:39 checkpoints
drwx--x--- 2 root root 6 Jul 1 10:39 mounts
drwx--x--- 4 root root 150 Jul 1 10:40 ..
-rw-r----- 1 root root 10000230 Aug 24 16:13 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.9
-rw-r----- 1 root root 10000163 Aug 24 16:13 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.8
-rw-r----- 1 root root 10000054 Aug 24 16:16 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.7
-rw-r----- 1 root root 10000147 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.6
-rw-r----- 1 root root 10000123 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.5
-rw-r----- 1 root root 10000019 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.4
-rw-r----- 1 root root 10000159 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.3
-rw-r----- 1 root root 10000045 Aug 24 16:42 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.2
-rw-r--r-- 1 root root 199 Aug 25 16:30 hosts
-rw-r--r-- 1 root root 68 Aug 25 16:30 resolv.conf
-rw-r--r-- 1 root root 25 Aug 25 16:30 hostname
-rw------- 1 root root 7205 Aug 25 16:30 config.v2.json
-rw-r--r-- 1 root root 1559 Aug 25 16:30 hostconfig.json
-rw-r----- 1 root root 10000085 Aug 25 16:31 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log.1
drwx--x--- 4 root root 4096 Aug 25 16:31 .
-rw-r----- 1 root root 2843232 Aug 26 06:38 7e919a253a296494b74361e258e49d8c3ff38f345455316a15e1cb28cf556fa1-json.log
As stated by [the official guide][1]:
The docker logs command batch-retrieves logs present at the time of execution.
To solve this issue, you should instrument the container software to log its output to a persistent (and, if you want, rotated) log file.
[1]: https://docs.docker.com/engine/reference/commandline/logs/
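As a minimal sketch of that suggestion, the container can be started with a bind-mounted directory and its entrypoint wrapped so the output also lands in a file on the host. The host path, log file name, and the assumption that the image's default entrypoint is /docker-entrypoint.sh with the argument start (docker ps above only shows it truncated) are illustrative, not taken from the original setup:
$ mkdir -p /var/log/pg-connect
$ docker run -d --name pg-connect \
    -v /var/log/pg-connect:/logs \
    --entrypoint /bin/sh \
    debezium/connect:1.2.3.Final \
    -c '/docker-entrypoint.sh start 2>&1 | tee -a /logs/connect.log'
# assumes /bin/sh and tee exist in the image; the file under /var/log/pg-connect survives container
# restarts and can be rotated with an ordinary logrotate rule on the host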

"mount: /dev/mqueue: must be superuser to use mount" when starting a Yocto Linux system via NFS and TFTP

I followed the guide "Yocto NFS & TFTP boot" from the i.MX knowledge base to make my embedded Linux device run a kernel and a filesystem served from my development machine.
The kernel seems to be correctly loaded via TFTP, but the system doesn't boot up properly and systemd goes into maintenance mode.
Here's the first error in the log:
[ 10.637534] systemd[1]: dev-mqueue.mount: Mount process exited, code=exited, status=32/n/a
[ 10.657077] systemd[1]: dev-mqueue.mount: Failed with result 'exit-code'.
[ 10.666907] systemd[1]: Failed to mount POSIX Message Queue File System.
[FAILED] Failed to mount POSIX Message Queue File System.
See 'systemctl status dev-mqueue.mount' for details.
It seems similar to the log included in an unanswered comment in that same guide.
Looking at systemctl status dev-mqueue.mount, I see:
* dev-mqueue.mount - POSIX Message Queue File System
Loaded: loaded (/lib/systemd/system/dev-mqueue.mount; static)
Active: failed (Result: exit-code) since Sun 2022-03-06 11:52:40 UTC; 5h 29min ago
Where: /dev/mqueue
What: mqueue
Docs: man:mq_overview(7)
https://www.freedesktop.org/wiki/Software/systemd/APIFileSystems
Mar 06 11:52:40 pico-imx7 mount[180]: mount: /dev/mqueue: must be superuser to use mount.
Notice: journal has been rotated since unit was started, output may be incomplete.
Not sure what's wrong, or why the system would fail like this.
The message must be superuser to use mount is a hint at a permission problem.
The Linux system expects most system files to be owned by UID 0 (root), but when reading the NFS filesystem set up in the guide it actually sees UID 1000, i.e. the UID of whoever built the system on the development machine. If I list the contents of ${YOCTO_BUILD_DIR}/tmp/work/${TARGET}-poky-linux-gnueabi/${IMAGE}/1.0-r0/rootfs, I get:
drwxr-xr-x 20 1000 1000 4096 Mar 9 2018 ./
drwxr-xr-x 14 1000 1000 4096 Mar 5 19:19 ../
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 bin/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 boot/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 dev/
drwxr-xr-x 59 1000 1000 4096 Mar 9 2018 etc/
drwxr-xr-x 4 1000 1000 4096 Mar 9 2018 home/
drwxr-xr-x 10 1000 1000 4096 Mar 9 2018 lib/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 media/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 mnt/
drwxr-xr-x 4 1000 1000 4096 Mar 9 2018 opt/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 proc/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 run/
drwxr-xr-x 3 1000 1000 4096 Mar 9 2018 sbin/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 srv/
drwxr-xr-x 2 1000 1000 4096 Mar 9 2018 sys/
drwxr-xr-t 2 1000 1000 4096 Mar 9 2018 tmp/
drwxr-xr-x 38 1000 1000 4096 Mar 9 2018 unit_tests/
drwxr-xr-x 11 1000 1000 4096 Mar 9 2018 usr/
drwxr-xr-x 9 1000 1000 4096 Mar 9 2018 var/
Notice the 1000 as UID and GID.
Compare that with the listing of the filesystem image tarball, made with tar --exclude=\*/\*/\* --no-wildcards-match-slash -tjvf ${YOCTO_BUILD_DIR}/tmp/deploy/images/${TARGET}/${IMAGE}-${TARGET}.tar.bz2:
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./bin/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./boot/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./dev/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./etc/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./home/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./lib/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./media/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./mnt/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./opt/
dr-xr-xr-x 0/0 0 2018-03-09 13:34 ./proc/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./run/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./sbin/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./srv/
dr-xr-xr-x 0/0 0 2018-03-09 13:34 ./sys/
drwxrwxrwt 0/0 0 2018-03-09 13:34 ./tmp/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./unit_tests/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./usr/
drwxr-xr-x 0/0 0 2018-03-09 13:34 ./var/
Here the root directories are correctly owned by root.
The same permissions are present in the .ext4 image file, which is part of the SD/MMC image.
Two possible solutions, both sketched below:
mount the ${YOCTO_BUILD_DIR}/tmp/deploy/images/${TARGET}/${IMAGE}-${TARGET}.ext4 file as a loop device on a directory, then export that directory via NFS (might require root privileges);
extract ${YOCTO_BUILD_DIR}/tmp/deploy/images/${TARGET}/${IMAGE}-${TARGET}.tar.bz2 into a directory, then export that directory via NFS; this will require root privileges and some more time to extract the embedded filesystem.
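A rough sketch of both options; the export path and the NFS export options are assumptions, not something from the guide:
# option 1: loop-mount the ext4 image and export the mount point
$ sudo mkdir -p /srv/nfs/rootfs
$ sudo mount -o loop ${YOCTO_BUILD_DIR}/tmp/deploy/images/${TARGET}/${IMAGE}-${TARGET}.ext4 /srv/nfs/rootfs
# option 2: extract the tarball as root so the root ownership stored in the archive is preserved
$ sudo mkdir -p /srv/nfs/rootfs
$ sudo tar -xjpf ${YOCTO_BUILD_DIR}/tmp/deploy/images/${TARGET}/${IMAGE}-${TARGET}.tar.bz2 -C /srv/nfs/rootfs
# in either case, export the directory, e.g. with a line like this in /etc/exports:
# /srv/nfs/rootfs *(rw,sync,no_root_squash,no_subtree_check)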

kubelet failed to find mountpoint for CPU

I'm using kubeadm 1.15.3, docker-ce 18.09 on Debian 10 buster 5.2.9-2, and seeing errors in journalctl -xe | grep kubelet:
server.go:273] failed to run Kubelet: mountpoint for cpu not found
My /sys/fs/cgroup contains:
-r--r--r-- 1 root root 0 Sep 2 18:49 cgroup.controllers
-rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.max.depth
-rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Sep 2 18:49 cgroup.procs
-r--r--r-- 1 root root 0 Sep 2 18:50 cgroup.stat
-rw-r--r-- 1 root root 0 Sep 2 18:49 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Sep 2 18:50 cgroup.threads
-rw-r--r-- 1 root root 0 Sep 2 18:50 cpu.pressure
-r--r--r-- 1 root root 0 Sep 2 18:50 cpuset.cpus.effective
-r--r--r-- 1 root root 0 Sep 2 18:50 cpuset.mems.effective
drwxr-xr-x 2 root root 0 Sep 2 18:49 init.scope
-rw-r--r-- 1 root root 0 Sep 2 18:50 io.pressure
-rw-r--r-- 1 root root 0 Sep 2 18:50 memory.pressure
drwxr-xr-x 20 root root 0 Sep 2 18:49 system.slice
drwxr-xr-x 2 root root 0 Sep 2 18:49 user.slice
docker.service is running okay and has /etc/docker/daemon.json:
{
"exec-opts": [
"native.cgroupdriver=systemd"
],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
The kubeadm docs say that when using Docker the cgroup driver will be autodetected, but I tried supplying it anyway for good measure - no change.
With mount or cgroupfs-mount:
$ mount -t cgroup -o all cgroup /sys/fs/cgroup
mount: /sys/fs/cgroup: cgroup already mounted on /sys/fs/cgroup/cpuset.
$ cgroupfs-mount
mount: /sys/fs/cgroup/cpu: cgroup already mounted on /sys/fs/cgroup/cpuset.
mount: /sys/fs/cgroup/blkio: cgroup already mounted on /sys/fs/cgroup/cpuset.
mount: /sys/fs/cgroup/memory: cgroup already mounted on /sys/fs/cgroup/cpuset.
mount: /sys/fs/cgroup/pids: cgroup already mounted on /sys/fs/cgroup/cpuset.
Is the problem that it's at cpuset rather than cpu? I tried to create a symlink, but root does not have write permission for /sys/fs/cgroup. (Presumably I could change that, but I took it as enough warning not to meddle.)
How can I let kubelet find my CPU cgroup mount?
I would say that something is very weird with your docker-ce installation, not with kubelet. You are looking in the right direction by examining the mount layout.
I have tried 3 different Docker versions on instances in both GCP and AWS environments.
What I noticed comparing our results is that you have the wrong folder structure under /sys/fs/cgroup. Note that I have many more entries in /sys/fs/cgroup compared to your output. This is what my results look like:
root@instance-3:~# docker version
Client: Docker Engine - Community
Version: 19.03.1
API version: 1.39 (downgraded from 1.40)
Go version: go1.12.5
Git commit: 74b1e89
Built: Thu Jul 25 21:21:24 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.1
API version: 1.39 (minimum version 1.12)
Go version: go1.10.6
Git commit: 4c52b90
Built: Wed Jan 9 19:02:44 2019
OS/Arch: linux/amd64
Experimental: false
root@instance-3:~# ls -la /sys/fs/cgroup
total 0
drwxr-xr-x 14 root root 360 Sep 3 11:30 .
drwxr-xr-x 6 root root 0 Sep 3 11:30 ..
dr-xr-xr-x 5 root root 0 Sep 3 11:30 blkio
lrwxrwxrwx 1 root root 11 Sep 3 11:30 cpu -> cpu,cpuacct
dr-xr-xr-x 5 root root 0 Sep 3 11:30 cpu,cpuacct
lrwxrwxrwx 1 root root 11 Sep 3 11:30 cpuacct -> cpu,cpuacct
dr-xr-xr-x 2 root root 0 Sep 3 11:30 cpuset
dr-xr-xr-x 5 root root 0 Sep 3 11:30 devices
dr-xr-xr-x 2 root root 0 Sep 3 11:30 freezer
dr-xr-xr-x 5 root root 0 Sep 3 11:30 memory
lrwxrwxrwx 1 root root 16 Sep 3 11:30 net_cls -> net_cls,net_prio
dr-xr-xr-x 2 root root 0 Sep 3 11:30 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Sep 3 11:30 net_prio -> net_cls,net_prio
dr-xr-xr-x 2 root root 0 Sep 3 11:30 perf_event
dr-xr-xr-x 5 root root 0 Sep 3 11:30 pids
dr-xr-xr-x 2 root root 0 Sep 3 11:30 rdma
dr-xr-xr-x 5 root root 0 Sep 3 11:30 systemd
dr-xr-xr-x 5 root root 0 Sep 3 11:30 unified
root@instance-3:~# ls -la /sys/fs/cgroup/unified/
total 0
dr-xr-xr-x 5 root root 0 Sep 3 11:37 .
drwxr-xr-x 14 root root 360 Sep 3 11:30 ..
-r--r--r-- 1 root root 0 Sep 3 11:42 cgroup.controllers
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.max.depth
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Sep 3 11:30 cgroup.procs
-r--r--r-- 1 root root 0 Sep 3 11:42 cgroup.stat
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Sep 3 11:42 cgroup.threads
drwxr-xr-x 2 root root 0 Sep 3 11:30 init.scope
drwxr-xr-x 52 root root 0 Sep 3 11:30 system.slice
drwxr-xr-x 3 root root 0 Sep 3 11:30 user.slice
I encourage you to completely reinstall Docker from scratch (or recreate the instance and install Docker again). That should help.
Let me share with you my docker-ce installation steps:
$ sudo apt update
$ sudo apt install apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
$ sudo apt update
$ apt-cache policy docker-ce
$ sudo apt install docker-ce=5:18.09.1~3-0~debian-buster
I have also seen a workaround in the answer to "Kubelet: mountpoint for cpu not found", but you also don't have permission, even as root, to apply it:
mkdir /sys/fs/cgroup/cpu,cpuacct
mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct
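After reinstalling, a quick sanity check that the expected layout is back (commands only as a sketch; output omitted):
$ mount | grep cgroup           # the v1 hierarchies (cpu,cpuacct, memory, ...) should be listed under /sys/fs/cgroup
$ docker info | grep -i cgroup  # should report the systemd cgroup driver configured in daemon.json
$ ls -la /sys/fs/cgroup         # should match the structure shown above, including the cpu -> cpu,cpuacct symlink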

`cd` to a folder in bash script not working: No such folder [duplicate]

This question already has answers here:
Why can't I change directories using "cd" in a script?
(33 answers)
'\r': command not found [duplicate]
(3 answers)
Closed 4 years ago.
This is my entire script; it exists simply to avoid having to type these commands again and again:
cd api
rails s -p 3001 -b 0.0.0.0
cd ..
When I am in the directory of the script and run cd api, it works just fine. However, when I run the script via ./start_server it does not work. Here is the output of ls -al:
mendel#DESKTOP-LIKG5E5:/mnt/c/Projects/chaverim-update$ ls -al
total 8
drwxrwxrwx 0 root root 512 Apr 13 12:32 .
drwxrwxrwx 0 root root 512 Apr 5 12:40 ..
drwxrwxrwx 0 root root 512 Apr 13 12:09 api
-rwxrwxrwx 1 root root 1237 Apr 5 12:40 boxfile.yml
drwxrwxrwx 0 root root 512 Apr 8 16:54 .bundle
drwxrwxrwx 0 root root 512 Apr 8 16:54 client
drwxrwxrwx 0 root root 512 Apr 13 12:09 .git
-rwxrwxrwx 1 root root 11 Apr 5 12:40 .gitignore
-rwxrwxrwx 1 root root 1097 Apr 5 12:40 LICENSE
drwxrwxrwx 0 root root 512 Apr 5 12:40 nginx
-rwxrwxrwx 1 root root 82 Apr 5 12:40 Procfile
-rwxrwxrwx 1 root root 67 Apr 5 12:40 README.md
-rwxrwxrwx 1 root root 188 Apr 5 12:40 run_tests
-rwxrwxrwx 1 root root 28 Apr 5 12:40 start_client
-rwxrwxrwx 1 root root 41 Apr 13 12:52 start_server
drwxrwxrwx 0 root root 512 Apr 8 16:54 vendor
As you can see, there is a folder called api at the top, and the start_server script has execute permissions.
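Given the linked duplicate about '\r': command not found and the fact that the project lives under /mnt/c, one thing worth checking is whether start_server picked up Windows (CRLF) line endings; this is only a guess to verify, not a confirmed diagnosis:
$ cat -A start_server            # lines ending in ^M$ indicate CRLF line endings
$ sed -i 's/\r$//' start_server  # strip the carriage returns if they are there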

Why is dev.bus NULL in my device?

I'm trying to understand how the Linux device/driver model works, and to do this I've written a little module. The module is simple: it retrieves a pointer to a struct net_device (let's call it netdev) via the function dev_get_by_name(&init_net, "eth0"). Why is the value of netdev->dev.bus NULL? Shouldn't that pointer represent the bus_type structure to which my device is attached? The field netdev->parent->bus is, however, not NULL, and it seems to represent the bus for the eth controller... any explanation?
This is because your eth device, or better said its device "object" in the kernel, is not on a bus, and thus its bus value is left uninitialized. But its parent device usually is on a bus, and it is sufficient that the parent device knows the bus it is on, since both devices are eventually linked during driver initialization.
Let's have a look at an example: here is what I have in sysfs for my eth0 device (notice the device field):
$ ll /sys/class/net/eth0/
total 0
-r--r--r-- 1 root root 4096 May 20 11:10 address
-r--r--r-- 1 root root 4096 May 20 11:10 addr_len
-r--r--r-- 1 root root 4096 May 20 11:10 broadcast
-r--r--r-- 1 root root 4096 May 20 11:10 carrier
lrwxrwxrwx 1 root root 0 May 20 11:10 device -> ../../../devices/pci0000:00/0000:00:19.0
-r--r--r-- 1 root root 4096 May 20 11:10 dev_id
-r--r--r-- 1 root root 4096 May 20 11:10 dormant
-r--r--r-- 1 root root 4096 May 20 11:10 features
-rw-r--r-- 1 root root 4096 May 20 11:10 flags
-rw-r--r-- 1 root root 4096 May 20 11:10 ifalias
-r--r--r-- 1 root root 4096 May 20 11:10 ifindex
-r--r--r-- 1 root root 4096 May 20 11:10 iflink
-r--r--r-- 1 root root 4096 May 20 11:10 link_mode
-rw-r--r-- 1 root root 4096 May 20 11:10 mtu
-r--r--r-- 1 root root 4096 May 20 11:10 operstate
drwxr-xr-x 2 root root 0 May 20 11:10 power
drwxr-xr-x 2 root root 0 May 20 11:10 statistics
lrwxrwxrwx 1 root root 0 May 20 11:10 subsystem -> ../../net
-rw-r--r-- 1 root root 4096 May 20 11:10 tx_queue_len
-r--r--r-- 1 root root 4096 May 20 11:10 type
-rw-r--r-- 1 root root 4096 May 20 11:10 uevent
The link for the device is created by this code in the driver's probe function, where netdev is the network device and pdev the associated PCI device:
SET_NETDEV_DEV(netdev, &pdev->dev);
Which according to the documentation is:
/* Set the sysfs physical device reference for the network logical device
* if set prior to registration will cause a symlink during initialization.
*/
#define SET_NETDEV_DEV(net, pdev) ((net)->dev.parent = (pdev))
And here is what I have in the corresponding PCI device, the one that was set by SET_NETDEV_DEV (where you can notice the bus field):
$ ll /sys/devices/pci0000\:00/0000\:00\:19.0/
total 0
-rw-r--r-- 1 root root 4096 May 20 11:54 broken_parity_status
lrwxrwxrwx 1 root root 0 May 20 11:22 bus -> ../../../bus/pci
-r--r--r-- 1 root root 4096 May 20 11:07 class
-rw-r--r-- 1 root root 256 May 20 11:22 config
-r--r--r-- 1 root root 4096 May 20 11:54 device
lrwxrwxrwx 1 root root 0 May 20 11:22 driver -> ../../../bus/pci/drivers/e1000e
-rw------- 1 root root 4096 May 20 11:22 enable
-r--r--r-- 1 root root 4096 May 20 11:07 irq
-r--r--r-- 1 root root 4096 May 20 11:54 local_cpulist
-r--r--r-- 1 root root 4096 May 20 11:07 local_cpus
-r--r--r-- 1 root root 4096 May 20 11:22 modalias
-rw-r--r-- 1 root root 4096 May 20 11:22 msi_bus
lrwxrwxrwx 1 root root 0 May 20 11:22 net:eth0 -> ../../../class/net/eth0
drwxr-xr-x 2 root root 0 May 20 11:11 power
-r--r--r-- 1 root root 4096 May 20 11:22 resource
-rw------- 1 root root 131072 May 20 11:22 resource0
-rw------- 1 root root 4096 May 20 11:22 resource1
-rw------- 1 root root 32 May 20 11:22 resource2
lrwxrwxrwx 1 root root 0 May 20 11:22 subsystem -> ../../../bus/pci
-r--r--r-- 1 root root 4096 May 20 11:22 subsystem_device
-r--r--r-- 1 root root 4096 May 20 11:22 subsystem_vendor
-rw-r--r-- 1 root root 4096 May 20 11:22 uevent
-r--r--r-- 1 root root 4096 May 20 11:22 vendor
I hope this clarifies the situation.
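The same relationship can also be seen from userspace by following the symlinks shown above; a small sketch assuming the same eth0 device:
$ readlink /sys/class/net/eth0/subsystem         # -> ../../net (the class device belongs to the net class, not to a bus)
$ readlink -f /sys/class/net/eth0/device         # resolves to the parent, e.g. /sys/devices/pci0000:00/0000:00:19.0
$ readlink /sys/class/net/eth0/device/subsystem  # -> ../../../bus/pci (the parent is the one that sits on the PCI bus)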
