Writing to a sysfs node causes the system to write to the node endlessly - linux

I wrote a module locally to test a feature, and I created the following node:
/sys/class/dbc/dbc # ls -l
total 0
-rw------- 1 root root 4096 2021-10-08 21:52 dbc_backlight
-rw------- 1 root root 4096 2021-10-08 22:30 dbc_pwm_max
-rw------- 1 root root 4096 2021-10-08 22:30 dbc_pwm_min
-rw------- 1 root root 4096 2021-10-08 21:52 dbc_setting
-rw------- 1 root root 4096 2021-10-08 21:52 dbc_thread_enable
-r--r--r-- 1 root root 4096 2021-10-08 22:30 dev
drwxr-xr-x 2 root root 0 2021-10-08 22:30 power
lrwxrwxrwx 1 root root 0 2021-10-08 22:30 subsystem -> ../../../../class/dbc
-rw-r--r-- 1 root root 4096 2021-10-08 22:30 uevent
When I echo a valid value to the dbc_backlight node, it works normally, but when I write an invalid value, the node is written to repeatedly without end.
The node's source code is as follows:
static ssize_t dbc_backlight_store(struct device *dev,
                                   struct device_attribute *attr,
                                   const char *buf, size_t count)
{
    unsigned int DBC_BACKLIGHT = 0;
    int readCount = 0;

    printk("===========Set DBC Backlight========\n");
    readCount = sscanf(buf, "%d", &DBC_BACKLIGHT);
    if (readCount != 1) {
        printk("[ERROR] cannot read DBC_BACKLIGHT from [%s]\n", buf);
        return 0;
    }
    if (DBC_BACKLIGHT > 100) {
        printk("Invalid Parameter DBC_BACKLIGHT=%d\n", DBC_BACKLIGHT);
        return 0;
    }
    printk("Set Parameter DBC_BACKLIGHT=%d success\n", DBC_BACKLIGHT);
    m_u8BacklightSetting = DBC_BACKLIGHT;
    SetActiveBacklightSwitch(m_eActiveBackLight, m_u8BacklightSetting);
    return count;
}
The dmesg log in the abnormal state is:
[ 2562.416693] ===========Set DBC Backlight========
[ 2562.416739] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.416786] ===========Set DBC Backlight========
[ 2562.416832] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.416878] ===========Set DBC Backlight========
[ 2562.416960] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.417006] ===========Set DBC Backlight========
[ 2562.417089] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.417135] ===========Set DBC Backlight========
[ 2562.417181] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.417265] ===========Set DBC Backlight========
[ 2562.417309] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.417391] ===========Set DBC Backlight========
[ 2562.417436] Invalid Parameter DBC_BACKLIGHT=101
[ 2562.417481] ===========Set DBC Backlight========
[ 2562.417564] Invalid Parameter DBC_BACKLIGHT=101
The log runs continuously and cannot be stopped; however, kill -9 <pid> can kill the process (a plain kill <pid> cannot). The top output is as follows:
Tasks: 410 total, 2 running, 349 sleeping, 0 stopped, 0 zombie
Mem: 1694992k total, 1583088k used, 111904k free, 12844k buffers
Swap: 409596k total, 13056k used, 396540k free, 732388k cached
400%cpu 6%user 102%nice 135%sys 157%idle 0%iow 0%irq 0%sirq 0%host
PID USER PR NI VIRT RES SHR S[%CPU] %MEM TIME+ ARGS
2272 logd 30 10 34M 9.4M 4.1M S 152 0.5 2:29.57 logd
10181 root 20 0 4.4M 2.3M 1.9M R 98.6 0.1 1:33.14 sh -
kill -9 10181 stops the runaway process.
I don't understand why the node (dbc_backlight) is written to endlessly; please help me.
Locally, when I make the following modification, the problem no longer reproduces:
printk("===========Set DBC Backlight========\n");
readCount = sscanf(buf, "%d", &DBC_BACKLIGHT);
if (readCount != 1) {
    printk("[ERROR] cannot read DBC_BACKLIGHT from [%s]\n", buf);
    return 0;
}
if (DBC_BACKLIGHT > 100) {
    printk("Invalid Parameter DBC_BACKLIGHT=%d\n", DBC_BACKLIGHT);
    return 0;
}

// The following modification fixes the problem:
printk("===========Set DBC Backlight========\n");
readCount = sscanf(buf, "%d", &DBC_BACKLIGHT);
if (readCount != 1) {
    printk("[ERROR] cannot read DBC_BACKLIGHT from [%s]\n", buf);
    return -EINVAL;
}
if (DBC_BACKLIGHT > 100) {
    printk("Invalid Parameter DBC_BACKLIGHT=%d\n", DBC_BACKLIGHT);
    return -EINVAL;
}
Do you know why? Thanks for your help.

On success, a sysfs .store function should return the number of characters consumed (normally count).
On failure, it should return a negative error code.
Returning 0 (return 0;) from that function is incorrect: write(2) then reports that zero bytes were written, so userspace assumes no progress was made and retries the same write, invoking your store callback over and over.
As you correctly noted, you can use return -EINVAL; to indicate that the input is invalid.


Virtual size in docker is always increasing with puppeteer

I am developing a small program with Node.js and Puppeteer whose goal is to generate a certain number of PDF files. The program uses the bluebird module to achieve concurrency. The problem is that physical and virtual memory usage never stops increasing. The total size of all the generated documents is approximately 25 GB, but the memory used by the Docker container is much larger:
pdf-generator 64.2GB (virtual 68.4GB)
We generate the PDF with Puppeteer in this way:
async function generatePDF(browser, num) {
  const page = await browser.newPage();
  try {
    const pdfUrl = pdfPathURL(num);
    await page.goto(pdfUrl, {
      waitUntil: ["domcontentloaded", "networkidle0", "load"],
      timeout: 300000,
    });
    // await page.waitForLoadState({ waitUntil: "domcontentloaded" });
    const buffer = await page.pdf({
      format: "A4",
      printBackground: true,
      landscape: true,
      preferCSSPageSize: true,
    });
    return buffer.toString("base64");
  } catch (error) {
    let messageError = error;
    console.log(messageError);
    return "";
  } finally {
    await page.close();
  }
}
[EDIT]
This is the code that opens the Chromium instance. One per request:
async function generatePDFFile(id) {
  let pdfFile;
  let errorGeneration = "";
  const browser = await launchPuppeteer();
  try {
    if (!browser) {
      errorGeneration = `Error getting Chromium instance`;
    }
    if (!browser.isConnected()) {
      errorGeneration = `Error connecting to Chromium`;
    }
    pdfFile = await generatePDFPage(browser, id);
    console.log(`PDF file generated of id:`, id);
  } catch (error) {
    errorGeneration = error;
    console.log("errorGeneration: ", error);
  } finally {
    await browser.close();
  }
  return { id, pdf: pdfFile, error: errorGeneration };
}
const puppeteerParams = {
  headless: true,
  args: [
    "--disable-gpu",
    "--disable-dev-shm-usage",
    "--disable-setuid-sandbox",
    "--no-sandbox",
    "--font-render-hinting=none",
    '--single-process',
    '--no-zygote'
  ],
};
The top command output in the container:
Tasks: 19 total, 1 running, 18 sleeping, 0 stopped, 0 zombie
%Cpu(s): 45.4 us, 11.0 sy, 0.0 ni, 42.5 id, 0.3 wa, 0.0 hi, 0.9 si, 0.0 st
MiB Mem : 15862.5 total, 1418.4 free, 9686.2 used, 4757.9 buff/cache
MiB Swap: 2048.0 total, 1291.0 free, 757.0 used. 4953.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
34855 myuser 20 0 1124.8g 222708 149664 S 81.0 1.4 0:02.74 chrome
34810 myuser 20 0 1124.8g 212544 146712 S 77.3 1.3 0:02.61 chrome
34918 myuser 20 0 1124.8g 184764 141052 S 36.7 1.1 0:01.10 chrome
31 myuser 20 0 706628 142100 33080 S 35.7 0.9 2:19.22 node
34968 myuser 20 0 1124.7g 136748 112832 S 9.3 0.8 0:00.28 chrome
35062 myuser 20 0 1124.7g 138452 114036 S 9.0 0.9 0:00.27 chrome
35013 myuser 20 0 1124.8g 137448 113456 S 8.3 0.8 0:00.25 chrome
60 myuser 20 0 965160 103512 33040 S 7.7 0.6 0:22.25 node
35106 myuser 20 0 1124.6g 105352 89208 S 5.0 0.6 0:00.15 chrome
8 myuser 20 0 630596 51892 32908 S 0.7 0.3 0:04.14 node
1 myuser 20 0 2420 524 452 S 0.0 0.0 0:00.05 sh
19 myuser 20 0 707412 57724 35064 S 0.0 0.4 0:02.26 npm start
30 myuser 20 0 2420 580 512 S 0.0 0.0 0:00.00 sh
48 myuser 20 0 705296 53336 34512 S 0.0 0.3 0:01.71 npm run example
59 myuser 20 0 2420 524 456 S 0.0 0.0 0:00.00 sh
24495 myuser 20 0 1124.7g 140500 116068 S 0.0 0.9 0:00.31 chrome
31812 myuser 20 0 4100 3376 2948 S 0.0 0.0 0:00.05 bash
31920 myuser 20 0 7012 3420 2848 R 0.0 0.0 0:00.03 top
34415 myuser 20 0 1124.8g 138368 114276 S 0.0 0.9 0:00.28 chrome
There are 8 chrome processes because I am concurrently making 8 requests. The memory keeps increasing.

Impossible to delete file in debugfs

I'm playing with debugfs. In a module, I've created a directory 'test_debugfs' in the debugfs filesystem (mounted at /sys/kernel/debug) and a file 'demo_file'.
// Create test_debugfs in /sys/kernel/debug
struct dentry *my_dirent;

static int __init my_module_init_module(void)
{
    my_dirent = debugfs_create_dir("test_debugfs", NULL);
    debugfs_create_file("demo_file", 0666, my_dirent, NULL, &fops_debugfs);
    return 0;
}
Unfortunately, I forgot to remove the directory on module unload, and now I cannot remove the demo_file anymore.
# rmmod my_module
# cd /sys/kernel/debug/test_debugfs
# ls
demo_file
# rm -rf demo_file
rm: cannot remove 'demo_file': Operation not permitted
# stat demo_file
File: demo_file
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 6h/6d Inode: 16426 Links: 1
Access: (0666/-rw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2021-04-28 10:20:14.807999989 +0200
Modify: 2021-04-28 10:20:14.807999989 +0200
Change: 2021-04-28 10:20:14.807999989 +0200
Birth: -
After rebooting my machine, the demo_file is still there.
Do you know how I could remove it?
Answer:
Thanks to Varun, I managed to remove the file directly in the module with this code:
struct dentry *my_dirent;

static int __init my_module_init_module(void)
{
    struct path path;
    int ret;

    ret = kern_path("/sys/kernel/debug/test_debugfs", LOOKUP_DIRECTORY, &path);
    if (ret)
        pr_err("Failed to lookup /sys/kernel/debug/test_debugfs err %d\n", ret);
    else
        debugfs_remove_recursive(path.dentry);
    return 0;
}
You cannot use the rm command to remove a file from debugfs.
The debugfs filesystem does not implement the unlink operation in its directory inode_operations, so rm will fail.
You have to use the debugfs function void debugfs_remove(struct dentry *dentry), where the dentry parameter is the return value of the debugfs_create_file call.

How to determine if an SFTP file is a directory in Node.js?

The ssh2 library's SFTP readdir method gives me back all the files in the remote directory. How can I tell if any of them are directories?
Here's some example output from the library:
{ filename: 'myfile',
  longname: '-rwxr-x--- 1 myuser mygroup 19036227 Nov 21 11:05 myfile',
  attrs:
   Stats {
     mode: 33256,
     permissions: 33256,
     uid: 603,
     gid: 1014,
     size: 19036227,
     atime: 1542859216,
     mtime: 1542816340 } }
The file's mode contains bits indicating its type. You can check it like this:
const fs = require('fs');

function isDir(mode) {
  return (mode & fs.constants.S_IFMT) === fs.constants.S_IFDIR;
}

isDir(myfile.attrs.mode);

nodejs process.setgid, process.setuid behavior with fs module

directory:
drwxrwxr-x 2 alex alex 4096 Aug 3 12:03 ./
drwxr-xr-x 17 alex alex 4096 Aug 3 11:18 ../
-rwx------ 1 root root 19 Aug 3 11:24 privilegedStuff*
-rwxrwx--- 1 root root 28 Aug 3 12:10 privilegedStuff1*
-rwxrwxr-x 1 alex alex 830 Aug 3 12:12 test.js*
test.js:
#!/usr/bin/env node
var fs = require('fs');

console.log('           user id: ', process.getuid());
console.log('          group id: ', process.getgid());
console.log(' user effective id: ', process.geteuid());
console.log('group effective id: ', process.getegid());
console.log('\n switching user and group...\n');

process.setgid(1000);
process.setegid(1000);
process.setuid(1000);
process.seteuid(1000);

console.log('           user id: ', process.getuid());
console.log('          group id: ', process.getgid());
console.log(' user effective id: ', process.geteuid());
console.log('group effective id: ', process.getegid());
console.log('\n output: \n');
console.log(fs.readFileSync('./privilegedStuff1', 'utf8'))
// this throws an error as expected, so I commented it out
// console.log(fs.readFileSync('./privilegedStuff', 'utf8'))
privilegedStuff1:
content of privilegedStuff1
result:
alex#hp:/apps/test$ sudo ./test.js
user id: 0
group id: 0
user effective id: 0
group effective id: 0
switching user and group...
user id: 1000
group id: 1000
user effective id: 1000
group effective id: 1000
output:
content of privilegedStuff1
What I don't understand is why node doesn't throw an error here, as it correctly does with the privilegedStuff file. What am I missing?
alex#hp:/apps/test$ groups
alex adm cdrom sudo dip plugdev lpadmin sambashare
alex#hp:/apps/test$ cat privilegedStuff1
cat: privilegedStuff1: Permission denied
alex#hp:/apps/test$ sudo -s
root#hp:/apps/test# groups
root
In my test, I don't have this problem.
Can you run the following commands and show the result:
ls -l privilegedStuff1
id

How to understand pid() and new_pid having the same value when executing forktracker.stp?

I am using forktracker.stp to track the fork process flow. The script is like this:
probe kprocess.create
{
    printf("%-25s: %s (%d) created %d\n",
           ctime(gettimeofday_s()), execname(), pid(), new_pid)
}

probe kprocess.exec
{
    printf("%-25s: %s (%d) is exec'ing %s\n",
           ctime(gettimeofday_s()), execname(), pid(), filename)
}
Executing the script, I find it outputs the following results:
......
Thu Oct 22 05:09:42 2015 : virt-manager (8713) created 8713
Thu Oct 22 05:09:42 2015 : virt-manager (8713) created 8713
Thu Oct 22 05:09:42 2015 : virt-manager (8713) created 8713
Thu Oct 22 05:09:43 2015 : virt-manager (8713) created 8713
......
I can't understand why pid() and new_pid are the same value. I suspected it might be related to "fork: called once, returns twice", so I wrote a simple program to test:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid;

    pid = fork();
    if (pid < 0) {
        exit(1);
    } else if (pid > 0) {
        printf("Parent exits!\n");
        exit(0);
    }
    printf("hello world\n");
    return 0;
}
Tracing this program, the script outputs:
Thu Oct 22 05:27:10 2015 : bash (3855) created 8955
Thu Oct 22 05:27:10 2015 : bash (8955) is exec'ing "./test"
Thu Oct 22 05:27:10 2015 : test (8955) created 8956
So it seems unrelated to "fork: called once, returns twice".
How should I understand pid() and new_pid having the same value?
I think what you're seeing are simply new threads, where the pids will be the same while the tids will differ. You can easily add tids to that script like so:
probe kprocess.create {
    printf("%-25s: %s (%d:%d) created %d:%d\n",
           ctime(gettimeofday_s()), execname(), pid(), tid(), new_pid, new_tid)
}

probe kprocess.exec {
    printf("%-25s: %s (%d) is exec'ing %s\n",
           ctime(gettimeofday_s()), execname(), pid(), filename)
}
You could also report tid in the exec, but that's often less interesting since an exec will replace the whole process anyway.
(This question was also posted to the mailing list, and I replied here.)
