How to use the stackcount bcc tool with Rust?

I would like to create a memory flamegraph of a process using bcc/eBPF as seen here and using:
sudo ./stackcount-bpfcc -p <pid> -U -r ".*malloc.*" -v -d
It doesn't seem to write anything interesting to stdout; I just get this:
cannot attach kprobe, Invalid argument
cannot attach kprobe, Invalid argument
cannot attach kprobe, Invalid argument
cannot attach kprobe, Invalid argument
cannot attach kprobe, Invalid argument
Tracing 86 functions for ".*malloc.*"...
Hit Ctrl-C to end.
My executable is written in Rust and was built with this .cargo/config:
[build]
rustflags = "-C force-frame-pointers=yes"

Related

dmidecode inside go program running in a kubernetes pod

I have a Go routine running in a Docker container. I need the output of the command dmidecode, but it's coming back blank.
Go:
func main() {
    cmd := exec.Command("dmidecode", "-t 1")
    x, _ := cmd.Output()
    fmt.Println("output =======", string(x))
}
Docker run:
docker run --device /dev/mem:/dev/mem --cap-add SYS_RAWIO -p 8086:8086 -it my_img:1.0.1
What am I missing here?
Updated:
The above worked in docker after I added below in Dockerfile:
FROM alpine:latest
RUN apk --no-cache --update --verbose add grep bash dmidecode && \
    rm -rf /var/cache/apk/* /tmp/* /sbin/halt /sbin/poweroff /sbin/reboot
And below in docker compose file:
privileged: true
But when I tried to use the above in Kubernetes, it is not able to fetch the dmidecode output.
Any help would be really appreciated.
What am I missing here?
For starters: error handling.
x, _ := cmd.Output()
Never, ever ignore an error in Go. Unlike languages like, say, Python, there is no exception raising; handling error return values is your only chance to figure out whether something went wrong.
Secondly, you're also ignoring your command's Standard Error stream. This is likely to contain a useful error message whenever a command execution fails, so os/exec's Output() provides it as part of the error value if it wasn't already captured in the Cmd configuration. Part of your error handling should be a type assertion on that error value, if it's not nil; if the assertion to *exec.ExitError succeeds, check its Stderr field for an error message.
Third, looking at your command, I can see you made an easy mistake:
cmd := exec.Command("dmidecode","-t 1")
At the shell, whitespace separates arguments, but there is no shell here; you're passing -t 1 all as one argument to dmidecode. You should almost certainly be passing them as separate arguments:
cmd := exec.Command("dmidecode","-t", "1")
Finally, you've already found Can't run dmidecode on docker container , but make sure to read and understand the accepted answer. Then, get your docker container configured to be able to run dmidecode without Go. Once it works at the command line, the same docker configuration should allow it to work under Go invocation as well.

Why does gem5 run parsec3.0 encounter deadlock error?

I run gem5 in full-system mode on a multi-core system, use AtomicSimpleCPU to establish a checkpoint, and then switch to the O3 CPU to restart. I execute a command similar to the following:
./build/ARM_MOESI_hammer/gem5.opt -d fs_results/blackscholes configs/example/fs.py --ruby --num-cpus=64 --caches --l2cache --cpu-type=AtomicSimpleCPU --network=garnet2.0 --disk-image=$M5_PATH/disks/expanded-linaro-minimal-aarch64.img --kernel=/home/GEM5/gem5/2017sys/binaries/vmlinux.vexpress_gem5_v1_64.20170616 --param 'system.realview.gic.gem5_extensions = True'
Next, establish a checkpoint, and use the following command to restore the checkpoint and run PARSEC.
./build/ARM_MOESI_hammer/gem5.opt -d fs_results/blackscholes configs/example/fs.py --ruby --num-cpus=64 --caches --l2cache --cpu-type=AtomicSimpleCPU --network=garnet2.0 --disk-image=$M5_PATH/disks/expanded-linaro-minimal-aarch64.img --kernel=/home/GEM5/gem5/2017sys/binaries/vmlinux.vexpress_gem5_v1_64.20170616 --param 'system.realview.gic.gem5_extensions = True' --restore-with-cpu=DeriveO3CPU --script=../arm-gem5-rsk/parsec_rcs/blackscholes_simsmall_64.rcS -r 1
But I encountered the following problems:
First of all, the .rcS file is not executed. Does restoring from the checkpoint conflict with the --script option?
Second, I manually enter the following in the operating system booted by gem5:
parsecmgmt -a run -c gcc-hooks -i simsmall -n 1 -p blackscholes
I got the following error:
panic: Possible Deadlock detected. Aborting!
I tried to find a solution online; it seems there used to be a workaround of adding the parameter --garnet-network=flexible, but this method is no longer applicable in gem5 20.0.
Can someone help me solve this deadlock problem? By the way, when running the facesim program with the 'test' input, I get the correct result.

Can docker entrypoint in shell form use runtime command args?

Here is a sample dockerfile with a shell-form entrypoint:
FROM ubuntu
ENTRYPOINT /bin/echo "(shell) ENTRYPOINT#image ($0)"
Below, are some of the outputs that I see from various runs:
$ docker run -it sc-test:v4
(shell) ENTRYPOINT#image (/bin/sh)
$ docker run -it sc-test:v4 /bin/echo
(shell) ENTRYPOINT#image (/bin/echo)
$ docker run -it sc-test:v4 /bin/cat
(shell) ENTRYPOINT#image (/bin/cat)
$ docker run -it sc-test:v4 /bin/dog
(shell) ENTRYPOINT#image (/bin/dog)
$ docker run -it sc-test:v4 "/bin/dog ($0)"
(shell) ENTRYPOINT#image (/bin/dog (-bash))
Based on docker documentation here, we can see that the command args are ignored.
However, the value of $0 changes with the args provided. Can someone explain why this happens? Thanks!
The table in that part of the Docker documentation isn't technically correct: Docker doesn't actually drop or ignore the command part when there's a shell-form entrypoint. What actually happens (as your examples demonstrate) is:
If either the ENTRYPOINT or CMD (or both) is the shell form, it's wrapped in ["/bin/sh", "-c", "..."].
The ENTRYPOINT and CMD lists are concatenated to form a single command list.
Let's take your third example. This is, in Dockerfile syntax,
ENTRYPOINT /bin/echo "(shell) ENTRYPOINT#image ($0)"
CMD ["/bin/cat"]
and the resulting combined command is (in JSON array syntax, expanded for clarity)
[
"/bin/sh",
"-c",
"/bin/echo \"(shell) ENTRYPOINT#image ($0)\"",
"/bin/cat"
]
So, what does sh -c do if you give it multiple arguments? The POSIX spec for the sh command documents the syntax as
sh -c command_string [command_name [argument...]]
and further documents
-c: Read commands from the command_string operand. Set the value of special parameter 0 [...] from the value of the command_name operand and the positional parameters ($1, $2, and so on) in sequence from the remaining argument operands. No commands shall be read from the standard input.
That's what you're seeing in your examples. If ENTRYPOINT is a bare string and CMD is a JSON array, then within the ENTRYPOINT string command, the arguments in CMD can be used as $0, $1, and so on. If both are bare strings, both get wrapped in sh -c, and you'll get something like:
ENTRYPOINT /bin/echo "$0 is /bin/sh, $1 is -c, and then $2"
CMD the rest of the line
In your first example, the command part is empty, and in this case (still from the POSIX sh documentation)
If command_name is not specified, special parameter 0 shall be set to [...] normally a pathname used to execute the sh utility.
Your last example is slightly more subtle:
docker run -it sc-test:v4 "/bin/dog ($0)"
Since the string is double-quoted, your local shell expands the $0 reference in it, which is how bash gets in there; then since it's a single (quoted) word, it becomes the single command_name argument to sh -c.
There are two more normal patterns for using ENTRYPOINT and CMD together. The pattern I prefer has CMD be a full shell command, while ENTRYPOINT does some first-time setup and then runs a command like exec "$@" to hand off to that command. There's also a "container as command" pattern where ENTRYPOINT holds a complete command (perhaps with involved JVM arguments) and CMD supplies additional options. In these cases the ENTRYPOINT must use JSON-array syntax:
ENTRYPOINT ["/script-that-exec-dollar-at.sh"]
CMD ["the_command", "--option"]
If the ENTRYPOINT string doesn't directly reference $0, $1, etc., the CMD arguments are effectively ignored by the sh -c wrapper. If you had a shell-form ENTRYPOINT script.sh, the script would be invoked by sh -c with no arguments, and the CMD would be lost.
It's probably clearer for the Docker documentation to say "if ENTRYPOINT is a string then CMD is ignored" than to try to explain the subtleties of this.

Getting profiling file from "stack exec"

I would like to profile a program that is being managed by Stack. The executable was built with the following command:
stack build --executable-profiling --library-profiling --ghc-options="-fprof-auto -rtsopts"
And run with this command
stack exec myProgram.exe -- inputArg +RTS -p
I know that the program has run (from the output file), but I am expecting a myProgram.prof file to be produced as well; I cannot find this file.
If I execute the program without using stack the profiling file is produced, but is there a way to get this to work using Stack?
-- stops the RTS from processing further command-line arguments, but it is also passed through to the program. So your -- is seen by both stack and myProgram.exe, and as a result the +RTS -p flags that follow it are not processed by myProgram.exe's RTS. Instead try
stack exec -- myProgram.exe inputArg +RTS -p

How to run command during Docker build which requires a tty?

I have some script I need to run during a Docker build which requires a tty (which Docker does not provide during a build). Under the hood the script uses the read command. With a tty, I can do things like (echo yes; echo no) | myscript.sh.
Without it I get strange errors I don't completely understand. So is there any way to use this script during the build (given that it's not mine to modify)?
EDIT: Here's a more definite example of the error:
FROM ubuntu:14.04
RUN echo yes | read
which fails with:
Step 0 : FROM ubuntu:14.04
---> 826544226fdc
Step 1 : RUN echo yes | read
---> Running in 4d49fd03b38b
/bin/sh: 1: read: arg count
The command '/bin/sh -c echo yes | read' returned a non-zero code: 2
RUN <command> in Dockerfile reference:
shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows
Let's see what exactly /bin/sh is in ubuntu:14.04:
$ docker run -it --rm ubuntu:14.04 bash
root@7bdcaf403396:/# ls -n /bin/sh
lrwxrwxrwx 1 0 0 4 Feb 19 2014 /bin/sh -> dash
/bin/sh is a symbolic link to dash; see the read builtin in dash:
$ man dash
...
read [-p prompt] [-r] variable [...]
The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line
is read from the standard input. The trailing newline is deleted from the line and the line is split as
described in the section on word splitting above, and the pieces are assigned to the variables in order.
At least one variable must be specified. If there are more pieces than variables, the remaining pieces
(along with the characters in IFS that separated them) are assigned to the last variable. If there are
more variables than pieces, the remaining variables are assigned the null string. The read builtin will
indicate success unless EOF is encountered on input, in which case failure is returned.
By default, unless the -r option is specified, the backslash ``\'' acts as an escape character, causing
the following character to be treated literally. If a backslash is followed by a newline, the backslash
and the newline will be deleted.
...
So dash's read requires at least one variable to be specified. Let's compare with the read builtin in bash:
$ man bash
...
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name...]
If no names are supplied, the line read is assigned to the variable REPLY. The return code is zero,
unless end-of-file is encountered, read times out (in which case the return code is greater than
128), or an invalid file descriptor is supplied as the argument to -u.
...
So I guess your script myscript.sh starts with #!/bin/bash or something else, but not #!/bin/sh.
Alternatively, you can change your Dockerfile as below:
FROM ubuntu:14.04
RUN echo yes | read ENV_NAME
Links:
https://docs.docker.com/engine/reference/builder/
http://linux.die.net/man/1/dash
http://linux.die.net/man/1/bash
Short answer: you can't do it directly, because neither docker build nor buildx implements a tty (/dev/tty, /dev/console). There is a hacky workaround that achieves what you need, but I strongly discourage using it, since it breaks the CI model; that's why Docker doesn't implement it.
Hacky solution
FROM ubuntu:14.04
RUN echo yes | read  # command that requires a tty
As mentioned in the Docker reference documentation, RUN consists of two stages: first the execution of the command, and second the commit of the result to the image as a new layer. So you can perform the stages manually yourself, providing a tty for the first (execution) stage and then committing the result.
Code:
cd
cat >> tty_wrapper.sh << EOF
echo yes | read ## Your command which needs tty
rm /home/tty_wrapper.sh
EOF
docker run --interactive --tty --detach --privileged --name name1 ubuntu:14.04
docker cp tty_wrapper.sh name1:/home/
docker exec name1 bash -c "cd /home && chmod +x tty_wrapper.sh && ./tty_wrapper.sh "
docker commit name1 your:tag
Your new image is ready.
Here is a description of the code.
First we create a bash script that wraps the tty-requiring command and removes itself after its first execution. Then we run a container with the --tty option provided (you can drop --privileged if you don't need it). Next we copy the wrapper script into the container and perform the execution and commit stages ourselves.
You don't need a tty to feed data to your script. Doing something like (echo yes; echo no) | myscript.sh, as you suggested, will do. Also, please make sure you copy the file into the image before trying to execute it, with something like COPY myscript.sh myscript.sh.
Most likely you don't need a tty. As the comment on the question shows, even the example provided is a situation where the read command was not properly called. A tty would turn the build into an interactive terminal process, which doesn't translate well to automated builds that may be run from tools without terminals.
If you need a tty, then there's the C library call openpty that you would use when forking a process that includes a pseudo-tty. You may be able to solve your problem with a tool like expect, but it's been so long that I don't remember whether it creates a pty or not. Alternatively, if your application can't be built automatically, you can manually perform the steps in a running container, and then docker commit the resulting container to make an image.
I'd recommend against any of those and to work out the procedure to build your application and install it in a non-interactive fashion. Depending on the application, it may be easier to modify the installer itself.
