What is the purpose of the "/proc/$pid/latency" file? - linux

I found that there is a /proc/$pid/latency file on my SuSE Enterprise Linux 12. E.g.:
# cat /proc/1476/latency
Latency Top version : v0.1
But I can't find any info about this /proc/$pid/latency file in the proc manual. After googling, I only found this old page, and http://www.latencytop.org/ can't be opened any more.
So my question is: what is the purpose of the /proc/$pid/latency file? Does it only show the Latency Top version? And since http://www.latencytop.org/ is broken now, what is the status of Latency Top?

A quick grep of fs/proc reveals how to enable the thing: sysctl kernel.latencytop=1. Afterwards you can check the results with latencytop.
It definitely looks quite dead, though.
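Even so, a minimal sketch of the enable-and-inspect workflow (PID 1476 is the one from the question, and this assumes the kernel was built with CONFIG_LATENCYTOP):
# enable latency accounting globally; this stays on until reboot
sysctl -w kernel.latencytop=1
# per-process results now show up in /proc/<pid>/latency
cat /proc/1476/latency
# the latencytop tool aggregates the same data system-wide
latencytop
# turn it off again, since the accounting adds a little overhead
sysctl -w kernel.latencytop=0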

Related

OpenSSH: option named 46AaCfGgKkMNnqsTtVvXxYy

With the 2017 Fall Creators Update, Microsoft added a version of OpenSSH to Windows 10, which got out of beta and is enabled by default in the recent April Update.
Today I tried to take it for a spin and saw that its usage page lists an option named 46AaCfGgKkMNnqsTtVvXxYy:
usage: ssh -46AaCfGgKkMNnqsTtVvXxYy
Upon running the above command, PowerShell prints the following output:
PS C:\WINDOWS\system32> ssh -46AaCfGgKkMNnqsTtVvXxYy
OpenSSH_for_Windows_7.6p1, LibreSSL 2.6.4
This looks like a version number of sorts.
I have looked in the Microsoft Docs to find more information about ssh and this flag, but to no avail. A documentation page on this website shows the option, but doesn't explain what it should do. To me, it looks like a combination of multiple options, but that doesn't explain why it outputs a version number.
My questions are the following:
Is it normal for ssh to have an option with such a peculiar name?
If yes, where does it originate from?
Is this the expected output for this command?
Is anyone able to provide some more insight on this?
As you can see from the site you linked to, they are separate options, not a single option: -4 means one thing, -6 another, and so on. The reason they're shown in one blob is that they don't take any parameters and can therefore be combined, meaning -4A would be the same as -4 -A. This saves space on manual pages but is confusing unless you know about it.
After them come all the options that do take parameters, like -B.
The version number is shown because -V, which prints the version number and exits, is included in that blob.
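A small illustration of the bundling (the hostname is just a placeholder):
# -4 (force IPv4) and -A (agent forwarding) can be written separately or combined
ssh -4 -A user@example.com
ssh -4A user@example.com
# -V prints the version and exits, which is why the bundle in the question
# produced a version string: it contains V
ssh -V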

How to disable a Virtual Terminal in Yocto Linux

Would anyone know how to disable the virtual terminals in Linux? I am using Yocto, Morty version, on an i.MX6 processor. Even though our base distribution is Yocto, unfortunately we have diverged from building it with recipes, so this is more of a straight Linux question than a Yocto one…
To give some detail as to my problem: it is for an embedded device that has an HDMI port. When I attach a monitor to the HDMI port it shows the Linux penguin logo and a getty login prompt, and it blanks out after 600 seconds. I just want to use the HDMI port as an output with nothing displayed on it, and I want it to stay on all the time.
I have found that the HDMI port maps to /dev/tty1 - when I type echo "asdfasdf" > /dev/tty1 I see the characters output on the monitor.
Here are a few things I have tried to no avail – a lot of these are not needed if I can figure out how to disable it as a virtual terminal…
• I figured out how to disable the getty service, but a cursor still blinks, and I don't even want a cursor to show (a sketch of both steps follows after this list).
• I have tried to disable the display of the penguins by disabling the LOGO options in the kernel config - I commented out anything with LOGO in it:
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
To no avail. The logo still shows.
• The fact that it blanks after 600 seconds is console blanking - I can see it set to 600 in /sys/module/kernel/parameters/consoleblank. When I issue the command echo -e '\033[9;0]' > /dev/tty1, it sets the console blanking to 0 and wakes the terminal. Being able to wake the console up is limited success, but I would like to disable the virtual terminal altogether…
• I tried commenting out any virtual terminal defines in the config file to no avail:
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
Everything I have read suggests that /dev/tty1 is a virtual terminal or console. From what I read about the VT option, disabling the CONFIG_VT should do it:
VT — Virtual terminal
Say yes here to get support for terminal devices
with display and keyboard devices. These are called "virtual" because
you can run several virtual terminals (also called virtual consoles)
on one physical terminal. You need at least one virtual terminal
device in order to make use of your keyboard and monitor. Therefore,
only people configuring an embedded system would want to say no here
in order to save some memory; the only way to log into such a system
is then via a serial or network connection. Virtual terminals are
useful because, for example, one virtual terminal can display system
messages and warnings, another one can be used for a text-mode user
session, and a third could run an X session, all in parallel.
Switching between virtual terminals is done with certain key
combinations, usually Alt-function key. If you are unsure, say yes, or
else you won't be able to do much with your Linux system.
But for some reason it doesn’t do anything!
• I found this thread: https://askubuntu.com/questions/357039/how-do-i-disable-virtual-consoles-tty1-6 among others, but none are much help, since my distribution does not have any of the directories used in the solutions offered in that thread or any others I have found. For instance, I do not have an /etc/events.d directory, nor an /etc/default/console-setup file, nor an /etc/init directory… I imagine the reason for this is that my distribution uses systemd while the solutions are based on SysV init?
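A sketch of the getty and cursor steps from the first bullet, assuming a systemd-based image where the HDMI console is /dev/tty1:
# stop and disable the login prompt on tty1 (systemd getty instance)
systemctl stop getty@tty1.service
systemctl disable getty@tty1.service
# hide the blinking text cursor on that console (standard VT escape sequence)
echo -e '\033[?25l' > /dev/tty1
# disable console blanking there as well (same escape trick as above)
echo -e '\033[9;0]' > /dev/tty1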
Disabling the logo or console blanking would not be necessary if I could just figure out how to disable that port as a terminal…
So does anyone have pointers or things I could try? I am relatively new to Linux (returning after 10 years - I worked with DNX v2.6 ten years ago, and it seems everything I knew about init is fairly obsolete, lol), so I am sure I am missing a lot…
Thanks,
- Chuck
I think I found the answer to my question. This is actually a framebuffer console, documented in Documentation/fb/fbcon.txt. From the documentation:
The framebuffer console (fbcon), as its name implies, is a text
console running on top of the framebuffer device. It has the
functionality of any standard text console driver, such as the VGA
console, with the added features that can be attributed to the
graphical nature of the framebuffer.
Commenting out the line
CONFIG_FRAMEBUFFER_CONSOLE=y
in the configuration file located in /arch/arm/configs will disable it.
Also this part of the documentation shows you how to disable it at runtime:
So, how do we unbind fbcon from the console? Part of the answer is in Documentation/console/console.txt. To summarize:
Echo a value to the bind file that represents the framebuffer console driver. So assuming vtcon1 represents fbcon, then:
echo 1 > sys/class/vtconsole/vtcon1/bind - attach framebuffer console to console layer
echo 0 > sys/class/vtconsole/vtcon1/bind - detach framebuffer console from console layer
When I issue the echo 0 command, the cursor stops blinking and starts blinking again when I issue the echo 1 command.
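A small sketch of that runtime approach; vtcon1 is only an assumption taken from the documentation quote, so check the name files first, since the index can differ between systems:
# each virtual console driver exposes a name file; look for the frame buffer one
for f in /sys/class/vtconsole/vtcon*/name; do echo "$f: $(cat "$f")"; done
# detach the framebuffer console from the console layer (text and cursor go away)
echo 0 > /sys/class/vtconsole/vtcon1/bind
# reattach it later if needed
echo 1 > /sys/class/vtconsole/vtcon1/bind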
I think there is another way of doing it as well: modifying the Yocto build environment by putting USE_VT = "0" in the OpenEmbedded machine config file. The USE_VT variable is referenced by the sysvinit-inittab recipe. This answer was given to me on the Yocto Linux mailing list, but I have not tested it since we have diverged from Yocto...

Why can nodejs make a hardlink to a directory? [duplicate]

How do you create a hardlink (as opposed to a symlink or a Mac OS alias) in OS X that points to a directory? I already know the command "ln target destination" but that only works when the target is a file. I know that Mac OS, unlike other Unix environments, does allow hardlinking to folders (this is used for Time Machine, for example) but I don't know how to do it myself.
I agree that hard-linking folders/directories can cause problems if you're not careful, but they have a very definite advantage - Time Machine is a perfect example. Without them it simply would not be practical, as the duplication of redundant versions of files would very quickly consume even the largest of disks.
Snow Leopard can create hard links to directories as long as you follow Amit Singh's six rules:
The file system must be journaled HFS+.
The parent directories of the source and destination must be different.
The source’s parent must not be the root directory.
The destination must not be in the root directory.
The destination must not be a descendent of the source.
The destination must not have any ancestor that’s a directory hard link.
So it's not correct at all that Snow Leopard has lost the ability to create hard links to folders.
I just verified that link/unlink do work on Snow Leopard - as long as you follow the six rules. I tried it on my Snow Leopard 10.6.6 system - on the boot volume and on a separate USB external volume - and it worked fine in both cases.
Here is the "hunlink.c" program:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc != 2)
        return 1;

    /* unlink() removes the directory hard link without touching its contents */
    int ret = unlink(argv[1]);
    if (ret != 0)
        perror("unlink");
    return ret;
}
gcc -o hunlink hunlink.c
So, be careful if you try it - remember to follow the rules and use hlink to create these hard links and use hunlink to remove the hard link afterwards. And don't forget to document what you've done for later on, or for someone else who might need to know this.
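An end-to-end sketch, assuming both helpers have been compiled (hlink.c appears in a later answer below; the paths are purely illustrative):
# compile the two helpers
gcc -o hlink hlink.c -Wall
gcc -o hunlink hunlink.c -Wall
# create a directory hard link, respecting the six rules listed above
./hlink /Volumes/Data/projects /Volumes/Data/links/projects
# later, remove the hard link without touching the original folder's contents
./hunlink /Volumes/Data/links/projects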
One other "gotcha" that I just learned about these "hard links" to folders. When you create them there is really a lot that happens "behind the curtain" of Mac OS X. One really important issue is that the folder you create the link to is really moved to a super-magical super-hidden folder called /.HFS+ Private Directory Data%000d/dir_xxx where xxx is the inode number of the "source_folder" - remember the format of the command is
hlink source_folder target_folder
So because of this, you have to be careful of not having any files open in the "source_folder" because if you do, they just got moved to the super-magical folder and you will likely have a problem if you try and save any changes to those files that were open in the "source_folder". This happened to me a couple of times until it dawned on me what was happening and the solution is pretty simple. I noticed that you couldn't do a "ls -la" command any longer without getting funny errors for all the folders/directories that were in the original "source_folder" but you could do a "ls" command and all looked well.
If you run "Verify disk" in the "Disk Utility" program, you will notice that it probably complains and gives a "Volume bitmap needs minor repair for orphaned blocks" which is what just happened with the creation of the super-magical folder and the movement of the "source_folder" to it.
If you do find yourself in this situation with "orphaned blocks", first save the changed files to some other temporary location not in the volume containing the "source_folder" tree, then use "Disk Utility" to unmount and remount the volume that contains the "source_folder" or just restart the computer. Then copy the files you saved to the temporary locations back to their original locations and you should be back in business. This is what worked for me, so can't guarantee this will work for you too. So it might be a good idea to try this out on a volume you have a good backup of just in case.
It seems so very weird that all this overhead occurs just for the simple task of creating a hard link to a folder. Does anyone have any idea why Mac OS X goes to all this effort for this hard link creation to folders? Does it have something to do with the fact that this is a "journaled" file system?
I discovered the info about the super-magical, super-hidden location by reading Amit Singh's explanation of his "hfsdebug" utility. If you want more details see his web site at Amit Singh's hfsdebug utility. It's a very interesting piece of software and will tell you lots of details about HFS+ file systems. It's free and I encourage you to download it and try it out. It's no longer supported but it still works on both Snow Leopard and Leopard - basically any HFS+ supported system. You can't really do any harm with it as it's a "read-only" tool - so it's great to use to look at some details of the filesystem.
One more issue about these "hard links to folders" - once you create one and the super-magical super-secret-hidden folder gets created, it's there for good. Even if you unlink the folder that caused it to be created in the first place, this magic folder stays around. Not sure why, but it definitely does. You can use "hfsdebug" to find this out if you wish to try it out. You can also use "hfsdebug" to find out how many of these "hard links to folders" exist on a drive. For these details refer to Amit's article on the "hfsdebug" utility.
He also has another newer utility that's supported but costs. It's called fileXray and costs $79 for one person on any number of computers in the same household for a personal non-business type license. It has an extensive 173-page User Guide that you can download to see what it can do before you purchase. Unfortunately there is no trial version, so read the manual and check out the web site for more details to see if it can help you out of a jam. Learn all the details about it at their web site - see fileXray web site for more info.
There are a couple of issues you should be aware of when using these hard links to folders. If the volume that they are created on is mounted to a remote client, there can be significant problems, depending on how they are mounted. If you use AFP to mount the volume to a remote client, there are big problems as any folder that currently has a hard link to it or has ever had one but later removed, will be unable to be used as all the lower level folders (but not files) will be inaccessible from either the Finder or a Terminal window. If you try to do a simple "ls -lR" command, it will fail and give you "ls: xxx: No such file or directory" error messages for all lower level folders. If you use a Finder window to traverse the directory tree of the remote volume, the folders that are in the folder that had or has a hard link to it will simply disappear without any error when you first click on the folder name.
These problems don't appear to occur (except for the error message) if you use NFS to mount the remote client (and assuming you had a NFS server on the system that has the volume as a local HFS+ filesystem). Details on how to use NFS to mount volumes are not provided here. I used a nice program from Dr. Marcel Bresink called "NFS Manager" to help with the NFS mounts on the server and client. You can get it from his web site - just search for "Bresink NFS Manager" in your favorite search engine, but he has a free trial version so you can try before you buy. It's not that big a deal if you want to learn how to do the NFS mounts, but the "NFS Manager" makes it pretty easy to set things up and to tweak all the different settings to help optimize it. He has several other neat Mac OS X utilities too that are very reasonably priced - one called "Hardware Monitor" that lets you monitor and graph all kinds of things like power usage, temperature of CPU, speed of fans and many many other variables for both the local and remote Mac systems over extended periods of time (from minutes to days). Definitely worth checking out if you are into handy utilities.
One thing I did notice is that NFS file transfers were about 20% slower than doing them via AFP, but your "mileage may vary", so no guarantees one way or the other, but I would rather have something that works even if I have to pay a 20% performance hit as compared to having nothing work at all.
Apple is aware of the problems with hard links and remote AFP filesystems, and they refer to it as an "implementation limitation" of the AFP client - I prefer to call it what it really appears to me to be - A BUG!!! I can only hope the next release of Mac OS X fixes the problem, as I really like having the ability to use hard links to folders when it makes sense.
These notes are my own personal opinion and I don't make any warranty about their correctness so use them at your own risk. Have a good backup before you play around with these "hard links to folders" just in case something unforeseen happens. But I hope you have fun if you do decide to look a bit more into this interesting aspect of Mac OS X.
You can't do it directly in Bash, then. However... I found an article here that discusses how to do it indirectly: http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html by compiling a simple little C program:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3)
        return 1;

    /* link() creates the hard link: argv[1] is the existing directory, argv[2] the new link */
    int ret = link(argv[1], argv[2]);
    if (ret != 0)
        perror("link");
    return ret;
}
...and build in Terminal.app with:
$ gcc -o hlink hlink.c -Wall
Piffle. On 10.5, it tells you in the man page for ln:
-d, -F, --directory
allow the superuser to attempt to hard link directories (note:
will probably fail due to system restrictions, even for the
superuser)
So yes:
sudo ln -d existing_dir new_hard_link
Give it your password, and you're not done yet. You didn't document it, did you? You must document hard linked directories; even if it's a single user machine.
Deleting is a different story: if you go about it the usual way to delete directories, you'll delete the contents. So you must "unlink" the directory:
unlink new_hard_link
There. Hope you don't wreck your filesystem!
Cross-posting this great tool which neatly solves the problem, originally posted by Sam:
To install Hardlink, ensure you've installed homebrew, then run:
brew install hardlink-osx
Once installed, create a hard link with:
hln [source] [destination]
I also noticed that the unlink command does not work on Snow Leopard, so I added an option to unlink:
hln -u destination
Code is available on Github for those who are interested: https://github.com/selkhateeb/hardlink
Yes it's supported by the kernel and the filesystem, but since it's not intended for general usage it's not exposed to the shell.
You could probably work out which APIs Time Machine uses and wrap them in a commandline tool, but it'd be better to take the hint and steer well-clear.
The OSX version of ln cannot do it, but, as mentioned in the other answer by rich, it is possible with the GNU version of ln which is available in homebrew as gln as part of the coreutils formula. man gln lists the -d option with the OSX-specific warning provided in rich's answer. In other words, it does not work in all cases. What exactly determines whether it works or not does not seem to be documented anywhere.
As a prerequisite, install coreutils:
brew install coreutils
Now you can do:
sudo gln -d /original_folder /mirror_folder
IMPORTANT: To remove the hard link you must use gunlink:
sudo gunlink /mirror_folder
❗️❗️❗️ Using rm or Finder will also delete the original folder.
FYI: The coreutils homebrew formula provides the GNU-compatible versions of generic unix tools. Use brew list coreutils to see the full list.
As of 2018 this is no longer possible. APFS (introduced in macOS High Sierra 10.13) is not compatible with directory hardlinks. See https://github.com/selkhateeb/hardlink/issues/31
My case was that I found out that, from a Windows virtual machine, I cannot follow symlinks (I wanted to test some HTML pages in Internet Explorer), and my directory structure had symlinks for the CSS and images folders.
My workaround was a different approach than the other answers imply: I used rsync to create a copy of the folder. rsync can resolve the symlinks and copy the linked files instead.
This solved my problem without using hard links to directories. And it's actually an easy solution if you're just working on a small set of files.
rsync -av --copy-dirlinks --delete ../htmlguide ~/src/
From the article linked to, you'll get that error if you try to create the hard link in the same directory as the original. You have to create it somewhere else.
In Linux you can use a bind mount to simulate hard-linking directories. Not sure about OS X:
sudo mount --bind /some/existing_real_contents /else/dummy_but_existing_directory
sudo umount /else/dummy_but_existing_directory
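If the bind mount should survive a reboot, one common Linux approach (sketched here with the same example paths) is to add it to /etc/fstab:
# append an fstab entry so the bind mount is recreated at boot
echo '/some/existing_real_contents /else/dummy_but_existing_directory none bind 0 0' | sudo tee -a /etc/fstab
# apply it immediately without rebooting
sudo mount -a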
This can also be done with built-in Perl (from Terminal) without compiling anything. My specific use case is for Google Drive (which doesn't support symbolic links), so the examples below reflect the use case.
To link your "Documents" folder to Google Drive so it's synced:
perl -e 'link "/Users/me/Documents", "/Users/me/Google Drive/Documents"'
To remove the link to your "Documents" folder from Google Drive:
sudo perl -U -e 'unlink "/Users/me/Google Drive/Documents"'
You need "root" to unlink (see "unlink" perldoc).
Another solution is to use bindfs https://code.google.com/p/bindfs/ which is installable via MacPorts:
sudo port install bindfs
sudo bindfs ~/source_dir ~/target_dir
The short answer is you can't. :) (except possibly as root, when it would be more accurate to say you shouldn't.)
Unixes only allow a set number of links to a directory - ".." from within each of its children and "." from within itself. Anything else is potentially a recipe for a very confused directory tree. This is/was apparently a design decision by Ken Thompson.
(Having said that, apparently Apple's Time Machine does do this :) )
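A small illustration of why directories are already tied into the hard-link bookkeeping (BSD/macOS stat syntax; on Linux the equivalent is stat -c %h):
# a directory's link count is 2 ("." plus its own name in the parent)
# plus one ".." entry per subdirectory
mkdir -p demo/sub1 demo/sub2
stat -f %l demo    # prints 4: the name "demo", "demo/.", "sub1/..", "sub2/.."
# arbitrary extra links would break that invariant and can create cycles,
# which is why ln normally refuses to hard-link directories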
In case there are no subfolders, you can try
ln folder_path/*.* target_folder
It worked for me on OS X 10.9.

Name screen session log using session name

I'm using this very simple .screenrc:
logtstamp on
logfile /tmp/screenlog-%S.log
I tried launching screens with these two methods:
screen -L -S testing
screen -S tester -L
but the filename used is /tmp/screenlog.0S.log. What am I doing wrong? I am using Screen version 4.00.03jw4 (FAU) 2-May-06, and according to the manual I should be able to name the log file using the session name.
If you look at the man page (man screen) for your (8-year-old?) version of screen, you'll see it's missing the %S specifier. They must have added it after your version. I'm not sure why Ubuntu 12.04 shipped a screen from 2006.
P.S. I'd advocate looking into tmux. It's a little bit harder to learn, but a lot more flexible: you can move windows between sessions, you can see multiple windows at once, you can nest sessions inside of other sessions, etc.
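For comparison, a rough sketch of per-session logging in tmux (the session and file names are just examples):
# start a detached session named "testing"
tmux new-session -d -s testing
# log everything printed in its active pane to a file named after the session
tmux pipe-pane -o -t testing 'cat >> /tmp/tmuxlog-testing.log'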
Also, if you are just looking to log the output of long-running processes, take a look at nohup.

Is it possible to delete a CentOS boot option?

In the boot options of my installed CentOS on VBox, I have the following entries, and I can't figure out how to eliminate the ones that don't work anymore, e.g. the first one, which reports that the kernel root is unavailable at boot. I can only choose the last one to boot the system.
> CentOS(2.6.32-200.17.1.el6.x86_64)
> CentOS(2.6.32-200.17.1.el6.x86_64.debug)
> CentOS(2.6.32-200.4.2.el6.x86_64.debug)
> CentOS(2.6.32-200.4.2.el6.x86_64)
> CentOS(2.6.32-200.4.1.el6.x86_64)
> CentOS(2.6.32-200.el6.x86_64)
Where are these entries stored once I boot the system with the last option? What if I would like to completely delete one of them? And what are the .debug entries for?
Thank you for any help.
On most distros today the boot manager is GRUB. The configuration of the boot menu is usually stored in /boot/grub, in a file called menu.lst or grub.cfg depending on the GRUB version and distro. In that file you can comment out the set of lines corresponding to the entry you don't want in the menu - the syntax should be pretty intuitive.
On some distros the file is generated by a set of scripts, in which case a comment at the top says you shouldn't edit the file directly. For example, in Debian the scripts which generate the configuration reside in /etc/grub.d/ and they do all sorts of auto-probing for available OSes. In this case one needs to either modify the scripts or remove the OS images which are automatically appended to the menu. The exact way to do this cleanly may vary depending on your setup - perhaps some of these boot images can be removed using a package manager, which would be more elegant than just removing the files manually.
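On CentOS 6 the package-manager route mentioned above would look roughly like this (the version string is taken from the question's menu; removing a kernel RPM normally removes its GRUB entry for you):
# list the installed kernel and kernel-debug packages
rpm -q kernel kernel-debug
# remove one specific old kernel; its entry disappears from the boot menu
sudo yum remove kernel-2.6.32-200.4.1.el6.x86_64
# or, with the yum-utils package installed, keep only the newest kernel
sudo package-cleanup --oldkernels --count=1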
Either way, be careful, since removing the wrong file related to booting may make it impossible to boot your OS or even to start GRUB at all if you're extremely unlucky.
