Copy the data generated by locale-gen to another computer - linux

I am using an OpenVZ VPS with only 128M RAM.
The RAM is so limited that I cannot get locale-gen to run successfully. The script always gets killed during the operation.
Killed localedef -i $input -c -f $charset -A /usr/share/locale/locale.alias $locale
Is there any way that I can set correct locale information manually? e.g. run the command on another computer and copy necessary files?

Short answer: no, it won't help to copy locale data generated elsewhere and it might corrupt the system since commands like cd and ls depend on it.
It should suffice to run locale-gen on a 128M RAM VPS.
If it keeps failing, try switching to a locale with a smaller footprint. Locales ending in .UTF-8 require more memory and CPU time to generate. In most cases, switching from en_US.UTF-8 to en_US.iso88591 saves enough memory.
So instead of
sudo locale-gen en_US.UTF-8
It's worth trying
sudo locale-gen en_US.iso88591
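If the lighter locale generates successfully, you can then make it the default. A sketch, assuming a Debian/Ubuntu-style system where update-locale exists (the exact locale spelling reported by locale -a can vary between systems):

```shell
# Confirm the newly generated locale is available
locale -a | grep -i 'en_US'

# Make it the system-wide default (writes /etc/default/locale)
sudo update-locale LANG=en_US.ISO-8859-1
```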

Rust compilation on AWS fails while succeeding on other machines [duplicate]

I am using openSUSE, specifically the VMware variant linked from Mono's website. I get this error. Does anyone know how I might fix it?
make[4]: Entering directory `/home/rupert/Desktop/llvm/tools/clang/tools/driver'
llvm[4]: Linking Debug+Asserts executable clang
collect2: ld terminated with signal 9 [Killed]
make[4]: *** [/home/rupert/Desktop/llvm/Debug+Asserts/bin/clang] Error 1
The full text can be found here
Your virtual machine does not have enough memory to perform the linking phase. Linking is typically the most memory-intensive part of a build, since it's where all the object code comes together and is operated on as a whole.
If you can allocate more RAM to the VM, then do that. Alternatively, you could increase the amount of swap space. I am not that familiar with VMs, but I imagine the virtual hard drive you set up will have a swap partition. If you can make that bigger, or allocate a second swap partition, that would help.
Increasing the RAM, if only for the duration of your build, is the easiest thing to do though.
I also ran into this issue and solved it with the following steps (it is purely a memory issue):
Check the current swap space by running the free command (around 10 GB is recommended for this build).
Check the swap partition:
sudo fdisk -l
/dev/hda8 none swap sw 0 0
Make swap space and enable it.
sudo swapoff -a
sudo /sbin/mkswap /dev/hda8
sudo swapon -a
If your swap partition is not big enough, you can create a swap file and use that instead.
Create swap file.
sudo fallocate -l 10g /mnt/10GB.swap
sudo chmod 600 /mnt/10GB.swap
OR
sudo dd if=/dev/zero of=/mnt/10GB.swap bs=1024 count=10485760
sudo chmod 600 /mnt/10GB.swap
Mount swap file.
sudo mkswap /mnt/10GB.swap
Enable swap file.
sudo swapon /mnt/10GB.swap
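To confirm the new swap space is active, and (optionally) to make the swap file survive reboots, something like the following should work, assuming a standard /etc/fstab setup:

```shell
# Show active swap devices/files
swapon -s
free -m

# Persist the swap file across reboots
echo '/mnt/10GB.swap none swap sw 0 0' | sudo tee -a /etc/fstab
```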
I tried with make -j1 and it works! But it takes a long time to build.
I had the same problem building on a VirtualBox system. FWIW I was building on a laptop with XP and 2GB RAM. I had to bump the virtual RAM up to 1462MB to get a successful build. Also note the recommended disk size of 8GB is not sufficient to build and install both LLVM and Clang under Ubuntu. I'd recommend at least 16GB.
I would suggest using the -l (--max-load) option instead of limiting -j in this case.
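For illustration (the numbers are arbitrary): -l stops make from spawning new jobs while the load average is above the given threshold, so parallelism backs off automatically when the machine is under pressure, instead of being capped at a fixed -j value.

```shell
# Run up to 4 parallel jobs, but don't start new ones
# while the 1-minute load average exceeds 2.0
make -j4 -l 2.0
```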

OSX Mojave sysctl -p illegal

I am running macOS 10.14 "Mojave" on my MacBook, and I am trying to increase the fs.inotify.max_user_watches value in /etc/sysctl.conf (to solve another problem). To apply the change I need to run sudo sysctl -p /etc/sysctl.conf. But I get
"illegal option -- p"
When I check the man page on macOS, it in fact does not have the -p option (to supply a file) nor the --system option (to load all known config files); on another (Linux) system I can clearly see that those options are available.
How else then can I get sysctl to take my new configs? Is there a different way to configure fs.inotify.max_user_watches on osx?
On Big Sur, the first lines for sysctl manpage are:
SYSCTL(8) BSD System Manager's Manual SYSCTL(8)
NAME
sysctl -- get or set kernel state
This must mean sysctl itself can be used to update some values. However, sysctl does not show the fs.inotify.max_user_watches name. That is because inotify is a Linux-specific API: the fs.inotify.* keys simply do not exist on macOS.
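For keys that do exist on macOS, sysctl can read and write values directly. A sketch (kern.maxfiles is just an illustrative key, and -w changes are not persistent across reboots):

```shell
# Read a kernel value
sysctl kern.maxfiles

# Set it for the running kernel only
sudo sysctl -w kern.maxfiles=65536
```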

modify kernel parameters in linux without using sysctl

I have an embedded system running an old Linux OS. When I enter the "uname -r" command, I get the version information "3.3.8-3.4".
I want to modify some of the network kernel parameters (increase the TCP receive buffer size, etc.) in /proc/sys. But the sysctl command does not exist on this old system, and sysctl.conf does not exist under the /etc directory either.
I tried changing the kernel parameter files manually, but the system does not allow this operation even for the superuser.
How can I modify kernel parameters in this Linux version?
You can write to /proc/sys directly. Note that these are not regular files, so editing them in a text editor will generally fail; write to them with echo instead. For example, the following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
... is basically the same as
sysctl -w net.ipv4.ip_forward=1
However, you'll need to make sure on your own that parameters will be set on boot.
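One common way to do that on a system without sysctl is to replay the echo commands from a boot script. A sketch, assuming the init system runs /etc/rc.local (or an equivalent hook) at startup; the paths and values are illustrative:

```shell
#!/bin/sh
# /etc/rc.local: re-apply kernel parameters on every boot,
# since /proc/sys settings are lost at shutdown.
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 4194304 > /proc/sys/net/core/rmem_max   # raise max TCP receive buffer
```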

How to measure IOPS for a command in linux?

I'm working on a simulation model where I want to determine when storage IOPS capacity becomes a bottleneck (e.g. an HDD has ~150 IOPS, while an SSD can have 150,000). So I'm trying to come up with a way to benchmark the IOPS a command (git) uses for some of its different operations (push, pull, merge, clone).
So far, I have found tools like iostat, however, I am not sure how to limit the report to what a single command does.
The best idea I can come up with is to determine my HDD's IOPS capacity, run time on the actual command, see how long it lasts, and multiply the duration by the IOPS capacity:
HDD ->150 IOPS
time df -h
real 0m0.032s
150 * 0.032 = 4.8 I/O operations
But, this is of course very stupid, because the duration of the execution may have been related to CPU usage rather than HDD usage, so unless usage of HDD was 100% for that time, it makes no sense to measure things like that.
So, how can I measure the IOPS for a command?
There are multiple time(1) commands on a typical Linux system; the default is a bash(1) builtin, which is somewhat basic. There is also /usr/bin/time, which you can run either by calling it exactly like that, or by telling bash(1) not to use aliases and builtins by prefixing it with a backslash, thus: \time. Debian has it in the "time" package, which is installed by default; Ubuntu is likely identical, and other distributions will be quite similar.
Invoking it in a similar fashion to the shell builtin is already more verbose and informative, albeit perhaps more opaque unless you're already familiar with what the numbers really mean:
$ \time df
[output elided]
0.00user 0.00system 0:00.01elapsed 66%CPU (0avgtext+0avgdata 864maxresident)k
0inputs+0outputs (0major+261minor)pagefaults 0swaps
However, I'd like to draw your attention to the man page which lists the -f option to customise the output format, and in particular the %w format which counts the number of times the process gave up its CPU timeslice for I/O:
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=184
$ \time -f 'ios=%w' du Maildir >/dev/null
ios=1
Note that the first run stopped for I/O 184 times, but the second run stopped just once. The first figure is credible, as there are 124 directories in my ~/Maildir: the reading of the directory and the inode gives roughly two IOPS per directory, less a bit because some inodes were likely next to each other and read in one operation, plus some extra again for mapping in the du(1) binary, shared libraries, and so on.
The second figure is of course lower due to Linux's disk cache. So the final piece is to flush the cache. sync(1) is a familiar command which flushes dirty writes to disk, but doesn't flush the read cache. You can flush that one by writing 3 to /proc/sys/vm/drop_caches. (Other values are also occasionally useful, but you want 3 here.) As a non-root user, the simplest way to do this is:
echo 3 | sudo tee /proc/sys/vm/drop_caches
Combining that with /usr/bin/time should allow you to build the scripts you need to benchmark the commands you're interested in.
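Put together, a minimal cold-cache benchmark sketch (git status is just a stand-in for whichever git operation you want to measure):

```shell
#!/bin/sh
# Flush dirty writes, drop the page cache, then count
# the number of times the command waited for I/O.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
/usr/bin/time -f 'ios=%w' git status >/dev/null
```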
As a minor aside, tee(1) is used because this won't work:
sudo echo 3 >/proc/sys/vm/drop_caches
The reason? Although the echo(1) runs as root, the redirection is as your normal user account, which doesn't have write permissions to drop_caches. tee(1) effectively does the redirection as root.
The iotop command collects I/O usage information about processes on Linux. By default, it is an interactive command, but you can run it in batch mode with -b / --batch. Also, you can pass a list of processes with -p / --pid. Thus, you can monitor the activity of a git command with:
$ sudo iotop -p $(pidof git) -b
You can change the delay with -d / --delay.
You can use pidstat:
pidstat -d 2
More specifically pidstat -d 2 | grep COMMAND or pidstat -C COMMANDNAME -d 2
The pidstat command is used for monitoring individual tasks currently being managed by the Linux kernel. It writes to standard output activities for every task selected with option -p or for every task managed by the Linux kernel if option -p ALL has been used. Not selecting any tasks is equivalent to specifying -p ALL but only active tasks (tasks with non-zero statistics values) will appear in the report.
The pidstat command can also be used for monitoring the child processes of selected tasks.
-C comm
Display only tasks whose command name includes the string comm. This string can be a regular expression.

How do you configure a Hudson Linux slave to generate core files?

I've been seeing occasional segmentation faults in glibc on several different Fedora Core 9 Hudson slaves. I've attempted to configure each slave to generate core files and place them in /corefiles, but have had no luck.
Here is what I've done on each linux slave:
1) Create a corefile storage location
sudo install -m 1777 -d /corefiles
2) Directed the corefiles to the storage location by adding the following to /etc/sysctl.conf
kernel.core_pattern = /corefiles/core.%e-PID:%p-%t-signal_%s-%h
3) Enabled unlimited corefiles for all users by adding the following to /etc/profile
ulimit -c unlimited
Is there some additional Linux magic required or do I need to do something to the Hudson slave or JVM?
Thanks for the help
Did you reboot or run "sysctl -p" (as root) after editing /etc/sysctl.conf ?
Also, if I remember correctly, ulimit values are per user, and a ulimit call won't survive a reboot. You should add this to /etc/security/limits.conf:
* soft core unlimited
Or call ulimit in the script that starts Hudson if you don't want everyone to produce core dumps.
I figured this out :-).
The issue is that Hudson invokes bash as a non-interactive shell, which bypasses the ulimit setting in /etc/profile. The solution is to add the BASH_ENV environment variable to the Hudson slaves and set its value to a file containing ulimit -c unlimited.
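A sketch of that fix (the file path is an arbitrary choice; bash sources the file named by BASH_ENV whenever it starts a non-interactive shell):

```shell
# Create a file that raises the core size limit
echo 'ulimit -c unlimited' | sudo tee /etc/hudson-bash-env

# Then, in the Hudson slave's node configuration, set:
#   BASH_ENV=/etc/hudson-bash-env
```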
