Incorrect key file for table '/tmp/#sql_42cd_0.MYI'; try to repair it - ubuntu-server

I recently had a disk space issue: more than 97% of the disk was full. I freed up space by clearing the log files. The issue I am now having is that my API returns the error
Incorrect key file for table '/tmp/#sql_42cd_0.MYI'; try to repair it
Below is the output of the df -h command I executed over SSH:
root@localhost:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda 24G 16G 6.4G 72% /
devtmpfs 994M 4.0K 994M 1% /dev
none 200M 180K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 996M 0 996M 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 0 1.0M 0% /tmp
Other than this, I also tried running myisamchk -r profiles.MYI to repair the .MYI file, but nothing seems to be working.

Sorted the issue: it was related to an incorrect join in the query. What I do not understand is, if the issue was with the query, why does it produce the error Incorrect key file for table '/tmp/#sql_42cd_0.MYI'; try to repair it?
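A plausible link, going by the df -h output above (a sketch, not a confirmed diagnosis): /tmp is mounted as a 1 MB "overflow" tmpfs, which Ubuntu mounts automatically when the root filesystem is full at boot. MySQL writes implicit on-disk temporary tables for heavy joins under /tmp, so a query with a bad join that produces a huge intermediate result fails as soon as its temp table outgrows that 1 MB, and this is commonly reported as the "Incorrect key file" error. Once the root filesystem has free space again, the overflow mount can be dropped:
# check whether /tmp is still the tiny 1 MB overflow tmpfs
df -h /tmp
# remove the overflow mount so /tmp is backed by / again (run when nothing is writing to /tmp)
sudo umount /tmp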

Related

/dev/mapper/RHELCSB-Home marked as full when it is not after verification

I was trying to copy a 1.5 GiB file from one location to another and was warned that my disk space was full, so I verified with df -h, which gave the following output:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 114M 16G 1% /dev/shm
tmpfs 16G 2.0M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/RHELCSB-Root 50G 11G 40G 21% /
/dev/nvme0n1p2 3.0G 436M 2.6G 15% /boot
/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi
/dev/mapper/RHELCSB-Home 100G 100G 438M 100% /home
tmpfs 3.1G 88K 3.1G 1% /run/user/4204967
where /dev/mapper/RHELCSB-Home seemed to cause the issue. But when running sudo du -xsh /dev/mapper/RHELCSB-Home, I got the following result:
0 /dev/mapper/RHELCSB-Home
and the same for /dev/ and /dev/mapper/. After researching this, I figured it might have been caused by undeleted log files in /var/log/, but the total size of the files there is nowhere near 100 GiB. What could be filling up my disk?
Additional context: I was running a local PostgreSQL database when this happened, but I can't see how that relates to my issue, as the postgres log files are not taking up much space either.
The issue was solved by deleting podman container volumes in ~/.local/share/containers/
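For reference, a sketch of how this kind of usage can be tracked down; note that du has to be pointed at the mount point (/home), not at the device node under /dev/mapper, which is why the command above reported 0:
# largest directories on the /home filesystem (-x stays on this filesystem only)
sudo du -xh --max-depth=2 /home 2>/dev/null | sort -h | tail -20
# space held by podman images, containers and volumes under the user's home
du -sh ~/.local/share/containers
# reclaim unused podman data, including volumes
podman system prune --volumes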

Tensorflow filling up memory

I am logging TensorBoard data on an Ubuntu server. All of a sudden I am getting errors like
-bash: cannot create temp file for here-document: No space left on device
when running cd and hitting Tab in the terminal. It seems like the TensorFlow logs have filled up the disk space.
How do I make TensorFlow not fill up my disk?
Running $ df -h gives:
Filesystem Size Used Avail Use% Mounted on
udev 30G 0 30G 0% /dev
tmpfs 6.0G 8.9M 6.0G 1% /run
/dev/xvda1 73G 73G 0 100% /
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 30G 0 30G 0% /sys/fs/cgroup
tmpfs 6.0G 0 6.0G 0% /run/user/1000
The /tmp folder is the default log directory for TensorFlow; you can change it to any folder. I am posting sample code, try this:
import os
import tensorflow as tf  # TensorFlow 1.x API

# sess, train_summaries and step come from the existing training loop
summary_directory = os.path.abspath("yourlog_dir")
train_summary_dir = os.path.join(summary_directory, "train")
test_summary_dir = os.path.join(summary_directory, "test")
train_summary_writer = tf.summary.FileWriter(train_summary_dir, sess.graph)
train_summary_writer.add_summary(train_summaries, step)
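TensorBoard can then be pointed at the same directory (a usage note; the directory name matches the placeholder above):
tensorboard --logdir yourlog_dir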
OK, so the problem wasn't TensorBoard at all. It turns out my model checkpoints were taking up about 3 GB each. To locate the files taking up space I ran this command:
sudo du -x -h / | sort -h | tail -40
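Since the checkpoints rather than the event logs were the culprit, it is worth knowing that tf.train.Saver only keeps the 5 most recent checkpoints by default (controlled by its max_to_keep argument); if older ones have piled up anyway, a manual clean-up sketch looks like this (your_checkpoint_dir and the model.ckpt prefix are placeholders, so check what is listed before deleting anything):
ls -lht your_checkpoint_dir/                                          # checkpoint files, newest first, with sizes
ls -t your_checkpoint_dir/model.ckpt-* | tail -n +7 | xargs -r rm --  # keep only the 6 newest files (about 2 checkpoints)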

Setting up a swapfile in local SSD (temporary drive) in Azure VM

I'm using a DS4 Azure VM (Ubuntu 14.04). It comes with a 56GB local SSD.
I need to set up a 25GB swapfile on this local SSD. When I do df -h in the VM, I can see that it seems to be mounted at /mnt. Following is the entire output:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 29G 22G 6.4G 77% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 14G 4.0K 14G 1% /dev
tmpfs 2.8G 472K 2.8G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 14G 0 14G 0% /run/shm
none 100M 0 100M 0% /run/user
none 64K 0 64K 0% /etc/network/interfaces.dynamic.d
/dev/sdb1 56G 97M 56G 1% /mnt
However, if I try to initialize a swapfile in /mnt, it still gets added to the available disk space in /dev/sda1.
What do I need to do to set up my swap file? An illustrative example would be great. Thanks in advance.
I normally use the following commands to set up a swapfile:
sudo fallocate -l 25G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
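One observation on the commands above (a sketch, not the accepted approach below): /swapfile is a path on the root filesystem, so the file ends up consuming space on the OS disk (/dev/sda1). Pointing the same commands at the resource-disk mount creates the file on the local SSD instead, with the caveat that Azure can recreate the temporary disk empty, which is why the waagent-based configuration described below is the durable option:
sudo fallocate -l 25G /mnt/swapfile   # create the file on the resource disk rather than on /
sudo chmod 600 /mnt/swapfile
sudo mkswap /mnt/swapfile
sudo swapon /mnt/swapfile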
Update:
I went into /etc/waagent.conf and tweaked the following:
# Format if unformatted. If 'n', resource disk will not be mounted.
ResourceDisk.Format=y
# File system on the resource disk
# Typically ext3 or ext4. FreeBSD images should use 'ufs2' here.
ResourceDisk.Filesystem=ext4
# Mount point for the resource disk
ResourceDisk.MountPoint=/mnt
# Create and use swapfile on resource disk.
ResourceDisk.EnableSwap=y
# Size of the swapfile.
ResourceDisk.SwapSizeMB=26000
After this, I resized (and consequently rebooted) my Azure VM from the portal. Currently I can't tell whether the settings have taken effect. Are my settings correct and what's the best way to ensure they've taken effect?
You are right, we should modify /etc/waagent.conf to add a swap file.
By modifying the /etc/waagent.conf file and setting the following three parameters, a swap file will be created in the directory defined by ResourceDisk.MountPoint:
 
ResourceDisk.Format=y  
ResourceDisk.EnableSwap=y    
ResourceDisk.SwapSizeMB=26000
Then we should restart walinuxagent:
service walinuxagent restart
Commands to show the new swap space in use after agent restart:
dmesg | grep swap
root@ubuntu:~# swapon -s
Filename Type Size Used Priority
/mnt/swapfile file 26623996 0 -1
root@ubuntu:~# df -Th
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 3.4G 12K 3.4G 1% /dev
tmpfs tmpfs 697M 412K 697M 1% /run
/dev/sda1 ext4 29G 869M 27G 4% /
none tmpfs 4.0K 0 4.0K 0% /sys/fs/cgroup
none tmpfs 5.0M 0 5.0M 0% /run/lock
none tmpfs 3.5G 0 3.5G 0% /run/shm
none tmpfs 100M 0 100M 0% /run/user
/dev/sdb1 ext4 99G 26G 68G 28% /mnt
I resized (and consequently rebooted) my Azure VM from the portal
I resized my VM as well, and the swap file was not lost.
Are my settings correct and what's the best way to ensure they've
taken effect?
After modifying /etc/waagent.conf and restarting walinuxagent, we can use swapon -s to check it.

Which value is referenced to show capacity by davfs?

I know there is no way to learn the REAL SIZE of a volume through the WebDAV protocol, so MS Windows shows the SAME SIZE AS THE SYSTEM DRIVE (usually C:).
ref : https://support.microsoft.com/en-us/kb/2386902
Then, which value is referenced by davfs on Ubuntu 14.04?
In my case:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 46G 22G 22G 50% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 4.0K 2.0G 1% /dev
tmpfs 395M 1.5M 394M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 152K 2.0G 1% /run/shm
none 100M 52K 100M 1% /run/user
http://127.0.0.213/uuid-4d4f02fb-6d34-405f-b952-d00eb350b9ee 26G 13G 13G 50% /home/jin/mount/webdavTest
I used a 50G disk and the root partition (sda1) is 46G, but the total size of the WebDAV mount is shown as 26G with 13G used.
I can't work out what rule was used to compute the WebDAV size, and I couldn't find any documentation about this anywhere.
Does anyone know about this?
There is actually a way to communicate the real available space via the quota properties:
https://www.rfc-editor.org/rfc/rfc4331
It's pretty widely supported, apparently just not by Windows.
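If the server implements those properties, they can be queried directly with a PROPFIND request; a minimal sketch using curl against the URL from the question (the credentials are placeholders):
curl -s -X PROPFIND -u user:password -H "Depth: 0" -H "Content-Type: text/xml" \
  --data '<?xml version="1.0"?>
<D:propfind xmlns:D="DAV:">
  <D:prop><D:quota-available-bytes/><D:quota-used-bytes/></D:prop>
</D:propfind>' \
  http://127.0.0.213/uuid-4d4f02fb-6d34-405f-b952-d00eb350b9ee
# the response reports DAV:quota-available-bytes and DAV:quota-used-bytes in bytes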

cronjob : No space left on device

I have attached a new volume to an EC2 instance. The volume was attached successfully. Below is the output of the command:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 32G 8.1G 22G 27% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 12K 2.0G 1% /dev
tmpfs 396M 340K 395M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 0 2.0G 0% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 1.0M 0 100% /tmp
When I tried to add a new cronjob, it showed the error that there is no space left:
sudo crontab -e
/tmp/crontab.jVOoWT/crontab: No space left on device
Your /tmp directory is full. First remove the files from your temp directory by issuing the command below:
rm -rf /tmp/*
Then run your crontab again:
sudo crontab -e
Please execute df -i; maybe the inodes are 100% full. Remove unnecessary files from /var,
then run your crontab again:
crontab -e
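A sketch of the inode check suggested above (/var is just the usual first suspect; any directory that accumulates many small files can exhaust inodes):
df -i                                                    # the IUse% column shows inode usage per filesystem
sudo du --inodes -x / 2>/dev/null | sort -n | tail -20   # directories holding the most files (GNU coreutils 8.22+)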
I had the same issue on AWS, and ultimately the solution was to increase the capacity of the disk, which solved it.
