ubuntu /var full and I do not have read permissions [closed] - ubuntu-10.04

I seem to have hosed myself. I was running/learning a failing PHP/MySQL data-scrape script and went off to bed. I don't know if it looped or what exactly happened, but I came back and the disk showed as almost full. The analyzer said I had used almost 100G of a 100G drive under the /var directory. I tried du and df -ah, but they will not show where the hog is; they say "Permission denied" for many of the directories.
Clues:
1) gdm directory is listed as recent but won't let me look inside.
2) I was editing with gksudo gedit, because I could not write to the /var/www files for PHP. In the ps output, a nautilus process appears to be sitting there dormant.
Any help is greatly appreciated and I love ubuntu, but I'm pretty much a linux newbie.
Thanks.

Do you have root permissions?
sudo bash
Then you can go in and look into what is going on.
cd /var
du -s *
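To make the culprit easier to spot from that root shell, a depth-limited, human-readable du sorted by size helps (a minimal sketch; the paths are just examples):
# Show the size of each directory one level below /var, largest last
du -xh --max-depth=1 /var | sort -h
# Then descend into whichever directory dominates, e.g. /var/log
du -xh --max-depth=1 /var/log | sort -h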
Oh, and I hope I don't have to mention that you should not delete stuff that you didn't create yourself. You might just delete something important.
You report that /var/log/apache seems "large". I do NOT recommend simply deleting the files. Instead, if you are very very sure that no-one will ever need to see any historical archives of the errors and accesses made, you can:
cd /var/log/apache
for f in *; do > "$f"; done
which will truncate the files in place. This is less likely to cause problems than deleting them, since it avoids non-existent files, bad permissions, and missed rotation signalling. If you might need these files in the future, we could talk about using logrotate to rotate and keep them instead.
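If rotating and keeping compressed copies is what you actually want, a small logrotate snippet is one option. This is only a sketch: the file name and retention are assumptions, and on Debian/Ubuntu the apache2 package normally ships its own /etc/logrotate.d/apache2 already.
# /etc/logrotate.d/apache-custom (hypothetical file)
/var/log/apache/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}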

The filesystem permissions require root access to read many of the directories in /var:
ls -l /var
...
drwx--x--x 3 root root 4096 2011-04-04 23:13 www
You just need root privileges to read them all:
sudo -s
cd /var/www
ls -l
Be careful running with a root shell. You can make a ton of mistakes really quickly, some might be difficult to undo. :)
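If a persistent root shell makes you nervous, an alternative is to elevate one command at a time with sudo, so only that command runs as root (a minimal sketch):
# List a protected directory without opening a root shell
sudo ls -l /var/www
# Wrap a whole pipeline in sh -c so the glob and the pipe also run as root
sudo sh -c 'du -sh /var/* | sort -h'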

Related

Linux backup files command [closed]

I had a problem with my Ubuntu install. I was able to boot from a live CD and connect an external hard drive, and I want to back up my files now.
I tried cp -r /home destination, but I ran into problems with spaces in filenames and symlinks, and got errors such as "Cannot create fifo: Operation not permitted", "Permission denied", "Invalid argument", and plenty more. What is the best way to do this? Will cp -a fix these issues, or should I do something more clever?
I found out that rsync doesn't have problems with the filenames, but it doesn't copy the .so and .a files, and it runs extremely slowly compared to cp.
EDIT:
I followed the advice of John Bollinger and created an archive, because my external drive isn't ext4-formatted and so cannot preserve all the file attributes.
From a live CD, /home refers to the live CD's own home, so one has to point tar at the mounted disk instead:
tar -c -z -f /my/backup/disk/home.tar.gz -C / media/ubuntu/longDeviceName/home
Despite sudo, I still received some "Cannot open: Permission denied" and "socket ignored" errors while creating the tar, for several .png files under .cache/software-center/icons/blabla. I wonder whether that is normal.
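Such errors typically come from transient or unreadable items (sockets, per-user caches) that are not worth archiving anyway. One way to quiet them is to exclude cache directories; a sketch, reusing the paths above (which may differ on your system):
# Skip anything named .cache anywhere under the tree being archived
sudo tar -c -z -f /my/backup/disk/home.tar.gz --exclude='.cache' -C / media/ubuntu/longDeviceName/home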
If you do not want to reformat your backup disk with a filesystem that has enough capabilities to represent all of the attributes of your files (e.g. ext4) then preserving them across the backup requires putting them into some sort of container. The traditional container for this sort of thing is a [compressed] tarball. You might therefore try
tar -c -z -f /my/backup/disk/home.tar.gz -C / home
You would recover the contents of that tarball via
tar -x -z -f /my/backup/disk/home.tar.gz -C /
Either or both might need to be run with privilege, obtained by being root or by using sudo.
That will handle symlinks, executable files, and any filename just fine, but it may still have trouble if the data you are trying to back up include any special files, such as device nodes or FIFOs. In that event, you may simply need to remove such files first, and recreate them after restoring the other files. You can identify such files via find:
find /home -not -type f -not -type d -not -type l
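If you would rather recreate such files after the restore than lose track of them, you can record them first. A minimal sketch that handles only named pipes (mkfifo recreates them with default ownership and umask-derived mode, not the originals):
# Save the locations of named pipes before backing up
find /home -type p > /my/backup/disk/fifo-list.txt
# After restoring, recreate each one
while read -r p; do mkfifo "$p"; done < /my/backup/disk/fifo-list.txt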
The accepted answer does not back up / recover file permissions.
You should use the p flag both while backing up and while recovering.
Also, you might want to recover into a specific folder and then move things around manually, so that you don't overwrite files you want to keep.
The / at the end of the command means the entire system is backed up:
sudo tar -cvpzf /backupfolder/backup.tar.gz --exclude=/mnt /
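Note that with / as the source, tar will also try to descend into pseudo-filesystems and into the backup file itself; excluding those keeps the archive sane. A sketch, assuming the same /backupfolder destination as above:
sudo tar -cvpzf /backupfolder/backup.tar.gz --exclude=/backupfolder --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp --exclude=/mnt /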
sudo mkdir /recover_v1.1
sudo tar -xvpzf backup.tar.gz -C /recover_v1.1
... // replacing whatever you need manually
Manually replace files you need to recover and keep those you want to keep.
-x extract
-p preserve permissions
-v verbose: print file names while working
-z gzip compression
-f name of the archive file
You might want to set up cron jobs to run the backup automatically.
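For the automation, one simple option is an entry in root's crontab (a sketch; the schedule and paths are assumptions). Open it with sudo crontab -e and add a line such as:
# Full backup every Sunday at 02:30 (% must be escaped as \% inside crontab)
30 2 * * 0 tar -cpzf /backupfolder/backup-$(date +\%F).tar.gz --exclude=/backupfolder --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt /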

Changed /usr file permissions to 0744 with sudo chmod command [closed]

I had a brilliant idea to change my /usr permissions to 0744 using 'sudo chmod 0744 /usr' while I was trying to install a program. So now I can't access any of my files, my home directory 'home/username' apparently doesn't exist (which isn't true), and all the commands located in the /usr/bin folder supposedly 'do not exist', which also isn't true.
I think the reason for the commands not existing is because I don't have write/execute permissions on the /usr/bin folder (owned by root), but I don't know why my home folder 'does not exist'.
My question is what exactly have I done, and how do I fix it, if possible?
As a side note, the computer is now having a problem when I try to turn it on. It immediately goes to a blinking cursor (top left) and black screen, but I can ssh into the machine. Finally, I don't have access to the root user or a root shell on this machine.
Solved:
What happens when you run a command like sudo chmod 744 /usr is that all users get locked out of their home folders and the computer can no longer boot (hence the black screen with a blinking cursor). Also, once you drop into recovery, the root file system (/dev/sda1 for me) is mounted read-only.
Since you can't boot the computer normally, you need to go into single-user mode from the GRUB menu. Next, run mount -o remount,rw / (or name your root device explicitly, e.g. /dev/sda1). This remounts the root file system read-write. Then run chmod 755 /usr, which sets the permissions back to read/write/execute for the owner and read/execute for group and other users.
You need to log in as root, or boot the computer into single-user mode, and then execute:
chmod 755 /usr
You won't be able to do this with sudo because that command is in /usr, and without execute permission you can't access anything in it.
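Once you do have a root shell (single-user mode or a recovery/live environment), it is worth verifying the result; a minimal sketch, assuming a standard layout:
chmod 755 /usr
# Print octal mode, owner:group and name; expect 755 root:root for each
stat -c '%a %U:%G %n' /usr /usr/bin /usr/lib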

What does the 'x' mean in rwx on a directory? [closed]

I created testdir with mkdir testdir, and created a file in it with touch testdir/a.
drwxr-xr-x 2 jermaine jermaine 4096 Mar 12 22:57 testdir
If I remove the 'x' with chmod -x testdir, then I can no longer do any of the following:
cd testdir
touch testdir/b
ls -l testdir
cat testdir/a
So my question is: why can't I list the file hierarchy under a directory that has 'r' but not 'x'? What exactly does the 'x' bit mean on a directory?
I know explanations like "x means entering the directory; you have to enter before you can read and write", but what does 'enter' mean? I would really appreciate an answer at the inode or dentry level. Thanks a lot.
"Execute" is the traversal permission on a directory. It allows you to access files and folders within the directory.
If you can read a directory, you can list the contents.
If you can write a directory, you can make new files and folders within it.
If you can "execute" a directory, you can move through the hierarchy, even if you don't know what's inside.
When applying permissions to directories on Linux, the permission bits have different meanings than on regular files.
The write bit allows the affected user to create, rename, or delete files within the directory (in combination with the execute bit), and to modify the directory's attributes
The read bit allows the affected user to list the names of the entries within the directory
The execute bit allows the affected user to enter the directory and access the files and directories inside; a short demonstration follows below
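You can watch the read/execute split in a throwaway directory (a minimal sketch; error messages are approximate):
mkdir testdir
touch testdir/a
chmod a-x testdir   # keep read, drop execute (traversal)
ls testdir          # the name 'a' still lists, because read is enough to enumerate entries
ls -l testdir       # fails to stat: "ls: cannot access 'testdir/a': Permission denied"
cat testdir/a       # "Permission denied": the path cannot be traversed to reach the inode
chmod a+x testdir   # restore traversal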
Execute permission on a directory means you can access files in that directory.
Check this link out for more information about Unix permissions:
http://www.cyberciti.biz/faq/how-linux-file-permissions-work/

read-only files on Ubuntu [closed]

When trying to delete a file, I found the rm command on the Ubuntu page at https://help.ubuntu.com/community/DeletingFiles, and on this page I came across the term "read-only files". I have tried to google "read-only file linux", but cannot find any definition. Could you tell me what read-only means? Does it mean that owner, group and other all have only read permission? Thank you!
Does it mean all owner, group and other have only read permission?
They may have different permissions. But you (the current user) have only read permission.
Linux has three kinds of permission, for user, group and others:
r: read permission
w: write permission
x: execute permission
If the file is read-only, it means you (the current user) don't have the w permission on it, so you can't modify its contents; rm will also warn before removing a write-protected file (whether you can actually delete it is governed by your write permission on the containing directory).
Use:
chmod +w FILE
to add that permission. You can change a file's permissions only if you are its owner.
Otherwise, you can remove the file using sudo, gaining super user privilege.
sudo rm FILE
It will prompt you for your password, and it will work only if you're in the /etc/sudoers file (and you're likely to be there if you're the only user, since you're using Ubuntu).
A read-only file is a file whose content you don't have permission to alter. To see detailed info about your permissions, use ls -l; if you want to change the permissions, use chmod.
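Putting that together, a quick check-and-fix sequence might look like this (a sketch; FILE is a placeholder name):
ls -l FILE       # inspect the permission bits and the owner
chmod u+w FILE   # if you own the file and want to edit it
rm FILE          # deleting needs write permission on the directory; rm warns if the file itself is write-protected
sudo rm FILE     # fall back to root if you lack permission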
Change the permissions with chmod, or try with sudo:
sudo rm file.xxx

Anyone ever actually tried 'rm -rf /*' in Linux? [closed]

Has anyone ever actually tried rm -rf /*, or something similar, on Linux? You always hear people joke about it, but I'm curious whether it actually executes and, if so, what kind of damage it actually does (beyond simply deleting data from the disk).
Just for you, I tried it. I got a whole bunch of "rm: cannot remove ...", even with sudo. If you would like to try it out, I recommend VirtualBox and a copy of Ubuntu.
Have I "tried" it? No. Have I done it? Yes, and it's bad. However, if you're lucky:
You weren't logged in as root, so the damage will be minimal
You've installed safe-rm which will prevent stuff like this.
Yes, I did, but only in a VM that could be reverted, just to test and demonstrate (I used to teach operating systems).
On older distributions it will execute and wipe out your installation, but on most newer distributions it will fail.
If you want to try it, do it somewhere you don't care about, or in a VM that you can revert, like I did.
Wikipedia: rm -rf (variously rm -rf /, rm -rf *, and others) is frequently used in jokes and anecdotes about Unix disasters. The rm -rf / variant of the command, if run by a superuser on the root directory, would cause the contents of nearly every writable mounted filesystem on the computer to be deleted, up to the point the system itself crashes from missing some crucial file, directory, or the like.
http://en.wikipedia.org/wiki/Rm_(Unix)
I think some distros have added protection against this.
EDIT: muffinista gave the link to the protection.
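One such safeguard, for reference, is GNU rm's --preserve-root behaviour, which is the default in current coreutils: rm refuses to act recursively on / itself. The shell expands /* before rm ever sees it, though, so that particular check does not apply to rm -rf /*. The refusal looks roughly like this:
rm -rf /
# rm: it is dangerous to operate recursively on '/'
# rm: use --no-preserve-root to override this failsafe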
