Automatically clean up tmp directory in Unix/Linux

I am implementing my own file cache server using the Play framework, and I put my cached files in the /tmp directory.
However, I do not know how the OS manages the /tmp directory. What I wish to know is whether the OS will automatically clean up files that are old enough, or that have not been accessed for a long time.
I am running my server in a Docker container, based on Debian jessie.

Your OS won't clean up /tmp for you. Some Unix variants clear it out at reboot, but otherwise you will need to do this yourself, for example:
find /tmp/yourpath -mtime +30 -type f -exec rm {} \;
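To make this happen automatically, a scheduled job is the usual approach. A minimal sketch, assuming cron is available wherever the cache files live (the path and schedule are illustrative):
# daily at 03:00: remove cached files not modified in the last 30 days
0 3 * * * find /tmp/yourpath -mtime +30 -type f -exec rm {} \;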
But Docker is a bit of a special case, as the containers are an encapsulation layer. That find will still do the trick, but you could probably just dump and restart your container 'fresh' and trash the old one.

Related

Copy files within multiple directories to one directory

We have an Ubuntu Server that is only accessed via terminal, and users transfer files to directories within one parent directory (e.g. /storage/DiskA/userA/doc1.doc, /storage/DiskA/userB/doc1.doc). I need to copy all the specific files within the user folders to another directory, and I'm specifically targeting the .doc extension.
I've tried running the following:
cp -R /storage/diskA/*.doc /storage/diskB/monthly_report/
However, it keeps telling me there is no such file/dir.
I want to be able to just pull the .doc files from all the user dirs and transfer to that dir, /storage/monthly_report/.
I know this is an easy task, but apparently, I'm just daft enough to not be able to figure this out. Any assistance would be wonderful.
EDIT: I updated the original to show that I have 2 Disks. Moving from Disk A to Disk B.
I would go for find -exec for such a task, something like:
find /storage/DiskA -name "*.doc" -exec cp {} /storage/DiskB/monthly_report/ \;
That should do the trick.
Or use rsync with include/exclude filters:
rsync -zarv --include="*/" --include="*.doc" --exclude="*" /storage/diskA/ /storage/diskB/monthly_report/
Note that the source must be the directory itself (not a *.doc glob) for the filters to do their work, and that this preserves the per-user directory structure under the destination, whereas the find -exec answer above flattens everything into one directory.
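If you're unsure what a filter set will actually match, a dry run is a cheap way to check before copying anything; -n (--dry-run) is a standard rsync flag:
rsync -zarvn --include="*/" --include="*.doc" --exclude="*" /storage/diskA/ /storage/diskB/monthly_report/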

Git: Recovery of files possible after local remove before push?

I wanted to "clean" my git repo before pushing by removing every JPG file, so I entered:
find . | xargs rm *.png
in the git root and now everything is deleted. My *.py files are deleted as well, and I do not know why. It is a Linux (Ubuntu) machine. Is there any chance to recover my files, maybe through the OS?
The command you typed is plain wrong:
find .
This command outputs the name of every file and directory below ., including hidden files.
xargs
This command takes its input and runs the command given as its argument, appending the lines it reads as additional arguments (batching as many as fit per invocation). So it effectively ran rm *.png <file1_from_find> <file2_from_find> ..., with the shell having already expanded *.png against the current directory.
There is no safeguard such as stopping on errors, so if you let the command run to completion, it unlinked all files and you now have an empty directory tree. And git will not help you, because it works by storing its metadata and current state within a .git directory at the root of the working directory, in which you just deleted all the files.
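As an aside, you can preview what a pipeline like this would actually run by putting echo in front of the command; this is only an illustration, and the shell still expands the glob against the current directory first:
find . | xargs echo rm *.png
The printed line makes it obvious that every path from find ends up as an extra argument to rm.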
So no, unless you made a copy, either manually or by pushing your state to some other place, it's probably gone, but see below. For future reference, here is the correct command to destroy all files ending in .png:
find . -name '*.png' -delete
Optionally add -type f before the -delete if you may have directories whose names end in .png, to filter them out.
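That is, for example:
find . -name '*.png' -type f -delete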
Okay, what now: it happens that git marks some of its internal state as read-only, which rm honors if you didn't use rm -f, so you might be able to recover some data from that. Go to the .git directory at your working directory's root. It will contain an objects directory, and some files may have survived there.
Those files are raw zlib-compressed streams; you can see their content using this command:
zlib-flate -uncompress < path_to_the_file
(the zlib-flate command comes from the qpdf package on Ubuntu)
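If you want to skim whatever survived, a rough sketch along those lines (the two-character subdirectories under .git/objects are where loose objects live; the loop itself is only an illustration):
for f in .git/objects/??/*; do
    echo "== $f =="
    zlib-flate -uncompress < "$f" | head -c 200   # show the first bytes of each decompressed object
    echo
done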
For me, the following worked:
$ git checkout -- name_of_deleted_directory
or
$ git checkout -- path_to_local_deleted_directory
In my case, I deleted a directory by mistake and I didn't change any code in that directory, but I had changed code in other directories of my git repo.

Access forbidden to website after scp transfer

I used scp2 to transfer a folder from Windows to Ubuntu.
I executed the scp2 process as part of a gulp execution.
My project was successfully transferred to the server, but when I tried to navigate to the site from the browser I encountered a 403 Forbidden message.
The problem is that the scp2 process didn't grant permissions to the newly created folder and files.
When I execute the following lines on the server, it works fine:
find ProjFolder -type d -exec chmod 755 {} \;
find ProjFolder -type f -exec chmod 644 {} \;
My question is: how can I transfer my project from my local machine to the server without having to re-run those permission commands every time?
To preserve permissions, try rsync; it has a lot of other benefits besides keeping ownership and permissions, such as incremental copies:
rsync -av source 192.0.2.1:/dest/ination
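If you would rather force specific permissions during the transfer than preserve the source's (say, the 755/644 scheme from the question), rsync also has a --chmod option that applies them on the fly with a reasonably recent rsync; the host and paths below are placeholders:
rsync -av --chmod=D755,F644 source/ 192.0.2.1:/dest/ination/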
EDIT [according to comments]:
This works well for transferring between two Linux systems, but doesn't seem to work for a Windows -> Linux transfer. Apparently PuTTY works best for transfers involving Windows on one side and Linux on the other.

How do I clear space on my main system drive on a Linux CentOS system?

Sorry if this sounds dumb, but I'm not sure what to do.
I've got an Amazon EC2 instance with a completely full ephemeral drive (the main drive with all the system files). Almost all the directories where I've installed things like Apache, MySQL, Sphinx, my applications, etc. are on a separate physical drive and have symlinks from the ephemeral drive. As far as I am aware, none of their data or logs write to the ephemeral drive, so I'm not sure what happened to the space.
Obviously lots of system stuff is still on the ephemeral drive, but I'm not sure how to clear things off to make space. My best guess is that Amazon filled the drive when it did some auto updates to the system. I'm trying to install some new packages and update all my system packages via yum, but the drive has no space.
What should I do?
du --max-depth=1 -h /
where / can be replaced by any directory, starting from root. This shows you the size of each entry in human-readable form (-h) without recursing further down.
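To get the biggest entries listed first, you can pipe that through sort; the -h flag on sort needs a reasonably recent GNU coreutils, and the redirect just hides permission errors:
du --max-depth=1 -h / 2>/dev/null | sort -hr | head -n 20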
Once you find something big that you want to remove, you can do it via
rm <thing you want to remove>
this accepts shell expansion, so for instance to remove all mp3 files:
rm *.mp3
if it's a directory then you need to add -r
rm -r /dir/to/remove
To protect yourself, it is advisable to add the -i switch to every rm call; this forces you to acknowledge each removal.
If there are a lot of read-only files you want to remove, then you could add the -f switch to force deletion; be very careful with this.
Be careful: rm accepts multiple parameters, so when you specify an absolute path, make sure to quote it (or ensure it contains no spaces), especially if you execute it as root, and especially with the -r and -f options. Otherwise you'll join the group of people that ran rm -rf / some/directory/* and killed their / inadvertently.
If you just want to look for big files and delete those then you could also use find
find / -type f -size +100M
would search for files only (-type f) with a size > 100MB (-size +100M)
subsequently you could use the same command to delete them.
find / -type f -size +100M -exec rm \{\} \;
-exec executes a program, passing it the file or folder that was found (\{\}); the -exec clause needs to be terminated with \;
don't forget you could add -i to rm to approve or disapprove a deletion.
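Putting those two together, just as an illustration of the commands above:
find / -type f -size +100M -exec rm -i {} \;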
For starters, you can use the Unix disk usage command du to see what's taking up all the space.
This works great, though it can take a few minutes on bigger drives (over a few hundred GB):
find /directory/to/scan/ -type f -exec du -a {} + | sort -n -r | less
The output will list the biggest files first. You can page through the results with the normal less commands: space bar (next page) and b (previous page).

Copying the .svn directories from a checkout to a non-checkout to make it a checkout

I have a large application in a production environment that I'm trying to move under version control. So, I created a new repo and imported the app, minus various directories and files that shouldn't be under version control. Now, I need to make the installed copy a checkout (but still retain the extra files). At this point, in a recent version of SVN, I'd simply do a checkout over the top of the existing directory using the --force option. But sadly, I have an ancient version of SVN, from before the --force option was added (and can't yet upgrade... long story).
So, I checked out the app to another directory, and want to simply copy all of the .svn directories into the original directory, thus turning the original into a checkout whilst leaving the extra files alone. Now, maybe I'm just having a rough day and missing something that's in plain sight, but I can't seem to figure this out. Here are the approaches I've tried so far:
Use rsync: I can't seem to hit the right combination of include and exclude directives to recursively capture all the .svn directories but nothing else.
Use cp: I can't figure out a good way to have it hit all the .svn directories across and down through the whole app.
Use find with -exec cp: I'm running into trouble with the leading part of the pathnames of the found files messing up the destination paths. I can exclude it using -printf '%P', but that doesn't seem to go into the {} replacement for exec.
Use find with xargs to cp: I'm running into trouble with find sending over child directories before sending their parents. Unfortunately, find does not have a --breadth option.
Any thoughts out there?
Other info:
bash 3.0.0.14
rsync 2.6.3 p28
cp 5.2.1
svn 1.3.2
Use tar and find to capture the .svn dirs in a clean checkout, then untar into your app.
cd /tmp/
svn co XXX
cd XXX
find . -name .svn -print0 | tar cf /tmp/XXX.tar --null -T -
cd /to/your/app/
tar xf /tmp/XXX.tar
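After untarring, a quick sanity check that the tree is now recognised as a working copy; svn status is standard, and the extra unversioned files will simply show up with a ? marker:
cd /to/your/app/
svn status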
Edit: switched find/tar command to use NUL terminator for robustness in the face of filenames containing spaces. Original command was:
tar cf /tmp/XXX.tar $(find . -name .svn)
Can't you just make a checkout to a different directory, copy the extra files into that directory, and verify that everything's fine before switching to running the app from the new directory?
