db.* files in /home from Perforce? - perforce

I see several db.* files in my /home directory, and it seems they come from Perforce. For example, some of the files are db.archmap, db.bodtext, db.change, db.changex.
Are these files useful? Can I delete them? They are making my /home directory messy.

You have started a server using your home directory as the Perforce server's P4ROOT folder. Those files are generated when the server starts and should not be deleted unless you want to hose your server installation. It's not clear to me how you've started the server instance, so I'll try to cover multiple bases with my answer.
If you want to start up the server under your own account, you should set the P4ROOT environment variable and point it to where you want the server to store its files. Alternatively, when you start the server, you can specify the root folder on the command line using the -r option:
p4d -r /home/mark/p4server
which would put the server's files into a directory called 'p4server' under my home directory.
Typically it is best to run the Perforce server under a user that is dedicated to running Perforce. I use a user called 'perforce'. I set P4ROOT (and other variables) in that user's environment. If you cannot use a separate user, it might be easier to use the -r command line option that I mentioned above.
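For illustration, a minimal sketch of that setup (the paths, port and log location below are placeholder examples, not values from the original answer):
# in the 'perforce' user's ~/.profile (example values only)
export P4ROOT=/home/perforce/p4root
export P4PORT=1666
export P4LOG=/home/perforce/logs/p4d.log
# then start the server as a background daemon
p4d -d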

Those files are server files only, not client files, so it is safe to delete them, but if you start the server back up it will recreate them. So you might want to uninstall the server.
Unless you are running a beta version, nothing on the client side should create them: Perforce has p4sandbox coming soon (maybe it's already in the beta, I forget), which MAY create those files. I don't have a beta version, so I can't verify what new files the client may or may not create.

You can check the Perforce server documentation to see what each of these db.* files is for.

Related

-bash: /usr/local/cpanel/3rdparty/bin/perl: No such file or directory - linux server kind of crashed

My VPS server (cPanel) got full, so I decided to delete some files, namely the following:
ssh root@server_ip_address
Size (MB)   File location
2583        /home/someuser/tmp/analog/cache        <- (1)
1883        /home/someuser/tmp/analog/cache.out    <- (2)
1061        /usr/tmpDSK                            <- (3)
Deleting the first two files freed up some 4 GB of space, and the disk was then showing 85% occupied. Then I deleted the tmpDSK file (1.06 GB), but that had no impact on disk usage. After half an hour or so, our server crashed and stopped serving pages. It is still up, though, and I can ping it, but pages are not served.
What I noticed right away after deleting tmpDSK was that the /usr/local/ folder was missing (deleted by mistake?). It contains Apache, Perl, etc., which is why the websites are not being served.
Just now I logged in to the server via SSH and I got the following messages
-bash: /usr/local/cpanel/3rdparty/bin/perl: No such file or directory
-bash: /usr/local/cpanel/3rdparty/bin/perl: No such file or directory
-bash: /usr/local/cpanel/3rdparty/bin/perl: No such file or directory
I am wondering what could have happened. Can I restore things myself? We do have a backup on an external drive. Is there a way deleted files can be restored? I deleted them over SFTP, one by one. There is already a ticket open with the hosting provider, but I want to understand this better myself and see whether I can restore things somehow. Note that none of the Linux commands work over SSH, like ls etc. Also, tmpDSK is supposed to be cPanel-related; it stores sessions and temporary files.
Thanks
Answering myself
After more than 24 hours of troubleshooting with the host, and then researching it myself, we found the issue was very trivial.
We had accidentally moved the contents of the /usr/local/ folder into /usr/include, which caused all the problems. Simply moving it back fixed the problem.
The content was moved unintentionally while working on the server over FTP (SFTP). It likely had nothing to do with the three files I had deleted.
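For anyone hitting something similar, the repair amounted to commands along these lines (a rough sketch only; the directory names 'cpanel' and 'apache' are hypothetical examples of displaced content, so inspect the listing first and move back only what does not belong in /usr/include):
# see what actually ended up in /usr/include
ls /usr/include
# move the displaced directories back (names below are hypothetical)
mv /usr/include/cpanel /usr/include/apache /usr/local/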

pywatchdog and pyinotify not detecting changes on files inside ftp created directories

I have an application monitoring files sent to an FTP server (proftpd 1.3.5a). I am using pywatchdog to monitor file creation in the FTP server root (the app runs locally on the server), but under one very specific circumstance it does not issue a notification: when I create a new directory through FTP and, after that, create a file under that directory. The file creation/modification events are not caught!
In order to reproduce this in a simple way, I've used pyinotify (0.9.6) directly, and it looks like the problem comes from there. So, a simple way to reproduce the problem:
Install proftpd and pyinotify (python3) on the server with default settings
On the server, run the following command to monitor the FTP root (recursive and auto-add turned on, assuming the user is "user"):
python3 -m pyinotify -v -r -a /home/user
On the client, create a sample.txt, connect to the FTP server, and issue the following commands, in this order:
mkdir dir_a
cd dir_a
put sample.txt
There will be no events related to sample.txt - neither create nor modify!
I've tried to take the FTP factor out of the issue by manually creating and moving directories inside the watched target and creating files inside those directories, but then the problem does not occur - it all works smoothly.
Any help will be appreciated!

Is it ok to delete all files and folders in ec2-user directory in Amazon-EC2 linux?

This is my first time using Amazon EC2 and NodeJS. After a day of playing around, the ec2-user directory looks messy because I was installing packages into the wrong folders.
I want to delete everything and start anew.
Is it OK to delete everything in the ec2-user directory, or do some files need to be kept?
What I did was create a new user, which also creates a new home directory for that user (a command sketch follows the tree below).
home
|_ec2-user
|_newuser
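On Amazon Linux, creating the user can be as simple as the following sketch (the username newuser is just the example from the tree above):
# create the user and its home directory, populated from /etc/skel
sudo adduser newuser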
When I created newuser, the system created these files:
.bash_history, .bash_logout, .bash_profile, .bashrc
So I guess these are the files that should stay (my guess is that even if you delete them the system is supposed to re-create them automatically, but it's safer to keep them). I deleted everything except these files in my original ec2-user directory, and it seems to be fine.
Also very important: the .ssh directory must stay too - it contains the authorized_keys file, which holds your public key and is needed for SSH access.
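As a concrete sketch of that cleanup (one possible approach, not from the original answer): since everything worth keeping here is a dotfile, a plain glob delete leaves those alone:
# run inside the home directory; 'rm -rf *' does not match dotfiles,
# so .bashrc, .bash_profile, .ssh etc. are left untouched
cd /home/ec2-user
rm -rf -- *
ls -la    # confirm only the hidden files and directories remain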

rsync local root to remote server as non-root - is it possible to preserve user:group?

I'm writing code to create an rsync-based backup.
On server A the code runs as root and sends, via rsync, some parts of the system and all user accounts.
On the backup server, the content is received (via rsync) into a single user account (user).
I have tried -azhEX --numeric-ids, -azh, and others, but in no case can I keep the user and group IDs for making a restore later.
Is it possible with rsync, in this scenario, to restore with the original user:group?
I run the latest version of rsync (3.1.1) on both sides.
rsync alone cannot do this.
A very close solution to your problem is rdiff-backup, which uses librsync internally and stores ownership, permissions, and other metadata in a separate directory.
http://www.nongnu.org/rdiff-backup/
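For illustration, a typical rdiff-backup run might look like this sketch (host names and paths are placeholders, not from the original answer):
# back up a local tree to a non-root account on the backup server;
# ownership and other metadata are stored alongside the data in rdiff-backup-data
rdiff-backup /path/to/source user@backupserver::backups/source
# later, restore the latest state (run as root so ownership can be re-applied)
rdiff-backup --restore-as-of now user@backupserver::backups/source /path/to/restore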

Pulling remote 'client uploaded assets' into a local website development environment

This is an issue we come up against again and again: how to get live website assets uploaded by a client into the local development environment. The options are:
Download all the things (can be a lengthy process and has to be repeated often)
Write some insane script that automates #1
Some uber clever thing which maps a local folder to a remote URL
I would really love to know how to achieve #3: have some kind of alias/folder in my local environment which is ignored by Git, so that when testing changes locally I see client-uploaded assets where they should be rather than broken images (and/or other things).
I do wonder if this might be possible using Panic Transmit and the 'Transmit disk' feature.
Update
Ok thus far I have managed to get a remote server folder mapped to my local machine as a drive (of sorts) using the 'Web Disk' option in cPanel and then the 'Connect to server' option in OS X.
However, although I can browse the folder contents in a safe, read-only fashion on my local machine, when I alias that drive to a folder in /sites, Apache just prompts me to download the alias file rather than following it as it would a symlink... :/
KISS: I'd go with #2.
I usually put a small script like update_assets.sh in the project's folder, which uses rsync to download the files:
rsync --recursive --stats --progress -aze ssh user@site.net:~/path/to/remote/files local/path
I wouldn't call that insane :) I prefer to have all the files locally so that I can work with them when I'm offline or on a slow connection.
rsync is quite fast, and you may also want to check out the --delete flag to delete local files when they have been removed from the remote.
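For example, update_assets.sh can stay as small as this sketch (it reuses the remote path from the command above; --delete is optional and assumes you really do want local copies removed):
#!/bin/sh
# pull client-uploaded assets from the live site into the local checkout;
# --delete removes local files that no longer exist on the remote
rsync -az -e ssh --stats --progress --delete \
    user@site.net:~/path/to/remote/files local/path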
