Efficiently syncing large Perforce repository? [closed]

I am having trouble getting the latest version from the Perforce server. The depot is very large and I don't want to do a complete checkout, as it would take a long time. Instead, I put the unversioned sources and dependencies in the workspace and run "p4 sync -k". This successfully versions my files, but it doesn't bring the new files down from the server.
How can I do that?

If I understand your situation:
You work against a very large project.
You have a local snapshot of the data that is not entirely up to date.
You wish to only obtain the differences between your snapshot and the latest on the server.
The steps you should follow are:
Run p4 sync -k to make Perforce think you have the latest copy of all files.
Run p4 diff -se ... | p4 -x - sync -f to force-sync any out-of-date files.
Run p4 diff -sd ... | p4 -x - sync -f to force-sync any missing files.
At that point you may have local files that were deleted from the server. If you care about those, you can write a simple script that detects them and removes them from your file system.
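For example, a minimal sketch of that clean-up step, assuming a bash shell, that your current directory is the workspace root, and that your local paths don't themselves contain " - " (review the output before deleting anything):
comm -23 <(find "$PWD" -type f | sort) <(p4 have | sed 's/^.* - //' | sort)
comm -23 prints only the paths that exist on disk but are absent from p4 have, i.e. candidates for removal.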
The good news is that Perforce's next release (2012.1) has a status command that will pick up all differences more easily.
To approach this from another angle, do you need the entire project in your workspace? Could you narrow your workspace view to only work with a subset of the data?
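As an illustration only (the depot and client names here are invented), a narrowed View in the client spec, edited via p4 client, might look like:
View:
    //depot/bigproject/src/...   //my-client/src/...
    //depot/bigproject/docs/...  //my-client/docs/...
Only the mapped paths are then synced into the workspace.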

This is a classic Perforce question.
To achieve this you will need to pipe a few commands together.
p4 diff -sd //Depot/Path/... | p4 -x - sync -f
The p4 diff -sd command will find all the files that do not exist in the workspace.
p4 -x - sync -f will forcibly sync these files.
As p4-randall notes in his answer, you may also want to run p4 diff -se ... | p4 -x - sync -f to sync any files that are out of date.
HTH,

Related

Make n copies of files in a folder and copy them to specific folders [closed]

I have already edited this question because it will probably be useful for someone else. My problem is the following:
I have hundreds of files, 256 to be exact. These files are .csv,
and I need to generate a directory/folder with 50 copies of each file,
for example:
-file1.csv---->folder_for_file1---->1copy_file1.csv,2copy_file1.csv,3copy_file1.csv........50copy_file1.csv
-file2.csv---->folder_for_file2---->1copy_file2.csv,2copy_file2.csv,3copy_file2.csv........50copy_file2.csv
-file3.csv---->folder_for_file3---->1copy_file3.csv,2copy_file3.csv,3copy_file3.csv........50copy_file3.csv
...
-file256.csv---->folder_forfile256---->1copy_file256.csv,2copy_file256.csv,3copy_file256.csv........50copy_file256.csv
What can I use to do this? Some bash script, or a simple Ubuntu/Linux command like mkdir?
P.S. The answer provided works very well, but I had a problem with the name of the generated folder: its name includes the extension of the file, which is not useful for me.
Thanks in advance.
Your requirements are not very clear. I will answer with some assumptions.
For each file named file_i, suppose you want to create a directory named file_i_folder under the same path as these files. You can do this with this command:
ls | xargs -t -n1 -i mkdir {}_folder
Then you want to create copies of each file under its corresponding directory. Since file names cannot be duplicated, you may want to add prefixes to the copies, e.g. copy1_file1. You can do this with this command:
ls -p | grep -v / | xargs -t -n1 -i bash -c 'for i in {1..50}; do cp {} "{}_folder/copy${i}_{}" ; done'
You can alter the commands to change the format of the names of files and directories at your own will.
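If the folder names produced above are not what you want (the question later notes that the file extension ends up in the folder name), here is a hedged plain-bash variant that strips the .csv extension; the naming follows the example in the question:
for f in *.csv; do
    dir="folder_for_${f%.csv}"          # e.g. folder_for_file1
    mkdir -p "$dir"
    for i in $(seq 1 50); do
        cp "$f" "$dir/${i}copy_$f"      # e.g. 1copy_file1.csv ... 50copy_file1.csv
    done
done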

Linux backup files command [closed]

I had a problem with my Ubuntu install. I was able to boot from a liveCD and connect an external hard drive. I want to back up my files now.
I tried cp -r /home destination, but I got problems with spaces in filenames, symlinks, and errors like "Cannot create fifo: Operation not permitted", "Permission denied", "Invalid argument" and plenty more. What is the best way to do it? Will cp -a fix these issues, or should I do something more clever?
I found out that rsync doesn't have problems with filenames. But it doesn't copy .so and .a files. Also, it runs extremely slowly compared to cp.
EDIT:
I followed the advice of John Bollinger and created an archive, because my external drive wasn't ext4-formatted and so cannot preserve all file attributes.
From a liveCD, home refers to the liveCD's home, so one has to use:
tar -c -z -f /my/backup/disk/home.tar.gz -C / media/ubuntu/longDeviceName/home
Despite sudo, I still received some "Cannot open: Permission denied" and "socket ignored" errors while creating a tar of several .png files in .cache/software-center/icons/blabla. I wonder whether this is normal.
If you do not want to reformat your backup disk with a filesystem that has enough capabilities to represent all of the attributes of your files (e.g. ext4) then preserving them across the backup requires putting them into some sort of container. The traditional container for this sort of thing is a [compressed] tarball. You might therefore try
tar -c -z -f /my/backup/disk/home.tar.gz -C / home
You would recover the contents of that tarball via
tar -x -z -f /my/backup/disk/home.tar.gz -C /
Either or both might need to be run with privilege, obtained by being root or by using sudo.
That will handle symlinks, executable files, and any filename just fine, but it may still have trouble if the data you are trying to back up include any special files, such as device nodes or FIFOs. In that event, you may simply need to remove such files first, and recreate them after restoring the other files. You can identify such files via find:
find /home -not -type f -not -type d -not -type l
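If you want a record of those special files so they can be recreated later (with mkfifo or mknod), here is a rough sketch using GNU find, where the output path is just an example:
find /home -not -type f -not -type d -not -type l -printf '%y %m %p\n' > /my/backup/disk/special-files.txt
Here %y is the file type, %m the permission bits, and %p the path.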
The accepted answer does not back up / recover file permissions.
You should use the "p" option while backing up and while recovering.
Also, you might want to recover to a specific folder and then move things around so you don't overwrite files you want to keep.
The "/" at the end of the command means the entire system is backed up:
sudo tar -cvpzf /backupfolder/backup.tar.gz --exclude=/mnt /
sudo mkdir /recover_v1.1
sudo tar -xvpzf backup.tar.gz -C /recover_v1.1
... // replacing whatever you need manually
Manually replace files you need to recover and keep those you want to keep.
-x extract
-p preserve permissions
-v verbose; shows file names while working
-z compress with gzip
-f specify the archive file name
You might want to set up cron jobs to run the backup automatically.
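For instance, a hedged example of such a cron entry (added to root's crontab with sudo crontab -e; the schedule, paths, and exclude list are placeholders), running nightly at 02:30. Note that % must be escaped inside crontab lines:
30 2 * * * tar -cpzf /backupfolder/backup-$(date +\%F).tar.gz --exclude=/mnt --exclude=/proc --exclude=/sys /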

Unzip and move a downloaded file - Linux [closed]

Surprisingly, I could not find a straightforward answer to this question on here yet. I am still learning Linux. Say I have downloaded a zip file to my Downloads folder. Now I want to move it into a protected folder, like /opts or /var. Is there a good command to both sudo-move AND unzip the file to where I need it to go?
If you wish to perform two separate operations (move and extract) then you have no option but to use two commands.
However, if your end goal is to extract the zip file to a specific directory, you can leave the zip file where it is and specify an extraction directory using the -d option:
sudo unzip thefile.zip -d /opt/target_dir
From the manpage:
[-d exdir]
An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission to write to the directory). This option need not appear at the end of the command line; it is also accepted before the zipfile specification (with the normal options), immediately after the zipfile specification, or between the file(s) and the -x option. The option and directory may be concatenated without any white space between them, but note that this may cause normal shell behavior to be suppressed. In particular, ''-d ~'' (tilde) is expanded by Unix C shells into the name of the user's home directory, but ''-d~'' is treated as a literal subdirectory ''~'' of the current directory.
sudo mv <file_name> /opts && unzip /opts/<file_name>
Alternatively, you can pass the destination directly to unzip and do this in a single command. This, however, is a bit different from the command above: the zip file stays in its current location, and only the extracted files go to the specified destination.
unzip -d [target directory] [filename].zip
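If you really do want the move and the extraction together, a hedged one-liner (file and directory names are placeholders):
sudo mv ~/Downloads/thefile.zip /opt/ && sudo unzip /opt/thefile.zip -d /opt/target_dir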

Keeping a copy of a file in a same directory [closed]

I am working on Linux scripts. Assume that the directory consists of the following scripts:
ls *.sh
test.sh
MyScripts.sh
My question is: before making any modifications to the test.sh script, I want to keep a backup copy of it, so that if anything messes up I won't be stuck.
Please tell me how I can keep a copy of test.sh in the same directory before making any modifications to the actual file test.sh.
Thank you very much.
Consider using revision control, such as git or Subversion.
You can make a copy before your work too:
cp test.sh test.sh.orig
The usual approach is to
cp test.sh test.sh~
(or test.sh.bck or whatever naming convention). In fact, any decent editor should have an option to do this automatically for you. Vim does it by default (it saves a backup named filename~ on modification).
May I heartily suggest a version control solution for this purpose instead?
Good 'starter' options include:
bazaar
mercurial
I personally vouch for git.
I took care to name (D)VCS tools that have ample interoperability options, so as to prevent data lock-in.
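For example, a minimal git setup for that directory, committing the current state before you start editing (a one-time init; the commit message is arbitrary):
git init
git add test.sh MyScripts.sh
git commit -m "Snapshot before modifying test.sh"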
cp test.sh test.sh.`date +"%m_%d_%Y"`
This will make a timestamped backup named test.sh.10_10_2011.
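A small variation, if you might back up more than once a day, includes the time as well so earlier copies are not overwritten:
cp test.sh "test.sh.$(date +%Y%m%d_%H%M%S)"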

Incremental backup Linux command [closed]

What is the command to do an incremental backup? Any sources or links would be much appreciated.
rsync is what you are looking for. Here is a nice tutorial.
Depending on what you need from your backups, rdiff-backup may be what you want. It's based on the same idea as rsync, but also keeps historical backups (in a space-efficient manner, by storing the differences).
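As a rough illustration of rdiff-backup's classic invocation (the paths and the time spec are placeholders; newer releases also offer a subcommand-style interface):
rdiff-backup /home/user /mnt/backup/user
rdiff-backup -r 3D /mnt/backup/user/somefile restored_somefile
The first command takes an incremental backup while keeping history; the second restores a file as it was three days ago.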
Dirvish makes incremental snapshots (that look like full directory trees, thanks to the magic of hardlinks), using rsync under the hood. It works well for me.
Here's the command I use for incremental backups of my virtual machine using rsync.
rsync -avh --delete --progress --link-dest="/Volumes/canteloup/vm_backups/`ls -1tr /Volumes/canteloup/vm_backups/ | tail -1`" "/Users/julian/Documents/Parallels" "/Volumes/canteloup/vm_backups/`date +%Y-%m-%d-%H-%M-%S`"
-avh means make an archive, with verbose output in a human readable form.
--delete will make sure each incremental backup does not contain files that have been deleted since the last backup. It means the backup taken on a particular date will be a snapshot of the directory as it was on that date.
--progress will display in the terminal the amount transferred, the percentage, and the time remaining for each file. Handy for virtual machine backups with 40Gb+ file sizes.
--link-dest specifies the directory to hard-link against for files that haven't changed. The command uses ls -1tr ... | tail -1 to pick the most recent backup directory. This seems to be fine if no such directory exists yet, as on the first run.
The next arg is the directory to backup.
The last arg is the target directory. The name is a timestamp.
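The same pattern with the pieces pulled out into shell variables, purely as a restating sketch (the paths are the ones from the command above):
BACKUP_ROOT="/Volumes/canteloup/vm_backups"
LATEST=$(ls -1tr "$BACKUP_ROOT" | tail -1)         # most recent snapshot directory
NEW="$BACKUP_ROOT/$(date +%Y-%m-%d-%H-%M-%S)"      # new snapshot named by timestamp
rsync -avh --delete --progress --link-dest="$BACKUP_ROOT/$LATEST" "/Users/julian/Documents/Parallels" "$NEW"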
Try the following bash script. Replace src and dest with the source and destination you want to use. If you are not backing up to local storage, remove --inplace (this option is useful for transferring large files with block-based changes or appended data, and on systems that are disk-bound rather than network-bound; you should also not use it to update files that are being accessed by others).
#!/bin/bash
# Dry run first: show what would change, trimming rsync's stats output with sed.
rsync -ab --dry-run --stats --human-readable --inplace --debug=NONE --log-file=rsync.log --backup-dir=rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --delete-after src dest | sed -e '1d;5,12d;14,17d'
echo -e "\nDo you want to continue?"
# Loop until the user answers yes (run the real sync) or no (exit).
while true; do
    case $yn in
        [Yy]* ) rsync -ab --human-readable --inplace --info=PROGRESS2,BACKUP,DEL --debug=NONE --log-file=rsync.log --backup-dir=rsync_bak.$(date +"%d-%m-%y_%I-%M-%S%P") --log-file-format='%t %f %o %M' --delete-after src dest; break;;
        [Nn]* ) exit;;
        * ) read -p "Please answer yes or no: " yn;;
    esac
done
