How do I get a count of files in a folder in Perforce?

I would like to know how many files are in any given folder/directory/branch of a Perforce depot, but I don't see a way to do this. The p4 fstat command was my first thought, but it doesn't appear to have options to return file counts. Is there a simple way to get a count of files in a folder using either the graphical or command-line client?

While p4 fstat doesn't offer a way to obtain file counts per se, you can easily parse its output to get this information. Note that this works on Windows, but it should be easy to adapt for other OSes. Here's how you do it:
p4 fstat -T depotFile //depot/some/folder/... | find /c "... depotFile"
It can also be done with the p4 files command, like so:
p4 files //depot/some/folder/... | find /c "//"
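On Unix-like systems, grep -c plays the same role as the Windows find /c above; a rough equivalent of the fstat variant (same placeholder path):
p4 fstat -T depotFile //depot/some/folder/... | grep -c "depotFile"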

Use the p4 sizes -s command.
C:\test>p4 sizes -s ...
... 5 files 342 bytes
or if you want just the count:
C:\test>p4 -F %fileCount% sizes -s ...
5
(The file argument can of course be anything else in place of ..., which here refers to the current directory.)
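Because -F %fileCount% prints just the number, it drops neatly into scripts. A minimal sketch, assuming a Unix shell and a placeholder depot path:
# Capture the file count for later use in a script
count=$(p4 -F %fileCount% sizes -s //depot/some/folder/...)
echo "Found $count files"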

Try this command:
p4 files //depot/dir1/dir2/... | wc -l
Explanation:
p4 files //depot/dir1/dir2/... recursively lists the files under dir2.
Piping that through wc -l ("word count", counting lines) counts the number of output lines.
This works in Linux/Unix systems, and in Windows if you're using cygwin (or some other Linux-like terminal).
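One caveat, hedged: p4 files also lists files whose head revision is a delete, so the count can run high. If your server version supports it, the -e flag restricts the listing to files that still exist:
p4 files -e //depot/dir1/dir2/... | wc -l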

Powershell method:
To count the files in a specific directory:
p4 files //depot/dir1/dir2/... | Measure-Object
Alternatively:
(p4 files //depot/dir1/dir2/...).Count
The following will give you a list of all child directories and their file counts in an object that you can view as a table, sort, export to CSV, etc.:
p4 dirs //depot/dir1/* | Select-Object -Property @{Name="Path";Expression={$PSItem}},@{Name="FileCounts";Expression={(p4 files $PSItem/...).Count}}
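For readers on Unix-like systems, a rough shell sketch of the same per-directory table (placeholder path; counts include deleted head revisions, as noted earlier):
# Print each immediate subdirectory with its recursive file count
p4 dirs //depot/dir1/* | while read -r dir; do
    printf '%s\t%s\n' "$dir" "$(p4 files "$dir/..." | wc -l)"
done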

Related

shell script to read directory names and create .txt files with the same names in another directory

I have two directories, one called clients and another called test. Inside clients I have some folders. I need a shell script that reads the names of the folders inside clients and creates .txt files with the same names inside the folder test. I am very new to shell and have no idea how to do this; could you help me, please?
Try using xargs with ls. ls -F lists everything in the clients directory, appending a / to each folder name; the grep keeps only those entries, so only folders are passed along. Then sed 's/\///g' strips the trailing / and hands the names to xargs, which substitutes each one for the % symbol to create the matching text file:
ls -F clients | grep / | sed 's/\///g' | xargs -I % touch test/%.txt
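As an aside, and not the original answer's approach: parsing ls output is fragile with unusual names, so a find-based sketch (GNU find assumed) does the same job more robustly:
# Create test/<name>.txt for each immediate subdirectory of clients
find clients -mindepth 1 -maxdepth 1 -type d -printf '%f\n' |
    while read -r name; do touch "test/$name.txt"; done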

Finding the oldest folder in a directory in linux even when files inside are modified

I have two folders, A and B, each containing two files, created in the following order:
mkdir A
cd A
touch a_1
touch a_2
cd ..
mkdir B
cd B
touch b_1
touch b_2
cd ..
From the above I need to find which folder was created first (not modified).
ls -c <path_to_root_before_A_and_B> | tail -1
This outputs "A" (no issues here).
Now I delete the file a_1 inside directory A and execute the command again:
ls -c <path_to_root_before_A_and_B> | tail -1
This time it shows "B".
But directory A still contains the file a_2, yet the ls command shows "B". How do I overcome this?
How To Get File Creation Date Time In Bash-Debian
You'll want to read the link above for the details: files and directories store the same set of timestamps, which means directories do not record their creation date. Methods like the ls -i one mentioned earlier may work sometimes, but when I ran it just now it mixed up really old files with really new ones, so I don't think it works quite the way you might expect.
Instead, try touching a file immediately after creating each directory; save it as something like .DIRBIRTH so it stays hidden. Then, to find the order the directories were made in, just look for the .DIRBIRTH with the oldest modification date.
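A minimal sketch of that marker-file idea (directory names illustrative):
mkdir A && touch A/.DIRBIRTH    # the marker's mtime records A's birth
mkdir B && touch B/.DIRBIRTH
# The oldest marker belongs to the oldest directory:
ls -tr */.DIRBIRTH | head -1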
Assuming that all the stars align (you're using a version of GNU stat(1) that supports the file birth-time format, a filesystem that records birth times, and a Linux kernel new enough to support the statx(2) syscall), this script should print all immediate subdirectories of the directory passed as its argument, sorted by creation time:
#!/bin/sh
rootdir=$1
# %W is the birth time in epoch seconds, %n the name; tail skips $rootdir itself
find "$rootdir" -maxdepth 1 -type d -exec stat -c "%W %n" {} + | tail -n +2 \
    | sort -k1,1n | cut --complement -d' ' -f1
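Hypothetical usage, assuming the script above is saved as oldest-first.sh and made executable:
./oldest-first.sh .    # prints the current directory's subfolders, oldest first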

Quickly list random set of files in directory in Linux

Question:
I am looking for a performant, concise way to list N randomly selected files in a Linux directory using only Bash. The files must be randomly selected from different subdirectories.
Why I'm asking:
In Linux, I often want to test a random selection of files in a directory for some property. The directories contain thousands of files, so I only want to test a small number of them, but I want them drawn from different subdirectories of the directory of interest.
The following returns the paths of 50 "randomly"-selected files:
find /dir/of/interest/ -type f | sort -R | head -n 50
The directory contains many files and resides on a mounted file system with slow read times (accessed over ssh), so the command can take many minutes. I believe the issue is that find first enumerates every file (slow), and only then prints the random selection.
If you are using locate, and updatedb runs regularly (daily is probably the default), you could:
$ locate /home/james/test | sort -R | head -5
/home/james/test/10kfiles/out_708.txt
/home/james/test/10kfiles/out_9637.txt
/home/james/test/compr/bar
/home/james/test/10kfiles/out_3788.txt
/home/james/test/test
How often do you need it? Do the work periodically in advance to have it quickly available when you need it.
Create a refreshList script.
#!/usr/bin/env bash
find /dir/of/interest/ -type f | sort -R | head -n 50 >/tmp/rand.list
mv -f /tmp/rand.list ~
Put it in your crontab.
0 7-20 * * 1-5 nice -25 ~/refreshList
Then you will always have a ~/rand.list that's under an hour old.
If you don't want to use cron and aren't too picky about freshness, just write a function that refreshes the file each time after you use it:
randFiles() {
    cat ~/rand.list
    {
        find /dir/of/interest/ -type f |
            sort -R | head -n 50 >/tmp/rand.list
        mv -f /tmp/rand.list ~
    } &
}
If you can't run locate and the find command is too slow, is there any reason this has to be done in real time?
Would it be possible to use cron to dump the output of the find command into a file and then make the random pick from there?
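A further aside, not from the original answers: if GNU coreutils is available, shuf can draw the sample in one step instead of shuffling the whole list with sort -R:
# Pick 50 random files; shuf -n avoids sorting the entire listing
find /dir/of/interest/ -type f | shuf -n 50
Note this still enumerates every file first, so it trims the post-processing rather than the slow find itself.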

How do I split a large changelist in P4

I have a very large changelist (~40,000 files) and I need to split it into several smaller changelists so I can view the files it contains. I am aware that I can adjust my p4 preferences to show more files in a changelist, but I also need to run commands against the files in the changelist, and when I do, the command hangs and hasn't completed after 18 hours.
I'm running the 2012.2 P4 server.
The command I'm running is:
C:\>p4 -u some_user -c some_client revert -k -c 155530 //...
Thanks
If you want to move files into a separate changelist, you could do:
p4 reopen -c default //some/subdirectory/...
p4 change
The above moves a portion of the files into the "default" changelist and then creates a new changelist from them. Or, if you already have another changelist to use, you could of course do:
p4 reopen -c NEW_CLN //some/subdirectory/...
directly.
If the files you want to split out aren't nicely contained within a subdirectory, a more general approach would be to do:
p4 -ztag opened -c OLD_CLN | grep depotFile | cut -d ' ' -f 3 > files.txt
to get a list of files opened in that changeset. Then edit that file so that only files you want to remove from the changeset are listed, and then do:
p4 -x files.txt reopen -c NEW_CLN
The above calls p4 reopen -c NEW_CLN with each line of files.txt as an argument.
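Putting the pieces together, a hypothetical session (the new changelist number is illustrative):
p4 change                        # create NEW_CLN; note the number it reports
p4 -ztag opened -c 155530 | grep depotFile | cut -d ' ' -f 3 > files.txt
# trim files.txt down to just the files you want to move, then:
p4 -x files.txt reopen -c 156001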

rsync to backup one file generated in dynamic folders

I'm trying to back up just one file, generated by another application in dynamically named folders.
For example:
parent_folder/
back_01 -> file_blabla.zip (timestamp 2013.05.12)
back_02 -> file_blabla01.zip (timestamp 2013.05.14)
back_03 -> file_blabla02.zip (timestamp 2013.05.22)
I need to get the latest generated zip, just that one; the file name doesn't matter, as long as it is the newest zip inside parent_folder.
Also, when I do the rsync, the folder structure plus file name is recreated at the destination, and I want to omit that: I want to back the file up into one folder, under one fixed name, so I always know where the latest copy is.
Right now I'm doing this with a Perl script that gets the latest generated folder using
"ls -tAF | grep '/$' | head -1"
and then performs the rsync. It does bring over the latest zip, but with the folder structure I don't want, so it doesn't overwrite my latest zip file.
rsync -rvtW --prune-empty-dirs --delay-updates --no-implied-dirs --modify-window=1 --include='*.zip' --exclude='*.*' --progress /source/ /myBackup/
It would also be great if I could do the rsync without needing Perl or any other script.
Thanks
The file names will differ each time? That makes it hard for any type of syncing to work.
What you could do is create a new folder outside of where the file is generated, then:
Before you start, remove the previously symlinked file in that folder.
When the latest file is found (i.e. via ls -tAF | grep '/$' | head -1 ....), symlink it into this folder.
Then rsync/ssh/unison the file across to the new node.
If the symlink is named file-latest.zip, then it will always be this one file sent across.
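A minimal sketch of that symlink idea, assuming a Unix shell and hypothetical paths (staging is the extra folder):
staging=/backup/staging
rm -f "$staging/file-latest.zip"                  # drop the previous link
newest_dir=$(ls -tAF parent_folder | grep '/$' | head -1)
newest_zip=$(ls -t parent_folder/"$newest_dir"*.zip | head -1)
ln -s "$(readlink -f "$newest_zip")" "$staging/file-latest.zip"
rsync -vtL "$staging/file-latest.zip" /myBackup/  # -L copies the file, not the link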
But why do all that when you can just scp? You can take a look here:
https://github.com/vahidhedayati/definedscp
for a more long-winded approach. It's not built for this exact situation, but it uses the file's real date/time stamp and converts it to seconds, which might be useful if you wish to do the stat in a different way.
Using stat to work out the latest file and then simply scp it across, here is something to get you started:
One-liner:
scp $(find /path/to/parent_folder -name \*.zip -exec stat -t {} \; | awk '{print $1" "$13}' | sort -k2nr | head -n1 | awk '{print $1}') remote_server:/path/to/name.zip
A more long-winded way, maybe of use in understanding what the one-liner above is doing:
#!/bin/bash
FOUND_ARRAY=()
cd parent_folder
# Record "name mtime" for each zip (field 13 of stat -t is the mtime)
for file in $(find . -name \*.zip); do
    ptime=$(stat -t "$file" | awk '{print $13}')
    FOUND_ARRAY+=("$file $ptime")
done
IFS=$'\n'
# Sort by mtime, newest first, then keep just the file name
FOUND_FILE=$(echo "${FOUND_ARRAY[*]}" | sort -k2nr | head -n1 | awk '{print $1}')
scp "$FOUND_FILE" remote_host:/backup/new_name.zip
