p4 unshelve results in "rename: ... Permission denied" - perforce

I use p4 under WSL on Windows 10. I can normally use other operations, from opening files for edit to shelving, etc.
Now I have my own shelved changelist, e.g. 12345, and the current state of my workspace is clean. I want to unshelve all shelved files in that changelist into the local workspace. I execute the following command
p4 unshelve -s 12345
The result is similar to the following for each file
//depot/mydir/myfile#22 - unshelved, opened for edit
... //depot/mydir/myfile#22 - also opened by someone@someone_at_work
rename: /mnt/d/mydir/myfile: Permission denied
I tried with strace p4 unshelve -s 12345 and it seems like it tries to create a temporary file, chmod it, then finally rename the temporary file to the actual file name as seen in the workspace. The last step fails with "-1 EACCES (Permission denied)", similar to the following
chmod("/mnt/d/mydir/tmp.1648.139747365556800.93", 0666) = 0
stat("/mnt/d/mydir/tmp.1648.139747365556800.93", {st_mode=S_IFREG|0777, st_size=16367, ...}) = 0
openat(AT_FDCWD, "/etc/localtime", O_RDONLY|O_CLOEXEC) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=2326, ...}) = 0
fstat(4, {st_mode=S_IFREG|0644, st_size=2326, ...}) = 0
read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\t\0\0\0\t\0\0\0\0"..., 512) = 512
lseek(4, 347, SEEK_CUR) = 859
read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\t\0\0\0\t\0\0\0\0"..., 512) = 512
read(4, ".\204\313\220\0\0\0\0/t\274\220\0\0\0\0000d\255\220\0\0\0\0001]\331\20\0\0\0\0"..., 512) = 512
read(4, "j\335T\220\0\0\0\0k\250[\220\0\0\0\0l\306q\20\0\0\0\0m\210=\220\0\0\0\0"..., 512) = 443
close(4) = 0
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2326, ...}) = 0
rename("/mnt/d/mydir/tmp.1648.139747365556800.93", "/mnt/d/mydir/myfile") = -1 EACCES (Permission denied)
write(2, "rename: /mnt/d/mydir/m"..., 90rename: /mnt/d/mydir/myfile: Permission denied
) = 90
brk(0x1b40000) = 0x1b40000
unlink("/mnt/d/mydir/tmp.1648.139747365556800.93") = 0
unlink("/mnt/d/mydir/tmp.1648.139747365556800.93") = -1 ENOENT (No such file or directory)
brk(0x1b30000) = 0x1b30000
gettimeofday({tv_sec=1596130958, tv_usec=871360}, NULL) = 0
I may have an idea of what the root cause might be, but I have no clue what I can do to make p4 work. Note that my WSL username is different from my P4 username.
Update:
This WSL issue is very close to my problem, and it is probably a compatibility issue between WSL and p4 at this point. Thanks to Samwise in the comments for pointing me in the proper direction as well.
Update 2:
The workaround I found is to:
Open all files for edit as listed in the shelved changelist
Unshelve the files from the shelved changelist
With this, we need to do some cleanup to get back to clean file names. Three steps are needed, as follows.
p4 files @=12345 | sed 's/#.*//g' | cut -d'/' -f5- | xargs -n 1 -I{} p4 edit -c 12345 {} (get the list of files from the changelist, strip the revision suffix, strip the first two path components, then open each file for edit in changelist 12345, the changelist of our shelved files). Note: we can also leave cut -d'/' -f5- out; as I tested it, it still works fine. Your stream/view paths might be complicated, so -f5- won't cover all cases.
p4 unshelve -s 12345
p4 resolve -af (or at to get theirs changes, or ay to only get yours)
PS: it's important to use edit -c <changelist>, because if you open the files in the default changelist, you won't be able to shelve all the files again to update it, as Perforce cannot detect any shelved files associated with that changelist.
Update 3:
If you use WSL (1 or 2), you can avoid these workarounds by directly using p4.exe and working with the Windows filesystem. To make this work, the Root: property as seen in your p4.exe client spec needs to be modified to a Windows filesystem path. Then continue using p4.exe for all other commands.

Related

Restore bash script

I have a backup directory with a tar file in that directory (/home/username/userhome/backup/userbackup-${currentdate}.tar.gz).
I would like to create a script that:
creates the restore directory if it does not exist
displays the contents of the backup directory containing tar files of previous backups
asks the user to enter the name of the tar file to restore
uses the tar command to restore the file to the new restore directory and log file
So far my script has
#!/bin/bash
mkdir restore -p
ls /home/username/userhome/backup
echo "Enter the file name to be restored"
read $filename
tar xvf "/home/username/userhome/restore/$filename" &>/home/username/userhome/restore/restore.log
I am a complete newbie so any help will be greatly appreciated.
Continuing from the comment, one thing you always want to do when writing a script is to validate each step along the way. For instance, if you cannot create the new restore directory, you don't want to continue and then attempt to extract a tar.gz file to a directory that doesn't exist.
Virtually all commands return 0 on success or a nonzero error on failure. You can use this to your advantage to check if the command succeeded and if it didn't you simply exit. A quick way to do that is:
command || exit 1
In your case creating the restore directory that could be:
mkdir -p restore || exit 1
You can add additional error messages if you like, but most of the time the error generated by the failure will be sufficient to tell you what went wrong.
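For instance (a minimal sketch; the /tmp/demo path is made up for illustration), a custom message can be attached to the failure branch like this:

```shell
#!/bin/bash

# Hypothetical directory used only for this illustration
restore="/tmp/demo/restore"

# On failure, print a diagnostic to stderr and exit with a nonzero status
mkdir -p "$restore" || { printf 'error: cannot create %s\n' "$restore" >&2; exit 1; }

echo "created $restore"
```

The braces group the error message and the exit together, so both run only when mkdir fails.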
Whenever you are operating on a fixed base directory using subdirectories of that base, it is best to create a variable for that base directory you can use in your script. For example:
#!/bin/bash
userdir="${1:-/home/username/userhome}"
budir="$userdir"/backup
restore="$userdir"/restore
Here userdir is the base directory and you have additional variables for the backup directory and restore directory. This makes it convenient to reference or operate on files in any of the directories. Note also how userdir can be set from the first command line argument or uses /home/username/userhome by default if no argument is given.
You can create restore and change to that directory, validating each step as follows:
mkdir -p "$restore" || exit 1
cd "$restore" || exit 1
For the menu, let select create the menu for you (now if you have hundreds of .tar.gz files, you may need to write a custom pager, but for a few dozen files, select will be fine). You can generate the menu and restore the selected file with:
select choice in "$budir"/*.tar.gz; do
    tar -xvf "$choice" &>"$restore/restore.log"
    break
done
Putting it all together, you would have:
#!/bin/bash
userdir="${1:-/home/username/userhome}"
budir="$userdir"/backup
restore="$userdir"/restore
mkdir -p "$restore" || exit 1
cd "$restore" || exit 1
select choice in "$budir"/*.tar.gz; do
    tar -xvf "$choice" &>"$restore/restore.log"
    break
done
Example Use/Output
Say I have a couple of .tar.gz files in a directory, e.g.
$ tree /home/david/tmpd/backup
backup
├── v.tar.gz
└── x.tar.gz
Then to create a restore directory under the tmpd directory I can run the script as:
$ bash ~/scr/tmp/restore.sh /home/david/tmpd
1) /home/david/tmpd/backup/v.tar.gz
2) /home/david/tmpd/backup/x.tar.gz
#? 2
By choosing 2 above, the x.tar.gz file is restored under the restore directory, e.g.
$ ls -al restore/
total 4
drwxr-xr-x 3 david david 80 Oct 29 01:00 .
drwxr-xr-x 4 david david 80 Oct 29 01:00 ..
drwxr-xr-x 3 david david 60 Oct 29 01:00 home
-rw-r--r-- 1 david david 57 Oct 29 01:00 restore.log
So the restore directory was successfully created, the restore.log was created and the .tar.gz file was restored under the restore directory.
The contents of restore.log are
$ cat restore/restore.log
home/david/scr/utl/xscrd.sh
home/david/scr/utl/xzdate.sh
(which were the two sample files I added to the x.tar.gz file)
Look things over and let me know if you have further questions.

How to retrieve the size of a specific stream before doing sync

I'm working on a script and need to query the size of a specific stream before I do a sync to a local hard drive. The local unsynced folder is empty.
I know there is a p4 sizes and I have tried the following command
p4 -u TheUserName -p ExternalServerUrl -c MyWorkspace sizes -s //Path/To/Stream -H
But it seems to report my local storage (which is empty).
So the output I get is this:
//Path/To/Stream 0 files 0 bytes
-H 0 files 0 bytes
So how can I query the server for the size before actually doing the sync?
Thanks in advance for any feedback!
You need to use wildcards to refer to a set of multiple files (including a depot directory path):
p4 sizes -s -H //Path/To/Stream/...
Note that if the -H goes after the file path, it's treated as another file path, which is why your output included the line -H 0 files 0 bytes.
Another important thing to note is that the depot path corresponding to the stream name is not necessarily the same thing as the stream contents -- if the stream is a virtual stream, or if it has import paths, some or all of its files live under other depot paths. For your purposes you probably want to instead use a client-path syntax, which will correspond to everything in the workspace (i.e. everything mapped by the workspace's stream):
p4 sizes -s -H //MyClient/...
The other solution, which will also work if the local directory isn't empty, is to take advantage of the totalFileSize field in the tagged output of p4 sync:
p4 -Ztag -F "%totalFileSize% bytes (%totalFileCount% files)" sync -n -q

Fuse file system with "default permission" option

I am new to FUSE. I have mounted it with the following command.
/mnt/fuse -o default_permissions -o allow_other -o nonempty -o hard_remove -d
Now if I log in as the "test" user and try to create a file called "testfile":
test@11540302:/registration> touch testfile
touch: setting times of `testfile': Permission denied
Strace output:
uname({sys="Linux", node="11540302", ...}) = 0
brk(0) = 0x8055000
brk(0x8076000) = 0x8076000
open("testfile", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK|O_LARGEFILE, 0666) = 3
dup2(3, 0) = 0
close(3) = 0
utimensat(0, NULL, NULL, 0) = -1 EACCES (Permission denied)
close(0) = 0
But "testfile" creation is successful, with the owner as the root user:
-rw-r--r-- 1 root trusted 0 Jan 19 13:51 testfile
I can understand that, since the fuse application is running as root, the file creation happened with root as the owner. Because of that, the test user cannot perform any operation on "testfile".
My question:
Since I have given "allow_other" while mounting, why does the test user not have privileges to access "testfile"?
Please correct me if my understanding is wrong.
I got the solution for this problem.
Detailed explanation for this issue.
Solution:
As @dirkt said, we need to handle permissions on our own.
Code to get the caller uid and gid:
fuse_get_context()->uid;
fuse_get_context()->gid;
Get the caller's user id and group id and set the ownership of the file/directory while creating it via the FUSE APIs.
There is always room for improvement. Kindly correct me if I am not correct.
Thank you dirkt for your explanation.
Try adding the test user to the fuse group:
usermod -a -G fuse test
Also, make sure that user_allow_other is uncommented in the FUSE configuration file (generally /etc/fuse.conf):
sed -i -e "s/#user_allow_other/user_allow_other/gi" /etc/fuse.conf
After running either of those, reboot the computer and try again.

How can I create a file as well as missing dirs along the path in linux system programatically?

I want to create a file on a Linux system in my program, say /a/b/c/test. Currently only the dir /a exists, and I want to do this in my program. Is there any function that can create the file as well as the missing dirs (that is, create b/c/test) in one call? open, fopen, and mkdir all seem unable to work if there are missing dirs along the path. Thank you.
From strace mkdir -p a/b/c:
mkdir("a", 0751) = 0
open("a", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_NOFOLLOW) = 3
fchdir(3) = 0
close(3) = 0
mkdir("b", 0751) = 0
open("b", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY|O_NOFOLLOW) = 3
fchdir(3) = 0
close(3) = 0
mkdir("c", 0751) = 0
In other words, you have to call mkdir() yourself, in a loop per directory, then create the file. This is how the mkdir executable does it (in C). Alternatively, why not run the mkdir -p /a/b/c command with execve — with the C arguments set up for the call properly, of course?
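The per-directory loop can be sketched in shell (the path below is hypothetical); it mirrors the sequence of mkdir() calls a C program would make before finally creating the file:

```shell
#!/bin/bash

# Hypothetical target file; only /tmp is assumed to exist already
path=/tmp/demo_mkpath/a/b/c/test

dir=${path%/*}                        # directory part of the path
IFS=/ read -r -a parts <<< "${dir#/}" # split into path components

cur=
for p in "${parts[@]}"; do            # one mkdir per missing component
    cur="$cur/$p"
    [ -d "$cur" ] || mkdir "$cur"
done

: > "$path"                           # finally, create the (empty) file
```

A C version would do the same walk with strtok/snprintf and mkdir(2), ignoring EEXIST at each step.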
I think you need two commands:
$ mkdir -p /a/b/c/ && touch /a/b/c/test
Sorry, you will have to be satisfied with a two-step process:
mkdir -p /a/b/c
touch /a/b/c/test
If you are using GLib (the lowest-level library of GTK, which can be used independently), you could just call g_mkdir_with_parents to make all the directories (like the mkdir -p command does).

Linux: 'transferring'/mirroring read-only permissions for symlinks (for webserver)

Please let me explain what I mean by the question:
This is the context: I'm a user on a webserver, where I have phpicalendar installed; then, I choose a directory, say /webroot/mylogin/phpicalendar/mycals to host my .ics calendar text files.
EDIT: Previously, instead of '/webroot', I had used '/root' - but I really didn't mean the Linux '/root' directory - I just wanted to use it as a stand-in for the real location on the webserver (so it serves just as a common point of reference). Otherwise, what I mean by a common point of reference is simply /webroot = /media/some/path ..
Then, I can enter this directory in the phpicalendar's config.inc.php:
$configs = array(
'calendar_path' => '/webroot/mylogin/phpicalendar/mycals',
...
Then, phpicalendar will run through this directory, grab the .ics files there (say, mycal.ics and mycal2.ics) and render them - so far, so good.
The thing is, I would now like to add a second calendar directory, located on the same webserver, but where I have only read permissions, say /webroot/protected/cals. I know that I have read permission, because in the shell I can do, say
$ less /webroot/protected/cals/maincal.ics
and I can read the contents fine.. So now:
If I enter /webroot/protected/cals as a 'calendar_path', phpicalendar can read and render the files there (say, 'maincal.ics', 'maincal2.ics') without a problem
However, phpicalendar can have only one 'calendar_path', so I can either use the protected calendars, or my customized calendars - but not both
So, I thought, I could symlink the protected calendars in my customized directory - and get the best of both worlds :)
So, here is a shell snippet of what I would do
$ cd /webroot/mylogin/phpicalendar/mycals
$ ls -la
drwxrwxrwx 2 myself myself 4096 2011-03-03 12:50 .
-rw-r--r-- 1 myself myself 1234 2011-01-20 07:32 mycal.ics
-rw-r--r-- 1 myself myself 1234 2011-01-20 07:32 mycal2.ics
...
$ ln /webroot/protected/cals/maincal.ics . # try a hard link first
ln: creating hard link `./maincal.ics' => `/webroot/protected/cals/maincal.ics': Invalid cross-device link
$ ln -s /webroot/protected/cals/maincal.ics . # symlink - works
$ ln -s ../../../protected/cals/maincal.ics relmaincal.ics # symlink via relative
$ ln -s mycal.ics testcal.ics # try a symlink to a local file
$ ls -la # check contents of dir now
drwxrwxrwx 2 myself myself 4096 .
-rw-r--r-- 1 myself myself 1234 mycal.ics
-rw-r--r-- 1 myself myself 1234 mycal2.ics
lrwxrwxrwx 1 myself myself 21 testcal.ics -> mycal.ics
lrwxrwxrwx 1 myself myself 56 maincal.ics -> /webroot/protected/cals/maincal.ics
lrwxrwxrwx 1 myself myself 66 relmaincal.ics -> ../../../protected/cals/maincal.ics
Ok, so here's what happens:
less maincal.ics works on shell
less relmaincal.ics fails with 'relmaincal.ics: No such file or directory' (even though shell autocompletion for the relative path did work when creating the symlink!)
When you open phpicalendar now, it will render mycal.ics, mycal2.ics and testcal.ics (and they will work)
however, maincal.ics and relmaincal.ics will not be parsed or displayed
Now - this could be that PHP cannot resolve symlinks; however I speculate that the situation is this:
When I do less maincal.ics - it is myself who is user, who has read permission for /webroot/protected/cals
phpicalendar (so Apache webserver user) can otherwise also access /webroot/protected/cals as read-only, when given 'hardcoded' path
phpicalendar is also capable of reading local symlinks fine
Thus, I suspect that the problem is: when trying to read the symlinks to the protected cals, the effective user during that operation is the Apache web user, which then doesn't have permission to access the target of a symlink to the protected/cals location!
The thing now is - I can easily copy the .ics files locally; however they are being changed by someone else, which is why I'd have preferred a symlink.
And my question is: can I do some sort of trickery, so that when phpicalendar/Apache tries to access a symlink to protected/cals, it 'thinks' that it is a local file - and otherwise, the contents of the protected/cals file are being 'piped' back to phpicalendar/Apache?? I guess I'm thinking something in terms of:
$ mkfifo mypipe
$ ln -s mypipe testpipe.ics
$ cat ./testpipe.ics # in one terminal
$ cat /webroot/protected/cals/maincal.ics > mypipe # in other terminal
... which would otherwise (I think) handle the permissions problem - except that, I don't want to cat manually; that would be something that would have to be done in the background, each time an application requests to read testpipe.ics:)
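A background loop could do that re-filling automatically. As a sketch (all paths below are stand-ins for the real /webroot ones, and the calendar content is faked): every time a reader opens the pipe, the feeder writes one fresh copy of the source file into it.

```shell
#!/bin/bash

# Stand-in paths for illustration only
dir=/tmp/demo_fifo
src="$dir/maincal.ics"
pipe="$dir/mypipe"

mkdir -p "$dir"
printf 'BEGIN:VCALENDAR\n' > "$src"   # fake calendar content
[ -p "$pipe" ] || mkfifo "$pipe"
ln -sf mypipe "$dir/testpipe.ics"     # the symlink the application reads

# Feeder: each write blocks until a reader opens the pipe.  Bounded to two
# rounds so this demo terminates; a real setup would loop forever as a
# supervised background service.
for _ in 1 2; do cat "$src" > "$pipe"; done &

cat "$dir/testpipe.ics"               # first reader gets a full copy
cat "$dir/testpipe.ics"               # so does the second
wait
```

This is still just a sketch: the permissions problem from the question remains, since the feeder must run as a user that can read the protected files.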
Well, thanks in advance for any comments on this - looking forward to hearing some,
Cheers!
Umm, I really doubt that the account the web server runs under can read anything under /root. That directory is usually mode 0700, user root, group root, or something very similar to that - meaning no non-root access is allowed. If you're running the web server as root, file read permissions are the least of your problems...
Your best bet then would be to place the read-only calendar files somewhere publicly available, and symlink to that location from wherever under /root you want to be able to access them.
Start by checking whether the Apache user can view your calendars:
you@host $ sudo -i -u <apache-user> -s /bin/bash
apache@host $ less /root/protected/cals/maincal.ics