Azure Container Instance and Azure Storage permission issue

I am running a GitLab instance in ACI with an Azure File Storage mount.
This is the output of the container:
storage_directory[/var/opt/gitlab/.ssh] (gitlab::gitlab-shell line 38) had an error: Mixlib::ShellOut::ShellCommandFailed: ruby_block[directory resource: /var/opt/gitlab/.ssh] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/package/resources/storage_directory.rb line 33) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of chmod 00700 /var/opt/gitlab/.ssh ----
STDOUT:
STDERR: chmod: changing permissions of '/var/opt/gitlab/.ssh': Operation not permitted
---- End output of chmod 00700 /var/opt/gitlab/.ssh ----
Ran chmod 00700 /var/opt/gitlab/.ssh returned 1
Is there anything I have to do to correct the permissions on the storage?
I see that some files are created, so the problem is specifically with the permissions on this directory...
I am using the official image from Docker Hub. I don't want to build a custom image layer just to correct permissions.
Any idea?
Thanks
EDIT:
My deployment looks like this: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files#mount-multiple-volumes
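(For reference, a single-Azure-file-volume deployment like this can be sketched with the Azure CLI; the resource names below are illustrative placeholders, and mounting multiple volumes requires the YAML approach from the linked doc.)
az container create \
  --resource-group my-resource-group \
  --name gitlab \
  --image gitlab/gitlab-ce:latest \
  --ports 80 443 22 \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key <key> \
  --azure-file-volume-share-name gitlab-data \
  --azure-file-volume-mount-path /var/opt/gitlab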

Azure Storage Account File shares use the SMB protocol; the share is mounted as root:root with 777 permissions, and those permissions cannot be changed from inside the container. If you need different permissions, you have to use Blob storage.
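You can confirm this from inside the container: the permissions come from fixed CIFS mount options, which chmod cannot alter. A quick check (the exact options will vary):
mount | grep cifs
# typically shows options such as ...,uid=0,gid=0,file_mode=0777,dir_mode=0777,...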

Related

azcopy of directory failing when copying between blob storage and file share - cannot transfer to the root of a service

I am trying to copy an existing directory from Blob Storage to a directory that already exists in an Azure file share, via the Azure CLI in the Azure portal.
I get the following error:
failed to perform copy command due to error: cannot transfer
individual files/folders to the root of a service. Add a container or
directory to the destination URL
What I have tried:
azcopy copy 'https://myazurename.blob.core.windows.net/subdirectory' 'https://myazurename.file.core.windows.net/blob-mirror/subdirectory' --recursive
azcopy copy 'https://myazurename.blob.core.windows.net/subdirectory/*' 'https://myazurename.file.core.windows.net/blob-mirror/subdirectory' --recursive
azcopy copy 'https://myazurename.blob.core.windows.net/subdirectory/*' 'https://myazurename.file.core.windows.net/blob-mirror/subdirectory/*' --recursive
Yet everything gives the same error.
I tried this in my environment and got the results below.
Initially, I tried the same commands and got the same error.
Command:
azcopy copy "https://<storage account name>.blob.core.windows.net/<c ontainer name>/directory1/?[SAS]" "https://<storage account name>.file.core.windows.net/fileshare1/directory1/" --recursive
After I added the SAS token to both the blob and file URLs, the files copied successfully from Blob Storage to the file share.
Command:
azcopy copy "https://<storage account name>.blob.core.windows.net/<c ontainer name>/directory1/?[SAS]" "https://<storage account name>.file.core.windows.net/fileshare1/directory1/?[SAS]" --recursive=true
Update
You can get a SAS token that is valid for both blob and file services:
Home -> storage account -> Shared access signature -> check the allowed services and resource types -> click Generate SAS and connection string.
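If you prefer the CLI, an account-level SAS covering both services can be generated roughly like this ('bf' = blob + file; the permissions and expiry values are just examples):
az storage account generate-sas \
  --account-name <storage account name> \
  --account-key <account key> \
  --services bf \
  --resource-types sco \
  --permissions rwdlc \
  --expiry 2025-12-31T23:59Z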

AzCopy remove files is failing

The azcopy command to remove files failed when I tried to remove a folder with subfolders and files from a file share with azcopy v10.
My azcopy command was as follows:
azcopy rm https://<storage-account-name>.file.core.windows.net/<file-share-name>/SystemScheduledJobs-22-06-01?<sas-token> --recursive=true
The error I was getting is:
panic: inconsistent path separators. Some are forward, some are back. This is not supported.
and the stack trace:
goroutine 565 [running]:
github.com/Azure/azure-storage-azcopy/v10/common.DeterminePathSeparator({0xc0071c6d00, 0x3e})
/home/vsts/work/1/s/common/extensions.go:140 +0x97
github.com/Azure/azure-storage-azcopy/v10/common.GenerateFullPath({0x0, 0x0}, {0xc0071c6d00, 0x3e})
/home/vsts/work/1/s/common/extensions.go:155 +0xe5
github.com/Azure/azure-storage-azcopy/v10/common.GenerateFullPathWithQuery({0x0, 0x248f909e0be}, {0xc0071c6d00, 0xb85c3d}, {0x0, 0x0})
/home/vsts/work/1/s/common/extensions.go:172 +0x34
github.com/Azure/azure-storage-azcopy/v10/ste.(*JobPartPlanHeader).TransferSrcDstStrings(0x248f8ec0000, 0x1853)
/home/vsts/work/1/s/ste/JobPartPlan.go:181 +0x28f
github.com/Azure/azure-storage-azcopy/v10/ste.(*jobPartMgr).ScheduleTransfers(0xc000029500, {0x17c3190, 0xc0005b2000})
/home/vsts/work/1/s/ste/mgr-JobPartMgr.go:418 +0x692
github.com/Azure/azure-storage-azcopy/v10/ste.(*jobMgr).scheduleJobParts(0xc0007a3880)
/home/vsts/work/1/s/ste/mgr-JobMgr.go:851 +0x3e
created by github.com/Azure/azure-storage-azcopy/v10/ste.NewJobMgr
/home/vsts/work/1/s/ste/mgr-JobMgr.go:180 +0x9a6
I would be really grateful if anyone could provide more insight into this issue.
I tried to reproduce this in my environment using the PowerShell command below, and it completed successfully.
PowerShell command:
azcopy rm https://<storage-account-name>.file.core.windows.net/<file-share-name>/<path>?<sas-token> --recursive=true
Note: the file share URL is https://<storage-account-name>.file.core.windows.net/<file-share-name>/<path>, and the SAS token was generated at the storage account level.
The panic about inconsistent path separators means the path in your URL mixes forward slashes and backslashes, so kindly check the path in your URL (it should use forward slashes only) and try to reproduce again.
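For illustration, with placeholder names (the folder names below are hypothetical):
# fails: the path mixes \ and / separators
azcopy rm "https://<storage-account-name>.file.core.windows.net/<file-share-name>/folder\subfolder?<sas-token>" --recursive=true
# works: forward slashes only
azcopy rm "https://<storage-account-name>.file.core.windows.net/<file-share-name>/folder/subfolder?<sas-token>" --recursive=true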
Reference:
azcopy remove | Microsoft Docs

Missing permissions to create folder from Java application

I am setting up a Spring Boot application, and when running it, it should generate a folder in the source directory (see step 3: https://www.baeldung.com/spring-boot-h2-database).
But when running the application I receive the following error:
org.h2.message.DbException: Log file error: "/data/sample.trace.db", cause: "org.h2.message.DbException: Error while creating file ""/data"" [90062-200]" [90034-200]
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.message.TraceSystem.logWritingError(TraceSystem.java:294)
at org.h2.message.TraceSystem.openWriter(TraceSystem.java:315)
at org.h2.message.TraceSystem.writeFile(TraceSystem.java:263)
at org.h2.message.TraceSystem.write(TraceSystem.java:247)
at org.h2.message.Trace.error(Trace.java:194)
It seems to be a permission problem, but I do not understand why. My current user has admin permissions. What am I missing here?
When I encounter this problem on my machine, I proceed through the following steps:
1. If I don't know what user and group I am right now: $ whoami && groups
2. Check which user the program is executed as (I'm not into Java, so e.g. in PHP: echo exec('whoami');)
3. Check who has access to the directory: $ ls -la
3.1 If only the owner has access and you are not the owner: $ chown user:group file
3.2 If both group and owner should have access, consider: $ chmod 770 file
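In the H2 case above specifically, the trace shows H2 failing to create /data at the filesystem root, which an ordinary user cannot do without elevation even if they belong to an admin group. Applying steps 3.1/3.2, a minimal sketch of a fix, assuming the application really should use the absolute path /data:
# create the directory as root, then hand ownership to the current user
sudo mkdir -p /data
sudo chown "$(whoami)":"$(id -gn)" /data
Alternatively, pointing the H2 JDBC URL at a relative path such as ./data avoids needing a root-owned directory at all.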

SSH on console google cloud permission denied (publickey) with google-cloud-sdk file error

I'm new to cloud computing and I'm trying to use SSH to control my VM instance, but when I use the command (with debug)
gcloud compute ssh my-instance-name --verbosity=debug
it shows this error:
DEBUG: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Traceback (most recent call last):
  File "/google/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
    resources = calliope_command.Run(cli=self, args=args)
  File "/google/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 784, in Run
    resources = command_instance.Run(args)
  File "/google/google-cloud-sdk/lib/surface/compute/ssh.py", line 262, in Run
    return_code = cmd.Run(ssh_helper.env, force_connect=True)
  File "/google/google-cloud-sdk/lib/googlecloudsdk/command_lib/util/ssh/ssh.py", line 1256, in Run
    raise CommandError(args[0], return_code=status)
CommandError: [/usr/bin/ssh] exited with return code [255].
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I tried the solution in this link, but it didn't work:
https://groups.google.com/forum/#!topic/gce-discussion/O-c10TM4ZLM
SSH error code 255 is a general error returned by GCP. You can try one of the following options.
1. Wait a few minutes and try again. It is possible that:
The instance has not finished starting up.
Metadata for SSH keys has not finished being propagated to the project or instance.
The Guest Environment has not yet read the SSH keys metadata.
2. Verify that SSH access to the instance is not blocked by a firewall.
gcloud compute firewall-rules list | grep "tcp:22"
If necessary, create a firewall rule to allow TCP 22 for a given VPC network, subnet, or instance tag.
gcloud compute firewall-rules create ssh-allow-incoming --priority=0 --allow=tcp:22 --network=[VPC-Network]
3. Make sure that the root volume is not out of disk space (the gcloud sketch after this list shows how to pull the console log). Messages like the following will be visible in the console log when it is out of disk space:
...No space left on device...
...google-accounts: ERROR Exception calling the response handler.
[Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp',
'/usr/tmp', '/']...
4. Make sure that the instance has not run out of memory.
5. Verify that temporary SSH keys metadata is set for either the project or the instance (the sketch below shows how to inspect it).
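For points 3 and 5, a couple of read-only gcloud commands help; the instance name and zone below are placeholders:
# pull the serial console log and look for "No space left on device"
gcloud compute instances get-serial-port-output my-instance-name --zone=us-central1-a
# inspect project-level metadata (ssh-keys, enable-oslogin)
gcloud compute project-info describe --format="value(commonInstanceMetadata.items)"
# inspect instance-level metadata
gcloud compute instances describe my-instance-name --zone=us-central1-a --format="value(metadata.items)"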
Finally, you could follow any of their supported or third-party methods.
Assuming you have the correct IAM permissions, it is much easier (and preferred by GCP) to use OS Login to SSH into an instance, rather than managing SSH keys yourself.
In Cloud Shell, enter this:
gcloud compute --project PROJECTID project-info add-metadata --metadata enable-oslogin=TRUE
This enables OS Login on all instances in the project. Instead of using SSH keys, GCP will check your IAM permissions and authenticate based on those.
If you are not a project owner, make sure you have the roles/compute.osLogin or roles/compute.osAdminLogin role in Cloud IAM.
Once enabled, try SSHing into the instance again using the command you posted.
This is not a concrete answer, but I think you should first set your project:
gcloud config set project PROJECT_ID
Then
gcloud compute ssh my-instance-name --verbosity=debug
This link would be useful:
https://cloud.google.com/sdk/gcloud/reference/compute/ssh

VSTS CopyFiles task produces error 'Could not write to dest file (code=EPERM)'

I have VSTS Agent running as a service under the 'Network Service' account.
When I attempt to use the "Copy Files" task, it sometimes generates this error:
"Failed cp: cp: copyFileSync: could not write to dest file (code=EPERM)..."
Example Error:
2018-09-25T15:26:00.2055152Z ##[error]Error: Failed cp: cp: copyFileSync: could not write to dest file (code=EPERM):F:\Legacy\WinTools.Web\Web.config
Other posts on Stack Overflow mentioned an open file or insufficient rights to perform the action.
The issue turned out to be that the 'NETWORK SERVICE' account did not have modify rights on the target folder. After adding modify rights for that account, the release pipeline was able to copy over the desired files successfully.
I am adding this for posterity in the hope that someone else out there avoids the same issue I encountered.
Question:
Why does the Copy Files task in VSTS sometimes produce the error "Failed cp: cp: copyFileSync: could not write to dest file (code=EPERM)"?
Possible Answers:
1. The file is potentially locked
2. The user does not have sufficient rights to perform the action
My Answer:
In my case, the issue was that the service account under which the VSTS agent was running ('Network Service') did not have the appropriate rights to modify files in the specified folder. By enabling modify rights I was able to avoid the exception noted.
My 'Network Service' account had full rights on the web site deployment folder, and the account under which the agent was running also had full rights, as it was a member of the Administrators group.
I explicitly added the user to the folder and gave it full control, and now there doesn't appear to be any permissions issue.
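For reference, granting those rights can also be scripted from an elevated command prompt with icacls; the path below is the one from the error message above, and the account should be whatever identity your agent runs as:
rem grant Modify, inherited by subfolders (CI) and files (OI); /T updates existing children too
icacls "F:\Legacy\WinTools.Web" /grant "NETWORK SERVICE:(OI)(CI)M" /T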
