Azure File Share - Mount

I created an Azure File Share on my Storage Account (v2). Under the Connect tab I copied the commands to mount the File Share over SMB 3.0.
I didn't achieve my goal. Error received: mount error(115): Operation now in progress
The Azure troubleshooting link didn't help: https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems#mount-error115-operation-now-in-progress-when-you-mount-azure-files-by-using-smb-30
I have a Debian 10, fully updated (yesterday). I also tried from a Docker image, ubuntu:18.04, but the result didn't change, so I suspect the problem is more than just a mistake on my side.
The error is returned by the last command:
$> mount -t cifs //MY_ACCOUNT.file.core.windows.net/MY_FILE_SHARE /mnt/customfolder -o vers=3.0,credentials=/etc/smbcredentials/MY_CREDENTIALS,dir_mode=0777,file_mode=0777,serverino
What I've tried:
Changing the SMB version from 3.0 to 3.11 ---> NOTHING
Using the username and password options instead of a credentials file ---> NOTHING
Using smbclient -I IP -p 445 -e -m SMB3 -U MY_USERNAME \\\\MY_ACCOUNT.file.core.windows.net\\MY_FILE_SHARE ----> NOTHING
Thanks for your help.
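For what it's worth, mount error(115) usually means the TCP connection to the server never completed, and a very common cause is that outbound port 445 is blocked (many ISPs and corporate networks block it). A quick connectivity check before fiddling with mount options, as a minimal sketch assuming netcat is installed:
# times out instead of connecting if outbound port 445 is blocked
nc -zvw3 MY_ACCOUNT.file.core.windows.net 445
If this cannot connect, no vers= option will help until port 445 is reachable.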

Related

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is /opt/fileprocessing/input and the target is /opt/fileprocessing/output.
Scenario 1 - without volume mount
When I start my docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with volume mounts from the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount happened successfully. It fails when renaming the file and posting it to the destination (that part is an application-level error, so not much help there).
I tried the following to make it work:
Made sure the host and container users are the same and have the same UID and GID
The file has 775 permissions set
The container folder has 777 permissions
The same file was used as in scenario 1
Same file name and format as well
container OS
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
host OS
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the files in a different way, as below
docker run -d -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
where the fileprocessing folder on both the container and the host has two subdirectories named input and output.
This way of mounting works for me without any issues.
Please let me know why scenario 2 failed and how to fix it.
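A likely explanation (an inference from the mount layout, not something stated in the question): in scenario 2 the input and output directories are two separate bind mounts, and the rename(2) system call cannot move a file between different mounts; it fails with EXDEV ("Invalid cross-device link"). In scenario 3 both subdirectories sit on one mount, so the rename succeeds. You can verify the layout from inside the container:
# each -v flag creates its own mount, listed as a separate entry
findmnt /opt/fileprocessing/input
findmnt /opt/fileprocessing/output
mv only works across mounts because it falls back to copy-and-delete when rename(2) fails; an application that calls rename directly has to do the same, or use a single mount as in scenario 3.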

SMB Client on azure server not deleting file from azure storage

I have a Flask webapp running on an Ubuntu Azure server. I also have an Azure storage account, and to access the storage from the webapp I use SMB. This has worked so far for adding and updating files on the server, but when I tried to delete a file it didn't work. No error or anything; it just did nothing, and the file is still on the server. I tried the command locally and it worked fine. Is there something I'm doing wrong, and how could I fix this problem? Here's the command I've been using:
smbclient //name.file.core.windows.net/website -mSMB3 -e -Uname%password -c 'rm tempplugins/test2.ini'
This may not solve your exact problem, but I was attempting to perform operations on a file share on an Azure Storage Account from an Azure VM running CentOS, and I ran into several different problems. It took me a while to get the kinks worked out.
In my case, I had to use backslashes, but I had to double them so that they were escaped properly. Example:
smbclient \\\\storageaccount.file.core.windows.net\\sharename
Additionally, we weren't using an integrated Active Directory, so we had to use the storage account name as the username, and it had to be prefixed with "Azure", as in "Azure\storageaccount". And don't forget that backslashes have to be doubled! Also, the password was the storage account key. Example:
smbclient \\\\storageaccount.file.core.windows.net\\sharename -U Azure\\storageaccount%key
I used the "-d" option to debug the command-line options for smbclient. However, in my case, the "-d" option had to be at the end of the command or it interfered. If it hadn't been for the clues provided by "-d", I never would have gotten this to work. Example:
smbclient \\\\storageaccount.file.core.windows.net\\sharename -U Azure\\storageaccount%key -d
Here's a simple one-liner that lists a directory of a file share on an Azure Storage Account. Example:
smbclient \\\\storageaccount.file.core.windows.net\\sharename -U Azure\\storageaccount%key -c dir -d
I hope that this helps someone else, as I must have blown 2 to 3 hours getting this worked out.
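Applied to the original question above, a hedged sketch (share, path, and credentials taken from the question; the explicit debug level is an addition) that should show why the rm silently does nothing:
smbclient //name.file.core.windows.net/website -mSMB3 -e -U name%password -c 'cd tempplugins; rm test2.ini; ls' -d 3
The -c option takes a semicolon-separated list of smbclient commands, so the trailing ls confirms whether the file is actually gone.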

Storage object access with docker FTP getting error 550

I'm using object storage from Scaleway. I want to be able to access it over FTP and perform some actions. Right now I can connect and view files/folders, but I can't perform actions like renaming a file or creating a directory.
I'm using CentOS 7 as the operating system.
Here is the mounted volume on my host:
drwxrwxr-x. 1 root root 0 Jan 1 1970 mnt
I'm using the following command to create a container:
docker run -d --name ftpd_server -p 21:21 -p 30000-30009:30000-30009 -e "PUBLICHOST=123.123.123.123" -v /mnt:/home/ftpusers/userA stilliard/pure-ftpd:latest
Then I enter the container with:
docker exec -it ftpd_server /bin/bash
And I create the user
pure-pw useradd userA -f /etc/pure-ftpd/passwd/pureftpd.passwd -m -u ftpuser -d /home/ftpusers/userA
Then, when I try to create a directory, I get error 550 (screenshot omitted), although I can still see my contents.
I'm using stilliard/pure-ftpd as the docker image
I also tried giving ftpuser root privileges by changing its entry in /etc/pure-ftpd/passwd/pureftpd.passwd from 1000.1000 to 0.0, but the problem persists.
I also found an issue on their GitHub which is similar to mine, https://github.com/stilliard/docker-pure-ftpd/issues/35#issuecomment-325583705, but I can't make it work.
FTP response code 550 indicates that the Scaleway object storage user does not have permission to complete the requested operation. The first place I would look is the Scaleway / remote host user account permissions. My guess is that the user account does not have the required permissions.
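One way to narrow this down (a suggestion, assuming the bucket is mounted on the host at /mnt via something like s3fs): try the same operations directly on the host mount, bypassing FTP entirely. If they fail there too, the problem is the object-storage mount or its credentials rather than pure-ftpd:
# run on the host, outside the container; testdir and somefile are placeholders
mkdir /mnt/testdir
mv /mnt/somefile /mnt/somefile.renamed
The odd listing above (size 0, date Jan 1 1970) is typical of FUSE-based object-storage mounts, which often do not support the full set of POSIX operations an FTP server needs.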

How to connect to Samba SMB2/SMB3 with domain authentication from Ubuntu 16.04 using GUI?

I can't connect to Samba after blocking the SMB1 protocol.
Previously, I just clicked "Connect to Server" in Nautilus and entered smb://mySambaHost/. Now, I can't get connected this way:
Unhandled error message: Failed to retrieve share list from server: Connection timed out
I tried installing smbclient, and I can connect to the server, but only with a command line that isn't friendly (for me):
/usr/bin/smbclient \\\\my_server\\shared_folder -U WINDOWS_DOMAIN/WINDOWS_DOMAIN_USERNAME -W WINDOWS_WORKGROUP -mSMB3
and I have to input the password every time.
From this I infer that the server and login/password are working.
I also tried mounting the Samba shared folder, but could not do it (I don't know how):
sudo mount -t cifs -o username=WINDOWS_DOMAIN/WINDOWS_DOMAIN_USERNAME,password=WINDOWS_DOMAIN_PASSWORD,rw //myServer/shared_folder /media/windowsshare
/media/windowsshare exists.
Can someone help me easily get access to Samba shared folders using the SMB2/SMB3 protocol?
Thanks!
Try adding the version (vers=3.0):
sudo mount -t cifs -o username=WINDOWS_DOMAIN/WINDOWS_DOMAIN_USERNAME,password=WINDOWS_DOMAIN_PASSWORD,rw,vers=3.0 //myServer/shared_folder /media/windowsshare
You could also try with vers=2.0 or vers=2.1.
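To avoid typing the password on every mount (a sketch, not part of the answer above; the file path is an example), you can also keep the credentials in a root-only file and use the credentials= option, as the Azure mount at the top of this page does:
# /etc/samba/creds_windowsshare (chmod 600)
username=WINDOWS_DOMAIN_USERNAME
password=WINDOWS_DOMAIN_PASSWORD
domain=WINDOWS_DOMAIN
Then mount with:
sudo mount -t cifs -o credentials=/etc/samba/creds_windowsshare,rw,vers=3.0 //myServer/shared_folder /media/windowsshare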

HDP 2.5 Hortonworks ambari-admin-password-reset missing

I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seems like the ambari-admin-password-reset command is missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now it seems like the command is there, but I have different passwords for the same user: one for the console and one for PuTTY.
I have tried to work out why, for the same user 'root', there are two different passwords (one for the VirtualBox console and one for PuTTY) that I can log in with. I see different commands on each box. More than that, when I share a folder I can only see it in the VirtualBox console but not in the PuTTY console, which is really frustrating.
How can I make what I see from PuTTY match what I see from the VirtualBox console?
I think it's somehow related to TTY, but I am not sure.
EDIT:
Running commands on the VirtualBox machine, with output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about Docker containers. It seems like the machine's port 2222 is the SSH port for the HDP 2.5 container, not for the hosting machine.
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to all helpers.
So now I had the time to analyze the sandbox VM and write it up for other users.
As you stated correctly in your edit of the question, it's the Docker container setup of the sandbox that confuses things, with two separate root users:
via ssh root@127.0.0.1 -p 2222 you get into the Docker container called "sandbox". This is a CentOS release 6.8 (Final) container holding all the HDP services, especially the ambari service. The configuration enforces a password change at first login for the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the Ambari admin.
via console access you reach the Docker host running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question about the hanging docker exec: it seems to be a bug in that specific Docker version. If you google it, you will find issues discussing this or similar problems with Docker.
So I thought that it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there is not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
umount /boot
mv /boot /boot.org
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
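A generic precaution, not part of the original steps: before that final reboot it is worth checking that the kernel images and the new GRUB config actually landed on the root partition:
ls /boot/vmlinuz-* /boot/initramfs-* /boot/grub2/grub.cfg
df -h /boot
df should no longer show a separate device mounted at /boot.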
After that I found out that the Docker configuration was broken and Docker did not start anymore. In the logs it complained:
"Error starting daemon: error initializing graphdriver:
\"/var/lib/docker\" contains other graphdrivers: devicemapper; Please
cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
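A note on standard systemd behavior, not part of the original answer: after editing a unit file, systemd has to reload it before the change takes effect:
systemctl daemon-reload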
After a service docker start and a docker start sandbox, the container worked again; I could log in to the container, and after an ambari-server restart everything worked again.
And now, with the new Docker version 1.12.2, docker exec sandbox ls works again.
So, to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice before upgrading your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy processes which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
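Rather than reading the container IP off the docker-proxy command line, you can also ask Docker for it directly (a standard docker inspect one-liner):
docker inspect -f '{{.NetworkSettings.IPAddress}}' sandbox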
I could ssh into the container ip (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions for installing Hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" for the login and "hadoop" for the password, change the root password, and then run "ambari-admin-password-reset" to reset the Ambari password.
To be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
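For example, on macOS (where the hosts file lives at /private/etc/hosts; on Linux it is /etc/hosts):
echo "127.0.0.1 sandbox-hdp.hortonworks.com" | sudo tee -a /private/etc/hosts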
If you are getting an incorrect-password error, you can reset the password from the recovery menu (screenshots omitted):
In the top-right corner, click the power button >> the power-off drop-down >> Restart; when it boots up, press the Esc key to get into the recovery menu.
Select "Advanced options" and hit Enter.
Select "Recovery mode" and hit Enter.
Select "root" and hit Enter.
At the root prompt, remount the filesystem read-write, look up your username, and change the password:
mount -o remount,rw /
ls /home
passwd username
(replace "username" with your own user)
Enter the new password twice when prompted.
Hopefully you changed the password (:
