I want to share a folder from the host (Linux) with a Linux VM running on it.
After some research I set up 9p (version = 9p2000.L) sharing, following the instructions given at the link below:
http://www.linux-kvm.org/page/9p_virtio
PROBLEM: I am unable to read/write to the mounted folder.
The mount command shows the mounted fs as: 9p (rw,trans=virtio,version=9p2000.L)
Even a simple "ls" command after entering the mount point says: Permission denied.
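For context, a minimal version of the setup from that page looks roughly like the following; /srv/share and the mount tag "hostshare" are placeholders, not my exact values, and the "..." stands for the rest of the usual VM options:

# On the host (QEMU command line; libvirt has an equivalent <filesystem> element):
qemu-system-x86_64 ... -virtfs local,path=/srv/share,mount_tag=hostshare,security_model=mapped-xattr

# In the guest:
sudo mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare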
Any help is appreciated
Looks like the 9p kernel module is broken in some kernels (3.5, 3.11). I upgraded my guest to 3.10.9 and things started working! :)
I just went with a hunch and have no bug reports or anything to share. Now that I've googled it, I see there are a few others facing a similar problem who have solved it in a similar fashion.
https://bugs.archlinux.org/task/36992
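If you hit the same thing, it's worth confirming what the guest is actually running before digging further:

uname -r          # guest kernel version
lsmod | grep 9p   # 9p, 9pnet and 9pnet_virtio should be listed (unless built into the kernel)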
-HTH
I'm attempting to build chromium on WSL2 according to this guide.
When I get to fetch --nohooks chromium, everything loads for a while and then I get the following error:
OSError: [Errno 30] Read-only file system: '/home/ghadar/chromium/src/third_party/libprotobuf-mutator/_gclient_src_0ve3yqhz'
I've looked everywhere and couldn't find any explanation for this error.
I'm running WSL2 on Windows 11 with Ubuntu 20.04 as the Linux distribution.
A few possibilities that I can think of:
Filesystem corruption (it happens)
Out of disk space on the host Windows drive
For the first one, see issue #6220 on the WSL GitHub. The recommended solution is as follows (it might be a good idea to back up any critical files first):
# Identify the correct drive:
mount | grep ext4
# Take the drive returned (e.g. /dev/sdd) and:
sudo e2fsck /dev/sdd -p
It could also be a disk-space issue. The Chromium source is pretty large, at around 57GB. Is it possible that you are out of disk space on the Windows drive? If so, WSL may still think it has space remaining, because it lives on a sparse virtual disk that can grow to 250GB or 1TB (depending on the WSL release), but once space on the host drive is gone, WSL probably sets the device read-only.
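An easy way to check both sides from inside WSL (assuming your system drive is C: and is mounted at /mnt/c, which is the default):

df -h /        # space as seen by the WSL ext4 virtual disk
df -h /mnt/c   # space actually left on the Windows C: drive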
If that's the case, and you have a larger drive (SSD/NVMe recommended for performance, of course), you can "move" the virtual disk if you'd like -- see my Super User answer on the topic.
Or you might try fetching without the full repo history, as suggested in the docs, with the --no-history flag.
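For example (worth double-checking the flag name against the current Chromium docs, since the tooling changes over time):

fetch --nohooks --no-history chromium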
I am trying to mount a Windows shared folder from Red Hat Linux. I have the line below in /etc/fstab:
//TheWindowsIP/ShareFolder /LinuxPath/LinuxFolder cifs username=username,password=password,domain=windowsDomain,dir_mode=0777,file_mode=0777 0 0
When I run "mount -a" I get a "Resource temporarily unavailable" error. Can someone tell me how I can solve this issue? Or maybe advise another way to access the Windows folder from Red Hat Linux (CIFS is driving me crazy).
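For reference, the equivalent one-off mount for testing outside of /etc/fstab would be something along these lines (same placeholder names as above):

sudo mount -t cifs //TheWindowsIP/ShareFolder /LinuxPath/LinuxFolder -o username=username,password=password,domain=windowsDomain,dir_mode=0777,file_mode=0777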
OK, after a tremendous effort of searching and trying different approaches, it turns out the only action I needed to solve this problem was a reboot. I guess, somehow, the entries in /etc/fstab get mounted correctly when Linux starts, but cannot be mounted properly when I run mount -a. Ahhh, what can I say...
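One guess, untested here: on a systemd-based Red Hat release, /etc/fstab entries are turned into mount units, so reloading systemd before retrying can behave differently from a plain mount -a:

sudo systemctl daemon-reload
sudo mount -a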
I've been using Samba on my Linux Mint machine to map to a Windows network drive of a large university for a couple of years. It has always worked.
Linux Mint version: 18 (Sarah)
Kernel: 4.4.0-164-generic
Samba version: Version 4.3.11-Ubuntu
I use their VPN and then map to the Samba share with:
smb://DOMAIN;user#subdomain.address.edu/ssd_drives_k/my/path/to/files
This has worked for ages, but recently the problem has arisen that I can only read (and therefore open/copy) some files but not others. I can see everything in Nemo, but some files (of all types: Word, PDF, etc.) WILL NOT copy to my computer or open in their respective program. There doesn't seem to be any particular pattern as to which files it affects, but basically some are visible but inaccessible to me.
The error I get on those files is "Function not implemented", for example:
When trying to copy some files to my desktop, I get a "Function not implemented" error window (i.e. "Error while copying FILE/PATH"; "There was an error while copying the file into /path/path"; and then Cancel or Skip options - "Show more details" says "Function not implemented").
When trying to open some PDFs, I get "Function not implemented" in my PDF reader (the default system reader; if I try Okular it simply doesn't open and there is no error).
Hence, there's a bunch of stuff I can no longer access... The IT team at the university are normally really great, but in this instance they have just left me hanging with nothing. It's frustrating, but I wondered if anyone here might be able to help answer what is causing this and how to correct it?
Thanks to this answer I have deduced a solution: https://serverfault.com/questions/414074/mount-cifs-host-is-down/929331#929331
I'm not sure why, but I suspect an SMB upgrade on my computer means I am no longer compatible with their (older?) version.
This now works if I do it manually in the terminal and specify vers=1.0:
sudo mount -t cifs //subdomain.address.edu/ssd_drives_k/my/path/to/files /mnt/driveiwant -o username=user,domain=DOMAIN,vers=1.0
But vers=3.0 doesn't work:
sudo mount -t cifs //subdomain.address.edu/ssd_drives_k/my/path/to/files /mnt/driveiwant -o username=user,domain=DOMAIN,vers=3.0
So it seems they may need to upgrade their gear, I am not sure, but this works!
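A follow-up note: SMB 1.0 is old and generally considered insecure, so before settling on it, it may be worth checking whether one of the intermediate dialects that mount.cifs supports (e.g. vers=2.0 or vers=2.1) works against their server:

sudo mount -t cifs //subdomain.address.edu/ssd_drives_k/my/path/to/files /mnt/driveiwant -o username=user,domain=DOMAIN,vers=2.1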
I was unable to get Homestead to boot using the directions provided here https://laravel.com/docs/5.7/homestead using Hyper-V. The original issue was that the machine would not boot; it would just hang indefinitely. Once I fixed this issue I encountered two more before I was able to complete the vagrant up command.
I am not 100% sure this is the right place to post this, but I have spent about two weeks off and on trying to solve this issue, and hopefully I can save someone else a little time if they have similar issues. I was able to use Homestead with VirtualBox, but it was extremely inconvenient to not have Hyper-V running on my PC, so I uninstalled VirtualBox and tried to set up Homestead using Hyper-V.

For me the VM would not boot at all. When I looked at it in Hyper-V Manager it was just hung at startup. This turned out to be because the box is set up as a Generation 1 machine with the drive connected as IDE. For me the solution was to create a new Generation 2 VM and connect the provided drive using SCSI. I then disabled secure boot and I was able to boot.

Then it failed during the provisioning script while trying to mount the default Vagrant share. I could not figure out how to modify this call, so I ended up disabling it; as far as I can tell it is not needed for Homestead. My third issue was not being able to mount any of the user-defined shares in the Homestead.yaml file. Some googling showed that I needed to make this call with no additional parameters, which the script did not seem to provide an option to do. I modified the script and voilà, the vagrant up command completed successfully.

Below are the details of the steps I took. If there is a simpler way to get Vagrant Homestead running using Hyper-V, I would appreciate the advice.
Issue 1: Will not boot
Description: The issue seems to be that the box tries to boot as a Generation 1 machine using the IDE controller. This does not seem to work for my installation of Windows 10 Pro.
Resolution:
1. Create a new VM using Generation 2 and attach the existing
"ubuntu-18.04-amd64.vhdx" to it using SCSI.
2. Boot this VM and then shut it down.
3. Turn off secure boot.
4. Replace the virtual machine files in [VagrantInstallFolder]\boxes\laravel-VAGRANTSLASH-homestead\6.4.0\hyperv with the new ones created above.
5. Delete the newly created VM from Hyper-V.
Issue 2: Will not mount default Vagrant share
Error Message:
==> homestead-7: Machine booted and ready!
No valid IDs were given to the NFS synced folder implementation to
prune. This is an internal bug with Vagrant and an issue should be
filed.
Description: The vagrant up command fails when attempting to mount the default Vagrant share. I found no way to override the parameters for this call, so it was always trying to mount using NFS, which is not supported on Windows. If it is possible to override the settings for this call, that would be the preferable way; but the only way I could figure out to get the provisioning script to continue executing was to disable this share.
Resolution:
1. Modify the scripts\homestead.rb file and add the code below to the
Hyper-V config settings section "Configure A Few Hyper-V Settings". This
will disable the default file share, but you can still add your own from
the Homestead.yaml file after completing Issue 3.
#Disable the default Vagrant file share
config.vm.synced_folder ".", "/vagrant", disabled: true
Issue 3: User-defined shares in the Homestead.yaml file still error.
Error Message:
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o vers=3,credentials=/etc/smb_creds_vgt-96269f65d23acb279735d26264428995-66f0bd5cbca4d218f5f0b8a5f1712727,uid=1000,gid=1000,nolock,udp,noatime //192.168.1.107/vgt-96269f65d23acb279735d26264428995-66f0bd5cbca4d218f5f0b8a5f1712727 /home/vagrant/code
The error output from the last command was:
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Description: The vagrant up command fails when attempting to mount the user-defined shares in the Homestead.yaml file. The mount seems to be passing unneeded parameters to the mount command. We need to override the synced folder call in the scripts\homestead.rb file so that it does not pass them.
Resolution:
1. In the "Register All Of The Configured Shared Folders" section replace the line below.
Replace
config.vm.synced_folder folder['map'], folder['to'], type: folder['type'] ||= nil, **options
With
config.vm.synced_folder folder['map'], folder['to'], type: "smb"
2. Then run "vagrant up --provider hyperv"
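A note on the SMB shares: when the synced folder type is "smb", Vagrant will normally prompt for a Windows username and password during vagrant up so it can create the share on the host. If the box already exists and you have only changed scripts\homestead.rb, re-running provisioning should be enough:

vagrant reload --provision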
What Vagrant Plugins are installed (vagrant plugin list)?
I was getting the following error:
No valid IDs were given to the NFS synced folder implementation to prune. This is an internal bug with Vagrant and an issue should be filed.
Previously, I'd been using NFS and had the following plugin installed: https://github.com/winnfsd/vagrant-winnfsd.
Once I removed the plugin via vagrant plugin uninstall vagrant-winnfsd, provisioning worked.
I had the same issue on Windows 11 and I found something that might help you:
Open Hyper-V Manager on Windows.
You'll find the VM created by the vagrant up command.
Run it from the Manager and log in to the Ubuntu VM.
Try the vagrant up command again inside your project folder.
It should work now!
I hope this helps you.
I want to make a virtual machine for web development, with an Arch Linux guest, that acts like a Vagrant box. I don't want to use a Vagrant box because I want to learn how to do things on my own first, and I want to keep the disk space used by the machine to a minimum. For this I have installed and configured apache2, php and mariadb, with a total of 640M used on disk. I have forwarded guest port 80 to host 127.0.0.1:8080.
I encounter an error with the vboxfs module. I have installed virtualbox-guest-module as described here, and after a machine reboot I tried:
mount -t vboxfs share_name mount_location and I get this error: unknown filesystem type 'vbox'.
I have searched Google and all the results refer to the virtualbox-guest-utils package from Arch Linux, but the problem is I don't need all the dependencies that package has (alsa, xorg, video driver, etc.) and I don't know which deps I do or don't need from that package... so I wonder if it is possible, and sufficient, to use just the vboxfs module to be able to use the share functionality from VirtualBox.
You made a typo. It should be vboxsf instead of vboxfs. I did the same and was wondering why it didn't work. So the full command is:
sudo mount -t vboxsf share_name mount_location
To remember the correct type, you can think of it as an abbreviation of "VirtualBox Shared Folder".
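If you want the share mounted automatically at boot, an /etc/fstab entry along these lines should also work once the vboxsf module is available (share_name and /mount/location are the same placeholders as above; uid/gid just make the mount owned by your normal user):

share_name  /mount/location  vboxsf  defaults,uid=1000,gid=1000  0  0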