WSL2 "read-only" file system while building chromium - linux

I'm attempting to build Chromium on WSL2 according to this guide.
When I get to the fetch --nohooks chromium step, everything loads for a while, and then I get the following error:
OSError: [Errno 30] Read-only file system: '/home/ghadar/chromium/src/third_party/libprotobuf-mutator/_gclient_src_0ve3yqhz'
I've looked everywhere and couldn't find any explanation for this error.
I'm running WSL2 on Windows 11 with Ubuntu 20.04 as the Linux distribution.

A few possibilities that I can think of:
Filesystem corruption (it happens)
Out of disk space on the host Windows drive
For the first one, see issue #6220 on the WSL GitHub. The recommended solution is as follows (it might be a good idea to back up any critical files first):
# Identify the correct drive:
mount | grep ext4
# Take the drive returned (e.g. /dev/sdd) and:
sudo e2fsck /dev/sdd -p
It could also be a disk-space issue. The Chromium source is pretty large, at around 57 GB. Is it possible that you are out of disk space on the Windows drive? If so, WSL may still think it has space remaining, because it lives on a sparse virtual disk that can grow up to 250 GB or 1 TB (depending on the WSL release); but once space on the host drive is gone, WSL likely set the device read-only.
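To check, you can compare how much space each side thinks it has. A quick sketch, assuming the default drvfs mount of the Windows system drive at /mnt/c:
# Free space as seen from inside the WSL ext4 virtual disk:
df -h /
# Free space on the Windows host drive (default /mnt/c mount assumed):
df -h /mnt/c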
If that's the case, and you have a larger drive (SSD/NVMe recommended for performance, of course), you can "move" the virtual disk if you'd like -- See my Super User answer on the topic.
Or you might try fetching without the full repo history, as suggested in the docs, by passing the --no-history flag.
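For reference, the shallow fetch from the Chromium checkout docs looks like this:
fetch --nohooks --no-history chromium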

Related

Trouble converting Docker to Singularity: "Function not implemented" in Singularity, but works fine in Docker

I have an Ubuntu Docker container that works perfectly fine as is, with a custom binary inside that executes and returns as expected. For security reasons, I cannot use Docker for automated testing, so I created a Docker archive and then load a Singularity container from it. The binary that I need to run fails with the following error:
MyBinary::BinaryNameSpace::BinaryFunction[FATAL]: boost::filesystem::status: Function not implemented: "/var/tmp/username"
When I run ldd <binary_path>, I see that the boost filesystem library is linked. I am not sure why the binary is unable to find the status function...
So far, I have used a tool called Ermine to turn the dynamically linked binary into a static one, but I still got the same error, which I found very strange.
Any suggestions on directions to look next are very appreciated. Thank you.
Both /var/tmp and /tmp are silently automounted by default. If anything was added to /var/tmp during singularity build or in the source docker image, it will be hidden when the host's /var/tmp is mounted over it.
You can disable the automounts individually when you run a singularity command; that is probably what you want to do first, to check that this is the source of the problem (e.g., singularity run --no-mount tmp ...). I'd also recommend using --writable-tmpfs or manually binding -B /tmp to make sure there is somewhere writable for any temp files; you are likely to get a read-only filesystem error otherwise.
The host OS environment can also cause problems in unexpected ways that are hard to debug. I recommend using --cleanenv as a general practice to minimize this.
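Putting those flags together, a sketch of a first debugging run might look like this (the image name is hypothetical):
# Disable the /tmp and /var/tmp automounts, keep the host environment out,
# and add a writable tmpfs overlay for temp files (image name is hypothetical):
singularity run --no-mount tmp --cleanenv --writable-tmpfs mycontainer.sif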
The culprit was an outdated Linux kernel. Containers still use the host's kernel.
With Docker I was on kernel 5.4.x, while the computer that runs the Singularity container runs 3.10.x.
The binary relies on functionality that is not supported on 3.10.x; "Function not implemented" is the message for ENOSYS, the error the kernel returns when a program invokes a system call it does not provide.
There is no fix for now except running the automated tests on a different computer with a newer kernel.
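An easy way to spot a mismatch like this is to compare kernel versions on the two machines; the container always reports the kernel of the host it runs on:
# Run on the Docker host and on the Singularity host -- the version printed
# inside the container will match whichever host it runs on:
uname -r
singularity exec mycontainer.sif uname -r    # image name is hypothetical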

Samba giving "Function not implemented" error

Been using Samba on my Linux Mint machine to map to a Windows network drive of a large university for a couple of years. Has always worked.
Linux Mint version: 18 (Sarah)
Kernel: 4.4.0-164-generic
Samba version: Version 4.3.11-Ubuntu
I use their VPN and then map to the samba with:
smb://DOMAIN;user@subdomain.address.edu/ssd_drives_k/my/path/to/files
This worked for ages, but recently I can only read (and therefore open/copy) some files and not others. I can see everything in Nemo, but some files (of all types: Word, PDF, etc.) WILL NOT copy to my computer or open in their respective programs. There doesn't seem to be any particular pattern to which files are affected; basically, some are visible but inaccessible to me.
The error I get on those files is "Function not implemented", for example:
When trying to copy some files to my desktop, I get a "Function not implemented" error window (i.e. "Error while copying FILE/PATH: There was an error while copying the file into /path/path", with Cancel or Skip options; "Show more details" says "Function not implemented").
When trying to open some PDFs, I get "Function not implemented" from my PDF reader (the default system one; if I try Okular, the file simply doesn't open and there is no error).
Hence, there's a bunch of stuff I can no longer access... The IT team at the university are normally really great but in this instance have just left me hanging with nothing... frustrating but I wondered if anyone here might be able to help answer what is causing this and how to correct it?
Thanks to this answer, I have deduced a solution: https://serverfault.com/questions/414074/mount-cifs-host-is-down/929331#929331
I'm not sure why, but I suspect an SMB upgrade on my computer has made it incompatible with their (older?) version.
This now works if I do it manually in the terminal and specify vers=1.0:
sudo mount -t cifs //subdomain.address.edu/ssd_drives_k/my/path/to/files /mnt/driveiwant -o username=user,domain=DOMAIN,vers=1.0
But vers=3.0 doesn't work:
sudo mount -t cifs //subdomain.address.edu/ssd_drives_k/my/path/to/files /mnt/driveiwant -o username=user,domain=DOMAIN,vers=3.0
So it seems they need to upgrade their gear maybe, I am not sure, but this works!
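If you want the working mount to persist across reboots, here is a sketch of the matching /etc/fstab entry; the credentials file path is a placeholder, and using one avoids typing the username/password at boot:
# Same share and options as the manual command; /home/user/.smbcredentials
# is a placeholder file containing username=, password= and domain= lines:
//subdomain.address.edu/ssd_drives_k/my/path/to/files /mnt/driveiwant cifs credentials=/home/user/.smbcredentials,vers=1.0 0 0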

Cell/BE: make use of the SPEs under Linux

Currently I'm experimenting with the Cell/BE CPU under Linux. What I'm planning to do in the near future is run simulations, e.g. of the weather or black holes.
The problem is that Linux only discovers the main CPU of the Cell (the PPE); all the other SPEs (7 should be available to Linux) are "sleeping". They just don't work out of the box.
What does work is the PPE, which the OS recognizes as a two-threaded, single-core CPU. The SPEs are also shown at every boot (as small penguins with a red "PPE" in them), but afterwards they appear nowhere.
Is it possible to "free" these specialised cores for use by the Linux OS? If so, how?
As no one seems to be interested in, or able to answer, this question, I'll provide the details myself.
In fact, there exists a workaround:
First, create an entry point for the SPUFS:
sudo mkdir /spu
This is the mount point for the filesystem. So that you won't have to mount it manually after every reboot, add this line to /etc/fstab:
spufs /spu spufs defaults 0 0
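To use it right away without rebooting, you can also mount it by hand with the same parameters as the fstab entry:
sudo mount -t spufs spufs /spu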
Now reboot and test to make sure the SPUFS is mounted (in a terminal):
spu-top
You should see the 7 SPEs running with 0% load average.
Now Google for the following package to get the runtime library and headers you need for SPE development:
libspe2-2.3.0.135.tar.gz
You should find it on the first hit. Just unpack, build, and install it:
./configure
make
sudo make install
You can ignore the build warnings (or fix them if you have obsessive compulsive disorder).
You can use pkg-config to find the location of the runtime and headers, though they are in /usr/local if I recall correctly.
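For example, assuming the libspe2 build installed its pkg-config file somewhere pkg-config can find it:
# Ask pkg-config where the libspe2 headers and library ended up:
pkg-config --cflags --libs libspe2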
You will of course also need the gcc-spe compiler and the rest of the PPU and SPU toolchains, but those you can install with apt-get, as they are in the repos.
Source: comment by Exillis via redribbongnulinux.000webhostapp.com

How to fix virtualbox unknown filesystem type 'vbox'

I want to make a virtual machine for web development on an Arch Linux guest that acts like a Vagrant box. I don't want to use a Vagrant box because I want to learn how to do things on my own first, and I want to keep the disk space used by the machine as small as possible. To that end I have installed and configured apache2, php, and mariadb, with a total of 640M used on disk. I have forwarded guest port 80 to host 127.0.0.1:8080.
I encounter an error with the vboxfs module. I have installed virtualbox-guest-module as described here, and after a machine reboot tried:
mount -t vboxfs share_name mount_location
and I get this error: unknown filesystem type 'vbox'.
I have searched Google, and all the results refer to the virtualbox-guest-utils package from Arch Linux, but the problem is that I don't need all the dependencies that package has (alsa, xorg, video driver, etc.), and I don't know which deps I do or don't need from that package... so I wonder whether it is possible, and sufficient, to use just the vboxfs module to get the shared-folder functionality from VirtualBox.
You made a typo: it should be vboxsf instead of vboxfs. I did the same and wondered why it didn't work. So the full command is:
sudo mount -t vboxsf share_name mount_location
To remember the correct type you can think of it as the abbreviation of VirtualBox Shared Folder.
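If you want the share mounted automatically at every boot, here is a sketch of the equivalent /etc/fstab entry; share_name and mount_location are the same placeholders as above, and the uid/gid values are an assumption so your regular user can write to the share:
# share_name and mount_location are placeholders; uid=1000,gid=1000 assumes
# your user's IDs so the mounted share is writable without root:
share_name mount_location vboxsf defaults,uid=1000,gid=1000 0 0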

How to set up MIT Scheme for 6.001 in Ubuntu 8.10

I plan to self-study 6.001 with the video lectures and lecture handouts. However, I have some problems setting up MIT Scheme in Ubuntu (Intrepid).
I used the package manager and installed MIT Scheme, but it's obviously the wrong version to use: it should be 7.5.1 instead of 7.7.90.
I followed the instructions from this website (http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-001Spring-2005/Tools/detail/linuxinstall.htm)
So far, I've downloaded the tar file and extracted it to /usr/local. I have no idea what step 3 means.
Then I entered the command
scheme -large -band 6001.com -edit
and the error is
Not enough memory for this configuration.
I tried to run it under sudo, and this time the error is different:
Unable to allocate process table.
Inconsistency detected
I have close to 1GB of free memory, with ample HDD space. What should I do to successfully set this up?
Step 3 means that you should type export MITSCHEME_6001_DIRECTORY=${your_problems_path}. If you don't want to type it every time you launch Scheme, you should put that line in your ~/.bash_profile file (in case you use bash).
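For example, in ~/.bash_profile (the path is a placeholder for wherever you extracted the 6001 files):
# Placeholder path -- point this at your extracted 6001 directory:
export MITSCHEME_6001_DIRECTORY=/usr/local/6001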
About the problem itself, Google instantly suggests a solution (taken from http://ubuntuforums.org/showthread.php?p=4868292):
sudo sysctl -w vm.mmap_min_addr=0
Instead of the package manager, you may also want to compile the portable C sources for Unix. I am using it happily.
