I'm really new to Ubuntu and WSL.
My problem is simple: from the Ubuntu that I have installed on my computer (dual-booted alongside Windows), I want to access the WSL2 filesystem that lives in Windows. I located a file named ext4.vhdx, which I suppose is my entire WSL drive, but I'm not really sure; it is in
c:\Users\USER\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu20.04o...\LocalState\
I'm currently into web development, and I want to share that environment between WSL2 and Ubuntu. I noticed that using the Linux filesystem is way faster than the Windows filesystem and that it works better with things like file watchers. So, is it possible?
I'm currently running Windows 10 build 19041 (2004) and Ubuntu 20.04 LTS.
I've also run into similar issues when sharing a WSL Ubuntu environment like this, and I finally found a solution on the internet that works perfectly.
Reference Link:
https://www.nicholasmelnick.com/2020/07/sharing-your-wsl2-environment-with-linux/
So basically these are the steps:
First of all, WSL's ext4.vhdx file has to be accessible from inside your Ubuntu system, so you must mount your Windows drive inside your Linux OS (example commands for this are shown after the guestmount commands below).
Install the libguestfs-tools package with APT.
Finally, create a folder and guestmount the drive with the following commands:
$ sudo mkdir -p /mnt/wsl
$ sudo guestmount -o allow_other \
--add /mnt/c/Users/username/AppData/Local/Packages/CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc/LocalState/ext4.vhdx \
-i /mnt/wsl
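For the first two steps (mounting the Windows drive and installing libguestfs-tools), the commands might look something like this; the partition device /dev/nvme0n1p3 is only an example, check lsblk for your Windows partition:
$ sudo mkdir -p /mnt/c
$ sudo mount -t ntfs-3g /dev/nvme0n1p3 /mnt/c
$ sudo apt install libguestfs-tools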
And done! Hope this solves your problem. :)
I'm trying to install the Microchip XC8 compiler in an Ubuntu container to build a pipeline for compiling the project with GitLab CI. But there is no response after I run the "xc8-v1.45-full-install-linux-installer.run" file.
Here is the environment I have:
Official Ubuntu 18.04 LTS image on a Docker container
Docker version 19.03.13
Windows 10 as Docker host
Microchip XC8 v1.45 compiler
And the commands I used for downloading and installing are as follows:
# Download XC8 from the Microchip official site
wget http://ww1.microchip.com/downloads/en/DeviceDoc/xc8-v1.45-full-install-linux-installer.run
# Change the access permission
chmod +x xc8-v1.45-full-install-linux-installer.run
# Execute the ".run" file
./xc8-v1.45-full-install-linux-installer.run
After I ran all of them, there was no response. Obviously, something went wrong.
I have tried the installation process above on a native Ubuntu computer, and it works just fine.
Is there any prerequisite I missed? Or is there some other way for me to achieve the same goal?
Thanks!
I was having this problem on 64 bit Ubuntu 20.04 as well.
I had several problems: I could not set the execute bit because the file was on an NTFS partition, and the executable required 32-bit libraries to run.
First I had to move the file off the NTFS partition so that I could mark it as executable. In my case I moved it to my Downloads directory and then, in that folder, executed:
sudo chmod +x ./xc8-v1.42-full-install-linux-installer.run
It still would not run, so I checked its type by executing:
file ./xc8-v1.42-full-install-linux-installer.run
which resulted in the response:
./xc8-v1.42-full-install-linux-installer.run: ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, no section header
Eventually the main solution was to install 32-bit libraries:
sudo apt-get install lib32z1
With that 32-bit library installed, running this finally worked:
sudo ./xc8-v1.42-full-install-linux-installer.run
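Putting that together for the Docker/GitLab CI case in the question, the commands inside the Ubuntu 18.04 container might look something like this. This is only a sketch that assumes lib32z1 is the missing piece; the installer may still need other 32-bit libraries or an unattended mode for CI, so check its own documentation:
# Run inside the Ubuntu 18.04 container, on a native Linux filesystem (not a Windows mount)
apt-get update && apt-get install -y wget lib32z1
wget http://ww1.microchip.com/downloads/en/DeviceDoc/xc8-v1.45-full-install-linux-installer.run
chmod +x xc8-v1.45-full-install-linux-installer.run
./xc8-v1.45-full-install-linux-installer.run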
Using an existing MPLAB Docker repo
This GitLab.com project exists:
MPLAB X IDE/IPE podman/docker container
This may not help with your .run file problem, but switching to an existing Docker container might make things easier for you.
They also work with .run files, so you may find your solution over there as well.
Features:
Has a general-purpose installation of MPLAB X and the toolchain.
X11 forwarding for working in the IDE is supported.
Can use USB from inside the container.
Requires some setup. See readme for installation instructions.
MIT License
I still need to test this myself, but I just wanted to share it here anyway.
Posted in the microchip forums by the creator:
Dockerfile for MPLAB X IDE/IPE and toolchains
I am new to Ubuntu 18.04. It has been a long time since I updated anything on my current OS, which is Loki (interestingly, Loki also does not allow people to upgrade to Juno).
For various reasons, I had to install another Linux OS on my machine: Ubuntu 18.04, the minimal installation. Although everything works perfectly in Ubuntu, it means I can no longer log in to my main OS.
Description of the issue: after selecting elementary at boot, I log in to my account on the elementary login page. It does not show anything else and drops back to the login screen. Another try with a guest account (with no password) gives the same problem.
Because most of my data and work are in elementary, I have to find a way to solve this problem. Could anyone here give me a hand? Thank you very much.
I ran into the same problem once.
Log in to a TTY with Ctrl+Alt+F1 and enter:
sudo dpkg --configure -a
sudo apt-get install ecryptfs-utils
sudo reboot
And I was able to log in.
Switching to a TTY with Ctrl+Alt+F1, I was able to log in at a terminal. It turned out that installing Ubuntu had used up almost all of the remaining disk space. By removing some files with rm -rf [filename] to free around 2 GB, I got everything back. It was frightening for a while. Phew...
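If someone hits the same situation, it may be worth confirming that the disk really is full and finding the biggest offenders before deleting anything (the /home path is only an example):
df -h /                                               # how full the root filesystem is
sudo du -xh /home 2>/dev/null | sort -rh | head -20   # largest items under /home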
I'd love to hear some advice from you on setting up what I'm looking for.
I'm using OS X and I need to develop some code on a Linux machine. The thing is, I was looking for an alternative to a VM, since a VM takes too much battery power.
The first thing I came across was a Docker container. I know it is not what Docker was designed for, but I thought it might work anyway. So I tried running a container with
docker run -i -t ubuntu /bin/bash
and it worked well. However, all the changes I make are gone and I can't find a way to solve it. I also tried
docker run -i -v /Users/JaimehRubiks/test:/home/Jaime -t ubuntu /bin/bash
and all files in there are saved (also very interesting because I can share my files with the host), but it's kind of boring having to commit to the Docker image if I change anything in the config files of my Ubuntu.
What I'm looking for is just a simple way to run Linux on my Mac and then access it somehow, as I did with Docker or via SSH.
Docker currently does not run natively on OS X, as Docker relies on the Linux kernel for its isolation features. In fact, the Docker Toolbox uses a VirtualBox virtual machine running the boot2docker Linux distro to run the Docker daemon on OS X. See the official documentation on Mac OS X installation.
The boot2docker Linux image is quite lightweight, but I'm not sure you will get much benefit from running Docker on OS X for Linux development over simply running a full VirtualBox machine with Ubuntu (or another distro). If you want to run a virtual machine, Vagrant is a good tool to help you set that up. It lets you easily pull down images from an image repo, set up the image, and SSH into it. It also makes host -> guest-machine folder sharing and port forwarding quite simple.
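For example, with Vagrant and VirtualBox installed, something like the following gets you an Ubuntu VM you can SSH into (the box name is only an example); by default Vagrant also shares the project folder into the guest at /vagrant:
vagrant init ubuntu/trusty64   # writes a Vagrantfile for the chosen box
vagrant up                     # downloads the box and boots the VM
vagrant ssh                    # SSH into the running VM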
but it's kind of boring having to commit to the Docker image if I change anything in the config files of my Ubuntu.
You don't have to docker commit anything: any file change made on the host (/Users/JaimehRubiks/test) will be visible in the container (/home/Jaime).
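A quick way to check this with the paths from your own command (hello.txt is just an example file):
# On the OS X host
echo hello > /Users/JaimehRubiks/test/hello.txt
# Inside the running container
cat /home/Jaime/hello.txt    # prints hello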
What about using Vagrant to run Ubuntu or CentOS? You can access the system via the vagrant ssh command, configure it with a configuration file, and share folders much like you do with Docker.
How do I run Linux binaries under Mac OS X?
Googling around I found a couple of emulators but none for running Linux binaries on a Mac. There are quite a few posts about running Mac OS X on Linux and that kind of stuff - but that's the opposite of what I want to do.
Update:
Thanks for all the answers! I am fully aware of MacPorts and Fink and all the other things; and no, I do not want any of these utilities or package managers - I prefer to compile things myself. I also have Parallels and could set up virtual machines and all that jazz...
The only thing I want to do is to find a way to run a binary that I do not have the source code for and has been compiled for Linux, but I do not want to run it under Linux but under Mac OS X. Therefore my question about emulators.
Well, there is a project introducing something like Linux's binfmt_misc to OS X, so now what you need is an ELF loader, a dynamic linker that can load both Mach-O and ELF, and some mechanism to translate Linux calls to OS X ones.
Just for inspiration, you could implement the dynamic linker so that it ignores the filename extension - both libfoo.so.1 (a Linux ELF) and libfoo.1.dylib (a Mach-O) could be loaded. That way the OS X versions of the system libraries can be reused, so you do not need to write a "hosted on OS X" libc.so, and syscalls can be handled by a kext that translates Linux calls to OS X ones in the kernel.
Or, more elegantly, implement a stripped-down Linux kernel as a kext that makes the OS X kernel dual-purpose. However, that will require you to use two sets of libraries. (Binaries do not clash, so it is largely okay.)
Set up a virtual machine (I personally use VMWare Fusion) and then install whatever distro of Linux you desire on the virtual machine.
Or, if you have the source to the Linux program, chances are you can recompile it on a Mac and run it natively. If you install Fink or MacPorts, you can install a lot of open source programs without much trouble.
I recently found Noah, which you can use to run Linux binaries on macOS. You can install Noah via Homebrew (brew install linux-noah/noah/noah). Then you should be able to do this:
noah linux_binary
In my experience the behavior of the binary matches what I see on my Ubuntu machine.
You might have some luck running Linux executables under Mac OS X using QEMU's user-space emulator.
If you decide to go the virtualization route, consider also VirtualBox.
Also, if you only need UNIX like command line tools, there is the MacPorts project. This is basically how I set up git on my mac: after having installed MacPorts you just have to run the sudo port install git command to install git on your system.
Noah did not run the binaries properly for me. Use Docker Desktop for Mac instead.
Just do:
docker pull centos:latest # 73MB CentOS docker image
Make a folder for what is needed to run your binary, and in your Dockerfile:
FROM centos
COPY your_binary /bin/
ENTRYPOINT ["your_binary"]
and you can build it with
docker build -t image_name .
then execute with
docker run image_name as if it were the binary itself. Worked for me. Hope it helps someone else. And if you need specific outputs or to store files somewhere, you can mount volumes into the container with -v, for example:
docker run -v path_to_my_stuff:/docker_stuff image_name,
though adding a WORKDIR /docker_stuff line to the Dockerfile before ENTRYPOINT is probably best.
If you change ENTRYPOINT to
ENTRYPOINT ["bash", "-c"]
and add
CMD ["your_binary"]
underneath it, you can actually pass the command into the image like
docker run -v path_on_local:/in_container_path image_name "your_binary some_parameters -optionrequiringzerowhitespacebeforeinputvalue"
My host is Bluehost. My server is on Linux.
I have tried to follow the tutorial.
You can quite easily compile it from source, with the usual ./configure && make && sudo make install commands.
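On shared hosting you usually don't have root, so a sketch of installing into your home directory might look like this (the version and URL are only examples, and the build assumes basics such as gcc, make and the zlib/curl development headers are available):
wget https://www.kernel.org/pub/software/scm/git/git-2.9.5.tar.gz
tar xzf git-2.9.5.tar.gz
cd git-2.9.5
./configure --prefix=$HOME/local
make && make install
export PATH=$HOME/local/bin:$PATH   # add this line to ~/.bashrc as well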
See "How to install git". Specifically the Mac OS X section (which applies to Linux also)
If the machine doesn't have apt-get, then chances are it isn't a Debian or Ubuntu machine, which means that using a tutorial designed for Debian or Ubuntu is unlikely to get you very far.
Either use the packaged releases for whatever Linux distribution you are running, or build from source.
Get the source from http://git.or.cz/
Maybe you have the same problem I have: I cannot make outgoing connections, only accept incoming ones, which is why I cannot use apt-get. What I do to move files is just use WinSCP to copy the files over and then do whatever I want with them.