The WSL documentation states:
"Unlike our practice with trying to exclusively install programs and software on Ubuntu, our files and folders need to live exclusively on the Windows FS [...] Windows and Windows Apps can only read and write Windows files, and VSCode will be making our changes."
I understand the reasoning behind this and indeed, if one uses VSCode for example, it all makes sense. But my question is:
Is there any real reason why you couldn't keep your files (i.e. scripts) on the WSL filesystem itself? More specifically, if you don't ever intend to use the Windows filesystem (i.e. you won't ever need a GUI or the like), is there any sense in placing the files on the Windows FS?
Obviously you need to make sure you back up your data (GitHub or elsewhere), but aside from that, is there any downside? I guess what I'm saying is: can I use WSL like a VM? Can I keep BOTH software AND scripts all in WSL, separate from the Windows filesystem?
PS: The reason for avoiding a VM in this context is that I have a low-spec laptop which has struggled a lot in the past with VMs (slow, not enough RAM), and so far, WSL seems to be running much more smoothly.
Thanks
The simple answer is yes, you can use WSL as if it were a VM. WSL is for the most part fully fledged Linux, and you can use Linux as your primary operating system, ignoring the fact that you need to start it from within Windows. I haven't tried WSL 2, but it's said to be implemented as a fast VM, which is exactly what you ask for. (Further, the lack of GUIs can be mitigated using built-in support for sending X data over SSH to the Windows half of your computer and displaying it with an X server. If I remember correctly, these two articles got me most of the way there.)
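If it helps, a minimal sketch of that GUI workaround, assuming an X server such as VcXsrv or Xming is already running on the Windows side and using its default display number:

    # Inside WSL: point X clients at the X server running on the Windows host
    export DISPLAY=localhost:0.0
    # Test with a simple X client (from the x11-apps package)
    xeyes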
However, if you want to get pedantic, you can't store any files separate from your Windows filesystem on WSL 1. If you run e.g. Ubuntu, your Linux filesystem is in fact stored under %USERPROFILE%\AppData\Local\Packages\CanonicalGroupLimited.Ubuntu18.04onWindows_79rhkp1fndgsc\LocalState, so technically it is not separate. I can't test WSL 2, but according to this article, WSL 2 also stores its data in that folder, just as a single VHDX image. Presumably every WSL distro stores its data on the Windows filesystem.
Warning: Do not access the files themselves in your Linux filesystem within AppData using Windows tools, or you run a high risk of corrupting those files.
Yes, you can; only place files on the Windows filesystem if you want to share them with Windows programs. Moreover, since Windows 10 1903 you don't even need to place files on the Windows filesystem to share them with Windows programs: they can access files on the WSL filesystem directly.
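As a rough illustration (the distro name and paths below are just examples): on 1903+, Windows reaches WSL files through a \\wsl$ share, and WSL keeps reaching the Windows drives under /mnt.

    # From Windows (Explorer address bar or PowerShell), WSL files appear as:
    #   \\wsl$\Ubuntu\home\me\project
    # From WSL, the Windows drives remain available, e.g.:
    ls /mnt/c/Users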
In WSL 2 they encourage you to keep everything on the WSL filesystem to take advantage of the filesystem performance improvements.
So yes, you can, and you should.
Related
I have a project whose files live on a guest OS (Red Hat Enterprise Linux) under VirtualBox; my host OS is Mac OS. I used to code directly in RHEL with the Atom editor, but my boss told me it's inefficient to code in a guest OS. Well, it makes sense, because Mac OS or Windows is more responsive than Linux in a VM, so I changed my workflow:
Copy the whole project located on RHEL to a shared folder between Mac OS and RHEL using rsync
Code with Atom in Mac OS
Copy the project in the shared folder back to the original project on RHEL using rsync (roughly as sketched below)
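(Paths below are placeholders; the sketch just illustrates the two copy steps.)

    # Step 1: push the project from RHEL into the VirtualBox shared folder
    rsync -a /home/me/project/ /media/sf_share/project/
    # Step 3: pull the edited tree back; -a preserves permissions and mtimes
    rsync -a /media/sf_share/project/ /home/me/project/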
I'm using Atom (not vim in RHEL) because it can edit the whole project in one window, which is convenient for my situation. But there is a problem: after copying the project back in Step 3, git status shows that everything has changed even though I edited only a few files. That is a little annoying.
Is there any better way to code in such an environment? Any advice is appreciated.
BretzL's suggestion to use shared folders is a good one, but I think it's important to address the underlying issue: your boss' assumption about coding being inefficient or slow just because you're working on a VM is simply not true.
It sounds like your new workflow, which was instituted as a result of his/her advice, is causing you to have a harder time developing than you did on the VM. The shared folders will help with that, but if you have the VM configured with access to enough cores and memory, then its performance for most tasks will be fine, and there may not be any problem with developing on the VM directly. I do a significant amount of development on a VM and haven't had any issues. You may experience slower builds on the VM if you're building whole kernels or other large projects, but if that's not the case, it should be fine.
If you didn't have any performance or productivity problems before forcing yourself to work outside of the VM, then... it wasn't a problem.
(I also have an issue with the assumption that Linux is always less responsive than Windows or Mac OS, but that's a debate for a different day.)
VirtualBox supports shared folders, so you don't need to rsync back and forth. Just mount the shared folder where your application server on the RHEL guest expects the code.
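A quick sketch, assuming Guest Additions are installed and the share is named project in the VM settings (names and paths are examples):

    # On the RHEL guest: mount the VirtualBox shared folder where the app expects the code
    sudo mkdir -p /srv/project
    sudo mount -t vboxsf project /srv/project
    # Or make it permanent with an /etc/fstab line such as:
    #   project  /srv/project  vboxsf  defaults  0  0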
I also recommend you take a look at https://www.vagrantup.com/ for managing developer VMs.
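If you try Vagrant, the basic flow looks roughly like this (the box name is just an example):

    # Create a Vagrantfile for an example Ubuntu box, boot it, and log in
    vagrant init ubuntu/bionic64
    vagrant up
    vagrant ssh
    # By default the directory containing the Vagrantfile is shared into the VM at /vagrant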
I have two computers; I installed Ubuntu on one and Windows 7 on the other. I would like to access Ubuntu files from Windows 7.
How can I access Ubuntu files from Windows 7?
There are several solutions you can use, such as these two:
Ext2Fsd
Ext2Read
The second solution doesn't even have to be installed: you can just run the .exe file and access the partitions you want.
However, these solutions are 100% safe only for reading. If you need to read from and write to the Ubuntu partition often, you should create a shared NTFS partition that you can access from both operating systems.
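For the shared NTFS partition approach, a hedged example of mounting it on the Ubuntu side (the device name is hypothetical and will differ on your system):

    # One-off mount of the shared NTFS partition
    sudo mkdir -p /mnt/shared
    sudo mount -t ntfs-3g /dev/sda5 /mnt/shared
    # Or an /etc/fstab entry so it mounts at every boot:
    #   /dev/sda5  /mnt/shared  ntfs-3g  defaults  0  0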
< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command prompt needs. This is especially true when I am using Ruby and git. I used Msysgit for a while, but I just didn't like how it wasn't extensible like Linux. So, I installed Cygwin and played around with that for a couple weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This became especially evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install rvm (never worked), and install RMagick (may or may not work, but looks like a headache).
So, now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced similar issues before, and in my experience the best solution is simply two workstations :).
Apart from that, having Linux running in a virtual environment is far better.
First of all, you will have full Linux capabilities (except 3D acceleration, but you probably don't need that).
You will be able to create snapshots and revert to them when things go wrong (see the command sketch at the end of this answer)!
You can start multiple environments from templates, which is very convenient.
The only downside I can think of is the performance hit on the host machine.
If it's an ordinary workstation/PC, an IDE + one virtual machine + a browser with 100+ tabs just makes it slow.
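For the snapshot point above, VirtualBox also exposes this from the command line; a small sketch (the VM and snapshot names are examples):

    # Take a snapshot of the development VM before risky changes...
    VBoxManage snapshot "dev-vm" take "clean-state"
    # ...and roll back later (run restore while the VM is powered off)
    VBoxManage snapshot "dev-vm" restore "clean-state"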
1: Cygwin is good for quick hacks and for being able to access host-OS resources (you can run IE, for example, from a bash script). For something tightly integrated and some "real" work, go with a VM. It will emulate everything and separate development from the real machine, which may be a good thing in some cases... as a plus, it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (local or as a Windows share... it actually depends). Then you can use that "entry point" to symlink things into the VM and do what you need, with the real files living on the real (host) machine.
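A rough illustration of point 2 (share and path names are examples; with Guest Additions auto-mount, shares typically show up under /media/sf_<name>):

    # Inside the VM: the user may need to join the vboxsf group first
    sudo usermod -aG vboxsf "$USER"
    # Link the auto-mounted shared folder into the place your tooling expects
    ln -s /media/sf_code ~/code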
SSH into a linux box. This is what everyone does. Why isn't this the answer?
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use them at the same time. I've never used it, but here:
http://www.colinux.org/
What I would do now to get the pros of both options is to use Docker: it gives you Cygwin-like simplicity and VM-like functionality, with better performance.
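A quick sketch of what that looks like in practice (image tag and paths are examples):

    # Start a throwaway Ubuntu container with the current directory mounted at /src
    docker run --rm -it -v "$PWD":/src -w /src ubuntu:22.04 bash
    # Inside the container you get a full apt-based toolchain, e.g.
    #   apt-get update && apt-get install -y ruby-full git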
Linux in a virtual machine will give you the experience you want more than Cygwin or any "mock shell", as I like to call them.
Running VMs, though, requires a lot of RAM, depending on whether you want a desktop version of Linux or just a command-line version.
At work I have a PC with 8 GB of RAM on which I run 64-bit Ubuntu as the main OS, two Ubuntu servers (dev environments for two different projects), a Windows 7 VM, and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key here is more RAM if you want to be able to run VMs.
If you're going to be working with Ruby, then get an Ubuntu virtual machine up and running :) I've not tried Ruby, etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development, so I cannot comment on the Windows side for that.
As for virtual machine creation, I prefer VMware Workstation, however there are free alternatives such as Virtualbox and VMware Server.
I'm using a Linux VM within a Windows 7 environment because this VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under MS Windows 7, so it is really great for full local testing before committing or tagging the tested version for the production servers.
As you mentioned, this takes some time to get properly set up and fully configured. If you only need it for small tricks or tasks, you may as well keep using Cygwin; for example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. So it's OK for basic usage, but not for taking full advantage of a complete Linux environment.
Your choice strongly depends on the purpose of the final server setup. A VM will cover whatever your need is, but its setup cost is higher, so that time investment only pays off if you use it often.
I have both Linux and Windows installed on my PC. When I write some programs in lex and yacc (while working on Linux) and store all the files in a folder, they end up corrupted if I use Windows for some time. For example, 3 days ago, after storing all the files (xyz.l, a.out), I rebooted my PC and switched to Windows for some other work. After 3 days, when I opened that folder again (while using Linux), a.out had been turned into an image, and when I double-clicked on it, an image opened. It was the same image I had downloaded 2 days earlier while working on Windows, but I had stored it in a different folder. So does the storage space used by Linux and Windows overlap? If not, what could be the reasons? It has happened twice, and I really have to recode all my programs. I am not able to understand why.
It is not supposed to overlap. This sounds like a configuration problem where Windows and Linux are configured to mount the same partition.
Check the file /etc/fstab (under Linux) and find out whether this is true. You can also try creating files in various places and see whether they show up on the other OS.
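A quick way to check from the Linux side:

    cat /etc/fstab                  # partitions Linux mounts at boot
    lsblk -f                        # every partition, its filesystem, and current mount point
    mount | grep -Ei 'ntfs|vfat'    # is any Windows partition currently mounted read/write?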
I don't know what your partitioning looks like, but I guess it is set up in a way that gives both OSes read/write access to all partitions, or at least gives Windows read/write access to the Linux partition.
Is your Linux partition a FAT32 partition? You should set it to read-only in Windows, but I'm not sure how to do that.
Do you use hibernate on the windows side? Windows can get confused if data changes while it is asleep, and this might be the cause of the problems.
Can anyone tell me how I might best mirror selected files and folders to a NAS (Network Attached Storage) box from a Linux workstation in real time?
These are very large files (>50 GB) that are continually modified, so I would like to transfer only the portions of the files that have been changed, added, or deleted.
FYI: These files are actually Virtual Box virtual hard disk (VDI) files.
I discovered that my Synology DS211J NAS can run an rsync service, so I enabled that and used lsyncd for the live mirror of the VirtualBox VMs... it all works very well.
rsync only synchronises the parts of files that have changed, and so is very efficient at synchronising large files.
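As a rough illustration (host, user, and module names are placeholders), the manual equivalent of what lsyncd automates is a delta transfer like:

    # Push only the changed blocks of the VDI files to the NAS's rsync service;
    # --inplace updates the existing remote file instead of rewriting a full copy,
    # which matters for 50 GB+ images
    rsync -av --inplace /var/lib/vbox/vms/ backupuser@nas::vmbackup/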
Of the solutions that #awm mentioned, only drbd provides block-level, realtime synchronization. The other tools will meet your goal of only propagating deltas, but they operate asynchronously. In fact, rsync will work just as well in this case, since you're not trying to provide bi-directional synchronization.
For drbd to provide block-level replication, you need to install the drbd kernel modules and userspace tools on both the workstation and the NAS... which means this solution is only appropriate if your NAS is actually a fairly generic Linux box over which you have a great deal of control.
Beforehand, I just want to suggest that you don't do this. You can easily bottleneck your network and NAS and cause all sorts of problems on your host.
That being said, these claim they can do it:
Unison can be found at: http://www.cis.upenn.edu/~bcpierce/unison/
PeerSoft can do it too: http://www.peersoftware.com/products/peersync/peersyncserver/overview.aspx
Maybe - http://www.drbd.org/