How can I access the C: drive in PowerShell? - linux

I have a very simple problem with accessing the C: drive in Ubuntu 20.04.3 on my personal computer.
I'm using the following command:
cd \home\C:
But I get error: bash: cd: homeC:: No such file or directory
Do you know which command should be used to do so?

First check your mount points using the mount command. In the case of Windows Subsystem for Linux, you might see something like this:
C:\ on /mnt/c type drvfs (rw,noatime,uid=1000,gid=1000,case=off)
D:\ on /mnt/d type drvfs (rw,noatime,uid=1000,gid=1000,case=off)
The part after on, here /mnt/c (it can obviously be different in your case), shows the actual Linux path. Please keep in mind that Linux is case sensitive.
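For example, assuming the drvfs mounts shown above, a minimal sequence looks like this (the grep filter is just a convenience):
mount | grep drvfs    # list only the Windows drive mounts
cd /mnt/c             # enter the C: drive
ls                    # your Windows files, e.g. Users, Windows, ...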

Related

Linux: Non-Executable Installation File

I've got a USB drive containing the MATLAB installation files. I've tried executing the following command, but it didn't work; something is wrong. The file doesn't seem to be executable.
This is the file I need to execute:
-rw-r--r-- 1 user user 8360 Jul 19 03:29 install
I do:
sudo sh ./install
and get:
./install: 1: exec: /media/user/DPI/R2019b/bin/glnxa64/install_unix: Permission denied
I tried chmod +x install, but that doesn't work either; the file cannot be made executable.
Is the file corrupt, or am I missing something?
The USB drive was probably formatted by the manufacturer as FAT32 or similar. This file system doesn't support UNIX permissions.
That means the permission information is lost when the files are copied to the USB drive.
You have several options to fix this:
On a Linux/UNIX system, create a .tar archive of the files, copy the archive onto the USB drive, and unpack it to a UNIX-compatible file system on the destination system (see the sketch after this list). This is a bit more work, but it also lets you use the USB drive on a Windows system.
Format the USB drive with a UNIX-compatible file system. This might be the best solution if you plan to use the drive with Linux systems only; you can no longer use it on a Windows system unless you install special drivers.
Copy all the files from the USB drive to a local drive on the destination system, which should have a UNIX-compatible file system, and try to fix the permissions manually. I don't recommend this solution unless you cannot use the others, e.g. because you no longer have access to the original files.
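For the first option, a minimal sketch (the source directory and archive name are placeholders for your actual paths):
tar -cf matlab.tar -C /path/to/matlab-files .    # pack the files; tar records the permission bits
cp matlab.tar /media/user/DPI/                   # copy the single archive onto the USB drive
mkdir -p ~/matlab-install                        # on the destination system (ext4 or similar)
tar -xf matlab.tar -C ~/matlab-install           # unpack; permissions are restored from the archive
Because the permissions travel inside the archive, FAT32 never gets a chance to discard them.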

Windows Linux Subsystem - File permissions when edited outside bash [duplicate]

As the title suggests, if I paste a C file written somewhere else into the root directory of the Linux Subsystem, I can't compile it.
I did a test where I made two differently titled hello-world programs: one in vi, which I can reach from the bash interface, and one elsewhere. When I compiled the one made in vi, it worked fine. Trying to do the same for the one made elsewhere (after pasting it into the root directory), however, resulted in this:
gcc: error: helloWorld2.c: Input/output error
gcc: fatal error: no input files
compilation terminated
Any help with this would be much appreciated.
Do not change Linux files using Windows apps and tools!
Assuming that by "paste a C file written somewhere else into the root directory of the Linux subsystem" you mean that you pasted the file into %localappdata%\lxss, this is explicitly unsupported. Files created via Linux syscalls in this area carry UNIX metadata that files created with Windows tools don't have.
Use /mnt/c (and the like) to access your Windows files from Linux; don't try to modify Linux files from Windows.
Quoting from the Microsoft blog linked at the top of this answer (emphasis from the original):
Therefore, be sure to follow these two rules in order to avoid losing files, and/or corrupting your data:
DO store files in your Windows filesystem that you want to create/modify using Windows tools AND Linux tools
DO NOT create / modify Linux files from Windows apps, tools, scripts or consoles
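In practice, that means keeping the file on the Windows side and copying it in from there. A minimal sketch (the Windows user name and path are assumptions):
cp /mnt/c/Users/you/helloWorld2.c ~/    # copy from the Windows file system into your Linux home
gcc ~/helloWorld2.c -o helloWorld2      # compile the Linux-side copy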
You cannot copy files into the root directory (by default; who knows how Windows bash is set up!). Your gcc error says "no input files", so the copy has most likely failed. Copy the file to your home directory instead, for instance:
cp helloWorld2.c ~/
instead of:
cp helloWorld2.c /

Cannot run any commands because I moved the libc.so file

I have a dynamic linker at /lib64/libc.so.6.
I stupidly renamed it to /lib64/libc.so.6.old, and now NO commands work.
I cannot run ls or mv to rename it back.
I can run ldconfig, but it says permission denied, and I cannot run sudo or su. What on earth can I do to fix this? I am running Oracle Linux (Red Hat) 6.7.
LD_PRELOAD=/lib64/libc.so.6.old mv /lib64/libc.so.6.old /lib64/libc.so.6
This works because LD_PRELOAD tells the dynamic linker to load the renamed library explicitly, so mv can still run even though libc.so.6 is missing from its usual name.
Start from a recovery/install ISO and rename the file back.
If you can't reboot or don't have physical access to the machine, you could try to install a compiled version of BusyBox (https://busybox.net/FAQ.html#getting_started) and use its su and mv commands. Since BusyBox is statically linked, it should work without libc.so.
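A rough sketch of that approach, assuming you can get a static busybox binary onto the host (e.g. via a mounted share) and have write access to /lib64:
LD_PRELOAD=/lib64/libc.so.6.old chmod +x ./busybox    # chmod itself needs libc, so preload the renamed copy
./busybox mv /lib64/libc.so.6.old /lib64/libc.so.6    # busybox is statically linked, so no libc.so.6 needed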
Boot into single-user mode, remount the file system read-write, and, since you know the location of the renamed file, move /lib64/libc.so.6.old back to /lib64/libc.so.6.
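From that shell (assuming it starts at all, and that the root file system comes up read-only), the fix boils down to:
mount -o remount,rw /                         # make the root file system writable
mv /lib64/libc.so.6.old /lib64/libc.so.6      # put the C library back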
I would also propose a workaround with a mount point, as already mentioned by @wildplasser.
You can get the majority of command-line tools working again if you have a mounted directory on your broken host. If you are lucky enough to have one, all you need to do is upload libc-x.yz.so (which you can take from another host or from the Internet) to the share, rename it there to libc.so.6, and add the mounted directory to LD_LIBRARY_PATH.
If the version x.yz is the same as that of the one you thoughtlessly moved, commands like ls, cp etc. will work again in the console where you set LD_LIBRARY_PATH. Do not log out of this console, because you won't be able to log in again.
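A sketch of the idea, with the share path as a placeholder:
export LD_LIBRARY_PATH=/mnt/share/rescue    # directory on the share holding the renamed libc.so.6
ls /lib64                                   # dynamically linked tools now resolve libc from the share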
Be aware that setuid command-line tools won't work (see https://askubuntu.com/a/1029363/832810). Unfortunately, sudo is one of them, which is why you won't easily be able to put your long-suffering .so back (unless you have a root console). It does, however, give you the chance to save all your data and finish everything before attempting a hard restore.
If you managed the above trick and have enough time, you can try to build a statically linked version of sudo, as suggested at https://askubuntu.com/a/1030475/832810 (you can probably even build it on another host and copy it over NFS), and move the .so back using that.

Blank SSHFS mount folder

I am attempting to mount a remote directory located on my web server to a directory in my Xubuntu installation hosted in VirtualBox.
I'm using the following command syntax:
sshfs root@*.*.*.*:/var/www Desktop/RemoteMount
Using the file manager, I navigate to the Desktop/RemoteMount directory but find it entirely blank. The SSHFS command above executed with no indication of an error.
Completely by chance, I used the terminal to long-list the contents of the Desktop/RemoteMount directory, and it showed all the data I was expecting to see in the file manager.
Can anyone tell me why the file manager does not show my remotely mounted data and how I might fix it?
Thanks.
You are missing the local mount point.
sshfs -o idmap=user mika@192.168.1.2:/home/mika/remotepoint /home/mika/localmountpoint
And the local mount point folder needs to exist.
Thanks, Mika
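Applied to the original question, a minimal sketch (the server address is a placeholder):
mkdir -p ~/Desktop/RemoteMount                                   # the local mount point must exist first
sshfs -o idmap=user root@server:/var/www ~/Desktop/RemoteMount
The idmap=user option maps the remote user's uid/gid to your local user, which can also help file managers display the mounted files.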

Access root of drive with a Unix-like shell

I'm using Cygwin to compile a library. The library is not stored within the same directory as Cygwin, and I need to navigate to its directory in order to compile it. The Cygwin shell only lets me go back as far as the Cygwin root directory using cd ..
The command su returns the following:
su: user root does not exist
How do I navigate my hard drive using Cygwin if the su command doesn't work?
As suggested by Wooble, the solution is to use the /cygdrive/ path prefix followed by the drive letter. So, to access the root of the C: drive, type cd /cygdrive/c.
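From there you can navigate anywhere on the drive, e.g. (a hypothetical library location):
cd /cygdrive/c/projects/mylib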
