Renaming/Mapping Cygwin Folders - cygwin

Can I safely rename the cygdrive folder? Also, I would like to add other folders at root and map them to folders on Windows in the same way as /cygdrive/c maps to my C drive. Is that possible?

Yes, you can. See The Cygwin Mount Table in Cygwin's documentation. I have my documents folder mounted as /doc. These mounts end up in the registry and are retained across reboots etc.

I wouldn't rename cygdrive as I don't know what that would do, but you can map other directories at root to various Windows directories using the mount command.
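For example, something along these lines maps a Documents folder to /doc (the Windows path is just a placeholder; substitute your own folder):
mount "C:/Users/yourname/Documents" /doc
On newer Cygwin versions, persistent mounts are kept in /etc/fstab rather than the registry; the equivalent entry there would look roughly like this:
C:/Users/yourname/Documents /doc ntfs binary 0 0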

Related

How to extract/decompress this multi-part zip file in Linux?

I have a zip file that's split into parts titled like so: file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
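A rough example with three hypothetical parts (which original part should keep the .zip name depends on how the set was created; if unzip complains about a missing end-of-archive signature, try giving the .zip name to the part that holds the central directory, usually the last one):
mv file1.zip file.zip
mv file2.zip file.z01
mv file3.zip file.z02
unzip file.zip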
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use File Roller Archive Manager, which is just listed as Archive Manager in Ubuntu 18.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to use to log in and xxx.xxx.xxx.xxx with the remote machine's IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux filesystem and navigate it in a file manager just like your local file system, limited by user privileges of course, meaning you can only write to the home folder of the remote user account.
Using Archive Manager I opened up the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X Window-based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I do not know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file because of its 4 GB file size limit.

Accessing the /root folder on BeagleBone (Debian) after the /usr directory was deleted

I had some .cpp programs in the root directory of my BeagleBone Black (Debian). Due to a stupid accident, the /usr directory was deleted on my BeagleBone. It makes sense to me now that I cannot access the BeagleBone anymore. What I can do is boot the BeagleBone from an SD card, but of course in that case I end up in another root directory. Do I still have a chance to access my .cpp programs from the old root directory? The funniest thing in this story of my stupidity is that I didn't store the .cpp programs anywhere else.
Thank you all in advance!
Yes, boot a regular SD-card image (make sure there is no "flasher" in the image name).
Once booted you can mount the eMMC and access your files. Something like this should do the job:
mount /dev/mmcblk1p2 /media
ls /media/root
Depending on what you have installed on the eMMC, it may be a different partition (different last digit), such as mmcblk1p1.
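If you are not sure which partition holds the old root filesystem, you can list the eMMC partitions and their sizes first (lsblk ships with util-linux on the stock Debian images):
lsblk /dev/mmcblk1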
You can then get the files from /media/root, e.g. via SCP (WinSCP if you are on Windows).
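For example, from a Linux machine something like this would pull the recovered sources over the network (the user name and address are placeholders for your BeagleBone's login and IP):
scp -r debian@192.168.7.2:/media/root/ ./beaglebone-backup/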

PhpStorm write issues in .idea directory

When I try to save a file to disc within a project directory, I get this error:
java.io.IOException: W:\[projectname]\.idea not found
Some research tells me the (network) location is not writable.
I'm trying to write this file from PhpStorm in Windows 8.
The drive (W:) is a network drive to a Linux machine.
The directory I try to write to is chowned to the same user and group as I connect with in Windows.
This is the result of ls -alh:
drwxrwxrwx 2 correct-user correct-user
On Linux and other Unix-like operating systems, files starting with a . are considered 'hidden files' by default. As such, when the Windows-based program creates the directory, it suddenly doesn't see it any more right afterwards, since Samba marks it as hidden, even though the creation was successful. You can fix this in your Samba config by adding the following line to the share configuration:
hide dot files = no
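As a sketch, the relevant share section in smb.conf might then look something like this (share name and path are placeholders):
[projects]
path = /srv/projects
read only = no
hide dot files = no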
In my Samba settings I had added a veto files parameter. Removing this parameter allows me to write dot files again.
Samba describes this setting as follows:
This is a list of files and directories that are neither visible nor accessible
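For reference, a veto files entry that blocks dot files would look something like the line below; removing it (or narrowing the pattern) in the share section is what restores access to .idea:
veto files = /.*/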

Linux (Red Hat): compare directories and copy over files that are different

I basically want rsync, but don't have the luxury of being able to install it.
But I need a way to deploy files from one server to another. I edit one or more files on one server and then need to copy all modified files to another server, comparing for files that aren't the same (and being able to exclude .htaccess files).
Does anyone know of an easy way to do this?
Thanks,
Scott
(I will assume that you have shell access to both servers)
You do not need to install rsync system-wide. You can install it in your home directory. First, get a copy of the rsync binary for your distribution:
You can extract it from the rsync RPM package using rpm2cpio and cpio
You can copy it from another RedHat installation
You can copy it from another Linux installation for the same platform - there is a strong possibility that it will work fine
Then you need to permanently modify the PATH environment variable so that the rsync command is found by your shell. If you do that for your user accounts on both servers, you can use rsync normally without the need for root privileges.
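As a sketch, assuming you have the matching rsync RPM available somewhere (the package path below is a placeholder), the extract-and-add-to-PATH steps could look like this:
mkdir -p ~/rsync-extract && cd ~/rsync-extract
rpm2cpio /path/to/rsync.rpm | cpio -idmv   # unpacks the package contents under ./usr/
mkdir -p ~/bin && cp ./usr/bin/rsync ~/bin/
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc   # make the PATH change permanent
source ~/.bashrc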
If you have access to install rsync on one server, that is the minimum you need.
If not, the question is what tools do you currently have available? scp? sftp? ftp? ssh? telnet? find?
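If find, tar, and ssh are available, a rough fallback (the paths, host, and timestamp file are placeholders) is to ship only files changed since the last deploy while skipping .htaccess; touch the timestamp file once before the first run:
cd /var/www/site
find . -type f -newer /tmp/last-deploy ! -name '.htaccess' -print0 \
  | tar --null -T - -czf - \
  | ssh user@otherserver 'tar -xzf - -C /var/www/site'
touch /tmp/last-deploy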

Why does the FHS have a /usr directory?

It appears you can put all you need in /bin, so why do we bother with the /usr/bin directory?
/bin is supposed to reside on the root filesystem, whereas /usr may be an alternate filesystem - even network mounted (multiple boxes sharing the same /usr).
This means that any essential basic utilities you need to bring up the system and mount filesystems, including troubleshooting, should live in /bin. Everything non-essential can go in /usr.
