Linux: Non-Executable Installation File

I've received a USB drive containing the MATLAB installation files. I've tried executing the following command, but it didn't work. Something is wrong: the file doesn't seem to be executable.
This is the file I need to execute:
-rw-r--r-- 1 user user 8360 Jul 19 03:29 install
I do:
sudo sh ./install
and get:
./install: 1: exec: /media/user/DPI/R2019b/bin/glnxa64/install_unix: Permission denied
I tried chmod +x install, but that doesn't work either; the file cannot be made executable.
Is the file corrupt, or am I missing something?

The USB drive was probably formatted by the manufacturer as FAT32 or a similar file system, which doesn't support UNIX permissions (including the executable bit).
That means the permission information is lost when files are copied to such a USB drive.
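You can check the file system type of the mounted drive directly, using the mount path from the error message above:
df -T /media/user/DPI        # the Type column shows e.g. vfat for FAT32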
You have several options to fix this:
On a Linux/UNIX system, create a .tar archive of the files, copy the archive onto the USB drive, and unpack it to a UNIX-compatible file system on the destination system (a sketch of this approach is shown after this list). This is a bit more work, but it also lets you keep using the USB drive on a Windows system.
Format the USB drive with a UNIX-compatible file system. This is probably the best solution if you plan to use the USB drive with Linux systems only; you can no longer use it on a Windows system unless you install special drivers.
Copy all the files from the USB drive to a local drive on the destination system, which should have a UNIX-compatible file system, and try to fix the permissions manually. I don't recommend this solution unless you cannot use the others, e.g. if you no longer have access to the original files.
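A sketch of the archive approach, assuming the installer files are also available on a Linux machine with intact permissions and the USB drive is mounted at /media/user/DPI (paths are examples):
# On the Linux machine, from the directory that contains the R2019b folder:
tar -cf R2019b.tar R2019b                     # UNIX permissions are stored inside the archive
cp R2019b.tar /media/user/DPI/                # the archive is plain data, so FAT32 is fine
# On the destination system, unpack to a UNIX-compatible file system (e.g. your home directory):
mkdir -p ~/matlab-install
tar -xf /media/user/DPI/R2019b.tar -C ~/matlab-install
sudo ~/matlab-install/R2019b/install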

Related

Mac: how to get the symlink's original path

I have symlinked a file on hard drive A in Linux with Node.js's symlink. When I plug the hard drive into a MacBook, the symlink breaks because the mount root directory in macOS is different from the one in Linux. Is there a way in macOS to get the file's original path string with Node, so I can replace the mounted directory in it and read the original file on the hard drive?
For example,
in linux link: /media/A/src/abc.jpg -> /media/A/dst/1.jpg
in mac, read /Volumes/A/src/abc.jpg's link target /media/A/dst/1.jpg, then manually change it to /Volumes/A/dst/1.jpg to read the file
Use relative paths instead of absolute paths in the symlinks. E.g.
/media/A/src/abc.jpg -> ../dst/1.jpg
Then the links will work the same no matter where the drive is mounted.
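A minimal shell sketch of the idea, using the paths from the example above (Node's fs.symlink accepts the same relative target string):
# On Linux: create the link with a target relative to the link's own directory
cd /media/A/src
ln -s ../dst/1.jpg abc.jpg
# The stored target is the relative string, so it resolves correctly no matter
# where the drive is mounted (e.g. under /Volumes/A on macOS):
readlink abc.jpg        # prints ../dst/1.jpg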

How to extract/decompress this multi-part zip file in Linux?

I have a zip file that's split into parts titled like so: file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
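For example, assuming file3.zip is the final part (adjust the mapping to however many parts you have), the renaming could look like this:
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip        # the last part keeps the .zip extension
unzip file.zip
# If your unzip build refuses split archives, merging them first with Info-ZIP's zip
# should also work:  zip -s 0 file.zip --out single.zip && unzip single.zip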
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use File Roller Archive Manager, which is just listed as Archive Manager in Ubuntu 18.
Mount the remote home folder on the local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to log in with and xxx.xxx.xxx.xxx with the machine's IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux file system and navigate it in a file manager just like your local file system, limited of course by the privileges of the remote user account (typically you can only write to that user's home folder).
Using Archive Manager I opened the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted it inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X Window based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I don't know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is Dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source file system was FAT32, so I was not able to combine the spanned set into one zip file because of its 4 GB file size limit.
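When you are done, the sshfs mount from the steps above can be removed again:
sudo umount remote
# or, if it was mounted without sudo:
# fusermount -u remote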

Accessing the /root folder on a BeagleBone (Debian) after the /usr directory was deleted

I had some .cpp programs in the /root directory of my BeagleBone Black (Debian). Due to a stupid accident, the /usr directory was deleted on my BeagleBone. It makes sense to me now that I can no longer access the BeagleBone. What I can do is boot the BeagleBone from an SD card, but of course that gives me a different root directory. Do I still have a chance to access my .cpp programs from the old root directory? The funniest part of this story of my stupidity is that I didn't store the .cpp programs anywhere else.
Thank you all in advance!
Yes, boot a regular SD-card image (make sure there is no "flasher" in the image name).
Once booted you can mount the eMMC and access your files. Something like this should do the job:
mount /dev/mmcblk1p2 /media
ls /media/root
Depending on what you have installed on the eMMC, it may be a different partition (last digit), e.g. mmcblk1p1 instead of mmcblk1p2.
You can then get the files from /media/root, e.g. via SCP (WinSCP if you are on Windows).
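A hedged example of that copy step, run on your own PC (192.168.7.2 and the debian user are the defaults of stock BeagleBone images; adjust to your setup):
mkdir -p beaglebone-backup
scp 'debian@192.168.7.2:/media/root/*.cpp' beaglebone-backup/
# If /media/root is only readable by root, first copy the files on the BeagleBone itself:
#   sudo cp /media/root/*.cpp /home/debian/ && sudo chown debian /home/debian/*.cpp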

Buildroot - System doesn't boot - /dev/ttyS0 no such file

I'm using Buildroot to create a file system for a Raspberry Pi. I have uncompressed the file system image into the root partition of my SD card, but I can't boot the operating system. I get the following errors:
Can't open /dev/null no such file or directory
Can't open /dev/ttyS0 no such file or directory
Which line of the configuration tool should I enable or modify in order to boot the system?
EDIT
I've followed the steps provided by Thomas Petazzoni and used a preconfigured version of buildroot. Now the system works but I still don't know which option in the kernel configuration tool was causing the problem.
You don't have devtmpfs enabled in your kernel.
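Assuming a standard Buildroot output layout, you can check the kernel configuration for the relevant symbols like this; both should be set to y for an automatically populated /dev:
grep -E 'CONFIG_DEVTMPFS(_MOUNT)?=' output/build/linux-*/.config
# expected:
# CONFIG_DEVTMPFS=y
# CONFIG_DEVTMPFS_MOUNT=y
# If missing, enable them via make linux-menuconfig under Device Drivers -> Generic Driver Options.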
Also, you should start by using the raspberrypi_defconfig in Buildroot instead of rolling your own. Do:
make distclean
make raspberrypi_defconfig
make
And then follow the instructions in board/raspberrypi/readme.txt to know how to use the resulting images.

PhpStorm write issues in .idea directory

When I try to save a file to disc within a project directory, I get this error:
java.io.IOException: W:\\[projectname]\\.idea not found
Some research tells me the (network) location is not writable.
I'm trying to write this file from PhpStorm on Windows 8.
The drive (W:) is a network drive mapped to a Linux machine.
The directory I am trying to write to is chowned to the same user and group that I connect with from Windows.
This is a result of ls -alh:
drwxrwxrwx 2 correct-user correct-user
On Linux and other Unix-like operating systems, files starting with a . are considered 'hidden files' by default. As such, when the Windows-based program creates the directory, it cannot see it right afterwards because it is hidden, even though the creation was successful. You can fix this in your Samba config by adding the following line to the share configuration:
hide dot files = no
It turned out my Samba settings contained a veto files parameter. Removing this parameter allows me to write dot files again.
Samba describes this setting as follows:
This is a list of files and directories that are neither visible nor accessible
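To see which of these options are active for your share, Samba's testparm tool can dump the effective configuration; a quick check might look like this (service name may differ on your distribution):
testparm -s | grep -Ei 'hide dot files|veto files'
# after editing smb.conf, reload the service, e.g.:
sudo systemctl restart smbd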
