WordPress, Linux, server folder names - linux

I have a problem with deleting or even accessing folders on a Linux server.
The folders are located in wp-content of a WordPress installation.
The problem is that I can't open the server folder listing in WinSCP because the folders have weird names.
Example names:
If I execute ls -l I can see that I have the required permissions, and names like:
??? <- folder name example
??
I tried opening them in FileZilla, which successfully connects to the wp-content folder (WinSCP can't even do that), but after entering wp-content I can't open the above-mentioned folders or even rename them.
I tried SSH-ing into the Linux server, but I can't manage to cd into the above folders because it says it can't find the file/directory.
What are the options for deleting files with special characters?
I tried using single quotes and backslashes, but when pressing Tab nothing happens...
Is it possible to delete all folders except the required ones? Then I could name which ones to keep and delete all the others.

As far as I know, you will need to use double quotation marks around the name of the file and the asterisk wildcard (an asterisk matches zero or more characters).
Have you tried this:
rm -rf -- *" ### "
where ### are the special characters
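If quoting doesn't help because the names contain unprintable bytes (which is why ls displays them as ?), another common option is to delete by inode number. A minimal sketch, assuming GNU find; the inode value 123456 is a placeholder for whatever ls -li shows for your folder:
ls -li                                              # first column is each entry's inode number
find . -maxdepth 1 -inum 123456 -exec rm -rf {} +   # remove the entry with that inode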
This website might be helpful:
https://www.computerhope.com/unix/urm.htm
Good luck!

Related

my npm is broken - Cannot read property 'get' of undefined

I have Visual Studio 2017 and wanted to make a Cordova app. I have had many problems with it and have now found one. I am running Windows 10 and have installed Node.js, but npm does not work. I have tested different commands but I always get the same error. I have uninstalled Node but I cannot proceed.
Just posting this here to help any future wanderers.
In my case the actual issue was the presence of a space in my Windows user name folder, which was also clear from the first line of the stack trace:
Error: EPERM: operation not permitted, mkdir 'C:\Users\FirstName'
Since there is no directory named FirstName and the actual directory is supposed to be FirstName LastName, npm is trying to run mkdir, for which it gets operation not permitted.
Following is how I fixed it, thanks to citoreek, g8up & gijswijs.
Run npm config edit to edit your config; this will open a text file in Notepad or your configured editor,
then change the cache path from
; cache=C:\Users\Gijs van Dam\AppData\Roaming\npm-cache
to
cache=C:\Users\GIJSVA~1\AppData\Roaming\npm-cache
Remember to remove the ; at the start.
The next question would be: how do we know to replace our user name with GIJSVA~1?
There are a couple of ways to work this out.
Go to C:\Users, open PowerShell, and execute the following command:
cmd /c dir /x
What this does is list all the directories in the current directory along with their short (8.3) names, which are not allowed to contain spaces and are at most eight characters long (up to six characters of the name plus a tilde and a digit). Copy the short name shown against your user name directory and use it in your cache path.
You will notice these short names only exist for directories that don't fit the 8.3 naming convention, e.g. because they contain spaces or are longer than eight characters (for the rest of the directories, the short name is the same as the directory name).
If you don't want to use the above command, simply remove all the spaces from your user name in your cache path, take the first six characters of the user directory name, and postfix it with ~1. You should also uppercase it, though it appears to make no difference since Windows paths are case-insensitive.
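For example, with the hypothetical "FirstName LastName" user folder from the stack trace above, the edit would turn
; cache=C:\Users\FirstName LastName\AppData\Roaming\npm-cache
into
cache=C:\Users\FIRSTN~1\AppData\Roaming\npm-cache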
After you are done editing this file, save your changes, then try again after closing any active PowerShell/Bash processes and reopening them.
I apologize for my question. I just needed to reboot Windows.
In my case this was a permissions problem with the ~/.np* files and directories, which were owned by root by mistake. I did
sudo find ~/.np* -exec chown myuser {} \;
and that solved it.
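An equivalent, arguably simpler approach (assuming the standard npm dotfiles ~/.npm and ~/.npmrc; adjust to whatever ~/.np* matches on your machine) is a recursive chown:
sudo chown -R "$USER" ~/.npm ~/.npmrc    # hand ownership back to the logged-in user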
The base path is specified in the file .npmrc

cd command: how to go back an unknown number of levels from the current subdirectory to a particular parent directory (Unix and DOS)

OK, so I am trying to resolve a URI in an XML catalog, and I want to go back from a particular sub-directory to a parent directory that is an unknown number of levels up.
eg:
file:///D:/Sahil/WorkSpaces1/Cartridges1/Project1/ParticularFolder/Level1/Level2/<so-many-levels>/CurrentFolder
I want to go back from "CurrentFolder" to "ParticularFolder" without typing in the full file path.
I want to achieve this because I work on multiple projects which all have "ParticularFolder" in them, so the code inside the sub-directories of this folder should dynamically have access to all other files in other sub-directories inside this parent folder. I do not want to specify separate full file paths for my various projects and make the code too rigid.
Is it possible? Please mention how to achieve this on Windows, Unix, as well as Linux.
In UNIX/Linux/OS X/etc.:
while [ "$(basename "$PWD")" != "ParticularFolder" ]; do cd ..; done
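One caveat: if ParticularFolder is not an ancestor of the current directory, this loop never terminates once it reaches /. A slightly safer variant of the same idea, stopping at the filesystem root:
while [ "$(basename "$PWD")" != "ParticularFolder" ] && [ "$PWD" != "/" ]; do cd ..; done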

SCP gives "File or directory not found"

I am having an issue. I am using the scp command to transfer files from the desktop of my Mac OS X machine to my virtual server. The thing is, I ran the command and successfully transferred one file from my desktop over to the server, no problem.
So I use the same command, which is:
scp filename_I_want_to_transfer user@serverip:
So basically that looks like scp test user@10.0.0.0:
(I just used a random IP for the example.)
Anyway, on the second file I'm trying to transfer, which is also in document format, I continually get "No such file or directory".
Any ideas on why this might be happening?
To send a file from the local host to another server use:
scp /path/to/file.doc user@<IP or hostname>:/path/to/where/it/should/go/
To get a file from another server to the local host use:
scp user@<IP or hostname>:/path/to/file.doc /path/to/where/it/should/go/
This is the format I reliably use for copying from one location to another. You can use an absolute path, or a relative/special-character path, such as
scp suiterdev@fakeserver:~/folder/file .
which would be "Securely copy the file named file in $HOME/folder/ (~ is equivalent to ~suiterdev or $HOME) as user suiterdev from host fakeserver to the current directory (.)".
However, you'll have to take care that special characters (see the shell's filename expansion mechanism) used in the remote path are not expanded locally (because that is typically not what you want).
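For instance, quoting the remote path keeps the local shell from expanding the wildcard, so the remote side expands it instead (same hypothetical user and host as above):
scp suiterdev@fakeserver:'~/folder/*.txt' .    # *.txt is expanded on fakeserver, not locally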
Well, I am using Ubuntu 15.10 and this is what worked for me:
scp user@host.com:path/to/file.txt /home/to/local/folder/
instead of
scp user@host.com:/path/to/file.txt /home/to/local/folder/
Note that after user@host.com I do not include the forward slash; I immediately append the path after the ":".
scp uses the target user's home directory as the default directory (i.e. a relative path), so when you need an absolute path, use one (starting with a slash (/)).
I know this is way too late to help you, but it may help others who have the same problem as me.
In my case my PC is set up to use backslashes "\" instead of forward slashes "/", and changing to backslashes removed the errors.
But I only had to change the slashes to backslashes in the path on my PC, as my Raspberry Pi uses forward slashes.
I know it is a bit confusing, but it worked for me.

rsync not synchronizing .htaccess file

I am trying to rsync directory A on server1 with directory B on server2.
Sitting in directory A on server1, I ran the following command:
rsync -av * server2::sharename/B
but the interesting thing is, it synchronizes all files and directories except .htaccess and any other hidden file directly in directory A. Hidden files within subdirectories do get synchronized.
I also tried the following command:
rsync -av --include=".htaccess" * server2::sharename/B
but the results are the same.
Any ideas why the hidden files in directory A are not getting synchronized, and how to fix it? I am running as the root user.
Thanks.
This is due to the fact that * is by default expanded to all files in the current working directory except files whose names start with a dot. Thus, rsync never receives these files as arguments.
You can pass . (denoting the current working directory) to rsync instead:
rsync -av . server2::sharename/B
This way rsync will look for files to transfer in the current working directory, as opposed to looking for them in whatever * expands to.
Alternatively, you can use the following command to make * expand to all files, including those that start with a dot:
shopt -s dotglob
See also the shopt manpage.
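With dotglob enabled, the original command from the question then picks up the hidden files as well:
shopt -s dotglob
rsync -av * server2::sharename/B    # * now matches .htaccess too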
For anyone who's just trying to sync directories between servers (including all hidden files) -- e.g., syncing somedirA on a source server with somedirB on a destination server -- try this:
rsync -avz -e ssh --progress user@source-server:/somedirA/ somedirB/
Note the slashes at the end of both paths. Any other syntax may lead to unexpected results!
Also, for me it's easiest to perform rsync commands from the destination server, because it's easier to make sure I've got proper write access (i.e., I might need to add sudo to the command above).
It probably goes without saying, but your remote user also needs read access to somedirA on the source server. :)
I had the same issue.
For me, when I ran the following command, the hidden files did not get rsync'ed:
rsync -av /home/user1 server02:/home/user1
But when I added slashes at the end of the paths, the hidden files were rsync'ed:
rsync -av /home/user1/ server02:/home/user1/
Note the slashes at the end of the paths; as Brian Lacy said, the slashes are the key. I don't have the reputation to comment on his post or I would have done that.
I think the problem is due to shell wildcard expansion. Use . instead of *.
Consider the following example directory content
$ ls -a .
. .. .htaccess a.html z.js
The shell's wildcard expansion translates the argument list that the rsync program gets from
-av * server2::sharename/B
into
-av a.html z.js server2::sharename/B
before the command starts getting executed.
The * causes the hidden files to be skipped (the shell never passes them to rsync), so you should omit it.
On a related note, in case anyone is coming in from Google etc. trying to find out why rsync is not copying hidden subfolders: I found one additional reason why this can happen and figured I'd pay it forward for the next person running into the same thing: the -C option (obviously --exclude would do it too, but I figure that one's a bit easier to spot).
In my case, I had a script that was copying several folders across computers, including a directory with several git projects, and I noticed that I couldn't run any of the normal git commands in the copied repos (yes, normally one should use git clone, but this was part of a larger backup that included other things). After looking at the script, I found that it was calling rsync with 7 or 8 options.
After googling didn't turn up any obvious answers, I started going through the switches one by one. After dropping the -C option, it worked correctly. In the case of the script, the -C flag appears to have been added by mistake, likely because sftp was originally used and -C is a compression-related option under that tool.
Per man rsync, the option is described as
--cvs-exclude, -C auto-ignore files in the same way CVS does
Since CVS is an older version control system, and given the man page description, it makes perfect sense that it would behave this way.

Linux folder permissions

At my office, we have a network directory structure like this:
/jobs/2004/3999-job_name/...
/jobs/2004/4000-job_name/...
The issue is that employees rename the "4000-job_name" folders (which in turn breaks other things that rely on the name being consistent with a database).
How can I stop users from renaming the parent folder while still allowing them full control of that folder's contents?
Please keep in mind that this is a Samba share that Windows users will be accessing.
I think you want to do this:
chmod a=rx /jobs #chdir and lsdir allowed, modifying not
chmod a=rwx /jobs/* #allow everything to everyone in the subdirectories
Since the directories /jobs/* are in fact files in /jobs, their names cannot be changed without write permission on /jobs. In the subdirectories of /jobs, everyone is allowed to do anything with the commands above.
Also, be sure to set the permissions of new directories to rwx as you add them.
(edit by Bill K to fix the examples--the solution was correct but he misread the question due to the strange coloring SO added)
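Applied to the directory layout from the question (the year directory made read-only so the job names can't change, while the job directories themselves stay fully writable), a sketch might look like this; 4001-job_name is a hypothetical new job:
chmod a=rx /jobs/2004                  # names like 4000-job_name can no longer be changed
chmod a=rwx /jobs/2004/*               # everyone keeps full control of the job contents
mkdir /jobs/2004/4001-job_name         # new jobs must be created by a user with write access to /jobs/2004 (e.g. root)
chmod a=rwx /jobs/2004/4001-job_name   # per the advice above, open up the new directory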
The question has already been answered, so I'm just gonna make a brief remark: in your question, you use the terms "folder" and "directory" interchangeably. Those two are very different, and in my experience 99% of all problems with Unix permissions have to do with confusing the two. Remember: Unix has directories, not folders.
EDIT: a folder is two pieces of cardboard glued together that contain files. So, a folder is a container; it actually physically contains the files it holds. So, obviously a file can only be in one container at a time. To rename a file, you not only need access to the folder, you also need access to the file. Same to delete a file.
A directory, OTOH, is itself a file. [This is, in fact, exactly how directories were implemented in older Unix filesystems: just regular files with a special flag, you could even open them up in an editor and change them.] It contains a list of mappings from name to location (think phone directory, or a large warehouse). [In Unix, these mappings are called links or hardlinks.] Since the directory only contains the names of the files, not the files themselves, the same file can be present in multiple directories under different names. To change the name of a file (or more precisely to change a name of a file, since it can have more than one), you only need write access to the directory, not the file. Same to delete a file. Well, actually, you can't delete a file, you can only delete an entry in the directory – there could still be other entries in other directories pointing to that file. [That's why the syscall/library function to delete a file is called unlink and not delete: because you just remove the link, not the file itself; the file gets automatically "garbage collected" if there are no more links pointing to it.]
That's why I believe the folder metaphor for Unix directories is wrong, and even dangerous. The number one security question on one of the Unix mailing lists I'm on is "Why can A delete B's files, even though he doesn't have write access to them?", and the answer is: he only needs write access to the directory. So, because of the folder metaphor, people think that their files are safe, even when they are not. With the directory metaphor, it would be much easier to explain what's going on: if I want to delete you from my phonebook, I don't have to hunt you down and kill you, I just need a pencil!
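A quick demonstration of that point (hypothetical paths; the file itself is read-only, but the directory containing it is writable):
mkdir d
touch d/f
chmod 444 d/f    # no write access to the file itself
rm -f d/f        # still succeeds: removing the entry only modifies the directory d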
If you make the parent directory--/jobs/2004/--non-writable for the users, they won't be able to rename that folder.
I did the following experiment on my own machine to illustrate the point:
ndogg@seriallain:/tmp$ sudo mkdir jobs
ndogg@seriallain:/tmp$ sudo mkdir jobs/2004
ndogg@seriallain:/tmp$ sudo mkdir jobs/2004/3999-job_name/
ndogg@seriallain:/tmp$ cd jobs/2004/
ndogg@seriallain:/tmp/jobs/2004$ sudo chown ndogg.ndogg 3999-job_name/
ndogg@seriallain:/tmp/jobs/2004$ ls -alh
total 12K
drwxr-xr-x 3 root root 4.0K 2009-03-13 18:23 .
drwxr-xr-x 3 root root 4.0K 2009-03-13 18:23 ..
drwxr-xr-x 2 ndogg ndogg 4.0K 2009-03-13 18:23 3999-job_name
ndogg@seriallain:/tmp/jobs/2004$ touch 3999-job_name/foo
ndogg@seriallain:/tmp/jobs/2004$ mv 3999-job_name/ blah
mv: cannot move `3999-job_name/' to `blah': Permission denied
