Trying to upload a file from local Mac to remote Ubuntu server - linux

scp -r /Users/Brain/Desktop/tree.png account@address:/home/directory
I successfully connect to the server and enter my password, but receive this message: "/Users/Brain/Desktop/tree.png: No such file or directory found"
I know the file exists, it is sitting on my desktop and I can open it.
Any guidance would be much appreciated!!
Tried looking at this post but it did not help: scp files from local to remote machine error: no such file or directory

Typo? For a location like /Users, the odds strongly favour a person named Brian over one named Brain. After reversing the vowels, what happens with this command?
ls -l /Users/Brian/Desktop/tree.png
When presented with unexpected errors for file(s) known to exist, there's usually an issue with one pathname component. Start with the full path and drop trailing components until there's no error, e.g.:
ls /Users/Brain/Desktop/tree.png
ls /Users/Brain/Desktop
ls /Users/Brain
ls /Users
Some shells can trim a pathname component from the previous command with :h ; try repeating this:
!!:h
After typing the above, another possible shortcut is UP-arrow RETURN
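For example, a minimal sketch in bash (assuming history expansion is enabled, and using the corrected spelling of the user name):
ls /Users/Brian/Desktop/tree.png
!!:h
!!:h
The first !!:h expands to ls /Users/Brian/Desktop and the second to ls /Users/Brian, walking back up the path one component at a time.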

Related

my npm is broken - Cannot read property 'get' of undefined

I have Visual Studio 2017 and wanted to make a Cordova app. I have had many problems with it and have now found a problem. I am running Windows 10 and have installed Node.js, but npm does not work. I have tried different commands but I always get the same error. I have uninstalled Node but I still cannot run it.
Just posting this here to help any future wanderers.
In my case the actual issue was a space in my Windows user name folder, which was also clear from the first line of the stack trace:
Error: EPERM: operation not permitted, mkdir 'C:\Users\FirstName'
Since there is no directory named FirstName (the actual directory is FirstName LastName), npm tries to run mkdir, for which it gets "operation not permitted".
Here is how I fixed it, thanks to citoreek, g8up & gijswijs:
Run npm config edit to edit your config; this will open a text file in Notepad or your configured editor.
Then change the cache path from
; cache=C:\Users\Gijs van Dam\AppData\Roaming\npm-cache
to
cache=C:\Users\GIJSVA~1\AppData\Roaming\npm-cache
Remember to remove the ; at the start.
The next question is: how do we know to replace our user name with GIJSVA~1?
There are a couple of ways to find this out.
Go to C:\Users, open PowerShell, and execute the following command:
cmd /c dir /x
This lists all the directories in the current directory along with their short (8.3) names, which contain no spaces and are at most eight characters long (typically the first six characters of the name plus ~1). Copy the short name shown next to your user name directory and use it in your cache path.
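For illustration only (the dates and the second directory are made up; the short name is the one from the cache path example above), the relevant part of the output might look roughly like this:
 Directory of C:\Users
05/12/2020  09:14 PM    <DIR>          GIJSVA~1     Gijs van Dam
05/12/2020  09:14 PM    <DIR>                       Public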
You will notice these short names only exist for directories whose names contain spaces or are longer than eight characters; for the rest of the directories, the short name is the same as the directory name.
If you don't want to use the above command, simply remove all the spaces from your user name in your cache path, take the first six characters of the user directory name, and append ~1. You can also uppercase it, but that does not appear to make any difference.
After you are done editing this file, save your changes, close any active PowerShell / bash processes, reopen them, and try again.
I apologize for my question. I just needed to reboot Windows.
In my case this was a permissions problem with the ~/.np* files and directories. These were owned by root by mistake. I did
sudo find ~/.np* -exec chown myuser {} \;
and that solved it.
The base path is specified in the .npmrc file.

How to stop Linux aborting PATH search?

I have /usr/bin in my PATH, and dot (meaning current directory) later in the PATH. I have a program 'abcxyz' in two directories, /var and /someother. If I am in a mate-terminal in /var and key in some absurd name, dgxuznk, then bash says: "bash: dgxuznk: command not found" as you would expect. If I now make a link in /usr/bin called dgxuznk pointing to the program in /someother it runs the program, also as you would expect. But if I now remove that link, it doesn't say "command not found" any longer, but rather "bash: /usr/bin/dgxuznk: No such file or directory". It's as if it remembered where it found it before and expects to find it under /usr/bin again.
Even worse, if I now rename the program in /var (where I am) to dgxuznk, and key in "dgxuznk" it still complains "bash: /usr/bin/dgxuznk: No such file or directory" as if it can't get past the /usr/bin in the PATH to see the dot and look in the current directory to find the program.
Is this only in Fedora 19? How can I program it to get past the /usr/bin in the search path and find the current directory dot?
(Hint: if you want to reproduce this error - don't let it find the program in the current directory until after it's found it in /usr/bin.)
bash maintains an in-memory hash of where programs are found so that it doesn't have to go through the full path lookup every time a command is run. Each bash session maintains its own hash, but you can manipulate it with the built-in hash command. To see what is in the hash, just run it with no arguments. To clear it, use hash -r. In your case, you just want to remove dgxuznk with hash -d dgxuznk.
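For example, within the affected bash session (a sketch, using the command name from the question):
hash              # list the locations bash has remembered this session
hash -d dgxuznk   # forget just the stale dgxuznk entry
hash -r           # or clear the whole table
dgxuznk           # bash now performs a fresh PATH search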
(You might ask why bash doesn't just remove an entry from the hash if the location isn't found. There might be a good reason for reporting an error instead of falling back to path lookup, or it might be a bug or an area to improve.)

Shell script to download file from UNIX system directory

Can anyone help me write a shell script to download files from a Linux/UNIX system?
Regards
On UNIX systems, such as Linux and OS X, you have access to a utility called rsync. It is usually installed by default and is the tool to use to download files from another UNIX system.
It is a drop-in replacement for the cp (copy) command, but it is much more powerful.
To copy a directory from a remote system to yours, using SSH, you would do this:
rsync -r username@hostname:path/to/dir .
(Notice the dot at the end; it means 'place everything here'. The -r flag copies the directory recursively. You can also give the name of a local directory where the files should be placed.)
To download only some specific files, use this:
rsync 'username@hostname:path/to/dir/*.txt' .
(notice the quotes: if you omit them, your shell will try to expand the *.txt part locally, will fail and give you an error.)
Useful flags:
--progress: show a progress bar
--append: if a file has only partially downloaded, resume it where it left off
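Putting those flags together, a typical download might look like this (a sketch; the user, host, and path are the placeholders used above):
rsync -r --progress --append username@hostname:path/to/dir .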
I find the rsync utility so useful, I've created an alias for it in my shell and use it as a 'super-copy':
alias cpa='rsync -vae ssh --progress --append'
With that alias, copying files between machines is just as easy as copying files locally:
cpa user@host:file .
Making it even better
Since rsync is using SSH, it helps to setup a private/public key pair, so you don't have to type in your password every time:
How do I setup Public-Key Authentication?
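In short, something like the following (a sketch: ssh-copy-id ships with most OpenSSH installations, and the user and host names are placeholders):
ssh-keygen -t ed25519
ssh-copy-id username@hostname
The first command generates a key pair under ~/.ssh (accept the defaults); the second appends your public key to the remote host's authorized_keys.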
Furthermore, you can write down your username in your .ssh/config file and give the remote host a short name: read about it here.
For example, I have something like this:
Host panda
Hostname panda.server.long.hostname.com
User rodin
With this setup, my command to download files from the panda server is just:
cpa panda:path/to/my/files .
And there was much rejoicing.

SCP gives "File or directory not found"

I am having an issue. I am using the scp command to transfer files from the desktop of my Mac OS X machine to my virtual server. The thing is, I ran the command and successfully transferred one file from my desktop over to the server, no problem.
So I use the same command, which is:
scp filename_I_want_to_transfer user@serverip:
So basically that looks like scp test user@10.0.0.0:
(I just used a random IP as an example.)
Anyway, on the second file I'm trying to transfer, which is also a document, I continually get "No such file or directory".
Any ideas on why this might be happening?
To send a file from the local host to another server use:
scp /path/to/file.doc user@<IP or hostname>:/path/to/where/it/should/go/
To get a file from another server to the local host use:
scp user@<IP or hostname>:/path/to/file.doc /path/to/where/it/should/go/
This is the format I reliably use for copying from one location to another. You can use an absolute path, or a relative/special-character path, such as:
scp suiterdev@fakeserver:~/folder/file .
which would be "Securely copy the file named file in $HOME/folder/ (~ is equivalent to ~suiterdev or $HOME) as user suiterdev from host fakeserver to the current directory (.)".
However, you'll have to take care that special characters used in the remote path (see the shell's filename expansion mechanism) are not expanded locally, because that is typically not what you want.
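For example (a sketch reusing the hypothetical names above), the quotes keep your local shell from expanding the wildcard so that the remote side can do it instead:
scp 'suiterdev@fakeserver:~/folder/*.txt' .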
Well, I am using Ubuntu 15.10, and this is what worked for me:
scp user@host.com:path/to/file.txt /home/to/local/folder/
instead of
scp user@host.com:/path/to/file.txt /home/to/local/folder/
Note that after user@host.com I do not include a leading forward slash; I append the folder path immediately after the ":".
scp uses the remote user's home directory as the default directory (the path is taken as relative to it), so when you need an absolute path, use one starting with a slash (/).
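As a quick sketch with placeholder names: the first command copies into the remote user's home directory, the second into an absolute location:
scp file.txt user@host.com:
scp file.txt user@host.com:/tmp/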
I know this is way too late to help you, but it may help others who had the same problem as me.
In my case my PC is set up to use backslashes "\" instead of forward slashes "/", and changing to backslashes removed the errors.
But I only had to change the slashes on my PC's local directory path, as my Raspberry Pi uses forward slashes.
I know it is a bit confusing but it worked for me.

delete non-empty directory in vim

Vim users would be familiar with getting into and viewing the current directory listing by using
:o .
In this directory view, we are able to give additional commands like d, and Vim will respond with "Please give directory name:". This of course allows us to create a new directory in the current directory once we provide a directory name to Vim.
Similarly, we can delete an empty directory by first moving the cursor to the line listing the specific directory we want to remove and typing D.
The problem is, vim does not allow us to delete a non-empty directory.
Is that any way to insist that we delete the non-empty directory?
The directory view you're referring to is called netrw. You could read up on its entire documentation with :help netrw, but what you're looking for in this case is accessible by :help netrw-delete:
The g:netrw_rmdir_cmd variable is used to support the removal of directories.
Its default value is:
g:netrw_rmdir_cmd: ssh HOSTNAME rmdir
If removing a directory fails with g:netrw_rmdir_cmd, netrw then will attempt
to remove it again using the g:netrw_rmf_cmd variable. Its default value is:
g:netrw_rmf_cmd: ssh HOSTNAME rm -f
So, you could override the variable that contains the command to remove a directory like so:
let g:netrw_rmf_cmd = 'ssh HOSTNAME rm -rf'
EDIT: As sehe pointed out, this is fairly risky. If you need additional confirmation in case the directory is not empty, you could write a shell script that does the prompting. A quick Google search turned up this SO question: bash user input if.
So, you could write a script that goes like this:
#!/bin/bash
hostname=$1
dirname=$2

# prompt the user and save the answer
read -r -p "Really delete '$dirname' on $hostname? [y/N] " answer

if [ "$answer" = "y" ] || [ "$answer" = "Y" ]
then
    ssh "$hostname" rm -rf "$dirname"
fi
Then, set the command to execute your script
let g:netrw_rmf_cmd = 'safe-delete HOSTNAME'
A lot of careful testing is recommended, of course :).
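One way to be careful is to try the script by hand on a throwaway directory before pointing netrw at it (a sketch; safe-delete is the hypothetical script name used above, and myhost is a placeholder):
./safe-delete myhost /tmp/netrw-delete-test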
Andrew's answer didn't work for me. I found another way to solve this. Try :help netrw_localrmdir.
Settings from my .vimrc file:
let g:netrw_localrmdir="rm -r"
