Text-based FTP client settings behind a proxy - Linux

I need to create a bash script which will connect to an FTP server, upload a file, and close the connection. Usually this would be an easy task, but I need to specify some specific proxy settings, which makes it difficult.
I can connect to the FTP server fine using a GUI client, e.g. FileZilla, with the following settings:
Proxy Settings
--------------
FTP Proxy : USER@HOST
Proxy Host: proxy.domain.com
Proxy User: blank
Proxy Pass: blank
FTP Settings
------------
Host : 200.200.200.200
Port : 21
User : foo
Pass : bar
What I can't seem to do is replicate these settings in a text-based FTP client, e.g. ftp, lftp, etc. Can anyone help with setting this script up?
Thanks in advance!

According to the docs, lftp should support the ftp_proxy environment variable, e.g.
ftp_proxy=ftp://proxy.domain.com lftp -c "cd /upload; put file" ftp://200.200.200.200
If that works, you can put
export ftp_proxy=ftp://proxy.domain.com
in your shell configuration files, or
set ftp:proxy=ftp://proxy.domain.com
in your ~/.lftprc.
Alternatively, try running the commands that your GUI FTP client is running, e.g.
upload.lftp
user ...@... ...
put ...
And run it using -f (execute commands from the file):
lftp -f upload.lftp 200.200.200.200
Or try curl -T or ncftpput.
Something like:
FTP_PROXY=ftp://proxy.domain.com curl -T uploadfile -u foo:bar ftp://200.200.200.200/myfile
might work.
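Putting it all together, a minimal sketch of the requested bash script, building on the ftp_proxy approach above (the local file path and the remote /upload directory are placeholders):
#!/bin/bash
# Route the FTP session through the proxy, upload one file, disconnect.
# The credentials and addresses are the ones from the question.
export ftp_proxy=ftp://proxy.domain.com
lftp -u foo,bar -c "cd /upload; put /path/to/file" ftp://200.200.200.200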

Can p4api.net connect to a (local) p4 personal server?

I started a personal server with
p4 -u itsame -d c:\perforce\local -c itsameClient clone -m 1 -v -p p4server:somePort -f //repo/path/...
It works - I can use it in P4V or from the command line - there's even the .p4root in c:\perforce\local.
However, from the latest p4api.net, it just keeps trying to use TCP to connect. Is there no way to say this is to the file system - or perhaps does the personal server expose itself to localhost:port somehow?
The client application's P4PORT needs to be set correctly in order to connect to a server. For a remote server you simply specify the hostname:port. For a personal server, there's a special P4PORT syntax that specifies how the local server executable is to be invoked to service client requests. You can see it by running p4 set P4PORT within your personal server directory:
C:\Perforce\test>p4 set P4PORT
P4PORT=rsh:p4d.exe -i -r "c:\Perforce\test\.p4root" (config 'c:\Perforce\test\p4config.txt')
Note that when you initialize a personal server, it automatically sets up P4CONFIG in that directory, which is why that P4PORT is automagically set for you already. Your P4API.NET application should be able to use that same config (removing the need to manually copy over the P4PORT string) as long as:
it has the correct cwd set (i.e. the directory the personal server lives in)
the P4PORT is not overridden with an incorrect value
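If relying on the config lookup is not practical, a hedged alternative is to launch your application from the personal server directory with the rsh-style P4PORT set explicitly (MyP4ApiNetApp.exe is a hypothetical application name):
C:\>cd c:\perforce\local
C:\perforce\local>set P4PORT=rsh:p4d.exe -i -r "c:\perforce\local\.p4root"
C:\perforce\local>MyP4ApiNetApp.exe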

Ubuntu 18, proxy not working in terminal but working in browser

(a related and perhaps simpler problem to solve: proxy authentication by MSCHAPv2)
Summary: I am using Ubuntu 18; the proxy works with the web browser but not with terminal applications (wget, curl or apt update). Any clues? The problem seems to be interpreting the proxy's "PAC file"... Is it? How do I translate it to Linux's proxy variables? ... Or is the problem simpler: was my proxy config (see the step-by-step procedure below) wrong?
Details:
In a terminal, env | grep -i proxy shows
https_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
http_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
no_proxy=localhost,127.0.0.0/8,::1
NO_PROXY=localhost,127.0.0.0/8,::1
ftp_proxy=http://user:pass@pac._ProxyDomain_/proxy.pac:8080
and the browser (Firefox) works fine for any URL, but:
wget http://google.com says:
Resolving pac._ProxyDomain_ (pac._ProxyDomain_)... etc.etc.0.26
Connecting to pac._ProxyDomain_ (pac._ProxyDomain_)|etc.etc.0.26|:80... connected.
Proxy request sent, awaiting response... 403 Forbidden
2019-07-25 12:52:19 ERROR 403: Forbidden.
curl http://google.com says "curl: (5) Could not resolve proxy: pac._ProxyDomain_/proxy.pac"
Notes
(recent news: purging the exported proxy variables changed something, and I have not re-tested everything since...)
The proxy configuration procedure I used is below (is there some plug-and-play PAC file generator? Do I need a PAC file? My best guess so far is sketched right after this note.)
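My best guess so far (untested sketch): a PAC file is just JavaScript that returns PROXY host:port directives, so perhaps I should fetch it and point the variables at the real proxy it names, not at the PAC URL itself (realproxy.example.com:8080 is a placeholder):
curl -s --noproxy '*' http://pac._ProxyDomain_/proxy.pac | grep -o 'PROXY [^";]*'
# then use whatever host:port it prints instead of the PAC URL:
export http_proxy='http://user:pass@realproxy.example.com:8080'
export https_proxy="$http_proxy"
export ftp_proxy="$http_proxy"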
Config procedures used
The machine was running with a direct, non-proxy internet connection... Then the machine moved to the LAN with the proxy.
Add "export *_proxy" lines (http, https and ftp) to my ~/.profile. The URL definitions are in the form http_proxy="http://user:pwd@etc" (supposing that is correct, because it was tested before with the user:pwd@http://pac.domain/proxy.pac syntax and Firefox prompted for proxy login). (If the current proxy password uses a # character, does it need to change?)
Add the same "export *_proxy" lines to ~root/.profile. (Is this needed?)
(One can reboot and test with echo $http_proxy.)
visudo procedure described here
Reboot and browse with Firefox directly, without needing to log in (good, it's working!). Testing env | grep -i proxy shows all the correct values as expected.
Testing wget and curl as at the beginning of this report: the proxy bug.
Testing sudo apt update: same bug.
... after that, one more step: supposing that no such file exists for apt, I created one with sudo nano /etc/apt/apt.conf.d/80proxy and added 3 lines of the form Acquire::*::Proxy "value"; (see the sketch below), with value http://user:pass@pac._ProxyDomain_/proxy.pac:8080, where pass is etc%23etc, URL-encoded.
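Concretely, the three lines in /etc/apt/apt.conf.d/80proxy looked like this (same placeholder value as above; note that apt, like wget and curl, expects a real proxy host:port here, not a PAC URL):
Acquire::http::Proxy "http://user:pass@pac._ProxyDomain_/proxy.pac:8080";
Acquire::https::Proxy "http://user:pass@pac._ProxyDomain_/proxy.pac:8080";
Acquire::ftp::Proxy "http://user:pass@pac._ProxyDomain_/proxy.pac:8080";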
Summary of tests performed
CONTEXT-1.1
(this was a problem, but I am now ignoring it to focus on the more relevant one)
After the (proxied) cable connection and the proxy configuration in the system (see the section "Config procedures used" above). Proxy password with a special character.
curl http://google.com says "curl: (5) Could not resolve proxy..."
When I change %23 to # everywhere in .profile, the error from wget changes, but curl's does not. wget changes to "Error parsing proxy URL http://user:pass@pac._ProxyDomain_/proxy.pac:8080: Bad port number"
PS: when a $ was used in the password, the system (something in the internal export http_proxy command or in the use of http_proxy) confused it with a shell variable.
CONTEXT-1.2
Same as context 1.1 above, but with a password containing no special characters. A good, clean proxy password.
curl http://google.com says "curl: (5) Could not resolve proxy..."
CONTEXT-2
After the (proxied) cable connection, with no proxy configuration in the system (but confirmed that the connection works in the browser after logging in via the automatic popup form).
curl -x 192.168.0.1:8080 http://google.com says "curl: (7) Failed to connect..."
curl --verbose -x "http://user:pass@pac._proxyDomain_/proxy.pac" http://google.com says "curl: (5) Could not resolve proxy..."
Other configs in use
As @Roadowl suggested checking:
the files ~/.netrc and ~root/.netrc do not exist
the file /etc/wgetrc exists, but everything in it is commented out except passive_ftp = on

Sublime Text SFTP tunnel (wbond package)

To work remotely I need to SSH into the main server and then again into the departmental server.
I would like to set up a tunnel using the Sublime Text 3 wbond SFTP package to view and edit files remotely, but I can't seem to find any information about setting up a tunnel. Is this even possible?
The reason I'm interested in this particular package is that I am unable to install any packages locally on the server, so using something like rsub is not possible.
Any other suggestions besides sublime sftp are welcome.
I'm not sure the SFTP plugin allows you to do this directly.
What I would suggest is using ssh -L to create a tunnel.
ssh -L localhost:random_unused_port:target_server:22 username_for_middle_server@middle_server -nNT
Use the password/identity_file for the middle server
The -nNT is to avoid opening an interactive shell on the middle server.
IMPORTANT: You need to keep the ssh -L command running, so keep that shell open.
This way you can connect to the target_server like so:
ssh username_for_target_server#localhost -p random_port_you_allocated
Similarly, you can set up the SFTP plugin config file like so:
{
...
"host":"localhost",
"user":"username_for_target_server",
"ssh_key_file": "path_to_target_server_key",
"port":"random_port_you_allocated",
...
}
As a side note, always use the same local port to tunnel to the same server; otherwise, with the default ssh configuration, you will be warned of a "man-in-the-middle attack", because the host key saved in ~/.ssh/known_hosts will not match the previous one. This can be avoided by disabling that check, but I wouldn't recommend it.
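For instance, with hypothetical names filled in (alice/bob, main-server/dept-server, local port 2222), the whole flow looks like:
ssh -nNT -L localhost:2222:dept-server:22 alice@main-server
# keep the above running; then, in another shell (or in the SFTP plugin config):
ssh bob@localhost -p 2222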

SSH Secure Shell Tunnel X11 - Display not shown

I am using SSH Secure Shell to connect to a server. My connection is allowed to tunnel X11 connections, but when I execute the command, the display does not show up. I get the message:
couldn't connect to display "localhost:12.0"
I have an SSH server installed and running on my machine.
Remember: Both the client and the server have to allow X forwarding.
On the server look in /etc/ssh/sshd_config and make sure you have X11Forwarding yes. You will need to restart the service if you edit this file.
On the client, look in /etc/ssh/ssh_config (your user's ~/.ssh/config will override global settings, if you have created this file) and make sure you have ForwardX11 yes.
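For example, a minimal per-host stanza in ~/.ssh/config (myserver is a hypothetical host):
Host myserver
    ForwardX11 yes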
Alternatively, give the -X switch when you create your client connection, e.g. ssh -X user@host.
Oh and of course, your client needs to be running an X server which you have authority to use! E.g. if you connect from Windows using PuTTY it will never work, as Windows is not an X server!
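Once connected, a quick sanity check (assuming an X client such as xeyes is installed on the server):
echo $DISPLAY   # should print something like localhost:10.0
xeyes           # a test window should appear on your local display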
I figured it out. I needed to have an X server installed on my computer, not an SSH server. I installed Xming for that purpose, and now everything works as it should.

ftp: Name or Service not known

On the command line,
> ftp ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
works on one computer but does not work on my other one. The error returned is:
ftp: ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/: Name or service not known
I also tried the raw IP address, which is
> ftp ftp://130.14.250.10/1000genomes/ftp/data/
But it didn't work either.
What is the problem here? How can I fix this?
The ftp command accepts the server name, not a URL. Your session likely should look like:
ftp ftp-trace.ncbi.nih.gov
(Server asks for login and password)
cd /1000genomes/ftp/data/
mget *
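To script that non-interactively, one option is a here-document with the classic ftp client; a sketch, with a placeholder anonymous login (-n suppresses auto-login so the user command works, -i disables per-file prompting for mget):
ftp -in ftp-trace.ncbi.nih.gov <<'EOF'
user anonymous you@example.com
cd /1000genomes/ftp/data
binary
mget *
bye
EOF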
This depends on the FTP client you are using. On Mac OS X (whose ftp client comes from BSD), for example, the default command-line FTP client accepts the full URL, while in CentOS the default client doesn't, and you need to connect just to the hostname. So it depends on the flavor of Linux and the installed default FTP client.
Default ftp client in CentOS (ARPANET):
ftp ftp-trace.ncbi.nih.gov
cd 1000genomes/ftp/data
If you want to use the full URL in CentOS 5.9 or Fedora 18 (where I tested it), you can install an additional FTP client. For example, ncftp and lftp have the behavior you are looking for.
ncftp, available through yum or your favorite package manager:
ncftp ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
NcFTP 3.2.2 (Aug 18, 2008) by Mike Gleason (http://www.NcFTP.com/contact/).
Connecting to ...
...
Logged in to ftp-trace.ncbi.nih.gov.
Current remote directory is /1000genomes/ftp/data
lftp, also available through your favorite package manager:
lftp ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
cd ok, cwd=/1000genomes/ftp/data
lftp ftp-trace.ncbi.nih.gov:/1000genomes/ftp/data>
Another, more efficient way to retrieve files is using wget or curl. These work for HTTP, FTP and other protocols.
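For example, listing the directory with curl and fetching it recursively with wget:
curl -l ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/
wget -r ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/data/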
It looks to me like the computer that isn't working is already adding the ftp:// prefix to the URL. Have you tried removing it from yours and seeing if that works?
> ftp ftp-trace.ncbi.nih.gov/1000genomes/ftp/data
