trace a particular IP and port - linux

I have an app running on port 9100 on a remote server, serving HTTP pages. After I ssh into the server I can curl localhost:9100 and receive the response.
However, I am unable to access the same app from the browser using http://ip:9100.
I am also unable to telnet to it from my local PC. How do I debug this? Is there a way to traceroute a particular IP and port combination, to see where the connection is being blocked?
Any Linux tools / commands / utilities would be appreciated.
Thanks,
Murtaza

You can use the default traceroute command for this purpose, so there is nothing extra to install:
traceroute -T -p 9100 <IP address/hostname>
The -T flag is required so that TCP is used instead of the default UDP probes.
In the rare case where traceroute isn't available, you can also use netcat:
nc -Czvw 5 <IP address/hostname> 9100

tcptraceroute xx.xx.xx.xx 9100
If you don't have it, you can install it with
yum -y install tcptraceroute
or
aptitude -y install tcptraceroute

You can use tcpdump on the server to check whether the client's packets even reach the server:
tcpdump -i any tcp port 9100
Also make sure your firewall is not blocking incoming connections.
EDIT: You can also write the dump to a file and view it with Wireshark on your client if you don't want to read it on the console.
2nd EDIT: You can check whether you can reach the port from your local PC via
nc ip 9100 -z -v
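The nc probe can be rehearsed end to end on one machine; here is a minimal sketch (python3 stands in for both the app and the nc -z style probe, and the port is picked by the OS rather than hard-coding 9100):

```shell
# Stand-in listener plus a connect-only probe, mirroring `nc ip 9100 -z -v`.
python3 - <<'EOF'
import socket

# server side: listen on loopback, let the OS choose a free port
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

# client side: a -z style probe completes the TCP handshake and closes
probe = socket.create_connection(("127.0.0.1", port), timeout=2)
print("port", port, "reachable")
probe.close()
srv.close()
EOF
```

If the same kind of probe fails from your PC against the real server, the connection is being dropped somewhere in between, and tcpdump on the server tells you whether the packets arrive at all.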

Firstly, check the IP address that your application has bound to. It could be binding only to a local address, for example, which would mean you'd never see it from a different machine regardless of firewall state.
You could also try a port scanner like nmap to see whether the port is open and visible externally; it can tell you if the port is closed (nothing is listening there), open (you should be able to see it fine), or filtered (by a firewall, for example).
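The bind-address point can be demonstrated directly; a small sketch (python3 as a stand-in for the app, with ports chosen automatically):

```shell
# A socket bound to 127.0.0.1 only accepts loopback traffic; one bound to
# 0.0.0.0 listens on every interface, which is what remote clients need.
python3 - <<'EOF'
import socket

loopback = socket.socket()
loopback.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
print("loopback bind:", loopback.getsockname()[0])   # prints "loopback bind: 127.0.0.1"

wildcard = socket.socket()
wildcard.bind(("0.0.0.0", 0))
print("wildcard bind:", wildcard.getsockname()[0])   # prints "wildcard bind: 0.0.0.0"

loopback.close()
wildcard.close()
EOF
```

On the server, ss -tlnp | grep 9100 (or netstat -tlnp) shows which case you are in: 127.0.0.1:9100 means loopback-only, while 0.0.0.0:9100 or *:9100 means all interfaces.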

It can be done with: tcptraceroute -p <destination port> <destination IP>, e.g. tcptraceroute -p 9100 10.0.0.50. Don't forget to install the tcptraceroute package on your system; tcpdump and nc are installed by default. Regards.

If you use the openssl tool, this is one way to extract the CA certificate for a particular server:
openssl s_client -showcerts -servername server -connect server:443
The certificate will have "BEGIN CERTIFICATE" and "END CERTIFICATE" markers.
If you want to see the data in the certificate, you can run: openssl x509 -inform PEM -in certfile -text -out certdata, where certfile is the certificate you extracted above. Look in certdata.
If you want to trust the certificate, you can add it to your CA certificate store or use it stand-alone as described. Just remember that the security is no better than the way you obtained the certificate.
https://curl.se/docs/sslcerts.html
After extracting the certificate, use keytool to install it.
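As a concrete sketch of the inspection step (using a throwaway self-signed certificate in place of one extracted from a real server; the file names are arbitrary):

```shell
# Generate a throwaway self-signed cert, then dump its contents as text,
# exactly as you would for a cert pulled out of the s_client output.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" -keyout key.pem -out certfile 2>/dev/null
openssl x509 -inform PEM -in certfile -text -out certdata
grep "Subject:" certdata
```

The Subject line in certdata shows the name the certificate was issued for, which is usually the first thing to check when a TLS connection is rejected.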

Related

SSH remote tunnel, am I missing something?

I want to make a local port that serves a Python http.server accessible from the internet without messing around with port forwarding on my home router, by tunnelling it through a DigitalOcean VPS.
My local port is 8080 and the port on the remote VPS is 4444, just for example:
ssh -i .ssh/mykey -R 4444:localhost:8080 root@myvpsip
But http://myvpsip:4444 is still not accessible.
ufw is disabled on the VPS. What am I missing?
For the forwarded port to listen on any address (and not just localhost), you need to prepend an additional colon to the forward specification:
ssh -i .ssh/mykey -R :4444:localhost:8080 root@myvpsip
Additionally, you must have GatewayPorts yes or GatewayPorts clientspecified in the server-side sshd configuration.
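For reference, the server-side piece is a one-line change (a sketch of /etc/ssh/sshd_config on the VPS; reload sshd after editing):

```
# /etc/ssh/sshd_config (server side)
# allow remote forwards to bind to addresses the client specifies,
# not just loopback:
GatewayPorts clientspecified
```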

CoAPS server: which coaps server can be used

1/ Now I want to make a libcoap client connect to a coaps server, but I cannot find a coaps server.
2/ So I need a coaps server with PSK; who can give me one?
I have an implementation of CoAP (libcoap) and an implementation of DTLS (tinyDTLS). I want to make the libcoap client connect to a coaps server.
I will be grateful for any advice.
You can use Eclipse Californium to start up your own DTLS based CoAP server.
Take a look at the DTLS example in the source repository; that should get you started.
Alternatively, you can connect to the Eclipse Californium sandbox CoAP server at californium.eclipse.org:5684.
Here's how to do that using the openssl s_client tool:
openssl s_client -dtls1_2 -psk_identity password -psk 736573616D65 -connect californium.eclipse.org:5684
You can test a coaps connection locally like this:
$ ./coap-server -A ::1 -k 1234 &
$ ./coap-client 'coaps://[::1]/' -k 1234 -u CoAP

Change the domains of SSL certificate - like Charles Proxy

When generating an SSL certificate, paid or self-signed, you assign it a set of specific domains (wildcard or not), known as canonical names. If you use this certificate to serve domains which are not on that list, Chrome gives the warning NET::ERR_CERT_COMMON_NAME_INVALID - you know, click Advanced > Proceed Unsafe.
I use the same certificate with Charles Proxy, which opens all URLs fine in Chrome, without warnings. Viewing dev tools > Security > View certificate, I can see that it's my certificate, my domain, etc. However, Charles changes the domains on the certificate automatically for any website you visit, and the result passes all Chrome validations / warnings.
How can I achieve this?
Preferably using Nginx or NodeJS via https.createServer(...).
I'm not worried about how to bypass Chrome, but about how a .cer can be modified on the fly for each HTTP request and served to the browser.
Solved!
There are several options, including mitmproxy, sslsniff, and my favorite, SSLsplit.
It is available prepackaged for all distros; install via apt-get or yum install sslsplit and that's all.
Setup is minimal: bundle your certificate and key into one PEM file, forward the port through NAT via iptables, and then run sslsplit:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8888
sslsplit -p /path/anywhere.pid -c certbundle.pem -l connections.log ssl 0.0.0.0 8888 sni 443
It reissues new certificates on the fly with modified subject names and can also log all traffic if you wish. It bypasses all Chrome validations and is quite fast. It doesn't proxy through Nginx, though, as I was hoping.
--- Edit 1/6/2018
Also found a Node solution which goes beyond what I need: https://www.npmjs.com/package/node-forge

mosquitto_sub Error: A TLS error occurred, but it is OK with --insecure

I'm building an MQTT server. I use mosquitto with TLS on the server as the broker.
I encountered this problem:
I created the ca.crt, server certificate, server key, client certificate, and client key via generate-CA.sh.
I can connect to the broker and publish and subscribe via MQTT.fx, but when I tried to connect to the broker with mosquitto_sub, it reported Error: A TLS error occurred on the client PC (Ubuntu). At the same time, the server prints:
New connection from xx.xx.xx.xx on port 8883.
Openssl Error: error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown
Openssl Error: error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure
The command I used is:
mosquitto_sub -p 8883 -i test -t mqtt -h 150.xx.xx.xx --cafile ca.crt --cert xx.crt --key xx.key
where 150.xx.xx.xx is the IP of my broker.
When I used the --insecure option with the command above, the problem disappeared, so I think it is the server hostname that leads to this problem.
In the mosquitto_sub command the -h option specifies the hostname, but I need this parameter to point to the IP address of my broker, so how can I specify the hostname of my server?
Old question, but perhaps this might help someone:
If the --insecure option makes it work, you have a certificate problem. What hostname did you set while signing the certificate? What does openssl s_client -showcerts -connect 150.xx.xx.xx:8883 say?
Related: although it should be possible to use SSL certificates for servers addressed by public IP (see "Is it possible to have SSL certificate for IP address, not domain name?"), I'd recommend not doing this and just using DNS, even if that means something like server.localdomain and/or editing your /etc/hosts file.
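Concretely, the /etc/hosts route looks like this (broker.local is a hypothetical name; it must match the CN/SAN used when the server certificate was signed):

```
# /etc/hosts on the client: map the broker's certificate name to its IP
150.xx.xx.xx  broker.local
```

Then connect with mosquitto_sub -h broker.local (keeping the other options as before), so the name being verified matches the certificate.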

Not able to ssh to port 443 on an Amazon EC2 server

I am running ssh on an Amazon EC2 (Linux) machine on port 443.
Yet I am unable to ssh to it, as I am behind a firewall.
When I open
http://host:443
the following message is displayed:
SSH-2.0-OpenSSH_5.3
That means ssh is clearly listening on port 443, and the port is even reachable (via the browser).
Yet when I ssh from my desktop command line (or PuTTY), it just doesn't work.
Is the firewall examining packets and blocking them?
Any ideas?
Are you doing ssh -p 443 host? Sorry to state the obvious... but sometimes the obvious is what eludes us.
Worked!
PuTTY also required proxy entries :)
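A client-side sketch that captures both points (a ~/.ssh/config entry; the host name, user, and proxy address are all placeholders):

```
# ~/.ssh/config on the desktop
Host myec2
    HostName host
    Port 443
    User ec2-user
    # if an HTTP proxy sits in between, tunnel through its CONNECT method:
    # ProxyCommand nc -X connect -x proxyhost:3128 %h %p
```

With this entry, plain "ssh myec2" picks up the non-standard port (and, if uncommented, the proxy) automatically.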
