ngrok retrieve assigned subdomain - node.js

I've got a NodeJS script which spins up an ngrok instance by starting the ngrok binary.
However, I need to get back the automatically generated URL, and I can't find anything in the documentation about how to do this.
For example, when you run ngrok http 80 it spins up and generates a random, unique URL each time it starts.

This question is kinda old, but I thought I'd give another, more generic option, since it doesn't require NodeJS:
curl --silent --show-error http://127.0.0.1:4040/api/tunnels | sed -nE 's/.*public_url":"https:..([^"]*).*/\1/p'
This just calls api/tunnels and applies text processing (sed) to the resulting JSON to pick out the public URL.

ngrok serves tunnel information at http://localhost:4040/api/tunnels.
curl + jq
curl -Ss http://localhost:4040/api/tunnels | jq -r '.tunnels[0].public_url'
=> https://719c933a.ap.ngrok.io
curl + ruby
curl -Ss http://localhost:4040/api/tunnels | \
ruby -e 'require "json"; puts JSON.parse(STDIN.read).dig("tunnels", 0, "public_url")'
=> https://719c933a.ap.ngrok.io
curl + node
json=$(curl -Ss http://127.0.0.1:4040/api/tunnels);
node -pe "var data = $json; data.tunnels[0].public_url"
=> https://719c933a.ap.ngrok.io
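The same local API can also be queried from inside the Node process with a plain HTTP GET once ngrok is running. If whatever spins ngrok up needs to wait for the tunnel to exist first, a small polling loop around that API works too; here is a rough sketch, assuming the default inspection address 127.0.0.1:4040 and that jq is installed:
# poll the local ngrok inspection API until a tunnel shows up, then print its URL
url=""
until [ -n "$url" ]; do
    sleep 1
    url=$(curl -sS http://127.0.0.1:4040/api/tunnels 2>/dev/null | jq -r '.tunnels[0].public_url // empty')
done
echo "$url"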

Related

How to resume interrupted download of specific part of file automatically in curl?

I'm working with curl on Linux. I'm downloading part of a file from MediaFire over a bad internet connection. The download always stops after a few minutes, and when I use the parameter -C -, instead of continuing to download only the part of the file I specified, from where the download stopped, it starts downloading the whole file.
This is the command I have used:
curl -v -o file.part8 -r3000000001-3200000000 --retry 999 --retry-max-time 0 -C - http://download2331.mediafire.com/58gp2u3yjuzg/6upaohqwd8kdi9n/Olarila+High+Sierra+2020.raw.bz2
It's clear that the server doesn't support byte ranges.
I tried with:
curl -L -k -C - -O --header "Range: bytes=0-1000000" https://cdimage.ubuntu.com/kubuntu/releases/20.10/release/kubuntu-20.10-desktop-amd64.iso
and I get:
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
It seems that the problem is on the server side.
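If it helps, you can usually check up front whether a server will honour range requests at all: a HEAD request should come back with Accept-Ranges: bytes, and an explicit range request should be answered with 206 Partial Content rather than 200. A rough check, using the Kubuntu URL from the question as an example:
# does the server advertise range support at all?
curl -sI https://cdimage.ubuntu.com/kubuntu/releases/20.10/release/kubuntu-20.10-desktop-amd64.iso | grep -i "accept-ranges"
# does an actual range request get answered with "206 Partial Content" instead of 200?
curl -sI -H "Range: bytes=0-1023" https://cdimage.ubuntu.com/kubuntu/releases/20.10/release/kubuntu-20.10-desktop-amd64.iso | head -1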

Unable to POST a request to a server using CURL in BASH

I have been trying to run a BASH script which posts a request to an SMS server; on successful execution a message is received on the given mobile number. The script is shown below:
curl -k -X POST "http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to=1234567891&msg=Hello+to+all"
The above script is working fine. The message "Hello to all" is being received on the mobile number 1234567891. This number is however hard coded in the URL. In the actual scenario the mobile number would be available in a variable and the SMS would be sent to the mobile number available in this variable.
I have tried scripts like:
mobile_number="1234567891"
curl -k -X POST "http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to=$mobile_number&msg=Message+From+world"
and
x="http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to="
x+="1234567891
x+=&msg=Hello+to+all"
curl -k -X POST $x
However, I have been unsuccessful in executing them. It would be of great help if someone could help me with the syntax.
Try out this principle; bash is a different language than C++ and the like :-):
#!/bin/bash
to="1234567891"
msg="Hello+to+all"
u="admin"
hash="452ba065ebd1723598a51c7eca11d362"
op="pv"
ip="192.168.10.3"
url="http://${ip}/u=admin&h=${hash}&op=${op}&to=${to}&msg=${msg}"
echo ${url}
Then curl -k -X POST "$url" should work fine.
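For completeness, the string-building attempt from the question also works once the quotes are balanced and the final variable is quoted when passed to curl; a minimal sketch using the same hard-coded values:
mobile_number="1234567891"
url="http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to=${mobile_number}&msg=Hello+to+all"
curl -k -X POST "$url"
If the message itself ever contains spaces or other characters that need escaping, curl's --data-urlencode option can handle the encoding, but it changes how the parameters are sent, so check what the gateway expects first.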

How to get http status code and content separately using curl in linux

I have to fetch some data using the curl Linux utility. There are two cases: either the request is successful or it is not. I want to save the output to a file if the request is successful; if the request fails for some reason, only the error code should be saved to a log file. I have searched a lot on the web but could not find an exact solution, which is why I have posted a new question about curl.
One option is to get the response code with -w, so you could do something like:
code=$(curl -s -o file -w '%{response_code}' http://example.com/)
if test "$code" != "200"; then
echo $code >> response-log
else
echo "wohoo 'file' is fine"
fi
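If you only care about success versus failure rather than the exact status, curl's --fail flag is another option; note it gives you curl's own exit code (22 for HTTP errors), not the HTTP status itself, so this is only a sketch of an alternative:
# --fail makes curl exit non-zero (22) on HTTP errors >= 400 instead of saving the error page
if curl --fail -s -o file http://example.com/; then
    echo "wohoo 'file' is fine"
else
    echo "request failed (curl exit code $?)" >> response-log
fi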
curl -I -s -L <Your URL here> | grep "HTTP/1.1"
curl + grep is your friend; you can then extract the status code from that line as needed.
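Building on that, one way to reduce the header output to just the numeric code (matching on "HTTP/" so HTTP/2 responses are caught too, and taking the last status line in case -L followed redirects) could look like this:
# print only the numeric status of the final response
curl -sIL http://example.com/ | grep "^HTTP/" | tail -1 | awk '{print $2}' | tr -d '\r'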

How to see all Request URLs the server is doing (final URLs)

How can I list, from the command line, the URL requests that are made from the server (a *nix machine) to another machine?
For instance, I am on the command line of server ALPHA_RE.
I do a ping to google.co.uk and another ping to bbc.co.uk.
I would like to see, from the prompt:
google.co.uk
bbc.co.uk
so, not the IP address of the machine I am pinging, and NOT a URL from servers that pass my request along to google.co.uk or bbc.co.uk, but the actual final URLs.
Note that only packages available in the normal Ubuntu repositories can be used, and it has to work from the command line.
Edit
The ultimate goal is to see which API URLs a PHP script (run by a cronjob) requests, and which API URLs the server requests 'live'.
These mainly make GET and POST requests to several URLs, and I am interested in knowing the params:
Does it make a request to:
foobar.com/api/whatisthere?and=what&is=there&too=yeah
or to:
foobar.com/api/whatisthathere?is=it&foo=bar&green=yeah
And do the cron jobs or the server make any other GET or POST requests?
And that, regardless of what response (if any) these APIs give.
Also, the API list is unknown, so you cannot grep for one particular URL.
Edit:
(The OLD ticket specified: Note that I cannot install anything on that server (no extra packages, I can only use the "normal" commands like tcpdump, sed, grep, ...) // but as getting this information with tcpdump is pretty hard, I have since made installing packages possible.)
You can use tcpdump and grep to get info about the network traffic from the host; the following command line should get you all lines containing Host:
tcpdump -i any -A -vv -s 0 | grep -e "Host:"
If I run the above in one shell and start a Links session to stackoverflow I see:
Host: www.stackoverflow.com
Host: stackoverflow.com
If you want to know more about the actual HTTP request you can also add patterns to the grep for GET, PUT or POST requests (e.g. -e "GET"), which gets you some info about the relative URL (combine it with the host determined earlier to get the full URL).
EDIT:
Based on your edited question I have tried to make some modifications.
First, a tcpdump approach:
[root@localhost ~]# tcpdump -i any -A -vv -s 0 | egrep -e "GET" -e "POST" -e "Host:"
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
E..v.[#.#.......h.$....P....Ga .P.9.=...GET / HTTP/1.1
Host: stackoverflow.com
E....x#.#..7....h.$....P....Ga.mP...>;..GET /search?q=tcpdump HTTP/1.1
Host: stackoverflow.com
And an ngrep one:
[root@localhost ~]# ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST"
GET //meta.stackoverflow.com HTTP/1.1..Host: stackoverflow.com..User-Agent:
GET //search?q=tcpdump HTTP/1.1..Host: stackoverflow.com..User-Agent: Links
My test case was running links stackoverflow.com, putting tcpdump in the search field and hitting enter.
This gets you all the URL info on one line. A nicer alternative might be to simply run a reverse proxy (e.g. nginx) on your own server, modify the hosts file (as shown in Adam's answer), have the reverse proxy forward all queries to the actual host, and use the reverse proxy's logging to get the URLs from there; those logs would probably be a bit easier to read.
EDIT 2:
If you use a command line such as:
ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST" --line-buffered | perl -lne 'print $3.$2 if /(GET|POST) (.+?) HTTP\/1\.1\.\.Host: (.+?)\.\./'
you should see the actual URLs
A simple solution is to modify your /etc/hosts file to intercept the API calls and redirect them to your own web server:
127.0.0.1 api.foobar.com
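For that redirect to show anything, something has to be listening locally for the redirected calls. A very rough sketch (plain HTTP only, since HTTPS calls will fail their certificate check, and note the netcat flag syntax differs between variants):
# append each redirected request to a log file; port 80 needs root, and since
# nothing is sent back, the calling scripts will hang until they time out
while true; do
    sudo nc -l 80 < /dev/null >> /var/log/api-requests.log
done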

Streaming log file data over http by using unix command. Combination of tail and curl

I need to follow a log file on a Linux machine and stream the updates of the log file over an HTTP port to a remote machine. I have written a command combining "tail" and "curl".
To test it initially, I used "tail -n"; it works well and posts data successfully to the remote machine. Below is the command:
$tail -n 200 /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
Now, when I try to run the same command with "tail -f", it's not posting any data over HTTP even though the log file is updated multiple times. Below is the command:
$tail -f --follow=name /path/to/logfile/file1.log | curl --data-binary @- http://remotemachineIP:9000
As per my understanding, "tail -f" never tells "curl" that the input feed over stdin (@-) is complete. Any help on how to rectify this issue?
Thanks in advance
curl will make a single HTTP POST request with the piped data. What you want to do instead is to continuously send the data.
Assuming that by "HTTP port" you actually meant TCP, there is a way using netcat:
Remote
nc -l 9000
Local
tail -f /path/to/log/file | nc remote_ip 9000
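If it really does have to be HTTP, a rough alternative is to send one request per new line instead of a single never-ending request; for example (this is chatty, one POST per line, and remotemachineIP:9000 is just the address from the question):
# follow the file and POST every new line as its own request
tail -n 0 -F /path/to/logfile/file1.log | while IFS= read -r line; do
    curl -s --data-binary "$line" "http://remotemachineIP:9000" > /dev/null
done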
