So I was using the curl command the other day to fetch the contents of a web page, and I piped the output into nano to try to save it, but all it did was make the console completely unresponsive. The command I used was of the form:
curl -vk [web address] | nano
This caused the console to completely seize up. I worked around the issue by using a different command, but I can't seem to find an answer anywhere as to why this happens.
Can anyone enlighten me?
nano reads stdin when you give it the dash (-) notation.
In your case, that would give you:
curl -vk [web address] | nano -
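If the goal is simply to save the page and edit it later, a simpler pattern (the file name here is only illustrative) is to let curl write the file and then open it:
curl -vk [web address] -o page.html
nano page.html
Pagers such as less, unlike nano without the dash, will happily read straight from a pipe:
curl -sk [web address] | less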
I'm using the curl command to send traffic to the server. In the curl command, I'm using the --local-port option. Below is the command:
curl -v --interface 10.1.1.3 -b --local-port 10000 http://30.1.1.101/myfile.txt
I'm capturing a tcpdump to confirm whether the local port is used or not. Below are the tcpdump stats (image):
After checking the tcpdump, I observed that the local port value is different in the capture. It is supposed to be like 10.1.1.1:10000, not 10.1.1.1:random_val.
So my questions are:
Is it possible to force curl to use the same local-port that I have mentioned in the command?
What's the reason for getting a different local port value?
Any help would be appreciated.
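As a hedged sketch of how --local-port is normally used (the addresses come from the question; the port range and the interface name in the tcpdump line are illustrative): the option accepts either a single port or a range, and a range lets curl fall back to the next port if the first one is busy or stuck in TIME_WAIT.
curl -v --interface 10.1.1.3 --local-port 10000-10010 http://30.1.1.101/myfile.txt
sudo tcpdump -ni eth0 'tcp and src portrange 10000-10010'
Also worth double-checking: in the command as pasted, -b (the cookie option) takes an argument, so it may be consuming the --local-port token that follows it, in which case the requested port is never applied.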
The question of redirecting the output of a command has already been asked many times; however, I am seeing some strange behavior. I am using a bash shell (Debian), version
4.3.30(1)-release, and tried to redirect output to a file, but not everything is logged in the file.
The binary I am trying to run is Sauce Connect v4.4.1 for Linux (the Sauce Labs client that is publicly available on the internet).
If I run
#sudo ./bin/sc --doctor
it shows me the complete output; it prints:
INFO: resolved to '23.42.27.27'
INFO: resolving 'g2.symcb.com' using DNS server '10.0.0.5'...
(followed by other lines)
INFO: 'google.com' is not in hosts file
INFO: URL https://google.com can be reached
However, if I redirect the same command to a file with the following command
#sudo ./bin/sc --doctor > alloutput.txt 2>&1
and do
#cat alloutput.txt
the same command output is logged, but truncated, as follows:
INFO: resolved to '23.42.2me#mymachine:/opt/$
The last line is incomplete, and the lines that should follow are not logged at all (missing).
I have tried with >> for appending; it has the same problem. Using command &> alloutput.txt also does not capture everything. Can anyone point out how to get all lines of the above command logged completely to the text file?
UPDATE
In the end I managed to use the binary's native logging via --log alloutput.txt, which gives me the complete, correct output.
However, I am leaving this question open, as I am still wondering why output redirection misses some information/lines.
You should try stdbuf -o0, which turns off stdout buffering so each line is written out immediately instead of sitting in a block buffer, like this:
stdbuf -o0 ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
That is a funny problem; I've never seen that happen before. I am going to go out on a limb here and suggest this; see how it works:
sudo ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
#commandtorun &> alloutput.txt
This command redirects both stderr and stdout to the same file.
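Putting the suggestions together, one hedged combination (paths taken from the question) is to unbuffer the program's stdout and capture both streams with tee:
sudo stdbuf -o0 ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
stdbuf only affects programs that use C stdio buffering, so it may or may not apply to sc, but the truncated last line in the question is the classic signature of a block-buffered stream that was never flushed (for example because the program exited via a signal).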
I'm using an EC2 instance to run a large job that I estimate will take approximately 24 hours to complete. I get the same issue described here: ssh broken pipe ec2
I followed the suggestions/solutions in the above post, and in my SSH session shell I launched my Python program with the following command:
nohup python myapplication.py > myprogram.out 2>myprogram.err
Once I did this, the connection remained intact longer than if I hadn't used nohup, but it eventually fails with a broken pipe error and I'm back to square one. The process 'python myapplication.py' is terminated as a result.
Any ideas on what is happening and what I can do to prevent this from occurring?
You should try screen.
Install
Ubuntu:
apt-get install screen
CentOS:
yum install screen
Usage
Start a new screen session with:
$> screen
List all the screen sessions you have created:
$> screen -ls
There is a screen on:
23340.pts-0.2yourserver (Detached)
1 Socket in /var/run/screen/S-root.
Next, restore your screen session:
$> screen -R 23340
$> screen -R <screen-id>
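A minimal workflow sketch for the job from the question (the session name is illustrative): start a named session, launch the job inside it, detach with Ctrl-A d, and reattach after reconnecting over SSH:
screen -S myjob
python myapplication.py > myprogram.out 2> myprogram.err
screen -r myjob
Because the job runs inside the screen session rather than inside the SSH session, a dropped connection merely detaches the session; the process keeps running.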
A simple solution is to send the process to the background by appending an ampersand & to your command:
nohup python myapplication.py > myprogram.out 2>myprogram.err &
The process will continue to run even if you close your SSH session. You can always check progress by grabbing the tail of your output files:
tail -n 20 myprogram.out
tail -n 20 myprogram.err
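To confirm the job survived a dropped connection, you can look for the process after logging back in (the process and file names are taken from the question):
pgrep -af myapplication.py
tail -f myprogram.out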
I actually ended up fixing this accidentally with a router configuration change that allows all ICMP packets. I had allowed all ICMP packets to diagnose a strange issue with some websites randomly loading slowly, and I noticed that none of my SSH terminals died anymore.
I'm using a Ubiquiti EdgeRouter 4, so I followed this guide here https://community.ubnt.com/t5/EdgeRouter/EdgeRouter-GUI-Tutorial-Allow-ICMP-ping/td-p/1495130
Of course you'll have to follow your own router's unique instructions to allow ICMP through the firewall.
I have a number of accounts running cron-started PHP jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to be running just fine.
Lately, though, on a number of accounts it has started playing up, but only on this one server. Once or twice a day the PHP file is not run.
The access log is missing the relevant entry, while the cron log shows that the job was run.
We've added a bit to the command to log things (-o /tmp/logfile), but it shows nothing.
I'm at a loss, really. I'm looking for ideas about what could be wrong, or how to sidestep this issue, as it has started taking up way too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!
Try this command
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output. With -d you turn on debug output (maybe -v for verbose output is already enough). With -a you append logging messages to /tmp/logfile instead of always creating a new file.
You can also use curl:
curl http://some.site.com/cron.php
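If the intermittent failures are caused by network hiccups, it may also help to give the client an explicit timeout and a retry. A hedged crontab sketch (the schedule, timeout values, and log path are illustrative):
0 * * * * curl -sS --max-time 300 --retry 2 -o /dev/null http://some.site.com/cron.php >> /tmp/cron-curl.log 2>&1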
I have a small VPS where I host a web app that I developed, and it's starting to receive a lot of visits.
I need to check, somehow, every X minutes whether the site is up and running (status code 200) or down (e.g. code 500), and if it is down, run a script that I made to restart some services.
Any idea how to check that on Linux? curl, Lynx?
curl --head --max-time 10 -s -o /dev/null \
-w "%{http_code} %{time_total} %{url_effective}\n" \
http://localhost
This times out after 10 seconds and reports the response code and total time.
curl will exit with an error code of 28 if the request times out (check $?).
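Building on that, here is a hedged sketch of a script you could run from cron every X minutes; the URL and the restart command are placeholders for your own site and restart script:
#!/bin/bash
code=$(curl -s -o /dev/null --max-time 10 -w "%{http_code}" http://localhost)
if [ "$code" != "200" ]; then
    /path/to/your/restart-script.sh   # placeholder: your own restart script
fi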
Found this on a sister website (serverfault)
https://serverfault.com/questions/124952/testing-a-website-from-linux-command-line
wget -p http://site.com
This seems to do the trick
For questions like this, the man pages of the tools normally provide a pretty good list of all possible options.
For curl you can also find it here.
The option you seem to be looking for is -w with the http_code variable.
EDIT:
Please see @Ken's answer for how to use -w.
OK, I created two scripts:
site-statush.sh http://yoursite.com => checks the site status; if it returns 200, it does nothing, otherwise it invokes services-action.sh restart
services-action.sh restart => restarts all the services listed in $services
Check it out at https://gist.github.com/2421072