I plan to write a Hashify command line client, and I'd like to confirm that simply writing to stdout is not an option before getting creative.
(Ideally the command will behave like TextMate's mate, in this case opening hashify.me in a browser and waiting to be fed input.)
Wget (a simple HTTP fetching tool) supports writing a fetched file to STDOUT with the -qO - option, invoked like wget -qO - http://url/.
Lynx (a full CLI browser) supports dumping a rendered page to STDOUT with the -dump or -crawl option.
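For instance, this one-liner should print a page's rendered text straight to STDOUT (the URL is just illustrative):
lynx -dump http://example.com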
You could have a look at Google Native Client, available in the dev channel version of Google Chrome.
When run with the Native Client flags, it allows you to print to the terminal; see the Native Client documentation for details.
Getting it to work on Windows could be a pain, but it (supposedly) works great on Linux. I'm trying it out right now.
Related
I am using ansi-escapes to make the link.
How would I make it run a function when the link is clicked? I'm okay with making it run a terminal command. And if so is there some sort of URI that I can use to run a terminal command? I am using Windows Terminal, however I would like it to work for other terminals as well.
If you make a link with a file:/// URI to a batch file, Windows Terminal will run it, because it simply executes open ${url}. From that batch file you can then run something like node terminalCallback.js (see the sketch after this list).
The only allowed URI protocols are:
http(s)
file
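Under the hood, packages like ansi-escapes build such links with the OSC 8 hyperlink escape sequence, which you can also emit by hand. A minimal sketch in bash, assuming a terminal with OSC 8 support; the batch-file path is hypothetical:
# Clickable hyperlink: ESC]8;;URI ST <link text> ESC]8;; ST
printf '\e]8;;file:///C:/scripts/callback.bat\e\\Run callback\e]8;;\e\\\n'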
I've used XAMPP (LAMPP) to run an Apache2 server on my Intel ComputeStick with Ubuntu 18.04.1 installed. I've written a .cgi script in Perl:
#!/usr/bin/env perl
use strict;
use warnings;
use CGI;
print CGI::header();
system("vlc /home/jack/Desktop/ayni1.mp4");
However, when I hop onto my laptop (Win10) and use Chrome to visit http://12.345.678.90:80/dashboard/perl/ayni1.cgi (That's my server, and that directory does exist), nothing happens on the hosting machine. In the browser, this line is displayed:
VLC media player 3.0.3 Vetinari Command Line Interface initialized. Type `help' for help. > Shutting down.
I'm afraid I don't quite understand what's happening. I thought CGI scripts interacted directly with the command line, i.e. that they could issue commands to the terminal directly. Was I mistaken in that? What am I doing wrong?
Sorry, I know this question is somewhat scattered. If needed, I'll try to provide clarification as best I can.
I'm somewhat new to all of this.
I think you have set VLC to open a command-line interface automatically.
Reset the VLC preferences and it should work normally again. Note that the CGI script is executed by the web server and its STDOUT is sent back to the browser; that's why you see VLC's command-line-interface banner in the page rather than anything happening on the hosting machine's display.
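If you'd rather reset from the shell, something like this should work, assuming your VLC build supports the --reset-config flag (alternatively, delete the per-user config directory, whose default path on Ubuntu is shown below):
# Reset VLC preferences to their defaults
vlc --reset-config
# Or remove the per-user config entirely
rm -rf ~/.config/vlc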
Alright, here's a fun one. We get the following stdout:
Debugger listening on port 9229.
Warning: This is an experimental feature and could change at any time.
To start debugging, open the following URL in Chrome:
chrome-devtools://devtools/remote/serve_file/#60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true&v8only=true&ws=127.0.0.1:9229/a7a2a58d-dfd1-4cc6-875c-47da78adeb1b
when we run a command like so:
node --inspect --debug-brk bin/www.js
I was thinking of creating a bash script like this:
#!/usr/bin/env bash
NODE_DEBUG_OUTPUT=$(node --inspect --debug-brk bin/www.js)
open -a "/Applications/Google Chrome.app"/ getUrl(${NODE_DEBUG_OUTPUT})
but here's where my bash skills end: how can I implement the getUrl function so that I can get the URL from the output?
I am pretty certain that if we pass a URL to the open command in bash, it will open a browser; in this case, though, it has to be the Google Chrome browser.
Here's a bash command to extract the URL:
url=$(grep 'chrome-devtools://' <(node --inspect bin/www.js "$@" 2>&1))
Explanation
The 2>&1 tells the shell to redirect stderr to stdout (node prints its debugger message on stderr).
The node command is run within <( ... ); this is called process substitution, and it behaves like a read-only file whose contents are the stdout of the command (node in this case).
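To see process substitution in isolation, here is a tiny self-contained example (the strings are made up):
# grep reads the printf output as if it were a file
grep 'needle' <(printf 'hay\nneedle in a haystack\n')
This prints "needle in a haystack", just as if the text had come from a real file.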
Other Issues
There is a recently filed bug, Chrome DevTools not able to connect to Node inspect since 7.5.0.
Version 7.4.0 is reported to work.
To open a chrome-devtools url from the command line requires some AppleScript, fortunately someone's already done that:
chrome-node-devtools-refresh.sh
So the command would be something like this:
chrome-node-devtools-refresh.sh $(grep 'chrome-devtools://' \
<(node --inspect bin/www.js "$@" 2>&1))
Alternative
So I am probably not answering your question, but I want to offer this as a simpler solution to the larger problem you are trying to solve.
There is an npm package called node-inspector. This package provides the same functionality you need from Chrome DevTools. Here is the description from their GitHub page:
Node Inspector is a debugger interface for Node.js applications that
uses the Blink Developer Tools (formerly WebKit Web Inspector).
How do you install it?
npm install -g node-inspector
How do you use it?
node-debug app.js
where app.js is your JS file. In your case you can run node-debug bin/www.
What does it do?
(A GIF in the original answer demonstrates node-debug in action.)
Summary
The node-inspector package can help you achieve the functionality you want, i.e. launch Chrome to debug your JS code. I feel this is an easier option, and you can read more about it to explore additional use cases.
From the developers of this package:
The node-debug command will load Node Inspector in your default
browser.
NOTE: Node Inspector works in Chrome and Opera only. You have to
re-open the inspector page in one of those browsers if another browser
is your default web browser (e.g. Safari or Internet Explorer). Node
Inspector works almost exactly as the Chrome Developer Tools. Read the
excellent DevTools overview to get started.
This is really close! The only thing that's not working is that Chrome is not actually opening the URL; it just opens to google.com.
export PATH=$PATH:/Applications/Google\ Chrome.app/Contents/MacOS/
(node --inspect --debug-brk bin/www.js $# &> debug-server.log) &
sleep 1; # wait for the stdout to get written to file...
LINE=$(grep -n "chrome-devtools" debug-server.log | awk '{print $2}')
echo " => Supposed chrome-devtools link => '$LINE'";
Google\ Chrome "${LINE}"
Anybody know why the "Google\ Chrome" command would not open the URL provided as the first argument?
I'm trying to install ElasticSearch on a Linux environment. I'm almost there, but I'm doing something wrong.
Basically, my issue is this: when I run bin/elasticsearch in the terminal, I can access http://localhost.com:9200 just fine and run queries. But when I end the SSH session, I get a 404 when I try to hit http://localhost.com:9200, both in my browser window and through curl.
This is the process I used to install ElasticSearch:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.1.0.tar.gz
tar -zxvf elasticsearch-1.1.0.tar.gz
cd elasticsearch-1.1.0
bin/elasticsearch
Like I said, this works, and I can hit http://localhost.com:9200 to run queries, but only as long as the SSH session is active.
Is there something obvious I'm doing wrong? I'm a novice at command line stuff, so I'm trying to feel my way through.
From 1.0 onwards, the default launches the service in the foreground.
ElasticSearch Documentation http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/_system_and_settings.html
Try using the -d option, which runs the server as a daemon.
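So, from your install directory, this should keep the server running after you log out:
# Start Elasticsearch as a background daemon
bin/elasticsearch -d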
KCachegrind rocks and it's one of my favorite tools; however, from time to time I miss being able to run it in a terminal (e.g. over SSH) on a remote server.
I know it's possible to set up VNC or X-server forwarding, but that's quite clumsy, while I need something simple.
Is there any KCachegrind console alternative (e.g. ncurses-based)?
You can get basic information and annotations from a callgrind output file (created by valgrind --tool=callgrind) with the command-line utility callgrind_annotate (manual page section in the docs). For files generated by cachegrind (valgrind --tool=cachegrind), you can use cg_annotate (section in the docs). These utilities are built by default with valgrind itself.
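A minimal terminal-only session might look like this (the binary name and the PID suffix on the output file are illustrative):
# Profile on the remote host...
valgrind --tool=callgrind ./myprog
# ...then annotate the results right in the terminal
callgrind_annotate callgrind.out.12345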
Another way is to copy the program and the cachegrind/callgrind output files to your machine and then analyze them with your local kcachegrind as usual. This can be difficult, though, if your machine runs Windows rather than a Unix, or if you are running on an incompatible architecture (kcachegrind can call the local objdump to get information about the program).
I also want to recommend these command-line options for valgrind:
valgrind --tool=cachegrind --branch-sim=yes
valgrind --tool=callgrind --dump-instr=yes --trace-jump=yes
You can also specify cache sizes for cachegrind with --I1=32768,8,64 --D1=32768,8,64 --L2=3145728,12,64
webgrind is a web-frontend written in PHP that parses the output of XDebug (which uses the same format as callgrind).
Use SSHFS to mount the remote directory that you're working in (see e.g. SSHFS installation instructions for Ubuntu).
Then just ssh in and run valgrind --tool=callgrind with whatever options you want in the remote directory. Since you have that directory mounted locally, opening the data files with KCacheGrind is as easy as if you were debugging locally.
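A sketch of that workflow, with the hostname, paths, and output-file PID all illustrative:
# Mount the remote project directory locally
mkdir -p ~/remote-project
sshfs user@remote:/home/user/project ~/remote-project
# After profiling on the remote host over ssh, open the result locally
kcachegrind ~/remote-project/callgrind.out.12345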
Running Valgrind/Callgrind at the remote host appears to be a good option. Or, you could scp the output from Valgrind or Callgrind to your PC and run KCachegrind on it.
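For the scp route, a one-liner along these lines would do (names again illustrative):
scp user@remote:/home/user/project/callgrind.out.12345 . && kcachegrind callgrind.out.12345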
If you use Emacs, you could fire it up on the remote host; it can serve as an Emacs-ish frontend for the valgrind tools.
kcachegrind states in its help that it is a frontend to cachegrind and callgrind.
cachegrind and callgrind are part of the latest version (3.5) of valgrind. So, download valgrind from http://valgrind.org/downloads/current.html#current and build it. Run valgrind --help; it shows you how to use the callgrind functionality.
For more information I'd need to know what you want to do.