I'm having some trouble with getting this to do what I want it to do.
read -p "URL to read: " U
read -p "Word to fin: " O
read -p "Filename: " F
curl -O $U | sed "s/\<$O\>/\*$O\*/g" > $F.txt
So basically what I want is to use curl to get a .txt file from a URL, then search through it for the word specified by the user input, mark every occurrence of that word with a *, and put the result in a file specified by the user.
Almost the exact same code works in Linux, but this doesn't work on my Mac. Anyone got an idea?
Two issues:
-O makes curl store the downloaded file, not output it on stdout.
The word-boundary metacharacters \< and \> are a GNU extension. On BSD sed (the one shipped with OS X), you can use [[:<:]] and [[:>:]] instead.
This should work on OSX:
curl "$U" | sed "s/[[:<:]]$O[[:>:]]/\*$O\*/g" > $F.txt
I want to ask if it is possible to combine a Linux command with << (a here document):
sendmail -S "lalalal" -f "dailaakak" -au "kakakak" <<EOF
>lalal:lalal
>opp:ttt
>ggg:zzz
EOF
I want to have something like this: sendmail -S "lalalal" -f "dailaakak" -au "kakakak" <<EOF; lalal:lalal; opp:ttt; ggg:zzz; EOF
I need to use that outside of a bash script.
If it has to be on one line without newlines, use this:
echo -e "lalal:lalal\nopp:ttt\nggg:zzz" | sendmail -S "lalalal" -f "dailaakak" -au "kakakak"
echo -e interprets escape characters such as \n as a newline.
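If you would rather not rely on echo -e (its behaviour differs between shells), printf '%s\n' prints each of its arguments on its own line, so this sketch should be equivalent:
printf '%s\n' "lalal:lalal" "opp:ttt" "ggg:zzz" | sendmail -S "lalalal" -f "dailaakak" -au "kakakak"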
If you are asking whether you can use the << EOF in an interactive shell then the answer is yes, you can.
Note that this functionality is called a here document and that any word can be used instead of EOF. For example:
$ cat - << someword
> Here you
> can
> write text with as many
> newlines as you want.
> someword
Here you
can
write text with as many
newlines as you want.
(cat - prints whatever it receives on stdin)
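One related detail: if you quote the delimiter word, the shell does not expand variables or command substitutions inside the body. A quick sketch:
$ cat - << 'someword'
> $HOME is printed literally here, not expanded.
> someword
$HOME is printed literally here, not expanded.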
For more information on here documents you can read for example this: http://tldp.org/LDP/abs/html/here-docs.html
I have tried and succeeded, but it's messy. EOF simply does not like to accept substituted newlines for some reason, so it needs to be put in another format. Now I'm sure this could be achieved with an expect script on one line, but the below is what I have made and it works.
echo "ssh localhost `printf "<< EOF\necho "Working!" >> /tmp/myfile \nEOF\n"`" > file.sh; chmod770 file.sh; ./file.sh
printf "<< EOF\necho Test! >> /tmp/myfile \nEOF\n" | xargs ssh localhost
Please ensure chmod file permissions are suitable for your own work case! Putting it into an environment variable instead of a file is also likely to work.
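For example, a sketch of that variable approach (CMD is just a placeholder name); it builds the remote command in a shell variable and hands it to ssh directly, with no temporary file:
CMD='echo Working! >> /tmp/myfile'
ssh localhost "$CMD"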
I have this
exec 5<>/dev/tcp/twitter.ca/80
echo -e "GET / HTTP/1.0\n" >&5
cat <&5
I looked at a similar script:
curl http://cookpad.com 2>&1 | grep -o -E 'href="([^"#]+)"' | cut -d'"' -f2
but I need to use the sed command only.
The output I get is this:
sed: -e expression #1, char 2: extra characters after command
#!/bin/bash
exec 5<>/dev/tcp/twitter.ca/80
echo -e "GET / HTTP/1.0\n" >&5
cat <&5 | sed -r -e 'href="([^"#]+)"'
is what I currently have, and I guess what I'm trying to do is use sed to strip out everything else and keep just the hrefs?
My output should look something like this:
href="UnixFortune.apk"
href="UnixFortune-1.0.tgz"
href="BeagleCar.apk"
href="BeagleCar.zip"
sed is a scripting language. Your command looks like you are trying to use the h command (copy pattern to hold space) with options starting with ref=... but the h command doesn't take any options.
Anyway, the command you want is the s command, which performs substitutions. Namely, you want to substitute everything before and after the matching group with nothing (and thus print only the captured group).
sed -r -e 's/.*href="([^"#]+)".*/\1/'
However, this still doesn't do the right thing if there are multiple matches on a line (or lines without a match, although that is easy to fix with sed -n 's/.../p'). You can certainly solve that in sed, but I would suggest you go with grep -o instead, unless you specifically want to learn, write, and maintain sed scripts. (Or, alternatively, rewrite it as an Awk or Perl script; Perl in particular has a lot more leverage for tasks like this.)
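For completeness, a sketch of that sed -n 's/.../p' variant applied to the original script; it prints only lines that contain a match and keeps the href="..." form from the desired output, but still extracts only one match per line:
exec 5<>/dev/tcp/twitter.ca/80
echo -e "GET / HTTP/1.0\n" >&5
cat <&5 | sed -rn 's/.*(href="[^"#]+").*/\1/p'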
And of course, for this particular task, the proper tool is an HTML parser. There is no way to properly pick apart HTML using just regular expressions. See e.g. How to extract links from a webpage using lxml, XPath and Python?
I'm trying to figure out how to efficiently copy-paste from an X application to the terminal. Specifically, I want to highlight a text section in my web browser, then paste it, commented out, into a file after the shebang line.
The code I have so far is this:
xclip -o | sed 's/^/#/' | sed '2n' myscript.pl
The first command takes the text that I have highlighted in my browser.
The second command comments out the lines by adding #.
The last bit does not work.
What I am trying to do here is insert the text after line number 2 of my script, but obviously I am doing this wrong. Does anyone have a helpful suggestion?
You can use sed's r (read) command to safely handle all types of input, including input with special characters and multiple lines. This requires an intermediate file:
xclip -o | sed -e 's/^/#/g' -e '$s/$/\n/' > TMP && sed -i '1r TMP' myscript.pl && rm TMP
sed only operates on one input stream (either a pipe or a file); if you are using the output of xclip as the data stream, then you can't also tell sed to read from a file. Instead you could use command substitution to capture the modified output and use it in a separate command. How about:
sed "2i$(xclip -o | sed 's/^/#/')" myscript.pl
This will print the amended file to stdout; if you want to edit the file itself, use the -i flag.
Is it possible to use a bash script to format the output of ls as a JSON array? To be valid JSON, all the names of the dirs and files need to be wrapped in double quotes, separated by commas, and the entire thing needs to be wrapped in square brackets. I.e. convert:
jeroen#jeroen-ubuntu:~/Desktop$ ls
foo.txt bar baz
to
[ "foo.txt", "bar", "baz" ]
Edit: I strongly prefer something that works across all my Linux servers; hence I would rather not depend on Python, but have a pure bash solution.
If you know that no filename contains newlines, use jq:
ls | jq -R -s -c 'split("\n")[:-1]'
Short explanation of the flags to jq:
-R treats the input as string instead of JSON
-s joins all lines into an array
-c creates a compact output
[:-1] removes the last empty string in the output array
This requires version 1.4 or later of jq. Try this if it doesn't work for you:
ls | jq -R '[.]' | jq -s -c 'add'
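For the example listing from the question, either variant should produce something like this (element order follows ls output, normally alphabetical):
["bar","baz","foo.txt"]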
Yes, but the corner cases and Unicode handling will drive you up the wall. Better to delegate to a scripting language that supports it natively.
$ ls
あ a "a" à a b 私
$ python -c 'import os, json; print json.dumps(os.listdir("."))'
["\u00e0", "\"a\"", "\u79c1", "a b", "\u3042", "a"]
Hello, you can do that with sed and awk:
ls | awk ' BEGIN { ORS = ""; print "["; } { print "\/\#"$0"\/\#"; } END { print "]"; }' | sed "s^\"^\\\\\"^g;s^\/\#\/\#^\", \"^g;s^\/\#^\"^g"
EDIT: updated to solve the problem with " and spaces. I use /# as the replacement pattern for ", since / is not a valid character in a filename.
Use perl as the encoder; it's guaranteed to be non-buggy, is everywhere, and with pipes, it's still reasonably clean:
ls | perl -e 'use JSON; @in=grep(s/\n$//, <>); print encode_json(\@in)."\n";'
Most Linux machines already have Python. All you have to do is:
python -c 'import os, json; print json.dumps(os.listdir("/yourdirectory"))'
This example lists /yourdirectory; use "." for the current directory, or any other path.
Here's a bash line
echo '[' ; ls --format=commas|sed -e 's/^/\"/'|sed -e 's/,$/\",/'|sed -e 's/\([^,]\)$/\1\"\]/'|sed -e 's/, /\", \"/g'
Won't properly deal with ", \ or some commas in the name of the file. Also, if ls puts newlines between filenames, so will this.
I was also searching for a way to output a Linux folder / file tree to some JSON or XML file. Why not use this simple terminal command:
$ tree --dirsfirst --noreport -n -X -i -s -D -f -o my.xml
So, just the Linux tree command, configured with your own parameters. Here -X gives XML output! For me that's OK, and I guess there is some script to convert XML to JSON...
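If your tree is recent enough (1.7.0 or later, if I remember correctly), it may also support a -J flag that emits JSON directly and skips the XML-to-JSON step entirely; worth checking tree --help:
tree --dirsfirst --noreport -n -i -s -D -f -J -o my.json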
NOTE: I think this covers the same question.
Personally, I would write a script that runs the ls command and sends the output to a file of your choice, parsing the output along the way to format it as valid JSON.
I'm sure that a simple Bash file will do the job.
Bash output
Can't you use a python script like this?
import subprocess
# list the directory and wrap each entry in double quotes
myOutput = subprocess.check_output(["ls"]).decode().splitlines()
output = ['"' + e + '"' for e in myOutput]
print('[' + ', '.join(output) + ']')
I didn't check if it works, but you can find the specification here
Should be pretty easy.
$ cat ls2json.bash
#!/bin/bash
echo -n '['
for FILE in $(ls | sed -e 's/"/\\"/g')
do
echo -n \"${FILE}\",
done
echo -en \\b']'
then run:
$ ./ls2json.bash > json.out
but python would be even easier
import os
directory = '/some/dir'
ls = os.listdir(directory)
dirstring = str(ls)
print dirstring.replace("'",'"')
Here's an elegant one-liner solution that doesn't rely on jq:
echo '[ "'"$(echo "$list" | sed ':a;N;$!ba;s/\n/", "/g')"'" ]'
$list here is a newline-separated string.
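A minimal usage sketch, assuming no filename contains newlines or double quotes:
list=$(ls)
echo '[ "'"$(echo "$list" | sed ':a;N;$!ba;s/\n/", "/g')"'" ]'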
Using GNU column (i.e. it doesn't work on OSX):
ls -ldG * --time-style=long-iso | column -t -n "$PWD" -N mod,links,user,size,date,time,name -J
Output:
{
"/home/pouet": [
{"mod":"-rwxr-xr-x", "links":"1", "user":"pouet", "size":"21978", "date":"2022-08-12", "time":"11:47", "name":"file1"},
{"mod":"-rw-r--r--", "links":"1", "user":"pouet", "size":"2634", "date":"2022-06-20", "time":"11:14", "name":"file2"}
]
}
Don't use bash, use a scripting language. Untested perl example:
use JSON;
my @ls_output = `ls`; ## probably better to use a perl module to do this, like DirHandle
chomp @ls_output;     ## strip the trailing newline from each entry
print encode_json( \@ls_output );
I have a Markdown string in JavaScript, and I'd like to display it (with bolding, etc) in a less (or, I suppose, more)-style viewer for the command line.
For example, with a string
"hello\n" +
"_____\n" +
"*world*!"
I would like to have output pop up with scrollable content that looks like
hello
world
Is this possible, and if so how?
Pandoc can convert Markdown to groff man pages.
This (thanks to nenopera's comment):
pandoc -s -f markdown -t man foo.md | man -l -
should do the trick. The -s option tells it to generate proper headers and footers.
There may be other markdown-to-*roff converters out there; Pandoc just happens to be the first one I found.
Another alternative is the markdown command (apt-get install markdown on Debian systems), which converts Markdown to HTML. For example:
markdown README.md | lynx -stdin
(assuming you have the lynx terminal-based web browser).
Or (thanks to Danny's suggestion) you can do something like this:
markdown README.md > README.html && xdg-open README.html
where xdg-open (on some systems) opens the specified file or URL in the preferred application. This will probably open README.html in your preferred GUI web browser (which isn't exactly "less-style", but it might be useful).
I tried to write this in a comment above, but I couldn't format my code block correctly. To write a 'less filter', try, for example, saving the following as ~/.lessfilter:
#!/bin/sh
case "$1" in
    *.md)
        extension-handler "$1"
        pandoc -s -f markdown -t man "$1" | groff -T utf8 -man -
        ;;
    *)
        # We don't handle this format.
        exit 1
        ;;
esac
# No further processing by lesspipe necessary
exit 0
Then, you can type less FILENAME.md and it will be formatted like a manpage.
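One assumption worth stating explicitly: lesspipe only picks up ~/.lessfilter if the file is executable, so you will probably also need:
chmod +x ~/.lessfilter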
If you are into colors then maybe this is worth checking as well:
terminal_markdown_viewer
It can be used straightforward also from within other programs, or python modules.
And it has a lot of styles, like over 200 for markdown and code which can be combined.
Disclaimer
It is pretty alpha; there may still be bugs.
I'm the author of it, maybe some people like it ;-)
A totally different alternative is mad. It is a shell script I've just discovered. It's very easy to install and it does render markdown in a console pretty well.
I wrote a couple functions based on Keith's answer:
mdt() {
markdown "$*" | lynx -stdin
}
mdb() {
local TMPFILE=$(mktemp)
markdown "$*" > $TMPFILE && ( xdg-open $TMPFILE > /dev/null 2>&1 & )
}
If you're using zsh, just place those two functions in ~/.zshrc and then call them from your terminal like
mdt README.md
mdb README.md
"t" is for "terminal", "b" is for browser.
On OSX I prefer to use this command:
brew install pandoc
pandoc -s -f markdown -t man README.md | groff -T utf8 -man | less
Convert the markdown, format the document with groff, and pipe it into less.
credit: http://blog.metamatt.com/blog/2013/01/09/previewing-markdown-files-from-the-terminal/
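If you use this often, it could be wrapped in a small shell function; mdman is just a placeholder name:
mdman() { pandoc -s -f markdown -t man "$1" | groff -T utf8 -man | less; }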
This is an alias that encapsulates a function:
alias mdless='_mdless() { if [ -n "$1" ] ; then if [ -f "$1" ] ; then cat <(echo ".TH $1 7 `date --iso-8601` Dr.Beco Markdown") <(pandoc -t man $1) | groff -K utf8 -t -T utf8 -man 2>/dev/null | less ; fi ; fi ;}; _mdless '
Explanation
alias mdless='...' : creates an alias for mdless
_mdless() {...}; : creates a temporary function to be called afterwards
_mdless : at the end, call it (the function above)
Inside the function:
if [ -n "$1" ] ; then : if the first argument is not null then...
if [ -f "$1" ] ; then : also, if the file exists and is regular then...
cat arg1 arg2 | groff ... : cat sends these two arguments, concatenated, to groff; the arguments being:
arg1: <(echo ".TH $1 7 `date --iso-8601` Dr.Beco Markdown") : something that starts the file and that groff will understand as the header and footer notes. This substitutes for the empty header that pandoc's -s option would otherwise provide.
arg2: <(pandoc -t man $1) : the file itself, filtered by pandoc, outputting the man style of file $1
| groff -K utf8 -t -T utf8 -man 2>/dev/null : piping the resulting concatenated file to groff:
-K utf8 so groff understands the input file code
-t so it displays tables in the file correctly
-T utf8 so it outputs in the correct format
-man so it uses the MACRO package to output the file in man format
2>/dev/null to ignore errors (after all, it's a raw file being transformed into man by hand; we don't care about the errors as long as we can see the file in a not-so-ugly format)
| less : finally, shows the file, paginating it with less (I tried to avoid this pipe by using groffer instead of groff, but groffer is not as robust as less and some files hang it or do not show at all; so, let it go through one more pipe, what the heck!)
Add it to your ~/.bash_aliases (or alike)
I personally use this script:
#!/bin/bash
id=$(uuidgen | cut -c -8)
markdown "$1" > /tmp/md-$id
google-chrome --app=file:///tmp/md-$id
It renders the markdown into HTML, puts it into a file in /tmp/md-... and opens that in a kiosk chrome session with no URI bar etc.. You just pass the md file as an argument or pipe it into stdin. Requires markdown and Google Chrome. Chromium should also work but you need to replace the last line with
chromium-browser --app=file:///tmp/md-$id
If you wanna get fancy about it, you can use some CSS to make it look nice. I edited the script and made it use Bootstrap 3 (overkill) from a CDN.
#!/bin/bash
id=$(uuidgen | cut -c -8)
markdown "$1" > /tmp/md-$id
sed -i "1i <html><head><style>body{padding:24px;}</style><link rel=\"stylesheet\" type=\"text/css\" href=\"http://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css\"></head><body>" /tmp/md-$id
echo "</body>" >> /tmp/md-$id
google-chrome --app=file:///tmp/md-$id > /dev/null 2>&1 &
I'll post my unix page answer here, too:
An IMHO heavily underestimated command line markdown viewer is the markdown-cli.
Installation
npm install markdown-cli --global
Usage
markdown-cli <file>
Features
Probably not noticed much, because it lacks any documentation...
But as far as I could figure out from some example markdown files, some things that convinced me:
handles ill-formatted files much better (similarly to Atom, GitHub, etc.; e.g. when blank lines are missing before lists)
more stable with formatting in headers or lists (bold text in lists breaks sublists in some other viewers)
proper table formatting
syntax highlighting
resolves footnote links to show the link instead of the footnote number (not everyone might want this)
Drawbacks
I have noticed the following issues:
code blocks are flattened (all leading spaces disappear)
two blank lines appear before lists