How to print a text/plain document on a CUPS printer without using the raw option (Linux)

I am using a CUPS command to print selected pages of a document, but it prints all pages, ignoring the pages option. After some investigation I learned that the raw option overrides the pages option. How can I print selected pages without using raw? If I leave that option out, I get a "text file not supported" error. Here is my code:
system("lpr -P AFSCMSRPRNT3 -o pages=1,2,6 -o raw -T test_womargin abc.txt");

Plain text files don't really specify how things should be printed, and thus aren't accepted by the queue.
Try converting the text to a printable format first. There's a popular tool, a2ps, which should be available in every Linux distribution. Try that!
EDIT: you seem to be confused by the word "convert".
What I meant is that instead of printing the text file, you print a PostScript file generated from it; something you can get by doing something like
a2ps -o temporaryoutput.ps input.txt
and then
lpr -P AFSCMSRPRNT3 -o pages=1,2,6 -T test_womargin temporaryoutput.ps
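The two steps above can be combined into one small helper. This is only a sketch: the queue name, pages list, and -T title are the ones from the question, and the function checks for a2ps so it degrades gracefully where the tool is missing (mktemp --suffix is the GNU/Linux form).

```shell
# Sketch: convert a text file to PostScript with a2ps, print selected
# pages with lpr, then remove the temporary file.
print_text_pages() {
    local infile=$1 pages=$2 queue=$3
    local psfile
    psfile=$(mktemp --suffix=.ps) || return 1
    if command -v a2ps >/dev/null 2>&1; then
        a2ps --output="$psfile" "$infile" &&
            lpr -P "$queue" -o pages="$pages" -T test_womargin "$psfile"
    else
        echo "a2ps not installed" >&2
    fi
    rm -f "$psfile"
}
# Example call, mirroring the question:
# print_text_pages abc.txt 1,2,6 AFSCMSRPRNT3
```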

Related

How to convert markdown to pdf in command line

I need to convert a GitHub README.md file to PDF. I tried many modules, but they do not work well. Is there a newer tool that produces properly formatted PDF? This website provides a good conversion: http://www.markdowntopdf.com/
I need a command-line tool that does the same.
Try this software:
https://github.com/BlueHatbRit/mdpdf
Or explain which tools you've tried and why they didn't work.
Also check this question on superuser:
https://superuser.com/questions/689056/how-can-i-convert-github-flavored-markdown-to-a-pdf
Pandoc
I've personally liked using pandoc, as it supports a wide range of input and output formats.
Installation
Pandoc is available in most repositories: sudo apt install pandoc
Usage
Sometimes pandoc can infer which formats to use, which makes converting easy. However, I find that it often interprets the input as plain text, which might not be what you want:
pandoc README.md -o README.pdf
Instead, you might want to be explicit about the input/output formats to ensure a better conversion. In the example below, I'm specifically declaring that README.md is GitHub-Flavored Markdown:
pandoc --from=gfm --to=pdf -o README.pdf README.md
Again, there are quite a few formats and options to choose from, but honestly the basics suffice for the majority of my needs.
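One detail worth knowing (this describes recent pandoc releases): pandoc delegates PDF creation to an external engine, by default a LaTeX installation, so "pdflatex not found" is the usual failure when the commands above don't work. A guarded sketch:

```shell
# Create a tiny markdown file and attempt the conversion; the guards
# keep this runnable even where pandoc or a PDF engine is missing.
printf '# Title\n\nHello **world**.\n' > README.md
if command -v pandoc >/dev/null 2>&1; then
    pandoc --from=gfm -o README.pdf README.md || echo "no PDF engine available"
else
    echo "pandoc not installed"
fi
# Point pandoc at a different engine if you don't want LaTeX:
# pandoc --from=gfm -o README.pdf --pdf-engine=wkhtmltopdf README.md
```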
I found md-to-pdf very useful.
Examples:
– Convert ./file.md and save to ./file.pdf
$ md-to-pdf file.md
– Convert all markdown files in current directory
$ md-to-pdf ./*.md
– Convert all markdown files in current directory recursively
$ md-to-pdf ./**/*.md
– Convert and enable watch mode
$ md-to-pdf ./*.md -w
And many more options.

CUPS printing remote(http://) files from command line

I am trying to create a custom script to control my Canon SELPHY printer from the command line.
lp -d Canon_CP900 -o media="CP_C_size" /Users/sangyookim/Desktop/selphy.jpg
I have tested the above command and it works exactly as I intend.
But I have stumbled upon a problem.
When I replace /Users/sangyookim/Desktop/selphy.jpg (the filename) with a web link such as the one below, it returns: unable to access.. No such file or directory
http://res.cloudinary.com/splexz/image/upload/v1447239237/yer60xuvd6nmeldcbivd.png
How can I print images from the web using CUPS command line?
You cannot directly print a remote file (because most Linux commands, lp included, do not understand URLs).
At the very least, you'll need to first fetch that file using a command-line HTTP client like wget or curl, then print it with another command (lp or lpr), and perhaps remove the downloaded file from your local filesystem afterwards.
For images, you will probably need a converter before printing, e.g. the convert command from ImageMagick (which happens to understand URLs directly; thanks to Mark Setchell for commenting on this), to convert them to a .pdf or perhaps .ps file (unless you have configured lp or CUPS to do the conversion automagically). Maybe you could use a2ps.
You could write some script (or shell function) to do all the job.
In limited cases, you might also consider a network filesystem (NFS, CIFS) or a FUSE mount (I don't recommend that).
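The fetch-convert-print pipeline described above can be sketched as one shell function. All commands are standard (curl, ImageMagick's convert, lp), but the queue name in the example call is the one from the question, and the function is only defined here, not run, since it needs a real printer.

```shell
# Sketch: download a remote image, convert it to PDF, print it, and
# clean up the temporary files (GNU mktemp syntax).
print_remote_image() {
    local url=$1 queue=$2
    local img pdf status
    img=$(mktemp) && pdf=$(mktemp --suffix=.pdf) || return 1
    curl -fsSL "$url" -o "$img" &&
        convert "$img" "$pdf" &&
        lp -d "$queue" "$pdf"
    status=$?
    rm -f "$img" "$pdf"
    return $status
}
# Example call, using the URL and queue from the question:
# print_remote_image "http://res.cloudinary.com/splexz/image/upload/v1447239237/yer60xuvd6nmeldcbivd.png" Canon_CP900
```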

How to grab different sorts of images from their src using wget?

This is an example image src. I want to save this image using wget. How do I do that?
http://lp.hm.com/hmprod?set=key[source],value[/environment/2013/2BV_0002_007R.jpg]&set=key[rotate],value[-0.1]&set=key[width],value[3694]&set=key[height],value[4319]&set=key[x],value[248]&set=key[y],value[354]&set=key[type],value[FASHION_FRONT]&hmver=0&call=url[file:/product/large]
wget -L "http://lp.hm.com/hmprod?set=key[source],value[/environment/2013/2BV_0002_007R.jpg]&set=key[rotate],value[-0.1]&set=key[width],value[3694]&set=key[height],value[4319]&set=key[x],value[248]&set=key[y],value[354]&set=key[type],value[FASHION_FRONT]&hmver=0&call=url[file:/product/large]" -O zz.jpg
Quoting the link to be downloaded is essential. This link in particular contains many special characters capable of confusing the shell.
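A minimal demonstration of why the quotes matter: unquoted, the shell would treat each "&" as a background operator and could glob-expand the [bracketed] segments; inside double quotes the URL reaches wget verbatim. (The URL below is a shortened version of the one above, for illustration only.)

```shell
url='http://lp.hm.com/hmprod?set=key[source],value[x]&set=key[rotate],value[-0.1]&hmver=0'
printf '%s\n' "$url"        # exactly what wget receives when quoted
# wget "$url" -O zz.jpg     # safe: one argument, special chars intact
```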

Download multiple files, with different final names

OK, what I need is fairly simple.
I want to download LOTS of different files (from a specific server) via cURL, and I want to save each one of them under a specific new filename on disk.
Is there an existing way (parameter, or whatever) to achieve that? How would you go about it?
(If there were an option to put all URL-filename pairs in a text file, one per line, and have cURL process it, that would be ideal.)
E.g.
http://www.somedomain.com/some-image-1.png --> new-image-1.png
http://www.somedomain.com/another-image.png --> new-image-2.png
...
OK, I just figured out a neat way to do it myself.
1) Create a text file with pairs of URL (what to download) and filename (how to save it to disk), separated by a comma (,), one pair per line, and save it as input.txt.
2) Use the following simple Bash script:
while read -r line; do
    IFS=',' read -ra PART <<< "$line"
    curl "${PART[0]}" -o "${PART[1]}"
done < input.txt
*I haven't thoroughly tested it yet, but I think it should work.
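An alternative worth knowing: curl has its own config-file feature (-K / --config), where each url entry can be paired with its own output name, so a single curl invocation downloads everything. A sketch using the placeholder domain from the question (the actual download line is left commented out, since the domain is not real):

```shell
# Build a curl config file with one url/output pair per download.
cat > urls.conf <<'EOF'
url = "http://www.somedomain.com/some-image-1.png"
output = "new-image-1.png"
url = "http://www.somedomain.com/another-image.png"
output = "new-image-2.png"
EOF
# curl --config urls.conf
```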

how to print file containing text with ANSI escapes

I would like to print a file containing text with ANSI escapes.
Here is file content (generated with bash script):
\033[1m bold message example \033[0m
normal font message
When printing file to screen in terminal, it works nice:
cat example.txt
shows:
bold message example
normal font message
But my problem when I try to send it to a printer:
lp example.txt
prints:
1mbold message example2m
normal font message
Is there a way to print this file correctly? Maybe with groff (which can be used to print a styled man page), but I did not manage to get anything useful out of it...
Maybe a2ps might be able to handle that (but I am not sure; you should try).
And I would rather suggest changing the way you get such a file with ANSI escapes (that is, also providing some alternative output format).
I mean that the program producing such a file (or such output) could instead produce a more printable format, perhaps by generating some intermediate form (e.g. LaTeX, Lout, groff, or HTML) and then forking the appropriate command to print it. That program could also generate PDF directly through libharu or poppler, etc.
Also, it might depend upon your printer, and the driver.
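If changing the producing program isn't an option, one workaround is to strip the ANSI escapes with sed before sending the file to the printer: the bold styling is lost, but the text prints cleanly. A sketch (GNU sed understands \x1b for the escape byte; the lp call is commented out since it needs a real queue):

```shell
# Recreate the example file from the question, then remove every
# "ESC[...m" color/style sequence from it.
printf '\033[1m bold message example \033[0m\nnormal font message\n' > example.txt
sed 's/\x1b\[[0-9;]*m//g' example.txt > plain.txt
# lp plain.txt    # send the cleaned copy to the printer
```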
