How to create a thumbnail view using system libraries in Linux

I want to create a thumbnail view of a file type, similar to the thumbnails displayed in GNOME/KDE.
Does anyone know which libraries GNOME/KDE use to display thumbnail views of different file types in Linux?

It appears there is a D-Bus specification for sending requests to a cross-toolkit thumbnailing service called Tumbler: http://gezeiten.org/post/2009/10/Using-Tumbler-in-Client-Applications
But the documentation seems to be very sparse.
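If you want to experiment with Tumbler directly, the freedesktop.org thumbnailer D-Bus specification it implements defines a Queue method for requesting thumbnails. Here is a minimal sketch of a session-bus call; the service, path, and interface names come from that specification, the file path is a placeholder, and you should verify the exact method signature against your installed Tumbler version:
# Queue a thumbnail request with the session-bus thumbnailer (Tumbler).
# Arguments: URIs, MIME types, flavor, scheduler, handle-to-dequeue.
gdbus call --session \
  --dest org.freedesktop.thumbnails.Thumbnailer1 \
  --object-path /org/freedesktop/thumbnails/Thumbnailer1 \
  --method org.freedesktop.thumbnails.Thumbnailer1.Queue \
  "['file:///home/user/photo.jpg']" "['image/jpeg']" normal default 0
The generated thumbnail should then appear in the standard thumbnail cache (typically ~/.cache/thumbnails on current systems).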

ImageMagick provides both a command-line tool and a library. The library has interfaces for C++ and Perl. Alternatively, you can try GraphicsMagick.

The utility 'convert' from ImageMagick is often used for this.
http://www.cyberciti.biz/tips/howto-linux-creating-a-image-thumbnails-from-shell-prompt.html has an example that I have adapted here.
Given two directories, images/ and thumbnails/, this little script will convert all the images in the first directory into thumbnails in the second, with 'thumb.' prepended to each filename:
#!/bin/bash
for i in images/*
do
    echo "Processing $i ..."
    # -thumbnail 200 resizes to 200px wide, keeping the aspect ratio
    /usr/bin/convert -thumbnail 200 "$i" "thumbnails/thumb.$(basename "$i")"
done

Related

WGET - how to download embedded PDFs that have a download button, from a text file URL list? Is it possible?

Happy New Year!
I wanted to see if anybody has ever successfully downloaded embedded PDF files from multiple URLs contained in a .txt file for a website.
For instance:
I tried several combinations of wget -i urlist.txt (which downloads all the HTML files perfectly); however, it doesn't also grab each HTML file's embedded PDF, which has a .pdf?xxxxx slug on the end.
The exact example of this obstacle is the following:
I have placed all 2 pages of links from this dataset into a url.txt:
https://law.justia.com/cases/washington/court-of-appeals-division-i/2014/
1 example URL within this dataset:
https://law.justia.com/cases/washington/court-of-appeals-division-i/2014/70147-9.html
The embedded pdf link is the following:
https://cases.justia.com/washington/court-of-appeals-division-i/2014-70147-9.pdf?ts=1419887549
The .pdf files are actually named like "2014-70147-9.pdf?ts=1419887549", i.e. .pdf?ts=xxxxxxxxxx, and the suffix is different for each one.
The URL list contains 795 links. Does anyone have a successful method to download every .html in my urls.txt while also downloading each accompanying .pdf?ts=xxxxxxxxxx file?
Thank you!
~ Brandon
Try using the following:
wget --level 1 --recursive --span-hosts --accept-regex 'https://law.justia.com/cases/washington/court-of-appeals-division-i/2014/.*html|https://cases.justia.com/washington/court-of-appeals-division-i/.*.pdf.*' --input-file=urllist.txt
Details about the options --level, --recursive, --span-hosts, --accept-regex, and --input-file can be found in the wget documentation at https://www.gnu.org/software/wget/manual/html_node/index.html.
You will also need to know how regular expressions work. You can start at https://www.grymoire.com/Unix/Regular.html
You are looking for a web scraper. Be careful not to break any rules if you ever use one.
You could also process the content you have received through wget using some string manipulation in a bash script, as sketched below.
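For example, here is a minimal sketch of that approach, reusing the urllist.txt from the wget command above. It assumes each page embeds its PDF as a cases.justia.com link, as in the example URL, so the regex may need adjusting to the actual page markup:
#!/bin/bash
# For each page URL, fetch the HTML, extract the first embedded PDF link,
# and download it (the ?ts=... suffix stays as part of the saved name).
while read -r url; do
    pdf=$(wget -qO- "$url" \
          | grep -oE 'https://cases\.justia\.com/[^"]+\.pdf\?ts=[0-9]+' \
          | head -n1)
    [ -n "$pdf" ] && wget "$pdf"
done < urllist.txt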

asciidoctor - offline user manual is nowhere to be found

Does anyone know where to get an offline version of Asciidoctor's user manual: https://asciidoctor.org/docs/user-manual/
It is weird how the developers brag about Asciidoctor being able to export to PDF, HTML... but at the same time they fail to provide a nice PDF document for offline use...
You can get the raw adoc source from: https://github.com/asciidoctor/asciidoctor.org/blob/master/docs/user-manual.adoc and convert it using asciidoctor.
Feel free to grab the result directly from:
https://sqli.dev/asciidoctor/user-manual.pdf (asciidoctor-pdf threw a few errors with this document, which I haven’t investigated, so some things may not show up as intended)
https://sqli.dev/asciidoctor/user-manual.html (this html will still fetch online resources for fonts, mathjax, etc.)
You can use the following to limit the amount of online resources needed:
git clone https://github.com/asciidoctor/asciidoctor.org.git
cd asciidoctor.org/docs
curl -O https://fontawesome.com/v4.7.0/assets/font-awesome-4.7.0.zip
7z x font-awesome-4.7.0.zip
asciidoctor -a !iconfont-remote=# -a icons=font -a stylesdir=font-awesome-4.7.0/css -a !webfonts=# user-manual.adoc
The resulting user-manual.html will only try to fetch the MathJax.js from a remote site.
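If you prefer a PDF, you can also build it yourself from the same source. A minimal sketch, assuming a working Ruby installation; expect the same non-fatal errors mentioned above on this large document:
# Install the PDF converter, then render the manual;
# the output lands next to the source as user-manual.pdf
gem install asciidoctor-pdf
asciidoctor-pdf user-manual.adoc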
I would recommend opening an issue at https://github.com/asciidoctor/asciidoctor.org proposing that they offer offline download options.

collada2gltf converter can't produce *.json file

I am reading a book, Programming 3D Applications with HTML5 and WebGL, which involves the Vizi framework.
All of its examples load a *.json file instead of a *.gltf file. Why?
When I load a *.gltf file, nothing is displayed, and the collada2gltf converters only produce *.gltf, *.bin, *.glsl files and so on.
What should I do?
.gltf is a JSON file. Try opening it with a text editor and see for yourself. The .bin and .glsl files are just additional resources linked from the .gltf file: geometry buffers and shaders, respectively. So to make it work, you should make sure that all the files produced by the converter are also available to the web browser you are running your code in.
You can also try adding the -e CLI flag to collada2gltf, and it will embed all the resources into the resulting .gltf file.
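For example, an invocation along these lines; this is a sketch assuming the glTF 1.0-era converter the book uses, and the -f input flag and model.dae filename are assumptions, so check collada2gltf -h for the exact flags of your build:
# Convert a COLLADA file, embedding buffers and shaders into the .gltf
collada2gltf -f model.dae -e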

GraphicsMagick unable to process Unicode filenames

I have found that GraphicsMagick is unable to process my files named in Chinese. I ran the same test with ImageMagick, and IM worked as expected.
I thought this might be a bug so I filed a bug report here: https://sourceforge.net/p/graphicsmagick/bugs/384/
Anyway, this is how to reproduce my situation:
Platform: Win10
Version: GraphicsMagick 1.3.20
Code: gm -identify 獅藝學會.jpg
This is the returned text from Command Prompt:
>gm -identify 獅藝學會.jpg
gm identify: Unable to open file (????.jpg) [Invalid argument].
gm identify: Request did not return an image.
Using IM worked:
identify 獅藝學會.jpg
ç?.è-?å-,æoƒ.jpg JPEG 3264x2448 3264x2448+0+0 8-bit sRGB 2.691MB 0.016u 0:00.004
Although the returned text is scrambled, converting the file to a .png still kept the same filename, apart from the different extension of course.
What happened
I found this problem while using the gm node.js library to batch-process my images. The call originates from a UTF-8 webpage, so I assume the filename is passed in Unicode encoding.
I found no documentation related to this problem. Although the documentation mentions an -encoding option, it is not recognized as a parameter on Windows, and I cannot find relevant solutions on Google.
Please help: is there any easy way around this problem while keeping the exact filename?
In case someone uses the C API:
(You can only pass (char *)-type filenames, and UTF-8 encoding does not work when using GraphicsMagick on Windows.)
You could do the following:
Open the file for input (or output) yourself (use fopen(), _wfopen(), etc.).
Then set the file handle within the ImageInfo structure for reading, or within the Image structure for writing, respectively (instead of setting the filename).
To have GraphicsMagick generate the right output file format, set magick within the Image structure.
For example:
// Reading: hand GraphicsMagick an already-opened file handle
imageInfo->file = _wfopen(input_filename, L"rb");  // ImageInfo *imageInfo;
image = ReadImage(imageInfo, exception);           // Image *image;
// Writing: same idea, plus the desired output format in image->magick
image->file = _wfopen(output_filename, L"wb");
strcpy(image->magick, "PNG");
WriteImage(imageInfo, image);
GraphicsMagick automatically closes the file after writing/reading.
I have the same problem using GM in C++. UTF-8 filenames are not supported under Windows (not even in the API!).
My workaround is to get the short path name (8.3); you can do that both from the command line and via Win32. However, this doesn't work 100% of the time, and if you want to save a file, you have to create an empty one first to be able to get its short name.

Creating KML superoverlays with gdal2tiles

I am trying to use gdal2tiles to batch-create a bunch of KML superoverlays from a set of big GeoTIFF files; the problem is that the gdal2tiles output seems to be stripped of its lowest-value points (i.e. the blue ones in the first image below).
This is an example of a superoverlay created directly from Google Earth Pro (using its built-in function):
This is the corresponding output of gdal2tiles, which I generated following the instructions in this KML guide; in particular, this is what I did:
gdalwarp -of VRT -t_srs EPSG:4326 input.tif output.vrt
gdal2tiles.py -p geodetic -k output.vrt outputdir/
Does anyone know why this happens? Any suggestions on how to avoid it?
Thanks
