Landmasking in SAR geotiff image - python-3.x

I am trying to mask out the land in a grayscale satellite (SAR) GeoTIFF image. The functionality is available in rsgislib, but that runs on Linux, while I am working on conda Python 3.5 (Windows) and have not been able to find a workable way to do it.
Kindly guide me as to how the land can be masked out of an image.

I found the way out:
First we have to download an appropriate shapefile of the region we wish to mask. Then there is a very useful utility available in GDAL called gdalwarp. We just need to open the Anaconda prompt and type:
gdalwarp -cutline shapefile_name.shp original_image.tif output_filename.tif
The image clipped along the land boundary then gets saved as output_filename.tif.
This file contains the land portion, with the ocean masked out.
The procedure then becomes fairly simple: mask out the land by subtracting the output_filename.tif image from the original image.
This gives an image of the ocean with the land portion in black; after that we can set the land portion to NaN.
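A minimal sketch of the subtraction and NaN steps, assuming both GeoTIFFs are single-band and the same size; it uses rasterio and numpy, which are not part of the original answer, and reuses the file names from the gdalwarp command above:

import numpy as np
import rasterio

# Read the original SAR image and the gdalwarp output (land only, ocean set to nodata/0).
with rasterio.open("original_image.tif") as src:
    original = src.read(1).astype("float32")
    profile = src.profile
with rasterio.open("output_filename.tif") as src:
    land_only = src.read(1).astype("float32")

# Subtracting cancels the land pixels and leaves the ocean values untouched.
ocean = original - land_only
# Mark the land portion as NaN instead of 0.
ocean[land_only != 0] = np.nan

# Write the ocean-only image as a float32 GeoTIFF.
profile.update(dtype="float32", nodata=np.nan)
with rasterio.open("ocean_masked.tif", "w", **profile) as dst:
    dst.write(ocean, 1)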

Related

Image to text conversion python

I am trying to extract only the highlighted text from an image using the pytesseract module in Python.
The issue is that I am unable to extract just the highlighted part; the whole image is getting converted to text, and I have no idea how to extract a specific part based on the background colour.
The best way to achieve this is to crop the image and send only the part you need; it will also improve performance.
There is a related discussion that may help -> Select part of text that was extracted using the Tesseract OCR
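A minimal sketch of the crop-then-OCR idea (the file name and coordinates are hypothetical and would come from wherever the highlighted region sits in your image):

import cv2
import pytesseract

img = cv2.imread("page.png")

# Hypothetical bounding box of the highlighted region (x, y, width, height).
x, y, w, h = 100, 200, 400, 50
highlighted = img[y:y + h, x:x + w]

# OCR only the cropped region instead of the whole page.
print(pytesseract.image_to_string(highlighted))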

OpenCV, find blurry images

I would like help with Python code that should show me whether the images in a folder are blurry or not.
I found this article useful: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/
but in its output it writes text on top of the image with the blurriness value.
Instead, I want the result to be a text file (output.txt) that lists the image path, its blurriness value, and whether or not it is blurry, rather than writing these things on top of the image.
I am using Anaconda 3 and have installed cv2, argparse and imutils as explained in the article.
In output.txt it should be like this
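A minimal sketch of one way to do this (not an answer from the original thread), assuming the Laplacian-variance blur measure from the linked article and a hypothetical threshold of 100:

import glob
import cv2

THRESHOLD = 100.0  # hypothetical cut-off, tune it for your images

with open("output.txt", "w") as report:
    for path in sorted(glob.glob("images/*.jpg")):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        verdict = "blurry" if score < THRESHOLD else "not blurry"
        report.write("{}\t{:.2f}\t{}\n".format(path, score, verdict))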

How to change the part of the background that is black to white?

I have been working with PyTesseract OCR and converting PDFs to JPEG in order to OCR the images. A part of the image has a black background with white text, which Tesseract is unable to identify, whereas all other parts of my image are being read perfectly well. Is there a way to change the part of the image that has a black background? I tried a few SO resources, but they don't seem to help.
I am using Python 3, OpenCV version 4 and PyTesseract.
OpenCV has a bitwise_not function which correctly inverts the image.
You can put a mask on / freeze the rest of the image (the part that is already correct) and use something like this:
imageWithMask = cv2.bitwise_not(imageWithMask)
Alternatively, you can perform the operation on a copy of the image and only copy over the parts / pixels / regions you need.
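A minimal sketch of that idea (the file name and coordinates are hypothetical): invert only the region known to have the black background and leave everything else untouched:

import cv2

img = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)

# Hypothetical bounding box of the black-background part (x, y, width, height).
x, y, w, h = 50, 400, 600, 120

# Invert just that region in place; the rest of the image is not modified.
img[y:y + h, x:x + w] = cv2.bitwise_not(img[y:y + h, x:x + w])

cv2.imwrite("page_fixed.jpg", img)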

How to set exact coordinates in a PNG image?

Platform: Linux
Tool: qrencode (an open-source application for creating QR codes on Linux)
I am using the qrencode application to generate a QR code; the output file format I am using is PNG. When I try to print the PNG file on a dot-matrix printer, it prints correctly but scrolls down the whole page, i.e. the image occupies the entire page. My requirement is to be able to print the image at any point on the page.
Unfortunately I don't have time to go through the entire source code of libpng and qrencode.
I strongly recommend checking the man page for lpr.
It has an option for positioning the image on the page, e.g. -o position=name.
Check the possible position names in the manual.
Most probably you will also need to scale your image.
Make sure the image is not too large to fit on the page.
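For example, with CUPS the invocation might look something like this (the position name and scaling percentage are placeholders to adjust):
lpr -o position=top-left -o scaling=50 qrcode.png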

Using fbi as slideshow causes portrait images to autorotate

I'm using a Raspberry Pi running Raspbian Wheezy as a digital photo frame. The Pi is configured to autologin on boot and execute a bash script that starts fbi as a slideshow, like so:
fbi -noverbose -a -t 10 /home/pi/Pictures/*.jpg /home/pi/Pictures/*.png
I've noticed that any portrait photos (i.e. photos that are taller than they are wide) are automatically rotated 90 degrees so that they appear as landscape.
If I remove the -noverbose switch, the dimensions are displayed underneath each image, and what was once a 480x640 pixel image is displayed as 640x480. Removing the -a autozoom switch doesn't help either.
Can anyone help me get my photos to display in their original orientation regardless of aspect ratio?
I know this issue is a little old, but I've been running into it as well and think I found the solution this morning. It has to do with the EXIF orientation flag. From what I understand, programs can handle this flag differently, or not acknowledge it at all. So I believe the solution is to rotate the images, save them that way, and ignore the EXIF data.
I plan on doing it using a Windows program I found here: http://www.makeuseof.com/tag/are-your-iphone-photos-refusing-to-rotate-in-windows-explorer-here-is-the-solution/
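A scriptable alternative (not part of the original answer) is to bake the rotation into each file with Pillow's ImageOps.exif_transpose, so fbi no longer has to interpret the flag; the directory matches the fbi command above:

from pathlib import Path
from PIL import Image, ImageOps

# EXIF orientation is a JPEG concern, so only the .jpg files are touched.
for path in Path("/home/pi/Pictures").glob("*.jpg"):
    with Image.open(path) as img:
        fixed = ImageOps.exif_transpose(img)  # apply the Orientation tag physically
        fixed.load()  # force the pixel data into memory before the file closes
    fixed.save(path)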
