How can I make an image pass a high res test?

I am designing a t-shirt online, using a source image as a pattern; however, when I go to render the finished design, the tool says it cannot proceed because the source image's resolution is too low.
I have tried various filters and effects to no avail. Obviously I cannot create detail or resolution that doesn't exist, but the image looks fine, and I believe there may be some way to apply a texture or sharpening effect that would give it enough resolution to pass the rendering threshold.
Does anyone have ideas on which software or filters I might try? Thanks.
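If the render check looks only at pixel dimensions or DPI metadata, which such thresholds often do, plain upscaling may be enough to satisfy it. A minimal sketch with Pillow (the file names and the 4x factor are placeholders):

```python
from PIL import Image

img = Image.open("source.png")  # placeholder: your pattern image

# Lanczos resampling upscales cleanly; it adds pixels, not real detail,
# but that is usually all a minimum-resolution check measures.
scale = 4
big = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)

# Stamping print-friendly DPI metadata can also help if the check reads DPI.
big.save("source_4x.png", dpi=(300, 300))
```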

Related

API to retrieve images from within an image or PDF

I am looking for a way to extract images from within another image. For example:
Here is a picture taken of a piece of paper. It includes text, an image of a camera, and an image of a QR code. Is there an API that can extract those two (the camera and the QR code) from the larger image and separate them into their own individual images? I know this is doable with the text (OCR), but I need some form of image recognition, if that even exists. So far I can't find any reference to doing this besides extracting images from PDFs, and none of those tools can extract them from an imperfect PDF.
Price for the API (Node.js preferred, but I can adapt to any language) is not a big concern; I'm just not sure this is even possible to do without programming a legitimate artificial intelligence using machine learning, which I would no doubt break so badly it causes a global internet shutdown if I attempted it.
Anyway, any suggestions would be great and much appreciated. Thanks!
EDIT: The images aren't always those two; they can be images of anything, from potatoes to flags.
For the QR code, you can simply use a QR code scanner library and convert the output back into a QR code. As for the camera, you will need an image recognition service like Google Cloud Vision, or to train your own neural network with something like TensorFlow to recognize pictures of cameras.
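For the QR half, a minimal sketch with OpenCV's built-in QRCodeDetector (file names are placeholders): it decodes the payload and also returns the code's corner points, so you can crop the code out of the page and regenerate a clean copy from the text later.

```python
import cv2
import numpy as np

img = cv2.imread("page_photo.jpg")  # placeholder: your photo of the paper

detector = cv2.QRCodeDetector()
# detectAndDecode returns the payload text, the corner points of the code,
# and a rectified ("straight") image of the code itself.
text, points, straight = detector.detectAndDecode(img)

if points is not None:
    # Crop the QR code out of the page via its bounding box.
    x, y, w, h = cv2.boundingRect(points.reshape(-1, 2).astype(np.int32))
    cv2.imwrite("qr_crop.png", img[y:y + h, x:x + w])
    print("decoded payload:", text)
```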
QR detectors abound around the web, and some are on GitHub, but for single objects you could try the Hotpot API: https://hotpot.ai/docs/api
For your example, the relevant endpoint is background removal: https://hotpot.ai/remove-background
For stripping the result back down to just the object, you may need a secondary autocrop step; see the sketch below.
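If you end up building that autocrop step yourself, here is a heuristic, non-ML sketch with OpenCV (the file name and the area threshold are placeholder assumptions): on a mostly white page, the large dark connected regions are usually the embedded pictures.

```python
import cv2

img = cv2.imread("scanned_page.jpg")  # placeholder: your photo of the paper
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu threshold, inverted so ink and pictures become white foreground.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Dilate to merge nearby strokes and blobs into solid regions.
mask = cv2.dilate(mask, None, iterations=5)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for i, c in enumerate(contours):
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 10000:  # skip small text blobs; tune for your scan size
        cv2.imwrite(f"object_{i}.png", img[y:y + h, x:x + w])
```

This won't tell you whether a crop contains a camera or a potato; for labeling you would still need a recognition service like the ones mentioned above.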

Find an altered image from the original image data set

Here is my problem:
I must match two images. One comes from the project folder, which contains over 20,000 images. The other one comes from a camera.
What have I done so far?
I can compare images with the basic OpenCV example code I found in the documentation (OpenCV docs). I can also find an image by hashing my image data set. That is very fast, but it only works for two exact images: one is the query, the other is the target, and they must be the same exact image.
So I need something as reliable as feature matching and as fast as hashing, but I can't use machine learning or anything on that level; it should be basic. Plus, I'm new to this stuff, so my term project is at risk.
Example scenario:
Suppose I take a picture of an image from my data set as it is displayed on my computer's screen. This changes many features of the original image. A human won't struggle much to tell what's in that picture, but a comparison algorithm will. Such a case puts lots of basic comparison algorithms out of the game. A machine-learning algorithm could solve the problem, but it's forbidden in my project.
Needs:
It must be fast.
It must be accurate.
It must be easy to understand.
Any help is welcome: a piece of code, an article, or a tutorial. Even a piece of advice or a topic title might be really helpful to me.
I once saw a camera model identification challenge on Kaggle. There is a notebook there that discusses how noise patterns change from device to device. Maybe you should look into it and the other notebooks in that challenge. Thanks!
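Another non-ML direction that may fit your constraints is perceptual hashing. Unlike the exact hash you tried, a perceptual hash such as pHash tolerates rescaling, recompression, and mild lighting shifts, and comparing two hashes is a cheap Hamming distance. A minimal sketch with opencv-contrib-python (paths are placeholders; in practice you would precompute and store the 20,000 data-set hashes once):

```python
import glob
import cv2

# Requires opencv-contrib-python for the img_hash module.
hasher = cv2.img_hash.PHash_create()

def phash(path):
    return hasher.compute(cv2.imread(path))

query_hash = phash("camera_shot.jpg")           # placeholder: photo of the screen
candidates = glob.glob("project_folder/*.jpg")  # placeholder: the ~20,000 images

# compare() returns the Hamming distance between two hashes; smaller is closer.
best = min(candidates, key=lambda p: hasher.compare(phash(p), query_hash))
print("closest match:", best)
```

A screen photo with strong perspective distortion may still defeat a plain pHash; in that case it can serve as a fast pre-filter, with slower feature matching (e.g. ORB) run only on the top few candidates.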

Extent of support for using Vulkan Swapchain Images as Transfer Destination

In my Vulkan backend implementation I currently check the supported usage flags for the Swapchain and then proceed to either use copy commands or a fall-back render pass to draw to the back buffer from an intermediary Render Target. I wanted to know whether this check is required, or whether it is safe to assume that Swapchain Images allow usage as a Transfer Destination on typical desktop hardware.
Also, if anyone knows about Vulkan implementations that do not allow Copying to Swapchain Images, I'd appreciate it if you could share. This is mostly for the sake of curiosity rather than solving a problem.
You can look at the Vulkan Hardware Database.
I couldn't find anywhere that summarizes the data, but if you click on a device in the list, then open the surface tab and the surface properties tab, you can see supportedUsageFlags in the table and look for TRANSFER_DST_BIT.
I only looked at a few and they all had TRANSFER_DST_BIT present. I believe the database and code for the viewer are open source, so perhaps you can find a better way to mine the particular information you're after.

The image does not seem as clear in OpenSeadragon as the original

I use OpenSeadragon to show artwork on my web site, but the images do not seem as clear as the originals. (I do not have enough reputation to post screenshots.)
The original is sharper, and the one rendered by OpenSeadragon is blurred.
I thought this was caused by Deep Zoom Composer decreasing quality when creating the DZI image parts, but I was wrong. A part image in the DZI directory and the original are exactly the same, and both are rendered by the same browser (IE 10).
Now the most reasonable explanation is that OpenSeadragon's rendering causes the difference. Is this a bug, or is there an option/argument that can improve the rendering quality in OpenSeadragon?
Two possible issues here. Are you on an HDPI (i.e. "retina") screen? If so, there is a bug fix in master that's not on the latest release yet.
Otherwise, it's probably the minPixelRatio. By default OpenSeadragon allows the image to not be entirely full resolution at some zoom levels, to save on bandwidth. You can modify this value by passing minPixelRatio: 1 as one of the options when you create your viewer (the default value is 0.5).
We can continue the discussion in the issue here: https://github.com/openseadragon/openseadragon/issues/646

Possible resolution error for VTK

I'm trying to visualize the VTK data given here: https://www.dropbox.com/sh/51kjftvdko3g6s8/wEe88Id9QN. I'm doing something wrong, probably related to resolution. I was wondering if anyone could run this code and send me the result. If it is clearer than the image I get, the problem is more likely related to my graphics driver, I think. In that case, what might be the cause? The image my computer generates can be found at the Dropbox link, too.
It looks correct to me (a grid of points is being displayed). If you create a vtkImageData from your vtkStructuredPoints, it will be able to interpolate between the points to display a volume.
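A minimal sketch of that suggestion in Python, assuming the file is a legacy .vtk scalar field (the file name and the 0-255 opacity ramp are placeholder assumptions). Since vtkStructuredPoints is a subclass of vtkImageData, a volume mapper can consume the reader's output directly:

```python
import vtk

# Read the legacy structured-points file ("data.vtk" is a placeholder).
reader = vtk.vtkStructuredPointsReader()
reader.SetFileName("data.vtk")
reader.Update()

# A volume mapper interpolates between the points instead of drawing a grid.
mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Simple linear opacity ramp; adjust the range to your data's scalars.
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0.0, 0.0)
opacity.AddPoint(255.0, 1.0)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetInterpolationTypeToLinear()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```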
