I'm trying to visualize the vtk data given here https://www.dropbox.com/sh/51kjftvdko3g6s8/wEe88Id9QN. I'm doing something wrong, and it is probably related to resolution. I was wondering if anyone could run this code and send me the result. If that result is clearer than the image I get, the problem is more likely related to my graphics driver, I think. In that case, what might be causing it? The image my computer generates can be found at the Dropbox link, too.
It looks correct to me (a grid of points is being displayed). If you create a vtkImageData from your vtkStructuredPoints, it will be able to interpolate between the points to display a volume.
Here is my problem:
I must match two images. One image comes from the project folder, which contains over 20,000 images. The other one comes from a camera.
What I have done?
I can compare images with the basic OpenCV example code I found in the documentation (OpenCV Doc). I can also find an image by using a hash of my image data set. That is very fast, but it only works for two exact copies of an image: one as the query, the other as the target, and they have to be exactly the same image.
So, I need something as reliable as feature matching and as fast as hash methods. But I can't use machine learning or anything on that level. It should be basic. Plus, I'm new to this stuff, so my term project is at risk.
Example scenario:
If I ever take a picture of an image from my data set off my computer's screen, that would change many features of the original image. A human wouldn't struggle much to tell what's in the picture, but a comparison algorithm would. Such a case knocks lots of basic comparison algorithms out of the game. A machine-learning algorithm could solve the problem, but machine learning is forbidden in my project.
Needs:
It must be fast.
It must be accurate.
It must be easy to understand.
Any help is welcome: a piece of code, an article, a tutorial. Even advice or a topic title might be really helpful to me.
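For the record, one non-ML middle ground between exact hashing and full feature matching is a perceptual hash. Below is a minimal difference-hash (dHash) sketch using Pillow; the function names and the distance threshold are illustrative, not from the original post:

```python
# A minimal difference-hash (dHash) sketch using Pillow.
# Unlike an exact byte hash, a perceptual hash changes only slightly when
# the image is re-photographed or re-compressed, so near-duplicates can
# be found by comparing Hamming distances instead of exact equality.
from PIL import Image

def dhash(image, hash_size=8):
    """Return a 64-bit perceptual hash (for the default hash_size=8)."""
    # Shrink and desaturate: this discards high-frequency detail and
    # color shifts, keeping only the coarse brightness gradients.
    img = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            # Each bit records whether brightness increases left-to-right.
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes (lower = more similar)."""
    return bin(a ^ b).count("1")
```

The idea would be to precompute hashes for the 20,000 images once, then compare the camera image's hash against all of them; a 64-bit XOR plus popcount per candidate is fast enough to scan the whole set, and a small Hamming distance (e.g. below roughly 10, a threshold you would tune) suggests the same underlying image.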
I once saw a camera-model identification challenge on Kaggle. This notebook discusses how the noise pattern changes from device to device. Maybe you should look into it and the other notebooks in that challenge. Thanks!
I am designing a t-shirt online and using a source image as a pattern; however, when I go to render the finished design, it says it cannot proceed because the source image's resolution is too low.
I have tried various filters and effects to no avail. Obviously I cannot create detail or resolution that doesn't exist but the image looks fine and I believe that there may be some way to apply texture or sharpening effect to the image that would give it the needed resolution to pass the threshold for rendering.
Does anyone have any ideas on which software or filters I might try to achieve my goal? Thanks.
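If the renderer only checks pixel dimensions (or embedded DPI), upscaling with a high-quality resampler plus a mild unsharp mask may get the image past the threshold, though it cannot add real detail. A hedged sketch with Pillow (the scale factor and DPI values below are assumptions, not known requirements of the t-shirt service):

```python
from PIL import Image, ImageFilter

def upscale_for_print(src_path, dst_path, scale=3):
    """Upscale an image and lightly sharpen it so it meets a
    minimum-resolution check; note that no real detail is added."""
    img = Image.open(src_path)
    # LANCZOS resampling gives the smoothest enlargement in Pillow.
    big = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)
    # A mild unsharp mask restores apparent edge crispness lost in resampling.
    big = big.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))
    # Embed a print-oriented DPI in case the check reads metadata.
    big.save(dst_path, dpi=(300, 300))
```

Whether this passes depends on how the service measures resolution; if it inspects actual detail rather than dimensions or metadata, no filter will help.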
I have set up a color scheme to identify the different building-services systems in Revit and Navisworks. When I uploaded the model to the Forge Viewer, the colors were shown correctly at first. However, when I zoomed in, some of the colors disappeared. Has anyone had this problem? How can it be solved?
Thank you.
Screenshots: Forge display error; zooming in on Lo01.
Apologies for any inconvenience caused. This might be a known issue in our model-extraction service for Revit; it has been logged as REVIT-120524 in our case system so our dev team can allocate time to investigate it. You can send this ID to forge.help#autodesk.com to inquire about updates in the future.
By the way, the cause I discovered is that there are many MEP system types, each with its own colored material, and fittings take their material color from the first system type of the corresponding MEP system. Currently there is no formal solution that avoids this; we apologize for that again. Fortunately, there is a workaround you could try:
Split your MEP model into several RVT files, each containing a single pipe system, duct system, and so on.
Upload them to Forge for translation separately.
Load the translated models via the Forge Viewer.
Hope it helps.
This workaround is working in my live projects now, but it might not suit your case. It is not a formal solution, so you use it at your own risk.
Strangely worded, but a rather strange situation.
I'm plotting a massive number of lines (points, in some cases) on a canvas. The plot extends beyond the edge of the screen; by the time all the data is included at a decent, usable resolution, it will probably be in the area of 10,000-20,000 x 10,000-20,000 pixels, maybe even bigger. In the current situation, where I'm displaying only 1-2% of the data, it takes 30 seconds to create the graph. This morning I got to thinking that, given how much more data will eventually be presented, my best bet might be to create the graph once, save it as an image, and then display that image, since displaying an already-created image should be much quicker than recreating the graph every time I run the program.
I started doing some research into the idea this morning and have run into a few problems and questions to help me decide if this is a good idea or not.
When I try to
from PIL import Image, ImageTk
I get an error message saying it cannot import ImageTk.
If I try
import PIL
I get no errors. I'm using Linux and Python 3.4.3. Since several of the ideas involve ImageTk, how do I fix the import problem?
Edit:
Also when I try
from PIL import ImageGrab
I get the error "No module named _grabscreen". Since that falls under PIL, shouldn't it have been installed with PIL?
I know I would still need to install pyscreenshot to try the one technique I've seen presented here on SO.
Do any of these methods save the full canvas, or only what is visible on the physical screen? I want to save everything, which is a much larger image than the physical screen displays.
After saving the image and bringing it back up, would I still be able to scroll around it with tkinter? Also, would I be able to add extras to the image, such as points of interest (it will be a road-map image)?
So far I've only ever used tkinter for line graphs and not anything else so I don't have the experience to know the limitations versus what I might be wanting to do.
I have a few other ideas of how I might be able to handle the problem but none of the other ideas are as fast as this idea would be.
I just need an idea of the limitations so I can decide whether to go further with testing this idea.
Thanks.
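On the full-canvas question: tkinter's own Canvas.postscript renders from canvas coordinates rather than from the screen, so it can capture items outside the visible window; no screen grab is involved. A minimal sketch (converting the resulting EPS to another format with Pillow additionally requires Ghostscript to be installed):

```python
import tkinter as tk

def save_full_canvas(canvas, path):
    """Export the entire canvas (not just the visible part) to an EPS file.

    canvas.postscript renders from the canvas coordinate system, so by
    passing the bounding box of all drawn items it covers the whole
    drawing, unlike a screenshot of the window.
    """
    x0, y0, x1, y1 = canvas.bbox("all")  # extent of every drawn item
    canvas.postscript(file=path, x=x0, y=y0,
                      width=x1 - x0, height=y1 - y0)

# Converting the EPS to PNG needs Pillow plus Ghostscript:
#   from PIL import Image
#   Image.open("graph.eps").save("graph.png")
```

One caveat: PostScript output is vector-based for lines and text, so for a 20,000-pixel-wide raster result you would still need to rasterize it once (via Ghostscript) at your chosen resolution.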
My application presents a (raster) moving map.
I need to be able to show the map rotated by any given angle.
The program is currently in VC++/MFC but the problem is generic.
I have a source bitmap (CBitmap or HBITMAP) and draw it to the device context (CDC) using StretchBlt.
While this works fast and smoothly for angle = 0 (the user can drag the map smoothly with the mouse), it does not once I try to rotate the bitmap before presenting it: rotating the bitmap with SetWorldTransform() or similar takes hundreds of milliseconds, which is too slow.
I think the solution is to work only with the pixels currently on the screen instead of rotating the whole source bitmap - that is the key.
If someone has experience with a similar implementation, it might save me lots of trial-and-error effort.
Thanks!
Avi.
It looks like SetWorldTransform is extremely slow:
http://www.codeguru.com/Cpp/G-M/bitmap/specialeffects/article.php/c1743
And while the other options presented in that article are faster, there are of course other better solutions like this:
http://www.codeguru.com/cpp/g-m/gdi/article.php/c3693/ (check the comments for fixes and improvements as well)
Also here are some non-Windows centric fast rotation algorithms:
http://www.ddj.com/windows/184416337?pgno=11
Note that if you guarantee power of 2 dimensions you can get significant speed improvements.
As a follow-up to my question and the provided answer, let me summarize the following:
I used the algorithm mentioned at http://www.codeguru.com/cpp/g-m/gdi/article.php/c3693/.
It works and provides pretty good performance and a smooth display.
There were some bugs in it that I needed to fix, and in some cases I simplified the formulas and code.
I will examine the algorithm mentioned at http://www.ddj.com/windows/184416337?pgno=11 to see whether it offers breakthrough performance worth adapting.
My implementation required a large source bitmap, so I modified the code to rotate only the portion that will be displayed on the screen, rather than the whole bitmap each time (otherwise performance would have been unacceptable).
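The implementation above is C++/GDI, but the viewport-only rotation idea is generic: crop from the source only the disc of pixels that can possibly appear in the rotated viewport, rotate that small patch, and cut the viewport out of its center. Here is a sketch in Python with Pillow for illustration (the function and its parameters are hypothetical, not the actual implementation):

```python
import math
from PIL import Image

def rotated_view(src, center, angle_deg, view_w, view_h):
    """Return a view_w x view_h window of `src` rotated by angle_deg
    around `center` (given in source coordinates), rotating only the
    pixels that can appear on screen instead of the whole bitmap."""
    # Radius of the smallest source disc that covers the viewport
    # under any rotation: half the viewport's diagonal.
    radius = math.hypot(view_w, view_h) / 2
    cx, cy = center
    box = (int(cx - radius), int(cy - radius),
           int(cx + radius), int(cy + radius))
    patch = src.crop(box)                          # small region, cheap to rotate
    rotated = patch.rotate(angle_deg, resample=Image.BILINEAR)
    # The viewport center maps to the patch center; cut the viewport out.
    px, py = rotated.width / 2, rotated.height / 2
    return rotated.crop((int(px - view_w / 2), int(py - view_h / 2),
                         int(px + view_w / 2), int(py + view_h / 2)))
```

The cost is then proportional to the viewport size, not the map size, which is the same reasoning that made the modified GDI code fast enough.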
Avi.