I'm trying to design a UserForm in Excel 2010, and I'm running into a very annoying stumbling block as I try to move things around, resize them, and align them so the form looks appealing.
Unfortunately, different mechanisms are snapping to differently sized grids. For example, drawing a box on the form snaps it to multiples of 6, which is the default value found under Tools > Options > General > Grid Units. Resizing these objects, however, snaps them to a seemingly arbitrary grid of approximately 7.2 units.
I need these grids to match up so I'm not constantly fighting to get them to cooperate. I don't care what the actual number ends up being; I just need them to be the same. While I'm able to change the grid size, it must be a whole number, which the arbitrary grid size is not.
The problem is your units: points vs. pixels.
6 points * 4/3 pixels per point = 8 pixels, so all is good: a nice whole number of pixels.
Your approximate 7.2, I suspect, was actually 7.3333333.
Say you set 11 points. That gets converted as 11 * 4/3 = 14.6667 pixels, which is rounded to 15 pixels; converting back at 3/4 points per pixel gives 11.25 points.
Points being held as a Single while pixels are held as an Integer may be the problem in the math.
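A minimal sketch of that suspected round trip, assuming the usual 96-DPI conversion of 4/3 pixels per point (this is my guess at the mechanism, not documented Excel behaviour):

```python
# Suspected round trip: points kept as a float, pixels forced to a whole number.
PX_PER_PT = 4 / 3   # 96 pixels per inch / 72 points per inch
PT_PER_PX = 3 / 4

def snap(points):
    pixels = round(points * PX_PER_PT)   # pixel value stored as an integer
    return pixels * PT_PER_PX            # converted back to points

print(snap(6))    # 6 pt  -> 8 px  -> 6.0 pt   (no drift)
print(snap(11))   # 11 pt -> 15 px -> 11.25 pt (drift)
```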
I read everywhere that resolution is defined by the number of pixels on a screen.
But if you imagine 1000 x 1000 pixels on a screen the size of 20 skyscrapers and compare it to 999 x 999 pixels on a box of matches, the resolution would make the skyscrapers screen look 'low-res' and the box of matches screen look 'high-res'. Instinctively, I would say that the box of matches screen is higher resolution than the skyscrapers screen.
Am I wrong to say this? Is resolution definitely defined by the total number of pixels instead of the dots per inch?
Indeed, in the context of displays, the term resolution says nothing about pixel density. As stated in Wikipedia's article on Display Resolution:
The term "display resolution" is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: broadcast television resolution properly refers to the pixel density, the number of pixels per unit distance or area, not total number of pixels. In digital measurement, the display resolution would be given in pixels per inch (PPI)
The definition of resolution varies with the context; everything has its own measurement unit.
When you talk about screens (monitors), a screen has pixels, not dots, which is why its resolution is measured in pixels.
When you talk about printing or video, it is all about dots per inch. In your case, the box of matches is not a screen; it's printed paper.
For example, you might have heard people say DPI (not resolution) when scanning documents.
So don't confuse yourself with definitions of resolution that are meant for a different context.
What gives one the ability to define how deep the zooming process can go?
What I mean is that I earlier rendered the Mandelbrot set with 200 iterations and then compared the results with a 1000-iteration run. The results were rather surprising because I got the same zoom level. The iteration count was constant throughout, and the Mandelbrot set was rendered at a constant 512x512 pixels. What should I change in order to get a deeper zoom level?
Thanks!
Edit: I would also like to mention that, starting from a nice-looking picture, after I get to the 2nd-3rd zoom level the entire set is displayed as one giant pixel. Why is that?
2nd edit: After extensive research, I've just noticed that what makes the entire set look like one big pixel is that all points get the same iteration count; in my case they are all 60...
This may be too abstract, or too concrete, or incomprehensible. Like I said in the comment, it would be easier to discuss with your code at hand.
If you mean what I think you mean by zooming, you'd change the boundaries of c (in the formula z[n+1] = z[n]^2 + c).
To explain, the full Mandelbrot set is contained within a circle with radius 2 around a center [0;0]. The c in the formula is a complex number, i.e. [r;i] (real;imaginary), which, on the computer screen, corresponds to x and y.
In other words, if we place that radius 2 circle so that it is exactly contained within our image, then [-2;2] will be the upper left corner of our image, and [2;-2] is the lower right corner.
We then take each point of our image, calculate what its pixel coordinates [x;y] correspond to in terms of the smaller, "actual" coordinate system [r;i]. Then we have our c and can send it through our iterations.
So, to "zoom", you'd pick other boundaries [r;i] than the full [-2;2],[2:-2], e.g. [-1;1],[1:-1].
With 512x512 pixels, and an "actual" coordinate system that's now 2 by 2, that would mean each pixel corresponds to 2/512 units of the "actual" coordinate system. So your first r value would be -1, the next would be -1 + 2/512 = -0.99609375 etc.
The number of iterations only decides how accurate your rendering will be. Generally, the further you "zoom" in, the more accurate it will need to be, so the more iterations you'll need in order to capture the details.
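To tie that together, here is a rough sketch of the whole mapping in Python. The 512x512 size and the iteration limit come from your question; all the names are mine, and this is only an illustration of the idea, not your actual code:

```python
WIDTH = HEIGHT = 512
MAX_ITER = 200

# Zoomed-out view: upper-left corner [-2; 2], lower-right corner [2; -2].
# To zoom, shrink these boundaries, e.g. to [-1; 1] and [1; -1].
r_min, r_max = -2.0, 2.0
i_min, i_max = -2.0, 2.0

def iterations(c):
    """How many iterations of z = z*z + c it takes to escape the radius-2 circle."""
    z = 0j
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:
            return n
    return MAX_ITER

counts = [[0] * WIDTH for _ in range(HEIGHT)]
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Map pixel [x; y] into the current [r; i] boundaries.
        r = r_min + x * (r_max - r_min) / WIDTH
        i = i_max - y * (i_max - i_min) / HEIGHT
        counts[y][x] = iterations(complex(r, i))
# counts[y][x] can now be mapped to a colour for each pixel.
```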
I've been playing around with the private Carbon multitouch support framework and I've been able to retrieve various types of data.
Among these, each contact seems to have a size and is also described by an ellipse (angle, minor axis, major axis). However, I haven't been able to identify the frame of reference used for the size and for the minor and major axes.
If anybody has been able to find this out, I'd be interested in that information.
Thanks in advance
I've been using the framework for two years now and I've found that the ellipse is not in standard units (e.g. inches, millimeters). You could approximate millimeters by doubling the values you get for the ellipse.
Here's how I derived the ellipse information.
First, my best guess for how it works is that it's close to Synaptics "units per mm": http://ccdw.org/~cjj/l/docs/ACF126.pdf But since Apple has not released any of that information for developers, I'm relying on information that I print to the console.
You may get slightly different values based on the dimensions of the device (e.g. native trackpad vs magic mouse) you're using with the MultiTouchSupport.framework. This might also be caused by the differences in the surface (magic mouse is curved).
The code on http://www.steike.com/code/multitouch/ has a parameter called mm. This gives you the raw (non-normalized) position and velocity for the device.
Based on the observed min & max values of the width from mm (-47.5, 52.5), the trackpad is ~100 units wide (and ~75 units the other way). The trackpad is physically about 100 mm wide by 80 mm deep. But no, it's not a direct unit-to-millimeter translation; I think the parameter being named 'mm' may have just been a coincidence.
My forearm can cover about 90% of the surface of the trackpad. After laying it across the trackpad, the output reads about 58 units wide by 36 units long, with a size of 55. If you double those units you get 116 by 72, which is really close to 100 mm by 80 mm. So that's why I say just double the units to approximate the millimeters. I've done this with my forearm the other way, and with my palm, and the approximations still seem to hold.
The size of 55 doesn't seem to coincide with the values of ellipse. I'm inclined to believe that ellipse is an approximation of the surface dimensions and size is the actual surface area (probably in decimeters).
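If it helps, the only conversion I actually use is this trivial doubling, based purely on the observation above (my own helper name, not an official conversion):

```python
def ellipse_axes_to_mm(minor_axis, major_axis):
    """Rough guess: doubling the reported ellipse axes gives approximate millimeters."""
    return minor_axis * 2.0, major_axis * 2.0
```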
Sorry there's no straight answer (this is after all a reverse engineering project) but maybe this information can help you find the answer yourself.
(By the way, I'd be curious to know what you're working on.)
So say I have an image that I want to "pixelate". I want this sharp image represented by a grid of, say, 100 x 100 squares. So if the original photo is 500 px x 500 px, each square is 5 px x 5 px, and each square would have a color corresponding to the 5 px x 5 px group of pixels it swaps in for...
How do I figure out what this one color, the one that best represents the pixels it covers, should be? Do I just take the R, G, and B values of each of the 25 pixels and average them? Or is there some other, more obscure method I should know about? What is conventionally used in "pixelation" functions, say in Photoshop?
If you want to know about the 'theory' of pixelation, read up on resampling (and downsampling in particular). Pixelation algorithms are simply downsampling an image (using some downsampling method) and then upsampling it using nearest-neighbour interpolation. Note that in code these two steps may be fused into one.
For downsampling in general, to downsample by a factor of n the image is first filtered by an appropriate low-pass filter, and then one sample out of every n is taken. An "ideal" filter to use is the sinc filter, but because of issues with implementing it, the Lanczos filter is often used as a close alternative.
However, for almost all purposes when doing pixelization, using a simple box blur should work fine, and is very simple to implement. This is just an average of nearby pixels.
If you don't need to change the output size of the image, then this means you divide the image into blocks (the big resulting pixels) which are k×k pixels, and then replace all the pixels in each block with the average value of the pixels in that block.
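A minimal sketch of that block-average approach in Python (the image representation and the function name are assumptions, not from any particular library):

```python
# `image` is assumed to be a list of rows, each row a list of (r, g, b) tuples;
# `block` is the square size (5 for the 500x500 -> 100x100 example above).
def pixelate(image, block):
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, height, block):
        for bx in range(0, width, block):
            # Gather every pixel in this block and average each channel.
            pixels = [image[y][x]
                      for y in range(by, min(by + block, height))
                      for x in range(bx, min(bx + block, width))]
            avg = tuple(sum(channel) // len(pixels) for channel in zip(*pixels))
            # Write the single averaged colour back over the whole block.
            for y in range(by, min(by + block, height)):
                for x in range(bx, min(bx + block, width)):
                    out[y][x] = avg
    return out
```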
When the source and target grids are so evenly divisible and aligned, most algorithms give similar results. If the grids are fixed, go for simple averages.
In other cases, especially when resizing by a small percentage, the quality difference is quite evident. The simplest enhancement over a simple average is weighting each pixel value by how much of it is contained in the target pixel's area.
For more algorithms, check multivariate interpolation.
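To illustrate that weighting in one dimension, here is a rough sketch (the function name and the 1-D simplification are mine, just to show the idea):

```python
def downsample_1d(values, new_len):
    """Resample a 1-D signal, weighting each source sample by how much of it
    overlaps each destination sample's interval."""
    scale = len(values) / new_len          # source samples per destination sample
    out = []
    for i in range(new_len):
        start, end = i * scale, (i + 1) * scale
        total = weight = 0.0
        j = int(start)
        while j < end and j < len(values):
            overlap = min(j + 1, end) - max(j, start)   # fractional coverage
            total += values[j] * overlap
            weight += overlap
            j += 1
        out.append(total / weight)
    return out

print(downsample_1d([10, 20, 30, 40, 50], 2))   # [18.0, 42.0]
```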
I'm working on a UI which needs to work in different aspect ratios, 16:9, 16:10, 4:3
The idea is conceptually simple: Everything is centered to the screen in a rough 4:3 area and anything outside this portion of screen has basic artwork, so something like this:
(not drawn to scale)
Where the pink area represents where all the UI objects are positioned and the blue area is just background and effects.
The trick is in usability, if I pass in coordinates (0,0) in a 4:3 aspect ratio environment (0,0) would be the top left of the screen. However if I'm in a 16:9 environment (0,0) needs to get renormalized based on the new aspect ratio for it to be in the appropriate place. So my question is: How can I achieve this?
Edit: For clarification, this is basically for a UI system, and while I listed the ratios above as 4:3, 16:9, and 16:10, it should be able to dynamically adjust values for whatever aspect ratio it is set to.
Edit 2: Just to add more detail to the situation: positions are passed in as a percentage of the screen's current width/height, so setting position x would basically be [pos x as portion of screen] * SCREEN_WIDTH, where SCREEN_WIDTH is the width of the current screen itself.
The obvious answer seems to be an offset. Since 4:3 is 12:9, a 16:9 screen needs 2x9 bands to the left and the right. Hence, the X offset should be (2/16) * width.
For 16:10 screens, the factor is slightly more complicated: 4:3 is 13.33:10, so you have edges of width 1.33 on each side, and the X offset should be (1.33/16) * width = (1/12) * width.
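Here is a small sketch of that offset calculation for an arbitrary screen size (the function and variable names are mine, not from any particular UI framework):

```python
def content_offset_x(screen_width, screen_height, content_aspect=4 / 3):
    """Horizontal offset of the centred 4:3 content area, in screen units."""
    content_width = screen_height * content_aspect
    return max(0.0, (screen_width - content_width) / 2)

print(content_offset_x(1920, 1080))   # 16:9  -> 240.0 ( = width / 8 )
print(content_offset_x(1920, 1200))   # 16:10 -> 160.0 ( = width / 12 )
print(content_offset_x(1024, 768))    #  4:3  -> 0.0
```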
So... can't you just come up with an abstraction layer that hides the differences? One idea could be to model a "border" around the active area that gets added. For 4:3 displays, set the border size to 0 so the active area covers the full screen.