Color Ranges in ILNumerics Surface Plot - colors

I am using ILNumerics to generate a surface plot.
I want to use a flat shaded color map (i.e. color ranges) instead of a smooth shaded color map (i.e. each pixel has its own color).
Is this possible with ILNumerics?
Example of Flat-Shaded surface plot and color bar legend:
Example of Smooth-Shaded surface plot and color bar legend:

You can create a colormap which exhibits a flat shading behavior. Just duplicate the keypoints of an existing common colormap so that a whole range of color data gets the same color assigned.
How flat shaded colormaps work
According to the documentation the keypoints for colormaps consist of 5 columns: a "position" and 4 color values (RGBA). In order to model a 'flat' shaded colormap, place two keypoints 'almost' exactly on top of each other, giving the first one the color of the next lower range and the second one the color of the next higher range. A color range is therefore modeled by two keypoints having the same color assigned.
I wrote 'almost' in the above paragraph because I thought you had to leave at least a tiny gap between the edges of two ranges - hoping no actual color data value would ever hit that gap. But it seems no gap is needed at all, and one can give both keypoints exactly the same value. You will have to be careful while sorting, though: don't mix the colors up (the quicksort in ILMath.sort() is not stable!)
In the following example a flat shaded colormap is created from Colormaps.Jet:
Keypoints for Colormaps.Jet (original, interpolating)
<Single> [6,5]
[0]: 0 0 0 0.5625 1
[1]: 0.1094 0 0 0.9375 1
[2]: 0.3594 0 0.9375 1 1
[3]: 0.6094 0.9375 1 0.0625 1
[4]: 0.8594 1 0.0625 0 1
[5]: 1 0.5000 0 0 1
The flat shading version derived from it:
Colormaps.Jet - flat shading version
<Single> [11,5]
[0]: 0 0 0 0.5625 1
[1]: 0.1094 0 0 0.5625 1
[2]: 0.1094 0 0 0.9375 1
[3]: 0.3594 0 0 0.9375 1
[4]: 0.3594 0 0.9375 1 1
[5]: 0.6094 0 0.9375 1 1
[6]: 0.6094 0.9375 1 0.0625 1
[7]: 0.8594 0.9375 1 0.0625 1
[8]: 0.8594 1 0.0625 0 1
[9]: 1.0000 1 0.0625 0 1
[10]: 1 0.5000 0 0 1
As you can easily see, I made a mistake in CreateFlatShadedColormap(): The last keypoint with (0.5,0,0,1) will never be used. I'll leave it as an exercise to fix that... ;)
Full Flat Shaded Example
private void ilPanel1_Load(object sender, EventArgs e) {
    ILArray<float> A = ILMath.tosingle(ILSpecialData.terrain["0:400;0:400"]);
    // derive a 'flat shaded' colormap from the Jet colormap
    var cm = new ILColormap(Colormaps.Jet);
    ILArray<float> cmData = cm.Data;
    cmData.a = Computation.CreateFlatShadedColormap(cmData);
    cm.SetData(cmData);
    // display interpolating colormap
    ilPanel1.Scene.Add(new ILPlotCube() {
        Plots = {
            new ILSurface(A, colormap: Colormaps.Jet) {
                Children = { new ILColorbar() },
                Wireframe = { Visible = false }
            }
        },
        ScreenRect = new RectangleF(0, -0.05f, 1, 0.6f)
    });
    // display flat shading colormap
    ilPanel1.Scene.Add(new ILPlotCube() {
        Plots = {
            new ILSurface(A, colormap: cm) {
                Children = { new ILColorbar() },
                Wireframe = { Visible = false }
            }
        },
        ScreenRect = new RectangleF(0, 0.40f, 1, 0.6f)
    });
}

private class Computation : ILMath {
    public static ILRetArray<float> CreateFlatShadedColormap(ILInArray<float> cm) {
        using (ILScope.Enter(cm)) {
            // create array large enough to hold the new colormap
            ILArray<float> ret = zeros<float>(cm.S[0] * 2 - 1, cm.S[1]);
            // copy the original keypoints
            ret[r(0, cm.S[0] - 1), full] = cm;
            // duplicate the original keypoints, give a small offset (may not even be needed?)
            ret[r(cm.S[0], end), 0] = cm[r(1, end), 0] - epsf;
            ret[r(cm.S[0], end), r(1, end)] = cm[r(0, end - 1), r(1, end)];
            // reorder to sort keypoints in ascending order
            ILArray<int> I = 1;
            sort(ret[full, 0], Indices: I);
            return ret[I, full];
        }
    }
}
Result

This is not possible. Surface plots in ILNumerics always interpolate colors between grid points. For other shading models you must create your own surface class.


How to find the count of a specific value in a column of a pandas data frame and use it for calculations

I have a pandas data frame similar to the one shown below. For every unique value of Domain I want to calculate (count(EV) + count(PV) + count(DV) + count(GV) where the value is Green) divided by the total count of values in that domain.
Domain   EV     PV     DV      GV     Numerator(part)  denominator(part)  ideal Output
KA-BLR   Green  Blue   Green          1                6                  0.166
KA-BLR   Green  Green  Blue           1                6                  0.166
KL-TRV   Green  Blue   Yellow  Red    0.5              7                  0.071
KL-TRV   Green  Blue   Blue           0.5              7                  0.071
KL-COK   Blue   Blue   Yellow  Green  0.25             4                  0.0625
TN-CHN   Green  Blue                  0.5              5                  0.1
TN-CHN   Green  Blue   Yellow         0.5              5                  0.1
Sample Code
OVER_ALL_SCORE = {}
for Domain in df_RR["Domain"].unique():
    # count of greens
    EV_G = (df_RR['EV'] == 'Green').sum()
    PV_G = (df_RR['PV'] == 'Green').sum()
    DV_G = (df_RR['DV'] == 'Green').sum()
    GV_G = (df_RR['GV'] == 'Green').sum()
    # count of all values excluding null
    EV = df_RR['EV'].sum()
    PV = df_RR['PV'].sum()
    DV = df_RR['DV'].sum()
    GV = df_RR['GV'].sum()
    # so (0.25*(sum of greens for "EV") + 0.25*(sum of greens for "PV")
    #  + 0.25*(sum of greens for "DV") + 0.25*(sum of greens for "GV")) / total count of values
    Numerator = (0.25*EV_G) + (0.25*PV_G) + (0.25*DV_G) + (0.25*GV_G)
    denominator = EV+PV+DV+GV
    try:
        OVER_ALL_SCORE[domain] = (Numerator/denominator)
    except:
        OVER_ALL_SCORE[domain] = 0
df_RR['Overall_score'] = df_RR['Domain'].map(OVER_ALL_SCORE)
Currently this logic returns the same value across all the domains. Please help me resolve it.
Thanks in advance.
Here's a solution that gives the ideal output:
OVER_ALL_SCORE = {}
for Domain in df_RR["Domain"].unique():
    sub_df = df_RR.loc[df_RR['Domain'] == Domain]
    # count of greens
    EV_G = (sub_df['EV'] == 'Green').sum()
    PV_G = (sub_df['PV'] == 'Green').sum()
    DV_G = (sub_df['DV'] == 'Green').sum()
    GV_G = (sub_df['GV'] == 'Green').sum()
    # count of all values
    EV = sub_df['EV'].count()
    PV = sub_df['PV'].count()
    DV = sub_df['DV'].count()
    GV = sub_df['GV'].count()
    numerator = (0.25*EV_G) + (0.25*PV_G) + (0.25*DV_G) + (0.25*GV_G)
    denominator = EV+PV+DV+GV
    try:
        OVER_ALL_SCORE[Domain] = (numerator/denominator)
    except:
        OVER_ALL_SCORE[Domain] = 0
df_RR['Overall_score'] = df_RR['Domain'].map(OVER_ALL_SCORE)
There are a few changes that were key:
count() vs. sum()
In your count of ALL VALUES you'll want to use the count method rather than the sum method (otherwise, this code will just concatenate the string values in the table):
df_RR['EV'].sum()
This returns: 'GreenGreenGreenBlueGreen' (since the sum method simply adds all of the values in the series).
Use this instead:
df_RR['EV'].count()
The reason it works in your count of the greens is that this code df_RR['EV'] == 'Green' is returning a series of booleans which can be summed correctly to give you the number of greens (since it will add the trues as 1's and the falses as zeros):
True True True False True is the same as 1 1 1 0 1
The Main Problem
Currently, your counts are the same in each loop iteration because you're not filtering according to the domain. I would create a sub-dataframe based on the domain you're looking at as the first step in your loop:
domain_df = df_RR.loc[df_RR['Domain'] == Domain]
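As a side note, the same per-domain score can also be computed without an explicit loop by using groupby. This is just a rough sketch, assuming the frame has the columns shown in the table above (EV/PV/DV/GV as strings, with nulls for the missing cells):

import pandas as pd

color_cols = ['EV', 'PV', 'DV', 'GV']

def domain_score(group):
    greens = (group[color_cols] == 'Green').sum().sum()  # total 'Green' cells in this domain
    total = group[color_cols].count().sum()              # total non-null cells in this domain
    return 0.25 * greens / total if total else 0

scores = df_RR.groupby('Domain').apply(domain_score)     # one score per domain
df_RR['Overall_score'] = df_RR['Domain'].map(scores)

For the KA-BLR rows above this gives 0.25 * 4 / 6 = 0.166, matching the ideal output.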

Create a colour histogram from an image file

I'd like to use Nim to check the results of my Puppeteer test run executions.
Part of the end result is a screenshot. That screenshot should contain a certain amount of active colours, an active colour being orange, blue, red, or green. They indicate activity is present in the incoming data. Black, grey, and white need to be excluded; they only represent static data.
I haven't found a solution I can use yet.
import stb_image/read as stbi

var
  w, h, c: int
  data: seq[uint8]
  cBin: array[256, int]  # colour range was 0->255 afaict

data = stbi.load("screenshot.png", w, h, c, stbi.Default)
for d in data:
  cBin[(int)d] = cBin[(int)d] + 1
echo cBin
Now I have a uint array, which I can see I can use to construct a histogram of the values, but I don't know how to map these to something like RGB values. Pointers anyone?
Is there a better package which does this automagically? I didn't spot one.
stbi.load() will return a sequence of interleaved uint8 color components. The number of interleaved components is determined either by c (i.e. channels_in_file) or desired_channels when it is non-zero.
For example, when channels_in_file == stbi.RGB and desired_channels == stbi.Default there are 3 interleaved components of red, green, and blue.
[
# r g b
255, 0, 0, # Pixel 1
0, 255, 0, # Pixel 2
0, 0, 255, # Pixel 3
]
You can process the above like:
import colors

for i in countUp(0, data.len - 3, step = stbi.RGB):
  let
    r = data[i + 0]
    g = data[i + 1]
    b = data[i + 2]
    pixelColor = colors.rgb(r, g, b)
  echo pixelColor
You can read more on this within the comments of stb_image.h.
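If it helps to see the idea outside of Nim, here is the same per-pixel logic as a small Python sketch (purely illustrative, not part of the stb_image answer). It treats a pixel as an "active" colour whenever its R, G and B components differ noticeably, since black, grey and white have roughly equal components; the tolerance value is an assumption you would tune.

# given interleaved RGB bytes, count "active" pixels, i.e. pixels that are not
# black / grey / white (for greys, R, G and B are all roughly equal)
def count_active_pixels(data, channels=3, tolerance=20):
    active = 0
    for i in range(0, len(data) - channels + 1, channels):
        r, g, b = data[i], data[i + 1], data[i + 2]
        if max(r, g, b) - min(r, g, b) > tolerance:  # clearly not a grey tone
            active += 1
    return active

print(count_active_pixels([255, 0, 0,   128, 128, 128,   0, 0, 255]))  # -> 2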

How to make space for stitching multiple images in OpenCV - Python3 [duplicate]

I'm trying to stitch 2 images together. I use template matching to find 3 sets of points which I pass to cv2.getAffineTransform() to get a warp matrix, which I then pass to cv2.warpAffine() to align my images.
However when I join my images, the majority of my affine'd image isn't shown. I've tried using different techniques to select points, changed the order of arguments etc., but I can only ever get a thin sliver of the affine'd image to be shown.
Could somebody tell me whether my approach is a valid one and suggest where I might be making an error? Any guesses as to what could be causing the problem would be greatly appreciated. Thanks in advance.
This is the final result that I get. Here are the original images (1, 2) and the code that I use:
EDIT: Here are the contents of the variable trans:
array([[ 1.00768049e+00, -3.76690353e-17, -3.13824885e+00],
[ 4.84461775e-03, 1.30769231e+00, 9.61912797e+02]])
And here are the points passed to cv2.getAffineTransform: unified_pair1
array([[ 671., 1024.],
[ 15., 979.],
[ 15., 962.]], dtype=float32)
unified_pair2
array([[ 669., 45.],
[ 18., 13.],
[ 18., 0.]], dtype=float32)
import cv2
import numpy as np

def showimage(image, name="No name given"):
    cv2.imshow(name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    return

image_a = cv2.imread('image_a.png')
image_b = cv2.imread('image_b.png')

def get_roi(image):
    roi = cv2.selectROI(image) # spacebar to confirm selection
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    crop = image_a[int(roi[1]):int(roi[1]+roi[3]), int(roi[0]):int(roi[0]+roi[2])]
    return crop

temp_1 = get_roi(image_a)
temp_2 = get_roi(image_a)
temp_3 = get_roi(image_a)

def find_template(template, search_image_a, search_image_b):
    ccnorm_im_a = cv2.matchTemplate(search_image_a, template, cv2.TM_CCORR_NORMED)
    template_loc_a = np.where(ccnorm_im_a == ccnorm_im_a.max())
    ccnorm_im_b = cv2.matchTemplate(search_image_b, template, cv2.TM_CCORR_NORMED)
    template_loc_b = np.where(ccnorm_im_b == ccnorm_im_b.max())
    return template_loc_a, template_loc_b

coord_a1, coord_b1 = find_template(temp_1, image_a, image_b)
coord_a2, coord_b2 = find_template(temp_2, image_a, image_b)
coord_a3, coord_b3 = find_template(temp_3, image_a, image_b)

def unnest_list(coords_list):
    coords_list = [a[0] for a in coords_list]
    return coords_list

coord_a1 = unnest_list(coord_a1)
coord_b1 = unnest_list(coord_b1)
coord_a2 = unnest_list(coord_a2)
coord_b2 = unnest_list(coord_b2)
coord_a3 = unnest_list(coord_a3)
coord_b3 = unnest_list(coord_b3)

def unify_coords(coords1, coords2, coords3):
    unified = []
    unified.extend([coords1, coords2, coords3])
    return unified

# Create 2 lists containing 3 pairs of coordinates
unified_pair1 = unify_coords(coord_a1, coord_a2, coord_a3)
unified_pair2 = unify_coords(coord_b1, coord_b2, coord_b3)

# Convert elements of lists to numpy arrays with data type float32
unified_pair1 = np.asarray(unified_pair1, dtype=np.float32)
unified_pair2 = np.asarray(unified_pair2, dtype=np.float32)

# Get result of the affine transformation
trans = cv2.getAffineTransform(unified_pair1, unified_pair2)

# Apply the affine transformation to original image
result = cv2.warpAffine(image_a, trans, (image_a.shape[1] + image_b.shape[1], image_a.shape[0]))
result[0:image_b.shape[0], image_b.shape[1]:] = image_b

showimage(result)
cv2.imwrite('result.png', result)
Sources: Approach based on advice received here, this tutorial and this example from the docs.
July 12 Edit:
This post inspired GitHub repos providing functions to accomplish this task; one for a padded warpAffine() and another for a padded warpPerspective(). Check out the Python version or the C++ version.
Transformations shift the location of pixels
What any transformation does is takes your point coordinates (x, y) and maps them to new locations (x', y'):
[s*x']   [h1 h2 h3]   [x]
[s*y'] = [h4 h5 h6] * [y]
[ s  ]   [h7 h8  1]   [1]
where s is some scaling factor. You must divide the new coordinates by the scale factor to get back the proper pixel locations (x', y'). Technically, this is only true of homographies---(3, 3) transformation matrices---you don't need to scale for affine transformations (you don't even need to use homogeneous coordinates...but it's better to keep this discussion general).
Then the actual pixel values are moved to those new locations, and the color values are interpolated to fit the new pixel grid. So during this process, these new locations get recorded at some point. We'll need those locations to see where the pixels actually move to, relative to the other image. Let's start with an easy example and see where points are mapped.
Suppose your transformation matrix simply shifts pixels to the left by ten pixels. Translation is handled by the last column; the first row is the translation in x and second row is the translation in y. So we would have an identity matrix, but with -10 in the first row, third column. Where would the pixel (0,0) be mapped? Hopefully, (-10,0) if logic makes any sense. And in fact, it does:
transf = np.array([[1.,0.,-10.],[0.,1.,0.],[0.,0.,1.]])
homg_pt = np.array([0,0,1])
new_homg_pt = transf.dot(homg_pt)
new_homg_pt /= new_homg_pt[2]
# new_homg_pt = [-10. 0. 1.]
Perfect! So we can figure out where all points map with a little linear algebra. We will need to get all the (x,y) points and put them into a huge array so that every single point is in its own column. Let's pretend our image is only 4x4.
h, w = src.shape[:2] # 4, 4
indY, indX = np.indices((h,w)) # similar to meshgrid/mgrid
lin_homg_pts = np.stack((indX.ravel(), indY.ravel(), np.ones(indY.size)))
These lin_homg_pts have every homogenous point now:
[[ 0. 1. 2. 3. 0. 1. 2. 3. 0. 1. 2. 3. 0. 1. 2. 3.]
[ 0. 0. 0. 0. 1. 1. 1. 1. 2. 2. 2. 2. 3. 3. 3. 3.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
Then we can do matrix multiplication to get the mapped value of every point. For simplicity, let's stick with the previous homography.
trans_lin_homg_pts = transf.dot(lin_homg_pts)
trans_lin_homg_pts /= trans_lin_homg_pts[2,:]
And now we have the transformed points:
[[-10. -9. -8. -7. -10. -9. -8. -7. -10. -9. -8. -7. -10. -9. -8. -7.]
[ 0. 0. 0. 0. 1. 1. 1. 1. 2. 2. 2. 2. 3. 3. 3. 3.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
As we can see, everything is working as expected: we have shifted the x-values only, by -10.
Pixels can be shifted outside of your image bounds
Notice that these pixel locations are negative---they're outside of the image bounds. If we do something a little more complex and rotate the image by 45 degrees, we'll get some pixel values way outside our original bounds. We don't care about every pixel value though, we just need to know how far the farthest pixels are that are outside the original image pixel locations, so that we can pad the original image that far out, before displaying the warped image on it.
theta = 45*np.pi/180
transf = np.array([
[ np.cos(theta),np.sin(theta),0],
[-np.sin(theta),np.cos(theta),0],
[0.,0.,1.]])
print(transf)
trans_lin_homg_pts = transf.dot(lin_homg_pts)
minX = np.min(trans_lin_homg_pts[0,:])
minY = np.min(trans_lin_homg_pts[1,:])
maxX = np.max(trans_lin_homg_pts[0,:])
maxY = np.max(trans_lin_homg_pts[1,:])
# minX: 0.0, minY: -2.12132034356, maxX: 4.24264068712, maxY: 2.12132034356,
So we see that we can get pixel locations well outside our original image, both in the negative and positive directions. The minimum x value doesn't change because when an homography applies a rotation, it does it from the top-left corner. Now one thing to note here is that I've applied the transformation to all pixels in the image. But this is really unnecessary, you can simply warp the four corner points and see where they land.
Padding the destination image
Note that when you call cv2.warpAffine() you have to input the destination size. These transformed pixel values reference that size. So if a pixel gets mapped to (-10,0), it won't show up in the destination image. That means that we'll have to make another homography with translations which shift all pixel locations to be positive, and then we can pad the image matrix to compensate for our shift. We'll also have to pad the original image on the bottom and the right if the homography moves points to positions bigger than the image, too.
In the recent example, the min x value is the same, so we need no horizontal shift. However, the min y value has dropped by about two pixels, so we need to shift the image two pixels down. First, let's create the padded destination image.
pad_sz = list(src.shape) # in case three channel
pad_sz[0] = np.round(np.maximum(pad_sz[0], maxY) - np.minimum(0, minY)).astype(int)
pad_sz[1] = np.round(np.maximum(pad_sz[1], maxX) - np.minimum(0, minX)).astype(int)
dst_pad = np.zeros(pad_sz, dtype=np.uint8)
# pad_sz = [6, 4, 3]
As we can see, the height increased from the original by two pixels to account for that shift.
Add translation to the transformation to shift all pixel locations to positive
Now, we need to create a new homography matrix to translate the warped image by the same amount that we shifted by. And to apply both transformations---the original and this new shift---we have to compose the two homographies (for an affine transformation, you can simply add the translation, but not for an homography). Additionally we need to divide by the last entry to make sure the scales are still proper (again, only for homographies):
anchorX, anchorY = 0, 0
transl_transf = np.eye(3,3)
if minX < 0:
    anchorX = np.round(-minX).astype(int)
    transl_transf[0,2] += anchorX
if minY < 0:
    anchorY = np.round(-minY).astype(int)
    transl_transf[1,2] += anchorY
new_transf = transl_transf.dot(transf)
new_transf /= new_transf[2,2]
I also created here the anchor points for where we will place the destination image into the padded matrix; it's shifted by the same amount the homography will shift the image. So let's place the destination image inside the padded matrix:
dst_pad[anchorY:anchorY+dst_sz[0], anchorX:anchorX+dst_sz[1]] = dst
Warp with the new transformation into the padded image
All we have left to do is apply the new transformation to the source image (with the padded destination size), and then we can overlay the two images.
warped = cv2.warpPerspective(src, new_transf, (pad_sz[1],pad_sz[0]))
alpha = 0.3
beta = 1 - alpha
blended = cv2.addWeighted(warped, alpha, dst_pad, beta, 1.0)
Putting it all together
Let's create a function for this since we were creating quite a few variables we don't need at the end here. For inputs we need the source image, the destination image, and the original homography. And for outputs we simply want the padded destination image, and the warped image. Note that in the examples we used a 3x3 homography so we better make sure we send in 3x3 transforms instead of 2x3 affine or Euclidean warps. You can just add the row [0,0,1] to any affine warp at the bottom and you'll be fine.
def warpPerspectivePadded(src, dst, transf):
    src_h, src_w = src.shape[:2]
    lin_homg_pts = np.array([[0, src_w, src_w, 0], [0, 0, src_h, src_h], [1, 1, 1, 1]])

    trans_lin_homg_pts = transf.dot(lin_homg_pts)
    trans_lin_homg_pts /= trans_lin_homg_pts[2,:]

    minX = np.min(trans_lin_homg_pts[0,:])
    minY = np.min(trans_lin_homg_pts[1,:])
    maxX = np.max(trans_lin_homg_pts[0,:])
    maxY = np.max(trans_lin_homg_pts[1,:])

    # calculate the needed padding and create a blank image to place dst within
    dst_sz = list(dst.shape)
    pad_sz = dst_sz.copy() # to get the same number of channels
    pad_sz[0] = np.round(np.maximum(dst_sz[0], maxY) - np.minimum(0, minY)).astype(int)
    pad_sz[1] = np.round(np.maximum(dst_sz[1], maxX) - np.minimum(0, minX)).astype(int)
    dst_pad = np.zeros(pad_sz, dtype=np.uint8)

    # add translation to the transformation matrix to shift to positive values
    anchorX, anchorY = 0, 0
    transl_transf = np.eye(3,3)
    if minX < 0:
        anchorX = np.round(-minX).astype(int)
        transl_transf[0,2] += anchorX
    if minY < 0:
        anchorY = np.round(-minY).astype(int)
        transl_transf[1,2] += anchorY
    new_transf = transl_transf.dot(transf)
    new_transf /= new_transf[2,2]

    dst_pad[anchorY:anchorY+dst_sz[0], anchorX:anchorX+dst_sz[1]] = dst

    warped = cv2.warpPerspective(src, new_transf, (pad_sz[1], pad_sz[0]))

    return dst_pad, warped
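As a quick illustration of the note above about 2x3 affine warps (a hedged sketch, not part of the original answer; trans is assumed to be the 2x3 affine matrix from the question's code):

import numpy as np

# append the row [0, 0, 1] to promote the 2x3 affine warp to a full 3x3 transform
trans_3x3 = np.vstack([trans, [0, 0, 1]])
# trans_3x3 can now be passed as the third argument to warpPerspectivePadded()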
Example of running the function
Finally, we can call this function with some real images and homographies and see how it pans out. I'll borrow the example from LearnOpenCV:
src = cv2.imread('book2.jpg')
pts_src = np.array([[141, 131], [480, 159], [493, 630],[64, 601]], dtype=np.float32)
dst = cv2.imread('book1.jpg')
pts_dst = np.array([[318, 256],[534, 372],[316, 670],[73, 473]], dtype=np.float32)
transf = cv2.getPerspectiveTransform(pts_src, pts_dst)
dst_pad, warped = warpPerspectivePadded(src, dst, transf)
alpha = 0.5
beta = 1 - alpha
blended = cv2.addWeighted(warped, alpha, dst_pad, beta, 1.0)
cv2.imshow("Blended Warped Image", blended)
cv2.waitKey(0)
And we end up with this padded warped image:
as opposed to the typical cut off warp you would normally get.

How to prevent xtick labels from being too cramped?

With xticks automatically generated by gnuplot we find too often that the labels are too tight / cramped together as shown in this snapshot.
How can we fix this issue?
This is a very crude workaround. The idea is to tell it approximately how many tick labels one wants, and have gnuplot "translate" that into a suitable tick spacing according to the whole plotting range.
I am posting what I am using now, and it works reasonably well.
It assumes xmin=0.
You can probably guess the way it works and tune it.
# Get/print stats about input data, ...
stats "output.csv" using 2:5 nooutput
# ... and use them for setting the number of tick labels for x axes, to avoid overlap
#tmin = STATS_min_x
tmin = 0
tmax = STATS_max_x
nxtics = 5 # Tune this
# Do not count 0 as a tick
nxtics = nxtics - 1
# Shift numbers to the range [1,10)
ttic1 = tmax / nxtics
nshift_digits = -floor(log10(ttic1))
shift = 10.0**nshift_digits
tmax_shift = tmax * shift
ttic1_shift = ttic1 * shift
# ttic1_shift should be between [1,10)
# Use an (arbitrary) set of allowed tick spacings (here 1, 2, 5 in the first significant digit,
# but one could use others, including 2.5, e.g.) and pick the one that best matches the data range
# and selected number of tick labels. Note that the number of tick labels will not be strictly maintained.
# Tune these numbers
ttic_shift = 1.0
if (ttic1_shift < 1.3) {
ttic_shift = 1.0
} else { if (ttic1_shift < 3.0) {
ttic_shift = 2.0
} else { if (ttic1_shift < 7.0) {
ttic_shift = 5.0
} else {
ttic_shift = 10.0
} } }
ttic = ttic_shift / shift
print "ttic=", ttic
PS: I could not get this working, although I did not try "hard". I guess that solution might work for a single plotted dataset, but I am not sure it would work for more than one.
If they aren't too cramped, you can rotate them 90 degrees i.e.
set xtics rotate by 90

Raphael transform animation not behaving

I'm doing an animated transform in Raphael (and Snap.svg, which does the same).
If I apply a rotation to a basic element, it rotates normally as I would expect. However, if I already have a previous transform applied (even if it's t0,0 or r0), the element seems to scale down and back up, as though it always has to fit in its previous bounding box or something.
Here is an example fiddle
var r1 = s.rect(0,0,100,100,20,20).attr({ fill: "red", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r1.transform('t0,0'); // any transform leads to shrink on rotate...
r1.animate({ transform: 'r90,50,50' }, 2000);
var r2 = s.rect(150,0,100,100,20,20).attr({ fill: "blue", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r2.animate({ transform: 'r90,200,50' }, 2000);
Is there something obvious I'm missing about animated transforms that explains what is happening?
There are a couple different things you need to understand to figure out what's going on here.
The first is that your animating transform is replacing your original transform, not adding on to it. If you include the original transform instruction in the animation, you avoid the shrinking effect:
var r1 = s.rect(0,0,100,100,20,20)
.attr({ fill: "red", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r1.transform('t0,0');
// any transform leads to shrink on rotate...
r1.animate({ transform: 't0,0r90,50,50' }, 5000);
//unless you repeat that transform in the animation instructions
http://jsfiddle.net/96D8t/3/
You can also avoid the shrinking effect if your original transformation is a rotation around the same center:
var r1 = s.rect(0,0,100,100,20,20)
.attr({ fill: "red", opacity: "0.8", stroke: "black", strokeWidth: "2" });
r1.transform('r0,50,50'); // no shrinking this time...
r1.animate({ transform: 'r90,50,50' }, 2000);
http://jsfiddle.net/96D8t/4/
But why should it make a difference, seeing as a translation of 0,0 or a rotation of 0 doesn't actually change the graphic? It's a side effect of the way the program calculates in-between values when you ask it to convert between two different types of transformations.
Snap/Raphael are converting your two different transformations into matrix transformations, and then interpolating (calculating intermediate values) between each value in the matrix.
A 2D graphical transformation can be represented by a matrix of the form
a c e
b d f
(that's the standard lettering)
You can think of the two rows of the matrix as two algebra formulas for determining the final x and y value, where the first number in the row is multiplied by the original x value, the second number is multiplied by the original y value, and the third number is multiplied by a constant 1:
newX = a*oldX + c*oldY + e;
newY = b*oldX + d*oldY + f;
The matrix for a do-nothing transformation like t0,0 is
1 0 0
0 1 0
Which is actually represented internally as an object with named values, like
{a:1, c:0, e:0,
b:0, d:1, f:0}
Either way, it just says that the newX is 1 times the oldX, and the newY is 1 times the oldY.
The matrix for your r90,50,50 command is:
0 -1 100
1 0 0
I.e., if your old point is (50,100), the formulas are
newX = 0*50 -1*100 + 100*1 = 0
newY = 1*50 + 0*100 + 0 = 50
The point (50,100) gets rotated 90 degrees around the point (50,50) to become (0,50), just as expected.
Where it starts getting unexpected is when you try to transform
1 0 0
0 1 0
to
0 -1 100
1 0 0
If you transform each number in the matrix from the start value to the end value, the half-way point would be
0.5 -0.5 50
0.5 0.5 0
Which works out as the matrix for scaling the rectangle down (by a linear factor of 1/√2, i.e. to half its area) and rotating it 45 degrees around (50,50).
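If you want to verify that claim numerically, here is a small check (using Python/NumPy rather than Snap/Raphael, purely for illustration; the matrices are the ones from the explanation above):

import numpy as np

start = np.array([[1, 0,   0],
                  [0, 1,   0],
                  [0, 0,   1]], dtype=float)   # matrix for 't0,0'
end = np.array([[0, -1, 100],
                [1,  0,   0],
                [0,  0,   1]], dtype=float)    # matrix for 'r90,50,50'

halfway = (start + end) / 2                    # element-wise interpolation at t = 0.5

# expected transform: rotate 45 degrees about (50,50) with uniform scale 1/sqrt(2)
theta, s = np.pi / 4, 1 / np.sqrt(2)
R = np.array([[s*np.cos(theta), -s*np.sin(theta), 0],
              [s*np.sin(theta),  s*np.cos(theta), 0],
              [0, 0, 1]])
T, Tinv = np.eye(3), np.eye(3)
T[:2, 2] = 50, 50
Tinv[:2, 2] = -50, -50

print(np.allclose(halfway, T @ R @ Tinv))      # True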
All of that might be more math than you needed to know, but I hope it helps make sense of what's going on.
Regardless, the easy solution is to make sure that you always match up the types of transforms before and after the animation, so that the program can interpolate the original values, instead of the matrix values.
