How to increase the resolution of a GIF image? - linux

How can I increase the resolution of a GIF generated by the rgl package in R (the plot3d and movie3d functions), either externally or from within R?
R code:
library(rgl)  # provides plot3d, text3d, spin3d and movie3d
MyX <- rnorm(10, 5, 1)
MyY <- rnorm(10, 5, 1)
MyZ <- rnorm(10, 5, 1)
plot3d(MyX, MyY, MyZ, xlab = "X", ylab = "Y", zlab = "Z", type = "s", box = TRUE, axes = FALSE)
text3d(MyX, MyY, MyZ, text = c(1:10), cex = 5, adj = 1)
movie3d(spin3d(axis = c(0, 0, 1), rpm = 4), duration = 15, movie = "TestMovie",
        type = "gif", dir = "~/Desktop")
Output:
Update
Adding this line at the beginning of the code solved the problem:
r3dDefaults$windowRect <- c(0, 100, 1400, 1400)

I don't think you can do much about the resolution of the GIF itself. As an alternative, make the image much larger; when you then display it at a smaller size, it looks better. This is untested, as a recent upgrade broke a thing or two for me, but it did work under R 2.15:
par3d(windowRect = c(0, 0, 500, 500))  # make the window large
par3d(zoom = 1.1)                      # larger values make the image smaller
# you can test your settings interactively at this point
M <- par3d("userMatrix")               # save your settings to pass to the movie
movie3d(par3dinterp(userMatrix = list(M,
                                      rotate3d(M, pi, 1, 0, 0),
                                      rotate3d(M, pi, 0, 1, 0))),
        duration = 5, fps = 50,
        movie = "MyMovie")
HTH. If it doesn't quite work for you, check out the functions used and tune up the settings.

Related

How does Elevation of a Head Pose in Python-OpenCV work?

I am trying to estimate the head pose of single images, mostly following this guide:
https://towardsdatascience.com/real-time-head-pose-estimation-in-python-e52db1bc606a
The detection of the face works fine - if I plot the image and the detected landmarks, they line up nicely.
I am estimating the camera matrix from the image, and assume no lens distortion:
import cv2
import numpy as np

size = image.shape
focal_length = size[1]
center = (size[1] / 2, size[0] / 2)
camera_matrix = np.array([[focal_length, 0, center[0]],
                          [0, focal_length, center[1]],
                          [0, 0, 1]], dtype="double")
dist_coeffs = np.zeros((4, 1))  # assuming no lens distortion
I am trying to get the head pose by matching points in the image with points in the 3D model using solvePnP:
# 3D model points to which the points extracted from an image are matched:
model_points = np.array([
    (0.0, 0.0, 0.0),           # Nose tip
    (0.0, -330.0, -65.0),      # Chin
    (-225.0, 170.0, -135.0),   # Left eye corner
    (225.0, 170.0, -135.0),    # Right eye corner
    (-150.0, -150.0, -125.0),  # Left mouth corner
    (150.0, -150.0, -125.0)    # Right mouth corner
])
image_points = np.array([
    shape[30],  # Nose tip
    shape[8],   # Chin
    shape[36],  # Left eye left corner
    shape[45],  # Right eye right corner
    shape[48],  # Left mouth corner
    shape[54]   # Right mouth corner
], dtype="double")
success, rotation_vec, translation_vec = \
    cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)
Finally, I am getting the Euler angles from the rotation:
rotation_mat, _ = cv2.Rodrigues(rotation_vec)
pose_mat = cv2.hconcat((rotation_mat, translation_vec))
_, _, _, _, _, _, angles = cv2.decomposeProjectionMatrix(pose_mat)
Now the azimuth is what I would expect: it is negative if I look to the left, zero in the middle, and positive to the right.
The elevation, however, is strange: if I look straight ahead it has a roughly constant value (around 170), but the sign is random, changing from image to image.
When I look up, the sign is positive and the value decreases the more I look up.
When I look down, the sign is negative and the value decreases the more I look down.
Can someone explain this output to me?
OK, it seems I have found a solution: the model points (which I had found in several blogs on the topic) seem to be wrong. The code works with this combination of model and image points (no idea why; it was trial and error):
model_points = np.float32([[6.825897, 6.760612, 4.402142],
                           [1.330353, 7.122144, 6.903745],
                           [-1.330353, 7.122144, 6.903745],
                           [-6.825897, 6.760612, 4.402142],
                           [5.311432, 5.485328, 3.987654],
                           [1.789930, 5.393625, 4.413414],
                           [-1.789930, 5.393625, 4.413414],
                           [-5.311432, 5.485328, 3.987654],
                           [2.005628, 1.409845, 6.165652],
                           [-2.005628, 1.409845, 6.165652],
                           [2.774015, -2.080775, 5.048531],
                           [-2.774015, -2.080775, 5.048531],
                           [0.000000, -3.116408, 6.097667],
                           [0.000000, -7.415691, 4.070434]])
image_points = np.float32([shape[17], shape[21], shape[22], shape[26],
                           shape[36], shape[39], shape[42], shape[45],
                           shape[31], shape[35], shape[48], shape[54],
                           shape[57], shape[8]])
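For completeness, here is a minimal sketch of how this corrected correspondence list plugs into the same solvePnP / decomposeProjectionMatrix pipeline from the question, wrapped into one function. It assumes model_points is the array defined just above, shape holds dlib's 68 facial landmarks as (x, y) pairs, and image is the loaded frame; the face and landmark detection itself is not shown here:
import cv2
import numpy as np

def head_pose_angles(image, shape, model_points):
    """Return the Euler angles (rotations about x, y, z, in degrees) of the head pose."""
    size = image.shape
    focal_length = size[1]
    center = (size[1] / 2, size[0] / 2)
    camera_matrix = np.array([[focal_length, 0, center[0]],
                              [0, focal_length, center[1]],
                              [0, 0, 1]], dtype="double")
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    # dlib 68-landmark indices matching the corrected correspondence list above
    indices = (17, 21, 22, 26, 36, 39, 42, 45, 31, 35, 48, 54, 57, 8)
    image_points = np.float32([shape[i] for i in indices])
    success, rotation_vec, translation_vec = cv2.solvePnP(
        model_points, image_points, camera_matrix, dist_coeffs)
    rotation_mat, _ = cv2.Rodrigues(rotation_vec)
    pose_mat = cv2.hconcat((rotation_mat, translation_vec))
    _, _, _, _, _, _, angles = cv2.decomposeProjectionMatrix(pose_mat)
    return angles.flatten()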

Undesired lines in FilledCurve (Wolfram Mathematica)

I am trying to make a custom arrowhead with Wolfram Mathematica (v. 10.0) using the function FilledCurve. The result looks fine in the Mathematica output, but when I save the picture as a PDF, an undesirable vertical line appears on the left border of my arrowhead. It is also visible in the LaTeX document where I insert my pictures.
The code is
px = 0.7; py = 0.14; mpx = -0.2;
pts = {{-px, py}, {mpx, 0}, {-px, -py}};
ah = Graphics[{FilledCurve[{BSplineCurve[pts], Line[{{-px, -py}, {0, 0}, {-px, py}}]}]}]
To see the problem, you need to save the output as a PDF and open it in Adobe Acrobat Reader (or insert it in a LaTeX document).
Any suggestions?
Thank you!
It seems there is some bug in Mathematica.
I solved the problem by creating the required curve "manually" from lines.
The final code (with some minor improvements) is as follows:
px = 0.7; py = 0.14; mpx = -0.3;
pts = {{-px, py}, {mpx, 0}, {-px, -py}};
f = BSplineFunction[pts];
curpts = Table[f[t], {t, 0, 1, 0.02}];
linpts = {{-px, -py}, {0, 0}, {-px, py}};
allpts = Join[curpts, {linpts[[-2]], linpts[[-1]]}];
ah = Graphics[{FilledCurve[Line[allpts]], Line[linpts]}]
Result:

Aspect ratio of an image for Instagram

I want to change the aspect ratio of an image for Instagram with Python. Here is my code for changing the aspect ratio:
width, height = imageFile.size
aspectRatio = width / height
if aspectRatio >= 0.80 and aspectRatio <= 1.90:
    print("yeah")
else:
    if height > width:
        futureHeight = width / .85
        print(str(width) + " ," + str(futureHeight))
        print(width / futureHeight)
        left = 0
        int(futureHeight)
        teetet = height - futureHeight / 2
        top = teetet / 4
        right = width
        bottom = height - 150
        im1 = imageFile.crop((left, top, right, bottom))
        print(im1.size)
        im1.show()
        im1.save(image)
but it still shows
ValueError: Incompatible aspect ratio.
whenever I try to upload this image.
I resolved that using basic dimensions. Using the inspector, I saw that Instagram converts images to 598.02 x 598.02 on the home page, so I typed:
import PIL
from PIL import Image

im = PIL.Image.open(path)
im = im.resize((598, 598), Image.ANTIALIAS)
im.save(path)  # overwrite the image
instead of
im=PIL.Image.open(path)
baseheight = 560
hpercent = (baseheight / float(im.size[1]))
wsize = int((float(im.size[0]) * float(hpercent)))
im=im.resize((wsize, baseheight), PIL.Image.ANTIALIAS)
im.save(path)
in which the aspect ratio is out of range anyway.
Now you can post it using something like instabot without trouble, because the aspect ratio is 1.0.
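If stretching a non-square photo to 598 x 598 distorts it too much, an alternative is to center-crop to a square first and then resize; that keeps the 1.0 aspect ratio without warping. This is only a sketch with Pillow, reusing the path variable from the snippets above (LANCZOS is the current name of the ANTIALIAS filter):
from PIL import Image

im = Image.open(path)
side = min(im.size)              # length of the shorter edge
left = (im.width - side) // 2    # center the square crop horizontally
top = (im.height - side) // 2    # and vertically
im = im.crop((left, top, left + side, top + side))
im = im.resize((598, 598), Image.LANCZOS)
im.save(path)  # overwrite the image, as above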

Overwrite GPS coordinates in Image Exif using Python 3.6

I am trying to transform image geotags so that images and ground control points lie in the same coordinate system inside my software (Pix4D mapper).
The answer here says:
Exif data is standardized, and GPS data must be encoded using
geographical coordinates (minutes, seconds, etc) described above
instead of a fraction. Unless it's encoded in that format in the exif
tag, it won't stick.
Here is my code:
import os, piexif, pyproj
from PIL import Image
img = Image.open(os.path.join(dirPath,fn))
exif_dict = piexif.load(img.info['exif'])
breite = exif_dict['GPS'][piexif.GPSIFD.GPSLatitude]
lange = exif_dict['GPS'][piexif.GPSIFD.GPSLongitude]
breite = breite[0][0] / breite[0][1] + breite[1][0] / (breite[1][1] * 60) + breite[2][0] / (breite[2][1] * 3600)
lange = lange[0][0] / lange[0][1] + lange[1][0] / (lange[1][1] * 60) + lange[2][0] / (lange[2][1] * 3600)
print(breite) #48.81368778730952
print(lange) #9.954511162420633
x, y = pyproj.transform(wgs84, gk3, lange, breite) #from WGS84 to GaussKrüger zone 3
print(x) #3570178.732528623
print(y) #5408908.20172699
exif_dict['GPS'][piexif.GPSIFD.GPSLatitude] = [ ( (int)(round(y,6) * 1000000), 1000000 ), (0, 1), (0, 1) ]
exif_bytes = piexif.dump(exif_dict) #error here
img.save(os.path.join(outPath,fn), "jpeg", exif=exif_bytes)
I am getting struct.error: argument out of range in the dump method. The original GPSInfo tag looks like: {0: b'\x02\x03\x00\x00', 1: 'N', 2: ((48, 1), (48, 1), (3449322402, 70000000)), 3: 'E', 4: ((9, 1), (57, 1), (1136812930, 70000000)), 5: b'\x00', 6: (3659, 10)}
I am guessing I have to offset the values and encode them properly before writing, but I have no idea what needs to be done.
It looks like you are already using PIL and Python 3.x. I'm not sure whether you want to continue using piexif, but either way you may find it easier to convert the degrees, minutes, and seconds into decimal degrees first. It looks like you are trying to do that already, but putting it in a separate function may be clearer and lets you account for the direction reference.
Here's an example:
def get_decimal_from_dms(dms, ref):
    degrees = dms[0][0] / dms[0][1]
    minutes = dms[1][0] / dms[1][1] / 60.0
    seconds = dms[2][0] / dms[2][1] / 3600.0
    if ref in ['S', 'W']:
        degrees = -degrees
        minutes = -minutes
        seconds = -seconds
    return round(degrees + minutes + seconds, 5)

def get_coordinates(geotags):
    lat = get_decimal_from_dms(geotags['GPSLatitude'], geotags['GPSLatitudeRef'])
    lon = get_decimal_from_dms(geotags['GPSLongitude'], geotags['GPSLongitudeRef'])
    return (lat, lon)
The geotags argument in this example is a dictionary with the GPSTAGS as keys instead of the numeric codes, for readability. You can find more detail and the complete example in this blog post: Getting Started with Geocoding Exif Image Metadata in Python 3
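A likely cause of the struct.error when writing values back: piexif packs each rational as a pair of unsigned 32-bit integers, and a Gauss-Krüger coordinate multiplied by 1,000,000 is far too large to fit (EXIF GPS tags are also meant to hold geographic latitude/longitude, as the quoted answer notes). As a rough sketch, going the other way, from decimal degrees back to the (degrees, minutes, seconds) rationals piexif expects, could look like this; the 10**7 denominator for the seconds is an arbitrary precision choice:
def decimal_to_dms_rational(value):
    """Encode a decimal degree value as piexif-style ((num, den), ...) rationals."""
    value = abs(value)  # the sign goes into GPSLatitudeRef / GPSLongitudeRef instead
    degrees = int(value)
    minutes_float = (value - degrees) * 60
    minutes = int(minutes_float)
    seconds = (minutes_float - minutes) * 60
    # numerators and denominators must both fit in an unsigned 32-bit integer
    return ((degrees, 1), (minutes, 1), (int(round(seconds * 10**7)), 10**7))
With that, something like exif_dict['GPS'][piexif.GPSIFD.GPSLatitude] = decimal_to_dms_rational(breite) produces a value that piexif.dump can pack.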
After much hemming and hawing, I reached the pages of the py3exiv2 image metadata manipulation library. You will find exhaustive lists of the metadata tags as you read through, but here is the list of EXIF tags, just to save a few clicks.
It runs smoothly on Linux and provides many ways to edit image headers. The documentation is also quite clear. I recommend this as a solution and am interested to know whether it solves everyone else's problems as well.

R simplify heatmap to pdf

I want to plot a simplified heatmap that is not so difficult to edit with the scalable vector graphics program I am using (Inkscape). The original heatmap, as produced below, contains lots of rectangles, and I wonder if they could be merged together within the different sectors to simplify the output PDF file:
nentries = 100000
ci = rainbow(nentries)
set.seed(1)
mean = 10
## Generate some data (4 factors)
i = data.frame(
  a = round(abs(rnorm(nentries, mean - 2))),
  b = round(abs(rnorm(nentries, mean - 1))),
  c = round(abs(rnorm(nentries, mean + 1))),
  d = round(abs(rnorm(nentries, mean + 2)))
)
minvalue = 10
# Discretise values to 1 or 0
m0 = matrix(as.numeric(i > minvalue), nrow = nrow(i))
# Remove rows with all zeros
m = m0[rowSums(m0) > 0, ]
# Reorder with 1,1,1,1 on top
ms = m[order(as.vector(m %*% matrix(2^((ncol(m) - 1):0), ncol = 1)), decreasing = TRUE), ]
rowci = rainbow(nrow(ms))
colci = rainbow(ncol(ms))
colnames(ms) = LETTERS[1:4]
limits = c(which(!duplicated(ms)), nrow(ms))
l = length(limits)
toname = round((limits[-l] + limits[-1]) / 2)
freq = (limits[-1] - limits[-l]) / nrow(ms)
rn = rep("", nrow(ms))
for (i in toname) rn[i] = paste(colnames(ms)[which(ms[i, ] == 1)], collapse = "")
rn[toname] = paste(rn[toname], ": ", sprintf("%.5f", freq), "%")
heatmap(ms,
        Rowv = NA,
        labRow = rn,
        keep.dendro = FALSE,
        col = c("black", "red"),
        RowSideColors = rowci,
        ColSideColors = colci)
dev.copy2pdf(file = "/tmp/file.pdf")
Why don't you try RSvgDevice? Using it you could save your image as an SVG file, which is much more convenient for Inkscape than PDF.
I use the Cairo package for producing SVG. It's incredibly easy. Here is a much simpler plot than the one in your example:
require(Cairo)
CairoSVG(file = "tmp.svg", width = 6, height = 6)
plot(1:10)
dev.off()
Upon opening in Inkscape, you can ungroup the elements and edit as you like.
Example (point moved, swirl added):
I don't think we (the internet) are being clear enough on this one.
Let me just start off with a successful export example:
png("heatmap.png") # Ruby devs: think of this as kind of like opening a `File.open("asdfsd") do |f|` block
heatmap(sample_matrix, Rowv=NA, Colv=NA, col=terrain.colors(256), scale="column", margins=c(5,10))
dev.off()
The dev.off() bit reminds me of the end of a Ruby block or method, in that the output of the last line of the enclosed code (between png() and dev.off()) is what gets dumped into the PNG file.
For example, if you ran this code:
png("heatmap4.png")
heatmap(sample_matrix, Rowv=NA, Colv=NA, col=terrain.colors(32), scale="column", margins=c(5,15))
heatmap(sample_matrix, Rowv=NA, Colv=NA, col=greenred(32), scale="column", margins=c(5,15))
dev.off()
it would output the second heatmap (the greenred color scheme; I just tested it) to the heatmap4.png file, just like how a Ruby method returns its last line by default.
