Creating a Movie from a Series of Plots in R [closed]

Is there an easy way to create a "movie" by stitching together several plots, within R?

Here is one method I found via the R-help mailing list:
To create the individual image frames:
jpeg("/tmp/foo%02d.jpg")
for (i in 1:5) {
my.plot(i)
}
dev.off()
To make the movie, first install ImageMagick.
Then call the following function (which calls "convert", part of ImageMagick I suppose):
make.mov <- function(){
    unlink("/tmp/foo.mpg")
    # the wildcard must match the frames written above
    system("convert -delay 0.5 /tmp/foo*.jpg /tmp/foo.mpg")
}
Or try the ffmpeg tool as described in this article (I've found this gives cleaner results):
ffmpeg -r 25 -qscale 2 -i /tmp/foo%02d.jpg output.mp4
May require a bit of tinkering, but this seemed pretty simple once everything was installed.
Of course, anywhere you see "jpg" or "jpeg", you can substitute GIF or PNG to suit your fancy.

Take a look at either the animation package created by Yihui Xie or the EBImage bioconductor package (?animate).

I think you can do this also with the write.gif function in the caTools library. You'd have to get your graph into a multi-frame image first. I'm not sure how to do that. Anyone? Bueller?
The classic example of an animated GIF is this code which I didn't write but I did blog about some time ago:
library(fields) # for tim.colors
library(caTools) # for write.gif
m = 400 # grid size
C = complex( real=rep(seq(-1.8,0.6, length.out=m), each=m ), imag=rep(seq(-1.2,1.2, length.out=m), m ) )
C = matrix(C,m,m)
Z = 0
X = array(0, c(m,m,20))
for (k in 1:20) {
    Z = Z^2 + C
    X[,,k] = exp(-abs(Z))
}
image(X[,,k], col=tim.colors(256)) # show final image in R
write.gif(X, 'Mandelbrot.gif', col=tim.colors(256), delay=100)
Code credit goes to Jarek Tuszynski, PhD.

If you wrap your R script within a larger Perl/Python/etc. script, you can stitch graphs together with your favorite command-line image stitching tool.
To run your R script with a wrapper script, use the R CMD BATCH method.
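For example, a minimal Python wrapper might look like this (plots.R is a hypothetical script that writes the numbered frames shown above; assumes R and ffmpeg are on your PATH):

import subprocess

# run the R script in batch mode to generate /tmp/foo01.jpg ... /tmp/foo05.jpg
subprocess.run(["R", "CMD", "BATCH", "plots.R"], check=True)

# stitch the frames into a movie
subprocess.run(
    ["ffmpeg", "-r", "25", "-qscale", "2",
     "-i", "/tmp/foo%02d.jpg", "output.mp4"],
    check=True,
)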

I'm not sure it is possible in R alone. I did a project once where data points from R were exported to a MySQL database, and a Flex/Flash application picked them up and produced animated visualizations.

I've done some movies using XNView's (a freeware graphics viewer) Create Slideshow function. I wanted to show trends through time with spatial data, so I just created a series of plots named sequentially [paste() is your friend for all sorts of naming calisthenics], then loaded them into XNView's slideshow dialog and set a few timer variables. Voilà. It took about five minutes to learn and produced an executable slideshow.

Here's a full example of making an animated GIF "movie" from an HDF5 file. The data should be an HDF5 dataset containing a three-dimensional array [Nframes][Nrows][Ncolumns].
#
# be sure to run R as Administrator (on Windows) so the new packages can be installed
#
source("http://bioconductor.org/biocLite.R")
biocLite("rhdf5")
install.packages('caTools')
install.packages('fields')
library(caTools)
library(fields)
library(rhdf5)
x = h5read(file="mydata.h5", name="/Images")   # reads the [frames, rows, cols] array
write.gif(x, "movie1.gif", col=rainbow, delay=10, flip=TRUE)

Related

How can I create a terminal-like design using tkinter

I want to create a terminal-like design using tkinter. I also want to include terminal-like behavior where, once you hit Enter, you can no longer change your previous lines. Is it even possible to create such a UI design using tkinter?
According to my research, I have found an answer and a link that may help you out.
First, I would like you to try this code. It runs the command "ipconfig" and displays the result in a window; you can modify this code:
import tkinter
import os

def get_info(arg):
    # "insert linestart" / "insert lineend" are valid Text indices;
    # a bare "linestart" raises 'bad text index "linestart"'
    x = tfield.get("insert linestart", "insert lineend")
    print(x)

root = tkinter.Tk()
tfield = tkinter.Text(root)
tfield.pack()
for line in os.popen("ipconfig", 'r'):
    tfield.insert("end", line)
tfield.bind("<Return>", get_info)
root.mainloop()
And I have found a similar question on Quora; take a look at that as well.
After asking for additional help by breaking down certain parts, I was able to get a solution from j_4321's post: https://stackoverflow.com/a/63830645/11355351
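A minimal sketch of that idea (my own sketch, not j_4321's exact code): keep a Text mark as a boundary and veto any keystroke that lands before it; edge cases such as BackSpace right at the boundary still need handling.

import tkinter as tk

root = tk.Tk()
text = tk.Text(root)
text.pack()
# "locked" marks the boundary: everything before it is frozen
text.mark_set("locked", "1.0")
text.mark_gravity("locked", "left")

def veto_if_locked(event):
    # block any keystroke whose insertion point is before the boundary
    if text.compare("insert", "<", "locked"):
        return "break"

def on_return(event):
    if text.compare("insert", "<", "locked"):
        return "break"
    # freeze everything typed so far; new input goes below
    text.mark_set("locked", "end-1c")

text.bind("<Key>", veto_if_locked)
text.bind("<Return>", on_return)
root.mainloop()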

Photutils DAOPhot not fitting stars well?

I recently ran across the photutils package and am trying to use it to perform PSF photometry on some images I have. However, when I try to run the code, I get very strange results: when I plot the image generated by get_residual_image(), the stars are not removed well. Some sample images are shown below.
The first image has sigma set to 2.05, as in one of the sample programs in the photutils documentation.
However, the stars only appear to be removed at their centers.
The second image has sigma set to 5.0. This one is especially strange: some stars are way over-removed, some are under-removed, some black squares are added to the image, etc.
Here is my code:
import photutils
from photutils.psf import DAOPhotPSFPhotometry as DAOP
from photutils.psf import IntegratedGaussianPRF as PRF
from photutils.background import MMMBackground

bkg = MMMBackground()
background = 2.5 * bkg(img)          # img is the image being measured

gaussian_prf = PRF(sigma=5.0)
gaussian_prf.sigma.fixed = False     # let the fit adjust sigma

# arguments: crit_separation=8, threshold=background, fwhm=5,
# psf_model=gaussian_prf, fitshape=31
photTester = DAOP(8, background, 5, gaussian_prf, 31)
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()
After this, I simply plot the original and final images with matplotlib, using a greyscale colormap. The reason the left images appear slightly darker is that they use a different color scaling.
Perhaps I have set one of the parameters incorrectly?
Could someone help me out with this? Thank you!
Looking at the residual image instantly told me that the background subtraction might be wrong. I could reproduce the result and wondered whether MMMBackground was doing its job correctly.
After taking a closer look at the documentation, "Getting started with Photutils" finally gave the essential hint:
image -= np.median(image)
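Applied to the code from the question, a minimal sketch of the fix (reusing the names from above) subtracts the median sky level before fitting:

import numpy as np

imgStars = imgStars - np.median(imgStars)   # remove the sky background first
photResults = photTester(imgStars)
finalImg = photTester.get_residual_image()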

dlib.image_window.add_overlay() does not show closed figure of partial face landmarks

I am trying to display landmark points 48-67, which form the mouth region of the 68-landmark face model.
The image_window method in use has the following signature:
add_overlay(dlib.full_object_detection)
I am extracting dlib.points 48-67 from the full_object_detection returned by the shape predictor this way:
faces = faceDetector(img, 1)
lipPoints = []
shape = predictor(img, faces[0])   # predictor is the dlib.shape_predictor
for i in range(48, 68):
    lipPoints.append(shape.part(i))
lipDetections = dlib.full_object_detection(faces[0], lipPoints)
win.add_overlay(lipDetections)
But this results in isolated points instead of a closed curve.
When I use all 68 points by setting range(0,68), the image_window shows all the points connected, outlining the face.
faces = faceDetector(img, 1)
lipPoints = []
shape = predictor(img, faces[0])
for i in range(0, 68):
    lipPoints.append(shape.part(i))
lipDetections = dlib.full_object_detection(faces[0], lipPoints)
win.add_overlay(lipDetections)
This code outputs connected curves, not isolated point markers, as expected.
Why does the add_overlay() method behave differently even when using the same overload on the same image_window instance? The full_object_detection is created in the same manner in both situations, so I expected it to show a closed figure of the lip boundaries.
I looked at the source code for add_overlay(full_object_detection) but did not find anything that links add_overlay to the predictor model. How do I display the connected points of this region only?
I have also tried training dlib.shape_predictor() myself using only these landmarks, by removing the other landmarks from the training XML file, but the results are the same: no closed curve, only isolated points.
I am using dlib v19.15, installed from source on Ubuntu 18.04. @davis-king, can you help?
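One possible workaround (a hedged sketch, assuming the img and shape variables from the code above): dlib's window appears to connect only the canonical 68-point layout, so you can draw the contour yourself with OpenCV instead of add_overlay():

import cv2
import numpy as np

pts = np.array(
    [[shape.part(i).x, shape.part(i).y] for i in range(48, 68)],
    dtype=np.int32,
)
# points 48-59 form the outer lip contour, 60-67 the inner one
cv2.polylines(img, [pts[:12]], isClosed=True, color=(0, 255, 0))
cv2.polylines(img, [pts[12:]], isClosed=True, color=(0, 255, 0))
cv2.imshow("mouth", img)
cv2.waitKey(0)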

Adding watermark to video

I am able to use the moviepy library to add a watermark to a section of video. However, when I do this, it takes the watermarked segment and creates a new file from it. I am trying to figure out whether it is possible to simply splice the edited part back into the original video, as moviepy is EXTREMELY slow at writing to disk, so the smaller the segment the better.
I was thinking maybe using shutil?
video = mp.VideoFileClip("C:\\Users\\admin\\Desktop\\Test\\demovideo.mp4").subclip(10, 20)

logo = (mp.ImageClip("C:\\Users\\admin\\Desktop\\Watermark\\watermarkpic.png")
        .set_duration(20)
        .resize(height=20)                      # if you need to resize...
        .margin(right=8, bottom=8, opacity=0)   # (optional) logo-border padding
        .set_pos(("right", "bottom")))

final = mp.CompositeVideoClip([video, logo])
final.write_videofile("C:\\Users\\admin\\Desktop\\output\\demovideo(watermarked).mp4", audio=True, progress_bar=False)
Is there a way to copy the 10 second watermarked snippet back into the original video file? Or is there another library that allows me to do this?
What is slow in your use case is the fact that moviepy needs to decode and re-encode each frame of the movie. If you want speed, there are ways to ask FFMPEG to copy video segments without re-encoding.
So you could use ffmpeg to cut the video into three subclips (before.mp4 / fragment.mp4 / after.mp4), process only fragment.mp4, then concatenate all the clips back together with ffmpeg.
Cutting the video into three clips using ffmpeg can be done from moviepy:
https://github.com/Zulko/moviepy/blob/master/moviepy/video/io/ffmpeg_tools.py#L27
However, for concatenating everything back together, you may need to call ffmpeg directly.
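A rough sketch of that pipeline (the 10-20 s cut points come from the question; duration and the file names are placeholders, and the concat demuxer requires all clips to share the same codecs):

import subprocess
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip

src = "demovideo.mp4"
duration = 120   # hypothetical total length of the source, in seconds

# cut the untouched parts with stream copy (fast, no re-encoding)
ffmpeg_extract_subclip(src, 0, 10, targetname="before.mp4")
ffmpeg_extract_subclip(src, 20, duration, targetname="after.mp4")
# ... watermark the 10-20 s piece with moviepy as above, saving it as fragment.mp4 ...

# concatenate without re-encoding, using ffmpeg's concat demuxer
with open("list.txt", "w") as f:
    f.write("file 'before.mp4'\nfile 'fragment.mp4'\nfile 'after.mp4'\n")
subprocess.run(["ffmpeg", "-f", "concat", "-safe", "0",
                "-i", "list.txt", "-c", "copy", "out.mp4"], check=True)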

Playing a sound in an IPython notebook

I would like to be able to play a sound file in an IPython notebook.
My aim is to be able to listen to the results of different treatments applied to a sound directly from within the notebook.
Is this possible? If yes, what is the best solution to do so?
The previous answer is pretty old. You can use IPython.display.Audio now. Like this:
import IPython
IPython.display.Audio("my_audio_file.mp3")
Note that you can also process any type of audio content, and pass it to this function as a numpy array.
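For instance, a small sketch with a synthesized tone (the rate argument tells Audio how to interpret the samples):

import numpy as np
from IPython.display import Audio

sr = 22050   # samples per second
t = np.linspace(0, 1, sr, endpoint=False)
Audio(np.sin(2 * np.pi * 440 * t), rate=sr)   # one second of a 440 Hz tone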
If you want to display multiple audio files, use the following:
IPython.display.display(IPython.display.Audio("my_audio_file.mp3"))
IPython.display.display(IPython.display.Audio("my_audio_file.mp3"))
A small example that might be relevant: http://nbviewer.ipython.org/5507501/the%20sound%20of%20hydrogen.ipynb
It should be possible to avoid going through external files by base64-encoding the audio, as is done for PNG/JPG images.
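For example, Audio's embed flag base64-encodes the file into the notebook itself:

from IPython.display import Audio

# embed=True stores the audio inline instead of referencing the file on disk
Audio("my_audio_file.mp3", embed=True)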
The code:
import IPython
IPython.display.Audio("my_audio_file.mp3")
may give an "Invalid Source" error in IE11; try other browsers and it should work fine.
The other available answers added an HTML element, which I disliked, so I created the ringbell package, which lets you play a custom sound like this:
from ringbell import RingBell
RingBell(
    sample="path/to/sample.wav",
    minimum_execution_time=0,
    verbose=True,
)
and it also gives you a one-liner to play a bell when a cell execution takes more than one minute (or a custom amount of time, for that matter) or fails with an exception:
import ringbell.auto
You can install this package from PyPI:
pip install ringbell
If the sound you are looking for could also be text-to-speech, I would like to mention that every time I start some long process in the background, I queue the execution of a cell like this too:
from IPython.display import clear_output, display, HTML, Javascript
display(Javascript("""
    var msg = new SpeechSynthesisUtterance();
    msg.text = "Process completed!";
    window.speechSynthesis.speak(msg);
"""))
You can change the text you want to hear with msg.text.
