How to get the currently playing audio as audio input in Processing?

I am creating a simple audio visualizer. Here is the code
import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
AudioInput in;
FFT fft;
int w;
PImage fade;

void setup() {
  size(640, 480); // draw screen
  minim = new Minim(this); // new minim object
  in = minim.getLineIn(Minim.STEREO, 512); // audio input from microphone (have to change this to get currently playing audio)
  fft = new FFT(in.bufferSize(), in.sampleRate()); // new fft object
  fft.logAverages(60, 7);
  stroke(255);
  w = width / fft.avgSize();
  strokeWeight(w); // display lines as bars
  background(0);
}

void draw() {
  background(0);
  fft.forward(in.mix); // generate fourier series
  for (int i = 0; i < fft.avgSize(); i++) {
    line((i*w)+(w/2), height, (i*w)+(w/2), height - fft.getAvg(i)*4); // draw bars
  }
}
Here in = minim.getLineIn(Minim.STEREO,512); gives the audio input from the microphone. But I need to get the audio currently playing on the computer (what you hear from the speakers or headphones) and visualize that instead.
If there is any method or other way to get the currently playing audio as the input, please mention it. Thanks in advance :)

Questions like this are best answered by reading the reference:
An AudioInput is a connection to the current record source of the computer. How the record source for a computer is set will depend on the soundcard and OS, but typically a user can open a control panel and set the source from there. Unfortunately, there is no way to set the record source from Java.
In other words, this is a setting that the user has to change in their control panel. This seems to be confirmed here and here.
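If you want to check whether your machine exposes such a record source at all, a minimal sketch using the standard javax.sound.sampled API (which Minim builds on) can list the mixers Java can see. Device names such as "Stereo Mix" are only examples; what appears depends on your sound card and drivers:
import javax.sound.sampled.*;

void setup() {
  // Print every mixer Java can see. If a loopback device such as
  // "Stereo Mix" shows up and is set as the default record source,
  // minim.getLineIn() will capture what the machine is playing.
  for (Mixer.Info info : AudioSystem.getMixerInfo()) {
    println(info.getName() + " : " + info.getDescription());
  }
}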

Related

How to hide the title bar from VLC Media Player (VLC Direct3D output) using LIBVLC in VC++

I am using libvlc in VC++. I want to hide the title bar from the VLC output window (VLC Direct3D output). Can we hide or remove this title bar?
// Walk the windows owned by this process, find the VLC video output
// window by its title, and strip the border style from it.
foreach (var handle in EnumerateProcessWindowHandles(Process.GetCurrentProcess().Id))
{
    StringBuilder message = new StringBuilder(1000);
    SendMessage(handle, WM_GETTEXT, message.Capacity, message);
    if (message.ToString().Equals("VLC (Direct3D11 output)"))
    {
        var style = (SetWindowLongFlags)GetWindowLong(handle, WindowLongIndexFlags.GWL_STYLE);
        style &= ~SetWindowLongFlags.WS_BORDER;
        SetWindowLong(handle, WindowLongIndexFlags.GWL_STYLE, style);
    }
}

Load an SVG to a P5 sketch

I've been programming in Processing for some time now.
I've also worked with shapes and SVG files.
Having this humble experience with SVG files in Processing made me think it would be the same story in p5.js, which is clearly not the case, so I'm seeking help.
Back in Processing I would just have simple code like this:
PShape shape;

/***************************************************************************/
void setup()
{
  size(400, 400);
  shapeMode(CENTER);
  shape = loadShape("bot1.svg");
}

/***************************************************************************/
void draw()
{
  background(100);
  pushMatrix();
  translate(width/2, height/2);
  shape(shape, 0, 0);
  popMatrix();
}
P5 doesn't work that way.
What would be the P5.js equivalent to that?
var shape;
var canvas;

/***************************************************************************/
function setup()
{
  canvas = createCanvas(400, 400);
  canvas.position(0, 0);
  //shapeMode(CENTER);
  //shape = loadShape("bot1.svg");
}

/***************************************************************************/
function draw()
{
  background(100);
  push();
  translate(width/2, height/2);
  //shape(shape, 0, 0);
  pop();
}
P5.js does not support loading SVG files out of the box. Here is a discussion on GitHub containing a ton of information about this.
Processing.js does support SVG files though. More info in the reference.
You've tagged your question with processing.js, but I think you were originally asking about p5.js. Note that Processing.js and p5.js are two completely different things.
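One rough workaround, if you don't need actual vector features: browsers can rasterize an SVG when it is loaded as an image, so p5.js's loadImage() will usually accept the file. A minimal sketch, assuming bot1.svg from the question sits next to the sketch and specifies its own width and height:
let img;

function preload() {
  // The browser rasterizes the SVG, so it arrives as pixels,
  // not as editable vector paths.
  img = loadImage("bot1.svg");
}

function setup() {
  createCanvas(400, 400);
  imageMode(CENTER);
}

function draw() {
  background(100);
  image(img, width / 2, height / 2);
}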
In addition to Kevin's answer, if you do want to load an SVG in p5.js, you should check out p5.js-svg and the SVG manipulation example.
As a quick'n'dirty start you can try this:
Download p5.svg.js to your p5.js sketch's libraries folder
Import it in index.html: <script src="libraries/p5.svg.js" type="text/javascript"></script>
Create an SVG Canvas: createCanvas(600, 200, SVG);
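Put together, a minimal sketch under those assumptions might look like this (the SVG render-mode constant comes from p5.svg.js, not from core p5.js):
function setup() {
  // SVG is defined by p5.svg.js; the canvas is backed by an SVG element
  createCanvas(600, 200, SVG);
}

function draw() {
  background(100);
  // drawing calls now emit SVG shapes instead of canvas pixels
  ellipse(width / 2, height / 2, 50, 50);
}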

Kinect & Processing - Convert position of joint as mouse x and mouse y?

I'm currently using an XBOX KINECT model 1414, and processing 2.2.1. I'm hoping to use the right hand as a mouse to guide a character through the screen.
I managed to draw an ellipse to follow the right hand joint on a kinect skeleton. How would I be able to figure out the position of that joint so that I could replace mouseX and mouseY if needed?
Below is the code that will track your right hand and draw a red ellipse over it:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation
kinect.enableDepth();
// enable skeleton generation for all joints
kinect.enableUser();
smooth();
noStroke();
// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
// update the camera...must do
kinect.update();
// draw depth image...optional
image(kinect.depthImage(), 0, 0);
background(0);
// check if the skeleton is being tracked for user 1 (the first user that detected)
if (kinect.isTrackingSkeleton(1))
{
int joint = SimpleOpenNI.SKEL_RIGHT_HAND;
// draw a dot on their joint, so they know what's being tracked
drawJoint(1, joint);
PVector point1 = new PVector(-500, 0, 1500);
PVector point2 = new PVector(500, 0, 700);
}
}
///////////////////////////////////////////////////////
void drawJoint(int userID, int jointId) {
// make a vector to store the left hand
PVector jointPosition = new PVector();
// put the position of the left hand into that vector
kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
// convert the detected hand position to "projective" coordinates that will match the depth image
PVector convertedJointPosition = new PVector();
kinect.convertRealWorldToProjective(jointPosition, convertedJointPosition);
// and display it
fill(255, 0, 0);
float ellipseSize = map(convertedJointPosition.z, 700, 2500, 50, 1);
ellipse(convertedJointPosition.x, convertedJointPosition.y, ellipseSize, ellipseSize);
}
//////////////////////////// Event-based Methods
void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}
Any kind of links or help will be very appreciated, thanks!
In your case I would recommend that you use the coordinates of the right hand joint. This is how you get them:
foreach (Skeleton skeleton in skeletons) {
    Joint RightHand = skeleton.Joints[JointType.HandRight];
    double rightX = RightHand.Position.X;
    double rightY = RightHand.Position.Y;
    double rightZ = RightHand.Position.Z;
}
Be aware that we are looking at three dimensions, so you will have x, y, and z coordinates.
FYI: you will have to insert these lines of code in the SkeletonFrameReady event handler.
If you still want the circle around it, have a look at the Skeleton-Basics WPF example in the Kinect SDK samples.
Does this help you?
It's slightly unclear what you're trying to achieve.
If you simply need the position of the hand in 2D screen coordinates, the code you posted already includes this:
kinect.getJointPositionSkeleton() retrieves the 3D coordinates
kinect.convertRealWorldToProjective() converts them to 2D screen coordinates.
If you want to be able to swap between Kinect-tracked hand coordinates and mouse coordinates, you can store the PVector used in the 2D conversion as a variable visible to the whole sketch, and update it from the Kinect skeleton when it is being tracked, or from the mouse otherwise:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
PVector user1RightHandPos = new PVector();
float ellipseSize;
void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation
kinect.enableDepth();
// enable skeleton generation for all joints
kinect.enableUser();
smooth();
noStroke();
// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
// update the camera...must do
kinect.update();
// draw depth image...optional
image(kinect.depthImage(), 0, 0);
background(0);
// check if the skeleton is being tracked for user 1 (the first user that detected)
if (kinect.isTrackingSkeleton(1))
{
updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
}else{//if the skeleton isn't tracked, use the mouse
user1RightHandPos.set(mouseX,mouseY,0);
ellipseSize = 20;
}
//draw ellipse regardless of the skeleton tracking or mouse mode
fill(255, 0, 0);
ellipse(user1RightHandPos.x, user1RightHandPos.y, ellipseSize, ellipseSize);
}
///////////////////////////////////////////////////////
void updateRightHand2DCoords(int userID, int jointId) {
// make a vector to store the left hand
PVector jointPosition = new PVector();
// put the position of the left hand into that vector
kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
// convert the detected hand position to "projective" coordinates that will match the depth image
user1RightHandPos.set(0,0,0);//reset the 2D hand position before OpenNI conversion from 3D
kinect.convertRealWorldToProjective(jointPosition, user1RightHandPos);
}
//////////////////////////// Event-based Methods
void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}
Optionally, you can use a boolean to swap between mouse/kinect mode when testing.
If you need the mouse coordinates simply to test without having to get in front of the Kinect all the time, I recommend having a look at the RecorderPlay example (via Processing > File > Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay). OpenNI has the ability to record a scene (including depth data), which makes testing much simpler: simply record an .oni file with the most common interactions you're aiming for, then re-use the recording while developing.
All it takes to use the .oni file is a different constructor signature for SimpleOpenNI:
kinect = new SimpleOpenNI(this,"/path/to/yourRecordingHere.oni");
One caveat to keep in mind: the depth is stored at half the resolution, so the coordinates will need to be doubled to be on par with the realtime version.
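For example, a hedged tweak to the draw() call in the sketch above (assuming the factor of two described here, and the same user1RightHandPos and ellipseSize variables):
// When drawing from a half-resolution .oni recording, scale the
// projective coordinates back up to match the live-resolution window.
ellipse(user1RightHandPos.x * 2, user1RightHandPos.y * 2, ellipseSize, ellipseSize);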

How to stream video capture between devices like video chat in iphone sdk?

Hi, I want to make an app which does video calling between iOS devices. I have studied opentok and idoubs, but I want to do it myself from scratch. I searched a lot but could not find any solution, so I tried to achieve it in the way I think video chat works. Until now I have done the following (by using a streaming Bonjour tutorial):
Created an AVCaptureSession and got CMSampleBufferRef data in
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection{
if( captureOutput == _captureOutput ){
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//Lock the image buffer//
CVPixelBufferLockBaseAddress(imageBuffer,0);
//Get information about the image//
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
//Create a CGImageRef from the CVImageBufferRef//
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
//We release some components
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
previewImage= [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
CGImageRelease(newImage);
[uploadImageView performSelectorOnMainThread:@selector(setImage:) withObject:previewImage waitUntilDone:YES];
//We unlock the image buffer//
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
[pool drain];
[self sendMIxedData:@"video1"];
}
else if( captureOutput == _audioOutput){
dataA= [[NSMutableData alloc] init];
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &currentInputAudioBufferList, sizeof(currentInputAudioBufferList), NULL, NULL, 0, &blockBuffer);
//CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL, 0, &blockBuffer);
for (int y = 0; y < currentInputAudioBufferList.mNumberBuffers; y++) {
AudioBuffer audioBuffer = currentInputAudioBufferList.mBuffers[y];
Float32 *frame = (Float32*)audioBuffer.mData;
[dataA appendBytes:frame length:audioBuffer.mDataByteSize];
}
[self sendMIxedData:@"audio"];
}
Now the sendMIxedData method writes these video/audio bytes to an NSStream:
// video frame
NSData *data = UIImageJPEGRepresentation([self scaleAndRotateImage:previewImage], 1.0);
const uint8_t *message1 = (const uint8_t *)[@"video1" UTF8String];
[_outStream write:message1 maxLength:strlen((char *)message1)];
[_outStream write:(const uint8_t *)[data bytes] maxLength:[data length]];

// audio buffer
const uint8_t *message2 = (const uint8_t *)[@"audio" UTF8String];
[_outStream write:message2 maxLength:strlen((char *)message2)];
[_outStream write:(const uint8_t *)[dataA bytes] maxLength:[dataA length]];
The bytes are then received in the NSStream delegate method on the receiving device.
Now the problem is that I don't know whether this is how video chat actually works.
I also haven't had any success in displaying the received bytes as video.
I tried sending the "audio" and "video1" strings along with the bytes so the receiver knows whether the data is video or audio. I also tried without the additional strings. The images are received and displayed correctly, but the audio is very distorted.
Please tell me whether this is the correct way to make a video chat app. If it is, what should I do to make it usable? For example, should I send the audio/video data together rather than separately as in my example? I am using a simple Bonjour tutorial here, but how would I achieve the same with a real server?
Please point me in the right direction, as I am stuck here.
Thanks
(Sorry for the formatting; I tried but was unable to format it correctly.)
Video streaming apps use video codecs like VP8 or H.264, which will compress far better than your JPEG-encoded frames.
You should be able to display your received NSData by doing...
UIImage *image = [UIImage imageWithData:data];

Blackberry - how to resize image?

I wanted to know if we can resize an image. Suppose we want to draw an image whose actual size is 200x200 at a size of 100x100 on our BlackBerry screen.
Thanks
You can do this pretty simply using the EncodedImage.scaleImage32() method. You'll need to provide it with the factors by which you want to scale the width and height (as a Fixed32).
Here's some sample code which determines the scale factor for the width and height by dividing the original image size by the desired size, using RIM's Fixed32 class.
public static EncodedImage resizeImage(EncodedImage image, int newWidth, int newHeight) {
    int scaleFactorX = Fixed32.div(Fixed32.toFP(image.getWidth()), Fixed32.toFP(newWidth));
    int scaleFactorY = Fixed32.div(Fixed32.toFP(image.getHeight()), Fixed32.toFP(newHeight));
    return image.scaleImage32(scaleFactorX, scaleFactorY);
}
If you're lucky enough to be developing for OS 5.0, Marc posted a link to the new APIs that are a lot clearer and more versatile than the one I described above. For example:
public static Bitmap resizeImage(Bitmap originalImage, int newWidth, int newHeight) {
    Bitmap newImage = new Bitmap(newWidth, newHeight);
    originalImage.scaleInto(newImage, Bitmap.FILTER_BILINEAR, Bitmap.SCALE_TO_FILL);
    return newImage;
}
(Naturally you can substitute the filter/scaling options based on your needs.)
Just an alternative:
BlackBerry - draw image on the screen
BlackBerry - image 3D transform
I'm not a Blackberry programmer, but I believe some of these links will help you out:
Image Resizing Article
Resizing a Bitmap on the Blackberry
Blackberry Image Scaling Question
Keep in mind that the default image scaling done by BlackBerry is quite primitive and generally doesn't look very good. If you are building for 5.0 there is a new API to do much better image scaling using filters such as bilinear or Lanczos.
For BlackBerry JDE 5.0 or later, you can use the scaleInto API.
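A minimal sketch of that call, assuming OS 5.0+ and an already-loaded Bitmap named source; SCALE_TO_FIT keeps the aspect ratio inside the target dimensions:
// Scale 'source' into a newWidth x newHeight bitmap, preserving the
// aspect ratio inside the target (requires OS 5.0 or later).
public static Bitmap scaleToFit(Bitmap source, int newWidth, int newHeight) {
    Bitmap target = new Bitmap(newWidth, newHeight);
    source.scaleInto(target, Bitmap.FILTER_LANCZOS, Bitmap.SCALE_TO_FIT);
    return target;
}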
In this method there are two bitmaps; temp is the new bitmap that the original is scaled into. You just pass in a bitmap, a width, and a height, and it returns a new bitmap of the size you chose.
Bitmap ImgResizer(Bitmap bitmap, int width, int height) {
    Bitmap temp = new Bitmap(width, height);
    Bitmap resized_Bitmap = bitmap;
    temp.createAlpha(Bitmap.HOURGLASS);
    resized_Bitmap.scaleInto(temp, Bitmap.FILTER_LANCZOS);
    return temp;
}
Here is a function (method) to resize an image; use it as you want:
int olddWidth;
int olddHeight;
int dispplayWidth;
int dispplayHeight;

EncodedImage ei2 = EncodedImage.getEncodedImageResource("add2.png");
olddWidth = ei2.getWidth();
olddHeight = ei2.getHeight();
dispplayWidth = 40;  // here pass the width you want in pixels
dispplayHeight = 80; // here pass the height you want in pixels
int numeerator = net.rim.device.api.math.Fixed32.toFP(olddWidth);
int denoominator = net.rim.device.api.math.Fixed32.toFP(dispplayWidth);
int widtthScale = net.rim.device.api.math.Fixed32.div(numeerator, denoominator);
numeerator = net.rim.device.api.math.Fixed32.toFP(olddHeight);
denoominator = net.rim.device.api.math.Fixed32.toFP(dispplayHeight);
int heighhtScale = net.rim.device.api.math.Fixed32.div(numeerator, denoominator);
EncodedImage newEi2 = ei2.scaleImage32(widtthScale, heighhtScale);
Bitmap _add = newEi2.getBitmap();
I am posting this answer for newbies in BlackBerry application development. The code below fetches a Bitmap image from a URL and resizes it without loss of aspect ratio:
public static Bitmap imageFromServer(String url)
{
    Bitmap bitmp = null;
    try {
        HttpConnection fcon = (HttpConnection) Connector.open(url);
        int rc = fcon.getResponseCode();
        if (rc != HttpConnection.HTTP_OK)
        {
            throw new IOException("Http Response Code : " + rc);
        }
        InputStream httpInput = fcon.openDataInputStream();
        InputStream inp = httpInput;
        byte[] b = IOUtilities.streamToBytes(inp);
        EncodedImage img = EncodedImage.createEncodedImage(b, 0, b.length);
        bitmp = resizeImage(img.getBitmap(), 100, 100);
    }
    catch (Exception e)
    {
        Dialog.alert("Exception : " + e.getMessage());
    }
    return bitmp;
}

public static Bitmap resizeImage(Bitmap originalImg, int newWidth, int newHeight)
{
    Bitmap scaledImage = new Bitmap(newWidth, newHeight);
    originalImg.scaleInto(scaledImage, Bitmap.FILTER_BILINEAR, Bitmap.SCALE_TO_FIT);
    return scaledImage;
}
The method resizeImage() is called inside imageFromServer(String url):
1) The image from the server is decoded into the EncodedImage img.
2) Bitmap bitmp = resizeImage(img.getBitmap(), 100, 100); passes the decoded bitmap and the target dimensions to resizeImage(), and the return value is assigned to bitmp.
