I'm currently writing a Processing sketch that needs to access multiple audio inputs, but Processing only allows access to the default line in. I have tried getting Lines straight from the Java Mixer (accessed within Processing), but I still only get the signal from whichever line is currently set to default on my machine.
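For reference, the Mixer attempt was along these lines; this is a simplified sketch, and the mixer index and audio format here are just placeholders, not my real values:
import javax.sound.sampled.*;

void setup() {
  // List every mixer the JVM can see, then try to open a capture line on one of them.
  Mixer.Info[] mixers = AudioSystem.getMixerInfo();
  for (int i = 0; i < mixers.length; i++) {
    println(i + ": " + mixers[i].getName());
  }
  try {
    AudioFormat format = new AudioFormat(44100f, 16, 1, true, false);     // 44.1 kHz, 16-bit, mono
    DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
    Mixer mixer = AudioSystem.getMixer(mixers[5]);                        // index 5 is a placeholder for the non-default device
    TargetDataLine line = (TargetDataLine) mixer.getLine(info);
    line.open(format);
    line.start();
    byte[] raw = new byte[1024];
    int n = line.read(raw, 0, raw.length);                                // raw PCM bytes; would still need converting to a float[]
    println("read " + n + " bytes");
  } catch (LineUnavailableException e) {
    println("could not open that line: " + e);
  }
}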
I've started looking at sending the sound via OSC from SuperCollider, as recommended here. However, since I'm very new to SuperCollider, and its documentation and support are focused more on generating sound than on accessing inputs, my next step will probably be to play around with Beads and JACK, as suggested here.
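For what it's worth, the Processing end of that OSC route would look roughly like this. It is only a sketch using the oscP5 library, and the port, address pattern, and message layout are assumptions, since they depend entirely on what the SuperCollider patch sends:
import oscP5.*;

OscP5 osc;
float[] micBuffer = new float[512];      // one buffer per mic in the real sketch

void setup() {
  size(512, 400);
  osc = new OscP5(this, 12000);          // listen on port 12000 (assumed)
}

void oscEvent(OscMessage msg) {
  // assumed layout: one message per mic, carrying that mic's samples as a list of floats
  if (msg.checkAddrPattern("/mic/0")) {
    int n = min(micBuffer.length, msg.typetag().length());
    for (int i = 0; i < n; i++) {
      micBuffer[i] = msg.get(i).floatValue();
    }
  }
}

void draw() {
  background(0);
  // draw micBuffer here, the same way the Beads attempt below draws its buffers
}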
Does anyone have (1) other suggestions, or (2) concrete examples of getting multiple inputs from either SuperCollider or Beads/Jack to Processing?
Thank you in advance!
Edit: The sound will be used to power custom music visualizations (think the iTunes visualizer, but much more song-specific). We have this working with multiple mp3s; now what I need is to be able to get a float[] buffer from each mic. I'm hoping to have 9 different mics, though we'll settle for 4 if that is more doable.
For hardware, at this point we are just using mics and XLR-to-USB cables. (We have considered a pre-amp, but so far this has been sufficient.) I am currently on Windows, but I think we will ultimately switch to a Mac.
Here was my attempt with just Beads. It works fine for the laptop mic, since I read that one first, but the headset buffer contains all 0s; if I switch them and read the headset first, the headset buffer is correct but the laptop buffer contains all 0s:
import beads.*;   // Beads library for Processing (provides AudioContext, JavaSoundAudioIO, UGen)

AudioContext headsetAudioCon, laptopAudioCon;
UGen headsetMic, laptopMic;
float[] headsetBuffer, laptopBuffer;

void setup() {
  size(512, 400);

  JavaSoundAudioIO headsetAudioIO = new JavaSoundAudioIO();
  JavaSoundAudioIO laptopAudioIO = new JavaSoundAudioIO();

  headsetAudioIO.selectMixer(5);
  headsetAudioCon = new AudioContext(headsetAudioIO);

  laptopAudioIO.selectMixer(4);
  laptopAudioCon = new AudioContext(laptopAudioIO);

  headsetMic = headsetAudioCon.getAudioInput();
  laptopMic = laptopAudioCon.getAudioInput();
} // setup()
void draw() {
  background(100, 0, 75);

  laptopMic.start();
  laptopMic.calculateBuffer();
  laptopBuffer = laptopMic.getOutBuffer(0);
  for (int j = 0; j < laptopBuffer.length - 1; j++) {
    println("laptop; " + j + ": " + laptopBuffer[j]);
    line(j, 200 + laptopBuffer[j] * 50, j + 1, 200 + laptopBuffer[j + 1] * 50);
  }
  laptopMic.kill();

  headsetMic.start();
  headsetMic.calculateBuffer();
  headsetBuffer = headsetMic.getOutBuffer(0);
  for (int j = 0; j < headsetBuffer.length - 1; j++) {
    println("headset; " + j + ": " + headsetBuffer[j]);
    line(j, 50 + headsetBuffer[j] * 50, j + 1, 50 + headsetBuffer[j + 1] * 50);
  }
  headsetMic.kill();
} // draw()
My attempt at adding Jack contains this line:
ac = new AudioContext(new AudioServerIO.Jack(), 44100, new IOAudioFormat(44100, 16, 4, 4));
but I get the error:
Jun 22, 2016 9:17:24 PM org.jaudiolibs.beads.AudioServerIO$1 run
SEVERE: null
org.jaudiolibs.jnajack.JackException: Can't find native library
at org.jaudiolibs.jnajack.Jack.getInstance(Jack.java:428)
at org.jaudiolibs.audioservers.jack.JackAudioServer.initialise(JackAudioServer.java:102)
at org.jaudiolibs.audioservers.jack.JackAudioServer.run(JackAudioServer.java:86)
at org.jaudiolibs.beads.AudioServerIO$1.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsatisfiedLinkError: Unable to load library 'jack': Native library (win32-x86-64/jack.dll) not found in resource path ([file:/C:/Users/...etc...)
And when I look in JACK, I don't see my mic (which seems like a huge red flag to me, though I am completely new to JACK). Should this AudioContext show up as an input in JACK? Or vice versa: should I find my mic there first and then get it from JACK into Processing?
(Forgive my inexperience, and thank you again! My lack of knowledge of JACK makes me wonder if I should revisit SuperCollider instead...)
I had the same issue a few years ago and I used a combination of JACK, JNAJack and Beads. You can follow this Beads Google Group thread for more details.
At that time I had to use this version of Beads (2012-04-23), but those changes have probably made it into the main project by now.
For reference, here is the basic class I used:
import java.util.Arrays;
import org.jaudiolibs.beads.AudioServerIO;
import net.beadsproject.beads.analysis.featureextractors.FFT;
import net.beadsproject.beads.analysis.featureextractors.PowerSpectrum;
import net.beadsproject.beads.analysis.segmenters.ShortFrameSegmenter;
import net.beadsproject.beads.core.AudioContext;
import net.beadsproject.beads.core.AudioIO;
import net.beadsproject.beads.core.UGen;
import net.beadsproject.beads.ugens.Gain;
import processing.core.PApplet;
public class BeadsJNA extends PApplet {

  AudioContext ac;
  ShortFrameSegmenter sfs;
  PowerSpectrum ps;

  public void setup() {
    // defining audio context with 6 inputs and 6 outputs - adjust this based on your sound card / JACK setup
    ac = new AudioContext(new AudioServerIO.Jack(), 512, AudioContext.defaultAudioFormat(6, 6));

    // getting 4 audio inputs (channels 1, 2, 3, 4)
    UGen microphoneIn = ac.getAudioInput(new int[]{1, 2, 3, 4});

    Gain g = new Gain(ac, 1, 0.5f);
    g.addInput(microphoneIn);
    ac.out.addInput(g);

    println("no. of inputs: " + ac.getAudioInput().getOuts());

    // test: get some FFT power spectrum data from the output
    sfs = new ShortFrameSegmenter(ac);
    sfs.addInput(ac.out);
    FFT fft = new FFT();
    sfs.addListener(fft);
    ps = new PowerSpectrum();
    fft.addListener(ps);
    ac.out.addDependent(sfs);

    ac.start();
  }

  public void draw() {
    background(255);
    float[] features = ps.getFeatures();
    if (features != null) {
      for (int x = 0; x < width; x++) {
        int featureIndex = (x * features.length) / width;
        int barHeight = Math.min((int) (features[featureIndex] * height), height - 1);
        line(x, height, x, height - barHeight);
      }
    }
  }

  public static void main(String[] args) {
    PApplet.main(BeadsJNA.class.getSimpleName());
  }
}
I am using AKSequencer to create a sequence of notes that are played by an AKMIDISampler. My problem is that at higher tempos the first note always plays with a little delay, no matter what I do.
I tried prerolling the sequence, but it doesn't help. Substituting the AKMIDISampler with an AKSampler or an AKSamplePlayer (and using a callback track to play them) hasn't helped either, though it made me think that the problem probably resides in the sequencer or in the way I create the notes.
Here's an example of what I'm doing (I tried to make it as simple as I could):
import UIKit
import AudioKit
class ViewController: UIViewController {
let sequencer = AKSequencer()
let sampler = AKMIDISampler()
let callbackInst = AKCallbackInstrument()
var metronomeTrack : AKMusicTrack?
var callbackTrack : AKMusicTrack?
let numberOfBeats = 8
let tempo = 280.0
var startTime : TimeInterval = 0
override func viewDidLoad() {
super.viewDidLoad()
print("Begin setup.")
// Load .wav sample in AKMidiSampler
do {
try sampler.loadWav("tick")
} catch {
print("sampler.loadWav() failed")
}
// Create tracks for the sequencer and set midi outputs
metronomeTrack = sequencer.newTrack("metronomeTrack")
callbackTrack = sequencer.newTrack("callbackTrack")
metronomeTrack?.setMIDIOutput(sampler.midiIn)
callbackTrack?.setMIDIOutput(callbackInst.midiIn)
// Setup and start AudioKit
AudioKit.output = sampler
do {
try AudioKit.start()
} catch {
print("AudioKit.start() failed")
}
// Set sequencer tempo
sequencer.setTempo(tempo)
// Create the notes
var midiSequenceIndex = 0
for i in 0 ..< numberOfBeats {
// Add notes to tracks
metronomeTrack?.add(noteNumber: 60, velocity: 100, position: AKDuration(beats: Double(midiSequenceIndex)), duration: AKDuration(beats: 0.5))
callbackTrack?.add(noteNumber: MIDINoteNumber(midiSequenceIndex), velocity: 100, position: AKDuration(beats: Double(midiSequenceIndex)), duration: AKDuration(beats: 0.5))
print("Adding beat number \(i+1) at position: \(midiSequenceIndex)")
midiSequenceIndex += 1
}
// Set the callback
callbackInst.callback = {status, noteNumber, velocity in
if status == .noteOn {
let currentTime = Date().timeIntervalSinceReferenceDate
let noteDelay = currentTime - ( self.startTime + ( 60.0 / self.tempo ) * Double(noteNumber) )
print("Beat number: \(noteNumber) delay: \(noteDelay)")
} else if ( noteNumber == midiSequenceIndex - 1 ) && ( status == .noteOff) {
print("Sequence ended.\n")
self.toggleMetronomePlayback()
} else {return}
}
// Preroll the sequencer
sequencer.preroll()
print("Setup ended.\n")
}
@IBAction func playButtonPressed(_ sender: UIButton) {
toggleMetronomePlayback()
}
func toggleMetronomePlayback() {
if sequencer.isPlaying == false {
print("Playback started.")
startTime = Date().timeIntervalSinceReferenceDate
sequencer.play()
} else {
sequencer.stop()
sequencer.rewind()
}
}
}
Could anyone help? Thank you.
As Aure commented, the start up latency is a known problem. Even with preroll, there is still noticeable latency, especially at higher tempos.
But if you are using a looping sequence, I found that you can sometimes mitigate how noticeable the latency is by setting the 'starting point' of the sequence to a position after the final MIDI event, but within the loop length. If you can find a good position, you can get the latency effects out of the way before it loops back to your content.
Make sure to call setTime() before you need it (e.g., after stopping the sequence, not when you are ready to play), because the setTime() call itself can introduce about 200 ms of wonkiness.
Edit:
As an afterthought, you could do the same thing on a non-looping sequence by enabling looping and using an arbitrarily long sequence length. If you needed playback to stop at the end of the MIDI content, you could do this with an AKCallbackInstrument triggered by a MIDI event placed just after the final note.
After a bit of testing I found out that it is not actually the first note that plays off time; it is the subsequent notes that play early. Moreover, the number of notes that play exactly on time when starting the sequencer depends on the set tempo.
The funny thing is that if the tempo is below 400 bpm, one note is played on time and the others early; if it is between 400 and 800 bpm, two notes play correctly and the others early; and so on: for every 400 bpm increment you get one more note played correctly.
So, since the notes are played early rather than late, the solution that solved it for me is:
1) Use a sampler that is not connected directly to a track's MIDI output, but instead has its .play() method called inside a callback.
2) Keep track of when the sequencer was started.
3) In every callback, calculate when the note should play relative to the start time and record what time it actually is, so you can compute the offset.
4) Use the computed offset to dispatch_async your .play() call after that offset.
And that's it, I tested this on multiple devices and now all the notes play perfectly on time.
I had the same issue; preroll didn't help, but I managed to solve it with a dedicated sampler for the first notes.
I used a delay of about 0.06 seconds on the other sampler, and it works like a charm.
It's kind of a silly solution, but it did the job and I could go on with the project :)
// Workaround for the AudioKit bug where the first playback is out of time: everything except the first-note samplers goes through a short delay
let fixDelay = AKDelay()
fixDelay.dryWetMix = 1
fixDelay.feedback = 0
fixDelay.lowPassCutoff = 22000
fixDelay.time = 0.06
fixDelay.start()
let preDelayMixer = AKMixer()
let preFirstMixer = AKMixer()
[playbackSampler,vocalSampler] >>> preDelayMixer >>> fixDelay
[firstNoteVocalSampler, firstRoundPlaybackSampler] >>> preFirstMixer
[fixDelay,preFirstMixer] >>> endMixer
I have made some very simple bots for some web-based games, and I want to move on to other games that require some more advanced features.
I have used pyautogui to bot in web-based games, and it has been easy because all the images are static (not moving). But when I want to click something in a game that is moving, such as a character or a creature running around, pyautogui is not really efficient, because it looks for pixels/colors that match exactly.
Can anyone suggest references, libraries, or functions that can detect a model or character even while that character is moving?
Here is an example of something I'd like to click on:
Moving creature Gif image
Thanks.
I noticed the image you linked to is a gif of a mob from World of Warcraft.
As a hobby I have been designing bots for MMO's on and off over the past few years.
There are no specific Python libraries that I'm aware of that will let you do what you're asking; however, taking WoW as an example...
If you are using Windows as your OS, you will be using Windows API calls to manipulate your game's target process (here wow.exe).
There are two primary approaches to this:
1) Out of process - you do everything by reading memory values from known offsets and respond by using the Windows API to simulate mouse and/or keyboard input (your choice).
1a) I will quickly mention that, although for most modern games it is not an option (due to built-in anti-cheating code), you can also manipulate the game by writing directly to memory. In WAR (Warhammer Online), while it was still live, I made a grind bot that wrote to memory wherever possible, since they had not enabled PunkBuster to protect the game from this. WoW is protected by the infamous "Warden."
2) DLL injection - WoW has a built-in API written in Lua. As a result, over the years many hobbyist programmers and hackers have taken apart the binary to reveal its inner workings. You might check out the Memory Editing forum on ownedcore.com if you want to work with WoW. Many have shared the known offsets in the binary where one can hook into Lua functions and, as a result, perform in-game actions directly and also tap into needed information. Some have even shared their own DLLs.
You specifically mentioned clicking in-game 3D objects. I will close by sharing a snippet from ownedcore that allows one to do just that. This example uses both memory offsets and in-game function calls:
using System;
using SlimDX;
namespace VanillaMagic
{
public static class Camera
{
internal static IntPtr BaseAddress
{
get
{
var ptr = WoW.hook.Memory.Read<IntPtr>(Offsets.Camera.CameraPtr, true);
return WoW.hook.Memory.Read<IntPtr>(ptr + Offsets.Camera.CameraPtrOffset);
}
}
private static Offsets.CGCamera cam => WoW.hook.Memory.Read<Offsets.CGCamera>(BaseAddress);
public static float X => cam.Position.X;
public static float Y => cam.Position.Y;
public static float Z => cam.Position.Z;
public static float FOV => cam.FieldOfView;
public static float NearClip => cam.NearClip;
public static float FarClip => cam.FarClip;
public static float Aspect => cam.Aspect;
private static Matrix Matrix
{
get
{
var bCamera = WoW.hook.Memory.ReadBytes(BaseAddress + Offsets.Camera.CameraMatrix, 36);
var m = new Matrix();
m[0, 0] = BitConverter.ToSingle(bCamera, 0);
m[0, 1] = BitConverter.ToSingle(bCamera, 4);
m[0, 2] = BitConverter.ToSingle(bCamera, 8);
m[1, 0] = BitConverter.ToSingle(bCamera, 12);
m[1, 1] = BitConverter.ToSingle(bCamera, 16);
m[1, 2] = BitConverter.ToSingle(bCamera, 20);
m[2, 0] = BitConverter.ToSingle(bCamera, 24);
m[2, 1] = BitConverter.ToSingle(bCamera, 28);
m[2, 2] = BitConverter.ToSingle(bCamera, 32);
return m;
}
}
public static Vector2 WorldToScreen(float x, float y, float z)
{
var Projection = Matrix.PerspectiveFovRH(FOV * 0.5f, Aspect, NearClip, FarClip);
var eye = new Vector3(X, Y, Z);
var lookAt = new Vector3(X + Matrix[0, 0], Y + Matrix[0, 1], Z + Matrix[0, 2]);
var up = new Vector3(0f, 0f, 1f);
var View = Matrix.LookAtRH(eye, lookAt, up);
var World = Matrix.Identity;
var WorldPosition = new Vector3(x, y, z);
var ScreenPosition = Vector3.Project(WorldPosition, 0f, 0f, WindowHelper.WindowWidth, WindowHelper.WindowHeight, NearClip, FarClip, World*View*Projection);
return new Vector2(ScreenPosition.X, ScreenPosition.Y-20f);
} // WorldToScreen
} // class Camera
} // namespace VanillaMagic
If the mob's colors are somewhat easy to differentiate from the background, you can use pyautogui's pixel matching.
import pyautogui
screen = pyautogui.screenshot()
# Use this to scan the area of the screen where the mob appears.
(R, G, B) = screen.getpixel((x, y))
# Compare to mob color
If colors vary you can use color tolerance:
pyautogui.pixelMatchesColor(x, y, (R, G, B), tolerance=5)
I am trying to create a program that rolls 2 dice. Then the user can say Yes to roll the dice again or say No to stop rolling the dice.
import java.util.*;
public class Dice
{
public static void main(String[] args)
{
Random dice1 = new Random();
Scanner in = new Scanner(System.in);
//variables
int die1;
int die2;
byte playagain=1;
byte Yes = 1;
byte No = 0;
int total;
int stop = 0;
//Want to find a way to change words into #s
String start = Yes;
while(stop<5 && start<Yes){
stop+=1;
die1=dice1.nextInt(6)+1;
die2=dice1.nextInt(6)+1;
total=die1 + die2;
System. out. println("You rolled a " + total+ ".");
System. out. println("Do you want to play again?");
System. out. println("Type Yes to keep playing you and No to stop.");
/*I want people to be able to input Yes and that equal a # so I can use it in the While loop. Same with No.*/
start=in.next();
System. out. println("start is at " + start);
}
}
}
I have looked throughout the internet and could not find any help, so that is why I am asking.
If you mean you want to read an int from your scanner, try using Scanner.nextInt http://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextInt()
You can use hasNextInt to determine if using nextInt will work or not: http://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#hasNextInt()
If you already have the string and want to turn it into an int, try Integer.parseInt http://docs.oracle.com/javase/7/docs/api/java/lang/Integer.html#parseInt(java.lang.String)
In general, if you want to know if a function that does something basic exists, you should check the APIs in java.lang, java.util, java.io, etc (depending on where you think it would be)
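For example, the loop in the question could skip the word-to-number conversion entirely and compare the String itself; here is a minimal sketch of that approach (just one way to do it, using the same structure as the question):
import java.util.Random;
import java.util.Scanner;

public class Dice {
    public static void main(String[] args) {
        Random dice = new Random();
        Scanner in = new Scanner(System.in);
        String answer = "Yes";                                      // play at least one round
        int stop = 0;
        while (stop < 5 && answer.equalsIgnoreCase("Yes")) {
            stop += 1;
            int total = (dice.nextInt(6) + 1) + (dice.nextInt(6) + 1);
            System.out.println("You rolled a " + total + ".");
            System.out.println("Type Yes to keep playing and No to stop.");
            answer = in.next();                                     // read the word itself; no numeric conversion needed
        }
        in.close();
    }
}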
I am trying to generate a tone to the sound card (frequency: 1950 Hz, duration: 40 ms, level: -30 dB, right channel, on stream 1). Any recommendations on how to accomplish this using C++ or C#? Are there any libraries (C++ or C#) for generating such a precise tone?
David, playing audio to the speakers is built right into .NET (I think since the .NET 2.0 Framework). Using System.Media.SoundPlayer you can play a sound from a memory stream that you build yourself (in WAV format). Here is a function I coded that plays a simple frequency for a certain duration. Regarding the decibels and sending it to the sound card, I don't really understand what specifics you are referring to: my understanding is that decibels are simply a measure of how loud a sound is after it has been reproduced by the speakers, so the volume control on the speakers affects what decibel level your sounds produce, and sending a certain decibel level to the sound card doesn't quite make sense to me. Maybe you need something more detailed and this won't work for you, but maybe you can run with it and get it to work for what you need; it may even be almost exactly what you are asking for.
The process used in this code lets you build any audio you want and then plays it. You could create 2 sine waves, or many more, or triangle waves, or even speech synthesis this way. The method calculates the sound samples and then plays them, so you need to code what each audio sample should be at each moment in time. WAV allows stereo sound too, but this code sample only uses mono; if you want stereo, the code just needs to be modified to generate the bytes for a stereo WAV format instead, and I expect that would not be too difficult.
Happy coding!
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Windows.Forms;
public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
{
var mStrm = new MemoryStream();
BinaryWriter writer = new BinaryWriter(mStrm);
const double TAU = 2 * Math.PI;
int formatChunkSize = 16;
int headerSize = 8;
short formatType = 1;
short tracks = 1;
int samplesPerSecond = 44100;
short bitsPerSample = 16;
short frameSize = (short)(tracks * ((bitsPerSample + 7) / 8));
int bytesPerSecond = samplesPerSecond * frameSize;
int waveSize = 4;
int samples = (int)((decimal)samplesPerSecond * msDuration / 1000);
int dataChunkSize = samples * frameSize;
int fileSize = waveSize + headerSize + formatChunkSize + headerSize + dataChunkSize;
// var encoding = new System.Text.UTF8Encoding();
writer.Write(0x46464952); // = encoding.GetBytes("RIFF")
writer.Write(fileSize);
writer.Write(0x45564157); // = encoding.GetBytes("WAVE")
writer.Write(0x20746D66); // = encoding.GetBytes("fmt ")
writer.Write(formatChunkSize);
writer.Write(formatType);
writer.Write(tracks);
writer.Write(samplesPerSecond);
writer.Write(bytesPerSecond);
writer.Write(frameSize);
writer.Write(bitsPerSample);
writer.Write(0x61746164); // = encoding.GetBytes("data")
writer.Write(dataChunkSize);
{
double theta = frequency * TAU / (double)samplesPerSecond;
// 'volume' is UInt16 with range 0 thru UInt16.MaxValue ( = 65 535)
// 'amp' must stay within 0 thru Int16.MaxValue ( = 32 767)
double amp = volume >> 2; // shifting right by 2 divides by 4, which keeps amp safely inside the Int16 range
for (int step = 0; step < samples; step++)
{
short s = (short)(amp * Math.Sin(theta * (double)step));
writer.Write(s);
}
}
mStrm.Seek(0, SeekOrigin.Begin);
new System.Media.SoundPlayer(mStrm).Play();
writer.Close();
mStrm.Close();
} // public static void PlayBeep(UInt16 frequency, int msDuration, UInt16 volume = 16383)
NAudio provides a robust audio library for .NET.
NAudio is an open source .NET audio and MIDI library, containing dozens of useful audio related classes intended to speed development of audio related utilities in .NET. It has been in development since 2002 and has grown to include a wide variety of features. While some parts of the library are relatively new and incomplete, the more mature features have undergone extensive testing and can be quickly used to add audio capabilities to an existing .NET application. NAudio can be quickly added to your .NET application using NuGet.
Here's an article that walks step-by-step through using NAudio to create a sine wave. You can create the sine wave with any desired frequency, for any desired duration:
http://msdn.microsoft.com/en-us/magazine/ee309883.aspx
I want to use custom fonts in my J2ME application, so I created a PNG file containing all the needed glyphs, plus an array of glyph widths and another of glyph offsets into the PNG.
Now I want to render text in my app using this font, within a GameCanvas class. But when I use the following code, rendering text on a real device is very slow.
Note: the text is encoded (for some purposes) into bytes and stored in this.text; 242 = [space], 241 = [\n] and 243 = [\r].
int textIndex = 0;
while (textIndex < this.text.length) {
  int index = this.text[textIndex] & 0xFF;

  if (index > 243) {
    textIndex++;                       // skip unknown codes (otherwise the loop never advances)
    continue;
  } else if (index == 242) {
    lineLeft += 3;                     // space
  } else if (index == 241 || index == 243) {
    top += font.getHeight();           // new line
    lineLeft = 0;
    textIndex++;                       // advance past the control code before continuing
    continue;
  } else {
    if (lineLeft + widths[index] > getWidth()) {   // wrap when the glyph would run past the right edge
      lineLeft = 0;
      top += font.getHeight();
    }
    int left = starts[index];
    int charWidth = widths[index];
    try {
      bg.drawRegion(font, left, 0, charWidth, font.getHeight(), 0, lineLeft, top, Graphics.LEFT | Graphics.TOP);
    } catch (Exception ee) {
    }
    lineLeft += charWidth;             // advance the pen after drawing
  }
  textIndex++;
}
Can anyone help me improve the performance and speed of this code?
At the end, sorry for my bad English, and thanks in advance. :)
Edit: I changed line
bg.drawRegion(font, left, 0, charWidth, font.getHeight(), 0, lineLeft, top, Graphics.LEFT|Graphics.TOP);
To:
bg.clipRect(left, top, charWidth, font.getHeight());
bg.drawImage(font, lineLeft - left, top, 0);
bg.setClip(0, 0, getWidth(), getHeight());
but there was no difference in speed.
Any help would be appreciated. With this code the text appears after 2-3 seconds on a real device, and I want to reduce that to milliseconds; this is very important for me.
Can I use threads? If yes, how?
I can't say for sure why your code's performance is not good on a real device.
But how about referring to a well-known open source J2ME library to check its text drawing implementation, for example LWUIT:
http://java.net/projects/lwuit/sources/svn/content/LWUIT_1_5/UI/src/com/sun/lwuit/CustomFont.java?rev=1628
You can find that library's font drawing implementation at the above link. It uses drawImage rather than drawRegion.
I would advise you to look into this library. The implementation is quite good and robust, and makes use of industry standard design patterns (the Flyweight pattern, predominantly).
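To illustrate the idea, here is a rough per-glyph helper in the spirit of what LWUIT does, reusing the font, starts, and widths names from your question (this is only a sketch, not LWUIT's actual code):
// font   : the PNG strip containing all glyphs (javax.microedition.lcdui.Image)
// starts : x offset of each glyph inside the strip
// widths : width of each glyph
private void drawChar(Graphics g, int index, int x, int y) {
    int cx = g.getClipX(), cy = g.getClipY();
    int cw = g.getClipWidth(), ch = g.getClipHeight();
    g.clipRect(x, y, widths[index], font.getHeight());                      // clip to the glyph cell on screen
    g.drawImage(font, x - starts[index], y, Graphics.TOP | Graphics.LEFT);  // slide the strip so the right glyph lands in the clip
    g.setClip(cx, cy, cw, ch);                                              // restore the caller's clip
}
Note that the clip rectangle is set at the on-screen position (lineLeft in your code) while the strip image is drawn shifted left by the glyph's offset; in your edited snippet the clip was placed at left, the offset inside the strip, which puts the glyph in the wrong place without helping the speed.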