How do I set volume in SuperCollider in decibels?

I have a simple SinOsc which plays a 432 Hz tone. I want to be able to set that tone to -97 dB. Here's what I have so far:
{
SinOsc.ar(432, 0, 0.01 /*edit this for volume*/, 0)
}.play;
Even though I can see how to edit volume, I don't see a way to set the precise dB level.
In case you are wondering why I am doing this, I need a tone to test 24-bit vs. 16-bit audio.
How can I set the precise dB level or get monitoring to show me what level I am at?

Ah, cool to see a SuperCollider question in Top Questions.
I believe the method you're looking for is .dbamp. See the docs.
Example (from The SuperCollider Book, Chapter 2):
/* Figure 2.6 */
(
SynthDef(\UGen_ex6, {arg gate = 1, roomsize = 200, revtime = 450;
    var src, env, gverb;
    env = EnvGen.kr(Env([0, 1, 0], [1, 4], [4, -4], 1), gate, doneAction: 2);
    src = Resonz.ar(
        Array.fill(4, {Dust.ar(6)}),
        1760 * [1, 2.2, 3.95, 8.76] +
            Array.fill(4, {LFNoise2.kr(1, 20)}),
        0.01).sum * 30.dbamp;
    gverb = GVerb.ar(
        src,
        roomsize,
        revtime,
        // feedback loop damping
        0.99,
        // input bw of signal
        LFNoise2.kr(0.1).range(0.9, 0.7),
        // spread
        LFNoise1.kr(0.2).range(0.2, 0.6),
        // almost no direct source
        -60.dbamp,
        // some early reflection
        -18.dbamp,
        // lots of the tail
        3.dbamp,
        roomsize);
    Out.ar(0, gverb * env)
}).add;
)
a = Synth(\UGen_ex6);

If that 0.01 value is the gain, then simply replace it with the result of
10^(-97/20) = 0.00001412537
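If you want to sanity-check that number, the conversion is amp = 10^(dB/20), which is exactly what -97.dbamp evaluates to. A minimal sketch of the arithmetic in C++ (the helper name is mine):
#include <cmath>
#include <cstdio>

// Decibels (relative to full scale) to linear amplitude: amp = 10^(dB / 20).
// This is the same conversion SuperCollider's .dbamp performs.
static double dbToAmp(double db)
{
    return std::pow(10.0, db / 20.0);
}

int main()
{
    std::printf("%.10e\n", dbToAmp(-97.0)); // about 1.4125e-05, i.e. 0.0000141254
    return 0;
}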

Related

What may be wrong about my use of SetGraphicsRootDescriptorTable in D3D12?

For 7 meshes that I would like to draw, I load 7 textures and create the corresponding SRVs in a descriptor heap. Then there's another SRV for IMGUI. There are also 3 CBVs, for triple buffer usage. So it should be like: | srv x7 | srv x1 | cbv x3| in the heap.
The problem I met is that when I called SetGraphicsRootDescriptorTable on range 0, which should be an SRV (which is the texture actually), something went wrong. Here's the code:
ID3D12DescriptorHeap* ppHeaps[] = { pCbvSrvDescriptorHeap, pSamplerDescriptorHeap };
pCommandList->SetDescriptorHeaps(_countof(ppHeaps), ppHeaps);
pCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pCommandList->IASetIndexBuffer(pIndexBufferViewDesc);
pCommandList->IASetVertexBuffers(0, 1, pVertexBufferViewDesc);
CD3DX12_GPU_DESCRIPTOR_HANDLE srvHandle(pCbvSrvDescriptorHeap->GetGPUDescriptorHandleForHeapStart(), indexMesh, cbvSrvDescriptorSize);
pCommandList->SetGraphicsRootDescriptorTable(0, srvHandle);
pCommandList->SetGraphicsRootDescriptorTable(1, pSamplerDescriptorHeap->GetGPUDescriptorHandleForHeapStart());
If indexMesh is 5, SetGraphicsRootDescriptorTable causes the following error, though the render output still seems fine. And when indexMesh is 6, the error still occurs, together with another identical error except that the offset 8 turns into 9.
D3D12 ERROR: CGraphicsCommandList::SetGraphicsRootDescriptorTable: Specified GPU Descriptor Handle (ptr = 0x400750000002c0 at 8 offsetInDescriptorsFromDescriptorHeapStart) of type CBV, for Root Signature (0x0000020A516E8BF0:'m_rootSignature')'s Descriptor Table (at Parameter Index [0])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_SRV) have mismatching types. All descriptors of descriptor ranges declared STATIC (not-DESCRIPTORS_VOLATILE) in a root signature must be initialized prior to being set on the command list. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
That is really weird, because I suppose the only thing that could cause this is cbvSrvDescriptorSize being wrong. It is 64, and it is set by m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); which I think should work. Besides, if I set it to another value such as 32, the application crashes.
So if cbvSrvDescriptorSize is right, why would the correct indexMesh cause the wrong offset of the descriptor handle? The consequence of this error is that it seems to affect my CBV, which breaks the render output. Any suggestions would be appreciated, thanks!
Thanks to Chuck's suggestion, here's the code for the root signature:
CD3DX12_DESCRIPTOR_RANGE1 ranges[3];
ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 4, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 0);
ranges[2].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
CD3DX12_ROOT_PARAMETER1 rootParameters[3];
rootParameters[0].InitAsDescriptorTable(1, &ranges[0], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[1].InitAsDescriptorTable(1, &ranges[1], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[2].InitAsDescriptorTable(1, &ranges[2], D3D12_SHADER_VISIBILITY_ALL);
CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC rootSignatureDesc;
rootSignatureDesc.Init_1_1(_countof(rootParameters), rootParameters, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
ThrowIfFailed(D3DX12SerializeVersionedRootSignature(&rootSignatureDesc, featureData.HighestVersion, &signature, &error));
ThrowIfFailed(m_device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_rootSignature)));
NAME_D3D12_OBJECT(m_rootSignature);
And here are some declarations in the pixel shader:
Texture2DArray g_textures : register(t0);
SamplerState g_sampler : register(s0);
cbuffer cb0 : register(b0)
{
float4x4 g_mWorldViewProj;
float3 g_lightPos;
float3 g_eyePos;
...
};
It's not very often I come across the exact problem I'm experiencing (my code is almost verbatim) and it's an in-progress post! Let's suffer together.
My problem turned out to be the calls to CreateConstantBufferView()/CreateShaderResourceView() - I was passing srvHeap->GetCPUDescriptorHandleForHeapStart() as the destDescriptor handle. These need to be offset to match your table layout (the offsetInDescriptorsFromTableStart param of CD3DX12_DESCRIPTOR_RANGE1).
I found it easier to just maintain one D3D12_CPU_DESCRIPTOR_HANDLE to the heap and just increment handle.ptr after every call to CreateSomethingView() which uses that heap.
CD3DX12_DESCRIPTOR_RANGE1 rangesV[1] = {{}};
CD3DX12_DESCRIPTOR_RANGE1 rangesP[1] = {{}};
// Vertex
rangesV[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 0); // b0 at desc offset 0
// Pixel
rangesP[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 1); // t0 at desc offset 1
CD3DX12_ROOT_PARAMETER1 rootParameters[2] = {{}};
rootParameters[0].InitAsDescriptorTable(1, &rangesV[0], D3D12_SHADER_VISIBILITY_VERTEX);
rootParameters[1].InitAsDescriptorTable(1, &rangesP[0], D3D12_SHADER_VISIBILITY_PIXEL);
D3D12_CPU_DESCRIPTOR_HANDLE srvHeapHandle = srvHeap->GetCPUDescriptorHandleForHeapStart();
// ----
device->CreateConstantBufferView(&cbvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
// ----
device->CreateShaderResourceView(texture, &srvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
Perhaps an enum would help keep it tidier and more maintainable, though. I'm still experimenting.
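For example, something along these lines might do it (a rough sketch reusing the names from the snippet above; the enum and its slot values are hypothetical):
// Hypothetical slot layout for the CBV/SRV heap used above.
enum HeapSlot : UINT
{
    HeapSlot_CBV = 0, // b0 at descriptor offset 0
    HeapSlot_SRV = 1, // t0 at descriptor offset 1
    HeapSlot_Count
};
// Create a view at a named slot instead of at a hand-counted offset.
CD3DX12_CPU_DESCRIPTOR_HANDLE handle(
    srvHeap->GetCPUDescriptorHandleForHeapStart(),
    HeapSlot_SRV,
    device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV));
device->CreateShaderResourceView(texture, &srvDesc, handle);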

Clicking a moving object in a game

I have made some very simple bots for some web-based games, and I wanted to move on to other games that require some more advanced features.
I have used pyautogui to bot in web-based games, and it has been easy because all the images are static (not moving). But when I want to click something that is moving in a game, such as a character or a creature running around, pyautogui is not really effective because it looks for pixels/colors that match exactly.
Can you suggest any references, libraries, or functions that can detect a model or character even though the character is moving?
Here is an example of something I'd like to click on:
Moving creature Gif image
Thanks.
I noticed the image you linked to is a gif of a mob from world of warcraft.
As a hobby I have been designing bots for MMO's on and off over the past few years.
There are no specific Python libraries that I'm aware of that will let you do what you're asking; however, taking WoW as an example...
If you are using Windows as your OS, you will be using Windows API calls to manipulate your game's target process (here wow.exe).
There are two primary approaches to this:
1) Out of process - you do everything by reading memory values from known offsets and respond by using the Windows API to simulate mouse and/or keyboard input (your choice).
1a) I will quickly mention that although for most modern games it is not an option (due to built-in anti-cheating code), you can also manipulate the game by writing directly to memory. In WAR (Warhammer Online), when it was still live, I made a grind bot that wrote to memory whenever possible, as they had not enabled PunkBuster to protect the game from this. WoW is protected by the infamous "Warden."
2) DLL injection - WoW has a built-in API written in Lua. As a result, over the years, many hobbyist programmers and hackers have taken apart the binary to reveal its inner workings. You might check out the Memory Editing forum on ownedcore.com if you want to work with WoW. Many have shared the known offsets in the binary where one can hook into Lua functions and, as a result, perform in-game actions directly and also tap into needed information. Some have even shared their own DLLs.
You specifically mentioned clicking in-game 3d objects. I will close by sharing with you a snippet shared on ownedcore that allows one to do just this. This example encompasses use of both memory offsets and in-game function calls:
using System;
using SlimDX;

namespace VanillaMagic
{
    public static class Camera
    {
        internal static IntPtr BaseAddress
        {
            get
            {
                var ptr = WoW.hook.Memory.Read<IntPtr>(Offsets.Camera.CameraPtr, true);
                return WoW.hook.Memory.Read<IntPtr>(ptr + Offsets.Camera.CameraPtrOffset);
            }
        }

        private static Offsets.CGCamera cam => WoW.hook.Memory.Read<Offsets.CGCamera>(BaseAddress);

        public static float X => cam.Position.X;
        public static float Y => cam.Position.Y;
        public static float Z => cam.Position.Z;
        public static float FOV => cam.FieldOfView;
        public static float NearClip => cam.NearClip;
        public static float FarClip => cam.FarClip;
        public static float Aspect => cam.Aspect;

        private static Matrix Matrix
        {
            get
            {
                var bCamera = WoW.hook.Memory.ReadBytes(BaseAddress + Offsets.Camera.CameraMatrix, 36);
                var m = new Matrix();
                m[0, 0] = BitConverter.ToSingle(bCamera, 0);
                m[0, 1] = BitConverter.ToSingle(bCamera, 4);
                m[0, 2] = BitConverter.ToSingle(bCamera, 8);
                m[1, 0] = BitConverter.ToSingle(bCamera, 12);
                m[1, 1] = BitConverter.ToSingle(bCamera, 16);
                m[1, 2] = BitConverter.ToSingle(bCamera, 20);
                m[2, 0] = BitConverter.ToSingle(bCamera, 24);
                m[2, 1] = BitConverter.ToSingle(bCamera, 28);
                m[2, 2] = BitConverter.ToSingle(bCamera, 32);
                return m;
            }
        }

        public static Vector2 WorldToScreen(float x, float y, float z)
        {
            var Projection = Matrix.PerspectiveFovRH(FOV * 0.5f, Aspect, NearClip, FarClip);
            var eye = new Vector3(X, Y, Z);
            var lookAt = new Vector3(X + Matrix[0, 0], Y + Matrix[0, 1], Z + Matrix[0, 2]);
            var up = new Vector3(0f, 0f, 1f);
            var View = Matrix.LookAtRH(eye, lookAt, up);
            var World = Matrix.Identity;
            var WorldPosition = new Vector3(x, y, z);
            var ScreenPosition = Vector3.Project(WorldPosition, 0f, 0f, WindowHelper.WindowWidth, WindowHelper.WindowHeight, NearClip, FarClip, World * View * Projection);
            return new Vector2(ScreenPosition.X, ScreenPosition.Y - 20f);
        }
    }
}
If the mob's colors are somewhat easy to differentiate from the background, you can use pyautogui pixel matching.
import pyautogui
screen = pyautogui.screenshot()
# Use this to scan the area of the screen where the mob appears.
(R, G, B) = screen.getpixel((x, y))
# Compare to mob color
If colors vary you can use color tolerance:
pyautogui.pixelMatchesColor(x, y, (R, G, B), tolerance=5)

Linux framebuffers with ARGB32. Alpha? How does a framebuffer support Alpha?

After looking at the source for Qt, it seems that it, and framebuffers in general, support alpha transparency.
static QImage::Format determineFormat(const fb_var_screeninfo &info, int depth)
{
    const fb_bitfield rgba[4] = { info.red, info.green,
                                  info.blue, info.transp };
    QImage::Format format = QImage::Format_Invalid;
    switch (depth) {
    case 32: {
        const fb_bitfield argb8888[4] = {{16, 8, 0}, {8, 8, 0},
                                         {0, 8, 0}, {24, 8, 0}};
        const fb_bitfield abgr8888[4] = {{0, 8, 0}, {8, 8, 0},
                                         {16, 8, 0}, {24, 8, 0}};
        if (memcmp(rgba, argb8888, 4 * sizeof(fb_bitfield)) == 0) {
            format = QImage::Format_ARGB32;
        } else if (memcmp(rgba, argb8888, 3 * sizeof(fb_bitfield)) == 0) {
            format = QImage::Format_RGB32;
        } else if (memcmp(rgba, abgr8888, 3 * sizeof(fb_bitfield)) == 0) {
            format = QImage::Format_RGB32;
            // pixeltype = BGRPixel;
        }
        break;
    }
    // code omitted
    }
What does it mean if a framebuffer supports alpha? Don't framebuffers typically represent monitors?
I am investigating the possibility of sending the alpha channel out over HDMI for video overlay on an FPGA chip, similar to this user's question.
I am wondering: if I have an external monitor that somehow registers itself within Linux as having a depth of 32 bits with an alpha channel, will the alpha get sent out over HDMI?
The alpha component is not transmitted to the monitor. But,
Alpha might be used by the compositor, allowing a window on screen to be transparent. For example, you can use the alpha channel in a WebGL framebuffer to show the document underneath the WebGL canvas.
You might use the alpha component in your application, even if the compositor doesn't use it.
It is more convenient to waste a byte of memory per pixel than it is to have an odd-sized pixel. Hardware framebuffers support a variety of 1, 2, and 4-channel formats, but only a few 3-channel formats.
The HDMI cable itself can carry a small variety of different video formats, such as RGB and YCbCr, with variations in subsampling and bit depth. The advantage to even-sized pixel formats does not apply to streamed data.
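If you want to see what the kernel actually reports for your framebuffer (the same fb_var_screeninfo the Qt code above inspects), here is a minimal sketch, assuming a Linux system that exposes /dev/fb0:
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/fb0", O_RDONLY);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo info = {};
    if (ioctl(fd, FBIOGET_VSCREENINFO, &info) < 0) { perror("FBIOGET_VSCREENINFO"); close(fd); return 1; }

    // The transp bitfield is the alpha channel; length 0 means no alpha bits are stored.
    std::printf("bits per pixel: %u\n", info.bits_per_pixel);
    std::printf("red:    offset %u, length %u\n", info.red.offset, info.red.length);
    std::printf("green:  offset %u, length %u\n", info.green.offset, info.green.length);
    std::printf("blue:   offset %u, length %u\n", info.blue.offset, info.blue.length);
    std::printf("transp: offset %u, length %u\n", info.transp.offset, info.transp.length);

    close(fd);
    return 0;
}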

How to programmatically stop sound playback in SuperCollider

I have the following piece of code, which should play a synth function for one second, stop it, play it again after one second, and so on:
t = Task({{
var a;
a = {[0,0,SinOsc.ar(852, 0, 2.2)+SinOsc.ar(1633, 0, 2.2), 0]} ;
a.play;
1.wait;
a.release(5);
1.wait;
}.loop});
t.play;
The problem is, a doesn't stop playing; instead, additional a's get started on the server. What is wrong here, and how can a playing synth be stopped?
In that code, a is a function, so a.release does not tell the synth to stop playing.
Instead, why not write a SynthDef with a 5-second-long envelope on it:
SynthDef(\sines, {arg out = 0, release_dur, gate =1, amp = 0.2;
var sines, env;
env = EnvGen.kr(Env.asr(0.01, amp, release_dur), gate, doneAction:2);
sines = SinOsc.ar(852, 0, 2.2)+SinOsc.ar(1633, 0, 2.2);
Out.ar(out, sines * env);
}).add
t = Task({{
var a;
a = Synth.new(\sines, [\release_dur, 5, \out, 0, \amp, 0.2, \gate, 1]);
1.wait;
a.set(\gate, 0);
1.wait;
}.loop});
t.play;
We'll pass the release duration in as an argument, so you can set it in the Task's a = Synth line.
Then, when you want to end the synth, send it a gate of 0. This tells the envelope to release, which it does over 5 seconds; then the doneAction removes the synth from the server. Note that you will have more than one synth playing at once, because the release time is longer than your wait time.
Also, you've set the amplitude of your sines to be greater than 1. I did not change this in the SynthDef above.

Am I doing something wrong, or do Intel graphics cards suck that badly?

I have
VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10) on Ubuntu 10.10 Linux.
I'm rendering a single static VBO per frame. The VBO has 30,000 triangles, with 3 lights and one texture, and I'm getting 15 FPS.
Are Intel cards really that bad, or am I doing something wrong?
The drivers are the standard open-source drivers from Intel.
My code:
void init() {
glGenBuffersARB(4, vbos);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, vertXYZ, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 4, colorRGBA, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 3, normXYZ, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * verticesNum * 2, texXY, GL_STATIC_DRAW_ARB);
}
void draw() {
glPushMatrix();
const Vector3f O = ps.getPosition();
glScalef(scaleXYZ[0], scaleXYZ[1], scaleXYZ[2]);
glTranslatef(O.x() - originXYZ[0], O.y() - originXYZ[1], O.z()
- originXYZ[2]);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[0]);
glVertexPointer(3, GL_FLOAT, 0, 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[1]);
glColorPointer(4, GL_FLOAT, 0, 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[2]);
glNormalPointer(GL_FLOAT, 0, 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbos[3]);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
texture->bindTexture();
glDrawArrays(GL_TRIANGLES, 0, verticesNum);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); //disabling VBO
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glPopMatrix();
}
EDIT: maybe it's not clear - the initialization is in a different function and is only called once.
A few hints:
With that number of vertices you should interleave the arrays. Vertex caches usually don't hold more than 1000 entries. Interleaving the data of course implies that the data is held in a single VBO.
Using glDrawArrays is suboptimal if there are a lot of shared vertices, which is likely the case for a (static) terrain. Instead, draw using glDrawElements (see the sketch after these hints). You can also use the index array to implement some cheap LOD.
Experiment with the number of indices in each batch handed to glDrawElements. Try batches of at most 2^14, 2^15 or 2^16 indices. This is again to relieve cache pressure.
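To make the first two hints concrete, here is a rough sketch using the same legacy ARB entry points as the question; the Vertex layout and the buffers/vertices/indices/indicesNum names are only illustrative:
// Interleaved layout: one struct per vertex instead of four separate arrays.
// (offsetof needs <cstddef>.)
struct Vertex {
    GLfloat pos[3];
    GLfloat color[4];
    GLfloat normal[3];
    GLfloat tex[2];
};
GLuint buffers[2];

// init: one interleaved VBO plus one index buffer
glGenBuffersARB(2, buffers);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buffers[0]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(Vertex) * verticesNum, vertices, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, buffers[1]);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof(GLushort) * indicesNum, indices, GL_STATIC_DRAW_ARB);

// draw: all pointers share buffers[0], offset by field, with sizeof(Vertex) as the stride
glBindBufferARB(GL_ARRAY_BUFFER_ARB, buffers[0]);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, buffers[1]);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, pos));
glColorPointer(4, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, color));
glNormalPointer(GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, normal));
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, tex));
glDrawElements(GL_TRIANGLES, indicesNum, GL_UNSIGNED_SHORT, 0);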
Oh, and at the end of your draw() you have:
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
I think you meant the last two of those to be glDisableClientState.
Make sure your system has OpenGL acceleration enabled:
$ glxinfo | grep rendering
direct rendering: Yes
If you get 'no', then you don't have OpenGL acceleration.
Thanks for the answers.
Yeah, I have direct rendering on, according to glxinfo. In glxgears I get something like 150 FPS, and games like Warzone or Glest work fast enough. So the problem is probably in my code.
I'll buy a real graphics card eventually anyway, but I wanted my game to work on integrated graphics too; that's why I posted this question.
