How do I successfully draw a bitmap onto another bitmap? - graphics

I've been trying to do some map stuff for a game I'm working on. I'm looking to display 9 map files, which I've already read and processed; I have a function which converts these map files into Bitmap objects.
Basically I have an array of bitmaps and a large bitmap. Each small bitmap is 256x256 and the large one is 768x768, and I want to draw them at the correct offsets so it all lines up: each bitmap sits next to its predecessor until the end of the row, at which point the next row of 3 starts, and so on.
What I'm doing at the minute:
for (int i = 0; i < 9; i++)
{
bmpList[i] = getMapFile(sb[i].ToString());
}
using (Graphics g = Graphics.FromImage(bmp))
{
g.DrawImage(bmpList[0], 256, 0, 256, 256);
g.DrawImage(bmpList[1], 512, 0, 256, 256);
g.DrawImage(bmpList[2], 0, 256, 256, 256);
g.DrawImage(bmpList[3], 256, 256, 256, 256);
g.DrawImage(bmpList[4], 512, 256, 256, 256);
g.DrawImage(bmpList[5], 0, 512, 256, 256);
g.DrawImage(bmpList[6], 256, 512, 256, 256);
g.DrawImage(bmpList[7], 512, 512, 256, 256);
g.DrawImage(bmpList[8], 0, 0, 256, 256);
}
The result is that bmp is just a blank image (when I convert it to an Image.Source, it displays nothing). To prove the rest of the code works, I've done this:
Rectangle rect = new Rectangle();
rect.Height = 256;
rect.Width = 512;
rect.X = 0;
rect.Y = 0;
Objects.Colour color = colourlist[0];
using (Graphics g = Graphics.FromImage(bmp))
{
g.FillRectangle(new SolidBrush(Color.FromArgb(color.r, color.g, color.b)), rect);
}
And it draws a black rectangle as expected, 256 high and 512 wide. I've changed these numbers around too, so that isn't the problem.
Anyone know what I'm doing wrong? All help will be much appreciated!
Thanks in advance

The answer is that actually there is nothing wrong with the above code, and that I was passing a file path incorrectly to another function. My bad, thanks anyway!
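For reference, since the offsets follow directly from the tile index, here is a minimal sketch of the same 3x3 tiling in Python with Pillow, assuming the tiles are ordered row-major (the tile file names are hypothetical):
from PIL import Image

TILE = 256
canvas = Image.new("RGB", (3 * TILE, 3 * TILE))
for i in range(9):
    tile = Image.open(f"tile_{i}.png")  # hypothetical file names
    x = (i % 3) * TILE                  # column offset
    y = (i // 3) * TILE                 # row offset
    canvas.paste(tile, (x, y))
canvas.save("map.png")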

Related

How can multiple channels be combined to create a larger single channel? (e.g. 1024x8x8 to 64x32x32)

I'm a complete beginner with PyTorch.
I want to combine the multi-channel data for each pixel into one channel, like below, without using a for loop:
Hence, in this case, 1024x8x8 to 64x32x32.
Can you please tell me what function I can use for this?
Try simply reshaping the tensor with reshape().
Here's an example:
a = torch.linspace(0, 1023, 1024) # torch.Size([1024])
b = a.reshape(1, 32, 32) # torch.Size([1, 32, 32])
EDIT:
To do it column-wise, you need to transpose the tensor. So in this case it is as follows:
a = torch.ones((1024, 8, 8)) # torch.Size([1024, 8, 8])
b = a.transpose(0, 2) # torch.Size([8, 8, 1024])
b = b.reshape((64, 32, 32)) # torch.Size([64, 32, 32])
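As a self-contained check of the shapes (and, in case the intended grouping is a 4x4 grid of 8x8 maps per output channel, pixel_shuffle handles that channel-to-space rearrangement directly; that layout is an assumption):
import torch

a = torch.ones((1024, 8, 8))
b = a.transpose(0, 2).reshape(64, 32, 32)
print(b.shape)  # torch.Size([64, 32, 32])

# pixel_shuffle rearranges each group of 16 channels (1024 = 64 * 4**2)
# into 4x4 pixel blocks, another common way to trade channels for resolution:
c = torch.nn.functional.pixel_shuffle(a, 4)
print(c.shape)  # torch.Size([64, 32, 32])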

Audio convolution output is longer than input; how can I get around this when feeding data back to a stream of fixed length?

After taking audio data from a stream of length x, the data is then convolved with an impulse response of length 256.
This gives the output vector a length of (x + 256 - 1).
When the data is then fed back into a stream of length x, there are 255 samples of overshoot that then cause popping and clicking.
Is there a workaround for this? I'm not 100% sure how to merge the larger-than-original buffer back into the output without losing random samples or causing this issue.
I left out the larger, irrelevant parts of the code; it all works, it's just this issue I need fixed. The code is just here to give insight into the problem.
Code:
void ConvolveEffect(int chan, void* stream, int len, void* udata)
{
////...A bunch of settings etc
//Pointer to stream
short* p = (short*)stream; //Using short to rep 16 bit ints as the stream is in 16bits.
int length = len / sizeof(short);
//Processing buffer (float)
float* audioData[2];
audioData[0] = new float[length / 2];
audioData[1] = new float[length / 2];
//Demux to L and R (map() linearly rescales a value from one range to another)
for (int i = 0; i < length; i++)
{
bool even = (i % 2 == 0);
audioData[!even][((i - !even) / 2)] = map(p[i], -32767, 32767, -1.0, 1.0);
}
////....Convolution occurs, outputting outL and outR
std::vector<fftconvolver::Sample> outL = Convolve(audioData[0], IRL, length / 2, 256, 128, 256, 256);
std::vector<fftconvolver::Sample> outR = Convolve(audioData[1], IRR, length / 2, 256, 128, 256, 256);
//Remux (pick the L or R output vector depending on sample parity)
for (int i = 0; i < length; i++)
{
bool even = (i % 2 == 0);
p[i] = map((even ? outL : outR)[(i - !even) / 2], -1.0, 1.0, -32768, 32767);
}
}
You remember the 255 extra samples and add their values to the 255 samples at the beginning of the next output block.
For example:
[1, 2, 1, 3] produces [2, 3, 4, 3, 2, 1]; you output [2, 3, 4, 3] and remember [2, 1]
Next block:
[3, 2, 1, 3] produces [4, 3, 4, 5, 5, 2]
you output:
[4, 3, 4, 5]
+ [2, 1]
-----------------
[6, 4, 4, 5]
remember [5, 2]
This is referred to as the "overlap-add" method of convolution. It's usually used with FFT convolution in blocks.
The Wikipedia page is here, but it's not awesome: https://en.wikipedia.org/wiki/Overlap%E2%80%93add_method
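Here's a minimal sketch of that bookkeeping in Python with NumPy (just to illustrate the idea, not tied to the fftconvolver API); concatenating the block outputs reproduces the full convolution:
import numpy as np

def overlap_add(blocks, ir):
    # Convolve fixed-size blocks with ir, carrying each block's
    # (len(ir) - 1)-sample overshoot into the next block's output.
    tail = np.zeros(len(ir) - 1)
    for block in blocks:
        out = np.convolve(block, ir)    # length: len(block) + len(ir) - 1
        out[:len(tail)] += tail         # add the remembered overshoot
        tail = out[len(block):].copy()  # remember the new overshoot
        yield out[:len(block)]          # emit exactly len(block) samples

ir = np.array([0.5, 0.3, 0.2])
signal = np.arange(8, dtype=float)
streamed = np.concatenate(list(overlap_add(signal.reshape(2, 4), ir)))
assert np.allclose(streamed, np.convolve(signal, ir)[:len(signal)])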

What's the use of a convolutional layer whose output is the same size as its input, combined with MaxPool?

What's the idea behind using the following convolutional layers?
Especially nn.Conv2d(16, 16, 3, padding = 1):
self.conv1 = nn.Conv2d(3, 16, 3, padding = 1 )
self.conv2 = nn.Conv2d(16, 16, 3, padding = 1)
self.conv3 = nn.Conv2d(16, 32, 3, padding = 1)
self.pool = nn.MaxPool2d(2, 2)
x = F.relu(self.conv1(x))
x = self.pool(F.relu(self.conv2(x)))
x = F.relu(self.conv3(x))
I thought Conv2d always went to a bigger size, like from (16, 32) to (32, 64) for example.
Is nn.Conv2d(16, 16, 3, padding = 1) merely for reducing the size?
The model architecture ultimately comes down to what works best for your application, and it's always going to vary.
You are right in saying that usually you want to make your tensors deeper (in the channel dimension) in order to extract richer features, but there is no hard and fast rule about that. Having said that, sometimes you don't want to make your tensors too big, since the more channels you have, the more trainable parameters, making it more difficult for your model to train. This brings me back to the very first line I said: "It all depends".
And as for the line:
nn.Conv2d(16, 16, 3, padding = 1) # stride = 1 by default.
This will keep the size of the tensor the same as the input in all 3 dimensions (height, width, and number of channels).
I will also add the formula for calculating the size of a convolution's output tensor, for reference.
output_size = ( (input_size - filter_size + 2*padding) / stride ) + 1
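A quick way to sanity-check that in PyTorch (the 32x32 input size is arbitrary):
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)
conv = nn.Conv2d(16, 16, 3, padding=1)  # stride = 1 by default
print(conv(x).shape)  # torch.Size([1, 16, 32, 32]) -- same as the input
# Formula: (32 - 3 + 2*1) / 1 + 1 = 32, so height and width are unchanged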

OpenGL ES 3.0 GL_POINTS doesn't render anything

Below is a minimal reproducer:
GL_CHECK(glClearColor(0.4f, 0.4f, 0.4f, 1.0f));
GL_CHECK(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT |
GL_STENCIL_BUFFER_BIT));
GL_CHECK(glUseProgram(this->pp_->shader_program));
GL_CHECK(glEnable(GL_TEXTURE_2D));
GL_CHECK(glActiveTexture(GL_TEXTURE0));
GL_CHECK(glBindTexture(GL_TEXTURE_2D, this->particle_->id));
GLfloat points[] = { 150.f, 150.f, 10.0f, 150.0f, 175.0f, 10.0f };
GLfloat colors[] = { 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f };
shader_pass_set_uniform(this->pp_, HASH("mvp"), glm::value_ptr(this->camera_));
GL_CHECK(glEnableVertexAttribArray(0));
GL_CHECK(glEnableVertexAttribArray(3));
GL_CHECK(glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, &points[0]));
GL_CHECK(glVertexAttribPointer(3, 4, GL_FLOAT, GL_FALSE, 0, &colors[0]));
GL_CHECK(glDrawArrays(GL_POINTS, 0, 2));
vertex shader:
#version 300 es
layout(location = 0) in highp vec3 vertex;
layout(location = 3) in highp vec4 color;
out lowp vec4 vcolor;
uniform mat4 mvp;
void main()
{
vcolor = color;
gl_Position = mvp * vec4(vertex.xy, 0.0, 1.0);
gl_PointSize = vertex.z;
}
fragment shader:
#version 300 es
uniform sampler2D stexture;
in lowp vec4 vcolor;
layout(location = 0) out lowp vec4 ocolor;
void main()
{
ocolor = texture(stexture, gl_PointCoord) * vcolor;
}
Nothing gets rendered on-screen; my glxinfo can be found in this pastebin. When I render the same texture onto a triangle, it works.
Here is also the render loop as captured with apitrace:
250155 glClearColor(red = 0.5, green = 0.5, blue = 0.5, alpha = 1)
250157 glClear(mask = GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT | GL_COLOR_BUFFER_BIT)
250159 glUseProgram(program = 9)
250161 glEnable(cap = GL_TEXTURE_2D)
250163 glActiveTexture(texture = GL_TEXTURE0)
250165 glBindTexture(target = GL_TEXTURE_2D, texture = 2)
250167 glUniformMatrix4fv(location = 0, count = 1, transpose = GL_FALSE, value = {0.001953125, 0, 0, 0, 0, 0.002604167, 0, 0, 0, 0, -1, 0, -1, -1, -0, 1})
250168 glEnableVertexAttribArray(index = 0)
250170 glEnableVertexAttribArray(index = 3)
250174 glVertexAttribPointer(index = 0, size = 3, type = GL_FLOAT, normalized = GL_FALSE, stride = 0, pointer = blob(24))
250175 glVertexAttribPointer(index = 3, size = 4, type = GL_FLOAT, normalized = GL_FALSE, stride = 0, pointer = blob(32))
250176 glDrawArrays(mode = GL_POINTS, first = 0, count = 2)
250178 glXSwapBuffers(dpy = 0x1564010, drawable = 50331661)
I guess that could mean that the range of point sizes your GLES implementation supports is not what you are expecting. You can try something like:
GLint range[2];
glGetIntegerv(GL_ALIASED_POINT_SIZE_RANGE, range); // for example, (1, 511) in my implementation
to check it out. Every implementation guarantees that range[1] is at least 1.0. The value of gl_PointSize is clamped to that range, so in the worst case you won't be able to set a point size greater than 1. In this case gl_PointCoord seems to evaluate to (1, 0) (according to the formula in the spec) at the only fragment that is going to be drawn for each point. After the texture's wrap mode comes into play, (1, 0) may turn into (0, 0), for example if you are using GL_REPEAT as the texture's wrap mode. If you'd like to safely draw a point with a side greater than one, you should try the ordinary way of drawing squares (for example, four vertices with GL_TRIANGLE_FAN or GL_TRIANGLE_STRIP).

Set a certain position with FlxSprite's loadGraphic method

I have a 122 * 16 image.
It has seven colors, and I use only one color per block.
But loadGraphic() in the FlxSprite class just takes a width and height.
How do I use a certain position of the image?
For example, if I choose 0, loadGraphic() picks up the region (0 to 15, 16); if I choose 1, (17 to 31, 16); and so on.
Could you use an animation but change the current frame manually? Something like this:
var sprite = new FlxSprite(100, 100);
sprite.loadGraphic("assets/image.png", true, 122, 16);
sprite.animation.add("my_animation", [0, 1, 2, 3, 4, 5, 6], 0, false);
sprite.animation.play("my_animation", true, 0);
//sprite.animation.paused = true;
sprite.animation.frameIndex = 2; // The frame you want to display
