I'm building a fractal application and need to generate a smooth color scheme, and I found a nice algorithm in "Smooth spectrum for Mandelbrot Set rendering".
But that requires me to call Color.HSBtoRGB, and that method is not available in WinRT / Windows Store apps.
Is there some other built-in method to do this conversion?
Any other tips on how to convert HSB to RGB?
I ended up using the HSB to RGB conversion algorithm found at http://www.adafruit.com/blog/2012/03/14/constant-brightness-hsb-to-rgb-algorithm/; I adapted the initial (long) version. Perhaps this can be further optimized, but for my purpose it was perfect!
As the hsb2rgb method is in C and I needed C#, I'm sharing my version here:
private byte[] hsb2rgb(int index, byte sat, byte bright)
{
    int r_temp, g_temp, b_temp;
    byte index_mod;
    byte inverse_sat = (byte)(sat ^ 255);

    // The hue index wraps every 768 steps (three ramps of 256 steps each).
    index = index % 768;
    index_mod = (byte)(index % 256);

    if (index < 256)            // red -> green ramp
    {
        r_temp = index_mod ^ 255;
        g_temp = index_mod;
        b_temp = 0;
    }
    else if (index < 512)       // green -> blue ramp
    {
        r_temp = 0;
        g_temp = index_mod ^ 255;
        b_temp = index_mod;
    }
    else if (index < 768)       // blue -> red ramp
    {
        r_temp = index_mod;
        g_temp = 0;
        b_temp = index_mod ^ 255;
    }
    else                        // unreachable after the modulo above, kept for safety
    {
        r_temp = 0;
        g_temp = 0;
        b_temp = 0;
    }

    // Apply saturation (pull channels toward white), then brightness (scale down).
    r_temp = ((r_temp * sat) / 255) + inverse_sat;
    g_temp = ((g_temp * sat) / 255) + inverse_sat;
    b_temp = ((b_temp * sat) / 255) + inverse_sat;
    r_temp = (r_temp * bright) / 255;
    g_temp = (g_temp * bright) / 255;
    b_temp = (b_temp * bright) / 255;

    byte[] color = new byte[3];
    color[0] = (byte)r_temp;
    color[1] = (byte)g_temp;
    color[2] = (byte)b_temp;
    return color;
}
To call it based on the code linked in the original post I needed to make some minor modifications:
private byte[] SmoothColors1(int maxIterationCount, ref Complex z, int iteration)
{
    double smoothcolor = iteration + 1 - Math.Log(Math.Log(z.Magnitude)) / Math.Log(2);
    byte[] color = hsb2rgb((int)(10 * smoothcolor), (byte)(255 * 0.6f), (byte)(255 * 1.0f));
    if (iteration >= maxIterationCount)
    {
        // Make sure the core is black
        color[0] = 0;
        color[1] = 0;
        color[2] = 0;
    }
    return color;
}
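For reference, the smoothcolor expression above is the usual continuous (normalized) escape-time value for the Mandelbrot iteration. Writing n for the escape iteration and z_n for the first escaped orbit point, it is

$$\nu = n + 1 - \frac{\ln \ln \lvert z_n \rvert}{\ln 2},$$

which interpolates smoothly between integer iteration counts; the code then scales it by 10 and uses it as the hue index for hsb2rgb.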
For some reason the math portion of my sepia code does not seem to work. I get errors when I run check50, and it shows all the pixel values as being too high. I triple-checked the values for the filter, but all seems good.
void sepia(int height, int width, RGBTRIPLE image[height][width])
{
    float org_red = 0;
    float org_green = 0;
    float org_blue = 0;
    for (int i = 0; i < height; i++)
    {
        for (int j = 0; j < width; j++)
        {
            org_red = image[i][j].rgbtRed;
            org_green = image[i][j].rgbtBlue;
            org_blue = image[i][j].rgbtGreen;
            long sepiaRed = (.393 * org_red + .769 * org_green + .189 * org_blue);
            long sepiaGreen = (.349 * org_red) + .686 * org_green + .168 * org_blue;
            long sepiaBlue = (.272 * org_red + .534 * org_green + .131 * org_blue);
            if (sepiaRed > 255)
            {
                sepiaRed = 255;
            }
            if (sepiaGreen > 255)
            {
                sepiaGreen = 255;
            }
            if (sepiaBlue > 255)
            {
                sepiaBlue = 255;
            }
            image[i][j].rgbtRed = round(sepiaRed);
            image[i][j].rgbtGreen = round(sepiaGreen);
            image[i][j].rgbtBlue = round(sepiaBlue);
        }
    }
    return;
}
The error I get says:
:( sepia correctly filters single pixel
expected "56 50 39\n", not "84 75 58\n"
:( sepia correctly filters simple 3x3 image
expected "100 89 69\n100...", not "100 88 69\n100..."
:( sepia correctly filters more complex 3x3 image
expected "25 22 17\n66 5...", not "30 27 21\n71 6..."
:( sepia correctly filters 4x4 image
expected "25 22 17\n66 5...", not "30 27 21\n71 6..."
I followed this guide here.
I am tasked with "using map and unmap methods to draw a line across the screen by setting pixel byte data to rgb red values".
I have the sprite and background displaying but have no idea how to get the data.
I also tried doing this:
//Create device
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_TEXTURE2D_DESC));
desc.Width = 500;
desc.Height = 300;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.Usage = D3D11_USAGE_DYNAMIC;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.MiscFlags = 0;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
m_d3dDevice->CreateTexture2D(&desc, nullptr, &texture);
m_d3dDevice->CreateShaderResourceView(texture, 0, &textureView);
// Render
D3D11_MAPPED_SUBRESOURCE mapped;
m_d3dContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
data = (BYTE*)mapped.pData;
rows = (BYTE)sizeof(data);
std::cout << "hi" << std::endl;
m_d3dContext->Unmap(texture, 0);
The problem is that in this case the data array is size 0 but has a valid pointer. Does this mean that I am pointing to a texture that doesn't have any data, or am I not getting this?
Edit:
Currently I found:
D3D11_SHADER_RESOURCE_VIEW_DESC desc;
m_background->GetDesc(&desc);
desc.Buffer; // buffer
I felt the need to create an answer for this because when I searched for how to do this, this question popped up first, and the supplied answer didn't really solve the problem for me and wasn't quite the way I wanted to do it anyway.
In my program I have a method as below.
void ContentLoader::WritePixelsToShaderIndex(uint32_t *data, int width, int height, int index)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = 0;
    desc.MiscFlags = 0;

    D3D11_SUBRESOURCE_DATA initData;
    initData.pSysMem = data;
    initData.SysMemPitch = width * 4;
    initData.SysMemSlicePitch = width * height * 4;

    Microsoft::WRL::ComPtr<ID3D11Texture2D> tex;
    Engine::device->CreateTexture2D(&desc, &initData, tex.GetAddressOf());
    Engine::device->CreateShaderResourceView(tex.Get(), NULL, ContentLoader::GetTextureAddress(index));
}
Then, using the code below, I tested drawing a blue square with a white line, and it works perfectly fine. The issue I was having was setting SysMemSlicePitch and SysMemPitch; after looking in the WICTextureLoader class I was able to figure out how the data is stored. It appears that:
SysMemPitch = the row's size in bytes.
SysMemSlicePitch = the total image size in bytes.
const int WIDTH = 200;
const int HEIGHT = 200;
const uint32_t RED = 255 | (0 << 8) | (0 << 16) | (255 << 24);
const uint32_t WHITE = 255 | (255 << 8) | (255 << 16) | (255 << 24);
const uint32_t BLUE = 0 | (0 << 8) | (255 << 16) | (255 << 24);

uint32_t *buffer = new uint32_t[WIDTH * HEIGHT];
bool flip = false;
for (int X = 0; X < WIDTH; ++X)
{
    for (int Y = 0; Y < HEIGHT; ++Y)
    {
        int pixel = X + Y * WIDTH;
        buffer[pixel] = flip ? BLUE : WHITE;
    }
    flip = true;
}
WritePixelsToShaderIndex(buffer, WIDTH, HEIGHT, 3);
delete [] buffer;
First of all, most of those functions return HRESULT values that you are ignoring. That's not safe as you will miss important errors that invalidate the remaining code. You can use if(FAILED(...)) if you want, or you can use ThrowIfFailed, but you can't just ignore the return value in a functioning app.
HRESULT hr = m_d3dDevice->CreateTexture2D(&desc, nullptr, &texture);
if (FAILED(hr))
// error!
hr = m_d3dDevice->CreateShaderResourceView(texture, 0, &textureView);
if (FAILED(hr))
// error!
// Render
D3D11_MAPPED_SUBRESOURCE mapped;
hr = m_d3dContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (FAILED(hr))
// error!
Second, you should enable the Debug Device and look for diagnostic output which will likely point you to the reason for the failure.
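A minimal sketch of enabling the debug layer at device creation (assuming you create the device yourself and have <d3d11.h> and <wrl/client.h> included; the variable names are placeholders):

UINT creationFlags = 0;
#if defined(_DEBUG)
creationFlags |= D3D11_CREATE_DEVICE_DEBUG; // turn on the debug layer in debug builds
#endif

Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
HRESULT hrDev = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, creationFlags,
    nullptr, 0, D3D11_SDK_VERSION,
    device.GetAddressOf(), nullptr, context.GetAddressOf());
// Check hrDev; with the debug layer on, diagnostics appear in the debugger output window.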
sizeof(data) is always going to be 4 or 8 since data is a BYTE*, i.e. the size of a pointer. It has nothing to do with the size of your data array. The locked buffer pointed to by mapped.pData is going to be mapped.RowPitch * desc.Height bytes in size.
You have to copy your pixel data into it row by row. Depending on the format and other factors, mapped.RowPitch is not necessarily going to be 4 * desc.Width (4 bytes per pixel because you are using the format DXGI_FORMAT_B8G8R8A8_UNORM). It should be at least that big, but it could be bigger to align each row.
This is pseudo-code and not necessarily an efficient way to do it, but:
for (UINT y = 0; y < desc.Height; ++y)
{
    for (UINT x = 0; x < desc.Width; ++x)
    {
        // Find the memory location of the pixel at (x,y)
        int pixel = y * mapped.RowPitch + (x * 4);
        BYTE* blue  = &data[pixel];
        BYTE* green = &data[pixel + 1];
        BYTE* red   = &data[pixel + 2];
        BYTE* alpha = &data[pixel + 3];
        *blue  = /* value between 0 and 255 */;
        *green = /* value between 0 and 255 */;
        *red   = /* value between 0 and 255 */;
        *alpha = /* value between 0 and 255 */;
    }
}
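To tie this back to the original task (drawing a line by setting red pixel values), here is a minimal sketch, assuming the dynamic texture, desc, and m_d3dContext from the question; lineY is a hypothetical choice of row. Note that WRITE_DISCARD hands you a fresh buffer, so every pixel has to be written:

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(m_d3dContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    BYTE* bytes = static_cast<BYTE*>(mapped.pData);
    const UINT lineY = desc.Height / 2; // hypothetical: put the line mid-screen
    for (UINT y = 0; y < desc.Height; ++y)
    {
        BYTE* row = bytes + y * mapped.RowPitch; // start of this row, honoring the pitch
        for (UINT x = 0; x < desc.Width; ++x)
        {
            row[x * 4 + 0] = 0;                      // blue
            row[x * 4 + 1] = 0;                      // green
            row[x * 4 + 2] = (y == lineY) ? 255 : 0; // red only on the line row
            row[x * 4 + 3] = 255;                    // alpha
        }
    }
    m_d3dContext->Unmap(texture, 0);
}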
You should take a look at DirectXTex which does a lot of this kind of row-by-row processing.
How do I create a cube map in D3D11 from 6 images? All the examples I've found use only one .dds. Specifically, how do I upload individual faces of the cube texture?
It works like this:
D3D11_TEXTURE2D_DESC texDesc;
texDesc.Width = description.width;
texDesc.Height = description.height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 6;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.CPUAccessFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texDesc.CPUAccessFlags = 0;
texDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc;
SMViewDesc.Format = texDesc.Format;
SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
SMViewDesc.TextureCube.MipLevels = texDesc.MipLevels;
SMViewDesc.TextureCube.MostDetailedMip = 0;
D3D11_SUBRESOURCE_DATA pData[6];
std::vector<vector4b> d[6]; // 6 images of type vector4b = 4 * unsigned char
for (int cubeMapFaceIndex = 0; cubeMapFaceIndex < 6; cubeMapFaceIndex++)
{
    d[cubeMapFaceIndex].resize(description.width * description.height);

    // fill with red color
    std::fill(
        d[cubeMapFaceIndex].begin(),
        d[cubeMapFaceIndex].end(),
        vector4b(255, 0, 0, 255));

    pData[cubeMapFaceIndex].pSysMem = &d[cubeMapFaceIndex][0]; // description.data;
    pData[cubeMapFaceIndex].SysMemPitch = description.width * 4;
    pData[cubeMapFaceIndex].SysMemSlicePitch = 0;
}

HRESULT hr = renderer->getDevice()->CreateTexture2D(&texDesc,
    description.data[0] ? &pData[0] : nullptr, &m_pCubeTexture);
assert(hr == S_OK);

hr = renderer->getDevice()->CreateShaderResourceView(
    m_pCubeTexture, &SMViewDesc, &m_pShaderResourceView);
assert(hr == S_OK);
This creates six "red" images for the cube map.
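As a hedged usage note (getContext() is a placeholder for however your renderer exposes its immediate context), the resulting view is then bound to the pixel shader like any other texture:

ID3D11DeviceContext* ctx = renderer->getContext();       // hypothetical accessor
ctx->PSSetShaderResources(0, 1, &m_pShaderResourceView); // slot 0 is an assumption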
I know this question is old, and there is already a solution.
Here is a code example that loads 6 textures from disk and puts them together as a cubemap:
Precondition:
ID3D11ShaderResourceView* srv = 0;
ID3D11Resource* srcTex[6];
A pointer to a ShaderResourceView and an array filled with the six textures from disk. I use the order right, left, top, bottom, front, back.
// Each element in the texture array has the same format/dimensions.
D3D11_TEXTURE2D_DESC texElementDesc;
((ID3D11Texture2D*)srcTex[0])->GetDesc(&texElementDesc);
D3D11_TEXTURE2D_DESC texArrayDesc;
texArrayDesc.Width = texElementDesc.Width;
texArrayDesc.Height = texElementDesc.Height;
texArrayDesc.MipLevels = texElementDesc.MipLevels;
texArrayDesc.ArraySize = 6;
texArrayDesc.Format = texElementDesc.Format;
texArrayDesc.SampleDesc.Count = 1;
texArrayDesc.SampleDesc.Quality = 0;
texArrayDesc.Usage = D3D11_USAGE_DEFAULT;
texArrayDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
texArrayDesc.CPUAccessFlags = 0;
texArrayDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
ID3D11Texture2D* texArray = 0;
if (FAILED(pd3dDevice->CreateTexture2D(&texArrayDesc, 0, &texArray)))
    return false;

// Copy individual texture elements into texture array.
ID3D11DeviceContext* pd3dContext;
pd3dDevice->GetImmediateContext(&pd3dContext);
D3D11_BOX sourceRegion;

// Here I copy the mip map levels of the textures
for (UINT x = 0; x < 6; x++)
{
    for (UINT mipLevel = 0; mipLevel < texArrayDesc.MipLevels; mipLevel++)
    {
        sourceRegion.left = 0;
        sourceRegion.right = (texArrayDesc.Width >> mipLevel);
        sourceRegion.top = 0;
        sourceRegion.bottom = (texArrayDesc.Height >> mipLevel);
        sourceRegion.front = 0;
        sourceRegion.back = 1;

        // test for overflow
        if (sourceRegion.bottom == 0 || sourceRegion.right == 0)
            break;

        pd3dContext->CopySubresourceRegion(texArray, D3D11CalcSubresource(mipLevel, x, texArrayDesc.MipLevels), 0, 0, 0, srcTex[x], mipLevel, &sourceRegion);
    }
}

// Create a resource view to the texture array.
D3D11_SHADER_RESOURCE_VIEW_DESC viewDesc;
viewDesc.Format = texArrayDesc.Format;
viewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
viewDesc.TextureCube.MostDetailedMip = 0;
viewDesc.TextureCube.MipLevels = texArrayDesc.MipLevels;

if (FAILED(pd3dDevice->CreateShaderResourceView(texArray, &viewDesc, &srv)))
    return false;
If anyone reads this question again, maybe try this one. Warning: this function is not thread-safe, because it has to use the device context.
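For completeness, here is a hedged sketch of one way the srcTex array could be filled from six image files, assuming DirectXTK's WICTextureLoader is available; the file names are placeholders:

#include <WICTextureLoader.h> // DirectXTK

const wchar_t* files[6] = { L"right.png", L"left.png", L"top.png",
                            L"bottom.png", L"front.png", L"back.png" };
ID3D11Resource* srcTex[6] = {};
for (int i = 0; i < 6; ++i)
{
    // Loads the image into a default-usage texture; no SRV is requested here.
    if (FAILED(DirectX::CreateWICTextureFromFile(pd3dDevice, files[i], &srcTex[i], nullptr)))
        return false;
}

All six images must have the same dimensions and format for the CopySubresourceRegion approach above to work.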
I looked at http://rosettacode.org/wiki/Knapsack_problem/0-1 to do the basic knapsack dynamic programming problem, and I got a working solution (knapsack1()), but when I tried a different solution (knapsack2()) I feel like I am off-by-one somewhere because I am not getting the correct value.
The code in question is the method knapsack2, at the bottom. I'm sure the issue is small, but I feel absurd because I can't find the problem. The correct answer should be 1030 and not 880.
(I know that an array of size [n][S] should be enough, as shown in knapsack1(), which works, but oddly knapsack2() doesn't.)
Thanks in advance for anyone who looks through this.
/**
* http://rosettacode.org/wiki/Knapsack_problem/0-1
*/
public class Knapsack {
public static void main(String[] args) {
int n = 22;
int S = 400;
int s[] = new int[22];
int v[] = new int[22];
int i = 0;
s[i] = 9;
v[i] = 150;
i++;
s[i] = 13;
v[i] = 35;
i++;
s[i] = 153;
v[i] = 200;
i++;
s[i] = 50;
v[i] = 160;
i++;
s[i] = 15;
v[i] = 60;
i++;
s[i] = 68;
v[i] = 45;
i++;
s[i] = 27;
v[i] = 60;
i++;
s[i] = 39;
v[i] = 40;
i++;
s[i] = 23;
v[i] = 30;
i++;
s[i] = 52;
v[i] = 10;
i++;
s[i] = 11;
v[i] = 70;
i++;
s[i] = 32;
v[i] = 30;
i++;
s[i] = 24;
v[i] = 15;
i++;
s[i] = 48;
v[i] = 10;
i++;
s[i] = 73;
v[i] = 40;
i++;
s[i] = 42;
v[i] = 70;
i++;
s[i] = 43;
v[i] = 75;
i++;
s[i] = 22;
v[i] = 80;
i++;
s[i] = 7;
v[i] = 20;
i++;
s[i] = 18;
v[i] = 12;
i++;
s[i] = 4;
v[i] = 50;
i++;
s[i] = 30;
v[i] = 10;
System.out.println("--Given items with these values--");
System.out.println("[item, weight (dag), value]");
System.out.println("map 9 150");
System.out.println("compass 13 35");
System.out.println("water 153 200");
System.out.println("sandwich 50 160");
System.out.println("glucose 15 60");
System.out.println("tin 68 45");
System.out.println("banana 27 60");
System.out.println("apple 39 40");
System.out.println("cheese 23 30");
System.out.println("beer 52 10");
System.out.println("suntan cream 11 70");
System.out.println("camera 32 30");
System.out.println("T-shirt 24 15");
System.out.println("trousers 48 10");
System.out.println("umbrella 73 40");
System.out.println("waterproof trousers 42 70");
System.out.println("waterproof overclothes 43 75");
System.out.println("note-case 22 80");
System.out.println("sunglasses 7 20");
System.out.println("towel 18 12");
System.out.println("socks 4 50");
System.out.println("book 30 10");
System.out.println("--The max value you can achieve in your knapsack--");
System.out.println(knapsack2(n, S, s, v));
// proper value should be 1030, from website
}
/**
 * @param n items
 * @param S bag size/capacity
 * @param s array where s[i] is size/weight of item at index i
 * @param v array where v[i] is value of item at index i
 * @return best/max value possible
 */
@SuppressWarnings("unused")
private static int knapsack1(int n, int S, int s[], int v[]) {
    int dp[][] = new int[n][S];
    for (int i = n-1; i >= 0; i--) { // # items being left out + 1
        for (int j = 0; j < S; j++) { // size/weight + 1
            if (i == n-1) {
                dp[i][j] = 0;
            } else {
                int choices[] = {0,0};
                choices[0] = dp[i+1][j];
                if (j >= s[i])
                    choices[1] = v[i] + dp[i+1][j-s[i]];
                dp[i][j] = max(choices);
            }
        }
    }
    return dp[0][S-1];
}

private static int max(int choices[]) {
    if (choices[0] > choices[1])
        return choices[0];
    else
        return choices[1];
}
/**
 * @param n items
 * @param S bag size/capacity
 * @param s array where s[i] is size/weight of item at index i
 * @param v array where v[i] is value of item at index i
 * @return best/max value possible
 */
// @SuppressWarnings("unused")
private static int knapsack2(int n, int S, int s[], int v[]) {
    int dp[][] = new int[n][S]; // dp[i][j] holds max value using i items and not exceeding weight j+1
    for (int i = 0; i < n; i++) { // # items. don't need to reach n because we assume we can fit at most n-1 items.
        for (int j = 0; j < S; j++) { // size/weight+1
            if (i == 0) {
                dp[i][j] = 0;
            } else {
                int choices[] = {0,0};
                choices[0] = dp[i-1][j];
                if (j >= s[i])
                    choices[1] = v[i] + dp[i-1][j-s[i]];
                dp[i][j] = max(choices);
            }
        }
    }
    return dp[n-1][S-1]; // TODO: WHY IS THIS 880 AND NOT 1030?
}
}
Both of your knapsack functions have issues. Neither will pass the simple test S=1, n=1, s[0]=1, v[0]=10. The answer should be 10, but your code will return 0.
You can see that the issues are in the lines if (i == n-1) (knapsack1) and if (i == 0) (knapsack2).
Also, you will see issues because of the condition if (j >= s[i]). Your j starts at 0, so if S were 1, you would never be able to take the item with s[i] == 1 because j will never reach 1.
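For reference, a sketch of the standard formulation that avoids these off-by-one problems (not the poster's exact code): let dp have dimensions (n+1) x (S+1), where dp[i][j] is the best value achievable using only the first i items with capacity j. Then dp[0][j] = 0 and, for i >= 1,

$$dp[i][j] = \begin{cases} dp[i-1][j], & j < s_{i-1} \\ \max\bigl(dp[i-1][j],\; v_{i-1} + dp[i-1][j - s_{i-1}]\bigr), & j \ge s_{i-1} \end{cases}$$

and the answer is dp[n][S], so an item of weight exactly S and a capacity of exactly S are both representable.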
Hope this helps. Good luck.
I have written an image resizer using Lanczos re-sampling. I've taken the implementation straight from the directions on Wikipedia. The results look good visually, but for some reason it does not match the result from Matlab's resize with Lanczos very well (in pixel error).
Does anybody see any errors? This is not my area of expertise at all...
Here is my filter (I'm using Lanczos3 by default):
double lanczos_size_ = 3.0;

inline double sinc(double x) {
    double pi = 3.1415926;
    x = (x * pi);
    if (x < 0.01 && x > -0.01)
        return 1.0 + x*x*(-1.0/6.0 + x*x*1.0/120.0);
    return sin(x)/x;
}

inline double LanczosFilter(double x) {
    if (std::abs(x) < lanczos_size_) {
        double pi = 3.1415926;
        return sinc(x)*sinc(x/lanczos_size_);
    } else {
        return 0.0;
    }
}
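For reference, the kernel these two functions implement is the Lanczos window from the Wikipedia article, with a = lanczos_size_ and the normalized sinc:

$$L(x) = \begin{cases} \dfrac{\sin(\pi x)}{\pi x}\cdot\dfrac{\sin(\pi x / a)}{\pi x / a}, & \lvert x \rvert < a \\ 0, & \text{otherwise} \end{cases}$$

The small-|x| branch in sinc is just the Taylor expansion of sin(y)/y with y = pi*x, used to avoid dividing by a value near zero.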
And my code to resize the image:
Image Resize(Image& image, int new_rows, int new_cols) {
    int old_cols = image.size().cols;
    int old_rows = image.size().rows;
    double col_ratio =
        static_cast<double>(old_cols)/static_cast<double>(new_cols);
    double row_ratio =
        static_cast<double>(old_rows)/static_cast<double>(new_rows);

    // Apply filter first in width, then in height.
    Image horiz_image(new_cols, old_rows);
    for (int r = 0; r < old_rows; r++) {
        for (int c = 0; c < new_cols; c++) {
            // x is the new col in terms of the old col coordinates.
            double x = static_cast<double>(c)*col_ratio;
            // The old col corresponding to the closest new col.
            int floor_x = static_cast<int>(x);

            horiz_image[r][c] = 0.0;
            double weight = 0.0;
            // Add up terms across the filter.
            for (int i = floor_x - lanczos_size_ + 1; i < floor_x + lanczos_size_; i++) {
                if (i >= 0 && i < old_cols) {
                    double lanc_term = LanczosFilter(x - i);
                    horiz_image[r][c] += image[r][i]*lanc_term;
                    weight += lanc_term;
                }
            }
            // Normalize the filter.
            horiz_image[r][c] /= weight;
            // Clamp the pixel values to valid values.
            horiz_image[r][c] = (horiz_image[r][c] > 1.0) ? 1.0 : horiz_image[r][c];
            horiz_image[r][c] = (horiz_image[r][c] < 0.0) ? 0.0 : horiz_image[r][c];
        }
    }

    // Now apply a vertical filter to the horiz image.
    Image new_image(new_cols, new_rows);
    for (int r = 0; r < new_rows; r++) {
        double x = static_cast<double>(r)*row_ratio;
        int floor_x = static_cast<int>(x);
        for (int c = 0; c < new_cols; c++) {
            new_image[r][c] = 0.0;
            double weight = 0.0;
            for (int i = floor_x - lanczos_size_ + 1; i < floor_x + lanczos_size_; i++) {
                if (i >= 0 && i < old_rows) {
                    double lanc_term = LanczosFilter(x - i);
                    new_image[r][c] += horiz_image[i][c]*lanc_term;
                    weight += lanc_term;
                }
            }
            new_image[r][c] /= weight;
            new_image[r][c] = (new_image[r][c] > 1.0) ? 1.0 : new_image[r][c];
            new_image[r][c] = (new_image[r][c] < 0.0) ? 0.0 : new_image[r][c];
        }
    }
    return new_image;
}
Here is Lanczos in one single loop, with no errors. It uses the procedures mentioned at the top.
void ResizeDD(
    double* const pixelsSrc,
    const int old_cols,
    const int old_rows,
    double* const pixelsTarget,
    int const new_rows, int const new_cols)
{
    double col_ratio =
        static_cast<double>(old_cols) / static_cast<double>(new_cols);
    double row_ratio =
        static_cast<double>(old_rows) / static_cast<double>(new_rows);

    // Now apply a filter to the image.
    for (int r = 0; r < new_rows; ++r)
    {
        const double row_within = static_cast<double>(r) * row_ratio;
        int floor_row = static_cast<int>(row_within);
        for (int c = 0; c < new_cols; ++c)
        {
            // x is the new col in terms of the old col coordinates.
            double col_within = static_cast<double>(c) * col_ratio;
            // The old col corresponding to the closest new col.
            int floor_col = static_cast<int>(col_within);

            double& v_toSet = pixelsTarget[r * new_cols + c];
            v_toSet = 0.0;
            double weight = 0.0;
            for (int i = floor_row - lanczos_size_ + 1; i <= floor_row + lanczos_size_; ++i)
            {
                for (int j = floor_col - lanczos_size_ + 1; j <= floor_col + lanczos_size_; ++j)
                {
                    if (i >= 0 && i < old_rows && j >= 0 && j < old_cols)
                    {
                        const double lanc_term = LanczosFilter(row_within - i + col_within - j);
                        v_toSet += pixelsSrc[i * old_cols + j] * lanc_term; // row-major: row i, column j
                        weight += lanc_term;
                    }
                }
            }
            v_toSet /= weight;
            v_toSet = (v_toSet > 1.0) ? 1.0 : v_toSet;
            v_toSet = (v_toSet < 0.0) ? 0.0 : v_toSet;
        }
    }
}
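A hedged usage sketch (assuming grayscale pixel data normalized to [0, 1], stored row-major; the sizes are arbitrary):

#include <vector>

std::vector<double> src(640 * 480); // filled with source pixels elsewhere
std::vector<double> dst(320 * 240);
ResizeDD(src.data(), /*old_cols=*/640, /*old_rows=*/480,
         dst.data(), /*new_rows=*/240, /*new_cols=*/320);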
The line
for (int i = floor_x - lanczos_size_ + 1; i < floor_x + lanczos_size_; i++)
should be
for (int i = floor_x - lanczos_size_ + 1; i <= floor_x + lanczos_size_; i++)
I do not know, but perhaps other mistakes linger too.
I think there is a mistake in your sinc function. Below the fraction bar you have to square pi and x. Additionally, you have to multiply the function by the Lanczos size:
L(x) = a * sin(pi*x) * sin(pi*x/a) / (pi^2 * x^2)
Edit: My mistake, it is all right.