SFML OpenGL Application artifacts - something "weird" is happening - graphics

Firstly, sorry for the poor question title; I didn't know exactly what to put!
So I have an OpenGL application running from a SFML Window context.
I previously posted a question about poor performance, but that issue seems to be solved now.
As you can see in the images I have uploaded, something rather odd is happening. I don't really know how to describe it, but it looks like the right half of the window shouldn't be there!
Does anyone have any ideas about the problem?
Here is my code:
sf::ContextSettings settings;
settings.depthBits = 32;
settings.stencilBits = 8;
settings.antialiasingLevel = 4;
settings.majorVersion = 3;
settings.minorVersion = 0;

sf::Window window(sf::VideoMode(800, 600), "insert title", sf::Style::Default, settings);
window.setVerticalSyncEnabled(true);

bool running = true;
while (running)
{
    sf::Event e;
    while (window.pollEvent(e))
    {
        if (e.type == sf::Event::Closed)
        {
            running = false;
        }
        if (e.type == sf::Event::Resized)
        {
            glViewport(0, 0, e.size.width, e.size.height);
            gluLookAt(0, 0, -1, 0, 0, 0, 0, 1, 0);
        }
    }

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glutSolidSphere(1, 12, 12);
    window.display();
}

It turns out this was caused by copying and pasting code.
Above the code shown, I had these lines:
sf::ContextSettings settings;
settings.depthBits = 24;
settings.stencilBits = 0;
settings.antialiasingLevel = 0;
settings.majorVersion = 3;
settings.minorVersion = 2;
The minor version was incorrect. Removing the lines 'settings.majorVersion = 3;' and 'settings.minorVersion = 2;' fixed the issue!
As an experiment I changed the major version to 4. This caused the program to crash altogether.
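For anyone hitting something similar: before relying on a specific OpenGL version, it can help to check which context SFML actually created, since the driver may silently give you something different from what you requested. Here is a minimal sketch, assuming SFML 2.x; the window title and the requested values are only illustrative:

#include <SFML/Window.hpp>
#include <iostream>

int main()
{
    // Request a specific context, as in the code above.
    sf::ContextSettings requested;
    requested.depthBits = 24;
    requested.majorVersion = 3;
    requested.minorVersion = 2;

    sf::Window window(sf::VideoMode(800, 600), "context check",
                      sf::Style::Default, requested);

    // getSettings() reports the context that was actually created,
    // which may differ from the one requested above.
    sf::ContextSettings actual = window.getSettings();
    std::cout << "Got OpenGL " << actual.majorVersion << "."
              << actual.minorVersion
              << " with " << actual.depthBits << " depth bits" << std::endl;

    return 0;
}

Comparing the requested and reported versions makes it easier to tell whether legacy calls such as gluLookAt and glutSolidSphere can still be expected to work on the context you ended up with.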

Related

Strange behaviour on MFC Thread

I'm new here, but I've always found help here in the past.
I'm facing a strange problem with a simple block of code:
UINT ThreadMain(LPVOID pParam)
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    THREADDATA* pData = (THREADDATA*) pParam;
    INT* pnEditBoxValue = pData->pnEditBoxValue;
    UINT nEid = pData->nEditId;
    PLC* parent = (PLC*) pParam;
    bool trigger_basso = false;

    m_thread_running = 1;
    CDeCoMotorDlg* wnd = pData->pObjDlg;

    while (m_keep_alive == 1)
    {
        SleepEx(1, 0);
        wnd->CheckDlgButton(nEid, TRUE);
    }

    m_thread_running = 0;
    return 0;
}
The problem seems related to this line:
wnd->CheckDlgButton(nEid, TRUE);
The line works well (a check box is set on the GUI), BUT if I press a button on my GUI that sets
m_keep_alive = 0
then the function stops working immediately: I don't see the while loop terminate properly, and I never reach this line:
m_thread_running = 0;
What's strange is that if I remove this line
wnd->CheckDlgButton(nEid, TRUE);
the while loop exits properly, the flag is set, and the thread exits properly...
I'm going crazy... any suggestions?
Many thanks

Exception thrown at 0x00000000 in ConsoleApplication1.exe: 0xC0000005: Access violation executing location 0x00000000

After people told me to shorten the program, I did, and here is the shortened version with the same error as stated above. It only appears a few moments into the program. If I hit continue, the program works fine. However, see the movement function? It doesn't work. The sprite refuses to move in any direction. However, if I give a very large floating-point value in the move, then the sprite is displaced from its position when I start the program and it stays in that position with no further movement. For example, if I write sprite.move(400.f, 400.f), the sprite moves from (0,0) to (400,400) and stays there. It doesn't move any more.
Here's the shortened version of the code:
#include"SFML\Graphics.hpp"
#include<iostream>
int main()
{
sf::RenderWindow window(sf::VideoMode(640, 480), "CHECK",sf::Style::Default);
std::cout << "WORKS";
sf::Texture text;
text.loadFromFile("bahamut.png");
sf::Sprite sprite;
sf::Clock frap;
sprite.setTexture(text);
while (window.isOpen())
{
float fps = frap.restart().asSeconds();
sf::Vector2f movements;
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::A))
{
movements.y = 0;
movements.x = -1 * fps;
}
else
{if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::D))
{
movements.y = 0;
movements.x = 1 * fps;
}
else
{ if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::S))
{
movements.y = 1 * fps;
movements.x = 0;
}
else
{
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::W))
{
movements.y = -1 * fps;
movements.x = 0;
}
else
{
movements.x = 0;
movements.y = 0;
}
}
}
}
sprite.move(movements);
window.clear();
window.draw(sprite);
window.display();
}
return 0;
}
I improved upon the code and it still produces the same results and error.
Using the disassembler, I saw the crash occurs at
00B37AEE cmp esi,esp
in window.display().
When I create a function and use it to display the sprite, the movement occurs, but without the function, nothing happens.
Your logic says your movement is 0/0 if W is not pressed: the else of the W-pressed block overrides all prior settings. And moving the sprite should happen before you display it.
I cannot see a reason for the null pointer exception, but that is what the debugger is for. Next time this happens, debug.
Oh and it's int main(), not void. I know the compiler tolerates this error, but it's still an error and undefined behavior.
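To illustrate the points above, here is one way the body of the while (window.isOpen()) loop from the question could be arranged so that no else branch can zero out an earlier result and the move happens before the draw. The speed constant and the per-axis else-if chains are additions made for this sketch, not something from the original post:

// Illustrative speed in pixels per second; the original effectively used 1.
const float speed = 100.f;
float dt = frap.restart().asSeconds();

sf::Vector2f movement(0.f, 0.f);
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::A))
    movement.x = -speed * dt;
else if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::D))
    movement.x = speed * dt;

if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::W))
    movement.y = -speed * dt;
else if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::S))
    movement.y = speed * dt;

sprite.move(movement); // update the sprite first...
window.clear();
window.draw(sprite);   // ...then draw
window.display();

Scaling the movement by the frame time also makes the speed independent of the frame rate; a plausible reason the original sprite appears frozen is that 1 * fps amounts to roughly one pixel per second.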

Is it normal that solving a TSP with a GA (Genetic Algorithm) implementation takes a long time?

I am working on a GA for a project. I am trying to solve the Travelling Salesman Problem using a GA. I used an array[] to store data, since I think arrays are much faster than a List. But for some reason it takes too much time: e.g. with MaxPopulation = 100000 and StartPopulation = 1000, the program takes about 1 minute to complete. I want to know if this is a problem. If it is, how can I fix it?
A code part from my implementation:
public void StartAsync()
{
    Task.Run(() =>
    {
        CreatePopulation();
        currentPopSize = startPopNumber;

        while (currentPopSize < maxPopNumber)
        {
            Tour[] elits = ElitChromosoms();

            for (int i = 0; i < maxCrossingOver; i++)
            {
                if (currentPopSize >= maxPopNumber)
                    break;

                int x = rnd.Next(elits.Length - 1);
                int y = rnd.Next(elits.Length - 1);
                Tour parent1 = elits[x];
                Tour parent2 = elits[y];
                Tour child = CrossingOver(parent1, parent2);

                int mut = rnd.Next(100);
                if (mutPosibility >= mut)
                {
                    child = Mutation(child);
                }

                population[currentPopSize] = child;
                currentPopSize++;
            }

            progress = currentPopSize * 100 / population.Length;
            this.Progress = progress;
            GC.Collect();
        }

        if (GACompleted != null)
            GACompleted(this, EventArgs.Empty);
    });
}
In here "elits" are the chromosoms that have greater fit value than the average fit value of the population.
Scientific papers suggest a smaller population. Maybe you should follow what other authors have written; having a big population does not give you any advantage.
TSP can be solved with a GA, but it may not be the most efficient approach to this problem. Look at this visual representation of TSP-GA: http://www.obitko.com/tutorials/genetic-algorithms/tsp-example.php
OK, I have just found a solution. Instead of using an array with the size of maxPopulation, replace the old members with bad fitness with the new generations. Now I am working with a smaller array, which has a length of 10,000; the length was 1,000,000 before and it was taking too much time. Now, in every iteration, I select the best 1000 chromosomes, create new chromosomes using these as parents, and replace the old, bad ones. This works perfectly.
Code sample:
public void StartAsync()
{
    CreatePopulation(); // Creates chromosomes for starting
    currentProducedPopSize = popNumber; // Produced chromosome count, starts at the length of the starting population

    while (currentProducedPopSize < maxPopNumber && !stopped)
    {
        Tour[] elits = ElitChromosoms(); // Gets the best 1000 chromosomes
        Array.Reverse(population); // Orders by descending
        this.Best = elits[0];

        // Create as many new chromosomes as there are bad chromosomes
        for (int i = 0; i < population.Length - elits.Length; i++)
        {
            if (currentProducedPopSize >= maxPopNumber || stopped)
                break;

            int x = rnd.Next(elits.Length - 1);
            int y = rnd.Next(elits.Length - 1);
            Tour parent1 = elits[x];
            Tour parent2 = elits[y];
            Tour child = CrossingOver(parent1, parent2);

            int mut = rnd.Next(100);
            if (mutPosibility <= mut)
            {
                child = Mutation(child);
            }

            population[i] = child; // Replace a bad chromosome with the new one
            currentProducedPopSize++; // Increase the produced chromosome count
        }

        progress = currentProducedPopSize * 100 / maxPopNumber;
        this.Progress = progress;
        GC.Collect();
    }

    stopped = false;
    this.Best = population[population.Length - 1];

    if (GACompleted != null)
        GACompleted(this, EventArgs.Empty);
}

Tour[] ElitChromosoms()
{
    Array.Sort(population);
    Tour[] elits = new Tour[popNumber / 10];
    Array.Copy(population, elits, elits.Length);
    return elits;
}

ScriptProcessorNode Memory leak

I'm working on a large project that relies heavily on web audio and ScriptProcessorNodes. After some recent intermittent crashing I've tracked the problem down to memory leaking from the ScriptProcessorNodes. I've read many, many tutorials, guides, bug reports, etc., and none of it seems to be helping. Here's a small toy example:
http://jsfiddle.net/6YBWf/
var context = new webkitAudioContext();

function killNode(node)
{
    return function()
    {
        node.disconnect();
        node.onaudioprocess = null;
        node = null;
    };
}

function noise()
{
    var node = context.createScriptProcessor(1024, 0, 1);
    node.onaudioprocess = function(e)
    {
        var output = e.outputBuffer.getChannelData(0);
        for (var i = 0; i < 1024; ++i)
        {
            output[i] = (Math.random() * 2 - 1) * 0.001;
        }
    };
    node.connect(context.destination);
    setTimeout(killNode(node), 100);
}

function generateNoise()
{
    for (var i = 0; i < 99999; ++i)
    {
        noise();
    }
}

generateNoise();
This will spin up many nodes and then disconnect them and set their onaudioprocess to null. From what I've read, given that I'm not retaining any references to them, shouldn't they get garbage collected?
My computer memory jumps up to about 16% and settles down to 14% a bit later but never goes below that. Can anyone show me an example similar to this where the nodes get properly collected? Is there something obvious I'm missing?
This has been confirmed as a regression in Chrome:
https://code.google.com/p/chromium/issues/detail?id=379753

How to properly implement cheat codes?

What would be the best way to implement cheat codes in general?
I have a WinForms application in mind, where a cheat code would unlock an easter egg, but the implementation details are not relevant.
The best approach that comes to my mind is to keep an index for each code. Let's consider the famous DOOM codes, IDDQD and IDKFA, in a fictional C# app.
string[] CheatCodes = { "IDDQD", "IDKFA" };
int[] CheatIndexes = { 0, 0 };
const int CHEAT_COUNT = 2;

void KeyPress(char c)
{
    for (int i = 0; i < CHEAT_COUNT; i++) // for each cheat code
    {
        if (CheatCodes[i][CheatIndexes[i]] == c)
        { // we have hit the next key in the sequence
            if (++CheatIndexes[i] == CheatCodes[i].Length) // are we at the end?
            {
                // Do cheat work
                MessageBox.Show(CheatCodes[i]);
                // reset the cheat index so we can enter it next time
                CheatIndexes[i] = 0;
            }
        }
        else // mistyped, reset the cheat index
        {
            CheatIndexes[i] = 0;
        }
    }
}
Is this the right way to do it?
Edit: Probably the worst thing I could have done was to use the first cheat codes that came off the top of my head as an example. I really did not want to see Doom's source code or its implementation, but a general solution to this problem.
Why not download the DOOM source and see for yourself? =)
http://www.doomworld.com/idgames/?id=14576
I think this one's a bit easier to understand, though your original will probably perform better than this one:
using System.Collections.Generic;

string[] cheatCodes = { "IDDQD", "IDKFA" };
Queue<char> buffer = new Queue<char>(); // Contains the most recent keystrokes, sized to the longest cheat code

void KeyPress(char c)
{
    buffer.Enqueue(c);
    if (buffer.Count > 5) // Replace 5 with the length of your longest cheat code
        buffer.Dequeue();

    string bufferString = new string(buffer.ToArray());
    foreach (string code in cheatCodes)
    {
        if (bufferString.EndsWith(code))
        {
            // Do cheat work
        }
    }
}
Here is the DOOM cheat implementation from the DOOM source:
#define SCRAMBLE(a) \
((((a)&1)<<7) + (((a)&2)<<5) + ((a)&4) + (((a)&8)<<1) \
 + (((a)&16)>>1) + ((a)&32) + (((a)&64)>>5) + (((a)&128)>>7))

int cht_CheckCheat ( cheatseq_t* cht, char key )
{
    int i;
    int rc = 0;

    if (firsttime)
    {
        firsttime = 0;
        for (i = 0; i < 256; i++)
            cheat_xlate_table[i] = SCRAMBLE(i);
    }

    if (!cht->p)
        cht->p = cht->sequence; // initialize if first time

    if (*cht->p == 0)
        *(cht->p++) = key;
    else if (cheat_xlate_table[(unsigned char)key] == *cht->p)
        cht->p++;
    else
        cht->p = cht->sequence;

    if (*cht->p == 1)
        cht->p++;
    else if (*cht->p == 0xff) // end of sequence character
    {
        cht->p = cht->sequence;
        rc = 1;
    }

    return rc;
}
