Maximum temp GTX780m Alienware - graphics

I have an Alienware 17 with a GTX 780M, and I'm currently mining Feathercoins. I noticed that my GPU temperature went up to 78 degrees. Is this dangerous?
Greetz, Vinnie

78 degrees Celsius? (Remember, some people are used to Fahrenheit.)
At that temperature you won't burn out your GPU, but it can shorten its lifetime. For any electronics, the best option is to keep the temperature constant. You can try to install fans or buy a case with fans, like this:
case with 6 fans


LWJGL starts to run low FPS on Display

I'm having this problem with LWJGL. I have a simple game and everything works fine. My main loop calculates when it should render and update the game, and it stays at a constant 59-60 FPS. The problem comes from OpenGL, I guess. After a random amount of time, my whole game starts to run at very low FPS. My game loop still reports 60 FPS and keeps updating, but what I see on screen doesn't match it. I'm guessing I overload OpenGL. I'm clearing the color buffer bit and the depth buffer (though I don't do any depth testing). Is there anything more I need to clear?
It's kind of tough to say what may be wrong with your program without actually looking at the code. Clearing the screen is one thing, but it really shouldn't have that big an impact, so unfortunately I can't tell you more without additional information.
Possibly it's a problem with slow hardware? This seems like a trivial "I have a slow graphics card" or "I have a lot of things open in the background" kind of problem. Also note that on most laptops, if you shake the machine the hard drive will lock up for a few seconds, causing stuttering.
As Andrew said, you can't really pinpoint this sort of problem without code.
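For reference, the decoupled update/render structure described in the question is usually written as a fixed-timestep loop with an accumulator; if rendering slows down, the updates still run at 60 Hz and only the drawing lags. A minimal sketch in plain Java - updateGame and renderGame are hypothetical placeholders here, not LWJGL calls:

// Fixed-timestep loop: updates run at a constant 60 Hz, rendering happens
// once per outer iteration at whatever rate the hardware allows.
public class GameLoop {
    static final double STEP = 1.0 / 60.0; // seconds per update

    public static void main(String[] args) {
        double previous = System.nanoTime() / 1e9;
        double accumulator = 0.0;
        while (true) { // a real game would have an exit condition
            double now = System.nanoTime() / 1e9;
            accumulator += now - previous;
            previous = now;
            // Run as many fixed-size updates as the elapsed time requires...
            while (accumulator >= STEP) {
                updateGame(STEP);
                accumulator -= STEP;
            }
            // ...then draw once, however many updates just ran.
            renderGame();
        }
    }

    static void updateGame(double dt) { /* game logic and physics */ }
    static void renderGame()          { /* OpenGL drawing */ }
}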

Tools for parsing natural language questions in realtime

"photos in washington" vs. "show me photos in washington" vs. "I wanna see all my photos in washington taken day before yesterday"
what: photos
entities: washington (don't want to be too assuming)
when: 2013-03-14
I want to parse preset queries into conditions (like above). I want these qualities:
I can extract relevant terms even in the presence of fluff ("I wanna see") and lowercase nouns.
A warm (long-running) program can accept requests over HTTP, or at least lets me add some network communication.
The warm program responds within 50 ms and needs at most 500 MB of memory for reasonable sentences.
I am more experienced in Python, less so in Java.
The parser's output data structure is easy to handle.
I use NLTK, but it's slow. I see StanfordNLP and OpenNLP as viable alternatives, but I find their program-start latency too high. I don't mind integrating them over servlets if I am left with no alternative.
The Stanford Parser is a solid choice, and pretty well-supported (as research code goes). But it sounds like low latency is an important requirement for you, so I'd also suggest you look at the BUBS Parser (full disclosure - I'm one of the primary researchers working on BUBS).
I haven't compared directly to NLTK, but I think you may find that the Stanford Parser doesn't meet your performance needs. This paper found a total throughput of ~60 words/second (~2-3 sentences/second). Those timings are pretty old, so newer hardware will certainly improve that, but probably still won't come close to 50 ms latency.
As you note, startup time will be an issue with any parser - a high-accuracy model is necessarily quite large. And 500 MB is probably pretty tight too (I usually run BUBS with 1-1.2 GB). But once loaded, BUBS latency is generally in the neighborhood of 10 ms per sentence (for ~20-25-word sentences), and we can push the total throughput up around 2500 words/second before accuracy starts to drop off. I think those numbers might meet your performance needs, and I don't know of any other high-accuracy (F1 >= 88-89) parser that comes close in speed.
Note: the fastest results are with recent pruning models that aren't yet posted to the website, but I can get you a model if you need. Hope that helps, and if you have more questions, feel free to ask.
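Whichever parser you end up with, the what/entities/when output in the question can be prototyped with plain pattern matching while you benchmark the heavier tools. A toy sketch in Java - the entity list and date phrases are illustrative assumptions, not a real NLP pipeline:

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Toy extraction only: shows the target output shape, not real parsing.
public class QuerySketch {
    static final Set<String> KNOWN_PLACES = Set.of("washington"); // hypothetical gazetteer

    public static void main(String[] args) {
        String q = "I wanna see all my photos in washington taken day before yesterday"
                .toLowerCase();

        String what = q.contains("photo") ? "photos" : "unknown";

        List<String> entities = new ArrayList<>();
        for (String place : KNOWN_PLACES)
            if (q.contains(place)) entities.add(place);

        LocalDate when = null; // check the longer phrase first
        if (q.contains("day before yesterday")) when = LocalDate.now().minusDays(2);
        else if (q.contains("yesterday"))       when = LocalDate.now().minusDays(1);

        System.out.println("what: " + what);
        System.out.println("entities: " + entities);
        System.out.println("when: " + when);
    }
}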

Terminal labyrinth game in Haskell

I was thinking of implementing a labyrinth game in Haskell. The labyrinth will be made of ASCII symbols and I would like it to be colored - for example, walls as blue '#', coins as yellow 'o', and so on - and I was looking at System.Console.ANSI.
I would like to ask whether it is possible to do this with this package at all, and I was also wondering how to refresh the labyrinth when an action happens (for example, it can have coins in it, represented by 'o', and when the hero steps on a coin, he picks it up and it should disappear) - will clearing the screen and printing the labyrinth again do the job smoothly?
Can you please give me some ideas, and maybe other packages if System.Console.ANSI won't do the job? Thank you very much in advance!
I suggest you have a look at vty-ui at http://hackage.haskell.org/package/vty-ui and http://jtdaugherty.github.com/vty-ui/. There's a very good user's manual for it. I've only played with it a little, but I think it would be well suited to your application.
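On the refresh question specifically: clearing and reprinting can work, and it looks much smoother if you move the cursor back to the top-left and overdraw the whole frame in a single write, rather than clearing the screen every frame. The escape sequences involved are terminal-level and language-independent (System.Console.ANSI wraps essentially the same codes in Haskell), so here is the technique sketched in Java as an illustration of the idea, not of the Haskell API:

// Flicker-reducing terminal redraw with raw ANSI escapes: build the whole
// frame in a buffer, home the cursor, write once. ANSI-capable terminals only.
public class MazeRedraw {
    static final String CLEAR  = "\u001b[2J"; // clear screen (once, at startup)
    static final String HOME   = "\u001b[H";  // move cursor to top-left
    static final String BLUE   = "\u001b[34m";
    static final String YELLOW = "\u001b[33m";
    static final String RESET  = "\u001b[0m";

    static void draw(char[][] maze) {
        StringBuilder frame = new StringBuilder(HOME);
        for (char[] row : maze) {
            for (char c : row) {
                if (c == '#')      frame.append(BLUE).append('#');   // wall
                else if (c == 'o') frame.append(YELLOW).append('o'); // coin
                else               frame.append(RESET).append(c);
            }
            frame.append(RESET).append('\n');
        }
        System.out.print(frame); // one write per frame keeps the update smooth
    }

    public static void main(String[] args) throws InterruptedException {
        char[][] maze = {
            "#####".toCharArray(),
            "#o .#".toCharArray(),
            "#####".toCharArray(),
        };
        System.out.print(CLEAR);
        draw(maze);
        Thread.sleep(500);
        maze[1][1] = ' '; // the hero picked up the coin...
        draw(maze);       // ...so overdraw in place instead of clearing again
    }
}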

2D platformers: why make the physics dependent on the framerate?

"Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast.
Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.)
What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step influence the physics calculations in any way? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine?
Thank you, and sorry if the question was confusing.
There is no reason why physics should depend on the framerate; this is clearly bad design.
I once tried to understand why people do this. I did a code review for a game written by another team in the company, and I didn't see it at first, but they used the hardcoded value 17 all over their code. When I ran the game in debug mode with the FPS displayed, I saw it: the FPS was exactly 17! I looked over the code again and then it was clear: the programmers had assumed the game would always run at a constant 17 FPS. If the FPS was greater than 17, they slept to bring it down to exactly 17. Of course, they did nothing if the FPS fell below 17, and the game just went crazy (for example, when playing at 2 FPS and driving a car, the game alerted me: "Too Fast! Too Fast!").
So I wrote an email asking why they had hardcoded this value and used it in their physics engine, and they replied that it kept the engine simpler. I replied: OK, but if we run the game on a device that can't manage 17 FPS, your engine behaves very oddly, not as expected. They said they would fix the issue by the next code review.
After 3 or 4 weeks I got a new version of the source code. I was really curious to find out what they had done with the FPS constant, so the first thing I did was search the code for 17. There were only a couple of matches, but one of them was not something I wanted to see:
final static int FPS = 17;
So they had removed all the hardcoded 17 values from the code and used the FPS constant instead. Their motivation: now, if the game needs to run on a device that can only do 10 FPS, all they need to do is set the FPS constant to 10 and the game will run smoothly.
In conclusion, sorry for writing such a long message, but I wanted to emphasize that the only reason anyone would do such a thing is bad design.
Here's a good explanation of why your timestep should be kept constant: http://gafferongames.com/game-physics/fix-your-timestep/
Additionally, depending on the physics engine, the system may become unstable when the timestep changes. This is because some of the data cached between frames is timestep-dependent. For example, the starting guess for an iterative solver (which is how constraints are solved) may be far off from the answer. I know this is true for Havok (the physics engine used by many commercial games), but I'm not sure which engine SMB uses.
There was also an article in Game Developer Magazine a few months ago, illustrating how a jump with the same initial velocity but different timesteps achieved different max heights at different frame rates. There was a supporting anecdote from a game (Tony Hawk?) where a certain jump could be made in the NTSC version of the game but not the PAL version (since the framerates are different). Sorry I can't find the issue at the moment, but I can try to dig it up later if you want.
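To make that concrete, here is a small self-contained sketch (illustrative numbers, simple explicit-Euler integration) showing the same initial jump velocity reaching different peak heights under different fixed timesteps - including the 17 FPS rate from the anecdote above:

// Same initial velocity, different timesteps -> different jump peaks
// under explicit Euler integration.
public class JumpHeight {
    static double maxHeight(double dt) {
        double y = 0.0, v = 10.0;   // initial upward velocity (m/s)
        final double g = -9.81;     // gravity (m/s^2)
        double peak = 0.0;
        while (y >= 0.0) {
            y += v * dt;            // position updated with the old velocity
            v += g * dt;            // then velocity
            peak = Math.max(peak, y);
        }
        return peak;
    }

    public static void main(String[] args) {
        System.out.printf("60 FPS: peak = %.3f m%n", maxHeight(1.0 / 60));
        System.out.printf("50 FPS: peak = %.3f m%n", maxHeight(1.0 / 50));
        System.out.printf("17 FPS: peak = %.3f m%n", maxHeight(1.0 / 17));
    }
}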
They probably needed to ship the game quickly and decided that the current implementation would cover a sufficient user base.
Now, it's not really that hard to retrofit framerate independence if you think about it during development, but I suppose they could have gone down some steep holes.
I think it's unnecessary, and I've seen it before (some early 3D-hardware game did the same thing, where the game went faster if you looked at the sky and slower if you looked at the ground).
It just sucks. Bug the developers about it and hope that they patch it, if they can.

What is the best way to learn Touch Typing? [duplicate]

This question is a duplicate of "How do I improve my Typing Skills?" and was closed 13 years ago.
I tried the test on http://speedtest.10-fast-fingers.com/. I reached only:
You type 337 characters per minute. You have 58 correct words and you have 1 wrong word.
How can I improve my typing speed? What free resources do you know of?
Should I learn the Dvorak Keyboard?
Practice is the best way to get faster. I've found TypeRacer to be a fun and easily accessible game. Using it, I quickly went from around 55 words per minute to over 70.
I removed all of the key caps from my IBM Model M. Since I couldn't see the letters, I was forced to learn their positions and type without looking at the keys, other than to initially orient my hands. When you're not able to take shortcuts, you tend to learn very quickly.
Mario taught me.
I also took the test and reached 371 characters per minute with one mistake. However, for programming, I would not see this as a bad result. I'm more worried about how to use tools like IntelliSense and code templates better to speed up my coding. The "jedi coding" demo shows that you can get much higher gains that way than by doubling your typing speed.
No need to learn Dvorak according to XKCD (and more here).
I also remember reading in The Design of Everyday Things that QWERTY actually does quite a good job of spreading the commonly used letters across your fingers, and whilst the Dvorak layout is a little better than QWERTY, the benefits aren't significant enough to justify making the change. (If I can find my copy I'll try and put up an exact quote.)
As with all things: practice makes perfect. Making posts on StackOverflow is a start :)
Unless you want to win typing contests, a QWERTY or AZERTY keyboard will work just fine.
You don't need to learn Dvorak. I can type 600+ characters per minute on a QWERTY keyboard, no problem.
The key is: Repetition, repetition, repetition.
What you're doing while you learn typing is creating new 'highways' straight from your brain's spelling center through your spine to your fingers.
Hence, a good typist will spell a word in his mind, and his fingers 'automatically' type those characters because there's a 10 lane highway from his brain to his fingers. In your case, it's a modest 3 lane highway.
Practice, practice, practice.
Good training if you already know how to type: www.play4traffic.com
There are also loads of typing tutor programs available online, but the key is repetition and persistence.
My native language is Dutch, so in English it's not as good. I tried the test you gave:
317 points, so you achieved position 194065 of 2927935 on the ranking list. You type 476 characters per minute. You have 80 correct words and you have 4 wrong words.
Why?
Why do you want to type more quickly? I seldom find that my fingers or typing speed are the issue when it comes to software development. Sure, I type at a fair speed, but programming is about SO much more than typing speed. I've been using a QWERTY keyboard since about 1983, so I guess repetition helps.
But learning to hold back on typing and thinking about what it is you're about to do is far more valuable IMHO.
Having said that, I would expect any developer to be able to type reasonably quickly using most fingers, or at least more than their two index fingers ;)
This game taught me a few years back.
The Typing Of The Dead
I can now type fairly quickly without looking at the keyboard. You need to learn to use the correct hand position. Then you must have good discipline and only use the correct finger to type the correct letters. I even went so far as to delete correct chars typed with the wrong finger.
It takes time, and you will almost definitely go slower before you go faster, but it is worth it.
