How to wait x seconds in 6502 BASIC

How do I wait x amount of time in 6502 BASIC? I am on VICE xpet, and I would like to print a character, wait a little, then delete it, and repeat for a while, as a sort of status indicator. The only problem is that it deletes too fast, so the indicator never shows up. I've tried looking for such a command in the reference, but there is nothing for just flat out waiting a little bit. I know that if I make a huge FOR loop I may be able to slow the machine down enough to do it by brute force, but I'd rather avoid that if possible. Is there not a better way?
Thanks!

You can refer to the system variable TI for timing purposes. Its value is incremented automatically every 1/60 of a second. It will not be perfect, but it works.
The example below prints the current value of TI once per second:
10 PRINT "START " TI
20 T0=TI : REM REMEMBER START OF INTERVAL
30 IF TI-T0>=60 THEN PRINT TI;TI-T0 : GOTO 20
40 GOTO 30

It's been decades since I programmed on a 6502 (C-64/VIC-20), but I'm pretty sure even their version of BASIC had a timer keyword. If memory serves, it counts milliseconds, but I could be wrong; you might have to play with it. Set a variable equal to the timer, do a FOR/NEXT loop to take up some time, then check the timer's value again. Once you figure out how many ticks occur in a second, you can make that a constant and loop until the timer reaches its starting value plus that many ticks (setting the start variable from the timer before the loop, of course).
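For what it's worth, Commodore BASIC on the PET exposes the jiffy clock to BASIC as TI (and TI$) rather than a TIMER keyword, and it ticks 60 times per second. A rough sketch of the blink-and-wait loop the original question describes, using TI and an assumed half-second (30 jiffy) delay, might look like this:

100 REM BLINK A STATUS CHARACTER USING THE JIFFY CLOCK TI (60 TICKS PER SECOND)
110 FOR N=1 TO 10
120 PRINT "*";
130 T0=TI : REM REMEMBER START, THEN WAIT ROUGHLY HALF A SECOND (30 JIFFIES)
140 IF TI-T0<30 THEN 140
150 PRINT CHR$(157);" ";CHR$(157); : REM CURSOR LEFT, BLANK THE CHARACTER, CURSOR LEFT
160 T0=TI : REM WAIT AGAIN BEFORE THE NEXT BLINK
170 IF TI-T0<30 THEN 170
180 NEXT N

The 30-jiffy delay and the CHR$(157) cursor-left trick are just one way to do it; adjust the constant to taste.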

Related

Executing a function periodically at accurate and precise intervals

I want to implement an accurate and precise countdown timer for my application. I started with the simplest implementation, which was not accurate at all.
loop {
    // Code which can take up to 10 ms to finish
    ...
    let interval = std::time::Duration::from_millis(1000);
    std::thread::sleep(interval);
}
As the code before the sleep call can take some time to finish, I cannot run the next iteration at the intended interval. Even worse, if the countdown timer is run for 2 minutes, the 10 milliseconds from each iteration add up to 1.2 seconds. So, this version is not very accurate.
I can account for this delay by measuring how much time this code takes to execute.
loop {
    let start = std::time::Instant::now();
    // Code which can take up to 10 ms to finish
    ...
    let interval = std::time::Duration::from_millis(1000);
    // Note: this subtraction would panic if the work ever took longer than `interval`.
    std::thread::sleep(interval - start.elapsed());
}
Even though this seems to be precise to within milliseconds, I wanted to know whether there is a way to implement this that is even more accurate and precise, and/or how it is usually done in software.
For precise timing, you basically have to busy wait: while time.elapsed() < interval {}. This is also called "spinning" (you might have heard of "spin lock"). Of course, this is far more CPU intensive than using the OS-provided sleep functionality (which often transitions the CPU in some low power mode).
To improve upon that slightly, instead of doing absolutely nothing in the loop body, you could:
Call thread::yield_now().
Call std::hint::spin_loop().
Unfortunately, I can't really tell you what timing guarantees these two functions give you. But from the documentation it seems like spin_loop will result in more precise timing.
Also, you very likely want to combine the "spin waiting" with std::thread::sleep so that you sleep the majority of the time with the latter method. That saves a lot of power/CPU-resources. And hey, there is even a crate for exactly that: spin_sleep. You should probably just use that.
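For illustration, here is a minimal hand-rolled sketch of that hybrid idea (sleep for most of the interval, then spin for the last stretch); the 2 ms spin margin and the sleep_until helper are assumptions of this sketch, not something the standard library or spin_sleep provides under these names:

use std::time::{Duration, Instant};

// Sleep coarsely until close to the deadline, then busy-wait the rest.
// The 2 ms margin is a guess you would tune for your OS and hardware.
fn sleep_until(deadline: Instant) {
    let spin_margin = Duration::from_millis(2);
    loop {
        let now = Instant::now();
        if now >= deadline {
            return;
        }
        let remaining = deadline - now;
        if remaining > spin_margin {
            std::thread::sleep(remaining - spin_margin); // cheap, but imprecise
        } else {
            std::hint::spin_loop(); // precise, but burns CPU for the last couple of ms
        }
    }
}

fn main() {
    let interval = Duration::from_millis(1000);
    let mut next = Instant::now() + interval;
    for i in 0..5 {
        // ... work that may take up to 10 ms ...
        sleep_until(next);
        println!("tick {i}");
        next += interval; // schedule against absolute deadlines so drift does not accumulate
    }
}

Scheduling against absolute deadlines (next += interval) rather than sleeping for a fixed duration each iteration is what keeps the 10 ms of work from adding up over two minutes.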
Finally, just in case you are not aware: for several use cases of these "timings", there are other functions you can use. For example, if you want to render a frame every 60th of a second, you want to use some API that synchronizes your loop with the refresh rate/v-blanking of the monitor directly, instead of manually sleeping.

Celery chains: Is it necessary to wait before getting the results?

So, I have a chain of tasks in Python 3 that a Celery worker runs. Currently, I use the following piece of code to get and print the final result of the chain:
while not result.ready():
    pass
print(result.get())
I have run the code with and without the while-loop, and it seems that the while-loop is redundant.
My question is: "is it necessary to have that while-loop?"
If by redundant, you mean that the code works fine without the while loop, then I would venture to say that the loop is not necessary. If, however, you throw an error without the loop because you're trying to print something that doesn't exist yet, then you should keep it. This can be a problem, though, because an empty while loop means you're just checking the same variable as fast as your computer can physically handle it, which tends to eat up your CPU. I recommend something like the following:
import time
t = 1  # the number of seconds you want to wait between checking if the result is ready
while not result.ready():
    time.sleep(t)
print(result.get())
You can set t to whatever makes sense. If the task you're running takes several hours, maybe set it to 60, and you'll get the result within a minute. If you want the result faster, you can make the interval smaller. This will keep the program from dragging down the rest of your computer. However, if you don't mind your fans blowing and you absolutely need to know the moment the result is ready, ignore all of the above and leave your code the way it is :)

JMeter - how to get a higher randomization effect?

I need to simulate "real traffic" on a Web farm; in other words, I need to generate high peaks as well as periods with fewer or even no HTTP requests (hits) at all. The reason is to test some automated mechanisms for adding and reducing CPU and memory for the Web servers themselves (that is another story). That is why I need "totally random" scenarios where I have load, but also periods with little or no traffic (so I can add or reduce compute power).
This is the situation I get now: as you can see, I always have some average load around a certain number of hits, even if I change from 10 to 100 threads. The results always hover around some average value. There are no periods of lower or higher traffic separated by 10 minutes or so, only by a few seconds.
Current situation
I would like to get higher variation in hits/requests, with some time breaks in between.
Situation that I want: i.stack.imgur.com/I4LhU.png
I tried several timers with no success, and I do not want to use "Ultimate Thread Group" and similar components because I want the test to be totally random and not predefined with time breaks and pause periods (thread delays). I would like a test that is totally randomized by itself - one that could, for example, generate from 1 to 100 users per XY time.
This is my current Jmeter setup: i.stack.imgur.com/I4LhU.png
I do not know if I am missing some parameter in my current setup, or whether there is an entirely different way to do this.
Thanks a lot!
If this is something you really want (I strongly believe that the test needs to be repeatable, not random), I would suggest using Constant Throughput Timer for this. Despite the word "Constant" in its name you can use a Function or a Variable there, for instance __Random() and you will get different controllable "spikes" each iteration.
Moreover, you can put a __P() function there and amend its value via the Beanshell server while the test is running.
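As a rough illustration (the numbers are only an example, not a recommendation), the timer's "Target throughput (in samples per minute)" field can hold a function or a property instead of a fixed number:

Target throughput (in samples per minute): ${__Random(60,6000)}
Target throughput (in samples per minute): ${__P(throughput,600)}

The first form re-evaluates to a different rate as the test runs, giving the random spikes; the second reads a throughput property that you can change from outside (for example via the Beanshell server) while the test is running.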

Modified activity selection

In activity selection we sort on the finish time of the activities and then apply the constraint that no two activities can overlap. I want to know whether we can do it by sorting on start time and then checking that activities do not overlap.
I was going through http://www.geeksforgeeks.org/dynamic-programming-set-20-maximum-length-chain-of-pairs/
This link has a dynamic programming solution for finding the maximum length chain of pairs of numbers. To me this is another formulation of the activity selection problem, but I have searched the net and also read Cormen, and everywhere they ask to sort on finish times.
I guess it shouldn't matter which times (start or finish) we sort on, but I just want to confirm.
In a greedy algorithm we always try to maximize the result. Thus, in activity selection we try to accommodate as many processes as we can in a given time interval without any of them overlapping.
If you sort on start time then your solution might not be optimal. Let's take an example:
Process    Start Time    Finish Time
A          1             9
B          3             5
C          6             8
Sorted on start time:
If you execute process A because it starts the earliest, no other process can be executed, because they would overlap. Therefore, for the given time interval you can only execute one process.
Sorted on finish time:
If you execute process B because it finishes the earliest, you can execute process C after it. Therefore, for the given time interval you can execute two processes.
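As a small sketch (in Rust, just to make it concrete; the function and variable names are made up for illustration), running the greedy selection with both sort orders on the table above shows the difference:

fn select(mut acts: Vec<(&str, u32, u32)>, by_finish: bool) -> Vec<&str> {
    // Sort on finish time or on start time, then greedily take every
    // activity that starts no earlier than the last chosen one finished.
    acts.sort_by_key(|&(_, start, finish)| if by_finish { finish } else { start });
    let mut chosen = Vec::new();
    let mut last_finish = 0;
    for (name, start, finish) in acts {
        if start >= last_finish {
            chosen.push(name);
            last_finish = finish;
        }
    }
    chosen
}

fn main() {
    let acts = vec![("A", 1, 9), ("B", 3, 5), ("C", 6, 8)];
    println!("{:?}", select(acts.clone(), true)); // sorted on finish time -> ["B", "C"]
    println!("{:?}", select(acts, false));        // sorted on start time  -> ["A"]
}

The greedy rule itself is the same in both runs; only the sort key changes, and only the finish-time ordering yields the optimal count of two activities.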

Issue with timer event handler - VC++

I started a new Windows Forms project in Visual Studio 2010 using C++.
There is only one timer, configured to generate an event every 1 ms (1 millisecond).
Inside the timer event handler I just increment a variable named Counter (which is used only in this handler) and write its current value to a textbox, so that I can see it.
Considering that the timer event occurs every 1 ms, the Counter variable should increment 1000 times per second, but it takes around 15 seconds to increment 1000 times. After 15 seconds the value shown in the textbox is 1000.
I set the timer interval to 1 ms, but it seems the event only occurs every 15 ms, because Counter took 15 times longer than it should in theory to reach 1000 (1 second = 1000 * 1 ms).
Does someone have an idea how to solve this problem?
I need to generate an event every 1 ms, in which I will call another function.
How could I generate an event at a 1 ms interval, or less than this if possible?
A person on another forum told me to create a thread to do this job, but I don't know how to do that.
I'm using Windows 7 Professional 64-bit; I don't know if a 64-bit OS has anything to do with this issue. I think the PC hardware is enough to generate the event: Core 2 Duo 2 GHz and 3 GB RAM.
http://img716.imageshack.us/img716/3627/teste1ms.png
The documentation for System.Windows.Forms.Timer states that
The Windows Forms Timer component is single-threaded, and is limited to an accuracy of 55 milliseconds
So that should explain the discrepancy. Your approach seems a little wrong IMHO: having a thread wake up every 1 ms, and precisely at that, is very hard to do in a preemptive multitasking OS.
What you can do instead is:
Initialize a counter to zero and a high precision time variable to the current time.
Have a timer wake you up periodically.
When your timer fires, use a high precision timer to find the current time.
Compute the delta between the new and old high precision times, and increment the counter by as much as it should actually have advanced, or call some callback function that many times.
This approach will be way more precise than any timer event.
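A sketch of those steps (in Rust only to keep it short and self-contained; in the Forms app the high precision clock would be Stopwatch or QueryPerformanceCounter, and the loop would be the coarse timer's event handler):

use std::thread::sleep;
use std::time::{Duration, Instant};

fn main() {
    let start = Instant::now(); // high precision reference point
    let mut counter: u128 = 0;  // milliseconds accounted for so far

    for _ in 0..20 {
        // Stand-in for the coarse timer firing; on Windows this is roughly every 15 ms.
        sleep(Duration::from_millis(15));

        // Use the high precision clock to see how much time really passed,
        // then "catch up" by counting every millisecond that elapsed.
        let elapsed_ms = start.elapsed().as_millis();
        while counter < elapsed_ms {
            counter += 1;
            // on_millisecond(); // hypothetical 1 ms callback, invoked once per elapsed ms
        }
        println!("counter = {counter}");
    }
}

The counter stays correct even though the wake-ups are coarse, because the increment is driven by the measured elapsed time rather than by how often the timer happens to fire.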
