What is the logic behind 6502 instruction clock cycles? [closed] - emulation

Some questions to consider taken from page 10 of the 6502 datasheet:
http://archive.6502.org/datasheets/rockwell_r650x_r651x.pdf
What does page 10 mean by "Add 1 to N if page boundary is crossed"?
What does page 10 mean by "Add 1 to N if branch occurs on the same page"?
What does it mean by "Add 2 to N if branch occurs to a different page"?
Does reading from or writing to other devices, i.e. RAM, cause any irregularities in clock cycles?
Are there any other factors that could affect clock cycles on the 6502 (more specifically, the NES)?

With regard to 6502 machine instructions: addresses are sixteen bits, calculated and stored as two eight-bit bytes. When doing address calculations, such as with register-indexed instructions or the target address of a branch instruction, it's possible that there is an internal carry from the least significant byte to the most significant byte. This is what 'crossing a page boundary' means - a 'page' being 256 bytes. Propagating that internal carry can impose a penalty of one cycle.
To see it more clearly, if you write your addresses in hexadecimal, the lower byte of the address is the right-hand two digits. For example, in address $1234 the lower byte is $34 and the upper byte is $12. If the address you branch to, load from, or store to 'crosses the page boundary' by tipping the upper byte over by one - for example from $12FF to $1300 - then a one-cycle penalty is incurred.
With branching instructions there is a further cycle added if the branch is 'taken', in other words the condition is satisfied and the program jumps to the new location. So if a taken branch also lands in another page, effectively 2 cycles are added.
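The rules above can be sketched in code. This is a hypothetical illustration of how an emulator might account for the penalties (the function names and the 4-cycle base for LDA abs,X come from the datasheet's timing tables; nothing here is from any real emulator):

```python
# Hypothetical sketch of 6502 cycle accounting; names are illustrative.

def crosses_page(base, effective):
    """A 'page' is 256 bytes, so two addresses share a page
    exactly when their high bytes are equal."""
    return (base & 0xFF00) != (effective & 0xFF00)

def lda_absolute_x_cycles(base, x):
    """LDA abs,X has a base cost of 4 cycles, plus 1 if the
    indexed address carries into the next page."""
    effective = (base + x) & 0xFFFF
    return 4 + (1 if crosses_page(base, effective) else 0)

def branch_cycles(taken, pc_after_operand, target):
    """A branch costs 2 cycles if not taken, +1 if taken,
    and +1 more if the target lies in a different page."""
    cycles = 2
    if taken:
        cycles += 1
        if crosses_page(pc_after_operand, target):
            cycles += 1
    return cycles
```

For example, `lda_absolute_x_cycles(0x12F0, 0x20)` yields 5 because $12F0 + $20 = $1310 tips the upper byte from $12 to $13.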


How could I identify a sentence disclosing some specific information in a paragraph? [closed]

For example, I have such a paragraph as below:
The first sentence (bold and italic) is what I hope to identify.
The identification goal includes:
1. whether this paragraph contains such a disclosure;
2. what this disclosure is.
The possible problems are:
1. this sentence may not be at the beginning of the text string; it could be anywhere in the given paragraph.
2. this sentence may vary in wording but carry the same meaning. For example, it could also be expressed as: "Sample provided for review" or "They sent to me an item for evaluation" or something like this.
So how could I identify such disclosures? Any ideas would be greatly appreciated. Thanks.
The paragraph:
I was sent this Earbuds Audiophile headphones to review. I am just going to copy here the information from the site: "High Definition Stereo Earphones with microphone Equipped with two 9mm high fidelity drivers, unique sound performance, well-balanced bass, mids and trebble. Designed specially for those who enjoy classic music, rock music, pop music, or gaming with superb quality sound. Let COR3 be your in ear sports earbuds. Replaceable Back Caps, inline controller and mic
Extreme flexible tangle free flat TPE cable including inline controller with universal microphone. Play/Pause your music or Answer/Hang up a call with a touch of a button right next to your hands, feature available depending on your device capability. COR3 should be your best gaming earbuds.
Extremely Comfortable
Methods I have tried:
Up to now, my processing is very naive:
1. Manually labeled 1000 reviews with a binary variable (1 = includes the disclosure text, 0 otherwise).
2. Collected all the disclosure texts as a corpus, denoted DisclosureCor.
3. Based on DisclosureCor, I derived some basic regular expression rules, like "review.* evaluation|test|opinion".
4. Used these summarized rules to label new data.
5. The problem is that the rules may not be complete, since they are just my own subjective summarizations. Besides, these patterns may occur not only in the disclosure sentence but also in other parts of the review paragraph, generating lots of noise (i.e. low precision).
6. I tried to use classification-based association rules to learn rules from the labeled data, but the number of keywords is huge, training took a very long time, and it crashed often.
7. I also tried comparing the similarity of each review paragraph with DisclosureCor, but it's difficult to find a threshold for deciding whether a review paragraph contains a disclosure.
These are all the approaches I have tried; could you please give me some hints? Thanks.
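A minimal sketch of the regex-based labeling described in steps 3-4 above. The patterns and the sentence splitter here are illustrative assumptions, not a tested rule set; a real one would be derived from the labeled corpus as described:

```python
import re

# Illustrative patterns only - a real rule set would be derived from
# the labeled corpus (DisclosureCor) described above.
DISCLOSURE_PATTERNS = [
    re.compile(r"\b(sent|provided|received)\b.{0,60}\b(review|evaluation|test)\b", re.I),
    re.compile(r"\bfor\s+(an?\s+)?(honest\s+)?review\b", re.I),
]

def contains_disclosure(paragraph):
    """Label a paragraph 1 if any disclosure pattern matches, else 0."""
    return int(any(p.search(paragraph) for p in DISCLOSURE_PATTERNS))

def find_disclosure_sentence(paragraph):
    """Return the first sentence matching a pattern, or None."""
    for sentence in re.split(r"(?<=[.!?])\s+", paragraph):
        if any(p.search(sentence) for p in DISCLOSURE_PATTERNS):
            return sentence
    return None
```

This reproduces the precision problem the question raises: any fixed pattern list will both miss rephrasings and fire on non-disclosure sentences, which is why a classifier over labeled data is usually the next step.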

Adding records to VSAM DATASET [closed]

I have some confusion regarding VSAM as I am new to it. Do correct me where I am wrong and resolve the queries.
A cluster contains control areas, and a control area contains control intervals. One control interval contains one dataset. Now, when defining a cluster we name a data component and an index component. The name we give the data component creates a dataset, and the name of the index generates a key. My queries are as follows:
1)If I have to add a new record in that dataset, what is the procedure?
2)What is the procedure for creating a new dataset in control area?
3)How to access a dataset and a particular record after they are created?
I tried to find a simple code example but was unable to, so kindly explain with a simple example.
One thing that is going to help you is the IBM Redbook VSAM Demystified: http://www.redbooks.ibm.com/abstracts/sg246105.html which, these days, you can even get on your smartphone, amongst several other ways.
However, your current understanding is a bit astray so you'll need to drop all of that understanding first.
There are three main types of VSAM file, and you'll probably only come across two of those as a beginner: KSDS and ESDS.
KSDS is a Key Sequenced Data Set (an indexed file) and ESDS is an Entry Sequenced Data Set (a sequential file but not a "flat" file).
When you write a COBOL program, there is little difference between using an ESDS and a flat/PS/QSAM file, and not even that much difference when using a KSDS.
Rather than providing an example, I'll refer you to the Enterprise COBOL Programming Guide for your release of COBOL. It is Chapter 10 you want, up to and including the section on handling errors, and the publication can be found here: http://www-01.ibm.com/support/docview.wss?uid=swg27036733. You can also use the Language Reference for the details of what you can use with VSAM once you have a better understanding of what it is to COBOL.
As a beginning programmer, you don't have to worry about what the structure of a VSAM dataset is. However, you've had some exposure to the topic, and taken a wrong turn.
VSAM datasets themselves can only exist on disk (what we often refer to as DASD). They can be backed-up to non-DASD, but are only directly usable on DASD.
They consist of Control Areas (CA), which you can regard as just being a lump of DASD; almost exclusively that lump of DASD will be one Cylinder (15 tracks on a 3390, which these days is very likely an emulated 3390). You won't need to know much more about CAs. CAs are more of a conceptual thing than an actual physical thing.
Control Intervals (CI) are where any data (including index data) is. CIs live in CAs.
Records, the things you will have in the FILE SECTION under an FD in a COBOL program, will live in CIs.
Your COBOL program needs to know nothing about the structure of a VSAM dataset. COBOL uses the VSAM access method to do all VSAM file accesses; as far as your COBOL program is concerned it is an "indexed" file with a little bit on the SELECT statement to say that it is a VSAM file. Or it is a sequential file with a little... you know by now.
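As a rough analogy only (this is not VSAM or COBOL, just an illustration of the access pattern a KSDS gives you): a keyed dataset lets you write, read, and rewrite records by key, which is all your program really needs to know. A toy sketch:

```python
# Analogy only: a KSDS-style keyed store sketched with a plain dict.
# Real access goes through COBOL file I/O (WRITE, READ ... KEY IS, REWRITE)
# against a file declared with ORGANIZATION IS INDEXED.

class KeyedDataset:
    def __init__(self):
        self._records = {}          # key -> record

    def write(self, key, record):   # like WRITE on a KSDS
        if key in self._records:
            # VSAM would signal a duplicate-key file status instead
            raise KeyError("duplicate key")
        self._records[key] = record

    def read(self, key):            # like READ ... KEY IS
        return self._records[key]

    def rewrite(self, key, record): # like REWRITE
        if key not in self._records:
            raise KeyError("record not found")
        self._records[key] = record
```

The point of the analogy: your program deals only in keys and records; CAs and CIs are how VSAM arranges things underneath, which the program never sees.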

Why can't we use the arithmetic mean in planning poker? [closed]

Why so? I really can't understand that. Why can we only select from the numbers proposed by the players?
The numbers used are spaced far apart on purpose (typically from the Fibonacci sequence). If you get numbers from all across the board from 1 to 23, you're supposed to ask why the person who voted 1 gave it such a low score ("Did you think about testing and deployment? What about these other acceptance criteria?") and why the person who voted 23 gave it such a high score ("Are you aware we already have an existing code base? Did you know that Karen knows a lot about this and can pair up with you?") and then re-vote. If you're really stuck because half the team says 8 and the other half says 13, you can take the 13 and move on with your lives.
The extra precision isn't necessary when your accuracy is not great. My team goes for even less precision and buckets stories into "small" (one person can do a bunch in an iteration), "medium" (one person can handle a few of these), "large" (one person a week or more), and "extra large" (too big and needs to be split).
You can do what you want to do. However, the thought about choosing the exact numbers that are proposed is that with growing numbers, you cannot estimate small details reliably. That's why with growing numbers, the gaps between numbers become larger.
Once you start giving detailed numbers (like one person estimating 8, the next 13, and choosing 11 as a mean), people assume this actually is a detailed estimation. It's not. The method is meant to be a rough guess.
The idea behind having people agree on one number is that everybody should have the same understanding of the story.
If people pick very different numbers, they have different understandings of how much work is needed to complete the story or how difficult it will be. The differing numbers should then start discussions and finally lead to a shared view of the story.
You should think of the numbers as symbols with no arithmetic meaning, except for a (partial) ordering relation, because they are estimates (of the effort needed to complete a user story).
If you use math to model an estimate you should provide a way:
to represent certainty
to represent uncertainty
to operate with that representations
to define an average as a function of certainty and uncertainty
If you use some kind of average which operates on estimates modeled as single numbers, you are supposing that certainty and uncertainty can be handled in the same way, and I guess that's a bad assumption.
I think the spirit of a planning poker session is achieving team-shared estimates through discussion among human beings, not using arithmetic on human beings' estimates.
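A toy illustration of why the mean adds nothing (the scale values are the common modified-Fibonacci deck; the snapping function is my own illustration, not part of any planning poker rule set):

```python
# The common modified-Fibonacci planning poker deck.
SCALE = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(value):
    """Snap a number back onto the deck: the nearest allowed card."""
    return min(SCALE, key=lambda card: abs(card - value))

votes = [8, 8, 13, 13]
mean = sum(votes) / len(votes)   # 10.5 - a precision the estimates don't have
# Snapping the mean just lands back on a card anyway, and the disagreement
# that the split vote signals is lost unless the team discusses it.
```

Here `snap_to_scale(10.5)` gives 8: the "precise" 10.5 was an illusion, and the interesting information was the 8-vs-13 split, which only discussion can resolve.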

Fibonacci Vs Binomial estimation points? [closed]

Are there good arguments for using the modified Fibonacci series (0, 1, 2, 3, 5, 8, 13, 20, 40, etc) instead of a geometric progression (1, 2, 4, 8, 16, 32, etc) when estimating story points in Scrum (or any agile methodology, really)?
I know that Scrum does not specify Fibonacci, or any specific system, but it is definitely the most popular.
I also see that there are good reasons that either of these systems would be better than a linear progression - increasing uncertainty, and removing time-consuming (and meaningless) arguments (is this a 6 or a 7?).
So is it just by convention and history that Fibonacci is (almost) all that is mentioned when talking about story point scales or are there specific arguments for it over geometric?
First, you're thinking of a "geometric progression", not a "binomial progression" (which is not a real thing).
As for which.... it doesn't matter very much. The Fibonacci series actually approaches a geometric progression as the numbers get larger, so clearly the two have a lot in common. The baseline story units matter far more than the set you choose.
This is what I believe is beneficial if you use Fibonacci:
1- You don't have to compare complexity relative to other stories too precisely. If you are not using the Fibonacci series, you may end up debating whether a story is two or four times bigger than another one. The idea is to keep user stories at the lower points; if you are getting into the higher point range, we don't want to focus on deciding whether a story is four times bigger than the one estimated just now.
2- Fibonacci numbers can be found in many natural patterns, so it might be more natural to us estimating the user stories by them. https://en.wikipedia.org/wiki/Fibonacci_number
3- Back to the lower-point stories: compare the gap between 8 and 13 with the gap between 8 and 16. There are no in-between points. The whole point is to give flexibility when it comes to the stories you want to accomplish within a sprint (which should be fairly simple), so sticking to 2 3 5 8 13 is way better than 2 4 8 16 32.
(BTW the Fibonacci sequence has 21 instead of 20; usually they simplify that and make it 20.)
(If I had to argue for only one, I would choose the 3rd one.)
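The earlier claim that the Fibonacci series approaches a geometric progression can be checked directly: the ratio of consecutive terms converges to the golden ratio (about 1.618), so late terms grow like a geometric series with that ratio, versus a constant ratio of exactly 2 for the doubling scale. A quick check:

```python
def fib(n):
    """First n terms of a Fibonacci-style scale, starting 1, 2."""
    seq = [1, 2]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

seq = fib(15)
ratios = [b / a for a, b in zip(seq, seq[1:])]
# Later ratios settle near the golden ratio (~1.618), while the
# doubling scale (1, 2, 4, 8, ...) keeps a constant ratio of 2.
```

So in the large, the two scales behave similarly; the practical differences are in the small values, as the answers above argue.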

Is it truck factor or bus factor? [closed]

It seems that both these terms get thrown around a lot. Both, I think, describe the same idea. Which was established first?
Also, it seems like some people describe a low x factor as a good thing, while others describe a high x factor as a good thing. Which is it?
You want a high truck/bus factor:
Truck Factor (definition): "The number of people on your team who have to be hit with a truck before the project is in serious trouble"
(From: http://www.agileadvice.com/archives/2005/05/truck_factor.html)
i.e. you don't want parts of the code that only one person knows how to work on, or that only one person can extend/maintain. Knowledge should be spread amongst the whole team via things like wiki info and pair programming.
Wikipedia says the bus number is "more commonly known as truck number". But in the US, "hit by a bus" is practically an idiom, while "hit by a truck" is not (although either phrase is easily understood). Regarding high/low being good, the Wikipedia article says:
"High bus numbers are good (with the best case being equal to the number of developers on a project). This means all developers understand the codebase more or less equally. A low bus number (especially one) represents a high risk."
I'd add to what #cartoonfox said: promiscuous pair programming is a good way to distribute critical knowledge around a team so that the truck number is as high as possible. If you don't swap pairs often and with many different team members, knowledge isn't distributed very quickly.
The Truck Number (or Truck Factor) is the number of people with key knowledge that you cannot replace i.e. if that number of people went simultaneously under a truck you wouldn't be able to carry on developing.
I believe that certain chemical companies forbid key members of staff from travelling together for this very reason...
Discussion here: http://c2.com/cgi/wiki?TruckNumber
Here's a story about Bill Atkinson being one key person in the Mac's truck factor - one of the key people who worked on QuickDraw during the early days of the Mac. He apparently had a car accident, and people were concerned that he wouldn't be able to finish his work on the Mac's graphics software:
http://folklore.org/StoryView.py?project=Macintosh&story=I_Still_Remember_Regions.txt
A high truck number is better - i.e. it's harder to wipe out that many critical people at once...
A low truck number is worse - i.e. there is a greater risk that a few critical people could be ill, or leave or die, leaving the project in a state of unrecoverable collapse.
Pair programming is a good way to distribute critical knowledge around a team so that the truck number is as high as possible.
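The definition above can be made concrete as a toy computation (my own illustration, not from any of the cited sources): model who knows which code areas, and find the smallest group of people whose loss leaves some area with nobody who knows it.

```python
from itertools import combinations

def truck_number(knowledge):
    """knowledge: dict mapping person -> set of code areas they know.
    Returns the size of the smallest set of people whose simultaneous
    loss leaves some area with no remaining knower."""
    people = list(knowledge)
    areas = set().union(*knowledge.values())
    for k in range(1, len(people) + 1):
        for lost in combinations(people, k):
            remaining = set(people) - set(lost)
            covered = set().union(*(knowledge[p] for p in remaining)) if remaining else set()
            if covered != areas:
                return k   # losing these k people orphans some area
    return len(people)
```

With `{"alice": {"ui", "db"}, "bob": {"db"}}` the truck number is 1 (losing alice orphans the UI code); if bob also learns the UI code it rises to 2, which is the effect pair rotation is aiming for.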
The principle is the same, whether you call it:
bus number
truck number
bus factor
truck factor
et al
Also, the principle is the same whether you describe a higher number as being better, or a lower number as being better:
A high bus number is good if you are describing the number of project members who could be hit by a bus and have the project survive;
A low bus number is good if you are describing the number of project members who survive a bus crash and have the project survive.
I did look into it once upon a time, but I don't recall which came first (see #Paolo's answer). Regardless of which came first, I have experienced enough confusion about it that I make sure all parties are using the same version of the number, high or low. ;)
