If a DRAM entity has a 12-bit address input and 16 bits of data input/output, is each address referring to one bit, or one 16-bit word? - memory-address

I have a diagram that has the address bus into the DRAM entity as 12 bits, and the data input/output as 16 bits.
- This means it's a "2^12 * 16" memory?
- This is where I get confused: my textbook shows a bit-level organization of a 16-bit DRAM, where the first 2 bits of the 4-bit address represent the row, and the last 2 bits the column.
- In that textbook example each address refers to one bit, so in the diagram described above, if I were to arrange the DRAM bits into a square matrix, it would have 2^6 addressable rows and 2^6 addressable columns, correct?
- Now, does each column address refer to one bit, or does each column address refer to one 16-bit word?
- Example: if I gave the memory address "000001000011" to the entity in the diagram, the data would be located in row "000001", and column "000011" would hold it? And is that column referring to one bit or one 16-bit word?
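
A small sketch may make the split concrete. Assuming each address selects one 16-bit word (which is what the 16-bit data bus suggests) and the usual arrangement of upper address bits as row and lower bits as column, the decomposition looks like this (the function name is made up for illustration):

// Hypothetical sketch: split a 12-bit DRAM address into row and column,
// assuming a 2^12 x 16 organization where each address selects a 16-bit word.
function splitAddress(addr: number): { row: number; col: number } {
  const row = (addr >> 6) & 0x3f; // upper 6 bits: one of 2^6 rows
  const col = addr & 0x3f;        // lower 6 bits: one of 2^6 columns
  return { row, col };
}

const { row, col } = splitAddress(0b000001000011);
console.log(row.toString(2).padStart(6, '0')); // "000001"
console.log(col.toString(2).padStart(6, '0')); // "000011"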

Related

How to compute a t-test step by step

I am trying to compute the t-test using Excel, without the macro included in the software.
Specifically, given a dataset, for example
var1
21
34
23
32
21
42
32
12
53
31
21 - from here
41
12
14
24 - to here
I am interested in analysing the change in the last five rows (from 21 to 24).
What I did was compute the mean of the two samples, i.e. of set1 (from 21 to 31) and of set2 (from 21 to 24).
Then I computed the variance of these sets, using VAR.S.
Once I did that, I used the formula for unequal sample sizes and unequal variances to determine the degrees of freedom.
Now what I should do is use the T.DIST function in Excel to get the final result. However, I cannot understand the parameters to insert.
Could you please tell me what I should do, and whether what I have done so far is correct?
Thank you.
Having 21 at the start of both sets of data led to some confusion about which points were being compared with which; a different example (or explicitly identifying both sets in the diagram, not just the second one) could avoid that ambiguity.
Note that if these are time series data, the assumption of independence will usually not be tenable.
You don't need a macro; Excel has a built-in t-test function (T.TEST) that can do what you ask.
T.TEST(array1,array2,tails,type)
array1 and array2 should be two non-overlapping sets of observations.
tails should be 1 or 2 depending on whether you need a 1- or 2-tailed test. I presume you want two-tailed. (If it's one-tailed, beware: it actually just reports half the two-tailed p-value, which is not correct if the sample means are in the opposite direction to the one hypothesized in the one-tailed alternative.)
type is 1, 2 or 3 depending on whether you want paired, independent with equal variance, or independent with unequal variance. (It's possible to make the paired-test option do a one-sample t-test as well.)
There's an explicit example of using this function at the link above.
With your approach you next need to compute the t-statistic before using the T.DIST function; that's its first argument.
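
If you want to sanity-check the manual steps outside Excel, here is a minimal sketch of the same computation in TypeScript, using the two sets from the question (illustrative only; the helper names are mine):

// Manual Welch t-test steps, mirroring the approach described above.
const set1 = [21, 34, 23, 32, 21, 42, 32, 12, 53, 31]; // first ten rows
const set2 = [21, 41, 12, 14, 24];                      // last five rows

const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
// Sample variance with n - 1 in the denominator, like Excel's VAR.S.
const varS = (xs: number[]) => {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
};

const [n1, n2] = [set1.length, set2.length];
const [v1, v2] = [varS(set1), varS(set2)];
const t = (mean(set1) - mean(set2)) / Math.sqrt(v1 / n1 + v2 / n2);

// Welch-Satterthwaite degrees of freedom for unequal variances.
const df = (v1 / n1 + v2 / n2) ** 2 /
  ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1));

console.log({ t, df });

The t value here is the first argument mentioned above; in Excel you would pass |t| and df to T.DIST.2T for a two-tailed p-value (Excel truncates a non-integer df).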

Given the columns + rows of a terminal / tty, how to calculate the min/max number of bytes that can fit

Say we get the current columns and rows of a terminal with node.js:
console.log('rows:', process.stdout.rows);
console.log('columns:', process.stdout.columns);
is there a way to calculate the number of bytes that can fit in the terminal window? I would guess that it's rows*columns, but I really have no idea.
My guess is that rows*columns is the maximum number of bytes that can fit, but in reality it's probably less; it wouldn't be exact.
The maximum number depends on the nominal size of the window (rows times columns) as well as on the way the character cells are encoded. The node application assumes everything is encoded as UTF-8, so each cell could take up to 4 bytes (see this answer for an example).
Besides that, you have to allow for a newline at the end of each row (unless you're relying upon line-wrapping the whole time). A newline is a single byte.
So...
(1 + columns) * rows * 4
as a first approximation.
If you take combining characters into account, that could increase the estimate, but (see this answer) the limit on that is not well defined. In practice, those are rarely used in European characters, but are used in some Asian characters (ymmv).
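Putting the pieces of that estimate together in node (same assumptions as above: UTF-8 at up to 4 bytes per cell, one newline per row, plain ASCII for the lower figure):

// Rough bounds on the bytes that fit, per the reasoning above.
const rows = process.stdout.rows ?? 24;       // fallbacks for non-TTY output
const columns = process.stdout.columns ?? 80;

const minBytes = rows * columns;              // 1-byte ASCII cells, wrapping
const maxBytes = (1 + columns) * rows * 4;    // first approximation from above

console.log({ rows, columns, minBytes, maxBytes });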

Is there a way to store a matrix of a million bits on an FPGA?

I am working towards the implementation of a channel decoder on an FPGA. Essentially, the problem sums up to this:
1) I have a matrix. I do some computations on the rows. Then I do some computations on the columns.
The decoder basically picks up each row of the matrix, performs some operations, and moves on to the next row. It does the same with the columns.
The decoder, however, operates on a 1023 * 1023 matrix, i.e. I have 1023 rows and 1023 columns.
Small test case that works:
I first created a reg [1022:0] product_code[0:1], i.e. 2 rows and 1023 columns. The output is as expected. However, the LUT utilization shows up as approximately 9 percent. Then I increased the size to 10 rows and 1023 columns (reg [1022:0] product_code[0:9]), which works as expected too. But the resource utilization has gone up to 27 percent.
Now my goal is to get to 1023 rows and 1023 columns. It does not even synthesize. Is there a better way to store such a matrix on the FPGA?
I would really appreciate any feedback!
You can find out the amount of storage an FPGA has from the manufacturer's data sheet. However, those memories are highly configurable.
Thus a 36-bit-wide memory can be used as 36x1 or 18x2 or 4x9 units. Alternatively, you can read units of e.g. 36 bits but split the data yourself into 8 units of 4 bits. Process each nibble separately and write the whole thing back again.
Make sure you are using synchronous memories, as all big memory blocks in all FPGAs are synchronous. If you start using asynchronous memories, they must be built from LUTs and you run out very quickly.
Also beware that your row and column processing must take into account how the data is stored. You can e.g. store the data row-wise. Using nibbles as an example: when you read one 36-bit memory entry, that gives you a row of 8 nibbles. But in column mode one read gives you the first 8 entries of 8 adjacent columns. So there you should ideally process 8 columns in parallel at the same time.
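To make the nibble idea concrete in software terms (a sketch only; in hardware this is just bit-slicing the memory's output bus), here is how eight 4-bit values share one word, using a 32-bit word for simplicity instead of the 36-bit block-RAM word:

// Illustrative only: pack 8 nibbles into one word, as in the row-wise
// storage scheme described above.
function packNibbles(nibbles: number[]): number {
  // nibble i occupies bits [4*i+3 : 4*i]
  return nibbles.reduce((word, n, i) => word | ((n & 0xf) << (4 * i)), 0) >>> 0;
}

function unpackNibble(word: number, i: number): number {
  return (word >>> (4 * i)) & 0xf;
}

const rowWord = packNibbles([1, 2, 3, 4, 5, 6, 7, 8]);
console.log(unpackNibble(rowWord, 2)); // 3: the third element of this row
// Column-wise processing: one read per row yields one element from each
// of 8 adjacent columns, hence the suggestion to handle 8 columns at once.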

Input decimal values like 0.0047 in Verilog

I have an array of decimal values like 0.0047, -45.34, etc. Is there a way I can add these in Verilog and automatically view their 16-bit converted values?
You can use 'real', but you cannot synthesize it. You have to find a binary representation for your numbers, either floating point or fixed point. You have to define a range for your numbers and also a precision, as a binary representation of a real number is often an approximation.
I did some calculations. You have positive and negative numbers, so you need a sign bit. That leaves 15 bits for the values. You want to represent at least 45, which requires 6 bits, leaving 9 bits for the fraction. With 9 fraction bits the resolution is 2^-9, about 0.00195, so the closest you can get to 0.0047 is 2 x 2^-9 = 0.00390625. Your range is then -63.998 .... +63.998
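A quick sketch of that quantization (1 sign bit, 6 integer bits, 9 fraction bits, i.e. a scale factor of 2^9 = 512; illustrative only, not Verilog):

// Quantize a decimal to a signed 16-bit fixed-point value with 9 fraction
// bits, per the bit budget worked out above.
const SCALE = 1 << 9; // 512

function toFixed16(x: number): number {
  const raw = Math.round(x * SCALE);
  return Math.max(-32768, Math.min(32767, raw)); // clamp to 16-bit range
}

const fromFixed16 = (q: number) => q / SCALE;

console.log(fromFixed16(toFixed16(0.0047)));  // 0.00390625
console.log(fromFixed16(toFixed16(-45.34)));  // -45.33984375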

Why is there still a row limit in Microsoft Excel? [closed]

Until Office 2007, Excel had a maximum of 65,536 rows. Office 2007 bumped that up to a max of 1 million rows, which is nicer of course; but I'm curious -- why is there a limit at all? Obviously, performance will slow down exponentially as you increase the spreadsheet size; but it shouldn't be very hard to have Excel optimize for that by starting with a small sheet and dynamically resizing it only as needed.
Given how much work it must have been to increase the limit from 65K to 1 million, why didn't they go all the way so it's limited only by the amount of available memory and disk space?
Probably because of optimizations. Excel 2007 can have a maximum of 16,384 columns and 1,048,576 rows. Strange numbers?
14 bits = 16,384; 20 bits = 1,048,576
14 + 20 = 34 bits = more than one 32-bit register can hold.
But they also need to store the format of the cell (text, number etc) and formatting (colors, borders etc). Assuming they use two 32-bit words (64 bit) they use 34 bits for the cell number and have 30 bits for other things.
Why is that important? In memory they don't need to allocate memory for the whole spreadsheet, only for your data, and each piece of data is tagged with the cell it is supposed to be in.
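As a toy illustration of that tagging idea (not Microsoft's actual layout, just the bit arithmetic from the reasoning above):

// Toy sketch: a 34-bit cell coordinate built from a 20-bit row and a
// 14-bit column. Plain arithmetic is used because JavaScript bitwise
// operators only cover 32 bits.
const COLS = 16384; // 2^14

const packCell = (row: number, col: number) => row * COLS + col;
const unpackCell = (id: number) => ({
  row: Math.floor(id / COLS),
  col: id % COLS,
});

const id = packCell(1048575, 16383);  // bottom-right cell
console.log(id === 2 ** 34 - 1);      // true: exactly fills 34 bits
console.log(unpackCell(id));          // { row: 1048575, col: 16383 }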
Update 2016:
Found a link to Microsoft's specification for Excel 2013 & 2016
Open workbooks: Limited by available memory and system resources
Worksheet size: 1,048,576 rows (20 bits) by 16,384 columns (14 bits)
Column width: 255 characters (8 bits)
Row height: 409 points
Page breaks: 1,026 horizontal and vertical (unexpected number, probably wrong, 10 bits is 1024)
Total number of characters that a cell can contain: 32,767 characters (signed 16 bits)
Characters in a header or footer: 255 (8 bits)
Sheets in a workbook: Limited by available memory (default is 1 sheet)
Colors in a workbook: 16 million colors (32 bit with full access to 24 bit color spectrum)
Named views in a workbook: Limited by available memory
Unique cell formats/cell styles: 64,000 (16 bits = 65536)
Fill styles: 256 (8 bits)
Line weight and styles: 256 (8 bits)
Unique font types: 1,024 (10 bits) global fonts available for use; 512 per workbook
Number formats in a workbook: Between 200 and 250, depending on the language version of Excel that you have installed
Names in a workbook: Limited by available memory
Windows in a workbook: Limited by available memory
Hyperlinks in a worksheet: 66,530 hyperlinks (unexpected number, probably wrong. 16 bits = 65536)
Panes in a window: 4
Linked sheets: Limited by available memory
Scenarios: Limited by available memory; a summary report shows only the first 251 scenarios
Changing cells in a scenario: 32
Adjustable cells in Solver: 200
Custom functions: Limited by available memory
Zoom range: 10 percent to 400 percent
Reports: Limited by available memory
Sort references: 64 in a single sort; unlimited when using sequential sorts
Undo levels: 100
Fields in a data form: 32
Workbook parameters: 255 parameters per workbook
Items displayed in filter drop-down lists: 10,000
In a word - speed. An index for up to a million rows fits in a 32-bit word, so it can be used efficiently on 32-bit processors. Function arguments that fit in a CPU register are extremely efficient, while ones that are larger require accessing memory on each function call, a far slower operation. Updating a spreadsheet can be an intensive operation involving many cell references, so speed is important. Besides, the Excel team expects that anyone dealing with more than a million rows will be using a database rather than a spreadsheet.
