How to write cover points for transitions in SystemVerilog?

There is an N-bit register in my RTL design and I want to check whether the testbench is covering the following particular case:
000..0 -> 000..001 -> 000....011 -> 00...111 -> ...... -> 111....111
I'm not sure how to write a covergroup for the above. I can see how transition coverage can be useful. As an example:
covergroup cg;
  cover_point_y : coverpoint y {
    bins tran_34 = (3 => 4);
    bins tran_56 = (5 => 6);
  }
endgroup
However, in my case the register is parameterized (N bits: reg [N-1:0]) and it's too big to write the full sequence manually. Can I write a generate or for loop to cover the sequence above?

It's not really clear to me which transitions you want to cover. I guess you want to cover that each value changed to every other value. What you need to keep in mind is that you can write multiple values on either side of the => operator. For example:
cover_point_y : coverpoint y {
  bins transitions = (0, 1 => 0, 1);
}
This will create bins for 0 => 0, 0 => 1, 1 => 0, 1 => 1. If I interpreted the BNF properly, according to the LRM, the values you put on either side of the => operator are of type covergroup_value_range, meaning that any value range syntax for coverpoints should be accepted. This means the following should also be legal:
cover_point_y : coverpoint y {
  bins transitions = ([0 : 2**N - 1] => [0 : 2**N - 1]);
}
This should create transition bins from every value to every other value. You're at the mercy of tool support here. This doesn't work in my simulator, for example, but it might work in others.
If you want to exclude certain transitions (for example, 0 => 0, 1 => 1, etc.) this won't help you anyway, because the syntax to specify transition bins just isn't expressive enough...
Don't fret, there are ways to do it. Going back to basics, transition coverage is basically a form of cross coverage between the current value and the past value. Cross coverage allows much more diverse ways of specifying bins. You need to track the previous value of the variable you're covering. The thing you need to be careful of is that you should only start collecting coverage once you've sampled at least 2 values (so that you have a previous). With transition coverage, the tool would do this for you under the hood.
The best way I can think of doing it is to wrap the covergroup inside a class:
class cg_wrapper #(int unsigned WIDTH = 3);
  covergroup cg with function sample(bit [WIDTH-1 : 0] val,
                                     bit [WIDTH-1 : 0] prev);
    coverpoint val;
    coverpoint prev;
    cross prev, val;
  endgroup

  function new();
    cg = new();
  endfunction

  // ...
endclass
The class would keep track of the previous value and whether a previous value was collected (i.e. we tried to sample a second value):
class cg_wrapper #(int unsigned WIDTH = 3);
  protected bit has_prev;
  protected bit [WIDTH-1 : 0] prev;

  // ...
endclass
To ensure that coverage is sampled at the appropriate points, the class would expose a sample(...) function (similar to what a covergroup has) that handles sampling the actual covergroup and storing the previous value:
class cg_wrapper #(int unsigned WIDTH = 3);
  // ...

  function void sample(bit [WIDTH-1 : 0] val);
    if (has_prev)
      cg.sample(val, prev);
    prev = val;
    has_prev = 1;
  endfunction
endclass
This will ensure that you'll get meaningful crosses. For example, calling sample(...) twice with values 0 and 1, will result in only a single "transition" from 0 to 1 (i.e. one bin in the cross getting filled).
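For instance, a minimal usage sketch (the register y and a clock clk are assumed from the question; the WIDTH value is arbitrary):

cg_wrapper #(.WIDTH(8)) cov = new();

always @(posedge clk)
  cov.sample(y);

Sampling through the wrapper's function, rather than letting the covergroup sample on its own, is what lets the class guard against recording a "transition" before two values have been seen.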
If you want to start excluding certain "transitions", there are a lot of different ways to do that. For example, to exclude identical transitions, you could do:
cross prev, val {
  ignore_bins ignore =
    (binsof (val) && binsof (prev)) with (prev == val);
}
This ignores transitions of type 0 => 0, 1 => 1, 2 => 2, etc.
There's also a nice article from AMIQ Consulting showcasing some cool ways of specifying cross bins.

Your first solution needs a little modification. You have to use [] after the bin name to make it an auto (array) bin. I think that is the reason why it was not working for you.
cover_point_y : coverpoint y {
  bins transitions[] = (0, 1 => 0, 1);
}

Assuming you want to cover consecutive increments, a cover property could do the trick:
bit [7:0] y;

property y_inc(int n);
  @(posedge clk) (y == $past(y + 1))[*n];
endproperty

y_inc_3:    cover property (y_inc(3));
y_inc_full: cover property (y_inc((1 << $bits(y)) - 1));

Related

How to identify unit vectors with angle threshold quickly?

I am writing a program about computational geometry.
In this program, I need to identify unit vectors (the word "identify" may not be accurate), i.e., write a routine to check whether a given unit vector already exists.
This procedure is used when checking whether two polygons are on one plane. The first step is to check whether the normals of the two polygons are very close (angle < 1.0 degree).
So, we can assume that
all vectors are unit vectors
vectors are random
For example, set the angle threshold to 1.0 degree, and suppose we have 6 vectors:
(1,0,0)
(0,1,0)
(1,0,1e-8) // in program, this will be normalized
(1,0,0)
(sin(45), cos(45),0)
(sin(44.9), cos(44.9),0)
then the index of each vector is
0 1 0 0 2 2
i.e., the 1st/3rd/4th vectors are the same one because their angle is within 1.0 degree (or they point in exactly the same direction), and the angle between the 5th and 6th vectors is smaller than 1.0 degree.
Now the problem comes: I have hundreds of thousands of unit vectors to identify in different stages, and this procedure costs about half of the total time.
Example code:
std::vector<Vector3d> unitVecs; // all unit vectors
                                // more than 100,000 unit vectors in the real case

int getVectorID(const Vector3d& vec)
{
    for (int i = 0; i < unitVecs.size(); ++i) {
        if (calcAngle(unitVecs[i], vec) < 1.0) // 1.0 is the angle threshold in degrees
            return i;
        // alternatively, check with the cosine:
        // if (unitVecs[i].dot(vec) > cos(1.0 * RADIAN))
        //     return i;
    }
    return -1;
}
int insertVector(const Vector3d& vec)
{
    int idx = getVectorID(vec);
    if (idx != -1) return idx;
    unitVecs.push_back(vec);
    return unitVecs.size() - 1;
}
Does anyone have good ideas to accelerate this process?
If you are able to accept vectors which are merely "very close to being unit vectors", as opposed to vectors which are strictly within 1 degree of being a unit vector, you can simply check that, for a given vector, two components are very close to 0 and one component is very close to 1:
#include <cmath>
#include <vector>

bool valueCloseTo(float value, float trg, float epsilon = 0.0001) {
    return std::fabs(value - trg) <= epsilon;
}
bool isRoughlyUnitVector(float x, float y, float z, float epsilon = 0.0001) {
    // We can quickly return false if the components don't sum to near 1
    // (could also consider multiplying `epsilon` by 3 here to account for accumulated error)
    if (!valueCloseTo(x + y + z, 1, epsilon)) return false;
    // Now ensure that of x, y, and z, two are ~0 and one is ~1
    int numZero = 0;
    int numOne = 0;
    std::vector<float> vec{ x, y, z };
    for (float v : vec) {
        if (valueCloseTo(v, 0, epsilon)) numZero++;      // count another ~0 value
        else if (valueCloseTo(v, 1, epsilon)) numOne++;  // count another ~1 value
        else return false;  // a component not close to 0 or 1: not such a vector
        // False if we exceed two values near 0, or one value near 1
        if (numZero > 2 || numOne > 1) return false;
    }
    return true;
}
Note that this method does not give any way to define a "maximum offset angle" (like 1 degree in your question); instead it works with an epsilon value, which isn't an angle but a simple linear tolerance. As epsilon increases, vectors that are further from being unit vectors get accepted, but epsilon doesn't have an "angular" nature to it.
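For illustration, a quick sanity check of the helper above (the test values are arbitrary):

#include <cassert>

int main() {
    assert(isRoughlyUnitVector(1.0f, 0.0f, 0.0f));           // exact axis vector
    assert(isRoughlyUnitVector(0.99995f, 0.00003f, 0.0f));   // within the default epsilon
    assert(!isRoughlyUnitVector(0.7071f, 0.7071f, 0.0f));    // unit length, but not axis-aligned
    return 0;
}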

Find the number of ways you can form a string of size N, given an unlimited number of 0s and 1s

The question below was asked in the Atlassian company online test. I don't have test cases; this is the question I took from this link:
Find the number of ways you can form a string of size N, given an unlimited number of 0s and 1s, but
you cannot have D consecutive 0s or T consecutive 1s. N, D, T are given as inputs.
Please help me with this problem; any approach on how to proceed would be appreciated.
My approach for the above question: I simply applied recursion, tried all possibilities, and then memoized it using a hash map.
But it seems to me there must be some combinatoric approach that can solve this question in less time and space. For debugging purposes I am also printing the strings generated during recursion; if there is a flaw in my approach, please do tell me.
#include <bits/stdc++.h>
using namespace std;

unordered_map<string, int> dp;

int recurse(int d, int t, int n, int oldd, int oldt, string s)
{
    if (d <= 0)
        return 0;
    if (t <= 0)
        return 0;
    cout << s << "\n";
    if (n == 0 && d > 0 && t > 0)
        return 1;
    string h = to_string(d) + " " + to_string(t) + " " + to_string(n);
    if (dp.find(h) != dp.end())
        return dp[h];
    int ans = 0;
    ans += recurse(d - 1, oldt, n - 1, oldd, oldt, s + '0')
         + recurse(oldd, t - 1, n - 1, oldd, oldt, s + '1');
    return dp[h] = ans;
}

int main()
{
    int n, d, t;
    cin >> n >> d >> t;
    dp.clear();
    cout << recurse(d, t, n, d, t, "") << "\n";
    return 0;
}
You are right: instead of generating strings, it is worth considering a combinatoric approach using dynamic programming (of a kind).
A "good" sequence of length K ends with 1..D-1 zeros or 1..T-1 ones.
To make a good sequence of length K+1, you can append a zero to every sequence except those already ending in D-1 zeros; precursors of the first kind then end in 2..D-1 zeros, and precursors of the second kind end in 1 zero.
Similarly, you can append a one to all sequences of the first kind, and to all sequences of the second kind except those already ending in T-1 ones; the first kind of precursors then end in 1 one, and the second kind in 2..T-1 ones.
Make two tables,
Zeros[N][D] and Ones[N][T]
Fill the first row with zeros, except Zeros[1][1] = 1 and Ones[1][1] = 1.
Fill row by row using the rules above:
Zeros[K][1] = Sum(Ones[K-1][C], C = 1..T-1)
for C in 2..D-1:
    Zeros[K][C] = Zeros[K-1][C-1]
Ones[K][1] = Sum(Zeros[K-1][C], C = 1..D-1)
for C in 2..T-1:
    Ones[K][C] = Ones[K-1][C-1]
The result is the sum of the last row of both tables.
Also note that you really only need two active rows of each table, so after debugging you can shrink the storage to Zeros[2][D] and Ones[2][T].
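A direct, minimal implementation of the two-table recurrence sketched above (all names are mine; it reads N, D, T like the original program and assumes N >= 1):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n, d, t;
    cin >> n >> d >> t;
    // zeros[k][c]: good strings of length k ending in exactly c zeros (c = 1..d-1)
    // ones[k][c]:  good strings of length k ending in exactly c ones  (c = 1..t-1)
    vector<vector<long long>> zeros(n + 1, vector<long long>(d, 0));
    vector<vector<long long>> ones(n + 1, vector<long long>(t, 0));
    if (d > 1) zeros[1][1] = 1;
    if (t > 1) ones[1][1] = 1;
    for (int k = 2; k <= n; ++k) {
        long long sumOnes = 0, sumZeros = 0;
        for (int c = 1; c < t; ++c) sumOnes += ones[k - 1][c];
        for (int c = 1; c < d; ++c) sumZeros += zeros[k - 1][c];
        if (d > 1) zeros[k][1] = sumOnes;   // append a 0 after a run of 1s
        if (t > 1) ones[k][1] = sumZeros;   // append a 1 after a run of 0s
        for (int c = 2; c < d; ++c) zeros[k][c] = zeros[k - 1][c - 1]; // extend the 0-run
        for (int c = 2; c < t; ++c) ones[k][c] = ones[k - 1][c - 1];   // extend the 1-run
    }
    long long result = 0;
    for (int c = 1; c < d; ++c) result += zeros[n][c];
    for (int c = 1; c < t; ++c) result += ones[n][c];
    cout << result << "\n";
    return 0;
}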
This can be solved using dynamic programming. I'll give a memoized recursive solution, similar to generating a binary string.
The states are:
i: the index of the character we are about to insert into the string.
cnt: the number of consecutive equal characters immediately before i.
bit: the character which was repeated cnt times before i. The value of bit will be either 0 or 1.
Base case: return 1 when we reach n, since we start from 0 and end at n-1.
Define the size of the dp array accordingly. The time complexity will be O(2 · N · max(D, T)).
#include <bits/stdc++.h>
using namespace std;

int dp[1000][1000][2];
int n, d, t;

int count(int i, int cnt, int bit) {
    if (i == n) {
        return 1;
    }
    int &ans = dp[i][cnt][bit];
    if (ans != -1) return ans;
    ans = 0;
    if (bit == 0) {
        ans += count(i + 1, 1, 1);
        if (cnt != d - 1) {
            ans += count(i + 1, cnt + 1, 0);
        }
    } else {
        // bit == 1
        ans += count(i + 1, 1, 0);
        if (cnt != t - 1) {
            ans += count(i + 1, cnt + 1, 1);
        }
    }
    return ans;
}

signed main() {
    ios_base::sync_with_stdio(false), cin.tie(nullptr);
    cin >> n >> d >> t;
    memset(dp, -1, sizeof dp);
    cout << count(0, 0, 0);
    return 0;
}

Find MAX value of `define in Verilog

I want to create a `define constant that is assigned to whichever of multiple other `define constants has the largest value. Something like:
`define MAXWIDTH $MAX(`WIDTH0,`WIDTH1,`WIDTH2)
Is this possible in Verilog/SystemVerilog?
Depending on exactly what you need, there are a few ways to do it (there is no built-in call for maximum like in other languages):
You have some vectors and you need to get the maximum width for a new vector
$bits(v1 + v2 + v3 ...)
Use the language to your advantage: the addition of vectors results in a vector that has the maximum width of all the operand vector widths, so use $bits to get that size. Example:
logic [1:0]  x;
logic [7:0]  y;
logic [10:6] z;
...
max = $bits(x + y + z); // Result: max = 8
You have a few numbers of which you need the maximum
If you put your numbers in an array or queue, you can use the max array locator method to get the largest (note that max returns a one-element queue, hence the indexing):
int x[] = '{n1, n2, n3, ...};
...
max = x.max()[0];
Note that this approach has the downside that it cannot be used at compile time for getting the maximum size. Example:
int _nums[] = '{13, 2, 17, 8, -1};
...
max = _nums.max()[0]; // Result: max = 17
Basically any other time
You'll just have to use the conditional operator, either in a macro or using let:
`define max2(v1, v2) ((v1) > (v2) ? (v1) : (v2))
`define max3(v1, v2, v3) `max2((v1), `max2((v2), (v3)))
OR
let max2(v1, v2) = (v1 > v2) ? v1 : v2;
let max3(v1, v2, v3) = max2(v1, max2(v2, v3));
The advantage of macros is that you can use them as compile-time constants in a wider range of tools, while older tools might not support let as a compile-time constant (or at all). Example:
max = `max3(10, 2, 3); // Result: max = 10
OR
max = max3(10, 2, 3); // Result: max = 10
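Tying this back to the original question, the macro version can be used directly at elaboration time. A sketch (the `WIDTHn macros are assumed to be defined elsewhere):

`define max2(v1, v2) ((v1) > (v2) ? (v1) : (v2))
`define max3(v1, v2, v3) `max2((v1), `max2((v2), (v3)))
`define MAXWIDTH `max3(`WIDTH0, `WIDTH1, `WIDTH2)

logic [`MAXWIDTH-1:0] widest_bus; // sized by the largest of the three widths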

Traverse a graph in parallel

I'm revising for an exam (still) and have come across a question (posted below) that has me stumped. I think, in summary, the question is asking: "Think of any_old_process that has to traverse a graph and do some work on the objects it finds, including adding more work." My question is: what data structure can be parallelised to achieve the goals set out in the question?
The role of a garbage collector (GC) is to reclaim unused memory. Tracing collectors must identify all live objects by traversing graphs of objects induced by aggregation relationships. In brief, the GC has some work-list of tasks to perform. It repeatedly (a) acquires a task (e.g. an object to inspect), (b) performs the task (e.g. marks the object unless it is already marked), and (c) generates further tasks (e.g. adds the children of an unmarked task to the work-list). It is desirable to parallelise this operation.

In a single-threaded environment, the work-list is usually a single LIFO stack. What would you have to do to make this safe for a parallel GC? Would this be a sensible design for a parallel GC? Discuss designs of data structure to support a parallel GC that would scale better. Explain why you would expect them to scale better.
The natural data structure for a graph is, well, a graph: a set of graph elements (nodes) which can refer to other elements. Though, for better cache reuse, the elements can be placed/allocated in an array or arrays (generally, vectors) in order to put neighboring elements as close in memory as possible. Generally, each element or group of elements should have a mutex (spin_mutex) to protect access to it; contention then means that some other thread is busy working on it, so there is no need to wait. Though, if possible, an atomic operation on a flag/state field is preferable, so the element can be marked as visited without a lock. For example, the simplest data structure can be the following:
struct object {
    vector<object*> references;
    atomic<bool> is_visited;  // for simplicity, or an epoch counter
                              // if nothing resets it to false
    void inspect();           // processing method
};
vector<object> objects;       // also for simplicity, if it can be for real;
                              // things like `parallel_for` would be perfect here
Given this data structure and the way the GC's work is described, it fits a recursive divide-and-conquer parallelism pattern perfectly:
void object::inspect() {
    if (!is_visited.exchange(true)) {
        for (object* o : references)  // alternatively, `parallel_for` in some variants
            cilk_spawn o->inspect();  // for Cilk, or `task_group::run` for TBB or PPL
        // further processing of the object
    }
}
If the data structure in the question refers to how the tasks are organized, I'd recommend a work-stealing scheduler (like TBB or Cilk); there are tons of papers on this subject. To put it simply, each worker thread has its own, but shared, deque of tasks, and when its deque is empty, the thread steals tasks from other threads' deques.
The scalability comes from the property that each task can add some other tasks, which can be worked on in parallel.
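To illustrate the shape of such a deque, here is a minimal mutex-based sketch (names are mine; a production scheduler such as TBB's uses lock-free deques instead):

#include <deque>
#include <functional>
#include <mutex>
#include <optional>

// Per-worker task deque: the owner pushes and pops at the back (LIFO,
// cache-friendly, like the single-threaded stack), while idle threads
// steal the oldest tasks from the front.
struct WorkQueue {
    std::deque<std::function<void()>> tasks;
    std::mutex m;

    void push(std::function<void()> t) {
        std::lock_guard<std::mutex> lock(m);
        tasks.push_back(std::move(t));
    }
    std::optional<std::function<void()>> pop() {      // owner's end
        std::lock_guard<std::mutex> lock(m);
        if (tasks.empty()) return std::nullopt;
        auto t = std::move(tasks.back());
        tasks.pop_back();
        return t;
    }
    std::optional<std::function<void()>> steal() {    // thieves' end
        std::lock_guard<std::mutex> lock(m);
        if (tasks.empty()) return std::nullopt;
        auto t = std::move(tasks.front());
        tasks.pop_front();
        return t;
    }
};

Stealing from the opposite end of the deque is the key design choice: the owner keeps working on the newest (cache-hot) tasks, while thieves take the oldest tasks, which tend to represent the largest unexplored subtrees.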
Your questions:
Think of any_old_process that has to traverse a graph and do some work on the objects it finds, including adding more work.
... what data structure can be parallelised to achieve the goals set out in the question?
Quoted questions:
Some stuff about garbage collection.
Since you are specifically interested in parallelizing graph algorithms, I'll give an example of one kind of graph traversal that can be parallelized well.
Executive Summary
Finding local minima ("basins") or maxima ("peaks") is a useful operation in digital image processing. A concrete example is geological watershed analysis. One approach to the problem treats each pixel or small group of pixels in the image as a node and finds non-overlapping minimum spanning trees (MSTs) with the local minima as the tree roots.
Gory details
Below is a simplistic example. It's a web interview question from Palantir Technologies, brought to Programming Puzzles & Code Golf by AnkitSablok. It's simplified by two assumptions:
That a pixel/cell only has 4 neighbors instead of the usual eight.
That a cell either has all uphill neighbors (it is a local minimum) or has a unique downhill neighbor. I.e., plains aren't allowed.
Below that is some JavaScript that solves this problem. It violates every reasonable coding standard against use of side-effects, but illustrates where some of the opportunities for parallelization exist.
In the "Create list of sinks (i.e. roots)" loop, note that each cell can be evaluated completely independently for elevation with respect to it's neighbors as long as the elevation data is static. In a sequential program, one thread of execution examines each cell. In a parallel program, the cells are divvied up so that one, and only one, thread reads and writes the local minima state information (sink[] in the program below). If generating the list of minima/roots in parallel, the queuing operations for the stack would have to be synchronized. For a discussion how to do that for stacks and other queues, see "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms", Michael & Scott, 1996. For modern updates, follow the citation tree on Google Scholar (no mutex required :).
In the "Each root explores it's basin" loop, note that each basin could explored/enumerated/flooded in parallel.
If you want to dive deeper into parallelizing MSTs, see "Scalable Parallel Minimum Spanning Forest Computation", Nobari, Cao, Karras, Bressan, 2012. The first two pages contain a clear and concise survey of the field.
Simplified example
A group of farmers has some elevation data, and we’re going to help them understand how rainfall flows over their farmland. We’ll represent the land as a two-dimensional array of altitudes and use the following model, based on the idea that water flows downhill:
If a cell’s four neighboring cells all have higher altitudes, we call this cell a sink; water collects in sinks. Otherwise, water will flow to the neighboring cell with the lowest altitude. If a cell is not a sink, you may assume it has a unique lowest neighbor and that this neighbor will be lower than the cell.
Cells that drain into the same sink – directly or indirectly – are said to be part of the same basin.
Your challenge is to partition the map into basins. In particular, given a map of elevations, your code should partition the map into basins and output the sizes of the basins, in descending order.
Assume the elevation maps are square. Input will begin with a line with one integer, S, the height (and width) of the map. The next S lines will each contain a row of the map, each with S integers – the elevations of the S cells in the row. Some farmers have small land plots such as the examples below, while some have larger plots. However, in no case will a farmer have a plot of land larger than S = 5000.
Your code should output a space-separated list of the basin sizes, in descending order. (Trailing spaces are ignored.)
Here's an example:
Input:
5
1 0 2 5 8
2 3 4 7 9
3 5 7 8 9
1 2 5 4 2
3 3 5 2 1
Output: 11 7 7
The basins, labeled with A’s, B’s, and C’s, are:
A A A A A
A A A A A
B B A C C
B B B C C
B B C C C
// lm.js - find the local minima
// Globalization of variables.
/*
The map is a 2 dimensional array. Indices for the elements map as:
[0,0] ... [0,n]
...
[n,0] ... [n,n]
Each element of the array is a structure. The structure for each element is:
Item  Purpose             Range                   Comment
----  -------             -----                   -------
h     Height of cell      integers
s     Is it a sink?       boolean
x     X of downhill cell  (0..maxIndex)           if s is true, x&y point to self
y     Y of downhill cell  (0..maxIndex)
b     Basin name          ('A'..'A'+# of basins)
Use a separate array-of-arrays for each structure item. The index range is
0..maxIndex.
*/
var height = [];
var sink = [];
var downhillX = [];
var downhillY = [];
var basin = [];
var maxIndex;
// A list of sinks in the map. Each element is an array of [ x, y ], where
// both x & y are in the range 0..maxIndex.
var basinList = [];
// An unordered list of basin sizes.
var basinSize = [];
// Functions.
function isSink(x,y) {
    var myHeight = height[x][y];
    var imaSink = true;
    var bestDownhillHeight = myHeight;
    var bestDownhillX = x;
    var bestDownhillY = y;
    /*
        Visit the neighbors. If this cell is the lowest, then it's the
        sink. If not, find the steepest downhill direction.
    */
    function visit(deltaX,deltaY) {
        var neighborX = x+deltaX;
        var neighborY = y+deltaY;
        if (myHeight > height[neighborX][neighborY]) {
            imaSink = false;
            if (bestDownhillHeight > height[neighborX][neighborY]) {
                bestDownhillHeight = height[neighborX][neighborY];
                bestDownhillX = neighborX;
                bestDownhillY = neighborY;
            }
        }
    }
    if (x !== 0) {
        visit(-1,0);        // upwards neighbor exists
    }
    if (x !== maxIndex) {
        visit(1,0);         // downwards neighbor exists
    }
    if (y !== 0) {
        visit(0,-1);        // left-hand neighbor exists
    }
    if (y !== maxIndex) {
        visit(0,1);         // right-hand neighbor exists
    }
    downhillX[x][y] = bestDownhillX;
    downhillY[x][y] = bestDownhillY;
    return imaSink;
}
function exploreBasin(x,y,currentSize,basinName) {
    // This cell is in the basin.
    basin[x][y] = basinName;
    currentSize++;
    /*
        Visit all neighbors that have this cell as the best downhill
        path and add them to the basin.
    */
    function visit(x,deltaX,y,deltaY) {
        if ((downhillX[x+deltaX][y+deltaY] === x) && (downhillY[x+deltaX][y+deltaY] === y)) {
            currentSize = exploreBasin(x+deltaX,y+deltaY,currentSize,basinName);
        }
        return 0;
    }
    if (x !== 0) {
        visit(x,-1,y,0);    // upwards neighbor exists
    }
    if (x !== maxIndex) {
        visit(x,1,y,0);     // downwards neighbor exists
    }
    if (y !== 0) {
        visit(x,0,y,-1);    // left-hand neighbor exists
    }
    if (y !== maxIndex) {
        visit(x,0,y,1);     // right-hand neighbor exists
    }
    return currentSize;
}
// Read map from file (1st argument).
var lines = $EXEC('cat "' + $ARG[0] + '"').split('\n');
maxIndex = lines.shift() - 1;
for (var i = 0; i <= maxIndex; i++) {
    height[i] = lines.shift().split(' ').map(Number); // compare as numbers, not strings
    // Create all other 2D arrays.
    sink[i] = [];
    downhillX[i] = [];
    downhillY[i] = [];
    basin[i] = [];
}
for (var i = 0; i <= maxIndex; i++) { print(height[i]); }
// Everyone decides if they are a sink. Create list of sinks (i.e. roots).
for (var x = 0; x <= maxIndex; x++) {
    for (var y = 0; y <= maxIndex; y++) {
        if (sink[x][y] = isSink(x,y)) {
            // This node is a root (AKA sink).
            basinList.push([x,y]);
        }
    }
}
//for (var i = 0; i<=maxIndex; i++) { print(sink[i]); }
// Each root explores its basin.
var basinName = 'A';
for (var i = basinList.length-1; i >= 0; --i) { // i-- makes Closure Compiler sad
    var x = basinList[i][0];
    var y = basinList[i][1];
    basinSize.push(exploreBasin(x,y,0,basinName));
    basinName = String.fromCharCode(basinName.charCodeAt() + 1);
}
for (var i = 0; i <= maxIndex; i++) { print(basin[i]); }
// Done.
print(basinSize.sort(function(a, b){return b-a}).join(' '));

Bayes' formula for updating probabilistic map

I'm trying to get a mobile robot to map an arena based on what it can see from a camera. I've created a map and managed to get the robot to identify items placed in the arena and give an estimated location. However, as I'm only using an RGB camera, the resulting numbers can vary slightly every frame due to noise, changes in lighting, etc. What I am now trying to do is create a probability map using Bayes' formula to give a better map of the arena.
Bayes' Formula
P(i|x) = p(i) p(x|i) / sum_j p(j) p(x|j)
This is what I've got so far. All points on the map are initialised to 0.5.
// Gets the likelihood of the event being correct
// Param 1 = is the object likely to be at that location
// Param 2 = is the sensor saying it's at that location
private double getProbabilityNum(bool world, bool sensor)
{
    if (world && sensor)
    {
        // number to test the function works
        return 0.6;
    }
    else if (world && !sensor)
    {
        // number to test the function works
        return 0.4;
    }
    else if (!world && sensor)
    {
        // number to test the function works
        return 0.2;
    }
    else // if (!world && !sensor)
    {
        // number to test the function works
        return 0.8;
    }
}
// A function to update the map's probability of an object being at location (x,y)
// Param 3 = does the sensor pick up an object at (x,y)
public double probabilisticMap(int x, int y, bool sensor)
{
    // gets the current likelihood from the map (prior probability)
    double mapProb = get(x, y);
    // decide if an object is at location (x,y)
    bool world = (mapProb < threshold);
    // Bayes' formula to update the probability
    double newProb =
        (getProbabilityNum(world, sensor) * mapProb) /
        ((getProbabilityNum(world, sensor) * mapProb) + (getProbabilityNum(!world, sensor) * (1 - mapProb)));
    // update the location on the map
    set(x, y, newProb);
    // return the probability as well
    return newProb;
}
It does work, but the numbers seem to jump rapidly and then flicker when they are at the top; it also errors if the numbers drop too near to zero. Does anyone have any idea why this might be happening? I think it's something to do with the way the equations are coded, but I'm not too sure. (I found this, but I don't quite understand it, so I'm not sure of its relevance, but it seems to be talking about the same thing.)
Thanks in advance.
Use log-likelihoods when doing numerical computations involving probabilities.
Consider
P(i|x) = p(i) p(x|i) / sum_j p(j) p(x|j).
Because x is fixed, the denominator, p(x), is a constant. Thus
P(i | x) ~ p(i)p(x|i)
where ~ denotes "is proportional to."
The log-likelihood function is just the log of this. That is,
L(i | x) = log(p(i)) + log(p(x|i)).
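Applied to the code in the question, here is a minimal sketch of the same update done in log-odds space (probabilisticMapLog, logOdds and fromLogOdds are hypothetical names; get, set and getProbabilityNum come from the question, and getProbabilityNum(world, sensor) is assumed to model p(sensor | world)):

// Store and update log-odds instead of raw probabilities: this avoids
// the numerical blow-ups near 0 and the saturation/flicker near 1.
private double logOdds(double p) { return Math.Log(p / (1 - p)); }
private double fromLogOdds(double l) { return 1.0 / (1.0 + Math.Exp(-l)); }

public double probabilisticMapLog(int x, int y, bool sensor)
{
    double prior = get(x, y);
    // posterior log-odds = prior log-odds + log of the sensor likelihood ratio
    double l = logOdds(prior)
             + Math.Log(getProbabilityNum(true, sensor) / getProbabilityNum(false, sensor));
    double posterior = fromLogOdds(l);
    set(x, y, posterior);
    return posterior;
}

With the test numbers above, a positive reading adds log(0.6/0.2) ≈ 1.10 to the log-odds and a negative one adds log(0.4/0.8) ≈ -0.69, so the estimate moves smoothly instead of jumping or hitting zero.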
