Let's say we have a class with a large number of random variables (around 100 rand variables). I want to randomize only one of the variables from that class while the rest keep their current values. How do I do that?
class rand_var extends vmm_data;
rand bit abc;
rand bit [3:0] cde;
rand bit en;
...
endclass
Now, from one of my tasks, I want to randomize abc only, not any other variable of rand_var. Here is my attempt:
task custom_task();
  rand_var rand1;
  rand1 = new();
  if (!rand1.randomize() with {
        /* how do I constrain this so that only abc is randomized? */
      })
    $display("randomization failed");
endtask
The easiest way to randomize a single variable is to use std::randomize() instead of the class's randomize() method. (Note that config is a reserved word in SystemVerilog, so the handle is named cfg below.)
task custom_task();
  rand_var cfg;
  cfg = new();
  if (!std::randomize(cfg.abc) with {
        /* constraints on cfg.abc */
      })
    $display("randomization failed");
endtask
You can also use what the LRM calls in-line random variable control when calling the class randomize method, as in
if (!cfg.randomize(abc)) $display("randomization failed");
But you have to be careful when doing it this way, because all of the active class constraints in the cfg object must still be satisfied, even those on variables that are not being randomized, as the sketch below shows.
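Here is a minimal sketch of that pitfall (a hypothetical class, not the one from the question): an active constraint on cde, a variable we never ask to randomize, can still make the call fail.

class rand_var;
  rand bit abc;
  rand bit [3:0] cde;
  constraint c_cde { cde > 0; }  // involves a variable we will NOT randomize
endclass

module tb;
  initial begin
    rand_var cfg = new();  // cde is 0 after construction
    // cde is treated as a state variable here, but c_cde must still hold;
    // since cde == 0, this call fails even though only abc was requested.
    if (!cfg.randomize(abc)) $display("randomization failed, as expected");
  end
endmodule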
One way is to call randomize and pass only the variable you want to be random. Refer to IEEE Std 1800-2017, section 18.11 In-line random variable control. Here is a complete example:
class rand_var;
  rand bit abc;
  rand bit [3:0] cde;
  rand bit en;
endclass

module tb;
  task custom_task();
    rand_var rand1;
    rand1 = new();
    if (!rand1.randomize(abc)) $display("randomization failed");
    $display("%b %d %b", rand1.abc, rand1.cde, rand1.en);
  endtask

  initial repeat (10) custom_task();
endmodule
Output:
1 0 0
1 0 0
1 0 0
0 0 0
1 0 0
1 0 0
0 0 0
1 0 0
1 0 0
0 0 0
Only abc is randomized on each task call; cde and en keep their default value of 0 because they are never randomized.
See also How to randomize only 1 variable in a class?
I would like to write a shellscript that reads the current CPU utilisation on a per-core basis. Is it possible to read this from the /sys directory in Linux (CentOS 8)? I have found /sys/bus/cpu/drivers/processor/cpu0 which does give me a fair bit of information (like current frequency), but I've yet to figure out how to read CPU utilisation.
In other words: Is there a file that gives me current utilisation of a specific CPU core in Linux, specifically CentOS 8?
I believe that you should be able to extract this information from /proc/stat - the lines that start with cpu$N, where $N is 0, 1, 2, … (I also strongly suggest reading the articles referenced in the other answer.) For example:
cpu0 101840 1 92875 80508446 4038 0 4562 0 0 0
cpu1 81264 0 68829 80842548 4424 0 2902 0 0 0
Reading the file again a moment later will show larger values:
cpu 183357 1 162020 161382289 8463 0 7470 0 0 0
cpu0 102003 1 93061 80523961 4038 0 4565 0 0 0
cpu1 81354 0 68958 80858328 4424 0 2905 0 0 0
Notice the idle count for cpu0 (5th column, counting the cpu0 label) moving from 80508446 to 80523961.
The format of each line in /proc/stat is:
cpuN user nice system idle iowait irq softirq steal guest guest_nice
So a basic solution (a concrete bash version is sketched after this outline):
while true ;
    for each cpu
        read current counters, at least user, system and idle
        usage = current(user + system) - prev(user + system)
        idle  = current(idle) - prev(idle)
        utilization = usage / (usage + idle)
        // print or whatever.
        set prev = current
done
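For instance, here is a minimal bash sketch of that loop (the split of fields into "busy" vs. idle and the 1-second interval are choices, not requirements):

#!/bin/bash
# Per-core CPU utilization from /proc/stat, sampled once per second.
declare -A prev_busy prev_idle
while true; do
    while read -r cpu user nice system idle iowait irq softirq steal rest; do
        [[ $cpu == cpu[0-9]* ]] || continue   # skip the aggregate "cpu" line
        busy=$((user + nice + system + irq + softirq + steal))
        idle_all=$((idle + iowait))
        if [[ -n ${prev_busy[$cpu]} ]]; then
            db=$((busy - prev_busy[$cpu]))
            di=$((idle_all - prev_idle[$cpu]))
            (( db + di > 0 )) && printf '%s %d%%\n' "$cpu" $(( 100 * db / (db + di) ))
        fi
        prev_busy[$cpu]=$busy
        prev_idle[$cpu]=$idle_all
    done < /proc/stat
    sleep 1
done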
When I execute this:
buf := new(bytes.Buffer)
buf.WriteString("Hello world")
fmt.Println(buf)
it prints Hello world.
But if I execute this:
var buf bytes.Buffer
buf.WriteString("Hello world")
fmt.Println(buf)
it prints: {[72 101 108 108 111 32 119 111 114 108 100] 0 [72 101 108 108 111 32 119 111 114 108 100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0] 0}
I understand that this is the content of the bytes.Buffer structure, but why is it printed in a different format?
Because a value of type *bytes.Buffer has a String() method (the method set of *bytes.Buffer contains String()), while a value of type bytes.Buffer does not.
The fmt package checks whether the value being printed has a String() string method, and if it does, calls that method to produce the string representation of the value.
Quoting from package doc of fmt:
Except when printed using the verbs %T and %p, special formatting considerations apply for operands that implement certain interfaces. In order of application:
If the operand is a reflect.Value, the operand is replaced by the concrete value that it holds, and printing continues with the next rule.
If an operand implements the Formatter interface, it will be invoked. Formatter provides fine control of formatting.
If the %v verb is used with the # flag (%#v) and the operand implements the GoStringer interface, that will be invoked.
If the format (which is implicitly %v for Println etc.) is valid for a string (%s %q %v %x %X), the following two rules apply:
If an operand implements the error interface, the Error method will be invoked to convert the object to a string, which will then be formatted as required by the verb (if any).
If an operand implements method String() string, that method will be invoked to convert the object to a string, which will then be formatted as required by the verb (if any).
The Buffer.String() method returns the contents of the buffer as a string; that's what you see printed when you pass a pointer of type *bytes.Buffer. When you pass a non-pointer value of type bytes.Buffer, it is simply printed like an ordinary struct value, for which the default format is:
{field0 field1 ...}
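A minimal sketch that shows both behaviors side by side:

package main

import (
	"bytes"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	buf.WriteString("Hello world")

	fmt.Println(buf)  // bytes.Buffer value: no String() in its method set, struct dump
	fmt.Println(&buf) // *bytes.Buffer: String() is invoked, prints "Hello world"
}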
See related / similar questions:
The difference between t and *t
Why not use %v to print int and string
Why does Error() have priority over String()
I want to convert legacy .vtk files into binary files, preferably .vtu files, because I am using an unstructured grid.
To do so, I adapted the ConvertFile example from http://www.vtk.org/Wiki/VTK/Examples/Cxx/IO/ConvertFile
#include <cstdlib>
#include <iostream>
#include <string>

#include <vtkSmartPointer.h>
#include <vtkGenericDataObjectReader.h>
#include <vtkVersion.h>
#include <vtkXMLUnstructuredGridWriter.h>
#include <vtkUnstructuredGrid.h>

int main(int argc, char *argv[])
{
  if (argc < 3)
  {
    std::cerr << "Required arguments: input.vtk output.vtu" << std::endl;
    return EXIT_FAILURE;
  }

  std::string inputFileName = argv[1];
  std::string outputFileName = argv[2];

  vtkSmartPointer<vtkGenericDataObjectReader> reader =
      vtkSmartPointer<vtkGenericDataObjectReader>::New();
  reader->SetFileName(inputFileName.c_str());
  reader->Update();

  vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
      vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
  writer->SetFileName(outputFileName.c_str());
  writer->SetInputConnection(reader->GetOutputPort());
  writer->Update();

  return EXIT_SUCCESS;
}
But when I use this to convert my legacy file, I lose all cell data after the first array. In the following minimal example of my legacy file, Scal_1 makes it into the .vtu file but Scal_2 does not.
# vtk DataFile Version 3.1
Lattice Boltzmann data
ASCII
DATASET UNSTRUCTURED_GRID
POINTS 9 INT
0 0 0 1 0 0 2 0 0
0 1 0 1 1 0 2 1 0
0 2 0 1 2 0 2 2 0
CELLS 4 20
4 0 1 3 4
4 1 2 4 5
4 3 4 6 7
4 4 5 7 8
CELL_TYPES 4
8 8 8 8
CELL_DATA 4
SCALARS Scal_1 DOUBLE
LOOKUP_TABLE default
1 2 1 0
SCALARS Scal_2 DOUBLE
LOOKUP_TABLE default
1 3 2 1
I am still new to VTK. Should I use another reader or writer, or is something else completely wrong?
The issue here is that the reader you chose gets confused when the input file contains two cell data arrays that are both marked as SCALARS; it then outputs only one cell data array. My suggestion is to use ParaView, specifically the pvpython executable, to convert the files. The corresponding Python code would look something like:
from paraview.simple import *
r = LegacyVTKReader( FileNames=['input.vtk'] )
w = XMLUnstructuredGridWriter()
w.FileName = 'output.vtu'
w.UpdatePipeline()
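Run the script with pvpython (e.g. pvpython convert.py, where convert.py is whatever you named the file). Alternatively, if you would rather keep your original C++ program: vtkDataReader-based readers such as vtkUnstructuredGridReader skip the additional SCALARS arrays by default, and my understanding is that switching on ReadAllScalarsOn() makes them load every scalar array, so an untested sketch of that variant would be:

#include <cstdlib>

#include <vtkSmartPointer.h>
#include <vtkUnstructuredGridReader.h>
#include <vtkXMLUnstructuredGridWriter.h>

int main(int argc, char *argv[])
{
  if (argc < 3) return EXIT_FAILURE;

  vtkSmartPointer<vtkUnstructuredGridReader> reader =
      vtkSmartPointer<vtkUnstructuredGridReader>::New();
  reader->SetFileName(argv[1]);
  reader->ReadAllScalarsOn();  // read every SCALARS array, not just the first
  reader->Update();

  vtkSmartPointer<vtkXMLUnstructuredGridWriter> writer =
      vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
  writer->SetFileName(argv[2]);
  writer->SetInputConnection(reader->GetOutputPort());
  writer->Write();

  return EXIT_SUCCESS;
}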
You can just use meshio (a project of mine). Install with
pip3 install meshio
and run
meshio-convert in.vtk out.vtu
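If you would rather do it from a Python script, the equivalent with meshio's API is roughly (file names are placeholders):

import meshio

mesh = meshio.read("in.vtk")   # file format is deduced from the extension
meshio.write("out.vtu", mesh)  # XML unstructured grid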
We have two Fortran programs that need to exchange data. We are currently using file I/O, but need a better/faster way. Sample code would be appreciated.
By using files for transfer, you're already implementing a form of message passing, so I think that would be the most natural fit for this sort of program. Now, you could write something yourself that uses shared memory when available and something like TCP/IP when not - or you could just use a library that already does that, like MPI. MPI is widely available, works, takes advantage of shared memory if the processes are running on the same machine, and also extends to letting you run them on entirely different machines without changing your code.
So, as a simple example of one program sending data to a second and then waiting for data back, we'd have the two programs below. First, first.f90:
program first
    use protocol
    use mpi
    implicit none
    real, dimension(n,m) :: inputdata
    real, dimension(n,m) :: processeddata
    integer :: rank, comsize, ierr, otherrank
    integer :: rstatus(MPI_STATUS_SIZE)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, comsize, ierr)

    if (comsize /= 2) then
        print *,'Error: this assumes exactly 2 processes!'
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    endif

    !! 2 PEs; the other is 1 if we're 0, or 0 if we're 1.
    otherrank = comsize - (rank+1)

    inputdata = 1.
    inputdata = exp(sin(inputdata))

    print *, rank, ': first: finished computing; now sending to second.'
    call MPI_SEND(inputdata, n*m, MPI_REAL, otherrank, firsttag, &
                  MPI_COMM_WORLD, ierr)

    print *, rank, ': first: Now waiting for return data...'
    call MPI_RECV(processeddata, n*m, MPI_REAL, otherrank, backtag, &
                  MPI_COMM_WORLD, rstatus, ierr)
    print *, rank, ': first: received data from partner.'

    call MPI_FINALIZE(ierr)
end program first
and second.f90:
program second
    use protocol
    use mpi
    implicit none
    real, dimension(n,m) :: inputdata
    real, dimension(n,m) :: processeddata
    integer :: rank, comsize, ierr, otherrank
    integer :: rstatus(MPI_STATUS_SIZE)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, comsize, ierr)

    if (comsize /= 2) then
        print *,'Error: this assumes exactly 2 processes!'
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    endif

    !! 2 PEs; the other is 1 if we're 0, or 0 if we're 1.
    otherrank = comsize - (rank+1)

    print *, rank, ': second: Waiting for initial data...'
    call MPI_RECV(inputdata, n*m, MPI_REAL, otherrank, firsttag, &
                  MPI_COMM_WORLD, rstatus, ierr)

    print *, rank, ': second: adding 1 and sending back.'
    processeddata = inputdata + 1
    call MPI_SEND(processeddata, n*m, MPI_REAL, otherrank, backtag, &
                  MPI_COMM_WORLD, ierr)

    print *, rank, ': second: completed'

    call MPI_FINALIZE(ierr)
end program second
For clarity, stuff that the two programs must agree on could be in a module they both use, here protocol.f90:
module protocol
    !! shared information like tag ids, etc. goes here
    integer, parameter :: firsttag = 1
    integer, parameter :: backtag = 2
    !! size of problem
    integer, parameter :: n = 10, m = 20
end module protocol
A Makefile for building the executables:

all: first second

FFLAGS=-g -Wall
F90=mpif90

%.mod: %.f90
	$(F90) -c $(FFLAGS) $^

%.o: %.f90
	$(F90) -c $(FFLAGS) $^

first: protocol.mod first.o
	$(F90) -o $@ first.o protocol.o

second: protocol.mod second.o
	$(F90) -o $@ second.o protocol.o

clean:
	rm -rf *.o *.mod
and then you run the two programs together as follows:
$ mpiexec -n 1 ./first : -n 1 ./second
1 : second: Waiting for initial data...
0 : first: finished computing; now sending to second.
0 : first: Now waiting for return data...
1 : second: adding 1 and sending back.
1 : second: completed
0 : first: received data from partner.
$
We could certainly give you a more relevant example if you give us more information about the workflow between the two programs.
Are you using binary (unformatted) file I/O? Unless the data quantity is huge, that should be fast.
Otherwise you could use interprocess communication, but it would be more complicated. You might find code in C, which you could call from Fortran using the ISO C Binding.
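For completeness, here is a minimal sketch of what writing unformatted (binary) data looks like, assuming a fixed-size real array; the file name and array shape are placeholders:

program write_binary
    implicit none
    real :: a(10,20)

    a = 1.0
    ! form='unformatted' with access='stream' writes raw bytes,
    ! without the record markers of sequential unformatted files
    open(unit=10, file='data.bin', form='unformatted', access='stream', &
         status='replace')
    write(10) a
    close(10)
end program write_binary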