TLClientNode connection - RISC-V

I need to instantiate the recent version of the ICache from the rocket-chip project stand-alone. I was able to test this instantiation with a six-month-old version of the project. However, I am running into trouble with its 'mem' port in the recent version:
val node = TLClientNode(TLClientParameters(sourceId = IdRange(0,1)))
.....
val mem = outer.node.bundleOut
As I understand it, the rocket-chip project now uses a special type of node, where source and sink nodes are connected through a crossbar using the 'TLXbar' class. I tried to follow the code in http://stackissue.com/ucb-bar/rocket-chip/tilelink2-245.html but it seems obsolete. Can anyone show me how to connect this port?

Recently I created a trivial TileLink2 node (it just passes input to output with some masks) and inserted it between l1backend.node and TileNetwork.masterNodes.head, so my experience might be helpful.
Rocket-chip's diplomacy package extends Chisel's Module hierarchy. It mainly consists of two parts, LazyModule and LazyModuleImp, where LazyModuleImp is the real Module in the Chisel world.
Nodes are always created in the LazyModule, while node.bundleIn/bundleOut should only be referenced inside the LazyModuleImp. Nodes are interconnected inside the LazyModule using :=.
Another thing that might be helpful: inside a LazyModuleImp you should only reference the bundleIn/bundleOut of nodes that directly belong to the corresponding LazyModule.
For example, if you have a sub lazy module such as an XXXCrossing that contains a node, you should not use its bundleIn/bundleOut as your current lazy module's IO bundles; otherwise the Chisel code may compile successfully, but the resulting FIRRTL will contain undeclared symbols.
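To make this concrete, here is a minimal, hedged sketch of the wiring pattern for the question's stand-alone ICache, written against the circa-2017 diplomacy API that the question quotes. The harness name, the ICache constructor, the TLRAM endpoint, and the package paths are assumptions and vary between rocket-chip versions.

import freechips.rocketchip.config.Parameters   // package paths differ across rocket-chip versions
import freechips.rocketchip.diplomacy._
import freechips.rocketchip.tilelink._

// Hypothetical stand-alone harness: the ICache's client ('mem') node is wired
// to a TLXbar and then to a TLRAM, all inside the LazyModule, using :=.
class MyICacheHarness(implicit p: Parameters) extends LazyModule {
  val icache = LazyModule(new ICache)                         // assumed: owns the 'node' behind the mem port
  val xbar   = LazyModule(new TLXbar)
  val ram    = LazyModule(new TLRAM(AddressSet(0x0, 0xffff))) // any TileLink manager endpoint will do

  // := puts the client (source) on the right and the manager (sink) on the left.
  xbar.node := icache.node
  ram.node  := xbar.node   // a TLFragmenter may be needed here if the cache issues burst refills

  lazy val module = new LazyModuleImp(this) {
    // Only bundleIn/bundleOut of nodes owned by *this* LazyModule may be
    // referenced here, e.g. to expose them as the harness's IO.
  }
}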

Related

DirectX 12 Ultimate graphics sample generates a D3D12 "CBV Invalid Resource" error

Presently I'm working on updating a Windows 11 DX12 desktop app to take advantage of the technologies introduced by DX12 Ultimate (mesh shaders, variable rate shading and DXR).
All the official DX12 Ultimate samples compile and run on my machine (Core i9/RTX 3070 laptop), so as a first step I want to migrate as much static (i.e. unskinned) geometry as possible from the conventional (input assembler + vertex shader) rendering pipeline to the Amplification->Mesh shader pipeline.
I'm naturally reusing code from the official samples to facilitate this, and in the process I've encountered a very strange issue which only triggers in my app, not in the compiled sample project.
The specific problem relates to setting up meshlet instancing culling & dynamic LOD selection. When writing descriptors into the mesh shader SRV heap, my app failed to create a CBV:
// Mesh Info Buffers
D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc{};
cbvDesc.BufferLocation = m.MeshInfoResource->GetGPUVirtualAddress();
cbvDesc.SizeInBytes = MeshletUtils::GetAlignedSize<UINT>(sizeof(MeshInfo)); // 256 bytes which is correct
device->CreateConstantBufferView(&cbvDesc, OffsetHandle(i)); // generates error
The CBV couldn't be created because the resource's GPU virtual address range spans only 16 bytes:
D3D12 ERROR: ID3D12Device::CreateConstantBufferView:
pDesc->BufferLocation + SizeInBytes - 1 (0x0000000008c1f0ff) exceeds
end of the virtual address range of Resource
(0x000001BD88FE1BF0:'MeshInfoResource', GPU VA Range:
0x0000000008c1f000 - 0x0000000008c1f00f). [ STATE_CREATION ERROR
#649: CREATE_CONSTANT_BUFFER_VIEW_INVALID_RESOURCE]
What made this frustrating was that the code is identical to the official sample, yet the sample runs without issue. After many hours of trying dumb things, I finally examined the size of the MeshInfo structure, and therein lay the solution.
The MeshInfo struct is defined in the sample's Model class as:
struct MeshInfo
{
    uint32_t IndexSize;
    uint32_t MeshletCount;
    uint32_t LastMeshletVertCount;
    uint32_t LastMeshletPrimCount;
};
It is 16 bytes in size, and passed to the resource's description prior to its creation:
auto meshInfoDesc = CD3DX12_RESOURCE_DESC::Buffer(sizeof(MeshInfo));
ThrowIfFailed(device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &meshInfoDesc, D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&m.MeshInfoResource)));
SetDebugObjectName(m.MeshInfoResource.Get(), L"MeshInfoResource");
But clearly I needed a 256-byte range to conform to D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT, so I changed meshInfoDesc to:
auto meshInfoDesc = CD3DX12_RESOURCE_DESC::Buffer(sizeof(MeshInfo) * 16u);
And the error no longer occurs.
So my question is, why isn't this GPU virtual address error also occurring in the sample?
PS: It was necessary to rename Model.h/Model.cpp to MeshletModel.h/MeshletModel.cpp for use in my project, which is based on the DirectX Tool Kit framework, where Model.h/Model.cpp files already exist for the DXTK rigid body animation effect.
The solution was explained in the question, so I will summarize it here as the answer to this post.
When creating a constant buffer view on a D3D12 resource, make sure to allocate enough memory to the resource upon creation.
This should be at least 256 bytes to satisfy D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT.
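As an illustration, here is a hedged sketch of the fix: round the constant buffer size up to the 256-byte alignment both when creating the resource and when filling in the CBV. The AlignTo256 helper is hypothetical; MeshInfo, m.MeshInfoResource, device and OffsetHandle come from the question.

// Hypothetical helper: round a size up to the 256-byte CBV alignment.
constexpr UINT AlignTo256(UINT size)
{
    return (size + D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT - 1)
         & ~(D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT - 1);
}

const UINT alignedSize = AlignTo256(sizeof(MeshInfo));          // 16 -> 256
auto meshInfoDesc = CD3DX12_RESOURCE_DESC::Buffer(alignedSize); // resource now covers the full CBV range

D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc{};
cbvDesc.BufferLocation = m.MeshInfoResource->GetGPUVirtualAddress();
cbvDesc.SizeInBytes    = alignedSize;  // multiple of 256 and within the resource's VA range
device->CreateConstantBufferView(&cbvDesc, OffsetHandle(i));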
I still don't know why the sample on GitHub runs without meeting this requirement. I haven't delved into the sample's project configuration in detail; it's possible that D3D12 debug layer errors are handled differently there, but that's purely speculative.

Driver's source code structure requirements for Linux Kernel upstream

I am planning to rewrite my sensor's driver in order to try to get my module into the Linux kernel. I was wondering whether there are requirements regarding the organization of the source code. Is it mandatory to keep all the code in a single source file, or is it possible to split it across several?
I would prefer a modular approach for my implementation, with one file containing the API and all the structures required for the Kernel registration, and another file with the low level operations to exchange data with the sensor (i.e. mysensor.c & mysensor_core.c).
What are the requirements from this point of view?
Is there a limitation in terms of lines of codes for each file?
Note:
I had a look at the official GitHub repo, and it seems to me that each driver is always limited to a single source file.
https://github.com/torvalds/linux/tree/master/drivers/misc
Here is an extract from "linux/drivers/iio/gyro/Makefile" as an example:
# Currently this is rolled into one module, split it if
# we ever create a separate SPI interface for MPU-3050
obj-$(CONFIG_MPU3050) += mpu3050.o
mpu3050-objs := mpu3050-core.o mpu3050-i2c.o
The "mpu3050.o" file used to build the "mpu3050.ko" module is built by linking two object files "mpu3050-core.o" and "mpu3050-i2c.o", each of which is built by compiling a correspondingly named source file.
Note that if the module is built from several source files as above, the base name of the final module "mpu3050" must be different to the base name of each of the source files "mpu3050-core" and "mpu3050-i2c". So in your case, if you want the final module to be called "mysensor.ko" then you will need to rename the "mysensor.c" file.
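For the example in the question, a hedged Makefile sketch might look like this (CONFIG_MYSENSOR and the file names are hypothetical; note that mysensor.c from the question would have to be renamed, here to mysensor-main.c, so that no source file shares the module's base name):

# Hypothetical Kconfig symbol and file names for the "mysensor" example
obj-$(CONFIG_MYSENSOR) += mysensor.o
mysensor-objs := mysensor-main.o mysensor-core.o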

Does AVAudioEngine support recursive routing?

Can I route node A into node B, and node B back into node A (using a mixer in between, of course), otherwise known as "feedback"? (WebAudio, for example, supports this.)
No. Trying to set up a recursive route results in AVAudioEngine freezing and a seemingly unrelated message appearing in the console:
warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available.
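To make the scenario concrete, here is a hedged Swift sketch of the kind of cycle being described; AVAudioUnitDelay simply stands in for "node B", and the claim that this hangs comes from the answer above, not from the sketch itself.

import AVFoundation

let engine = AVAudioEngine()
let mixer  = AVAudioMixerNode()
let delay  = AVAudioUnitDelay()   // stands in for "node B"

engine.attach(mixer)
engine.attach(delay)

let format = engine.mainMixerNode.outputFormat(forBus: 0)
engine.connect(mixer, to: delay, format: format)   // mixer -> delay
engine.connect(delay, to: mixer, format: format)   // delay -> mixer, closing the loop
// In a real graph the mixer would also need to reach engine.mainMixerNode,
// e.g. via the AVAudioConnectionPoint overload of connect(_:to:fromBus:format:).

// Per the answer above, attempting to use a graph with such a cycle
// freezes the engine instead of producing feedback.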

1000 rows limit for chef-api module/wrapper

So I'm using this Node module to connect to Chef from my API.
https://github.com/normanjoyner/chef-api
It exposes a method called "partialSearch", which fetches specified attributes for all nodes matching a given query. The problem I have is that one of our environments has 1386 nodes attached to it, but the module seems to return at most 1000 results.
There does not seem to be any way to "offset" the results. The module works pretty well otherwise, and it's a shame this feature is not implemented, since its absence really limits the module's usefulness.
Has anyone bumped into a similar issue with this module, and can you advise how to work around it?
Here is an extract of my code:
chef.config(SetOptions(environment));
console.log("About to search for any servers ...");
chef.partialSearch('node',
  {
    q: "name:*"
  },
  {
    name: ['name'],
    ipaddress: ['ipaddress'],
    chef_environment: ['chef_environment'],
    ip6address: ['ip6address'],
    run_list: ['run_list'],
    chef_client: ['chef_client'],
    ohai_time: ['ohai_time']
  },
  function(err, chefRes) {
    // ... handle err / chefRes here
  });
Regards!
The maximum is 1000 results per page, but you can still request pages in order. The Chef API doesn't have a formal cursor system for pagination; it's just separate requests with a different start value, which can sometimes lead to minor desync (an item at the end of one page might shift in ordering and also show up at the start of the next page), so make sure you handle that. That said, the fancy API in the client library you linked doesn't seem to expose that option, so you'll have to add it or otherwise work around the problem. Check out https://github.com/sethvargo/chef-api/blob/master/lib/chef-api/resources/partial_search.rb#L34 for a Ruby implementation that does handle it.
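For illustration, here is a hedged sketch of what start/rows paging would look like; the start and rows options shown are not exposed by the linked module today, so treat this as the hypothetical patch you would add rather than working code.

// Hypothetical: assumes partialSearch forwards 'start' and 'rows'
// to the Chef search endpoint (it currently does not).
function fetchAllNodes(chef, fields, done) {
  var pageSize = 1000;
  var results = [];

  function fetchPage(start) {
    chef.partialSearch('node', { q: 'name:*', start: start, rows: pageSize }, fields,
      function (err, res) {
        if (err) return done(err);
        results = results.concat(res.rows);
        // Items can repeat across page boundaries (see above), so dedupe by name if needed.
        if (results.length < res.total && res.rows.length > 0) {
          fetchPage(start + pageSize);
        } else {
          done(null, results);
        }
      });
  }

  fetchPage(0);
}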
We have run into similar issues with Chef libraries. One workaround you might find useful is to use some node attribute to segment all of your nodes into groups of fewer than 1000.
If you have no such naturally segmentation-friendly attribute already, a simple implementation would be to create a new attribute called segment and, during your chef runs, set its value randomly to a number between 1 and 5.
Now you can perform 5 queries (each query searching only a single segment) and you should find all your nodes; if the randomness is working, each group will contain about 277 nodes (1386/5).
As your node population grows, you'll need to keep increasing the number of segments to keep each segment below 1000 nodes.
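A hedged sketch of that workaround using the module from the question, assuming a node attribute named segment has already been populated as described:

// Hypothetical 'segment' attribute, set to a value from 1 to 5 during chef-client runs.
var SEGMENTS = [1, 2, 3, 4, 5];
var fields = { name: ['name'], ipaddress: ['ipaddress'] };
var pending = SEGMENTS.length;
var allRows = [];

SEGMENTS.forEach(function (segment) {
  chef.partialSearch('node', { q: 'segment:' + segment }, fields, function (err, res) {
    if (err) console.error(err);
    else allRows = allRows.concat(res.rows);   // each segment stays well under 1000 nodes
    if (--pending === 0) {
      console.log('Fetched ' + allRows.length + ' nodes in total');
    }
  });
});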

Where can I get the source code of IoOutput8()

I'm looking for the source code of the IoOutput8() function, which is used to write a value to a specified I/O port.
Can anybody help me find the right location of this function's source code?
In Linux, there is no IoOutput8() function. You should use void iowrite8(u8 value, void *addr); followed by wmb(); (a write memory barrier). For more details, see §9.4.2 "Accessing I/O Memory" of the LDD book (also see §9.4.3 "Ports as I/O Memory" for ioport_map/ioport_unmap).
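A minimal, hedged sketch of that iowrite8()/wmb() usage for a legacy I/O port, following the LDD3 ioport_map approach; the port base, length, and function name are hypothetical, and a real driver would also claim the region with request_region() first.

#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/types.h>
#include <linux/errno.h>

#define MY_PORT_BASE 0x0378   /* hypothetical I/O port base */
#define MY_PORT_LEN  8        /* hypothetical region length */

static int my_write_byte(u8 value)
{
	void __iomem *addr;

	addr = ioport_map(MY_PORT_BASE, MY_PORT_LEN);  /* remap the port range as I/O memory */
	if (!addr)
		return -ENOMEM;

	iowrite8(value, addr);   /* write one byte to the mapped port */
	wmb();                   /* keep the write from being reordered */

	ioport_unmap(addr);
	return 0;
}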
As for the source code of IoOutput8(), you should probably get it from the same place you got the function itself. The only place I could find it is the Phoenix IO Access Library; if that is what you are using, you should ask Phoenix for the source code if they haven't already provided it.
