I'm trying to understand how devicetrees work.
According to the kernel documentation, they are used on the ARM architecture in the following manner:
In the majority of cases, the machine identity is irrelevant, and the kernel will instead select setup code based on the machine’s core CPU or SoC. On ARM for example, setup_arch() in arch/arm/kernel/setup.c will call setup_machine_fdt() in arch/arm/kernel/devtree.c which searches through the machine_desc table and selects the machine_desc which best matches the device tree data. It determines the best match by looking at the ‘compatible’ property in the root device tree node, and comparing it with the dt_compat list in struct machine_desc (which is defined in arch/arm/include/asm/mach/arch.h if you’re curious).
The ‘compatible’ property contains a sorted list of strings starting with the exact name of the machine, followed by an optional list of boards it is compatible with sorted from most compatible to least.
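For example, a board's root node might carry something like this (an illustrative DTS fragment, not from a real board):

/ {
    compatible = "vendor,board-v2", "vendor,board", "vendor,soc";
    /* ... */
};

Here "vendor,board-v2" is the exact machine name and the later entries are progressively more generic fallbacks.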
I found the source code that compares the machine_desc entries against the compatible property set in the DTS file:
const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
{
    const struct machine_desc *mdesc, *mdesc_best = NULL;

#if defined(CONFIG_ARCH_MULTIPLATFORM) || defined(CONFIG_ARM_SINGLE_ARMV7M)
    DT_MACHINE_START(GENERIC_DT, "Generic DT based system")
        .l2c_aux_val = 0x0,
        .l2c_aux_mask = ~0x0,
    MACHINE_END

    mdesc_best = &__mach_desc_GENERIC_DT;
#endif

    if (!dt_virt || !early_init_dt_verify(dt_virt))
        return NULL;

    mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);

    if (!mdesc) {
        const char *prop;
        int size;
        unsigned long dt_root;

        early_print("\nError: unrecognized/unsupported "
                    "device tree compatible list:\n[ ");

        dt_root = of_get_flat_dt_root();
        prop = of_get_flat_dt_prop(dt_root, "compatible", &size);
        while (size > 0) {
            early_print("'%s' ", prop);
            size -= strlen(prop) + 1;
            prop += strlen(prop) + 1;
        }
        early_print("]\n\n");

        dump_machine_table(); /* does not return */
    }

    /* We really don't want to do this, but sometimes firmware provides buggy data */
    if (mdesc->dt_fixup)
        mdesc->dt_fixup();

    early_init_dt_scan_nodes();

    /* Change machine number to match the mdesc we're using */
    __machine_arch_type = mdesc->nr;

    return mdesc;
}
However, I didn't find the machine_desc table definition.
If I'd like to read all the machine_desc entries, where can I find them?
TL;DR - The machine_desc table is assembled by building and linking different source files into the kernel, so each machine source file adds an entry to the table.
The table is laid out by arch/arm/kernel/vmlinux.lds.S (or the relevant architecture's linker script). It is populated with the macros MACHINE_START and MACHINE_END, which place a structure in the '.arch.info.init' section of each object file. All of these objects get globbed together by the linker, and this forms the table. So it is constructed by linking the different source files that use the MACHINE_START and MACHINE_END macros; it doesn't exist in any one place.
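To make that concrete, here is roughly what the pieces look like in a 4.x ARM tree (paraphrased; exact details vary by kernel version). The macros in arch/arm/include/asm/mach/arch.h define a struct machine_desc and drop it into the '.arch.info.init' section:

#define MACHINE_START(_type, _name)                     \
static const struct machine_desc __mach_desc_##_type   \
 __used                                                 \
 __attribute__((__section__(".arch.info.init"))) = {   \
    .nr     = MACH_TYPE_##_type,                        \
    .name   = _name,

#define MACHINE_END                                     \
};

The linker script then collects every such structure between two symbols, which is what forms the table:

__arch_info_begin = .;
    *(.arch.info.init)
__arch_info_end = .;

A typical board file contributes one entry, for example (trimmed from arch/arm/mach-bcm/board_bcm2835.c):

static const char * const bcm2835_compat[] = {
    "brcm,bcm2835",
    NULL
};

DT_MACHINE_START(BCM2835, "BCM2835")
    .dt_compat = bcm2835_compat,
MACHINE_END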
However, you can use git grep -A10 MACHINE_START to get a fairly good list. This works well because the macro is typically the last thing in a file, so only five or six lines print for each match. Alternatively, you could write init code that dumps the table by printing the machine_desc entries.
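A minimal sketch of such a dump, assuming it runs from an early initcall (for_each_machine_desc comes from arch/arm/include/asm/mach/arch.h and walks the table from __arch_info_begin to __arch_info_end):

#include <linux/init.h>
#include <linux/printk.h>
#include <asm/mach/arch.h>

static int __init dump_mach_table(void)
{
    const struct machine_desc *p;

    /* Iterate over every entry the linker globbed into the table */
    for_each_machine_desc(p)
        pr_info("machine %u: %s\n", p->nr, p->name);

    return 0;
}
early_initcall(dump_mach_table);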
That said, the table is not too interesting, as it is mostly function pointers to be called at different times. The majority of fields will be NULL, since the entries use designated initializers.
Since no new shader can be created at runtime, the full set is known ahead of time, at compile time. Each shader must reference a "pass" in which it will be used to render.
To avoid frame spikes during runtime, I'd like to pre-create all pipeline objects during startup. And to create a pipeline, the number of outputs and the format of each output attachment must be known - either to create a VkRenderPass or to specify the outputs for the dynamic rendering feature.
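For example, with dynamic rendering (VK_KHR_dynamic_rendering / Vulkan 1.3) the attachment formats must already be supplied at pipeline-creation time, roughly like this (illustrative fragment):

// Formats of all color attachments this pipeline renders to,
// in shader "location" order - required at pipeline creation.
const VkFormat colorFormats[] = { VK_FORMAT_R8G8B8A8_UNORM, VK_FORMAT_R8G8B8A8_UNORM };

VkPipelineRenderingCreateInfo renderingInfo{};
renderingInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_RENDERING_CREATE_INFO;
renderingInfo.colorAttachmentCount = 2;
renderingInfo.pColorAttachmentFormats = colorFormats;

VkGraphicsPipelineCreateInfo pipelineInfo{};
pipelineInfo.sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO;
pipelineInfo.pNext = &renderingInfo; // instead of passing a VkRenderPass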
However, I'd also like to use the frame graph concept (see the talk by Yuriy O'Donnell), which in turn builds a graph of render passes with input/output specifications and dependencies between them. Some passes are conditionally created (e.g. debug passes), and some passes might be dropped from the graph (after "compiling" it).
Additionally, I need to support the "write on top" feature, so instead of specifying a new output during the building of the render pass, I can simply say that the output of this pass will use an output from a previous pass - this is useful for adding alpha-blended rendering, for example.
How can I match these two separate sections of the code? In other words, how can I define all render passes during initialization but still use a dynamic approach of building the frame graph each frame, without repeating myself?
This is what I'd like to avoid (pseudo-code):
struct Pass1Def
{
    output1 = ImageFormat::RGBA8;
    output2 = ImageFormat::RGBA8;
    // ...
    outputs = // outputs in order (corresponds to location in shader)
};

void init()
{
    for_each_shaders shader {
        passDef = findPassDef(shader);
        createPipeline(shader, passDef);
    }
}

void render()
{
    auto previousResource = someCondition ? passA.outputResource1 : passB.outputResource2;

    graph.addPass(..., [&](PassBuilder& builder, Pass1Data& data) {
        // error-prone: order of function calls matters (corresponds to location in shader)
        // error-prone: must use the same format defined in Pass1Def
        data.outputResource1 = builder.create(... ImageFormat::RGBA8);

        // error-prone: the format depends on the outputResource of a previous pass,
        // but the format must be (and was) specified in Pass1Def
        data.outputResource2 = builder.write(previousResource);
    });
}
My SwiftUI segmented control picker uses plain Int ".tag(1)" etc values for its selection.
CoreData only has Int16, Int32 & Int64 options to choose from, and with any of those options it seems my picker selection and CoreData refuse to talk to each other.
How is this (??simple??) task achieved please?
I've tried every numeric-based option within CoreData, including Int16-64, doubles and floats; all of them break my code or simply don't work.
Picker(selection: $addDogVM.gender, label: Text("Gender?")) {
    Text("Boy ♂").tag(1)
    Text("?").tag(2)
    Text("Girl ♀").tag(3)
}
I expected any of the 3 CoreData Int options to work out of the box, and to be compatible with the (standard) Int used by the picker.
Each element of a segmented control is represented by an index of type Int, and this index commences at 0.
So using your example of a segmented control with three segments (for example: Boy ♂, ?, Girl ♀), the segments are represented by the three indexes 0, 1 & 2.
If the user selects the segmented control that represents Girl ♀, then...
segmentedControl.selectedSegmentIndex = 2
When storing a value using Core Data framework, that is to be represented as a segmented control index in the UI, I therefore always commence with 0.
Everything you read from this point onwards is programmer preference - that is, and to be clear, there are a number of ways to achieve the same outcome, and you should choose one that best suits you and your coding style. Note also that this can be confusing for a newcomer, so I would encourage patience. My only advice: keep things as simple as possible until you've tested and debugged enough to understand the differences.
So to continue:
The Apple Documentation states that...
...on 64-bit platforms, Int is the same size as Int64.
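(You can verify this in a playground; both sizes print as 8 bytes on a 64-bit platform:)

print(MemoryLayout<Int>.size)   // 8
print(MemoryLayout<Int64>.size) // 8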
So in the Core Data model editor (.xcdatamodeld file), I choose to apply an Integer 64 attribute type for any value that will be used as an Int in my code.
Also, somewhere, some time ago, I read that if there is no reason to use Integer 16 or Integer 32, then you should default to Integer 64 in the object model graph. (I assume Integer 16 and Integer 32 are kept for backward compatibility.) If I find that reference I'll link it here.
I could write here about the use of scalar attribute types and manually writing your managed object subclass/es (by selecting Codegen = Manual/None in the attribute inspector), but honestly such added detail would only complicate matters.
So your "automatically generated by Core Data" managed object subclass/es (NSManagedObject) will use the optional NSNumber? wrapper...
You will therefore need to convert your persisted/saved data in your code.
I do this in two places... when I access the data and when I persist the data.
(Noting I assume your entity is of type Dog and an instance exists of dog i.e. let dog = Dog())
// access
tempGender = dog.gender as? Int
// save
dog.gender = tempGender as NSNumber?
In between, I use a "temp" var property of type Int to work with the segmented control.
// temporary property to use with segmented control
private var tempGender: Int?
UPDATE
I do the last part a little differently now...
Rather than convert the data in code, I made a simple extension to my managed object subclass to execute the conversion. So rather than accessing the Core Data attribute directly and manipulating the data in code, now I instead use this convenience var.
extension Dog {
    var genderAsInt: Int {
        get {
            guard let gender = self.gender else { return 0 }
            return Int(truncating: gender)
        }
        set {
            self.gender = NSNumber(value: newValue)
        }
    }
}
Your picker code...
Picker(selection: $addDogVM.genderAsInt, label: Text("Gender?")) {
    Text("Boy ♂").tag(0)
    Text("?").tag(1)
    Text("Girl ♀").tag(2)
}
Any questions, ask in the comments.
I have a list of valid values that I am storing in a data store. This list is about 20 items long now and will likely grow to around 100, maybe more.
I feel there are a variety of reasons it makes sense to store this in a data store rather than just storing in code. I want to be able to maintain the list and its metadata and make it accessible to other services, so it seems like a micro-service data store.
But in code, we want to make sure only values from the list are passed, and they can typically be hardcoded. So we would like to create an enum that can be used in code to ensure that valid values are passed.
I have created a simple Node.js script that can generate a JS file containing the enum straight from the data store. This could be regenerated any time the list changes, or maybe on a schedule. But sharing the enum file with any Node.js applications that use it would not be trivial.
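A sketch of what that generator does (the endpoint URL and file name here are made up; it assumes Node 18+ for the global fetch):

// generate-enum.js - regenerate the enum module from the data store
const fs = require('fs');

async function main() {
    // Hypothetical endpoint standing in for the real data store service
    const res = await fetch('https://example.com/api/valid-values');
    const values = await res.json(); // e.g. ['FIRST', 'SECOND', 'THIRD']

    const entries = values.map(v => `    ${v}: '${v}',`).join('\n');
    fs.writeFileSync(
        'valid-values.js',
        `// AUTO-GENERATED - do not edit by hand\n` +
        `module.exports = Object.freeze({\n${entries}\n});\n`
    );
}

main();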
Has anyone done anything like this? Any reason why this would be a bad approach? Any feedback is welcome.
Piggy-backing off of this answer, which describes a way of creating an "enum" in JavaScript: you can grab the list of constants from your server (via an HTTP call) and then generate the enum in code, without the need for creating and loading a JavaScript source file.
Given that you have loaded your enumConstants from the back-end (here I hard-coded them):
const enumConstants = [
    'FIRST',
    'SECOND',
    'THIRD'
];

const temp = {};
for (const constant of enumConstants) {
    temp[constant] = constant;
}
const PlaceEnum = Object.freeze(temp);

console.log(PlaceEnum.FIRST);

// Or, in one line
const PlaceEnum2 = Object.freeze(enumConstants.reduce((o, c) => { o[c] = c; return o; }, {}));

console.log(PlaceEnum2.FIRST);
It is not ideal for code analysis or when using a smart editor, because the object is not explicitly defined and the editor will complain, but it will work.
Another approach is just to use an array and look for its members.
const members = ['first', 'second', 'third'...]
// then test for the members
members.indexOf('first') // 0
members.indexOf('third') // 2
members.indexOf('zero') // -1
members.indexOf('your_variable_to_test') // does it exist in the "enum"?
Any value that is >=0 will be a member of the list. -1 will not be a member. This doesn't "lock" the object like freeze (above) but I find it suffices for most of my similar scenarios.
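(On ES2016+ runtimes, Array.prototype.includes expresses the same membership test directly as a boolean:)

members.includes('first') // true
members.includes('zero')  // false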
Poking around I was unable to discover a way to detect hidden files in OS X with node (nodejs).
Of course, we can easily find the ".dot_hidden" files, but on the Mac there are files/folders that are "protected" system files which most users shouldn't fiddle with. In the Finder GUI, they are invisible, or greyed out when hidden files are forced to be shown via "AppleShowAllFiles".
I did discover a reference to UF_HIDDEN : 0x8000 here:
https://developer.apple.com/library/mac/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemDetails/FileSystemDetails.html
Using node's stat, we can return 2 additional bits of info that may provide a clue for the hidden status:
mode: 33188,   // File protection.
ino: 48064969, // File inode number. An inode is a file system data
               // structure that stores information about a file.
I'm not really a hex / binary guy, but it looks like we can grab the stat's "ino" property, apply the 0x8000 mask, and determine whether the file is hinted as hidden or not.
I didn't have any success with the 0x8000 mask on the mode, but I did have some with ino.
Here's what I've got. Checking the "ino" returns 0 or 1726; when it's 1726 the file seems to match a hidden file in OS X.
var fs = require("fs");

var dir = "/";
var list = fs.readdirSync(dir);

list.forEach(function (f) {
    // easy dot hidden files
    var hidden = (f.substr(0, 1) == ".");
    var ino = 0;
    var syspath = dir + "/" + f;

    if (!hidden) {
        var stats = fs.statSync(syspath);
        ino = parseInt(stats.ino & 0x8000, 8);
        // ino yields 0 when hidden and 1726 when not?
        if (ino) {
            hidden = true;
        }
    }
    console.log(syspath, hidden, ino);
});
So my question is: am I applying the 0x8000 mask properly on the ino value to yield a proper result?
And how would one go about parsing the ino property to get at all the other flags contained within it?
The inode number (stats.ino) is a number which uniquely identifies a file; it has nothing to do with the hidden status of the file. (Indeed, it's possible to set or clear the hidden flag on a file at any time, and this won't change the inode number.)
The hidden flag is part of the st_flags field in the struct stat structure. Unfortunately, it doesn't look like the node.js fs module exposes this value, so you may need to shell out to the stat shell utility if you need to get this information on Mac OS X. (Short version: stat -f%f file will print a file's flags, represented in decimal.)
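For example, a minimal sketch of that shell-out (using macOS's /usr/bin/stat; 0x8000 is the UF_HIDDEN bit):

var execFile = require("child_process").execFile;

function isHidden(path, callback) {
    // "stat -f%f" prints the file's st_flags as a decimal number
    execFile("/usr/bin/stat", ["-f%f", path], function (err, stdout) {
        if (err) return callback(err);
        var flags = parseInt(stdout.trim(), 10);
        callback(null, (flags & 0x8000) !== 0); // UF_HIDDEN
    });
}

isHidden("/Volumes", function (err, hidden) {
    console.log("hidden:", hidden);
});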
Hello everyone. I want to ask if you know a way to extract data from message_t.
In the older versions of TinyOS there were TOS_Msg and TOS_MsgPtr, but with message_t I couldn't find a way. Please help me.
I'd also like to know if there is any data type for storing data, like a table or an array list.
typedef nx_struct message_localization {
    nx_uint8_t NodeId;
    bool ancre_nature;
    nx_uint8_t x_coordinate;
    nx_uint8_t y_coordinate;
    nx_uint8_t energie_transmited;
} message_localization_t;
The Packet interface has a command getPayload which does what you want:
command void *getPayload(message_t *msg, uint8_t len);
See the documentation for more information.
To access the data field, you may do as follows:
message_t msg;
message_localization_t *payload =
    (message_localization_t *)call Packet.getPayload(
        &msg, sizeof(message_localization_t));

payload->x_coordinate = x;
payload->y_coordinate = y;
/* and so on */
The same command is for convenience included in interfaces Send and AMSend. Packet and AMSend are provided by the ActiveMessageC configuration.
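For instance, filling the payload and broadcasting it might look roughly like this (a sketch; the module is assumed to use Packet and AMSend, wired to the active message layer):

message_t msg;

void sendLocalization(nx_uint8_t x, nx_uint8_t y) {
    message_localization_t *payload =
        (message_localization_t *)call Packet.getPayload(
            &msg, sizeof(message_localization_t));
    if (payload == NULL)
        return;

    payload->x_coordinate = x;
    payload->y_coordinate = y;

    // Broadcast the packet; AMSend.sendDone() signals completion
    call AMSend.send(AM_BROADCAST_ADDR, &msg, sizeof(message_localization_t));
}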