How to use in each statement with mixed version dml devices? - dml-lang

Got a device with some common code in DML 1.4 where I want to verify that a parameter is properly set on all fields of all registers by using an in each statement:
dml 1.4;
//import "each-bank.dml";
template stop_template {
    param stop default true;
}
template dont_stop is stop_template {
    param stop = false;
}
in each bank {
    in each register {
        in each field {
            is stop_template;
            #if (this.stop) {
                error "Stop here";
            }
        }
    }
}
For a device in DML 1.2, my error statement triggers on d.o even if I add the template or parameter myself:
dml 1.2;
device each_device;
import "each-import.dml"
bank a {
register x size 4 #0x0 {
field alpha #[2:0] {
is dont_stop;
}
}
}
bank d {
register o size 4 #0x0 is stop_template;
}
DML-DEP each-device.dmldep
DEP each-device-dml.d
DMLC each-device-dml.c
Using the Simics 5 API for test-device module
/modules/test-device/each-device.dml:15:5: In d.o
/modules/test-device/each-import.dml:18:17: error: Stop here
...
gmake: *** [test-device] Error 2
Running the same code with a 1.4 device works as intended. Does the in each statement not work with mixed-version devices?

Your issue is that in DML 1.2, registers with no declared fields get one whole-register-covering field implicitly defined by the compiler. This is then picked up by your each-statement, correctly.
To work around this, you need your template to check whether it is actually being applied to an explicitly declared field (part of the unfortunate pain that comes with writing common code for both 1.2 and 1.4).
To do this, use the 1.2 parameter 'explicit', which is set by the compiler and is true for declared fields only. This parameter is not present in 1.4, however, so you additionally need to guard this check with a check on the 'dml_1_2' parameter.
Something along the lines of:
in each bank {
    in each register {
        in each field {
            // We cannot put this inside the hashif, despite wanting to,
            // because DML does not (yet) allow conditional templating on
            // parameters
            is stop_template;
            // This _does_ mean we can check this.stop first, saving us
            // a little bit of code
            #if (this.stop) {
                // It is generally considered good practice to keep
                // dml_1_2 checks in their own #if branch, so that they
                // can be removed wholesale once 1.2 is no longer supported
                #if (dml_1_2) {
                    #if (explicit) {
                        error "Stop here";
                    }
                } #else {
                    error "Stop here";
                }
            }
        }
    }
}

In your case, the problematic code is a consistency check that's not strictly necessary for the device to function, so one option is to just skip this check for DML 1.2, assuming that most remaining DML 1.2 devices are legacy things where not much development is happening. So you can just put the problematic definitions inside #if (!dml_1_2) { } and it will compile again. Note that the compiler has a special case for the dml_1_2 parameter, which allows a top-level #if to contain any kind of top-level statement, including template definitions and typedefs.
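For illustration, a minimal sketch of that option, assuming the in each block lives in the common 1.4 file (each-import.dml in this setup); the whole consistency check is then simply compiled out for 1.2 devices:

dml 1.4;

// The compiler special-cases a top-level #if on dml_1_2, so it may wrap
// any top-level statements, including this in each block.
#if (!dml_1_2) {
    in each bank {
        in each register {
            in each field {
                is stop_template;
                #if (this.stop) {
                    error "Stop here";
                }
            }
        }
    }
}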

Related

freeradius unlang disable escaping

I have the following list of VLANs I want to check against LDAP.
Tmp-String-0 = "CN=vlan10,CN=Users,DC=aaa,DC=local;CN=vlan20,CN=Users,DC=aaa,DC=local"
With unlang I explode them and loop through them:
if ("%{explode:&control:Tmp-String-0 ;}" > 0) {
    foreach &control:Tmp-String-0 {
Then I try to do checking with
Tmp-String-1 := "%{ldap_aaa.local:ldapi://192.168.0.199:389/cn=Users,dc=aaa,dc=local?memberof?sub?(&(objectCategory=User)(sAMAccountName=%{%{Stripped-User-Name}:-%{User-Name}})(memberOf=%{Foreach-Variable-0}))}"
However %{Foreach-Variable-0} gets the escaped version of the string:
CN3dvlan202cCN3dUsers2cDC3daaa2cDC3dlocal
The escaped version is not working; if I replace it with the hardcoded unescaped version, it works.
CN=vlan20,CN=Users,DC=aaa,DC=local
How do I prevent unlang from escaping the variable?
You can't prevent the escaping in v3. In version 4 there's the concept of "tainted" and "untainted" values, with tainted values being escaped and "untainted" values being inserted verbatim, but I'm not sure that's implemented yet in the LDAP module expansions.
You could instead use the group membership checks in the LDAP module to do what you're trying to implement with the xlat expansion, e.g.:
if ("%{explode:&control:Tmp-String-0 ;}" > 0) {
foreach &control:Tmp-String-0 {
if (LDAP-Group == "%{control:Tmp-String-0}") {
update reply {
# VLAN-Attrs
}
}
}
}

Invoking a function based on its string name in Java

I need logic to replace the following code.
void invokeMethod(String action) {
    if ("echo".equals(action)) {
        // call echo
        echo();
    }
    else if ("dump".equals(action)) {
        // call dump
        dump();
    }
    // ... goes on
}
A switch case with a String parameter won't work in Java 1.6.
Can I do it in a better way?
I used a Java HashMap with the action as key and an integer as value. Whenever a particular action is asked to be invoked, fetch the integer from the HashMap and use a switch case on it (in the above problem, string comparison was the costly operation; this replaces it with integer comparison).
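A minimal sketch of that idea (class and map names here are just placeholders for your own code), which stays within Java 1.6 where switch on String is not available:

import java.util.HashMap;
import java.util.Map;

public class ActionInvoker {
    // Map each action name to an int code, so dispatch can use an int switch.
    private static final Map<String, Integer> ACTIONS = new HashMap<String, Integer>();
    static {
        ACTIONS.put("echo", 1);
        ACTIONS.put("dump", 2);
    }

    void invokeMethod(String action) {
        Integer code = ACTIONS.get(action);
        if (code == null) {
            throw new IllegalArgumentException("Unknown action: " + action);
        }
        switch (code.intValue()) {
            case 1: echo(); break;
            case 2: dump(); break;
        }
    }

    void echo() { /* ... */ }
    void dump() { /* ... */ }
}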

"this" argument in boost bind

I am writing a multi-threaded server that handles async reads from many TCP sockets. Here is the section of code that bothers me.
void data_recv (void) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)),
        boost::bind ( &RPC::on_data_recv, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if (rawDataW[bytesRx-1] == ENDMARKER) { // <-- this code is fine
        process_and_write_rawdata_to_file
    }
    else {
        read_socket_until_endmarker // <-- HELP REQUIRED!!
        process_and_write_rawdata_to_file
    }
}
Nearly always the async_read_some reads in data including the endmarker, so it works fine. Rarely, the endmarker's arrival is delayed in the stream and that's when my program fails. I think it fails because I have not understood how boost bind works.
My first question:
I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
My second question:
In the on_data_recv() method, how do I read data from the same socket that was read in the on_data() method? In other words, how do I pass the socket as an argument from the calling method to the handler when the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" will be appreciated.
My first question: I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well) the start_accept() is a member function. The bind function is conveniently designed such that when you use & in front of its first argument, it interprets it as a member function that is applied to its second argument.
So while code like this:
void foo(int x) { ... }
bind(foo, 3)();
is equivalent to just calling foo(3),
code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- notice the & before foo
would be equivalent to calling bar.foo(3).
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice & again
    boost::asio::placeholders::error,
    boost::asio::placeholders::bytes_transferred)
When this object is invoked inside Asio, it will be equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
For the second part, it is not clear to me how you're working with multiple threads; do you run io_service.run() from more than one thread (possible, but I think beyond your experience level)? It might be the case that you're confusing async IO with multithreading. I'm going to assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
        boost::bind ( &RPC::on_data_recv, this,
            startPos,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW

void on_data_recv (size_t startPos,
                   boost::system::error_code ec,
                   std::size_t bytesRx) {
    // TODO: Check ec
    if (rawDataW[startPos + bytesRx - 1] == ENDMARKER) {
        process_and_write_rawdata_to_file
    }
    else {
        // TODO: Error if startPos + bytesRx == 648*2
        data_recv(startPos + bytesRx);
    }
}
Notice though that the above code still has problems, the main one being that if the other side sent two messages quickly one after another, we could receive (in one async_read_some call) the full first message plus part of the second message, and thus miss the ENDMARKER of the first one. So it is not enough to only test whether the last received byte is equal to the ENDMARKER.
I could go on and modify this function further (I think you might get the idea of how), but you'd be better off using async_read_until, which is meant exactly for this purpose.
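For completeness, a rough sketch of what the async_read_until variant could look like, assuming ENDMARKER is a single character and that a boost::asio::streambuf member (called readBuffer here, a name introduced only for this sketch) is added to the class:

// Sketch only: readBuffer is an assumed boost::asio::streambuf member.
void data_recv (void) {
    boost::asio::async_read_until (
        socket, readBuffer, ENDMARKER,
        boost::bind ( &RPC::on_data_recv, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
}

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if (ec) return; // TODO: real error handling
    // bytesRx counts the bytes up to and including the delimiter; anything
    // received beyond it stays in readBuffer for the next read.
    std::istream is (&readBuffer);
    std::string message;
    std::getline (is, message, static_cast<char>(ENDMARKER));
    // process_and_write_rawdata_to_file(message);
    data_recv(); // keep listening for the next message
}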

How to define a CAS in database as external resource for an annotator in uimaFIT?

I am trying to structure my data processing pipeline using uimaFIT as follows:
[annotatorA] => [Consumer to dump annotatorA's annotations from CAS into DB]
[annotatorB (should take annotatorA's annotations from DB as input)] => [Consumer for annotatorB]
The driver code:
/* Step 0: Create a reader */
CollectionReader readerInstance = CollectionReaderFactory.createCollectionReader(
        FilePathReader.class, typeSystem,
        FilePathReader.PARAM_INPUT_FILE, "/path/to/file/to/be/processed");

/* Step 1: Define annotator A */
AnalysisEngineDescription annotatorAInstance =
    AnalysisEngineFactory.createPrimitiveDescription(
        annotatorADbConsumer.class, typeSystem,
        annotatorADbConsumer.PARAM_DB_URL, "localhost",
        annotatorADbConsumer.PARAM_DB_NAME, "xyz",
        annotatorADbConsumer.PARAM_DB_USER_NAME, "name",
        annotatorADbConsumer.PARAM_DB_USER_PWD, "pw");
builder.add(annotatorAInstance);

/* Step 2: Define binding for annotatorB to take
   what annotator A put in the DB above as input */

/* Step 3: Define annotator B */
AnalysisEngineDescription annotatorBInstance =
    AnalysisEngineFactory.createPrimitiveDescription(
        GateDateTimeLengthAnnotator.class, typeSystem);
builder.add(annotatorBInstance);

/* Step 4: Run the pipeline */
SimplePipeline.runPipeline(readerInstance, builder.createAggregate());
Questions I have are:
Is the above approach correct?
How do we define the dependency of annotatorA's output in annotatorB in step 2?
Is the approach suggested at https://code.google.com/p/uimafit/wiki/ExternalResources#Resource_injection the right direction to achieve it?
You can define the dependency with @TypeCapability like this:
@TypeCapability(inputs = { "com.myproject.types.MyType", ... }, outputs = { ... })
public class MyAnnotator extends JCasAnnotator_ImplBase {
    ....
}
Note that it defines a contract at the annotation level, not the engine level (meaning that any Engine could create com.myproject.types.MyType).
I don't think there are ways to enforce it.
I did create some code to check that an Engine is provided with the right required Annotations upstream in a pipeline, and to print an error log otherwise (see Pipeline.checkAndAddCapabilities() and Pipeline.addCapabilities()). Note however that it will only work if all Engines define their TypeCapabilities, which is often not the case when one uses external Engines/libraries.
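As a rough illustration of what such a check can look like (this is not the Pipeline code referenced above; it assumes the org.uimafit.descriptor.TypeCapability annotation is available at runtime and that the engine classes are passed in pipeline order):

import java.util.HashSet;
import java.util.Set;
import org.uimafit.descriptor.TypeCapability;

public class CapabilitySanityCheck {
    // Walk the engines in pipeline order, collecting declared outputs and
    // warning when an engine's declared inputs are not produced upstream.
    public static void check(Class<?>... engineClasses) {
        Set<String> produced = new HashSet<String>();
        for (Class<?> engine : engineClasses) {
            TypeCapability cap = engine.getAnnotation(TypeCapability.class);
            if (cap == null) {
                continue; // engine does not declare its capabilities
            }
            for (String input : cap.inputs()) {
                if (!produced.contains(input)) {
                    System.err.println(engine.getName() + " expects type " + input
                            + " that no upstream engine declares as output");
                }
            }
            for (String output : cap.outputs()) {
                produced.add(output);
            }
        }
    }
}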

Why does C++/CLI have no default arguments on managed types?

The following line gives the error "Default argument is not allowed":
public ref class SPlayerObj {
private:
    void k(int s = 0) { // ERROR
    }
};
Why does C++ have no default arguments on managed types?
I would like to know if there is a way to fix this.
It does have optional arguments, they just don't look the same way as the C++ syntax. Optional arguments are a language interop problem: they must be implemented by the language that makes the call, since it generates the code that actually uses the default argument. Which is a tricky problem in a language that was designed to make interop easy, like C++/CLI; you of course don't know what language is going to make the call, or whether it even has syntax for optional arguments. The C# language didn't until version 4, for example.
And if the language does support it, there's the question of how that compiler knows what the default value is. Notably, VB.NET and C# v4 chose different strategies: VB.NET uses an attribute, C# uses a modopt.
You can use the [DefaultParameterValue] attribute in C++/CLI. But you shouldn't, the outcome is not predictable.
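For reference, applying that attribute in C++/CLI looks roughly like the sketch below; whether a caller actually picks up the default depends on the calling language, which is exactly the unpredictability mentioned above:

using namespace System::Runtime::InteropServices;

public ref class SPlayerObj {
public:
    // The default value is only stored as metadata; languages that honor it
    // let callers omit 's', others still require an explicit argument.
    void k([Optional] [DefaultParameterValue(0)] int s) {
        // ...
    }
};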
In addition to the precise answer from Hans Passant, and regarding the second part on how to fix this: you can use multiple methods with the same name (overloads) to simulate the default argument case.
public ref class SPlayerObj {
private:
    void k(int s) {
        // Do something useful...
    }
    void k() {
        // Call the other with a default value
        k(0);
    }
};
An alternative solution is to use the [OptionalAttribute] alongside a Nullable<int> typed parameter. If the parameter is not specified by the caller, it will be nullptr.
void k([OptionalAttribute] Nullable<int>^ s)
{
    if (s == nullptr)
    {
        // s was not provided
    }
    else if (s->HasValue)
    {
        // s was provided and has a value
        int theValue = s->Value;
    }
}
// call with no parameter
k();
// call with a parameter value
k(100);
