At the following URL: https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsICacheVisitor is the following code chunk:
boolean visitDevice(in string deviceID, in nsICacheDeviceInfo deviceInfo);
I thought I was dealing with C++, but "in" is not a C++ keyword according to the C++ keyword lists I looked up, nor is it a Java keyword. So what is it there for, and what does it mean?
It means that the parameter is an input parameter, meaning that it will be used but not modified by the function.
The opposite of an in parameter is an out parameter, which means that the parameter is going to be modified, but not explicitly returned. If you were to use an out parameter after a method that uses it, the value is going to (potentially) be different.
As nos points out in the comment, the page you linked to is describing a .idl, or Interface definition language, file. I'm not familiar with the IDL that Mozilla uses (but if you want to learn more, you can read about it here), but I am somewhat familiar with the Object Management Group's IDL, which says that in parameters are call-by-value, out parameters are call-by-result, and inout parameters are call-by-value/result.
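As a loose C++ analogy (this is illustrative only, not the code the XPIDL compiler actually generates), an in parameter behaves like something passed by value or by const reference, while an out parameter behaves like a non-const reference that exists purely to hand a result back:

#include <string>

// "in"-style parameter: read by the function, never modified.
// "out"-style parameter: written by the function so the caller can use the result.
void visitDevice(const std::string& deviceID, bool& keepVisiting)
{
    keepVisiting = !deviceID.empty();
}

int main()
{
    bool keepVisiting = false;
    visitDevice("disk", keepVisiting);   // keepVisiting now holds the "out" value
}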
The language is Mozilla's Interface Description Language (XPIDL).
The keyword "in" is described here.
I've seen frameworks/SDKs for C/C++ that define macros to indicate whether a parameter is for input, output or both. I'm guessing that that's what's going on in your example.
For example, the Windows DDK does this with IN, OUT and INOUT (if I remember right). When compiling, these macros are defined to nothing, but they have the potential to be defined to something useful for other tools (like an IDL compiler or a static analysis tool). I'm not sure if they still use these macros in the more recent DDKs.
Microsoft has taken this idea to an extreme with the SAL macros that give a very fine level of control over what behavior is expected for a parameter.
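For a taste of what those SAL annotations look like (assuming the Windows SDK's <sal.h> header is available; the function below is made up for illustration):

#include <sal.h>

// _In_ documents a read-only input; _Out_ documents a pointer the function
// promises to write. Both expand to nothing during ordinary compilation and
// are consumed by static analysis tools.
void Doubled(_In_ const int* input, _Out_ int* result)
{
    *result = *input * 2;
}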
How can I retrieve the type of architecture (Linux versus Windows) in my Fortran code? Is there some sort of intrinsic function or subroutine that gives this information? Then I would like to use a switch like this every time I have a system call:
if (trim(adjustl(Arch)) == 'Linux') then
   resul = system('ls > output.txt')
elseif (trim(adjustl(Arch)) == 'Windows') then
   resul = system('dir > output.txt')
else
   write(*,*) 'architecture not supported'
   stop
endif
thanks
A.
The Fortran 2003 standard introduced the GET_ENVIRONMENT_VARIABLE intrinsic subroutine. A simple form of call would be
call GET_ENVIRONMENT_VARIABLE (NAME, VALUE)
which will return the value of the variable called NAME in VALUE. The routine has other optional arguments; your favourite reference documentation will explain all. This rather assumes that you can find an environment variable that tells you what the executing platform is.
If your compiler doesn't yet implement this standard approach it is extremely likely to have a non-standard approach; a routine called getenv used to be available on more than one of the Fortran compilers I've used in the recent past.
The 2008 standard introduced a standard function COMPILER_OPTIONS which will return a string containing the compilation options used for the program, if, that is, the compiler supports this sort of thing. This seems to be less widely implemented than GET_ENVIRONMENT_VARIABLE; as ever, consult your compiler documentation for details and availability. If it is available it may also be useful to you.
You may also be interested in the 2008-introduced subroutine EXECUTE_COMMAND_LINE which is the standard replacement for the widely-implemented but non-standard system routine that you use in your snippet. This is already available in a number of current Fortran compilers.
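Putting GET_ENVIRONMENT_VARIABLE and EXECUTE_COMMAND_LINE together, a rough (untested) sketch might look like the following. It assumes that Windows defines an OS environment variable (typically OS=Windows_NT), which is a common convention rather than something the standard guarantees:

program detect_os
   implicit none
   character(len=64) :: os_name
   integer :: length, status

   ! Assumption: Windows sets the OS environment variable; if it is absent
   ! (or does not mention Windows) we guess we are on a Unix-like system.
   call get_environment_variable("OS", os_name, length, status)

   if (status == 0 .and. index(os_name, "Windows") > 0) then
      call execute_command_line("dir > output.txt")
   else
      call execute_command_line("ls > output.txt")
   end if
end program detect_os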
There is no intrinsic function in Fortran for this. A common workaround is to use conditional compilation (through the makefile or compiler-supported macros), such as here. If you really insist on this kind of solution, you might consider writing an external function, e.g. in C. However, since your code is built for a fixed platform (Windows/Linux, not both), the first solution is preferable.
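For illustration, the conditional-compilation route might look roughly like this, assuming the compiler runs a preprocessor over the source (e.g. via a -cpp flag or a .F90 extension) and that a WINDOWS macro is supplied by the makefile on Windows builds:

! WINDOWS is a macro you would define yourself in the build system.
#ifdef WINDOWS
   resul = system('dir > output.txt')
#else
   resul = system('ls > output.txt')
#endif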
I have a problem understanding the semantics of Null<Real>() in C++; is it a function object?
Thanks,
Eric
Neither Null nor Real has any meaning provided by the C++ standard.
They're legal identifiers for use in your code. And their meaning is defined by your code.
The snippet you provided certainly looks like a call to a template function, or default construction of a template type. But it's hard to say without seeing the definitions.
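To make that concrete, here is one purely illustrative definition under which the expression Null<Real>() is legal; the actual meaning still depends on the definitions in the code you are reading (Real is assumed here to be an alias for a floating-point type):

#include <limits>

using Real = double;   // assumption: Real is just an alias

// Possibility 1: Null is a function template returning a sentinel "no value",
// so Null<Real>() is an ordinary template function call.
template <typename T>
T Null()
{
    return std::numeric_limits<T>::quiet_NaN();
}

// Possibility 2 (commented out): Null is a class template, in which case
// Null<Real>() default-constructs a temporary Null<Real> object, and it is a
// function object only if that class defines operator().
// template <typename T> struct Null { };

int main()
{
    Real r = Null<Real>();
    (void)r;
}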
For example, could I implement a rule that would change every string that followed the pattern '1..4' into the array [1,2,3,4]? In JavaScript:
//here you create a rule that changes every string that matches /^([0-9]+)\.\.([0-9]+)$/
//ever created into range($1, $2) (imagine $1 and $2 are the capture groups of the regexp)
var a = '1..4';
console.log(a);
>> output: [1,2,3,4];
Of course, I'm pretty confident that would be impossible in most languages. My question is: is there any language in which that would be possible? Or has anyone ever proposed something like that? Does this thing have a 'name' I can google to read more about?
Modifying the language from within itself falls under the umbrella of reflection and metaprogramming. It is referred to as behavioral reflection. It differs from structural reflection, which operates at the level of the application (e.g. classes, methods) and not the language level. Support for behavioral reflection varies greatly across languages.
We can broadly categorize language changes in two categories:
changes that modify the semantics (i.e. the rules) of the language itself (e.g. redefine the method lookup algorithm),
changes that modify the syntax (e.g. your syntax '1..4' to create arrays).
For case 1, certain languages expose the structure of the application (structural reflection) and the inner workings of their implementation (behavioral reflection) to the application itself via special objects, called meta-objects. Meta-objects are reifications of otherwise implicit aspects, which then become explicitly manipulable: the application can modify the meta-objects to redefine part of its structure, or part of the language. When it comes to language changes, the focus is usually on modifying message sending / method invocation, since it is the core mechanism of object-oriented languages. But the same idea could be applied to expose other aspects of the language, e.g. field accesses, synchronization primitives, foreach enumeration, etc., depending on the language.
For case 2, the program must be represented in a suitable data structure to be modified. For languages of the Lisp family, the program manipulates lists, and the program can itself be represented as lists. This is called homoiconicity and is handy for metaprogramming, hence the flexibility of Lisp-like languages. For other languages, the representation is usually an AST. Transforming the representation of the program, or rewriting it, is possible with macros, preprocessors, or hooks during compilation or class loading.
The line between 1 and 2 is however blurry. Syntactic changes can appear to modify the semantics of the language. For instance, I can rewrite all field accesses with proper getters and setters and perform additional logic there, say to implement transactional memory. Did I perform a semantic change of what a field access is, or merely a syntactic change?
Also, there are other constructs that fall between the lines. For instance, proxies and the #doesNotUnderstand trap are popular techniques to simulate the reification of message sends to some extent.
Lisp and Smalltalk have been very influential in the field of metaprogramming, and I think the two following projects/platforms are interesting to look at as representatives of each of these:
Racket, a Lisp-like language focused on growing languages from within the language
Helvetia, a Smalltalk extension to embed new languages into the host language by leveraging the AST of the host environment.
I hope you enjoyed this even if I did not really address your question ;)
Your desired change requires modifying the way literals are created. This is, AFAIK, not usually exposed to the application. The closest work that I can think of is Virtual Values for Language Extension, which tackled JavaScript.
Yes. Common Lisp (and certain other Lisps) has "reader macros" which allow the user to reprogram (incrementally) the mapping between the input stream and the actual language construct as parsed.
See http://dorophone.blogspot.com/2008/03/common-lisp-reader-macros-simple.html
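For a flavour of what that looks like, here is a rough, untested Common Lisp sketch that teaches the reader to turn $(1 4) into a call to a range function; both the $ character and the range helper are inventions for this example:

(defun range (start end)
  (loop for i from start to end collect i))

;; Install a reader macro on the $ character: when the reader sees $(1 4)
;; it reads the following list and produces the form (range 1 4).
(set-macro-character #\$
  (lambda (stream char)
    (declare (ignore char))
    (let ((bounds (read stream t nil t)))
      `(range ,(first bounds) ,(second bounds)))))

;; (print $(1 4))  ; now prints (1 2 3 4)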
If you want to operate on the level of objects, you will want to use a debugging/memory management framework that keeps track of all objects, and processes the rules on each evaluation step (nasty). This seems like the kind of thing you might be able to shoehorn into Smalltalk.
CLOS (Common Lisp Object System) allows redefinition of live objects.
Ultimately you need two things to implement this:
Access to the running system's AST (Abstract Syntax Tree), and
Access to the running system's objects.
You'll want to study meta-object protocols and the languages that use them, then the implementations of both the MOPs and the environment within which these programs are executed.
Image-based systems will be the easiest to modify (e.g., Lisp, potentially Smalltalk).
(Image-based systems store a snapshot of a running system, allowing complete shutdown and restarts, redefinitions, etc. of a complete environment, including existing objects, and their definitions.)
Ruby allows you to extend classes. For instance, this example adds functionality to the String class. But you can do more than add methods to classes. You can also overwrite methods, by defining a method that's already been defined. You may want to preserve access to the original method using alias_method.
Putting all this together, you can overload a constructor in Ruby, but in your case, there's a catch: It sounds like you want the constructor to return a different type. Constructors by definition return instances of their class. If you just want it to return the string "[1,2,3,4]", that's simple enough:
class String
  alias_method :old_constructor, :initialize

  def initialize(*args)
    old_constructor(*args)
    # code that applies your transformation
  end
end
But there's no way to make it return an Array if that's what you want.
I have an upcoming project in which a core requirement will be to mutate the way a method works at runtime. Note that I'm not talking about a higher level OO concept like "shadow one method with another", although the practical effect would be similar.
The key properties I'm after are:
I must be able to modify the method in such a way that I can add new expressions, remove existing expressions, or modify any of the expressions that take place in it.
After modifying the method, subsequent calls to that method would invoke the new sequence of operations. (Or, if the language binds methods rather than evaluating every single time, provide me a way to unbind/rebind the new method.)
Ideally, I would like to manipulate the atomic units of the language (e.g., "invoke method foo on object bar") and not the assembly directly (e.g. "push these three parameters onto the stack"). In other words, I'd like to be able to have high confidence that the operations I construct are semantically meaningful in the language. But I'll take what I can get.
If you're not sure if a candidate language meets these criteria, here's a simple litmus test:
Can you write another method called clean which:
accepts a method m as input
returns another method m2 that performs the same operations as m
such that m2 is identical to m, but doesn't contain any calls to the print-to-standard-out method in your language (puts, System.Console.WriteLn, println, etc.)?
I'd like to do some preliminary research now and figure out what the strongest candidates are. Having a large, active community is as important to me as the practicality of implementing what I want to do. I am aware that there may be some uncharted territory here, since manipulating bytecode directly is not typically an operation that needs to be exposed.
What are the choices available to me? If possible, can you provide a toy example in one or more of the languages that you recommend, or point me to a recent example?
Update: The reason I'm after this is that I'd like to write a program which is capable of modifying itself at runtime in response to new information. This modification goes beyond mere parameters or configurable data, but full-fledged, evolved changes in behavior. (No, I'm not writing a virus. ;) )
Well, you could always use .NET and the Expression libraries to build up expressions. That, I think, is really your best bet, as you can build up representations of commands in memory, and there is good library support for manipulating, traversing, etc.
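As a rough sketch of that idea (and, incidentally, of the question's "clean" litmus test), here is how an expression tree can be rewritten to drop Console.WriteLine calls and then compiled into a new delegate; this is illustrative, not production code:

using System;
using System.Linq.Expressions;

class RemoveWriteLine : ExpressionVisitor
{
    protected override Expression VisitMethodCall(MethodCallExpression node)
    {
        // Replace any call to Console.WriteLine with an empty (no-op) expression.
        if (node.Method.DeclaringType == typeof(Console) && node.Method.Name == "WriteLine")
            return Expression.Empty();
        return base.VisitMethodCall(node);
    }
}

class Demo
{
    static void Main()
    {
        Expression<Action<int>> noisy = x => Console.WriteLine(x * 2);

        // "Clean" the expression, then compile and invoke the rewritten delegate.
        var clean = (Expression<Action<int>>)new RemoveWriteLine().Visit(noisy);
        clean.Compile()(21);   // runs, but no longer prints anything
    }
}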
Well, those languages with really strong macro support (in particular Lisps) could qualify.
But are you sure you actually need to go this deeply? I don't know what you're trying to do, but I suppose you could emulate it without actually getting too deeply into metaprogramming. Say, instead of using a method and manipulating it, use a collection of functions (with some way of sharing state, e.g. an object holding state passed to each).
I would say Groovy can do this.
For example
class Foo {
    void bar() {
        println "foobar"
    }
}
Foo.metaClass.bar = {->
    println "barfoo"
}
Or for a specific instance of Foo, without affecting other instances:
fooInstance.metaClass.bar = {->
    println "instance barfoo"
}
Using this approach I can modify, remove or add expressions in the method, and subsequent calls will use the new method. You can do quite a lot with the Groovy metaClass.
In Java, many professional frameworks do this using the open source ASM framework.
Here is a list of well-known Java apps and libs that include ASM.
A few years ago, BCEL was also widely used.
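To give a feel for what that looks like, here is a rough, untested sketch in the spirit of the question's "clean" litmus test: it uses ASM's visitor API to rewrite a compiled class so that calls to PrintStream.println(String) are dropped (the class name PrintlnStripper is made up for this example):

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class PrintlnStripper {
    public static byte[] clean(byte[] originalClass) {
        ClassReader reader = new ClassReader(originalClass);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_FRAMES);
        reader.accept(new ClassVisitor(Opcodes.ASM9, writer) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String descriptor,
                                             String signature, String[] exceptions) {
                MethodVisitor mv = super.visitMethod(access, name, descriptor,
                                                     signature, exceptions);
                return new MethodVisitor(Opcodes.ASM9, mv) {
                    @Override
                    public void visitMethodInsn(int opcode, String owner, String name,
                                                String descriptor, boolean isInterface) {
                        if (opcode == Opcodes.INVOKEVIRTUAL
                                && owner.equals("java/io/PrintStream")
                                && name.equals("println")
                                && descriptor.equals("(Ljava/lang/String;)V")) {
                            // Drop the call: pop the String argument and the
                            // PrintStream receiver that are already on the stack.
                            super.visitInsn(Opcodes.POP);
                            super.visitInsn(Opcodes.POP);
                            return;
                        }
                        super.visitMethodInsn(opcode, owner, name, descriptor, isInterface);
                    }
                };
            }
        }, 0);
        return writer.toByteArray();
    }
}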
There are languages/environments that allows a real runtime modification - for example, Common Lisp, Smalltalk, Forth. Use one of them if you really know what you're doing. Otherwise you can simply employ an interpreter pattern for an evolving part of your code, it is possible (and trivial) with any OO or functional language.
I suppose there could be historical reasons for this naming, and that other languages have a similar feature, but it also seems to me that parameters have always had a name in C#. Arguments are the unnamed ones. Or is there a particular reason why this terminology was chosen?
Oh, you wanted arguments! Sorry, this is parameters - arguments are two doors down the hall on the left.
Yes, you're absolutely right (to my mind, anyway). Ironically, although I'm usually picky about these terms, I still use "parameter passing" when I should probably talk about "argument passing". I suppose one could argue that prior to C# 4.0, if you're calling a method you don't care about the parameter names, whereas the names become part of the significant metadata when you can specify them on the arguments as well.
I agree that it makes a difference, and that terminology is important.
"Optional parameters" is definitely okay though - that's adding metadata to the parameter when you couldn't do so before :) (Having said that, it's not going to be optional in terms of the generated IL...)
Would you like me to ask the team for their feedback?
I don't think so. The names are quite definitely the names of parameters, as they are defined and given a specific meaning in the method definition, where they are properly called the parameters to the method. At the call site, arguments can now be tagged with the name of the parameter that they supply a value for.
The new term refers to the perspective of the method caller - which is logical because that's where the feature applies. Previously, callers only had to think of parameters as being "positional parameters". Now they can optionally treat them as "named parameters" - hence the name.
I don't know if it's worth adding now, but MS calls them named arguments anyway. See named and optional arguments.
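For completeness, a minimal sketch of the C# 4.0 feature under discussion; the Mailer class and Send method are made up for illustration:

class Mailer
{
    // "subject" is an optional parameter because it declares a default value.
    public static void Send(string to, string subject = "(no subject)")
    {
        System.Console.WriteLine("To: " + to + ", Subject: " + subject);
    }
}

class Program
{
    static void Main()
    {
        // Positional argument for "to", named argument for "subject":
        // the name used at the call site is the name of the parameter.
        Mailer.Send("alice@example.com", subject: "Hello");

        // The optional parameter can simply be omitted.
        Mailer.Send(to: "bob@example.com");
    }
}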