Origins of the name 'main' for program entry point? - programming-languages

Out of curiosity, what are the origins of the name 'main' for a program entry point?

Before C, there was IBM's PL/I. In PL/I you declared a procedure with options. If you wrote
MUMBLE: PROC OPTIONS(MAIN);
that told the compiler that the MUMBLE procedure was the main procedure. PL/I may have adopted this convention from elsewhere, or C may have adopted it from PL/I, or maybe it was just in the air. But it definitely predates C.
(If anyone is wondering why all upper case, the IBM keypunches of the day did not support lower-case characters. Yes, I wrote programs on punched cards. That's probably why I'm a bit shaky on the syntax; it has been a while.)

I'm pretty sure that it has to do with the fact that it is the 'main' function of the program. Anything more than that is unknown to me.

In Fortran the main program was the main program even though it didn't have a name. It was distinguished from subroutines and functions by having an executable statement (or other non-commentary statement) without a preceding SUBROUTINE or FUNCTION statement.
When later languages decided they wanted the main routine to start with a beginning line like other procedures or functions, some of them adopted the word MAIN or main in various ways.
As someone else pointed out, Pascal did it differently. Shell scripts and Perl resemble Fortran.

My understanding (though I couldn't find a reference to confirm) is that some early languages had a notion of a main procedure (the first might have been Ada), even though you did not have to name it main().
I think that C was the first language to actually use this token as a name. C largely replaced Pascal, which didn't have a named start procedure, if I remember correctly.
From there it influenced subsequent C-inspired languages like C++, Java, and C#.
It also culturally influenced languages that do not mandate such a function, like Python.

Related

Does Lisp's treatment of code as data make it more vulnerable to security exploits than a language that doesn't treat code as data? [closed]

I know that this might be a stupid question but I was curious.
Since Lisp treats code and data the same, does this mean that it's easier to write a payload and pass it as "innocent" data that can be used to exploit programs? In comparison to languages that don't do so?
For example, in Python you can do something like this:
malicious_str = "print('this is a malicious string')"
user_in = eval(malicious_str)
>>> this is a malicious string
P.S. I have just started learning Lisp.
No, I don't think it does. In fact because of what is normally meant by 'code is data' in Lisp, it is potentially less vulnerable than some other languages.
[Note: this answer is really about Common Lisp: see the end for a note about that.]
There are two senses in which 'code can be data' in a language.
Turning objects into executable code: eval & friends
This is the first sense. What this means is that you can, say, take a string or some other object (not all types of object, obviously) and say 'turn this into something I can execute, and do that'.
Any language that can do this has either
to be extremely careful about doing this on unconstrained data;
or to be able to be certain that a given program does not actually do this.
Plenty of languages have equivalents of eval and its relations, so plenty of languages have this problem. You give an example of Python for instance, which is a good one, and there are probably other examples in Python (I've written programs even in Python 2 which supported dynamic loading of modules at runtime, which executes potentially arbitrary code, and I think this stuff is much better integrated in Python 3).
This is also not just a property of a language: it's a property of a system. C can't do this, right? Well, yes it can if you're on any kind of reasonable *nixy platform. Not only can you use an exec-family function, but you can probably dynamically load a shared library and execute code in it.
So one solution to this problem is to, somehow, be able to be certain that a given program doesn't do this. One thing that helps is if there are a finite, known number of ways of doing it. In Common Lisp I think those are probably
eval of course;
unconstrained read (because of *read-eval*);
load;
compile;
compile-file;
and probably some others that I have forgotten.
Well, can you detect calls to those, statically, in a program? Not really: consider this:
(funcall (symbol-function (find-symbol s)) ...)
and now you're in trouble unless you have very good control over what s is: it might be "EVAL" for instance.
So that's frightening, but I don't think it's more frightening than what Python can do, for instance (almost certainly you can poke around in the namespace to find eval or something?). And something like that in a program ought to be a really big hint that bad things might happen.
I think there are probably two approaches to this, neither of which CL adopts but which implementations could (and perhaps even programs written in CL could).
One would be to be able to run programs in such a way that the finite set of bad functions above simply are disallowed: they'd signal errors if you tried to call them. An implementation could clearly do that (see below).
The other would be to have something like Perl's 'tainting' where data which came from a user needs to be explicitly looked-at by the program somehow before it's used. That doesn't guarantee safety of course, but it does make it harder to make silly mistakes: if s above came from user input and was thus tainted you'd have to explicitly say 'it's OK to use it' and, well, then it's up to you.
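As a rough sketch of what such tainting might look like at the user level (this is not something Common Lisp itself provides; the names here are made up purely for illustration):

(defstruct tainted value)   ; wrap untrusted strings so they can't be used by accident

(defun untaint (object)
  "Explicitly declare OBJECT safe and unwrap it."
  (if (tainted-p object)
      (tainted-value object)
      object))

;; User input arrives wrapped:
;;   (defparameter *s* (make-tainted :value "EVAL"))
;; so (find-symbol *s*) fails with a type error; you have to opt in explicitly:
;;   (find-symbol (untaint *s*))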
So this is a problem, but I don't think it's worse than the problems that very many other languages (and language-families) have.
An example of an implementation that can address the first approach is LispWorks: if you're building an application with LW, you typically create the binary with a function called deliver, which has options which allow you to remove the definitions of functions from the resulting binary whether or not the delivery process would otherwise leave them there. So, for instance
(deliver 'foo "x" 5
:functions-to-remove '(eval load compile compile-file read))
would result in an executable x which, whatever else it did, couldn't call those functions, because they're not present, at all.
Other implementations probably have similar features: I just don't know them as well.
But there's another sense in which 'code is data' in Lisp.
Program source code is available as structured data
This is the sense that people probably really mean when they say 'code is data' in Lisp, even if they don't know that. It's worth looking at your Python example again:
>>> eval("exit('exploded')")
exploded
$
So what eval eats is a string: a completely unstructured vector of characters. If you want to know whether that string contains something nasty, well, you've got a lot of work ahead of you (disclaimer: see below).
Compare this with CL:
> (let ((trying-to-be-bad "(end-the-world :now t)"))
    (eval trying-to-be-bad))
"(end-the-world :now t)"
OK, so that clearly didn't end the world. And it didn't end the world because eval evaluates a bit of Lisp source code, and the value of a string, as source code, is the string.
If I want to do something nasty I have to hand it an actual interesting structure:
> (let ((actually-bad '(eval (progn
                               (format *query-io* "? ")
                               (finish-output *query-io*)
                               (read *query-io*)))))
    (eval actually-bad))
? (defun foo () (foo))
foo
Now that's potentially quite nasty in at least several ways. But wait: in order to do this nasty thing, I had to hand eval a chunk of source code represented as an s-expression. And the structure of that s-expression is completely open to inspection by me. I can write a program which inspects this s-expression in any arbitrary way I like, and decides whether or not it is acceptable to me. That's just hugely easier than 'given this string, interpret it as a piece of source text for the language and tell me if it is dangerous':
the process of turning the sequence of characters into an s-expression has happened already;
the structure of s-expressions is both simple and standard.
So in this sense of 'code is data', Lisp is potentially much safer than other languages which have versions of eval which eat strings, like Python, say, because code is structured, standard, simple data. Lisp has an answer to the terrible 'language in a string' problem.
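To make that concrete, here is a minimal sketch (mine, not anything standard) of such an inspection: walk the s-expression and reject anything that mentions one of the dangerous operators listed earlier. As the funcall/find-symbol example above shows, a check like this is not a complete defence, but it illustrates how easy structured code is to inspect:

(defparameter *forbidden* '(eval load compile compile-file read))

(defun acceptable-p (form)
  "True if FORM mentions none of the symbols in *FORBIDDEN*."
  (cond ((symbolp form) (not (member form *forbidden*)))
        ((consp form) (and (acceptable-p (car form))
                           (acceptable-p (cdr form))))
        (t t)))

;; (acceptable-p '(format t "hello"))          => T
;; (acceptable-p '(eval (read *query-io*)))    => NIL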
I am fairly sure that Python does in fact have some approach to making the parse tree available in a standard way which can be inspected. But eval still happily eats strings.
As I said above, this answer is about Common Lisp. But there are many other Lisps of course, which will have varying versions of this problem. Racket for instance probably can really fairly tightly constrain things, using sandboxed execution and modules, although I haven't explored this.
Any language can be exploited if you are not careful.
A well-known attack against Lisp is via the #. reader macro:
(read-from-string "#.(start-the-war)")
will start the war if *read-eval* is non-nil; this is why one should always bind it to nil when reading from an untrusted stream.
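For example, a minimal sketch of that defence (read-untrusted is just an illustrative name):

(defun read-untrusted (string)
  "Read one form from STRING with read-time evaluation disabled."
  (let ((*read-eval* nil))        ; #.(...) now signals a reader-error
    (read-from-string string)))

;; (read-untrusted "#.(start-the-war)") signals an error
;; instead of evaluating the form at read time.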
However, this is not directly related to "code is data" doctrine...

How to retrieve the type of architecture (linux versus Windows) within my fortran code

How can I retrieve the type of architecture (linux versus Windows) in my fortran code? Is there some sort of intrinsic function or subroutine that gives this information? Then I would like to use a switch like this every time I have a system call:
if (trim(adjustl(Arch))=='Linux') then
   resul = system('ls > output.txt')
elseif (trim(adjustl(Arch))=='Windows') then
   resul = system('dir > output.txt')
else
   write(*,*) 'architecture not supported'
   stop
endif
thanks
A.
The Fortran 2003 standard introduced the GET_ENVIRONMENT_VARIABLE intrinsic subroutine. A simple form of call would be
call GET_ENVIRONMENT_VARIABLE (NAME, VALUE)
which will return the value of the variable called NAME in VALUE. The routine has other optional arguments; your favourite reference documentation will explain them all. This rather assumes that you can find an environment variable that tells you what the executing platform is.
If your compiler doesn't yet implement this standard approach it is extremely likely to have a non-standard approach; a routine called getenv used to be available on more than one of the Fortran compilers I've used in the recent past.
The 2008 standard introduced a standard function COMPILER_OPTIONS which will return a string containing the compilation options used for the program, if, that is, the compiler supports this sort of thing. This seems to be less widely implemented than GET_ENVIRONMENT_VARIABLE; as ever, consult your compiler documentation for details and availability. If it is available it may also be useful to you.
You may also be interested in the 2008-introduced subroutine EXECUTE_COMMAND_LINE which is the standard replacement for the widely-implemented but non-standard system routine that you use in your snippet. This is already available in a number of current Fortran compilers.
There is no intrinsic function in Fortran for this. A common workaround is to use conditional compilation (through a makefile or compiler-supported macros) such as here. If you really insist on this kind of solution, you might consider writing an external function, e.g. in C. However, since your code is built for a fixed platform (Windows/Linux, not both), the first solution is preferable.

A language in which everything compiles

I'm trying to do some research for a new project, and I need to create objects dynamically from random data.
For this to work, I need a language / compiler that doesn't have problems with weird uncompilable code lying around.
Basically, I need the random code to compile (or be interpreted) as much as possible, meaning that the uncompilable parts will be ignored, and only the compilable parts will create the objects (which could then be run).
Object Oriented-ness is not a must, but is a very strong advantage.
I thought of ASM, but it's very messy, and I'd probably need more readable code.
Thanks!
It sounds like you're doing something very much like genetic programming; even if you aren't, GP has to solve some of the same problems—using randomness to generate valid programs. The approach to this that is typically used is to work with a syntax tree: rather than storing x + y * 3 - 2, you store something like the following:
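        -
       / \
      +   2
     / \
    x   *
       / \
      y   3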
Then, instead of randomly changing the syntax, one can randomly change nodes in the tree instead. And if x should randomly change to, say, +, you can statically know that this means you need to insert two children (or not, depending on how you define +).
A good choice for a language to work with for this would be any Lisp dialect. In a Lisp, the above program would be written (- (+ x (* y 3)) 2), which is just a linearization of the syntax tree using parentheses to show depth. And in fact, Lisps expose this feature: you can just as easily work with the object '(- (+ x (* y 3)) 2) (note the leading quote). This is a three-element list, whose first element is -, second element is another list, and third element is 2. And, though you might or might not want it for your particular application, there's an eval function, such that (eval '(- (+ x (* y 3)) 2)) will take in the given list, treat it as a Lisp syntax tree/program, and evaluate it. This is what makes Lisps so attractive for doing this sort of work; Lisp syntax is basically a reification of the syntax-tree, and if you operate at the syntax-tree level, you can work on code as though it was a value. Lisp won't help you read /dev/random as a program directly, but with a little interpretation layered on top, you should be able to get what you want.
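A rough sketch of what 'randomly change nodes in the tree' can look like when programs are s-expressions (the names and the restriction to binary operators are my own simplifications, not something from any GP library):

(defparameter *operators* '(+ - *))
(defparameter *terminals* '(x y 1 2 3))

(defun random-elt (list)
  (elt list (random (length list))))

(defun random-subtree ()
  "Either a random leaf, or a random binary operator over two random leaves."
  (if (zerop (random 2))
      (random-elt *terminals*)
      (list (random-elt *operators*)
            (random-elt *terminals*)
            (random-elt *terminals*))))

(defun mutate (tree &optional (p 0.2))
  "Copy TREE, replacing each node by a fresh random subtree with probability P."
  (cond ((< (random 1.0) p) (random-subtree))
        ((consp tree) (list (first tree)
                            (mutate (second tree) p)
                            (mutate (third tree) p)))
        (t tree)))

;; (mutate '(- (+ x (* y 3)) 2)) might return (- (+ x (* y 3)) (+ x 1)), say.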
I should also mention, though I don't know anything about it (not that I know much about ordinary genetic programming) the existence of linear genetic programming. This is sort of like the assembly model that you mentioned—a linear stream of very, very simple instructions. The advantage here would seem to be that if you are working with /dev/random or something like it, the amount of interpretation needed is very small; the disadvantage would be, as you mentioned, the low-level nature of the code.
I'm not sure if this is what you're looking for, but any programming language can be made to function this way. For any programming language P, define the language P_always as follows:
If p is a valid program in P, then p is a valid program in P_always whose meaning is the same as its meaning in P.
If p is not a valid program in P, then p is a valid program in P_always whose meaning is the same as a program that immediately terminates.
For example, I could make the language C++_always so that this program:
#include <iostream>
using namespace std;

int main() {
    cout << "Hello, world!" << endl;
}
would compile and print "Hello, world!", while this program:
Hahaha! This isn't legal C++ code!
Would be a legal program that just does absolutely nothing.
To solve your original problem, just take any OOP language like Java, Smalltalk, etc. and construct the appropriate Java_always, Smalltalk_always, etc. language from it. Again, I'm not sure if this is at all what you're looking for, but it could be done very easily.
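A minimal sketch of the same construction for Common Lisp source text (make-always-runnable is a name made up here): turn a string into a runnable thunk if it is a valid program, and into a do-nothing thunk otherwise.

(defun make-always-runnable (source-string)
  "Return a thunk that runs SOURCE-STRING as Lisp if it is valid, else does nothing."
  (handler-case
      (let* ((*read-eval* nil)                       ; don't evaluate at read time
             (form (read-from-string source-string))
             (fn (compile nil `(lambda () ,form))))
        (lambda () (ignore-errors (funcall fn))))    ; runtime errors become no-ops too
    (error () (lambda () nil))))                     ; unreadable/uncompilable: do nothing

;; (funcall (make-always-runnable "(+ 1 2)"))   => 3
;; (funcall (make-always-runnable "(+ 1"))      => NIL (reader error, caught)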
Alternatively, consider finding a grammar for any OOP language and then using that grammar to produce random syntactically valid programs. You could then filter those programs down by using the P_always programming language for that language to eliminate syntactically but not semantically valid programs.
Divide the ASCII byte values into 9 classes (division modulo 9 would help). Then assign them to Brainfuck codewords (see http://en.wikipedia.org/wiki/Brainfuck). Then interpret as Brainfuck.
There you go, any sequence of ASCII characters is a program. Not that it's going to do anything sensible... This approach has a much better chance, compared to templatetypedef's answer, of getting a nontrivial program from a random byte sequence.
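A rough sketch of such a mapping in Common Lisp (which byte class maps to which command is arbitrary here, and I treat the ninth class as a no-op):

(defun bytes->brainfuck (bytes)
  "Map each byte to a Brainfuck command by its value modulo 9; the ninth class is dropped."
  (remove #\Space
          (map 'string
               (lambda (b) (char "><+-.,[] " (mod b 9)))
               bytes)))

;; (bytes->brainfuck (map 'list #'char-code "any text at all"))
;; => a Brainfuck program, though probably not a sensible one.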
Text Editors
You could try feeding random character strings to an editor like Emacs or VI. Many (most?) characters will perform an editing action but some will do nothing (other than beep, perhaps). You would have to ensure that the random code mutator never generates the character sequence that exits the editor. However, this experience would be much like programming a Turing machine -- the code is not too readable.
Mathematica
In Mathematica, undefined symbols and other expressions evaluate to themselves, without error. So, that language might be a viable choice if you can arrange for the random code mutator to always generate well-formed expressions. This would be readily achievable since the basic Mathematica syntax is trivial, making it easy to operate on syntactic units rather than at the character level. It would be even easier if the mutator were written in Mathematica itself since expression-munging is Mathematica's forte. You could define a mini-language of valid operations within a Mathematica package that does not import the system-defined symbols. This would allow you to generate well-formed expressions to your heart's content without fear of generating a dangerous expression, like DeleteFile[FileNames["*.*", "/", Infinity]].
I believe Common Lisp should suit your needs. I always have some code in my SLIME/Emacs session that wouldn't compile. You can always tweak things, redefine functions in run-time. It is actually very good for prototyping.
A few years ago it took me quite a while to learn. But nowadays we have quicklisp and everything is so much easier.
Here I describe my development environment:
Install lisp on my linux machine
PS: I want to give an example where Common Lisp was useful for me:
Up to maybe 2004 I used to write small programs in C (the keep-it-simple Unix way).
For the last 3 years I have had to get lots of different hardware running: motorized stages, scientific cameras, IO cards.
The cameras turned out to be quite annoying. Usually you have to cool them down to -50 degrees Celsius or so, and (in some SDKs) they don't like it when you close them. But this is exactly how my C development cycle worked: write (30 s), compile (1 s), run (0.1 s), repeat.
Eventually I decided to just use Common Lisp. It is often straightforward to define the foreign function interfaces to talk to the SDKs, and I can do this without ever leaving the running Lisp image. I start the editor in the morning, define the open-device function to talk to the device, and after 3 hours I have enough of the functions implemented to set gain, temperature and region of interest, and to obtain video.
Then I can often put the SDK manual away and just use the camera.
I used the same interactive programming approach when I had to parse some webpage or some weird XML.

Does "The whole language always available" hold in case of Clojure?

Ninth bullet point in Paul Graham's What Made Lisp Different says,
9. The whole language always available.
There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.
Running code at read-time lets users reprogram Lisp's syntax; running code at compile-time is the basis of macros; compiling at runtime is the basis of Lisp's use as an extension language in programs like Emacs; and reading at runtime enables programs to communicate using s-expressions, an idea recently reinvented as XML.
Does this last bullet point hold for Clojure?
You can mix runtime and compile-time freely in Clojure, although Common Lisp is still somewhat more flexible here (due to the presence of compiler macros and symbol macros and a fully supported macrolet; Clojure has an advantage in its cool approach to macro hygiene through automagic symbol resolution in syntax-quote). The reader is currently closed, so the free mixing of runtime, compile-time and read-time is not possible [1].
[1] Except through unsupported clever hacks.
It does hold,
(eval (read-string "(println \"Hello World!!\")"))
Hello World!!
nil
Just like Emacs, you can have your program configuration in Clojure. One Clojure project I know of is static, which allows you to have your template as a Clojure vector along with arbitrary code that will be executed at read time.

Is there a compiled* programming language with dynamic, maybe even weak typing?

I wondered if there is a programming language which compiles to machine code/binary (not bytecode then executed by a VM, that's something completely different when considering typing) that features dynamic and/or weak typing, e.g:
Think of a compiled language where:
Variables don't need to be declared
Variables can be created during runtime
Functions can return values of different types
Questions:
Is there such a programming language?
(Why) not?
I think that a dynamically yet strongly typed, compiled language would really make sense, but is it possible?
I believe Lisp fits that description.
http://en.wikipedia.org/wiki/Common_Lisp
Yes, it is possible. See Julia. It is a dynamic language (you can write programs without types) but it never runs on a VM. It compiles the program to native code at runtime (JIT compilation).
Objective-C might have some of the properties you seek. Classes can be opened and altered in runtime, and you can send any kind of message to an object, whether it usually responds to it or not. In that way, you can implement duck typing, much like in Ruby. The type id, roughly equivalent to a void*, can be endowed with interfaces that specify a contract that the (otherwise unknown) type will adhere to.
C# 4.0 has many, if not all of these characteristics. If you really want native machine code, you can compile the bytecode down to machine code using a utility.
In particular, the use of the dynamic keyword allows objects and their members to be bound dynamically at runtime.
Check out Anders Hejlsberg's video, The Future of C#, for a primer:
http://channel9.msdn.com/pdc2008/TL16/
Objective-C has many of the features you mention: it compiles to machine code and is effectively dynamically typed with respect to object instances. The id type can store any class instance and Objective-C uses message passing instead of member function calls. Methods can be created/added at runtime. The Objective-C runtime can also synthesize class instance variables at runtime, but local variables still need to be declared (just as in C).
C# 4.0 has many of these features, except that it is compiled to IL (bytecode) and interpreted using a virtual machine (the CLR). This brings up an interesting point, however: if bytecode is just-in-time compiled to machine code, does that count? If so, it opens the door to not only any of the .NET languages, but Python (see PyPy or Unladen Swallow or IronPython) and Ruby (see MacRuby or IronRuby) and many other dynamically typed languages, not to mention many Lisp variants.
In a similar vein to Lisp, there is Factor, a concatenative* language with no variables by default, dynamic typing, and a flexible object system. Factor code can be run in the interactive interpreter, or compiled to a native executable using its deploy function.
* point-free functional stack-based
VB 6 has most of that
I don't know of any language that has exactly those capabilities. I can think of two that have a significant subset, though:
D has type inference, garbage collection, and powerful metaprogramming facilities, yet compiles to efficient machine code. It does not have dynamic typing, however.
C# can be compiled directly to machine code via the mono project. C# has a similar feature set to D, but again without dynamic typing.
Compiling Python to C probably meets these criteria.
Write in Python.
Compile Python to Executable. See Process to convert simple Python script into Windows executable. Also see Writing code translator from Python to C?
Elixir does this. The flexibility of dynamic variable typing helps with doing hot-code updates (for which Erlang was designed). Files are compiled to run on the BEAM, the Erlang/Elixir VM.
C/C++ both indirectly support dynamic typing using void*. C++ example:
#include <cstdlib>
#include <new>
#include <string>

int main() {
    void* x = std::malloc(sizeof(int));
    *(int*)x = 5;                      // store an int through the untyped pointer
    std::free(x);

    x = std::malloc(sizeof(std::string));
    // a std::string must be constructed in the raw storage before use
    std::string* s = new (x) std::string("Hello world");
    s->~basic_string();                // destroy it before releasing the storage
    std::free(x);
    return 0;
}
In C++17, std::any can be used as well:
#include <string>
#include <any>

int main() {
    std::any x = 5;
    x = std::string("Hello world");
    return 0;
}
Of course, duck typing is rarely used or needed in C/C++, and both of these options have issues (void* is unsafe, std::any is a huge performance bottleneck).
Another example of what you may be looking for is the V8 engine for JavaScript. It is a JIT compiler, meaning the source code is compiled to bytecode and then machine code at runtime, although this is hidden from the user.
