I use the following class to easily store data of my songs.
class Song:
    """The class to store the details of each song"""
    attsToStore = ('Name', 'Artist', 'Album', 'Genre', 'Location')

    def __init__(self):
        for att in self.attsToStore:
            exec 'self.%s=None' % (att.lower()) in locals()

    def setDetail(self, key, val):
        if key in self.attsToStore:
            exec 'self.%s=val' % (key.lower()) in locals()
I feel that this is just much more extensible than writing out an if/else block. However, I have heard that eval is unsafe. Is it? What is the risk? How can I solve the underlying problem in my class (setting attributes of self dynamically) without incurring that risk?
Yes, using eval is a bad practice. Just to name a few reasons:
There is almost always a better way to do it
Very dangerous and insecure
Makes debugging difficult
Slow
In your case you can use setattr instead:
class Song:
    """The class to store the details of each song"""
    attsToStore = ('Name', 'Artist', 'Album', 'Genre', 'Location')

    def __init__(self):
        for att in self.attsToStore:
            setattr(self, att.lower(), None)

    def setDetail(self, key, val):
        if key in self.attsToStore:
            setattr(self, key.lower(), val)
There are some cases where you have to use eval or exec, but they are rare. Using eval in your case is certainly bad practice. I'm emphasizing bad practice because eval and exec are frequently used in the wrong place.
Replying to the comments:
It looks like some disagree that eval is 'very dangerous and insecure' in the OP case. That might be true for this specific case but not in general. The question was general and the reasons I listed are true for the general case as well.
Using eval is weak, not a clearly bad practice.
It violates the "Fundamental Principle of Software". Your source is not the sum total of what's executable. In addition to your source, there are the arguments to eval, which must be clearly understood. For this reason, it's the tool of last resort.
It's usually a sign of thoughtless design. There's rarely a good reason for dynamic source code, built on-the-fly. Almost anything can be done with delegation and other OO design techniques.
It leads to relatively slow on-the-fly compilation of small pieces of code. An overhead which can be avoided by using better design patterns.
As a footnote, in the hands of deranged sociopaths, it may not work out well. However, when confronted with deranged sociopathic users or administrators, it's best not to give them interpreted Python in the first place. In the hands of the truly evil, Python can be a liability; eval doesn't increase the risk at all.
Yes, it is:
Hack using Python:
>>> eval(input())
"__import__('os').listdir('.')"
...........
........... #dir listing
...........
The below code will list all tasks running on a Windows machine.
>>> eval(input())
"__import__('subprocess').Popen(['tasklist'],stdout=__import__('subprocess').PIPE).communicate()[0]"
In Linux:
>>> eval(input())
"__import__('subprocess').Popen(['ps', 'aux'],stdout=__import__('subprocess').PIPE).communicate()[0]"
In this case, yes. Instead of
exec 'self.Foo=val'
you should use the builtin function setattr:
setattr(self, 'Foo', val)
Other users pointed out how your code can be changed as to not depend on eval; I'll offer a legitimate use-case for using eval, one that is found even in CPython: testing.
Here's one example I found in test_unary.py, which tests whether (+|-|~)b'a' raises a TypeError:
def test_bad_types(self):
    for op in '+', '-', '~':
        self.assertRaises(TypeError, eval, op + "b'a'")
        self.assertRaises(TypeError, eval, op + "'a'")
The usage is clearly not bad practice here; you define the input and merely observe behavior. eval is handy for testing.
Take a look at this search for eval, performed on the CPython git repository; testing with eval is heavily used.
It's worth noting that for the specific problem in question, there are several alternatives to using eval:
The simplest, as noted, is using setattr:
def __init__(self):
    for name in self.attsToStore:
        setattr(self, name, None)
A less obvious approach is updating the object's __dict__ object directly. If all you want to do is initialize the attributes to None, then this is less straightforward than the above. But consider this:
def __init__(self, **kwargs):
    for name in self.attsToStore:
        self.__dict__[name] = kwargs.get(name, None)
This allows you to pass keyword arguments to the constructor, e.g.:
s = Song(name='History', artist='The Verve')
It also allows you to make your use of locals() more explicit, e.g.:
s = Song(**locals())
...and, if you really want to assign None to the attributes whose names are found in locals():
s = Song(**{k: None for k in locals()})
Another approach to providing an object with default values for a list of attributes is to define the class's __getattr__ method:
def __getattr__(self, name):
    if name in self.attsToStore:
        return None
    raise AttributeError(name)
This method gets called only when the named attribute isn't found in the normal way. This approach is somewhat less straightforward than simply setting the attributes in the constructor or updating the __dict__, but it has the merit of not actually creating the attribute until it's assigned, which can substantially reduce the class's memory usage. (Note that __getattr__ should raise AttributeError, not NameError, so that things like hasattr keep working.)
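A quick sketch of that point (with lowercased attribute names so the snippet stands alone): the default value comes from __getattr__, and nothing is stored on the instance until you actually assign.

```python
class Song:
    attsToStore = ('name', 'artist')

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        if name in self.attsToStore:
            return None
        raise AttributeError(name)

s = Song()
print(s.artist)     # None, served by __getattr__
print(s.__dict__)   # {} - no attribute was actually created
s.artist = 'The Verve'
print(s.artist)     # normal lookup wins once assigned
```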
The point of all this: There are lots of reasons, in general, to avoid eval - the security problem of executing code that you don't control, the practical problem of code you can't debug, etc. But an even more important reason is that generally, you don't need to use it. Python exposes so much of its internal mechanisms to the programmer that you rarely really need to write code that writes code.
When eval() is used to process user-provided input, you enable the user to Drop-to-REPL providing something like this:
"__import__('code').InteractiveConsole(locals=globals()).interact()"
You may get away with it, but normally you don't want vectors for arbitrary code execution in your applications.
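If all you need from the user is data (not code), the standard library's ast.literal_eval is the usual safe replacement: it evaluates Python literals only and rejects anything else, including payloads like the ones above.

```python
import ast

print(ast.literal_eval("[1, 2, 3]"))  # a plain literal: fine

try:
    ast.literal_eval("__import__('os').listdir('.')")
except ValueError:
    print("rejected")  # function calls are not literals
```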
In addition to @Nadia Alramli's answer, since I am new to Python and was eager to check how using eval affects the timings, I tried a small program and here are the observations:
# Difference when printing an int with eval() vs. without eval():
# roughly 0.53 s per 100000 eval() calls

from datetime import datetime

def strOfNos():
    s = []
    for x in range(100000):
        s.append(str(x))
    return s

strOfNos()
print(datetime.now())
for x in strOfNos():
    print(x)  # print(eval(x))
print(datetime.now())
#when using eval(int)
#2018-10-29 12:36:08.206022
#2018-10-29 12:36:10.407911
#diff = 2.201889 s
#when using int only
#2018-10-29 12:37:50.022753
#2018-10-29 12:37:51.090045
#diff = 1.67292
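A tighter way to measure the same overhead is the standard library's timeit module, which avoids mixing print() I/O into the measurement. The exact numbers will vary by machine, but eval should come out clearly slower, because it compiles each string before evaluating it. A sketch:

```python
import timeit

strs = [str(x) for x in range(1000)]

# time converting the same strings with int() vs. eval()
t_int = timeit.timeit(lambda: [int(s) for s in strs], number=20)
t_eval = timeit.timeit(lambda: [eval(s) for s in strs], number=20)

print("int : %.4fs" % t_int)
print("eval: %.4fs" % t_eval)
```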
Related
I have an application running in Python 3.9.4 where I store class objects in sets (along with many other kinds of objects). I'm getting non-deterministic behavior even when PYTHONHASHSEED=0 because class objects get non-deterministic hash codes. I assume that's because class objects' hash codes come from their addresses in memory.
For example, here are two runs of a little test program, where Before and Equation are classes:
print(hash(Before), hash(Equation), hash(int))
304555224 304593057 271715397
print(hash(Before), hash(Equation), hash(int))
326601328 293027788 273337413
How can I get Python to generate deterministic hash values for class objects?
Is there a metaclass or something that I could monkey-patch so that all class objects, even int, get a hash function that I specify?
Hashes for classes are deterministic within the same process. Yes, in CPython they are memory based - but then you can't simply "move" a class object to another memory address using Python code.
If you happen to use some serialization/de-serialization transforms with the classes, the de-serialized objects will ordinarily be new objects, distinct from the original ones, and therefore will hash differently.
For the note: I could not reproduce the behavior you stated in the question: on the same process, the hashes for the class objects will be the same.
If you are calculating the hashes in different processes, though, they will differ. So, although you don't mention multiprocessing there, I assume that is your working case.
Then, indeed, implementing proper __hash__ and __eq__ methods on the metaclass can give you stable, across-process hashing - but you can't do that with built-in classes such as int: those are built in native code and can't be changed on the Python side. On the other hand, despite the hash numbers shown being different for these built-in classes, whatever you are using to serialize/deserialize your classes (which is what Python does when communicating data across processes, even if you don't do any explicit de/serializing) will resolve a built-in class back to the one and only such class object in the receiving process, so its differing hash number shouldn't matter in practice.
Then we come to this: while it is straightforward to add __eq__ and __hash__ methods to a metaclass for your classes, it'd be better to ensure that de-serializing always yields the same object (with the same id). Hash stability, as you put it, can ensure you always get an equal class, but it depends on how you write your code: it is a bit tricky to retrieve the instance that is already inside a set when you check membership with another instance that merely matches it - the most straightforward way is to build an identity dictionary out of the set, and then use the value:
my_registry_dict = {element: element for element in my_registry_set}
my_class = my_registry_dict[incoming_class]
With this in mind, we can have a custom metaclass that not only adds __eq__ and __hash__ (you have to pick which elements of the classes to compare for equality - class.__qualname__ can be a simple and functional attribute to use), but also customizes the __new__ method so that de-serializing the same class a second time always re-uses the first class object defined in the current process (i.e. ensuring the "singleton" behavior Python classes enjoy in non-corner cases like yours seems to be):
class Meta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if cls not in mcls.registry:
            mcls.registry[cls] = cls
        else:
            # reuse the previously created class
            cls = mcls.registry[cls]
        return cls

    def __hash__(cls):
        # when working with metaclasses, using the name `cls` instead of `self`
        # helps remind us that we are dealing with instances that are
        # actually classes.
        return hash(cls.__qualname__)

    def __eq__(cls, other):
        return cls.__qualname__ == other.__qualname__
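A quick sanity check of the idea (the metaclass is repeated here so the snippet stands alone): redefining a class with the same __qualname__ hands you back the original class object, and its hash depends only on that name.

```python
class Meta(type):
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if cls not in mcls.registry:
            mcls.registry[cls] = cls
        else:
            cls = mcls.registry[cls]  # reuse the previously created class
        return cls

    def __hash__(cls):
        return hash(cls.__qualname__)

    def __eq__(cls, other):
        return cls.__qualname__ == other.__qualname__

class Before(metaclass=Meta):
    pass

first = Before

class Before(metaclass=Meta):  # a second definition, as after de-serializing
    pass

print(Before is first)                 # the original class object is reused
print(hash(Before) == hash('Before'))  # hash depends only on the qualname
```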
I'd like to get the minimum for a complicated class, for which I have already written a strategy.
Is it possible to ask hypothesis to simply give me the minimum example for a given strategy?
Context here is writing a strategy and a default minimum (for use in @hypothesis.example) - surely the information for the latter is already contained in the former?
import dataclasses
import hypothesis
from hypothesis import strategies

@dataclasses.dataclass
class Foo:
    bar: int
    # Foo has many more attributes which are strategised over...

    @classmethod
    def strategy(cls):
        return strategies.builds(cls, bar=strategies.integers())

    @classmethod
    def minimal(cls):
        return hypothesis.minimal(cls.strategy())
Found the answer.
Use hypothesis.find:
"""Returns the minimal example from the given strategy specifier that
matches the predicate function condition.
So we want the following code for the above example:
minimal = hypothesis.find(
specifier=Foo.strategy(),
condition=lambda _: True,
)
You're correct that hypothesis.find(s, lambda _: True) is the way to do this.
Context here is writing a strategy and a default minimum (for use in @hypothesis.example()) - surely the information for the latter is already contained in the former?
It's not just contained in it, but generating examples will almost always start by generating the minimal example* - because if that fails, we can stop immediately! So you might not need to do this at all ;-)
(the rare exceptions are strategies with complicated filters, where the attempted-minimal example is rejected by a filter - and we just move on to reasonably-minimal examples rather than shrinking to find the minimal one)
I'd also note that st.builds() has native support for dataclasses, so you can omit the strategies (or pass hypothesis.infer) for any argument where you don't have more specific constraints than "any instance of the type". For example, there's no need to pass bar=st.integers() above!
Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I am going to be creating a script that parses through an XML file (very large, 0.5 GB+), and am trying to think of how to do it efficiently.
Normally, I would do this in AutoIt, as that's my 'normal' language to use for things, but I think it's more appropriate to do it in Python (plus I'd like to learn more python).
Normally how I'd do this is create a constant with all the 'columns' I'd need from the XML, use that to match and parse it into an array (actually two arrays, because of subrecords), then pass sets of the array(s) to the system of record as JSON objects/strings.
In Python, I'm not sure that's the best route. I was thinking about making a class of the object, then creating instances for each record/row of the XML that I'd convert to JSON and then submit. If I feel ambitious, I'd even work on getting it to be multithreaded. My best option would be to pull out a record, then submit it in the background while I work on the next record, up to say, 5 to 10 records, but perhaps that's not good.
My question is, does it seem like I'm using a class just to use a class, or does it seem like a good reason to do it? I admit my thinking is colored by the fact that I haven't used classes much (at all) before, and am using it because it's neat and new.
Is there actually a totally better way that I'm overlooking because I'm blinded by new/shiny concepts, or lack of knowledge of the program (this is probably likely to me)?
I'm hoping for answers that will guide me in a general direction - this is part of my learning the language and doing the research myself really helps me understand what I'm doing and why. Unfortunately, I think I need a guide here on this point.
This debate is largely situational in nature and will depend on what you intend to do within your program. The main thing I would consider is: do I need to encapsulate properties (data) and functionality (methods/functions) into a single grouping?
Some additional things that come to mind, in terms of pros vs. cons of using a class (object) in this context:
Reasons to use a class:
If potential future maintainability would warrant 'swapping' in a new class into an existing structure within the program.
If there are attributes that would hold true for all instances of the class.
If it makes logical sense to have a group of functions separated out from the rest of your program.
More concise options for ensuring immutability
Providing a type for the underlying fields meshes well with the rest of your program.
Reasons not to use a class:
The code can be maintained purely through the addition of new functions.
You aren't performing functional tasks on the fields stored (e.g. storing create_date, but needing only to work with age - this can lend itself better to an object that doesn't expose create_date, but rather just a function get_age).
You have severe performance optimization standards to meet and can't justify calls to functions to ensure encapsulation, any additional memory overhead, etc...
Generally, Python lends itself to using classes since it is an object-oriented language. However, compared to more heavily oop languages like C++ and Java, you can "get away" with a lot more in Python without using classes. If you want to explore using a class, I certainly think it would be a good exercise in use of the language.
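As a concrete sketch of the class-based route (the field names and XML shape here are entirely made up), the standard library covers every step: a dataclass models one record, xml.etree parses records out, and json serializes them for submission. For a half-gigabyte file you would swap fromstring for ET.iterparse and clear elements as you go, so the whole tree never lives in memory.

```python
import json
import xml.etree.ElementTree as ET
from dataclasses import dataclass, asdict

# Hypothetical record shape; your real XML columns will differ.
@dataclass
class Record:
    id: str
    name: str

XML = "<rows><row id='1' name='a'/><row id='2' name='b'/></rows>"

# parse each <row> element into a Record, then serialize the batch as JSON
records = [Record(row.get('id'), row.get('name'))
           for row in ET.fromstring(XML).iter('row')]
payload = json.dumps([asdict(r) for r in records])
print(payload)
```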
Edit:
Based on a follow-up comment, I wanted to provide an example of using named arguments to instantiate a class with optional fields. The general idea is that Python assigns arguments to parameters by position unless you name them explicitly. As an example:
def get_info(name, birthday, favorite_color):
    age = current_time - birthday
    return [name, age, favorite_color]
In this example, Python interprets the input arguments based on the order they appear when the method is called:
get_info('James', '03-05-1998', 'blue')
However, Python also allows for named arguments, which specify the parameter-internal field assignment explicitly:
get_info(name='James', birthday='03-05-1998', favorite_color='blue')
While at first glance this syntax appears more verbose, it actually allows for great flexibility: the ordering of named arguments doesn't matter, and you can set defaults in the method's signature for arguments that aren't passed in:
def get_info(name, birthday, favorite_color=None):
    age = current_time - birthday
    return [name, age, favorite_color]

get_info(name='James', birthday='03-05-1998')
Below I've provided a more in-depth working example of how named arguments could help the situation you've outlined in your comment (Many fields, not all of them required) Play around with constructing this object in various ways to see how the non-named parameters are required, but the named parameters are optional and will default to the values specified in the __init__() method:
class Car(object):
    """Initializes a new Car object. Requires a color, make, model, horsepower, price, and condition.
    Optional parameters include: wheel_size, moon_roof, premium_sound, interior_color, and interior_material."""

    def __init__(self, color, make, model, horsepower, price, condition, wheel_size=16,
                 moon_roof=None, premium_sound=None, interior_color='black', interior_material='cloth'):
        self.color = color
        self.make = make
        self.model = model
        self.horsepower = horsepower
        self.price = price
        self.condition = condition
        self.wheel_size = wheel_size
        self.moon_roof = moon_roof
        self.premium_sound = premium_sound
        self.interior_color = interior_color
        self.interior_material = interior_material

    # Prints attributes of the Car class and their associated values in no specific order.
    def print_car(self):
        fields = []
        for key, value in self.__dict__.items():
            fields.append(key + ': ')
            fields.append(str(value))
            fields.append('\n')
        print(''.join(fields))


# Executes the main program body
def main():
    stock_car = Car('Red', 'Honda', 'NSX', 290, 89000.00, 'New')
    stock_car.print_car()

    custom_car = Car('Black', 'Mitsubishi', 'Lancer Evolution', 280, 45000.00, 'New',
                     17, "Tinted Moonroof", "Bose", "Black/Red", "Suede/Leather")
    custom_car.print_car()


# Calls main() as the entry point for this program.
if __name__ == '__main__':
    main()
I've worked with tkinter for a while now.
There are two ways to set configuration options, or at least two that I know of:
1: frame.config(bg='#123456')
2: frame["bg"] = '#123456'
I use the second more often. Only when several options need to be set at the same time does the first seem useful to me.
Recently I was wondering if one of them is 'better' - faster, say, or with some other advantage.
I don't think it's a crucially important question, but maybe someone knows.
Studying the tkinter code base, we find the following:
class Frame(Widget):
    # Other code here

class Widget(BaseWidget, Pack, Place, Grid):
    pass

class BaseWidget(Misc):
    # other code here

class Misc:
    # various code

    def __setitem__(self, key, value):
        self.configure({key: value})
Therefore, the two methods are actually equivalent. The line
frame['bg'] = '#123456'
is interpreted as frame.__setitem__('bg','#123456'), which after passing through the inheritance chain finds itself on the internal class Misc which simply passes it to the configure method. As far as your question about efficiency is concerned, the first method is probably slightly faster because it doesn't need to be interpreted as much, but the speed difference is too little to be overly concerned with.
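The delegation can be sketched without a Tk display using a toy stand-in for Misc (this is not the real tkinter code, just the shape of it):

```python
class ToyWidget:
    """Mimics how tkinter's Misc routes item assignment to configure()."""
    def __init__(self):
        self._options = {}

    def configure(self, cnf=None, **kw):
        # tkinter merges a dict argument and keyword options; so does this toy
        if cnf:
            self._options.update(cnf)
        self._options.update(kw)

    config = configure  # tkinter aliases config to configure

    def __setitem__(self, key, value):
        self.configure({key: value})

w = ToyWidget()
w.config(bg='#123456')  # style 1
w['fg'] = 'white'       # style 2 - ends up in the same place
print(w._options)       # {'bg': '#123456', 'fg': 'white'}
```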
I have recently been thinking about writing self-modifying programs; I think it could be powerful and fun. So I am currently looking for a language that makes it easy for a program to modify its own code.
I read about C# (as a workaround) and its ability to compile and execute code at runtime, but that is too painful.
I am also thinking about assembly. It is easier there to change running code, but it is not very powerful (very raw).
Can you suggest a powerful language or feature that supports modifying code at runtime?
Example
This is what I mean by modifying code in runtime:
Start:
a=10,b=20,c=0;
label1: c=a+b;
....
label1= c=a*b;
goto label1;
and may be building a list of instructions:
code1.add(c=a+b);
code1.add(c=c*(c-1));
code1. execute();
Malbolge would be a good place to start. Every instruction is self-modifying, and it's a lot of fun(*) to play with.
(*) Disclaimer: May not actually be fun.
I highly recommend Lisp. Lisp data can be read and exec'd as code. Lisp code can be written out as data.
It is considered one of the canonical self-modifiable languages.
Example list(data):
'(+ 1 2 3)
or, calling the data as code
(eval '(+ 1 2 3))
runs the + function.
You can also go in and edit the members of the lists on the fly.
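The closest standard-library analogue in Python is the ast module, where code really is a tree of objects you can edit before compiling (a small sketch):

```python
import ast

tree = ast.parse("1 + 2 + 3", mode="eval")
# "1 + 2 + 3" parses as (1 + 2) + 3; swap the outer Add for a Mult
tree.body.op = ast.Mult()
result = eval(compile(tree, "<ast>", "eval"))
print(result)  # (1 + 2) * 3 == 9
```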
edit:
I wrote a program to dynamically generate a program and evaluate it on the fly, then report to me how it did compared to a baseline(div by 0 was the usual report, ha).
Every answer so far is about reflection/runtime compilation, but in the comments you mentioned you're interested in actual self-modifying code - code that modifies itself in-memory.
There is no way to do this in C#, Java, or even (portably) in C - that is, you cannot modify the loaded in-memory binary using these languages.
In general, the only way to do this is with assembly, and it's highly processor-dependent. In fact, it's highly operating-system dependent as well: to protect against polymorphic viruses, most modern operating systems (including Windows XP+, Linux, and BSD) enforce W^X, meaning you have to go through some trouble to write polymorphic executables in those operating systems, for the ones that allow it at all.
It may be possible in some interpreted languages to have the program modify its own source code while it's running. Perl, Python, and every implementation of Javascript I know of do not allow this, though.
Personally, I find it quite strange that you find assembly easier to handle than C#. I find it even stranger that you think that assembly isn't as powerful: you can't get any more powerful than raw machine language. Anyway, to each his/her own.
C# has great reflection services, but if you have an aversion to that: if you're really comfortable with C or C++, you could always write a program that writes C/C++ source and feeds it to a compiler. This would only be viable if your solution doesn't require a quick self-rewriting turn-around time (on the order of tens of seconds or more).
Javascript and Python both support reflection as well. If you're thinking of learning a new, fun programming language that's powerful but not massively technically demanding, I'd suggest Python.
May I suggest Python, a nice, very high-level dynamic language with rich introspection included (which, via compile, eval, or exec, permits a form of self-modifying code). A very simple example based upon your question:
def label1(a, b, c):
    c = a + b
    return c

a, b, c = 10, 20, 0
print(label1(a, b, c))  # prints 30

newdef = """
def label1(a, b, c):
    c = a * b
    return c
"""

exec(newdef, globals(), globals())
print(label1(a, b, c))  # prints 200
Note that in the code sample above c is only altered in the function scope.
Common Lisp was designed with this sort of thing in mind. You could also try Smalltalk, where using reflection to modify running code is not unknown.
In both of these languages you are likely to be replacing an entire function or an entire method, not a single line of code. Smalltalk methods tend to be more fine-grained than Lisp functions, so that may be a good place to begin.
Many languages allow you to eval code at runtime.
Lisp
Perl
Python
PHP
Ruby
Groovy (via GroovyShell)
In high-level languages where you compile and execute code at run-time, it is not really self-modifying code, but dynamic class loading. Using inheritance principles, you can replace a class Factory and change application behavior at run-time.
Only in assembly language do you really have true self-modification, by writing directly to the code segment. But there is little practical usage for it. If you like a challenge, write a self-encrypting, maybe polymorphic virus. That would be fun.
I sometimes, although very rarely, write self-modifying code in Ruby.
Sometimes you have a method where you don't really know whether the data you are using (e.g. some lazy cache) is properly initialized or not. So, you have to check at the beginning of your method whether the data is properly initialized and then maybe initialize it. But you really only have to do that initialization once, but you check for it every single time.
So, sometimes I write a method which does the initialization and then replaces itself with a version that doesn't include the initialization code.
class Cache
  def [](key)
    @backing_store ||= self.expensive_initialization

    def [](key)
      @backing_store[key]
    end

    @backing_store[key]
  end
end
But honestly, I don't think that's worth it. In fact, I'm embarrassed to admit that I have never actually benchmarked to see whether that one conditional actually makes any difference. (On a modern Ruby implementation with an aggressively optimizing profile-feedback-driven JIT compiler probably not.)
Note that, depending on how you define "self-modifying code", this may or may not be what you want. You are replacing some part of the currently executing program, so …
EDIT: Now that I think about it, that optimization doesn't make much sense. The expensive initialization is only executed once anyway. The only thing that modification avoids, is the conditional. It would be better to take an example where the check itself is expensive, but I can't think of one.
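For what it's worth, the same trick can be sketched in Python by shadowing the method on the instance, so later calls bypass the initialization check entirely (the class and method names here are made up):

```python
class Cache:
    def lookup(self, key):
        # first call: do the (stand-in) expensive initialization...
        self._backing_store = {k: k * 2 for k in range(100)}
        # ...then shadow this method on the instance, so the
        # initialization code never runs again for this object
        self.lookup = self._backing_store.__getitem__
        return self._backing_store[key]

c = Cache()
print(c.lookup(3))  # 6 - initializes, then replaces itself
print(c.lookup(4))  # 8 - goes straight to the dict
```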
However, I thought of a cool example of self-modifying code: the Maxine JVM. Maxine is a Research VM (it's technically not actually allowed to be called a "JVM" because its developers don't run the compatibility testsuites) written completely in Java. Now, there are plenty of JVMs written in itself, but Maxine is the only one I know of that also runs in itself. This is extremely powerful. For example, the JIT compiler can JIT compile itself to adapt it to the type of code that it is JIT compiling.
A very similar thing happens in the Klein VM which is a VM for the Self Programming Language.
In both cases, the VM can optimize and recompile itself at runtime.
I wrote a Python class, Code, that enables you to add and delete lines of code to the object, print the code, and execute it. The Code class is shown at the end.
Example: if the x == 1, the code changes its value to x = 2 and then deletes the whole block with the conditional that checked for that condition.
# Initialize variables
x = 1

# Create code
code = Code()
code + 'global x, code'   # Adds a new Code instance code[0] with this line   => internally code.subcode[0]
code + "if x == 1:"       # Adds a new Code instance code[1] with this line   => internally code.subcode[1]
code[1] + "x = 2"         # Adds a new Code instance under code[1]            => internally code.subcode[1].subcode[0]
code[1] + "del code[1]"   # Adds a new Code instance under code[1]            => internally code.subcode[1].subcode[1]
After the code is created you can print it:
# Prints
print("Initial Code:")
print(code)
print("x = " + str(x))
Output:
Initial Code:
global x, code
if x == 1:
x = 2
del code[1]
x = 1
Execute the code by calling the object: code()
print("Code after execution:")
code()  # Executes code
print(code)
print("x = " + str(x))
Output 2:
Code after execution:
global x, code
x = 2
As you can see, the code changed the variable x to the value 2 and deleted the whole if block. This might be useful to avoid checking for conditions once they are met. In real-life, this case-scenario could be handled by a coroutine system, but this self modifying code experiment is just for fun.
class Code:
    def __init__(self, line='', indent=-1):
        if indent < -1:
            raise ValueError('Invalid {} indent'.format(indent))
        self.strindent = ''
        for i in range(indent):
            self.strindent = '    ' + self.strindent
        self.strsubindent = '    ' + self.strindent
        self.line = line
        self.subcode = []
        self.indent = indent

    def __add__(self, other):
        if other.__class__ is str:
            other_code = Code(other, self.indent + 1)
            self.subcode.append(other_code)
            return self
        elif other.__class__ is Code:
            self.subcode.append(other)
            return self

    def __sub__(self, other):
        if other.__class__ is str:
            for code in self.subcode:
                if code.line == other:
                    self.subcode.remove(code)
                    return self
        elif other.__class__ is Code:
            self.subcode.remove(other)
            return self

    def __repr__(self):
        rep = self.strindent + self.line + '\n'
        for code in self.subcode:
            rep += code.__repr__()
        return rep

    def __call__(self):
        print('executing code')
        exec(self.__repr__())
        return self.__repr__()

    def __getitem__(self, key):
        if key.__class__ is str:
            for code in self.subcode:
                if code.line == key:
                    return code
        elif key.__class__ is int:
            return self.subcode[key]

    def __delitem__(self, key):
        if key.__class__ is str:
            self.subcode = [c for c in self.subcode if c.line != key]
        elif key.__class__ is int:
            del self.subcode[key]
You can do this in Maple (the computer algebra language). Unlike the many answers above that use compiled languages, which only let you create and link in new code at run-time, here you can honest-to-goodness modify the code of a currently-running program. (Ruby and Lisp, as indicated by other answerers, also allow you to do this; probably Smalltalk too.)
Actually, it used to be standard in Maple that most library functions were small stubs which would load their 'real' self from disk on first call, and then self-modify themselves to the loaded version. This is no longer the case as the library loading has been virtualized.
As others have indicated: you need an interpreted language with strong reflection and reification facilities to achieve this.
I have written an automated normalizer/simplifier for Maple code, which I proceeded to run on the whole library (including itself); and because I was not too careful in all of my code, the normalizer did modify itself. I also wrote a Partial Evaluator (recently accepted by SCP) called MapleMIX - available on sourceforge - but could not quite apply it fully to itself (that wasn't the design goal).
Have you looked at Java? Java 6 has a compiler API, so you can write code and compile it within the Java VM.
In Lua, you can "hook" existing code, which allows you to attach arbitrary code to function calls. It goes something like this:
local oldMyFunction = myFunction
myFunction = function(arg)
    if arg.blah then
        return oldMyFunction(arg)
    else
        -- do whatever
    end
end
You can also simply plow over functions, which sort of gives you self modifying code.
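The same hooking pattern translates directly to Python, since functions are ordinary rebindable names (my_function and its behavior here are made up for illustration):

```python
def my_function(arg):
    return arg * 2

_old_my_function = my_function  # keep a reference to the original

def my_function(arg):
    # delegate to the original only for ints; hook everything else
    if isinstance(arg, int):
        return _old_my_function(arg)
    return "hooked"

print(my_function(21))    # 42 - original behavior
print(my_function("hi"))  # 'hooked' - new behavior
```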
Dlang's LLVM implementation contains the @dynamicCompile and @dynamicCompileConst function attributes, allowing you to compile according to the native host's instruction set, and to change compile-time constants at runtime through recompilation.
https://forum.dlang.org/thread/bskpxhrqyfkvaqzoospx@forum.dlang.org