I would like to run the individual tests (functions) of my Python unittest class in strict sequential order, not in parallel. I can tell they are running in parallel because the first function/test writes a record into the TinyDB, and another function/test, which fails, needs to read that new record and test for its existence.
So, how do I enforce strictly sequential test execution?
If that is NOT possible, can I enforce strictly sequential processing by creating multiple test files? (I would rather not do that, because I would like to keep a 1:1 relationship between modules and their test_modules.)
Answer for unittest
Strict execution order can be achieved by creating a master test file. I named it run_all_tests.py.
The modules have separate classes, and I trigger them, and their functions, one by one.
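For illustration, a minimal sketch of such a master file; the module, class, and test names here are placeholders, not real code:

import unittest

import test_module_a
import test_module_b

# addTest() preserves insertion order, so the tests run in exactly this sequence.
suite = unittest.TestSuite()
suite.addTest(test_module_a.TestModuleA("test_write_record"))
suite.addTest(test_module_b.TestModuleB("test_record_exists"))

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite)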
Switching to pytest and fixtures
Anyhow, I dislike this shortcoming: there is no sophisticated/declarative way to control the sequence at the function level. Thus I switched to pytest.
First of all, I like that there is a command-line option that shows the sequence. That confirms what we are expecting:
pytest --setup-show test_myfunction.py
On top of that, you may apply the decorator @pytest.fixture() to a method that runs beforehand. This does not necessarily help with the sequence in the first place. Rather, it reminds us to write independent tests, where the test function takes the @pytest.fixture()-annotated method as an argument. This way you have a deliberate fixture for a single function. Do NOT mistake this for the setUp() method of unittest, which runs before every single test method. Every one. And setUpClass() runs once before any test function is invoked.
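A minimal sketch of that pattern (the database helpers make_test_db() and search_by_name() are made up for illustration):

import pytest

@pytest.fixture()
def db_with_record():
    db = make_test_db()           # hypothetical setup helper
    db.insert({"name": "alice"})
    return db

def test_record_exists(db_with_record):
    # The fixture runs for exactly those tests that name it as an argument.
    assert db_with_record.search_by_name("alice")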
For those who still want declarative order, you find it here: https://pypi.org/project/pytest-order/, https://github.com/pytest-dev/pytest-order
I can't find a specific answer to this question anywhere.
I'm using Parse Server with Back4App.
I want to know the best place to require NPM packages, and what the difference between these two methods is.
main.js - Global file require example
const otpGenerator = require('otp-generator');

Parse.Cloud.define("TEST", async (request) => {
    let code = otpGenerator.generate(10, {alphabets: false, specialChars: false});
    console.log(code);
});
main.js - Function require example
Parse.Cloud.define("TEST", async (request) => {
    const otpGenerator = require('otp-generator');
    let code = otpGenerator.generate(10, {alphabets: false, specialChars: false});
    console.log(code);
});
Both work, as far as small tests like the above go...
In the Node/CommonJS context, require() indicates that you are loading the module at runtime.
Assume we have a module A which contains a require(moduleB) (where module B could be either in node_modules or a local file). When the require statement is reached, essentially the following happens:
Execution halts in the calling context of Module A
Node finds and evaluates the specified Module B (parsing etc)
Execution continues at the top of Module B
Assuming that parsing and execution of Module B occur without problems, execution returns to Module A, with the contents of module.exports provided, essentially, as the return value of the call to require()
(Note this is very much a simplification, and there's a lot of details I'm glossing over, but this is broadly it).
Thus this treats loading the module as akin to calling a function. (This is in contrast to standard ECMAScript modules, which are evaluated 'in advance' at parse time, which is why import statements have to be at the top of the file.)
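As a loose cross-language illustration (a Python sketch, since Python's importlib offers an analogous deferred load): a module loaded inside a function is resolved only when the function runs, much like a require() call placed inside a Cloud Code function:

import importlib

def handler():
    # The json module is loaded (and cached in sys.modules) only when
    # handler() first runs, not when this file is first loaded.
    json = importlib.import_module("json")
    return json.dumps({"ok": True})

print(handler())  # {"ok": true}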
Now given this we can see why the location of a call to require() can be very significant.
In the example you gave, it is functionally irrelevant, because there is a single block and that is the only thing being executed. Also, the package being imported is presumably well behaved and not performing a bunch of side effects. In other cases, however, it becomes more significant.
If I saw code like this:
if (someCondition) {
    require('./my-module1.js')
} else {
    const something = require('some-module')
}
Or some other pattern where calls to require() are scattered throughout the code, then I would consider this a potential bad code smell, as it indicates that we are using require to load chunks of code, presumably with side effects, rather than encapsulating what we need within well-defined functions and classes.
It also obscures what code gets loaded and when, which could make for a debugging nightmare.
Whereas if we have all of our require() calls at the top of the file then we are:
Eliminating any potential ambiguity of what gets loaded/executed and when
Clearly indicating at the top of the code file just what the dependencies of the following code are.
Following established conventions of other code (e.g. ES modules, other languages) that load dependencies at parse time
It does get a little more complicated, as there may be circumstances in which we want dynamic loading of code, e.g. for optimisation purposes, but for the sake of code clarity and cleanliness the default should be to put all require() calls at the top of the file and outside of any execution blocks.
For further details, refer to the Node.js documentation:
CommonJS modules
ECMAScript modules
PS: In the specific example of requiring modules for tests, as in your example, whether you want the modules loaded at the point of setting up your tests or when the test itself runs depends on several factors. By including the require call inside the test, you are essentially also testing the required code, and any errors in it will appear under that test. If you include it outside, you are essentially taking that required code for granted and sharing it between all your tests.
Standard practice is to put all import/requires at the top, unless you have a good reason not to, such as it being an optional dependency.
Importing packages at the top is good practice. It makes it visually clear which libraries you will be using, and loading dependencies up front avoids compilation and performance problems later.
I want to implement an argument type checker. I've read several times about Python and duck typing, but I'm tired of hunting bugs when I could easily enforce the types of inputs to my functions.
My plan is to implement a type checker that, right after the function definition, receives the inputs and does its assertion job.
Something like this:
import sanity_check
def fun1(a, b):
    sanity_check.fun1(a, b)
    # <do something>

def fun2(a, b):
    sanity_check.fun2(a, b)
    # <do something>
It is not my intention for this type checker to clearly state what it is checking (that is left to the comments on the functions), but just to enforce types.
My idea is that after implementation, I can erase this sanity check module by simply deleting, automatically, every line containing the word "sanity_check". So it is not intended for permanent use, just during implementation.
On to my question. I do not want to be constantly erasing and copying back these lines whenever I want to test the code for real, since given the nature of the code I'm implementing, I know the function call overhead will cause significant delays.
Is there a way to ignore all the members of this "sanity_check" module?
Setting all the members to None could be a way, but I do not know how to do this.
It sounds like you want a combination of type annotations with a static type checker like mypy, plus some assert statements:
Assert statements are a convenient way to insert debugging assertions into a program [..] The current code generator emits no code for an assert statement when optimization is requested at compile time.
You can use this to make runtime checks in debug mode and choose to run your code using the -O flag to omit assert statements and get maximum performance.
Static type hints can catch other types of problems without incurring (significant) runtime overhead; see https://mypy.readthedocs.io/en/stable/getting_started.html#function-signatures-and-dynamic-vs-static-typing.
Example:
def foo(bar: list):
    assert len(bar) >= 3, 'List must be at least 3 long, got %d' % len(bar)
    ...
mypy will help you find bugs where you're not even passing a list into foo, while the assert statement will warn you at runtime if the list is too short, and the check can be omitted if you run the code via python -O foo.py.
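Applied to your sanity_check idea, the module could consist purely of assert statements; a sketch with made-up checks:

# sanity_check.py - hypothetical contents; every check is an assert,
# so running with `python -O` strips all of them.
def fun1(a, b):
    assert isinstance(a, int), 'a must be int, got %r' % (type(a),)
    assert isinstance(b, str), 'b must be str, got %r' % (type(b),)

Note that under -O the calls to sanity_check.fun1() still happen, just with empty bodies; if even that call overhead matters, wrap the call sites in if __debug__: blocks, which the compiler removes entirely under -O.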
Computers can only understand machine language. Then how come interpreters can execute a program directly, without translating it into machine language? For example:
<?php
echo "Hello, World!" ;
It's a simple Hello World program written in PHP. How does it execute on the machine when the machine has no idea what echo is? How does it output what's expected, in this case the string Hello, World!?
Many interpreters, including the official PHP interpreter, actually translate the code to a bytecode format before executing it, for performance (and, I suppose, flexibility) reasons, but at its simplest, an interpreter simply goes through the code and performs the corresponding action for each statement. An extremely simple interpreter for a PHP-like language might look like this, for example:
def execute_program(prog):
    for statement in prog.toplevel_statements:
        execute_statement(statement)

def execute_statement(statement):
    if statement is an echo statement:
        print( evaluate_expression(statement.argument) )
    else if statement is a for loop:
        execute_statement(statement.init)
        while evaluate_expression(statement.condition).is_truthy():
            for inner_statement in statement.body:
                execute_statement(inner_statement)
            execute_statement(statement.increment)
    else if ...
Note that a big if-else-if chain is not actually the cleanest way to walk an AST, and a real interpreter would also need to keep track of scopes and a call stack to implement function calls and returns.
But at its most basic, this is what it boils down to: "If we see this kind of statement, perform this kind of action etc.".
Except for being much more complex, it isn't really any different from writing a program that responds to user commands, where the user could for example type "rectangle" and then you draw a rectangle. Here the CPU also doesn't understand what "rectangle" means, but your code contains something like if user_input == "rectangle": [code to draw a rectangle], and that's all you need.
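To make that concrete, here is a tiny runnable sketch of the same idea: an interpreter, written in Python, for a made-up language in which the only statement is echo:

def run(source):
    for line in source.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("echo "):
            # The CPU never sees `echo`; this Python code performs the action.
            print(line[len("echo "):])
        else:
            raise SyntaxError("unknown statement: " + repr(line))

run('echo Hello, World!')  # prints: Hello, World!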
Strictly speaking, the interpreter is being executed, and the code that the interpreter is interpreting just determines what actions the interpreter takes. (If it were just compiled to machine code, what would you need the interpreter for?)
For example, I built an automation framework a while back where we captured reflection metadata about what was occurring at runtime during QA tests. We serialized that metadata to JSON. The JSON was never compiled to anything - it just told the automation engine what methods to call and what parameters to pass. No machine code involved. It wouldn't be exactly correct to say that we were "executing" the JSON - we were executing the automation engine, which was then following the "directions" found in the JSON - but it was certainly interpreting the JSON.
I have recently been thinking about writing self-modifying programs; I think that could be powerful and fun. So I am currently looking for a language that allows modifying a program's own code easily.
I read about C# (as a workaround) and the ability to compile and execute code at runtime, but that is too painful.
I am also thinking about assembly. It is easier there to change running code, but it is not very powerful (very raw).
Can you suggest a powerful language or feature that supports modifying code at runtime?
Example
This is what I mean by modifying code at runtime:
Start:
a=10,b=20,c=0;
label1: c=a+b;
....
label1= c=a*b;
goto label1;
and maybe building a list of instructions:
code1.add(c=a+b);
code1.add(c=c*(c-1));
code1.execute();
Malbolge would be a good place to start. Every instruction is self-modifying, and it's a lot of fun(*) to play with.
(*) Disclaimer: May not actually be fun.
I highly recommend Lisp. Lisp data can be read and executed as code. Lisp code can be written out as data.
It is considered one of the canonical self-modifiable languages.
Example list(data):
'(+ 1 2 3)
or, calling the data as code
(eval '(+ 1 2 3))
runs the + function.
You can also go in and edit the members of the lists on the fly.
edit:
I wrote a program to dynamically generate a program and evaluate it on the fly, then report to me how it did compared to a baseline (division by 0 was the usual report, ha).
Every answer so far is about reflection/runtime compilation, but in the comments you mentioned you're interested in actual self-modifying code - code that modifies itself in-memory.
There is no way to do this in C#, Java, or even (portably) in C - that is, you cannot modify the loaded in-memory binary using these languages.
In general, the only way to do this is with assembly, and it's highly processor-dependent. In fact, it's highly operating-system dependent as well: to protect against polymorphic viruses, most modern operating systems (including Windows XP+, Linux, and BSD) enforce W^X, meaning you have to go through some trouble to write polymorphic executables in those operating systems, for the ones that allow it at all.
It may be possible in some interpreted languages to have the program modify its own source-code while it's running. Perl, Python (see here), and every implementation of Javascript I know of do not allow this, though.
Personally, I find it quite strange that you find assembly easier to handle than C#. I find it even stranger that you think that assembly isn't as powerful: you can't get any more powerful than raw machine language. Anyway, to each his/her own.
C# has great reflection services, but if you have an aversion to that... If you're really comfortable with C or C++, you could always write a program that writes C/C++ and feeds it to a compiler. This would only be viable if your solution doesn't require a quick self-rewriting turn-around time (on the order of tens of seconds or more).
Javascript and Python both support reflection as well. If you're thinking of learning a new, fun programming language that's powerful but not massively technically demanding, I'd suggest Python.
May I suggest Python, a nice very high-level dynamic language which has rich introspection included (and by e.g. usage of compile, eval or exec permits a form of self-modifying code). A very simple example based upon your question:
def label1(a, b, c):
    c = a + b
    return c

a, b, c = 10, 20, 0
print label1(a, b, c) # prints 30

newdef = \
"""
def label1(a, b, c):
    c = a*b
    return c
"""
exec(newdef, globals(), globals())
print label1(a, b, c) # prints 200
Note that in the code sample above c is only altered in the function scope.
Common Lisp was designed with this sort of thing in mind. You could also try Smalltalk, where using reflection to modify running code is not unknown.
In both of these languages you are likely to be replacing an entire function or an entire method, not a single line of code. Smalltalk methods tend to be more fine-grained than Lisp functions, so that may be a good place to begin.
Many languages allow you to eval code at runtime; a small Python sketch follows this list.
Lisp
Perl
Python
PHP
Ruby
Groovy (via GroovyShell)
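For instance, the "building a list of instructions" idea from the question can be sketched with Python's exec(); the name code1 and the statements mirror the question's pseudocode:

a, b, c = 10, 20, 0

# Build a list of instructions as strings, then execute them in order.
code1 = []
code1.append("c = a + b")
code1.append("c = c * (c - 1)")

namespace = {"a": a, "b": b, "c": c}
for statement in code1:
    exec(statement, namespace)

print(namespace["c"])  # (10 + 20) * (10 + 20 - 1) = 870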
In high-level languages where you compile and execute code at run-time, it is not really self-modifying code, but dynamic class loading. Using inheritance principles, you can replace a class factory and change application behavior at run-time.
Only in assembly language do you really have true self-modification, by writing directly to the code segment. But there is little practical use for it. If you like a challenge, write a self-encrypting, maybe polymorphic, virus. That would be fun.
I sometimes, although very rarely, write self-modifying code in Ruby.
Sometimes you have a method where you don't really know whether the data you are using (e.g. some lazy cache) has been properly initialized or not. So you have to check at the beginning of the method whether the data is initialized, and maybe initialize it. But that initialization only needs to happen once, while you check for it every single time.
So, sometimes I write a method which does the initialization and then replaces itself with a version that doesn't include the initialization code.
class Cache
  def [](key)
    @backing_store ||= self.expensive_initialization

    def [](key)
      @backing_store[key]
    end

    @backing_store[key]
  end
end
But honestly, I don't think that's worth it. In fact, I'm embarrassed to admit that I have never actually benchmarked to see whether that one conditional actually makes any difference. (On a modern Ruby implementation with an aggressively optimizing profile-feedback-driven JIT compiler probably not.)
Note that, depending on how you define "self-modifying code", this may or may not be what you want. You are replacing some part of the currently executing program, so …
EDIT: Now that I think about it, that optimization doesn't make much sense. The expensive initialization is only executed once anyway. The only thing that the modification avoids is the conditional. It would be better to take an example where the check itself is expensive, but I can't think of one.
However, I thought of a cool example of self-modifying code: the Maxine JVM. Maxine is a Research VM (it's technically not actually allowed to be called a "JVM" because its developers don't run the compatibility testsuites) written completely in Java. Now, there are plenty of JVMs written in itself, but Maxine is the only one I know of that also runs in itself. This is extremely powerful. For example, the JIT compiler can JIT compile itself to adapt it to the type of code that it is JIT compiling.
A very similar thing happens in the Klein VM which is a VM for the Self Programming Language.
In both cases, the VM can optimize and recompile itself at runtime.
I wrote a Python class, Code, that enables you to add and delete lines of code in the object, print the code, and execute it. The Code class is shown at the end.
Example: if x == 1, the code changes its value to x = 2 and then deletes the whole block with the conditional that checked for that condition.
#Initialize Variables
x = 1
#Create Code
code = Code()
code + 'global x, code' #Adds a new Code instance code[0] with this line of code => internally code.subcode[0]
code + "if x == 1:" #Adds a new Code instance code[1] with this line of code => internally code.subcode[1]
code[1] + "x = 2" #Adds a new Code instance 0 under code[1] with this line of code => internally code.subcode[1].subcode[0]
code[1] + "del code[1]" #Adds a new Code instance 0 under code[1] with this line of code => internally code.subcode[1].subcode[1]
After the code is created you can print it:
#Prints
print "Initial Code:"
print code
print "x = " + str(x)
Output:
Initial Code:
global x, code
if x == 1:
x = 2
del code[1]
x = 1
Execute the code by calling the object: code()
print "Code after execution:"
code() #Executes code
print code
print "x = " + str(x)
Output 2:
Code after execution:
global x, code
x = 2
As you can see, the code changed the variable x to the value 2 and deleted the whole if block. This might be useful to avoid checking conditions once they are met. In real life, this scenario could be handled by a coroutine system, but this self-modifying code experiment is just for fun.
class Code:
    def __init__(self, line='', indent=-1):
        if indent < -1:
            raise NameError('Invalid {} indent'.format(indent))
        self.strindent = ''
        for i in xrange(indent):
            self.strindent = ' ' + self.strindent
        self.strsubindent = ' ' + self.strindent
        self.line = line
        self.subcode = []
        self.indent = indent

    def __add__(self, other):
        if other.__class__ is str:
            other_code = Code(other, self.indent + 1)
            self.subcode.append(other_code)
            return self
        elif other.__class__ is Code:
            self.subcode.append(other)
            return self

    def __sub__(self, other):
        if other.__class__ is str:
            for code in self.subcode:
                if code.line == other:
                    self.subcode.remove(code)
                    return self
            return self
        elif other.__class__ is Code:
            self.subcode.remove(other)

    def __repr__(self):
        rep = self.strindent + self.line + '\n'
        for code in self.subcode:
            rep += code.__repr__()
        return rep

    def __call__(self):
        print 'executing code'
        exec(self.__repr__())
        return self.__repr__()

    def __getitem__(self, key):
        # Compare lines with == (not `is`) so ordinary string keys match.
        if key.__class__ is str:
            for code in self.subcode:
                if code.line == key:
                    return code
        elif key.__class__ is int:
            return self.subcode[key]

    def __delitem__(self, key):
        if key.__class__ is str:
            for i in range(len(self.subcode)):
                if self.subcode[i].line == key:
                    del self.subcode[i]
                    return
        elif key.__class__ is int:
            del self.subcode[key]
You can do this in Maple (the computer algebra language). Unlike the many answers above that use compiled languages, which only allow you to create and link in new code at run-time, here you can honest-to-goodness modify the code of a currently-running program. (Ruby and Lisp, as indicated by other answerers, also allow you to do this; probably Smalltalk too.)
Actually, it used to be standard in Maple that most library functions were small stubs which would load their 'real' self from disk on first call, and then self-modify themselves into the loaded version. This is no longer the case, as the library loading has been virtualized.
As others have indicated: you need an interpreted language with strong reflection and reification facilities to achieve this.
I have written an automated normalizer/simplifier for Maple code, which I proceeded to run on the whole library (including itself); and because I was not too careful in all of my code, the normalizer did modify itself. I also wrote a Partial Evaluator (recently accepted by SCP) called MapleMIX - available on sourceforge - but could not quite apply it fully to itself (that wasn't the design goal).
Have you looked at Java? Java 6 has a compiler API, so you can write code and compile it within the Java VM.
In Lua, you can "hook" existing code, which allows you to attach arbitrary code to function calls. It goes something like this:
local oldMyFunction = myFunction
myFunction = function(arg)
    if arg.blah then
        return oldMyFunction(arg)
    else
        -- do whatever
    end
end
You can also simply plow over functions, which sort of gives you self modifying code.
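A rough Python equivalent of that hooking idea, for illustration (the function names are made up):

def my_function(arg):
    return arg * 2

old_my_function = my_function      # keep a reference to the original

def my_function(arg):              # rebind the name that callers use
    if isinstance(arg, int):
        return old_my_function(arg)
    # do whatever for non-integer arguments
    return arg

print(my_function(21))  # 42, dispatched through the original implementation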
Dlang's LLVM implementation contains the @dynamicCompile and @dynamicCompileConst function attributes, allowing you to compile according to the native host's instruction set at run-time, and to change compile-time constants at run-time through recompilation.
https://forum.dlang.org/thread/bskpxhrqyfkvaqzoospx@forum.dlang.org