Python module: get top-level scope variables - python-3.x

Hey, in my Python modules I want to be able to access variables from the top-level scope, that is, the file that is calling the modules. How would I go about doing this?
Thanks.

There's no way that will work in all environments, so a precise answer might depend on how you are running your top-level Python code. The best thing to do is to put the variables into an object and pass the object to the functions that need it.
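For illustration, here's a minimal sketch of that approach; the worker module and the settings attributes are hypothetical names, not anything from your code:

    # app.py -- the top-level script
    from types import SimpleNamespace

    import worker  # hypothetical module that needs the top-level state

    # Gather the relevant top-level variables into one object...
    settings = SimpleNamespace(verbose=True, retries=3)
    # ...and pass it in explicitly, rather than having worker reach back up.
    worker.run(settings)

    # worker.py
    def run(settings):
        # The module only sees what the caller chose to share.
        if settings.verbose:
            print(f"running with {settings.retries} retries")

This keeps the dependency explicit: the module's behaviour is determined by its arguments, not by whichever file happens to have imported it.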

In general, the way to do this is to provide some API to your plugins (e.g. a module that they import) to provide controlled access to the information they need.
If this information may vary by context, and passing in different arguments to a plugin initialisation function isn't appropriate, then an information API that uses threading.local under the covers may be of value.
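As a rough sketch of that idea (the plugin_api module and its function names are made up for illustration):

    # plugin_api.py -- the module plugins import for controlled access
    import threading

    _context = threading.local()

    def set_context(info):
        # Called by the host application before invoking a plugin
        # on this thread.
        _context.info = info

    def get_context():
        # Called by plugins; each thread sees only the context that
        # was set on it.
        return getattr(_context, "info", None)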

Related

How to best find code implementations in existing python projects

Different people have told me that, in order to improve my Python programming skills, it helps to go and look at how existing projects are implemented. But I am struggling a bit to navigate through the projects and find the parts of the code I'm interested in.
Let's say I'm using butter from the scipy.signal package, and I want to know how it is implemented, so I go to scipy's GitHub repo and move to the signal folder. Now, where is the first place I should look for the implementation of butter?
I am also a bit confused about what a module/package/class/function is. Is scipy a module? Or a package? And then what is signal? Is there some kind of pattern like module.class.function? (Or another example: matplotlib.pyplot...)
It sounds like you have two questions here. First, how do you find where scipy.signal.butter is implemented? Second, what are the different hierarchical units of Python code (and how do they relate to that butter thing)?
The first one actually has an easy solution. If you follow the link you gave for the butter function, you will see a [source] link just to the right of the function signature. Clicking on that will take you directly to the source of the function in the github repository (pinned to the commit that matches the version of the docs you were reading, which is probably what you want). Not all API documentation will have that kind of link, but when it does it makes things really easy!
As for the second question, I'm not going to fully explain each level, but here are some broad strokes, starting with the narrowest way of organizing code and moving to the broadest.
Functions are reusable chunks of code that you can call from other code. Functions have a local namespace when they are running.
Classes are ways of organizing data together with one or more functions. Functions defined in classes are called methods (but not all functions need to be in a class). Classes have a class namespace, and each instance of a class also has its own instance namespace.
Modules are groups of code, often functions or classes (but sometimes other things like data too). Each module has a global namespace. Generally speaking, each .py file will create a module when it is loaded. One module can access another module by using an import statement.
Packages are a special kind of module that's defined by a folder foo/, rather than a foo.py file. This lets you organize whole groups of modules, rather than everything being at the same level. Packages can have further sub-packages (represented with nested folders like foo/bar/). In addition to the modules and subpackages that can be imported, a package will also have its own regular module namespace, which will be populated by running the foo/__init__.py file.
To bring this back around to your specific question, in your case, scipy is a top-level package, and scipy.signal is a sub-package within it. The name butter is a function, but it's actually defined in the scipy/signal/_filter_design.py file. You can access it directly from scipy.signal because scipy/signal/__init__.py imports it (and all the other names defined in its module) with from ._filter_design import *.
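You can check this from the interpreter (assuming scipy is installed; the private module name may differ between scipy versions):

    import scipy.signal

    # Reachable through the package, but defined in a private module:
    print(scipy.signal.butter.__module__)
    # e.g. 'scipy.signal._filter_design'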
The design of implementing something in an inner module and then importing it for use in the package's __init__.py file is a pretty common one. It lets modules that would otherwise be excessively large be subdivided, for the ease of their developers, while still giving a single place to access a big chunk of the API. It is, however, confusing to untangle on your own, so don't feel bad if you couldn't figure it out. Sometimes you may need to search the repository to find the definition of something, even if you know where you're importing it from.
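Here's a minimal sketch of the same pattern with hypothetical names:

    # mypkg/_impl.py
    def useful_function():
        return "implemented in a private module"

    # mypkg/__init__.py
    from ._impl import useful_function  # re-export as part of the public API

    # user code
    import mypkg
    mypkg.useful_function()  # callers never need to know about mypkg._impl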

Why don't we need to import any modules to use print(), input(), len(), int(), etc. in Python?

As the definition says, to use any built-in functions we need to first import the respective modules in the program.
But how are we using print(), input(), len(), and many more functions without importing any modules in Python?
Could someone please clarify?
(Sorry if my question is not relevant.)
Because the Python language designers chose to make them available by default, on the assumption that they were useful enough to always be available. This is especially common for:
The simplest I/O functions (e.g. print/input) that it's nice to have access to, especially when playing around with stuff in the interactive interpreter
Functions that are wrappers around special methods (e.g. len for __len__, iter for __iter__), as it reduces the risk of people calling special methods directly just to avoid an import
Built-in classes (e.g. int, set, str, etc.), which aren't technically functions, but they're used frequently (possibly available as literals), and the definition of the class needs to be loaded for basic operation of the interpreter anyway
In short, you have access to them automatically because the interpreter may have to load them anyway (in the case of built-in classes), because it's convenient, and because the designers thought they'd be frequently used; nothing more complicated than that. The "likely to be frequently used" part is important: some modules in the CPython reference interpreter are actually baked into the interpreter itself rather than existing as separate files on disk (e.g. sys), but their contents were not considered important or commonly used enough to be worth injecting into the built-in namespace (where they'd be likely to collide with user-defined names).
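A small illustration of two of these points, using nothing beyond plain CPython:

    import sys

    # len() is a thin wrapper around the __len__ special method:
    class Shelf:
        def __len__(self):
            return 3

    print(len(Shelf()))  # 3 -- and len itself needed no import

    # sys is baked into the interpreter, yet still needs an import:
    print("sys" in sys.builtin_module_names)  # True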
The built-ins are provided through the builtins module, so if you want to see what's there (or, being terrible, change what's available as a built-in everywhere), you can import it and perform normal attribute manipulation on it to query/add/remove/change the set of available built-ins (the site module does this to inject the exit quasi-built-in for instance).
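For example (injecting a name into builtins is shown purely for illustration; it's rarely a good idea in real code):

    import builtins

    # The familiar built-ins are just attributes of this module:
    print(builtins.len is len)  # True

    # Adding an attribute makes it visible everywhere, like a built-in:
    builtins.answer = 42
    print(answer)  # 42, even though `answer` was never defined here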

Can I use the Rust lexer or parser to retrieve a list of functions within a Rust file?

The lexer/parser file located here is quite large and I'm not sure if it is suitable for just retrieving a list of Rust functions. Perhaps writing my own/using another library would be a better route to take?
The end objective would be to create a kind of execution manager. To contextualise: it would read a list of function calls wrapped in a function, and those calls could then be reordered from some web interface. I thought it might be a nice way to manage larger applications.
No. I mean, not really. Whether you write your own parser or re-use syntex, you're going to hit a fundamental limitation: macros.
So let's say you go all-out and expand macro_rules!-based macros, including the ones defined in external crates (which means you'll also need to extract rustc's crate metadata loading... which isn't stable). What about procedural macros and custom derive attributes? Those are defined in code and depend on compiler-internal interfaces to function.
The only way this is likely to ever work correctly is if you build on top of the compiler, or duplicate a huge amount of work (which also involves unstable binary interfaces).
You could use syntex to parse the Rust code in a build script.

Object-capability security in Racket?

Racket's sandbox seems great for running code I don't trust, but I would like to prevent modules that call one another in the sandbox from being able to see or modify one another's internal state, code, or behavior. Right now the best way I can think of to do that is with separate sandboxes and a modified "require" that wraps all exported functions in contracts that create proxies. Is there a better way?
Could you provide a concrete example?
If module A requires module B, then module A can't see inside B.
Module A can use the functions that module B explicitly provided.
Some of these might change internal state in module B.

Isolating global changes between modules in node.js

It seems like if I modify, say, Object.prototype, the change is visible across all modules. It would be very nice if these global changes could be isolated so that a module is protected from being affected by modules it doesn't require.
Is this in any way possible?
Object.prototype is an object, and there's only one of it, so modifying it in one place affects all references to it (just like any object). This is generally considered a benefit, since it makes modules like colors possible. It shouldn't be necessary to protect modules from changes made to global prototypes, since those changes should only be extensions. If your modules, or someone else's, are modifying built-in methods or properties, that's probably bad practice in the first place.
Although you didn't give an example, I would think you probably want to either create local functions (not attached to the prototype), or look into using inheritance to address your concerns with specific objects.
