Fusepy "nothreads" argument explanation - python-3.x

I've been fiddling for a bit with the fuse package with python3.
When trying to create a FUSE instance, I came across the nothreads argument.
Can anyone please elaborate on what this does?
I can guess that setting this flag to True means the software no longer supports multithreading, but what I would like to know is how it changes the software's behaviour: what would the flow be with and without setting it to True?
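For reference, this is the kind of call I mean; a minimal sketch where Passthrough is a placeholder Operations subclass and the mountpoint is made up:

    # Minimal sketch of the call in question; Passthrough is a placeholder
    # fuse.Operations subclass and '/mnt/fuse' is a made-up mountpoint.
    from fuse import FUSE, Operations

    class Passthrough(Operations):
        pass  # a real filesystem would override getattr, readdir, read, ...

    if __name__ == '__main__':
        # The flag in question:
        FUSE(Passthrough(), '/mnt/fuse', nothreads=True, foreground=True)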
Thanks

Related

WHY is the fs.rmdir recursive option experimental?

I see that since version 12, NodeJS has had a recursive option on the fs.rmdir function to allow removal of non-empty directories from the file system. Why is this feature marked "experimental" in the documentation? Does it work or doesn't it? The documentation doesn't say what level of concern to have over this, or under what circumstances.
I found Christopher Hiller's article explaining the difficulties that went into creating an efficient implementation of this, but he doesn't explain the "experimental" designation. Maybe the problem isn't that it doesn't work, or doesn't always work, but that it can be a bottleneck under certain circumstances? I'm trying to decide whether to depend on it or not, rather than writing my own code that's going to have exactly the same pitfalls Hiller encountered, so if anyone here has any insight, I'd appreciate it!

What is haskellng? What is the difference between 'haskellPackages' and 'haskellngPackages'?

I have been reading this StackOverflow post in which we are advised to use the haskellng package set.
I have also read this but I did not understand what haskellng is.
I have read this too, but I still don't know what haskellng is.
Could someone please explain what haskellng is in a simple, clear way?
Why does haskellng matter? Why is it good?
I understand that haskellng is replacing something. But what is that something that it replaces? Why does that something need to be replaced?
In this post it is written:
So I'll never have to update if I don't want to?
My guess is that 'haskellPackages' and 'haskellngPackages' will
co-exist for a while. Personally, I switched to Haskell NG, though,
and I won't maintain any packages in the old hierarchy anymore. I
suppose other contributors will do the same thing. Once you've
converted your setup to 'haskellngPackages', there's no reason to
look back, really.
What is the difference between 'haskellPackages' and 'haskellngPackages'?
What is 'haskellPackages'? Where does it come from? What is it used for?
Also in the same post they write:
Why should I care about this "new infrastructure"?
The new code will break evaluation of any Haskell-related
configuration you may have in ~/.nixpkgs/config.nix or
/etc/nixos/configuration.nix.
Privately generated cabal2nix expressions will cease to compile.
Installations that rely on ghc-wrapper finding GHC libraries
automatically in your ~/.nix-profile are obsolete. If you use this
approach, you won't be able to update your profile anymore.
What is the new code? What was the old code? What does the new code break, and why?
Could someone please explain what haskellng is in a simple, clear way?
Well, haskellng is the next-generation Haskell package set for Nix. I think most of the work was done by Peter Simons. But note that in the latest master version, haskellngPackages has been renamed back to haskellPackages. So the difference doesn't matter if you are living in the unstable channel.
Why does haskellng matter? Why is it good?
With haskellng, I guess everything is automated. Someone uploads a package to Hackage, and in around a week that package derivation is automatically included in the Nix Haskell package set (under nixpkgs) by some process (I guess it makes use of cabal2nix).
What is the difference between 'haskellPackages' and 'haskellngPackages'?
In the latest master branch there is no difference between them as explained above.
What is 'haskellPackages'? Where does it come from? What is it used for?
It was the earlier infrastructure for Haskell packages in Nix. It was used for, um, creating and building Haskell packages.
What is the new code? What was the old code? What does the new code break, and why?
The new code is haskellngPackages. The old code was haskellPackages. But it doesn't matter now, as haskellng has been renamed back to the old name, and the old code, I guess, has been removed.
I asked this on the #nix channel:
me: Could someone please explain what haskellng is?
What is haskellng? What is the difference between 'haskellPackages' and 'haskellngPackages'?
Fuuzetsu: it no longer matters, it's the default new architecture and
the old one doesn't exist
Fuuzetsu: we had 2 Haskell architectures for a while and -ng was the
new one

How should one check for a permission in Plone?

Looking at http://developer.plone.org for how to check for a permission, the first two results are:
http://developer.plone.org/reference_manuals/external/plone.app.dexterity/advanced/permissions.html
http://docs.plone.org/4/en/develop/plone/security/permissions.html
The first one advocates zope.security.checkPermission while the second prefers AccessControl.getSecurityManager().checkPermission.
Looking at the setup.py of AccessControl I see that it depends on zope.security, so zope.security is the more low-level of the two, so to speak, but at the same time zope.security seems to get more attention nowadays, while AccessControl seems to be more stable (in terms of how often it changes).
So, I'm wondering which is the safe and up-to-date way to check for permissions.
I personally always use the checkPermission from AccessControl, but I believe under the hood both zope.security and AccessControl will be calling the same code. I've looked for this code before and I think it's actually in the C portion of the roles/permissions logic.
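For concreteness, the two spellings of the check look roughly like this; a sketch where 'View' and the context object are placeholders:

    # Sketch of the two permission checks being compared; 'View' and
    # `context` (some content object) are placeholders.
    from AccessControl import getSecurityManager
    from zope.security import checkPermission

    def can_view_accesscontrol(context):
        # AccessControl: asks the current security manager whether the
        # current user has the permission on the given object.
        return bool(getSecurityManager().checkPermission('View', context))

    def can_view_zope_security(context):
        # zope.security: the same question asked through the ZTK-level API.
        return checkPermission('View', context)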
I personally prefer using plone.api.
See the plone.api.user documentation.
This way you don't have to care about the low-level API.
Even if it changes in the future, plone.api will fix it for you :-)
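For example, a sketch with 'View' and the context object as placeholders:

    # Sketch of the plone.api spelling of the same check; 'View' and
    # `context` are placeholders.
    from plone import api

    def can_view(context):
        # Checks the permission for the current user on the given object.
        return api.user.has_permission('View', obj=context)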

Fuzzing the Linux Kernel: A student in peril.

I am currently a student at a university studying a computing related degree and my current project is focusing on finding vulnerabilities in the Linux kernel. My aim is to both statically audit as well as 'fuzz' the kernel (targeting version 3.0) in an attempt to find a vulnerability.
My first question is 'simple': is fuzzing the Linux kernel possible? I have heard of people fuzzing plenty of protocols etc., but never much about kernel modules. I also understand that on a Linux system everything can be seen as a file, and as such surely input to kernel modules should be possible via that interface, shouldn't it?
My second question is: which fuzzer would you suggest? As previously stated, lots of fuzzers exist that fuzz protocols; however, I don't see many of these being useful when attacking a kernel module. Obviously there are frameworks such as the Peach fuzzer, which allows you to 'create' your own fuzzer from the ground up and is supposedly excellent; however, I have tried repeatedly to install Peach to no avail, and I'm finding it difficult to believe it is suitable given the difficulty I've already experienced just installing it (if anyone knows of any decent installation tutorials, please let me know :P).
I would appreciate any information you are able to provide me on this problem. Given the breadth of the topic I have chosen, any idea of a direction is always greatly appreciated. Equally, I would like to ask people to refrain from telling me to start elsewhere. I do understand the size of the task at hand; however, I will still attempt it regardless (I'm a blue-sky thinker :P, a.k.a. stubborn as an ox).
Cheers
A.Smith
I think a good starting point would be to extend Dave Jones's Linux kernel fuzzer, Trinity: http://codemonkey.org.uk/2010/12/15/system-call-fuzzing-continued/ and http://codemonkey.org.uk/2010/11/09/system-call-abuse/
Dave seems to find more bugs whenever he extends that a bit more. The basic idea is to look at the system calls you are fuzzing, and rather than passing in totally random junk, make your fuzzer choose random junk that will at least pass the basic sanity checks in the actual system call code. In other words, you use the kernel source to let your fuzzer get further into the system calls than totally random input would usually go.
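To make that concrete, here is a toy sketch of the idea in Python via ctypes; the syscall number is for x86-64 Linux, the value choices are mine, and you should only run something like this in a disposable VM:

    # Toy sketch of 'semi-valid' system call fuzzing in the Trinity spirit:
    # instead of pure random junk, keep some arguments valid (a real fd, a
    # mapped buffer) so calls get past the kernel's first sanity checks.
    # The syscall number is x86-64 Linux; run only in a disposable VM.
    import ctypes
    import os
    import random

    libc = ctypes.CDLL(None, use_errno=True)
    SYS_read = 0  # x86-64

    def fuzz_read(iterations=1000):
        fd = os.open('/dev/zero', os.O_RDONLY)   # valid fd passes the fd lookup
        buf = ctypes.create_string_buffer(4096)  # a real, writable buffer
        valid_ptr = ctypes.cast(buf, ctypes.c_void_p)
        for _ in range(iterations):
            # Mix valid and bogus pointers so some calls get deep into the
            # syscall while others probe its error paths (expect EFAULT).
            ptr = random.choice([
                valid_ptr,
                ctypes.c_void_p(0),                       # NULL pointer
                ctypes.c_void_p(random.getrandbits(48)),  # wild pointer
            ])
            if ptr is valid_ptr:
                # Never let the kernel write past the real buffer.
                length = random.randint(0, 4096)
            else:
                length = random.choice([0, 1, 4096, 2**31 - 1, 2**63 - 1])
            ret = libc.syscall(SYS_read, fd, ptr, ctypes.c_size_t(length))
            if ret < 0:
                print('read -> errno %d' % ctypes.get_errno())
        os.close(fd)

    if __name__ == '__main__':
        fuzz_read()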
"Fuzzing" the kernel is quite a broad way to describe your goals.
From a kernel point of view you can:
- try to fuzz the system calls
- try to fuzz the character and block devices in /dev
Not sure what you want to achieve.
Fuzzing the system calls would mean checking out every Linux system call (http://linux.die.net/man/2/syscalls) and trying whether you can disturb regular operation with odd parameter values.
Fuzzing character or block drivers would mean trying to send data via the /dev interfaces in a way that produces odd results.
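A bare-bones sketch of that second approach; the device path is a placeholder, and you would need appropriate privileges and a disposable machine:

    # Minimal sketch of fuzzing a character device by writing random junk
    # to its /dev node. '/dev/fuzz_target' is a placeholder; point it at a
    # real device only on a throwaway VM.
    import os
    import random

    DEVICE = '/dev/fuzz_target'  # placeholder path

    def fuzz_device(iterations=100):
        for i in range(iterations):
            data = bytes(random.getrandbits(8)
                         for _ in range(random.randint(1, 4096)))
            try:
                fd = os.open(DEVICE, os.O_WRONLY)
                os.write(fd, data)
                os.close(fd)
            except OSError as e:
                # Many writes will fail cleanly; the interesting outcomes
                # are kernel oopses, hangs, or corrupted state.
                print('iteration %d: %s' % (i, e))

    if __name__ == '__main__':
        fuzz_device()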
Also you have to differentiate between attempts by an unprivileged user and by root.
My suggestion is narrowing down your attempts to a subset of your proposition. It's just too damn broad.
Good luck -
Alex.
One way to fuzz is via system call fuzzing.
Essentially the idea is to take the system call and fuzz the input over the entire range of possible values; whether it remains within the specification defined for the system call does not matter.

Can a LabVIEW VI tell whether one of its output terminals is wired?

In LabVIEW, is it possible to tell from within a VI whether an output terminal is wired in the calling VI? Obviously, this would depend on the calling VI, but perhaps there is some way to find the answer for the current invocation of a VI.
In C terms, this would be like defining a function that takes arguments which are pointers to where to store output parameters, but will accept NULL if the caller is not interested in that parameter.
As was said, you can't do this in a natural way, but there's a workaround using data value references (requires LV 2009). It is the same idea as giving a NULL pointer for an output argument. The result is given as an input in the form of a data value reference (which is the pointer), and the subVI checks it for Not a Reference. If it is null, it does nothing.
Here is the subVI (the true case does nothing, of course):
And here is the calling VI:
The images are VI snippets, so you can drag and drop them onto a diagram to get the code.
I'd suggest you're going about this the wrong way. If the compiler is not smart enough to avoid the calculation on its own, make two versions of this VI. One that does the expensive calculation, one that does not. Then make a polymorphic VI that will allow you to switch between them. You already know at design time which version you want (because you're either wiring the output terminal or not), so just use the correct version of the polymorphic VI.
Alternatively, pass in a variable that switches on or off a Case statement for the expensive section of your calculation.
Like Underflow said, the basic answer is no.
You can have a look here to get what is probably the most official and detailed answer that will ever be provided by NI.
Extending your analogy, you can do this in LV, except LV doesn't have the concept of null that C does. You can see an example of this here.
Note that the code in the link Underflow provided will not work in an executable, because the diagrams are stripped by default when building an EXE and because the RTE does not support some of the properties and methods used there.
Sorry, I see I misunderstood the question. I thought you were asking about an input, so the idea I suggested does not apply. The restrictions I pointed do apply, though.
Why do you want to do this? There might be another solution.
Generally, no.
It is possible to do a static analysis on the code using the "scripting" features. This would require pulling the calling hierarchy, and tracking the wire references.
Pulling together a trial of this, there are some difficulties. Multiple identical subVIs on the same diagram are difficult to distinguish. Also, terminal references appear to be accessible mostly by name, which can lead to collisions with identically named terminals of other VIs.
NI has done a bit of work on a variation of this problem; check out this.
In general, the LV compiler optimizes the machine code in such a way that unused code is not even built into the executable.
This does not apply to subVIs (because there's no way of knowing that you won't try to use the value of the indicators somehow, although LV could do it if it removes the FP when building an executable, and possibly does), but there is one way you can get it to apply to a subVI - inline the subVI, which should allow the compiler to see the outputs aren't used. You can also set its priority to subroutine, which will possibly also do this, but I wouldn't recommend that.
Officially, in-lining is only available in LV 2010, but there are ways of accessing the private VI property in older versions. I wouldn't recommend it, though, and it's likely that 2010 has some optimizations in this area that older versions did not.
P.S. In general, the details of the compiling process are not exposed and vary between LV versions as NI tweaks the compiler. The whole process is supposed to have been given a major upgrade in LV 2010 and there should be a webcast on NI's site with some of the details.
