If SpiderMonkey's GC is a conservative stack scanner, why is example 3 "bad"? Why is "root as you go" necessary? The GC should scan the stack and observe that str1 and str2 are roots, no?
You need to get your timeline straight.
A conservative stack scanner was introduced in SpiderMonkey 1.8.5.
The document linked above also mentions 22 March 2011 as the release date of SpiderMonkey 1.8.5.
The documentation you linked to hasn't been edited since August 2008.
In other words: you found a piece of very outdated documentation. Indeed, the tags at the bottom say: NeedsEditorialReview, NeedsTechnicalReview. Not something that you should rely on.
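As for the rooting question itself: before 1.8.5, SpiderMonkey's GC was exact, so any GC thing held only in a local C variable had to be registered as a root before the next call that might allocate - that is what "root as you go" means. Here is a minimal sketch in the old JSAPI style (JS_AddRoot/JS_RemoveRoot are the era-appropriate calls, though exact signatures varied between releases):

    #include "jsapi.h"

    /* "Root as you go", pre-1.8.5 style: str1 must be registered as a
     * root before the second allocation, because that allocation may
     * trigger a GC that would otherwise collect str1. */
    static JSBool concat_example(JSContext *cx, JSString **result)
    {
        JSString *str1 = JS_NewStringCopyZ(cx, "hello, ");
        if (!str1 || !JS_AddRoot(cx, &str1))
            return JS_FALSE;

        /* This allocation can run the GC; str1 survives only because
         * it was rooted above. With the conservative stack scanner
         * introduced in 1.8.5, the GC finds str1 on the C stack by
         * itself, and the explicit root becomes unnecessary. */
        JSString *str2 = JS_NewStringCopyZ(cx, "world");
        if (!str2) {
            JS_RemoveRoot(cx, &str1);
            return JS_FALSE;
        }

        *result = JS_ConcatStrings(cx, str1, str2);
        JS_RemoveRoot(cx, &str1);
        return *result != NULL;
    }

So the page you found describes the old exact-rooting world; with 1.8.5's conservative scanner the stack really is scanned and str1/str2 are found automatically.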
I am looking for ways to contribute to real-time Linux, which mainly involves the RT_PREEMPT patch.
The RT wiki page is pretty old, i.e. it says it was last updated in 2008.
Also, there is no wish list or bug list specific to RT_PREEMPT,
and even Bugzilla doesn't have much on it.
Any resource pointing towards bugs or features to be added to RT_PREEMPT would be a lot of help.
Yes, thankfully, it does seem to be alive and well.
I can understand your concern: on 21 Oct 2014, an LWN article - The future of the realtime patch set - quoted Thomas Gleixner sounding less than optimistic about the future of the PREEMPT_RT project.
The good news: recently, as of 05 Oct 2015, the Linux Foundation seems to have a working group in place for RT Linux.
Additional info here:
The Linux Foundation Announces Project to Advance Real-Time Linux
By Linux_Foundation - October 5, 2015 – 8:14am
and here:
Real-Time Linux on the go, OSADL
(quoting from the article)
“… OSADL is looking forward to a fruitful collaboration in the Linux Foundation RTL Working Group. We very much hope that a day will come in the foreseeable future when Linux mainline will immediately contain - without any further patching - the PREEMPT_RT configuration option. And we can only appeal to the other members of the RTL Working Group to not let Linux users wait too long. OSADL certainly will continue to go for it.”
I'm looking for a way to use OpenCL nicely in Haskell, and found these slides (alternative source) by Benedict Gaster. They mention an impressive “HOpenCL Contextual API” but I can't find anything tangible.
The only thing coming close to the C quasiquotation shown seems to be language-c-quote, but its OpenCL C support ends with the types; it doesn't support the extra keywords.
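To make "the extra keywords" concrete, this is roughly the kind of kernel I want to quote (a made-up example, not from the slides) - it's the __kernel and __global qualifiers, rather than the types, that I couldn't get through the parser:

    /* OpenCL C: a trivial vector-add kernel. This is plain C except
     * for the __kernel function qualifier, the __global address-space
     * qualifiers and the get_global_id() built-in. */
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *out)
    {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }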
And accelerate is something completely different; it's mainly for CUDA, with the OpenCL backend in early alpha.
Then there's HIPERFIT, where no code has been posted for a year (though the project is still running). It seems to combine the C quasiquotation and OpenCL - their bindings are even called HOpenCL - but they are just a wrapper; there's nothing to be seen of the monadic transforms etc.
None of this seems close to finished and ready to build upon…
Any news or other projects I missed?
I was looking for exactly the same thing, and I came across this: https://github.com/bgaster/hopencl
This must be what Benedict Gaster - who is no longer working for AMD - was talking about. There is not a tremendous amount of activity in the repository, but there was an update about two months ago, which is still better than a year.
EDIT: Actually J. Garret Morris (the other author of HOpenCL) created a fork: https://github.com/jgbm/hopencl
I'm only seeing your post here now. I'm the author of the HIPERFIT-hopencl package and also somewhat responsible for the language-c-quote OpenCL C support. I apologize for the naming confusion and for the fact that we now have two hopencl packages; I have mailed Benedict Gaster and J. Garret Morris about how to resolve that.
What do you find lacking in language-c-quote? Could you give an example of OpenCL C code that it doesn't handle?
(PS: I'm new here and could not find a way to comment on your post, so I had to post this as an answer - perhaps I just haven't reached the right "clearance level" yet.)
Do you know if new editions of ULK or Robert Love's books are going to be released? Or maybe another book is in the works?
The latest books are based on the 2.6.18 kernel, so I'm wondering if anything newer is coming.
The third edition of Robert Love's Linux Kernel Development came out less than two years ago and is based on 2.6.34. I don't think there have been any substantial changes to the kernel since.
http://blog.rlove.org/2010/07/linux-kernel-development-third-edition.html
There are two good and mostly still accurate books on the Linux kernel. I'm not aware of anyone writing a new book just now.
If you just care about the higher-level structures, how the scheduler works and things like that, use Robert Love's third edition.
If you want to know about all the various driver subsystems, choose the Venkateswaran book.
Note that the Venkateswaran book is now exactly three years old and is starting to show its age.
All other kernel books (including Jonathan Corbet's, Bovet/Cesati's and others) are no longer worth reading: too many details have changed.
In particular, anything pre-2.6.24 should be avoided, because the updated timer framework that was finalized in that release had quite a big ripple effect.
The version number went from 2.6 to 3.0 just because of the kernel's 20th anniversary. 3.0 does not bring many breathtaking ideas, and most books relevant for 2.6.x are also relevant for 3.0.x.
https://lwn.net/Articles/452531/
I used Reflector 6.8 to disassemble a binary. It shows the class tree view, and even the declarations of the classes' methods, but "Expand Methods" fails with an error like "Block statement count of 0 during conditional expression translation".
Then I tried Telerik's JustDecompile (in beta); it worked fine for one of the 10-15 assemblies I have, but on another assembly its memory usage simply shoots up to 1.5 GB and it hangs.
Is there any other stable decompiler I can use to generate C# code?
The only other one that I know of is ILSpy.
You should report errors in Reflector to the guys at Red Gate.
The no-op loops were probably added by some obfuscator.
Based upon the available information, I believe you may be using an obfuscated assembly.
The current Telerik JustDecompile beta (2011.1.728.1) does not offer support for decompiling obfuscated assemblies. It is very efficient at decompiling non-obfuscated assemblies, though, and its memory footprint is getting smaller with every update. The memory usage you observed is unusual. If you can share more detail over email about the assembly you’re using, we’ll try to reproduce and fix this specific case (chris.eargle [at] telerik.com).
Meanwhile, if you’d like to see more support in future JustDecompile updates for obfuscated assemblies, please share your feedback on the JustDecompile UserVoice so others can vote for the idea: http://justdecompile.uservoice.com.
It's been out for almost five years.
It's got tens of millions of users.
I suspect several businesses rely on it.
How is it still "beta"? At what point will it no longer be beta? When it completely owns the e-mail market?
According to a Google spokesman:
"We have very high internal metrics
our consumer products have to meet
before coming out of beta. Our teams
continue to work to improve these
products and provide users with an
even better experience. We believe
beta has a different meaning when
applied to applications on the Web,
where people expect continual
improvements in a product. On the
Web, you don't have to wait for the
next version to be on the shelf or an
update to become available.
Improvements are rolled out as they're
developed. Rather than the packaged,
stagnant software of decades past,
we're moving to a world of regular
updates and constant feature
refinement where applications live in
the cloud."
Wikipedia defines Beta Version as:
A 'beta version' is the first version released outside the organization or community that develops the software, for the purpose of evaluation or real-world black/grey-box testing. The process of delivering a beta version to the users is called beta release. Beta level software generally includes all features, but may also include known issues and bugs of a less serious variety.
So this confirms that Google's use of the word is non-standard. I found this Slashdot article, Has Google Redefined Beta?, to be pretty interesting.
I think Google borrowed the word for their own ends, and it shouldn't be taken at face value under the traditional definition of "beta". It simply looks better to put "Beta" next to your app's name than to say, "We are still constantly adding features to this product".
Well, it was down for 30 hours about two months ago. It looks like even after five years there are a few kinks to iron out.
Google itself was in beta for years. The founders have much higher standards for their products than other companies.
Just like C++ wasn't a standard for quite a while :)
Also, they continuously add and change features, so it is a beta.
I suspect that beta, in this case, means that they are avoiding the hassles and complications of being accused of being a monopoly. Conspiracy anybody?
It is (at least officially) in perpetual beta state.
http://en.wikipedia.org/wiki/Perpetual_beta
It's not in beta anymore as of July 2009 - so if you're still seeing a 'beta' logo, it's because someone enabled the 'back to beta' feature. Yes, really...