I have already tried cleo, but I'm not convinced, mainly because of a bug I recently encountered and have not yet been able to solve.
I also tried Elasticsearch, but it was too complex to run even a single query, and indexing and other features were pretty slow too.
So does anyone know a better alternative, or is there something I am missing in these two? Thanks.
Have you tried the Completion Suggester API? It is not complex and certainly not slow.
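To give a feel for it, here is a rough sketch using the official elasticsearch-py client against a local 7.x cluster; the index name "products" and the field name "suggest" are made up for the example, and the exact client parameters vary a little between versions:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A field of type "completion" backs the suggester (index and field names are placeholders).
es.indices.create(index="products", body={
    "mappings": {"properties": {"suggest": {"type": "completion"}}}
})
es.index(index="products", body={"suggest": {"input": ["laptop", "laptop bag"]}}, refresh=True)

# Ask for prefix completions on "lap".
resp = es.search(index="products", body={
    "suggest": {"product-suggest": {"prefix": "lap", "completion": {"field": "suggest"}}}
})
for option in resp["suggest"]["product-suggest"][0]["options"]:
    print(option["text"])

Because the completion field is kept in an in-memory FST, prefix lookups like this are typically much faster than running a full query for every keystroke.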
I'm using pyperplan in my project; however, I'm very limited in my choice of planning domains, as pyperplan does not support PDDL v2.
Do you know of any pyperplan fork that has this functionality? Basic +1 action costs would be enough; they are my only problem right now.
Failing that, do you know of any pyperplan-like alternative that supports more modern versions of PDDL?
I'm planning to implement the functionality on my own; I've read through the code, and it shouldn't be THAT hard, since the codebase looks pretty well set up for it.
I went through all the forks listed on GitHub, but none of them has this feature. I've also tried searching for such a fork myself, as well as for any related articles, but nothing of value turned up.
I will be really grateful for any tips!
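For context, the "+1 action costs" mentioned above are usually expressed through the standard :action-costs extension: a numeric total-cost fluent that each action increases and that the problem's metric minimizes. A minimal, hypothetical domain fragment (names invented for illustration) looks roughly like this:

(define (domain example)
  (:requirements :strips :action-costs)
  (:predicates (at ?x))
  (:functions (total-cost) - number)
  (:action move
    :parameters (?from ?to)
    :precondition (at ?from)
    :effect (and (not (at ?from)) (at ?to)
                 (increase (total-cost) 1))))

The matching problem file would then initialize the fluent with (= (total-cost) 0) in :init and add (:metric minimize (total-cost)); parsing and honoring those few constructs is essentially what such a fork would need to add.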
I'm new to JavaScript and NodeJS, and I've been looking at some deprecated code in which 'require.paths.unshift()' appears at the start of many files. To my understanding, this syntax was removed from NodeJS long ago, but in order to understand and potentially fix up the old code, I need to understand its meaning and context.
I've tried looking for it online, but I couldn't find much on it. If someone could explain the context of its use, or point me to a newer equivalent, I'd much appreciate it.
We have a large number of repositories, and we want to implement a semantics-based (functionality-based) code search over them. We have already implemented keyword-based code search, for which we crawled all the repository files and indexed them with Elasticsearch. But that doesn't solve our problem: some of the repositories are poorly commented and documented, so searching for specific code/libraries becomes difficult.
So my question is: are there any open-source libraries, or any previous work in this field, that could help us index the semantics of the repository files so that searching the code becomes easier? This would also help us with code reuse. I have found research papers such as "Semantic code browsing" and "Semantics-based code search", but they were of no use, as no actual implementation was given. Can you please suggest some good libraries or projects that could help me achieve this?
P.S.: Companies like Koders, Google, and cocycles.com started code search based on functionality, but most of them have shut down those operations without any proper explanation. Can anyone tell me what kinds of difficulties they faced?
Not sure if this is what you're looking for, but I wrote https://github.com/google/zoekt, which uses ctags-based understanding of code to improve ranking.
Take a look at insight.io. It provides semantic search and browsing.
I know this question seems subjective, but it's really pretty simple. As a long-term user of, and part-time contributor to, SubSonic, I'm interested in what the community thinks would be the single best way to improve it.
So what's your opinion: how would you make SubSonic even better? What one thing would make you more likely to use, recommend, evangelise, or stop complaining about it?
As I said, I know this is a bit subjective and may get closed, but as SO is the main support forum for SubSonic, I think this could be a useful way to solicit opinions and/or contributions.
To keep this from turning into a general discussion, here are the rules:
No omnibus wishes
No duplicate wishes
Up-vote those you agree with rather than re-posting them
Ability to run in MediumTrust out of the box
In all honesty, the biggest thing that's lacking is solid documentation and how-tos.
It's got better, but I think it needs a lot more.
Ability to automatically map collections of other objects, like Fluent NHibernate does.
When SubSonic throws an exception that isn't clear, I'd like to be able to use Google or some other mechanism to discover more information about how to keep my development effort moving forward. Right now it's too easy to get into a situation where you have to go spelunking into the SubSonic source code since SubSonic doesn't seem to be very proactive when the user goes off the "happy path".
This critique is hardly specific to SubSonic. Many (most?) software products suffer from this same problem. I have not really had this problem with NHibernate though, which is SubSonic's most clear competitor.
Faster and higher-quality releases
Binary types for SimpleRepository (Images)
Left Outer Joins
Support more database-independent code generation...
What I mean by this is that it is a real pain if your application needs to talk to different databases (e.g. SQL Server and Oracle) and you only want one set of generated DAL objects. I would love the option of specifying that any SQL sent to the database should be as compatible with as many engines as possible, since right now, if you generate your objects targeting SQL Server, all queries are of the form:
SELECT [schema].[table_name] FROM ....
Sadly, this does not work in Oracle, so basically you're out of luck there.
Perhaps this isn't a huge concern for most of you, but I'm currently writing a commercial app that touts one of its main features as being able to run on various database engines just by changing its configuration and I chose SubSonic because I thought it could handle the job pretty easily, but I'm honestly having second thoughts now because of all the hoops I may have to jump through just to get this to work correctly under different environments.
Support MS Access, Postgres, and Firebird databases :)...
I use and love Berkeley DB, but it seems to bog down once you get near a million or so entries, especially on inserts. I've tried memcachedb, which works, but it's not being maintained, so I'm worried about using it in production. Does anyone have any similar solutions? Basically, I want to be able to do key lookups on a large (possibly distributed) dataset (40+ million entries).
Note: Anything NOT in Java is a bonus. :-) It seems most things today are going the Java route.
Have you tried Project Voldemort?
I would suggest you have a look at:
Metabrew key-value store blog post
It has a big list of key-value stores with a little discussion of each. If you still have doubts, you could join the so-called NoSQL Google group and ask for help there.
Redis is insanely fast and actively developed. It is written in C (no Java) and compiles out of the box on POSIX operating systems (no dependencies).
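As a rough illustration of the key-lookup workload in question, here is a minimal sketch with the redis-py client against a local server; the host, port, and key names are assumptions, not from the original post:

import redis

# Connect to a local Redis server (adjust host/port for your setup).
r = redis.Redis(host="localhost", port=6379, db=0)

# Simple key writes and lookups. Redis keeps the dataset in memory,
# so 40M+ entries mainly means budgeting enough RAM or sharding across nodes.
r.set("user:12345", "some serialized value")
value = r.get("user:12345")   # returns bytes, or None if the key is absent
print(value)

# mget performs batched lookups in a single round trip.
print(r.mget(["user:12345", "user:99999"]))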
Did you try the hash backend? It should be faster for inserts and key lookups.
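To make that concrete, here is a small sketch of opening a Berkeley DB store with the hash access method instead of the default B-tree, using the bsddb3 Python bindings; the file and key names are made up, and the original poster may well be using a different language binding:

from bsddb3 import db  # newer releases of these bindings are published as "berkeleydb"

# DB_HASH selects the hash access method; DB_BTREE is the usual default.
store = db.DB()
store.open("lookup.db", None, db.DB_HASH, db.DB_CREATE)

store.put(b"user:12345", b"some value")
print(store.get(b"user:12345"))   # b'some value', or None if the key is missing

store.close()

For pure point lookups by key, the hash access method avoids the B-tree's ordering overhead, which is where the insert slowdown described above tends to show up.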