I've been using Slick's insertOrUpdate since its release in version 2.1 without any issues.
Right now I'm trying to improve my code's performance, and insertOrUpdate is the bottleneck (according to JProfiler). I thought I could improve this by using a compiled query, but as far as I can see, invoking insertOrUpdate gives me a SingleInsertOrUpdateResult, not a Query object.
Is there a way to get a compiled query for insertOrUpdate? Or is there another way to improve its performance?
Thanks in advance
It's the same as for the other insert operations like +=: you have to compile the Query and then call insertOrUpdate on the resulting CompiledQuery.
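A minimal sketch of what that looks like (the table definition and names here are illustrative, and the exact API surface varies between Slick versions, so treat this as a sketch rather than a drop-in):

```scala
import scala.slick.driver.H2Driver.simple._

case class User(id: Int, name: String)

// Illustrative table definition.
class Users(tag: Tag) extends Table[User](tag, "users") {
  def id   = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name) <> (User.tupled, User.unapply)
}
val users = TableQuery[Users]

// Compile once, up front, instead of on every call.
val compiledUsers = Compiled(users)

// insertOrUpdate is then invoked on the compiled query,
// just like += would be.
def upsert(u: User)(implicit session: Session): Int =
  compiledUsers.insertOrUpdate(u)
```

The point is that compilation happens once when `compiledUsers` is built, so repeated upserts skip the query-compilation overhead the profiler flagged.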
I'm trying to add CUDA functionality to an existing code. The desired result is that if the user has cuda runtime installed on their machine, the code will use their cuda runtime (using dlopen) to check if a CUDA enabled GPU is available and then run the CUDA code on it if that's true. Otherwise, run the original non-GPU accelerated code. However, there are some gaps in my understanding of libraries and CUDA that make this tricky for me.
The code compiles just fine if I specify the location of the required CUDA libraries (cudart and cublas) and dynamically link them. However, I tried not linking these libraries and instead wrapping everything I need using dlopen and dlsym to get handles to the functions. With that approach, compilation fails when it gets to actual device code (the definitions behind the <<<...>>> kernel launches), because it looks for things like __cudaRegisterFunction at compile time. I've replaced the angle-bracket calls with a wrapped version of cudaLaunchKernel but still get this issue, possibly because the kernel definitions themselves require some special calls.
Some fundamental things I'm unsure about concern when the symbols in a shared library have to be resolved. For example, suppose the user does not have cudart.so: is it possible for me to simply not run any cudart/CUDA code and avoid any runtime issues involving references to functions in that library? Or do all cudart.so functions need to be found in the .so regardless of whether they're used? If only the functions actually used need to be resolved, would that not obviate the need for wrapping functions via dlopen/dlsym? A related question: can you compile CUDA code without linking to cudart at all? I may be conflating two separate issues, in that it might be necessary to link against cudart.so when compiling CUDA code even though that does not mean you actually use cudart.so at runtime.
It's entirely possible I'm going about this the wrong way, so hopefully the general statement of what I'm trying to do can lead to a working answer.
Is there a way to know how much time a piece of code will take to finish, or at least an approximation?
I am thinking of something like when you are copying a file in Windows and it shows how much time is left, or when you download something and it tells you approximately how long it will take.
Is there a way to do this for Spark code, from something very simple like queries to more complex jobs?
Thanks
The Spark developers themselves have considered implementing this but decided against it because of the uncertainty in predicting the completion time of stragglers. See the discussion in this Spark issue: https://issues.apache.org/jira/browse/SPARK-5216
So you will not get that information from Spark itself. Instead you must implement your own estimation model.
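A rough model you can build yourself: track finished vs. total tasks for the stages you care about (e.g. via `SparkContext.statusTracker()` in PySpark, or a SparkListener on the JVM side) and extrapolate linearly from elapsed time. The listener wiring is omitted here; the extrapolation itself is just this (hypothetical helper, names are mine):

```python
def estimate_remaining_ms(completed, total, elapsed_ms):
    """Naive linear extrapolation: assume the remaining tasks proceed
    at the same average rate as the completed ones. Stragglers violate
    this assumption -- which is exactly the objection raised in
    SPARK-5216. Returns None when no estimate is possible yet."""
    if completed <= 0 or total <= 0 or completed > total:
        return None
    return elapsed_ms * (total - completed) // completed

# e.g. 25 of 100 tasks done after 60 s -> roughly 180 s remaining
print(estimate_remaining_ms(25, 100, 60000))
```

Expect the estimate to be unreliable early in a job and near stage boundaries; it is a progress bar, not a promise.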
I already tried Cleo, but I'm not convinced, mainly because of this bug that I recently encountered and have so far been unable to solve.
I also tried Elasticsearch, but it was too complex to run even a single query, and indexing and other features were pretty slow too.
So does anyone know of a better alternative, or is there something I'm missing in these two? Thanks.
Have you tried the Completion Suggester API? It is not complex and certainly not slow.
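For reference, the completion suggester works off a dedicated completion-typed field in the mapping, which you then query through the suggest endpoint. A minimal sketch (index, type, and field names are illustrative, and the exact request shape depends on your Elasticsearch version):

```sh
# Create an index with a completion field (names are illustrative).
curl -XPUT 'localhost:9200/songs' -d '{
  "mappings": {
    "song": {
      "properties": {
        "suggest": { "type": "completion" }
      }
    }
  }
}'

# Index a document with suggestion input.
curl -XPUT 'localhost:9200/songs/song/1' -d '{
  "suggest": { "input": ["Nirvana", "Nevermind"] }
}'

# Ask for completions of a prefix.
curl -XPOST 'localhost:9200/songs/_suggest' -d '{
  "song-suggest": {
    "text": "nir",
    "completion": { "field": "suggest" }
  }
}'
```

Because completions are served from an in-memory FST structure rather than a normal inverted-index search, lookups are typically fast even on modest hardware.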
I have a complicated C and C++ code base with heavy mathematical calculations, which I compile with the latest update of the Intel C++ compiler. With optimizations enabled, the application does not give the expected answer. After a long time I managed to reduce the problem to getting EXCEPTION_FLT_STACK_CHECK (0xc0000092). If I compile without optimization, the program works as expected.
It's single-threaded code on Windows XP x64 (the application itself is 32-bit).
MSVC 2010 gives the same good (i.e., expected) results in both Debug and Release builds.
Can someone point me to where to look? Currently I suspect a compiler bug, since I have no assembly code of my own, only compiler-generated code. I looked at the generated assembly, and it's mixed SSE/x87 code.
I'm looking for directions to investigate. Since I'm on a trial version of the Intel compiler, I don't have much time for this.
I will try /Qfp-stack-check tomorrow to see if I can find something wrong with my code.
* Update *
I just found a bug in the Intel compiler. A function returns a value on st(0), but the calling function does not remove it; that is how I get the stack exception. The workaround is to use the returned value even when I don't need it. I will try to reproduce it with code that I can share.
After this workaround, the Intel build was 35% faster than MSVC 2010 on the same code. That's the main result.
mordy
I'm using the clang_complete plugin in Vim. The plugin completes the C++ STL accurately, but its completion speed is unacceptable. Is there any way to improve clang_complete's completion speed?
Update: Yesterday I found this, and now omnicppcomplete basically meets my needs, so I have decided to keep using omnicppcomplete.vim. Thank you for your answers!
Well, I've heard that using libclang.so instead of the clang executable is much faster. However, for reliable completion you need to ignore errors, and I'm somewhat lost in the libclang API. It's really not that easy.
I don't know which version of clang_complete you are using, but there is a follow-up plugin of the same name that is still being updated. It tries to use libclang and pre-built databases for speedup, not only for completion but also for context-sensitive navigation. See here:
http://blog.wuwon.id.au/2011/10/vim-plugin-for-navigating-c-with.html
It does have some problems: it doesn't work correctly when there is something in the code that the clang compiler doesn't like. The old clang_complete could ignore that, but not this version (at least when you are using libclang; you are still free to use the old clang executable as well).
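If you stay with clang_complete itself, it also has options to use libclang instead of shelling out to the clang binary, which is where most of the speedup comes from. Something along these lines in your .vimrc (the library path is machine-specific; adjust it to wherever your libclang.so lives):

```vim
" Use libclang instead of invoking the clang executable.
let g:clang_use_library = 1
" Directory containing libclang.so -- adjust for your installation.
let g:clang_library_path = '/usr/lib/llvm-3.0/lib'
```

Check your installed plugin's documentation for the exact option names in your version before relying on these.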
If you're not using Clang 3.0 (rc) or trunk, you may be running slower code. This has recently been worked on, due to unacceptable performance, so just updating Clang might give you the boost you need.