Can't execute Sphinx search: "search: command not found"

I have an openSUSE 13.2 operating system and I installed the Sphinx search engine. Everything worked great until I tried to execute the search command in a terminal:
search: command not found
Before that I configured Sphinx, ran indexer, and ran searchd, and everything worked.
Here is the searchd output:
vitalik-opensuse:/home/vitalik # searchd
Sphinx 2.2.8-id64-release (rel22-r4942)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinx/sphinx.conf'...
listening on all interfaces, port=9312
listening on all interfaces, port=9306
precaching index 'deal'
precaching index 'deal-have'
precaching index 'deal-want'
precached 3 indexes in 0.086 sec
Here is the indexer output:
vitalik-opensuse:/home/vitalik # indexer --all
Sphinx 2.2.8-id64-release (rel22-r4942)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinx/sphinx.conf'...
indexing index 'deal'...
collected 7 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 7 docs, 1355 bytes
total 0.021 sec, 62708 bytes/sec, 323.95 docs/sec
indexing index 'deal-have'...
collected 7 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 7 docs, 1355 bytes
total 0.007 sec, 193240 bytes/sec, 998.28 docs/sec
indexing index 'deal-want'...
collected 7 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 7 docs, 1355 bytes
total 0.006 sec, 207758 bytes/sec, 1073.29 docs/sec
skipping non-plain index 'dist1'...
total 12 reads, 0.000 sec, 0.8 kb/call avg, 0.0 msec/call avg
total 36 writes, 0.000 sec, 0.4 kb/call avg, 0.0 msec/call avg
But I still can't execute the search command to try searching from the terminal.
The Sphinx version is 2.2.8, installed from the
opensuse-13.2-server-search-repository
What am I doing wrong?
Thank you.

The search tool no longer exists; it was removed a few versions ago.
It had long been broken and was never a realistic search experience.
Use test.php etc. from the api folder to test searchd.
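For example (a hedged sketch: the api folder location and the exact test.php options depend on how the package was installed; the index name 'deal' and port 9306 are taken from the searchd output above), you can query the running daemon either through the bundled PHP script or over the SphinxQL port:
php api/test.php -i deal "some keywords"
mysql -h 127.0.0.1 -P 9306 -e "SELECT * FROM deal WHERE MATCH('some keywords');"
Both talk to the already-running searchd, which is the supported way to test searches now that the old search binary is gone.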

Related

Change in max-old-space-size between nodejs 16 and nodejs 14?

We are using --max-old-space-size=8192 to run our complete E2E jest 26 tests with npm test.
node --max-old-space-size=8192 node_modules/jest/bin/jest --runInBand --coverage --detectOpenHandles --logHeapUsage --no-cache
We upgraded to node 16.14.2 and suddenly the tests stop at exactly 4 GB with an OOM, under Windows as well as Ubuntu 20.04.4 LTS.
The same behavior occurs with node 17.8.0.
I switched back to node 14.18.1 and see the following performance graph in Process Explorer.
With node 16 I get an OOM at 4 GB at the beginning of the E2E test.
<--- Last few GCs --->
[14184:00000277700CA440] 1097059 ms: Mark-sweep (reduce) 2799.4 (3161.8) -> 2798.8 (3123.2) MB, 1520.8 / 0.4 ms (average mu = 0.099, current mu = 0.064) last resort GC in old space requested
[14184:00000277700CA440] 1098475 ms: Mark-sweep (reduce) 2798.8 (3123.2) -> 2798.7 (3116.2) MB, 1416.0 / 1.6 ms (average mu = 0.053, current mu = 0.000) last resort GC in old space requested
I switched between the node versions with nvm-windows.
The packages were all installed with npm from node 16. They run perfectly on node 14.
I tried several other space-related V8 options, but none had a positive effect on node 16 or 17.
I didn't want to open an issue against node on GitHub yet, as the problem cannot be isolated easily.
Any suggestions?
Update:
My first deep finding in node 16's V8 is that --huge-max-old-generation-size is now true by default.
This limits the memory to 4 GB.
See also https://github.com/v8/v8/commit/b2f75b008d14fd1e1ef8579c9c4d2bc7d374efd3,
as well as Heap::MaxOldGenerationSize and Heap::HeapSizeFromPhysicalMemory.
As far as I understand, max-old-space-size is limited down to 4 GB there (at least when huge-max-old-generation-size is on).
Setting --no-huge-max-old-generation-size --max-old-space-size=8192 still has no effect, and I hit the OOM at 4 GB again.
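One quick way to sanity-check whether those flags reach V8 at all (a hedged one-liner, not from the original post; the printed heap_size_limit should land near the requested 8192 MB):
node --no-huge-max-old-generation-size --max-old-space-size=8192 -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024)"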
Update 2:
I tracked the V8 heap statistics and, just before the OOM at 4 GB, I see the following from v8.getHeapSpaceStatistics() and v8.getHeapStatistics():
total_heap_size : 3184 MB
total_heap_size_executable : 127 MB
total_physical_size : 3184 MB
total_available_size : 9162 MB
used_heap_size : 2817 MB
heap_size_limit : 12048 MB
malloced_memory : 2 MB
peak_malloced_memory : 44 MB
does_zap_garbage : 0 MB
number_of_native_contexts : 0 MB
number_of_detached_contexts : 0 MB
read_only_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
old_space : size : 2425 MB, used: 2111 MB, avail: 268 MB, phy: 2425 MB
code_space : size : 127 MB, used: 110 MB, avail: 8 MB, phy: 127 MB
map_space : size : 44 MB, used: 39 MB, avail: 4 MB, phy: 44 MB
large_object_space : size : 555 MB, used: 541 MB, avail: 0 MB, phy: 555 MB
code_large_object_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
new_large_object_space : size : 0 MB, used: 0 MB, avail: 15 MB, phy: 0 MB
new_space : size : 32 MB, used: 13 MB, avail: 2 MB, phy: 32 MB
<--- Last few GCs --->
[7940:000001B87F118E70] 546939 ms: Mark-sweep (reduce) 2774.1 (3123.5) -> 2773.6 (3084.7) MB, 498.6 / 0.3 ms (average mu = 0.080, current mu = 0.044) last resort GC in old space requested
[7940:000001B87F118E70] 547453 ms: Mark-sweep (reduce) 2773.6 (3084.7) -> 2773.4 (3077.2) MB, 513.2 / 0.3 ms (average mu = 0.040, current mu = 0.000) last resort GC in old space requested
<--- JS stacktrace --->
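For reference, a minimal sketch of how numbers like the ones above can be collected while the tests run (the interval and formatting are illustrative; the v8 module calls are standard Node.js APIs):
const v8 = require('v8');
const mb = (bytes) => Math.round(bytes / 1024 / 1024) + ' MB';

// Periodically dump heap totals and per-space statistics.
setInterval(() => {
  const heap = v8.getHeapStatistics();
  console.log('heap_size_limit:', mb(heap.heap_size_limit), 'used_heap_size:', mb(heap.used_heap_size));
  for (const space of v8.getHeapSpaceStatistics()) {
    console.log(space.space_name, 'size:', mb(space.space_size), 'used:', mb(space.space_used_size), 'avail:', mb(space.space_available_size));
  }
}, 10000).unref(); // unref() so the timer does not keep the process alive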
Update 3:
I upgraded to jest 27.5.1 and there is no difference: node 14 is fine, but node 16/17 get stuck at 4 GB while their heap statistics report a huge amount of available space.
For now the only solution is to use node 16.10.0 for running the jest tests.
The problem is discussed in github.com/facebook/jest/issues/11956, but none of the suggested jest config changes seem to work in general.
Large jest test suites still run into the memory leak (or memory limit).
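If you also manage versions with nvm-windows as described above, pinning the test run to that release looks roughly like this (illustrative commands):
nvm install 16.10.0
nvm use 16.10.0
npm test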

app/.heroku/node/bin/node taking up a lot of memory and crashing

Recently my Heroku application started getting memory errors. I haven't really changed anything that could affect it. After some soak tests and debugging, I noticed there are two node processes running on the dyno.
Here is an example of what I see from a ps command:
~ $ ps aux | grep "node"
u18923 46 0.0 0.0 2616 612 ? S 17:54 0:00 sh -c node ${NODE_INSPECT_FLAG} --gc-interval=100 --max-old-space-size=460 build-server/server/index.js
u18923 47 0.6 0.1 982148 79796 ? Sl 17:54 0:03 node --gc-interval=100 --max-old-space-size=460 build-server/server/index.js
u18923 72 12.9 0.3 1079028 195700 ? Sl 17:54 1:02 /app/.heroku/node/bin/node --gc-interval=100 --max-old-space-size=460 /app/build-server/server/index.js
The /app/.heroku/node/bin/node memory consumption goes up and eventually causes the app to crash, since the environment is limited to 512 MB of memory.
Why are there two node processes? Why is the one coming from /app/.heroku taking up so much more memory and eventually crashing? How can I fix this situation?
This was happening because I was using https://www.npmjs.com/package/throng with a concurrency of 1. There does seem to be some sort of memory leak locally as well; I had been watching the "forker" (parent) process instead of the actual worker process that is growing.
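For context, a hedged sketch of that kind of setup (the call signature and option names vary between throng versions): with a concurrency of 1, throng still forks one worker, so the dyno runs a small parent process plus the server process whose memory actually grows.
const throng = require('throng');

// The parent ("forker") process stays small; the worker below is the real server
// and is the process whose memory should be watched.
throng({ workers: 1, lifetime: Infinity }, () => {
  require('./build-server/server/index.js');
});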

How to fit a 3D surface to data in Excel?

I have a 10x16 table with 3D surface data in Excel. I would like to fit this data to a surface that I can then use to calculate new points.
I made some VBA code that does 2D interpolation, but the above would suit me better and I cannot seem to make it happen.
Is there any way to make this happen in Excel? Or can someone point me to software capable of doing this?
I'm pretty much posting this as a last resort.
The following snippet doesn't run, but it is a representation of the data:
' 172.74 322.77 472.80 770.51 1068.23 1365.94 1803.76 2241.58 2679.40 3126.00
10.67 1.6 1.776 1.96 2.24 4.132 5.12 5.756 7 8.2 9.4
15.33 1.6 1.772 1.96 2.272 4.012 5.156 5.52 7 8.2 9.4
18.67 1.6 1.836 2.044 2.58 4.024 5.036 5.4 7 8.2 9.4
27.67 1.6 1.848 2.088 2.64 3.796 4.708 5.948 7 8.2 9.4
32.00 1.6 1.824 2.088 2.62 3.8 4.512 5.832 7 8.2 9.4
37.00 1.6 1.836 2.152 2.54 3.996 4.556 5.02 7 8.2 9.4
46.67 1.6 1.832 2.14 2.648 3.884 4.62 4.796 7 8.2 9.4
51.67 1.6 1.892 2.1 2.692 3.54 4.876 5.312 6.836 8.2 9.4
60.00 1.6 1.872 2.076 2.748 3.688 4.66 5.768 6.932 8.404 9.6
68.33 1.6 1.864 2.064 2.712 3.62 4.744 5.552 7.016 8.384 9.69
76.67 1.6 1.888 2.152 2.736 3.536 4.716 5.656 6.568 8.336 9.7
83.33 1.6 1.864 2.16 2.7 3.716 4.708 5.4 6.508 8.352 9.90
91.67 1.6 1.896 2.216 2.756 3.584 4.42 5.52 6.472 8.488 9.97
100.00 1.6 1.744 2.036 2.808 3.708 4.356 5.672 6.728 8.42 10.
108.33 1.6 1.644 1.932 2.74 3.464 4.348 5.312 7.26 8.212 10.07
116.67 1.6 1.684 2.376 2.688 3.664 4.564 5.42 7.072 8.892 10.33
You can plot this data as a 3D-surface chart and it will look like this:
The rest of your question requires a LOT more detail about what you want to achieve. Don't just throw a concept into the ring. Edit your question and explain what you want to achieve, what you have tried, and where you are stuck.
This answer will probably be downvoted because it is not an answer to your question, but the request for clarification was not suitable for a comment.
Over to you. Add a comment to your question when you've added the required detail.

How to view the v8.log created after running node-profiler

I ran node-profiler to check the CPU profiling of my node.js server. It created a log called v8.log. I also downloaded the node-tick-processor, which produced the data below:
Statistical profiling result from v8.log, (298287 ticks, 2 unaccounted, 0 excluded).
[Unknown]:
ticks total nonlib name
2 0.0%
[Shared libraries]:
ticks total nonlib name
295618 99.1% 0.0% /lib/x86_64-linux-gnu/libc-2.19.so
1999 0.7% 0.0% /usr/local/bin/node
119 0.0% 0.0% /lib/x86_64-linux-gnu/libpthread-2.19.so
59 0.0% 0.0% 7fff509b3000-7fff509b5000
5 0.0% 0.0% /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
I have no idea how to interpret the above log. Any help regarding this would be much appreciated.
I recently had the same problem. Try installing the "tick" processor instead; I think it is more compatible with many versions of node: https://www.npmjs.com/package/tick. Also, if you are on a Mac, make sure you pass in "--mac" when running it.
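The typical workflow looks roughly like this (a hedged sketch; "server.js" stands in for your entry point, and the exact binary name and flags depend on the tick package version installed):
npm install -g tick
node --prof server.js        # produces v8.log in the working directory
node-tick-processor v8.log   # add --mac when the log was recorded on macOS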

Oracle Coherence segmentation fault

I just installed Oracle Coherence 3.6 on RHEL 5.5. When I execute cache-server.sh I get a lot of GC warnings about allocating large blocks and then it fails with a segmentation fault. Suggestions? Here is the stack:
GC Warning: Repeated allocation of very large block (appr. size 1024000):
May lead to memory leak and poor performance.
GC Warning: Repeated allocation of very large block (appr. size 1024000):
May lead to memory leak and poor performance.
./bin/cache-server.sh: line 24: 6142 Segmentation fault $JAVAEXEC -server -showversion $JAVA_OPTS -cp "$COHERENCE_HOME/lib/coherence.jar" com.tangosol.net.DefaultCacheServer $1
[root@localhost coherence_3.6]# swapon -s
Filename Type Size Used Priority
/dev/mapper/VolGroup00-LogVol01 partition 2097144 0 -1
[root@localhost coherence_3.6]# free
total used free shared buffers cached
Mem: 3631880 662792 2969088 0 142636 353244
-/+ buffers/cache: 166912 3464968
Swap: 2097144 0 2097144
[root@localhost coherence_3.6]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
147G 6.7G 133G 5% /
/dev/sda1 99M 12M 82M 13% /boot
tmpfs 1.8G 0 1.8G 0% /dev/shm
/dev/hdb 2.8G 2.8G 0 100% /media/RHEL_5.5 Source
/dev/hda 57M 57M 0 100% /media/VBOXADDITIONS_4.2.16_86992
[root@localhost coherence_3.6]#
I haven't seen this issue before, but to start, I'd suggest the following:
Check for Linux updates. The JVMs, for example, now try to use large pages, and there have been some bugs in RHEL related to large pages that are fixed in the latest versions.
Download the latest Java 7 JDK. While no JDK is entirely bug-free, we have done extensive testing with JDK 7 patch levels 15, 21 and 40.
Download the latest version of Coherence. Coherence 12.1.2 is now out, but if you don't want to go for the very latest, then Coherence 3.7.1 is the suggested version. (The release after 3.7.1 is called 12.1.2. That is to align with Oracle versioning.)
I would check your space allocation on disk and memory/swap. You are probably running out of space somewhere.
df -h
free
You could also check your Java version - make sure that you are well patched.
Are you using Java 6 or Java 7?
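For example, you can check which JVM is on the PATH that cache-server.sh will pick up (illustrative commands; the script's own JAVAEXEC variable may point elsewhere):
which java
java -version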
There are Oracle forums for Coherence - you should try asking the question there - that's where the real experts hang out.
