Swift Combine - Does the zip operator retain all values that it hasn't had a chance to publish?

Here's the code I'm wondering about:
final class Foo {
    var subscriptions = Set<AnyCancellable>()

    init() {
        Timer
            .publish(every: 2, on: .main, in: .default)
            .autoconnect()
            .zip(Timer.publish(every: 3, on: .main, in: .default).autoconnect())
            .sink {
                print($0)
            }
            .store(in: &subscriptions)
    }
}
This is the output it produces:
(2020-12-08 15:45:41 +0000, 2020-12-08 15:45:42 +0000)
(2020-12-08 15:45:43 +0000, 2020-12-08 15:45:45 +0000)
(2020-12-08 15:45:45 +0000, 2020-12-08 15:45:48 +0000)
(2020-12-08 15:45:47 +0000, 2020-12-08 15:45:51 +0000)
Would this code eventually crash from memory shortage? It seems like the zip operator is storing every value that it receives but can't yet publish.

zip does not limit its upstream buffer size. You can prove it like this:
import Combine
let ticket = (0 ... .max).publisher
    .zip(Empty<Int, Never>(completeImmediately: false))
    .sink { print($0) }
The (0 ... .max) publisher will try to publish 2⁶³ values synchronously (that is, before returning control to the Zip subscriber). Run this and watch the memory gauge in Xcode's Debug navigator. It will climb steadily. You probably want to kill it after a few seconds, because it will eventually use up an awful lot of memory and make your Mac unpleasant to use before finally crashing.
If you run it in Instruments for a few seconds, you'll see that all of the allocations happen in this call stack, indicating that Zip internally uses a plain old Array to buffer the incoming values.
66.07 MB 99.8% 174 main
64.00 MB 96.7% 45 Publisher<>.sink(receiveValue:)
64.00 MB 96.7% 42 Publisher.subscribe<A>(_:)
64.00 MB 96.7% 41 Publishers.Zip.receive<A>(subscriber:)
64.00 MB 96.7% 12 Publisher.subscribe<A>(_:)
64.00 MB 96.7% 2 Empty.receive<A>(subscriber:)
64.00 MB 96.7% 2 AbstractZip.Side.receive(subscription:)
64.00 MB 96.7% 2 AbstractZip.receive(subscription:index:)
64.00 MB 96.7% 2 AbstractZip.resolvePendingDemandAndUnlock()
64.00 MB 96.7% 2 protocol witness for Subscription.request(_:) in conformance Publishers.Sequence<A, B>.Inner<A1, B1, C1>
64.00 MB 96.7% 2 Publishers.Sequence.Inner.request(_:)
64.00 MB 96.7% 1 AbstractZip.Side.receive(_:)
64.00 MB 96.7% 1 AbstractZip.receive(_:index:)
64.00 MB 96.7% 1 specialized Array._copyToNewBuffer(oldCount:)
64.00 MB 96.7% 1 specialized _ArrayBufferProtocol._forceCreateUniqueMutableBufferImpl(countForBuffer:minNewCapacity:requiredCapacity:)
64.00 MB 96.7% 1 swift_allocObject
64.00 MB 96.7% 1 swift_slowAlloc
64.00 MB 96.7% 1 malloc
64.00 MB 96.7% 1 malloc_zone_malloc
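If strict pairwise matching isn't actually required, one way to avoid the unbounded buffer is combineLatest, which keeps only the latest value from each upstream. This is a sketch of an alternative with different semantics (you get the most recent pair, not index-matched pairs), not a change to zip itself:
import Combine
import Foundation

final class Foo {
    var subscriptions = Set<AnyCancellable>()

    init() {
        Timer
            .publish(every: 2, on: .main, in: .default)
            .autoconnect()
            // combineLatest keeps only the latest element from each upstream,
            // so nothing queues up; pairs are no longer matched by index.
            .combineLatest(Timer.publish(every: 3, on: .main, in: .default).autoconnect())
            .sink { print($0) }
            .store(in: &subscriptions)
    }
}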

Related

kmalloc-256 seems to be taking most of the memory. How can I free it?

I have a Linux instance (Amazon Linux, Linux ip-xxx 4.9.20-11.31.amzn1.x86_64 #1) which runs Jenkins. It occasionally stops working because it runs out of the memory needed for a job.
Based on my investigation with the free command and /proc/meminfo, it seems that Slab is taking up most of the memory available on the instance.
[root@ip-xxx ~]# free -tm
total used free shared buffers cached
Mem: 7985 7205 779 0 19 310
-/+ buffers/cache: 6876 1108
Swap: 0 0 0
Total: 7985 7205 779
[root@ip-xxx ~]# cat /proc/meminfo | grep "Slab\|claim"
Slab: 6719244 kB
SReclaimable: 34288 kB
SUnreclaim: 6684956 kB
I could find the way to purge dentry cache by running echo 3 > /proc/sys/vm/drop_caches, but how can I purge kmalloc-256? Or, is there a way to find which process is using kmalloc-256 memory space?
[root@ip-xxx ~]# slabtop -o | head -n 15
Active / Total Objects (% used) : 26805556 / 26816810 (100.0%)
Active / Total Slabs (% used) : 837451 / 837451 (100.0%)
Active / Total Caches (% used) : 85 / 111 (76.6%)
Active / Total Size (% used) : 6696903.08K / 6701323.05K (99.9%)
Minimum / Average / Maximum Object : 0.01K / 0.25K / 8.00K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
26658528 26658288 99% 0.25K 833079 32 6664632K kmalloc-256
21624 21009 97% 0.12K 636 34 2544K kernfs_node_cache
20055 20055 100% 0.19K 955 21 3820K dentry
10854 10646 98% 0.58K 402 27 6432K inode_cache
10624 9745 91% 0.03K 83 128 332K kmalloc-32
7395 7395 100% 0.05K 87 85 348K ftrace_event_field
6912 6384 92% 0.02K 27 256 108K kmalloc-16
6321 5581 88% 0.19K 301 21 1204K cred_jar
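One common way to attribute kmalloc-256 usage is SLUB allocation tracking (a general debugging technique, not something from this question, and it assumes the kernel was booted with slub_debug=U, which is not enabled by default): with it enabled, per-call-site allocation counts appear under /sys/kernel/slab.
# Assumes CONFIG_SLUB_DEBUG and booting with slub_debug=U; without that,
# alloc_calls does not exist for this cache.
cat /sys/kernel/slab/kmalloc-256/alloc_calls | sort -rn | head
Each line lists how many live objects came from a given call site, which usually points at the responsible driver or subsystem.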

Linux: Cannot allocate more than 32 GB/64 GB of memory in a single process due to virtual memory limit

I have a computer with 128 GB of RAM, running Linux (3.19.5-200.fc21.x86_64). However, I cannot allocate more than ~30 GB of RAM in a single process. Beyond this, malloc fails:
#include <stdlib.h>
#include <iostream>

int main()
{
    size_t gb_in_bytes = size_t(1) << size_t(30); // 1 GB in bytes (2^30).
    // try to allocate 1 block of 'i' GB.
    for (size_t i = 25; i < 35; ++i) {
        size_t n = i * gb_in_bytes;
        void *p = ::malloc(n);
        std::cout << "allocation of 1 x " << (n / double(gb_in_bytes)) << " GB of data. Ok? "
                  << ((p == 0) ? "nope" : "yes") << std::endl;
        ::free(p);
    }
}
This produces the following output:
/tmp> c++ mem_alloc.cpp && a.out
allocation of 1 x 25 GB of data. Ok? yes
allocation of 1 x 26 GB of data. Ok? yes
allocation of 1 x 27 GB of data. Ok? yes
allocation of 1 x 28 GB of data. Ok? yes
allocation of 1 x 29 GB of data. Ok? yes
allocation of 1 x 30 GB of data. Ok? yes
allocation of 1 x 31 GB of data. Ok? nope
allocation of 1 x 32 GB of data. Ok? nope
allocation of 1 x 33 GB of data. Ok? nope
allocation of 1 x 34 GB of data. Ok? nope
I searched for quite some time, and found that this is related to the maximum virtual memory size:
~> ulimit -all
[...]
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
virtual memory (kbytes, -v) 32505856
[...]
I can increase this limit to ~64 GB via ulimit -v 64000000, but not further. Beyond this, I get operation not permitted errors:
~> ulimit -v 64000000
~> ulimit -v 65000000
bash: ulimit: virtual memory: cannot modify limit: Operation not permitted
~> ulimit -v unlimited
bash: ulimit: virtual memory: cannot modify limit: Operation not permitted
Some more searching revealed that in principle it should be possible to set these limits via the "as" (address space) entry in /etc/security/limits.conf. However, by doing this, I could only reduce the maximum amount of virtual memory, not increase it.
Is there any way to either lift this limit of virtual memory per process completely, or to increase it beyond 64 GB? I would like to use all of the physical memory in a single application.
EDIT:
Following Ingo Leonhardt, I tried ulimit -v unlimited after logging in as root, not as a standard user. Doing this solves the problem for root (the program can then allocate all the physical memory while logged in as root). But this works only for root, not for other users. However, at least this means that in principle the kernel can handle this just fine, and that there is only a configuration problem.
Regarding limits.conf: I tried explicitly adding
hard as unlimited
soft as unlimited
to /etc/security/limits.conf, and rebooting. This had no effect. After logging in as a standard user, ulimit -v still returns about 32 GB, and ulimit -v 65000000 still says permission denied (while ulimit -v 64000000 works). The rest of limits.conf is commented out, and in /etc/security/limits.d there is only one other, unrelated entry (limiting nproc to 4096 for non-root users). That is, the virtual memory limit must be coming from somewhere other than limits.conf. Any ideas what else could lead to ulimit -v not being "unlimited"?
EDIT/RESOLUTION:
It was caused by my own stupidity. I had a (long forgotten) program in my user setup which used setrlimit to restrict the amount of memory per process to prevent Linux from swapping to death. It was unintentionally copied from a 32 GB machine to the 128 GB machine. Thanks to Paul and Andrew Janke and everyone else for helping to track it down. Sorry everyone :/.
If anyone else encounters this: search for ulimit/setrlimit in your bash and profile settings and in any programs that might call them (both your own and the system-wide /etc settings), and make sure /etc/security/limits.conf does not include this limit. Or at least try creating a new user, to see whether the limit comes from your user setup or from the system setup.
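For anyone hunting a similar mystery limit, the kind of call the forgotten helper made looks roughly like this (a hypothetical sketch, not the actual program): once a process lowers RLIMIT_AS, every child it spawns inherits the cap, and malloc fails beyond it.
// Hypothetical sketch: print the current address-space limit, then lower it.
// Any process started from this one inherits the reduced RLIMIT_AS.
#include <sys/resource.h>
#include <cstdio>

int main()
{
    rlimit rl{};
    getrlimit(RLIMIT_AS, &rl);
    std::printf("RLIMIT_AS: soft=%llu hard=%llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);

    // Cap virtual memory roughly as described in the question: a non-root
    // user can later raise the soft limit up to the hard limit, but no further,
    // hence the "Operation not permitted" from ulimit -v.
    rl.rlim_cur = 32ull << 30;   // soft limit ~32 GB (what ulimit -v reports)
    rl.rlim_max = 64ull << 30;   // hard limit ~64 GB (the most it can be raised to)
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        std::perror("setrlimit");
    return 0;
}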
This is a ulimit and system setup problem, not a C++ problem.
I can run your appropriately modified code on an Amazon EC2 instance type r3.4xlarge with no problem. These cost less than $0.20/hour on the spot market, and so I suggest you rent one, and perhaps take a look around in /etc and compare to your own setup... or maybe you need to recompile a Linux kernel to use that much memory... but it is not a C++ or gcc problem.
Ubuntu on the EC2 machine was already set up for unlimited process memory.
$ sudo su
# ulimit -u
--> unlimited
This one has 125 GB of RAM:
# free
total used free shared buffers cached
Mem: 125903992 1371828 124532164 344 22156 502248
-/+ buffers/cache: 847424 125056568
Swap: 0 0 0
I modified the limits in your program to go up to 149 GB.
Here's the output. It looks good up to 118 GB.
root@ip-10-203-193-204:/home/ubuntu# ./memtest
allocation of 1 x 25 GB of data. Ok? yes
allocation of 1 x 26 GB of data. Ok? yes
allocation of 1 x 27 GB of data. Ok? yes
allocation of 1 x 28 GB of data. Ok? yes
allocation of 1 x 29 GB of data. Ok? yes
allocation of 1 x 30 GB of data. Ok? yes
allocation of 1 x 31 GB of data. Ok? yes
allocation of 1 x 32 GB of data. Ok? yes
allocation of 1 x 33 GB of data. Ok? yes
allocation of 1 x 34 GB of data. Ok? yes
allocation of 1 x 35 GB of data. Ok? yes
allocation of 1 x 36 GB of data. Ok? yes
allocation of 1 x 37 GB of data. Ok? yes
allocation of 1 x 38 GB of data. Ok? yes
allocation of 1 x 39 GB of data. Ok? yes
allocation of 1 x 40 GB of data. Ok? yes
allocation of 1 x 41 GB of data. Ok? yes
allocation of 1 x 42 GB of data. Ok? yes
allocation of 1 x 43 GB of data. Ok? yes
allocation of 1 x 44 GB of data. Ok? yes
allocation of 1 x 45 GB of data. Ok? yes
allocation of 1 x 46 GB of data. Ok? yes
allocation of 1 x 47 GB of data. Ok? yes
allocation of 1 x 48 GB of data. Ok? yes
allocation of 1 x 49 GB of data. Ok? yes
allocation of 1 x 50 GB of data. Ok? yes
allocation of 1 x 51 GB of data. Ok? yes
allocation of 1 x 52 GB of data. Ok? yes
allocation of 1 x 53 GB of data. Ok? yes
allocation of 1 x 54 GB of data. Ok? yes
allocation of 1 x 55 GB of data. Ok? yes
allocation of 1 x 56 GB of data. Ok? yes
allocation of 1 x 57 GB of data. Ok? yes
allocation of 1 x 58 GB of data. Ok? yes
allocation of 1 x 59 GB of data. Ok? yes
allocation of 1 x 60 GB of data. Ok? yes
allocation of 1 x 61 GB of data. Ok? yes
allocation of 1 x 62 GB of data. Ok? yes
allocation of 1 x 63 GB of data. Ok? yes
allocation of 1 x 64 GB of data. Ok? yes
allocation of 1 x 65 GB of data. Ok? yes
allocation of 1 x 66 GB of data. Ok? yes
allocation of 1 x 67 GB of data. Ok? yes
allocation of 1 x 68 GB of data. Ok? yes
allocation of 1 x 69 GB of data. Ok? yes
allocation of 1 x 70 GB of data. Ok? yes
allocation of 1 x 71 GB of data. Ok? yes
allocation of 1 x 72 GB of data. Ok? yes
allocation of 1 x 73 GB of data. Ok? yes
allocation of 1 x 74 GB of data. Ok? yes
allocation of 1 x 75 GB of data. Ok? yes
allocation of 1 x 76 GB of data. Ok? yes
allocation of 1 x 77 GB of data. Ok? yes
allocation of 1 x 78 GB of data. Ok? yes
allocation of 1 x 79 GB of data. Ok? yes
allocation of 1 x 80 GB of data. Ok? yes
allocation of 1 x 81 GB of data. Ok? yes
allocation of 1 x 82 GB of data. Ok? yes
allocation of 1 x 83 GB of data. Ok? yes
allocation of 1 x 84 GB of data. Ok? yes
allocation of 1 x 85 GB of data. Ok? yes
allocation of 1 x 86 GB of data. Ok? yes
allocation of 1 x 87 GB of data. Ok? yes
allocation of 1 x 88 GB of data. Ok? yes
allocation of 1 x 89 GB of data. Ok? yes
allocation of 1 x 90 GB of data. Ok? yes
allocation of 1 x 91 GB of data. Ok? yes
allocation of 1 x 92 GB of data. Ok? yes
allocation of 1 x 93 GB of data. Ok? yes
allocation of 1 x 94 GB of data. Ok? yes
allocation of 1 x 95 GB of data. Ok? yes
allocation of 1 x 96 GB of data. Ok? yes
allocation of 1 x 97 GB of data. Ok? yes
allocation of 1 x 98 GB of data. Ok? yes
allocation of 1 x 99 GB of data. Ok? yes
allocation of 1 x 100 GB of data. Ok? yes
allocation of 1 x 101 GB of data. Ok? yes
allocation of 1 x 102 GB of data. Ok? yes
allocation of 1 x 103 GB of data. Ok? yes
allocation of 1 x 104 GB of data. Ok? yes
allocation of 1 x 105 GB of data. Ok? yes
allocation of 1 x 106 GB of data. Ok? yes
allocation of 1 x 107 GB of data. Ok? yes
allocation of 1 x 108 GB of data. Ok? yes
allocation of 1 x 109 GB of data. Ok? yes
allocation of 1 x 110 GB of data. Ok? yes
allocation of 1 x 111 GB of data. Ok? yes
allocation of 1 x 112 GB of data. Ok? yes
allocation of 1 x 113 GB of data. Ok? yes
allocation of 1 x 114 GB of data. Ok? yes
allocation of 1 x 115 GB of data. Ok? yes
allocation of 1 x 116 GB of data. Ok? yes
allocation of 1 x 117 GB of data. Ok? yes
allocation of 1 x 118 GB of data. Ok? yes
allocation of 1 x 119 GB of data. Ok? nope
allocation of 1 x 120 GB of data. Ok? nope
allocation of 1 x 121 GB of data. Ok? nope
allocation of 1 x 122 GB of data. Ok? nope
allocation of 1 x 123 GB of data. Ok? nope
allocation of 1 x 124 GB of data. Ok? nope
allocation of 1 x 125 GB of data. Ok? nope
allocation of 1 x 126 GB of data. Ok? nope
allocation of 1 x 127 GB of data. Ok? nope
allocation of 1 x 128 GB of data. Ok? nope
allocation of 1 x 129 GB of data. Ok? nope
allocation of 1 x 130 GB of data. Ok? nope
allocation of 1 x 131 GB of data. Ok? nope
allocation of 1 x 132 GB of data. Ok? nope
allocation of 1 x 133 GB of data. Ok? nope
allocation of 1 x 134 GB of data. Ok? nope
allocation of 1 x 135 GB of data. Ok? nope
allocation of 1 x 136 GB of data. Ok? nope
allocation of 1 x 137 GB of data. Ok? nope
allocation of 1 x 138 GB of data. Ok? nope
allocation of 1 x 139 GB of data. Ok? nope
allocation of 1 x 140 GB of data. Ok? nope
allocation of 1 x 141 GB of data. Ok? nope
allocation of 1 x 142 GB of data. Ok? nope
allocation of 1 x 143 GB of data. Ok? nope
allocation of 1 x 144 GB of data. Ok? nope
allocation of 1 x 145 GB of data. Ok? nope
allocation of 1 x 146 GB of data. Ok? nope
allocation of 1 x 147 GB of data. Ok? nope
allocation of 1 x 148 GB of data. Ok? nope
allocation of 1 x 149 GB of data. Ok? nope
Now, about that US$0.17 I spent on this...

Avoiding allocations in Haskell

I'm working on a more complicated program I would like to be very efficient, but I have boiled down my concerns for right now to the following simple program:
main :: IO ()
main = print $ foldl (+) 0 [(1::Int)..1000000]
Here is how I build and run it:
$ uname -s -r -v -m
Linux 3.12.9-x86_64-linode37 #1 SMP Mon Feb 3 10:01:02 EST 2014 x86_64
$ ghc -V
The Glorious Glasgow Haskell Compilation System, version 7.4.1
$ ghc -O -prof --make B.hs
$ ./B +RTS -P
500000500000
$ less B.prof
Sun Feb 16 16:37 2014 Time and Allocation Profiling Report (Final)
B +RTS -P -RTS
total time = 0.04 secs (38 ticks @ 1000 us, 1 processor)
total alloc = 80,049,792 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc ticks bytes
CAF Main 100.0 99.9 38 80000528
individual inherited
COST CENTRE MODULE no. entries %time %alloc %time %alloc ticks bytes
MAIN MAIN 44 0 0.0 0.0 100.0 100.0 0 10872
CAF Main 87 0 100.0 99.9 100.0 99.9 38 80000528
CAF GHC.IO.Handle.FD 85 0 0.0 0.0 0.0 0.0 0 34672
CAF GHC.Conc.Signal 83 0 0.0 0.0 0.0 0.0 0 672
CAF GHC.IO.Encoding 76 0 0.0 0.0 0.0 0.0 0 2800
CAF GHC.IO.Encoding.Iconv 60 0 0.0 0.0 0.0 0.0 0 248
It looks like 80 bytes are being allocated per iteration. I think it's quite reasonable to expect the compiler to generate allocation-free code here.
Is my expectation unreasonable? Are the allocations a side effect of enabling profiling? How can I finagle things to get rid of the allocation?
In this case it looks like GHC was smart enough to optimize foldl into the stricter form, but GHC can't optimize away the intermediate list because foldl isn't a good consumer, so presumably those allocations are for the (:) constructors. (EDIT3: No, looks like that's not the case; see comments)
By using foldr, fusion kicks in and you can get rid of the intermediate list:
main :: IO ()
main = print $ foldr (+) 0 [(1::Int)..1000000]
...as you can see:
k +RTS -P -RTS
total time = 0.01 secs (10 ticks @ 1000 us, 1 processor)
total alloc = 45,144 bytes (excludes profiling overheads)
which has the same memory profile for me as
main = print $ (1784293664 :: Int)
EDIT: In this new version we're trading heap allocation for a bunch of (1 + (2 + (3 +...))) on the stack. To really get a good loop we have to write it by hand like:
{-# LANGUAGE BangPatterns #-}   -- needed for the strict arguments of 'go'

main = print $ add 1000000

add :: Int -> Int
add nMax = go 0 1 where
    go !acc !n
        | n == nMax = acc + n
        | otherwise = go (acc + n) (n + 1)
showing:
total time = 0.00 secs (0 ticks @ 1000 us, 1 processor)
total alloc = 45,144 bytes (excludes profiling overheads)
EDIT2: I haven't gotten around to using Gabriel Gonzalez's foldl library yet, but it might also be worth playing with for your application.
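As a middle ground between the naive fold and the hand-written loop, a strict left fold is worth trying (a sketch of my own, not from the original answer): Data.List.foldl' forces the accumulator at every step, so no thunk chain builds up; whether the intermediate list itself gets fused away depends on your GHC version.
-- Sketch: foldl' keeps the accumulator evaluated at each step, so neither
-- heap thunks nor a deep (1 + (2 + ...)) stack chain accumulates.
import Data.List (foldl')

main :: IO ()
main = print $ foldl' (+) 0 [(1 :: Int) .. 1000000]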

Golang: What is etext?

I've started to profile some of my Go 1.2 code and the top item is always something named 'etext'. I've searched around but couldn't find much information about it, other than that it might relate to call depth in goroutines. However, I'm not using any goroutines, and 'etext' still takes up 75% or more of the total execution time.
(pprof) top20
Total: 171 samples
128 74.9% 74.9% 128 74.9% etext
Can anybody explain what this is and if there is any way to reduce the impact?
I hit the same problem, and then I found this: "pprof broken in Go 1.2?". To verify that it really is a 1.2 bug, I wrote the following "hello world" program:
package main

import (
    "fmt"
    "testing"
)

func BenchmarkPrintln(t *testing.B) {
    TestPrintln(nil)
}

func TestPrintln(t *testing.T) {
    for i := 0; i < 10000; i++ {
        fmt.Println("hello " + " world!")
    }
}
As you can see, it only calls fmt.Println.
You can compile this with "go test -c ."
Run it with "./test.test -test.bench . -test.cpuprofile=test.prof"
See the result with "go tool pprof test.test test.prof"
(pprof) top10
Total: 36 samples
18 50.0% 50.0% 18 50.0% syscall.Syscall
8 22.2% 72.2% 8 22.2% etext
4 11.1% 83.3% 4 11.1% runtime.usleep
3 8.3% 91.7% 3 8.3% runtime.futex
1 2.8% 94.4% 1 2.8% MHeap_AllocLocked
1 2.8% 97.2% 1 2.8% fmt.(*fmt).padString
1 2.8% 100.0% 1 2.8% os.epipecheck
0 0.0% 100.0% 1 2.8% MCentral_Grow
0 0.0% 100.0% 33 91.7% System
0 0.0% 100.0% 3 8.3% _/home/xxiao/work/test.BenchmarkPrintln
The above results were obtained using Go 1.2.1.
Then I did the same thing using Go 1.1.1 and got the following result:
(pprof) top10
Total: 10 samples
2 20.0% 20.0% 2 20.0% scanblock
1 10.0% 30.0% 1 10.0% fmt.(*pp).free
1 10.0% 40.0% 1 10.0% fmt.(*pp).printField
1 10.0% 50.0% 2 20.0% fmt.newPrinter
1 10.0% 60.0% 2 20.0% os.(*File).Write
1 10.0% 70.0% 1 10.0% runtime.MCache_Alloc
1 10.0% 80.0% 1 10.0% runtime.exitsyscall
1 10.0% 90.0% 1 10.0% sweepspan
1 10.0% 100.0% 1 10.0% sync.(*Mutex).Lock
0 0.0% 100.0% 6 60.0% _/home/xxiao/work/test.BenchmarkPrintln
You can see that the 1.2.1 result does not make much sense: syscall.Syscall and etext take most of the time. The 1.1.1 result looks right.
So I'm convinced that it really is a 1.2.1 bug. I switched to Go 1.1.1 in my real project, and I'm satisfied with the profiling results now.
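As a side note, the same kind of CPU profile can be produced without the testing harness by using runtime/pprof directly; the sketch below is my own illustration (the file name cpu.prof is arbitrary), not part of the original answer.
// Illustrative sketch (not from the original answer): profile the same
// Println loop with runtime/pprof instead of the testing package.
package main

import (
    "fmt"
    "log"
    "os"
    "runtime/pprof"
)

func main() {
    // Write CPU samples to cpu.prof for the duration of the work below.
    f, err := os.Create("cpu.prof")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if err := pprof.StartCPUProfile(f); err != nil {
        log.Fatal(err)
    }
    defer pprof.StopCPUProfile()

    for i := 0; i < 10000; i++ {
        fmt.Println("hello " + " world!")
    }
}
Inspect the result with "go tool pprof <binary> cpu.prof", just as with the test-generated profile.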
I think Mathias Urlichs is right about missing debugging symbols in your cgo code. It's worth noting that some standard packages like net and syscall make use of cgo.
If you scroll down to the bottom of this doc to the section called Caveats, you can see that the third bullet says...
If the program linked in a library that was not compiled with enough symbolic information, all samples associated with the library may be charged to the last symbol found in the program before the library. This will artificially inflate the count for that symbol.
I'm not 100% positive this is what's happening, but I'm betting this is why etext appears to be so busy. (etext itself is just the conventional linker-defined symbol marking the end of the program's text segment; here it effectively stands in for code that pprof couldn't attribute to a properly symbolized function.)

Can I place my application code on partition in my RAM?

I want to use RAM instead of an SSD. I'm looking for experienced people to give me some advice about this. I want to mount a partition and put my Rails app into it.
Any ideas?
UPD: I tested the SSD and RAM. I have an OS X machine with 4x4 GB of Kingston RAM @ 1333 MHz, an Intel Core i3 @ 2.8 GHz, an OCZ Vertex 3 SSD @ 120 GB, and a Seagate ST3000DM001 HDD @ 3 TB. My OS is installed on the SSD, and Ruby with its gems lives in my home folder on the SSD. I created a new Rails app with 10,000 product items in SQLite, and a controller with this code:
@products = Product.all
Rails.cache.clear
I tested it with ab (ApacheBench).
SSD
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 39.274 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2546.21
Transfer rate: 3325.35 kb/s received
Connection Times (ms)
min avg max
Connect: 0 0 0
Processing: 398 1546 1627
Total: 398 1546 1627
RAM
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 39.272 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2546.33
Transfer rate: 3325.51 kb/s received
Connection Times (ms)
min avg max
Connect: 0 0 0
Processing: 366 1546 1645
Total: 366 1546 1645
HDD
Document Length: 719 bytes
Concurrency Level: 4
Time taken for tests: 40.510 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 130600 bytes
HTML transferred: 71900 bytes
Requests per second: 2468.54
Transfer rate: 3223.92 kb/s received
Connection Times (ms)
min avg max
Connect: 0 0 0
Processing: 1193 1596 2400
Total: 1193 1596 2400
So I think the reason is that Ruby and the gems sit on the SSD and the scripts are loaded from there either way. I will test on a real server, putting all the Ruby scripts into RAM, with more complicated code or a real application.
ps: sorry for my english :)
You are looking for a ramdisk.
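On OS X, for example, a RAM disk can be created and mounted with the stock hdiutil and diskutil tools. This is a generic recipe, not part of the original answer; the size argument is in 512-byte sectors, so ram://2097152 is about 1 GB:
# Generic recipe (not from the original answer): create and mount a ~1 GB
# RAM disk named "RAMDisk" (2097152 x 512-byte sectors).
diskutil erasevolume HFS+ "RAMDisk" $(hdiutil attach -nomount ram://2097152)
Keep in mind that the contents disappear on unmount or reboot, and that the benchmark above suggests the app is not disk-bound, so the gain may be negligible.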
