00:38:25  * tim_smart changed nick to tim_smart|away
00:41:48  * kazupon joined
00:46:25  * kazupon quit (Ping timeout: 248 seconds)
00:54:32  * DarkGod quit (Ping timeout: 252 seconds)
01:18:48  * kazupon joined
01:35:32  * kazupon quit (Remote host closed the connection)
02:09:54  * grep_awesome quit (Quit: Leaving.)
02:35:55  * kazupon joined
02:40:41  * kazupon quit (Ping timeout: 252 seconds)
02:57:43  * grep_awesome joined
03:14:42  * dvv joined
03:36:43  * kazupon joined
03:41:55  * kazupon quit (Ping timeout: 264 seconds)
04:17:35  * kazupon joined
04:34:08  * grep_awesome quit (Quit: Leaving.)
04:50:05  * kazupon quit (Remote host closed the connection)
04:56:23  * kazupon joined
05:32:21  * kazupon quit (Remote host closed the connection)
06:42:58  * kazupon joined
08:30:52  * kazupon quit (Remote host closed the connection)
09:02:28  * DarkGod joined
09:31:11  * kazupon joined
09:35:43  * kazupon quit (Ping timeout: 246 seconds)
10:12:24  * kazupon joined
10:41:19  * q66 joined
11:31:17  * kazupon quit (Remote host closed the connection)
12:26:49  * tim_smart|away quit (Read error: Operation timed out)
12:27:13  * tim_smart|away joined
12:27:20  * tim_smart|away changed nick to tim_smart
12:41:43  * kazupon joined
12:46:29  * kazupon quit (Ping timeout: 248 seconds)
13:00:53  * grep_awesome joined
14:03:58  * indexzero joined
14:17:49  * dvv part
15:28:18  * indexzero quit (Ping timeout: 264 seconds)
15:28:51  * indexzero joined
15:44:56  * Peter200 joined
15:45:18  <Peter200> hello, can luvit distribute requests over CPU cores?
16:04:31  <creationix> Peter200: not in a single process normally
16:04:43  <creationix> Peter200: it's single-threaded and pretty efficient on that one core though
16:05:10  <creationix> I think dvv made a threading addon a while back (no shared state) kinda like web workers
16:09:27  <Peter200> Are there plans to implement such a dispatcher directly in luvit? Even if it is faster than nodejs, couldn't nodejs outperform it by using threads on all cores?
16:10:16  <Peter200> And another question: is there a RAM / shared-memory key-value store? Thank you for your answers!
16:16:18  <Peter200> hmm, maybe it is easier now thanks to SO_REUSEPORT, where n threads bind to the same port and Linux distributes the requests...
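A minimal sketch of the SO_REUSEPORT idea via the LuaJIT FFI; the constants are Linux-specific and the whole thing is an illustration only, not part of luvit's API:

    local ffi = require('ffi')

    ffi.cdef[[
    int socket(int domain, int type, int protocol);
    int setsockopt(int sockfd, int level, int optname, const void *optval, unsigned int optlen);
    ]]

    -- Linux-specific values (assumed): AF_INET=2, SOCK_STREAM=1, SOL_SOCKET=1, SO_REUSEPORT=15
    local AF_INET, SOCK_STREAM = 2, 1
    local SOL_SOCKET, SO_REUSEPORT = 1, 15

    local fd = ffi.C.socket(AF_INET, SOCK_STREAM, 0)
    local one = ffi.new('int[1]', 1)
    assert(ffi.C.setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, one, ffi.sizeof('int')) == 0)
    -- each worker process repeats this, then bind()s and listen()s on the same port;
    -- the kernel load-balances incoming connections across the listeners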
16:20:21  <creationix> Peter200: luvit is basically the same API as node
16:20:35  <creationix> so node is also single-threaded
16:21:01  <creationix> also, using more cores only makes you faster when CPU-bound work is the bottleneck
16:21:16  <creationix> sometimes the synchronization overhead of threads actually makes things slower
16:21:27  <creationix> using more CPU cores doesn't always mean getting more work done
16:21:41  <creationix> the main model is to use several processes and talk over IPC
16:22:06  <creationix> if you're doing something CPU-intensive, have it in a separate process written in some other language (like C)
16:22:29  <creationix> I don't think we ever implemented something like node's cluster in luvit though
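A rough sketch of the several-processes-plus-IPC model; it assumes a childprocess module whose spawn mirrors node's child_process.spawn, so treat the exact names as hypothetical:

    -- master.lua: start one worker per core and read whatever they print
    local childprocess = require('childprocess')  -- hypothetical, assumed to mirror node's child_process

    local NUM_WORKERS = 4
    for i = 1, NUM_WORKERS do
      local child = childprocess.spawn('luvit', { 'worker.lua', tostring(i) })
      child.stdout:on('data', function(chunk)
        print('worker ' .. i .. ': ' .. chunk)
      end)
    end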
16:22:33  <Peter200> I am thinking about a chat system, many requests (or maybe open connections) and little data per request
16:22:54  <creationix> yeah, with a chat system like that, CPU processing is not your bottleneck
16:23:00  <creationix> it's maintaining all the connections
16:23:18  <creationix> libuv (the underlying network library of luvit and node) isn't really thread-safe
16:23:27  <Peter200> but when you have (ok, maybe never happens) 100,000 concurrent chatters?
16:23:46  <creationix> right, a single thread means less state to pass around
16:23:52  <creationix> 1 thread can handle 100,000 connections
16:23:57  <grep_awesome> if you don't keep your connections alive, then luvit or node can handle that easily
16:24:25  <creationix> (unless of course each connection is active sending and processing lots of data at once)
16:24:49  <creationix> now if you can partition your 100,000 connections, then sharding works great
16:25:18  <creationix> have ~1000 connections per process, as long as connections don't need to talk to connections in other processes
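A minimal sketch of the one-thread, many-connections model, assuming luvit's net module mirrors node's net API:

    local net = require('net')  -- assumed node-style net module

    -- one process, one thread: each connection is just a little state plus callbacks
    local clients = {}

    local server = net.createServer(function(client)
      clients[client] = true
      client:on('data', function(chunk)
        -- broadcast whatever one chatter sends to every connected chatter
        for other in pairs(clients) do
          other:write(chunk)
        end
      end)
      client:on('end', function()
        clients[client] = nil
      end)
    end)

    server:listen(8080)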
16:25:21  <Peter200> I made some tests with openresty / LuaJIT and the critical range starts at >5000 req/sec; maybe the Linux kernel / TCP stack can't open/close TCP connections fast enough
16:25:46  <creationix> I've gotten 120,000 HTTP requests per second on a luvit http server before
16:25:50  <creationix> 1 thread
16:25:57  <Peter200> but with keepalive?
16:26:04  <creationix> of course, the http server was very dumb and used keepalive
16:26:28  <creationix> but that's my point, CPU processing the protocol isn't always the bottleneck
16:26:59  <Peter200> the bottleneck seems to be the TCP/IP stack, since it gets disproportionately slower with more parallel requests
16:27:01  <creationix> Peter200: if you need 100,000 people in the same chat room, all talking at once, and all hearing each other, then things get tricky
16:27:08  <creationix> any technology will have a hard time with that one
16:27:13  <creationix> simply adding CPU cores doesn't make it easier
16:27:18  <creationix> it just makes it more complicated
16:27:47  <creationix> Peter200: yeah, high-load servers always need to tune their server kernels
16:27:50  <Peter200> I just read about WhatsApp and other "massive services" and wonder how they all solve these problems fast enough before they get killed by the tsunami of users.
16:27:51  <creationix> I hear Linux is usually best
16:27:56  <creationix> Linux on bare metal
16:28:25  <creationix> right, sharding, single-threaded event loops, kernel tuning
16:28:29  <creationix> all parts of making it really fast
16:28:50  <creationix> the one trick you can't really do in luvit is shared memory between threads
16:29:21  <creationix> thanks to the awesome JIT in luajit, you can even call out to C libraries for hot parts of your path
16:29:30  <creationix> JIT calls to C don't deoptimize things like calls to C bindings
16:29:48  <creationix> *FFI calls
16:29:57  <creationix> since the FFI is baked into the core of the JIT engine
16:30:16  <creationix> though our libuv bindings are regular C bindings
16:30:28  <creationix> so every time you hit the network, the JIT is getting deoptimized a bit
16:30:36  <creationix> I would love a version of libuv that wasn't callback-based
16:30:37  <Peter200> don't know if I remember correctly, but wasn't/isn't there a penalty when using FFI? Something like a context switch which "costs" a lot?
16:30:47  <creationix> normally FFI is very expensive
16:30:50  <creationix> luajit is the exception
16:30:54  <creationix> in luajit, FFI is the fast way
16:31:00  <creationix> (unless you use C callbacks)
16:31:38  <creationix> of course, like everything, benchmark it
16:31:47  <Peter200> the openresty guys do some luajit stuff and "de-callbacked" it; scripts actually "wait" while other Lua requests are processed in the same thread
16:31:52  <creationix> if you write one version in pure Lua and the same in C with FFI, see which is faster
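A minimal LuaJIT FFI sketch of calling straight into C from a hot loop; libm's sqrt is just a stand-in for whatever C code you would benchmark against a pure-Lua version:

    local ffi = require('ffi')

    -- declare the C function we want; on the JIT-compiled path this becomes
    -- a direct call, unlike classic Lua C-API bindings
    ffi.cdef[[
    double sqrt(double x);
    ]]

    local function sum_sqrt_ffi(n)
      local total = 0
      for i = 1, n do
        total = total + ffi.C.sqrt(i)
      end
      return total
    end

    local function sum_sqrt_lua(n)
      local total = 0
      for i = 1, n do
        total = total + math.sqrt(i)
      end
      return total
    end

    -- benchmark both versions, as suggested above
    print(sum_sqrt_ffi(1e6), sum_sqrt_lua(1e6))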
16:32:27  <creationix> I'm talking about the libuv API
16:32:33  <creationix> it's all C callback-oriented
16:32:42  <creationix> how it's exposed to lua is another question
16:33:03  <creationix> it's possible if you write a libuv wrapper in C that exposes another API
16:33:14  <creationix> and then ffi into that new wrapped library
16:33:40  <creationix> I've also seen people write normal C bindings to libuv where they expose it to lua as coroutines instead of callbacks
16:33:48  <creationix> but that's using the slow C API bindings
16:35:04  <creationix> Peter200: if someone has written such a wrapper to libuv, I'd love to see it
16:35:28  <creationix> I've been too busy with paid and sponsored work to find time for it
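A small sketch of the coroutines-instead-of-callbacks idea, assuming luvit's timer module exposes a node-style setTimeout(ms, callback); the same pattern applies to any callback-based libuv binding:

    local timer = require('timer')  -- assumed node-style: timer.setTimeout(ms, callback)

    -- wrap a callback API so it looks blocking inside a coroutine
    local function sleep(ms)
      local co = assert(coroutine.running(), 'sleep must be called from inside a coroutine')
      timer.setTimeout(ms, function()
        assert(coroutine.resume(co))  -- the event loop wakes the coroutine back up
      end)
      return coroutine.yield()
    end

    coroutine.wrap(function()
      print('before')
      sleep(1000)        -- yields to the event loop; other connections keep being served
      print('after one second')
    end)()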
17:31:32  * arek_deepinit joined
17:33:10  * arek_deepinit quit (Client Quit)
18:49:05  * tv quit (Ping timeout: 248 seconds)
18:51:07  * tv joined
19:01:24  * indexzero quit (Quit: indexzero)
19:21:55  * indexzero joined
19:44:10  * indexzero quit (Quit: indexzero)
19:50:04  * indexzero joined
20:01:11  * Peter200 quit (Quit: Page closed)
20:28:33  * indexzero quit (Quit: indexzero)
20:29:51  * indexzero joined
20:43:00  * indexzero quit (Quit: indexzero)
21:41:58  * themgt joined
23:47:01  * themgt quit (Quit: themgt)
23:47:51  * themgt joined