01:35:35  * mmalecki quit (Quit: leaving)
01:53:17  * dvv quit (Ping timeout: 265 seconds)
02:05:48  * dvv joined
02:23:33  * dvv quit (Ping timeout: 260 seconds)
02:28:22  * zx9597446 joined
02:46:53  * hij1nx joined
03:12:22  * zx9597446 part
03:58:14  * dvv joined
05:04:27  * TheJH joined
05:08:42  * TheJH quit (Ping timeout: 245 seconds)
07:38:50  * hij1nx quit (Quit: hij1nx)
08:17:26  * AvianFlu quit (Quit: Leaving)
08:32:52  * mmalecki joined
09:02:47  * mmalecki quit (Ping timeout: 240 seconds)
09:03:28  * mmalecki joined
10:19:20  * mmalecki quit (Quit: Reconnecting)
10:19:29  * mmalecki joined
10:23:27  * mmalecki quit (Client Quit)
10:29:28  * mmalecki joined
11:33:36  * xming joined
11:34:24  * mmalecki quit (Quit: Reconnecting)
11:34:33  * mmalecki_ joined
11:36:35  * mmalecki_ quit (Client Quit)
11:36:42  * mmalecki joined
11:42:43  * mmalecki quit (Quit: Reconnecting)
11:43:14  * mmalecki joined
11:43:44  * mmalecki quit (Client Quit)
11:46:54  * mmalecki joined
11:47:53  <xming>I want to emit an event when receiving data from a socket, but I can't get it right
11:50:28  * mmalecki_ joined
11:50:43  * mmalecki quit (Client Quit)
11:51:06  * mmalecki_ quit (Client Quit)
11:51:16  * mmalecki joined
12:04:32  * indexzero_ quit (Quit: indexzero_)
12:06:41  <xming>the idea is basically the same as in http.lua, converting tcp to application proto
12:06:53  <xming>so I can create new obj from it
12:07:07  <xming>but I can't get it right :/
14:08:38  <creationix>xming, what is the hard part?
14:14:04  <xming>I think I get it now
14:14:46  <xming>the hard part is that I don't know how things work (at a high level) as I am not coming from nodejs
14:15:14  <xming>I'm missing good high-level docs, including the flow of the stream
14:18:29  <xming>creationix: does this look right to you? http://pastebin.com/5ujxkbtv
14:20:40  <creationix>xming, so, so far this is a simple tcp server with "new" and "foo" events?
14:22:02  <creationix>xming, though, your "new" event doesn't send any data, but you're expecting data down in main.lua
14:28:39  * mmalecki quit (Quit: Reconnecting)
14:28:48  * mmalecki joined
14:30:50  <xming>yes just trying to see how things work (the flow, callbacks, etc)
14:34:10  <xming>creationix: those func args are left over, it's not meant to be like that. My question is about the events/listeners, am I doing it the right way?
14:37:04  <creationix>xming, yes, that's how emitters work
14:37:13  <creationix>and any extra args passed to emit are the args in the listener
14:38:09  <xming>ah okay thanks
14:39:26  <xming>so conn:emit('foo', arg1) and conn:on('foo', function(arg1_from_emitter) ... end)
14:39:31  <xming>something like that?
14:39:40  <creationix>yep
14:42:41  <xming>my actual problem was I missed that callback, so onConnection(conn) fixed it
15:03:09  * hij1nx joined
15:47:54  * hij1nx quit (Quit: hij1nx)
16:01:11  * hij1nx joined
16:03:58  * TheJH joined
16:19:33  * hij1nx quit (Read error: Connection reset by peer)
16:24:10  * hij1nx joined
16:36:51  * tsing joined
16:41:03  * tsing quit
16:41:15  * hij1nx quit (Quit: hij1nx)
16:58:07  * neomantra1 joined
16:58:46  * tsing joined
17:00:31  * hij1nx joined
17:17:58  <philips>I think I want to add a way to set the luvit return code to, say, 1, so that if the loop exits before the user expects it to, we get a failing return code. Thoughts?
17:19:24  * AvianFlu joined
17:26:48  <dvv>settable process.exitCode, which is 0-ed when exiting cleanly?
17:27:30  <philips>dvv: yea, essentially
17:27:37  <dvv>+1
17:28:02  <dvv>but i believe we do have this already
17:40:11  * mmalecki quit (Quit: leaving)
17:41:20  <creationix>I didn't add such a feature
17:41:22  <creationix>I like it
17:48:12  <philips>creationix: We just encountered a problem where our unit tests triggered a bug and the event loop exited early, but luvit's return code was 0, so buildbot made it green. I will work on that
17:48:37  <creationix>I wonder if we should add that feature to node as well
17:48:47  <creationix>philips, how important is it for you guys that we stick to node APIs
17:48:51  <creationix>I know you use both there
17:49:08  <philips>creationix: Not terribly important, it makes context switching slightly easier
17:49:13  <philips>creationix: What do you have in mind
17:49:26  <philips>creationix: I would rather not make sweeping api changes and break virgo though :)
17:50:07  <creationix>no, I don't have any sweeping changes in mind
17:50:12  <creationix>just wondering how important it is
17:50:20  <creationix>this exit code API, for example, doesn't exist in node
17:51:33  <philips>creationix: oh I am fine with extensions and new stuff
17:52:29  <philips>creationix: What do you think?
17:52:36  <creationix>fine by me
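The settable exit-code idea dvv and philips converge on above can be sketched in Node terms, where `process.exitCode` behaves exactly this way: set a failing value up front and zero it only on a clean exit, so an event loop that drains early leaves a non-zero code behind for the buildbot. A minimal sketch (the "work" here is a stand-in):

```javascript
// Assume failure until the program provably finishes its work. If the
// event loop exits before this point is reached, the 1 survives and
// CI reports the run as failed instead of silently green.
process.exitCode = 1;

function runTests() {
  // stand-in for the real test suite
  return 2 + 2 === 4;
}

if (runTests()) {
  process.exitCode = 0;  // zeroed only when exiting cleanly
}
```

Luvit's equivalent would be the same pattern on its `process` table, per dvv's "settable process.exitCode, which is 0-ed when exiting cleanly".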
17:56:41  * tsing_ joined
18:00:22  * tsing quit (Ping timeout: 245 seconds)
18:02:32  <luvit-bb>build #1586 of virgo-rhel6.1_x64 is complete: Failure [failed integration tests] Build details are at https://virgo-bb.k1k.me/builders/virgo-rhel6.1_x64/builds/1586
18:14:03  <luvit-bb>build #1587 of virgo-rhel6.1_x64 is complete: Success [build successful] Build details are at https://virgo-bb.k1k.me/builders/virgo-rhel6.1_x64/builds/1587
18:17:31  * tsing joined
18:18:59  * tsing_ quit (Read error: Connection reset by peer)
18:19:23  * tsing_ joined
18:23:08  * tsing quit (Ping timeout: 260 seconds)
18:34:29  * indexzero joined
18:39:58  * indexzero quit (Ping timeout: 250 seconds)
18:53:44  * Thomas000 joined
18:55:17  <Thomas000>Hello, is there a memory cache as known from PHP's APC (read/store variables) in Luvit, or an external library that does this storage without network-based connects?
19:06:56  <creationix>Thomas000, you can simply store values in a global table
19:07:03  <creationix>luvit processes are persistent
19:07:11  <creationix>local db = {}
19:07:43  <creationix>or do you need something persistent to disk?
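Because the process is persistent, a module-level table really does work as an APC-style cache. A Node/JS rendering of creationix's `local db = {}` suggestion, with a hypothetical TTL added on top (the helper names are illustrative, not any Luvit API):

```javascript
// In-process cache: survives across requests because the server
// process is long-lived, unlike PHP's per-request lifecycle.
const db = new Map();

// Store a value, optionally expiring after ttlMs milliseconds.
function cacheSet(key, value, ttlMs) {
  db.set(key, { value, expires: ttlMs ? Date.now() + ttlMs : Infinity });
}

// Fetch a value, evicting it lazily if its TTL has passed.
function cacheGet(key) {
  const entry = db.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expires) {
    db.delete(key);
    return undefined;
  }
  return entry.value;
}

cacheSet('views', 42);
// cacheGet('views') → 42
```

In Luvit the same idea is a plain Lua table held at module scope; no network hop, no serialization.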
19:59:01  <Thomas000>Thank you! Is luvit multi-threading (1 process per core) possible, or is it on the todo list? Roughly when to expect it? creationix
19:59:44  <creationix>the whole control-flow is based on the assumption that all lua runs in the same thread
20:00:06  <creationix>dvv does have an experiment where he uses the uv thread-pool to run some lua in another thread
20:00:08  <creationix>kinda like web workers
20:00:41  <xming>bloody namespace collision
20:01:10  <Thomas000>I was looking for a nodejs cluster equivalent.
20:01:44  <creationix>Thomas000, fd sharing is possible once someone adds write2 bindings for libuv
20:02:10  <creationix>then you can bind to port 80 in a parent process, spawn X child processes passing them all a handle to the port's fd
20:02:18  <creationix>and never accept connections in the parent
20:02:34  <creationix>the os will load balance across the X children on each request
20:02:34  <Thomas000>because speed(Luvit limited to 1 core) < speed(most other solutions that are multithreadable)
20:02:49  <Thomas000>ah that sounds very interesting
20:02:50  <creationix>is luvit's speed really a bottleneck?
20:03:04  <creationix>I mean, I can get over 120,000/second on a single core if I tune luvit enough
20:03:16  <Thomas000>Sure most projects just don't get the attention to have that many visitors
20:03:21  <creationix>my most active site gets < 100,000/day
20:03:46  <Thomas000>I wanted to use luvit for an Android App similar to WhatsApp
20:03:47  <creationix>but yes, fd sharing is possible if someone takes the time to add uv_write2 bindings
20:04:05  <Thomas000>Correction: For the server side
20:04:06  <creationix>but then all your workers have to be stateless since you never know which one will get a request
20:04:28  <Thomas000>And this leads to a need for a central and fast "persistent" storage
20:04:38  <Thomas000>like known from APC PHP cache
20:04:52  <creationix>I know redis and riak are often used with node
20:05:04  <creationix>I think there is a redis library for luvit
20:05:28  <creationix>but if one process can handle your load, then that's much simpler
20:05:34  <creationix>don't overcomplicate things
20:05:50  <Thomas000>Yes, but they are only "hm, ok" for clusters with more than 1 server. On just one server, redis is the bottleneck because the network stuff is several orders of magnitude slower than direct Shared Memory access
20:06:23  <Thomas000>It is like a magic wall.
20:06:30  <creationix>that could probably be added as a binary addon
20:06:34  <Thomas000>I can only speak for nodejs with this example
20:06:43  <creationix>I don't know enough about shared memory coding to tell
20:07:11  <creationix>but since luvit is 2-4x faster than node, a single luvit process can do what 4 node processes can do
20:07:33  <creationix>(of course this depends heavily on what exactly you're doing)
20:07:49  <Thomas000>The problem is this dilemma: one core is "fast" with some few thousand requests per second. As soon as more than one core is involved, overall performance gets lower due to the persistent storage management that is needed. Needing more than 1 server further dramatically decreases speed
20:08:05  <Thomas000>as one then needs network-io-based persistent storage.
20:08:22  <creationix>true, but that's a generic scaling issue
20:08:31  <creationix>the larger you scale, the less efficient each node is
20:08:32  <Thomas000>The performance per core goes down, down, down with every new extension and it seems there is a performance wall.
20:09:03  <creationix>for n-n broadcast style systems, yes
20:09:08  <creationix>those are hard to scale
20:09:10  <Thomas000>I never came over 100,000 requests per second with an artificial benchmark. I do this just out of curiosity.
20:09:18  <creationix>but usually there is some application level trick to optimize and shard things
20:10:34  <Thomas000>And as Luvit is some few times faster than nodejs I wanted to test the same stuff like clustering, etc. I will just keep Luvit on my agenda and revisit month by month.
20:11:05  <creationix>I was able to get >100k by doing manual http pipelining and keepalive
20:11:21  <creationix>I imagine with fd sharing it will scale linearly per core
20:11:30  <creationix>or I would like to see if it can
20:11:38  <creationix>that would be 1m/second on my desktop
20:11:42  <creationix>I doubt it scales that far
20:12:12  <Thomas000>But maybe a good server could do that then.
20:12:24  <creationix>sure, but not with any real workload
20:12:38  <creationix>a server that responds with static Hello World isn't very useful
20:12:49  <creationix>add any real work and the numbers plummet
20:12:54  <creationix>for every platform
20:13:13  <Thomas000>Sure, as soon as you need a session lookup (in-memory) or worse: a DB-lookup, all is over :-(
20:13:30  <creationix>hmm, that would be a good benchmark
20:13:48  <creationix>load session, load template from disk, query database, render html, serve
20:13:53  <creationix>no caching at all
20:13:59  <Thomas000>I made some tests, for example with nginx+LuaJIT on a 6-core Xeon and it gave about 210k/sec, keepalive
20:14:08  <creationix>nice
20:14:31  <Thomas000>With a simple Lua-script that made an easy math-addition and printed "Hello blablabla"+i
20:14:52  <Thomas000>It was just to see the overall overhead of parsing/Lua-instance reuse, etc
20:15:43  <Thomas000>Intention was: Many requests with very low data-payload like a messenger
20:20:05  <Thomas000>Good bye
20:20:19  <Thomas000>and thank you
20:20:27  * Thomas000 quit (Quit: Page closed)
20:21:58  * luvit-bb quit (Ping timeout: 244 seconds)
20:30:39  * tsing_ quit (Remote host closed the connection)
20:31:43  <pquerna>http://playcontrol.net/opensource/LuaCocoa/
20:40:10  <pquerna>so, has anyone tried to integrate with NSRunLoop?
21:14:19  <creationix>tootallnate has done some with node I think
21:14:39  <creationix>pquerna, what do you think about https://gist.github.com/4b71912f266133c69506
21:29:28  <philips>creationix: That looks reasonable. Adding a simple TCP server "database" that simply echoes out the JSON for the records might be a nice second level, since it is very unlikely you have to do zero I/O.
21:29:50  <creationix>so require I/O in all the cases then?
21:29:51  <philips>Just a short TCP server written in C that everyone uses
21:30:03  <creationix>oh, share the database, I see
21:30:11  <creationix>not a bad idea
21:30:29  <creationix>input is table/key, output is value?
21:30:31  <creationix>something like that?
21:32:32  <philips>creationix: I was thinking input was "users" or "sessions" and output would be the JSON for those
21:32:53  <creationix>I don't want to dump the entire database on query
21:36:31  <philips>creationix: How would a more complex db add to the benchmark?
21:37:00  <creationix>well, for this one, it doesn't take much
21:37:11  <creationix>send "users/creationix" and it responds with the json
21:37:20  <creationix>send "sessions/adsfkjadfs" and it responds with the session
21:37:32  <creationix>I should probably allow connection pooling
21:37:47  <creationix>null bytes to end queries
21:38:01  <creationix>allow query pipelining
21:38:06  <creationix>should be easy enough using libuv
21:38:32  <philips>creationix: It might be a rabbit hole. The spec you have so far seems useful.
21:38:51  <creationix>well, I want to give people options to optimize
21:39:05  <creationix>I do think having a tcp database is a good idea though
21:39:09  <creationix>what about templates?
21:39:18  <creationix>are those fine pre-compiled and in-memory functions?
21:39:39  <creationix>I'd like to force file I/O somehow
21:39:49  <creationix>maybe make them load the template from disk and compile it on the fly?
21:43:16  <creationix>maybe I should have the template be a static file as part of the benchmark and they have to load and render it
21:45:29  <rphillips>i would go for a redis-like clone... file i/o isn't consistent across computers
21:45:41  <rphillips>memory + cpu is a bit better, IMO
21:50:49  <philips>yea, file i/o isn't useful, tmpfs would be what everyone uses
22:01:40  * neomantra1 quit (Quit: Leaving.)
22:09:17  <creationix>ok, updated gist with the beginning of a simple tcp database https://gist.github.com/4b71912f266133c69506#file_db.c
22:10:05  <creationix>ok, I'll let them cache the template
22:10:13  <creationix>that's usually what happens in production anyway
22:10:34  <creationix>rphillips, would tcp over localhost be consistent enough?
22:23:41  * mmalecki joined
22:37:23  * mmalecki quit (Quit: leaving)
22:45:38  <rphillips>probably... i was just thinking reading from the template files isn't a great test