00:07:23  * orlandovftwquit (Ping timeout: 250 seconds)
00:07:41  * orlandovftwjoined
00:17:43  * mikealjoined
00:18:24  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
00:19:01  * piscisaureus_joined
00:19:12  <piscisaureus_>Hmm. trillian crash
00:19:14  <piscisaureus_>It's a sign
00:19:18  <piscisaureus_>I quit. Bye all.
00:19:26  * piscisaureus_quit (Client Quit)
00:19:48  * mikealquit (Client Quit)
00:25:24  <isaacs>bnoordhuis: so, brendan gregg poked around at that vert.x benchmark
00:25:58  <bnoordhuis>and?
00:26:02  <isaacs>bnoordhuis: i don't think we have anything to get excited about. there's no 10x latency bubble for node to go fix.
00:26:21  <isaacs>for starters, the numbers that his client shows don't match the number of connections the server shows.
00:26:28  <isaacs>by about 30%
00:26:45  <isaacs>second, he said that there was a "crippled" vert.x that was set to only run on one processor.
00:26:58  <isaacs>however, when run in that mode, it still uses 1710ms of cpu time per second.
00:27:07  <bnoordhuis>hah
00:27:15  <isaacs>whereas node, by default, under load, uses exactly 1000ms of cpu time per second
00:27:36  <isaacs>when he restricted it with pbind, the performance was cut in half, and wildly erratic
00:27:42  <isaacs>because the JVM does not like being on a single core.
00:28:08  <isaacs>also, the fs.readFile in the response handler makes node's performance die badly.
00:28:29  <isaacs>take that out, actually restrict vert.x to a single core, and count requests properly, and node is slightly faster.
00:28:29  <bnoordhuis>yeah, it's a pretty lousy (and naive) benchmark
00:28:44  <isaacs>and more stable, and uses less memory.
00:28:45  <tjfontaine>surprise surprise
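A minimal sketch of the pattern isaacs is describing (illustrative only, not the actual benchmark code; the file name is made up):

    var http = require('http');
    var fs = require('fs');

    // What the benchmark did: hit the disk (and libuv's thread pool)
    // on every single request.
    http.createServer(function (req, res) {
      fs.readFile(__dirname + '/index.html', function (err, buf) {
        if (err) { res.writeHead(500); return res.end(); }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(buf);
      });
    }).listen(8000);

    // What a fair comparison would do: read once, serve from memory.
    var cached = fs.readFileSync(__dirname + '/index.html');
    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(cached);
    }).listen(8001);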
00:28:47  <bnoordhuis>well, even if vert.x was faster
00:29:09  <bnoordhuis>i would just sit back and see if it's still around in six months time
00:29:15  <isaacs>of course, the clustered node really ought to be performing much much better.
00:29:19  <isaacs>bnoordhuis: yeah, that's another thing.
00:29:23  <isaacs>try installing it.
00:29:35  <isaacs>it makes node 0.0 look like a cakewalk.
00:29:43  <bnoordhuis>hah, that bad?
00:29:44  * mikealjoined
00:30:42  * orlandovftwquit (Ping timeout: 256 seconds)
00:33:48  <isaacs>oh, it's awful
00:34:08  <isaacs>brendan did notice some interesting behavior in the node server while benchmarking it, though.
00:34:16  <isaacs>so this exercise may yield something useful.
00:34:24  <bnoordhuis>what was it?
00:34:25  <isaacs>but it might've just been gc causing a few outliers
00:34:33  * perezdquit (Quit: perezd)
00:34:47  <isaacs>well, the bulk of the requests were much faster than vertx's, but there were a few that took like 100ms or more.
00:35:01  <isaacs>which sort of looks like gc tracks, but maybe we can smooth it out or something
00:35:03  * perezdjoined
00:35:33  <isaacs>the vert.x server's latency distribution was a bit of a wider curve, and not as fast, but didn't have the spike of outliers.
00:36:03  <bnoordhuis>g1 is a little less naive than v8's gc, i don't doubt
00:36:35  <bnoordhuis>the garbage collectors in the sun jvm are works of art
00:36:42  <bnoordhuis>and lots and lots of man hours
00:42:32  * mikealquit (Quit: Leaving.)
00:50:05  <txdv>g1?
00:50:45  <bnoordhuis>txdv: the default jvm gc
00:51:00  <txdv>why not j1?
00:51:11  * dapquit (Quit: Leaving.)
00:51:24  <bnoordhuis>ask the people at oracle nay sun
00:59:53  * hij1nxquit (Quit: hij1nx)
01:01:15  * ericktquit (Ping timeout: 240 seconds)
01:05:06  * theColequit (Ping timeout: 260 seconds)
01:13:17  * pieternquit (Quit: pietern)
01:14:12  * ericktjoined
01:37:30  * abraxasjoined
01:40:32  * ericktquit (Ping timeout: 250 seconds)
01:49:53  * piscisaureus_joined
01:50:02  <piscisaureus_>I didn't manage to actually quit
01:50:17  <piscisaureus_>isaacs: is brendangregg going to do a write up on that analysis
01:50:41  <piscisaureus_>isaacs: it looks like he did an in-depth analysis that the node folk might be interested in
01:51:18  <isaacs>piscisaureus_: i'm trying to convince him to
01:51:39  <isaacs>piscisaureus_: he said the problem is that it's a lot of work, and might inadvertently lend legitimacy to vert.x
01:51:39  <piscisaureus_>isaacs: we probably gotta fix that cluster stuff
01:51:46  <isaacs>yeah, definitely gotta fix that
01:51:50  <isaacs>it's not a v0.8 fc blocker.
01:51:59  <isaacs>put it off a week if you can :)
01:52:00  <piscisaureus_>isaacs: well the thing *is* kinda legit right
01:52:20  <isaacs>piscisaureus_: almost everything about that benchmark article is mistaken, misleading, or outright wrong.
01:52:21  <piscisaureus_>isaacs: well my coworkers are pushing for it since they want to upgrade to 0.6 :-)
01:52:36  <piscisaureus_>isaacs: but yeah I will try to fix the refcount thing first
01:53:45  <piscisaureus_>isaacs: I did not understand the node 0.0 comment btw
01:54:03  <isaacs>piscisaureus_: have you installed vert.x?
01:54:05  <piscisaureus_>isaacs: did he benchmark node 0.0 ?
01:54:08  <isaacs>nono
01:54:14  <piscisaureus_>isaacs: nope
01:54:19  <piscisaureus_>isaacs: what's up with it?
01:54:21  <isaacs>i was saying that installing vert.x is harder than node 0.0
01:54:24  <isaacs>used to be
01:54:41  <isaacs>piscisaureus_: you need like different versions of java and stuff. it's super convoluted, and very fiddly with classpath stuff.
01:54:47  * ericktjoined
01:54:50  <piscisaureus_>ah, I see
01:55:08  <isaacs>piscisaureus_: but i mean, whatever. JVMites are used to abuse.
01:55:34  <isaacs>but if that doesn't get sorted out, vert.x will never get off the ground.
01:55:46  <isaacs>if it does, it probably will still fade out, because that's what projects tend to do.
01:55:52  * perezdquit (Quit: perezd)
01:56:25  <piscisaureus_>isaacs: to be honest, I kinda like the idea behind vert.x
01:56:26  <piscisaureus_>isaacs: but I think that it will be hard to work with
01:56:40  <piscisaureus_>isaacs: since you have threads, which basically means you need data sharing
01:57:04  <piscisaureus_>isaacs: hence, really "data sharing" with locks, or something else
01:57:12  <piscisaureus_>which is not easy to accomplish even in the vm
01:57:28  <piscisaureus_>isaacs: but isn't the thing vmware-backed?
01:57:46  <isaacs>piscisaureus_: it's not entirely clear
01:57:53  <isaacs>piscisaureus_: but yes, the author does work at vmware
01:58:58  <piscisaureus_>isaacs: anyway, brendangregg++
01:59:13  <isaacs>yes
01:59:21  <isaacs>the guy is a performance scientist.
01:59:34  <piscisaureus_>it's always nice if people actually analyze these benchmarks and extract stuff for us to learn :-)
01:59:47  <isaacs>if vert.x has managed to troll brendan enough to get him to poke his performance science skills into node.js, then i'm thankful.
02:02:59  * hij1nxjoined
02:04:14  <piscisaureus_>:-)
02:05:58  <piscisaureus_>No more worrying about concurrency. Vert.x allows you to write all your code as single threaded, freeing you from the hassle of multi-threaded programming, yet unlike other asynchronous framework it scales seamlessly over available cores without you having to fork.
02:06:05  <piscisaureus_>So lame
02:06:16  <piscisaureus_>Maybe we should have this --cluster option in node after all
02:06:28  <piscisaureus_>So we can say "node scales seamlessly blah blah blah"
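For reference, the multi-core story node already has, minus the marketing copy, as a sketch against the 0.6-era cluster API:

    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      // Fork one worker per core; the master handles no requests itself.
      for (var i = 0; i < numCPUs; i++) cluster.fork();
    } else {
      // Workers share the listening socket handed out by the master.
      http.createServer(function (req, res) {
        res.writeHead(200);
        res.end('hello world\n');
      }).listen(8000);
    }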
02:07:56  * perezdjoined
02:08:21  * perezdquit (Remote host closed the connection)
02:09:39  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
02:19:43  <txdv>is someone jealous of vert.x?
02:21:37  <txdv>can someone give me a link to that benchmark
02:22:05  * TooTallNatequit (Quit: Linkinus - http://linkinus.com)
02:23:50  * orlandovftwjoined
02:24:29  * mikealjoined
02:42:58  * brson_quit (Quit: leaving)
02:56:00  * mikealquit (Quit: Leaving.)
03:00:37  * mikealjoined
03:10:50  * orlandovftwquit (Ping timeout: 248 seconds)
03:18:45  * c4miloquit (Remote host closed the connection)
03:24:58  * theColejoined
03:31:46  * elijah-mbpjoined
03:35:04  * theColequit (Read error: Connection reset by peer)
04:01:03  * TheJHjoined
04:04:18  * loladiropart
04:21:42  * TheJHquit (Read error: Operation timed out)
04:36:38  * philipsquit (Read error: Operation timed out)
04:37:29  * chiltsquit (Read error: Operation timed out)
04:37:43  * chiltsjoined
04:38:01  * indexzerojoined
04:38:55  * philipsjoined
04:39:45  * perezdjoined
04:51:50  * bulatshakirzyanoquit (Ping timeout: 260 seconds)
04:52:28  * bulatshakirzyanojoined
04:56:04  * bnoordhuisquit (Ping timeout: 245 seconds)
05:02:38  * abraxasquit (Remote host closed the connection)
05:03:13  * abraxasjoined
05:03:30  * AvianFluquit (Quit: Leaving)
05:03:34  * abraxasquit (Read error: Connection reset by peer)
05:04:03  * abraxasjoined
05:08:15  * abraxasquit (Ping timeout: 240 seconds)
05:09:22  * abraxasjoined
05:14:38  * isaacsquit (Remote host closed the connection)
05:25:45  * paddybyersjoined
05:28:29  * ericktquit (Quit: erickt)
05:58:39  * igorzi_joined
05:58:54  * igorzi_quit (Client Quit)
06:05:42  * hij1nxquit (Quit: hij1nx)
06:17:51  * paddybyersquit (Ping timeout: 245 seconds)
06:23:59  * avsej_changed nick to avsej
06:29:06  * perezdquit (Quit: perezd)
06:30:21  * bulatshakirzyanoquit (Ping timeout: 245 seconds)
06:32:49  * indexzeroquit (Quit: indexzero)
06:44:27  * orlandovftwjoined
06:45:10  * orlandovftwquit (Client Quit)
06:45:27  * orlandovftwjoined
07:11:59  * mralephjoined
08:36:01  * orlandovftwquit (Ping timeout: 260 seconds)
08:36:13  * felixgejoined
08:36:13  * felixgequit (Changing host)
08:36:13  * felixgejoined
08:48:17  * paddybyersjoined
08:48:20  * mralephquit (Quit: Leaving.)
08:51:24  * mmalecki_joined
08:54:10  * mmaleckiquit (Remote host closed the connection)
08:54:14  * mmalecki_quit (Client Quit)
08:54:27  * mmaleckijoined
09:09:43  * mmalecki_joined
09:12:02  * mmaleckiquit (Ping timeout: 272 seconds)
09:12:20  * mmalecki_quit (Client Quit)
09:12:36  * mmaleckijoined
09:16:19  * mmalecki_joined
09:17:55  * mmaleckiquit (Ping timeout: 252 seconds)
09:18:00  * mmalecki_quit (Client Quit)
09:18:07  * mmaleckijoined
10:13:22  * mmaleckiquit (Quit: Reconnecting)
10:19:27  * mmaleckijoined
10:24:03  * mmaleckiquit (Client Quit)
10:24:31  * mmaleckijoined
10:25:31  * theColejoined
11:02:45  * mmaleckiquit (Ping timeout: 240 seconds)
11:05:32  * abraxasquit (Remote host closed the connection)
11:09:02  * mmaleckijoined
11:20:48  * theColequit (Quit: theCole)
11:58:47  * TheJHjoined
11:58:51  * bnoordhuisjoined
11:58:58  * TheJHquit (Changing host)
11:58:58  * TheJHjoined
12:07:18  * paddybyersquit (Quit: paddybyers)
12:27:55  * loladirojoined
12:38:27  * c4milojoined
13:11:00  * mmaleckichanged nick to mmalecki[away]
13:11:29  * mmalecki[away]quit (Quit: leaving)
13:27:33  * piscisaureus_joined
13:31:57  * mmaleckijoined
13:58:21  * loladiroquit (Remote host closed the connection)
13:58:40  * loladirojoined
13:58:44  * paddybyersjoined
14:02:58  * theColejoined
14:10:09  * isaacsjoined
14:39:28  <CIA-155>libuv: Bert Belder v0.6 * rbc4126b / src/win/fs.c :
14:39:28  <CIA-155>libuv: Windows: skip GetFileAttributes call when opening a file
14:39:28  <CIA-155>libuv: - http://git.io/CKnB4Q
14:41:22  * travis-cijoined
14:41:22  <travis-ci>[travis-ci] joyent/libuv#271 (v0.6 - bc4126b : Bert Belder): The build is still failing.
14:41:22  <travis-ci>[travis-ci] Change view : https://github.com/joyent/libuv/compare/cb58a56...bc4126b
14:41:22  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/1295556
14:41:22  * travis-cipart
14:44:16  <indutny>heeey ben
14:44:26  <indutny>bnoordhuis: time to pull that patch in
14:47:05  <isaacs>Good morning!
14:47:17  <indutny>good morning! :)
14:52:14  <bnoordhuis>sup fedor
14:52:25  <bnoordhuis>what patch? the stdio thing?
14:52:37  <bnoordhuis>indutny: ^
14:52:42  <indutny>bnoordhuis: yep
14:52:47  <indutny>bnoordhuis: I fixed benchmark
14:52:55  <bnoordhuis>piscisaureus_ still needs to do the windows side of things
14:53:14  <bnoordhuis>piscisaureus_ still needs to do a lot of things actually
14:53:23  <indutny>ah
14:53:24  <indutny>ok
14:53:36  <indutny>is it blocking 0.8?
14:53:44  <mmalecki>that's when I realized it's an actual workday
14:53:47  <piscisaureus_>bnoordhuis: yeah I am not getting around to it.
14:54:02  <piscisaureus_>bnoordhuis: I have to do the cluster backoff first because it's blocking everyone here now
14:54:26  <bnoordhuis>piscisaureus_: oh, i can do that - i intended to work on it today
14:54:45  <piscisaureus_>bnoordhuis: ok I'll send you what I've got so far. I don't know if it compiles
14:55:01  <piscisaureus_>bnoordhuis: but I think it's reasonable, except we need to fix ABI breakage
14:56:13  * loladiroquit (Quit: loladiro)
14:56:20  <CIA-155>node: Ben Noordhuis master * r928d28a / (lib/util.js test/simple/test-util.js):
14:56:20  <CIA-155>node: util: make _extend() more robust
14:56:20  <CIA-155>node: - http://git.io/5aEcLw
14:56:21  <CIA-155>node: Ben Noordhuis master * r68f63fe / lib/child_process.js :
14:56:21  <CIA-155>node: child_process: make copy of options arg
14:56:21  <CIA-155>node: - http://git.io/w8WzXg
14:57:08  <creationix>piscisaureus_, fwiw, I'm excited for the ref refactor to get merged into master
14:57:28  <creationix>piscisaureus_, I think it will make things much easier for my luv* projects
14:57:55  <piscisaureus_>bnoordhuis: https://gist.github.com/2653691
14:58:01  <piscisaureus_>bnoordhuis: apply to node/0.6
14:58:07  <piscisaureus_>creationix: let's hope so,
14:58:43  <bnoordhuis>it will
14:59:11  <creationix>bnoordhuis, I did have a question about it though
14:59:18  <piscisaureus_>crap
14:59:25  <bnoordhuis>perhaps i have an answer
14:59:28  <piscisaureus_>bnoordhuis: I forgot to land some stuff I'm afraid
14:59:29  <creationix>will is_active say a handle is inactive if there are pending events, but you've manually unref'ed it?
14:59:49  * theColequit (Quit: theCole)
14:59:53  <creationix>for my use, I don't care so much whether it's holding the event loop open as whether stuff may happen later
15:01:14  <bnoordhuis>uv_is_active reports true if the handle is reading / writing but unref'd
15:01:32  <creationix>perfect, then is_active + is_closing sounds like what I need
15:01:37  <bnoordhuis>good :)
15:02:58  <creationix>so after every uv_* function call that accepts a callback I should check to see if it's active, and then in the callback check again
15:03:03  <creationix>would that be enough for all the cases?
15:03:25  <creationix>I think everything that makes a handle active takes a callback
15:04:02  * paddybyersquit (Quit: paddybyers)
15:04:05  * loladirojoined
15:09:03  * mikealquit (Quit: Leaving.)
15:09:43  <bnoordhuis>creationix: yes, that should work
15:10:14  <creationix>ok, if that works, then I'm happy
15:10:17  <creationix>no need for a callback
15:11:48  <bnoordhuis>piscisaureus_: what's the motivation for doing it in libuv?
15:12:02  <piscisaureus_>bnoordhuis: because you can't stop accepting in node
15:12:26  <bnoordhuis>why not?
15:12:35  <piscisaureus_>bnoordhuis: because there is no uv_listen_stop :-)
15:12:52  <piscisaureus_>bnoordhuis: besides, we also have windows-specific hacks for improving load balancing
15:13:06  <piscisaureus_>bnoordhuis: so it kinda makes sense to make this a libuv flag, right?
15:13:26  <bnoordhuis>i don't know, the idle watcher trick should work from within node
15:13:42  <piscisaureus_>bnoordhuis: then you'd have to add uv_listen_stop
15:13:57  <piscisaureus_>bnoordhuis: you can also just not call uv_accept and set an idle watcher instead.
15:14:18  <piscisaureus_>bnoordhuis: but that is not very nice for that particular connection, because libuv already accepted it when that happens
15:14:26  * loladiroquit (Ping timeout: 260 seconds)
15:14:36  <bnoordhuis>it doesn't on unix
15:14:37  <piscisaureus_>so basically libuv accepts it and then node decides to ignore it... that can't be good for latency :-)
15:14:44  <piscisaureus_>bnoordhuis: it definitely does
15:15:18  <bnoordhuis>nuh-uh, the listen fd signals readiness, libuv calls the onconnection callback, which calls uv_accept(), which calls accept()
15:15:23  <bnoordhuis>or accept4() as the case may be
15:15:29  * felixgequit (Quit: felixge)
15:15:42  <bnoordhuis>but if you don't call uv_accept(), nothing happens - the new connection remains in the pending queue
15:16:06  <piscisaureus_>bnoordhuis: did you recently change that? cause I am looking at 0.6 and it definitely does it the other way
15:16:17  <bnoordhuis>hm, that could be
15:18:16  <bnoordhuis>oh right, accept() then uv_accept()
15:18:30  <bnoordhuis>okay, that's something i can change with no ill effects
15:19:01  <piscisaureus_>bnoordhuis: ok I updated https://gist.github.com/2653691 - I still don't know if it compiles
15:19:06  <piscisaureus_>bnoordhuis: but the idea is there :-p
15:19:17  <bnoordhuis>except maybe... i'll have to check if the various poll mechanisms do or don't keep signaling readiness
15:19:35  <piscisaureus_>bnoordhuis: they all should, right?
15:19:52  <bnoordhuis>listen fds are special
15:19:58  <piscisaureus_>bnoordhuis: I think uv__server_io stops the io watcher and restarts when the user calls uv_accept
15:20:05  <piscisaureus_>bnoordhuis: so that's something to take into consideration
15:20:07  <bnoordhuis>yep
15:20:10  <isaacs>mjr_: did you get a chance to try out that socket onError patch?
15:20:13  <piscisaureus_>bnoordhuis: but, about changing the accept/callback order
15:20:22  <bnoordhuis>yes?
15:20:45  * mikealjoined
15:20:48  <piscisaureus_>bnoordhuis: you can do that, but then the user can get a connection_cb, but uv_accept can throw an error in his face
15:20:52  <piscisaureus_>bnoordhuis: in case of ECONNRESET etc
15:21:04  <piscisaureus_>or ECONNABORTED, more likely
15:21:12  * mikealquit (Read error: Connection reset by peer)
15:21:18  * mikealjoined
15:21:48  <bnoordhuis>true
15:21:58  <bnoordhuis>actually, i don't think we even have to change anything
15:22:06  <piscisaureus_>bnoordhuis: I think we do
15:22:09  <piscisaureus_>bnoordhuis: why not?
15:22:10  <bnoordhuis>at worst, there's one pending-but-accepted connection
15:26:34  <bnoordhuis>piscisaureus_: i see two options that don't require changing the api / abi
15:27:18  <bnoordhuis>1. do it all in node and live with the fact that there may be one accepted-but-not-uv_accepted connection
15:27:21  <piscisaureus_>bnoordhuis: overlap it with the write watcher
15:27:44  <bnoordhuis>2. add an idle watcher in libuv that distributes the accept() calls
15:28:06  <bnoordhuis>option 2 is arguably the cleanest, option 1 is the easiest :)
15:28:19  <piscisaureus_>bnoordhuis: I suppose I could live with option 1 for 0.6 but it'd be nice to do it in a clean way for 0.8
15:28:35  <piscisaureus_>bnoordhuis: I don't understand how (2) would fix the abi issue btw
15:28:56  <piscisaureus_>bnoordhuis: do you have some leftover fields that you can use to create a list of backed-off server handles?
15:28:59  * mjr__joined
15:29:08  <bnoordhuis>oh, maybe not abi - but we could add it to the loop to keep breakage to a minimum
15:29:20  <bnoordhuis>dirty hack but it'd work
15:29:26  * indexzero_joined
15:29:37  <bnoordhuis>for node, it could even be a *shock* *gasp* global
15:29:43  <piscisaureus_>bnoordhuis: I think union { ev_idle idle_watcher; ev_io write_watcher } also works
15:29:52  <piscisaureus_>bnoordhuis: because servers don't need the write watcher anyway
15:30:01  <bnoordhuis>ah, well...
15:30:17  <piscisaureus_>unless sizeof(ev_idle) > sizeof(ev_io)
15:30:22  <piscisaureus_>but that seems unlikely
15:31:22  * TooTallNatejoined
15:31:33  <bnoordhuis>no, it's smaller
15:31:36  * mjr_quit (Ping timeout: 245 seconds)
15:31:36  * mjr__changed nick to mjr_
15:32:30  <bnoordhuis>but it'll probably break more than it prevents from breaking
15:32:54  <bnoordhuis>or maybe not, but it's quite ugly
15:36:08  <piscisaureus_>sure
15:36:10  <piscisaureus_>do whatever
15:36:22  <bnoordhuis>i will do whatever
15:36:47  <piscisaureus_>bnoordhuis: I am going to rush out the refcount refactor for windows
15:36:51  <bnoordhuis>good
15:36:59  <piscisaureus_>bnoordhuis: btw I don't really see the point of keeping an *active* watcher list
15:38:30  <bnoordhuis>it's useful for debugging
15:38:30  * ericktjoined
15:38:49  <piscisaureus_>bnoordhuis: yeah but it seems better to just keep a list of *all* handles
15:38:59  * AvianFlujoined
15:39:13  <bnoordhuis>sure, that would work too
15:40:54  <isaacs>piscisaureus_, TooTallNate, bnoordhuis, igorzi, indutny: reminder, skype in 0:20
15:41:06  <TooTallNate>yup yup
15:41:08  <piscisaureus_>isaacs: yep, I'll try to make it. It's right after our standup
15:41:12  <isaacs>kewl
15:41:32  <isaacs>it's right before mine. that's why i like it when it goes long.
15:41:45  <piscisaureus_>ah crap I also have to review igor's patch
15:44:02  <indutny>isaacs: 0:20 what time, PST?
15:44:14  <isaacs>indutny: i mean, "in 20 minutes"
15:44:16  <indutny>ah
15:44:22  * rendarjoined
15:44:22  <isaacs>indutny: 17 now :)
15:44:26  <indutny>hahaha :D
15:44:32  <isaacs>or however long it takes for the c9 standup to finish
15:45:30  * piscisaureus_quit (Quit: ~ Trillian Astra - www.trillian.im ~)
15:48:32  <indutny>guys, call in 13 minutes
15:48:37  <indutny>:D
15:49:04  <bnoordhuis>i'm going to piggyback on the next_watcher
15:56:28  * loladirojoined
16:00:06  * c4milochanged nick to c4milo|meeting
16:00:10  <bnoordhuis>call?
16:00:28  <indutny>in 1 minute
16:00:29  <indutny>:D
16:00:36  <indutny>well, now
16:00:39  * mikealquit (Quit: Leaving.)
16:01:22  <isaacs>bnoordhuis: sign into skype
16:01:33  <bnoordhuis>i am, i have
16:05:20  * bulatshakirzyanojoined
16:06:35  * benviequit
16:06:37  * mikealjoined
16:08:16  * dapjoined
16:08:26  * loladiroquit (Quit: loladiro)
16:11:57  * c4milo|meetingquit (Remote host closed the connection)
16:12:35  * orlandovftwjoined
16:12:41  * orlandovftwquit (Client Quit)
16:13:01  * orlandovftwjoined
16:13:53  * piscisaureus_joined
16:14:13  <piscisaureus_>isaacs: i'm back
16:26:01  * mmaleckiquit (Quit: leaving)
16:31:34  * bulatshakirzyanoquit (Quit: Computer has gone to sleep.)
16:36:01  <igorzi>indutny: hey, can you point me to your unix implementation of stdio in libuv?
16:37:28  <indutny>igorzi: yep, one moment
16:37:43  <indutny>igorzi: https://github.com/joyent/libuv/pull/413
16:49:52  * philipsquit (Excess Flood)
16:50:56  * philipsjoined
16:56:49  <piscisaureus_>igorzi: so your patch looks good to me. There are some conceptual "issues" with it but not much that should stop you from landing it.
16:56:49  <piscisaureus_>igorzi: The only exception is the \\?\ vs \??\ issue, because in node we make paths start with \\?\ so that would break.
16:57:57  <igorzi>piscisaureus_: yep, i'll get that fixed
17:01:37  <TooTallNate>indutny: i didn't get very far on tests for stdio, but feel free to use: https://github.com/TooTallNate/node/compare/cp_stdio
17:01:50  <indutny>TooTallNate: ok, will do! thank you
17:02:51  * ericktquit (Quit: erickt)
17:03:36  <TooTallNate>sure thing
17:05:04  <igorzi>piscisaureus_: so if we do get a long path (that starts with "\\?\") - we need to strip "\\?\" and prepend "\??\"
17:05:27  <piscisaureus_>igorzi: yeah
17:05:33  <piscisaureus_>igorzi: sounds like easy huh :-)
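The transformation igorzi needs, sketched in JavaScript for clarity (the real fix lands in libuv's C code; toNtPath is a made-up name):

    // Rewrite the Win32 long-path prefix \\?\ into the NT object
    // manager prefix \??\ that junction points want.
    function toNtPath(p) {
      if (p.slice(0, 4) === '\\\\?\\') {  // the four characters \\?\
        return '\\??\\' + p.slice(4);     // the four characters \??\
      }
      return p;
    }

    toNtPath('\\\\?\\C:\\some\\long\\path'); // => '\??\C:\some\long\path'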
17:10:09  * ericktjoined
17:12:21  <piscisaureus_>bnoordhuis: hey... yt?
17:12:31  <bnoordhuis>piscisaureus_: ho... ih
17:12:50  <piscisaureus_>bnoordhuis: can you explain tim how to merge mozilla-central into smjs
17:13:00  <bnoordhuis>oh god
17:13:07  <bnoordhuis>that task is not for the faint of heart
17:13:14  <piscisaureus_>bnoordhuis: http://piscisaureus.no.de/luvmonkey/latest
17:17:01  * mikealquit (Quit: Leaving.)
17:18:38  <isaacs>piscisaureus_: what's \??\ vs \\?\ mean?
17:18:51  <piscisaureus_>isaacs: \\?\ means unparsed path :-)
17:19:08  <isaacs>so what does \??\ mean?
17:19:11  <indutny>isaacs: some random windows crap
17:19:12  <indutny>:D
17:19:19  <indutny>It's a named pipe, right?
17:19:26  <piscisaureus_>isaacs: \\?\ allows paths longer than 256 characters
17:19:33  <indutny>oooh
17:19:34  <isaacs>piscisaureus_: right
17:19:44  <isaacs>yeah, i'm familiar with \\?\ because that broke npm in the early days of 0.6
17:19:47  <piscisaureus_>isaacs: node handles that automatically because npm creates these vast paths sometimes
17:19:51  <piscisaureus_>yes
17:20:00  <piscisaureus_>indutny: a named pipe is \\.\pipe :-)
17:20:09  <piscisaureus_>indutny: \??\ is kinda oddball
17:20:15  <indutny>yeah, I see now
17:20:19  <piscisaureus_>I think it's an NT namespace or something
17:20:28  <isaacs>piscisaureus_: well, it doesn't help that some windows users run npm in C:\Documents and Settings\User Name\Documents\Programming\Projects\my-project
17:20:41  <isaacs>piscisaureus_: instead of /home/username/projects/my-project as god intended
17:20:58  <piscisaureus_>isaacs: hah, that's only on xp
17:21:11  <isaacs>yeah, it's C:\Users\ now
17:21:22  <isaacs>but yeah, 256 chars is way too restrictive
17:22:11  <isaacs>even running `vcbuild test` in c:\node-v0.6.17\ ends up creating files over that limit
17:22:42  <isaacs>so \??\ is a named pipe?
17:22:43  <bnoordhuis>just you wait until the cp/m port is done
17:22:59  <bnoordhuis>8 chars is all you get
17:23:33  <indutny>nono
17:23:54  <indutny>isaacs: http://msdn.microsoft.com/en-us/library/windows/desktop/aa365783(v=vs.85).aspx
17:24:01  <piscisaureus_>isaacs: a named pipe is \\.\pipe\
17:24:10  <piscisaureus_>isaacs: \??\ is really oddball weird
17:24:45  <piscisaureus_>it's basically just some prefix that we have to add to a path to make junctions work
17:26:13  <indutny>oh crap
17:30:18  <mjr_>isaacs: we are running it now on two machines. I haven't checked the logs.
17:30:59  <isaacs>man, windows is so crazy!
17:33:17  <igorzi>piscisaureus_: do you think we need to handle a case if someone hands us a path that starts with "\??\" ?
17:35:07  * mikealjoined
17:36:02  * ljacksonquit (Read error: Operation timed out)
17:46:28  * perezdjoined
17:51:22  <bnoordhuis>piscisaureus_: https://github.com/bnoordhuis/libuv/commit/580deb1 <- wip that seems to work well
17:51:33  * bnoordhuissneaks off to the dining table
17:55:00  * brsonjoined
17:59:55  * felixgejoined
17:59:55  * felixgequit (Changing host)
17:59:55  * felixgejoined
18:01:23  * avalanche123quit (Quit: Textual IRC Client: http://www.textualapp.com/)
18:17:42  * avalanche123joined
18:26:44  * mmaleckijoined
18:33:26  <ryah_>DrPizza: yes im in london
18:33:33  <DrPizza>huh
18:33:34  <DrPizza>me too!
18:33:44  <ryah_>we should meet :)
18:33:51  <DrPizza>how long are you in town for?
18:34:02  <ryah_>for 5 days
18:34:12  <DrPizza>ok brb
18:34:31  <ryah_>do you have my email address?
18:34:51  <ryah_>[email protected]
18:35:03  <indutny>ryah_: are you going to visit Russia too? :)
18:35:06  <ryah_>i have to run at the moment
18:35:11  <indutny>ryah_: seems like you're traveling a lot
18:35:49  * mattpardee_joined
18:36:18  <mmalecki>ryah_: and Poland?
18:36:51  <ryah_>indutny: unfoetunatly not
18:37:03  <indutny>ryah_: oh, ok then
18:37:23  <ryah_>also on my pjone and slightly drunk... so spelling is bad
18:37:38  <ryah_>ok must go. cheery-o
18:37:58  <indutny>:D
18:41:11  <DrPizza>ryah_: are you on holiday or working?
18:42:09  <indutny>DrPizza: I think ryah_ is practicing slippy-fingers-typing
18:42:19  <DrPizza>always an important and valuable skill
18:43:55  <indutny>:)
18:48:31  * felixgequit (Ping timeout: 260 seconds)
18:49:32  <TooTallNate>isaacs: so this is 99% working https://github.com/TooTallNate/node/commit/throw-call-sites
18:49:46  <TooTallNate>the only one that doesn't work is test/message/stack_overflow.js
18:49:55  <TooTallNate>but i'm not entirely sure why (yet)
18:50:19  * mralephjoined
18:50:25  <TooTallNate>isaacs: i get output like this instead:
18:50:33  <TooTallNate> /Users/nrajlich/node/test/message/stack_overflow.js:0
18:50:33  <TooTallNate>(function (exports, require, module, __filename, __dirname) { // Copyright Joy
18:50:33  <TooTallNate>^
18:50:33  <TooTallNate>RangeError: Maximum call stack size exceeded
18:51:02  <isaacs>TooTallNate: same here
18:51:09  <isaacs>but it's even worse without felixge's patches
18:51:22  <TooTallNate>isaacs: where does that module wrapper come into play?
18:51:26  <isaacs>TooTallNate: you saw this? https://github.com/isaacs/node/compare/v0.6-merge
18:51:37  <isaacs>TooTallNate: sorry, i meant to send you that before
18:51:51  <isaacs>but it sounds like you're basically already caught up with it
18:51:51  <TooTallNate>oh, hah
18:52:07  <isaacs>TooTallNate: it's defined in lib/module.js i believe
18:52:09  * mmalecki_joined
18:52:56  * mmaleckiquit (Read error: Connection reset by peer)
18:54:39  <TooTallNate>isaacs: found it: src/node.js:567
18:54:49  <isaacs>oh, right, there
18:54:59  <TooTallNate>but i don't get why there's a difference
18:55:20  * mmalecki_quit (Client Quit)
18:55:24  <isaacs>yeah, it's losing a frame for some reason
18:55:27  * mmaleckijoined
18:55:30  <isaacs>or, rather, the line number comes back as 0
18:56:01  <TooTallNate>ya
19:00:05  * orlandovftwquit (Quit: leaving)
19:01:26  * paddybyersjoined
19:11:48  * mattpardee_quit (Quit: Textual IRC Client: http://www.textualapp.com/)
19:14:26  * mattpardee_joined
19:16:54  * brsonquit (Ping timeout: 260 seconds)
19:18:14  * loladirojoined
19:18:35  * brsonjoined
19:18:51  * mmaleckiquit (Ping timeout: 260 seconds)
19:20:24  * mmaleckijoined
19:24:36  * loladiroquit (Read error: Connection reset by peer)
19:25:16  * loladirojoined
19:30:14  * piscisaureus_quit (Ping timeout: 245 seconds)
19:36:29  * mjr_quit (Ping timeout: 245 seconds)
19:43:20  * mjr_joined
19:48:11  * ljacksonjoined
19:57:24  * orlandovftwjoined
20:03:47  * mmaleckiquit (Ping timeout: 240 seconds)
20:04:26  * mmaleckijoined
20:06:43  * TheJHquit (Ping timeout: 260 seconds)
20:07:31  * piscisaureus_joined
20:20:05  * mmaleckiquit (Read error: Connection reset by peer)
20:25:19  <TooTallNate>isaacs: if i try/catch the call stack error, err.stack is undefined...
20:29:06  * mmaleckijoined
20:31:45  <isaacs>TooTallNate: right, because RangeErrors have no stack
20:35:09  * mmaleckiquit (Quit: Reconnecting)
20:35:17  * mmaleckijoined
20:36:21  * c4milo|meetingjoined
20:37:28  * avalanche123quit (Ping timeout: 245 seconds)
20:39:19  * avalanche123joined
20:43:21  * c4milo|meetingquit (Remote host closed the connection)
20:43:29  <TooTallNate>isaacs: well hmm, i'm guessing the discrepancy with this last one is just because of the difference in V8 versions
20:43:33  <TooTallNate>maybe even a bug
20:49:47  * felixgejoined
20:49:47  * felixgequit (Changing host)
20:49:47  * felixgejoined
20:52:32  <isaacs>TooTallNate: ok, that's fair.
20:53:05  <isaacs>TooTallNate: i did see a few cases in my v6-merge branch at least, where it seemed to be printing out two error lines
20:53:32  <TooTallNate>isaacs: really? like for what?
20:54:00  <isaacs>TooTallNate: now i can't remember..
20:54:21  <isaacs>TooTallNate: it was in some oddball crash that was also throwing in my domain.on('error') handler
20:55:15  <mjr_>isaacs: you guys get anywhere with this annoying thing? Uncaught exception: TypeError: Property 'onIncoming' of object #<HTTPParser> is not a function
20:55:39  <mjr_>It's happening a lot. I can't tell if leaking memory is better, but it might be.
20:56:31  <isaacs>mjr_: is it still happening with thatpatch?
20:56:49  <mjr_>yes, I'm comparing error rates now, and they look the same for machines with the patch and those without
20:57:12  <isaacs>mjr_: ok, then it's a different thing.
20:58:03  <mjr_>isaacs: it's interesting that this is happening in two different processes. They are both the ones that handle the media streams.
20:58:36  <mjr_>It's got to be some odd condition related to aborting connections.
21:00:56  <mjr_>isaacs: I'm tempted to patch this to have it ignore the data if onIncoming is null, but I'm guessing it'll leak something, and then we won't be as likely to fix it.
21:02:38  <isaacs>mjr_: so, there's actually only one place where we call onIncoming
21:02:50  <isaacs>mjr_: but two places where we set it, one for server and the other for client
21:03:12  <mjr_>Would it help if you knew which one it used to be?
21:03:41  <isaacs>mjr_: and, when we set it for a client, we do so in a nextTick
21:03:52  <isaacs>so... there is exactly one tick where it's still null.
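The shape of that hypothesis, reduced to a sketch (illustrative; this is not the actual http.js code):

    // A callback assigned on the next tick leaves exactly one tick
    // during which events can fire with nothing attached.
    var parser = { onIncoming: null };

    process.nextTick(function () {
      parser.onIncoming = function (res) { /* handle the response */ };
    });

    // If a reused socket already has buffered data that drives the
    // parser before that tick runs, onIncoming is still null and
    // calling it throws the TypeError mjr_ is seeing.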
21:04:01  <isaacs>can i test a hypothesis?
21:04:10  <mjr_>test that shit
21:04:21  <mjr_>I can open my lab to your testing
21:04:22  <isaacs>mjr_: (also, yes, that would be useful, but i have a hypothesis about which one it is)
21:04:25  <isaacs>one sec.
21:04:47  <mjr_>OK. I'm going to eat a chicken pot pie now, but I'll be nearby.
21:04:48  <isaacs>running make test now to see if what i'm suggesting is even possible
21:05:29  <isaacs>mjr_: i'm gonna run out and get tacos
21:05:35  <isaacs>mjr_: but i'll be back son.
21:05:37  <isaacs>*soon
21:06:01  <tjfontaine>isaacs is now a priest
21:06:45  <isaacs>mjr_: ok, that broke all kinds of shit.
21:06:55  <isaacs>mjr_: i need food. i'll try again after lunch
21:12:04  <creationix>mjr_, quick question, about how many https requests does your cluster handle in a typical day? or can you say
21:12:31  <mjr_>Let me check
21:13:12  <mjr_>170M https requests / day, but that's request, not necessarily connection.
21:13:31  <mjr_>Most clients open one connection and keep it open for a long time.
21:13:44  <mjr_>We are almost about to move everything over to HTTPS, which is going to be exciting.
21:13:57  <creationix>ok, so the guy wanting 900,000,000/day is doing something really huge or has estimating problems
21:14:12  <tjfontaine>delusions of grandeur
21:14:43  <mjr_>I think we do more like 2B requests / day, but only 170M of them are HTTPS.
21:15:38  <creationix>ahh, I thought it was all https
21:15:43  <mjr_>After we switch to all HTTPS, then that's going to be 2B HTTPS / day.
21:15:46  <creationix>still that's a ton of traffic either way
21:15:57  <mjr_>HTTPS is really brittle in node right now, unfortunately.
21:27:44  * mmaleckiquit (Ping timeout: 252 seconds)
21:37:02  * mmaleckijoined
21:40:34  * mmaleckiquit (Client Quit)
21:41:56  * mmaleckijoined
21:43:11  * CoverSlidequit (Read error: Connection reset by peer)
21:43:15  * CoverSli1ejoined
21:43:46  * CoverSli1equit (Read error: Connection reset by peer)
21:43:48  <piscisaureus_>bnoordhuis: hey, yt?
21:44:01  <piscisaureus_>bnoordhuis: I have a question about how these common methods are supposed to work
21:48:15  * CoverSlidejoined
21:48:36  * mmaleckiquit (Ping timeout: 260 seconds)
21:50:17  * felixgequit (Quit: felixge)
21:52:23  * mmaleckijoined
22:01:40  * loladiroquit (Quit: loladiro)
22:12:50  * ircretaryquit (Ping timeout: 260 seconds)
22:13:42  <bnoordhuis>piscisaureus_: ih
22:13:50  <bnoordhuis>what common methods?
22:13:57  <piscisaureus_>uv__handle_start
22:13:58  <piscisaureus_>etc
22:14:39  <piscisaureus_>bnoordhuis: a handle is considered active when there's an active request, right?
22:14:42  <piscisaureus_>bnoordhuis: or not?
22:15:25  <bnoordhuis>piscisaureus_: active request or reading
22:15:39  <piscisaureus_>bnoordhuis: exactly
22:15:42  <bnoordhuis>more aptly worded, reading or writing
22:16:07  <piscisaureus_>bnoordhuis: so how do you decide whether to call uv__handle_stop when a write req comes back?
22:16:57  * paddybyers_joined
22:17:00  <bnoordhuis>piscisaureus_: by checking if the write queue is empty, off the top of my head
22:17:12  <piscisaureus_>hmm ok
22:17:23  <piscisaureus_>bnoordhuis: so no magic wand that does it all :-)
22:17:40  <piscisaureus_>hmm
22:17:41  <piscisaureus_>painful
22:18:01  <bnoordhuis>is it?
22:18:08  <piscisaureus_>yes
22:18:24  <piscisaureus_>on uv-win we're not really very consistent in these things
22:19:25  <piscisaureus_>on windows it would be active if
22:19:25  <piscisaureus_>(tcp->flags & (UV_HANDLE_READING | UV_HANDLE_SHUTTING | UV_HANDLE_LISTENING) || tcp->active_reqs > 0)
22:19:43  * paddybyersquit (Ping timeout: 260 seconds)
22:19:44  * paddybyers_changed nick to paddybyers
22:20:20  <bnoordhuis>piscisaureus_: that's not so bad, is it? it almost fits on a single line
22:20:28  <piscisaureus_>:-p
22:20:37  <piscisaureus_>now I have to dig that up from memory for all handle types
22:20:45  <piscisaureus_>bnoordhuis: I am not surprised this was painful for you
22:21:00  <piscisaureus_>bnoordhuis: so, how does uv_ref(handle) work
22:21:15  <piscisaureus_>bnoordhuis: how does it know whether to increment the loop refcount?
22:21:21  <piscisaureus_>oh hmm nvm that should be easy
22:21:22  <piscisaureus_>:-op
22:21:30  <bnoordhuis>piscisaureus_: it maintains a flag in handle->flags
22:21:43  <piscisaureus_>ah
22:21:48  <piscisaureus_>headache
22:21:48  <bnoordhuis>so it knows when the handle is already ref'd
22:21:59  <bnoordhuis>yeah, you're out of bits, aren't you? :)
22:22:14  <piscisaureus_>yep
22:22:20  <piscisaureus_>well I cut one lately
22:22:32  <piscisaureus_>I suppose I could also cut UV_HANDLE_CLOSED but it's kinda nice to use it for assertions
22:22:38  <piscisaureus_>so people don't uv_close the same handle twice
22:22:44  <piscisaureus_>but strictly speaking it's not needed
22:23:27  <piscisaureus_>bnoordhuis: I am also going to need an UV_HANDLE_INTERNAL so uv_walk won't show people handles that are unknown to them
22:23:45  <piscisaureus_>bnoordhuis: I suppose you don't have internal handles in unix because it's all ev
22:23:57  <bnoordhuis>yes, though that might change on linux
22:24:06  <bnoordhuis>or maybe all unices
22:24:09  <piscisaureus_>2 headaches now
22:24:16  <bnoordhuis>i was thinking of abstracting ev_io away into uv_io
22:24:23  <bnoordhuis>or uv__io as the case may be
22:24:39  <bnoordhuis>that would let me reuse the bulk of the code in src/unix with minimum hassle
22:25:03  * pieternjoined
22:25:39  <piscisaureus_>bnoordhuis: uv_poll :-)
22:25:56  <bnoordhuis>yes, but it would have to be a handle that's not user-visible
22:26:02  <piscisaureus_>yeah, I see
22:26:21  <piscisaureus_>ah I can cherry-pick the removal of UV_HANDLE_UV_ALLOCED
22:26:23  <piscisaureus_>\o/
22:29:08  <piscisaureus_>Ah, nice, it might just fit :-)
22:30:07  <avalanche123>bnoordhuis piscisaureus_ what do you think about splitting uv_fs_cb into separate callbacks? much like uv_write_cb, uv_read_cb and uv_connect_cb work?
22:32:02  <bnoordhuis>avalanche123: to what purpose?
22:32:23  <avalanche123>to make it more consistent
22:33:03  <bnoordhuis>i need a better reason :)
22:33:07  <bnoordhuis>what would it improve?
22:34:06  <avalanche123>well right now you have one data structure, uv_fs_s that is used in callbacks for fs_stat and fs_read requests
22:34:28  <avalanche123>function with the same signature could be used in both places
22:34:44  <avalanche123>but it would go terribly wrong obviously if used in the wrong place
22:35:09  <avalanche123>because callbacks are explicit for other events, you can't mix them
22:36:09  <avalanche123>you can't use uv_write_cb where uv_connect_cb is expected
22:36:34  <avalanche123>but there is no such api hinting in case of fs events
22:37:03  <avalanche123>also will make writing bindings simpler
22:37:07  <bnoordhuis>i don't quite follow
22:37:14  <bnoordhuis>what's the difficulty with fs_stat vs fs_read?
22:38:06  <avalanche123>I have to inspect the request to do what I want, whereas with uv_read for example I get data as a function parameter
22:40:19  <avalanche123>you could have callback functions that take relevant arguments instead of the same argument for every event
22:41:03  <avalanche123>don't feel like I make much sense, do i?
22:41:14  <bnoordhuis>well, you haven't convinced me yet :)
22:41:29  <bnoordhuis>we do plan to turn some fs operations into streams some time in the future
22:41:54  <avalanche123>why?
22:42:30  <bnoordhuis>to make our lives easier
22:43:00  <piscisaureus_>avalanche123: because right now stuff is dangerous in node
22:43:01  <bnoordhuis>and to optimize use cases like streaming files from disk to e.g. a socket or a pipe
22:43:13  <avalanche123>that makes sense
22:43:45  <avalanche123>should also make it more consistent with tcp/udp/tty/pipes
22:44:10  <avalanche123>I'm writing bindings for ruby like I said and file system api just didn't fit with the rest
22:44:13  <piscisaureus_>avalanche123:
22:44:14  <piscisaureus_>fd = open('file1', 'w');
22:44:14  <piscisaureus_>write(fd, 'hello')
22:44:14  <piscisaureus_>close(fd)
22:44:14  <piscisaureus_>fs = open('file2', 'w');
22:44:14  <piscisaureus_>^-- Avalanche: the "hello" write might inadvertently end up in file2
22:44:39  <piscisaureus_>avalanche123: most users that use the low level fs api are aware of the risks, but this is just inconvenient.
22:45:03  <avalanche123>piscisaureus_ this is not related to libuv?
22:45:12  <piscisaureus_>avalanche123: well libuv is to blame :-)
22:45:32  <avalanche123>ok, so some internals of libuv make it risky?
22:45:34  <piscisaureus_>avalanche123: it is fallout from the fact that we use a thread pool and fs operations can be arbitrarily reordered
22:45:39  <avalanche123>ah
22:45:54  <avalanche123>this makes a lot of sense now
22:46:52  <avalanche123>so in your example, second open() might return the same fd int?
22:47:02  <avalanche123>and mess up the wrong file as a result
22:47:04  <piscisaureus_>avalanche123: yes. The unix spec even mandates this :-)
22:47:18  <avalanche123>reuse of fds
22:47:20  <avalanche123>nice
22:47:37  <bnoordhuis>yes, always return the lowest free file descriptor
22:47:50  <bnoordhuis>sometimes convenient, sometimes not
22:48:00  <avalanche123>right
22:48:15  <piscisaureus_>the guarantee that FDs are low is useful in certain scenarios
22:48:30  <piscisaureus_>but it defintely also makes bugs harder to catch
22:48:51  <avalanche123>could you wait for all queued operations for an fd to complete by blocking open() call?
22:48:55  <avalanche123>or is that too naive?
22:49:28  <piscisaureus_>avalanche123: no, we just ensure the close() syscall is not made until all reqs complete
22:49:48  <piscisaureus_>avalanche123: and maybe there will be some other ordering constraints too
22:49:52  <avalanche123>gotcha
22:50:06  <avalanche123>so this forces the second open() to use a different fd
22:50:10  <piscisaureus_>yep
22:50:22  <avalanche123>smart
22:50:37  <piscisaureus_>avalanche123: there could be more constraints, too
22:50:50  <avalanche123>my main problem is that I use ffi to make my ruby bindings right
22:50:55  <avalanche123>so I don't resort to C
22:51:01  <avalanche123>or would prefer not to
22:51:09  <piscisaureus_>for example, to avoid issues with reordered writes we always use an explicit offset for writes (in the node high-level fs functions)
22:51:44  <avalanche123>oh
22:51:56  <piscisaureus_>but we could also just queue in libuv and solve it that way
22:52:29  <avalanche123>I see
22:52:37  <avalanche123>can you explain?
22:52:39  <avalanche123>queue what?
22:52:53  <piscisaureus_>avalanche123: well, consider
22:53:00  <piscisaureus_>write(fd, "hello")
22:53:06  <piscisaureus_>write(fd, "world")
22:53:17  <avalanche123>so you keep the offset count internally?
22:53:25  <piscisaureus_>avalanche123: you expect to write "helloworld" here
22:53:29  <avalanche123>right
22:53:37  <piscisaureus_>but since the thread pool might reorder, yadda yadda
22:53:47  <avalanche123>yeah I see now
22:53:49  <piscisaureus_>avalanche123: so yeah node tracks the file pointer internally for this
22:54:09  <avalanche123>you might actually end up with ' world' and then 'helloworld'
22:54:11  <avalanche123>neat
22:54:24  <avalanche123>this is not part of libuv?
22:54:37  <avalanche123>done in node?
22:55:08  <piscisaureus_>it's done in node
22:55:14  <avalanche123>I see
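What the explicit offset buys, sketched with node's low-level fs API (fs.write's fifth argument is the file position; the file name is made up):

    var fs = require('fs');

    fs.open('file1', 'w', function (err, fd) {
      if (err) throw err;
      var hello = new Buffer('hello');
      var world = new Buffer('world');
      // With explicit positions the thread pool may complete these in
      // either order and the file still ends up reading "helloworld".
      fs.write(fd, hello, 0, hello.length, 0, function (err) {
        if (err) throw err;
      });
      fs.write(fd, world, 0, world.length, 5, function (err) {
        if (err) throw err;
        fs.close(fd, function () {});
      });
    });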
22:55:17  <piscisaureus_>but the effect is kinda lame sometimes
22:55:28  <piscisaureus_>because on mac pwrite() is not threadsafe
22:55:34  <piscisaureus_>so we have to lock the entire thread pool
22:55:44  <avalanche123>oh
22:56:09  <piscisaureus_>I wonder if that is also true on recent OS X versions
22:56:11  <piscisaureus_>bnoordhuis: you know?
22:56:14  * rendarquit
22:56:39  <piscisaureus_>avalanche123: if we have more control over the scheduling we could also alleviate the pain a little for that
22:57:14  <mjr_>isaacs: did tacos give you any new insights  into this parser exception mess?
22:57:34  <isaacs>mjr_: yeah, i think i'm close.
22:57:46  <avalanche123>piscisaureus_ async fs io is hard, very interesting stuff
22:57:55  <avalanche123>are you planning to move node tricks to libuv?
22:58:24  * mikealquit (Quit: Leaving.)
22:58:42  <bnoordhuis>piscisaureus_: not 100% sure but i think it's still an issue
22:59:01  <piscisaureus_>avalanche123: yes, we will move some node tricks to libuv
22:59:12  <avalanche123>for the ffi bindings I will have to write some helper functions in c to inspect fs_req_s but I am not sure you guys will accept that in the mainline
22:59:42  <bnoordhuis>when it comes to pull requests, i often ask myself "how does this help *me*?"
22:59:58  <avalanche123>right and it won't help c guys
23:00:08  <avalanche123>since they can just inspect the struct
23:00:19  <bnoordhuis>yes
23:00:33  <avalanche123>that's why I'm wondering if fs api is changing to more explicit callbacks where I could get necessary data as part of function arguments
23:00:55  <piscisaureus_>avalanche123: so what's stopping you from forking libuv and maintaining this stuff yourself
23:01:08  * mikealjoined
23:01:17  <piscisaureus_>avalanche123: git is such a wonderful tool :-)
23:01:30  <avalanche123>nothing really, just that it is a big project to take on maintaining :)
23:01:46  <piscisaureus_>avalanche123: well floating 3 functions or so is not very big to maintain
23:02:11  <avalanche123>ok, that's fair
23:02:30  <piscisaureus_>avalanche123: maybe every now and then you have to merge with joyent/libuv/master, and maybe you need to update these functions slightly
23:02:34  * loladirojoined
23:02:38  <avalanche123>now that I know fs is ok as is with you guys, I'll have to take this approach
23:03:05  <avalanche123>yes, definitely doable
23:03:27  <piscisaureus_>avalanche123: so it's not okay as-is, but it's just inefficient to turn it all upside down and after that refactor it again
23:03:38  * paddybyersquit (Quit: paddybyers)
23:03:50  <piscisaureus_>avalanche123: especially since the 0.8 deadline was today, more or less
23:04:04  <avalanche123>oh
23:04:17  <avalanche123>ok you do what you gotta do then :)
23:04:31  <avalanche123>I will still hope it gets turned upside down after refactor
23:05:47  <piscisaureus_>bnoordhuis: actually it's kinda weird that uv_prepare and uv_check ref at all
23:06:20  * c4milo|meetingjoined
23:06:26  <piscisaureus_>bnoordhuis: I can't imagine any situation where you would want either of them to keep the loop alive
23:07:22  * c4milo|meetingchanged nick to c4milo
23:09:02  <isaacs>mjr_: i'm almost 100% sure it's a result of the process.nextTick in the http client parser usage
23:10:09  <mjr_>isaacs: the nextTick business certainly does seem suspect, or at least frustrating because you lose the stack.
23:10:16  <isaacs>mjr_: yeah
23:10:32  <isaacs>there's some reason why we need a nextTick before assigning the listeners and emitting the socket event, i forget what it is
23:10:36  <isaacs>but some tests fail if i just yank it out
23:11:28  <piscisaureus_>I agree it looks fishy
23:11:54  <mjr_>There are a few of those in node. Every time I run into one, I think it'll be simple to fix, and then spend a frustrating hour on it, then give up.
23:12:18  <piscisaureus_>I know there's one in dns but that's because c-ares can report events synchronously
23:13:20  <isaacs>this nextTick is there to preserve an API behavior that was implemented before we had sockets being reused in the agents.
23:13:27  <mjr_>Oh, speaking of DNS, DTrace just revealed that we are re-reading /etc/hosts like maniacs. Our /etc/hosts file is 300KB of auto-generated internal names and stuff.
23:13:45  <piscisaureus_>yeah
23:13:50  <piscisaureus_>known issue
23:14:13  <piscisaureus_>problem is basically that this happens in cares
23:14:31  <piscisaureus_>deep inside cares even
23:14:48  <piscisaureus_>mjr_ : and 300KB hosts file is kinda insane innit :-)
23:15:22  <mjr_>piscisaureus_: not when you have over 100 physical machines and countless extra zones, including thousands of node processes, all of which you want to address individually.
23:15:36  <mjr_>I mean, yes? But no.
23:15:42  <piscisaureus_>mjr_: you can't really expect a dns lib author to use a btree for these cases
23:15:51  <piscisaureus_>mjr_: sure, you have good reasons to do it
23:15:58  <mjr_>Indeed not. But we shouldn't re-read it every time.
23:16:27  <piscisaureus_>so what should we do? mmap it?
23:16:33  <piscisaureus_>cache it for 5 minutes?
23:16:51  <mjr_>yeah, I guess mmap might be a good solution
23:17:04  <piscisaureus_>load it, fstat it every 5 minutes, and keep it cached
23:17:27  <piscisaureus_>the problem with mmapping is that it handles change more or less transparently but you can't really detect if the file grows
23:17:32  <mjr_>You can stat it every time you need it, as long as you don't stat it more than once every few seconds.
23:17:52  <piscisaureus_>mjr_: maybe you can do a simple caching dns module in node?
23:18:12  <mjr_>Yeah, that's what I'm about to do if there isn't any other obvious solution.
23:18:39  <piscisaureus_>mjr_: the obvious solution is to patch c-ares or use the system resolver (the dns module can do that, too)
23:19:03  <piscisaureus_>mjr_: but patching cares is going to be a lot of work I think
23:19:10  <mjr_>I know, kind of sucks.
23:19:25  <mjr_>The system resolver usually has some kind of cache.
23:19:26  * mattpardee_part ("Textual IRC Client: http://www.textualapp.com/")
23:19:32  <piscisaureus_>mjr_: does getaddrinfo in the thread not work for you?
23:19:35  <mjr_>But the libc API is sync.
23:19:47  <mjr_>Oh, it probably works great. I'm just using whatever node does by default.
23:19:47  <piscisaureus_>mjr_: node has a facility for doing that on the thread pool
23:21:51  * TooTallNatequit (Quit: Leaving...)
23:22:10  <piscisaureus_>mjr_: dns.lookup uses that, actually
23:22:30  <piscisaureus_>mjr_: so if you are using that too, then your system dns lib would be to blame and not cares
23:23:20  <mjr_>Yeah, I'll give that a try. I think we might win the most by having an in-app DNS cache.
23:23:33  <mjr_>Cache everything for 1 minute or something.
23:23:52  <piscisaureus_>reasonable
23:24:00  <piscisaureus_>put it in an object
23:24:15  <piscisaureus_>you get a "nice" hashtable lookup for free!
23:24:51  <mjr_>I've done this for some of our monitoring tools already for reverse lookups.
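The sort of in-app cache mjr_ is describing, as a sketch (cachedLookup is a made-up helper wrapping node's real dns.lookup):

    var dns = require('dns');

    var cache = {};           // hostname -> { address, family, time }
    var TTL = 60 * 1000;      // cache for one minute, per the idea above

    function cachedLookup(host, cb) {
      var hit = cache[host];
      if (hit && Date.now() - hit.time < TTL) {
        // Stay async on a cache hit so callers always see one behavior.
        return process.nextTick(function () {
          cb(null, hit.address, hit.family);
        });
      }
      dns.lookup(host, function (err, address, family) {
        if (!err) cache[host] = { address: address, family: family, time: Date.now() };
        cb(err, address, family);
      });
    }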
23:25:02  * isaacscan't wait to rewrite http.js
23:25:15  <mjr_>isaacs: I'll be first in line to try it out.
23:25:17  <piscisaureus_>mjr_: alternatively you could scrap the big hosts file and get a proper dns server >:-)
23:25:37  * mikealquit (Quit: Leaving.)
23:25:49  <CoverSlide>what's the priority of being able to use http for both http and https
23:26:16  <mjr_>piscisaureus_: yeah, we have DNS, but it's one more dependency, and for same-DC communication, I'd rather avoid it.
23:26:39  <isaacs>mjr_: yes, i'm sure :)
23:26:56  <isaacs>really, the problem with this thing is that node wasn't designed with socket pooling and keepalives in mind from the start.
23:27:02  <isaacs>a lot of these features are kind of bolted on
23:27:30  <mjr_>Before you rewrite it, we should tell you about our experience with explicit pooling.
23:27:52  <mjr_>Danny is doing some good work in that area. It's really tricky, and for us it's the next frontier of back pressure problems.
23:28:40  <isaacs>mjr_: yes, that will be good
23:29:08  <isaacs>mjr_: first target of attack will be to see if a js parser can perform admirably, so we don't have this slow creation bullshit
23:29:20  <bnoordhuis>CoverSlide: why do you ask?
23:29:27  <isaacs>mjr_: and then we can just throw them away, instead of keeping around these long-lived gc bombs.
23:31:17  <mjr_>isaacs: I think you'll be able to make something good happen there in pure JS, but I'm guessing you won't be able to beat the performance of http_parser.
23:31:19  * mralephquit (Quit: Leaving.)
23:31:35  <mjr_>But who knows, I'm wrong all the time.
23:31:48  <isaacs>mjr_: maybe not in a head-to-head parse-off, but in actual http load tests? it's promising.
23:32:06  <isaacs>mjr_: these long-lived objects are kind of tricky for the garbage collector.
23:32:07  <mjr_>sunday, sunday, sunday
23:32:26  * mjr_will sell you the whole seat, but you'll only need the EDGE
23:33:34  * isaacshearing mjr_ in the OH YEAH! voice
23:33:36  * isaacsquit (Remote host closed the connection)
23:33:38  <mjr_>I struggled for a long time to beat hiredis at parsing redis replies. I couldn't get close enough not to worry.
23:34:04  * isaacsjoined
23:34:04  <mjr_>I do love a good parse-off though.
23:37:20  <isaacs>mjr_: i noticed that hiredis seems to fall over when i do hgetall and hmset
23:37:33  <isaacs>it's like it doesn't speak the same version of redis as my redis does or something
23:37:42  <mjr_>But the JS parser works?
23:38:14  * mikealjoined
23:38:21  <isaacs>mjr_: yeah, beautifully
23:38:30  <isaacs>mjr_: and it's still way crazy fast.
23:38:41  <mjr_>Newer V8's have made the JS parser almost 2X faster.
23:38:54  <bnoordhuis>crazy, isn't it?
23:39:11  <bnoordhuis>well, there was some room for improvement in v8's ia32 and x64 assemblers
23:39:13  <bnoordhuis>but still
23:39:14  <mjr_>Still not as fast as hiredis, but it's closing the gap a bit.
23:39:52  <mjr_>isaacs: I will trade you a hiredis bug fix for a node http parser crash fix.
23:40:25  <isaacs>mjr_: no deal
23:40:33  * mjr_shakes fist
23:40:37  <isaacs>mjr_: i am perfectly happy not using hiredis.
23:40:44  <isaacs>mjr_: i WILL fix node's http bug anyway, though.
23:40:49  <isaacs>mjr_: just because i like you.
23:40:55  <bnoordhuis>it's a strange bug though
23:41:02  <bnoordhuis>i can't see a code path that might cause that
23:41:25  <mjr_>I resisted hiredis for a long time, but I couldn't deny the numbers. Now that we have flame graph technology, I can revisit and hopefully close the gap even further.
23:41:50  <mjr_>The old V8 profiler basically lied to me and wasted my time on countless dead ends.
23:43:38  <bnoordhuis>now that would be a nice gsoc project, a valgrind + callgrind plugin that works with v8
23:51:41  * orlandovftwquit (Ping timeout: 260 seconds)
23:53:57  * ircretaryjoined
23:56:28  <piscisaureus_>I am not so confident that the typical gsoc applicant can pull that off
23:56:34  <piscisaureus_>but maybe I am too pessimistic here
23:58:07  <bnoordhuis>one can dream though