00:00:00  * ircretary quit (Remote host closed the connection)
00:00:07  * ircretary joined
00:00:19  <trevnorris>hey, is this "beforeExit" event going to fire when there's an error?
00:00:22  <trevnorris>tjfontaine: ^ ?
00:02:20  <trevnorris>othiym23: if process._exiting === true, should I bother to run any error callbacks. i.e. if the process.on('exit'... callback throws?
00:03:03  <othiym23>so what's the overall goal here?
00:03:18  <trevnorris>i dunno. this feature is because you requested it ;)
00:03:19  <othiym23>this seems like a bunch of new changes to me
00:03:22  <trevnorris>i'm just the implementor.
00:03:42  <othiym23>hmmmmmmmmmmmmm
00:04:17  <tjfontaine>I'm not sure we're going to land beforeExit :)
00:04:26  <othiym23>I feel like we don't want to do anything unexpected if we're in the process of bringing the process down
00:04:31  <othiym23>so no?
00:04:43  <trevnorris>tjfontaine: thanks :)
00:05:15  <trevnorris>beforeExit was meant to help revive a process in case it ran out of "refs", but never discussed how it might affect error handling.
00:05:46  * ISAACS changed nick to isaacs
00:05:49  <isaacs>trevnorris: pong
00:06:28  <trevnorris>isaacs: question about error handling from the error callbacks. um. othiym23 think you can give a short summary?
00:06:33  <trevnorris>i suck at doing that.
00:07:01  <isaacs>ok, i see
00:07:16  <trevnorris>othiym23: fyi, doing that causes three of your tests to fail.
00:07:27  <isaacs>i think if the error handler throws, then let it crash
00:07:41  <isaacs>trevnorris: ie, no try/catch around errorHandler(er)
00:07:57  <trevnorris>isaacs: well, problem there is then process.on('exit' won't fire
00:08:02  <isaacs>hmm.
00:08:04  <trevnorris>that's what got me started into all this in the first place.
00:08:06  <isaacs>why not?
00:08:18  <othiym23>those are groundwater_'s tests, not mine
00:08:25  <trevnorris>because it's like throwing from _fatalException.
00:08:28  <groundwater_>othiym23: lol
00:08:28  <isaacs>trevnorris: ok
00:08:58  <groundwater_>trevnorris: is it just the error-in-error test that's failing?
00:09:23  <isaacs>trevnorris: threw=true; try { handleError(er);threw=false; } finally { if(threw) process.emit('exit', process.exitCode || 8) }
00:09:27  <trevnorris>groundwater_: it's throw-in-before-multiple, throw-in-before and throw-in-after
00:09:57  <groundwater_>ahh, interesting because those used to pass
00:10:09  <groundwater_>why don't i have a look at your latest changes later
00:10:57  <groundwater_>i can also try to writeup a summary of the edge case behavior
00:11:04  <trevnorris>isaacs: it's possible to run through the error callbacks one more time, which I thought might be useful to debug if one of your error callbacks throws.
00:11:49  <trevnorris>which i've implemented (wasn't hard) but also othiym23 wanted a custom error message that would print like "Error handler threw: <stacktrace> when handling error: <stacktrace>"
00:13:21  <othiym23>trevnorris, isaacs: everybody is confused when a domain handler throws and they get the original stacktrace that was handed to the domain handler
00:14:05  <trevnorris>yeah. so just allow the errorHandler to throw and that'll be it.
00:14:23  <trevnorris>if they need extra information then you can listen for 'exit' event and check the status code.
00:14:25  * dshaw_ joined
00:17:31  <trevnorris>othiym23: i think it'd just be helpful if we didn't lose the stacktrace when getting into _fatalException, then they'd have the stack that got them there as well.
00:17:36  <trevnorris>that should be enough.
00:17:56  <othiym23>not sure what behavior you're describing here
00:20:53  <trevnorris>... screw it. need to head out.
00:21:27  <trevnorris>isaacs: for some reason I'm losing the stack trace when I'm entering _fatalException. if you have any ideas why, it'd be appreciated.
00:23:17  <isaacs>trevnorris: i'll dig into it tomorrow
00:26:35  * mikeal quit (Quit: Leaving.)
00:27:25  * bradleymeck joined
00:29:49  * jmar777 quit (Remote host closed the connection)
00:30:26  * jmar777 joined
00:30:58  * FROGGS joined
00:34:38  * jmar777 quit (Ping timeout: 240 seconds)
00:37:59  * AvianFlu quit (Remote host closed the connection)
00:38:27  * AvianFlu joined
00:42:36  * dshaw_ quit (Quit: Leaving.)
00:43:33  * AvianFlu quit (Ping timeout: 272 seconds)
00:49:39  * pachet quit (Quit: leaving)
00:50:42  * dshaw_ joined
00:53:25  * bradleymeck quit (Quit: bradleymeck)
00:55:02  * dshaw_ quit (Ping timeout: 240 seconds)
00:55:57  * FROGGS quit (Ping timeout: 272 seconds)
00:56:34  * jmar777 joined
00:57:26  * Ralith quit (Ping timeout: 240 seconds)
01:01:18  * TooTallNate quit (Quit: ["Textual IRC Client: www.textualapp.com"])
01:13:33  * dshaw_ joined
01:15:21  * dshaw_1 joined
01:15:34  * dshaw_ quit (Read error: Connection reset by peer)
01:19:36  * dshaw_1 quit (Ping timeout: 245 seconds)
01:23:52  * kenansulayman quit (Ping timeout: 264 seconds)
01:25:12  * kenansulayman joined
01:27:06  * Ralith joined
01:35:46  * inolen joined
01:37:19  * abraxas joined
01:38:44  * kazupon joined
01:45:49  * kazupon quit (Remote host closed the connection)
01:48:10  * FROGGS joined
01:55:19  * kazupon joined
02:13:27  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
02:21:34  * kazupon quit (Read error: Connection timed out)
02:28:03  * FROGGS quit (Ping timeout: 272 seconds)
02:28:30  * FROGGS joined
02:31:08  * kazupon joined
02:37:55  * AvianFlu joined
02:38:27  * FROGGS quit (Ping timeout: 272 seconds)
02:51:08  * FROGGS joined
02:55:53  * kazupon quit (Read error: Connection timed out)
03:01:49  * kazupon joined
03:33:01  * FROGGS quit (Ping timeout: 272 seconds)
03:44:07  * kazupon quit (Read error: Operation timed out)
03:49:51  * FROGGS joined
03:50:08  * kazupon joined
04:07:33  * AvianFlu quit (Remote host closed the connection)
04:08:03  * AvianFlu joined
04:08:30  * AvianFlu quit (Remote host closed the connection)
04:08:44  * kazupon quit (Read error: Connection timed out)
04:10:14  * kazupon joined
04:10:28  * mikeal joined
04:12:03  * AvianFlu joined
04:12:14  * kazupon quit (Read error: Connection reset by peer)
04:16:24  * abraxas_ joined
04:16:36  * AvianFlu quit (Remote host closed the connection)
04:17:05  * AvianFlu joined
04:17:59  * abraxas quit (Ping timeout: 272 seconds)
04:21:47  * AvianFlu quit (Ping timeout: 272 seconds)
04:30:33  * brson quit (Ping timeout: 272 seconds)
04:34:56  * mikeal quit (Quit: Leaving.)
04:35:05  * mikeal joined
04:43:02  * FROGGS quit (Ping timeout: 240 seconds)
04:52:37  * avalanche123 joined
04:54:52  * avalanche123 quit (Client Quit)
04:59:49  * kazupon joined
05:19:47  * kazupon quit (Read error: Connection timed out)
05:21:32  * kazupon joined
05:34:59  * paddybyers joined
05:57:16  * defunctzombie_zz changed nick to defunctzombie
06:10:28  * paddybyers quit (Quit: paddybyers)
06:16:49  * paddybyers joined
06:23:28  * defunctzombie changed nick to defunctzombie_zz
06:41:11  <MI6>nodejs-v0.10-windows: #277 UNSTABLE windows-ia32 (10/603) windows-x64 (11/603) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/277/
06:43:15  * dominictarr joined
06:54:50  * dominictarr quit (Quit: dominictarr)
07:03:33  * FROGGS joined
07:05:19  * kazupon quit (Read error: Connection reset by peer)
07:11:07  * kazupon joined
07:12:31  * rendar joined
07:36:20  * jmar777 quit (Remote host closed the connection)
07:43:34  <indutny>heya
07:51:05  * c4milo quit (Read error: Connection reset by peer)
07:51:40  * c4milo joined
07:54:14  * paddybyers quit (Quit: paddybyers)
07:55:32  * c4milo quit (Remote host closed the connection)
07:56:05  * c4milo joined
08:00:26  * c4milo quit (Ping timeout: 245 seconds)
08:03:57  <indutny>oh ben
08:04:40  <indutny>tjfontaine: yt?
08:07:14  * paddybyers joined
08:11:20  * bnoordhuis joined
08:12:08  * kazupon quit (Read error: Connection reset by peer)
08:23:09  * kazupon joined
08:31:13  * kenansulayman joined
08:42:21  <indutny>bnoordhuis: heya
08:42:28  <indutny>bnoordhuis: remember you removed my code from 0.10
08:42:33  <indutny>because it was hanging and failing on 10.9?
08:42:38  <indutny>osx
08:42:51  <indutny>looks like the latter one was your fault, actually
08:43:05  <indutny>bnoordhuis: https://github.com/joyent/libuv/blob/master/src/unix/fsevents.c#L555
08:50:03  <MI6>joyent/libuv: Fedor Indutny master * 0fdd99f : fsevents: increase stack size for OSX 10.9 - http://git.io/x36sDw
08:50:37  <indutny>saghul: thanks
08:51:08  <saghul>indutny welcome. I'll re-check the other one a bit later, gotta work now ;-)
08:51:16  <indutny>thank you! :)
08:51:18  <indutny>you did a lot
08:53:57  <MI6>libuv-master: #299 UNSTABLE windows (3/196) smartos (2/195) http://jenkins.nodejs.org/job/libuv-master/299/
08:55:36  <MI6>libuv-master-gyp: #240 FAILURE windows-x64 (3/196) windows-ia32 (3/196) http://jenkins.nodejs.org/job/libuv-master-gyp/240/
09:01:43  <MI6>libuv-node-integration: #284 FAILURE http://jenkins.nodejs.org/job/libuv-node-integration/284/
09:20:33  <bnoordhuis>indutny: i'll go out on a limb and say it's apple's fault
09:20:44  <bnoordhuis>if it's anyone's fault
09:21:46  <indutny>haha :)
09:21:47  <indutny>why
09:21:53  <indutny>because they started using a bit more stack?
09:22:12  <indutny>bnoordhuis: so, why not merge it back into v0.10?
09:22:14  <indutny>are there any reasons?
09:22:27  <indutny>I know at least one problem that my changes in FSEvents are fixing
09:22:57  <indutny>bnoordhuis: btw, what is your opinion on this https://github.com/joyent/libuv/pull/958/files#diff-c5e4480a66af872d6425f7f6ecd55e02R141
09:23:07  <indutny>I'd like to run close callbacks in loop_delete
09:23:15  <indutny>if closes are happening in platform dependent code
09:23:29  <bnoordhuis>hrm, i'm not that comfortable with big changes in stable branches
09:23:44  <bnoordhuis>landing all those fsevents changes was, in retrospect, not a great idea
09:24:03  <indutny>there was a lot of buggy stuff in libuv
09:24:10  <indutny>and most of it wasn't produced by my hands
09:24:13  <indutny>:)
09:24:26  <indutny>I think its ok to have problems and ok to solve them
09:24:44  <indutny>removing features because of some rare failures is not a reason for it
09:25:12  <indutny>though, I must admit that we have pretty awkward tests for fsevents in general
09:25:14  <indutny>not just osx side
09:25:52  <bnoordhuis>fixing bugs is one thing but i need to be fairly confident it won't cause regressions elsewhere
09:25:52  <bnoordhuis>with 500+ line patches that's hard to verify
09:26:37  <bnoordhuis>btw, you can't run close callbacks in uv__loop_delete(), that's asking for trouble
09:27:59  <bnoordhuis>and you know, i'm not sure i'm a fan of #958
09:28:23  <bnoordhuis>lots of complexity, lots of potential for performance problems
09:28:50  <bnoordhuis>i guess my biggest beef is that it hides the fact that libuv may decide to switch from something that's O(1) to something that's O(n) or worse
09:29:37  <indutny>it doesn't
09:29:44  <indutny>apple will throw a couple of stderr lines
09:29:44  <indutny>:)
09:29:48  <indutny>haha
09:29:54  <indutny>if you want my opinion - I don't like it either
09:30:03  <indutny>but I know that it's important for people
09:30:23  <indutny>bnoordhuis: I dislike calling close callbacks in loop_delete() too
09:30:36  <indutny>bnoordhuis: but I need to close a uv_async_t handle
09:30:45  <indutny>bnoordhuis: which is allocated loop state
09:31:02  <indutny>bnoordhuis: should I just pretend that I know internals and free structure immediately after uv_close()?
09:31:14  <indutny>bnoordhuis: right now, there won't be any problems with this
09:31:15  <bnoordhuis>no, because that's not very maintainable
09:31:19  <indutny>indeed
09:31:36  <bnoordhuis>but closing handles in uv__loop_delete() has the same issue
09:31:51  <bnoordhuis>the moment libuv's internals change, things start breaking left and right
09:31:56  <indutny>not that much
09:31:59  <indutny>it's a core thing
09:32:08  <indutny>it's hard to change internals that much without changing
09:32:24  <indutny>but tracking all platform-dependent changes is really hard
09:32:35  <indutny>anyway
09:32:40  <indutny>I'm open to any other suggestions
09:32:47  <indutny>and that's the main reason for asking you there ;)
09:32:53  <bnoordhuis>let me think about that for a bit
09:33:17  <indutny>thank you
09:35:38  * bnoordhuis is afk for a bit
09:40:27  * bnoordhuis quit (Ping timeout: 260 seconds)
10:06:16  * bnoordhuis joined
10:10:34  * abraxas_ quit (Remote host closed the connection)
10:11:08  * abraxas joined
10:22:25  <indutny>bnoordhuis: so what's your approach about?
10:22:37  <indutny>did you mean waiting in uv__fs_event_close?
10:23:35  * Kakera joined
10:28:50  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
10:29:55  <bnoordhuis>indutny: just commented
10:29:59  <indutny>thanks
10:30:56  <indutny>bnoordhuis: just commented too :)
10:37:52  <bnoordhuis>and commented again :)
10:39:54  <indutny>bnoordhuis: haha
10:40:01  <indutny>bnoordhuis: I think it'd be faster to continue here
10:40:06  <bnoordhuis>will upgrading xcode break everything? (my mac is bugging me about software updates)
10:40:09  <indutny>bnoordhuis: otherwise it looks like a slow-mo chat
10:40:14  <indutny>bnoordhuis: I'm not sure
10:40:21  <indutny>you're probably going to start to live with lldb
10:40:26  <indutny>if you already didn't
10:40:29  <bnoordhuis>they removed gdb?!
10:40:33  <indutny>in osx 10.9
10:40:37  <indutny>not sure about xcode
10:40:38  <bnoordhuis>waaaaaaaaaaa?
10:41:09  <bnoordhuis>okay, that means i'm staying on 10.8 forever
10:41:12  <indutny>hahaha
10:41:22  <indutny>well, lldb is not that bad, actually
10:41:26  <indutny>anyway, returning to FSEvents
10:41:57  <bnoordhuis>yes
10:42:06  <bnoordhuis>so, i'm still not in love with the current approach
10:42:29  <bnoordhuis>i mean that in a general sense, not talking particulars here
10:43:07  <bnoordhuis>i'm leaning towards telling the user when fsevents fails so they have a chance to do polling themselves
10:43:32  <bnoordhuis>which is pretty straightforward to implement in node, i think
10:44:12  <indutny>hm...
10:47:10  <indutny>bnoordhuis: I see where it goes
10:48:26  <MI6>nodejs-v0.10: #1551 UNSTABLE smartos-x64 (4/603) smartos-ia32 (4/603) osx-ia32 (1/603) http://jenkins.nodejs.org/job/nodejs-v0.10/1551/
10:48:30  <bnoordhuis>you agree that's a better approach or ?
10:49:57  <indutny>bnoordhuis: let me think about it for a bit
10:50:03  <indutny>I agree that polling in node is simpler
10:50:29  <indutny>but polling in C is as hard as it is
10:50:35  <indutny>though
10:50:40  <indutny>requires no changes to event-loop
10:50:45  <indutny>and handled only by user
10:51:03  <indutny>also, considering that it's not recursive right now
10:51:10  <indutny>yeah, I think I agree with you
10:51:14  <bnoordhuis>okay, cool :)
10:51:25  <indutny>bnoordhuis: should I just invoke handle->cb with status=-1 ?
10:51:32  <indutny>in case Start() fails
10:51:47  <bnoordhuis>well... something like, i don't know, -EBUSY?
10:52:18  <bnoordhuis>and document in uv.h that it means that fsevents is at max capacity
10:52:37  <bnoordhuis>maybe -EMFILE is a better status code but i'll leave that to your good taste
10:52:51  <indutny>-EMFILE sounds like a better approach
10:52:58  <indutny>bad thing is that I'll need to call it on all handles
10:53:23  <bnoordhuis>you do? does fsevents stop working altogether when you hit that rpc error?
10:53:39  <bnoordhuis>i thought that's only for new handles?
10:53:52  <indutny>yes
10:54:01  <indutny>because we're using single shared FSEventStream
10:54:21  <bnoordhuis>oh. that sucks
10:54:34  <indutny>using new FSEventStream for each handle sucks even more
10:54:42  <indutny>because it fails really early :)
10:54:48  <bnoordhuis>yeah, i can imagine
10:54:52  <indutny>so
10:55:00  <indutny>probably I should try to create new stream first
10:55:04  <indutny>no
10:55:06  <indutny>it won't work
10:55:08  <indutny>ok
10:55:15  <indutny>just throw all errors to user
10:55:15  <indutny>:)
10:55:26  <bnoordhuis>and let the user deal with it, yeah
11:03:56  * kazupon quit (Remote host closed the connection)
11:04:24  * kazupon joined
11:06:03  <indutny>bnoordhuis: sounds like a plan
11:08:46  * kazupon quit (Ping timeout: 245 seconds)
11:12:12  <indutny>oh god
11:13:08  <indutny>I just found that I'm traversing handles without mutex
11:13:12  <indutny>in FSEvents thread
11:16:27  <indutny>to say that it's terrible is to say nothing
11:19:02  <bnoordhuis>hah
11:24:56  <indutny>bnoordhuis: I think I'm almost done with it
11:27:08  <MI6>joyent/node: Ben Noordhuis v0.10 * 91a0e52 : src: IsInt64() should return bool, not int - http://git.io/5WPGsg
11:34:32  <bnoordhuis>$ make test-gc
11:34:32  <bnoordhuis>touch out/Makefile
11:34:32  <bnoordhuis>/usr/bin/python tools/gyp_node -f ninja
11:34:39  <bnoordhuis>^ seriously, who thought that was a good idea?
11:35:32  <bnoordhuis>at least in master it no longer eats your config.gypi, like it used to
11:37:32  <MI6>nodejs-v0.10: #1552 UNSTABLE smartos-x64 (5/603) smartos-ia32 (4/603) linux-x64 (1/603) linux-ia32 (1/603) http://jenkins.nodejs.org/job/nodejs-v0.10/1552/
11:40:22  <indutny>bnoordhuis: are you read? :)
11:40:31  <indutny>s
11:40:34  <indutny>s/read/ready/
11:41:43  <bnoordhuis>well-read even
11:42:10  <bnoordhuis>if you're asking, do i have time to review a PR?, well, i'm already reviewing something else
11:42:17  <bnoordhuis>fixing something else, really
11:42:22  <MI6>nodejs-v0.10-windows: #278 UNSTABLE windows-ia32 (11/603) windows-x64 (9/603) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/278/
11:42:51  <indutny>bnoordhuis: tadam https://github.com/joyent/libuv/pull/965
11:43:03  <indutny>bnoordhuis: we all do :)
11:49:03  <indutny>bnoordhuis: still trying to review trevor's stuff
11:49:31  <indutny>bnoordhuis: i'm not sure how to describe it without making myself vulnerable to your "that's what she said" jokes
11:53:17  <bnoordhuis>now i'm curious what that description would be :)
11:54:53  <indutny>haha
11:54:58  <indutny>no way I'll tell you
11:57:41  <indutny>bnoordhuis: two star? :0
11:58:30  <bnoordhuis>indutny: you do approx. -> if (p == NULL) { list->head = p; } else { p->next = p; }
11:58:42  <indutny>err
11:58:44  <indutny>that was a mistake :)
11:58:52  <indutny>it should be list->tail = p
11:58:53  <indutny>not
11:58:55  <indutny>p->next = p
11:59:15  <indutny>ah
11:59:18  <bnoordhuis>okay. wait, let me comment on the issue
11:59:21  <indutny>yeah
11:59:26  <indutny>refresh page
11:59:31  <indutny>I've added one commit to PR
11:59:56  <bnoordhuis>okay, good
12:00:08  <bnoordhuis>the two star is something else though :)
12:00:13  <bnoordhuis>wait, i'll post an example
12:00:20  <indutny>yeah
12:00:27  <indutny>like p = **x
12:00:28  <indutny>:)
12:00:40  <indutny>void** res;
12:00:44  <indutny>if (p == NULL)
12:00:47  <indutny>res = something
12:00:47  <indutny>else
12:00:49  <indutny>res = something-else
12:00:50  <indutny>*res = p
12:00:52  <indutny>this?
12:02:25  <bnoordhuis>https://github.com/joyent/libuv/pull/965/files#r7154685 <- that
12:02:33  <indutny>yeah
12:02:38  <indutny>close to what I thought
12:03:16  <bnoordhuis>two star programming. i think it's linus who came up with that phrase
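The "two star" idiom is a C pointer-to-pointer trick: instead of branching on whether the list is empty, keep a `foo**` that points at the slot to be written. The closest JavaScript analogue (illustrative only — the libuv code in question is C) keeps a reference to the node whose `.next` field is that slot, using a dummy head so list building needs no empty-list special case:

```javascript
// JS analogue of two-star list building: where C would keep a foo** cursor
// pointing at the slot to write, keep a reference to the node that owns the
// slot. The dummy head node removes the head/tail branch bnoordhuis's
// branching pseudocode above needed.
function buildList(items) {
  const head = { next: null }; // dummy node; head.next is the real list
  let slot = head;             // plays the role of the foo** cursor
  for (const item of items) {
    slot.next = { value: item, next: null };
    slot = slot.next;          // advance the cursor, branch-free
  }
  return head.next;            // null for an empty input, as expected
}
```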
12:03:46  <indutny>yeah
12:03:48  <indutny>anyway
12:04:08  <indutny>added two star code
12:04:13  <indutny>specially for you
12:04:53  <indutny>any other comments? :)
12:09:58  <bnoordhuis>left another comment regarding if statements with negation
12:10:06  <bnoordhuis>i don't mind that one too much though
12:10:20  <bnoordhuis>btw, can you add commits rather than rebase+force-push?
12:10:57  <bnoordhuis>oh btw, you could move the err = -ENOMEM out of the critical section
12:11:26  <indutny>bnoordhuis: I add them
12:11:32  <indutny>you're just commenting on PR
12:11:38  <indutny>that's why things are disappearing
12:11:55  <bnoordhuis>oh right, github again
12:12:11  <bnoordhuis>yeah, i'm looking at the PR as a whole, not individual commits
12:12:48  <indutny>yeah
12:12:54  <indutny>so, I pushed final commit
12:13:00  <indutny>please take a peek again
12:13:06  <indutny>should be all clear now
12:13:16  <indutny>and then I think we should merge libuv in node master
12:13:28  <indutny>to make it work on 10.9
12:15:02  <bnoordhuis>there's no regression test though
12:15:12  <indutny>and it couldn't be there
12:15:24  <indutny>as this error isn't usually reproducible
12:15:44  <bnoordhuis>it'll happen eventually right? just keep adding enough watchers until you hit that rpc limit
12:17:29  <indutny>no, it won't
12:17:32  <indutny>we're using one stream
12:17:47  <indutny>I'll need to spawn 400-1000 processes
12:17:54  <indutny>each creating its own stream
12:18:11  <indutny>are you ok with this?
12:18:16  <bnoordhuis>how do people manage to trigger that error with only a handful of node processes?
12:18:31  <indutny>perhaps they have Dropbox or something else watching directories
12:18:38  <bnoordhuis>btw, i think you could two-star that head/tail code as well
12:18:41  <indutny>you know number of FSEventStreams is limited on system-level
12:18:47  <bnoordhuis>hrm, okay
12:18:50  <bnoordhuis>well, balls
12:18:50  <indutny>bnoordhuis: yes, but why?
12:19:11  <bnoordhuis>because branchless code is easier to understand than branching code
12:19:24  <indutny>oh gosh
12:19:33  <bnoordhuis>compilers usually generate slightly better code too
12:19:50  <indutny>I'll still have a branch for head
12:20:51  <bnoordhuis>how so? you can do foo** next = &head; right?
12:20:53  <indutny>no, it won't eliminate branches at all
12:21:00  <indutny>well, head could be NULL
12:21:02  <indutny>at the end of the loop
12:21:05  <indutny>if there're no matched events
12:21:23  <indutny>I don't think two star will improve situation
12:21:23  <bnoordhuis>okay, but that's outside the inner loop
12:21:34  <indutny>yes...
12:21:38  <bnoordhuis>it's not a big thing
12:21:46  <indutny>well, I'm curious
12:21:48  <indutny>please continue :)
12:21:51  <bnoordhuis>but you want to become the greatest programmer in the world, right, fedor?
12:21:56  <indutny>hahaha
12:21:58  <indutny>not really
12:22:01  <indutny>I just like to learn more
12:23:35  <bnoordhuis>i used two star pointers to some effect in that heapify-timers PR
12:24:02  <bnoordhuis>which imo improves readability compared to having if/then/else statements all over the place
12:24:22  <bnoordhuis>but only insofar that data structure code can be readable, of course
12:25:09  <bnoordhuis>we should require that all libuv PRs come with proofs of correctness written in coq!
12:25:42  <bnoordhuis>okay, i digress a little
12:25:50  <indutny>I agree
12:26:00  <indutny>haha
12:26:04  <indutny>so LGTY?
12:26:06  <bnoordhuis>about the proofs or that i'm digressing? :)
12:26:32  <bnoordhuis>yeah, i guess so
12:26:48  <bnoordhuis>i don't spot any obvious bugs, at least
12:27:26  <bnoordhuis>ho!
12:27:27  <bnoordhuis>../../src/unix/fsevents.c: In function ‘uv__fsevents_push_event’:
12:27:27  <bnoordhuis>../../src/unix/fsevents.c:196: warning: assignment from incompatible pointer type
12:27:27  <bnoordhuis>../../src/unix/fsevents.c:198: warning: assignment from incompatible pointer type
12:27:27  <bnoordhuis>[133/133] LINK run-benchmarks, POSTBUILDS
12:28:07  <bnoordhuis>you didn't spot that, fedor?
12:29:00  <indutny>haha
12:29:03  <indutny>nope
12:29:05  <indutny>new compiler
12:30:17  <indutny>bnoordhuis: fixed
12:30:30  <indutny>bnoordhuis: (void**) => (uv_fsevents_event_t**)
12:30:31  * kazupon joined
12:31:33  <indutny>bnoordhuis: pushed fixes to PR
12:31:54  <bnoordhuis>also this:
12:31:55  <bnoordhuis>../src/unix/fsevents.c: In function 'uv__cf_loop_cb':
12:31:56  <bnoordhuis>../src/unix/fsevents.c:384:7: warning: 'i' may be used uninitialized in this function [-Wmaybe-uninitialized]
12:31:59  <bnoordhuis> int i;
12:32:01  <bnoordhuis> ^
12:32:04  <bnoordhuis>../src/unix/fsevents.c:440:14: warning: 'paths' may be used uninitialized in this function [-Wmaybe-uninitialized]
12:32:07  <bnoordhuis> cf_paths = pCFArrayCreate(NULL, (const void**) paths, path_count, NULL);
12:33:15  <bnoordhuis>the compiler is right, too
12:34:05  <indutny>that is a lie
12:34:07  <indutny>at least 440
12:34:20  <bnoordhuis>well, the other one isn't
12:34:25  <indutny>except i = 0
12:34:52  <bnoordhuis>it's not, check the first goto statement
12:35:01  <indutny>yes
12:35:07  <indutny>I need to add i = 0
12:35:10  <bnoordhuis>if (state->fsevent_need_reschedule == 0) { <- that one
12:35:14  <bnoordhuis>indeed
12:35:35  <indutny>bnoordhuis: please take another look :)
12:35:41  <indutny>my compiler is playing bad games with me
12:36:10  <bnoordhuis>still the same warnings...
12:36:26  <indutny>yeah
12:36:30  <indutny>but this time I'm cool
12:36:33  <indutny>and compiler is stupid
12:36:44  <indutny>ah
12:36:45  <indutny>no
12:37:04  <indutny>hahaha
12:37:05  <indutny>shit
12:37:08  <indutny>pushed out another fix
12:37:22  <bnoordhuis>with all due respect, fedor, but next time please spend a little more time on your patches before asking me to review them
12:37:40  <bnoordhuis>this is taking up time that could've been spent better
12:38:10  <bnoordhuis>still complaining about paths
12:38:14  <indutny>yeah, noted
12:38:15  <bnoordhuis>okay, i'm taking a lunch break
12:39:12  <indutny>and pushed fix for paths
12:39:44  <indutny>gosh
12:39:56  <indutny>I need to work on my attention to details
12:40:02  <indutny>I mean, now it looks good
12:54:14  * bnoordhuis quit (Ping timeout: 240 seconds)
13:16:25  * jmar777 joined
13:17:26  * bnoordhuis joined
13:17:58  <bnoordhuis>and back
13:24:00  * FROGGS quit (Quit: Leaving)
13:27:22  <bnoordhuis>https://news.ycombinator.com/news <- libuv's on the home page. everyone upvote!
13:32:25  <bnoordhuis>also this: http://www.nrc.nl/nieuws/2013/10/23/europarlement-stemt-voor-opschorting-swift-verdrag-met-vs/ <- as of today, the EU no longer lets the USA view EU bank transactions
13:37:11  * kazupon quit (Remote host closed the connection)
13:37:38  * kazupon joined
13:40:07  * inolen quit (Read error: Connection reset by peer)
13:40:14  * inolen joined
13:42:05  * kazupon quit (Ping timeout: 245 seconds)
13:46:31  * c4milo joined
13:48:54  * wavded joined
13:51:02  * c4milo quit (Ping timeout: 240 seconds)
13:57:39  <indutny>haha
13:57:40  <indutny>already did
13:57:45  <indutny>oh
13:57:52  <indutny>last thing is pretty interesting
13:58:52  <bnoordhuis>yeah. i guess the EU got pissed off about NSA spying
13:59:10  <indutny>oh god
13:59:15  <indutny>its NSA again :)
14:00:15  <indutny>so anyway
14:00:17  <indutny>LGTY?
14:01:12  * jmar777 quit (Read error: Connection reset by peer)
14:01:37  * jmar777 joined
14:02:22  <bnoordhuis>i've come up with a brilliant new rule for reviewing
14:02:28  <bnoordhuis>i only look at a PR once a day
14:02:52  <bnoordhuis>if there are issues, you get a full day to fix them up, think through the consequences, etc. :)
14:03:00  <bnoordhuis>win all around, right?
14:03:03  <indutny>ok
14:03:07  <indutny>see you tomorrow then?
14:03:23  * pachet joined
14:04:45  <bnoordhuis>yes. your PR is the first thing i'll look at tomorrow :)
14:06:34  <indutny>:)
14:06:35  <indutny>ok
14:09:06  <indutny>any suggestions for next stuff?
14:09:09  <indutny>except reviewing trevnorris :)
14:09:15  <indutny>which I'm trying to do right now
14:10:32  <bnoordhuis>hrm, let's see
14:11:27  <indutny>there's interesting epoll bug
14:11:30  <indutny>related to cluster
14:11:45  <indutny>right?
14:12:02  <bnoordhuis>https://github.com/joyent/libuv/issues/826 <- there's this which i'm reasonably sure is still an issue
14:12:03  <indutny>you think that epoll has some events left after passing fd to another process?
14:12:11  * kazupon joined
14:12:22  <indutny>yeah, I think its it
14:12:25  <bnoordhuis>hah, great minds - that's the same issue :)
14:12:30  <bnoordhuis>there's also a node issue somewhere
14:13:24  <bnoordhuis>god, github's search is so awful
14:13:42  <bnoordhuis>https://github.com/joyent/node/issues/6222
14:14:23  <indutny>yes
14:14:31  <indutny>so is that what you think about it?
14:14:50  <indutny>anyway
14:15:00  <bnoordhuis>#6222 is because the sending process closes the fd too quickly, i think
14:15:04  <indutny>as far as I understand - it could happen only in one condition
14:15:14  <indutny>fd is closed in epoll callback
14:15:19  <indutny>and opened in another one
14:15:22  <indutny>in the same loop
14:15:25  <indutny>if that's the case
14:15:31  * Petka joined
14:15:38  <indutny>limiting number of reported epoll events should fix it
14:15:42  <indutny>(just as experiment)
14:15:47  <indutny>right?
14:15:52  <bnoordhuis>possibly
14:16:22  <bnoordhuis>i commented that i suspect it's a stale event for an fd that was recently closed
14:16:27  <indutny>indeed
14:16:33  <bnoordhuis>when a new fd then comes in with recvmsg(), it gets assigned the same fd number
14:16:37  <indutny>I just think that it'd be cool if those people would try this out
14:16:48  <indutny>oooooh
14:16:51  <indutny>well
14:15:17  <Petka>how would node take completely overhauling EventEmitter internals like https://github.com/petkaantonov/FastEmitter
14:17:17  <indutny>we have loop->watchers
14:17:19  <indutny>and all the stuff
14:17:37  <indutny>Petka: there was a couple of attempts in doing that
14:17:58  <indutny>Petka: why yours is better?
14:18:19  <Petka>you mean why is it faster or why is it better?
14:18:32  <indutny>I think both :)
14:18:37  <indutny>anyway
14:18:39  <Petka>it's not better in the sense that it implements the node API and semantics of course
14:18:43  <indutny>my general opinion on this topic
14:18:50  <indutny>if it works as EventEmitter
14:19:01  <indutny>isn't breaking any tests
14:19:06  <indutny>when replacing it
14:19:08  <bnoordhuis>indutny: to illustrate what i think happens: libuv calls epoll_wait(), it gets a bunch of events. say the first event results in libuv closing fd 42. the second one makes it call recvmsg() which returns a new fd 42. the third event in the list is for the _old_ fd 42
14:19:10  <indutny>and faster than it is right now
14:19:12  <bnoordhuis>bam! you're dead
14:19:14  <Petka>well the test suite is exactly what is in node core
14:19:15  * spion joined
14:19:18  <indutny>Petka: it might be a good idea to replace it
14:19:19  <Petka>but it could have its own bugs ofc :)
14:19:26  <indutny>Petka: sure
14:19:42  <Petka>I am just gauging response right now, there is some work to do still
14:19:44  <indutny>bnoordhuis: oh god
14:19:49  <bnoordhuis>Petka: what's better about it?
14:20:02  <indutny>bnoordhuis: I think it could be solved if inserting fd into watchers list only after the loop pass
14:20:13  <Petka>nothing's better, it's the same api and semantics literally but using different data structure and algorithms to gain perf
14:20:18  <indutny>but
14:20:21  <indutny>its not that simple :)
14:20:25  <bnoordhuis>indutny: maybe make sure first whether that's actually what happens :)
14:20:27  <indutny>because it happens outside of that loop
14:20:35  <indutny>I think I know how to fix it
14:20:48  <bnoordhuis>Petka: okay, so it's faster but everything else is the same?
14:20:55  <Petka>yes
14:21:05  <Petka>I use the node core test suite with drop-in replacement
14:21:10  <bnoordhuis>faster all around or just in some cases?
14:21:20  <bnoordhuis>faster is better but not if it regresses some workloads
14:21:49  <Petka>the benchmarks test 1 event and 3 different events with 3 different handlers
14:21:57  <Petka>and 6
14:22:03  <Petka>speed is 10-200% faster
14:22:18  <Petka>of course if you have huge amount of separate listeners then that workload is slower
14:22:26  <Petka>I imagine
14:22:32  <bnoordhuis>how well does it work when EventEmitter goes megamorphic?
14:22:33  <Petka>what is the max amount of distinct event types usually?
14:22:41  <bnoordhuis>i guess that's the main issue with EE in real programs
14:22:55  <bnoordhuis>there's so many call sites that v8 stops trying to optimize it
14:23:25  <Petka>I will make a jsperf with that
14:23:32  <bnoordhuis>max amount... it varies. anywhere from < 10 to millions
14:24:04  <Petka>I mean max amount of different parallel event types on the emitter at the same time
14:24:14  <bnoordhuis>btw, we prefer plain old benchmarks. just drop some scripts in benchmark/
14:24:20  <Petka>for example the GC test is no problem because they are not there at the same time even if it's 250k different
14:24:38  <bnoordhuis>you mean events on a single instance?
14:24:43  <bnoordhuis>*different event types
14:25:03  <Petka>yes I mean active unique event types on a single instance
14:25:12  <bnoordhuis>see above, there's no real upper bound
14:25:25  <bnoordhuis>for example, there's this mongodb module that emits fields as events
14:25:48  <bnoordhuis>i'm not making this up
14:26:02  <Petka>yea actually I could transition to hash table in that case
14:27:16  <indutny>have I missed something?
14:28:33  <bnoordhuis>Petka: sounds interesting. open a PR if you want
14:29:10  <rendar>what is the EventEmitter? a class that receives callback replies from libuv and calls v8 stuff, which in turn calls js code callbacks?
14:29:22  <bnoordhuis>rendar: no, it's a pure js struct
14:29:28  <rendar>oh, i see
14:29:52  <indutny>bnoordhuis: what do you mean by "making this up"
14:29:55  <bnoordhuis>rendar: basically, you call emit('event', data, moredata) and the eventemitter passes that on to all subscribed listeners
14:30:04  <rendar>i see
14:30:04  <indutny>bnoordhuis: sorry, just want to understand this phrase :) and I hear it for the first time
14:30:06  <bnoordhuis>rendar: it's basically the observer pattern
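[editor's note: bnoordhuis's description is the whole pattern. A minimal sketch, deliberately omitting the real EventEmitter's edge cases (once(), removeListener(), 'error' handling, listener-count warnings):]

```javascript
// Minimal observer-pattern emitter, roughly the shape of lib/events.js.
class TinyEmitter {
  constructor() { this._events = {}; }
  on(name, fn) {
    (this._events[name] = this._events[name] || []).push(fn);
    return this;
  }
  emit(name, ...args) {
    const listeners = this._events[name] || [];
    for (const fn of listeners) fn(...args); // fan out to all subscribers
    return listeners.length > 0;
  }
}

const e = new TinyEmitter();
const seen = [];
e.on('event', (a, b) => seen.push([a, b]));
e.on('event', (a) => seen.push(a));
e.emit('event', 'data', 'moredata');
// seen: [['data', 'moredata'], 'data']
```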
14:30:23  <bnoordhuis>indutny: oh, that there's a module out there that emits database fields as events
14:30:25  <rendar>Petka, which data structures make that better/faster, specifically?
14:30:40  <indutny>bnoordhuis: ah, I missed the message
14:30:43  <rendar>bnoordhuis, yeah i see, basically like those signal/slot stuff of Qt (more or less) :)
14:30:49  <indutny>rendar: sort of
14:30:54  <indutny>rendar: but it uses string keys
14:30:59  <indutny>instead of objects
14:30:59  <bnoordhuis>indutny: i was expressing astonishment at that fact
14:31:02  <rendar>indutny, yeah, right
14:31:36  <Petka>rendar array
14:31:50  <rendar>Petka, an array?
14:32:11  <Petka>yes, where you store everything side by side to lessen indirection
14:32:56  <rendar>Petka, hmm, i see, so the string 'event_name' is mapped (with a hash table?) to an array of objects "connected" to that particular 'event_name' ?
14:33:09  <Petka>in fact the external array is slowing it down by 15% in the single emit case.... I was initially using the Properties[] array of the event emitter object
14:33:18  <rendar>hmm, i see
14:33:42  <Petka>hmm not Properties[].. I mean Elements[] lol
14:33:55  <Petka>e.g this[i] instead of this._events[i]
14:34:41  <Petka>but it could be considered incompatible in that case since it would create all those array indices on the event emitter object itself
14:35:15  <Petka>the array structure is like ["event_name", function, function, undefined, undefined, "event_name2", undefined, undefined, undefined, undefined] for example
14:35:32  <Petka>where event_name2 will be claimed next since it doesn't have listeners
14:35:43  <Petka>and "event_name" has 2 listeners
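[editor's note: a sketch of the flat-array layout Petka describes, reconstructed from the chat. The block size of 4 listener slots per sentinel and the "claim a new block at the end" policy are guesses; his actual code may resize and reuse blocks differently.]

```javascript
// Flat array: a sentinel string followed by a fixed number of listener
// slots, repeated. Everything sits side by side to lessen indirection.
const SLOTS = 4; // listener slots per event block (assumed)

function addListener(store, name, fn) {
  for (let i = 0; i < store.length; i += 1 + SLOTS) {
    if (store[i] === name) {
      for (let j = i + 1; j <= i + SLOTS; j++) {
        if (store[j] === undefined) { store[j] = fn; return; }
      }
      throw new Error('block full (a real impl would resize here)');
    }
  }
  // no block for this event yet: claim one at the end
  const base = store.length;
  store[base] = name;
  for (let j = 1; j <= SLOTS; j++) store[base + j] = undefined;
  store[base + 1] = fn;
}

function emit(store, name, ...args) {
  // walk the sentinels until we find `name` (or run off the end)
  for (let i = 0; i < store.length; i += 1 + SLOTS) {
    if (store[i] !== name) continue;
    for (let j = i + 1; j <= i + SLOTS; j++) {
      const fn = store[j];
      if (fn === undefined) break; // undefined slot ends this block's listeners
      fn(...args);
    }
    return;
  }
}

const store = [];
const calls = [];
addListener(store, 'event_name', () => calls.push('a'));
addListener(store, 'event_name', () => calls.push('b'));
addListener(store, 'event_name2', () => calls.push('c'));
emit(store, 'event_name');
// calls: ['a', 'b']; store looks like
// ['event_name', fn, fn, undefined, undefined, 'event_name2', fn, ...]
```

As the chat notes, lookup is linear in the number of distinct event types, which is why many parallel event types on one emitter would argue for a hash table instead.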
14:36:12  <indutny>Petka: if I got it right - it won't scale
14:36:14  <rendar>hmm, i see
14:36:22  <indutny>I mean most of the time we've only a couple of events
14:36:38  <Petka>the structure is really tight initially and resizes when necessary
14:36:39  <rendar>isn't that O(m) where m=number of clients "connected" to the event?
14:36:39  <indutny>but some people use one eventemitter for communication between different parts of app
14:36:58  <bnoordhuis>even node does that internally in a few places :)
14:37:07  <indutny>yes :)
14:37:23  <indutny>bnoordhuis: so what do you think about sorting epoll_wait() results by fd? :)
14:37:32  <Petka>yes, a huge amount of different event types active at the same time is not good for this structure
14:37:34  <indutny>hahahaha
14:37:45  <indutny>Petka: that was for my last message, not yours
14:37:52  <bnoordhuis>it's a good thing i know you're not serious :)
14:38:01  <bnoordhuis>btw, are tests are not very good at closing fds
14:38:09  <indutny>bnoordhuis: erm?
14:38:10  <bnoordhuis>[% 100|+ 184|- 11|T 0|S 0]: Done. <- that's after adding a file descriptor leak check
14:38:18  <bnoordhuis>err, *our tests
14:38:19  <indutny>bnoordhuis: oh gosh
14:38:25  <indutny>not good
14:38:32  <indutny>it might be tests themselves
14:38:47  <bnoordhuis>not unlikely. but i'll dig in
14:38:56  <indutny>ok
14:39:00  <indutny>see you on other side
14:39:01  <rendar>bnoordhuis, i just skim-read the problem from before; how can recvmsg return an fd (42) with the same number as another fd?
14:39:14  * FROGGSjoined
14:39:17  <Petka>indutny yes bnoordhuis brought up that some use cases have a huge amount of different listener types active at the same time on a single emitter... so I was thinking of detecting that case and just transitioning to a hash table, while still being fast for the usual cases
14:40:12  <indutny>Petka: ok, looking forward to benchmarks ;)
14:40:23  <bnoordhuis>rendar: read more closely :) the issue is that there may be events in the list returned by epoll_wait() that were for the previous fd 42
14:40:26  <rendar>Petka, the current implementation is fast with a bunch of listeners connected to the same event?
14:40:29  <Petka>sure :)
14:40:38  <rendar>bnoordhuis, oh..
14:40:59  <Petka>rendar, yes if you have multiple listeners for the same event then the event emitter has been 3x faster than the current one in node
14:41:10  <bnoordhuis>rendar: it's still to be determined if that is what actually happens
14:41:15  <Petka>the amount of same listeners for some event doesn't affect the data structure performance
14:41:24  <Petka>only different event types
14:41:25  <bnoordhuis>rendar: but it's the only explanation i can come up with that's faintly plausible
14:41:58  <rendar>bnoordhuis, you mean that when fd 42 is closed and instantly removed from the epoll queue, it could still exist in the array that epoll_wait() read from the kernel
14:42:53  <indutny>bnoordhuis: what about adding field to watcher?
14:43:04  <indutny>meh, that won't work out
14:43:06  <indutny>ok
14:43:06  <indutny>gtg
14:43:26  <rendar>Petka, no, what i meant is: if you use an array, then when event 'event' is emitted you have to traverse all the connected clients stored in the array?
14:43:35  <rendar>Petka, ^ this is your implementation right?
14:44:47  <Petka>if event "event" is emitted, it will go through the sentinels until it finds undefined or "event"
14:44:57  <Petka>when it finds a sentinel, the next indices contain possible listeners
14:45:25  * wavdedquit (Ping timeout: 245 seconds)
14:46:52  <Petka>so if you have 10 listeners for the event "event" the array might look like this: ["event", fn, fn, fn, fn, fn, fn, fn, fn, ....]
14:47:20  * wavdedjoined
14:48:05  <bnoordhuis>rendar: yes, correct
14:48:34  <Petka>and yea it's my implementation, just a funny experiment
14:48:35  <bnoordhuis>rendar: and when a new fd 42 comes in through recvmsg(SCM_RIGHTS), well, bad things happen
14:59:41  * abraxasquit (Remote host closed the connection)
15:07:18  * dshaw_joined
15:08:47  * mikealquit (Quit: Leaving.)
15:09:52  * AvianFlujoined
15:10:27  * AvianFlu_joined
15:10:39  <tjfontaine>indutny: here
15:14:36  * AvianFluquit (Ping timeout: 265 seconds)
15:14:59  * defunctzombie_zzchanged nick to defunctzombie
15:20:48  <tjfontaine>indutny: if your "yt" was about the 10.9 issue, do you want me to test or do a libuv release? we can probably do a libuv release without much concern
15:20:53  <MI6>nodejs-master: #636 UNSTABLE smartos-ia32 (5/648) smartos-x64 (8/648) linux-ia32 (1/648) http://jenkins.nodejs.org/job/nodejs-master/636/
15:21:58  * dshaw_quit (Quit: Leaving.)
15:25:42  * AvianFlu_changed nick to AvianFlu
15:27:40  * dominictarrjoined
15:30:48  <isaacs>bnoordhuis: 11 tess are leaking fd's?
15:31:10  <isaacs>bnoordhuis: is that in libuv, or node? it's not very many tests.
15:31:22  <bnoordhuis>libuv
15:31:59  <bnoordhuis>i'm going over them one by one. so far it's just lack of cleanup in the test, not a real leak in libuv
15:32:14  <bnoordhuis>but i still have 5 or 6 to go
15:32:25  * defunctzombiechanged nick to defunctzombie_zz
15:50:44  <isaacs>kewl
15:56:27  * kenansulaymanjoined
15:58:02  * c4milojoined
15:58:09  <tjfontaine>isaacs: thanks for filing #6402
16:16:33  <isaacs>np
16:21:53  * dshaw_joined
16:22:29  * dlwjoined
16:23:47  * dlwquit (Client Quit)
16:30:42  * abraxasjoined
16:33:24  * mikealjoined
16:39:02  <indutny>tjfontaine: yeah, it would be cool
16:39:11  * bnoordhuisquit (Ping timeout: 245 seconds)
16:45:12  * piscisaureus_joined
16:45:19  <piscisaureus_>hello
16:45:51  * Ralithquit (Ping timeout: 245 seconds)
16:50:07  <indutny>hello
16:50:11  <indutny>piscisaureus_: how is beer?
16:50:17  <indutny>:)
16:50:25  <piscisaureus_>indutny: I'm in california
16:50:28  <indutny>oh
16:50:30  <indutny>awful then
16:50:30  <piscisaureus_>it's a bit early for beer now :)
16:50:36  <piscisaureus_>not too early though
16:50:47  * inolenquit (Quit: Leaving.)
16:50:51  <indutny>are you on some conference?
16:50:58  <indutny>or just staying?
16:51:02  <piscisaureus_>indutny: no all-hands
16:51:12  <tjfontaine>piscisaureus_: it's always after noon somewhere
16:51:18  <rendar>piscisaureus_, cool, palo alto?
16:51:19  <rendar>:)
16:51:26  <piscisaureus_>rendar: no san mateo atm
16:51:43  <rendar>:-)
16:56:11  * paulfryzeljoined
16:59:41  * kazuponquit (Remote host closed the connection)
17:03:24  * mikealquit (Quit: Leaving.)
17:06:29  <piscisaureus_>trevnorris: hey, why is the async listener unload callback not called when the async listener throws?
17:11:15  * AvianFluquit (Remote host closed the connection)
17:19:52  * AvianFlujoined
17:25:32  * octetcloudjoined
17:26:23  * Ralithjoined
17:33:35  * dominictarrquit (Quit: dominictarr)
17:40:00  * rendarquit (Ping timeout: 245 seconds)
17:40:09  * c4miloquit (Remote host closed the connection)
17:42:11  * TooTallNatejoined
17:43:48  * bnoordhuisjoined
17:45:07  <trevnorris>piscisaureus_: i'm still working on throwing behavior. we had discussion about it yesterday and something's going to change.
17:45:16  <trevnorris>i'm working on that now
17:46:42  <groundwater_>trevnorris: ping
17:46:47  <trevnorris>groundwater_: pong
17:47:05  <groundwater_>i found why the tests were failing off your latest commit, i'm just commenting in github now
17:47:16  <groundwater_>i think it's just a small fix
17:47:21  * c4milojoined
17:47:36  <trevnorris>don't bother w/ my latest commit. still working out a bug
17:48:06  <groundwater_>ahh, well i commented anyways
17:48:07  <trevnorris>if the "error" callback throws, it happens synchronously, so whatever called the "error" callback should be in the call stack
17:48:09  <trevnorris>but it's not
17:48:43  * bnoordhuisquit (Ping timeout: 248 seconds)
17:49:46  <tjfontaine>TooTallNate: what are your thoughts on the node-weak in tree modifications, are you ok with us floating for now until you decide to add some #ifdef's or nan it?
17:51:20  <groundwater_>trevnorris: what did you and othiym23 decide on for throwing in an error? i don't think i have any tests around the call stacks yet
17:51:54  * inolenjoined
17:52:37  <trevnorris>groundwater_: we're going to throw from the error callback, so it's easy to see where it threw. but the call stack _should_ also contain the function that originally caused the error callback to fire.
17:52:44  <trevnorris>which it isn't. so i'm working on that.
17:53:12  <trevnorris>with that call stack you'll be able to debug two things at once. why your error callback threw, and what caused it to fire in the first place.
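[editor's note: the "debug two things at once" stack trevnorris describes can be sketched in userland: capture a stack when the async callback is scheduled, then append it to whatever the callback throws. Illustrative only; the names are made up and this is not node's implementation.]

```javascript
// Wrap a callback so that, if it throws, the error's stack also carries
// the stack from the moment the callback was scheduled.
function makeWrappedCallback(fn) {
  const scheduled = new Error('scheduled here'); // captures the schedule-site stack
  return function wrapped(...args) {
    try {
      return fn(...args);
    } catch (err) {
      err.stack += '\n-- fired because --\n' + scheduled.stack;
      throw err;
    }
  };
}

// Pretend the event loop invokes this later (e.g. via setImmediate):
const cb = makeWrappedCallback(() => { throw new Error('boom'); });

let combined = '';
try { cb(); } catch (err) { combined = err.stack; }
// `combined` now contains both the throw site ('boom') and the schedule site.
```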
17:53:20  <groundwater_>trevnorris: okay awesome, i'll fill in a test when that's working
17:53:23  <MI6>libuv-master: #300 UNSTABLE windows (3/196) smartos (2/195) http://jenkins.nodejs.org/job/libuv-master/300/
17:54:07  <TooTallNate>tjfontaine: floating sounds fine to me
17:54:48  <TooTallNate>tjfontaine: i believe there's already some patches being floated previously :p
17:55:04  <tjfontaine>heh ok, just wanted to make sure that was ok :)
17:55:33  <tjfontaine>trevnorris going to land the v8 upgrade, so you can play with your eternals in 0.11
17:56:03  <trevnorris>tjfontaine: thanks. i'm excited to try those out and see what performance improvements they offer.
17:56:06  <MI6>joyent/node: Ben Noordhuis master * 0079e57 : test: fix up weakref.cc after v8 api change (+3 more commits) - http://git.io/Nswvew
17:57:21  <trevnorris>tjfontaine: wow, not a single conflict. i'm impressed.
17:57:36  <tjfontaine>it's pretty low touch for our code
17:57:41  <tjfontaine>I do have a follow up question for you though
17:57:47  <trevnorris>shoot
17:57:56  <tjfontaine>the array buffer allocator, there is now Allocate and AllocateUnitialized
17:58:10  <tjfontaine>in our implementation for Allocate we were not memset'ing
17:58:29  <trevnorris>you mean for ArrayBuffer?
17:58:36  <tjfontaine>yes, ArrayBuffer::Allocator
17:59:18  <trevnorris>we should be. that's why they introduced the ArrayBuffer::AllocateUninitialized() api
17:59:43  <trevnorris>but maybe we don't care. i'll trust ben's decision on implementation over mine.
17:59:57  <trevnorris>anyway. sorry, your question?
17:59:58  <tjfontaine>heh, well I originally asked isaacs and he deferred to you :)
18:00:02  <tjfontaine>that's the question
18:00:10  <groundwater_>trevnorris: how are you appending the stack traces together? string concat?
18:00:15  * brsonjoined
18:00:21  * kazuponjoined
18:00:23  <tjfontaine>in our previous version we were *not* memset'ing, but now there are two paths, should we be memset'ing to match what people expect or do we not care
18:00:55  <trevnorris>groundwater_: we shouldn't have to. if the stack traces worked properly they should show everything from the last asynchronous event. which it's not doing right now.
18:01:07  <trevnorris>groundwater_: so there's a bug in how our stack trace is being generated.
18:01:58  <trevnorris>tjfontaine: we should take a look and see the implementation details. I don't even understand when AllocateUninitialized would be called. i.e. if the programmer has control over that.
18:03:40  <tjfontaine>that should be knowable
18:03:47  <trevnorris>nod
18:04:42  <trevnorris>groundwater_: check this: https://gist.github.com/trevnorris/7123545
18:05:04  <groundwater_>trevnorris: cool thanks
18:05:05  <trevnorris>groundwater_: see how the originating point of the error is _fatalException. it _should_ be setImmediate callback.
18:05:11  * hzjoined
18:05:20  <tjfontaine>SetupArrayBufferAllocatingData if (allocated_length != 0) and if (initialize) is false
18:06:00  <trevnorris>tjfontaine: awesome. that was fast. so how is "initialized" set?
18:06:17  * superjoejoined
18:06:19  <tjfontaine>gonna link you to source on it, just found it
18:06:22  <tjfontaine>it has a block comment :)
18:06:31  <trevnorris>thanks :)
18:06:34  * hzquit (Client Quit)
18:06:54  <tjfontaine>https://github.com/joyent/node/blob/master/deps/v8/src/runtime.cc#L957-L964
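[editor's note: the Allocate vs. AllocateUninitialized question is the zeroed-vs-uninitialized memory contract. Modern node exposes the same split in its Buffer API (Buffer.alloc/allocUnsafe postdate this conversation, but make the distinction concrete):]

```javascript
// Buffer.alloc zero-fills (the memset'd path); Buffer.allocUnsafe may
// hand back old, uninitialized memory, so the caller must fill it
// before exposing it anywhere security-sensitive.
const zeroed = Buffer.alloc(8);       // guaranteed all zero bytes
const unsafe = Buffer.allocUnsafe(8); // contents are unspecified
unsafe.fill(0);                       // caller "memsets" if it matters

const allZero = zeroed.every((b) => b === 0); // true
```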
18:07:14  <superjoe>hopefully this channel will forgive a v8 question... I'm calling object->Get and getting Undefined when I expected to get an object instance which I ->Set earlier
18:07:23  <superjoe>is this a commonly understood phenomenon?
18:07:37  <tjfontaine>superjoe: gist?
18:08:11  <superjoe>hmm good point I should put together a minimal test case
18:08:37  * kazuponquit (Ping timeout: 248 seconds)
18:08:44  <tjfontaine>please and thank you
18:09:01  <groundwater_>trevnorris: doesn't the stack unwind because of the first thrown exception?
18:09:09  <MI6>nodejs-master-windows: #429 FAILURE http://jenkins.nodejs.org/job/nodejs-master-windows/429/
18:09:10  <groundwater_>error handlers are called from within _fatalException
18:09:35  <tjfontaine>sigh. windows...
18:09:46  <trevnorris>groundwater_: it all happens synchronously. so there's no technical reason why it has to unwind.
18:09:50  * bnoordhuisjoined
18:10:28  * bajtosjoined
18:10:35  <tjfontaine>bnoordhuis: hey, how goes?
18:10:47  * c4miloquit (Remote host closed the connection)
18:10:51  <groundwater_>trevnorris: hmm… maybe i'm not familiar with how _fatalException works, but I would expect the stack to unwind because the exception propagates all the way up
18:10:56  * dshaw_quit (Quit: Leaving.)
18:12:11  <groundwater_>i thought _fatalException was kind of like a top-level 'catch' but done in a much more clever way
18:12:12  <trevnorris>tjfontaine: thanks for pulling all that out. right now i'm going to trust ben's implementation, but i'll have a look when there are brain cells to spare.
18:12:27  <tjfontaine>well technically it was me masquerading as ben :)
18:12:39  <trevnorris>hehe, ok
18:13:04  <tjfontaine>so the implementation as it is right now is just "make it compile but keep roughly the same behavior"
18:13:16  <tjfontaine>with the follow up of: "is this really what we meant"
18:14:08  <tjfontaine>as it is right now the build is broken on windows -- because ... MSVC hates my freedom
18:15:10  <groundwater_>trevnorris: if i slap a debugger statement in the middle of _fatalException I get: #0 node.js:457:11
18:15:10  <groundwater_>#1 process._fatalException node.js:225:20
18:15:50  <trevnorris>imo that's a problem. we should be getting the stack that caused _fatalException to be called.
18:16:09  <trevnorris>it's a lot less useful w/o it
18:16:24  <groundwater_>oh agreed, but I'm not sure how, that's v8 voodoo
18:16:51  <tjfontaine>g:\jenkins\workspace\nodejs-master-windows\eec653f3\deps\v8\include\v8.h(477): error C2440: '=' : cannot convert from 'v8::Primitive *' to 'v8::Object *volatile ' [g:\jenkins\workspace\nodejs-master-windows\eec653f3\node.vcxproj]
18:16:52  <groundwater_>because isn't the call stack captured explicitly by 'new Error()'
18:16:55  <tjfontaine> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
18:16:58  <tjfontaine> g:\jenkins\workspace\nodejs-master-windows\eec653f3\deps\v8\include\v8.h(473) : see reference to function template instantiation 'void v8::NonCopyablePersistentTraits<T>::Uncompilable<v8::Object>(void)' being compiled
18:17:16  <isaacs>tjfontaine: i spoke too soon. yes, it is related to ssl, afaict
18:17:23  <groundwater_>so, if you choose to throw something without a stack trace, you're SOL
18:17:30  * isaacswas getting confused because the test was too fancy
18:18:26  <tjfontaine>isaacs: ok, alright -- I felt like we would be seeing all sorts of problems if this was general fs issue, but I was deferring to your experience on it
18:18:41  <isaacs>tjfontaine: that'll teach you to do that
18:18:43  <isaacs>;)
18:18:56  <tjfontaine>heh
18:19:15  <tjfontaine>piscisaureus_: ping?
18:19:49  <MI6>libuv-node-integration: #285 FAILURE http://jenkins.nodejs.org/job/libuv-node-integration/285/
18:20:14  * mikealjoined
18:20:51  <trevnorris>groundwater_: regardless of the implementation details, imo the following stack trace should start with the callback called by setImmediate: https://gist.github.com/trevnorris/7123545
18:20:58  <trevnorris>and, I just got an idea of how to do this.
18:21:03  <trevnorris>one moment.
18:21:22  <isaacs>tjfontaine: the weird thing is that it's specifically a SSL->FS issue, which is super strange.
18:21:30  <isaacs>tjfontaine: ssl->synthetic works fine, synthetic->fs works fine
18:21:46  <groundwater_>trevnorris: +1
18:21:55  <tjfontaine>isaacs: hrm
18:23:06  <MI6>nodejs-master: #637 UNSTABLE osx-x64 (1/648) smartos-ia32 (12/648) smartos-x64 (12/648) linux-ia32 (1/648) http://jenkins.nodejs.org/job/nodejs-master/637/
18:27:00  <bnoordhuis>tjfontaine: hey, you rang?
18:27:27  <tjfontaine>bnoordhuis: ya, unfortunately the v8 upgrade broke on windows, I'm trying to decipher the msvc compiler error
18:27:52  <bnoordhuis>is that the error you posted some 20 lines up?
18:28:22  <tjfontaine>yes
18:28:30  <tjfontaine>full log http://jenkins.nodejs.org/job/nodejs-master-windows/429/DESTCPU=ia32,label=windows/consoleFull
18:28:36  <bnoordhuis>thanks
18:28:50  <tjfontaine>I'm wondering if there's just a gyp flag we've missed
18:30:16  <superjoe>tjfontaine, d'oh - I typo'd and used args[0] instead of args.This(). thank you for telling me to create a test case
18:30:16  <bnoordhuis>it's v8's TYPE_CHECK macro msvc is complaining about
18:30:29  * dominictarrjoined
18:30:32  <tjfontaine>superjoe: no problem
18:30:53  <tjfontaine>bnoordhuis: it's not related to the non copyable traits stuff?
18:31:20  <bnoordhuis>yes, NonCopyablePersistentTraits triggers it
18:31:53  <tjfontaine>just confused how they're not seeing this, or if they're only on 2012 or something
18:31:53  * dominictarrquit (Client Quit)
18:32:02  <tjfontaine>where they is v8
18:32:56  <bnoordhuis>hrm, i guess that's possible
18:33:19  <tjfontaine>or if there's some magic flag to msvc we're not triggering
18:33:21  <bnoordhuis>i wonder if we're using a persistent wrong in some windows-only code
18:33:47  <tjfontaine>this is just compiling v8 itself right now, isn't it?
18:33:56  <tjfontaine>regexp-macro-assembler-ia32.cc
18:34:12  <tjfontaine>ah no
18:34:13  <tjfontaine>ok
18:34:15  <isaacs>ah, ok, reproduced wth just ssl. back to this being strictly an ssl bug.
18:34:21  <isaacs>that's comforting :)
18:34:34  <tjfontaine>isaacs: heh for one level of comfort :)
18:34:47  <isaacs>tjfontaine: well, it means that it's not likely to exist in 0.8 or 0.10
18:35:01  <isaacs>and it's not (afaict) specifically a streams2 or streams3 bug
18:35:06  <tjfontaine>right, I'm fairly sure it's just tls_wrap
18:35:14  <isaacs>the magic number is 32768 bytes.
18:35:36  <isaacs>if you pipe through more than 32768 bytes, but not a round multiple of 1024, then it will drop the last chunk
18:35:36  <bnoordhuis>tjfontaine: looking at the log, it looks like ObjectWrap is the culprit
18:35:44  <isaacs>32769 is the smallest value that fails.
18:35:58  <tjfontaine>bnoordhuis: hrm, I guess we're still #include'ing but not using it?
18:36:24  <tjfontaine>isaacs: ah interesting
18:36:46  <tjfontaine>bnoordhuis: Explicitly instantiate some template classes, so we're sure they will be <-- that stuff maybe?
18:38:10  <bnoordhuis>it's possible. i never understood why msvc needs that in the first place
18:38:27  <MI6>nodejs-master-windows: #430 FAILURE http://jenkins.nodejs.org/job/nodejs-master-windows/430/
18:38:33  <bnoordhuis>it's not liking the Persistents in src/env.h either for some reason
18:39:12  * dshaw_joined
18:41:42  <bnoordhuis>tjfontaine: i give up, i don't understand why msvc is complaining. maybe have sblom look at it or else revert the upgrade
18:42:00  <tjfontaine>ya, I'm going to ping him and see if he has some cycles for it
18:43:44  <isaacs>indutny: ping
18:48:33  <trevnorris>is it v8 that generates the <script>:<line> <code> message when an error is thrown, or do we do that?
18:49:17  <tjfontaine>if I understand your question, it's v8
18:49:33  * paulfryz_joined
18:50:11  * octetcloudquit (Ping timeout: 245 seconds)
18:50:35  <piscisaureus_>trevnorris: do you have time to talk to me?
18:50:38  <trevnorris>tjfontaine: thanks. I almost have this operating how I'd like, except that's reporting the incorrect location.
18:50:40  <trevnorris>piscisaureus_: sure
18:50:48  <piscisaureus_>trevnorris: on skype or something?
18:51:03  <piscisaureus_>or hangouts?
18:51:06  <tjfontaine>does piscisaureus_ have time to help debug msvc? :)
18:51:29  <trevnorris>sure, just give me a minute. my usb headset hasn't been working since my pulseaudio update.
18:51:50  <piscisaureus_>tjfontaine: I submitted a PR to redmond
18:52:15  * paulfryzelquit (Ping timeout: 240 seconds)
18:53:13  <tjfontaine>heh
18:56:18  <TooTallNate>guys, there seems to be a problem with process.title...
18:56:19  <TooTallNate>https://cloudup.com/cIKf2lX9DNQ
18:56:35  <trevnorris>piscisaureus_: ok, think I got my headset working.
18:56:49  <TooTallNate>my initial theory is that our method of setting the process name marks the process as a GUI app in Activity Monitor
18:57:10  <tjfontaine>oh is that why the not-responding thing?
18:57:16  <tjfontaine>nice find TooTallNate
18:57:29  <TooTallNate>tjfontaine: yes, that thing :p
18:57:32  <TooTallNate>people have been pissed
18:57:33  <TooTallNate>hahah
18:58:23  <tjfontaine>silly
18:58:23  * amartensjoined
18:58:42  <tjfontaine>but yes, it seems likely this is a byproduct of the hacks necessary to do set_title on osx
18:59:23  <TooTallNate>tjfontaine: fixable you think, or no?
18:59:47  <piscisaureus_>trevnorris: if it's not practical for you right now then we can do it later today
19:00:08  * pachetquit (Quit: leaving)
19:00:10  <tjfontaine>TooTallNate: it may be, we just need to figure out which part of the loop they expect us to be ACKing
19:00:17  <trevnorris>piscisaureus_: it works now. my headset is just being stupid. xfce isn't recognizing pulseaudio settings.
19:00:24  <trevnorris>but nevermind it. let's do this now.
19:00:28  <piscisaureus_>cool!
19:00:51  <trevnorris>just give me a call on hangouts
19:00:57  <trevnorris>i don't have skype setup on this machine yet
19:01:49  <piscisaureus_>trevnorris: https://plus.google.com/hangouts/_/179d022d437452998d9a4518601662d2fc9d6c53?hl=en
19:04:01  <isaacs>hmmm....
19:04:13  <isaacs>so, it's not strictly a tls issue, but a https issue.
19:04:18  <isaacs>the plot thickens!
19:05:28  * kazuponjoined
19:06:46  <tjfontaine>dun dun dun
19:08:24  <TooTallNate>tjfontaine: make a ticket :D https://github.com/joyent/libuv/issues/966
19:08:51  <tjfontaine>TooTallNate: perfect.
19:08:59  <tjfontaine>TooTallNate: maybe assign it to indutny :P
19:09:52  * kazuponquit (Ping timeout: 246 seconds)
19:10:07  <TooTallNate>tjfontaine: i don't have the priv's
19:10:14  <TooTallNate>so feel free to assign
19:10:25  <tjfontaine>ah, /cc @indutny :)
19:10:42  <tjfontaine>brb lunch
19:12:13  * dominictarr_joined
19:28:19  * inolenquit (Quit: Leaving.)
19:28:26  * c4milojoined
19:32:19  * paulfryz_quit (Remote host closed the connection)
19:36:31  <bnoordhuis>so, is there a cross-platform way to get the thread id that's not pthread_self()?
19:36:41  <bnoordhuis>it's a rhetorical question, there isn't :-(
19:37:08  <bnoordhuis>i need to get hold of the thread id in a signal handler somehow and pthread_self() is not guaranteed async signal-safe
19:37:16  <bnoordhuis>signals... they complicate everything
19:38:19  <bnoordhuis>fun fact, no pthread functions are async signal-safe - except sem_post() (which of course is not a pthread function, strictly speaking)
19:44:30  * stagasjoined
19:44:36  * stagasquit (Client Quit)
19:55:19  * bajtosquit (Quit: bajtos)
19:58:45  <isaacs>indutny: ignore my ping before.
19:58:50  <isaacs>man, this is FASCINATING
19:58:54  <isaacs>and this bug is a 1-line fix
19:59:05  <isaacs>so crazy!
20:02:42  <isaacs>now to remove the several hundred console.trace lines i added everywhere...
20:04:14  <superjoe>haha
20:04:38  <superjoe>it's so satisfying to do that once you've located the problem
20:06:02  * kazuponjoined
20:08:43  * FROGGSquit (Quit: Verlassend)
20:09:39  * bajtosjoined
20:10:30  * kazuponquit (Ping timeout: 252 seconds)
20:16:28  * julianduquejoined
20:18:37  * dshaw_quit (Quit: Leaving.)
20:20:18  * octetcloudjoined
20:23:38  <isaacs>bnoordhuis: I'm seeing sporadic failures on test/simple/test-debug-signal-cluster.js, git says you're the last person to touch it
20:23:49  <indutny>isaacs: pong
20:23:52  <indutny>isaacs: ignoring
20:24:10  <indutny>sup?
20:26:16  <indutny>bnoordhuis: yt?
20:26:22  <indutny>I've a fix for you to give a try :)
20:30:07  <bnoordhuis>isaacs: platform? what kind of failures?
20:30:27  <bnoordhuis>indutny: what fix for what bug?
20:30:31  <indutny>stale fd
20:30:36  <indutny>actually
20:30:39  <indutny>stale events for new fd
20:30:53  <indutny>whatever
20:30:55  <indutny>bnoordhuis: https://github.com/joyent/libuv/pull/968
20:30:57  <indutny>nice and elegant
20:31:02  <bnoordhuis>can you give me the executive summary?
20:31:39  <indutny>I'm just checking if loop->watchers[fd] is the same as it was before ->cb()
20:31:42  <indutny>if its not
20:31:48  <indutny>I'm setting fd to -1 in all next events
20:31:50  <indutny>with the same fd
20:32:04  <indutny>bnoordhuis: is it nice?
20:32:12  <bnoordhuis>i don't think that's always a safe assumption to make
20:32:23  * paulfryzeljoined
20:32:24  <indutny>oh, it might be not catching some stuff, actually
20:32:28  <indutny>ergh
20:32:40  <indutny>ok, I've another fix then
20:32:49  <indutny>get all watchers ahead of time
20:32:58  <indutny>and check if they're matching before doing ->cb() call
20:33:09  <bnoordhuis>not sure what you mean by that
20:33:13  <indutny>well
20:33:26  <indutny>we've this: struct uv__epoll_event events[1024];
20:33:34  <indutny>lets add uv__io_t* watchers[1024];
20:33:44  <bnoordhuis>and then?
20:33:44  <indutny>and right after epoll_wait()
20:33:46  <indutny>a loop
20:33:54  <indutny>that'll fill watchers[i] for each events[i]
20:34:11  <indutny>then, in the existing loop, check if watchers[i] is the same as loop->watchers[fd]
20:34:17  <indutny>if it is not - ignore event
20:34:31  <indutny>oh
20:34:35  <indutny>won't work either
20:34:40  * jmar777quit (Remote host closed the connection)
20:34:43  <bnoordhuis>yeah, but what if watchers[i] == loop->watchers[fd] but it's not actually the same fd?
20:34:44  <indutny>watchers might have the same address
20:34:49  <bnoordhuis>exactly :)
20:35:02  <indutny>oh, stupid C
20:35:07  <indutny>you can't compare objects :)
20:35:17  * jmar777joined
20:35:17  <indutny>anyway
20:35:25  <indutny>I need to think about it
20:35:48  <bnoordhuis>yeah, me too. have you been able to reproduce it by the way?
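[editor's note: one strategy that avoids both pitfalls discussed above is to invalidate pending events at close time rather than compare watcher pointers: when an fd is closed mid-pass, scrub any not-yet-processed events that still name it. A toy JavaScript simulation of that idea, reusing the earlier model; not libuv code.]

```javascript
// When a callback closes an fd, invalidate remaining queued events for
// that fd so a reused descriptor can't receive them.
function processEvents(events, watchers, log) {
  const invalidateFd = (fd) => {
    for (const ev of events) if (ev.fd === fd) ev.fd = -1;
  };
  for (const ev of events) {
    if (ev.fd === -1) continue; // invalidated: skip
    const w = watchers[ev.fd];
    if (w) w(ev, invalidateFd);
  }
}

const log = [];
const watchers = {};
watchers[42] = (ev, invalidateFd) => {
  delete watchers[42];
  invalidateFd(42); // close -> scrub pending events for fd 42
  log.push('closed old fd 42');
};
watchers[7] = () => {
  watchers[42] = () => log.push('stale event on new fd 42'); // never runs
  log.push('accepted new fd 42');
};

processEvents([{ fd: 42 }, { fd: 7 }, { fd: 42 }], watchers, log);
// log: ['closed old fd 42', 'accepted new fd 42']; the stale event is dropped.
```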
20:37:02  <isaacs>bnoordhuis: os x
20:37:09  <isaacs>bnoordhuis: just ned to run it a few times
20:38:06  <isaacs>bnoordhuis: https://gist.github.com/isaacs/7126246
20:40:09  * jmar777quit (Ping timeout: 268 seconds)
20:40:19  <bnoordhuis>tjfontaine: when i open a libuv PR with multiple commits, does it test every commit or just all commits?
20:40:59  <bnoordhuis>isaacs: oh, right. i fixed that a while ago in that i made the test less susceptible to timing issues
20:41:51  <bnoordhuis>isaacs: but less != not at all, of course. it's still a bit buggy
20:42:17  <isaacs>yeah
20:42:32  <trevnorris>does anyone have an idea how to customize/change the following output to the console from an existing error: /tmp/test2.js:10
20:42:33  <trevnorris> throw new Error('setImmediate');
20:42:35  <indutny>isaacs: while you're there
20:42:37  <bnoordhuis>tjfontaine: oh nvm, jenkins truncated the build output. i see it's building them in one go
20:42:39  <indutny>isaacs: do you have a minute?
20:42:47  <isaacs>indutny: sure
20:42:49  <trevnorris>I need it to point at another location
20:42:55  <indutny>isaacs: I think I found an npm bug
20:43:09  <isaacs>indutny: kewl!
20:43:39  <indutny>isaacs: basically, when a full url to a package is used in npm-shrinkwrap.json and it's distinct from the main registry, the protocol from the main registry could be used to get the package
20:43:41  <indutny>i.e.
20:43:49  <indutny>registry=https://registry.npmjs.org/
20:43:50  <indutny>thus
20:43:53  <indutny>protocol=https://
20:44:04  <isaacs>no, protocol=https:
20:44:08  <indutny>package url is = http://npm.local:5984/tar.jz
20:44:12  <indutny>doesn't matter
20:44:14  <isaacs>k
20:44:25  <indutny>and it'll be requested from https://npm.local:5984/tar.jz
20:44:27  <isaacs>right, and it forcibly makes it https?
20:44:29  <isaacs>right
20:44:30  <indutny>yes
20:44:58  <isaacs>i seem to recall encountering this when exploring moving tarballs to manta
20:45:11  <isaacs>indutny: what version of npm?
20:45:35  <trevnorris>that c/p was bad, so again:
20:45:35  <trevnorris>/tmp/test2.js:10
20:45:35  <trevnorris> throw new Error('setImmediate');
20:45:35  <trevnorris> ^
20:45:36  <trevnorris>I'm able to change the message and the stack trace itself, which is good, but I can't manage to change that type of output. any ideas?
20:46:15  <trevnorris>or is that generated by v8 and I have no control over it?
20:46:35  <isaacs>trevnorris: it's generated by v8, and we print it out from C-land
20:46:37  <indutny>isaacs: all versions
20:46:40  <indutny>:)
20:46:47  <trevnorris>isaacs: thanks.
20:46:48  <tjfontaine>bnoordhuis: jenkins as it turns out is not a CI that cares about each commit :)
20:46:54  <indutny>isaacs: thank you
20:47:27  <isaacs>indutny: ok. well.. yes, it's a bug, and it'll have to be fixed eventually. but the fix must not cause always-auth users to send their creds through a plaintext channel.
20:47:36  <isaacs>so that's the tricky bit
20:47:38  <indutny>haha
20:47:58  <isaacs>tjfontaine: https://github.com/joyent/node/pull/6404
20:47:59  <indutny>from looking at code - I believe that you've a shared variable called protocol
20:48:08  <tjfontaine>isaacs: ya I saw
20:48:08  <indutny>which is cached
20:48:16  <isaacs>indutny: ok
20:48:22  <indutny>isaacs: thank you, anyway
20:48:33  <indutny>right now, I recommended all those people to use their local registry
20:48:40  <indutny>and it seems to be working
20:48:46  <indutny>so there's definitely no rush
20:49:13  <bnoordhuis>tjfontaine: another question: i added a fd leak check to libuv's test runner and now all the tests on jenkins are failing with "Open file descriptor 11 of type file."
20:49:32  <bnoordhuis>tjfontaine: that's on os x. it's fd 10 on linux. any idea what that could be? doesn't happen for me locally
20:50:18  * dshaw_joined
20:50:33  <isaacs>indutny: yeah, it's something i've been kind of kicking down the road a bit further as we get 0.12 finished
20:50:44  <tjfontaine>bnoordhuis: hm, I'm not sure without actually digging in I guess
20:51:02  <bnoordhuis>tjfontaine: okay, i'll poke at it a bit more then
20:51:24  <tjfontaine>bnoordhuis: the jenkins slaves are run in a screen, the slaves fork into node, which then ultimately cp.spawn()'s
20:51:53  <bnoordhuis>hrm, okay. i'll try that
20:52:10  <tjfontaine>bnoordhuis: jenkins tries to do the fd leak check stuff as well
20:52:11  * brsonquit (Ping timeout: 248 seconds)
20:52:23  <tjfontaine>so I am not sure what all it's doing for that
20:53:01  <bnoordhuis>okay
20:54:15  <bnoordhuis>i guess some of the os x failures are legitimate, i can reproduce them on my macbook
20:54:58  <tjfontaine>isaacs: that test case is pretty excellent
20:56:14  <trevnorris>bnoordhuis: w/ multi-context support, if any context throws will that kill all contexts?
20:56:29  <isaacs>bnoordhuis: btw, this is an example of why i have a problem with strongloop's approach of using sloc as a measure of impact and relevance: https://github.com/isaacs/node/commit/f153d6da450b6ba5c78381e6e90a7ba243657691
20:56:44  <isaacs>bnoordhuis: discounting tests, comments, and whitespace, that commit is literally a single line.
20:58:11  <bnoordhuis>trevnorris: no
20:58:39  <bnoordhuis>isaacs: then what would you suggest as an alternative metric? :)
20:58:59  <bnoordhuis>i agree lines of code is not a great way to measure things
20:59:00  <isaacs>bnoordhuis: well, i've said in the past, SLOC is a great metric if you cast the number to a boolean.
20:59:09  <isaacs>lose some granularity that way, though :)
20:59:23  <trevnorris>bnoordhuis: ok. so isn't it a problem that node::DisplayExceptionLine() has a static to prevent re-execution?
21:00:54  <isaacs>bnoordhuis: the problem is that basically any more granular metric than "Did this person contribute, or not?" is worse than no data at all, because it pretends to show something that in fact it does not. qualitative measurements of impact are usually very good, but of course, very hard to communicate objectively.
21:01:40  <isaacs>bnoordhuis: so you and i and tjfontaine and trevnorris all know roughly what one another do, and what areas we're impacting, and how relevant those areas are. but you can't make a pie chart of that.
21:01:48  <bnoordhuis>trevnorris: that would be a problem, yes :)
21:01:57  <trevnorris>bnoordhuis: ok. noted. :)
21:02:54  <bnoordhuis>isaacs: i don't disagree. still, i don't know what else would work that you can distill into an infographic
21:02:58  <isaacs>bnoordhuis: anyway, i'll stop complaining about it. just came to mind when i realized that this super tricky bug turned out to be a 14 character fix
21:03:14  <trevnorris>bnoordhuis: i'll fix it as soon as i'm done figuring out this dumb stack trace issue.
21:03:31  <trevnorris>isaacs: that's what made you think of it? what about all the loc that upgrading v8 gives? :P
21:03:47  <isaacs>bnoordhuis: when data can't be distilled into an infographic in a way that preserves the important aspects of its implications, perhaps an infographic is not the best way to communicate that information.
21:03:49  <bnoordhuis>maybe pull requests reviewed or something?
21:04:10  <bnoordhuis>isaacs: try telling that to the marketing types :)
21:04:15  <indutny>bnoordhuis: I like turning your words against you :)
21:04:19  <isaacs>bnoordhuis: so often i try telling them! :)
21:04:23  <indutny>that makes me feel happy in the end of the day
21:05:17  <isaacs>also, if people know that they're being measured, then the mere existence of metrics tends to skew behavior of the people being measured, even if they are not conscious of the effect.
21:05:29  <bnoordhuis>yeah, i guess that's true
21:05:37  * bnoordhuisupgrades v8 again
21:05:38  <isaacs>there IS some good science that's been done on this subject.
21:05:41  <isaacs>but not by marketing types.
21:06:34  <isaacs>several papers have been written and peer-reviewed, on studies showing that one of the surest ways to fuck up your project is to track the lines of code that each contributor writes.
21:06:36  * kazuponjoined
21:06:52  <isaacs>even if we don't think we're going to get competitive, we will. humans are tricky, especially if you are one.
21:07:56  <bnoordhuis>that's why i don't track lines of code
21:08:03  <bnoordhuis>number of commits, that's all that counts!
21:08:15  * bnoordhuiscommits another whitespace fix
21:14:59  <bnoordhuis>apparently we're sometimes leaking a kqueue file descriptor on os x
21:15:18  <bnoordhuis>but in the process_title test..? that one doesn't even use an event loop
21:15:34  <bnoordhuis>i'm betting it's something in CoreServices
21:16:28  <bnoordhuis>"Open file descriptor 11 of type Operation not supported on socket." <- i smiled at that one :)
21:16:55  * kazuponquit (Ping timeout: 246 seconds)
21:18:38  <trevnorris>i'm wasting a lot of time on what feels should be a simple fix. anyone know if it's possible to catch an error in a try w/o actually catching it?
21:19:02  <trevnorris>i need to change the .message on the error before it gets printed
21:20:04  <bnoordhuis>trevnorris: try { f(); } catch (e) { e.message = 'BAM!'; throw e; } ?
21:20:23  <bnoordhuis>though why you would want to do that...
21:20:30  <trevnorris>bnoordhuis: but then the message shows the thrown location there, instead of where it was thrown in f();
21:21:12  <bnoordhuis>nuh uh. the stack trace is created at the place where new Error() is called
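bnoordhuis's point can be checked directly - a minimal sketch (function name and messages invented) showing that the trace keeps pointing at the construction site even after the message is rewritten:

```javascript
'use strict';

function f() {
  // V8 captures the stack trace here, when the Error is constructed.
  throw new Error('original');
}

let caught;
try {
  f();
} catch (e) {
  e.message = 'BAM!'; // mutating the message...
  caught = e;         // ...does not move where the stack points
}

console.log(caught.message);               // BAM!
console.log(/at f \(/.test(caught.stack)); // true: trace still starts in f()
```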
21:21:37  <trevnorris>yeah, the stack trace is. but the message of the form:
21:21:38  <trevnorris>/tmp/test2.js:5
21:21:38  <trevnorris> throw new Error('onError');
21:21:38  <trevnorris> ^
21:21:38  <trevnorris>isn't
21:21:48  <tjfontaine>new Error?
21:22:25  <trevnorris>i'm talking about the console output where there's the ^ and it points to the line of code that actually threw.
21:22:44  <trevnorris>that's printed out by node::()
21:22:48  <trevnorris>DisplayExceptionLine
21:22:59  <bnoordhuis>i'm curious now what you're trying to do
21:23:37  <trevnorris>there's a case where an error callback in async listeners can throw. but in case that happens the beginning of the call stack is _fatalException.
21:23:53  <trevnorris>I'd like to include the stack trace that led to the _fatalException being called anyways.
21:24:13  <bnoordhuis>why does it start there and not in the callback?
21:24:38  <bnoordhuis>kid's awake, biab
21:24:42  <trevnorris>kk
21:25:23  <trevnorris>othiym23: if I don't get this in the next half an hour the patch is going in w/o it and it can be fixed later.
21:25:53  <trevnorris>already spent too much time futzing around w/ appending error messages. the stack trace is easy.
21:26:09  <trevnorris>it's displaying the correct exception line that's a pain.
21:30:14  <groundwater_>trevnorris: want me to look at any changes?
21:31:11  <trevnorris>groundwater_: i don't have any changes to look at. gone through a dozen permutations and can't get the exception line to display correctly. it's because node::ReportException() is passed a <Message> that isn't part of the error.
21:31:27  <trevnorris>so I can change the stack on the error object, but that doesn't affect the exception line reported.
21:32:05  <groundwater_>i'm okay with figuring this out later, it's an extreme case
21:34:16  <groundwater_>also the NR agent should 99% not be throwing in the handlers
21:34:25  <tjfontaine>you say that now
21:34:26  <tjfontaine>:P
21:34:31  <groundwater_>haha
21:34:33  <tjfontaine>also
21:34:37  <groundwater_>TJ IS THE BRINGER OF DOOM
21:34:37  <LOUDBOT>HOW MUCH TO GO BY BOAT
21:34:45  <tjfontaine>I am super stoked about tomorrow *night*
21:34:57  <tjfontaine>LOUDBOT: THESE GUYS ARE HAVING CAKE PLAY
21:34:58  <LOUDBOT>tjfontaine: LOOK AT JOSIAH PICKING UP MILK
21:35:26  * sblomjoined
21:35:35  <tjfontaine>sblom: my savior :)
21:36:19  <groundwater_>trevnorris: i would like to make sure the tests pass though, so when you have time we should address those failing tests
21:36:31  <trevnorris>groundwater_: next on my list.
21:38:18  <groundwater_>trevnorris: i can't PR your node repo on github for some reason
21:38:27  <trevnorris>strange
21:39:03  <groundwater_>maybe i'll try forking your copy
21:39:12  <groundwater_>i forked joyent/node
21:39:20  <groundwater_>and set you up as a remote
21:46:06  <indutny>bnoordhuis: this fd problem is interesting
21:46:13  <indutny>I see a lot of different solutions
21:46:27  <indutny>but most of them are not ABI-compatible
21:46:38  * brsonjoined
21:47:00  <indutny>bnoordhuis: what do you think about storing pointer to the list of events inside loop?
21:47:11  <indutny>bnoordhuis: and invalidating it when watcher is removed
21:47:26  <trevnorris>i really wish it were possible to try { } finally (e) { }, where e would be set _if_ there was an error.
21:47:33  <trevnorris>that would be really useful right now.
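JavaScript has no `finally (e)` form, but the pattern trevnorris wants can be approximated with a catch that records the error and rethrows - a hedged sketch (helper name invented):

```javascript
'use strict';

// Approximating the wished-for `try { } finally (e) { }`:
// cleanup runs in all cases, with `err` set only when the body threw.
function tryFinallyWithError(body, cleanup) {
  let err = null;
  try {
    body();
  } catch (e) {
    err = e;
    throw e;        // preserve normal propagation
  } finally {
    cleanup(err);   // err is null on the success path
  }
}

let seen = 'unset';
try {
  tryFinallyWithError(
    () => { throw new Error('boom'); },
    (e) => { seen = e ? e.message : null; }
  );
} catch (e) { /* swallowed for the demo */ }

console.log(seen); // boom
```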
21:48:09  * dshaw_quit (Quit: Leaving.)
21:48:09  <indutny>trevnorris: heh
21:48:17  <indutny>trevnorris: I wish javascript has macros
21:48:21  * wavdedquit (Quit: Hasta la pasta)
21:48:36  <tjfontaine>ha ha
21:48:46  <indutny>macros would simplify all this shit
21:48:49  <indutny>:)
21:48:53  <indutny>and a lot of stuff in core
21:49:01  <indutny>but
21:49:07  <indutny>at what price?
21:49:38  <tjfontaine>a lot of understandability mostly, the times where it makes it simpler to understand logic, it makes it harder to debug
21:49:53  <tjfontaine>understand the source, vs being able to debug
21:50:01  <indutny>yes
21:50:12  <indutny>but if it were a part of ES
21:50:18  <indutny>debuggers would be aware of it
21:50:22  <tjfontaine>ya, different story
21:50:41  <tjfontaine>but if you want meta programming why are you using js? :)
21:50:50  <indutny>haha
21:50:55  <indutny>its fast?
21:51:04  <indutny>and has GC
21:51:08  <indutny>which is fast too
21:51:10  <groundwater_>trevnorris: got it https://github.com/trevnorris/node/pull/1
21:51:12  <indutny>mostly
21:51:25  <groundwater_>i may push some more things here
21:51:29  <trevnorris>groundwater_: cool
21:51:54  <trevnorris>groundwater_: don't worry about working on the tests. there are several other things that i'm taking care of as well.
21:52:48  <groundwater_>trevnorris: i need to keep the tests up to date since we've got a polyfill in place
21:53:13  <trevnorris>groundwater_: yeah. i'm keeping all your tests as a separate commit on my branch, so it'll be easy to pick out.
21:53:30  <trevnorris>just need to focus on this error message stuff and get it done before I break my monitor
21:53:37  <groundwater_>lol
21:53:58  <tjfontaine>indutny: you like the v8 vm, just want a different language :P
21:54:09  <indutny>I don't know which exactly
21:54:13  <indutny>though, I enjoy rust
21:54:22  <indutny>but they broke all code that I've written
21:54:41  <trevnorris>hey, they sounds like v8
21:54:55  <tjfontaine>v8 doesn't really care about embedders :P
21:55:02  <groundwater_>trevnorris: the tests work up to eef688 but break in 61e7684
21:55:12  <trevnorris>don't worry about those last few commits.
21:55:20  <trevnorris>they're going to be scrapped anyways
21:55:30  <groundwater_>ahh okay, coolios
21:55:44  <groundwater_>much appreciated my friend
21:57:21  <trevnorris>np
21:58:39  <MI6>joyent/node: isaacs master * f153d6d : http client: pull last chunk on socket close - http://git.io/dUAtHg
22:01:09  * AvianFluquit (Ping timeout: 248 seconds)
22:02:35  <indutny>isaacs: good job
22:02:41  <isaacs>indutny: thanks
22:07:46  * TooTallNatequit (Quit: Computer has gone to sleep.)
22:08:40  <MI6>nodejs-master-windows: #431 FAILURE http://jenkins.nodejs.org/job/nodejs-master-windows/431/
22:13:20  <trevnorris>isaacs: why is it that require('../common') needs to be included at the top of all tests?
22:13:30  * kazuponjoined
22:14:32  <othiym23>common.PORT, among other things
22:14:36  <othiym23>trevnorris: ^^
22:14:52  <trevnorris>othiym23: yeah, but I have it included at the top of tests that don't even use common.
22:14:58  <trevnorris>and one time isaacs told me to include it anyways.
22:15:35  <MI6>nodejs-master: #638 UNSTABLE smartos-ia32 (7/649) smartos-x64 (8/649) osx-ia32 (2/649) linux-ia32 (1/649) http://jenkins.nodejs.org/job/nodejs-master/638/
22:17:39  * jmar777joined
22:18:13  * kazuponquit (Ping timeout: 248 seconds)
22:18:28  <trevnorris>DAMN YOU TMUX REFLOW!!!!
22:18:29  <LOUDBOT>I WONDER IF THEY USES SPACES OR TABS
22:18:52  <trevnorris>i hate it when "features" make life more difficult.
22:24:54  * kenansulaymanpart ("≈♡≈")
22:24:55  * st_lukejoined
22:25:36  * TooTallNatejoined
22:26:00  <trevnorris>piscisaureus_: have to say, that discussion of ours got me thinking of some cool applications of this thing.
22:26:24  <piscisaureus_>good :)
22:26:35  <piscisaureus_>trevnorris: any specific one you want to mention?
22:28:42  <trevnorris>piscisaureus_: mainly the idea of scoping, and using the uid to group resources together.
22:29:05  <othiym23>I'm glad we came up with a general solution instead of just adding CLS as it stood
22:29:06  * st_lukequit (Ping timeout: 252 seconds)
22:29:11  <trevnorris>piscisaureus_: it is technically possible to set an "accessor" callback on an object to see when it was altered.
22:29:38  <trevnorris>piscisaureus_: so there's a **very slight** chance of knowing if the global state has been altered or not.
22:29:47  <piscisaureus_>:)
22:29:52  <piscisaureus_>trevnorris: String.prototype
22:30:10  <othiym23>piscisaureus_: do you have a document somewhere of how Tasks is supposed to work?
22:30:23  <piscisaureus_>othiym23: work, or how to be used?
22:30:33  <othiym23>piscisaureus_: either / both
22:30:56  <trevnorris>othiym23: the ideas that piscisaureus_ has are pretty cool. my hope is that AsyncWrap can work well enough to allow Tasks to live in user-land.
22:31:19  <trevnorris>if it can achieve that, then it's pretty close to as generic-use as possible.
22:31:21  <piscisaureus_>othiym23: not the "latest version" but close: https://www.youtube.com/watch?v=QnO6Uut4Ao8&hd=1
22:31:22  <othiym23>that would be a big validation of the generality of the API
22:31:41  <piscisaureus_>othiym23: unfortunately I can't do it with async listeners as they stand
22:31:53  <piscisaureus_>othiym23: I can't track handle and req lifecycles precisely enough
22:32:21  <othiym23>piscisaureus_: is it the lack of finalizers that's problematic?
22:32:25  <piscisaureus_>othiym23: nujs.github.io/sl2013 is somewhat more recent, but only slides so probably not very useful
22:32:33  <piscisaureus_>othiym23: I need to know when a Handle goes away
22:32:50  <piscisaureus_>othiym23: but I can't distinguish between onclose and other callbacks (e.g. onread)
22:32:50  <trevnorris>and I do have a plan for that :)
22:33:17  * amartensquit (Quit: Leaving.)
22:33:34  <othiym23>I have my own uses for that information as well
22:33:46  <trevnorris>groundwater_: yeah. so common.js has to be included in all tests whether it's used or not. it throws in a bunch of probes for dtrace and the like.
22:34:20  * TooTallNatequit (Quit: Computer has gone to sleep.)
22:34:37  * superjoequit (Ping timeout: 246 seconds)
22:35:05  <trevnorris>groundwater_: it also makes sure that there are no floating globals from the result of your script (i.e. that a require() doesn't accidentally pollute the global object)
22:35:32  <indutny>piscisaureus_: is it what you propose for 0.12?
22:35:34  <indutny>or 0.14?
22:35:42  <indutny>piscisaureus_: or is it a joke? :)
22:35:57  <indutny>when I first watched it - I had very mixed feeling about it
22:36:00  <piscisaureus_>indutny: is that a joke?
22:36:10  * st_lukejoined
22:36:11  <indutny>your `task` API
22:36:29  <piscisaureus_>indutny: can you get specific?
22:36:42  <indutny>http://nujs.github.io/sl2013/#/7
22:36:44  <piscisaureus_>indutny: I'd love your feedback but I can't fix your feelings
22:36:45  <othiym23>the lxjs venue was way fancier this year than last year
22:36:49  <indutny>haha
22:36:52  <piscisaureus_>it was indeed
22:36:55  <indutny>a joke?
22:37:05  <piscisaureus_>indutny: ?
22:37:10  <indutny>ah
22:37:14  <indutny>it was not for me :)
22:37:16  <indutny>nvm
22:37:19  <indutny>brb
22:39:04  <piscisaureus_>indutny: ah, that was a joke :)
22:39:10  * jmar777quit (Remote host closed the connection)
22:39:11  <piscisaureus_>indutny: I forgot about that
22:41:12  <trevnorris>groundwater_, othiym23: so right now if a process exit callback fires while an async listener is active, and the error is handled, then the test asserts will pass.
22:41:17  <trevnorris>groundwater_: just fyi :)
22:43:15  <trevnorris>groundwater_: also, try to use process.nextTick() when testing for a specific count of async listeners. see, your first timer will first create a TimerWrap in c++, then create a Timer object in JS.
22:43:33  <trevnorris>groundwater_: both will fire the async listener.
22:43:41  <isaacs>trevnorris: global var leak detection
22:44:35  <trevnorris>isaacs: yeah. decided to not be lazy and actually check the source myself. :) thanks for the follow up though.
22:44:41  <groundwater_>trevnorris: ahh, so setImmediate will fire multiple listeners?
22:45:44  <trevnorris>groundwater_: it _may_ fire multiple. it's actually really confusing how TimerWrap works with Timers.
22:46:03  <trevnorris>but process.nextTick will only run once.
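The distinction can be sketched: a `process.nextTick` callback is a single, handle-free async event, which makes it safe to count in tests (a minimal illustration of the counting point, not the async-listener API itself):

```javascript
'use strict';

// process.nextTick schedules exactly one callback with no handle object
// behind it, so a test can rely on a deterministic event count; a timer
// may involve both a C++ TimerWrap and a JS Timer object.
let ticks = 0;
process.nextTick(() => { ticks++; });
setImmediate(() => {
  console.log(ticks); // 1: the tick ran exactly once, before immediates
});
```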
22:47:56  * st_lukequit
22:48:12  <trevnorris>othiym23: and sorry for now. if an error callback throws it'll die hard and print where it died in the callback's stack trace.
22:48:30  <isaacs>it's really amazing how lazy you can manage to get away with being if your node site uses cluster and domains together.
22:49:00  <isaacs>the npmjs.org site is such an utter mess. but like, whatever. errors usually don't happen 16 times in a row.
22:49:10  <isaacs>and all the state is not in the node process
22:49:22  <trevnorris>othiym23: it bothers me that the stack loses the call location from the callback if the error callback also throws, but this way you'll know exactly where the error actually happened.
22:49:22  <trevnorris>that should help with the domain confusion you were discussing earlier.
22:49:39  <trevnorris>isaacs: hah, awesome.
22:49:40  <isaacs>i can just not even look at it, and even if there are horrible errors that happen daily, pingdom hardly ever bugs me
22:49:50  <isaacs>when i do go back and sniff the logs, i'm usually horrified.
22:50:14  <isaacs>just fixed a few boneheaded "property of undefined" type errors that apparently can happen in rare edge cases.
22:50:24  <othiym23>trevnorris: all I *really* care about is seeing the most recently occurred error
22:50:45  <othiym23>having it dump the stacktrace that it got before the most recent thing threw up is the part that's really unhelpful
22:51:07  <trevnorris>othiym23: and that's what you'll get. see, the error handler will never actually throw, so I allow the error from the callback itself to propagate through. so you know the point of origin.
22:51:31  * c4miloquit (Remote host closed the connection)
22:52:21  * TooTallNatejoined
22:52:43  <trevnorris>isaacs: well, i figured out why we lose the stack trace when you have multiple handled throws. have it on my list for when I'm bored.
22:54:27  * Kakeraquit (Ping timeout: 272 seconds)
22:55:06  <trevnorris>groundwater_: wait, your tests show that uncaughtException should fire even if the error callbacks handled the error?
22:56:11  <groundwater_>trevnorris: hmm… which test?
22:56:12  <othiym23>WHYYYYYYYYYYYYYYYYYYYY: http://blog.mongodb.org/post/49262866911/the-mean-stack-mongodb-expressjs-angularjs-and
22:56:35  <othiym23>Manning is making a book about the MEAN stack
22:56:54  <trevnorris>groundwater_: like throw-in-before-multiple. you run the checks in uncaughtException, but the error callback handle the error.
22:57:06  <groundwater_>trevnorris: you're absolutely right… wtf
22:57:59  <trevnorris>groundwater_: see, your asserts might not have been fired at all, since they're in a function that shouldn't be running.
22:58:08  <trevnorris>groundwater_: i'm fixing those up.
22:58:17  <groundwater_>but the console.log('ok') is printing… odd
22:58:21  <othiym23>it sounds like I've got some work to do on the polyfill
22:58:44  <groundwater_>i try to make sure i place an 'ok' right after my key assert, to prevent that
22:59:20  <trevnorris>groundwater_: technically make test doesn't care about the "ok". if it doesn't throw then it'll pass. but regardless, if that's the intended API then there's a bug in the patch.
22:59:35  * jmar777joined
22:59:35  <groundwater_>i run the tests manually
23:00:03  <trevnorris>groundwater_, othiym23: so, it should only call uncaughtException iff no error callbacks handled the error.
23:00:10  <groundwater_>i likely didn't pay much attention because they were working
23:00:22  <othiym23>we really need to put together a test runner for async-listener
23:00:44  <othiym23>right now I use a bash for loop which doesn't check exit codes
23:00:52  <groundwater_>trevnorris: yes
23:00:59  * c4milojoined
23:01:12  * jmar777quit (Remote host closed the connection)
23:01:22  <trevnorris>groundwater_: ah, I forgot about one case. that is, if a before/after callback is running when it throws, then the error callbacks are _not_ allowed to register that they've handled the error.
23:01:44  <groundwater_>i actually think i have a test for that, but it may also have an error in it
23:01:46  * jmar777joined
23:01:49  <trevnorris>hence the line: return handled && !inAsyncTick;
23:01:55  <groundwater_>yup!
23:02:09  <othiym23>I had that in the polyfill at one point, but I think I may have reworked things in such a way that I changed the semantics
23:02:26  * jmar777quit (Read error: Connection reset by peer)
23:02:48  <trevnorris>ok. what's your opinion. if error handlers _do_ run but they're not allowed to handle the error, should uncaughtException be called?
23:02:54  * jmar777joined
23:03:13  <groundwater_>oh… right trevnorris i have the "return true" there specifically because it should be ignored
23:03:14  <trevnorris>groundwater_: that's why I changed the code at one point to only check for === null
23:03:29  <othiym23>I'm of two minds, trevnorris
23:03:30  <trevnorris>because "null" means that there were no error callbacks to handle the error.
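A hedged sketch of the return contract being described (the `handled && !inAsyncTick` line is quoted from the chat, but the surrounding body is invented for illustration):

```javascript
'use strict';

// Hypothetical errorHandler contract:
//   null  -> no error callbacks were registered at all
//   false -> callbacks ran but none claimed the error
//   true  -> a callback handled it, and the throw did not happen
//            inside a before/after callback
function errorHandler(er, listeners, inAsyncTick) {
  if (listeners.length === 0)
    return null;
  let handled = false;
  for (const onError of listeners) {
    if (onError(er) === true)
      handled = true;
  }
  return handled && !inAsyncTick;
}

// uncaughtException is only consulted when the result is not true.
console.log(errorHandler(new Error('x'), [], false));            // null
console.log(errorHandler(new Error('x'), [() => true], false));  // true
console.log(errorHandler(new Error('x'), [() => true], true));   // false
```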
23:03:53  <trevnorris>othiym23: sup?
23:03:59  <othiym23>on the one hand, I think we should probably force the process to tank if something bad happens in the listener or callbacks
23:04:31  * jmar777quit (Remote host closed the connection)
23:04:42  <othiym23>on the other hand, I don't want to force users of the New Relic module to deal with our code making their apps crash if they were naively making their apps continue running by trapping errors in uncaughtException
23:05:06  * jmar777joined
23:05:28  * dshaw_joined
23:05:38  * sblomquit (Ping timeout: 264 seconds)
23:06:00  <trevnorris>othiym23: well, if the error callback throws then it'll run the process exit callbacks then immediately die. no getting around that.
23:06:00  <trevnorris>but the others may have a little give.
23:06:27  * bnoordhuisquit (Ping timeout: 260 seconds)
23:06:38  <othiym23>from a soundness point of view, eating shit and dying is the correct thing to do
23:06:59  <trevnorris>othiym23: see, it'll only bypass uncaught exception if it's from a callback (i.e. your code) so uncaughtException will still catch all the other errors.
23:07:01  <othiym23>but I know that I'm going to take heat for New Relic crashing people's apps if that's what happens
23:07:12  <othiym23>yeah, I know
23:07:51  <othiym23>I had a bug in the polyfill that caused CLS to throw up under 0.8 when app code threw
23:08:13  <othiym23>so they were already in an error state, but the problem with async listener turned an error they could deal with into an unrecoverable crash
23:08:40  <othiym23>and New Relic took the blame, even though their apps were already in an error state
23:09:15  <trevnorris>othiym23: well, they better not expect to be able to recover from a domain error callback throwing.
23:09:16  <othiym23>I *think* I have those errors ironed out now, but that's why I'm a little hesitant to fully commit to not giving uncaughtException a chance to run
23:09:27  <groundwater_>trevnorris: going back to your earlier point about === null, the errorHandler function seems to only ever return false, or 'handled && !inAsyncTick' so when will it ever be explicitly null?
23:09:33  <othiym23>I'm only talking about before and after here
23:09:50  * jmar777quit (Ping timeout: 264 seconds)
23:10:24  <trevnorris>groundwater_: ... whoops. must've botched that in a rebase or something.
23:10:30  * AvianFlujoined
23:10:55  <trevnorris>it's supposed to return null in the case there are no error handlers available to run.
23:10:55  <groundwater_>trevnorris: i'm reasonably sure the tests are now correct
23:11:17  <groundwater_>aha!
23:12:23  <trevnorris>the original idea was to sort of phase out uncaught exception by not running it, unless no error handlers had been set.
23:13:07  <groundwater_>so right now, uncaughtException is expected to run on everything except an "error thrown in error handler"
23:13:15  <groundwater_>and the tests rely on that behavior
23:13:56  <groundwater_>i could move that test to process.on('exit',…) but i'm unclear exactly when that's supposed to run
23:14:07  <trevnorris>well, uncaughtException will not run if the error wasn't handled by the error handler callbacks, or if one of the error handler callbacks threw.
23:14:30  <groundwater_>sorry yes, that's what i meant
23:14:39  * kazuponjoined
23:15:03  <trevnorris>ok. so that's the agreed upon behavior?
23:15:27  <groundwater_>yes i think so, and i think the tests are asserting that behavior currently
23:16:06  <othiym23>this-all is why the 0.8 version of the polyfill uses try-catch now
23:16:17  <trevnorris>just to solidify this, uncaughtException will run if a listener/before/after callback throws?
23:16:17  <othiym23>if you're using 0.8 at this point, you deserve shitty performance
23:16:45  <groundwater_>yes, after routing through the error callback
23:16:55  <othiym23>that is my understanding as well
23:19:26  * kazuponquit (Ping timeout: 264 seconds)
23:21:29  <trevnorris>groundwater_, othiym23: here's the issue. the following will lead to an infinite recursion: https://gist.github.com/trevnorris/7128575
23:22:03  <groundwater_>lol dang...
23:22:29  <groundwater_>actually i'm not clear what's causing the loop
23:23:02  <othiym23>you guys know that return true in the uncaughtException handler doesn't do jack shit, right?
23:23:23  <trevnorris>othiym23: um. in my patch it does.
23:24:06  <trevnorris>othiym23: is it not supposed to?
23:24:32  <trevnorris>oh wait. yeah. because it's technically an event emitter.
23:24:36  <trevnorris>duh
23:25:08  <trevnorris>yet another reason why I hate event emitters. they obfuscate the seemingly obvious.
23:25:55  <groundwater_>i'm still not clear why 'before' gets called repeatedly
23:26:45  <groundwater_>but assuming there is some async mechanism causing it, if we're ignoring return codes by the async-listener error handlers, it makes sense to also ignore the return codes in 'uncaughtException'
23:27:28  <trevnorris>well, like othiym23 said. you can't get a return code from uncaughtException because it's being called from the event emitter.
23:27:51  <othiym23>groundwater_: I had an error in my thinking where I thought returning from an event listener actulaly did something, but it does not
23:28:54  <groundwater_>so it has nothing to do with the 'return true'
23:29:13  <othiym23>right
23:29:16  <trevnorris>groundwater_: no. just the existence of the callback is enough
23:30:17  <groundwater_>so… how do my tests exit, because they're using uncaughtException
23:31:27  <groundwater_>OH, because i'm removing the listener
23:31:38  <trevnorris>yeah
23:31:44  <groundwater_>likely because i wanted to console.log('ok')
23:31:56  <groundwater_>okay, new test case!
23:32:17  <groundwater_>othiym23: how does the polyfill respond?
23:32:37  <othiym23>groundwater_: lemme check!
23:32:47  <trevnorris>groundwater_: i've fixed that, and if you look in my gist you'll see I don't use that.
23:32:52  <trevnorris>no it's because of the following:
23:33:02  * amartensjoined
23:33:14  <trevnorris>https://github.com/joyent/node/blob/master/src/node.js#L244
23:33:37  <trevnorris>see, if everything is handled then it calls setImmediate, which calls the before callback again, which errors again.
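The recursion trevnorris describes can be mocked synchronously (a hedged sketch - the real flow goes through `_fatalException` and `setImmediate` in src/node.js; the names and bodies here are invented):

```javascript
'use strict';

// Shape of the loop:
// 1. a `before` callback throws
// 2. the error callbacks claim it, so _fatalException schedules a
//    continuation to drain the remaining nextTickQueue
// 3. that continuation is itself wrapped, so `before` runs and throws again
let depth = 0;
function before() {
  if (++depth > 3) return;   // cap the demo; the real bug never stops
  throw new Error('boom in before');
}
function fatalException(er, handleError) {
  if (handleError(er)) {
    // "everything handled": run the continuation...
    try { before(); } catch (e) { fatalException(e, handleError); } // ...which re-enters
  }
}
try { before(); } catch (e) { fatalException(e, () => true); }

console.log(depth); // 4: each "recovery" re-triggered the before callback
```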
23:34:16  <othiym23>groundwater_: silently exits with an exit code of 0
23:34:22  <groundwater_>lol
23:34:26  <othiym23>at least with latest published listener
23:35:16  <groundwater_>the easy solution is to bypass uncaughtException if before/after throws, but i don't think that's the right behavior
23:35:35  <trevnorris>well, what other option do we have?
23:36:39  <groundwater_>trevnorris: are you emitting the exit event on the next tick?
23:37:11  <trevnorris>groundwater_: no. it has to be done immediately or the callbacks will fire again.
23:37:42  <groundwater_>hmm… what is being passed to setImmediate on L244
23:38:02  <trevnorris>groundwater_: the ability to finish processing the nextTickQueue
23:38:23  <groundwater_>ahh, so uncaughtException can schedule async events?
23:39:46  <trevnorris>no. _fatalException does
23:39:53  <trevnorris>in the case that it believes all errors were handled.
23:40:28  <groundwater_>so what's going on here https://github.com/joyent/node/blob/master/src/node.js#L227
23:40:45  <groundwater_>isn't 'caught' only going to ever be null?
23:40:47  <othiym23>I think that wherever this lands, the ability of the polyfill to handle everything in 0.8 is going to be somewhat compromised compared to 0.10+
23:42:14  <trevnorris>well, look. I can have uncaught exception be called _once_ in the case a callback throws, but then the application must crash.
23:42:46  <othiym23>I'm not complaining!
23:43:03  <othiym23>I'm happy to live with the polyfill behaving differently than what's in core
23:43:09  <othiym23>these are literally edge cases of edge cases
23:43:14  <othiym23>just pointing it out
23:43:17  <trevnorris>alright. then that's how it'll work. if a callback throws then uncaughtException will be called _once_, then the application will crash.
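[editor's note: the semantics agreed here can be sketched with a one-shot guard flag. This is illustrative pseudafter-the-fact code, not the core implementation; `makeGuard`, `onUncaught`, and `crash` are assumed names.]

```javascript
// Sketch of the agreed behavior: if a callback throws, the
// 'uncaughtException' listeners run exactly once; any failure after that
// goes straight to a crash rather than looping back through the handler.
function makeGuard(onUncaught, crash) {
  let fired = false;
  return function handle(er) {
    if (fired) return crash(er); // second failure: no more recovery attempts
    fired = true;
    onUncaught(er);              // listeners get exactly one shot
  };
}

const calls = [];
const handle = makeGuard(
  er => calls.push('uncaught:' + er.message),
  er => calls.push('crash:' + er.message)
);
handle(new Error('first'));
handle(new Error('second'));
console.log(calls); // → [ 'uncaught:first', 'crash:second' ]
```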
23:44:06  <othiym23>and this ignores things like process.on('exit') / process.on('beforeExit') handlers?
23:44:38  <trevnorris>no. it'll fire the exit callbacks.
23:45:32  <othiym23>k
23:46:01  <othiym23>I'm debating adding support to the New Relic module to try to do one last metrics push in case the process is going down, but that's unavoidably asynchronous because it needs to do a few POSTs
23:46:14  <trevnorris>though, then there's the edge case of a process exit callback throwing. so where do we want the stack trace to come from?
23:46:29  <trevnorris>the original callback that caused process exit to fire, or the exit callback?
23:47:13  <trevnorris>othiym23: well, you can write to filesystem sync. then check for it on next startup and send it off.
23:47:35  <othiym23>trevnorris: that's not a bad idea
23:48:31  <trevnorris>as much as I hate debugging, I've had to debug my fair share of massive monolithic java ec2/oracle db services.
23:48:45  <trevnorris>actually, maybe that's the reason I hate debugging so much :P
23:51:13  <trevnorris>othiym23: so, which stack would you want. the stack from calling the process.exit callbacks, or from the original caller's callback?
23:51:24  <trevnorris>man. this is messed up.
23:51:48  <trevnorris>so, you could technically throw three times before exit: from your app, from the error callback, then from the process exit callback.
23:52:06  <trevnorris>imo, if you're there, you're already SOL.
23:52:30  <othiym23>see, the reason I originally asked for both is that which you want depends on who you are and what you're trying to do
23:52:51  <othiym23>I think if we're going to get all axiomatic about it, the best thing to do is print out the last stacktrace we get
23:53:36  <trevnorris>ok. that's easiest.
23:58:22  <trevnorris>othiym23: hopefully (but don't plan on this any time soon) I can fix the error reporting to include the entire stack trace from the beginning of the synchronous code.
23:58:46  <MI6>libuv-master-gyp: #241 FAILURE windows-x64 (3/196) windows-ia32 (3/196) http://jenkins.nodejs.org/job/libuv-master-gyp/241/
23:59:09  <trevnorris>it's losing the stack because it generates a new stack from the last point of failure.
23:59:59  <trevnorris>but this way you could technically track down the sequence of throws.
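[editor's note: "track down the sequence of throws" can be sketched by chaining each earlier error's stack onto the new one before rethrowing, so the final report reaches back to the original failure. `chainError` is an illustrative helper, not a core API.]

```javascript
// Sketch: append the previous error's stack under a "Caused by:" marker,
// mimicking the long-stack-trace idea discussed above. The last error
// thrown then carries the whole sequence of throws in its .stack.
function chainError(newErr, prevErr) {
  if (prevErr && prevErr.stack) {
    newErr.stack += '\nCaused by: ' + prevErr.stack;
  }
  return newErr;
}

let report;
try {
  try {
    throw new Error('app failure');                          // throw #1
  } catch (first) {
    throw chainError(new Error('error-callback failure'), first); // throw #2
  }
} catch (chained) {
  report = chained.stack;
}
console.log(report.includes('app failure')); // → true
```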