00:00:01  * ircretary quit (Remote host closed the connection)
00:00:09  * ircretary joined
00:03:14  * bnoordhuis quit (Ping timeout: 264 seconds)
00:03:27  * qard quit (Quit: Leaving.)
00:14:21  * kazupon quit (Remote host closed the connection)
00:14:58  * kazupon joined
00:16:19  * kazupon quit (Remote host closed the connection)
00:40:24  * xaka quit (Ping timeout: 276 seconds)
00:59:56  * brson quit (Quit: leaving)
01:06:49  * amartens quit (Quit: Leaving.)
01:09:26  * bnoordhuis joined
01:10:26  * c4milo quit (Remote host closed the connection)
01:10:51  * inolen quit (Ping timeout: 245 seconds)
01:10:52  * c4milo joined
01:11:13  * EhevuTov quit (Quit: This computer has gone to sleep)
01:14:51  * bnoordhuis quit (Ping timeout: 276 seconds)
01:15:26  * c4milo quit (Ping timeout: 245 seconds)
01:15:48  * inolen joined
01:20:24  <TooTallNate>isaacs: ping https://github.com/joyent/node/pull/6209
01:24:44  * TooTallNate quit (Quit: Computer has gone to sleep.)
01:28:42  * defunctzombie_zz changed nick to defunctzombie
01:31:01  * defunctzombie changed nick to defunctzombie_zz
01:36:54  * piscisaureus_ joined
01:43:17  * abraxas joined
01:58:33  * defunctzombie_zz changed nick to defunctzombie
02:09:34  * defunctzombie changed nick to defunctzombie_zz
02:46:11  * TooTallNate joined
02:47:03  * TooTallNate quit (Client Quit)
02:47:56  * defunctzombie_zz changed nick to defunctzombie
03:06:21  * kevinswiber quit (Remote host closed the connection)
03:25:57  * st_luke quit (Remote host closed the connection)
03:39:13  * brson joined
03:45:00  * defunctzombie changed nick to defunctzombie_zz
03:54:46  * defunctzombie_zz changed nick to defunctzombie
03:57:07  * M28 joined
04:09:30  * kazupon joined
04:57:16  * c4milo joined
05:13:55  * julianduque quit (Ping timeout: 240 seconds)
05:35:12  * julianduque joined
05:36:19  * qard joined
05:39:24  * qard part
05:49:09  * julianduque quit (Ping timeout: 276 seconds)
05:56:13  * brson quit (Quit: leaving)
06:02:58  * csaoh joined
06:03:22  * csaoh quit (Client Quit)
06:24:21  * tuxie_ joined
06:42:39  <MI6>nodejs-v0.10-windows: #207 UNSTABLE windows-ia32 (8/599) windows-x64 (7/599) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/207/
06:49:48  * c4milo quit (Remote host closed the connection)
07:07:33  * bnoordhuis joined
07:14:46  * defunctzombie changed nick to defunctzombie_zz
07:24:40  * csaoh joined
07:33:06  * dsantiago quit (Quit: Leaving...)
07:34:07  * dsantiago joined
07:42:30  * bnoordhuis quit (Ping timeout: 245 seconds)
07:43:19  * tuxie_ quit (Ping timeout: 260 seconds)
07:50:49  * lei__ joined
07:51:21  <lei__>should I use UV_HANDLE_ACTIVE in my application?
07:51:55  <lei__>how can I know a timer's state?
07:52:12  <lei__>how to avoid multiple closes on a timer?
07:54:14  <lei__>anybody?
07:55:31  <lei__>fellows?
07:59:06  * Kakera joined
07:59:32  * hz joined
08:03:49  <saghul>lei__ use uv_is_active instead
08:04:05  <saghul>uv_is_active((uv_handle_t *)your_handle)
08:06:21  <lei__>how about closing and closed?
08:06:41  <saghul>uv_is_closing
08:06:52  <saghul>that will return true if either is set
08:07:47  <lei__>thanks
08:07:59  <lei__>should I use reqs_pending in my application?
08:08:52  <lei__>because I can't delete my object since there are pending reqs.
08:09:02  <saghul>I think that's a private field, you can double check in uv.h
08:10:24  <saghul>if you want to wait for write requests to end you can uv_shutdown the handle
08:10:35  <saghul>and then uv_close it in the shutdown callback
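
[editor's note] A minimal sketch of the pattern saghul describes above — check the handle with uv_is_active()/uv_is_closing(), then uv_shutdown() and only uv_close() from the shutdown callback. It assumes a heap-allocated uv_tcp_t; the function names are illustrative, not from the log:

    #include <stdlib.h>
    #include <uv.h>

    static void on_close(uv_handle_t* handle) {
      free(handle);                               /* libuv is done with it now */
    }

    static void on_shutdown(uv_shutdown_t* req, int status) {
      /* pending writes have been flushed (or failed); closing is safe now */
      uv_close((uv_handle_t*) req->handle, on_close);
      free(req);
    }

    static void stop_session(uv_tcp_t* tcp) {
      if (uv_is_active((uv_handle_t*) tcp) && !uv_is_closing((uv_handle_t*) tcp)) {
        uv_shutdown_t* req = malloc(sizeof(*req));
        uv_shutdown(req, (uv_stream_t*) tcp, on_shutdown);
      }
    }
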
08:12:53  * hij1nxAtNodeconf quit (Ping timeout: 248 seconds)
08:13:07  <lei__>if I use asio, shared_from_this can deal with this problem.
08:13:20  <lei__>I am trying to find a way to do that in libuv
08:13:52  <lei__>all the samples from google are too simple.
08:14:13  <saghul>what are you trying to accomplish?
08:14:38  <lei__>in my application, my ClientSession object has several handles: a tcp_handle and two timers.
08:15:50  * defunctzombie_zz changed nick to defunctzombie
08:17:18  <lei__>I just need a sample that deals with multiple handles in one object.
08:17:46  * hij1nxAtNodeconf joined
08:20:36  <lei__>thanks saghul, I learned a lot.
08:20:54  <saghul>no problem, glad I could help :-)
08:22:40  * abraxas__ joined
08:22:40  * abraxas quit (Ping timeout: 264 seconds)
08:48:46  * bnoordhuis joined
08:53:45  * bnoordhuis quit (Ping timeout: 276 seconds)
09:05:37  * bajtos joined
09:09:48  * indexzero quit (Quit: indexzero)
09:41:33  * defunctzombie changed nick to defunctzombie_zz
09:44:27  * kazupon quit (Remote host closed the connection)
09:47:36  * bnoordhuis joined
09:49:11  <lei__>hi, saghul
09:49:28  <saghul>hi lei__
09:50:34  <lei__>I have a question: I have an object which contains several handles, like a tcp handle and timer handles. I don't wanna maintain state in my class, so how can I know a handle is closed?
09:50:58  <saghul>bnoordhuis mind if one of these days I do some janitoring on the issue tracker? the obvious stuff, that is
09:51:25  <saghul>lei__ uv_is_closing(handle)
09:51:34  <lei__>I don't wanna maintain tcp_closed and timer_closed flags.
09:52:11  <lei__>I can't distinguish between closed and closing.
09:52:20  <saghul>lei__ I guess you want to tie the lifetime of your object to the lifetime of all handles
09:52:31  <lei__>yes.
09:52:35  <saghul>lei__ why do you need to distinguish between them?
09:53:22  <lei__>since if all handles are closed, I can delete the object.
09:53:37  <lei__>if handles are closing, I can't do that.
09:53:54  <saghul>gotcha
09:54:20  <saghul>I think you'll need to keep at least a counter
09:54:39  <saghul>so, you uv_close X handles, counter = X
09:54:46  <lei__>on a closed handle, if the handle is a tcp handle, uv_is_closing returns 3; if it's a timer, it returns 1
09:54:49  <saghul>on each close_cb you counter--
09:55:03  <saghul>when counter == 0, you can delete your object
09:55:09  <lei__>a ref counter is not the best choice.
09:55:33  <saghul>looks appropriate to me in this case
09:56:13  <bnoordhuis>saghul: sure, go ahead
09:56:32  <saghul>bnoordhuis ack, will do!
09:56:34  <lei__>yes, I can solve this with a ref counter or manual handle state, but I don't think it is the best answer.
09:58:07  <lei__>if I can know a handle's state, I don't need to maintain a counter or handle state.
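
[editor's note] A sketch of the counter approach saghul outlines: close all handles, decrement in each close_cb, and delete the object only when the counter reaches zero. The struct mirrors the ClientSession described earlier (one tcp handle, two timers); the field and function names are made up for illustration:

    #include <stdlib.h>
    #include <uv.h>

    typedef struct {
      uv_tcp_t tcp;
      uv_timer_t timer1;
      uv_timer_t timer2;
      int pending_closes;
    } client_session_t;

    static void on_handle_closed(uv_handle_t* handle) {
      client_session_t* s = handle->data;
      if (--s->pending_closes == 0)
        free(s);                      /* every close_cb has fired; safe to delete */
    }

    static void close_session(client_session_t* s) {
      s->pending_closes = 3;
      s->tcp.data = s->timer1.data = s->timer2.data = s;
      uv_close((uv_handle_t*) &s->tcp, on_handle_closed);
      uv_close((uv_handle_t*) &s->timer1, on_handle_closed);
      uv_close((uv_handle_t*) &s->timer2, on_handle_closed);
    }
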
10:02:58  * bajtos quit (Quit: bajtos)
10:04:49  * bajtos joined
10:05:14  <lei__>so, keeping a counter or other vars is the best answer for my problem, fellows?
10:10:08  <bnoordhuis>lei__: when you say "i don't want to maintain a closed flag", why is that?
10:11:39  <lei__>I mean, for the tcp handle I have a var tcp_handle_closed, defaulting to false, and for the timer handle there is a flag, timer_closed, defaulting to false. After the close callback I set it to true.
10:11:46  <lei__>I am not a native speaker.
10:12:03  <bnoordhuis>me neither. don't let that stop you
10:12:06  <saghul>i guess you'd want a uv_is_closed, and use it in the close callback
10:12:26  <lei__>yes, that's what I need.
10:12:29  <saghul>if all of them return true you are done
10:13:05  <lei__>I can do that for the tcp handle by checking uv_is_closing() > 1
10:13:17  <saghul>hum
10:13:17  <lei__>but for a timer, it just returns 1
10:13:38  <saghul>I'm not sure it's supposed to work that way
10:13:53  <bnoordhuis>lei__: it returns a bool. zero is false, non-zero is true
10:14:16  <bnoordhuis>i'm reasonably sure uv-unix makes sure it's always 0 or 1, don't know about uv-win
10:14:44  <bnoordhuis>no, uv-win doesn't :)
10:14:51  <bnoordhuis>easy to fix though
10:14:54  <saghul>heh
10:19:18  * abraxas__ quit (Remote host closed the connection)
10:19:51  * abraxas joined
10:24:11  * abraxas quit (Ping timeout: 245 seconds)
10:24:15  * csaoh quit (Quit: csaoh)
10:29:50  <MI6>joyent/libuv: bnoordhuis created branch jenkins-saghul-review - http://git.io/cX95VQ
10:29:55  <bnoordhuis>saghul: but only if you feel like it
10:30:06  * bajtos quit (Quit: bajtos)
10:34:29  <MI6>libuv-review: #71 UNSTABLE smartos-ia32 (2/194) smartos-x64 (2/194) windows-ia32 (3/195) windows-x64 (4/195) http://jenkins.nodejs.org/job/libuv-review/71/
10:37:59  * kazupon joined
10:39:17  * `3rdEden changed nick to `3E|GONE
10:40:23  <lei__>is it safe to delete the object if I just pass a NULL callback to uv_close?
10:40:33  <lei__>safe to delete object.
10:42:14  <bnoordhuis>lei__: you mean immediately after the call to uv_close()? no
10:43:08  <lei__>gotcha
10:44:13  <Domenic_>bnoordhuis: browsing vm bugs, I bet this one is fixed https://github.com/joyent/node/issues/3352
10:44:27  <lei__>I pass a null callback; if I can't delete immediately, when can I?
10:44:31  * csaoh joined
10:44:36  <Domenic_>bnoordhuis: well, generally I guess I should do a vm bug scrub
10:45:21  <lei__>when should I delete the object if I pass a null callback?
10:46:07  <MI6>nodejs-v0.10: #1479 UNSTABLE smartos-x64 (2/599) http://jenkins.nodejs.org/job/nodejs-v0.10/1479/
10:46:44  <bnoordhuis>lei__: well, never - because you don't know when libuv actually closes it
10:47:01  <bnoordhuis>lei__: a NULL callback is only useful if you have a handle with static storage
10:47:13  <bnoordhuis>if it's heap-allocated, you'll always want to pass a close_cb
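
[editor's note] A sketch of the two cases bnoordhuis distinguishes: a NULL close callback is only reasonable for a handle with static storage, while a heap-allocated handle should be freed from its close_cb. The names are illustrative:

    #include <stdlib.h>
    #include <uv.h>

    static uv_timer_t static_timer;     /* static storage: nothing to free */

    static void on_close(uv_handle_t* handle) {
      free(handle);                     /* only here is it safe to free the handle */
    }

    void close_examples(uv_loop_t* loop) {
      uv_timer_init(loop, &static_timer);
      uv_close((uv_handle_t*) &static_timer, NULL);     /* fine: static storage */

      uv_timer_t* heap_timer = malloc(sizeof(*heap_timer));
      uv_timer_init(loop, heap_timer);
      uv_close((uv_handle_t*) heap_timer, on_close);    /* free in the callback */
    }
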
10:48:12  <bnoordhuis>Domenic_: did you test it?
10:48:24  <bnoordhuis>if yes, please comment and i'll close the issue
10:50:24  * piscisaureus_ quit (Read error: Operation timed out)
10:50:41  <lei__>it is kind of hard to manage object lifecycle: once you've initialized a handle in an object, you can't delete it directly, you have to use an object lifecycle manager.
10:52:03  <lei__>if I initialize a handle in an object, I have to wait for all handles to be closed.
10:54:31  <bnoordhuis>hey, i never promised you a rose garden
10:56:02  <lei__>I am not complaining, just confirming libuv's behavior.
11:30:16  <MI6>joyent/libuv: Ben Noordhuis master * d7115f0 : unix, windows: make uv_is_*() always return 0 or 1 - http://git.io/6vdrEg
11:35:43  <MI6>libuv-master: #238 UNSTABLE windows (3/195) smartos (9/194) http://jenkins.nodejs.org/job/libuv-master/238/
11:36:37  <MI6>libuv-master-gyp: #177 UNSTABLE windows-x64 (3/195) smartos-ia32 (2/194) windows-ia32 (3/195) linux-ia32 (1/194) smartos-x64 (2/194) http://jenkins.nodejs.org/job/libuv-master-gyp/177/
11:39:49  * bnoordhuis quit (Ping timeout: 248 seconds)
11:45:54  * kazupon quit (Read error: Connection reset by peer)
11:46:18  * kazupon joined
11:47:43  <indutny>hey people
11:48:36  <MI6>libuv-node-integration: #223 UNSTABLE smartos-x64 (6/639) linux-x64 (1/639) http://jenkins.nodejs.org/job/libuv-node-integration/223/
11:51:28  * kazupon_ joined
11:51:40  * kazupon quit (Ping timeout: 256 seconds)
11:55:45  <indutny>oh shit
11:55:48  <indutny>just found a bug in contextify
11:57:58  <Domenic_>ircretary: tell bnoordhuis no, i didn't test that vm bug, but will do a general run-through testing all open vm bugs later.
11:57:59  <ircretary>Domenic_: I'll be sure to tell bnoordhuis
11:58:25  <Domenic_>indutny: the one that was just filed? https://github.com/joyent/node/issues/6208
11:58:34  <indutny>no
11:58:37  <indutny>node crashes
11:59:17  <Domenic_>oh shit
11:59:28  <indutny>yeah
11:59:39  <indutny>on highly computational js :)
11:59:54  <Domenic_>:-S
12:00:16  <indutny>debugging it atm
12:01:39  <indutny>oh, that's quite nice
12:02:15  <indutny>I'm passing an object to vm.runInNewContext
12:08:02  <saghul>ircretary tell bnoordhuis sorry, I was aft, but LGTM :-)
12:08:03  <ircretary>saghul: I'll be sure to tell bnoordhuis
12:09:13  * kazupon_ quit (Remote host closed the connection)
12:15:47  <Domenic_>indutny: oh, it does .As<String>() instead of ->ToString() or something?
12:17:03  <indutny>nono
12:17:09  * AvianFlu joined
12:17:10  <indutny>I'm still looking into it
12:28:31  * kazupon joined
12:28:34  <indutny>Domenic_: sorry, I must be missing something
12:28:48  <Domenic_>?
12:28:49  <indutny>Domenic_: but can you please take a look at CreateV8Context
12:28:54  * Kakera quit (Ping timeout: 264 seconds)
12:28:59  <indutny>Domenic_: how are you using `sandbox` object here?
12:29:24  <Domenic_>indutny: ugh everything has changed since the multi-context stuff
12:29:29  <indutny>ah
12:29:31  <indutny>ok
12:29:38  <indutny>I suppose it should be stored somewhere inside context
12:29:41  <indutny>no?
12:30:10  <Domenic_>there is a sandbox_ instance variable
12:30:18  <Domenic_>that is a persistent
12:30:33  <indutny>ok
12:30:46  <Domenic_>will look through change history see where this weirdness was introduced
12:30:52  <indutny>but how is CreateV8Context related to it?
12:31:15  <indutny>756b6222
12:31:29  <indutny>well, doesn't look like much was changed
12:31:31  <indutny>since your commit
12:31:42  <indutny>aaah
12:31:48  <indutny>you're doing CreateDataWrapper
12:32:54  <Domenic_>Hmm it looks like the sandbox local is mostly useless except for getting its constructor name
12:33:00  <Domenic_>that did not change
12:33:16  <indutny>yeah
12:33:20  <indutny>well, its not as bad
12:33:21  <indutny>nvm
12:33:35  <indutny>I just thought that CreateDataWrapper was some sort of newly added v8 api
12:33:38  <indutny>and it is actually your function
12:33:59  * indutny is continuing to look into it
12:34:14  <Domenic_>oh ok :)
12:36:49  * bnoordhuis joined
12:38:35  <indutny>looks like v8 bug
12:39:20  <Domenic_>:-S
12:40:03  * AvianFlu quit (Remote host closed the connection)
13:01:35  * AvianFlu joined
13:06:41  * AvianFlu quit (Remote host closed the connection)
13:07:36  * jmar777 joined
13:21:46  <indutny>probably not
13:21:59  <indutny>it seems that no one is holding a reference to sandbox_
13:24:04  <indutny>hm...
13:27:37  * kevinswiber joined
13:32:24  * superdealloc joined
13:32:45  <superdealloc>Hi guys. Is there any way to trace back why libuv would get stuck on a kevent()?
13:32:49  * AvianFlu joined
13:33:50  <bnoordhuis>superdealloc: you can `call uv__print_handles(0, 0)` in gdb
13:34:28  <superdealloc>What return type does that have? void?
13:35:24  <bnoordhuis>yes
13:37:00  <superdealloc>Program received signal EXC_BAD_ACCESS, Could not access memory.
13:37:13  <superdealloc>Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000000
13:37:44  <superdealloc>I did this: call (void)uv__print_handles(0, 0)
13:38:46  <bnoordhuis>superdealloc: maybe try (void*)0 as the first argument
13:38:53  <bnoordhuis>if that doesn't work, something's wrong
13:38:57  <superdealloc>Alright, let me restart this
13:40:19  <superdealloc>Yeah, same thing.
13:40:52  <superdealloc>backtrace gives me node::SetupProcessObject -> ares__send_query -> uv_barrier_wait -> kevent
13:41:03  <superdealloc>this is running from a puppet deployment, if that helps anything.
13:42:25  <superdealloc>Is there anything else I can use to pinpoint the issue?
13:42:35  <bnoordhuis>that's weird, node doesn't use uv_barrier_wait
13:43:07  <superdealloc>I'm on 0.10.13
13:43:33  <bnoordhuis>what does `info sharedlibrary` print at that point?
13:43:48  <bnoordhuis>and re: 0.10.13, that doesn't use uv_barrier_wait either
13:44:52  <superdealloc>let me gist that for you
13:45:19  <superdealloc>https://gist.github.com/andremedeiros/2edb8a5cfab9d27c31df
13:46:28  * kazupon quit (Remote host closed the connection)
13:46:38  <indutny>Domenic_: care to review https://gist.github.com/indutny/d267a53e69e8d4de0eb9 ?
13:46:41  <indutny>bnoordhuis: /cc ^
13:46:47  <indutny>basically
13:46:55  <indutny>sandbox object is not referenced by anyone
13:47:02  <indutny>i.e. when you do
13:47:09  <Domenic_>hmm
13:47:09  <indutny>vm.runInNewContext('(function() {})')
13:47:13  <indutny>and then use that function
13:47:18  <bnoordhuis>superdealloc: looks okay to me
13:47:19  <indutny>there's no reference to sandbox object
13:47:29  <Domenic_>right that would be the case for runInNewContext
13:47:34  <Domenic_>what about for running in existing contexts
13:47:35  <indutny>yeah
13:47:39  <indutny>well
13:47:45  <bnoordhuis>superdealloc: is that a release build? that might explain the odd backtrace
13:47:52  <indutny>Domenic_: they'll be referenced anyway
13:47:58  <superdealloc>that's compiled by nodenv
13:48:00  <indutny>I don't think it matters much
13:48:18  <indutny>Domenic_: do you have a counterexample?
13:48:37  <Domenic_>indutny: hmm that makes sense. but we still need to dealloc the contextify context when you GC the sandbox, I think?
13:48:44  <indutny>its deallocated
13:48:47  <indutny>well
13:48:51  <indutny>and sandbox should not retain it
13:48:57  <indutny>aaah
13:48:58  <indutny>I see
13:49:02  <indutny>we should not dealloc before it
13:49:10  <indutny>so both global and sandbox object should die
13:49:16  <indutny>is that what you're talking about?
13:49:31  <Domenic_>i think that already happens, cuz it calls the destructor
13:49:37  <bnoordhuis>superdealloc: try cloning the repo, then check out the right tag and build from that
13:49:44  <Domenic_>which deallocs both proxyglobal and sandbox
13:49:51  <superdealloc>oof, will do. might take some time tho
13:49:59  <indutny>no
13:50:02  <indutny>that's not what I mean
13:50:25  <Domenic_>oh you mean you should only call SandboxFreeCallback once both die
13:50:35  <indutny>Domenic_: I mean that it should be destroyed once both the sandbox and the global object are no longer referenced
13:50:36  <indutny>Domenic_: yes
13:50:42  <indutny>because you can create sandbox
13:50:43  <Domenic_>yes that makes perfect sense
13:50:45  <indutny>ok
13:50:51  <indutny>opening PR
13:51:46  <bnoordhuis>superdealloc: `make -j<num_cores>` - on a 32 core system, it takes less than 30 seconds
13:52:12  <superdealloc>https://github.com/joyent/node
13:52:13  <Domenic_>indutny: very nice catch, thanks so much for finding it.
13:52:14  <superdealloc>from here?
13:52:23  <indutny>Domenic_: np, it was blocking my work :)
13:52:24  <indutny>hah
13:52:32  <bnoordhuis>superdealloc: yes
13:53:04  * kevinswiber quit (Remote host closed the connection)
13:53:16  * AvianFlu quit (Remote host closed the connection)
13:53:26  <Domenic_>indutny: is it something you can create a test for? I really want to make vm2 have nice test coverage for all annoying stupid edge cases.
13:53:34  <indutny>yeah
13:53:37  <indutny>I believe I can
13:53:41  <Domenic_>sweeeeet
13:55:51  <indutny>Domenic_: I can reproduce it in three lines :)
13:55:51  <indutny>var vm = require('vm');
13:55:51  <indutny>var fn = vm.runInNewContext('(function() { obj.p = {}; })', { obj: {} })
13:55:51  <indutny>while (true)
13:55:51  <indutny> fn();
13:55:53  <indutny>oh
13:55:54  <indutny>four
13:56:16  <Domenic_>hah
13:56:31  <Domenic_>i wonder if it would happen immediately if you did --expose_gc and added a gc()
13:57:04  <indutny>probably yes
13:57:23  <indutny>yeah, it does
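
[editor's note] indutny's four-line repro from above, with the --expose_gc variant Domenic_ suggests; per the log it crashes immediately on builds without the contextify fix that lands later in this session:

    // run with: node --expose_gc repro.js
    var vm = require('vm');
    var fn = vm.runInNewContext('(function() { obj.p = {}; })', { obj: {} });
    gc();    // collect the now-unreferenced sandbox/context
    fn();    // crashes on affected builds
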
13:59:55  <superdealloc>bnoordhuis: now I get something different.
14:01:04  <superdealloc>bnoordhuis: https://gist.github.com/andremedeiros/2edb8a5cfab9d27c31df#file-gistfile2-txt
14:02:26  <superdealloc>Oh hold on, I'm being an idiot.
14:03:53  <superdealloc>Here's the correct one: https://gist.github.com/andremedeiros/2edb8a5cfab9d27c31df#file-gistfile2-txt
14:04:12  <superdealloc>uv print handles prints nothing.
14:05:14  <bnoordhuis>superdealloc: what does `f 1; p loop->active_reqs; p &loop->active_reqs` print?
14:06:10  * kevinswiber joined
14:06:17  <superdealloc>bnoordhuis: https://gist.github.com/andremedeiros/2edb8a5cfab9d27c31df#file-gistfile3-txt
14:06:17  <bnoordhuis>superdealloc: oh, and the same please for loop->handle_queue
14:07:06  <superdealloc>Done and done.
14:07:51  <bnoordhuis>superdealloc: thanks. looks like there are active handles. curious that uv__print_handle() doesn't print them
14:08:18  <superdealloc>Yeah :-(
14:08:19  * chrisdickinson quit (Ping timeout: 260 seconds)
14:09:04  <bnoordhuis>how are you starting node? just `node`, no arguments?
14:09:09  <superdealloc>yup
14:09:30  <superdealloc>well, I'm passing 'pow' as an argument
14:09:30  <bnoordhuis>pow being?
14:09:31  <superdealloc>a local dns responder thingie
14:09:44  <superdealloc>makes your computer respond to .dev domains.
14:09:45  <bnoordhuis>okay, so you pass it a script?
14:09:48  <superdealloc>yeah
14:10:02  <bnoordhuis>does that script create sockets, servers, timers, etc.?
14:10:32  <bnoordhuis>i'm guessing yes
14:11:33  <superdealloc>not what I'm running, no :-(
14:11:41  <superdealloc>it asynchronously runs a bunch of scripts to generate files
14:12:16  <bnoordhuis>async in this context meaning ... ?
14:13:17  <superdealloc>a utility module that runs a bunch of operations asynchronously
14:13:19  <superdealloc>https://github.com/caolan/async
14:13:32  * chrisdickinson joined
14:13:57  <bnoordhuis>right. so what is exactly the issue?
14:14:06  <superdealloc>so this just generates a couple files from a template and puts them in directories
14:14:09  <superdealloc>but node is hanging
14:14:21  <superdealloc>the process hangs, I can't send it USR1 and run node debug, that hangs too
14:14:48  <superdealloc>so I have no idea of where it's stuck and what the issue is
14:15:01  <superdealloc>I'm running this from a puppet provisioning script
14:15:05  <bnoordhuis>what happens when you attach dtruss?
14:15:44  <superdealloc>I see this
14:15:45  <superdealloc>SYSCALL(args) = return
14:15:54  <superdealloc>oh wait
14:15:56  <superdealloc>let me restart that
14:16:41  <superdealloc>ok, same thing
14:17:11  <bnoordhuis>that's the only thing it prints?
14:17:17  <superdealloc>yup :(
14:17:33  <bnoordhuis>okay... what happens when you run it like this: `sudo dtruss node script.js`?
14:17:54  <superdealloc>Wait, I'm getting something
14:18:23  <superdealloc>o/b/repo ❯❯❯ sudo dtruss -p 74572 ⏎
14:18:23  <superdealloc>SYSCALL(args) = return
14:18:24  <superdealloc>workq_kernreturn(0x20, 0x0, 0x1) = 0 0
14:18:24  <superdealloc>kevent(0xA, 0x0, 0x0) = 1 0
14:18:24  <superdealloc>thread_selfid(0x7FFF75E34278, 0x0, 0xFFFFFFFF) = 108211 0
14:18:24  <superdealloc>kevent(0xA, 0x101980DB8, 0x1) = 1 0
14:18:24  <superdealloc>thread_selfid(0x101900000, 0x83000, 0x2207) = 108260 0
14:18:34  <superdealloc>__disable_threadsignal(0x1, 0x0, 0x0) = 0 0
14:18:34  <superdealloc>OOPS! Sorry :S that was meant to be a gist link
14:19:19  <bnoordhuis>okay, so at least it's doing something
14:19:46  <bnoordhuis>the fact that you can't attach with `node debug` might mean that something's installing a SIGUSR1 handler
14:20:39  <bnoordhuis>one thing you can try is to hack the script, make it call a function every second or so that prints process._getActiveHandles() and process._getActiveRequests()
14:20:59  <superdealloc>Yeah, I'll look into that...
14:21:08  <superdealloc>I'll also try a more recent nodejs version
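
[editor's note] A sketch of the hack bnoordhuis suggests — have the hanging script print its active handles and requests once a second. process._getActiveHandles() and process._getActiveRequests() are undocumented node internals, so treat this as a debugging aid only (the interval itself will also show up as an active timer):

    setInterval(function() {
      console.error('handles:', process._getActiveHandles());
      console.error('requests:', process._getActiveRequests());
    }, 1000);
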
14:25:37  * `3E|GONE changed nick to `3rdEden
14:30:54  <indutny>ergh
14:30:56  <indutny>internet problems
14:31:00  <indutny>Domenic_: bnoordhuis : https://github.com/joyent/node/pull/6213
14:33:47  * kazupon joined
14:34:24  * Kakera joined
14:36:11  * bradleymeck joined
14:44:35  * bnoordhuis quit (Ping timeout: 245 seconds)
14:51:38  * bnoordhuis joined
14:56:13  <MI6>joyent/node: Fedor Indutny master * 3395cb1 : contextify: dealloc only after global and sandbox - http://git.io/woP9zA
14:56:28  <indutny>tadam
14:56:29  <indutny>landed
14:56:30  <indutny>thanks
14:56:36  * csaoh quit (Quit: csaoh)
15:00:20  * csaoh joined
15:02:57  * indexzero joined
15:05:32  <MI6>nodejs-master: #550 UNSTABLE smartos-x64 (5/639) http://jenkins.nodejs.org/job/nodejs-master/550/
15:09:21  <Domenic_>silly indutny, I am not a real reviewer :P
15:10:15  <bnoordhuis>i am however. left some comments
15:15:40  <MI6>nodejs-master-windows: #344 UNSTABLE windows-x64 (22/639) windows-ia32 (22/639) http://jenkins.nodejs.org/job/nodejs-master-windows/344/
15:17:23  <MI6>nodejs-master: #551 UNSTABLE smartos-ia32 (1/639) smartos-x64 (6/639) linux-ia32 (1/639) http://jenkins.nodejs.org/job/nodejs-master/551/
15:23:04  * csaoh quit (Quit: csaoh)
15:23:05  * mbana quit (Quit: Leaving)
15:38:57  <indutny>:)
15:39:51  <indutny>btw
15:39:56  <indutny>Domenic_: comma first style? :)
15:39:58  <indutny>in C++
15:40:00  <indutny>gosh
15:40:51  <MI6>joyent/node: Fedor Indutny master * 3d4c663 : contextify: dealloc only after global and sandbox - http://git.io/wQVXxw
15:40:54  <Domenic_>indutny: where!? i would never...
15:41:29  <Domenic_>indutny: that's bnoordhuis's fault, when he switched to the initializers.
15:43:27  <bnoordhuis>it's not my preferred style but it keeps down noise in the diff when you add members
15:45:23  <indutny>aaah
15:45:26  <indutny>bnoordhuis: offender
15:46:49  * c4milo joined
15:48:38  <bnoordhuis>cpplint doesn't complain
15:48:55  <bnoordhuis>and like we say in the banking industry, "if it ain't illegal, it's legal"
15:49:34  * xaka joined
15:49:37  <MI6>nodejs-master: #552 UNSTABLE linux-x64 (1/639) smartos-x64 (6/639) http://jenkins.nodejs.org/job/nodejs-master/552/
15:59:19  <MI6>nodejs-master-windows: #345 UNSTABLE windows-x64 (21/639) windows-ia32 (21/639) http://jenkins.nodejs.org/job/nodejs-master-windows/345/
16:00:34  * julianduque joined
16:04:06  * bradleymeck quit (Quit: bradleymeck)
16:06:44  * bradleymeck joined
16:07:10  * brson joined
16:08:22  <indutny>bnoordhuis: what if I make it complain? :)
16:13:17  * TooTallNate joined
16:13:35  <bnoordhuis>indutny: why would you want to? you know the rationale now, right?
16:13:44  <indutny>well, yeah
16:13:48  <indutny>it doesn't seem to be rational
16:13:54  <indutny>to write inconsistent code
16:14:07  <indutny>and I don't like the idea of rewriting all the other code because of this
16:14:12  <indutny>so it's either the rest or it
16:14:17  <bnoordhuis>it's not inconsistent if all initializers use that style
16:14:23  <indutny>indeed
16:14:27  <indutny>either all files
16:14:28  <indutny>or one
16:14:33  <indutny>what to choose… hm...
16:14:34  <indutny>:)
16:16:37  <bnoordhuis>github has fancy animations now?
16:16:53  <bnoordhuis>i could swear it was doing something animated when i just posted a comment
16:17:16  * Chip_Zero quit (Ping timeout: 264 seconds)
16:17:55  * amartens joined
16:20:14  <indutny>yeah
16:20:15  <indutny>it does
16:21:48  * M28 quit (Read error: Connection reset by peer)
16:21:53  * M28_ joined
16:23:08  * Chip_Zero joined
16:24:07  * zot joined
16:26:18  <zot>when doing a tcp connect, in the connect_cb, can connect_t->handle ever NOT equal the original &uv_tcp_t?
16:26:29  <zot>(assuming non-failure cases here)
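
[editor's note] zot's question goes unanswered in the log. libuv stores the stream a connect request was started on in req->handle, so in the non-failure case it is the same uv_tcp_t that was passed to uv_tcp_connect. A minimal sketch, written against the libuv 1.x signatures (the API of this era passed struct sockaddr_in by value instead):

    #include <uv.h>

    static void on_connect(uv_connect_t* req, int status) {
      uv_tcp_t* tcp = (uv_tcp_t*) req->handle;  /* same handle given to uv_tcp_connect */
      if (status < 0)
        return;                                 /* connection failed */
      (void) tcp;  /* a real callback would start reads/writes on it here */
    }

    void connect_example(uv_loop_t* loop, uv_tcp_t* tcp, uv_connect_t* req) {
      struct sockaddr_in addr;
      uv_ip4_addr("127.0.0.1", 8000, &addr);
      uv_tcp_init(loop, tcp);
      uv_tcp_connect(req, tcp, (const struct sockaddr*) &addr, on_connect);
    }
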
16:31:10  * kazupon quit (Remote host closed the connection)
16:37:12  * inolen quit (Quit: Leaving.)
16:40:43  * csaoh joined
16:42:34  * dominictarr joined
16:46:14  * TooTallNate quit (Quit: Computer has gone to sleep.)
16:58:28  * csaoh quit (Quit: csaoh)
17:04:44  * TooTallNate joined
17:05:32  * kevinswiber quit (Remote host closed the connection)
17:06:42  * julianduque quit (Ping timeout: 264 seconds)
17:08:58  * zot quit (Ping timeout: 268 seconds)
17:12:02  * bajtos joined
17:12:54  * inolen joined
17:15:18  * ecr joined
17:16:33  * dominictarr quit (Quit: dominictarr)
17:20:23  * EhevuTov joined
17:20:45  * groundwater joined
17:32:05  * kazupon joined
17:37:18  * kazupon quit (Ping timeout: 240 seconds)
17:53:40  <MI6>libuv-master: #239 UNSTABLE windows (3/195) smartos (9/194) http://jenkins.nodejs.org/job/libuv-master/239/
18:02:25  * kevinswiber joined
18:06:33  <MI6>libuv-node-integration: #224 UNSTABLE smartos-x64 (5/639) http://jenkins.nodejs.org/job/libuv-node-integration/224/
18:07:38  * Chip_Zero quit (Ping timeout: 245 seconds)
18:08:54  * Chip_Zero joined
18:08:55  * Chip_Zero quit (Changing host)
18:08:55  * Chip_Zero joined
18:09:16  * bajtos quit (Ping timeout: 260 seconds)
18:12:55  * bajtos joined
18:13:21  * bajtos quit (Client Quit)
18:18:43  * defunctzombie_zz changed nick to defunctzombie
18:20:24  * defunctzombie changed nick to defunctzombie_zz
18:28:43  * kenperkins joined
18:32:13  * CoverSlide joined
18:33:25  * kazupon joined
18:38:38  * kazupon quit (Ping timeout: 264 seconds)
18:39:22  * kenperkins quit (Quit: Computer has gone to sleep.)
18:45:36  <MI6>nodejs-master-windows: #346 UNSTABLE windows-x64 (21/639) windows-ia32 (20/639) http://jenkins.nodejs.org/job/nodejs-master-windows/346/
18:45:43  * defunctzombie_zz changed nick to defunctzombie
18:52:55  * kevinswiber quit (Remote host closed the connection)
18:55:24  * defunctzombie changed nick to defunctzombie_zz
18:55:55  * kevinswiber joined
19:25:45  * defunctzombie_zz changed nick to defunctzombie
19:34:05  * kazupon joined
19:38:43  * kazupon quit (Ping timeout: 260 seconds)
19:45:43  * TooTallNate quit (Quit: Computer has gone to sleep.)
19:48:50  * groundwater quit (Ping timeout: 264 seconds)
20:07:36  * groundwater joined
20:12:15  * TooTallNate joined
20:24:40  * niska quit (Ping timeout: 240 seconds)
20:25:52  * indexzero quit (Quit: indexzero)
20:26:02  * niska joined
20:26:22  * indexzero joined
20:27:02  * ecr quit (Ping timeout: 240 seconds)
20:27:28  * indexzero quit (Client Quit)
20:29:51  * ecr joined
20:34:40  * kazupon joined
20:36:43  * bradleymeck quit (Quit: bradleymeck)
20:38:13  <TooTallNate>can someone explain to me why node uses hard-coded CAs rather than looking them up from the operating system?
20:38:19  <TooTallNate>i'm sure there's a good reason...
20:39:23  * kazupon quit (Ping timeout: 260 seconds)
20:46:19  <bnoordhuis>TooTallNate: consistency, mostly. different systems have different root ca lists, some systems have more than one ca store, some have none
20:46:54  <bnoordhuis>that said, one of my todos is to write a script that downloads the ca list from mozilla and converts it to something node can use
20:47:10  <bnoordhuis>because we're not very good at keeping it up to date right now
20:47:45  * julianduque joined
20:48:31  <TooTallNate>bnoordhuis: that makes sense, thanks
20:48:42  <TooTallNate>we're probably going to look into writing an npm module then
20:48:52  <TooTallNate>that attempts to find the OS's system certificates
20:49:31  <bnoordhuis>good luck. my fedora system has two separate ca stores, my ubuntu system has three :-/
20:50:28  * hz quit (Read error: Connection reset by peer)
20:50:30  <bnoordhuis>at least i'm not wanting for choice
20:53:09  <TooTallNate>bnoordhuis: we're specifically trying to handle the case where a user has a self-signed root cert which intercepts all SSL traffic for the purposes of a pornography filter
20:53:17  <TooTallNate>bnoordhuis: so i'm not too worried about linux for that case :p
20:53:22  * hz joined
20:53:34  <bnoordhuis>heh
20:53:49  <TooTallNate>bnoordhuis: so what is the "format" that the "ca" arg is expecting then?
20:54:56  <bnoordhuis>TooTallNate: PEM
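
[editor's note] A sketch of what "PEM" means for the ca option here: PEM-encoded certificate text passed as a string/Buffer (or an array of them) in the tls/https options. The hostname and file name below are placeholders:

    var fs = require('fs');
    var https = require('https');

    var options = {
      hostname: 'example.com',
      port: 443,
      path: '/',
      // contents of a PEM file, e.g. "-----BEGIN CERTIFICATE-----\n..."
      ca: [ fs.readFileSync('custom-root.pem') ]
    };

    https.get(options, function(res) {
      console.log('status:', res.statusCode);
    });
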
20:58:55  * defunctzombie changed nick to defunctzombie_zz
20:59:29  * jmar777 quit (Read error: Connection reset by peer)
20:59:59  * jmar777 joined
21:00:45  * kevinswiber quit (Remote host closed the connection)
21:04:47  * jmar777 quit (Remote host closed the connection)
21:07:27  * julianduque quit (Remote host closed the connection)
21:07:50  * kevinswiber joined
21:08:44  * julianduque joined
21:17:23  * kenperkins joined
21:18:06  * kevinswiber quit (Remote host closed the connection)
21:18:52  * kevinswiber joined
21:22:42  * kenperkins quit (Ping timeout: 264 seconds)
21:22:44  * kevinswiber quit (Remote host closed the connection)
21:27:43  <bnoordhuis> 20.23 │4008fa: add $0x1,%ecx <- why the hell does gcc generate this code for ++n?
21:28:04  <bnoordhuis>that's one-fifth of the cpu time spent in my inner loop right there
21:29:31  <bnoordhuis> │ child->right = node->right; ▒
21:29:34  <bnoordhuis> 26.68 │4009d6: mov 0x8(%rdi),%rdx
21:29:50  <bnoordhuis>that makes me sad too, but i guess it can't be helped
21:30:21  <bnoordhuis>and it's only for node->right. updating the left and parent pointers doesn't cost nearly as much
21:30:54  <bnoordhuis>i wonder if there's some cache line bouncing thing going on
21:33:35  * kevinswiber joined
21:35:07  * kevinswiber quit (Remote host closed the connection)
21:35:42  * kazupon joined
21:38:11  * julianduque quit (Ping timeout: 260 seconds)
21:39:59  * julianduque joined
21:40:26  * kazupon quit (Ping timeout: 264 seconds)
21:49:13  * EhevuTov quit (Quit: This computer has gone to sleep)
22:03:21  * EhevuTov joined
22:09:32  * hz quit
22:11:36  * Kakera quit (Ping timeout: 256 seconds)
22:14:06  * AvianFlu joined
22:15:51  * AvianFlu quit (Read error: Connection reset by peer)
22:18:38  * AvianFlu joined
22:19:39  * AvianFlu quit (Remote host closed the connection)
22:36:12  * kazupon joined
22:40:43  * kazupon quit (Ping timeout: 260 seconds)
22:52:43  * CoverSlide quit (Ping timeout: 246 seconds)
23:08:30  * bnoordhuis quit (Ping timeout: 264 seconds)
23:08:46  * indexzero joined
23:14:30  * defunctzombie_zz changed nick to defunctzombie
23:22:05  * defunctzombie changed nick to defunctzombie_zz
23:35:53  * defunctzombie_zz changed nick to defunctzombie
23:36:44  * kazupon joined
23:41:22  * kazupon quit (Ping timeout: 246 seconds)
23:42:06  * defunctzombie changed nick to defunctzombie_zz
23:53:11  * ecr quit (Quit: ecr)
23:59:55  <MI6>libuv-master-gyp: #178 UNSTABLE windows-x64 (3/195) smartos-ia32 (2/194) windows-ia32 (3/195) smartos-x64 (2/194) http://jenkins.nodejs.org/job/libuv-master-gyp/178/