00:00:00  * ircretary quit (Remote host closed the connection)
00:00:08  * ircretary joined
00:03:37  * txdv quit (Read error: Connection reset by peer)
00:04:31  * txdv joined
00:07:55  * indexzero quit (Read error: Connection reset by peer)
00:09:47  * indexzero joined
00:22:52  * daviddias quit (Remote host closed the connection)
00:23:24  * daviddias joined
00:27:38  * daviddias quit (Ping timeout: 240 seconds)
00:29:56  * indexzero quit (Quit: indexzero)
00:32:04  * zz_karupanerura changed nick to karupanerura
00:36:55  * thlorenz quit (Remote host closed the connection)
00:40:08  * calvinfo joined
00:41:45  * hz quit
00:43:35  * mikeal quit (Quit: Leaving.)
00:50:26  * calvinfo quit (Quit: Leaving.)
00:51:57  * calvinfo joined
00:53:55  * kazupon joined
00:54:08  * kazupon quit (Remote host closed the connection)
00:54:15  * kazupon joined
00:58:06  * calvinfo quit (Quit: Leaving.)
01:02:50  * indexzero joined
01:15:32  * indexzero quit (Quit: indexzero)
01:21:06  * abraxas joined
01:22:15  * m76 quit (Read error: Connection reset by peer)
01:26:39  * TooTallNate quit (Quit: Computer has gone to sleep.)
01:34:22  * mikeal joined
01:44:07  * brson quit (Quit: leaving)
01:47:04  * kazupon quit (Remote host closed the connection)
01:48:48  * rmg joined
01:53:01  * rmg quit (Ping timeout: 240 seconds)
01:56:44  * thlorenz joined
01:59:13  * thlorenz quit (Remote host closed the connection)
01:59:13  * inolen quit (Read error: Connection reset by peer)
02:00:15  * thlorenz joined
02:00:32  * inolen joined
02:01:02  * thlorenz quit (Remote host closed the connection)
02:03:01  * kazupon joined
02:05:26  * brucem_ changed nick to brucem
02:10:28  * inolen1 joined
02:10:28  * inolen quit (Read error: Connection reset by peer)
02:14:31  * thlorenz joined
02:49:02  * daviddias joined
02:51:33  * daviddias quit (Read error: No route to host)
02:51:35  * __rockbot__ joined
03:07:41  * __rockbot__ quit (Quit: __rockbot__)
03:27:15  * kazupon quit (Remote host closed the connection)
03:27:41  * TooTallNate joined
03:29:30  * kazupon joined
03:41:13  * kazupon quit (Remote host closed the connection)
03:48:53  * rmg joined
03:52:54  * __rockbot__ joined
03:53:09  * rmg quit (Ping timeout: 248 seconds)
04:04:22  * kazupon joined
04:31:17  * abraxas quit (Remote host closed the connection)
04:38:24  * mikeal quit (Quit: Leaving.)
05:00:27  * mikeal joined
05:10:56  * stagas joined
05:27:34  * guilleiguaran joined
05:30:19  * thlorenz quit (Remote host closed the connection)
05:35:10  * abraxas joined
05:40:16  * daviddias joined
05:45:14  * daviddias quit (Ping timeout: 264 seconds)
05:47:42  * mikeal quit (Quit: Leaving.)
05:51:29  * vptr quit (Quit: WeeChat 0.3.5)
05:54:41  * __rockbot__ quit (Quit: __rockbot__)
05:58:48  * TooTallNate quit (Quit: Computer has gone to sleep.)
06:01:12  * mikeal joined
06:03:01  * superjoe30 quit (Ping timeout: 265 seconds)
06:05:06  * TooTallNate joined
06:10:30  * superjoe30 joined
06:16:52  * calvinfo joined
06:16:52  * calvinfo quit (Client Quit)
06:18:31  * TooTallNate quit (Quit: Computer has gone to sleep.)
06:26:55  * superjoe30 changed nick to superjoe
06:35:08  * daviddias joined
06:39:50  * daviddias quit (Ping timeout: 264 seconds)
06:40:38  * daviddias joined
06:41:13  * mikeal quit (Quit: Leaving.)
06:41:38  <MI6>nodejs-v0.10-windows: #420 UNSTABLE windows-x64 (11/608) windows-ia32 (12/608) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/420/
06:45:11  * daviddias quit (Ping timeout: 260 seconds)
06:47:16  * calvinfo joined
06:51:45  * calvinfo quit (Ping timeout: 245 seconds)
06:56:15  * insertcoffee joined
06:56:55  * mikeal joined
07:07:47  * wolfeidau quit (Ping timeout: 272 seconds)
07:08:44  * LeftWing quit (Remote host closed the connection)
07:08:51  * LeftWing joined
07:09:53  * abraxas quit (Remote host closed the connection)
07:14:16  * abraxas joined
07:15:24  * calvinfo joined
07:22:23  * mikeal quit (Quit: Leaving.)
07:26:43  * abraxas quit (Remote host closed the connection)
07:44:25  * insertcoffee quit (Ping timeout: 246 seconds)
07:47:01  * rendar joined
07:48:16  * nickleefly joined
07:48:16  * Domenic_ joined
07:48:16  * iamstef joined
07:48:16  * Damn3d joined
07:49:31  * dsantiago quit (Ping timeout: 240 seconds)
07:51:28  * kazupon quit (Remote host closed the connection)
07:53:10  * insertcoffee joined
07:54:02  * m76 joined
07:55:14  * dsantiago joined
07:56:01  * kazupon joined
07:56:07  * hz joined
08:03:49  * insertcoffee quit (Read error: Connection reset by peer)
08:05:04  * AvianFlu quit (Remote host closed the connection)
08:05:55  * dsantiago quit (Ping timeout: 260 seconds)
08:14:14  * dsantiago joined
08:27:46  * hz quit
08:33:03  * mikeal joined
08:35:01  * abraxas joined
08:38:49  * calvinfo quit (Read error: Connection reset by peer)
08:40:08  * hz joined
08:45:28  * abraxas quit (Remote host closed the connection)
08:48:29  * abraxas joined
09:14:40  <rendar>https://github.com/joyent/libuv/blob/master/src/unix/process.c -- i cannot understand line 392: why should a new child process signal the parent process with an EPIPE when it calls exec*() to avoid that race condition, if the code between fork() and exec*() is completely controlled by libuv and the user can't raise a signal there?
09:16:31  * superjoe quit (Quit: Leaving)
09:33:32  <indutny>well
09:33:37  * stagas quit (Ping timeout: 246 seconds)
09:33:39  <indutny>parent process continues to execute after fork()
09:34:14  <indutny>and child won't receive any signals
09:34:16  <indutny>if it won't block
09:37:35  <rendar>indutny, hmmm, so that is needed only for letting the parent wait on the child's exec? i.e. letting the parent know when the "limbo" state between the child's fork() and exec*() is finished?
09:37:42  <indutny>yep
09:42:21  <rendar>indutny, i got that, but why a pipe, and maybe not a piece of shared memory?
09:44:20  <rendar>indutny, e.g. a shared integer 0 and the child process set that integer to 1 before calling exec*() ? because in the time between setting that integer to 1 and calling exec*() there could be a race condition?
09:44:45  <indutny>shared memory
09:44:46  <indutny>meh
09:44:57  <indutny>well
09:45:08  <indutny>it won't be that atomic
09:46:48  <rendar>right
09:47:31  <rendar>indutny, well, that race condition is because of signals right? i mean, if one disables *all* signals in the parent process, and the child process inherits that, we don't need this pipe trick, am i right?
09:57:19  * inolen1 quit (Quit: Leaving.)
09:57:54  <felicity>also a pipe is much simpler than shared memory and i doubt this is a major performance bottleneck
10:00:07  <indutny>it is not
10:00:31  <indutny>rendar: it's just simpler than doing everything that you mentioned
10:00:32  * abraxas quit (Remote host closed the connection)
10:00:36  <indutny>and also atomic
10:01:08  * abraxas joined
10:03:35  <rendar>indutny, i see
10:03:39  <rendar>indutny, yeah right
10:04:17  <rendar>indutny, i was just meaning in a hypothetical case where signals are all blocked, we wouldn't need that because that is needed only for signals, right?
10:05:16  <indutny>I think yes
10:05:48  * abraxas quit (Ping timeout: 276 seconds)
10:06:23  * brett19 quit (Ping timeout: 272 seconds)
10:08:53  * brett19 joined
10:27:03  * wolfeidau joined
10:40:10  * abraxas joined
10:48:50  <MI6>nodejs-v0.10: #1696 UNSTABLE linux-x64 (5/608) osx-x64 (1/608) linux-ia32 (3/608) smartos-x64 (5/608) smartos-ia32 (5/608) osx-ia32 (1/608) http://jenkins.nodejs.org/job/nodejs-v0.10/1696/
11:00:09  * abraxas quit (Remote host closed the connection)
11:00:54  * kazupon quit (Remote host closed the connection)
11:15:08  * karupanerura changed nick to zz_karupanerura
11:15:37  * daviddias joined
11:29:51  * wolfeida_ joined
11:31:26  * wolfeidau quit (Ping timeout: 240 seconds)
12:15:29  * wolfeida_ changed nick to wolfeidau
12:21:00  * roxlu joined
12:22:09  <roxlu>hey guys, I was wondering, when I use a uv_mutex_t with a uv_cond_t, can I still use that mutex to synchronize some other data besides the data I initialized the mutex/cond for?
12:23:17  * roxlu hopes he makes any sense ^.^
12:38:21  * kazupon joined
12:58:41  * kazupon quit (Remote host closed the connection)
13:01:15  * abraxas joined
13:02:25  * zz_karupanerura changed nick to karupanerura
13:05:33  * abraxas quit (Ping timeout: 245 seconds)
13:17:31  * karupanerura changed nick to zz_karupanerura
13:44:55  * piscisaureus joined
14:01:07  * M28 quit (Ping timeout: 260 seconds)
14:03:29  <mmalecki>indutny: hey Fedor
14:04:08  <mmalecki>indutny: does your dht module support initial host discovery through any means?
14:04:29  <indutny>hey man
14:04:32  <indutny>nope
14:04:38  <indutny>you should bootstrap it yourself
14:04:45  <mmalecki>what'd you recommend for bootstrapping it?
14:05:09  <mmalecki>I was thinking of using UDP multicast
14:05:28  <indutny>hm...
14:05:43  <indutny>I'd use some big list of nodes
14:05:46  <indutny>with sparse ids
14:06:01  <indutny>perhaps 40-80
14:06:05  <indutny>or
14:06:08  <indutny>centralized server
14:06:24  <indutny>but perhaps UDP will work too
14:06:34  <indutny>I mean multicat
14:06:38  <indutny>multicast*
14:09:33  * kazupon joined
14:09:42  * AvianFlu joined
14:10:36  * rphillips_ changed nick to rphillips
14:13:53  * kazupon quit (Ping timeout: 245 seconds)
14:30:13  * thlorenz joined
14:30:49  * thlorenz quit (Remote host closed the connection)
14:37:43  * AvianFlu quit (Remote host closed the connection)
14:42:39  * AvianFlu joined
14:52:32  * zot1 joined
14:53:44  * zot1 part
14:54:45  * AvianFlu quit (Remote host closed the connection)
14:57:13  * pachet joined
14:58:25  * pachet quit (Client Quit)
14:58:48  * pachet joined
15:01:18  * thlorenz joined
15:11:12  * bradleymeck joined
15:11:15  * Cheery quit (Ping timeout: 245 seconds)
15:11:16  * bradleymeck quit (Client Quit)
15:12:04  * Cheery joined
15:21:43  <MI6>nodejs-master: #828 UNSTABLE smartos-x64 (6/692) smartos-ia32 (5/692) centos-ia32 (3/692) ubuntu-x64 (2/692) centos-x64 (1/692) http://jenkins.nodejs.org/job/nodejs-master/828/
15:27:25  <tjfontaine>indutny: hey
15:27:39  <indutny>hey man
15:27:41  <indutny>how are you?
15:27:49  <tjfontaine>oh so it's the difference between execFile and exec
15:28:15  <indutny>yep
15:28:30  <tjfontaine>we should be delivering the cmd in that case as well though
15:28:51  <indutny>ok
15:28:57  <indutny>going to update openssl in node
15:29:00  <indutny>to 1.0.1f
15:29:04  <indutny>they have just released it
15:29:06  <tjfontaine>sounds good
15:29:10  <indutny>yeah
15:29:13  <indutny>fixes some crashes
15:29:23  <indutny>http://www.openssl.org/news/openssl-1.0.1-notes.html
15:30:11  <indutny>already did it in bud
15:30:15  <indutny>seems to be working fine :)
15:30:42  <tjfontaine>heh, I trust openssl -- presuming you verify the source archive after their defacing incident :P
15:31:18  <indutny>hahaha
15:31:21  <indutny>yeah, I did
15:37:41  <indutny>tjfontaine: https://github.com/joyent/node/pull/6812
15:37:47  <indutny>tjfontaine: may I ask you to check if it builds on windows?
15:37:54  <indutny>or will CI check it automatically?
15:38:07  <tjfontaine>indutny: push it to a feature branch joyent/node and then the CI will run it on windows
15:38:12  <indutny>ok
15:38:19  <tjfontaine>PRs don't by default because it's asking to run arbitrary code on a windows box ;)
15:38:24  <indutny>:)
15:38:25  <indutny>ok
15:38:25  <MI6>joyent/node: indutny created branch feature/update-openssl1.0.1f - http://git.io/xhegZA
15:38:29  <indutny>also
15:38:38  <indutny>are we interested in applying this
15:38:46  <indutny>https://github.com/indutny/bud/commit/78866c73311cfb2b546ac4e924a807b3fe123850
15:38:52  <indutny>that's google's patch
15:38:55  <indutny>for TLS False Start
15:39:34  <tjfontaine>hmm I'm not opposed to it, but I need to do a proper review when I get into work
15:39:40  <indutny>oh gosh
15:39:46  <indutny>I just noticed a problem with bud
15:39:47  <indutny>one sec
15:41:00  <indutny>with a false start
15:41:30  <indutny>fixed
15:53:09  <tjfontaine>mother fucking windows
15:55:05  <swaj>hey indutny !
15:55:11  <swaj>sorry I was away for Christmas break.
15:55:16  <swaj>going to test setEngine today :)
15:55:20  <indutny>hey man
15:55:20  <indutny>np
15:55:28  <indutny>swaj: I think it should work with just an id now
15:55:34  <swaj>ok
15:55:37  <indutny>swaj: setEngine('atalla')
15:55:40  <swaj>let me go clone and build master
15:55:43  <indutny>sure
15:55:44  <indutny>thank you
15:55:44  <swaj>and I'll run some tests
15:55:49  <indutny>though, I'm going to groceries
15:55:56  <swaj>it's all good
15:56:02  <swaj>early here, so I'll be on for a while
16:01:03  * AvianFlu joined
16:01:52  * AvianFlu_ joined
16:03:01  * AvianFlu quit (Disconnected by services)
16:03:09  * AvianFlu_ changed nick to AvianFLu
16:03:12  <swaj>hmm
16:03:13  * AvianFLu changed nick to AvianFlu
16:03:15  <swaj>having issues
16:03:30  <swaj>indutny: let me know when you're back from getting groceries and I can show you the logs
16:08:45  <MI6>node-review: #139 FAILURE windows-x64 (17/692) centos-x64 (1/692) linux-ia32 (1/692) windows-ia32 (17/692) centos-ia32 (2/692) smartos-ia32 (8/692) smartos-x64 (8/692) osx-x64 (2/692) http://jenkins.nodejs.org/job/node-review/139/
16:20:23  * mikolalysenko joined
16:31:55  * vptr joined
16:37:22  <indutny>swaj: back
16:37:25  <indutny>what's up?
16:45:33  * daviddias quit (Ping timeout: 276 seconds)
16:48:13  * daviddias joined
16:49:42  * AvianFlu_ joined
16:49:58  * AvianFlu quit (Disconnected by services)
16:50:09  * rmg joined
16:50:32  * AvianFlu_ changed nick to AvianFlu
16:57:43  <tjfontaine>indutny: I'm going to trigger that -review build again, seems like it was probably a transient issue for that failure
16:57:54  <indutny>ok
16:57:55  <indutny>ok
16:58:35  <tjfontaine>otherwise it's looking fine
16:59:17  * mikolalysenko quit (Ping timeout: 248 seconds)
17:01:26  * vptr quit (Ping timeout: 264 seconds)
17:02:48  * vptr joined
17:21:52  <MI6>node-review: #140 FAILURE windows-x64 (16/692) centos-x64 (2/692) linux-ia32 (2/692) windows-ia32 (17/692) centos-ia32 (2/692) smartos-ia32 (5/692) smartos-x64 (7/692) http://jenkins.nodejs.org/job/node-review/140/
17:22:37  <isaacs>tjfontaine: https://gitlab.com/
17:22:54  <isaacs>tjfontaine: a free-as-in-beer cloud-hosted open source alternative to github
17:22:54  <tjfontaine>this is the github knock off right?
17:22:59  <isaacs>yes.
17:23:23  <isaacs>i'm going to give it a spin
17:23:23  <tjfontaine>ya, I guess it's time to investigate it further
17:23:29  <tjfontaine>please and thank you :)
17:26:10  * TooTallNate joined
17:26:22  * mikolalysenko joined
17:28:02  * calvinfo joined
17:28:05  * dap_ joined
17:33:49  * octetcloud joined
17:34:58  * bajtos joined
17:53:23  <MI6>libuv-master: #418 UNSTABLE windows (4/202) smartos (3/203) http://jenkins.nodejs.org/job/libuv-master/418/
17:55:51  * stagas joined
18:03:29  <trevnorris>morning
18:08:17  * inolen joined
18:08:17  * inolen quit (Client Quit)
18:08:26  * AvianFlu_ joined
18:08:40  * AvianFlu quit (Disconnected by services)
18:08:42  * AvianFlu_ changed nick to AvianFlu
18:09:34  * inolen joined
18:12:51  <mmalecki>morning trevnorris
18:12:59  <trevnorris>morning
18:13:12  <indutny>mmalecki: any luck?
18:13:31  <trevnorris>joy. today all the interns start.
18:13:33  <indutny>with dht.js
18:13:39  <indutny>trevnorris: at mzla?
18:13:50  <mmalecki>indutny: yes, I went with hard-coding my central server IP for now
18:13:58  <indutny>nice
18:14:00  <indutny>does it work? :D
18:14:05  <indutny>haha
18:14:13  <indutny>I remember testing it like year ago
18:14:27  <indutny>but it should be working fine, I guess
18:14:37  <mmalecki>indutny: hopefully, I'm still stuck with some deployment stuff
18:14:38  <trevnorris>indutny: yup.
18:14:47  <mmalecki>designing new architectures is fun
18:14:50  <mmalecki>until it's not fun
18:14:58  <mmalecki>then you start drinking and it's fun again
18:15:41  <MI6>libuv-node-integration: #373 UNSTABLE linux-x64 (3/692) smartos-ia32 (5/692) smartos-x64 (6/692) http://jenkins.nodejs.org/job/libuv-node-integration/373/
18:18:27  <trevnorris>tjfontaine: ping about https://github.com/joyent/node/pull/6802
18:19:42  <trevnorris>indutny: you know why the close() API for dgram and net are different? i mean, you can pass a callback to .close() in net, but you have to set on('close') on dgram. what's up w/ that?
18:20:44  <trevnorris>indutny / piscisaureus: I'd like your feedback on https://github.com/joyent/node/pull/6802#issuecomment-31593382 and the subsequent two comments
18:22:02  * stagas quit (Quit: Bye)
18:23:08  <trevnorris>groundwater: have that link again for those tests? I want to cherry-pick those into my branch
18:25:34  <piscisaureus>trevnorris: what is eloop?
18:25:57  <trevnorris>event loop i.e. uv_run
18:26:35  <groundwater>trevnorris https://github.com/jacobgroundwater/node/tree/ee-hooks
18:27:00  <piscisaureus>ah
18:28:46  <trevnorris>groundwater: awesome. thanks. going to throw those onto the branch. thanks again for taking the time to create those tests.
18:29:05  <piscisaureus>trevnorris: I'm okay with that change. I think we did the 'early' close callback this way so the libuv close callback wouldn't need another c++ -> js roundtrip.
18:29:09  <tjfontaine>it's just difficult to be able to know who we might break in their assumptions of end/close semantics, it's a fragile piece of node history that is dangerous to change, especially without a need today
18:29:27  * dap_ quit (Quit: Leaving.)
18:29:37  <tjfontaine>we've indicated our intention to close, I'm not sure there's a need to wait for libuv to tell us it has
18:29:41  <piscisaureus>trevnorris: and the handle is "dead" after uv_close anyway
18:29:55  <trevnorris>tjfontaine: regardless, it's broken now. emitting in a nextTick means the eloop is essentially blocked.
18:30:11  <piscisaureus>trevnorris: but if we made synchronous callbacks, that's bad! Was this actually the case?
18:30:14  <trevnorris>piscisaureus: that's fine. the ._handle is set to null as soon as close() is called.
18:30:19  * dap_ joined
18:30:33  <trevnorris>piscisaureus: well, we make a nextTick callback. so the eloop was blocked.
18:30:36  <piscisaureus>trevnorris: but i'll leak into the close callback no?
18:30:37  <trevnorris>it just appeared to be async
18:30:45  <piscisaureus>ah ok that's fine
18:31:00  <piscisaureus>as what happens in lib/...
18:31:34  <trevnorris>tjfontaine: do you mean that end could fire before close w/ this change?
18:32:27  <piscisaureus>trevnorris: euh ? I guess not, but not sure what you mean.
18:32:46  <trevnorris>tjfontaine: and beyond intent, I don't think the call should be in a nextTick. so imo it's a setImmediate() or this patch.
18:33:05  <tjfontaine>I just haven't spent enough cycles on it, I'm just worried about changes in these semantics without knowing of an issue, we can totally change it to a setImmediate that's fine with me
18:33:30  <piscisaureus>trevnorris: it seems to make the code actually simpler, so if it doesn't slow down stuff then I'm okay with it.
18:33:33  <trevnorris>tjfontaine: what i'm saying is that it's essentially the same thing, except this way we're using the libuv api properly.
18:33:47  <piscisaureus>trevnorris: but maybe keep tjfontaine happy and postpone after-0.12 ?
18:34:00  <tjfontaine>nah I think this can go into .12 just need to think about it some more
18:34:01  <piscisaureus>maybe fork off 0.13 already?
18:34:09  <tjfontaine>brb coffee
18:34:11  <trevnorris>tjfontaine: off the top of your head, what would you like me to test?
18:35:21  <trevnorris>piscisaureus: w/ the patch the callbacks aren't run until after uv__finish_close() has completed. it seemed like the correct place in the libuv api the callbacks should be made.
18:36:05  * janjongboom quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
18:36:15  <piscisaureus>trevnorris: well, I'm not necessarily super happy with excessive loop-phase strictness
18:36:43  <piscisaureus>trevnorris: I think the user wouldn't notice anyway.
18:36:56  <trevnorris>piscisaureus: could you help me understand the win uv_run. it's not near the same.
18:37:55  <piscisaureus>trevnorris: messy, also, didn't get the cleanup love that uv-unix got in the last 2 years
18:38:20  <piscisaureus>trevnorris: what's the specific question?
18:38:41  <piscisaureus>trevnorris: close callbacks (and some other types of callbacks too!) are invoked in uv_process_endgames
18:38:54  <trevnorris>piscisaureus: that's what I wanted to know. thanks :)
18:39:02  <piscisaureus>trevnorris: but remember this:
18:39:27  <piscisaureus>trevnorris: * between uv_close and the close callback can be multiple loop iterations on windows
18:39:44  <piscisaureus>trevnorris: uv_process_endgames may also call stuff like the read_cb sometimes
18:40:18  <trevnorris>piscisaureus: is that implementation details, or just how win works?
18:40:31  * mikolalysenko quit (Ping timeout: 240 seconds)
18:42:21  <piscisaureus>trevnorris: the first is how win works, the 2nd is an implementation detail
18:42:28  * mikolalysenko joined
18:42:32  <trevnorris>piscisaureus: also, I thought "close" meant that the stream won't accept anything else coming in. not that it's actually closed.
18:42:34  <trevnorris>ok, cool.
18:43:40  <trevnorris>oy, it's going to take me a while to get the feel for the logic flow for the win and unix side.
18:44:00  <trevnorris>piscisaureus: oh, also. have a guy at apple checking out the pwrite() issue.
18:44:19  <piscisaureus>trevnorris: yay! was just about to ask
18:44:35  <piscisaureus>trevnorris: you have established contact?
18:45:15  <trevnorris>piscisaureus: my friend that programs the kernel drivers is going to write a simplified test case for me, to make sure it's actually a bug.
18:45:28  * mikeal quit (Quit: Leaving.)
18:45:58  <piscisaureus>kewl!
18:46:04  <roxlu>hi guys, i'm using a uv_work_t and spawning some threads. I was wondering, how can I stop all threads that are running when I want to close my app ?
18:46:23  * mikeal joined
18:46:45  <trevnorris>piscisaureus: also, another question about that. so in uv__fs_write it checks if req->off < 0. but the uv_fs_t is only used w/ sendfile, which requires the memory to be mmap-able. which means off can't ever be less than 0.
18:46:53  <trevnorris>piscisaureus: I just must be missing something.
18:47:01  <piscisaureus>trevnorris: as a mozillan - do you have to open a bug for every 2-liner that you want to submit? Or are there more informal ways to get minor stuff in?
18:47:35  <trevnorris>piscisaureus: like, for me personally or just in general?
18:47:55  <piscisaureus>trevnorris: in general?
18:48:19  <piscisaureus>trevnorris: this is for js so I won't ask you to land anything there?
18:48:28  <piscisaureus>s/\?//
18:48:52  <piscisaureus>trevnorris: uv_fs_t is also used for write, read, stat etc
18:49:12  <piscisaureus>roxlu: thread pool threads? other threads?
18:49:54  <trevnorris>piscisaureus: guess my thought was a uv_fs_t would never write to a socket, so ->off would always be >= 0.
18:50:04  <roxlu>piscisaureus: I'm creating mutiple uv_work_t using uv_queue_work
18:50:44  <piscisaureus>roxlu: you can't reliably cancel thread pool work. You'll have to wait until it completes, after that call uv_loop_delete() to join all worker threads.
18:50:54  <piscisaureus>roxlu: if you want to exit quit-n-dirty just call exit()
18:51:10  <trevnorris>piscisaureus: I think you can submit a single bug as long as each change is in a distinct commit.
18:51:14  * mikeal quit (Ping timeout: 264 seconds)
18:51:22  <roxlu>piscisaureus: ok thanks
18:51:40  <roxlu>strange thing is, that when a uv_work_t is active my d'tor isn't even called
18:52:14  <roxlu>I was thinking to add a shared variable which I set to false when the workers need to stop
18:53:26  <piscisaureus>roxlu: hmm. I think uv_loop_delete doesn't actually join threads...
18:53:58  <roxlu>piscisaureus: weird thing is that my call to uv_loop_delete() isn't even called
18:54:04  <roxlu>(I did create a new loop btw)
18:54:47  <piscisaureus>https://github.com/joyent/libuv/blob/master/src/unix/threadpool.c#L132-L153
18:55:06  <piscisaureus>I don't know how Ben intended this to be used ...
18:55:26  <piscisaureus>There's a global thread pool so likely uv_loop_delete won't delete any threads.
18:57:45  * Benvie_ quit (Ping timeout: 272 seconds)
18:58:11  * Benvie joined
18:59:09  <roxlu>hmmm interesting, it looks like it has to do with how GLFW and libuv work. I tell GLFW to close my windows when I press esc, this somehow makes the loop blocking (or it joins some threads)
19:01:42  <trevnorris>piscisaureus: you know if there's a win equivalent of man (2) splice ?
19:03:08  <piscisaureus>trevnorris: there isn't. Is there a mac equivalent?
19:03:28  <trevnorris>piscisaureus: man page says it's linux specific.
19:03:44  <trevnorris>... but that has nothing to do w/ mac
19:03:46  <trevnorris>let me check
19:04:46  <rendar>trevnorris, there is TransferFile or FileTransfer, something like that
19:04:58  <rendar>trevnorris, but that is more like sendfile()
19:05:14  <rendar>trevnorris, i think beside that one, there isn't one
19:05:39  <trevnorris>rendar: cool. thanks.
19:05:43  <rendar>yw
19:05:49  <piscisaureus>TransmitFile / TransmitPackets
19:05:56  <rendar>yeah that one
19:06:23  <rendar>piscisaureus, but, is TransmitFile worth using? i mean, can it really help performance? i think it cannot even supports iocp, iirc
19:06:29  <trevnorris>and that works between any two fd's?
19:07:02  <piscisaureus>rendar: I haven't benchmarked it so I wouldn't know. The api looks like it supports IOCP though.
19:08:51  <roxlu>hmm something else which is interesting, it seems that the threads aren't spawned immediately after calling uv_queue_work
19:10:04  <indutny>trevnorris: sorry was away
19:10:10  <piscisaureus>roxlu: they are started on the first invocation of uv_loop_init or uv_default_loop
19:10:14  <indutny>trevnorris: is your question still relevant?
19:11:50  * AvianFlu quit (Remote host closed the connection)
19:12:18  <roxlu>hmm can't find uv_loop_init()
19:12:46  <trevnorris>indutny: about splice? sure. mainly i'm just curious
19:12:54  <indutny>splice?
19:12:56  * brson joined
19:13:05  <trevnorris>man (2) splice
19:13:35  <trevnorris>copy data between any two fd's and keep it in kernel space.
19:14:34  * AvianFlu joined
19:14:57  <indutny>ah
19:15:15  <indutny>I don't think it works on mac
19:15:17  <indutny>let me check
19:15:51  <indutny>oh
19:15:53  <indutny>sendfile()
19:16:04  <indutny>hm
19:16:10  <indutny>well, that's not exactly it
19:17:09  <trevnorris>indutny: yeah. the reason I like the idea of splice is that it can be done between sockets and any other fd.
19:17:33  <indutny>I know
19:17:33  <trevnorris>but sendfile the in_fd must support mmap.
19:17:48  <indutny>I don't think that it is really feasible
19:17:55  <indutny>to support it on non-linuxes
19:18:01  <indutny>only if emulating it
19:18:25  <trevnorris>yeah. that's what I figured. oh well.
19:19:32  <indutny>I think we had some plans for it
19:19:33  <indutny>in ub
19:19:34  <indutny>uv*
19:19:37  <indutny>but it never worked out
19:20:05  <trevnorris>bummer
19:27:53  * daviddias quit (Remote host closed the connection)
19:28:28  * daviddias joined
19:32:03  * daviddia_ joined
19:32:32  * daviddias quit (Ping timeout: 240 seconds)
19:33:39  * m76 quit (Read error: Connection reset by peer)
19:35:34  * superjoe joined
19:37:32  * mikeal joined
19:42:01  * mikeal quit (Client Quit)
19:44:27  * bajtos quit (Quit: bajtos)
19:58:21  * daviddia_ quit (Remote host closed the connection)
20:14:16  * mikeal joined
20:23:43  * mikolalysenko quit (Ping timeout: 259 seconds)
20:25:24  * mikeal quit (Quit: Leaving.)
20:36:51  * mikeal joined
20:44:25  * mikeal quit (Quit: Leaving.)
20:48:01  * mikolalysenko joined
20:56:03  <tjfontaine>trevnorris, indutny: I've had conversations in recent history, regarding this concept of what we could call require('stream').sendfile -- specifically relating to linux's concept of splice and what that would look like on smartos, and if we could get similar concepts for the bsd's and windows
20:57:06  * mikolalysenko quit (Ping timeout: 276 seconds)
20:57:07  <trevnorris>tjfontaine: why the ... does Socket#destroy() emit synchronously? man this API is inconsistent.
20:57:22  <tjfontaine>trevnorris, indutny: basically it could be implemented in user land today, with an interface like uv_pump(uv_handle_t, uv_handle_t, size_t), such that you poll in/out on the first two and when both handles are ready you start read/write'ing until you get EAGAIN, or max buffer bytes per this iteration
20:57:22  <trevnorris>tjfontaine: and yeah. I like that idea. :)
20:58:09  <tjfontaine>trevnorris: if you're interested in further conversation I can get you in contact with the people I've talked to about it with
20:59:00  <trevnorris>tjfontaine: i do like the idea, but that's definitely a v1.0 thing. :)
20:59:28  <trevnorris>tjfontaine: ok. and about the emit after _handle.close(). Socket#destroy already does this. the emit points are all over the place.
21:00:19  <tjfontaine>my point is only that because we've been inconsistent for years, a path forward to consistency is difficult to achieve without also potentially becoming backwards incompatible
21:02:30  <trevnorris>...
21:03:05  <tjfontaine>frustrating right?
21:04:01  <trevnorris>it's all over the place. Socket#destroy() has the option of emitting the close event on the server, after it decrements the number of connections, but it doesn't check if there are still open connections on the server.
21:04:18  <trevnorris>and why the hell could Socket#destroy() be allowed to fire the close event for the server anyways?
21:04:27  <trevnorris>like, what. the. fuck.
21:05:23  <tjfontaine>destroy is a very final mechanism though, it's like "no really we're going down hard" moment
21:05:58  <trevnorris>but why could destroy on a socket bring down the server?
21:06:18  <tjfontaine>hmm?
21:06:27  <tjfontaine>destroy on the server's socket you mean?
21:07:02  <trevnorris>tjfontaine: this: https://github.com/joyent/node/blob/master/lib/net.js#L468-L475
21:08:01  <trevnorris>tjfontaine: i got this after I made a change to not emit after the actual _handle.close() event was complete, and I started to receive two server close emits if I destroyed the socket
21:08:18  <trevnorris>*to not emit _until_ after
21:08:35  * roxlu quit (Ping timeout: 272 seconds)
21:09:42  <trevnorris>ah crap. dumb ass self._connections check
21:09:56  <trevnorris>didn't see that. but seriously, wtf. and why is it happening synchronously?
21:10:22  <trevnorris>oh wait. it's technically not because it's wrapped in a nextTick...
21:10:45  <trevnorris>sorry. bad mood today. AL has been giving me shit so i'm taking it out on net. :P
21:11:14  <tjfontaine>that's nice of you :P
21:14:45  * mikolalysenko joined
21:19:48  <tjfontaine>trevnorris: btw, what's our story for someone relying on the older domains mechanism for MakeCallback?
21:20:06  <trevnorris>tjfontaine: how do you mean?
21:20:24  <tjfontaine>people who might have just constructed their own object with .domain attached to it
21:21:07  <trevnorris>um... they're SOL.
21:21:45  <tjfontaine>sigh, is there a way to extend domain.add to do the right thing at least?
21:21:53  <tjfontaine>instead of just hooking up 'error'?
21:22:33  <trevnorris>how do you mean "the right thing"? I do have AsyncWrap::{Add,Remove}AsyncListener to set all the proper flags in domain.add
21:23:28  <tjfontaine>hmm, in my rudimentary test which does .add({}) and then passes through node::MakeCallback it's not being caught in the domain
21:24:21  <tjfontaine>ah I see what may be the problem
21:24:57  <trevnorris>this is part of the reason for the EEO API. because people are bastardizing the EE and make it impossible for AL to properly handle all those cases.
21:25:19  <trevnorris>so EEO works sort of like a "fall back" that'll still allow all the domain stuff to get caught like it used to be.
21:25:44  <trevnorris>hell, we bastardize EE
21:25:55  <tjfontaine>hmm, just trying to figure out what we can do for someone who might have been relying on domains without EEs being involved in their binary moduels
21:26:01  <tjfontaine>granted this number may be small
21:26:28  <trevnorris>ok. so node::MakeCallback is no longer used in core. so we could add the check back for the "domain" object property.
21:26:38  <trevnorris>it'll make the call slower, but it won't affect core performance.
21:26:54  <tjfontaine>right, it might be necessary for backwards compatibility
21:27:16  <tjfontaine>I'll try and do a manta query across npm to see if I can find that information out
21:27:40  <trevnorris>tjfontaine: but how are they using it? like, are they just checking if process.domain is set?
21:28:05  <tjfontaine>well, they don't necessarily have to, they're just passing a receiver object with a .domain attached
21:28:13  <trevnorris>because the EE no longer checks for this.domain before emitting an event.
21:28:14  <tjfontaine>they could be attaching it in anyway they wanted
21:28:19  <trevnorris>ok
21:28:38  <tjfontaine>not that we ever had a story about what that meant for addon authors already
21:30:26  <trevnorris>most likely othiym23 would have something to say about this
21:37:07  * daviddias joined
21:41:38  * daviddias quit (Ping timeout: 265 seconds)
21:57:15  * rendar quit (Quit: Leaving)
21:57:59  <othiym23>I think tjfontaine has the right idea, trevnorris. If manta coughs up something that an addon is relying on today, then we may have a problem
21:58:05  <othiym23>otherwise, who cares
21:58:24  <othiym23>I've always discouraged people from relying upon the implementation details of domains
21:58:33  <othiym23>hueniverse might care, though
21:58:42  <othiym23>his stuff is very hands-on in how it consumes domains
22:04:57  <trevnorris>othiym23: you're telling me :P
22:10:16  * daviddia_ joined
22:13:05  * rmg quit (Remote host closed the connection)
22:24:16  <othiym23>trevnorris: not really
22:24:36  <othiym23>I've never touched .domain on anything in any of my stuff
22:24:45  * mikeal joined
22:25:00  <othiym23>and does backwards compatibility get broken if nothing breaks?
22:25:26  * mikeal quit (Client Quit)
22:30:00  <trevnorris>othiym23: heh, i meant more for the hapi domain tests.
22:32:28  <othiym23>well, shit, man, at least somebody's using domains
22:32:30  <othiym23>from my POV
22:39:20  * vptr quit (Quit: WeeChat 0.4.1)
22:42:33  * rmg joined
22:46:38  * rch quit (Changing host)
22:46:38  * rch joined
22:51:17  * c4milo joined
22:58:47  * c4milo quit (Remote host closed the connection)
23:00:02  * mikeal joined
23:01:47  * pquerna quit (Remote host closed the connection)
23:01:58  * pquerna_ joined
23:02:57  * mikeal quit (Client Quit)
23:09:18  * pquerna_ changed nick to pquerna
23:10:49  * mikeal joined
23:13:05  * mikeal quit (Client Quit)
23:19:04  * thlorenz quit (Remote host closed the connection)
23:35:06  * pachet quit (Quit: leaving)
23:53:23  * AvianFlu quit (Read error: Operation timed out)
23:53:28  * daviddia_ quit (Remote host closed the connection)
23:53:57  * daviddias joined
23:57:14  * mikolalysenko quit (Ping timeout: 264 seconds)
23:57:53  * dap_1 joined
23:58:26  * daviddias quit (Ping timeout: 264 seconds)
23:59:41  * inolen1 joined