00:00:01  * ircretary quit (Remote host closed the connection)
00:00:09  * ircretary joined
00:06:13  * Benvie joined
00:08:12  * defunctzombie_zz changed nick to defunctzombie
00:20:30  * Benvie quit (Ping timeout: 264 seconds)
00:45:47  * defunctzombie changed nick to defunctzombie_zz
00:47:35  * kazupon joined
01:06:39  * c4milo quit (Remote host closed the connection)
01:07:11  * c4milo joined
01:11:28  * c4milo quit (Ping timeout: 248 seconds)
01:37:24  * abraxas joined
01:39:04  * c4milo joined
01:39:50  * kazupon quit (Remote host closed the connection)
01:42:25  * inolen1 joined
01:42:25  * inolen quit (Read error: Connection reset by peer)
01:53:03  * kazupon joined
02:13:42  <trevnorris>hello world
02:14:00  <tjfontaine>goodbye cruel world :)
02:36:10  <trevnorris>tjfontaine: heh, how's it been?
02:37:26  <tjfontaine>sok, you?
02:39:19  <trevnorris>nothing much. been doing a little irhydra debugging with mraleph.
02:39:27  <trevnorris>it's not showing deopts in the output.
02:39:58  <trevnorris>also cleaning up the patch, since isaacs is (supposedly) going to be back tomorrow.
02:40:08  <trevnorris>not like he's going to have anything else to do than review my patch. :P
02:42:53  <wolfeidau>Heya, does anyone know what "<anonymous> (as DataHandler):" means in mdb output?
02:44:27  <tjfontaine>that'll be an anonymous function with an assignment I think
02:44:52  <tjfontaine>var foo = function(){}
02:44:54  <tjfontaine>I think
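[editor's note: a minimal sketch of what tjfontaine describes -- an anonymous function expression assigned to a variable, which V8 (and therefore mdb) reports under its inferred name; the constructor bodies here are hypothetical:]

    // shows up in mdb output as "<anonymous> (as DataHandler)"
    var DataHandler = function (stream) {
      this.stream = stream;
    };

    // a named function expression should show up under its own name instead
    var OtherHandler = function OtherHandler(stream) {
      this.stream = stream;
    };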
02:45:44  <wolfeidau>I have a "class like thing" called DataHandler which is getting created in like 3 places
02:46:01  <wolfeidau>Just trying to zero in on which one that is
02:47:14  <wolfeidau>As in new DataHandler :P
02:47:44  <tjfontaine>you have 3 times someone calls `new DataHandler`?
02:49:27  <wolfeidau>Yeah
02:49:36  <wolfeidau>All inside functions
02:49:46  <wolfeidau>One is also used in a subsequent closure
02:49:51  <wolfeidau>callback function
02:50:23  <wolfeidau>so i have set that to dataHandler=null at the end
02:51:00  <wolfeidau>My knowledge of how to force js to clean this up is limited to say the least
02:52:06  <wolfeidau>The thing i am trying to ascertain is whether or not this is just an effect of a larger leak which is within the same closure
02:53:03  <wolfeidau>So if i, heaven forbid, had a function that did a ton of stuff, would just ONE badness in there retain everything used within that closure?
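[editor's note: a sketch of the retention question wolfeidau is asking. In the V8 of this era, a variable captured by any inner function is allocated on a per-scope context that all closures from that scope share, so one long-lived closure can keep everything captured in that scope alive, even values it never touches. All names below are hypothetical:]

    var EventEmitter = require('events').EventEmitter;

    function DataHandler(source) { this.source = source; this.chunks = []; }
    DataHandler.prototype.push = function (chunk) { this.chunks.push(chunk); };

    function handle(source) {
      var bigBuffer = new Buffer(10 * 1024 * 1024); // captured by onData below
      var dataHandler = new DataHandler(source);

      source.on('data', function onData(chunk) {
        dataHandler.push(chunk);
        bigBuffer.fill(0);
      });

      // This tiny long-lived closure shares the scope's context, so while the
      // interval is alive it can keep bigBuffer and dataHandler reachable even
      // though tick never uses them.
      var timer = setInterval(function tick() {}, 60000);

      source.on('end', function onEnd() {
        dataHandler = null;       // helps...
        clearInterval(timer);     // ...but only once nothing from the scope is retained
      });
    }

    handle(new EventEmitter());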
03:17:24  <trevnorris>tjfontaine: i'm having a pain of a time figuring out how to do an assert.throws() for an asynchronous function.
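[editor's note: a small sketch of the problem trevnorris mentions. assert.throws() only catches synchronous throws, so for async code the usual options are asserting on the error passed to the callback, or catching the late throw with an 'uncaughtException' handler. doWorkAsync is hypothetical:]

    var assert = require('assert');

    // works: the throw is synchronous
    assert.throws(function () { JSON.parse('not json'); }, SyntaxError);

    // does NOT work: the throw happens on a later tick, after assert.throws returned
    // assert.throws(function () {
    //   setImmediate(function () { throw new Error('too late'); });
    // });

    // option 1: for error-first callbacks, assert on the err argument
    function doWorkAsync(cb) {
      setImmediate(function () { cb(new Error('boom')); });
    }
    doWorkAsync(function (err) {
      assert(err instanceof Error);
      assert.equal(err.message, 'boom');
    });

    // option 2: for code that really throws later, catch it at the process level
    process.once('uncaughtException', function (err) {
      assert.equal(err.message, 'kaboom');
    });
    setImmediate(function () { throw new Error('kaboom'); });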
03:27:18  * abraxas quit (Remote host closed the connection)
03:29:47  * abraxas joined
03:31:42  * Kakera joined
03:39:31  * kazupon quit (Remote host closed the connection)
03:41:54  * Benvie joined
03:45:48  * c4milo quit (Remote host closed the connection)
03:46:21  * c4milo joined
03:50:54  * c4milo quit (Ping timeout: 256 seconds)
04:09:53  * kazupon joined
04:18:08  * kazupon quit (Ping timeout: 248 seconds)
04:18:12  * Kakera_ joined
04:18:54  * Benvie_ joined
04:19:50  * Benvie quit (Ping timeout: 240 seconds)
04:21:02  * Kakera quit (Ping timeout: 240 seconds)
04:24:46  * kazupon joined
04:54:40  * Kakera_ quit (Ping timeout: 246 seconds)
05:19:40  * mikeal quit (Quit: Leaving.)
05:21:48  * Benvie joined
05:21:58  * mikeal joined
05:23:30  * Benvie_ quit (Ping timeout: 264 seconds)
05:37:31  * paddybyers joined
05:40:20  * mraleph1 part
06:04:06  * paddybyers quit (Quit: paddybyers)
06:11:36  * paddybyers joined
06:12:26  * bajtos joined
06:13:42  * Benvie quit (Ping timeout: 256 seconds)
06:14:26  * Benvie joined
06:40:55  <MI6>nodejs-v0.10-windows: #245 UNSTABLE windows-ia32 (7/600) windows-x64 (9/600) http://jenkins.nodejs.org/job/nodejs-v0.10-windows/245/
06:41:34  * st_luke joined
06:48:21  * st_luke quit (Remote host closed the connection)
06:51:40  * inolen1 quit (Ping timeout: 256 seconds)
06:52:01  * inolen joined
06:53:52  * Benvie_ joined
06:54:21  * Benvie quit (Ping timeout: 245 seconds)
07:04:13  * rendar joined
07:05:37  * inolen quit (Read error: Connection reset by peer)
07:06:09  * inolen joined
07:23:21  * Benvie_ quit (Ping timeout: 245 seconds)
07:23:39  * Benvie joined
07:29:44  * Benvie_ joined
07:30:11  * Benvie quit (Ping timeout: 245 seconds)
07:46:51  * Benvie joined
07:48:02  * Benvie_ quit (Ping timeout: 240 seconds)
07:49:25  * wolfeida_ joined
07:50:06  <trevnorris>good night cruel world
07:50:11  * trevnorris&
07:50:12  <LOUDBOT>BECAUSE I'M WEB PROGRAMMING AS HARD AS I PONDERED THE EFFECT OF ECHINACEA - YOUR NICK ASSFACE
07:51:08  * wolfeidau quit (Ping timeout: 240 seconds)
08:00:48  * bnoordhuis joined
08:02:14  * hz joined
08:09:18  * Benvie_ joined
08:11:02  * Benvie quit (Ping timeout: 264 seconds)
08:19:21  <bnoordhuis>saghul: do you have moderator rights for the libuv mailing list?
08:21:29  <bnoordhuis>saghul: nvm, you are now :)
08:21:48  <saghul>bnoordhuis oh, thanks :-)
08:24:33  <indutny>bnoordhuis: hey ben
08:24:36  <indutny>how are you?
08:30:14  * inolen quit (Quit: Leaving.)
08:31:24  * inolen joined
08:37:36  <bnoordhuis>indutny: hola. i'm fine (again)
08:37:58  <bnoordhuis>was down with a stomach flu over the weekend
08:38:10  <bnoordhuis>things like that always happen in the weekends, don't they?
08:43:45  <bnoordhuis>indutny: btw, are you a libuv ML moderator
08:43:52  <indutny>I'm not really sure
08:43:56  <indutny>perhaps, no
08:44:00  <bnoordhuis>let me check
08:44:03  <indutny>bnoordhuis: stomach flu sounds quite dangerous
08:44:05  <indutny>are you ok?
08:44:10  <indutny>I mean now
08:44:29  <bnoordhuis>yeah, i'm better again
08:44:36  <bnoordhuis>and you're a manager now
08:44:54  <bnoordhuis>that rat bert made me manager rather than owner so now i can't bump you guys to owner
08:45:33  <bnoordhuis>i've emailed him that i expect a fix on my desk before 4 PM
08:45:50  <indutny>haha
08:45:54  <indutny>ok
08:57:12  * rendar quit
09:04:03  * Benvie joined
09:05:00  * Benvie_ quit (Ping timeout: 245 seconds)
09:11:08  <hz>after a FIN recvd, i can call shutdown on my tcp socket.. FIN is sent and the socket enters the LAST_ACK state waiting for the ACK to the sent FIN. can the socket reach the CLOSED state even if i dont call close on my socket?
09:13:04  <hz>if yes... and if i now call close, it avoids sending another FIN, right? how does it work internally?
09:13:16  <bnoordhuis>hz: the answer however is 'no' :)
09:13:42  <bnoordhuis>if you don't call close(), it'll end up in a WAIT state
09:13:43  <hz>shutdown send a FIN
09:14:23  <hz>CLOSE_WAIT even if FIN has been ACKed?
09:15:01  <hz>* shutdown(wr)
09:15:11  <bnoordhuis>yes
09:15:38  <hz>is it a TCP directive or just implementation detail
09:15:48  <bnoordhuis>i guess the latter
09:16:09  <hz>so i have no guarantees on it, damn it :D
09:16:45  <hz>thanks bnoordhuis :)
09:16:49  <bnoordhuis>np :)
09:17:00  <hz>what you said, its unix/linux impl?
09:17:06  <hz>or other OS?
09:17:11  <bnoordhuis>yeah, that's how linux works
09:17:16  <bnoordhuis>freebsd too, i'm pretty certain
09:17:28  <hz>i see
09:17:37  <bnoordhuis>it'd be pretty odd if the kernel actually reclaimed the address:port behind your back
09:17:45  <hz>are you a kernel developer?
09:17:59  <bnoordhuis>you mean kernel programmer? off and on
09:18:16  <bnoordhuis>my last linux patch is from 2006 or thereabouts though :)
09:18:24  <hz>ehehe
09:20:59  * rendarjoined
09:36:18  <indutny>hz: TCP is really fucked
09:36:20  <indutny>use UDP
09:36:21  <indutny>:)
09:37:21  <hz>hahahah :D
09:39:36  <hz>11:13 <bnoordhuis> if you don't call close(), it'll end up in a WAIT state <--- doesnt shutdown put it into LAST_ACK state?
09:39:52  * Benvie_ joined
09:41:40  * Benvie quit (Ping timeout: 256 seconds)
09:46:32  <bnoordhuis>hz: LAST_ACK is when the other end sent a FIN
09:46:40  * Benvie_ quit (Ping timeout: 248 seconds)
09:47:50  <hz>CLOSE_WAIT is when the other side sends the first FIN
09:48:22  <hz>and i reply with an ACK to it
09:48:59  <hz>from CLOSE_WAIT, the diagram shows the socket can pass into LAST_ACK only by sending the second FIN
09:49:12  <bnoordhuis>hz: oh wait, yeah, what i said is only partially correct
09:49:22  <hz>so both shutdown(wr) and close should do this
09:49:33  <bnoordhuis>shutdown from both sides but there may still be pending data on our side
09:49:45  <bnoordhuis>where 'our' means 'in the kernel'
09:50:28  * bajtos quit (Quit: bajtos)
09:50:30  <hz>assuming i received the first FIN and i sent the ACK to it
09:50:46  <bnoordhuis>if you have a lot of sockets in LAST_ACK state, it means you're not closing them
09:50:53  <hz>now if i call shutdown does my socket pass into LAST_ACK?
09:51:38  <hz>or is the LAST_ACK state reachable only by invoking the close syscall?
09:52:12  <bnoordhuis>no, it's usually when you call shutdown but not close
09:52:24  <bnoordhuis>a socket can stay in LAST_ACK for a little while after close
09:52:58  <hz>sure, waiting for 2nd FIN being ACKed
09:53:50  <bnoordhuis>i'm curious though, why are you so interested in this? :)
09:54:18  <hz>just for study
09:54:29  <hz>curiosity is dangerous :D
09:54:41  <bnoordhuis>yeah, it killed the cat
09:55:22  <hz>btw i'm thinking of writing a didactic interactive TCP socket simulator
09:55:50  <hz>where the interactions are syscall invocations
09:57:21  <hz>so i'd like to understand the result for every (state, syscall) pair
09:57:56  <bnoordhuis>you want to understand the tcp state machine?
09:58:12  <hz>yes
09:58:18  <hz>but in relation to syscall
09:59:13  <hz>what i can read doesn't talk about the shutdown or close syscalls
09:59:26  <hz>they say "send a FIN"
10:00:00  <hz>but i know shutdown is not exactly the same as close
10:00:43  <bnoordhuis>oh, right
10:00:51  <bnoordhuis>what kind of shutdown are you doing? SHUT_RDWR?
10:00:58  <hz>only WR
10:01:17  <hz>RD is quite useless atm
10:01:37  * mburns quit (Quit: ZNC - http://znc.sourceforge.net)
10:02:05  * kazupon quit (Remote host closed the connection)
10:02:31  * kazupon joined
10:03:26  * mburns joined
10:05:29  * mburns quit (Client Quit)
10:06:54  * mburns joined
10:07:33  * kazupon quit (Ping timeout: 268 seconds)
10:07:50  <bnoordhuis>hz: i guess you could say the same thing about SHUT_WR, it's not that different from a full close
10:08:11  <bnoordhuis>you can still receive data but you could argue that's working around application bugs
10:08:29  <bnoordhuis>not that i have a really strong opinion on it, i just work with what's there :)
10:10:22  <hz>12:08 <bnoordhuis> you can still receive data <-- if im in CLOSE_WAIT i just received a FIN from my peer, so read is useless (i think i would get another 0 returned from the read syscall)
10:11:00  <hz>this is why i dont care about RW detail now
10:11:42  <bnoordhuis>right
10:15:16  <hz>*RD
10:18:20  <hz>http://webmuseum.mi.fh-offenburg.de/exhibition/17/assets/statemachine.swf <-- :D it doesn't cover shutdown, arrrrr
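[editor's note: a minimal node sketch of the half-close sequence discussed above. In node, socket.end() does the shutdown(SHUT_WR) half of this -- it sends a FIN but leaves the socket readable until the peer's FIN arrives. The port number is arbitrary:]

    var net = require('net');

    var server = net.createServer({ allowHalfOpen: true }, function (conn) {
      conn.on('data', function (chunk) {
        console.log('server got:', chunk.toString());
      });
      // peer's FIN received (we are in CLOSE_WAIT here)...
      conn.on('end', function () {
        // ...sending our own FIN moves us through LAST_ACK toward CLOSED
        conn.end('bye');
        server.close();
      });
    });

    server.listen(8124, function () {
      var client = net.connect(8124, function () {
        client.write('hello');
        client.end();   // shutdown(SHUT_WR): send FIN, enter FIN_WAIT_1/2
      });
      client.on('data', function (chunk) {
        console.log('client got:', chunk.toString()); // still readable after end()
      });
    });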
10:26:24  * dominictarr joined
10:28:32  * mburns part
10:32:36  * dominictarr quit (Quit: dominictarr)
10:34:43  * dominictarr joined
10:39:59  * piscisaureus_ joined
10:45:40  <MI6>nodejs-v0.10: #1519 UNSTABLE smartos-ia32 (1/600) linux-x64 (1/600) osx-x64 (1/600) http://jenkins.nodejs.org/job/nodejs-v0.10/1519/
10:46:48  * bnoordhuis quit (Ping timeout: 240 seconds)
10:48:05  * dominictarr quit (Quit: dominictarr)
10:48:28  * bnoordhuis joined
10:49:52  * abraxas quit (Remote host closed the connection)
10:52:51  * bajtos joined
10:53:01  * bajtos quit (Client Quit)
11:05:32  * Benvie joined
11:10:18  * Benvie quit (Ping timeout: 264 seconds)
11:37:29  <bnoordhuis>indutny: what are the major outstanding issues for v0.12 in your opinion?
11:37:32  <bnoordhuis>piscisaureus_: ^ ditto
11:38:47  <indutny>well
11:38:56  <indutny>I'd like to introduce cluster.distribute thing
11:39:07  <indutny>and probably dualstack
11:39:13  <indutny>no other blockers from my side
11:39:24  <bnoordhuis>i was thinking more of bugs, not features
11:40:01  <bnoordhuis>okay, think it over while i do the groceries :)
11:44:35  * bnoordhuis quit (Ping timeout: 245 seconds)
11:47:41  * bajtos joined
12:18:32  <piscisaureus_>bnoordhuis: execSync
12:21:33  <piscisaureus_>ircretery: tell bnoordhuis execSync, and also sync/async stdio woes
12:37:13  * bnoordhuis joined
12:38:01  <bnoordhuis>piscisaureus_: okay. when is that finished?
13:22:27  * bajtos quit (Quit: bajtos)
13:25:29  * bajtos joined
13:38:24  * Kakera_ joined
13:51:41  * kevinswiber joined
13:53:43  * c4milo joined
13:58:10  * isaacs waves
13:59:15  <bnoordhuis>hey isaac
14:01:52  <piscisaureus_>hello
14:04:32  * vptr joined
14:24:04  * AvianFlu joined
14:27:35  * M28 quit (Read error: Connection reset by peer)
14:28:00  * kevinswiber quit (Remote host closed the connection)
14:28:32  * kevinswiber joined
14:29:10  * M28 joined
14:29:17  * inolen1 joined
14:29:38  * inolen quit (Ping timeout: 264 seconds)
14:31:16  * bajtos quit (Quit: bajtos)
14:32:03  * mikeal quit (Quit: Leaving.)
14:32:55  * kevinswiber quit (Ping timeout: 245 seconds)
14:37:36  <piscisaureus_>rendar: ?
14:39:01  <rendar>piscisaureus_: i was looking at named pipes on windows: they're different from those on unix (local sockets), which are more similar to tcp sockets in that you have an accept which gives you a new fd. with named pipes, instead, the listening fd becomes the client endpoint fd in the server, right? so how do you abstract this to make it work with iocp?
14:40:13  <piscisaureus_>rendar: so with libuv you just do it the same as on unix
14:40:54  <piscisaureus_>rendar: internally libuv will create a new "listening fd" after it accepts, so it can accept more
14:40:54  * saghul quit (Ping timeout: 264 seconds)
14:41:44  <rendar>piscisaureus_: exactly
14:42:45  * saghul joined
14:43:54  <rendar>piscisaureus_: now my question is: when it returns a new fd (which is not really a new fd), shouldn't libuv change the iocp completion key associated with that named pipe handle? i mean: if you set a callback as the completion key, and that callback is "uv__namedpipe_listening", afterwards you should change it to "uv__normal_stream", because that fd changes "face" -- it goes from being a listening entity to a normal stream
14:43:55  <rendar>entity. this does not happen on unix... how do you solve this problem? by modifying callbacks i guess, or by changing the cb* pointers?
14:45:07  <piscisaureus_>rendar: the iocp key isn't used
14:45:20  <rendar>piscisaureus_: oh..i see
14:45:34  <rendar>piscisaureus_: but why? any particular design decision for that?
14:46:23  <piscisaureus_>rendar: because the IOCP key would reference a handle
14:47:03  <piscisaureus_>rendar: but we needed more granularity (track individual OVERLAPPEDs)
14:47:04  <rendar>a handle? it can also reference a pointer, right? e.g. a pointer to a callback
14:47:26  <piscisaureus_>rendar: yes but a handle can have many callbacks
14:47:43  <piscisaureus_>rendar: e.g. you can have read and multiple writes outstanding at the same time
14:47:55  <piscisaureus_>rendar: so there isn't really "the" callback for a handle
14:48:21  <rendar>oh, right
14:48:25  <piscisaureus_>rendar: you can make it point anywhere, but I never had a use for it. the OVERLAPPED is embedded in a uv_req_t and the uv_req_t points to the handle
14:48:38  <rendar>piscisaureus_: i also saw that the completion key can't be changed in any way during the life of the HANDLE, right?
14:48:39  <piscisaureus_>rendar: so why would I make it redundant?
14:48:48  <piscisaureus_>rendar: could be yeah
14:48:52  <bnoordhuis>accept() really bugs me as an interface
14:48:55  <piscisaureus_>rendar: so, don't rely on the completion key
14:49:01  * mikeal joined
14:49:01  <bnoordhuis>all the EAGAINs!
14:49:04  <rendar>piscisaureus_: i see
14:49:09  <bnoordhuis>and it needs to lock the socket on each call. bah
14:49:33  <rendar>what's your network name, EAGAIN?
14:49:56  <rendar>bnoordhuis: yeah right
14:49:56  <piscisaureus_>rendar: so what is the question - are you writing your own libuv?
14:50:38  <rendar>piscisaureus_: nop, i'm just doing some async i/o tests but in C++ instead of C, so i was trying this silly completion key
14:51:01  <rendar>piscisaureus_: i had completion key pointing to a C++ object which is a listener entity, and i couldn't change it to a stream entity..
14:51:32  <piscisaureus_>yup. that don't work
14:51:50  <rendar>piscisaureus_: interesting the point of using a new pointer for each OVERLAPPEDs
14:52:11  <rendar>pointer*
14:54:21  <rendar>piscisaureus_: earlier you said "track" individual OVERLAPPEDs -- what do you mean by "track"? i guess each overlapped request that is created must increment an atomic counter, and decrement it on completion, so the pointer that the OVERLAPPED structure holds can be freed?
14:56:12  <piscisaureus_>rendar: much more bookkeeping, e.g. if a UV_READ req completes we must queue another
14:56:27  <rendar>piscisaureus_: oh yeah, exactly
14:56:36  * mikeal quit (Quit: Leaving.)
14:56:45  <rendar>piscisaureus_: thats just what i was thinking about
14:57:55  <rendar>piscisaureus_: well, i guess windows is the only operating system that uses an OVERLAPPED allocation for each request, so on other systems you don't have to worry about OVERLAPPED deallocation, right? that's because epoll() or kqueue() etc just don't have to allocate these "packets"
14:58:22  <rendar>piscisaureus_: but does libuv use something special to ensure that every OVERLAPPED will be freed? i was thinking of an atomic counter for pending requests or something like that
14:59:16  <piscisaureus_>rendar: the user is responsible for allocating an uv_req_t (uv_write_t, uv_shutdown_t) whenever it does an operation
14:59:18  * julianduque joined
14:59:25  <piscisaureus_>rendar: and freeing it in the callback
14:59:35  <piscisaureus_>rendar: the overlapped sits inside that struct
15:00:16  <piscisaureus_>rendar: as for read and accept requests, their memory management is handled by the bookkeeping code
15:01:02  <piscisaureus_>rendar: there are no atomic counters involved in the process. that would make no sense since libuv is not thread-safe anyway
15:07:15  * julianduque quit (Ping timeout: 240 seconds)
15:08:13  <rendar>piscisaureus_: i see
15:10:26  <rendar>piscisaureus_: well this is totally different from the unix way, because in epoll or kqueue you don't have OVERLAPPED packets, so you have to specify a pointer for the final callback?
15:11:50  <piscisaureus_>rendar: I don't understand the question
15:13:56  <rendar>piscisaureus_: sorry, i mean, in libuv/win when you start an async request, you have to specify the pointer to the callback, in order to allocate an OVERLAPPED structure with that pointer, and this won't happen in libuv/*nix i guess, because you do not allocate OVERLAPPEDs, you just have i/o readiness alerts
15:15:04  <rendar>so in *nix you will specify a unique pointer that epoll/kqueue or whatever will call when i/o is *ready* but not completed -- my point is that in *nix you'll need less granularity than on windows, i guess
15:15:45  <piscisaureus_>rendar: well on unix libuv does bookkeeping differently
15:15:53  <piscisaureus_>rendar: it maintains a queue for writes etc
15:15:59  <rendar>yeah right
15:17:12  <MI6>nodejs-master: #597 UNSTABLE osx-x64 (1/644) smartos-x64 (5/644) http://jenkins.nodejs.org/job/nodejs-master/597/
15:22:18  * bnoordhuis quit (Ping timeout: 264 seconds)
15:25:48  * bajtos joined
15:39:58  * kevinswiber joined
15:40:35  * octetcloud joined
15:49:08  * TooTallNate joined
15:49:21  * dap joined
15:49:35  * hz quit
15:53:03  * paddybyers quit (Quit: paddybyers)
15:55:58  * amartens joined
16:04:08  * indexzero joined
16:12:49  <tjfontaine>gday good folks
16:16:01  <indutny>gday
16:20:47  <othiym23>good morning Tj
16:21:05  <othiym23>I forgot how to type this weekend
16:21:13  <othiym23>take one day off IRC, and I'm a mess
16:21:27  <tjfontaine>heh
16:21:43  <tjfontaine>muscle memory, it's difficult to keep
16:21:44  <tjfontaine>:)
16:28:34  <othiym23>sweet, Node engineers at New Relic++
16:28:38  <othiym23>I have a teammate!
16:28:39  <othiym23>whee
16:28:43  * bnoordhuis joined
16:29:05  <tjfontaine>othiym23: yay!
16:31:36  * inolen joined
16:33:16  * bnoordhuis quit (Ping timeout: 246 seconds)
16:33:36  * inolen1 quit (Ping timeout: 248 seconds)
16:46:20  * bnoordhuis joined
17:02:57  * defunctzombie_zz changed nick to defunctzombie
17:07:58  * nightmar1 joined
17:13:15  <othiym23>trevnorris: your latest changes are still looking good for me
17:16:20  * bnoordhuis quit (Quit: leaving)
17:17:51  * AvianFlu quit (Remote host closed the connection)
17:17:59  * indexzero quit (Ping timeout: 241 seconds)
17:19:47  * piscisaureus_ quit (Quit: ~ Trillian Astra - www.trillian.im ~)
17:23:13  <MI6>joyent/node: Timothy J Fontaine master * 711ec07 : v8: ugprade to 3.20.17.14 - http://git.io/keTOag
17:23:19  <trevnorris>othiym23: good to hear
17:23:39  * defunctzombie changed nick to defunctzombie_zz
17:27:47  * AvianFlu joined
17:35:20  <MI6>nodejs-master: #598 FAILURE smartos-ia32 (1/644) smartos-x64 (8/644) http://jenkins.nodejs.org/job/nodejs-master/598/
17:35:40  <tjfontaine>scuzi?
17:35:54  <tjfontaine>/bin/sh: /bin/sh: cannot execute binary file
17:39:14  * defunctzombie_zz changed nick to defunctzombie
17:42:22  * defunctzombie changed nick to defunctzombie_zz
17:46:52  <MI6>nodejs-master: #599 UNSTABLE smartos-ia32 (1/644) smartos-x64 (7/644) osx-ia32 (1/644) http://jenkins.nodejs.org/job/nodejs-master/599/
17:56:13  <MI6>nodejs-master-windows: #392 UNSTABLE windows-x64 (26/644) windows-ia32 (23/644) http://jenkins.nodejs.org/job/nodejs-master-windows/392/
17:56:19  <MI6>libuv-master: #273 UNSTABLE linux (2/195) windows (3/196) smartos (7/195) http://jenkins.nodejs.org/job/libuv-master/273/
18:05:44  * julianduque joined
18:06:32  * paddybyers joined
18:06:52  <trevnorris>tjfontaine: that was a tiny v8 upgrade. what was the fix for?
18:07:37  <tjfontaine>trevnorris: https://github.com/v8/v8/commit/435d8aadc3f60fea2aaad23fae3f29e5aeccb907
18:07:57  <trevnorris>ah, thanks.
18:08:45  <tjfontaine>trying to concoct a test case for this zlib refcounting issue, I don't think I will win
18:09:31  <trevnorris>ooh. yeah.
18:12:06  <trevnorris>ah man. i'm going to have soo much fun screwing around with Eternals when we upgrade to 3.21 in v0.13
18:12:20  * trevnorris smacks himself
18:12:20  <trevnorris>ok trevor, focus
18:12:29  <tjfontaine>well eternals are technically in there now
18:12:39  <trevnorris>the class isn't
18:12:42  <othiym23>is an Eternal like an even more persistent Persistent?
18:12:57  <trevnorris>othiym23: it's a local that exists for the life of the isolate
18:13:00  <tjfontaine>and when izs is around we're going to have a discussion about maybe upgrading to 3.21 before release
18:13:02  <othiym23>aha
18:13:11  <othiym23>ooo 3.21
18:13:12  <trevnorris>and it's retrievable by a returned index
18:13:44  <trevnorris>i'm looking at the diff of include/v8.h between 3.20 and 3.21. man they like to change stuff.
18:14:02  <tjfontaine>trevnorris: https://github.com/joyent/node/pull/6279
18:14:19  <trevnorris>:))))))))))))))))
18:14:29  <tjfontaine>there's a discussion that's going to be had first though
18:15:00  <MI6>libuv-node-integration: #258 UNSTABLE osx-x64 (1/644) linux-x64 (2/644) smartos-x64 (7/644) smartos-ia32 (1/644) http://jenkins.nodejs.org/job/libuv-node-integration/258/
18:15:33  <trevnorris>tjfontaine: well, iirc you're the one that said v0.11 was a bit "long in the tooth" for an upgrade to 3.21. ;)
18:16:04  <tjfontaine>that's how I feel yes, but I am not unsympathetic to need to have a fresher version of v8 in our stable releases
18:19:00  <trevnorris>tjfontaine: wtf, they have a new Function constructor that allows a Local<Value> data argument,
18:19:11  <othiym23>haha just loading that PR in Chrome is making my laptop chug
18:19:21  <trevnorris>oh wait. nm
18:19:25  <trevnorris>i'm still waking up.
18:19:26  <tjfontaine>othiym23: ya, WTG github
18:20:23  <othiym23>haha 1.5GiB RSS for that tag alone
18:20:29  <othiym23>sorry laptop!
18:20:52  <othiym23>tab, not tag
18:21:39  <trevnorris>Externalize, Eternalize. yeah, i'm going to be getting those two mixed up.
18:22:07  <tjfontaine>yup.
18:23:42  <trevnorris>at least their parameters aren't the same or i'd really be screwed.
18:24:35  <othiym23>the build changes are nice
18:25:03  <othiym23>so does 3.21 assume there's always a single isolate?
18:25:19  <othiym23>seeing lots of Dispose calls that no longer have isolate parameters
18:25:25  <othiym23>or is the Isolate passed to the constructors now?
18:25:36  <trevnorris>othiym23: i'm going to cleanup/squash my 37 commits into something much cleaner.
18:25:36  <trevnorris>it'll still have the two implementations (w/ and w/o the domain specific code) but it's sort of ridiculous right now.
18:25:38  <othiym23>ah nm it's the latter, I see
18:26:09  <othiym23>trevnorris: that sounds good -- did you replace the domain.enter / exit calls everywhere with asyncListener before / after callbacks?
18:29:50  <othiym23>how had I not noticed NeanderObject and NeanderArray before?
18:29:53  <othiym23>their names are so delightful
18:30:48  <trevnorris>othiym23: yup.
18:31:07  <trevnorris>othiym23: but they should still work normally if the user does
18:31:42  * paddybyers quit (Quit: paddybyers)
18:32:05  <othiym23>trevnorris: I just like the reduction in code cruft
18:32:26  <othiym23>trevnorris: although nobody is going to have a chance of understanding how domains work now without learning about asyncListeners first ;)
18:34:25  * st_luke joined
18:36:45  <trevnorris>othiym23: heh, yeah. and the best part is that in events.js the domain code was just removed. didn't need to add async listener code. :P
18:36:48  * hz joined
18:36:55  <trevnorris>that'll confuse some people.
18:40:00  * st_luke quit (Read error: Connection reset by peer)
18:40:30  * nightmar1 quit (Quit: Changing server)
18:41:49  * julianduque quit (Quit: leaving)
18:42:04  * julianduque joined
18:47:54  * mikeal joined
18:48:45  <MI6>nodejs-master-windows: #393 UNSTABLE windows-x64 (21/644) windows-ia32 (22/644) http://jenkins.nodejs.org/job/nodejs-master-windows/393/
18:49:12  * mikeal quit (Client Quit)
18:52:55  * st_luke joined
18:57:56  * st_luke quit (Ping timeout: 245 seconds)
19:06:34  * mikeal joined
19:07:07  * TooTallNate quit (Quit: Computer has gone to sleep.)
19:15:39  * EhevuTov joined
19:24:08  * bajtos quit (Quit: bajtos)
19:25:30  * bradleymeck joined
19:27:43  * EhevuTov quit (Quit: This computer has gone to sleep)
19:28:47  * EhevuTov joined
19:33:37  * qmx quit (Excess Flood)
19:34:07  * qmx joined
19:52:30  * TooTallNate joined
19:52:54  <trevnorris>tjfontaine: any knowledge if isaacs is even alive?
19:53:43  <TooTallNate>trevnorris: isn't he like in europe somewhere?
19:54:34  * st_luke joined
19:55:46  <trevnorris>TooTallNate: well, he's been over there for like a month. was just wondering if joyent's going to have to replace him. ;P
19:56:07  <TooTallNate>haha
19:56:49  <trevnorris>between isaacs being on vacation and ben having a kid, things have been pretty dead.
19:57:18  <trevnorris>except for tjfontaine debugging the things that would cause most developers to blow their brains out. ;)
20:00:56  * st_luke quit (Read error: Connection reset by peer)
20:27:58  <tjfontaine>you're pretty dead :)
20:28:03  <tjfontaine>isaacs is alive and in oakland
20:29:23  <tjfontaine>execSync, asyncListener, beforeExit are the features that need to land before we can freeze
20:33:15  <roxlu>hey guys, I'm using uv_tty to have some nice colored output, but this does not work in the console of xcode
20:33:42  <roxlu>does someone know a way to either not use colors when the output goes to the xcode console, or to enable colors in that console?
20:34:45  <tjfontaine>so in xcode terminal you're seeing ansi escape sequences?
20:34:50  <roxlu>yes
20:36:08  <tjfontaine>roxlu: https://github.com/robbiehanson/XcodeColors
20:36:27  <tjfontaine>not entirely what you're looking for though I suspect
20:37:08  <roxlu>nah, it would be fine if I could simply disable it when color modes are not supported, but I don't think libuv has features for that atm?
20:37:25  <tjfontaine>no that's a feature of someone doing the writes
20:38:00  <roxlu>ok
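[editor's note: roxlu is using uv_tty from C, but the "whoever does the writes decides" approach tjfontaine describes looks roughly like this at the node level -- only emit ANSI escape sequences when the output really is a TTY (in libuv terms, when uv_guess_handle() on the fd says UV_TTY):]

    // colors.js -- a tiny sketch; the escape codes are standard ANSI sequences
    var useColor = process.stdout.isTTY; // false when piped, e.g. into Xcode's console pane

    function green(text) {
      return useColor ? '\u001b[32m' + text + '\u001b[39m' : text;
    }

    console.log(green('ok: colors only when stdout is a terminal'));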
20:44:18  <trevnorris>tjfontaine: what was the need for beforeExit?
20:45:23  <trevnorris>othiym23: ok, 37 commits down to 12. no need to run tests though. end code is the exact same as before.
20:45:49  <tjfontaine>trevnorris: it would actually help (imo) with the #6305 issue
20:46:28  <tjfontaine>the event loop is getting ready to exit, but you know enough about your state to be like ZOMG WAIT I HAVE MORE TO DO
20:46:45  <trevnorris>but would beforeExit be backported to v0.10?
20:47:12  <tjfontaine>no, that's pretty much working as intended I think :/
20:47:43  <trevnorris>ok, and i'd think there'd be a way to create a tcp test that accomplishes the same thing as your http code.
20:47:49  <tjfontaine>yup
20:47:53  <tjfontaine>should be relatively trivial
20:48:28  <tjfontaine>the thing is you uv_tcp_open, uv_read_start, get chunk, uv_read_stop in the readcb, loop exits
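[editor's note: a minimal node sketch of the scenario tjfontaine describes (issue #6305): the client pauses its socket in the first 'data' callback, the read is stopped, nothing else is queued, and on v0.10/v0.11 the loop has nothing left to do so the process simply exits. The port is arbitrary; the server is only there to make the example self-contained:]

    var net = require('net');

    var server = net.createServer(function (conn) {
      conn.write('first chunk');   // keep the connection open; more data would normally follow
    });

    server.listen(8124, function () {
      var client = net.connect(8124);
      client.on('data', function (chunk) {
        console.log('got %d bytes, pausing', chunk.length);
        client.pause();   // uv_read_stop: the handle no longer keeps the loop alive
        server.close();   // once the listening handle is gone, nothing is left to do
        // there is no code path left that could ever call client.resume()
      });
    });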
20:50:20  <trevnorris>one sec. need my pills before I can get all this
20:55:29  * kevinswiberquit (Remote host closed the connection)
20:56:04  * kevinswiberjoined
20:56:29  * bradleymeckquit (Quit: bradleymeck)
20:58:29  * kevinswi_joined
20:58:50  * kevinswiberquit (Read error: Connection reset by peer)
20:59:36  * Kakera_quit (Ping timeout: 245 seconds)
21:03:02  * Benviejoined
21:03:57  <tjfontaine>how is this even possible
21:06:55  <trevnorris>tjfontaine: what?
21:07:06  <othiym23>yeah what trevnorris said
21:07:44  <tjfontaine>just this zlib crash that we're seeing
21:07:50  <tjfontaine>that ben has a fix for
21:08:23  <trevnorris>ah, that. yeah, never was able to create a reproducible test case.
21:08:32  <trevnorris>other than downloading a crap ton of npm packages.
21:08:41  <tjfontaine>for this to happen, write_in_progress_ must be set to true, and for an Error or After to have come through and set the object weak, and then a GC is triggered
21:08:53  * wwicks_ joined
21:09:01  * wwicks quit (Ping timeout: 256 seconds)
21:09:03  * wwicks_ changed nick to wwicks
21:11:21  <tjfontaine>but, writes are serialized from the js interface
21:12:29  <trevnorris>tjfontaine: back to the pause() problem. i'm not seeing the problem. I mean, if I pause the stream and nothing else is in the event loop, that means there's no way for me to .resume().
21:12:34  <trevnorris>and here's my tcp version: https://gist.github.com/trevnorris/6875034
21:13:06  <trevnorris>so then, why wouldn't the client exit?
21:13:07  <tjfontaine>trevnorris: yes, that's what I said in my follow up
21:13:47  <tjfontaine>have you read https://github.com/joyent/node/issues/6305#issuecomment-25728618 :)
21:13:54  <trevnorris>am now :)
21:18:28  <trevnorris>tjfontaine: seems cleaner to me if we had proper handling on the paused state, rather than allow users to hack in on beforeExit, checking if each connection needs to be resumed or what not.
21:20:38  * wwicks quit (Ping timeout: 240 seconds)
21:20:40  <trevnorris>tjfontaine: something like, node doesn't consider the connection out of the event loop either until it's received a FIN, timed out, or the user manually ran destroy/unref
21:20:59  * wwicks joined
21:21:34  <tjfontaine>what do you think "proper handling" is in this context
21:22:44  <tjfontaine>trevnorris: but then you're actually breaking the existing semantics, since there is no way to really unpause the stream in this case, and notifying the user through 'end' or whatever isn't good either because what they'll try and do is put async activities back in the loop
21:22:53  <trevnorris>possibly, and I don't care either way, but if a connection is opened then it keeps the event loop open until either the user has explicitly closed it, or an event has been emitted about it being closed.
21:23:18  <tjfontaine>that's a big semantic change to libuv, I don't necessarily disagree with it in principle
21:23:37  <trevnorris>well, i don't think it'd need to happen in libuv.
21:23:49  <trevnorris>i mean, node could keep an active whatnot with a counter
21:23:58  <othiym23>how does Node know the connection is unresumable?
21:24:02  <othiym23>does pausing unref the handle?
21:24:28  <tjfontaine>othiym23: no, it's ref'd, but not considered active if there's active readers and listeners
21:24:36  <tjfontaine>*only considered active if
21:24:57  <trevnorris>othiym23: it doesn't "know", but the eloop will close when no other async events are queued. and if none are queued, then it'd be impossible to resume().
21:25:19  <othiym23>thanks guys, that makes sense
21:27:01  <trevnorris>tjfontaine: beforeExit might be a nice utility, but in this case, and spitballing here, having an aboutToClose event on the connection seems more appropriate/specific
21:27:38  * hz quit (Disconnected by services)
21:27:41  <tjfontaine>trevnorris: how/who would fire that, and wouldn't that fire per connection?
21:27:42  * hz joined
21:27:57  <trevnorris>i mean, in real use case, when would a user pause a connection then say, eh I'll wait until beforeExit to resume it.
21:28:10  <tjfontaine>it's generally programmer error here
21:28:18  * AvianFlu_ joined
21:28:19  <trevnorris>no disagreement there.
21:28:46  <trevnorris>have people complained about this?
21:28:57  <tjfontaine>no one really complained about it perse
21:29:19  <tjfontaine>well, I had a coworker who was surprised by the behavior, but after I worked through it I realized this was expected
21:29:46  <trevnorris>hm. i'm going to try something. one sec.
21:30:03  <trevnorris>oh, does a flag get set on the connection that says it's been paused?
21:30:44  <tjfontaine>in the sense that it is not set to read or write
21:33:32  * kevinswi_ quit (Remote host closed the connection)
21:34:08  * kevinswiber joined
21:34:51  * AvianFlu quit (Ping timeout: 245 seconds)
21:34:52  * rendar quit (Ping timeout: 245 seconds)
21:36:00  <trevnorris>tjfontaine: heh, well, fwiw I just created a script using async listeners to check if the handle is paused after the callback is finished, and resume it automatically.
21:36:10  <trevnorris>nothing unexpected, but fun.
21:36:35  <tjfontaine>heh
21:38:30  * kevinswiber quit (Ping timeout: 264 seconds)
21:38:40  * CoverSlide joined
21:41:06  * wwicks_ joined
21:41:16  * wwicks quit (Ping timeout: 246 seconds)
21:41:29  * wwicks_ changed nick to wwicks
21:42:08  <trevnorris>othiym23: in case you're curious, here's how I did it: https://gist.github.com/trevnorris/6875034
21:42:08  <trevnorris>there's a comment at the top where it's being done.
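[editor's note: the gist itself isn't reproduced here; the following is only a simplified approximation of the idea -- after a 'data' callback that may have paused the socket has run, notice the paused state and resume automatically -- done with a plain wrapper rather than the asyncListener API from the patch under review. All names are hypothetical:]

    // wrap a 'data' handler so that a pause() made inside it is undone
    // on the next turn of the event loop
    function autoResume(socket, handler) {
      var realPause = socket.pause.bind(socket);
      var paused = false;
      socket.pause = function () { paused = true; return realPause(); };
      return function (chunk) {
        handler.call(socket, chunk);
        setImmediate(function () {
          if (paused) {
            paused = false;
            socket.resume();
          }
        });
      };
    }

    // usage (hypothetical):
    // client.on('data', autoResume(client, function (chunk) { this.pause(); }));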
21:44:24  <tjfontaine>sometimes I like past-tj he has good things to say
21:48:49  * superjoe joined
21:49:24  <superjoe>hello friends. where might I begin to troubleshoot this assertion failure? nodejs: ../deps/uv/src/unix/core.c:171: uv__finish_close: Assertion `handle->flags & UV_CLOSING' failed.
21:49:50  <superjoe>it's almost certainly an issue with my C++ node addon that does thread stuff
21:50:56  <tjfontaine>superjoe: probably need a bit more context
21:51:06  <superjoe>I suppose I can try to catch it with gdb
21:51:27  <trevnorris>superjoe: code always helps :)
21:52:04  <superjoe>ok I'll work on getting a smallish test case
21:52:47  <superjoe>here's the code though: https://github.com/superjoe30/node-groove/blob/master/src/gn_scan.cc
21:54:18  <trevnorris>superjoe: is the assertion reliable?
21:54:46  <superjoe>I'm not sure how to answer that question. are you asking whether I think that the assertion belongs in that line in libuv?
21:56:09  <trevnorris>superjoe: sorry, no. will it always occur, or is there some special circumstance/race condition
21:56:30  * hz quit (Read error: No route to host)
21:56:35  <superjoe>ah. it's a race condition. I have not yet figured out how to reliably duplicate the issue
21:56:59  <superjoe>it is not rare though. I think I can get a test case going that does it almost every time
21:58:02  <trevnorris>superjoe: a backtrace from gdb would be the best. that assertion failure you posted doesn't point to your code.
21:58:24  <superjoe>ok I'll get a backtrace
21:58:37  <trevnorris>bert, where are you?!
21:58:57  * hz joined
22:00:05  <tjfontaine>I don't see how this race can happen
22:00:20  <trevnorris>tjfontaine: you talking about the zlib again?
22:00:22  <tjfontaine>but it seems like this could happen on 0.10 if so
22:00:28  <tjfontaine>yes, still talking about zlib
22:00:30  <superjoe>oh sorry I forgot to specify version
22:00:36  <superjoe>node v0.10.20
22:00:50  <superjoe>whatever version of libuv is packaged with that
22:00:51  <trevnorris>superjoe: yeah. that's easy to see by the api you're using. :)
22:00:59  <superjoe>ah of course
22:01:26  * wwicks_ joined
22:01:43  <trevnorris>superjoe: don't mind tjfontaine, he's just losing his mind right now. poor little guy.
22:03:21  * wwicks quit (Ping timeout: 245 seconds)
22:03:21  * wwicks_ changed nick to wwicks
22:06:48  <superjoe>this backtrace doesn't look very helpful. https://gist.github.com/superjoe30/6875750
22:08:02  <superjoe>I suppose I could try to get it running with a version of libuv with debug symbols
22:09:39  <trevnorris>superjoe: this might sound dumb, but put your require() in a setTimeout() and see if it gives you a different backtrace
22:09:57  <superjoe>the one for the C++ addon, yes?
22:10:05  <trevnorris>yeah
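[editor's note: a tiny sketch of trevnorris's suggestion -- defer loading the native addon so its initialization runs after the event loop is up, which can sometimes change the backtrace. 'groove' matches superjoe's node-groove addon; the rest is hypothetical:]

    setTimeout(function () {
      var groove = require('groove');   // the C++ addon under suspicion
      // ...run the same scanning code that triggers the assertion here...
    }, 100);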
22:11:15  <trevnorris>also, if possible to reproduce, try getting the strace.
22:12:33  * AvianFlu_ quit (Remote host closed the connection)
22:12:33  <superjoe>I have it reproducing pretty reliably. bt with setTimeout is the same. I'll look into strace
22:13:21  <superjoe>strace gdb node or just strace node ?
22:13:40  <superjoe>going with the latter
22:13:47  <trevnorris>yeah, the latter
22:14:26  <tjfontaine>your console will hate you if you strace gdb
22:14:30  <superjoe>haha
22:14:47  <superjoe>I think it hates me anyway
22:15:15  <tjfontaine>stracing node is bad enough, stracing gdb which is debugging node GLWAT
22:16:05  <superjoe>whoa, somehow vim has highlighting for the output of this file
22:16:16  <trevnorris>yup. vim rocks!
22:18:12  <superjoe>http://superjoe.s3.amazonaws.com/temp/strace-out
22:18:17  <superjoe>sorry for the weird hosting. it's 3 MB
22:22:52  <wolfeida_>heya, still digging into these memory leaks. does anyone know if there is an issue with HTTPS requests leaking buffers atm? I have narrowed my previous issues down to a small example and it still exhibits a leak of buffers..
22:22:54  * wwicks quit (Ping timeout: 264 seconds)
22:23:00  * wwicks_ joined
22:23:05  <trevnorris>superjoe: sorry, can you do that again using -f ?
22:23:09  <trevnorris>wolfeida_: what node version?
22:23:26  <wolfeida_>v0.10.18 on smartos
22:23:51  <wolfeida_>I have documented my trivial sample with output of MDB in https://gist.github.com/wolfeidau/6875651
22:23:58  <trevnorris>wolfeida_: both the tcp part of http and tls use their own slab allocators.
22:24:06  <trevnorris>so yeah, possibility of memory leak.
22:24:15  <wolfeida_>OMG
22:24:41  <wolfeidau>Yeah i have been beating my head against this issue for a couple of weeks
22:24:47  <superjoe>trevnorris, yes one moment
22:25:11  <wolfeidau>I found a ton of leaks in our code but just couldn't get rid of the one relating to this HTTPS call to an upstream api
22:25:21  <tjfontaine>wolfeida_: oh you have a test case wee
22:25:28  <tjfontaine>wolfeida_: and of course tls is on the scene
22:25:33  <trevnorris>wolfeida_: does this run against latest master?
22:25:47  <wolfeida_>It should unless request doesn't
22:26:02  <wolfeida_>All i do is just use request to make a call nothing fancy in that GIST
22:26:37  <superjoe>trevnorris, I uploaded it to the same URL: http://superjoe.s3.amazonaws.com/temp/strace-out
22:26:40  <wolfeida_>I tried 0.10.20 on my mac and it still kept growing memory wise
22:26:50  <superjoe>trevnorris, I have to go AFK in 10 minutes. shall I file an issue?
22:26:55  <tjfontaine>I bet we can track this down to just https module
22:27:05  * vptr quit (Ping timeout: 245 seconds)
22:27:15  <trevnorris>superjoe: not yet.
22:27:48  <wolfeidau>tjfontaine: After our last chat i got rid of all the things i knew of object-wise, then i was dumbfounded it still leaked
22:28:01  <wolfeidau>Then i turned off each of our upstream apis
22:28:12  <wolfeidau>And voila, no HTTPS == no leak
22:28:23  <wolfeida_>So i made a very trivial example to test it
22:28:59  <wolfeida_>Is there anything in this gist that looks totally wrong https://gist.github.com/wolfeidau/6875651 ?
22:29:16  * wolfeida_ changed nick to wolfeidau
22:29:27  <wolfeidau>I even named all my closures!
22:29:27  * paddybyers joined
22:30:17  <trevnorris>wolfeidau: can you post some output of the memory growth, and possibly how long it took to get there?
22:30:45  <wolfeidau>trevnorris: yeah that is at the top of comment in the gist
22:31:08  <wolfeidau>I started it and went and got a coffee came back and it was at 172M rss
22:31:23  <tjfontaine>I don't see that comment btw
22:31:28  <trevnorris>ditto
22:31:36  <wolfeidau>sorry
22:31:41  <wolfeidau>Just had to click
22:31:44  <wolfeidau>Damn comment editor
22:31:50  <wolfeidau>See it now?
22:31:53  <superjoe>gotta run - catch you later trevnorris
22:31:55  * superjoe quit (Quit: Leaving)
22:32:01  <tjfontaine>I bet this will be a native leak
22:32:16  <wolfeidau>Sorry about that i prepared a full analysis then didn't click the freaking button hard enough
22:32:49  <trevnorris>tjfontaine: could be related to that http leak I was investigating.
22:33:14  <tjfontaine>wolfeidau: add setInterval(gc, 1000); then run: UMEM_DEBUG=default node --expose-gc script.js
22:33:24  <tjfontaine>then 3 cores, and put them somewhere where I can play :)
22:33:36  <wolfeidau>tjfontaine: np mate
22:33:38  <wolfeidau>will do
22:33:43  <tjfontaine>presuming
22:33:46  <tjfontaine>well
22:33:52  <tjfontaine>there are credentials involved ...
22:34:26  <wolfeidau>tjfontaine: I can get you them
22:34:36  <wolfeidau>It is just a demo account from a site
22:34:39  <tjfontaine>ok
22:34:41  <wolfeidau>Anyone can sign up
22:34:49  <wolfeidau>Or i can give you mine
22:34:49  <tjfontaine>ok if they're not special then I can see the cores :)
22:35:05  <tjfontaine>I was just worried about seeing the cores with credentials that I shouldn't have
22:35:33  <wolfeidau>tjfontaine: yeah all good i have a set of creds just for this test on another account :)
22:35:41  <tjfontaine>:)
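[editor's note: a stripped-down sketch of the kind of harness tjfontaine is asking for, using core https instead of request plus the setInterval(gc, 1000) trick from above; the URL is a placeholder for wolfeidau's upstream API. Run as: UMEM_DEBUG=default node --expose-gc leak-repro.js, then gcore the process at intervals:]

    // leak-repro.js
    var https = require('https');

    // gc() only exists with --expose-gc; force a full collection every second so
    // any growth that survives GC stands out in the cores
    setInterval(gc, 1000);

    setInterval(function () {
      https.get('https://example.com/metrics', function (res) {   // placeholder URL
        res.resume();   // drain the response so the parser and socket can be released
      }).on('error', function (err) {
        console.error('request failed:', err.message);
      });
    }, 1000);

    setInterval(function () {
      console.log('rss:', process.memoryUsage().rss);
    }, 10000);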
22:39:18  <trevnorris>oh, hate that this strace output doesn't make more sense to me.
22:39:24  <trevnorris>wish I could borrow ben's brain for 10 mins.
22:39:42  * julianduque quit (Ping timeout: 256 seconds)
22:39:48  <trevnorris>tjfontaine: so, that crash that super was talking about. strace shows the pid mmaping, then sigabrt.
22:42:39  * wwicks joined
22:43:26  * inolen quit (Ping timeout: 264 seconds)
22:43:49  * wwicks_ quit (Ping timeout: 248 seconds)
22:44:25  * julianduque joined
22:46:28  * inolen joined
22:48:04  <wolfeidau>tjfontaine: Interesting, the process hovered around 35m of ram for 5 minutes then jumped to 83m, wth
22:48:17  * julianduque quit (Read error: Connection reset by peer)
22:51:40  * julianduque joined
22:52:00  * hz quit
22:52:55  * vptr joined
22:54:02  <trevnorris>ircretary: tell superjoe you might ask bnoordhuis to look at that strace output.
22:54:03  <ircretary>trevnorris: I'll be sure to tell superjoe
22:56:54  <tjfontaine>wolfeidau: erm
22:57:11  <tjfontaine>trevnorris: well, that could be unrelated, depends on the threads
22:57:32  <trevnorris>tjfontaine: yeah, well, when it crashes there are 43 threads. so, um yeah.
22:57:47  <wolfeidau>tjfontaine: *shrug
22:57:59  <wolfeidau>But it is a lot more stable ram-wise using full gc
22:58:01  <tjfontaine>trevnorris: 43 o0
22:58:13  <trevnorris>yeah, right?
22:58:21  <tjfontaine>wolfeidau: right, I would expect we need to let this run for a while to catch the real culprit
22:58:30  * TooTallNate quit (Quit: ["Textual IRC Client: www.textualapp.com"])
22:59:12  <wolfeidau>tjfontaine: yeah agreed and i am not keen on running a full gc every second on a production process :P
22:59:57  <tjfontaine>wolfeidau: haha no, *but* you could tune max_old_space_size down to convince it to run more often
23:00:17  <tjfontaine>or other similar tactics
23:00:47  <wolfeidau>tjfontaine: yeah seems a bit silly that if i use HTTP i don't need to do that :(
23:00:52  <tjfontaine>well
23:01:17  <tjfontaine>consider that v0.10 tls could be generating a lot of extra js garbage that just needs cleaned up more often
23:01:20  * vptr quit (Ping timeout: 248 seconds)
23:01:32  <tjfontaine>if we can identify those pieces, it benefits everyone
23:01:55  <wolfeidau>yeah i am all for finding this issue
23:02:42  <tjfontaine>but it may also be that node_crypto has a leak, or https/tls.js has a leak
23:02:48  * wwicks_ joined
23:02:55  <tjfontaine>or if we're lucky, and everyone else in the world is unlucky request has a leak :)
23:03:37  <wolfeidau>tjfontaine: Yeah i tried substack's hyperquest as well and it seemed to have the same behaviour
23:03:43  <tjfontaine>ok good
23:03:45  <tjfontaine>well
23:03:50  <tjfontaine>good for the world, bad for node :P
23:04:25  <wolfeidau>haha well if we fix node we fix the world!
23:04:26  * wwicks quit (Ping timeout: 264 seconds)
23:04:27  * wwicks_ changed nick to wwicks
23:04:30  <tjfontaine>indeed :P
23:04:40  <trevnorris>tjfontaine: remember when I was investigating the http leak, I found that it was fixed by the upgrade to 3.20?
23:04:48  <trevnorris>still don't know why though
23:04:58  <tjfontaine>trevnorris: right, for certain definitions of fixed
23:05:09  <tjfontaine>trevnorris: in this case though, it's difficult to get an apples to apples comparison
23:05:27  <tjfontaine>I think anyway, need to check my order of operations on when 3.20 landed
23:05:55  <tjfontaine>ah tlswrap is after 3.20
23:06:09  * paddybyers quit (Quit: paddybyers)
23:07:07  <tjfontaine>hmm or did it
23:07:17  <tjfontaine>nope
23:07:35  <tjfontaine>it happened just before 3.20, sucks.
23:09:08  * AvianFlu joined
23:10:54  <tjfontaine>wolfeidau: how difficult would it be for you to implement this in terms of https only? are you only using request for the post work?
23:11:22  <tjfontaine>at least that's all it seems like you're using it for
23:11:31  <wolfeidau>tjfontaine: yeah i can just use HTTP, using request because that is what my upstream library uses
23:12:07  <tjfontaine>right, if/when we file this we'll want to remove request
23:12:12  <wolfeidau>Looks like this process is stable with full GC
23:12:30  <tjfontaine>I would just let it run for a while
23:12:41  <wolfeidau>Yeah i am happy to leave it going :)
23:13:06  <tjfontaine>can you set it to gcore in longtime and then longtime*2?
23:13:08  <tjfontaine>:)
23:14:03  <wolfeidau>yeah I will drop a core an hour for a few hours
23:14:36  <tjfontaine>excellent, wolfeidau++
23:16:48  <tjfontaine>hueniverse: speaking of hourly cores, were you guys able to glean any information from the ones you guys were doing
23:17:25  <tjfontaine>hueniverse: I'm also interested to know if tls is on the scene for you as well
23:19:23  * kaeso quit (Ping timeout: 248 seconds)
23:21:53  * kaeso joined
23:23:18  * wwicks_ joined
23:25:01  * wwicks quit (Ping timeout: 245 seconds)
23:25:01  * wwicks_ changed nick to wwicks
23:25:51  <wolfeidau>tjfontaine: Added comment with what is coming out of mdb atm https://gist.github.com/wolfeidau/6875651
23:26:36  <tjfontaine>ya I can see some definite tls related objects that are being reaped sooner
23:31:57  <wolfeidau>my requests are TINY as well atm
23:32:09  <tjfontaine>are they generally not tiny in production?
23:32:18  <wolfeidau>noper tiny in prod
23:32:35  <tjfontaine>k, you could also use --gc_interval which specifies the number of allocations that will trigger a gc
23:32:36  <wolfeidau>It is just time series data so a few points and a timestamp
23:32:41  <tjfontaine>nod
23:33:43  <wolfeidau>Yeah atm I am just getting it to hit a reverse proxy which does the http -> https for us
23:34:16  <wolfeidau>This is an interum measure just while i work on this
23:35:21  <wolfeidau>But yeah you can see the diff in those mdb dumps eh
23:35:44  <hueniverse>tjfontaine: still working on it. we have storage issues in production so hopefully soon.
23:36:39  <wolfeidau>I am loving MDB has totally explained a ton of issues we were seeing
23:36:40  <tjfontaine>hueniverse: alright, just wanted to make sure I didn't drop your request
23:37:02  <tjfontaine>wolfeidau: me too :)
23:37:22  <wolfeidau>tjfontaine: It would be nice if it had command history :)
23:37:39  <wolfeidau>I have typed ::findjsobjects ! sort -k2 about 50 times
23:37:49  <tjfontaine>hehe
23:40:10  <hueniverse>tjfontaine: you still owe me a 0.11 build
23:40:29  <tjfontaine>hueniverse: yup, that is tomorrow, now that I have the zlib bug fixed
23:40:31  <tjfontaine>well
23:40:53  <tjfontaine>bnoordhuis has a fix for it, but I am pretty much giving up on finding a reproducible test case
23:43:47  * wwicks_ joined
23:45:51  * wwicks quit (Ping timeout: 245 seconds)
23:45:51  * wwicks_ changed nick to wwicks
23:47:59  <wolfeidau>tjfontaine: Why on earth would node sit on 84m of ram doing 1 https / second this has me stumped
23:48:24  <wolfeidau>Is it something to do with zlib + tls
23:49:25  * M28 quit (Remote host closed the connection)
23:49:39  * M28 joined
23:50:29  <tjfontaine>wolfeidau: probably, gist the output of pmap <pid>
23:51:56  <wolfeidau>tjfontaine: Added to https://gist.github.com/wolfeidau/6875651
23:55:33  <trevnorris>tjfontaine: we got morning meeting tomorrow I assume?
23:55:48  <tjfontaine>yes
23:55:53  <tjfontaine>izs should be there as well :)
23:56:33  <trevnorris>heh, cool.
23:59:21  <MI6>libuv-master-gyp: #213 FAILURE windows-x64 (3/196) linux-ia32 (1/195) windows-ia32 (4/196) http://jenkins.nodejs.org/job/libuv-master-gyp/213/