00:15:42  * piscisaureus_joined
00:17:31  * piscisaureus_quit (Client Quit)
00:17:55  * pfox___joined
00:27:19  * isaacsjoined
00:38:52  * c4miloquit (Ping timeout: 256 seconds)
00:58:26  * isaacsquit (Remote host closed the connection)
01:01:11  * bnoordhuisquit (Read error: Operation timed out)
01:01:52  * mikealjoined
01:04:25  * c4milojoined
01:47:35  * c4miloquit (Ping timeout: 252 seconds)
01:51:12  * loladirojoined
01:57:33  * abraxasjoined
02:30:10  * mikealquit (Quit: Leaving.)
02:45:58  * loladiropart
02:46:06  * loladirojoined
03:04:41  * saghulquit (Ping timeout: 244 seconds)
03:06:00  * saghuljoined
03:44:13  * mikealjoined
03:46:02  * mikealquit (Client Quit)
04:02:55  * mikealjoined
04:09:03  * mikealquit (Quit: Leaving.)
04:23:54  * loladiroquit (Quit: loladiro)
04:33:27  * mikealjoined
05:04:52  * avalanche123joined
05:15:57  * dshaw_joined
05:40:12  * pfox___quit (Ping timeout: 244 seconds)
06:12:58  * avalanche123quit (Quit: Computer has gone to sleep.)
06:16:43  * orlandovftwjoined
06:23:14  * rendarjoined
06:42:44  * stephankquit (Quit: *Poof!*)
07:04:06  * irajoined
07:40:18  * paddybyersjoined
08:01:44  * rendarquit
08:04:18  * rendarjoined
08:17:37  * paddybyersquit (Quit: paddybyers)
08:24:59  * dshaw_quit (Quit: Leaving.)
09:43:25  * orlandovftwquit (Ping timeout: 260 seconds)
10:23:25  * iraquit (Quit: Computer has gone to sleep.)
11:48:25  * irajoined
12:06:07  * abraxasquit (Remote host closed the connection)
12:06:42  * abraxasjoined
12:11:07  * abraxasquit (Ping timeout: 260 seconds)
12:19:24  * piscisaureus_joined
12:26:38  * piscisaureus_quit (Ping timeout: 256 seconds)
12:27:39  * piscisaureus_joined
12:29:49  * loladirojoined
13:01:50  * c4milojoined
13:08:08  * mmalecki[away]changed nick to mmalecki
13:08:26  * paddybyersjoined
13:08:41  * pfox___joined
13:19:14  * mrb_bkquit (Ping timeout: 252 seconds)
13:27:17  * mrb_bkjoined
13:54:11  * isaacsjoined
13:59:29  <isaacs>Good morning heroes.
14:02:27  <mmalecki>morning, isaacs
14:02:42  <mmalecki>isaacs: hey, you mentioned you want a node-based test runner, right?
14:07:25  * c4milochanged nick to c4milo|shower
14:07:43  * c4milo|showerchanged nick to c4milo|breakfast
14:21:27  <isaacs>mmalecki: it'd be nice, but it's not a priority.
14:22:10  <isaacs>mmalecki: it'd have to support everything our current test runner does.
14:22:40  <isaacs>mmalecki: and also, ideally make it easier to copy and paste results into chat.
14:22:56  <isaacs>mmalecki: like, maybe after a failure, print out all the failing test names again
14:23:10  <mmalecki>isaacs: I can do this in my spare time, I've written a few test runners already
14:23:21  <isaacs>and it has to know, for example, how to run all the tests in test/gc/ with --expose-gc
14:23:57  <isaacs>and it must be tiny, and not require changing any of the tests themselves.
14:24:12  <isaacs>(though, of course, the testCfg.py files could be replaced with a json or something)
14:25:39  * loladiroquit (Quit: loladiro)
14:27:50  <mmalecki>isaacs: makes sense. I bet it can't use any npm modules?
14:28:34  * paddybyersquit (Quit: paddybyers)
14:30:31  <isaacs>mmalecki: nope, not for tests
14:30:39  <isaacs>mmalecki: and should be rather tiny.
14:31:20  <isaacs>mmalecki: if it does use any npm modules, they need to be small and bundled (like how marked is used for doc generation)
14:31:21  <mmalecki>isaacs: ok. is there any folder which makes sense to run in parallel?
14:31:43  <mmalecki>(just trying to find out feature set needed)
14:32:00  <isaacs>mmalecki: no, our tests don't work in parallel.
14:32:15  <isaacs>for example, they all listen on the same port
14:32:37  <mmalecki>yeah, I know that, I thought about GC tests or something
14:32:39  <mmalecki>so it's cool
14:32:52  <mmalecki>I can probably put this one together by the end of this week
14:34:05  * loladirojoined
14:34:32  <mmalecki>probably faster, but I'm bad at estimations
14:35:09  * bnoordhuisjoined
14:35:12  <isaacs>the GC tests also can't run in parallel
14:35:19  <isaacs>actually, parallel tests are usually a bad idea, imo
14:35:22  <isaacs>harder to debug failures.
14:35:46  <mmalecki>yeah, usually. ok, so we don't need this feature at all, cool
14:36:06  <isaacs>mmalecki: also, it needs to create a tmp dir and delete it for each test
14:37:08  <mmalecki>isaacs: ok, will do
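What isaacs is describing amounts to: walk the test directories, run every test serially under node (adding --expose-gc for anything under test/gc/), give each test a fresh tmp directory, and reprint the failing test names at the end so they paste cleanly into chat. A minimal sketch along those lines, in plain node with no npm modules - the directory list, tmp path and gc-detection regex are assumptions, and this is not the runner that was actually written:

    // run-tests.js - hedged sketch of a tiny serial test runner
    var fs = require('fs');
    var path = require('path');
    var spawn = require('child_process').spawn;

    var dirs = ['test/simple', 'test/gc'];              // assumed layout
    var tmpDir = path.join('test', 'tmp');               // scratch dir, recreated per test
    var failures = [];

    function rmrf(p) {                                    // naive recursive delete
      try {
        fs.readdirSync(p).forEach(function (f) { rmrf(path.join(p, f)); });
        fs.rmdirSync(p);
      } catch (e) {
        try { fs.unlinkSync(p); } catch (e2) { /* nothing to delete */ }
      }
    }

    function collect() {                                  // gather *.js tests from each dir
      var files = [];
      dirs.forEach(function (d) {
        fs.readdirSync(d).forEach(function (f) {
          if (/\.js$/.test(f)) files.push(path.join(d, f));
        });
      });
      return files;
    }

    function run(files, done) {                           // serial on purpose: the tests share ports
      if (files.length === 0) return done();
      var file = files.shift();
      rmrf(tmpDir);
      fs.mkdirSync(tmpDir);                               // fresh tmp dir for every test
      var args = /[\/\\]gc[\/\\]/.test(file) ? ['--expose-gc', file] : [file];
      var child = spawn(process.execPath, args, { stdio: 'inherit' });
      child.on('exit', function (code) {
        if (code !== 0) failures.push(file);
        run(files, done);
      });
    }

    run(collect(), function () {
      failures.forEach(function (f) { console.error('FAIL ' + f); });  // paste-into-IRC friendly
      process.exit(failures.length === 0 ? 0 : 1);
    });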
14:55:16  * c4milo|breakfastchanged nick to c4milo
15:07:01  * mikealquit (Quit: Leaving.)
15:08:05  * mikealjoined
15:13:00  * mikealquit (Ping timeout: 260 seconds)
15:17:26  * pfox___changed nick to pfox__
15:22:03  * hij1nxjoined
15:22:15  * mikealjoined
15:22:50  * mikeal1joined
15:22:51  * mikealquit (Read error: Connection reset by peer)
15:33:17  * hij1nxquit (Quit: hij1nx)
15:38:06  * pfox__quit (Quit: leaving)
15:39:54  * hij1nxjoined
15:41:35  * pfox__joined
15:42:11  * philipsquit (Excess Flood)
15:44:18  * philipsjoined
15:44:56  * mmaleckichanged nick to mmalecki[away]
15:50:26  * pieternjoined
15:51:12  * pieternquit (Client Quit)
15:54:21  * paddybyersjoined
15:54:34  * ericktjoined
15:56:35  * dapjoined
15:57:08  * stephankjoined
16:10:20  * pieternjoined
16:16:50  * hij1nxquit (Quit: hij1nx)
16:21:24  * orlandovftwjoined
16:22:31  * ericktquit (Quit: erickt)
16:24:14  <piscisaureus_>socket.bytesWritten is lame
16:24:16  <piscisaureus_>:-(
16:24:39  * benvie_quit (Remote host closed the connection)
16:26:51  * TooTallNatejoined
16:33:30  * loladiroquit (Remote host closed the connection)
16:34:32  <indutny>piscisaureus_: really?
16:34:43  <piscisaureus_>yeah
16:34:45  <indutny>piscisaureus_: cause I use them in one pretty private place
16:34:54  * loladirojoined
16:34:59  <indutny>piscisaureus_: is there something wrong with it?
16:35:02  * loladiroquit (Remote host closed the connection)
16:35:14  <piscisaureus_>indutny: ummm... I don't wanna know what you do with it :-)
16:35:18  * loladirojoined
16:35:23  <indutny>piscisaureus_: well, I use it! :)
16:35:40  <indutny>piscisaureus_: for counting bytes that were written :P
16:35:40  <piscisaureus_>indutny: writing strings to sockets can be optimized seriously by not constructing a buffer every time
16:35:47  <indutny>aaah
16:36:01  <piscisaureus_>indutny: just spatting it into a malloced buffer is much faster
16:36:04  <indutny>piscisaureus_: we can try to introduce LazyBuffer
16:36:10  <piscisaureus_>indutny: but since net.js uses this queue
16:36:20  <indutny>piscisaureus_: which is just a JS wrapper for string
16:36:24  <piscisaureus_>indutny: we have to compute the byte length of a string upfront
16:36:32  <indutny>piscisaureus_: hm...
16:36:34  <indutny>piscisaureus_: yep
16:36:53  <indutny>piscisaureus_: anyway, LazyBuffer may improve performance
16:37:15  <piscisaureus_>indutny: yeah maybe, but this improves performance more
16:37:24  <indutny>piscisaureus_: what this?
16:37:28  <indutny>s/this/`this`
16:39:30  <piscisaureus_>indutny: I optimize it by
16:39:31  <piscisaureus_>1. don't construct a buffer. just use a malloced buffer.
16:39:31  <piscisaureus_>2. the buffer is allocated in the same memory area that holds the reqwrap
16:39:31  <piscisaureus_>3. I conservatively allocate space so (a) we don't have to do a preliminary encoding pass to calculate the buffer size and (b) to allow v8 to eliminate overflow checks.
16:39:55  <piscisaureus_>indutny: the only problem is... we only know the amount of bytes written after WriteWrap is constructed
16:40:19  <indutny>oh crap
16:40:26  <indutny>this's tough
16:40:35  <indutny>does it really hit us? :)
16:40:59  <indutny>because if it doesn't - I prefer not to touch it
16:41:00  <piscisaureus_>indutny: So I am thinking of 2 options.
16:41:46  <piscisaureus_>indutny: (1) we construct the ReqWrap ahead of time but only dispatch it when the time comes. The downside - the patch is invasive and now 2 js/c transitions are required per write.
16:42:11  <piscisaureus_>indutny: (2) make bytesWritten a getter and do some hackery to compute the size of unsubmitted writes.
16:42:32  <piscisaureus_>indutny: the downside - bytesWritten will be slow(er)
16:42:38  <indutny>well
16:42:51  <indutny>I'm quite sure it won't hit anyone
16:42:59  <indutny>may be make it a function?
16:43:08  <piscisaureus_>that's an api change...
16:43:12  <indutny>yep
16:43:17  <indutny>getters work too
16:43:27  <indutny>but I think methods are quite a bit faster
16:43:30  <piscisaureus_>or we could say that a write is only accounted for after the write callback is made
16:43:42  <piscisaureus_>but that may not be so nice
16:43:44  <indutny>btw, that's even better
16:43:57  <piscisaureus_>right now it's unspecified
16:44:01  <indutny>indeed
16:44:35  <indutny>we ain't relying on it
16:45:02  <piscisaureus_>the question is - who is
16:45:13  <piscisaureus_>I wish I could grep the npm repo
16:47:03  <isaacs>piscisaureus_: i think making bytesWritten a getter is probably fine
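Option (2) - making bytesWritten a getter - keeps the property API and only pays when somebody actually reads it. Roughly, and assuming a hypothetical _bytesDispatched counter plus a queue of not-yet-submitted writes (the real net.js fields may look different), it would be something like:

    // Hedged sketch of bytesWritten as a getter on net.Socket.
    // _bytesDispatched and _pendingWrites are assumed internal fields,
    // not the actual implementation.
    Object.defineProperty(Socket.prototype, 'bytesWritten', {
      get: function () {
        var bytes = this._bytesDispatched || 0;          // bytes already handed to the handle
        (this._pendingWrites || []).forEach(function (w) {
          bytes += Buffer.isBuffer(w.chunk)
            ? w.chunk.length
            : Buffer.byteLength(w.chunk, w.encoding);    // strings only get measured here, lazily
        });
        return bytes;
      }
    });

The slow part (Buffer.byteLength over queued strings) only runs when the property is read, which is exactly the trade-off being discussed.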
16:53:43  * ericktjoined
16:53:47  * pieternquit (Quit: pietern)
16:55:39  * mikeal1quit (Quit: Leaving.)
16:56:21  * mikealjoined
16:56:52  * mikeal1joined
16:56:52  * mikealquit (Read error: Connection reset by peer)
16:57:50  <piscisaureus_>isaacs: ok. Although I still think bytesWritten is lame atm :-p
16:58:30  <indutny>hahah
16:58:32  <indutny>but it's useful
16:58:59  * loladiro_joined
17:00:16  * TheJHjoined
17:00:25  * hij1nxjoined
17:01:34  * loladiroquit (Ping timeout: 255 seconds)
17:01:35  * loladiro_changed nick to loladiro
17:03:06  <isaacs>piscisaureus_: yeah, it's kinda useful in some edge case scenarios.
17:03:11  <isaacs>piscisaureus_: but really, yeah... dumb.
17:03:31  <isaacs>the nice thing about a getter is that we can make our benchmarks faster while making node actually slower.
17:03:39  <isaacs>so that's always fun and ironic.
17:05:35  * loladiroquit (Quit: loladiro)
17:07:06  <piscisaureus_>does a getter make node slower in general?
17:07:18  <piscisaureus_>what exactly is affected by the presence of a getter?
17:07:41  * c4miloquit (Read error: Connection reset by peer)
17:07:53  * c4milojoined
17:14:50  * igorzijoined
17:16:40  * orlandovftwquit (Ping timeout: 260 seconds)
17:17:39  <bnoordhuis>were you guys talking about .bytesWritten?
17:17:53  <bnoordhuis>i hate that property, adding it was a bad idea
17:19:20  * pieternjoined
17:20:31  <piscisaureus_>yeah I hate it too
17:20:33  <piscisaureus_>it's super lame
17:20:51  <piscisaureus_>What I dislike especially is that it takes unwritten bytes into account
17:21:55  <bnoordhuis>the thing i dislike the most is that it got added because "yeah, it might come in handy"
17:22:06  <bnoordhuis>i wouldn't mind axing it altogether
17:23:32  <isaacs>piscisaureus_: getters don't slow things down unless you get them
17:24:19  <isaacs>bnoordhuis: can you email nodejs-dev saying that you want to remove it, because it's a pita, incorrect, and broken?
17:24:25  <isaacs>bnoordhuis: then we can see who complains, and remove it anyway.
17:24:31  <bnoordhuis>isaacs: sure thing
17:24:48  <isaacs>it will have to be documented on the wiki if we do that
17:24:57  <isaacs>but you never know, there might be some super valid reason to make it worth fixing rather than axing
17:25:10  <isaacs>doubtful, but worth checking
17:25:25  <bnoordhuis>we'll find out :) i'll mail nodejs-dev
17:25:35  <isaacs>thanks
17:25:55  <isaacs>it's easy enough to implement as a decoration later anyway
17:26:38  <isaacs>res.write = function (o) { return function (c) { res.bytesWritten += Buffer.byteLength(c); return o.call(res, c) } }(res.write)
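Written out, that decoration is just a user-land wrapper that counts whatever gets passed to write() - a sketch, with the caveat that it counts bytes queued, not bytes actually flushed to the kernel:

    // Hedged sketch: user-land bytesWritten if the core property is ever axed.
    function countBytesWritten(socket) {
      var write = socket.write;
      socket.bytesWritten = 0;
      socket.write = function (chunk, encoding, cb) {
        socket.bytesWritten += Buffer.isBuffer(chunk)
          ? chunk.length
          : Buffer.byteLength(chunk, encoding);          // measure strings up front
        return write.call(socket, chunk, encoding, cb);
      };
      return socket;
    }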
17:34:40  * iraquit (Quit: Leaving...)
17:43:17  <piscisaureus_>isaacs: when did you find that security issue ?
17:44:12  <isaacs>April 17th
17:46:15  <isaacs>piscisaureus_: though, technically, i didn't find it
17:46:27  <isaacs>oh, also, i think 0.6.16 actually has the fix
17:46:34  <isaacs>but 0.6.16 is not as awesome as 0.6.17
17:46:35  <piscisaureus_>isaacs: yeah. I actually just found the blog :-)
17:46:56  <piscisaureus_>isaacs: I was already wondering why that took so long :-)
17:47:06  <piscisaureus_>but apparently it was fixed in 0.6 already
17:47:29  <isaacs>yeah, we fixed it in the code right away. matt pretty much told us exactly what to do
17:47:37  <isaacs>and it was a one-char fix
17:50:13  <piscisaureus_>Anyone wants a nodeconf ticket?
17:50:22  <piscisaureus_>I tried the website for fun and I actually got one
17:52:00  <piscisaureus_>Well, I suppose I'll let it fall
17:52:48  * pietern_joined
17:54:54  * pieternquit (Ping timeout: 244 seconds)
17:54:54  * pietern_changed nick to pietern
17:55:59  * brsonjoined
18:01:28  * pieternquit (Quit: pietern)
18:04:34  * orlandovftwjoined
18:18:58  <indutny>bnoordhuis: I complain
18:19:17  <indutny>bnoordhuis: we use it internally
18:19:33  <bnoordhuis>indutny: tell me for what and i'll tell you what you're doing wrong :)
18:19:41  <indutny>bnoordhuis: hahaha
18:19:52  <indutny>bnoordhuis: well, we can calculate that stuff manually
18:19:58  <indutny>bnoordhuis: bandwidth statistic
18:20:47  <bnoordhuis>indutny: that's something of a special case
18:21:02  <bnoordhuis>the problem with .bytesWritten is that it gets in the way of optimizing for the common case
18:21:22  <bnoordhuis>also, it was broken until v0.6 - it sometimes didn't record bytes but chars written..
18:21:37  <indutny>yes, I know
18:21:44  <indutny>we're on 0.6.x
18:22:02  <isaacs>bnoordhuis, indutny: really, this is something that should be tracked at a much lower level using dtrace or something anyway.
18:22:04  <indutny>as I said it can be implemented in user-land
18:22:16  <isaacs>if you want to track bandwidth stats, doing so at the app level is a bit odd.
18:22:21  <isaacs>and probably wrong.
18:22:29  <indutny>isaacs: that really depends
18:22:37  <indutny>isaacs: having all logic in one place is good too
18:22:40  <isaacs>true that
18:23:07  <isaacs>indutny: you can use node-dtrace-provider to create arbitrary app-specific probes, and then aggregate them at run-time
18:23:30  <indutny>I don't really care, actually :P
18:23:42  <indutny>if you think it's wasting our resources - I'm ok with removing it
18:28:40  * dshaw_joined
18:31:57  * `3rdEdenjoined
18:36:49  * arlolrajoined
18:37:09  * hij1nxquit (Ping timeout: 260 seconds)
18:37:51  * mikeal1quit (Quit: Leaving.)
18:39:01  * loladirojoined
18:39:55  * loladiroquit (Client Quit)
18:44:28  * pieternjoined
18:48:03  <piscisaureus_>We could also put the connect queue in libuv
18:54:12  * mikealjoined
19:06:43  <bnoordhuis>piscisaureus_: https://github.com/joyent/libuv/issues/409 <- how can that happen?
19:07:22  * loladirojoined
19:07:41  <piscisaureus_>bnoordhuis: hmm, that's odd. Is he improperly reusing write reqs?
19:07:46  <piscisaureus_>or clobbering memory?
19:08:27  <bnoordhuis>piscisaureus_: i am not the one you should be asking these questions of:)
19:16:18  * mjr_joined
19:20:22  <piscisaureus_>bnoordhuis: unix has readonly?
19:20:39  <piscisaureus_>bnoordhuis: btw "man chattr" suggests there are more
19:20:53  <bnoordhuis>piscisaureus_: chattr is fs specific
19:21:30  <piscisaureus_>bnoordhuis: I also couldn't find a man 2/3 chattr
19:21:37  <piscisaureus_>bnoordhuis: is there a ioctl for this?
19:23:03  <bnoordhuis>piscisaureus_: not an ioctl but separate syscalls like getxattr and setxattr
19:23:19  <bnoordhuis>they're not well documented but you can find them in the kernel source
19:23:45  <bnoordhuis>grep SYSCALL fs/xattr.c
19:24:31  <piscisaureus_>bnoordhuis: so the primary reason was because I want to allow people to delete files with the rdonly flag set
19:24:40  <piscisaureus_>bnoordhuis: and with the +s flag
19:25:07  <piscisaureus_>bnoordhuis: people have been complaining about it for node. Apparently you're supposed to be able to delete files on unix even if you cannot write them
19:25:17  <bnoordhuis>piscisaureus_: deleting read-only files just works on unices :)
19:25:22  * mikealquit (Quit: Leaving.)
19:25:34  <piscisaureus_>bnoordhuis: so at first I wanted to remove the +r flag automatically when the user attempts to unlink
19:25:42  <bnoordhuis>with +s you mean the sticky bit?
19:25:51  <piscisaureus_>bnoordhuis: but I have second thoughts. No, "system" flag
19:26:04  <bnoordhuis>ah okay, no such concept on unices
19:26:22  <piscisaureus_>bnoordhuis: on windows people set the +r flag on files to make them readonly. That is typically for a reason
19:26:33  <piscisaureus_>bnoordhuis: removing the flag under their ass is kinda rude
19:26:44  <bnoordhuis>yeah, i suppose so
19:26:58  <piscisaureus_>so we need a way to control the flags field
19:27:39  <piscisaureus_>that's why I suggested to add support for it
19:27:44  <piscisaureus_>:-)
19:28:00  <piscisaureus_>so at first I thought it should just be more or less windows specific
19:28:01  * isaacsquit (Remote host closed the connection)
19:28:04  <piscisaureus_>fs.attrib()
19:28:38  <piscisaureus_>but it turns out that unix has chattr and hfs also has some extended attributes that are supported by mac os
19:28:46  <piscisaureus_>so now I don't know what to do anymore :-(
19:28:56  <bnoordhuis>yeah, but uv-unix doesn't use any of that
19:28:58  <piscisaureus_>I wish these things were easier
19:29:01  <bnoordhuis>it's all very os and fs specific
19:30:18  <bnoordhuis>piscisaureus_: i say uv-unix doesn't use any of that
19:30:29  <bnoordhuis>but in general no one uses any of that
19:31:08  <piscisaureus_>http://www.twitter.com/UPC/status/199581891488268289 <-- that took a while
19:32:05  <bnoordhuis>upc's connection to twitter.com probably kept timing out :)
19:32:44  <piscisaureus_>upc is actually pretty bad
19:33:00  <piscisaureus_>their customer service is ok nowadays but their internet delivery
19:33:08  <piscisaureus_>and I thought surfnet had problems ... :x
19:38:48  <piscisaureus_>Our stream_wrap/pipe_wrap/tcp_wrap/tty_wrap implementation is so incredibly lame
19:39:26  <piscisaureus_>I mean, ryah_ actually did it while I only procrastinated, so I probably shouldn't complain
19:40:57  <bnoordhuis>what's so bad about it?
19:41:05  <piscisaureus_>So much duplication of code
19:41:35  <bnoordhuis>how would you improve it?
19:42:11  <piscisaureus_><piscisaureus> well x and y and
19:42:11  <piscisaureus_><bnoordhuis> Go do it!
19:42:11  <piscisaureus_>^-- no thank you, sir
19:43:43  <CIA-155>node: Kevin Gadd master * r1eb9fc5 / doc/api/vm.markdown :
19:43:43  <CIA-155>node: docs: add warning to vm module docs
19:43:43  <CIA-155>node: Add a clear warning about known issues with the module and a pointer to the
19:43:43  <CIA-155>node: GitHub issues list for the module. Describe some of the biggest known issues
19:43:43  <CIA-155>node: with the module. - http://git.io/_Y2grA
19:45:29  <bnoordhuis>ircretary: tell isaacs https://github.com/joyent/node/issues/3231 <- seen this?
19:45:29  <ircretary>bnoordhuis: I'll be sure to tell isaacs
19:45:59  <TooTallNate>bnoordhuis: good timing :p
19:46:25  <bnoordhuis>TooTallNate: i know, right? :)
19:52:30  <piscisaureus_>bnoordhuis: you're the c++ guru here, right?
19:54:07  <piscisaureus_>bnoordhuis: what is the idiomatic way of doing the placement constructor stuff?
19:54:07  <piscisaureus_>wrap = new (sizeof(WriteWrap) + bonus) WriteWrap();
19:54:07  <piscisaureus_>~ or ~
19:54:07  <piscisaureus_>buf = new char[sizeof(WriteWrap) + bonus];
19:54:07  <piscisaureus_>wrap = new (buf) WriteWrap();
19:54:12  <piscisaureus_>bnoordhuis ^-- ?
19:56:59  <bnoordhuis>piscisaureus_: new (ptr) Foo();
19:57:15  <bnoordhuis>new (new char[n]) Foo() works too
19:57:37  <bnoordhuis>you may have to implement operator new() for the class
19:57:45  <piscisaureus_>I know
19:58:42  <piscisaureus_>bnoordhuis: deletion works like this, right?
19:58:42  <piscisaureus_>wrap->~Wrap()
19:58:42  <piscisaureus_>delete[] reinterpret_cast<char[]>(wrap);
19:59:27  <bnoordhuis>piscisaureus_: wrap->~Wrap()
19:59:59  <piscisaureus_>bnoordhuis: I had that one :-)
20:00:14  <piscisaureus_>bnoordhuis: but I have to free the storage after that
20:00:37  <bnoordhuis>piscisaureus_: yeah, cast the wrap pointer to char*
20:00:47  <bnoordhuis>but... why would you use placement new?
20:01:01  <bnoordhuis>if you're going to delete the memory immediately afterwards anyway
20:01:29  <piscisaureus_>bnoordhuis: because I need to control the space
20:01:44  <piscisaureus_>bnoordhuis: I need extra bytes after the Wrap to store stuff
20:02:10  <piscisaureus_>bnoordhuis: oh - I'm not going to delete it right afterwards
20:03:06  <bnoordhuis>okay. about the only time you use placement new is when pooling memory
20:06:14  <piscisaureus_>memegenerator is down :-(
20:06:44  <piscisaureus_>I guess people started working on their nodeconf presentations
20:07:57  * orlandovftwquit (Ping timeout: 252 seconds)
20:08:01  <bnoordhuis>no, i preemptively striked
20:09:45  <piscisaureus_>too late
20:09:46  <piscisaureus_>bnoordhuis:
20:09:50  <piscisaureus_>http://www.quickmeme.com/meme/3p66qt/
20:10:47  <bnoordhuis>ho ho
20:12:29  * orlandovftwjoined
20:12:49  * kohaijoined
20:24:15  * arlolraquit (Quit: Linkinus - http://linkinus.com)
20:26:28  * loladiroquit (Quit: loladiro)
20:50:59  <CIA-155>node: Ben Noordhuis v0.6 * r29232ee / test/simple/test-http-client-timeout.js :
20:50:59  <CIA-155>node: test: add failing HTTP client timeout test
20:50:59  <CIA-155>node: See #3231. - http://git.io/psWO2Q
20:51:02  <piscisaureus_>anyone - does test-child-process-fork2 fail for you guys too?
20:51:30  <bnoordhuis>hmm, maybe i should not have pushed that just yet
20:51:41  <bnoordhuis>piscisaureus_: i think so
20:51:47  <piscisaureus_>alright
20:51:49  <piscisaureus_>good to know
20:52:02  <piscisaureus_>bnoordhuis: you get a DNS error or an assertion error?
20:54:27  <CIA-155>node: Ben Noordhuis v0.6 * re02af94 / test/simple/test-http-client-timeout.js :
20:54:27  <CIA-155>node: test: add failing HTTP client timeout test
20:54:27  <CIA-155>node: See #3231. - http://git.io/z-BwxA
20:54:52  <bnoordhuis>piscisaureus_: oh wait, no - it passes
20:55:09  <bnoordhuis>there was a child-process-fork test failing last night though
20:55:28  <bnoordhuis>$ out/Release/node test/simple/test-child-process-fork3.js
20:55:29  <bnoordhuis>(libev) epoll_wait: Bad file descriptor
20:55:33  <bnoordhuis>^ bad
20:55:37  <piscisaureus_>aiiii
20:55:43  <piscisaureus_>I had fork2 failing tho
20:56:18  <bnoordhuis>seems to consistently pass for me
20:56:54  * irajoined
20:56:54  * iraquit (Client Quit)
20:58:00  <piscisaureus_>ok thanks
20:58:03  <piscisaureus_>vcbuild debug te
20:58:08  <piscisaureus_>er ECHAN
21:00:21  <piscisaureus_>test-cluster-worker-disconnect is failing consistently
21:00:27  <piscisaureus_>or rather, hanging
21:00:50  <bnoordhuis>ircretary: tell isaacs https://github.com/joyent/node/commit/e02af94 <- passes with 0.6.16, fails with v0.6.17
21:00:50  <ircretary>bnoordhuis: I'll be sure to tell isaacs
21:01:25  <bnoordhuis>piscisaureus_: passes for me
21:02:30  <piscisaureus_>test-cluster-worker-kill uses SIGHUP
21:02:36  <piscisaureus_>that aint gonna work...
21:02:48  * paddybyersquit (Quit: paddybyers)
21:03:29  * `3rdEdenquit (Quit: Leaving...)
21:04:18  * isaacsjoined
21:05:15  <piscisaureus_>isaacs is going to be busy
21:05:26  <isaacs>oh?
21:06:12  <isaacs>oh, i see.
21:06:13  <isaacs>grr.
21:06:24  <piscisaureus_>isaacs: node test\simple\test-domain-implicit-fs.js is failing for me. It throws "ENOENT, d:\\node\\this file does not exist."
21:06:33  <bnoordhuis>$ python tools/test.py --mode=release simple
21:06:33  <bnoordhuis>[02:07|% 100|+ 400|- 0]: Done <- master
21:08:13  <isaacs>So: http req.setTimeout
21:08:18  <isaacs>do you ever want the socket after it times out?
21:09:29  <bnoordhuis>isaacs: well... the docs are incomplete in that regard, they don't mention that the req gets destroyed
21:09:50  <isaacs>oh, it's not just that it's destroyed. it's that socket destruction sets the req variable to null.
21:11:09  <isaacs>bnoordhuis: on the server, we auto-destroy if the socket times out
21:11:14  <isaacs>bnoordhuis: this behavior was missing in the client
21:12:48  <bnoordhuis>isaacs: you added that in 0.6.17? that's a change in behavior
21:12:51  * zz_jcechanged nick to jce
21:13:44  <bnoordhuis>right, i see it in c9be1d5f
21:13:52  <isaacs>bnoordhuis: yes, but it also leads to leaking sockets if we don't
21:14:14  <isaacs>the only reason to add a timeout listener is typically to destroy it
21:14:21  <isaacs>(as seen in this bug)
21:14:34  <bnoordhuis>i don't know about that
21:14:57  <bnoordhuis>i can see people uploading stuff in batches
21:15:21  <bnoordhuis>by which i mean: open a connection, upload some, wait some, upload some more, etc.
21:15:22  <isaacs>right, but the timed out socket is not going to be reused between batches
21:15:35  <isaacs>bnoordhuis: so you'd set a timeout listener... why?
21:15:56  <isaacs>bnoordhuis: so you only upload when you've got n seconds of no activity?
21:16:01  <bnoordhuis>yes
21:16:07  <bnoordhuis>i'm not saying that's how i would do it :)
21:16:21  <bnoordhuis>but that's something that used to work and now it doesn't
21:16:33  <isaacs>you can still do that by setting a timeout on the connection directly instead.
21:16:43  <bnoordhuis>sure
21:16:56  <bnoordhuis>my point is that this is a behavioral change in a stable release
21:17:08  <bnoordhuis>which is generally considered a no-no
21:17:12  <isaacs>yes, that's true
21:17:29  <isaacs>we can revert that, but we should not do so in such a way that leaks socket objects.
21:17:38  <isaacs>THAT bit of the behavior was changed on purpose.
21:17:39  <bnoordhuis>i think we can all agree on that :)
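With the destroy-on-timeout behavior reverted, cleaning up an idle client connection is back in the application's hands - e.g. by putting the timeout on the socket directly, as isaacs suggests above. A rough sketch of the "timeout means give up" case (hostname and path are placeholders):

    // Hedged sketch: destroying an idle HTTP client connection yourself.
    var http = require('http');

    var req = http.get({ host: 'example.org', path: '/slow' }, function (res) {
      res.on('data', function (chunk) { /* consume */ });
    });

    req.on('socket', function (socket) {
      socket.setTimeout(5000);            // 5 seconds of inactivity...
      socket.on('timeout', function () {
        socket.destroy();                 // ...then tear the connection down ourselves
      });
    });

    req.on('error', function (err) {      // destroying the socket surfaces here
      console.error('request failed:', err.message);
    });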
21:18:02  <piscisaureus_>test\simple\test-tls-server-verify.js also fails on master for me. Not with 0.6 ...
21:18:52  <CIA-155>node: isaacs reviewme * rb4fbf6d / lib/http.js : Fix #3231. Don't try to emit error on a null'ed req object (+203 more commits...) - http://git.io/VMkDEQ
21:18:54  <isaacs>bnoordhuis: ^
21:20:29  <isaacs>good news! removing the self.destroy() call in emitTimeout doesn't break the gc tests.
21:21:12  <CIA-155>node: isaacs reviewme * r8c758e1 / lib/http.js : Don't destroy on timeout - http://git.io/EDcAzg
21:21:26  <isaacs>running other tests now
21:22:05  <bnoordhuis>[01:05|% 100|+ 341|- 0]: Done <- seems to work okay
21:23:10  <isaacs>[01:28|% 100|+ 345|- 2]: Done
21:23:20  <isaacs>but those two are debugger tests.
21:23:27  * isaacsshakes fist at the debugger
21:24:46  <isaacs>bnoordhuis: lgty?
21:25:49  <bnoordhuis>isaacs: lgtm
21:27:22  <bnoordhuis>hah, i guess TooTallNate has been hacking up my weakref module
21:27:22  <CIA-155>node: isaacs v0.6 * r8c758e1 / lib/http.js : Don't destroy on timeout - http://git.io/EDcAzg
21:27:22  <CIA-155>node: isaacs v0.6 * rb4fbf6d / lib/http.js : Fix #3231. Don't try to emit error on a null'ed req object - http://git.io/VMkDEQ
21:27:34  <bnoordhuis>that code doesn't look anything like i remember it :)
21:27:39  <TooTallNate>bnoordhuis: you only just now found that :p
21:27:54  <TooTallNate>bnoordhuis: i basically added the callback function part
21:28:39  <TooTallNate>which turns out to be really helpful in debugging memory leaks :)
21:29:07  <bnoordhuis>good :)
21:29:38  <TooTallNate>bnoordhuis: in retrospect i probably could have sent you pull requests, but since it looked more like a POC, i wasn't sure if you wanted to maintain it as a whole module
21:30:02  * paddybyersjoined
21:39:35  * dshaw_quit (Read error: Connection reset by peer)
21:43:51  * dshaw_joined
21:54:22  <isaacs>Yes, it's super handy for debugging memory leaks, but it's very easy to get the object trapped in the cb.
21:54:40  <isaacs>weak(obj, function () { neverGonnaLetYouGo() })
21:55:45  * mikealjoined
21:56:38  <TooTallNate>isaacs: ya i had to add a disclaimer to the README recently
21:56:48  <isaacs>TooTallNate: true.
21:57:10  <isaacs>but we've gotten a few issues like, "This and that leaks memory!" when really it's just the weak usage that's leaking
21:58:04  <TooTallNate>yup, yup. i mean for a while i thought v8 was smart enough to know the anonymous function didn't touch the object
21:58:20  <isaacs>nope
21:58:29  <TooTallNate>but once i learned that wasn't the case it was a big eye opener
21:58:42  <isaacs>i also saw a case where someone actually referenced the object in the cb.
21:58:51  <piscisaureus_>bnoordhuis: https://github.com/piscisaureus/node/compare/master...netwrite
21:58:54  <isaacs>weak(obj, function () { console.log('just released', obj) })
21:58:56  <isaacs>like... no.
21:58:59  <isaacs>that doesn't work
21:59:06  <TooTallNate>lol
21:59:07  <piscisaureus_>bnoordhuis: any preliminary comments (benchmark it with string/xxx btw)
21:59:45  <isaacs>TooTallNate: actually...
22:00:19  <piscisaureus_>bnoordhuis: I will fix comments and style later
22:00:52  <isaacs>TooTallNate: you could set the cb as a member of the object or something, in some discreet way
22:01:05  <isaacs>TooTallNate: then assign some other thing as the weakref callback.
22:01:22  <isaacs>TooTallNate: and look at the hidden property to call their callback
22:01:24  <bnoordhuis>piscisaureus_: looking
22:01:40  <isaacs>TooTallNate: does it call into your thing before GC, or after it?
22:02:07  <TooTallNate>isaacs: the MakeWeak callback gets invoked before GC
22:02:12  <isaacs>TooTallNate: right
22:02:27  <TooTallNate>`this` inside the callback is actually the object, if you wanted a clean way to access it
22:02:30  <TooTallNate>log it, etc.
22:02:34  <isaacs>interesting
22:02:47  <TooTallNate>just dont make any new references to it...
22:02:48  <piscisaureus_>bnoordhuis: buffer/10240 is rather nice for me. It bumps from 2288 to 3878 r/s
22:03:00  <isaacs>so, yeah, there's no reason why you couldn't make weak(obj, function () { no traps, please }) work
22:03:11  <isaacs>just have to not keep a persistent ref to the function.
22:03:17  <isaacs>hang the function off the object itself
22:03:31  <bnoordhuis>piscisaureus_: void* operator new (size_t size) { assert(0); }; -> private: void* operator new(size_t); ?
22:03:44  <bnoordhuis>piscisaureus_: iow, just don't implement it
22:03:49  <TooTallNate>isaacs: hmm interesting idea
22:04:18  <piscisaureus_>bnoordhuis: It's just defensive programming. I want to ensure that people don't do "new WriteWrap"
22:04:28  <TooTallNate>isaacs: so the goal would be to allow anonymous function as the weak callback?
22:04:29  <piscisaureus_>bnoordhuis: ... or "delete writeWrap"
22:04:42  <isaacs>TooTallNate: yeah
22:04:51  <bnoordhuis>piscisaureus_: right, but making the operator private and unimplemented will do that at compile-time instead of run-time
22:04:53  <isaacs>TooTallNate: or any function that traps a reference to the object
22:04:58  <isaacs>TooTallNate: anonymous or not
22:05:04  <TooTallNate>right
22:05:10  <isaacs>function foo () { } weak(obj, foo)
22:05:15  <bnoordhuis>piscisaureus_: in your case, your ide will probably complain even before you compile it
22:05:37  <piscisaureus_>bnoordhuis: I tried that but for me it's the opposite. If I declare it but don't implement it, msvc starts whining
22:05:51  <piscisaureus_>bnoordhuis: but I can try again
22:05:55  <TooTallNate>isaacs: so right now it's a Persistent Array, with regular handles to the Functions inside
22:05:56  <TooTallNate>isaacs: https://github.com/TooTallNate/node-weak/blob/master/src/weakref.cc#L30
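The pitfall isaacs describes is easy to reproduce with node-weak: any reference to the target inside the callback's closure keeps the target alive, so the weak callback never fires and the object never goes away. A sketch of the wrong and the workable version, leaning on the `this`-is-the-object detail TooTallNate mentions (hedged - treat node-weak's README as the actual contract):

    var weak = require('weak');   // TooTallNate's node-weak module

    function watch(obj) {
      // WRONG: the closure captures `obj`, so the object can never become
      // garbage and this callback will never run - the "leak" people keep
      // blaming on other modules:
      //
      //   weak(obj, function () { console.log('released', obj); });
      //
      // OK: no outer reference to the target; inside the callback `this`
      // is the object being collected, so it can still be logged.
      weak(obj, function () {
        console.log('released an object with keys:', Object.keys(this).join(', '));
      });
    }

    watch({ big: new Buffer(1024 * 1024) });
    if (typeof gc === 'function') gc();   // run with --expose-gc to watch the callback fire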
22:12:43  * c4miloquit (Ping timeout: 244 seconds)
22:16:06  <bnoordhuis>piscisaureus_: that placement new stuff is avoid an extra alloc, right?
22:16:10  <bnoordhuis>*to avoid
22:16:15  <piscisaureus_>bnoordhuis: yup
22:16:32  <bnoordhuis>piscisaureus_: have you measured what the actual impact of that second alloc is?
22:16:38  <piscisaureus_>bnoordhuis: nope
22:16:43  <bnoordhuis>also, why don't you use a char data_[1] as the last field
22:17:01  <piscisaureus_>bnoordhuis: *shrug* Didn't think of it
22:17:13  * rendarquit
22:17:46  <bnoordhuis>piscisaureus_: okay. alternatively, just do reinterpret_cast<char*>(wrap + 1)
22:18:09  <piscisaureus_>oh, that works too
22:18:12  <bnoordhuis>yep, you don't need data_offset
22:18:17  <piscisaureus_>bnoordhuis: you mean, to avoid the alignment stuff ?
22:18:37  <bnoordhuis>well, mostly because it's simpler :)
22:19:00  <piscisaureus_>bnoordhuis: ok, will do that
22:19:03  <bnoordhuis>having the buffer data aligned won't hurt though
22:19:15  <piscisaureus_>bnoordhuis: I assume (wrap + 1) will be aligned too ?
22:19:19  <bnoordhuis>yes
22:19:23  <piscisaureus_>so that works, then
22:20:34  <piscisaureus_>bnoordhuis: I agree that avoiding the extra malloc may not be a big win. But it just felt stupid not to do it.
22:20:40  <piscisaureus_>bnoordhuis: and the added complexity is not that bad
22:21:44  <bnoordhuis>yeah, it's not too bad
22:21:47  <isaacs>TooTallNate: regarding SlowBuffer's __proto__... that's not really something we want people creating instances of.
22:21:59  <isaacs>TooTallNate: C++ should *get* a buffer from js land.
22:22:11  <isaacs>TooTallNate: along with a start/end/length args.
22:23:02  <TooTallNate>isaacs: i agree, but there's really no way around it when using a Buffer around an external memory address
22:23:13  <TooTallNate>i.e. Buffer::New() with the callback function
22:24:29  <TooTallNate>isaacs: i mean `new Buffer()` (in C++), but maybe we should add a new Buffer::New() signature that accepts a callback function
22:24:46  <TooTallNate>isaacs: that just creates the SlowBuffer like `new Buffer()` does, but then wraps it in a JS Buffer
22:24:52  <TooTallNate>before returning it
22:25:13  <piscisaureus_>bnoordhuis: the buf.len = (int) len; assert(buf.len == len) should be organized a little differently, and should probably be "unsigned int"
22:25:48  <TooTallNate>isaacs: of course, that'd be new API for v0.8
22:28:52  * c4milojoined
22:29:07  <bnoordhuis>piscisaureus_: ../src/stream_wrap.h:68: error: use of enum ‘WriteEncoding’ without previous declaration
22:29:28  <piscisaureus_>bnoordhuis: oh, that's fine as far as msvc is concerned :-)
22:29:48  <piscisaureus_>bnoordhuis: should I move the entire enum to stream_wrap.h or just "enum WriteEncoding;"
22:34:18  * paddybyersquit (Quit: paddybyers)
22:35:56  <bnoordhuis>piscisaureus_: i'm seeing a 4-5% speedup, i think
22:36:11  <piscisaureus_>bnoordhuis: what value for xxx ?
22:36:20  <bnoordhuis>piscisaureus_: bytes/10240
22:36:33  <bnoordhuis>btw, you mentioned string/xxx but that doesn't exist
22:36:35  <piscisaureus_>bnoordhuis: oh that's rather disappointing. For me it was much higher
22:36:54  <piscisaureus_>bnoordhuis: I think we've been over this. Replace xxx by any number.
22:37:22  <bnoordhuis>piscisaureus_: try it yourself - you get the 404 page :)
22:37:29  <piscisaureus_>bnoordhuis: orly
22:37:55  <bnoordhuis>rly
22:37:55  <piscisaureus_>bnoordhuis: I benchmarked with bytes/50, /1024 and /10240
22:38:04  <bnoordhuis>yeah, bytes/xxx is okay
22:38:07  <bnoordhuis>just not string/xxx
22:38:23  <piscisaureus_>bnoordhuis: oh - sure. Typo, sorry
22:39:14  <piscisaureus_>bnoordhuis: ah, maybe it's that I was comparing 0.6.17 to netwrite
22:39:25  <piscisaureus_>Lemme build master
22:40:48  <bnoordhuis>piscisaureus_: i'm seeing a 5-6% speedup on bytes/1024
22:42:01  <isaacs>TooTallNate: the way around it is that you don't have bindings allocate arbitrary data on behalf of js programs. the js program instead gives it an address and length, and says "Here, use this"
22:42:05  <isaacs>TooTallNate: like we do with zlib
22:42:32  <isaacs>of course, that's leaking 2 kb on every object. not sure why, yet.
22:42:36  <isaacs>it's nothing in js
22:42:54  <TooTallNate>isaacs: for example, node-ffi has to expose the pointers to C functions
22:42:55  <bnoordhuis>piscisaureus_: 4-5% on bytes/64
22:43:12  <TooTallNate>isaacs: the rewrite i'm doing on node-ffi uses a 0-length Buffer for that
22:43:20  <TooTallNate>which gets returned from C++ as a SlowBuffer
22:43:27  <TooTallNate>i could do this http://sambro.is-super-awesome.com/2011/03/03/creating-a-proper-buffer-in-a-node-c-addon/
22:43:31  <TooTallNate>but there's not much point
22:43:58  <isaacs>TooTallNate: yeah
22:44:07  <isaacs>i guess ffi is a pretty valid case.
22:44:38  <TooTallNate>my point is that SlowBuffer is a leaky abstraction currently
22:44:49  <isaacs>yeah, it is
22:44:58  <TooTallNate>but that can be solved with a Buffer::New() function that handles the callback function case
22:45:12  <isaacs>i guess, whatever, go ahead and land the __proto__ thing, but put a comment explaining that this is frowned upon, and a rare exception.
22:45:14  <TooTallNate>there's already a Buffer::New() that returns a JS Buffer for a given length
22:46:16  <TooTallNate>isaacs: that sounds good. but this Buffer::New() also sounds like a good idea. i mean i would use it if it were there
22:46:20  <TooTallNate>bnoordhuis: ^ thoughts?
22:46:54  <bnoordhuis>TooTallNate: i have many. what's the subject?
22:47:08  <bnoordhuis>hey, that sambro guy. you know i explained to him how to do that?
22:47:14  <bnoordhuis>but do i get attribution? of course not :/
22:47:21  <TooTallNate>bnoordhuis: a new Buffer::New() signature that handles the callback function case
22:47:33  <TooTallNate>bnoordhuis: where you want to return a Buffer around existing data
22:47:51  <TooTallNate>bnoordhuis: currently you can only do that with one of the `new Buffer()` signatures
22:48:00  <TooTallNate>but that returns a SlowBuffer
22:48:04  * mikealquit (Quit: Leaving.)
22:48:10  <TooTallNate>which is supposed to be an implementation detail
22:48:39  <bnoordhuis>TooTallNate: what would that method look like?
22:48:48  <piscisaureus_>bnoordhuis: I am seeing about 8% speedup vs master for bytes/10240, about 5% for bytes/1024
22:49:00  <piscisaureus_>bnoordhuis: Not really noticable for bytes/50
22:49:09  <TooTallNate>bnoordhuis: but this new Buffer::New() method would call the current `new Buffer()` that takes a callback function, and do the sambro (your) technique to return a JS Buffer
22:49:39  <piscisaureus_>bnoordhuis: the difference with 0.6 was much bigger, but I realize we already did a bunch of optimizations in the string encoder.
22:50:02  <piscisaureus_>bnoordhuis: but I think at least we can say we got faster again :-p
22:50:14  <bnoordhuis>piscisaureus_: always a good thing :)
22:50:26  <bnoordhuis>TooTallNate: i guess we could add that. have people been complaining about it?
22:50:35  <TooTallNate>just isaacs :)
22:50:48  <piscisaureus_>bnoordhuis: but srsly, I was seeing like +50% for bytes/10240 for 0.6 .. netwrite
22:50:59  <piscisaureus_>bnoordhuis: too bad I benchmarked the wrong thing ...
22:51:14  <bnoordhuis>piscisaureus_: bytes/10240 is pretty broken in 0.6
22:51:16  * mikealjoined
22:51:22  <TooTallNate>isaacs: but actually, i really like the "all allocated memory through Buffers" approach (zlib)
22:51:25  <piscisaureus_>bnoordhuis: why?
22:51:33  <TooTallNate>isaacs: i'm doing the same thing whenever possible in the node-ffi rewrite :)
22:52:03  <bnoordhuis>piscisaureus_: i'm not sure actually
22:52:28  <bnoordhuis>i think v8 is at least partially to blame
22:52:46  <bnoordhuis>or maybe v8 improved a whole lot between 3.6 and 3.9, that's a possibility too :)
22:53:22  <piscisaureus_>bnoordhuis: yes, it's one of the things I worked on with erik corry
22:53:29  <piscisaureus_>bnoordhuis: and actually there's more gain possible
22:54:36  <piscisaureus_>bnoordhuis: btw - you should put one unicode char in the bytes/10240 benchmark string
22:54:46  <piscisaureus_>bnoordhuis: if you want to see +45% :-p
22:55:12  <bnoordhuis>piscisaureus_: heh, okay - i'll try that
22:55:43  <piscisaureus_>I just do `stored[n] = "ü";` @ line 50
22:56:03  <piscisaureus_>still pretty pathetic tho
22:57:39  * mikealquit (Quit: Leaving.)
23:08:09  <TooTallNate>:\ so i can't overload by return type in C++?
23:09:54  <piscisaureus_>nope
23:11:49  <TooTallNate>welllll dammnnnn
23:12:00  <TooTallNate>we kinda screwed ourself there then
23:12:39  <TooTallNate>ideally, `new Buffer()` would return a SlowBuffer, and Buffer::New() would return a JS Buffer
23:12:52  <TooTallNate>but it's too late for that now :(
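Seen from JS land, the leak in the abstraction is that addon paths built on the C++ `new Buffer()` hand back a SlowBuffer, while everything else in node hands back a fast Buffer that is merely a slice over a pooled SlowBuffer. Roughly, for the 0.x era (`parent` and the pooling are implementation details, not a documented contract):

    var SlowBuffer = require('buffer').SlowBuffer;

    var fast = new Buffer(64);                        // what JS users normally get
    console.log(fast instanceof Buffer);              // true
    console.log(fast.parent instanceof SlowBuffer);   // true - a slice of the shared pool

    var slow = new SlowBuffer(64);                    // what some addon code hands back
    console.log(slow instanceof Buffer);              // false - the abstraction leaks through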
23:15:14  <piscisaureus_>bnoordhuis: bytes/10240 spends 50% of its time in WriteToFlat :-(
23:15:15  * TheJHquit (Read error: Operation timed out)
23:16:34  <piscisaureus_>bnoordhuis: doesn't quite look right to me ...
23:17:27  <bnoordhuis>piscisaureus_: sounds familiar, that's what it did in 0.6
23:17:51  <piscisaureus_>bnoordhuis: somehow the string is not pre-flattened.
23:17:57  <piscisaureus_>and it's still slow
23:19:09  <piscisaureus_>yeah the string is definitely not pre-flattened
23:19:21  <piscisaureus_>it's good that the benchmark stresses this but it's rather lame
23:21:25  <bnoordhuis>signing off for tonight
23:21:32  <piscisaureus_>ok
23:21:32  <bnoordhuis>piscisaureus_: i'll test your patch some more tomorrow
23:21:36  <piscisaureus_>sleep well
23:21:38  <bnoordhuis>you too
23:21:42  <piscisaureus_>ok, cool
23:25:13  * c4miloquit (Ping timeout: 250 seconds)
23:26:11  * bnoordhuisquit (Ping timeout: 245 seconds)
23:37:42  * mjr_quit (Ping timeout: 260 seconds)
23:42:20  * dapquit (Quit: Leaving.)
23:46:48  * dapjoined