00:00:00  * ircretary quit (Remote host closed the connection)
00:00:08  * ircretary joined
00:12:06  * kazupon joined
00:14:07  * trevnorris quit (Quit: Leaving)
00:17:09  * kazupon quit (Ping timeout: 248 seconds)
00:21:50  * dap quit (Quit: Leaving.)
00:24:49  <bnoordhuis>https://gist.github.com/bnoordhuis/8fa3986ac3b596439178 <- check this out. Debug.setBreakPoint() works but only if you add a small delay
00:24:59  * bnoordhuis sighs
00:25:08  <tjfontaine>this sounds like why debugger-client hates us right now
00:28:16  <mmalecki>bnoordhuis: isn't this related to a debugger running in separate thread?
00:30:03  <bnoordhuis>mmalecki: possibly (or maybe likely) but i'm not 100% sure it's that
00:30:12  <bnoordhuis>i'll let you know what i find
00:31:01  <mmalecki>sure :)
00:31:11  <mmalecki>how are you, Ben?
00:32:28  <bnoordhuis>lamenting my career choice but otherwise fine. you?
00:46:41  <mmalecki>bnoordhuis: same here, diving deep into C code :)
00:48:26  <bnoordhuis>oh? what code and what for?
00:49:23  <mmalecki>bnoordhuis: monitoring, process spawning. just making nodejitsu kick more asses, you know :)
00:49:33  <mmalecki>bnoordhuis: I wrote this yesterday, btw https://github.com/mmalecki/env
00:50:36  <bnoordhuis>mmalecki: const correctness!
00:51:33  <mmalecki>bnoordhuis: I know dude. it was 5 AM and I didn't care much. gotta fix that soon tho :)
00:52:01  <mmalecki>what do you think of this API tho? test is here: https://github.com/mmalecki/env/blob/master/test/test-env.c
00:53:17  * qmx|away changed nick to qmx
00:53:44  <bnoordhuis>mmalecki: is it for working with general key/value pairs or manipulating the actual environ?
00:54:26  * mraleph joined
00:54:46  <mmalecki>bnoordhuis: general key/value. we need this for stripping things from child's environment
00:55:06  <mmalecki>you can use it to create a whole new environment and such :)
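The use case mmalecki describes, stripping keys and building a whole new environment for a child, comes down to a key/value map flattened into the `KEY=value` strings an `execve()`-style spawn expects. A minimal stdlib-only sketch of that idea (hypothetical names; this is not his actual libenv API):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: assemble a key/value environment, strip the
// sensitive keys, then flatten the rest into "KEY=value" strings for a
// child process. Not mmalecki's actual API.
std::vector<std::string> flatten_env(
    const std::map<std::string, std::string>& env) {
  std::vector<std::string> out;
  for (const auto& kv : env)
    out.push_back(kv.first + "=" + kv.second);  // execve-style entry
  return out;
}
```

Stripping then becomes a plain `env.erase("SECRET_TOKEN")` before flattening, which is the "stripping things from child's environment" case above.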
00:55:33  <bnoordhuis>so why c?
00:56:37  <bnoordhuis>seems the debug thing is indeed caused by a threading issue
00:56:39  <mmalecki>bnoordhuis: node is 10 MB RAM at least. C can stay under 1 MB with no problems. this is important when your servers are 256 MB
00:56:58  <bnoordhuis>right
00:59:11  * kazupon joined
00:59:39  <mmalecki>bnoordhuis: we're not dropping node completely, obviously :)
01:00:50  <mmalecki>although cjitsu would be kinda badass
01:01:34  <bnoordhuis>hah
01:02:22  * inolen quit (Quit: Leaving.)
01:04:43  * sblom quit (Ping timeout: 264 seconds)
01:10:37  * bradleymeck joined
01:14:01  * bradleymeck quit (Client Quit)
01:15:06  * kazupon quit (Remote host closed the connection)
01:17:21  * TooTallNate quit (Quit: ["Textual IRC Client: www.textualapp.com"])
01:19:22  * kazupon joined
01:29:37  * defunctzombie_zz changed nick to defunctzombie
01:34:27  * stagas quit (Quit: ChatZilla 0.9.90-rdmsoft [XULRunner 1.9.0.17/2009122204])
01:36:28  * cjd part
01:40:13  * kazupon quit (Remote host closed the connection)
01:40:14  * inolen joined
01:42:21  * abraxas joined
01:44:36  * inolen quit (Ping timeout: 264 seconds)
01:44:59  * inolen joined
02:05:48  * qmx changed nick to qmx|away
02:10:16  * bradleymeck joined
02:13:43  * bnoordhuis quit (Ping timeout: 264 seconds)
02:33:57  * hz quit
02:36:28  * mmalecki changed nick to mmalecki[zzz]
02:40:25  * kazupon joined
02:52:55  * brson_ quit (Quit: leaving)
03:02:55  * bradleymeck quit (Quit: bradleymeck)
03:05:39  * saghul quit (Quit: ["Textual IRC Client: www.textualapp.com"])
03:36:51  * c4milo joined
03:49:45  * brson joined
03:52:25  * mikeal joined
03:53:37  * kazupon quit (Remote host closed the connection)
03:59:43  * mikeal quit (Quit: Leaving.)
04:16:25  * c4milo quit (Remote host closed the connection)
04:25:00  * benoitc quit (Excess Flood)
04:30:09  * benoitc joined
04:46:56  * AvianFlu quit (Remote host closed the connection)
04:53:59  * kazupon joined
04:54:49  * nsm joined
04:58:13  * kazupon quit (Ping timeout: 240 seconds)
05:10:49  * trevnorris joined
05:11:42  <trevnorris>indutny: you around? line 176 of lib/crypto.js "return new Hash(algorithm)", shouldn't that be "return new Hash(algorithm, options);"?
05:19:10  <trevnorris>isaacs: you looming around?
05:23:31  <trevnorris>anyone around that can tell me if I missed when "crypto" could be accessed w/o needing the "require()"?
05:25:06  <trevnorris>or is that just automatically included in REPL? strange.
05:42:20  * bradleymeck joined
05:42:25  * bradleymeck quit (Client Quit)
05:44:30  * davisp quit (Ping timeout: 264 seconds)
05:51:06  * mikeal joined
05:54:22  * kazupon joined
06:19:03  * loladiro joined
06:19:35  * kazupon quit (Remote host closed the connection)
06:35:52  * defunctzombie changed nick to defunctzombie_zz
06:38:05  * nsm quit (Quit: nsm)
07:04:16  * kazupon joined
07:17:26  * brson quit (Quit: leaving)
07:36:26  * kazupon quit (Remote host closed the connection)
07:57:42  * benoitc quit (Excess Flood)
07:59:42  * benoitc joined
08:00:08  * `3rdEden joined
08:06:07  * rendar joined
08:36:49  * kazupon joined
08:41:47  * kazupon quit (Ping timeout: 260 seconds)
08:45:21  * paddybyers joined
08:46:08  * trevnorris quit (Quit: Leaving)
08:52:58  * Ralt_ joined
08:53:00  * Ralt_ quit (Client Quit)
08:53:48  * Ralt_ joined
08:57:34  * trevnorris joined
08:57:45  * trevnorris quit (Client Quit)
09:00:40  * loladiro quit (Quit: loladiro)
09:05:37  * loladiro joined
09:06:34  * Ralt_ quit (Quit: WeeChat 0.3.9)
09:08:25  * `3rdEden quit (Remote host closed the connection)
09:08:58  * `3rdEden joined
09:11:20  * Ralt quit (Remote host closed the connection)
09:11:52  * Ralt joined
09:18:20  * `3rdEden_ joined
09:32:16  * indexzero quit (Quit: indexzero)
09:36:27  <indutny>morning
09:36:28  <indutny>isaacs: yt?
09:37:26  * kazupon joined
09:42:31  * kazupon quit (Ping timeout: 264 seconds)
09:48:25  <roxlu>hi guys!
09:48:45  <roxlu>When I have a uv_queue_work(), how can I join the work thread?
10:12:28  <roxlu>indutny: do you know?
10:13:00  <indutny>what?
10:13:32  <roxlu>I'm using a uv_queue_work() but when my application is closed I want to finish/stop the work first
10:15:13  <indutny>cancel?
10:15:29  <indutny>well, there's no way of doing it
10:15:41  <indutny>and that's because this operation isn't generally safe
10:15:49  <indutny>at least, across different platforms
10:15:58  <roxlu>what do you mean with not safe?
10:19:12  <indutny>cancelling running threads
10:19:14  <indutny>is not safe
10:19:26  <roxlu>depends on how I implement it, right?
10:20:19  <roxlu>with a basic uv_thread_t I've control over how to stop it and that works great, but for a work queue I guess I need to manage it a bit myself
10:23:36  <indutny>yes
10:23:43  <rendar>roxlu: to kill a thread, the thread just have to return 0; or to stop it, it just have to wait on some condition variable or something
10:23:47  * `3rdEden quit (Disconnected by services)
10:23:49  <rendar>all other methods, are not safe
10:23:49  * `3rdEden_ changed nick to `3rdEden
10:24:00  <roxlu>rendar: sure that's not really the problem
10:24:07  <rendar>oh, ok :)
10:24:21  * V1 joined
10:24:23  <roxlu>rendar: I'm starting a couple of threads using the uv_queue_work() function
10:24:29  <rendar>yeah
10:24:34  <roxlu>which does the thread management for me
10:24:54  * V1 quit (Disconnected by services)
10:25:01  <roxlu>I think I'm looking for something like: uv_queue_join_threads() or something
10:25:20  <roxlu>but I'll write a workaround using a custom 'counter'
10:25:51  <rendar>hmm
10:26:12  <rendar>roxlu: well, you need a loop which traverses the handles of all running threads and waits on each handle
10:26:29  <roxlu>I'm using UV_RUN_NOWAIT
10:29:11  <rendar>well, iirc threads in uv_queue_work are always non-detached (in the pthread meaning of 'detached') right?
10:33:38  * roxlu don't know how the queue works internally
10:33:52  * roxlu s/don't/doesn't
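The "custom counter" workaround roxlu settles on amounts to: count jobs out when queued, count them back in when each finishes, and wait for zero before shutdown. A stdlib sketch of that pattern, with std::thread standing in for libuv's thread pool (the class and `join_all` are hypothetical; the `uv_queue_join_threads()` he wished for does not exist in libuv):

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Counter-based "join" for fire-and-forget work items: the same idea as
// tracking outstanding uv_queue_work() requests and draining until the
// count reaches zero. std::thread stands in for the libuv pool.
class WorkTracker {
 public:
  void queue(std::function<void()> work) {
    {
      std::lock_guard<std::mutex> lk(m_);
      pending_++;  // count the job out
    }
    threads_.emplace_back([this, work] {
      work();  // runs off the main thread, like a uv work callback
      std::lock_guard<std::mutex> lk(m_);
      pending_--;  // count it back in, as an after-work callback would
      cv_.notify_all();
    });
  }

  // The hypothetical "uv_queue_join_threads()": block until every queued
  // item has completed, then reap the threads.
  void join_all() {
    {
      std::unique_lock<std::mutex> lk(m_);
      cv_.wait(lk, [this] { return pending_ == 0; });
    }
    for (std::thread& t : threads_) t.join();
  }

 private:
  std::mutex m_;
  std::condition_variable cv_;
  int pending_ = 0;
  std::vector<std::thread> threads_;
};
```

With real libuv the counter would live in the after-work callback and the loop would be run (e.g. with UV_RUN_ONCE) until it drains.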
10:38:05  * kazupon joined
10:39:10  * Kakera joined
10:42:48  * kazupon quit (Ping timeout: 264 seconds)
10:46:49  * rendar_ joined
10:47:27  * rendar quit (Read error: Connection reset by peer)
10:54:11  * loladiro quit (Read error: Connection reset by peer)
10:54:30  * loladiro joined
10:57:43  * einaros_ changed nick to einaros
11:03:36  * kazupon joined
11:06:11  * kazupon quit (Remote host closed the connection)
11:06:41  * stagas joined
11:08:16  * hz joined
11:08:39  * kazupon joined
11:15:50  * AvianFlu joined
11:38:07  * abraxas quit (Remote host closed the connection)
12:09:44  * kazupon quit (Remote host closed the connection)
12:29:31  * mmalecki[zzz] changed nick to mmalecki
12:29:41  * piscisaureus_ joined
12:40:37  * benoitc quit (Excess Flood)
12:41:13  * benoitc joined
12:41:16  * hz quit
12:55:12  * `3rdEden changed nick to `3E|AFK
12:59:17  * bnoordhuis joined
13:03:50  * hz joined
13:10:07  * kazupon joined
13:11:31  * paddybyers quit (Ping timeout: 258 seconds)
13:14:47  * kazupon quit (Ping timeout: 260 seconds)
13:29:35  * paddybyers joined
13:33:39  * defunctzombie_zz changed nick to defunctzombie
13:38:40  * loladiro quit (Quit: loladiro)
13:57:45  * c4milo joined
13:59:17  * c4milo quit (Remote host closed the connection)
13:59:43  * c4milo joined
14:00:14  * c4milo quit (Read error: Connection reset by peer)
14:00:33  * c4milo joined
14:10:47  * kazupon joined
14:16:18  * kazupon quit (Ping timeout: 258 seconds)
14:20:37  <bnoordhuis>autoconf is so fucking awful
14:21:02  <bnoordhuis>i mean intrinsically awful
14:21:17  <bnoordhuis>and then people take it and try to beat it into submission
14:21:24  <bnoordhuis>making it awful^2
14:23:57  * kazupon joined
14:24:17  * AvianFlu quit (Remote host closed the connection)
14:25:26  <bnoordhuis>and the perpetual regenerating of everything!
14:25:29  * kazupon quit (Remote host closed the connection)
14:25:34  <bnoordhuis>GNU has got a lot to answer for >:-(
14:28:18  <piscisaureus_>bnoordhuis: you need some pot?
14:28:29  <bnoordhuis>piscisaureus_: i'm opting for beer today
14:29:29  <bnoordhuis>libtool: compile: unable to infer tagged configuration
14:29:29  <bnoordhuis>libtool: compile: specify a tag with `--tag'
14:29:31  <piscisaureus_>bnoordhuis: we should do TLS in all c
14:29:32  <bnoordhuis>son of a...
14:29:53  <bnoordhuis>piscisaureus_: i think i agree. i just don't expect miracles
14:31:08  <piscisaureus_>bnoordhuis: I expect miracles. Sunday we're all going to celebrate that someone beat death.
14:31:18  <piscisaureus_>And you're telling me you can't even make tls faster?
14:31:56  <piscisaureus_>bnoordhuis: I'll give you the monday, too.
14:35:34  <bnoordhuis>you're too generous, bertje
14:46:37  * kazupon joined
14:55:06  <bnoordhuis>piscisaureus_: call in 5
14:55:14  <piscisaureus_>bnoordhuis: word
14:55:44  <bnoordhuis>isaacs broke the build :)
14:55:45  <bnoordhuis>Undefined symbols for architecture x86_64:
14:55:45  <bnoordhuis> "_uv_version_string", referenced from:
14:55:45  <bnoordhuis> node::SetupProcessObject(int, char**)in node.node.o
14:56:31  <piscisaureus_>bnoordhuis: boo! you did
14:56:44  <bnoordhuis>nuh-uh. he merged it into master!
14:56:45  <piscisaureus_>bnoordhuis: Should I land the versioning stuff on libuv-master?
14:56:56  <bnoordhuis>i'll merge libuv v0.10 in master
14:57:04  <bnoordhuis>(was already doing that)
14:57:20  <piscisaureus_>bnoordhuis: ok. Also bump the versions in include/uv.h and src/version.c then
14:58:20  <bnoordhuis>hrm, why are UV_VERSION_MAJOR and UV_VERSION_MINOR defined in two places?
14:58:50  <piscisaureus_>bnoordhuis: we used to have them in uv.h, but IMO that's wrong
14:58:57  <piscisaureus_>but I couldn't just remove those defines
14:59:22  <bnoordhuis>well, we can remove them in master
14:59:26  <bnoordhuis>now is as good a time as any
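Keeping the version defined in one place, as discussed here, usually means deriving the runtime string from the macros by stringification, so a header and a version.c can never drift apart. A sketch under made-up `MYLIB_*` names (the 0.11.1-pre value matches the bump that lands below):

```cpp
#include <cassert>
#include <cstring>

// Single source of truth: these macros are the only place the version is
// written down; the runtime string is derived from them, so the two can
// never disagree. MYLIB_* names are hypothetical, not libuv's.
#define MYLIB_VERSION_MAJOR 0
#define MYLIB_VERSION_MINOR 11
#define MYLIB_VERSION_PATCH 1
#define MYLIB_VERSION_IS_RELEASE 0

// Two-step expansion so the macro's value, not its name, is stringified.
#define MYLIB_STR_HELPER(x) #x
#define MYLIB_STR(x) MYLIB_STR_HELPER(x)

const char* mylib_version_string() {
#if MYLIB_VERSION_IS_RELEASE
  return MYLIB_STR(MYLIB_VERSION_MAJOR) "." MYLIB_STR(MYLIB_VERSION_MINOR)
         "." MYLIB_STR(MYLIB_VERSION_PATCH);
#else
  return MYLIB_STR(MYLIB_VERSION_MAJOR) "." MYLIB_STR(MYLIB_VERSION_MINOR)
         "." MYLIB_STR(MYLIB_VERSION_PATCH) "-pre";
#endif
}
```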
14:59:50  <bnoordhuis>would this be 0.11.1?
14:59:52  <piscisaureus_>that's okay
15:00:04  <piscisaureus_>bnoordhuis: yeah, let's go for 0.11.1 or or 0.11.0
15:00:10  <piscisaureus_>bnoordhuis: i don't care much actually :)
15:00:25  <bnoordhuis>we already did a node 0.11.0 release
15:00:28  <piscisaureus_>yeah
15:00:36  <bnoordhuis>so i guess 0.11.1 it is
15:00:40  <piscisaureus_>ok
15:00:43  <piscisaureus_>go for it
15:00:52  * bnoordhuis goes for it
15:01:28  * Kakera quit (Read error: Connection reset by peer)
15:01:34  <piscisaureus_>bnoordhuis: meeting
15:02:15  <MI6>joyent/libuv: Ben Noordhuis master * c43e851 : src: bump version to 0.11.1-pre (+7 more commits) - http://git.io/8WoSeQ
15:02:26  <piscisaureus_>bnoordhuis: meeeeeeeting!!
15:02:29  <bnoordhuis>piscisaureus_: where?
15:02:30  <bnoordhuis>skype?
15:02:41  <bnoordhuis>there's no hangout
15:04:46  <MI6>libuv-master: #62 UNSTABLE windows (7/188) osx (2/187) smartos (6/187) linux (2/187) http://jenkins.nodejs.org/job/libuv-master/62/
15:16:41  <MI6>joyent/node: Ben Noordhuis master * 3f091c7 : src: fix Persistent<> deprecation warning Pass the Isolate to Persistent (+2 more commits) - http://git.io/Gyahig
15:25:59  * trevnorris joined
15:30:40  * AvianFlu joined
15:31:16  <MI6>nodejs-master: #127 FAILURE osx-x64 (1/573) windows-ia32 (8/572) windows-x64 (7/573) smartos-ia32 (2/573) smartos-x64 (3/573) http://jenkins.nodejs.org/job/nodejs-master/127/
15:32:09  * Guest__ joined
15:32:10  * Guest__ quit (Max SendQ exceeded)
15:33:16  * Guest__ joined
15:33:18  * Guest__ quit (Max SendQ exceeded)
15:34:04  * omgnodes joined
15:38:29  * AvianFlu quit (Remote host closed the connection)
15:42:58  * AvianFlu joined
15:43:44  * mikeal quit (Quit: Leaving.)
15:44:23  * kazupon quit (Remote host closed the connection)
15:50:19  <trevnorris>bnoordhuis, indutny: thank you guys for putting up with all my questions recently. think I solved the allocation problem, thanks to your help.
16:07:25  <indutny>bnoordhuis: hey
16:07:37  <indutny>its not necessary to off-load tls jobs to multiple threads
16:07:45  <indutny>its just generally not a good idea to do this in event loop
16:07:49  <indutny>(to my mind)
16:10:21  <isaacs>indutny: what do you propose? have a thread dedicated to TLS jobs?
16:10:28  <indutny>yeah
16:10:40  <indutny>sort of tlsnappy, but with one thread
16:10:52  <indutny>there's negative effects of context switching
16:11:02  <indutny>but generally app should be still responsive even under high load
16:11:04  <indutny>of tls requests
16:11:18  <isaacs>bnoordhuis: oh, whoops, i merged deps/uv/ as well. usually i do `git checkout master -- deps/uv/` before committing the merge.
16:11:21  <trevnorris>isaacs: think I found a way to allocate external memory faster than Buffers currently do, and don't leak memory. though I definitely need a sanity check.
16:11:31  <isaacs>trevnorris: exciting!
16:12:24  * kazupon joined
16:12:36  <isaacs>05:15 < trevnorris> indutny: you around? line 176 of lib/crypto.js "return new Hash(algorithm)", shouldn't that be "return new Hash(algorithm, options);"?
16:12:39  <isaacs>^ yes.
16:12:40  <isaacs>it should
16:13:36  <trevnorris>cool. tls is a scary place for me, so didn't know if there was some magic sauce I didn't see.
16:14:10  <isaacs>trevnorris: patch coming shortly
16:14:23  <isaacs>there's like 6 places where we do this wrong in crypto.js
16:14:27  <isaacs>just making tests first
16:14:48  * mikeal joined
16:15:25  <indutny>isaacs: ah
16:15:29  <indutny>speaking about net.js
16:15:33  <indutny>I think there's a bug
16:15:46  <indutny>related to https://github.com/joyent/node/issues/5145
16:16:23  <indutny>https://gist.github.com/indutny/8e4037222315a8abe5bd
16:16:28  <indutny>fixes it ^
16:16:35  <indutny>but introduces test failures
16:16:41  <indutny>and probably some user-land failures as well
16:17:13  <indutny>so, basically
16:17:30  <indutny>we should not keep socket open if one of the sides was ended or finished
16:17:34  <indutny>and we do it now
16:17:38  <indutny>if it was finished
16:20:25  * dap joined
16:22:42  <bnoordhuis>indutny: re tls, i was discussing that with bert just now
16:22:53  <indutny>oh, can you forward this to me?
16:23:03  <bnoordhuis>well, it's not like i recorded it
16:23:23  <bnoordhuis>i have eidetic memory but i'm rather attached to it, you can't have it
16:24:05  <indutny>anyway
16:24:06  <bnoordhuis>anyway, a) seems like the best option + c) fixing cluster to work with shared session storage
16:24:18  <indutny>a ?
16:24:24  <indutny>move everything to C?
16:24:28  <bnoordhuis>move everything to c++ land
16:24:36  <indutny>ok, while we're here
16:24:41  <bnoordhuis>in c++ land?
16:24:42  <indutny>lets talk about how it'll work :)
16:24:57  <indutny>because I did clone in tlsnappy
16:25:07  <bnoordhuis>clone what?
16:25:07  <indutny>and it was still visiting js-land *alot*
16:25:12  <indutny>lib/tls.js
16:25:18  <bnoordhuis>ah
16:25:23  <indutny>bnoordhuis: do you have any particular idea?
16:25:37  <bnoordhuis>essentially you'd do everything in c++ land and only emit cleartext strings/buffers to js land
16:25:42  <indutny>meh
16:25:45  <indutny>that doesn't work
16:25:47  <indutny>next?
16:25:58  <indutny>:)
16:26:11  <bnoordhuis>"that doesn't work" <- great argument
16:26:19  <indutny>well, you need more control
16:26:29  <bnoordhuis>who is 'you' here?
16:26:29  <indutny>and 95% of things are manageable only from js
16:26:36  <indutny>you/we :)
16:26:51  <bnoordhuis>if it's in c++, i (as in me, bnoordhuis) have all the control i need
16:27:08  <indutny>bnoordhuis: just to be sure, that I got you correctly
16:27:14  * omgnodes quit (Quit: Computer has gone to sleep.)
16:27:19  <indutny>you want to create "black box" in C++ that will consume incoming data
16:27:27  <indutny>from two directions
16:27:33  <indutny>and emit outcoming data to two directions
16:27:36  <indutny>is it correct?
16:27:37  <bnoordhuis>not even consume
16:27:46  <indutny>right
16:27:49  <indutny>consume => receive
16:27:52  <indutny>passive
16:28:00  <bnoordhuis>you call tls2.start(port, host), it deals with libuv directly and the only thing it does is emit cleartext data to js land
16:28:01  <indutny>ok?
16:28:06  <indutny>aaaaah
16:28:10  <indutny>this stuff
16:28:21  <indutny>what about starttls
16:28:31  <bnoordhuis>too bad
16:28:33  <indutny>and many other applications
16:28:49  <bnoordhuis>i'm optimizing for the common use case here
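The shape bnoordhuis is proposing, reduced to a toy: the C++ side owns both directions, and JS land only ever observes the stream through a single cleartext callback. All names below are hypothetical, the "decryption" is the identity function, and the real handshake, record parsing, and libuv plumbing are elided:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Toy stand-in for the proposed "tls2" black box. In the real design the
// C++ side would own the socket and the OpenSSL state; here nothing but
// the cleartext callback is exposed to the caller.
class TlsBox {
 public:
  using CleartextCb = std::function<void(const std::string&)>;

  // JS land registers exactly one way to observe the connection.
  void on_cleartext(CleartextCb cb) { cb_ = std::move(cb); }

  // In the real design this would be fed by libuv's read callback, with
  // handshake / record parsing / SSL_read() happening before emitting.
  void feed_ciphertext(const std::string& data) {
    if (cb_) cb_(data);  // emit cleartext only; nothing else escapes
  }

 private:
  CleartextCb cb_;
};
```

Nothing else leaks out of the box, which is exactly why starttls-style hacking gets harder: the trade-off indutny is objecting to.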
16:28:57  <indutny>hm...
16:29:37  <indutny>I don't like this idea, actually...
16:29:41  <isaacs>indutny: i'll dig into that issue. it takes a while for me to page-in all the data about how TCP works for stuff like this :)
16:29:43  <indutny>we could do http the same way
16:29:46  <isaacs>indutny: (re eof/fin/etc.)
16:29:54  <indutny>isaacs: kewl!
16:30:13  <bnoordhuis>indutny: http or https? (i like the idea in both cases :)
16:30:14  <indutny>isaacs: would be good if you'll write something in that issue
16:30:22  <indutny>isaacs: this guy is sort of expecting some answer
16:30:24  <indutny>:)
16:30:38  <indutny>bnoordhuis: hehe
16:31:02  <indutny>bnoordhuis: this sounds a bit enterprisey
16:31:16  <indutny>sl is having an effect on you?
16:31:24  <bnoordhuis>i like enterprisey, enterprisey is where the money is
16:31:32  <indutny>oh gosh
16:31:41  <bnoordhuis>but no, it's not strongloop
16:31:58  <bnoordhuis>the way i see it, if we want fast tls, something's got to give
16:32:17  <indutny>yeah
16:32:27  <indutny>I just not sure if it'd be a good thing to have this in core
16:32:33  <indutny>s/I/I'm/
16:32:43  <bnoordhuis>well, then we're stuck with the current implementation
16:32:44  <indutny>its a little bit outstanding
16:33:04  <indutny>and would complicate "hacking" a lot
16:33:13  <bnoordhuis>for whom?
16:33:25  <indutny>for developers/module developers
16:33:35  <bnoordhuis>right, tough cookies
16:33:38  <indutny>but I think
16:33:39  <bnoordhuis>they can stick with tls1
16:33:42  <indutny>it might be a good idea
16:33:52  <indutny>well, we're discussing imaginary module right now
16:34:03  <indutny>how can I be sure what it'll look like
16:34:04  <isaacs>bnoordhuis: what's that about http/s?
16:34:18  <isaacs>bnoordhuis: reading this conv, feel like i missed a step there.
16:34:23  <bnoordhuis>isaacs: we're discussing tls
16:34:27  * mikeal quit (Quit: Leaving.)
16:34:36  <isaacs>oh, i did miss a step: 16:33 <@indutny> we could do http the same way
16:34:44  <indutny>heh
16:34:53  <indutny>it just looks logical for me
16:35:08  <isaacs>i actually think that http belongs more in js than in C++
16:35:15  <bnoordhuis>i'd be in favor splitting http in a high and a low level module actually
16:35:17  <isaacs>but we can't ditch the http_parser.c yet
16:35:19  <bnoordhuis>but that's a different discussion
16:35:24  <isaacs>yeah, very different.
16:35:27  <indutny>welll
16:35:36  <indutny>its about consistency
16:35:41  <indutny>and applies to this discussion
16:35:45  <isaacs>first we need to just get our current http to be less shitty. but go back to tls, please, don't let me distract you. just bumped into my Pet Issue, that's all ;)
16:36:09  <bnoordhuis>indutny: there's people waiting for me
16:36:16  <indutny>bnoordhuis: k
16:36:21  <indutny>lets talk about it later
16:36:32  <bnoordhuis>yep, give you time to think about it
16:36:36  <bnoordhuis>and realize i'm right :)
16:36:39  <indutny>haha
16:36:42  <indutny>lets see
16:36:53  <bnoordhuis>okay, later
16:36:58  <indutny>later
16:37:39  <isaacs>indutny: We can make TLS a bit less transparent/hackable if it makes it better/faster/stronger. but we DO need to make that trade-off clear, and explore exactly what it means.
16:37:48  <isaacs>indutny: i think most node users don't give two shits about hacking tls
16:37:55  <indutny>that "bit" is a bit too big
16:38:08  <indutny>separate network stack
16:38:19  <indutny>every networking operation in C++
16:38:24  <isaacs>sure
16:38:31  * loladiro joined
16:38:52  <indutny>why not write it in java then?
16:38:54  <isaacs>it might end up taking us back to the Bad Old Days of node 0.1 when https and http were completely different mostly-overlapping implementations.
16:39:04  <isaacs>so we'd have to find the right trade-off there.
16:39:06  <isaacs>no one wants that.
16:40:02  <isaacs>indutny: https://gist.github.com/5271988 <-- review?
16:40:06  <isaacs>tests pass
16:40:19  <isaacs>just look for typos :)
16:40:34  <indutny>LGTM
16:41:16  * bnoordhuis quit (Ping timeout: 246 seconds)
16:41:36  <isaacs>k
16:41:37  <isaacs>thanks
16:41:41  <indutny>it might be that I'm not in a good mood today :)
16:41:43  <trevnorris>isaacs: if there's a cc api that's only meant to be accessed directly via the js api, is it fine to remove the argument type checks in cc?
16:41:46  <MI6>joyent/node: isaacs v0.10 * 7af075e : crypto: Pass options to ctor calls - http://git.io/MLEymg
16:41:48  <indutny>so please ignore me if I'm a bit rude
16:42:14  <isaacs>indutny: everything ok?
16:42:17  <indutny>yeah
16:42:23  <indutny>just mood issues
16:42:35  <isaacs>trevnorris: asserts are good for that.
16:42:45  <isaacs>trevnorris: it's a good idea to have defense in depth. they've come in handy a lot of times.
16:42:50  <isaacs>trevnorris: and C asserts are cheap
16:43:07  * piscisaureus_ quit (Ping timeout: 258 seconds)
16:43:29  <trevnorris>isaacs: it's mainly all the arg[n]->IsObject/IsUint32/Is... that is costly.
16:43:38  <isaacs>trevnorris: hm.
16:43:48  <indutny>trevnorris: benchmark
16:43:49  <indutny>and decide
16:43:55  <isaacs>trevnorris: yes, benchmark
16:43:57  <indutny>usually cost is not that high
16:44:10  <isaacs>trevnorris: but if we can replace that with something that is equally blow-uppy then sure.
16:44:21  <isaacs>or move to the JS layer
16:45:06  * inolen quit (Quit: Leaving.)
16:46:37  <tjfontaine>lul "has simply not provided any evidence to support his view or refute a single point of mine on the matter"
16:46:43  <trevnorris>how often are buffers < 256 bytes needed?
16:47:01  <trevnorris>because that's the only case where the perf difference is noticeable (by around 10%)
16:47:08  <tjfontaine>possible to generate them on the dns side
16:49:01  * benoitc quit (Excess Flood)
16:51:15  * benoitc joined
16:53:48  <isaacs>tjfontaine: error handling still happening?
16:54:06  <mmalecki>trevnorris: happens fairly often with websockets
16:54:11  <tjfontaine>isaacs: yes that thread
16:54:21  <isaacs>oh, jeez.
16:54:23  * isaacs sigh
16:54:32  <tjfontaine>the emphasis was his, not mine :)
16:54:33  <trevnorris>mmalecki: thanks
16:54:50  <isaacs>yeah
16:58:33  * TooTallNate joined
16:58:55  <MI6>nodejs-v0.10: #93 UNSTABLE windows-x64 (5/570) smartos-ia32 (1/570) windows-ia32 (5/570) http://jenkins.nodejs.org/job/nodejs-v0.10/93/
16:59:31  <trevnorris>is there a way to attach to module.exports from cc, or will it have to go through js?
17:00:13  <tjfontaine>hmm there was an issue about this, I think it may have been added
17:00:19  <tjfontaine>(we really should document this :P)
17:00:45  <trevnorris>i'll make that my next project after finishing the memory allocation stuff.
17:01:07  <tjfontaine>https://github.com/joyent/node/pull/4634
17:01:27  <tjfontaine>https://github.com/joyent/node/commit/15508589a163b0c9f09ac608281f9ebb015d4deb being the commit that landed
17:02:12  <trevnorris>tjfontaine: awesome. thanks =)
17:03:29  * mikeal joined
17:04:46  <isaacs>yes, in 0.10 there is this
17:05:33  * mikeal quit (Client Quit)
17:08:00  * bnoordhuis joined
17:09:44  <trevnorris>cool. I just moved all the check logic to cc and bypassed the need for js. ended up being faster.
17:12:33  * c4milo quit (Remote host closed the connection)
17:13:07  <mmalecki>trevnorris: let me know when you have a patch. I'll give it a try on our LBs
17:13:29  <mmalecki>(the buffer allocation thing, I mean)
17:15:35  * bnoordhuis quit (Ping timeout: 260 seconds)
17:16:35  <trevnorris>mmalecki: right now I only have the allocator (the part that will handle all external memory allocation).
17:16:53  <trevnorris>it's going to take me a while to hammer it into the current Buffers
17:17:16  <mmalecki>yeah, no worries
17:17:23  <mmalecki>just let me know when can we test it out
17:18:00  <trevnorris>will do. throw a reminder on PR 4964 or i'll probably forget. =)
17:18:15  * piscisaureus_ joined
17:21:28  <MI6>nodejs-master: #128 UNSTABLE windows-ia32 (6/573) osx-ia32 (1/573) windows-x64 (5/573) smartos-ia32 (3/573) smartos-x64 (2/573) http://jenkins.nodejs.org/job/nodejs-master/128/
17:22:33  <trevnorris>there's a test i need to include to make sure the allocator doesn't seg fault. i'll assume there's not an assert for that?
17:22:44  <trevnorris>;-)
17:22:54  <tjfontaine>well
17:23:15  <tjfontaine>best thing would be spawn a child and check its exit code :)
17:23:42  <trevnorris>good idea. i'll break this test into its own file
17:24:24  <tjfontaine>trevnorris: you'll also want to throw --expose-gc in with it and forcibly run the gc to try and make it fail
17:25:09  <trevnorris>tjfontaine: i was wondering how i'd get the gc exposed for testing. child process, duh...
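The test shape tjfontaine suggests: run the possibly-crashing code in a child process and inspect its exit status, so a segfault fails the test instead of taking the runner down. A POSIX-only sketch of that pattern (node's own tests would do the equivalent with child_process.spawn and --expose-gc):

```cpp
#include <cassert>
#include <cstdlib>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Run fn() in a forked child and report whether it exited cleanly. A
// segfault or abort() shows up here as an abnormal exit status rather
// than killing the test runner itself.
bool child_exited_cleanly(void (*fn)()) {
  pid_t pid = fork();
  if (pid == 0) {  // child: run the possibly-crashing code
    fn();
    _exit(0);
  }
  int status = 0;
  waitpid(pid, &status, 0);
  return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```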
17:34:48  * inolen joined
17:35:49  <trevnorris>tjfontaine: ah, it is documented. it just doesn't stand out very well.
17:36:09  <tjfontaine>which the module thing?
17:37:11  * c4milo joined
17:37:55  <trevnorris>how to attach directly to module.exports from cc
17:39:10  <tjfontaine>nod
17:39:37  * loladiro quit (Quit: loladiro)
17:39:43  <trevnorris>oh, but doesn't seem to work from core. just from node modules.
17:40:00  <trevnorris>at least, that's what it look like
17:45:12  <trevnorris>ToObject is sort of useless. I just attached an external data array to a regex.
17:45:32  <tjfontaine>that's cute
17:45:48  <tjfontaine>maybe we can use that to use pcre for regex :P
17:46:35  <trevnorris>heh
17:51:52  <trevnorris>ugh. ok. if a user passes an array then the program will fatally error. guess that means I need to add yet another check.
17:53:24  <tjfontaine>trevnorris: well this Alloc isn't going to be "public" right? so let them be responsible for that, too many checks and I presume it will start to hurt your perf
17:54:24  <trevnorris>tjfontaine: yeah. i'm torn. it's freakishly efficient at just allocating external memory to an object w/o all the frills of a buffer.
17:54:34  <trevnorris>and there have been times i'd have liked that.
17:55:06  <tjfontaine>but that's why it would be tucked away in _internal_alloc
17:55:17  <tjfontaine>if someone wants it they can be responsible for the consequences
17:55:40  <trevnorris>ah. a sort of, hey it's here. but make sure you know what the crap you're doing, type of thing?
17:55:55  <tjfontaine>ya
17:57:46  <trevnorris>isaacs: you are a saint. anyone else would have ripped his face off.
17:58:17  <tjfontaine>well, I suspect he's approaching /dev/null land anyway
17:58:32  <trevnorris>lol
17:58:34  <isaacs>he's a good person.
17:58:53  <isaacs>he works for linkedin, and believes that he's doing right by the node community, and his employer
17:59:06  <trevnorris>that's cool
17:59:12  <isaacs>it's more frustrating for him than for me, i suspect.
17:59:33  <tjfontaine>sure, but his tactic recently hasn't been ideal for the community, at least as far as attitude is concerned
17:59:51  <isaacs>yep. i agree. that's why i'm trying to guide in a different direction.
18:00:04  <isaacs>it's tough, and i'm not very good at this, but that's life, i guess.
18:00:16  <trevnorris>guess i'd just disagree with two ideas (that I think you've pointed out already)
18:00:43  <trevnorris>returning an Error instead of throwing isn't js. that would really throw off the community.
18:00:46  <tjfontaine>isaacs: pivoting back to addressing a specific concern was the right thing to do there--no doubt
18:01:37  * c4milo quit (Remote host closed the connection)
18:01:53  <isaacs>i did have a long pointless unhelpful stupid frustration vent-fest that i wrote in vim and threw away.
18:02:01  <isaacs>:)
18:02:14  <tjfontaine>haha, that usually makes you feel better
18:02:21  * loladiro joined
18:02:24  <tjfontaine>also https://github.com/joyent/node/issues/3871#issuecomment-15652630 that seems interesting, is that valid isaacs?
18:02:26  <isaacs>but when someone keeps coming back, i mean, i want to harness that enthusiasm if i can.
18:03:02  * bradleymeck joined
18:03:23  <isaacs>tjfontaine: it's not a guaranteed-correct fix, no.
18:03:29  <isaacs>tjfontaine: it's a "works by accident" fix.
18:03:48  <tjfontaine>right ok, that's what it felt like
18:03:48  <isaacs>tjfontaine: what we need is synchronous stdio in windows.
18:03:57  * c4milo joined
18:03:58  <isaacs>if you did an abort() or something, for example, you'd lose output
18:04:05  <isaacs>we need the process to block while doing stdio
18:04:31  <tjfontaine>nod
18:05:09  <trevnorris>isaacs: your 2 cents. the new allocator could be made community ready with minimal perf hit. want that, or keep it internal?
18:05:33  <isaacs>trevnorris: i almost always recommend innovations be done in userland first.
18:05:38  <isaacs>trevnorris: if that's your question.
18:05:58  <isaacs>trevnorris: if you can play around with a npm module that provides some fast-buffer-allocation stuff, and let people play around with it, i mean, that's awesome.
18:06:35  * loladiro quit (Client Quit)
18:08:05  * brson joined
18:11:47  <trevnorris>isaacs: how did you do that w/ streams2? i'm not sure how to do that while testing if it'd work doing things like replacing the SlabAllocator.
18:12:21  <trevnorris>(i swear. you'd never know english is my language by the way I type)
18:12:35  <tjfontaine>heh
18:13:53  * loladiro joined
18:15:19  * mikeal joined
18:15:52  * piscisaureus_ quit (Ping timeout: 258 seconds)
18:16:01  * loladiro quit (Client Quit)
18:21:48  <isaacs>trevnorris: hah, well, yeah, it's tricky, especially for something core like this.
18:22:04  <isaacs>trevnorris: in practice, i had the readable-stream module, but also a streams2 branch where i was doing the real work.
18:22:35  <trevnorris>isaacs: that makes sense. think i'll keep doing it that way as well.
18:23:15  <tjfontaine>will monkeypatch'ing over [Slow]Buffer be enough to see the difference in the benchmarks?
18:25:10  * kazupon quit (Remote host closed the connection)
18:25:24  <trevnorris>tjfontaine: by monkeypatch'ing do you mean a dirty/quick implementation?
18:25:54  <tjfontaine>trevnorris: I'm just trying to think of how far you can get from a user module implementation
18:26:19  <trevnorris>i'd have to overwrite the entire Buffer class w/ my own implementation.
18:26:44  <isaacs>trevnorris: of course, right, it's hard to do in this case without making deep global changes.
18:26:58  <trevnorris>like, it'd be trivial to create a module w/ the allocator. but you couldn't see any effect in node core.
18:27:05  <isaacs>trevnorris: streams are higher level, so you can just use the code, without touching the guts too much
18:27:39  <isaacs>trevnorris: maybe it could return a Buffer function, and you can do global.Buffer = require('trevors-super-buffer') if you really want
18:28:21  <trevnorris>isaacs: had that idea, and just might still for user-land, but still wouldn't show any improvements on net/tls/etc.
18:28:54  <trevnorris>the only thing below Buffer is ObjectWrap. everything else is built on top of those.
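[editor's note: isaacs' `global.Buffer = require(...)` idea above can be sketched in userland. This is only an illustration, not trevnorris' allocator: `CountingBuffer` and its counter are made up here, and the "custom allocation strategy" is just delegation to the built-in Buffer so the wrapper stays trivially correct.]

```javascript
// Userland Buffer replacement sketch: a module could export a
// Buffer-compatible constructor and callers opt in via
//   global.Buffer = require('my-buffer')   // hypothetical module name
// Here the wrapper only counts allocations; a real module would swap in
// its own allocation strategy instead.
const RealBuffer = Buffer;

let allocations = 0;

function CountingBuffer(arg, encodingOrOffset, length) {
  allocations++;
  // Delegate to the real implementation (Buffer.alloc / Buffer.from).
  if (typeof arg === 'number') return RealBuffer.alloc(arg);
  return RealBuffer.from(arg, encodingOrOffset, length);
}

// Inherit the static API (isBuffer, concat, ...) so existing code keeps working.
Object.setPrototypeOf(CountingBuffer, RealBuffer);

const buf = CountingBuffer(16);      // numeric arg: sized allocation
const str = CountingBuffer('hello'); // string arg: contents copied in
```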
18:31:05  * AvianFluquit (Remote host closed the connection)
18:35:40  <trevnorris>isaacs: what's sort of sad is that took me just short of 3 weeks to figure it out. and boiled down to around 55 lines of code.
18:36:47  <tjfontaine>that's good :)
18:37:35  <tjfontaine>consider how much more you know about node, v8, and c++ -- and when less code results in wins everyone wins :)
18:39:55  <isaacs>off to practice. going to dig into this IIS FIN/EOF issue after.
18:42:11  <trevnorris>tjfontaine: here's what it all boiled down to: http://git.io/oELKiw
18:43:01  <trevnorris>what I finally figured out (last night around 3am) was that persisting an object is cheap, if the object existed beforehand and no other properties need to be set.
18:43:32  <trevnorris>so i'm actually creating a persistent for every allocation, and what's funny is it's faster than the current Buffer.
18:43:50  <trevnorris>also this will remove the SlabAllocator memory leak
18:44:52  <tjfontaine>that doesn't necessarily remove the .parent leak though
18:45:00  <trevnorris>there is no more .parent
18:45:09  <tjfontaine>what do you get when you slice?
18:45:16  <tjfontaine>oh right
18:45:17  <tjfontaine>ok
18:45:20  * loladirojoined
18:45:36  <trevnorris>yeah. you only can have that problem if the user slices an already allocated buffer.
18:45:47  <trevnorris>but if we're creating pointers to memory, then not much we can do about that.
18:45:56  <tjfontaine>trevnorris: this is faster for thousands of buffer creations than the slab backed version?
18:46:02  <tjfontaine>*tiny
18:46:10  <trevnorris>interestingly, yes.
18:46:18  <tjfontaine>hmm
18:48:33  <trevnorris>it's only a little slower for very small buffers (e.g. < 256 bytes), but at most ~10%
18:48:55  <tjfontaine>ya, that's what I was expecting to hear
18:49:24  <trevnorris>but it's 4x's faster for buffer allocations over 1024 * 4 bytes.
18:49:41  <tjfontaine>are 4k buffers a common allocation?
18:50:11  <trevnorris>no idea. was just collecting data from a spectrum of ranges.
18:50:23  <tjfontaine>it would be interesting to run a dtrace histogram for buffer allocs to see what the common allocs are
18:50:36  <trevnorris>but one place this will help is in tls. since those only allocate from cc, which are always SlowBuffers
18:50:47  <tjfontaine>I can probably gather that information
18:50:53  <trevnorris>that would be awesome.
18:50:57  <trevnorris>i've never used dtrace before.
18:51:03  <tjfontaine>it's awesome.
18:52:44  <tjfontaine>I mean, I guess those sizes should be known from the benchmarks, perhaps it would only be interesting data gathered from a production environment
18:53:05  <tjfontaine>anyway I'll do it anyway just for my own edification
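[editor's note: the size-vs-speed numbers trevnorris quotes above ("slower below 256 bytes, 4x faster above 4KB") come from a benchmark spectrum like the following rough sketch. The sizes and iteration count here are arbitrary; absolute numbers vary by machine and node version, so only relative trends are meaningful.]

```javascript
// Micro-benchmark buffer allocation across a spectrum of sizes.
// Buffer.allocUnsafe serves small requests (< Buffer.poolSize / 2) out of a
// shared pool and falls back to a standalone allocation above that.
const sizes = [1, 256, 4096, 65536];
const iterations = 10000;
const results = {};

for (const size of sizes) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) {
    Buffer.allocUnsafe(size);
  }
  const elapsed = Number(process.hrtime.bigint() - start);
  results[size] = elapsed / iterations; // ns per allocation
}
```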
18:54:39  <indutny>I'm back
18:55:58  * piscisaureus_joined
19:13:47  * c4miloquit (Remote host closed the connection)
19:18:31  * c4milojoined
19:19:45  * piscisaureus_quit (Ping timeout: 252 seconds)
19:25:48  * kazuponjoined
19:30:42  <mikeal>do any bad things happen if i do
19:30:51  <mikeal>var x = net.connect(port, host)
19:30:54  <mikeal>x.write(chunk)
19:31:00  <mikeal>before waiting on the "connect" event?
19:31:08  <mikeal>in 0.10
19:32:11  <indutny>mikeal: I presume no
19:32:24  <mikeal>k
19:32:35  <indutny>it'll buffer them
19:32:41  <mikeal>i'm writing an 0.10 http proxy
19:32:44  <mikeal>that doesn't decode http
19:32:46  <indutny>kewl :)
19:32:51  <indutny>oh, interesting
19:32:56  <mikeal>like at all, it just matches caseless 'Host: '
19:33:07  <mikeal>with a limit on how much it'll buffer in order to do so
19:33:29  * paddybyersquit (Ping timeout: 246 seconds)
19:33:32  <mikeal>and then you provide a function that tells me what to forward it to
19:33:35  <mikeal>so it's pure tcp
19:33:48  <mikeal>should be plenty fast
19:33:52  <indutny>nice!
19:33:53  * kazuponquit (Ping timeout: 240 seconds)
19:34:12  <mikeal>i'm not even optimizing the buffering and matching
19:34:16  <mikeal>i can do that later
19:34:28  <mikeal>not parsing anything is a big enough performance gain
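[editor's note: the matching mikeal describes, caseless 'Host:' with a cap on how much gets buffered, might look like the sketch below. The function name, the regex, and the 8KB cap are all made up for illustration; mikeal's actual proxy is not shown here.]

```javascript
// Buffer incoming chunks until a case-insensitive Host header can be
// extracted, and give up past a cap so a hostile client can't make the
// proxy buffer forever. No HTTP parsing beyond this one header.
const MAX_BUFFER = 8 * 1024; // illustrative limit

function findHost(buffered) {
  const text = buffered.toString('latin1');
  // 'i' = caseless, 'm' = match at any line start; tolerate \r\n line ends.
  const match = /^host:[ \t]*(.+?)\r?$/im.exec(text);
  if (match) return match[1].trim();
  if (buffered.length > MAX_BUFFER) throw new Error('headers too large');
  return null; // need more data before deciding where to forward
}

const chunk = Buffer.from('GET / HTTP/1.1\r\nHoSt: example.com\r\n\r\n');
const host = findHost(chunk);
```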
19:35:16  <indutny>looking forward to see it
19:36:02  * glipsonjoined
19:37:03  * glipsonquit (Client Quit)
19:47:31  <CoverSlide>kind of like bouncy?
19:47:46  <CoverSlide>well i guess bouncy does parse some
20:09:39  * nsm_joined
20:17:37  * paddybyersjoined
20:20:05  * AvianFlujoined
20:26:51  <trevnorris>tjfontaine: so will dtrace tell you what's called Buffer, and the size of the allocations?
20:27:28  <txdv_>Hey guys, do you want to read an article where an idiot "proves" that .net is faster than node.js?
20:27:35  <indutny>trevnorris: yes, it can
20:27:42  <indutny>txdv_: I already read it
20:27:57  <trevnorris>that's sick.
20:28:06  <indutny>hardly believe sorting big amounts of numbers is what a web server usually does
20:28:15  <txdv_>the reddit comments are so good - http://www.reddit.com/r/programming/comments/1b8lds/net_and_nodejs_performance_comparison/
20:30:47  * kazuponjoined
20:31:00  <trevnorris>i love those "well mine is bigger than yours" articles.
20:31:35  <tjfontaine>trevnorris: yes it can, if I put the probes in the right place
20:35:04  * kazuponquit (Ping timeout: 246 seconds)
20:37:20  * bnoordhuisjoined
20:41:42  * bnoordhuisquit (Ping timeout: 252 seconds)
20:45:57  <txdv_>THEY ARE ALL WRONG
20:45:57  <LOUDBOT>ALSO THEY GAVE ME A FREE TRUFFLE
20:46:03  <txdv_>I NEED TO CONVINCE THEM THAT THEY ARE WRONG
20:46:03  <LOUDBOT>WHAT THE FUCK DID YOU DO SIMON COPTER
20:48:12  * howdynihaojoined
20:49:09  <howdynihao>eventemitter is 'api frozen' 4, if i submit a PR that doesn't break anything thats ok yea?
20:49:31  <trevnorris>howdynihao: nope. this has been a matter of debate, but nope.
20:49:41  * AvianFluquit (Remote host closed the connection)
20:49:54  <tjfontaine>howdynihao: you want to add a method, or fix something?
20:50:04  <tjfontaine>or add some other feature?
20:50:11  * `3E|AFKquit (Remote host closed the connection)
20:50:23  <howdynihao>well 2 pr's one is to just refactor, optimize 'emit'
20:50:50  <howdynihao>and another is a catch all event (yes i know its been discussed before) but the implementation would be so simple if the first PR went through
20:50:50  <tjfontaine>that better come with excellent benchmarks and solid pre/post data, and not change behavior :)
20:51:15  <tjfontaine>that one seems unlikely to go through
20:51:40  <trevnorris>howdynihao: you already have the pr's up?
20:51:55  <howdynihao>no, i wanted to get some feedback before committing the time
20:52:29  <tjfontaine>making emit faster is always a good idea, so long as you can prove it doesn't adversely affect anyone
20:53:13  <howdynihao>there don't appear to be any benchmarks set up for EE, so i guess i'll have to get that done for the first PR to go through?
20:53:48  <tjfontaine>howdynihao: if you search the issues there are people who have tried similar things, and those issues have the relevant data
20:54:10  <trevnorris>howdynihao: because event emitter is really hot code, the performance impact will show up in almost all tests that require streams.
20:55:25  <trevnorris>howdynihao: also "git show 75305f3..d1b4dcd"
20:55:37  <trevnorris>i spent almost 2 weeks doing the same.
20:55:39  * qmx|awaychanged nick to qmx
20:57:16  <trevnorris>howdynihao: don't let me discourage you from trying. at the least you'll get to know node internals much better.
20:57:34  <tjfontaine>yes, and there's always the chance that your idea is a good one
20:58:22  <trevnorris>tjfontaine: i swear. as soon as v1 is released i'm going back and ripping out every last line of legacy crap.
20:58:37  <tjfontaine>trevnorris: https://gist.github.com/tjfontaine/5273607 most of those numbers are expected considering what I was using to generate the histogram
20:59:04  <tjfontaine>trevnorris: that's with the probe right before makeFastBuffer, so not entirely a full view of things
20:59:26  <tjfontaine>trevnorris: and yes well you mean in the v2 branch not necessarily in 1.1 :)
20:59:45  <trevnorris>tjfontaine: allow me to give you a big virtual hug. =)
20:59:50  <trevnorris>those histograms are awesome.
21:00:07  <tjfontaine>sudo dtrace -n "node*::buffer-alloc { @allocs = quantize(arg0); }" -c "./node benchmark/common net"
21:00:11  <trevnorris>but what the crap is allocating so many 1 byte buffers?
21:00:30  <tjfontaine>net/dgram.js len=1 num=100 type=send dur=5: 0.00069884
21:00:31  <tjfontaine>net/dgram.js len=1 num=100 type=recv dur=5: 0.00022959
21:00:33  <tjfontaine>probably
21:00:41  <trevnorris>ah. ok
21:00:49  <trevnorris>yeah. those really suck.
21:01:57  <tjfontaine>so which Buffer::New is interesting, or is it more interesting to grab from Buffer::Replace
21:02:20  <trevnorris>tjfontaine: everything goes through Buffer::Replace, including slices.
21:02:42  <trevnorris>took me half a day to figure out how that thing worked.
21:03:25  <trevnorris>just benchmarked the 65536 byte case. smalloc is 2.6 times faster.
21:03:26  <tjfontaine>ya I'm just trying to decide if I can easily add anything that does a new buffer from c++ without duplicating a call from js land
21:04:27  <trevnorris>like add any new probes, or add new code for testing?
21:05:09  <trevnorris>yeah. and I meant for the v2 branch. =P
21:05:10  <tjfontaine>well I was wondering if it's only interesting to see the buffers from js, or if there was something interesting to be seen from C++
21:05:24  <trevnorris>all buffer calls go through MakeFastBuffer
21:05:35  <trevnorris>so if you track there then you'll see all allocations.
21:05:50  * howdynihaoquit (Remote host closed the connection)
21:06:05  <trevnorris>(unless it's calling a SlowBuffer, in which case performance is really going to suck anyways)
21:06:20  <tjfontaine>well that's kinda what I want to know :)
21:06:42  <tjfontaine>though I suppose none of the benchmarks should do that
21:06:55  <trevnorris>well. hash digests do.
21:07:12  <trevnorris>anything that calls Encode from node.cc will
21:07:36  <trevnorris>but that's only crypto stuff.
21:07:42  <tjfontaine>so is there a single place in the node::Buffer that will track both js and c++ activity?
21:07:58  <tjfontaine>replace certainly isn't called on a slice, right?
21:08:08  <tjfontaine>not that slice matters
21:08:24  <trevnorris>well. the problem is that it'll track slab allocations.
21:08:36  <trevnorris>i mean. pools that buffers use.
21:08:56  <trevnorris>that's why it's better to track from MakeFastBuffer, since that will show how much memory js actually wants to use.
21:09:01  <tjfontaine>what's interesting at the moment is (regardless of what's backing them) what buffer sizes we're using
21:09:14  <tjfontaine>ok fair point
21:09:46  <trevnorris>can you track from a specific point within a function (e.g. an if statement)?
21:10:00  <tjfontaine>yes
21:10:06  <trevnorris>sweetness!!!
21:10:15  <tjfontaine>well, all about where I want to put my probe
21:10:28  <tjfontaine>or make more probes or whatever
21:10:31  <trevnorris>ok. so track MakeFastBuffer, then also track where memory is being allocated in Buffer::Replace
21:10:56  <trevnorris>then we can compare how much memory is actually being allocated, and how much memory js really wanted to use.
21:11:20  <trevnorris>also, you'll want to track SlabAllocator::Allocate
21:11:35  <tjfontaine>you want the replace to be accounted separately from the js allocs?
21:11:37  <trevnorris>since that's used as a backing for a lot of network operations.
21:12:32  <trevnorris>yeah. if we have two tests, one showing how much memory was actually used, and another showing how much memory js was asking for, we can see how it plays with current slab allocations, etc.
21:12:38  <tjfontaine>I think I can probe slab allocations now without any new probes
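[editor's note: the pool/slab mechanics being discussed can be modeled in a few lines. This mirrors the *shape* of node's 8192-byte buffer pool, not its actual implementation: small requests are sliced out of a shared slab (so each slice keeps the slab alive, the `.parent` retention trevnorris mentions), and anything larger gets its own allocation.]

```javascript
// Toy slab allocator: slice small requests out of a shared 8KB slab,
// allocate big requests standalone.
const SLAB_SIZE = 8192;

let slab = Buffer.allocUnsafe(SLAB_SIZE);
let offset = 0;

function poolAlloc(size) {
  if (size > SLAB_SIZE) return Buffer.allocUnsafe(size); // too big for pooling

  if (offset + size > SLAB_SIZE) { // current slab exhausted: start a new one
    slab = Buffer.allocUnsafe(SLAB_SIZE);
    offset = 0;
  }

  const slice = slab.subarray(offset, offset + size); // shares slab memory
  offset += size;
  return slice;
}

const small = poolAlloc(64);   // sliced from the slab
const big = poolAlloc(65536);  // standalone allocation
```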
21:13:04  <trevnorris>dude. this is freaking awesome. dtrace only works on sunos right?
21:13:47  <tjfontaine>no, it works on osx, freebsd, and an incomplete linux one as well
21:14:27  <trevnorris>heh. not often I hear of a development/debugging tool that I wish worked better on linux.
21:15:14  <tjfontaine>trevnorris: then you need to spend some time looking at the sunos debugging features, and you'll ask why you bothered with linux :)
21:15:23  <trevnorris>heh
21:19:35  * benoitcquit (Excess Flood)
21:20:44  <tjfontaine>ah this is kinda interesting data
21:21:37  <tjfontaine>trevnorris: https://gist.github.com/tjfontaine/5273607 updated
21:23:01  <trevnorris>interesting.
21:23:20  <trevnorris>Yeah. I think the latter shows all the SlabAllocator segments.
21:23:23  * stagasquit (Read error: Connection reset by peer)
21:23:53  <tjfontaine>well, everything too big to go into the pool, plus the 52 pools that were made
21:24:18  <trevnorris>what do you mean?
21:24:31  <tjfontaine>8192, that's the buffer pools right?
21:24:37  <trevnorris>yeah.
21:24:47  * benoitcjoined
21:24:52  <tjfontaine>and things larger than that get allocated on their own?
21:25:01  <trevnorris>but after that memory is allocated at the exact size it's needed.
21:25:03  <trevnorris>yeah
21:25:21  <trevnorris>check out src/slab_allocator.h
21:25:24  <tjfontaine>right
21:25:49  <trevnorris>node streamwrap doesn't use buffers directly. it uses the slab allocator.
21:26:55  <tjfontaine>which is why you'll only see 6 65536
21:27:22  <trevnorris>yeah. that's the Buffer pool replenishing after depletion.
21:27:29  <trevnorris>but those are only used for misc things.
21:28:32  <tjfontaine>having the consolidation will let us reuse more, though you may not notice as much since v8 is most of our overhead in making things weak
21:28:38  <tjfontaine>anyway
21:28:42  <tjfontaine>this data is mostly pointless
21:28:50  <tjfontaine>because it was observing the benchmarks
21:29:42  <trevnorris>there's a strange thing though. if you persist and make weak an existing object (not a new one) and don't set any properties, it becomes fast.
21:30:36  <trevnorris>check out how I did it here: http://git.io/99vRwg
21:31:11  <trevnorris>this method outperforms buffer pools for all allocations larger than 256 bytes.
21:31:21  * benoitcquit (Excess Flood)
21:31:22  * loladiroquit (Quit: loladiro)
21:31:24  * kazuponjoined
21:31:56  <tjfontaine>trevnorris: and where in the perf is it slower for 256? presuming at that point you're finally hitting malloc overhead?
21:32:44  <trevnorris>that's just necessary evils. for example:
21:33:22  <tjfontaine>trevnorris: some mallocs are necessary, but this is also what a slab allocator is about solving
21:33:43  <trevnorris>the biggest difference you'll see is a 1 byte allocation. Buffers can do it in about 266 ns. where Alloc does it in 290 ns
21:34:27  <trevnorris>tjfontaine: but see, once you have to hit the SlowBuffer performance goes to total crap.
21:34:32  <tjfontaine>is the difference there the malloc, can't be the js-c boundary since makeFastBuffer calls into it
21:34:49  <tjfontaine>trevnorris: I'm not talking about slowbuffer performance
21:35:12  <trevnorris>but you have to take that into account since every time a slab is used up it needs to reallocate another slab.
21:36:10  <tjfontaine>trevnorris: well block/heap arena style keep a free list of what's available, you rarely need to allocate a whole new heap unless it's completely used
21:36:17  * piscisaureus_joined
21:36:40  <trevnorris>tjfontaine: your test shows that 149917 new slabs needed to be allocated.
21:37:17  * benoitcjoined
21:37:29  <trevnorris>see. here's the difference. with pools you have to attach the persistent to the object so it knows it's alive.
21:37:32  * piscisaureus_quit (Client Quit)
21:37:33  <trevnorris>which kills you in gc
21:37:37  <tjfontaine>trevnorris: the type of caching I'm talking about is something like _freelist.js
21:37:43  * kazuponquit (Ping timeout: 264 seconds)
21:37:53  <tjfontaine>er where is it
21:38:02  <tjfontaine>oh just freelist.js
21:38:46  <trevnorris>the reason buffer pooling works is because the persistent is passed back to the Buffer as .parent.
21:38:58  <trevnorris>but that causes a lot of gc overhead while it figures out if it's still being used.
21:39:16  <trevnorris>when there are no attachments then it knows immediately when it's ready for cleanup.
21:39:29  <tjfontaine>trevnorris: what I'm trying to explain is that you've eliminated that model in your new way, but if < 256 is slow it's like malloc overhead, and the way to solve that is by caching those allocs
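[editor's note: the freelist.js-style caching tjfontaine is pointing at keeps up-to-N released objects around instead of paying allocation cost every time. The sketch below follows the alloc/free shape of node's internal freelist; the constructor, limit, and 256-byte example are illustrative.]

```javascript
// Minimal free-list cache: reuse released objects when available,
// construct new ones otherwise.
function FreeList(max, construct) {
  this.max = max;           // cap on how many idle objects we hold
  this.construct = construct;
  this.list = [];
}

FreeList.prototype.alloc = function () {
  // Serve from the cache when possible, otherwise build a fresh object.
  return this.list.length ? this.list.pop() : this.construct();
};

FreeList.prototype.free = function (obj) {
  if (this.list.length < this.max) {
    this.list.push(obj); // cache it for the next alloc()
    return true;
  }
  return false; // cache full; let the GC take it
};

const pool = new FreeList(2, () => Buffer.allocUnsafe(256));
const a = pool.alloc();
pool.free(a);
const b = pool.alloc(); // same object, served from the cache
```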
21:40:09  <trevnorris>for how much v8 has to do, the malloc is pretty much a noop.
21:40:34  <tjfontaine>which is why I asked if you checked the perf to see what the difference was for < 256
21:40:50  <tjfontaine>and if it was malloc there was a solution for that
21:41:02  <trevnorris>yeah. instead of typing it out. let me just post it. one sec.
21:43:06  <trevnorris>tjfontaine: here's the output for 1 byte allocations: https://gist.github.com/trevnorris/5273872
21:43:11  <trevnorris>everything over 1% is listed
21:46:52  <tjfontaine>so who's responsible for the malloc's showing there, mostly you or generic v8 allocs?
21:47:34  <trevnorris>those are mine.
21:47:56  <tjfontaine>so for 1byte allocs, it seems like malloc is more than a noop?
21:48:53  * c4miloquit (Remote host closed the connection)
21:49:30  <trevnorris>unless _int_malloc is also a malloc, then mallocs only account for 5% of the time, right?
21:50:59  * c4milojoined
21:51:09  <tjfontaine>it's difficult without seeing callstacks that go along with it, but it would seem like _int_malloc is called from malloc
21:51:49  <trevnorris>tjfontaine: also, those results are with all the API checks in place.
21:52:07  <trevnorris>if I remove all those checks then Alloc is 10% faster at 1 byte allocs.
21:54:37  <trevnorris>so as a baseline: it takes ~54 ns just to enter and return from cc.
21:55:14  <tjfontaine>right
22:02:50  <trevnorris>tjfontaine: you think you could get counts from SlabAllocator::Shrink?
22:03:17  <trevnorris>and can you run that against the http benchmark?
22:03:49  * benoitcquit (Excess Flood)
22:07:17  * benoitcjoined
22:22:26  * nsm_changed nick to nsm
22:33:59  * kazuponjoined
22:34:35  <tjfontaine>trevnorris: sure
22:34:45  <trevnorris>awesome. thanks.
22:36:31  <trevnorris>does anyone in here know what "Buffer::New(Handle<String> string)" is for?
22:36:51  <trevnorris>it's not used in core anywhere, but has the comment "C++ API for constructing fast buffer"
22:38:41  * kazuponquit (Ping timeout: 256 seconds)
22:38:44  <nsm>trevnorris: its creating a Buffer and setting the contents to the string
22:39:06  <trevnorris>nsm: get that from the code. what i'm wondering is why it's there
22:39:28  <trevnorris>it's doing it in the most horrible way possible.
22:40:12  <trevnorris>it's getting the Buffer function from js, not caching it in a Persistent mind you, then calling it and passing the string.
22:40:30  <trevnorris>which then calls .write(), and jumps back into cc to write the string
22:41:14  <trevnorris>and it has nothing to do with the comment of "for constructing a fast buffer". that's why MakeFastBuffer exists.
22:41:23  <nsm>haha, yea i don't know
22:42:36  <tjfontaine>I bet this was used at one point in crypto when it was string based
22:44:00  <trevnorris>that makes sense. well, it's not documented, unused. i say it goes!
22:44:17  * toothrchanged nick to toothrot
22:44:42  * bnoordhuisjoined
22:46:35  <tjfontaine>trevnorris: https://gist.github.com/tjfontaine/5273607
22:47:25  <trevnorris>tjfontaine: thanks. that's very interesting.
22:47:53  <trevnorris>i can't figure out why Buffer::Replace is working over so many 65536
22:48:23  <trevnorris>that's SlowBuffer territory.
22:48:47  <trevnorris>hm. that must be from the benchmark size, huh?
22:48:47  <tjfontaine>I could move over to smartos and find out the jstack
22:48:54  <tjfontaine>ya it's the benchmark though
22:49:48  <trevnorris>and I wonder what all the 32 bit Shrinks are from.
22:49:57  <trevnorris>that isn't something I expected.
22:50:42  <trevnorris>will jstack tell me how we're arriving at Shrink?
22:50:56  <tjfontaine>yup
22:51:02  <tjfontaine>both C and JS stack
22:51:40  <trevnorris>that is freakin sweet
23:08:19  * defunctzombiechanged nick to defunctzombie_zz
23:11:45  <trevnorris>tjfontaine: thanks for all the benchmark info. will be super helpful. have to jam, but if you have anything else just throw it up and I'll see it on the logs.
23:11:45  * c4miloquit (Remote host closed the connection)
23:11:47  * trevnorrisquit (Quit: Leaving)
23:12:11  * c4milojoined
23:13:24  <tjfontaine>trevnorris, we will convince you to not rely on the logs and instead lurk here :)
23:16:48  * c4miloquit (Ping timeout: 252 seconds)
23:26:09  * bradleymeckquit (Quit: bradleymeck)
23:34:35  * kazuponjoined
23:39:21  * kazuponquit (Ping timeout: 252 seconds)
23:47:38  * paddybyersquit (Ping timeout: 245 seconds)
23:48:14  <bnoordhuis>i find it pleasing that most bugs in the tracker are either windows or streams2 related
23:48:25  <isaacs>ahhh
23:48:29  <isaacs>nice.
23:48:38  <isaacs>figured out this TLS IIS bug.
23:48:51  * mikealquit (Quit: Leaving.)
23:48:52  <isaacs>it's because we used to do src.on('close', function() {dest.destroy()}) in stream.pipe()
23:48:55  <isaacs>and now we don't.
23:49:08  <bnoordhuis>do i hear 'streams2 bug'?
23:50:09  <mmalecki>not that uncommon :)
23:50:33  <isaacs>bnoordhuis: well, no, you hear 'streams2 feature' ;P
23:50:39  <bnoordhuis>hah :)
23:50:49  <isaacs>bnoordhuis: no, for real, many people complained a lot about this.
23:51:11  <isaacs>bnoordhuis: but i suppose "We did what you asked, and broke someone else's program" is kind of how "streams2 bug" works
23:51:19  <isaacs>most bugs, for that matter.
23:51:25  <bnoordhuis>yeah
23:51:29  <bnoordhuis>it wasn't a criticism btw
23:51:38  <bnoordhuis>it means that node is pretty stable by now
23:51:44  <isaacs>true that.
23:51:53  <isaacs>and, honestly, i'm shocked at how FEW bugs streams2 has caused.
23:51:59  <isaacs>that leads me to believe people still aren't upgrading to 0.10
23:52:09  <bnoordhuis>hah, probably :)
23:53:01  <isaacs>the only refactors i've ever seen in node that were this massive were moving to uv in 0.6, and ripping out promises in 0.1
23:53:05  <isaacs>and both basically broke everything
23:53:19  <isaacs>uv was less breaking, obviously
23:54:25  <MI6>joyent/node: Trevor Norris master * 2093e7d : lint: add isolate, remove semicolon - http://git.io/wUswyg
23:57:24  <isaacs>this IIS thing is kind of a weird case.
23:57:31  <isaacs>i mean, you really *can't* reusing a TLS socket, can you?
23:57:37  <isaacs>s/ing/e/
23:57:46  <bnoordhuis>tjfontaine: you got dtrace to work reliably on os x?
23:57:46  * isaacsforgoting englishes
23:58:12  <bnoordhuis>i'm looking at #5166 in case you're wondering
23:58:30  <tjfontaine>oh that's the fd one right?
23:58:34  <bnoordhuis>ah, OS!="mac"
23:58:37  <bnoordhuis>:sad panda:
23:58:41  <bnoordhuis>yes
23:58:41  <tjfontaine>5163
23:58:54  <tjfontaine>is the real magic
23:58:57  <tjfontaine>it does work on osx
23:59:02  <tjfontaine>but you can't resolve structs
23:59:09  <tjfontaine>so you have to pass them as arguments
23:59:29  <tjfontaine>ustack helpers don't work on osx still, that's beyond my power
23:59:48  <bnoordhuis>right. too bad