00:06:08  * loladiro joined
00:08:31  * loladiro quit (Client Quit)
00:47:57  * loladiro joined
01:28:50  * ArmyOfBruce quit (Excess Flood)
01:29:21  * ArmyOfBruce joined
01:31:17  * abraxas joined
01:41:51  * loladiro quit (Quit: loladiro)
01:54:05  * loladiro joined
01:59:50  * loladiro quit (Quit: loladiro)
02:23:18  * brson quit (Quit: leaving)
02:32:49  * bnoordhuis quit (Ping timeout: 260 seconds)
02:36:24  * dshaw_ joined
02:38:25  * erickt joined
02:38:34  * mikeal quit (Quit: Leaving.)
03:24:14  * mikeal joined
03:55:59  * loladiro joined
04:37:06  * mikeal quit (Quit: Leaving.)
04:55:11  * loladiro quit (Quit: loladiro)
05:02:06  * loladiro joined
05:06:39  * loladiro quit (Client Quit)
05:15:11  * Aria quit (Remote host closed the connection)
05:20:48  * erickt quit (Quit: erickt)
05:21:02  * ibobrik joined
05:26:29  * avalanche123 joined
05:39:15  * avalanche123 quit (Quit: Computer has gone to sleep.)
05:48:41  <indutny>ho
05:52:00  * TheJH joined
05:53:22  <isaacs>hola
05:57:01  * ibobrik quit (Quit: ibobrik)
06:10:28  * mikeal joined
06:31:05  * ibobrik joined
07:08:29  <indutny>ohai
07:27:42  * mmalecki quit (Ping timeout: 246 seconds)
07:35:23  * hz joined
07:45:49  * `3rdEden joined
08:07:02  * paddybyers joined
08:13:53  * paddybyers quit (Quit: paddybyers)
08:16:53  * mmalecki joined
08:32:56  <einaros>what exactly does node/libuv do to a tcp stream when you call pause()? hold reads?
08:35:48  * txdv quit (Ping timeout: 252 seconds)
08:38:01  <indutny>einaros: I guess it just stops reading
08:38:22  <indutny>einaros: and the kernel will handle all required bandwidth changes
08:38:27  <indutny>einaros: i.e. when the socket becomes full
08:38:38  * txdv joined
08:46:26  * rendar joined
08:53:57  * paddybyers joined
09:06:07  <einaros>indutny: yeah, looking at the uv source, that's it :)
09:06:12  <einaros>also, wireshark to the rescue
09:06:43  <indutny>einaros: definitely
09:06:55  <einaros>the os reduces the window size until it indicates that it's full - at that point the remote peer will hold all packets until space is available
09:11:25  * dshaw_ quit (Quit: Leaving.)
09:12:12  <indutny>obviously
09:13:47  <einaros>that means https://github.com/einaros/ratelimit isn't completely off the mark
09:27:08  * paddybyers quit (Quit: paddybyers)
09:29:57  * paddybyers joined
09:53:54  * mitsuhiko quit (Excess Flood)
09:57:20  * mitsuhiko joined
10:07:13  * ibobrik quit (Ping timeout: 245 seconds)
10:12:33  * loladiro joined
10:13:13  * TheJH quit (Ping timeout: 246 seconds)
10:19:43  * ibobrik joined
10:38:37  * stagas joined
10:45:05  * hz quit (Ping timeout: 272 seconds)
10:46:46  * hz joined
10:53:09  * saghul quit (Ping timeout: 245 seconds)
10:55:57  * saghul joined
11:16:05  * bnoordhuis joined
11:18:21  * hz quit (Disconnected by services)
11:18:25  * hz joined
11:36:17  * stagas quit (Quit: ChatZilla 0.9.88-rdmsoft [XULRunner 1.9.0.17/2009122204])
11:46:35  <indutny>bnoordhuis: hey man
11:46:42  <indutny>reminding about tls sessions stuff
12:09:23  * AvianFlu quit (Quit: AvianFlu)
12:10:38  * hz quit (Read error: Connection reset by peer)
12:12:49  * hz joined
12:49:27  <bnoordhuis>$ ab -q -k -c 1000 -n 200000 http://127.0.0.1:1234/ | grep Requests
12:49:27  <bnoordhuis>Requests per second: 166813.18 [#/sec] (mean)
12:49:47  <bnoordhuis>^ simple keep-alive http server with epoll in edge triggered mode
12:52:23  * TheJH joined
12:52:25  <bnoordhuis>$ ab -q -c 1000 -n 200000 http://127.0.0.1:1234/ | grep Requests
12:52:26  <bnoordhuis>Requests per second: 26216.41 [#/sec] (mean)
12:52:34  <bnoordhuis>^ same server without keep-alive
12:52:46  <indutny>bnoordhuis: what os?
12:52:56  <bnoordhuis>indutny: what os has epoll?
12:52:59  <indutny>ah
12:53:00  <indutny>heh
12:53:04  <indutny>good results
12:53:12  <indutny>I've got the same benchmark for kqueue
12:53:13  <bnoordhuis>they are, aren't they?
12:53:15  <indutny>here unfinished
12:53:21  <indutny>surprisingly good
12:53:28  <indutny>I wonder how you didn't run out of ephemeral ports
12:53:37  <bnoordhuis>i tweaked some sysctls
12:53:42  <indutny>ok
12:53:49  <indutny>btw, I thought about one way of fixing it
12:54:01  <indutny>probably you already know existing solutions
12:54:07  <bnoordhuis>of fixing what?
12:54:11  <bnoordhuis>ephemeral ports?
12:54:12  <indutny>ephemeral ports limit :D
12:54:22  <indutny>i.e. handling more than 65k connections
12:54:40  <bnoordhuis>i propose we switch to a user mode tcp stack
12:54:45  <bnoordhuis>(that was a joke btw)
12:55:10  <indutny>heh
12:55:12  <indutny>we can do it
12:55:23  <indutny>but instead we can have multiple virtual IP interfaces
12:55:33  <indutny>and balance incoming tcp packets to those addresses
12:55:40  <bnoordhuis>eh?
12:55:43  <indutny>using sender's address hash
12:55:47  <indutny>yeah
12:55:57  <indutny>i.e. balancing w/o handling protocol
12:56:08  <indutny>so each interface will have its own separate limit
12:56:50  <indutny>I suppose it should be possible to do it as a kernel driver
12:57:20  <bnoordhuis>we're not bundling kernel drivers with node :)
12:57:25  <indutny>hahaha
12:57:28  <bnoordhuis>but by all means have a go at it
12:57:31  <indutny>that's not related to node
12:57:43  <indutny>bnoordhuis: do you know something like that?
12:58:20  <bnoordhuis>indutny: well... i hacked a patch for the linux kernel a few years ago
12:58:25  <indutny>it seems that I need an ARP proxy
12:58:33  <bnoordhuis>where i added the tcp sequence number as a source of uniqueness
12:58:35  <indutny>bnoordhuis: nice
12:58:42  <bnoordhuis>not entirely foolproof but it worked well enough
12:58:59  <indutny>what uniqueness are we talking about?
12:59:19  <bnoordhuis>local addr + local port + remote addr + remote port
12:59:28  <bnoordhuis>and + sequence number
12:59:44  <indutny>oh, that thing
12:59:46  <indutny>interesting
13:00:05  <indutny>does this mean that we can have multiple connections with the same ports on both sides?
13:00:18  <bnoordhuis>technically yes
13:00:25  <indutny>but not practically :D
13:00:38  <indutny>at least it'll prevent us from reusing an old existing connection
13:00:38  <bnoordhuis>it confuses a lot of software
13:00:39  <indutny>right?
13:00:43  <bnoordhuis>not to mention other operating systems
13:00:57  <indutny>heh
13:03:22  <indutny>I'll call this utility arpmux
13:04:14  <bnoordhuis>has a nice ring to it
13:18:16  * mikeal quit (Quit: Leaving.)
13:23:42  <indutny>I guess SOCK_RAW may work for that purpose
13:27:40  * saghul quit (Ping timeout: 252 seconds)
13:28:55  * saghul joined
13:38:17  * erickt joined
13:46:34  * piscisaureus_ joined
13:46:40  <piscisaureus_>hello
13:47:00  <bnoordhuis>olla
13:47:07  <indutny>olalal
14:00:26  <ibobrik>bnoordhuis: can you explain this for me? maybe it will be useful in libuv: http://pastie.org/4597378
14:02:39  <ibobrik>indutny: you may know that too :)
14:03:11  <piscisaureus_>ibobrik: I see some code to align a buffer at a page boundary
14:03:58  <piscisaureus_>ibobrik: but... it's probably not needed because when you malloc 64k you'll almost certainly get an aligned block already
14:04:37  <ibobrik>but i see 50% win with random file reads!
14:04:58  <ibobrik>i double-checked this, really
14:05:26  <piscisaureus_>ibobrik: you see that where? And where did you add this code?
14:07:00  <ibobrik>http://pastie.org/4597407 this is my stupid code
14:09:40  <piscisaureus_>ibobrik: you see that after adding this alignment code?
14:09:41  <piscisaureus_>(ibobrik: btw - if it actually aligns the buffer then you'll cause a buffer overrun)
14:10:10  <ibobrik>yep, 10 mb/s after adding, 7 mb/s before
14:10:46  <piscisaureus_>ibobrik: can you print the hex value of buf before and after alignment
14:10:47  <piscisaureus_>?
14:11:02  <piscisaureus_>ibobrik: printf("%x\n", (uintptr_t) buf)
14:11:05  <ibobrik>i may try, wait a minute
14:15:21  <ibobrik>buf before: 25e9010
14:15:21  <ibobrik>buf after: 25ea000
14:15:51  <ibobrik>printf("buf before: %x\n", (unsigned int) buf); /* this may be wrong */
14:18:36  <piscisaureus_>ah, right
14:18:41  <indutny>piscisaureus_: it should not be aligned
14:18:52  <indutny>there're 16 bytes identifying it
14:18:54  <piscisaureus_>yeah
14:18:56  <piscisaureus_>I see now
14:19:12  <indutny>allocators are really odd :D
14:19:22  <piscisaureus_>I think on windows this doesn't happen btw
14:19:59  <piscisaureus_>I think free() will just know that the buffer has been malloced when it points outside of the heap area
14:20:11  <piscisaureus_>er, s/malloced/mmaped
14:21:52  <ibobrik>does it make sense to align buffers before reading from files to them? is it safe?
14:22:09  <indutny>ibobrik: no, it's not safe
14:22:15  <indutny>ibobrik: try free()'ing it
14:22:18  <piscisaureus_>ibobrik: it can be made safe, but what you are doing isn't :-)
14:22:28  <indutny>you can mmap
14:22:34  <indutny>it'll be much safer
14:22:39  <piscisaureus_>yeah
14:22:45  <indutny>and even more
14:22:48  <indutny>you can mmap file
14:22:54  <indutny>:)
14:23:03  <ibobrik>i don't see any mmap in fs module in node.js :)
14:23:08  <piscisaureus_>unless it is very big
14:23:26  <ibobrik>it's 4 gigabytes
14:23:31  <ibobrik>is it big?
14:23:35  <indutny>are you on x86?
14:23:39  <piscisaureus_>too big to mmap on x86
14:23:46  <ibobrik>x86_64
14:24:06  <indutny>you should be able to mmap it then
14:24:21  <ibobrik>and probably i'll want to use many files like that
14:24:35  <indutny>that won't work :D
14:25:14  <indutny>bnoordhuis: hey man
14:25:20  <ibobrik>so how to make it safe?
14:25:52  <ibobrik>disk io is the bottleneck in my case, 50% is so good
14:26:01  <indutny>bnoordhuis: why udata is NULL when receiving events from kevent() call?
14:35:03  <ibobrik>i found posix_memalign, this probably does the same
14:54:48  <indutny>a question
14:55:02  <indutny>why ain't we using kqueue for watching signals in node on supported platforms
14:55:11  <indutny>it seems to be quite efficient
14:55:18  <indutny>and fits event-loop model well
14:59:40  <piscisaureus_>indutny: I think bnoordhuis would take a patch for it
14:59:57  * erickt quit (Quit: erickt)
15:00:24  <piscisaureus_>indutny: I know he has been working on getting multiloop signal handling working, but it could probably be solved with signalfd and kqueue on linux, bsds
15:00:29  <piscisaureus_>indutny: so solaris is the only chore here
15:01:23  <indutny>kqueue on linux?
15:01:29  <piscisaureus_>indutny: signalfd on linux
15:01:33  <indutny>ah
15:01:40  <indutny>complex sentence
15:01:44  <piscisaureus_>indutny: haha
15:01:57  <indutny>yeah, will take a look
15:01:59  <piscisaureus_>indutny: I don't know if you can receive signals on all loops with kqueue
15:02:04  <indutny>finishing simple kqueue http server
15:02:14  <indutny>piscisaureus_: em... why not?
15:02:29  <indutny>every kqueue should receive event
15:02:35  <piscisaureus_>indutny: are you sure
15:02:43  <indutny>no
15:02:44  <piscisaureus_>indutny: what if the event loop is not in kqueue at that time
15:02:48  <indutny>that needs to be checked
15:03:08  <indutny>piscisaureus_: it's persistent
15:03:13  <indutny>piscisaureus_: at least it's designed to be so
15:03:54  <piscisaureus_>man, kqueue is so nice
15:04:02  <piscisaureus_>instead of writing libuv we should just port kqueue to all platforms
15:04:21  <indutny>hahaha
15:04:24  <indutny>yeah, it's really nice
15:04:48  <indutny>berkley mad science
15:05:12  <indutny> 16387.18 req/sec so far
15:05:17  <indutny>w/o http parsing
15:05:29  <indutny>and w/o keep-alive
15:05:47  <piscisaureus_>data returns the number of times the signal has been generated since the last call to kevent(). This filter automatically sets the EV_CLEAR flag internally.
15:06:31  <piscisaureus_>^-- indutny: the question is: since the last call to kevent(), or the last call to kevent with that particular kqueue fd ?
15:06:40  <indutny>hahaha
15:06:42  <indutny>really nice
15:06:49  <indutny>I can lookup xnu source if you would like
15:07:10  <piscisaureus_>good idea
15:08:02  <indutny>for every kqueue
15:08:10  <indutny>explaining
15:08:10  <piscisaureus_>hmm
15:08:14  <indutny>process holds knote list
15:08:20  <indutny>and on signal walks it
15:08:23  <indutny>and increments data
15:08:25  <piscisaureus_>right
15:08:35  <piscisaureus_>not very practical
15:08:42  <indutny>why not?
15:08:50  * AvianFlu joined
15:09:06  <piscisaureus_>indutny: well suppose you have two uv loops and they both watch for sigint
15:09:40  <piscisaureus_>indutny: then you want this sigint to be delivered to both loops. But if one kqueue() call resets the pending signals list, only one loop will get it.
15:10:03  <piscisaureus_>indutny: btw - I suppose that using EVFILT_PROC would be nice for watching child processes.
15:10:37  <indutny>piscisaureus_: your example is incorrect
15:10:41  <indutny>EV_CLEAR doesn't work like that
15:10:56  <indutny>it just removes single knote from list
15:11:52  <piscisaureus_>indutny: so when a signal arrives all the kqueues are walked and the first knote it finds has its data member incremented ?
15:12:13  <piscisaureus_>indutny: whatever - if you say it works, I will believe you.
15:12:32  <indutny>piscisaureus_: it should increment every matching knote
15:12:42  <piscisaureus_>right, ok
15:13:13  <indutny>so both loops should receive notifications
15:15:53  * dylang joined
15:18:05  * AvianFlu quit (Ping timeout: 248 seconds)
15:19:31  <indutny> * Query/Post each knote in the object's list
15:19:31  <indutny> *
15:19:31  <indutny> * The object lock protects the list. It is assumed
15:19:31  <indutny> * that the filter/event routine for the object can
15:19:31  <indutny> * determine that the object is already locked (via
15:19:31  <indutny> * the hint) and not deadlock itself.
15:19:31  <indutny> *
15:19:32  <indutny> * The object lock should also hold off pending
15:19:32  <indutny> * detach/drop operations. But we'll prevent it here
15:19:33  <indutny> * too - just in case.
15:19:36  <indutny>piscisaureus_: ^
15:19:51  <indutny>I just verified
15:19:55  <indutny>it seems to be working this way
15:19:59  <piscisaureus_>ok
15:20:04  <piscisaureus_>well, that's very nice :-)
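The behaviour indutny verified in the xnu source can be sketched as a small EVFILT_SIGNAL program. This is a hedged illustration of the semantics the two just agreed on, not libuv code: the kernel walks every matching knote, so each kqueue that registered for the signal sees it independently, and data carries the delivery count since the last kevent() on *that* queue. Guarded so the file still compiles (as a no-op) on non-BSD systems.

```c
#include <stdio.h>

#if defined(__APPLE__) || defined(__FreeBSD__) || \
    defined(__OpenBSD__) || defined(__NetBSD__)
#include <assert.h>
#include <signal.h>
#include <sys/event.h>

int main(void) {
  int kq1 = kqueue(), kq2 = kqueue();
  assert(kq1 >= 0 && kq2 >= 0);

  /* EVFILT_SIGNAL has lower precedence than signal handlers and even
   * records deliveries of ignored signals, so ignore SIGUSR1 to keep
   * the default action from killing us. */
  signal(SIGUSR1, SIG_IGN);

  struct kevent kev;
  EV_SET(&kev, SIGUSR1, EVFILT_SIGNAL, EV_ADD, 0, 0, NULL);
  assert(kevent(kq1, &kev, 1, NULL, 0, NULL) == 0);
  assert(kevent(kq2, &kev, 1, NULL, 0, NULL) == 0);

  raise(SIGUSR1);

  /* Both queues should report the delivery independently, each with
   * data == 1 -- one generation since their last kevent() call. */
  assert(kevent(kq1, NULL, 0, &kev, 1, NULL) == 1 && kev.data == 1);
  assert(kevent(kq2, NULL, 0, &kev, 1, NULL) == 1 && kev.data == 1);
  printf("both kqueues saw the signal\n");
  return 0;
}
#else
int main(void) {
  printf("kqueue not available on this platform\n");
  return 0;
}
#endif
```

This is exactly the multi-loop case piscisaureus_ worried about: with per-knote counters, one loop consuming the event does not starve the other.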
15:22:37  <indutny>https://gist.github.com/2bbef0db7b225d3e9e5c
15:25:19  <piscisaureus_>http://mikaelkoskinen.net/post/asp-net-web-api-node-benchmarks.aspx
15:28:01  <piscisaureus_>bnoordhuis: yt?
15:28:59  <piscisaureus_>bnoordhuis: I need this for windows: https://gist.github.com/337256544ca428905b02. Would it be useful for unix as well or are we divorcing again?
15:29:43  <indutny>piscisaureus_: confirmation https://gist.github.com/8361b2a97ec7e5149193
15:30:24  <piscisaureus_>indutny: nice
15:30:56  <piscisaureus_>indutny: kqueue would really rock if it'd work with /dev/stdin as well :-p
15:31:04  <indutny>haha
15:31:26  <indutny>that's only on osx
15:31:55  <piscisaureus_>indutny: do EVFILT_SIGNAL and EVFILT_PROC work on *bsd?
15:32:09  <indutny>EVFILT_SIGNAL definitely
15:32:18  <indutny>EVFILT_PROC - hadn't checked yet
15:32:26  <indutny>suppose so
15:34:43  <indutny>bnoordhuis seems to be working on removing ev from libuv
15:34:58  <indutny>this essentially leads to adding new bindings for kqueue
15:35:48  * ibobrik quit (Quit: ibobrik)
15:36:42  * TooTallNate joined
15:38:46  <piscisaureus_>yes
15:38:49  <piscisaureus_>you're welcome
15:39:25  <piscisaureus_>what are the current refcount semantics for close callbacks on unref'ed handles
15:39:26  <piscisaureus_>?
15:39:59  <piscisaureus_>are they always called, or could they be skipped when uv_run returns?
15:40:06  * piscisaureus_ does not remember
15:40:09  <tjfontaine>I think they're skipped
15:40:10  * piscisaureus_ is ashamed
15:58:45  * dap joined
16:10:26  * avalanche123 joined
16:13:31  * erickt joined
16:15:31  * tomshreds joined
16:16:26  * joshthecoder joined
16:19:04  * ibobrik joined
16:21:40  * dshaw_ joined
16:23:01  * avalanche123 quit (Quit: Computer has gone to sleep.)
20:26:57  <piscisaureus_>One of sixteen vestal virgins ♫
20:26:58  <bnoordhuis>it's not something we'll change if we can avoid it because almost all tests rely on it
20:27:30  <saghul>bnoordhuis thanks! ok, so I can rely on it then for a small optimization :-)
20:28:43  <saghul>bnoordhuis btw, when you can I'd love some initial feedback on #535 before I proceed further
20:29:15  <bnoordhuis>saghul: i've seen it but... the bits you've changed have been removed completely in the 'out with libev' refactor
20:29:41  <bnoordhuis>it's not that i mind merging it - except for the name uv_run2 - but i'd have to do it all over again in my branch
20:30:30  <saghul>I could work against your branch if you want :-) So, the next major release won't have libev then :-)
20:30:50  <bnoordhuis>saghul: yep. and maybe no libeio either
20:31:19  <saghul>oh, will there be a replacement for ego?
20:31:47  <saghul>s/ego/eio/
20:33:50  <bnoordhuis>saghul: just the uv_fs_* functions + a custom thread pool
20:34:06  <saghul>bnoordhuis cool!
20:34:50  <saghul>bnoordhuis once the refactor is shaping up ping me if you didn't do the uv_run2 already :-)
20:35:50  <bnoordhuis>saghul: i'll probably add it somehow, it's a logical candidate
20:36:01  <bnoordhuis>meshes nicely with the underlying implementation as well
20:36:26  <saghul>great!
20:51:22  <bnoordhuis>$ perf diff perf.data.master perf.data.ev
20:51:22  <bnoordhuis>Segmentation fault (core dumped)
20:51:28  * bnoordhuis sighs
20:56:50  * arlolra quit (Quit: Leaving...)
20:58:00  <bnoordhuis>i guess my edge triggered implementation is a victim of its own success
20:58:14  <bnoordhuis>it spends a lot more kernel-side time in __ticket_spin_lock() ...
21:09:38  * perezd quit (Ping timeout: 244 seconds)
21:16:54  * TooTallNate quit (Quit: Computer has gone to sleep.)
21:17:22  * ibobrik quit (Quit: ibobrik)
21:21:51  * erickt_ joined
21:22:22  * perezd joined
21:25:48  * erickt quit (Ping timeout: 252 seconds)
21:25:48  * erickt_ changed nick to erickt
21:29:30  <piscisaureus_>bnoordhuis: is the spin count not configurable?
21:29:44  <bnoordhuis>piscisaureus_: np
21:29:51  <bnoordhuis>s/np/no/
21:30:20  <piscisaureus_>s/s\/np\/no\//s\/p\/o\//
21:39:02  <hz>omg lol
21:39:03  <hz>:D
21:39:13  <hz>piscisaureus_ +1
21:39:14  * ircretary1 joined
21:39:43  * ircretary1 quit (Remote host closed the connection)
21:40:02  * ircretary1 joined
21:40:16  * TooTallNate joined
21:41:07  * ircretary1 quit (Remote host closed the connection)
21:41:44  * hz_ joined
21:41:49  * hz quit
21:43:05  <hz_>you?
21:43:17  <bnoordhuis>et du?
21:43:17  <hz_>which part are you working on?
21:43:41  <hz_>wrong chan sorry :)
21:47:35  * AvianFlu quit (Quit: AvianFlu)
21:53:09  * hz_ quit (Quit: Yaaic - Yet another Android IRC client - http://www.yaaic.org)
22:02:07  <CIA-131>libuv: Bert Belder master * r637be16 / (15 files in 2 dirs): windows: make active and closing handle state independent - http://git.io/ouQRTw
22:02:07  <CIA-131>libuv: Bert Belder master * r621a4e3 / (test/test-list.h uv.gyp test/test-active.c): test: add test for uv_is_active and uv_is_closing - http://git.io/Endi6Q
22:02:16  <piscisaureus_>^-- slightly closer to being actually correct now...
22:03:32  * erickt quit (Remote host closed the connection)
22:03:50  * erickt joined
22:03:53  * travis-ci joined
22:03:53  <travis-ci>[travis-ci] joyent/libuv#628 (master - 621a4e3 : Bert Belder): The build passed.
22:03:53  <travis-ci>[travis-ci] Change view : https://github.com/joyent/libuv/compare/c77d08eb9229...621a4e36f7f0
22:03:53  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/2255432
22:03:53  * travis-ci part
22:06:39  * paddybyers_ joined
22:09:31  * paddybyers quit (Ping timeout: 260 seconds)
22:09:31  * paddybyers_ changed nick to paddybyers
22:11:57  * loladiro quit (Quit: loladiro)
22:14:24  * rborg part
22:27:37  <CIA-131>node: Bert Belder reviewme * raba90f7 / lib/child_process.js : windows: fix single-accept mode for shared server sockets - http://git.io/yrhGXQ
22:27:45  <piscisaureus_>^-- review anyone?
22:36:48  * TooTallNate quit (Ping timeout: 245 seconds)
22:38:17  * TooTallNate joined
22:43:45  * loladiro joined
22:58:29  * rendar quit
23:19:33  * dshaw_ quit (Quit: Leaving.)
23:28:37  * mmalecki quit (Ping timeout: 246 seconds)
23:31:35  * Aria joined
23:35:28  * bnoordhuis quit (Quit: Leaving)
23:53:24  * paddybyers quit (Quit: paddybyers)
23:59:15  * brson quit (Read error: Connection reset by peer)