00:01:24  <piscisaureus_>yeah, that's odd
00:01:56  <mjr_>I fear this memory thing is a SmartOS-specific issue, but something about crypto seems to trigger it.
00:02:09  <piscisaureus_>I wonder when node allocates 100000
00:02:26  <piscisaureus_>it looks nonrandom but it's a weird number of bytes to allocate
00:02:46  <mjr_>And also a very common pattern.
00:03:25  * TheJH quit (Ping timeout: 248 seconds)
00:03:28  <piscisaureus_>ah wait, this is the allocation of a new slab
00:04:02  <mjr_>For the processes doing a lot of crypto, this crash happens as soon as a few minutes after starting.
00:04:41  <piscisaureus_>ah right
00:04:45  <piscisaureus_>this number is in hex
00:04:48  <piscisaureus_>it's 1MB
00:04:59  <piscisaureus_>which is the default size for slabs
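For reference, the allocation size quoted above, read as hex, is exactly one mebibyte; a quick check in C:

    /* 0x100000 bytes == 1 MiB, the default slab size mentioned above */
    #include <assert.h>

    int main(void) {
      assert(0x100000 == 1024 * 1024);
      return 0;
    }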
00:05:55  <piscisaureus_>mjr_: I don't see any crypto-related stuff in the stacks
00:06:15  <piscisaureus_>mjr_: this looks like an ordinary read from a tcp socket or pipe
00:07:34  <mjr_>piscisaureus_: that's actually encouraging, because nobody seems to know how to fix problems with crypto. :)
00:08:02  <piscisaureus_>mjr_: actually, it's not unlikely that this is an ordinary OOM situation
00:08:58  <piscisaureus_>mjr_: this is an x86 machine. I don't know about smartos, but on many OSes the maximum virtual address space that a process can use is 2gb or 3gb
00:09:02  <piscisaureus_>somewhere around that
00:09:10  <piscisaureus_>now if the heap is using 115617792 bytes
00:09:15  <mjr_>Heap is at 400MB
00:09:27  <piscisaureus_>which heap are we talking about?
00:10:00  <mjr_>The process heap.
00:10:29  <piscisaureus_>mjr_: ok. So how much is the v8 heap using?
00:10:44  <mjr_>Not sure. I don't think very much.
00:12:29  <piscisaureus_>right
00:12:43  <piscisaureus_>mjr_: hmm, well, then out-of-memory seems to be unlikely
00:13:01  <piscisaureus_>mjr_: do you happen to know how much virtual memory was allocated when the process crashed?
00:13:20  <piscisaureus_>mjr_: but I can tell you for sure that these stacks don't suggest it has anything to do with crypto
00:13:27  <mjr_>I do not. I think I have a graph of process RSS, and the V8 sizes. Hang on
00:13:50  <piscisaureus_>vm is also not very interesting usually
00:14:04  <piscisaureus_>but it is when you're investigating a malloc() failure on x86 :-)
00:14:48  <mjr_>heapUsed is 40MB, heapTotal is 57MB
00:14:55  <DrPizza>there's also the hairy old fragmentation issue
00:15:00  <mjr_>So V8 thinks the heap is small.
00:15:00  <DrPizza>how long have these processes been running?
00:15:06  <mjr_>10 mins or so
00:15:14  <DrPizza>so unlikely to be that
00:15:19  <mjr_>DrPizza: I think this is a fragmentation problem, honestly.
00:15:26  <piscisaureus_>well
00:15:37  <piscisaureus_>it's the only thing I can think of for sure
00:16:07  <piscisaureus_>but when you're using, say, 600mb of memory it would be really sad if your heap was fragmented so much
00:16:13  <piscisaureus_>unlikely i'd say
00:16:29  <piscisaureus_>maybe there's something peculiar about libumem
00:16:41  <piscisaureus_>did you try without?
00:16:56  <DrPizza>what is libumem?
00:17:02  <mjr_>I think the fancy SmartOS libumem stats on that gist indicate that we only really have 100MB in use, but we've claimed 400MB from the OS.
00:17:23  <mjr_>DrPizza: libumem is the SmartOS fancy allocator that is "better" than the one in libc.
00:17:37  <DrPizza>better i.e. less tested?
00:17:44  <piscisaureus_>I think it's quite old actually
00:17:47  <piscisaureus_>from the solaris 9 days
00:17:59  <piscisaureus_>but you want to ask bcantrill to be sure :-)
00:19:21  <mjr_>Yeah, still waiting to get a slice of his time to look at this.
00:19:42  <mjr_>I can't tell whether this is a SmartOS thing or not. Certainly the SmartOS tools make it possible to understand problems like this in great detail.
00:19:56  <piscisaureus_>well you can try to not use libumem :-)
00:20:13  <piscisaureus_>if crashes within the hour are common, you should know quickly enough
00:20:28  <piscisaureus_>libumem is just something you load with LD_PRELOAD right?
00:21:46  <mjr_>Yeah, but I think the node 0.8 build uses libumem explicitly now.
00:23:12  <piscisaureus_>that's easy to fix
00:23:15  <piscisaureus_>git revert f70b138
00:26:01  <mjr_>That is worth a try
00:47:58  * dap quit (Quit: Leaving.)
00:48:47  * erickt quit (Quit: erickt)
00:49:11  * dap joined
00:50:25  <mmalecki[busy]>mjr_: you guys ran into the ENOMEM stuff too?
00:50:41  * mmalecki[busy] changed nick to mmalecki
00:50:51  <mjr_>mmalecki[busy]: it has a different name on SmartOS, but I think so.
00:50:54  <mjr_>What do you guys see?
00:51:27  <mmalecki>mjr_: npm throwing ENOMEM/we were unable to spawn a child process because it threw ENOMEM
00:51:36  <mmalecki>also, it was SmartOS
00:51:47  <mmalecki>we were certain it's our fault somehow.
00:51:59  <mjr_>huh
00:52:07  <mjr_>Only on 0.8?
00:52:12  <mjr_>We don't get this if we revert back to 0.6
00:52:13  * mcavage quit (Remote host closed the connection)
00:52:25  <mmalecki>we only run 0.8 in production as the host environment
00:52:50  <mmalecki>we could try reverting to 0.6 on staging tho.
00:52:55  <mmalecki>AvianFlu: ^ opinion?
00:53:12  <mjr_>How often do you get it?
00:53:18  <mmalecki>AvianFlu: also, don't really trust me. I'm in a certain state.
00:53:48  <mmalecki>mjr_: pretty rarely, but often in cases when we should have at least a few megs of mem left.
00:53:54  <mjr_>Hmm
00:54:03  <mmalecki>mjr_: in what cases do you get it?
00:54:05  <mjr_>In our case, we have thousands of megs left.
00:54:14  * TooTallNate quit (*.net *.split)
00:54:14  * chobi_e_ quit (*.net *.split)
00:54:27  <mmalecki>mjr_: yeah, we run 256 MB hosts in production
00:54:29  <mjr_>On our processes with lots of HTTPS, we get it after a small number of minutes.
00:54:56  <mmalecki>so, that's quite a difference. but yeah, seems kind of similar, actually.
00:55:20  <mjr_>If you don't get this when reverting back to 0.6, that would be very useful information
00:55:41  <mmalecki>I can give it a try when I'm not in a certain state
00:56:17  <mmalecki>actually, fuck it, I'm drunk.
00:56:34  <mmalecki>well, not *that* bad, but the night is young.
00:58:48  * TooTallNate joined
01:00:41  <mmalecki>dang man, I'm depressed
01:01:20  <piscisaureus_>sounds unhealthy mmalecki
01:01:57  * EhevuTov quit (Quit: This computer has gone to sleep)
01:02:12  <mmalecki>it probably does.
01:03:04  <mmalecki>well.
01:03:09  <mmalecki>whatever, really
01:03:11  <mmalecki>shots!
01:04:01  <DrPizza>hah
01:04:03  <DrPizza>hrm
01:04:10  <DrPizza>I need to figure out a reason to go to san francisco
01:04:15  <DrPizza>I need to feast on burritos
01:05:15  <piscisaureus_>I don't like the amount of ip chatter involved with a hello/100 requests
01:05:28  <piscisaureus_>And I also need to find myself an excuse to go to SF
01:07:42  <mmalecki>I'll be in SF in December
01:09:44  * tomshreds joined
01:10:56  <mmalecki>well, I'm drinking in front of a mirror
01:12:42  * mmalecki quit (Quit: Reconnecting)
01:13:00  * mmalecki joined
01:29:25  <piscisaureus_>bnoordhuis: hey, yt?
01:35:29  * piscisaureus^ joined
01:43:18  * joshthecoder quit (Quit: Leaving...)
01:46:53  * dap quit (Quit: Leaving.)
01:47:36  * paddybyers quit (Quit: paddybyers)
01:48:49  * mmalecki quit (Ping timeout: 272 seconds)
01:53:15  * piscisaureus^ quit (Quit: leaving)
01:55:38  * abraxas joined
01:56:38  * erickt joined
01:59:12  * erickt quit (Client Quit)
02:02:15  * tomshreds quit (Quit: Linkinus - http://linkinus.com)
02:03:01  <bnoordhuis>piscisaureus_: yep
02:03:31  <piscisaureus_>bnoordhuis: ok, just checkin'
02:08:40  * pooya quit (Quit: pooya)
02:10:46  <bnoordhuis>it's a sad fact but it seems level triggered epoll is faster than edge triggered epoll...
02:11:10  <bnoordhuis>probably because everyone is using level triggered i/o so it's seen more optimization
02:11:20  <bnoordhuis>still, it's sad
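For context, the two epoll modes differ only in the EPOLLET flag; the real difference is the contract: an edge-triggered fd is reported only on state changes, so the reader has to drain it until EAGAIN. A minimal sketch (standard epoll API, not code from the log):

    #include <sys/epoll.h>
    #include <unistd.h>

    /* Register fd for reads; pass edge_triggered=1 to opt into EPOLLET. */
    static int watch(int epfd, int fd, int edge_triggered) {
      struct epoll_event ev;
      ev.events = EPOLLIN | (edge_triggered ? EPOLLET : 0);
      ev.data.fd = fd;
      return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    }

    /* With EPOLLET a readiness event is not repeated, so the socket must
     * be read until the kernel reports no more data (read() <= 0 / EAGAIN). */
    static void drain(int fd) {
      char buf[65536];
      while (read(fd, buf, sizeof(buf)) > 0)
        ;
    }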
02:11:22  <piscisaureus_>bnoordhuis: any reasons beyond these contention issues
02:11:23  <piscisaureus_>yes
02:14:24  <bnoordhuis>bnoordhuis: it's lock contention
02:14:35  <bnoordhuis>i could shave off a few more cycles here and there
02:14:40  <bnoordhuis>but that won't save it
02:15:06  <bnoordhuis>err, piscisaureus_
02:15:10  * bnoordhuis is getting tired
02:15:21  <piscisaureus_>ya
02:15:31  <piscisaureus_>too bad
02:15:42  <piscisaureus_>bnoordhuis: send some kernel patches :-p
02:15:50  <piscisaureus_>(kidding)
02:17:50  * piscisisaureus^ joined
02:17:50  <bnoordhuis>well... it's something that should be addressed
02:18:03  * piscisisaureus^ changed nick to piscisaureus^
02:18:34  <piscisaureus^>I agree
02:18:47  <piscisaureus^>nothing but disappointment these days
02:18:55  <piscisaureus^>http.sys rules node
02:19:10  <piscisaureus^>level triggered rules edge triggered
02:19:55  <piscisaureus^>I sort of like this crappy chromebook btw - it feels like a permanent distraction-free mode
02:20:30  <bnoordhuis>how so? no solitaire?
02:20:56  <piscisaureus^>bnoordhuis: no mail, no twitter app
02:21:02  <piscisaureus^>no work
02:21:45  <bnoordhuis>2 out of 3, eh?
02:21:48  <bnoordhuis>okay, i'm off to bed
02:21:52  <bnoordhuis>sleep tight, bertje
02:21:58  <piscisaureus^>sleep tight bnoordhuis
02:23:59  * brson quit (Ping timeout: 245 seconds)
02:25:19  * ArmyOfBruce quit (Excess Flood)
02:25:22  * TooTallNate quit (Quit: Computer has gone to sleep.)
02:25:52  * ArmyOfBruce joined
02:26:01  * TooTallNate joined
02:26:50  * bnoordhuis quit (Ping timeout: 264 seconds)
02:28:48  * TooTallNate quit (Client Quit)
02:59:35  * pooya joined
03:00:45  * loladiro joined
03:24:43  * AvianFlu quit (Ping timeout: 240 seconds)
03:25:30  * chobi_e_ joined
03:30:27  * chobi_e_ quit (Ping timeout: 256 seconds)
03:40:35  * erickt joined
03:42:01  * piscisaureus_ quit (Quit: ~ Trillian Astra - www.trillian.im ~)
03:50:01  * chobi_e_ joined
04:15:27  * loladiro quit (Quit: loladiro)
04:15:53  * piscisaureus^ quit (Read error: Operation timed out)
04:16:56  * loladiro joined
04:47:33  * piscisaureus^ joined
04:49:55  * loladiro quit (Quit: loladiro)
05:06:14  * erickt quit (Quit: erickt)
05:40:48  * ibobrik joined
05:52:56  * toothrot quit (Ping timeout: 265 seconds)
05:56:25  * toothr joined
05:56:28  * `3rdEden joined
05:57:02  * ibobrik quit (Quit: ibobrik)
06:04:06  * ArmyOfBruce quit (Excess Flood)
06:04:36  * ArmyOfBruce joined
06:15:47  * pooya quit (Quit: pooya)
06:26:34  <indutny>pampam
06:26:36  <indutny>good morning
06:27:01  <indutny>mjr_: I just read yesterday's logs, have you figured out anything else?
06:29:16  * TheJH joined
06:32:36  * ibobrik joined
06:41:03  * toothr quit (Ping timeout: 240 seconds)
06:42:25  * toothr joined
06:46:36  * piscisaureus^ quit (Quit: Lost terminal)
07:13:05  * ibobrik_ joined
07:16:05  * paddybyers joined
07:16:15  * ibobrik quit (Ping timeout: 260 seconds)
07:16:16  * ibobrik_ changed nick to ibobrik
07:46:12  * rendar joined
08:19:22  * dshaw_ quit (Quit: Leaving.)
08:19:36  * dshaw_ joined
08:37:00  * dshaw_ quit (Quit: Leaving.)
10:14:23  * toothr quit (Read error: Connection reset by peer)
10:14:54  * toothr joined
10:29:57  * mmalecki joined
11:50:23  * abraxas quit (Remote host closed the connection)
11:53:47  * AvianFlu joined
11:56:19  * stagas quit (Ping timeout: 252 seconds)
12:19:07  * CIA-122 quit (Ping timeout: 265 seconds)
12:21:17  * bnoordhuis joined
12:38:38  <indutny>bnoordhuis: hey man
12:38:46  <indutny>bnoordhuis: working on faio, is it still relevant?
12:44:09  * CIA-128 joined
12:53:27  <indutny>bnoordhuis: https://github.com/bnoordhuis/faio/pull/1/files
12:53:35  <indutny>unfortunately results are not as good as they are with epoll
12:53:39  <indutny>only 15000 connections/sec
12:54:10  <indutny>with keep-alive 63000 req/sec
12:54:39  <indutny>bnoordhuis: I think we can additionally report the number of bytes that can be read/written
13:00:22  <indutny>ok, that wasn't true
13:00:27  <indutny>83198 req/sec
13:04:09  * `3rdEden changed nick to `3E|Shop
13:11:10  * hz quit
13:18:31  * CIA-128 quit (Ping timeout: 246 seconds)
13:20:18  <bnoordhuis>indutny: hey nice work
13:20:52  <bnoordhuis>yes, faio is relevant. i'm going to use it as a base in libuv
13:23:22  * _appu joined
13:29:41  * _appu quit (Ping timeout: 245 seconds)
13:37:51  <indutny>bnoordhuis: heh
13:38:02  <indutny>bnoordhuis: fixing issues
13:38:16  <bnoordhuis>indutny: cool
13:38:19  <bnoordhuis>and nice work
13:38:23  <bnoordhuis>but i already said that :)
13:38:37  <indutny>haha
13:40:30  * user joined
13:40:39  * user changed nick to piscisaureus^
13:40:56  <piscisaureus^>is faio the new libev?
13:40:59  <piscisaureus^>or the new libuv?
13:41:55  * CIA-127 joined
13:42:04  * loladiro joined
13:42:04  <indutny>piscisaureus^: new libev, I suppose
13:43:01  * `fogus joined
13:43:05  <indutny>bnoordhuis: fixed issues
13:43:06  <indutny>except mach
13:43:16  <piscisaureus^>will it be part of libuv
13:43:19  <bnoordhuis>piscisaureus^: yes
13:43:25  <bnoordhuis>maybe not in this exact form
13:43:40  <bnoordhuis>indutny: i want it to be usable on freebsd/openbsd/netbsd/etc.
13:43:47  <piscisaureus^>bnoordhuis: to provide "backends" ?
13:43:51  <bnoordhuis>piscisaureus^: yes
13:43:55  <piscisaureus^>aha
13:44:04  <bnoordhuis>indutny: but i can probably fix that up myself
13:44:59  <bnoordhuis>piscisaureus^: like libev but without all the extra cruft, really
13:45:05  <piscisaureus^>bnoordhuis: nice
13:45:10  <bnoordhuis>and a coding style that doesn't make eyeballs bleed
13:45:32  <piscisaureus^>bnoordhuis: so will it still be possible to use signalfd, EVFILT_PROC etc on the platforms that support it?
13:45:39  <bnoordhuis>piscisaureus^: yes
13:46:04  <piscisaureus^>if all the cargo is hidden behind faio_wait then it's going to be hard
13:47:24  <bnoordhuis>piscisaureus^: what cargo?
13:51:18  <piscisaureus^>piscisaureus^: well, the actual work that gets done
13:51:33  <piscisaureus^>oh I see you are using a callback based implementation
13:52:19  <piscisaureus^>Actually, if I did it again I would make people "ask" for events
13:52:28  <piscisaureus^>where it == libuv
13:53:00  <piscisaureus^>but if the goal is to make backends for libuv then I suppose you should do what is easiest/fastest
13:53:35  <indutny>bnoordhuis: ah
13:53:45  * erickt joined
13:54:17  <bnoordhuis>piscisaureus^: i considered that, something like a FAIO_FOREACH_EVENT() macro
13:54:45  <bnoordhuis>callbacks are easier to deal with from a user perspective though
13:54:49  <indutny>bnoordhuis: cool
13:54:57  <bnoordhuis>but i may yet change my mind :)
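To make the two API shapes under discussion concrete (the library dispatching callbacks versus the caller iterating over polled events), here is a hypothetical sketch; none of these names are faio's actual API:

    /* Hypothetical API shapes only; not faio's real interface. */

    struct polled_event {
      void *handle;
      unsigned events;
    };

    /* (a) callback style: the library calls into you from inside poll */
    typedef void (*event_cb)(void *handle, unsigned events);

    /* (b) iteration style: poll fills a caller-supplied array; stubbed out */
    static int poll_into_array(int timeout_ms, struct polled_event *evs, int maxevs) {
      (void) timeout_ms; (void) evs; (void) maxevs;
      return 0;  /* a real backend would return the number of ready events */
    }

    /* (b) in use -- roughly what a FAIO_FOREACH_EVENT() macro would wrap */
    static void drain_events(void) {
      struct polled_event evs[64];
      int i, n = poll_into_array(100, evs, 64);
      for (i = 0; i < n; i++) {
        /* dispatch evs[i].handle / evs[i].events however the embedder likes */
      }
    }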
13:55:15  <indutny>bnoordhuis: we need a way to propagate bytes count
13:55:22  <indutny>bnoordhuis: if that's available in backend
13:55:26  <bnoordhuis>indutny: what byte count and for what?
13:55:40  <indutny>bnoordhuis: well, for example, EVFILT_READ happened
13:55:48  <indutny>event->data will contain number of bytes that can be read
13:55:54  <indutny>same for EVFILT_WRITE
13:55:54  <bnoordhuis>indutny: why?
13:56:02  <indutny>bnoordhuis: what why?
13:56:02  <bnoordhuis>i mean, why propagate that?
13:56:14  <bnoordhuis>other backends don't provide that
13:56:22  <indutny>not even any?
13:56:25  <bnoordhuis>you'd have to call ioctl(FIONREAD) which is hellishly expensive
13:56:31  <indutny>oh
13:56:35  <indutny>ok, then
13:56:46  <indutny>bnoordhuis: and also we need a select() hack there :D
13:56:47  <bnoordhuis>i'll keep it in mind as a future optimization hint then
13:57:05  <bnoordhuis>indutny: we'll handle that in libuv :)
13:57:09  <indutny>bnoordhuis: ah, ok :)
13:58:59  <piscisaureus^>we could use that info
13:59:33  <piscisaureus^>but with node's slab allocator it's not really useful I think
14:00:03  <bnoordhuis>piscisaureus^: not all the world is node
14:00:06  <bnoordhuis>only the hipster part
14:00:09  <piscisaureus^>ghe
14:00:15  <indutny>piscisaureus^: hahah
14:00:36  <piscisaureus^>maybe you can just supply the bytes count if you have it and say 0 otherwise
14:00:51  <bnoordhuis>piscisaureus^: yes, reasonable
14:00:57  <bnoordhuis>but i don't want to add that right now
14:01:02  * `fogus part ("Leaving")
14:01:03  <bnoordhuis>let's get the basics working first
14:01:15  <piscisaureus^>oh, many basics work already
14:01:23  <piscisaureus^>I remember quickbasic was quite stable for one
14:01:29  * bnoordhuis groans
14:02:10  <bnoordhuis>it's a good thing you didn't opt to become a stand-up comedian, bertje
14:02:22  <piscisaureus^>I am using the chromebook as a dedicated irssi terminal now
14:02:30  <indutny>piscisaureus^: your truth
14:02:40  <piscisaureus^>it's quite good for that
14:02:48  <indutny>btw, this '^' mark at the end of your nickname confuses me
14:02:59  <piscisaureus^>alright
14:03:07  * piscisaureus^ changed nick to piscisaureus
14:03:11  <indutny>oh, thanks a lot!
14:03:29  <indutny>I've used irssi for quite a long time
14:11:31  <tjfontaine>isaacs: in reference to your st module < jtsage> tj - holy shit that little cache is fast.
14:27:02  * mmalecki changed nick to mmalecki[food]
14:37:03  * rendar quit (Ping timeout: 272 seconds)
14:42:44  * `3E|Shop changed nick to `3rdEden
14:45:34  <indutny>bnoordhuis: so faio_mod sort of emulates an event?
14:45:50  <bnoordhuis>indutny: no, it changes the event mask
14:46:01  <bnoordhuis>but it can defer it
14:46:13  <indutny>bnoordhuis: ok, got it
14:46:36  <indutny>bnoordhuis: why isn't it calling the cb if the event has already happened?
14:46:54  <indutny>I mean, why queueing when you can just run it immediately
14:47:09  <bnoordhuis>indutny: in faio_mod, you mean?
14:47:13  <indutny>yes
14:47:21  <bnoordhuis>because the caller may not expect that
14:47:51  <indutny>bnoordhuis: oh, got it
14:48:00  <indutny>bnoordhuis: but poll may not end any time soon
14:48:18  <indutny>bnoordhuis: it can actually not happen in any reasonable amount of time
14:48:31  <bnoordhuis>indutny: okay, let me explain how it's supposed to work
14:48:39  <bnoordhuis>callbacks are invoked from inside faio_poll()
14:48:44  <indutny>(well, it'll happen in level-triggered engines)
14:49:04  <bnoordhuis>if there are callbacks pending those run first
14:49:24  <bnoordhuis>and no blocking poll is done
14:50:08  <indutny>ok
14:50:14  <indutny>ah
14:50:16  <bnoordhuis>from the perspective of the caller, faio_poll() returns when a) callbacks have been invoked, or b) the timeout expired
14:50:31  <indutny>I forgot that faio_poll and faio_mod should be called in one thread
14:50:45  <bnoordhuis>or protected by a lock
14:51:01  <bnoordhuis>but yes
14:51:20  <bnoordhuis>callbacks need to happen on the same thread
14:51:42  <indutny>yeah
14:51:52  <indutny>if faio_mod and faio_poll are called simultaneously
14:51:57  <indutny>i.e. while _poll() is blocking
14:52:02  <indutny>it'll never return
14:52:18  <bnoordhuis>yes
14:52:20  <bnoordhuis>so don't do that :)
14:52:41  <indutny>haha
14:52:43  <indutny>ok
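Summing up the contract described above in rough code: callbacks fire from inside faio_poll(), and faio_mod() must run on the loop thread (or under the same lock), never concurrently with a blocking poll. Signatures below are assumed for illustration, not copied from faio:

    /* Assumed signatures, for illustration only. */
    struct faio_loop;
    int faio_poll(struct faio_loop *loop, double timeout);

    void event_loop(struct faio_loop *loop, volatile int *keep_running) {
      while (*keep_running) {
        /* Pending callbacks run first, without blocking; otherwise
         * faio_poll() blocks until an event fires or the timeout expires.
         * All callbacks are invoked on this thread, before it returns. */
        faio_poll(loop, 1.0);
        /* Calling faio_mod() from another thread while this thread is
         * blocked in faio_poll() is the "don't do that" case above: the
         * updated event mask may never be observed. */
      }
    }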
15:01:21  * CoverSlide joined
15:05:22  * hz joined
15:06:16  * erickt quit (Quit: erickt)
15:09:28  <piscisaureus>I can make a windows backend :-p
15:09:44  <piscisaureus>The library size would quadruple
15:09:47  <piscisaureus>at least
15:13:32  * loladiro quit (Quit: loladiro)
15:17:36  * mcavage joined
15:26:27  * piscisaureus quit (Ping timeout: 272 seconds)
15:28:14  <indutny>paddybyers: haha
15:28:23  <indutny>oops
15:28:27  <indutny>bertje left
15:33:52  * dap joined
15:40:50  <indutny>bnoordhuis: hey
15:40:51  * `3rdEden quit (Quit: Linkinus - http://linkinus.com)
15:40:56  <indutny>can I ask you to look at https://github.com/indutny/vock/issues/3#issuecomment-8195947
15:41:05  <indutny>it seems to be some odd compilation issue that I can't fix
15:41:10  <indutny>no matter how I try
15:45:02  * loladiro joined
15:45:42  <tjfontaine>indutny: worked for me on my ML
15:46:49  * ibobrik quit (Quit: ibobrik)
15:50:42  <indutny>tjfontaine: well, that's it
15:50:44  <indutny>it works everywhere
15:50:49  <indutny>except this guy's macbook
15:51:05  * rendar joined
15:51:10  * CAPSLOCKBOT joined
15:51:28  <tjfontaine>indutny: I'm guessing the guy uses evil things like homebrew or macports, people who do such things deserve their fates
15:52:12  <indutny>heh
15:57:30  * loladiro quit (Quit: loladiro)
15:57:46  <saghul>tjfontaine homebrew is pretty nice :-)
15:57:58  <indutny>tjfontaine: ok, he was using clang
15:58:58  <tjfontaine>saghul: beauty is in the eye of the beholder, but I'm a strong advocate of using the appropriate package management for your platform, and willy-nilly lib and search-path mucking is a recipe for disaster imesho
15:59:48  <saghul>tjfontaine well, what is the appropriate package management for OSX? unfortunately compiling from source :-S
15:59:58  <indutny>saghul: I like it
16:00:11  <indutny>wget, tar xzvf, make
16:00:15  <tjfontaine>saghul: .pkg of course
16:00:35  <saghul>indutny well, I don't mind, but as soon as I install OSX I do brew install tmux bash vim wget, and doing it manually would be tiresome
16:00:42  <tjfontaine>saghul: if you're left to compile on your own, doing anything but mucking with system wide config vars
16:01:00  <saghul>tjfontaine TBH I hope to see apt on OSX some day
16:01:11  <tjfontaine>saghul: well there's fink or whatever
16:01:12  <saghul>it's already on iOS
16:01:29  <saghul>tjfontaine that didn't work out for me last time I checked
16:01:46  <tjfontaine>saghul: ya, well they all suck :)
16:02:06  <saghul>so far homebrew does the trick, and being in /usr/local is pretty much like you compiled the stuff yourself
16:02:14  <tjfontaine>the ideal situation would of course be for these things to appear in the app store
16:02:23  <saghul>good luck with that :-S
16:02:54  <saghul>we just had to remove some nice features from our app because of the stupid sandbox
16:03:01  <tjfontaine>nod
16:07:45  * erickt joined
16:09:37  <creationix>bnoordhuis: I think I want to wrap a non-callback API on top of libuv
16:09:54  <creationix>is there currently an API where I can block waiting for the next event
16:10:09  <creationix>(assuming I never called uv_start)
16:10:22  * mcavage quit (Read error: Connection reset by peer)
16:10:35  * mcavage joined
16:11:58  <creationix>I agree that callbacks are easier for user code, but I think a library like libuv shouldn't be callback based
16:12:04  <creationix>leave that up to code that layers on top
16:12:51  <creationix>especially when binding with some scripting language. Just write the while loop in the scripting language and dispatch events from there to the various callbacks
16:13:08  <indutny>creationix: so you need faio
16:13:15  <indutny>creationix: http://github.com/bnoordhuis/faio
16:13:20  <indutny>though it's using callbacks too
16:13:34  <creationix>right, I don't think I want to move away from libuv
16:13:35  <indutny>and as far as I can see you need something with APIs similar to epoll or kqueue
16:13:43  <creationix>I think so
16:13:48  <saghul>IIRC bnoordhuis said at some point the fd of the loop and timeout would be exposed, so you can do the polling manually
16:14:02  * joshthecoder joined
16:14:10  <indutny>saghul: and?
16:14:14  <creationix>there is uv_run_once right?
16:14:41  <saghul>indutny so you could do something like uv_poll(loop, 1000) in your app loop for example
16:14:52  <creationix>oh, nice, run_once does block
16:14:53  <saghul>creationix yes, but that can block indefinitely
16:15:12  <creationix>saghul: no worse than uv_run right?
16:15:17  <saghul>creationix there will be a non-blocking version, just not yet
16:15:27  <creationix>I don't want a non-blocking version
16:15:38  <creationix>well, not for this project
16:15:43  <saghul>creationix this is a rough idea on how it may look like https://github.com/joyent/libuv/pull/535
16:15:47  <creationix>I may later on when I need to integrate with the iOS event loop
16:16:12  <creationix>saghul: nice, that works
16:16:54  <creationix>hmm, what does uv_run_once do when the refcount drops to zero?
16:17:07  <indutny>saghul: libuv will still use callback based API
16:17:08  <creationix>I guess that's what the return value is for
16:17:46  <creationix>so how do I get the event out of run_once?
16:17:54  <saghul>indutny sure, but I guess what creationix wants is to tell libuv: wait for events 1 second and dispatch them now
16:18:09  <creationix>I don't want libuv dispatching the event
16:18:13  <creationix>I don't want any callbacks
16:18:21  <creationix>I want to pull the event
16:18:22  <saghul>creationix it's dispatched asynchronously just like uv_run
16:18:29  <saghul>creationix oh
16:19:16  <creationix>pseudocode: while (event = getEvent()) { console.log(event) }
16:19:28  <saghul>what should event contain?
16:19:48  <saghul>I mean, the event could be IO, timer
16:19:51  <creationix>probably a reference to the handle or req related to the event along with it's parameters
16:20:25  <creationix>basically I don't want to ever use callbacks in C
16:20:40  <creationix>but js / lua interface can stay the same though
16:20:46  <creationix>I just want to move the dispatching up
16:20:56  <saghul>I see
16:21:25  <saghul>not an expert myself here, but as it looks I'm not sure you can accomplish that with libuv
16:21:35  <creationix>looks like with current libuv, I'll have to intercept all dispatches and route them to my getEvent function
16:21:47  <creationix>it will be terribly inefficient
16:22:32  <creationix>actually it won't be that bad. I already have to convert the C callbacks to lua dispatches
16:22:48  <creationix>the only overhead will be the lookup in lua land to dispatch the event again
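A rough shape for the shim creationix describes: every C callback handed to libuv merely enqueues a record, and a blocking get_event() drives the loop with uv_run_once() until something arrives. The event struct and queue below are invented for illustration; only uv_run_once() is real 0.8-era libuv API:

    #include <stdlib.h>
    #include "uv.h"

    /* Illustrative event record and FIFO; not part of libuv. */
    typedef struct shim_event {
      void *handle;            /* handle or req the event belongs to */
      int status;              /* e.g. nread or a request status code */
      struct shim_event *next;
    } shim_event;

    static shim_event *head, *tail;

    /* Every callback registered with libuv funnels into this. */
    static void push_event(void *handle, int status) {
      shim_event *ev = malloc(sizeof(*ev));
      ev->handle = handle;
      ev->status = status;
      ev->next = NULL;
      if (tail != NULL) tail->next = ev; else head = ev;
      tail = ev;
    }

    /* Pull side: block in uv_run_once() until a callback enqueued something.
     * uv_run_once() returns 0 once no active handles or requests remain. */
    shim_event *get_event(uv_loop_t *loop) {
      shim_event *ev;
      while (head == NULL)
        if (uv_run_once(loop) == 0)
          break;
      ev = head;
      if (ev != NULL) {
        head = ev->next;
        if (head == NULL) tail = NULL;
      }
      return ev;  /* NULL means the loop has nothing left to do */
    }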
16:22:52  <saghul>I expose callbacks to python land as well
16:23:07  <saghul>and use greenlet to give a synchronous looking API
16:23:22  <creationix>yeah, I have coroutines
16:23:44  <saghul>can't you use them to 'wait' for callbacks to be fired?
16:23:54  <creationix>also, if I can make libuv not require callbacks, I can use luajit's awesome ffi instead of C bindings
16:24:33  <creationix>saghul: sure, but I want more control https://github.com/luvit/luvit/blob/master/examples/stream/test.lua#L35-43
16:24:52  <creationix>and every time I call a C function using C bindings, it kills my jit optimizations
16:25:03  <creationix>if I were to use ffi instead, it wouldn't slow things down as much
16:25:14  <creationix>but ffi sucks for C callbacks
16:25:20  <saghul>yeah, there is quite some overhead
16:25:52  <saghul>I do like that API with fibers :-)
16:26:21  <indutny>creationix: FFI is not as good as you're advertising it
16:26:27  <indutny>though it allows you to play in one field
16:26:43  <creationix>indutny: luajit ffi is good right?
16:26:48  <creationix>I know most ffi sucks
16:26:56  <indutny>well, I'm talking about ffi itself
16:27:11  <indutny>call overhead is really big
16:27:12  <creationix>but that was the entire point of the luajit 2 rewrite: to make ffi calls faster than C API calls could possibly be
16:27:21  <creationix>sure, normal ffi sucks
16:27:25  <indutny>ah
16:27:26  <indutny>ok
16:27:33  <indutny>I didn't know that
16:27:41  <creationix>but luajit ffi basically jit generates direct (or indirect) calls and executes them
16:27:50  <creationix>and since it knows the signature it can optimize aggressively
16:28:03  <creationix>C api calls are black-box
16:28:17  <creationix>they could do anything and mess with any vm state
16:28:54  <indutny>I wonder when I'll finish my JIT :)
16:29:08  <indutny>I'm really stuck at register allocation
16:29:16  <indutny>lost all inspiration that I had
16:31:16  * piscisaureus joined
16:31:23  <creationix>indutny: that's a hard one
16:31:41  <creationix>piscisaureus: welcome back
16:31:51  <piscisaureus>creationix: the "pulling events" ship has really sailed I think
16:31:57  <piscisaureus>creationix: unless you want to start over :-)
16:32:34  <piscisaureus>thanks creationix
16:33:02  <creationix>piscisaureus: I'll try a shim to see if the new API is even better for me
16:33:15  <creationix>if it is, then we'll talk about rewriting or forking libuv
16:33:34  <piscisaureus>creationix: well, if you want to fork it then you don't have to talk :-)
16:33:37  <piscisaureus>be my guest
16:33:46  <creationix>like I have time for that :P
16:33:57  <piscisaureus>yes, well, same here
16:34:15  <creationix>also if I'm right about luajit's ffi, then it should be faster even with a shim at the C layer
16:34:32  <creationix>and that will be good enough till someone has time and motivation to rewrite libuv
16:38:05  <piscisaureus>I think that will be a pretty big undertaking
16:38:14  <piscisaureus>I don't think it'll even happen
16:39:18  <creationix>right, maybe in the unlikely case that luvit becomes more popular than node
16:39:36  <creationix>but I don't see that happening for webdev
16:39:40  <creationix>js is the language of the web after all
16:44:00  <indutny>piscisaureus: have we touched openssl somewhere between 0.8.4 and 0.8.8?
16:51:55  * loladiro joined
16:52:06  <piscisaureus>creationix: but even then, would you find people to rewrite libuv?
16:52:20  <piscisaureus>indutny: git log v0.8.4..v0.8.8 -- deps/openssl
16:52:40  <creationix>piscisaureus: I'm not worried about it. If it's not going to happen, I'll survive
16:53:02  <creationix>I'm pretty sure I can beat the socks off node in performance without rewriting libuv :)
16:53:10  * dshaw_ joined
16:53:43  <piscisaureus>creationix: IT'S ON!!1
16:54:02  <creationix>I cheat by changing the http api
16:54:05  <creationix>node can't do that :P
16:54:17  <piscisaureus>creationix: so what's the change?
16:54:26  <piscisaureus>creationix: no http header support
16:54:45  <creationix>no, remove the massive response object
16:54:55  <creationix>and instead have a respond method that takes code, headers, and body
16:55:00  <creationix>(body can be either string or stream)
16:55:22  <creationix>it made luvit 20x faster when I tried it
16:55:33  <creationix>and I like the API better too
16:56:36  <piscisaureus>creationix: 20x hmm. I don't believe that.
16:56:46  <creationix>well, it's only twice the speed of node
16:56:53  <creationix>luvit has gotten really slow lately for some reason
16:57:01  <creationix>haven't tracked down the cause yet
16:57:48  <piscisaureus>right, ok. I thought you were saying it got 20x as fast as node
16:58:13  <creationix>no, that would be impressive indeed
16:58:16  <creationix>https://gist.github.com/3517969
16:58:26  * mmalecki[food] changed nick to mmalecki
16:58:29  <creationix>I did get 6x the speed of node once by cheating heavily
16:58:45  <creationix>but it wasn't an equivalent API so it doesn't count
16:59:18  <creationix>basically a tcp server supporting http keep-alive and pipelining, emitting canned http responses
17:00:01  <piscisaureus>right, 2000 r/s with keepalive is quite slow indeed
17:00:25  <indutny>piscisaureus: thanks
17:02:00  <creationix>piscisaureus: that's what happens when I spend months working on a cloud9 release and don't have time to watch luvit
17:02:23  <piscisaureus>creationix: haha :-)
17:02:55  <creationix>the new web module I wrote in one night while holding a crying baby. I couldn't sleep anyway, so I coded
17:11:18  * TooTallNate joined
17:15:34  * erickt quit (Quit: erickt)
17:20:42  * piscisaureus quit (Quit: Lost terminal)
17:26:04  <bnoordhuis>indutny: ho
17:26:16  <bnoordhuis>looks like it's using the struct before actually declaring it?
17:26:43  * piscisaureus_ joined
17:26:47  * CIA-127 quit (Ping timeout: 244 seconds)
17:27:14  <indutny>bnoordhuis: the thing is that the build works for everyone else
17:28:26  * ibobrik joined
17:29:23  * erickt joined
17:30:45  * CIA-127 joined
17:32:17  * hz quit (Read error: Connection reset by peer)
17:32:22  * joshthecoder quit (Quit: Linkinus - http://linkinus.com)
17:33:05  * hz joined
17:35:29  <bnoordhuis>indutny: builds fine for me as well
17:39:09  <bnoordhuis>indutny: maybe deps/opus/opus/src/opus_decoder.c needs to include modes.h before celt.h
17:39:17  <bnoordhuis>just a guess though
17:39:35  <bnoordhuis>or alternatively, celt.h needs to include modes.h
17:40:23  <indutny>bnoordhuis: yeah, probably
17:40:32  * CIA-127 quit (Ping timeout: 265 seconds)
17:40:35  <indutny>though it's very odd
17:42:22  <indutny>bnoordhuis: btw, want to try vock?
17:42:30  <indutny>bnoordhuis: just messaging would be ok for me
17:42:30  <indutny>:D
17:42:37  <indutny>you can press 'm' to mute yourself
17:42:40  <indutny>and 'n' to send message
17:45:11  <indutny>I wonder if I can use DHT
17:45:13  <creationix>indutny: how do I connect to someone on dock?
17:45:20  <creationix>*vock
17:45:21  <indutny>you mean vock?
17:45:24  <indutny>vock connect ...
17:45:27  <indutny>where ... is a room id
17:45:34  * hz quit
17:45:36  <indutny>which is randomly generated when you do: vock create
17:45:49  <creationix>so no named rooms then?
17:46:02  <creationix>like `vock connect libuv`
17:47:43  * `3rdEden joined
17:48:25  <indutny>creationix: not so far
17:48:32  <indutny>creationix: though it's a good idea
17:48:48  <creationix>is it only one-on-one or does it support chat rooms?
17:48:57  <indutny>well, it should support conference calls
17:49:08  * lohkey quit (Quit: WeeChat 0.3.8)
17:49:16  <indutny>though https://github.com/indutny/vock/issues/2
17:49:18  <indutny>happens
17:49:25  <indutny>some sort of protocol error
17:49:30  <indutny>but I haven't seen it
17:49:55  <creationix>cool progress
17:50:04  <indutny>creationix: ah
17:50:09  * lohkey joined
17:50:09  <indutny>btw, named rooms should work fine
17:50:26  <indutny>creationix: can you run `vock connect libuv`?
17:51:51  <creationix>that's kinda cool
17:52:03  <indutny>creationix: have you closed the client?
17:52:10  <creationix>yep
17:52:14  <indutny>ok
17:52:18  <indutny>because I got an error :)
17:52:21  <creationix>did you hear me?
17:52:24  <indutny>nope
17:52:28  <indutny>everything is muted
17:52:37  <indutny>well, I muted everything
17:52:39  <creationix>oh, mute on your side mutes listening too
17:52:41  <indutny>when you press 'm' only mic is muted
17:52:44  <indutny>nope
17:52:52  <indutny>sorry for confusing you
17:52:53  <indutny>:)
17:53:38  <indutny>what I really do like about vock is that all the protocol is there: https://github.com/indutny/vock/blob/master/lib/vock/peer.js
17:53:51  <indutny>it can easily be patched or new functionality can be added w/o going deep into sound processing etc.
17:54:09  <creationix>so it's peer-to-peer once the connection is established
17:54:11  <bnoordhuis>`npm update` doesn't work so great with pre-release builds of node...
17:54:15  <creationix>or does it proxy through your server
17:54:24  * lohkey quit (Client Quit)
17:54:25  <indutny>creationix: well, there're two modes
17:54:26  <bnoordhuis>anything that requires node-gyp dies horribly
17:54:27  <indutny>relay and direct
17:54:38  * ibobrik quit (Read error: Connection reset by peer)
17:54:38  * lohkey joined
17:54:43  <indutny>bnoordhuis: yeah, that's very annoying
17:54:59  <indutny>bnoordhuis: I'm using multiple tabs for that goal
17:55:06  <indutny>bnoordhuis: one with stable and one with pre
17:55:08  <TooTallNate>you should be able to pass --nodedir to `npm update`
17:55:15  <TooTallNate>i think
17:55:57  <bnoordhuis>`npm --nodedir <path> update` doesn't work
17:56:06  <bnoordhuis>`npm update --nodedir <path>` doesn't seem to do anything
17:56:44  <bnoordhuis>is there some kind of env var i can set?
17:58:04  <piscisaureus_>bnoordhuis: npm config ?
17:59:20  <bnoordhuis>piscisaureus_: yes, but what key to set?
17:59:52  <piscisaureus_>bnoordhuis: prefix?
18:00:08  <piscisaureus_>bnoordhuis: look through "npm config list -l"
18:00:28  <bnoordhuis>piscisaureus_: nope, doesn't work
18:00:36  <piscisaureus_>bnoordhuis: oh
18:00:38  <piscisaureus_>in that case
18:00:39  <piscisaureus_>I don't know
18:00:53  <piscisaureus_>maybe try NODE_DIR=/foo/bar npm
18:00:54  <bnoordhuis>kind of bad that three core devs don't know how to fix that...
18:01:02  <piscisaureus_>hahaha
18:01:26  * brson joined
18:01:28  <TooTallNate>bnoordhuis: npm_config_nodedir=$dir npm update?
18:02:14  <bnoordhuis>hurray, that works!
18:02:40  <bnoordhuis>TooTallNate: maybe add that to the --nodedir error message?
18:02:46  <bnoordhuis>i mean, it's not very discoverable
18:02:52  <TooTallNate>bnoordhuis: it's an npm bug
18:02:57  <TooTallNate>it's supposed to set those config vars
18:02:57  <piscisaureus_>oh my
18:03:01  <piscisaureus_>httpsys is so fast...
18:03:09  <TooTallNate>it does for "npm install"
18:03:12  <bnoordhuis>ah okay
18:03:14  <TooTallNate>but not "npm update" apparently
18:03:16  <TooTallNate>isaacs: ^
18:04:52  <bnoordhuis>indutny: 165.225.128.181 is the master server?
18:05:05  <indutny>bnoordhuis: yeah, I think so
18:05:17  <indutny>bnoordhuis: why are you asking?
18:05:32  <bnoordhuis>indutny: just curious. i'm stracing what vock does
18:05:38  <indutny>haha
18:05:39  <indutny>nice
18:05:42  <bnoordhuis>what's with the 'Opponent appeared' message btw? :)
18:06:02  <indutny>bnoordhuis: what do you mean?
18:06:17  <indutny>bnoordhuis: server notifies client that new peer is available on the list
18:06:23  <indutny>bnoordhuis: there're several problems with that
18:06:31  <bnoordhuis>indutny: that's what it prints when i type `vock connect libuv`
18:06:33  <indutny>bnoordhuis: i.e. disconnected peers aren't removed from the list
18:06:38  <indutny>bnoordhuis: ^^
18:07:08  <indutny>list itself will be removed after some timeout
18:07:13  <indutny>if no one tries to connect to it
18:07:27  <bnoordhuis>so... of how many botnets am i part now?
18:08:00  <piscisaureus_>hey core people
18:08:19  <piscisaureus_>would it be possible to channel data from libuv straight into the http parser
18:08:25  <indutny>bnoordhuis: at least one
18:08:25  <piscisaureus_>without going through js?
18:08:35  <bnoordhuis>piscisaureus_: sure. but not atm
18:08:38  <indutny>piscisaureus_: sure, but why?
18:08:42  <piscisaureus_>well
18:08:47  <indutny>piscisaureus_: we'll lose a lot of flexibility
18:08:49  <piscisaureus_>so I am benchmarking this httpsys thing
18:08:57  <piscisaureus_>and it's much faster
18:09:01  <bnoordhuis>well duh
18:09:16  <indutny>fuck it
18:09:18  <indutny>honestly :D
18:09:23  <bnoordhuis>tux is much faster than any user mode http server too
18:09:27  <piscisaureus_>it seems mostly related to the fact that the headers just arrive
18:09:34  <piscisaureus_>tux?
18:09:38  <bnoordhuis>kernel mode http server
18:09:42  <piscisaureus_>ah, right
18:09:50  <piscisaureus_>but I don't think it has much to do with kernel mode
18:10:05  <piscisaureus_>more the fact that there are fewer roundtrips between js and c
18:10:08  * dshaw_ quit (Quit: Leaving.)
18:10:09  <bnoordhuis>well...
18:10:15  <bnoordhuis>if i understand what httpsys does right
18:10:22  <bnoordhuis>it doesn't copy data from and to user space
18:10:27  <bnoordhuis>that's a pretty big win
18:10:38  <indutny>piscisaureus_: roundtrips are bad
18:10:47  <piscisaureus_>well of course it has to copy data to userspace
18:11:04  <piscisaureus_>or maybe it maps it into the user's address space - that could be
18:11:14  <indutny>I think copying 1kb of data is not that slow
18:11:21  <piscisaureus_>I don't think so either
18:11:29  <piscisaureus_>this is also supported with normal sockets on windows
18:11:46  <piscisaureus_>but this optimization is disabled because the difference was too small to care about
18:12:01  <indutny>oh
18:12:11  <indutny>I just thought that I could leech off torrent's dht network
18:12:16  <indutny>ahahhaha
18:12:21  * indutny thinks about evil plan
18:12:51  <bnoordhuis>piscisaureus_: just so we're talking about the same thing
18:13:09  <bnoordhuis>http.sys is that you tell the kernel "give me only data for these urls", right?
18:13:16  <piscisaureus_>yes
18:13:21  <piscisaureus_>and it also parses the headers for you
18:13:34  <bnoordhuis>right, i was just about to type that
18:13:34  <mjr_>indutny: no good progress on the crashes, but we did observe that your slab allocator patch seems to make the problem go away.
18:13:46  <indutny>mjr_: aha
18:13:54  <indutny>ok, I can tell you one thing now
18:15:03  <indutny>ok, so this is that 0x10000 allocation
18:15:30  <bnoordhuis>hah, 64K - very recognizable
18:15:46  <bnoordhuis>must be a stream_wrap or udp_wrap thing, right?
18:16:01  <mjr_>I think stream wrap is in the stack, yes
18:16:07  <mjr_>https://gist.github.com/3539304
18:16:14  <indutny>noope
18:16:20  <indutny>this is not related to it
18:16:23  <indutny>tls.js is allocating this buffer
18:16:24  <indutny>a lot
18:16:38  <indutny>tls.js:443
18:16:48  * dshaw_ joined
18:16:54  <mjr_>line 443, how ominous
18:17:08  <bnoordhuis>quite so
18:17:21  <bnoordhuis>indutny: so it's a plain OOM?
18:17:26  <indutny>not sure
18:17:31  <indutny>OOM should not happen
18:17:55  <bnoordhuis>but why then is it throwing std::bad_alloc?
18:17:55  <indutny>it's forgetting buffers that were allocated
18:18:00  <indutny>it seems that GC doesn't work well
18:18:11  <indutny>or
18:18:20  <indutny>callbacks on buffer destruction in C++ aren't called
18:18:32  * dshaw_ quit (Client Quit)
18:18:36  <indutny>but I can hardly understand why OOM happens when the core dump is so small
18:18:46  <tjfontaine>is the slab allocator used for generic Buffers?
18:19:00  <mjr_>do std::bad_alloc objects have more ways to inspect them that tell us what the actual problem is?
18:19:43  <indutny>mjr_: so with my patch those allocations happen
18:19:47  <indutny>but much more rarely
18:20:05  <indutny>mjr_: can you create some sort of chart/plot of memory usage
18:20:08  <indutny>before crash
18:20:40  <bnoordhuis>mjr_: another thing that might be of interest is if V8::AdjustAmountOfExternalAllocatedMemory() is called or not
18:20:40  <mjr_>yeah, we do, but it's not high resolution enough to see the pattern.
18:20:58  <indutny>mjr_: so it crashes really soon
18:21:22  <tjfontaine>does ::jsstack in mdb return anything useful?
18:21:23  <mjr_>Well, we sample every 60 seconds, and that might not be fast enough to catch a quick ramp up in memory.
18:21:40  <indutny>hah
18:21:43  <indutny>bnoordhuis: nice idea
18:21:52  <bnoordhuis>looking at the backtrace it should hit the code path that adjusts v8's idea of external memory
18:22:01  <mjr_>tjfontaine: something about our platform isn't the right version, and we can't run v8.o right now. Waiting on Joyent for a fix for that.
18:22:09  <tjfontaine>mjr_: ok
18:22:10  <mjr_>bnoordhuis: I don't see it in the stack.
18:22:19  <indutny>bnoordhuis: it won't happen in many places
18:22:27  <indutny>bnoordhuis: where free callback is specified, for example
18:22:50  <bnoordhuis>mjr_: it gets called after the memory has been allocated
18:23:03  <bnoordhuis>mjr_: but if it never gets called at run-time, something's wrong
18:23:56  <mjr_>We could rebuild with some extra logging if that would help.
18:24:16  <mjr_>But it sure is interesting that with indutny's slab patch it doesn't seem to crash.
18:24:24  <bnoordhuis>indutny: what patch is that?
18:24:36  <mjr_>Perhaps we haven't let it run long enough to see the crash, but it certainly doesn't crash as fast.
18:24:38  <indutny>bnoordhuis: commit f210530f46e8ddbd9e7cc0d0c37778888c27f526
18:24:52  <indutny>mjr_: it's using a shared 10mb buffer
18:25:00  <indutny>instead of allocating a lot of 0x10000 buffers
18:25:08  <bnoordhuis>ah, right
18:25:20  <indutny>btw, 10mb is a really questionable amount of memory :D
18:26:19  <bnoordhuis>indutny: why?
18:26:20  <mjr_>Oh, we did also confirm that rebuilding node with the libc allocator instead of libumem does not fix the crashing.
18:26:29  <indutny>bnoordhuis: too much for devices
18:26:51  <bnoordhuis>indutny: oh, i don't worry about that. mmap magic takes care of that
18:27:08  <bnoordhuis>until it's actually used it's only virtual
18:28:41  <indutny>bnoordhuis: btw, I don't see how external memory can affect this
18:28:58  <indutny>bnoordhuis: it seems to just force a GC in some situations
18:29:03  <indutny>and that's all that it can affect
18:30:21  <bnoordhuis>comparing the stack trace with the code suggests that it's a regular OOM error
18:30:38  <bnoordhuis>so... why does fedor's patch fix the issue?
18:30:51  <indutny>haha
18:30:53  <indutny>brb
18:30:58  <indutny>need to get some groceries
18:31:19  <mjr_>bnoordhuis: I think we are mangling the process heap somehow
18:31:50  <mjr_>It's operator new that's failing, and the allocation size looks reasonable. The heap is 400MB, and there's plenty of free memory on the system.
18:32:21  <mjr_>So the indutny slab just makes fewer or differently shaped allocations from the process heap, and that works around the underlying issue.
18:32:32  <mjr_>While also making it slightly faster. :)
18:32:53  <mjr_>That's my theory anyway.
18:33:11  <bnoordhuis>mjr_: do you have the possibility to dtrace one of those processes until it dies?
18:33:26  <bnoordhuis>my working theory is that a mmap() or brk() syscall fails
18:33:31  <mjr_>Oh sure.
18:33:36  <mjr_>What do you want to trace?
18:34:04  <bnoordhuis>i guess those two syscalls
18:34:09  <bnoordhuis>dap: ping
18:34:18  * papertigers joined
18:34:48  <bnoordhuis>maybe dap knows if there's any extra magic going on inside smartos's malloc
18:34:48  <mjr_>bnoordhuis: so you want to see if mmap or brk/sbrk fail right before the bad_alloc?
18:34:54  <bnoordhuis>mjr_: yes
18:35:12  <mjr_>OK, but note that this also fails if we take libumem out of the picture.
18:35:19  <bnoordhuis>mjr_: hm, okay
18:35:25  <bnoordhuis>my reasoning is this:
18:35:40  <mjr_>In which case I guess dtrace on those system calls is a good idea.
18:35:48  <bnoordhuis>if large(ish) allocations are mmap'd in but free() somehow forgets to unmap them
18:35:52  <dap>Yo.
18:35:55  <bnoordhuis>you'd eventually run out of address space
18:36:06  <bnoordhuis>in that case VMEM would be large as well though
18:36:08  <bnoordhuis>dap: hey
18:36:15  <mjr_>Hey dap, we are just trying to decipher our bad_alloc on smartos issue.
18:36:22  <mjr_>From here: https://gist.github.com/3539304
18:36:23  <bnoordhuis>do you know what malloc() on smartos uses to allocate large chunks?
18:36:32  <bnoordhuis>mmap, brk, something else?
18:36:33  <dap>Which implementation? libumem?
18:36:45  <bnoordhuis>libumem and libc
18:36:50  <dap>Very likely brk, but easy to verify with DTrace.
18:37:21  <papertigers>dap: what do you want me to trace? syscall::brk: ?
18:37:22  <bnoordhuis>mjr_: okay, let's take the safe route: mmap, munmap, mprotect, brk
18:37:34  * dshaw_ joined
18:37:48  <mjr_>Breaking news: it turns out that even with the indutny slab patch we still crash from bad_alloc, but we stayed up 16 hours instead of 10 minutes.
18:37:59  <bnoordhuis>well, it's an improvement :)
18:38:18  <dap>dtrace -n 'syscall::brk:return/pid == $1/{ trace(arg1); }'
18:38:22  <dap>err
18:38:29  <dap>dtrace -n 'syscall::brk:return/pid == $1/{ trace(arg1); }' YOURPID
18:38:58  <dap>If you see anything non-zero, brk() failed.
18:39:07  <dap>If you're getting a ton, of course, aggregate instead of tracing each one.
18:39:32  <mjr_>Stack of crash with crypto slab patch applied:
18:39:33  <mjr_>https://gist.github.com/3539304#file_crypto_slab_crash.txt
18:40:02  <mjr_>Our v8.o is messed up, so we can't get JS function names for those JS stack frames, sorry.
18:40:23  * `3rdEden quit (Quit: Leaving...)
18:41:14  <bnoordhuis>mjr_: ClientHelloParser? you're using fedor's async tls session patch?
18:41:30  <mjr_>bnoordhuis: oh yes, we are also using that.
18:41:36  <papertigers>dap cool I will run that until the next crash
18:42:04  <dap>papertigers: you may want to make sure it's printing something to verify the claim that it's using brk() at all.
18:42:09  <bnoordhuis>mjr_: do you also get bad_alloc without that patch?
18:42:11  <mjr_>This is part of an experiment to increase our HTTPS capacity, and it curiously also increased our resilience to the main bad_alloc crash.
18:42:25  * joshthecoder joined
18:42:30  <papertigers>brk was returning with 0 a bunch, so I @[arg1] = count()
18:42:35  <dap>Cool.
18:42:58  <mjr_>bnoordhuis: yes, with stock node 0.8, we get the bad_alloc in a small number of minutes. With Fedor's TLS event patch and slab patch, we got a crash only after 16 hours.
18:43:15  <bnoordhuis>okay, good
18:43:41  <bnoordhuis>i think tracing syscalls is a good first step
18:43:50  <mjr_>In case anybody is wondering, papertigers works at Voxer and is a SmartOS person.
18:44:00  <bnoordhuis>hi papertigers
18:44:10  <papertigers>bnoordhuis: hey
18:45:45  <papertigers>dap:
18:45:48  <papertigers>-1 1
18:45:50  <papertigers> 0 3329
18:45:54  <papertigers>got one -1
18:46:03  <dap>mjr_, papertigers: It would be surprising if malloc() failed without brk returning −1, and if malloc failed and you were NOT out of virtual memory (i.e. you're not at the 4G limit for a 32-bit process), then I'd assume you were out of swap (anonymous memory).
18:46:18  <papertigers>dap: do you want me to trace the errno
18:46:34  <mjr_>dap: that assumption seems reasonable, except that with node 0.6, we run for days.
18:46:42  <papertigers>dap: if you do a vmstat 1 swap and free are not 0
18:46:53  <papertigers>or even close
18:47:42  <dap>I don't know what vmstat is reporting — if that's system-wide or for your zone
18:47:48  <papertigers>that failure was reflected in ::umastat as well, showing one failed alloc
18:48:26  <dap>CA shows you anonymous memory available and used (that's swap, in this case)
18:48:34  <papertigers>Good point. But switching off of node 8 makes the crashes go away. If it were bottlenecked on that, wouldn't we still see crashes?
18:48:34  <dap>I'm looking up the kstat you can look at
18:50:03  <dap>kstat memory_cap:::
18:50:16  <dap>swap and swapcap are the numbers you want to look at, shortly before it crashes
18:50:29  <dap>what's swapcap now? (that's static.)
18:51:09  <papertigers>dap: https://gist.github.com/e5732b50c20f45311da2
18:51:31  * bradleymeck joined
18:52:51  <mjr_>seems like plenty
18:53:06  * jay_ joined
18:53:40  <jay_>good day
18:54:05  <jay_>I'm trying to figure out how to use ngx_queue to replace stl queue
18:54:25  <jay_>Having a bit of trouble following how to attach/detach data to a queue item
18:54:43  <jay_>Is there a simple example of this somewhere?
18:54:45  <dap>Yeah, I'd guess you'd be seeing anon_alloc_fails if that were the problem, too.
18:54:51  * c4milo quit (Remote host closed the connection)
18:54:54  <papertigers>dap, I am running "while true; do kstat -p memory_cap:::swap && sleep 2; done" and I don't really see it dip
18:55:06  <jay_>Basically I'm trying to create a queue<uv_work_t>
18:55:10  <dap>papertigers: it would go up, not down
18:55:34  <dap>Yeah, my next step would probably be to grab errno when brk() fails.
18:55:35  <papertigers>dap oops yeah, well it has gone down from 58 to 56
18:55:48  * brson quit (Quit: leaving)
18:56:03  * loladiro quit (Quit: loladiro)
18:58:23  * loladiro joined
18:59:49  * loladiro quit (Client Quit)
19:00:14  <tjfontaine>jay_: ngx_queue_data(<queue entry>, <containing type>, <name of ngx_queue_t in containing type>)? basically you add ngx_queue_t to your data type, not vice versa
19:01:08  <bnoordhuis>jay_: you'd create a struct work_queue { uv_work_t work_req; ngx_queue_t queue; }
19:01:11  <jay_>thank you tjfontaine, that makes things a little more clear
19:03:10  <jay_>Right so then I would instantiate work_queue and call ngx_queue_insert_tail(&queue, &work_queue_item)
19:04:17  <jay_>and later ngx_queue_data(&work_queue_item->queue, work_queue, queue)?
19:04:52  * piscisaureus_ quit (Quit: ~ Trillian Astra - www.trillian.im ~)
19:04:53  <jay_>later being after I dequeue work_queue_item->queue
19:05:08  * betta joined
19:05:25  <jay_>that helps a lot in understanding the code I was reading, ty
19:07:15  <bnoordhuis>jay_: almost
19:07:24  <betta>hi
19:07:34  <bnoordhuis>your work_cb gets a uv_work_t* req arg
19:08:04  <bnoordhuis>you use that to look up the embedding struct like so: struct work_queue* q = ngx_queue_data(req, struct work_queue, work_req)
19:08:38  <bnoordhuis>frankly, all ngx_queue_data does is subtract some bytes from the req pointer and cast it to the proper type
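Putting bnoordhuis's answer together into one compilable sketch, using the ngx-queue.h bundled with libuv at the time (include path may vary; error handling elided):

    #include <stdlib.h>
    #include "uv.h"
    #include "ngx-queue.h"

    struct work_queue {
      uv_work_t work_req;  /* the libuv request, embedded in our struct */
      ngx_queue_t queue;   /* the intrusive link, likewise embedded */
    };

    static ngx_queue_t pending;  /* call ngx_queue_init(&pending) once */

    static void enqueue(void) {
      struct work_queue *wq = malloc(sizeof(*wq));
      /* link the queue member, not the struct itself */
      ngx_queue_insert_tail(&pending, &wq->queue);
    }

    static void dequeue(void) {
      ngx_queue_t *q = ngx_queue_head(&pending);
      ngx_queue_remove(q);
      /* ngx_queue_data is container_of: subtract offsetof(struct work_queue,
       * queue) from q and cast, recovering the embedding struct */
      struct work_queue *wq = ngx_queue_data(q, struct work_queue, queue);
      free(wq);
    }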
19:08:56  <bnoordhuis>betta: hi
19:09:18  <betta>i have also a question if you don't mind :)
19:09:23  <bnoordhuis>sure, go ahead
19:09:27  <betta>i'm trying to remotely quit a libuv event loop and i tried to do this by using ngx_queue_foreach and uv_unref()
19:09:29  <betta>is it safe to do it like this?
19:10:42  <bnoordhuis>betta: define 'remotely quit'?
19:11:30  <betta>i'm using libuv in a seperate thread together with a GUI which has it's own event loop
19:11:52  <betta>if the GUI event loop quits, the libuv event loop should do that too
19:12:10  <indutny>betta: better to use a semaphore there
19:12:22  <bnoordhuis>indutny: no, you need to wake up the event loop
19:12:27  <indutny>betta: interacting with libuv's handles from other thread is not safe
19:12:33  <bnoordhuis>betta: i'd use a uv_async_t and uv_run_once in a loop
19:12:46  <indutny>ah, yeah, that should be even better
19:12:58  <bnoordhuis>when the async cb fires, set a flag that exits the loop
19:13:22  <bnoordhuis>from the gui thread, call uv_async_send() to make the cb fire
19:13:27  <betta>i'm already using uv_async_t for the notification
19:13:59  <bnoordhuis>betta: you can use more than one async handle
19:14:09  <bnoordhuis>use one for notifications, the other for exiting
19:14:39  <bnoordhuis>or combine them, the basic concept remains the same: wake up thread, set flag, exit from loop
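The wake-up-and-exit pattern just described, sketched against the 0.8-era libuv API (where async callbacks take an int status and uv_run_once() exists):

    #include "uv.h"

    static volatile int stop_requested;
    static uv_async_t stop_handle;

    /* Runs on the loop thread when the GUI thread calls uv_async_send(). */
    static void on_stop(uv_async_t *handle, int status) {
      (void) handle; (void) status;
      stop_requested = 1;
    }

    /* libuv thread: run one iteration at a time, checking the flag. */
    void loop_thread(uv_loop_t *loop) {
      uv_async_init(loop, &stop_handle, on_stop);
      while (!stop_requested)
        uv_run_once(loop);
      /* then uv_close() every handle and let the loop drain */
    }

    /* GUI thread: uv_async_send() is the one cross-thread-safe call. */
    void request_stop(void) {
      uv_async_send(&stop_handle);
    }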
19:14:58  <tjfontaine>what if in the uv_async_t he uv_walk'd and unref'd?
19:15:09  <betta>i'm still a beginner so i still have a few problems, but regarding async notifications i already understood how it's supposed to work (i think)
19:15:58  <betta>tjfontaine i'm doing exactly this at the moment
19:16:05  <bnoordhuis>tjfontaine: it'll work but it takes more work to implement
19:16:09  <tjfontaine>nod
19:16:44  <betta>and i get a memory leak with windows if i do it with uv_unref :/
19:18:15  <papertigers>dap bnoordhuis I added a comment with the ustack from when brk returns -1. It's not useful though https://gist.github.com/e5732b50c20f45311da2#comments
19:18:17  <betta>but i will try using uv_run_once - thanks for the tip :)
19:19:04  <papertigers>there are symbols for when it returns 0 however
19:20:25  <mjr_>wow, look at those awesome memory addresses
19:20:52  <bnoordhuis>okay, so it really is an OOM error
19:21:03  <bnoordhuis>the question now becomes why it triggers with v0.8 and not with v0.6
19:21:42  <bnoordhuis>papertigers: do you know what VMEM looks like at the time of the bad_alloc?
19:24:53  <papertigers>bnoordhuis: how would I grab it
19:27:41  * ryah quit (Ping timeout: 248 seconds)
19:27:50  * russfrank quit (Ping timeout: 265 seconds)
19:29:08  <bnoordhuis>papertigers: poor man's solution is logging the output of `ps -o rss,vsz -p <pid> | tail -1` every couple of seconds
19:29:35  <bnoordhuis>maybe smartos has a nicer facility for that but i don't know it :)
19:29:45  <indutny>bnoordhuis: logging the same from dtrace
19:29:46  <indutny>:)
19:31:51  <indutny>I wonder how much memory v8 is using before the crash
19:31:55  <indutny>as a percentage of total memory
19:32:24  * loladiro joined
19:33:53  <betta>ok now i'm certain that either libuv leaks memory or uv_loop_delete doesn't clean up everything...
19:34:38  <bnoordhuis>betta: what platform?
19:34:42  <betta>windows
19:34:43  * bradleymeck quit (Ping timeout: 245 seconds)
19:35:01  <bnoordhuis>what version of libuv?
19:35:06  <betta>current master
19:35:15  <bnoordhuis>ah okay
19:35:20  <bnoordhuis>what makes you think it leaks memory?
19:35:51  <betta>visual studio using crtdbg.h
19:36:28  <bnoordhuis>can you open an issue? our windows guy is away tonight
19:36:39  <bnoordhuis>please add a test case if you have one
19:37:05  <betta>well then let's create a github account :D
19:39:25  * bradleymeck joined
19:42:19  * `3rdEden joined
19:42:23  * `3rdEden quit (Remote host closed the connection)
19:43:01  <bnoordhuis>betta: everyone should have one
19:43:41  <betta>i know... i know... but i never really needed an account
19:44:00  <papertigers>bnoordhuis: I am capturing the output by the way...waiting for it to crash
19:44:02  <betta>it's for the same reason why i don't have a facebook account :x
19:45:25  <bnoordhuis>that's not remotely the same thing :)
19:50:12  <mjr_>There is a time in everyone's life where they don't have a github account.
19:50:49  <bnoordhuis>but everyone grows up eventually
19:51:03  <betta>:D
19:51:13  <mjr_>Whatever comes after github is going to be amazing
19:52:14  <tjfontaine>the space has changed drastically from berlios and sf.net
19:52:46  <mjr_>I used to have sf.net on my old Linux machine, back in the day.
19:53:57  <mjr_>My friend who owned it eventually sold it to SourceForge, so they could forge source code with coal and anvils.
20:01:11  * brsonjoined
20:01:18  <betta>stupid github... every username i ever used in my life is already taken :/
20:02:23  * piscisaureusjoined
20:04:05  * ryahjoined
20:05:45  <piscisaureus>betta: it's true that uv_loop_delete leaves some cruft behind
20:05:47  * bradleymeckquit (Quit: bradleymeck)
20:06:03  <piscisaureus>betta: but really what you want is not to uv_unref everything; better to uv_close all the things
20:06:24  <piscisaureus>because uv_unref makes uv_run return but the handles are still there
20:06:50  <betta>but the memory was freed?
20:07:01  * CIA-128joined
20:08:23  * dapquit (Quit: Leaving.)
20:09:42  <betta>well then, is it intended behaviour of uv_loop_delete to not delete everything? and if i uv_close everything, do i still need to call uv_loop_delete at all? i'm really confused by this really good but poorly documented library^^
20:13:48  * dapjoined
20:15:37  * loladiroquit (Quit: loladiro)
20:20:39  <piscisaureus>betta: uv_loop_delete is supposed to delete state associated with the loop itself. It doesn't clean up everything atm - and that's a bug.
20:21:27  <piscisaureus>betta: however uv_loop_delete will not free memory that *you* allocated to store your handles in. It will also not close sockets etc. You have to do that yourself by closing the handle.
20:21:56  * brsonquit (Ping timeout: 244 seconds)
20:22:07  <betta>oh ok thanks for clarification piscisaureus :)
20:23:20  * bradleymeckjoined
20:24:19  <betta>piscisaureus if i allocate everything on the heap and use uv_close() with free() and uv_loop_delete() at the end, it seems that no memory gets leaked anymore...
20:24:29  <betta>thanks so far :)
20:25:13  <piscisaureus>betta: well, that's how you are supposed to do it :-)
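Putting the pieces together, a hedged sketch of the teardown betta arrives at, with the uv_run(loop)/uv_loop_delete() signatures of this era (later libuv takes a run mode and replaces uv_loop_delete with uv_loop_close):

    #include <stdlib.h>
    #include <uv.h>

    static void on_close(uv_handle_t* handle) {
      free(handle);  /* safe only because we malloc'd the handle ourselves */
    }

    int main(void) {
      uv_loop_t* loop = uv_loop_new();
      uv_tcp_t* server = malloc(sizeof(*server));
      uv_tcp_init(loop, server);
      /* ... bind, listen, use the handle ... */
      uv_close((uv_handle_t*) server, on_close);  /* closes the socket */
      uv_run(loop);            /* lets the close callback fire */
      uv_loop_delete(loop);    /* frees loop state; never your handles or sockets */
      return 0;
    }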
20:27:01  * brsonjoined
20:27:07  <betta>well yes but i didn't know that uv_loop_delete doesn't free up my sockets (although i think i know why)
20:27:58  <betta>piscisaureus one last thing: what seemed strange to me was that the only thing which wasn't released was something internal...
20:29:12  <betta>to be specific: accept_reqs of the listening socket
20:33:26  * brsonquit (Ping timeout: 264 seconds)
20:36:54  * brsonjoined
20:38:07  <betta>oh shit... i found the mistake and it was my mistake of course... i didn't check for UV__HANDLE_INTERNAL :x
20:39:09  * bradleymeck_joined
20:42:29  * bradleymeckquit (Ping timeout: 252 seconds)
20:42:30  * bradleymeck_changed nick to bradleymeck
20:44:47  * bradleymeck_joined
20:47:30  <piscisaureus>oh
20:47:43  <piscisaureus>UV__HANDLE_INTERNAL sounds very risky
20:47:59  <piscisaureus>bnoordhuis: ^-- do we even want to enumerate those with uv_walk?
20:48:13  <bnoordhuis>piscisaureus: no, that shouldn't happen
20:48:15  * bradleymeckquit (Ping timeout: 244 seconds)
20:48:15  * bradleymeck_changed nick to bradleymeck
20:49:05  <bnoordhuis>betta: how are you walking the handles? uv_walk() skips internal handles
20:49:09  <piscisaureus>betta: when you uv_close a listening socket it leaks?
20:51:19  * piscisaureuschanged nick to piscisaureus^
20:51:22  * piscisaureus^changed nick to piscisaureus
20:56:07  * loladirojoined
20:56:40  * bradleymeckquit (Quit: bradleymeck)
20:57:31  * loladiroquit (Client Quit)
20:58:03  * TooTallNatequit (Ping timeout: 245 seconds)
20:59:59  * brsonquit (Ping timeout: 276 seconds)
21:00:00  * TooTallNatejoined
21:04:25  <betta>bnoordhuis, piscisaureus: i don't use uv_walk because in the same file i have a function which enumerates all UV_TCP handles
21:04:57  <betta>for this i use a slightly modified uv_walk function
21:05:50  <betta>i just copied this modified uv_walk function to my "quit-function"
21:06:45  <betta>i didn't think about UV__HANDLE_INTERNAL when i copied it
21:10:17  <betta>btw the "quit-function" looks like this:
21:10:20  <betta>ngx_queue_foreach(...) { h = ngx_queue_data(...); uv_close(h, (h->flags & UV__HANDLE_INTERNAL) ? NULL : on_close); }
21:10:39  <betta>plus a uv_loop_delete(loop); at the end
21:11:04  <betta>which works just fine :)
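For comparison, a hedged sketch of the same quit-function built on uv_walk, which (as bnoordhuis notes above) never visits internal handles, so the flag test disappears; it assumes every visible handle was heap-allocated by the caller:

    #include <stdlib.h>
    #include <uv.h>

    static void on_close(uv_handle_t* h) {
      free(h);  /* assumes we malloc'd every handle ourselves */
    }

    static void quit_walk_cb(uv_handle_t* h, void* arg) {
      if (!uv_is_closing(h))
        uv_close(h, on_close);
    }

    void quit(uv_loop_t* loop) {
      uv_walk(loop, quit_walk_cb, NULL);  /* internal handles are skipped */
      uv_run(loop);          /* drain the close callbacks */
      uv_loop_delete(loop);
    }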
21:11:44  * loladirojoined
21:12:47  * brsonjoined
21:22:08  * piscisaureusquit (Quit: Lost terminal)
21:25:33  * piscisaureusjoined
21:28:45  <piscisaureus>sudo gparted
21:29:36  <pquerna>indutny: hi
21:30:50  <indutny>pquerna: hi
21:31:15  * brsonquit (Ping timeout: 252 seconds)
21:32:52  <indutny>pquerna: sup?
21:33:34  * jay_quit (Quit: Leaving)
21:34:13  <pquerna>indutny: i think i figured it out; just trying to use node-spdy with express3
21:34:20  <indutny>;q
21:34:22  <indutny>oops
21:34:26  <indutny>ok
21:34:37  <indutny>that reminds me...
21:34:37  <pquerna>just the example in the readme doesn't work, express.HTTPSServer went away
21:35:36  * jaybeaversjoined
21:35:37  <indutny>pquerna: oh
21:35:44  <indutny>pquerna: express is becoming better and better
21:36:06  <indutny>pquerna: what is the working way to do it now?
21:36:16  <jaybeavers>I have another design question
21:36:26  <jaybeavers>Gist (https://gist.github.com/3553827)
21:36:41  <pquerna>indutny: https://gist.github.com/f29c23a007623a5d08ff
21:36:45  <jaybeavers>I have a dedicated background write thread with a write_queue for incoming data
21:37:07  <indutny>oh
21:37:08  <indutny>cool
21:37:17  <jaybeavers>I'm using sleep(1) when the queue is empty, that the right way?
21:37:17  <indutny>pquerna: do you mind if I copy-paste parts of it into the readme?
21:38:20  <pquerna>indutny: go for it/.. :)
21:41:41  <indutny>pquerna: btw, I think you do not need https.Server as the first argument
21:41:47  <indutny>because it should be used by default anyway
21:41:54  <pquerna>k
21:42:16  * brsonjoined
21:42:17  <pquerna>yup, works
21:42:43  <indutny>pquerna: updated https://github.com/indutny/node-spdy
21:43:12  <pquerna>https://198.101.158.143/
21:43:23  <indutny>pquerna: woot!
21:43:29  <pquerna>freebsd9, ipv6, spdy, woo :)
21:43:34  <indutny>hahah
21:43:38  <indutny>even ipv6
21:43:46  <pquerna>yeah... will move dns in a bit
21:43:48  <indutny>what's ipv6 address, btw?
21:45:06  <pquerna>2001:4801:7817:72:2ce2:56d8:ff10:f60
21:45:58  <pquerna>hrm, maybe it's not working.
21:46:01  <indutny>yep
21:46:06  <indutny>I think IP is malformed
21:46:12  <indutny>odd
21:46:16  <piscisaureus>jaybeavers: why don't you use a semaphore?
21:46:36  <indutny>probably I can't open tcp6 connections from my isp
21:46:40  <pquerna>hrm? it's there... just can't seem to get...
21:46:41  <pquerna>it
21:47:10  <indutny>pquerna: well, me too
21:47:17  <indutny>I've even tried to do that from a joyent server
21:47:34  <indutny>ah, it's using ipv4 too
21:47:42  <indutny>well, where can I get a tcp6 machine? :D
21:48:01  <pquerna>indutny: rackspacecloud is all ipv6 now :P
21:48:05  <piscisaureus>are there any linux experts in the audience?
21:48:16  <jaybeavers>piscisaureus: new to libuv, only found documentation for mutex :-)
21:48:50  <piscisaureus>jaybeavers: look in include/uv.h. Semaphore functions are pretty straightforward.
21:49:07  <indutny>jaybeavers: you're waiting in one thread
21:49:13  <indutny>jaybeavers: and sending in other
21:49:30  <indutny>wait() will block until something is sent
21:49:34  <indutny>to the semaphore
21:50:43  <piscisaureus>jaybeavers: so what you would do is call uv_sem_post every time you add something to the write queue
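A hedged sketch of that semaphore pattern, replacing the sleep(1) poll; the queue type and names are illustrative, and uv_mutex_init(&queue_lock) plus uv_sem_init(&queue_sem, 0) are assumed to run before uv_thread_create starts writer_thread:

    #include <stdlib.h>
    #include <uv.h>

    typedef struct write_item {
      struct write_item* next;
      void* data;              /* the buffer to write */
    } write_item_t;

    static uv_mutex_t queue_lock;
    static uv_sem_t queue_sem;
    static write_item_t* queue_head = NULL;
    static write_item_t* queue_tail = NULL;

    /* Main thread: append in FIFO order, then post the semaphore. */
    void queue_write(void* data) {
      write_item_t* item = malloc(sizeof(*item));
      item->data = data;
      item->next = NULL;
      uv_mutex_lock(&queue_lock);
      if (queue_tail) queue_tail->next = item; else queue_head = item;
      queue_tail = item;
      uv_mutex_unlock(&queue_lock);
      uv_sem_post(&queue_sem);  /* wake the writer thread */
    }

    /* Writer thread: blocks in uv_sem_wait instead of polling. */
    void writer_thread(void* arg) {
      for (;;) {
        uv_sem_wait(&queue_sem);  /* one post per queued item */
        uv_mutex_lock(&queue_lock);
        write_item_t* item = queue_head;
        queue_head = item->next;
        if (queue_head == NULL) queue_tail = NULL;
        uv_mutex_unlock(&queue_lock);
        /* do the blocking serial write with item->data here */
        free(item);
      }
    }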
21:50:56  <piscisaureus>[email protected]:~$ sudo swapon --all -v
21:50:56  <piscisaureus>swapon on /dev/sda3
21:50:56  <piscisaureus>swapon: /dev/sda3: found swap signature: version 1, page-size 4, same byte order
21:50:56  <piscisaureus>swapon: /dev/sda3: pagesize=4096, swapsize=2147483648, devsize=2147483648
21:50:56  <piscisaureus>swapon: /dev/sda3: swapon failed: Function not implemented
21:51:02  <jaybeavers>makes sense. I'll update.
21:51:07  <piscisaureus>^-- wtf ?
21:51:07  <jaybeavers>thx, much appreciated
21:52:02  * lohkeyquit (Quit: lohkey)
21:53:28  * rendarquit
21:54:08  <indutny>piscisaureus: wtf
21:54:17  <indutny>let me google it for you
21:54:27  <indutny>where are you doing it?
21:55:17  <indutny>piscisaureus: suppose you're trying to use swap on partition that isn't swap
21:56:24  <piscisaureus>indutny: no, I'm pretty sure :-)
21:56:36  <indutny>ok, so what do you want from it?
21:56:46  <piscisaureus>indutny: this is just a normal ubuntu pretty pinuin kernel
21:56:55  <indutny>pinuin?
21:57:07  <indutny>ah
21:57:10  <indutny>ok
21:57:32  <piscisaureus>oh i meant to say penguin
21:57:33  <piscisaureus>heh
21:57:37  <piscisaureus>whatever
21:57:42  <piscisaureus>12.04 LTS
21:58:41  <indutny>ok
21:58:44  <indutny>going to sleep
21:58:52  <indutny>hope you'll be good with your swap stuff
21:58:55  <indutny>ttyl ;)
21:59:03  <piscisaureus>bleh
21:59:07  <piscisaureus>indutny: sleep well
21:59:11  <indutny>you too
21:59:12  <indutny>bye
22:04:58  <jaybeavers>ok updated gist to semaphore: https://gist.github.com/3553827
22:05:07  <jaybeavers>Works great, code much simplified.
22:11:47  * ericktquit (Ping timeout: 272 seconds)
22:12:19  <piscisaureus>jaybeavers: you have a threadsafety issue in there
22:12:38  <piscisaureus>jaybeavers: you probably still need a mutex to protect the ngx_queue
22:13:02  <piscisaureus>jaybeavers: also, since you are now using your own worker thread - why not just do the actual work there?
22:15:02  <jaybeavers>piscisaureus: fair enough, looking at insert_tail, it's possible it would be updating head
22:15:05  <piscisaureus>jaybeavers: oh sorry - I misunderstood your code
22:15:20  <piscisaureus>jaybeavers: you *are* doing the work on the thread
22:15:22  <piscisaureus>so that's good
22:15:26  <piscisaureus>*but*
22:15:40  <piscisaureus>you are then calling uv_queue_work to synchronize with the main thread
22:15:58  <piscisaureus>which is not very efficient, and also not safe
22:16:16  <piscisaureus>since no libuv functions are threadsafe except uv_async_send
22:16:42  <jaybeavers>piscisaureus: The current api (node-serialport) takes a callback as a param and calls that from doWorkAfter
22:17:01  <piscisaureus>yes, that's okay
22:17:13  <piscisaureus>*but* calling uv_queue_work from the worker thread isn't
22:17:25  <jaybeavers>So I feel I need to marshal the results of my doWorkAfter back onto uv_default_loop
22:18:05  <jaybeavers>So you're saying do a uv_async_send to communicate back and then make the api callback from the async_send handler?
22:19:16  <piscisaureus>jaybeavers: I would leave the queue_work part out
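A hedged sketch of what that leaves: the worker pushes finished items onto a mutex-protected list and calls uv_async_send, and the async callback, running on the loop thread, drains the whole list, because several sends may coalesce into one callback invocation (the point made below). Names and the result payload are illustrative; the callback signature again follows this era's API:

    #include <stdlib.h>
    #include <uv.h>

    typedef struct result {
      struct result* next;
      int status;                 /* illustrative payload */
    } result_t;

    static uv_async_t done_async;  /* uv_async_init(uv_default_loop(), &done_async, on_done) on the main thread */
    static uv_mutex_t done_lock;   /* uv_mutex_init(&done_lock) at startup */
    static result_t* done_head = NULL;
    static result_t* done_tail = NULL;

    /* Worker thread: append in FIFO order, then poke the loop.
     * uv_async_send is the only libuv call that is safe here. */
    void report_done(result_t* r) {
      r->next = NULL;
      uv_mutex_lock(&done_lock);
      if (done_tail) done_tail->next = r; else done_head = r;
      done_tail = r;
      uv_mutex_unlock(&done_lock);
      uv_async_send(&done_async);
    }

    /* Loop thread: drain everything that has accumulated. */
    static void on_done(uv_async_t* handle, int status) {
      uv_mutex_lock(&done_lock);
      result_t* r = done_head;
      done_head = done_tail = NULL;
      uv_mutex_unlock(&done_lock);
      while (r != NULL) {
        result_t* next = r->next;
        /* invoke the user's callback for r here (v8::Function::Call etc.) */
        free(r);
        r = next;
      }
    }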
22:19:44  <jaybeavers>piscisaureus: I think I'm following you. brb :-)
22:20:03  <jaybeavers>(this is a fascinating api :-) )
22:20:17  <piscisaureus>threading. threading is hard
22:20:51  <bnoordhuis>piscisaureus: sudo strace swapon --all -v
22:20:51  <jaybeavers>rgr. I'm used to my c# semantics. portable c is a bit of a change.
22:21:27  <bnoordhuis>you can abbreviate it that to `swapon -av` btw, saves you some typing
22:21:36  <bnoordhuis>it, that... pick one
22:29:32  <CIA-128>libuv: Ben Noordhuis master * rff0a93a / (src/unix/async.c src/unix/internal.h): unix: fix clang -Wlanguage-extension-token warnings - http://git.io/G0PWyA
22:31:16  * travis-cijoined
22:31:17  <travis-ci>[travis-ci] joyent/libuv#639 (master - ff0a93a : Ben Noordhuis): The build passed.
22:31:17  <travis-ci>[travis-ci] Change view : https://github.com/joyent/libuv/compare/5eb1d191cce1...ff0a93a04f21
22:31:17  <travis-ci>[travis-ci] Build details : http://travis-ci.org/joyent/libuv/builds/2299126
22:31:17  * travis-cipart
22:32:27  * ericktjoined
22:35:18  <piscisaureus>bnoordhuis: https://gist.github.com/f3f7b9167604d8b61406
22:37:14  <bnoordhuis>piscisaureus: i don't see it failing anywhere
22:38:01  <piscisaureus>err
22:40:56  <bnoordhuis>piscisaureus: then again, i'm also not seeing it make any swapon() syscalls
22:41:41  <piscisaureus>bnoordhuis: https://gist.github.com/17490bfb846df7da7f67
22:41:53  <piscisaureus>bnoordhuis: so the swapon syscall is failing with ENOSYS
22:43:19  <bnoordhuis>piscisaureus: what does `uname -r` say?
22:45:26  <bnoordhuis>also, strace is reporting it kind of oddly - swapon() takes two args
22:46:34  <piscisaureus>bnoordhuis: 3.2.7
22:49:53  <jaybeavers>ok, gist updated to use uv_async_send instead of uv_queue_work for the callback
22:49:54  <jaybeavers>https://gist.github.com/3553827
22:50:15  <jaybeavers>the async handler doesn't seem to get involved
22:50:38  <jaybeavers>I used uv_async_init(uv_default_loop(), &async, work_done); and this returns 0
22:51:08  * joshthecoderquit (Quit: Leaving...)
22:51:08  <jaybeavers>Am I using the wrong loop? Intent is to post doWorkDone back onto the 'main' loop rather than calling it from the background thread
22:51:26  <jaybeavers>*handler doesn't seem to get invoked
22:51:42  <bnoordhuis>piscisaureus: https://gist.github.com/c909c3c998707b3c7409 <- can you try this? run as `strace ./swapon /dev/sda3`
22:52:17  <bnoordhuis>hah, my strace reports it as a single arg function as well
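The test program is presumably no more than a direct syscall wrapper; a minimal sketch, so strace and errno show exactly what the kernel returns:

    #include <stdio.h>
    #include <sys/swap.h>

    int main(int argc, char** argv) {
      if (argc != 2) {
        fprintf(stderr, "usage: %s <device>\n", argv[0]);
        return 1;
      }
      if (swapon(argv[1], 0) == -1) {
        perror("swapon");  /* ENOSYS prints "Function not implemented" */
        return 1;
      }
      return 0;
    }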
22:54:13  <bnoordhuis>jaybeavers: why does uv_work_queue.cpp use uv_default_loop()?
22:54:20  <piscisaureus>bnoordhuis: https://gist.github.com/654ebd64c6dd4e68f7bf
22:54:20  * loladiroquit (Quit: loladiro)
22:54:40  <piscisaureus>jaybeavers: you need to use a queue for reporting back to the main thread
22:55:06  <bnoordhuis>piscisaureus: hah, that's awesome. what does `uname -m -p` print for you?
22:55:12  <piscisaureus>jaybeavers: it's slightly complicated but uv_async_send may not correspond 1:1 to uv_async callback invocations
22:55:23  <jaybeavers>bnoordhuis: uv_work_queue is a simplification of the current threading model of node-serialport, which has out-of-order issues
22:55:34  <piscisaureus>bnoordhuis: x86_64 x86_64
22:55:37  <jaybeavers>it's not meant to be 'how to work going forward'
22:56:44  <piscisaureus>ah shit. maybe....
22:56:49  <jaybeavers>I'm trying to replace the logic represented by uv_work_queue.cpp with uv_thread.cpp
22:57:16  <bnoordhuis>piscisaureus: 32 bits binary?
22:59:03  <piscisaureus>bnoordhuis: the thing I just compiled?
22:59:28  <piscisaureus>bnoordhuis: swapon: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0x440b089968214d3a122375913f82be472c39a7a7, not stripped
22:59:57  <piscisaureus>bnoordhuis: no it could be that I am accidentally running a chromeos kernel with ubuntu userland
23:00:07  <bnoordhuis>piscisaureus: that could explain it :)
23:00:22  <jaybeavers>piscisaureus: Using a queue to marshal back onto uv_default_loop() doesn't sound right -- I'd still need to post a message to the loop to notify it of the new queue item?
23:00:24  <bnoordhuis>i wonder what google butchered out of the mainline kernel
23:00:53  <piscisaureus>it's all pretty stable actually
23:01:11  <bnoordhuis>piscisaureus: so what does `uname -a` print?
23:01:38  <piscisaureus>bnoordhuis: nothing interesting:
23:01:40  <piscisaureus>Linux ChrUbuntu 3.2.7 #1 SMP Fri Apr 6 09:09:51 PDT 2012 x86_64 x86_64 x86_64 GNU/Linux
23:01:49  * TheJHquit (Ping timeout: 252 seconds)
23:02:00  <bnoordhuis>ChrUbuntu... gotta remember that one
23:09:13  <jaybeavers>Let me back up a bit. Node library to write to the serial port. serial.write is a long, sync operation. Current implementation uses uv_queue_work
23:09:24  <jaybeavers>This causes occasional out-of-order delivery of data -- bad.
23:10:26  <jaybeavers>Trying to rewrite using a dedicated uv_thread, but need serialPort.write(byte[], callback) for callback to come back on uv_default_loop()
23:11:26  <bnoordhuis>jaybeavers: it should be pretty trivial, you manage the queue in the main thread. you fire off a work req, wait for it to come back, then fire off the next one
23:12:47  * bettaquit
23:14:02  * joshthecoderjoined
23:15:27  <jaybeavers>The callback comes out of the C++ layer with:
23:15:28  <jaybeavers>v8::Function::Cast(*data->callback)->Call(v8::Context::GetCurrent()->Global(), 2, argv);
23:15:57  <jaybeavers>Does that ensure the nodejs callback is properly threaded, so I can call that from the background uv_thread?
23:17:17  <jaybeavers>In that case, I can just call uv_after_work_cb from the background thread, that's simple
23:17:49  <jaybeavers>Sorry for being dense, my first multithreaded nodejs c++ code.
23:19:05  <bnoordhuis>jaybeavers: let me type up some example code
23:21:03  <jaybeavers>bnoordhuis: thx
23:25:27  <bnoordhuis>jaybeavers: https://gist.github.com/3560958 <- something like that
23:28:10  <bnoordhuis>jaybeavers: you call work_add() from the main thread and it'll take care of the rest
23:28:45  <jaybeavers>ok, we're a bit off base. Current code is simple and uses uv_queue_work like your gist
23:29:06  <jaybeavers>Problem is that if you call uv_queue_work rapidly, some work gets performed out of order
23:29:21  <jaybeavers>When that work is 'write Buffer to serial port' that results in corrupted data
23:29:41  <bnoordhuis>jaybeavers: not with my gist. it doesn't queue a new work req until the old one has finished
23:29:54  <bnoordhuis>i.e. it's serialized
23:30:00  <bnoordhuis>which is what you want
23:30:04  <bnoordhuis>and now you have it :)
23:30:17  <jaybeavers>:-)
23:30:43  <jaybeavers>Fair enough, but node-serialport exposes .write to the user; they aren't guaranteed to only call the next write when the previous has completed.
23:31:16  <jaybeavers>So trying to make an api that will self-serialize
23:31:38  <bnoordhuis>jaybeavers: that's exactly what work_add() does
23:31:54  <bnoordhuis>if you call it ten times in a row, it'll fire off the first work req and queue the other nine
23:38:31  <jaybeavers>in after_work_cb, why do you reenter uv_queue_work at the end? Shouldn't the previous call in work_add be sufficient?
23:39:13  <bnoordhuis>jaybeavers: to drain the queue
23:39:45  * joshthecoderquit (Quit: Leaving...)
23:40:23  <jaybeavers>and in work_cb, what happens if the threadpool calls the 2nd work_cb first? Since you're not pulling work from ngx_queue_head, you might get the 2nd work, yes?
23:40:44  <bnoordhuis>jaybeavers: that never happens because there is only ever one work req active
23:40:53  * mjr_quit (Quit: mjr_)
23:41:16  <bnoordhuis>jaybeavers: it works like this: work_add() adds work to our local queue, which is not in any way related to libuv's thread pool
23:41:52  <bnoordhuis>jaybeavers: when the queue is empty, i.e. work_add() is called for the first time, it hands off a work req to the thread pool
23:42:20  <bnoordhuis>then it twiddles its thumbs until that work req comes back, i.e. after_work_cb is called
23:42:23  <jaybeavers>bnoordhuis: Got that. I'm nervous that if you call work_add 10 times, there will be 10 scheduled callbacks into work_cb, each with their pointer to their original work element
23:42:56  <jaybeavers>If work_cb always pulled ngx_queue_head, I could see how you'd always pull work items in order
23:43:06  <bnoordhuis>that's exactly what happens
23:43:39  <bnoordhuis>copy that code, modify your_data, your_work_cb and your_after_work_cb but nothing else
23:43:42  <bnoordhuis>and you should be fine
23:44:46  <jaybeavers>:-) sorry, need to understand the code i 'write' too
23:45:27  <jaybeavers>So if work_add is called 10x, uv_queue_work is called 10x to schedule 10 invocations of work_cb
23:45:44  <jaybeavers>They may be out of order, but they should match cbs to queue elements
23:46:12  <jaybeavers>So why is the reentrancy needed in after_work_cb, if it's omitted I should still get the right number of work_cb calls, yes?
23:47:12  <bnoordhuis>So if work_add is called 10x, uv_queue_work is called 10x to schedule 10 invocations of work_cb <- no
23:47:28  <bnoordhuis>uv_queue_work is called for the first req
23:47:30  <jaybeavers>duh, nevermind
23:47:47  <jaybeavers>I mentally skipped the if (!empty) return
23:47:51  <bnoordhuis>ah :)
23:47:57  <bnoordhuis>it's there for a reason :)
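Reconstructing the idea of the gist as a hedged sketch (the gist itself is authoritative): only one uv_work_t is ever in flight, so the thread pool cannot reorder writes; after_work_cb re-queues the next item, draining the local FIFO. work_add is called from the main thread only, and the after-work callback uses this era's status-less signature:

    #include <stdlib.h>
    #include <uv.h>

    typedef struct work_item {
      struct work_item* next;
      void* data;                 /* e.g. the buffer to write */
    } work_item_t;

    static work_item_t* wq_head = NULL;
    static work_item_t* wq_tail = NULL;
    static uv_work_t work_req;    /* the single in-flight request */

    static void work_cb(uv_work_t* req) {
      work_item_t* item = req->data;
      (void) item;
      /* blocking serial write of item->data, off the loop thread */
    }

    static void after_work_cb(uv_work_t* req) {
      work_item_t* item = req->data;
      /* invoke the user's completion callback here (loop thread) */
      wq_head = item->next;
      if (wq_head == NULL) wq_tail = NULL;
      free(item);
      if (wq_head != NULL) {      /* drain: hand the next item to the pool */
        work_req.data = wq_head;
        uv_queue_work(uv_default_loop(), &work_req, work_cb, after_work_cb);
      }
    }

    void work_add(void* data) {
      work_item_t* item = malloc(sizeof(*item));
      int was_empty = (wq_head == NULL);
      item->data = data;
      item->next = NULL;
      if (wq_tail) wq_tail->next = item; else wq_head = item;
      wq_tail = item;
      if (!was_empty) return;     /* the "if (!empty) return" discussed above */
      work_req.data = wq_head;
      uv_queue_work(uv_default_loop(), &work_req, work_cb, after_work_cb);
    }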
23:48:55  <jaybeavers>ok, I'm following now. Thanks for your patience and help
23:49:41  * piscisaureusquit (Quit: Lost terminal)
23:50:37  <bnoordhuis>jaybeavers: my pleasure
23:52:44  * ericktquit (Ping timeout: 244 seconds)
23:55:09  * piscisaureus_joined
23:56:50  * tomshredsjoined