00:03:06  * thlorenzjoined
00:03:33  <brycebaril>thlorenz: stream-viz uses d3?
00:04:40  * thlorenzquit (Remote host closed the connection)
00:05:51  * timoxleyjoined
00:07:56  * st_lukequit (Read error: Connection reset by peer)
00:10:30  * timoxleyquit (Ping timeout: 252 seconds)
00:29:07  * eugenewarejoined
00:35:34  * thlorenzjoined
00:43:53  * thlorenzquit (Ping timeout: 245 seconds)
01:06:19  * timoxleyjoined
01:10:38  * timoxleyquit (Ping timeout: 240 seconds)
01:16:40  * kenansulaymanquit (Ping timeout: 264 seconds)
01:17:26  * kenansulaymanjoined
01:24:23  * timoxleyjoined
01:33:41  <levelbot>[npm] [email protected] <http://npm.im/krang>: braaain (@jarofghosts)
01:37:12  * eugenewarequit (Remote host closed the connection)
01:37:58  * eugenewarejoined
01:44:17  <mbalho>brycebaril: i was wondering if you wanted to split https://github.com/mranney/node_redis/blob/master/lib/parser/javascript.js into a separate module called redis-parser
01:44:19  * thlorenzjoined
01:44:42  <mbalho>brycebaril: or if you would prefer that i do it. i just didnt wanna take ownership of code that i didnt write in case the actual maintainer preferred doing it
01:45:02  <mbalho>brycebaril: either way it would be a really trivial thing. i just dont wanna depend on the rest of the code base when i am only using the parser
01:48:28  * thlorenzquit (Ping timeout: 245 seconds)
01:49:40  * ralphtheninjaquit (Read error: Operation timed out)
01:54:29  <brycebaril>mbalho: that's actually already a plan.
01:54:38  <brycebaril>In fact there is a faster pure js parser out there as a separate lib already
01:55:33  <brycebaril>You writing a new redis library?
01:56:39  <mbalho>brycebaril: nah wanted to use https://github.com/dominictarr/redis-protocol-stream but it depends on redis
01:56:48  <brycebaril>Ahh
01:57:03  <mbalho>brycebaril: which is fine except i noticed that the javascript parser in redis is ripe and ready to split out into its own module
01:57:20  <mbalho>brycebaril: do you or one of the other maintainers wanna do it or should i do it and send a pull req to redis?
01:57:21  <brycebaril>https://github.com/tonistiigi/redisparse << this one is faster, though he hasn't touched it for 7 months
01:57:29  <mbalho>oh cool, i know tonis
01:57:50  <mbalho>brycebaril: i should probably just use that then
01:57:53  <brycebaril>*faster for 80% of use cases
01:59:02  <brycebaril>Yeah. The plan for splitting is splitting the parser out along with breaking up the 2k line index.js into ~3 separate modules. Right now it is a bit of a nightmare. I was waiting to do the js parser at the same time because we're going to move them to a github organization vs mranney's personal github
01:59:22  <mbalho>brycebaril: ahh good call
01:59:44  <mbalho>brycebaril: the js parser seems ready to go to me, i dont think it would be that much work. the index.js sounds like a challenge though
02:00:29  <brycebaril>Yeah, the js parser would be easy, that was pretty much just waiting on me getting around to making the org
02:00:40  <brycebaril>bbiab gotta sing happy birthday to my wife
02:00:45  <mbalho>hah
02:01:04  <mbalho>you should probably just log out of irc for the night :P
02:02:47  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
02:06:48  * thlorenzjoined
02:10:38  <thlorenz>brycebaril: part of it, yes - but what you saw is just using smoothie charts
02:11:16  <thlorenz>lots more in the works, will push something more useful in a bit - figuring out this one bug
02:28:59  * jcrugzzjoined
02:38:25  * vincentmacquit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
03:17:05  * st_lukejoined
03:17:19  * st_lukequit (Remote host closed the connection)
03:20:53  * thlorenzquit (Remote host closed the connection)
03:27:56  * jcrugzzquit (Read error: Connection reset by peer)
03:28:15  * jcrugzzjoined
04:01:58  * vincentmacjoined
04:13:01  * mikealquit (Quit: Leaving.)
04:13:54  * mikealjoined
04:45:08  * eugenewarequit (Remote host closed the connection)
04:45:46  * eugenewarejoined
04:48:40  * reid_joined
04:49:36  * ramitosquit (*.net *.split)
04:49:37  * reidquit (*.net *.split)
04:49:40  * reid_changed nick to reid
04:49:40  * reidquit (Changing host)
04:49:40  * reidjoined
05:00:06  * jxsonjoined
05:02:37  * eugenewarequit (Remote host closed the connection)
05:05:02  * eugenewarejoined
05:40:43  * jcrugzzquit (Ping timeout: 248 seconds)
06:08:55  * jxsonquit (Remote host closed the connection)
06:18:26  * jxsonjoined
06:34:58  * esundahlquit (Remote host closed the connection)
06:56:48  * jxsonquit (Remote host closed the connection)
07:05:28  * jcrugzzjoined
07:06:16  * esundahl_joined
07:14:45  * esundahl_quit (Ping timeout: 248 seconds)
07:40:36  * vincentmacquit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
07:44:36  * dominictarrquit (Ping timeout: 245 seconds)
07:49:19  * dominictarrjoined
07:50:39  * eugenewarequit (Remote host closed the connection)
07:58:51  * jcrugzzquit (Ping timeout: 248 seconds)
08:07:34  * dominictarrquit (Quit: dominictarr)
08:25:00  * dominictarrjoined
08:29:03  * dominictarrquit (Client Quit)
08:29:31  * dominictarrjoined
08:33:59  * dominictarrquit (Ping timeout: 260 seconds)
08:42:29  * vincentmacjoined
08:55:47  * ralphtheninjajoined
09:00:43  * vincentmacquit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
09:46:03  * joshwnjquit (Quit: ERC Version 5.3 (IRC client for Emacs))
10:18:27  * kenansulaymanjoined
10:21:24  * kenansulaymanquit (Client Quit)
11:18:13  <levelbot>[npm] [email protected] <http://npm.im/mosca>: The multi-transport MQTT broker for node.js. It supports AMQP, Redis, ZeroMQ, MongoDB or just MQTT. (@matteo.collina)
11:21:45  * eugenewarejoined
11:22:39  * kenansulaymanjoined
11:31:00  * eugenewarequit (Ping timeout: 256 seconds)
11:38:11  <levelbot>[npm] [email protected] <http://npm.im/deferred-leveldown>: For handling delayed-open on LevelDOWN compatible libraries (@rvagg)
12:04:41  <levelbot>[npm] [email protected] <http://npm.im/deferred-leveldown>: For handling delayed-open on LevelDOWN compatible libraries (@rvagg)
12:38:01  * thlorenzjoined
12:42:21  * thlorenzquit (Ping timeout: 252 seconds)
12:57:37  * rudquit (Quit: rud)
13:09:42  * thlorenzjoined
13:11:52  * mikealquit (Quit: Leaving.)
13:16:58  * jmartinsjoined
13:17:42  * mikealjoined
13:17:57  * mikealquit (Client Quit)
13:22:02  * thlorenzquit (Remote host closed the connection)
13:22:54  * mikealjoined
13:31:00  * dominictarrjoined
14:01:42  * tmcwjoined
14:05:46  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
14:06:13  * rudjoined
14:06:14  * rudquit (Changing host)
14:06:14  * rudjoined
14:06:27  * mikealquit (Quit: Leaving.)
14:06:28  <brycebaril>rvagg: does deferred-leveldown also help with iterator?
14:11:54  * dominictarrquit (Quit: dominictarr)
14:14:15  * mikealjoined
14:15:07  * ednapiranhajoined
14:16:12  * jjmalina1joined
14:16:49  * jjmalina1quit (Client Quit)
14:17:07  * jjmalina1joined
14:17:19  * jjmalina1quit (Client Quit)
14:19:00  * jjmalina1joined
14:24:37  * mikealquit (Quit: Leaving.)
14:27:51  * mikealjoined
14:28:03  * vincentmacjoined
14:30:26  * ednapiranhaquit (Remote host closed the connection)
14:32:19  * ramitosjoined
14:44:45  * Acconutjoined
14:45:01  * vincentmacquit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
14:45:58  * dguttmanjoined
14:46:31  * Acconutquit (Client Quit)
14:46:50  * Acconutjoined
14:49:09  * Acconutquit (Client Quit)
14:56:56  * jcrugzzjoined
14:57:52  * Acconutjoined
14:58:23  * Acconutquit (Client Quit)
14:59:27  * jjmalina1quit (Quit: Leaving.)
14:59:37  * jjmalinajoined
14:59:41  * jjmalinaquit (Max SendQ exceeded)
15:11:50  * jjmalinajoined
15:12:34  * ednapiranhajoined
15:13:52  * ednapiranhaquit (Read error: Connection reset by peer)
15:14:20  * ednapiranhajoined
15:17:01  * ednapiranhaquit (Read error: Connection reset by peer)
15:17:11  * ednapiranhajoined
15:50:20  * mikealquit (Quit: Leaving.)
15:58:35  * timoxleyquit (Remote host closed the connection)
16:03:48  * dguttmanquit (Quit: dguttman)
16:04:32  * ryan_ramagejoined
16:09:06  * rudquit (Quit: rud)
16:18:18  * timoxleyjoined
16:21:04  * mikealjoined
16:30:14  * mikealquit (Ping timeout: 240 seconds)
16:42:09  * jcrugzzquit (Ping timeout: 252 seconds)
16:53:30  * jxsonjoined
16:59:47  * dguttmanjoined
17:04:31  * jxsonquit (Remote host closed the connection)
17:04:57  * jxsonjoined
17:05:38  * timoxleyquit (Remote host closed the connection)
17:05:52  * jxsonquit (Remote host closed the connection)
17:06:30  * jxsonjoined
17:27:14  * vincentmacjoined
17:27:28  * mikealjoined
17:31:04  * Acconutjoined
17:31:41  * mikealquit (Ping timeout: 245 seconds)
17:39:50  * ryanjjoined
17:42:54  * vincentmacquit (Read error: Operation timed out)
17:43:20  * Acconutquit (Quit: Acconut)
17:52:25  * kenansulaymanjoined
17:53:12  * jcrugzzjoined
17:58:54  * ryanjquit (Ping timeout: 264 seconds)
18:09:29  * jcrugzzquit (Ping timeout: 240 seconds)
18:11:35  * jcrugzzjoined
18:14:12  * ryanjjoined
18:20:50  * jerrysvjoined
18:21:23  * jerrysvquit (Quit: Leaving...)
18:21:55  * jerrysvjoined
18:22:20  <jerrysv>tmcw: are you coming to rtc?
18:22:37  <tmcw>rtc?
18:23:52  <tmcw>(that's a no - haven't heard about it)
18:24:05  * Acconutjoined
18:24:10  * Acconutquit (Client Quit)
18:34:08  <jerrysv>tmcw: real time conference - happening here in portland
18:34:17  <jerrysv>http://2013.realtimeconf.com
18:37:01  * jmartinsquit (Quit: Konversation terminated!)
18:39:41  * thlorenzjoined
18:40:00  * mikealjoined
18:44:23  * thlorenzquit (Ping timeout: 260 seconds)
18:49:59  * Acconutjoined
18:50:01  * Acconutquit (Client Quit)
19:58:34  * jxsonquit (Remote host closed the connection)
19:59:18  * julianduquejoined
19:59:45  * fallsemojoined
20:02:51  * jxsonjoined
20:04:09  * dominictarrjoined
20:05:39  * jxsonquit (Read error: Operation timed out)
20:33:09  * jxsonjoined
20:41:16  * jxsonquit (Ping timeout: 245 seconds)
21:01:38  * rudjoined
21:01:38  * rudquit (Changing host)
21:01:38  * rudjoined
21:03:42  * werlequit (Ping timeout: 264 seconds)
21:09:09  * ramitosquit (Quit: Textual IRC Client: www.textualapp.com)
21:16:41  * werlejoined
21:18:30  * jxsonjoined
21:19:55  * tmcwquit (Remote host closed the connection)
21:20:31  * tmcwjoined
21:23:03  * werlequit (Ping timeout: 245 seconds)
21:24:43  * tmcwquit (Ping timeout: 245 seconds)
21:26:57  * tmcwjoined
21:28:12  <levelbot>[npm] [email protected] <http://npm.im/level-remove-notfound>: levelUp.get won't callback with notFoundError anymore, instead (null, null) (@alessioalex)
21:28:18  * dominictarrquit (Ping timeout: 264 seconds)
21:28:42  * jxsonquit (Remote host closed the connection)
21:29:26  * jxsonjoined
21:33:18  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
21:34:17  * dominictarrjoined
21:35:03  * kenansulaymanjoined
21:50:48  * jcrugzzquit (Ping timeout: 240 seconds)
22:03:28  * dominictarrquit (Ping timeout: 245 seconds)
22:04:44  * dominictarrjoined
22:10:20  * dropdrivequit (Read error: Operation timed out)
22:10:30  * dkquit (Read error: Operation timed out)
22:10:37  * dropdrivejoined
22:11:50  * Guest27598joined
22:15:23  * dominictarrquit (Quit: dominictarr)
22:36:08  * tmcwquit (Remote host closed the connection)
22:48:56  <mbalho>brycebaril: would be cool to see a multibuffer-stream that did the streaming encoding and parsing on top of the stream API
22:50:26  <brycebaril>the leveldb streaming stuff I was doing this weekend is sort-of that. Reads from levelDOWN iterators into a multibuffer [key, value], and the writeStream only accepts multibuffers, so you never de-buffer things
22:51:25  <brycebaril>at least if I understand you correctly
22:51:38  <mbalho>brycebaril: ah i was just thinking about a stream that you can write as many buffers as you want on one side and you get the same buffers emitted out the other side
22:51:47  <mbalho>brycebaril: your use case seems a little more leveldb specific
22:52:12  <brycebaril>ahh, yeah.
22:52:57  <mbalho>brycebaril: i think multibuffer-stream as i see it would be pretty easy to implement, you read the multibuffer header, wait for the buffers you receive to be longer than the length from the header, then you slice, decode, emit and use leftover data as the start of the buffer when you repeat
22:55:26  <brycebaril>Yeah that wouldn't be too hard
22:57:14  <mbalho>brycebaril: i think that is all i need for dat at the moment. the redis protocol might be a good fit if it turns out that i actually need the full request/response semantics, but i dont know if i do
22:58:41  * ryan_ramagequit (Quit: ryan_ramage)
23:00:25  <rescrv>mbalho: I seem to recall you iterating over items as you insert them. Is that correct?
23:01:25  * fallsemoquit (Quit: Leaving.)
23:01:31  <mbalho>rescrv: im parsing the keys/values out of other data formats (like csv)
23:02:02  <brycebaril>mbalho: sorry - a bit distracted, just wrote myself out of my sudoers file on a vagrant vm and I don't think I have a way back into root :(
23:02:03  * jcrugzzjoined
23:02:14  <mbalho>lol
23:02:24  <rescrv>mbalho: gotcha. I knew someone was doing it, but my memory is failing me as to who it was.
23:04:15  * jcrugzzquit (Client Quit)
23:05:40  * jjmalinaquit (Quit: Leaving.)
23:05:46  <brycebaril>ok, so yeah, essentially stream.Transform that accepts multibuffers and emits the decoded buffers. Sure! That'd be super easy.
23:06:13  <brycebaril>I could probably do that tonight after yoga
23:07:08  * ednapiranhaquit (Remote host closed the connection)
23:07:12  <mbalho>brycebaril: or even if it just accepted buffers, and the multibuffers was just something it did internally
23:07:33  <mbalho>brycebaril: that way the API is just buffers in, buffers out
23:07:46  <brycebaril>what would the input be in that case though? streams are already buffers in, buffers out
23:08:38  <mbalho>brycebaril: there's no guarantee that the transport preserves the buffer lengths, it could combine them etc
23:09:07  <mbalho>brycebaril: so multibuffer-stream is basically just a guaranteed 'these buffers are the exact same ones that were written in'
23:12:56  <brycebaril>Hmm. So E.g. guarding against some transport that does something like only emits 256 byte chunks or something, then it would reassemble anything longer than what got chunked
23:13:34  <mbalho>yea
23:14:11  <mbalho>brycebaril: my requirement is that if i .write 3 buffers that are each 10 bytes that i get 3 buffers that are each 10 bytes on the other end. stdin/stdout for example will merge all 3 into one 30 byte buffer
23:14:26  <mbalho>tcp also will merge, most transports will merge
23:16:20  <brycebaril>right. Sure, so in addition to the Transform I mentioned above, a Transform that simply calls multibuffer.encode (not currently exported) on each chunk to add the metadata.
23:18:26  <mbalho>i guess if you .write multiple times in the same tick it should batch them in one multibuffer? i dunno if that makes sense
23:19:11  <mbalho>cause for example if i'm parsing a csv, the fs will give me a chunk that might contain 10 rows, and those 10 rows will become 10 buffers, so i might as well batch them in a length 10 multibuffer
23:19:18  <mbalho>not sure the best way to achieve that
23:19:35  <mbalho>or if batching is actually faster, i'm just guessing
23:23:37  <brycebaril>it should be multibuffer(multibuffer) safe, so you could do .write(multibuffer.pack(csv_buffer_array))
23:28:29  <mbalho>brycebaril: ah good point
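The batching idea just discussed (pack all the rows from one CSV chunk into a single frame, and rely on the format being safe to nest) can be sketched with a plain pack/unpack pair. `multibuffer.pack` is the real API name mentioned above, but the helpers here are hypothetical stand-ins using an assumed 4-byte big-endian length prefix, not multibuffer's actual varint format.

```javascript
// Hypothetical pack/unpack pair: concatenate many buffers into one
// length-prefixed blob, and recover the original array on the other side.
// Because the blob is itself just a buffer, it can be packed again --
// the "multibuffer(multibuffer) safe" property discussed above.
function pack(buffers) {
  const parts = [];
  for (const b of buffers) {
    const header = Buffer.alloc(4);
    header.writeUInt32BE(b.length, 0);
    parts.push(header, b);
  }
  return Buffer.concat(parts);
}

function unpack(blob) {
  const out = [];
  let offset = 0;
  while (offset < blob.length) {
    const len = blob.readUInt32BE(offset);
    out.push(blob.slice(offset + 4, offset + 4 + len));
    offset += 4 + len;
  }
  return out;
}
```

With this shape, a CSV parser could pack each chunk's rows into one blob and write that blob as a single frame, so one fs chunk costs one frame rather than ten.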
23:31:15  <brycebaril>mbalho: happy to take a stab at it later this evening, but I'm out for now
23:31:18  <mbalho>brycebaril: ah good point
23:32:29  * jerrysvquit (Remote host closed the connection)
23:33:31  * esundahljoined
23:33:40  <mbalho>oops
23:46:08  * jxsonquit (Read error: Connection reset by peer)
23:46:25  * jxsonjoined
23:51:06  * eugenewarejoined