00:07:02  <mikeal>can i use binary keys with sublevel?
00:15:26  <thlorenz>brycebaril: using leveldown-hyper - how can I tell which buffersize it is actually using?
00:16:05  <thlorenz>I have db.options.writeBufferSize set, but have no guarantee that got picked up
00:16:57  <thlorenz>here is what I have levelup(dblocation, { db: leveldown, valueEncoding: 'json', writeBufferSize: 16 * 1024 * 1024 /* 16MB */ }, function (err, db) { .. })
00:17:02  <thlorenz>leveldown being hyper
00:18:25  <rvagg>the options object is passed straight down into leveldown, which in turn passes the relevant options to leveldb, hyperlevel in this case
00:18:31  <rvagg>so it should be what you've given it
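The pass-through rvagg describes can be sketched in plain JavaScript. This is an illustration of the mechanism, not levelup's actual source: `pickLeveldownOptions` and the key list are made-up stand-ins for the layer that forwards LevelDB-relevant options down from the levelup options object.

```javascript
// Sketch (assumption, not real levelup code): levelup hands its whole options
// object down, and the leveldown layer picks out the keys LevelDB understands.
var LEVELDOWN_KEYS = [
  'createIfMissing', 'errorIfExists', 'compression',
  'cacheSize', 'writeBufferSize', 'blockSize', 'maxOpenFiles'
];

function pickLeveldownOptions (options) {
  var picked = {};
  LEVELDOWN_KEYS.forEach(function (key) {
    if (options[key] !== undefined) picked[key] = options[key];
  });
  return picked;
}

var options = {
  valueEncoding: 'json',              // consumed by levelup itself
  writeBufferSize: 16 * 1024 * 1024   // 16MB, forwarded down to LevelDB
};

var forwarded = pickLeveldownOptions(options);
console.log(forwarded.writeBufferSize); // 16777216
```

The point of the sketch: encoding options stay at the levelup layer, while store-level options like `writeBufferSize` travel down unchanged, which is why setting it in the levelup constructor is enough.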
00:18:58  <thlorenz>rvagg: cool - that's what I assumed, but I wasn't sure - didn't see any improvements setting this
00:19:13  <rescrv>thlorenz: assuming rvagg didn't make a typo (a good assumption), hyperleveldb picks it up
00:19:26  <thlorenz>so maybe it was the default for hyper all along since that's the suggested value
00:19:42  <rescrv>thlorenz: 16MB is not some magic value that gives improvements. Bigger values just have diminishing returns and introduce more noise
00:19:46  <rescrv>no, it's not the default
00:20:04  <thlorenz>anyhoo - seeing basically no improvements - giving mbalho's level-batch a go
00:20:16  * jmartinsjoined
00:20:19  <rescrv>thlorenz: how big are your objects?
00:20:26  <rescrv>eventually, your disk becomes the bottleneck.
00:20:35  <rescrv>On my disk, it happens around 2KB/obj
00:20:36  <thlorenz>I'm not entirely sure - not that huge though
00:21:10  <thlorenz>I mean after pretty much all data has been pulled I'm at ~40MB total
00:21:23  <rescrv>are you using snappy?
00:21:39  <thlorenz>I suppose yes since I didn't change the compression defaults
00:22:14  <rvagg>https://github.com/rvagg/node-leveldown/blob/hyper-leveldb/src/database.cc#L206
00:22:17  <rvagg>https://github.com/rvagg/node-leveldown/blob/hyper-leveldb/src/database_async.cc#L38
00:22:57  <rescrv>rvagg: I trusted you did it correctly and was just being pedantic
00:23:00  <rvagg>substack: levelup is stuck on "failed build" since the readstream stuff, can we get this fixed? https://travis-ci.org/rvagg/node-levelup/
00:23:13  <rvagg>rescrv: I don't usually trust myself that much... so...
00:23:17  <substack>uh oh
00:27:49  <levelbot>[npm] [email protected] <http://npm.im/level-sets>: Buckets of unique keys for level. (@mikeal)
00:27:59  <mikeal>:)
00:28:34  <mikeal>had to ditch sublevel
00:28:39  <mikeal>didn't seem to work with binary keys
00:28:50  * tmcwjoined
00:31:11  * jxsonquit (Remote host closed the connection)
00:32:03  * jmartinsquit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
00:32:25  <thlorenz>mbalho: running with level-batcher now, so far so good - will know more in about 10mins once the db grew
00:32:57  <thlorenz>but so far memory footprint looks much better and batching doesn't seem to hang anymore
00:33:57  * tmcwquit (Ping timeout: 276 seconds)
00:35:29  <thlorenz>OMG mbalho you are my hero for today :) this little gem solved all my problems :)
00:35:33  <thlorenz>I'm now with mikeal where github is my bottleneck
00:40:39  * jxsonjoined
00:40:50  <thlorenz>just to give some numbers: ~5mins for something that took 30mins+ previously
00:41:31  <thlorenz>staying below 300MB memory and came down to 160MB (before it climbed up to 700MB+ and never came fully back down)
00:43:14  * esundahl_joined
00:44:58  <rescrv>rvagg: I introduced a perf regression in the latest hyperleveldb. I'm pushing a fix tonight UTC-0400
00:50:58  <rvagg>rescrv: ok, I'm about to start some longhaul travel but will organise a new release on our end in a couple of days (if I remember, otherwise hopefully someone in here will remind me!)
00:53:45  <rescrv>rvagg: no real hurry. I may take a day or two to really push the perf to beyond that of lmdb
00:53:55  <rvagg>heh, nice!
00:54:21  <rescrv>reads take 10x longer than writes for this benchmark
00:54:27  <rescrv>I think it should be better than that
00:54:29  <mikeal>wow
00:54:30  * jcrugzzjoined
00:54:52  <rvagg>hopefully you see some interest in hyper* after nodeconf.eu, I'm sure we'll have reason to talk about hyperleveldb/leveldown-hyper
00:55:44  <mikeal>nodeconf.eu is going to be amazing
00:56:35  <rescrv>rvagg: would you be the best person to get some advice on for wrapping HyperDex up for node.js? I just want advice on what the end user API should be and a pointer to the docs for the correct version of node
00:56:57  <rescrv>I've seen you mention that different versions are incompatible, and I can only spare the time to target one
00:57:09  <rvagg>rescrv: yeah, me, or mikeal, or mbalho probably
00:57:21  <rvagg>rescrv: always target latest stable release, in this case 0.10.x
00:57:38  * esundahl_quit (Remote host closed the connection)
00:57:54  <rvagg>it's just that if you're doing compiled addons then you need to care about the unstable releases 0.11.x cause they are introducing a lot of API pain, but we're working on making that nicer
00:58:10  * esundahljoined
00:58:11  <rvagg>rescrv: are you just going to release a JS package though, or will you be bundling C++?
00:58:35  <rescrv>rvagg: it's both. I'm wrapping C in JS
00:59:10  <rvagg>rescrv: ok, so you'll need to care about the incompatibilities or else you'll be stranded when 0.12 is released and everything breaks
00:59:27  <rvagg>see https://github.com/rvagg/nan which is being used by a bunch of major libs to get wider compatibility
00:59:35  <rescrv>when's the target release for 0.12?
00:59:57  <rvagg>rescrv: err... a couple of months perhaps...
01:00:24  * jxsonquit (Remote host closed the connection)
01:00:25  <rvagg>https://github.com/tjfontaine/node-addon-layer this one may become the "official" way of doing c/c++ interaction with v8/node
01:00:37  <rvagg>for now, NAN is the best option tho I think
01:01:09  <rvagg>rescrv: pull me into a github discussion and/or repo and I'll be happy to contribute or advise, for now I really need to shutdown and get off to the airport
01:01:27  <rescrv>rvagg: have a safe trip. I'll followup via github
01:02:29  * esundahlquit (Ping timeout: 248 seconds)
01:05:46  <mbalho>thlorenz: sweet glad to hear it worked out. i released level-batcher 0.0.2 today, make sure you upgrade
01:06:00  <mbalho>thlorenz: (dont remember if i released before or after i told you about it)
01:06:24  <thlorenz>mbalho: I'm running 0.0.3
01:06:29  <thlorenz>sorry 0.0.2
01:06:32  <mbalho>cool
01:07:13  <thlorenz>mbalho: saved my day - I spent over a day on tracking this down (well learning how to read heapdumps as well)
01:07:25  * mikealquit (Quit: Leaving.)
01:07:42  <thlorenz>mbalho: since all the retained objects were related to requests I looked in the wrong corner at first
01:09:29  <mbalho>thlorenz: can you leave a comment on https://github.com/maxogden/level-bulk-load/issues/1 summarizing your experience/use case? i think it would be useful for others watching that thread
01:10:06  * mikealjoined
01:11:03  * ednapiranhajoined
01:14:55  * soldairquit (Ping timeout: 250 seconds)
01:15:16  * mikealquit (Quit: Leaving.)
01:17:45  <thlorenz>mbalho: I will do - I also summarized problem and solution here: https://github.com/thlorenz/valuepack-mine/issues/2#issuecomment-23756766
01:22:21  <mbalho>cool
01:47:39  * timoxleyquit (Remote host closed the connection)
01:50:57  * disordinaryjoined
01:59:39  * timoxleyjoined
02:04:50  * ednapiranhaquit (Remote host closed the connection)
02:08:43  * dguttmanquit (Quit: dguttman)
02:09:43  * dguttmanjoined
02:24:43  * julianduquequit (Remote host closed the connection)
02:25:41  * julianduquejoined
03:03:50  * dguttmanquit (Quit: dguttman)
03:04:05  * ramitosquit (Ping timeout: 248 seconds)
03:22:03  * thlorenzchanged nick to thlorenz_zz
03:42:03  * esundahljoined
03:55:54  * werlequit (Ping timeout: 264 seconds)
04:24:14  * tmcwjoined
04:25:33  * werlejoined
04:28:40  * disordinaryquit (Ping timeout: 264 seconds)
04:30:28  * werlequit (Ping timeout: 264 seconds)
04:33:57  * dguttmanjoined
04:35:48  <levelbot>[npm] [email protected] <http://npm.im/eksi-server>: Eksi Sozluk JSON API Server (@azer)
04:48:45  * julianduquequit (Ping timeout: 245 seconds)
04:48:52  * dguttmanquit (Quit: dguttman)
04:49:46  * tmcwquit (Remote host closed the connection)
04:50:15  * tmcwjoined
04:54:42  * tmcwquit (Ping timeout: 256 seconds)
04:56:45  * julianduquejoined
05:06:06  * julianduquequit (Quit: leaving)
05:20:45  * tmcwjoined
05:24:11  * mikealjoined
05:24:30  * mikealquit (Client Quit)
05:27:27  * mikealjoined
05:31:15  * tmcwquit (Ping timeout: 260 seconds)
05:57:05  * tmcwjoined
06:02:12  * tmcwquit (Ping timeout: 276 seconds)
06:18:40  * esundahlquit (Remote host closed the connection)
06:19:12  * esundahljoined
06:23:20  * esundahlquit (Ping timeout: 245 seconds)
06:24:55  * jcrugzzquit (Ping timeout: 260 seconds)
06:57:34  * tmcwjoined
06:57:54  * ehdquit (Ping timeout: 240 seconds)
06:57:54  * dkquit (Excess Flood)
06:59:39  * Guest49098joined
06:59:39  * Guest49098quit (Max SendQ exceeded)
07:02:09  * dk_joined
07:02:39  * tmcwquit (Ping timeout: 276 seconds)
07:18:25  * jcrugzzjoined
07:22:20  * dominictarrjoined
07:28:19  * tmcwjoined
07:33:12  * tmcwquit (Ping timeout: 276 seconds)
08:44:53  * dominictarrquit (Quit: dominictarr)
08:54:38  * ehdjoined
08:59:57  * fb55joined
09:00:34  * tmcwjoined
09:05:16  * tmcwquit (Ping timeout: 264 seconds)
09:31:19  * tmcwjoined
09:32:35  * timoxleyquit (Remote host closed the connection)
09:33:10  * timoxleyjoined
09:35:46  * tmcwquit (Ping timeout: 256 seconds)
09:36:34  * dominictarrjoined
09:38:00  * timoxleyquit (Ping timeout: 276 seconds)
09:39:52  * BruNeXquit (Ping timeout: 256 seconds)
09:41:56  * fb55quit (Remote host closed the connection)
09:47:04  * timoxleyjoined
09:47:22  * fb55joined
09:49:03  * fb55quit (Remote host closed the connection)
09:56:18  <levelbot>[npm] [email protected] <http://npm.im/level-agile>: multilevel client and server with various data transforms for inserting into leveldb (@jcrugzz)
09:56:20  * fb55joined
10:02:04  * tmcwjoined
10:06:01  * timoxleyquit (Remote host closed the connection)
10:06:15  * tmcwquit (Ping timeout: 245 seconds)
10:24:06  * rudquit (Quit: rud)
10:32:19  * BruNeXjoined
10:32:37  * fb55quit (Remote host closed the connection)
10:32:49  * tmcwjoined
10:33:44  * fb55joined
10:35:50  * jcrugzzquit (Ping timeout: 256 seconds)
10:37:48  * tmcwquit (Ping timeout: 276 seconds)
10:40:23  * fb55quit (Remote host closed the connection)
10:46:05  * rudjoined
10:46:06  * rudquit (Changing host)
10:46:06  * rudjoined
10:46:51  * Acconutjoined
10:47:08  * Acconutquit (Client Quit)
10:49:18  * Acconutjoined
10:50:32  * Acconutquit (Client Quit)
10:54:50  * rudquit (Quit: rud)
11:00:20  * Acconutjoined
11:00:23  * Acconutquit (Client Quit)
11:03:33  * tmcwjoined
11:08:08  * tmcwquit (Ping timeout: 256 seconds)
11:22:51  * tmcwjoined
11:29:25  * Acconutjoined
11:29:45  * Acconutquit (Client Quit)
11:34:49  * tmcwquit (Remote host closed the connection)
11:35:16  * tmcwjoined
11:36:55  * werlejoined
11:39:52  * Acconutjoined
11:40:18  * tmcwquit (Ping timeout: 264 seconds)
11:42:15  * Acconutquit (Client Quit)
11:49:04  * timoxleyjoined
11:53:51  * timoxleyquit (Ping timeout: 276 seconds)
11:56:13  * timoxleyjoined
12:03:48  * fb55joined
12:05:09  * timoxleyquit (Remote host closed the connection)
12:05:45  * timoxleyjoined
12:08:08  * thlorenz_zzquit (Remote host closed the connection)
12:08:15  * Acconutjoined
12:08:46  * Acconutquit (Client Quit)
12:10:00  * timoxleyquit (Ping timeout: 245 seconds)
12:19:25  * thlorenz_zzjoined
12:20:58  * kenansulaymanjoined
12:25:44  * fb55_joined
12:26:06  * fb55quit (Read error: Connection reset by peer)
12:32:26  * tmcwjoined
12:32:43  * timoxleyjoined
12:34:10  * tmcwquit (Remote host closed the connection)
12:36:16  * scttnlsnjoined
12:46:41  <scttnlsn>is there a way to create a readstream/iterator that is not a snapshot? i.e. will continue to "tail" the database?
12:49:45  <rescrv>scttnlsn: not in stock LevelDB
12:50:11  <kenansulayman>scttnlsn nope
12:50:31  <kenansulayman>You might want a Level livestream?
12:51:06  <kenansulayman>Just hook the get / put and emit events
12:52:30  <scttnlsn>kenansulayman: i don't know, trying to implement a persistent queue with leveldb
12:52:46  <scttnlsn>except items must be "acked" before they are removed
12:52:49  <kenansulayman>Aren't there already modules for that?
12:53:03  <scttnlsn>i haven't seen any
12:54:47  <kenansulayman>scttnlsn Hasn't juliangruber built a queue?
12:55:16  <juliangruber>scttnlsn: level-jobs and level-schedule
12:55:57  <kenansulayman>^+1
12:57:09  <scttnlsn>but when an item is dequeued it's gone forever, right? i need it to be re-enqueued if the worker does not send an ack after a certain time (like Amazon SQS)
12:57:13  <scttnlsn>is this possible with level?
13:01:38  <scttnlsn>i suppose two sublevels could work...one for enqueued items, one for dequeued (and non-acked items)
13:01:51  <rescrv>scttnlsn: it sounds like you don't need that iteration to build what you need
13:02:26  <scttnlsn>rescrv: yeah, just keep reading the first item off of the enqueued items sublevel
13:02:42  <rescrv>your two-sublevel approach will work
13:03:38  <scttnlsn>yeah, i guess when the process starts up it will just need to move everything from the dequeued sublevel back to the end of the enqueued sublevel
13:10:48  <rescrv>not necessarily
13:10:53  <rescrv>it should after a certain timeout
13:15:05  <rescrv>that gives the workers time to retry their "done" message, assuming it's just a blip that's shorter than the average work unit
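The two-sublevel ack queue discussed above can be sketched with plain objects standing in for the 'enqueued' and 'dequeued' sublevels. Everything here is illustrative: `AckQueue` and its method names are made up, and a real implementation would persist both maps in leveldb via sublevel batches.

```javascript
// In-memory sketch of the SQS-style queue: dequeued items wait in a second
// store until acked, and un-acked items go back after a visibility timeout.
function AckQueue (visibilityTimeout) {
  this.timeout = visibilityTimeout; // ms a worker has to ack
  this.enqueued = {};               // stand-in for the 'enqueued' sublevel
  this.dequeued = {};               // stand-in for the 'dequeued' sublevel
  this.seq = 0;
}

AckQueue.prototype.enqueue = function (item) {
  var key = String(++this.seq);
  this.enqueued[key] = item;
  return key;
};

AckQueue.prototype.dequeue = function (now) {
  // integer-like object keys iterate in ascending order, mimicking
  // leveldb's key-ordered iteration for this sketch
  var keys = Object.keys(this.enqueued);
  if (keys.length === 0) return null;
  var key = keys[0];
  this.dequeued[key] = { item: this.enqueued[key], takenAt: now };
  delete this.enqueued[key];
  return { key: key, item: this.dequeued[key].item };
};

AckQueue.prototype.ack = function (key) {
  delete this.dequeued[key];        // worker finished: gone for good
};

// move anything un-acked past the timeout back onto the queue
AckQueue.prototype.requeueExpired = function (now) {
  var self = this;
  Object.keys(this.dequeued).forEach(function (key) {
    if (now - self.dequeued[key].takenAt >= self.timeout) {
      self.enqueued[key] = self.dequeued[key].item;
      delete self.dequeued[key];
    }
  });
};
```

As rescrv notes, `requeueExpired` should only run after the timeout, so workers get a chance to retry their "done" message before the item is handed to someone else.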
13:24:16  * thlorenz_zzquit (Remote host closed the connection)
13:24:36  * Acconutjoined
13:25:09  * thlorenz_zzjoined
13:26:24  * Acconutquit (Client Quit)
13:32:22  * thlorenz_zzquit (Remote host closed the connection)
13:55:06  * tmcwjoined
13:59:27  * fb55_quit (Remote host closed the connection)
14:01:15  * rudjoined
14:01:26  * rudquit (Changing host)
14:01:26  * rudjoined
14:08:53  * thlorenzjoined
14:08:58  * thlorenzquit (Remote host closed the connection)
14:09:10  * thlorenzjoined
14:09:34  * rudquit (Quit: rud)
14:16:26  * fallsemojoined
14:17:09  * dguttmanjoined
14:26:06  * werlequit (Ping timeout: 262 seconds)
14:35:14  * timoxleyquit (Remote host closed the connection)
14:36:57  * jjmalinajoined
14:46:19  * werlejoined
14:47:32  * ramitosjoined
14:48:24  * fb55joined
14:50:26  * thlorenzquit (Remote host closed the connection)
14:50:41  * thlorenzjoined
14:53:01  * dk_changed nick to dk
14:55:23  * ednapiranhajoined
14:57:25  * esundahljoined
15:00:12  * daurnimatorjoined
15:03:32  * thlorenzquit (Remote host closed the connection)
15:03:54  * thlorenzjoined
15:05:35  * mikealquit (Quit: Leaving.)
15:09:08  * thlorenzquit (Remote host closed the connection)
15:09:32  * thlorenzjoined
15:10:21  * thlorenzquit (Remote host closed the connection)
15:10:39  * thlorenzjoined
15:14:01  * timoxleyjoined
15:15:02  * rudjoined
15:15:02  * rudquit (Changing host)
15:15:02  * rudjoined
15:16:18  * mikealjoined
15:32:38  * thlorenzquit (Ping timeout: 240 seconds)
15:41:56  * rickbergfalkjoined
15:46:40  * ryan_ramagejoined
15:47:23  * mikealquit (Quit: Leaving.)
15:55:51  * fb55quit (Remote host closed the connection)
15:57:41  * scttnlsnquit (Remote host closed the connection)
15:59:05  * thlorenzjoined
16:35:04  * rickbergfalkquit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
16:44:36  * fb55joined
16:55:17  * jcrugzzjoined
16:55:17  * jcrugzzquit (Client Quit)
16:55:29  * jcrugzzjoined
16:58:24  * ryan_ramagequit (Quit: ryan_ramage)
16:58:40  * jcrugzzquit (Client Quit)
17:00:36  * jcrugzzjoined
17:10:42  * dominictarrquit (Quit: dominictarr)
17:12:00  * mikealjoined
17:20:42  * jxsonjoined
17:20:57  * tmcwquit (Remote host closed the connection)
17:21:29  * tmcwjoined
17:25:31  * tmcwquit (Ping timeout: 240 seconds)
17:29:02  * jerrysvjoined
17:33:12  * ryan_ramagejoined
17:37:37  * dominictarrjoined
17:39:04  * tmcwjoined
17:43:13  <thlorenz>mbalho: did you see https://github.com/maxogden/level-batcher/pull/1 ?
17:44:41  <thlorenz>I didn't run into this last night cause I never called `write`, but when I could see straight again this morning I found that level-batcher breaks for sublevels
17:45:26  <thlorenz>so btw the results I got yesterday are meaningless since I never actually persisted :( way to code when beyond exhausted
17:46:21  * jxsonquit (Remote host closed the connection)
17:46:38  <mbalho>thlorenz: lol
17:47:13  <thlorenz>mbalho: :) I'll try again tonight since now level-batcher will work for sublevels
17:48:06  <mbalho>thlorenz: published as 0.0.3
17:48:17  <levelbot>[npm] [email protected] <http://npm.im/level-batcher>: stream designed for leveldb that you write objects to, and it emits batches of objects that are under a byte size limit (@maxogden)
17:48:19  <thlorenz>mbalho: cool, what about tests though?
17:48:30  * jxsonjoined
17:48:30  <thlorenz>seems like we should have some that catch these
17:48:57  <thlorenz>I also added a commit comment cause I think you are losing the chunk that pushes the size over the limit
17:49:58  <thlorenz>mbalho: https://github.com/maxogden/level-batcher/blob/master/index.js#L38-L46 doesn't seem to push onto current batch in that case so it gets lost
17:50:44  <mbalho>thlorenz: i always just use tape so it works in browser
17:51:01  <thlorenz>mbalho: ok sounds good - will whip up some tests tonite :)
17:51:18  <mbalho>thlorenz: also i think that highlighted code is fine
17:51:51  <mbalho>thlorenz: it isnt pushing it over the limit, its testing to see if it will push it over the limit
17:52:10  <thlorenz>but what about that chunk that pushed the size over the limit? in that case it never gets added to the batch
17:52:32  <thlorenz>since you just call write (unless it was bigger than the limit itself)
17:53:09  <mbalho>thlorenz: oh i see
17:53:16  <thlorenz>only chunks that don't push it over the limit get added to batch, so that unlucky one that happens to make the size go over the limit is lost
17:53:42  <thlorenz>mbalho: I'll write a failing test tonite and if I'm right will fix it
17:54:02  <mbalho>thlorenz: good catch. i had a way more complicated earlier version of this module that handled that case but i must have forgotten to port it over
17:54:06  <mbalho>thlorenz: ill write a test :D
17:54:14  <thlorenz>cool :)
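The bug thlorenz describes — the chunk that pushes the size over the limit never making it into a batch — comes down to the splitting rule. One way the fix could look, as a pure function (`splitIntoBatches` and `getSize` are illustrative names, not level-batcher's actual code):

```javascript
// A chunk that would push the current batch over the limit must *start the
// next batch*, not be dropped on the floor.
function splitIntoBatches (chunks, limit, getSize) {
  var batches = [];
  var current = [];
  var currentSize = 0;
  chunks.forEach(function (chunk) {
    var size = getSize(chunk);
    if (current.length > 0 && currentSize + size > limit) {
      batches.push(current);          // flush the full batch...
      current = [];                   // ...and carry this chunk over
      currentSize = 0;
    }
    current.push(chunk);              // an oversized chunk gets a batch alone
    currentSize += size;
  });
  if (current.length > 0) batches.push(current);
  return batches;
}
```

A failing test for the original behavior is then easy to state: every input chunk must appear in exactly one output batch, and no batch except a single-oversized-chunk one may exceed the limit.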
17:57:10  <thlorenz>mbalho: also should we assume that you'll always have a key and value and remove that check and alternative path?
17:57:25  <thlorenz>since its 'level'-batcher
17:57:41  <mbalho>ah yea
17:58:12  <thlorenz>ok, so that makes things easier - although your current tests will break
17:58:14  <mbalho>we could also rename it to batch-stream
17:58:23  <mbalho>and make it non level specific
17:58:46  <thlorenz>I thought of chunker or something like that, but then it'd be weird if it knows about key/value
17:58:52  <mbalho>yea good point
17:59:12  <mbalho>thlorenz: also i think that try catch you added might have some perf impact
17:59:31  <thlorenz>unless you want to make it configurable like I can pass in the prop names that decide on what gets used to determine size
17:59:44  <thlorenz>mbalho: only if an error occurs though right?
17:59:52  * ryan_ramagequit (Quit: ryan_ramage)
17:59:57  <mbalho>thlorenz: not sure, i dont usually microbenchmark stuff
18:00:13  <mbalho>i just thought i heard someone say that unnecessary try catches arent good
18:00:29  <thlorenz>mbalho: well we could remove that if we are sure that we'll deal with well formed level key/value s
18:01:00  * ryan_ramagejoined
18:01:02  <thlorenz>I was gonna write a small wrapper around this - something like level-batchify
18:01:25  <mbalho>thlorenz: what would that do?
18:01:41  <thlorenz>you'll give it a db and entries - it will add 'type': 'put', determine writeBufferSize and then call into batcher
18:01:50  <thlorenz>i.e. batchify(db, batch)
18:02:12  <thlorenz>oh batchify(db, batch, doneCb) actually
18:02:29  <mbalho>wouldnt you want a stream api?
18:02:45  <mbalho>cause entries has to be a stream anyway
18:03:01  <thlorenz>it could do either, like return a stream if you don't provide a cb
18:03:25  <thlorenz>well actually in my case I aggregate all batches before hand and write them one by one into batcher
18:03:36  <thlorenz>maybe I oughta change that ;)
18:03:51  <mbalho>yea seems like an unnecessary step
18:04:11  <thlorenz>makes sense - just my code was setup to work with leveldb.batch
18:05:02  <mbalho>thlorenz: well if youre gonna write a wrapper anyway then i think it makes more sense to make level-batcher generic
18:05:10  * mikealquit (Quit: Leaving.)
18:05:31  <mbalho>thlorenz: we can just have functions as options for selecting the object/bytelength
18:05:59  <thlorenz>mbalho: make it something like chunker(limit, [ 'sizeField1', 'sizeField2' ]) ?
18:06:00  <mbalho>thlorenz: or i can npm owners add you to level-batcher and then make a new module called batch-stream
18:06:39  <thlorenz>so level-batcher becomes a wrapper on the latter - makes sense
18:07:03  <thlorenz>so batch-stream needs to allow me to pass what fields to use to determine object size
18:07:45  <mbalho>thlorenz: well probably just chunker(limit, getByteLength)
18:08:09  <thlorenz>ah yeah that's more extensible
18:08:10  <mbalho>so you can override the default function
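The `chunker(limit, getByteLength)` shape mbalho suggests might look like this. Purely hypothetical: only the size hook is sketched, with a default that measures the JSON-encoded key and value, overridable by the caller.

```javascript
// Default: measure what leveldb would actually store for a JSON-encoded entry.
function defaultByteLength (obj) {
  return Buffer.byteLength(JSON.stringify(obj.key)) +
         Buffer.byteLength(JSON.stringify(obj.value));
}

function chunker (limit, getByteLength) {
  getByteLength = getByteLength || defaultByteLength;
  // batching logic would live here; this sketch only exposes the size hook
  return { limit: limit, sizeOf: getByteLength };
}

// Override: trust a precomputed .length field instead of re-encoding.
var custom = chunker(16 * 1024 * 1024, function (obj) { return obj.length; });
```

Passing a function rather than a list of field names keeps the generic module (batch-stream) ignorant of key/value, while the level-specific wrapper just supplies `defaultByteLength`.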
18:08:20  * Acconutjoined
18:08:39  * Acconutquit (Client Quit)
18:09:36  <thlorenz>good plan, let me know once level-batcher just calls into batch-stream and I'll take over from there
18:09:48  <thlorenz>most likely will provide streaming and callback API
18:09:50  <mbalho>ok youre an owner on level-batcher npm + github, also if you dont wanna use my repo you can just fork it and change the repository link on npm
18:10:08  <thlorenz>mbalho: you had no link :P
18:10:12  <mbalho>oh lol
18:10:39  <mbalho>i usually do npm init again after i git remote add origin so it fills all that stuff in for me
18:11:34  <thlorenz>mbalho: this does it all for me https://github.com/thlorenz/dotfiles/blob/master/bash/functions/ngen.sh#L8
18:11:51  <thlorenz>waiting on Raynos to publish something similar as an npm package
18:11:58  <mbalho>thlorenz: ah cool
18:11:59  <mbalho>yea
18:12:29  <mbalho>thlorenz: oh also did you have thoughts on the .next() api?
18:12:34  <thlorenz>mbalho: once you are done I'll just pull your changes into my fork and keep working from there
18:12:52  <thlorenz>mbalho: not sure looks like a pull-stream juxtaposed onto streams1
18:13:01  <thlorenz>but I guess it works fine
18:13:05  <mbalho>thlorenz: yea, it was necessary for the semantics i needed though
18:13:14  <mbalho>thlorenz: cause i dont want to write too much data all at once to leveldb
18:13:32  <thlorenz>mbalho: it makes total sense to me and is very intuitive to use
18:13:38  <mbalho>thlorenz: ok cool
18:13:53  <mbalho>thlorenz: i was thinking maybe i could have it listen to pause and resume events and know when to send more data
18:13:57  <mbalho>thlorenz: but i havent tried that yet
18:14:17  * fb55quit (Remote host closed the connection)
18:14:27  <thlorenz>I'd keep the explicitness over adding magic that may break
18:14:38  <mbalho>good call
18:14:58  <thlorenz>also calling it 'next' makes sure people know that this is not part of a node streams implementation
18:15:19  <thlorenz>I myself am very confused what to call on streams nowadays
18:16:01  <mbalho>yea me too
18:48:17  * esundahlquit (Remote host closed the connection)
18:48:43  * esundahljoined
18:49:35  * esundahl_joined
18:49:49  * rudquit (Quit: rud)
18:52:55  * esundahlquit (Ping timeout: 245 seconds)
18:57:18  * jmartinsjoined
19:07:44  * rudjoined
19:07:44  * rudquit (Changing host)
19:07:44  * rudjoined
19:09:49  * ryan_ramagequit (Quit: ryan_ramage)
19:25:27  <thlorenz>so where is everyone hanging out until the buses pick people up for nodeconf?
19:25:43  <thlorenz>I'm coming in Sunday morning - would like to do some level related hacking
19:26:50  <mbalho>thlorenz: not sure yet but you should hop in #nerdtracker
19:27:05  <mbalho>thlorenz: me, isaacs, and substack use it for oakland coordination but we'll all be in dublin too
19:27:13  <thlorenz>ah, will do, thanks
19:28:00  <substack>I'm going to be down in waterford with rvagg and dominictarr
19:28:15  <substack>in chicago right now
19:29:01  <dominictarr>thlorenz: there will be plenty of people in dublin at that time
19:29:52  <thlorenz>dominictarr: cool, I'll just connect with you guys once I'm down there
19:29:58  <substack>I'm writing a lib for my talk that will track ALL the keys and key ranges on a page
19:30:27  * mikealjoined
19:30:39  <substack>trying to get meteor-style reactive updates working with a few tiny libraries in a very tiny example
19:33:08  <substack>might be using level-reactive for some of the client-side parts but I'm not sure yet
19:44:29  <mikeal>substack: you're doing that declarative markup stuff?
19:47:52  * ednapiranhaquit (Remote host closed the connection)
19:53:17  * tmcwquit (Remote host closed the connection)
19:53:51  * tmcwjoined
19:57:55  * tmcwquit (Ping timeout: 245 seconds)
19:59:55  * julianduquejoined
20:03:31  * tmcwjoined
20:05:05  <juliangruber>substack: re reactive, have you seen level-list too?
20:09:57  * ryan_ramagejoined
20:12:04  * ryan_ramagequit (Client Quit)
20:15:01  * timoxleyquit (Remote host closed the connection)
20:17:33  * rickbergfalkjoined
20:18:58  * ednapiranhajoined
20:24:47  * ednapiranhaquit (Read error: Connection reset by peer)
20:25:01  * ednapiranhajoined
20:25:03  * ryan_ramagejoined
20:28:29  * scttnlsnjoined
20:37:58  * scttnlsnquit (Remote host closed the connection)
20:44:35  * jcrugzzquit (Ping timeout: 245 seconds)
20:46:02  * timoxleyjoined
20:48:06  * dguttmanquit (Read error: Connection reset by peer)
20:48:34  * dguttmanjoined
20:49:02  * brianloveswordsquit (Excess Flood)
20:49:15  * brianloveswordsjoined
20:51:01  * timoxleyquit (Ping timeout: 268 seconds)
20:56:00  * disordinaryjoined
21:13:04  <Raynos>substack: a nice lib for meteor style things is to be able to do pre-emptive local operation on your level and have it rollback if the server side level says "ERROR"
21:13:40  <Raynos>meteor has latency compensation by assuming every db operation succeeds doing it locally and then rolling it back asynchronously when and if the server says no
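The latency-compensation pattern Raynos describes can be sketched in a few lines: apply the write locally right away, keep an undo record, and roll back only if the server later rejects the operation. The plain object here stands in for the local level instance, and `applyOptimistic` is a made-up name for illustration.

```javascript
// Optimistic local write that returns its own rollback.
function applyOptimistic (store, key, value) {
  var had = Object.prototype.hasOwnProperty.call(store, key);
  var previous = store[key];
  store[key] = value;               // assume success, apply immediately
  return function rollback () {     // invoked when the server says "ERROR"
    if (had) store[key] = previous; // restore the overwritten value
    else delete store[key];         // or remove a key that never existed
  };
}

var store = { greeting: 'hi' };
var undo = applyOptimistic(store, 'greeting', 'hello');
// ... asynchronously, the server rejects this op:
undo();
console.log(store.greeting);        // 'hi'
```

The UI reads from the local store and so updates instantly; the rollback path only fires on the (hopefully rare) server rejection, which is the whole latency-compensation trade-off.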
21:14:18  <levelbot>[npm] [email protected] <http://npm.im/fash>: A consistent hashing library for node (@yunong)
21:15:34  * jxsonquit (Remote host closed the connection)
21:36:21  <mikeal>that looks impressive
21:43:07  <dominictarr>it needs to use scuttlebutt
21:45:54  * jxsonjoined
21:45:59  * esundahl_quit (Remote host closed the connection)
21:46:25  * esundahljoined
21:46:48  * timoxleyjoined
21:48:51  * esundahl_joined
21:50:31  * jcrugzzjoined
21:50:50  * esundahlquit (Ping timeout: 245 seconds)
21:51:26  * timoxleyquit (Ping timeout: 256 seconds)
21:54:31  * jxsonquit (Ping timeout: 264 seconds)
22:01:15  * Acconutjoined
22:06:00  * thlorenzquit (Remote host closed the connection)
22:10:50  * werlequit (Ping timeout: 245 seconds)
22:19:14  * jcrugzzquit (Ping timeout: 246 seconds)
22:22:04  * jcrugzzjoined
22:24:18  * Acconutquit (Quit: Acconut)
22:27:44  * fallsemoquit (Quit: Leaving.)
22:28:01  * jxsonjoined
22:28:14  * jcrugzzquit (Ping timeout: 240 seconds)
22:28:36  * Acconutjoined
22:28:55  * ryan_ramagequit (Quit: ryan_ramage)
22:30:32  * jcrugzzjoined
22:35:07  * ryan_ramagejoined
22:35:30  * ryan_ramagequit (Client Quit)
22:43:27  * jcrugzzquit (Read error: Connection reset by peer)
22:46:57  * jcrugzzjoined
22:47:29  * timoxleyjoined
22:50:58  * dominictarrquit (Quit: dominictarr)
22:52:18  * timoxleyquit (Ping timeout: 264 seconds)
22:52:54  * ryan_ramagejoined
22:58:24  * Acconutquit (Quit: Acconut)
22:58:39  * jcrugzzquit (Read error: Connection reset by peer)
23:02:49  * Acconutjoined
23:02:54  * jcrugzzjoined
23:04:09  * fallsemojoined
23:05:37  * disordinaryquit (Quit: Konversation terminated!)
23:05:58  * disordinaryjoined
23:07:10  * jcrugzzquit (Read error: Connection reset by peer)
23:09:36  * Acconutquit (Quit: Acconut)
23:11:45  * Acconutjoined
23:14:32  * tmcwquit (Remote host closed the connection)
23:15:07  * tmcwjoined
23:19:02  * tmcwquit (Ping timeout: 240 seconds)
23:20:54  * disordinaryquit (Quit: Konversation terminated!)
23:21:11  * werlejoined
23:21:11  * disordinaryjoined
23:21:13  * disordinaryquit (Client Quit)
23:21:31  * disordinaryjoined
23:23:19  <levelbot>[npm] [email protected] <http://npm.im/level-queryengine>: Search levelup/leveldb instances with pluggable query engines and pluggable indexing schemes. (@eugeneware)
23:24:27  * timoxleyjoined
23:24:37  * timoxleyquit (Read error: Connection reset by peer)
23:28:06  * esundahl_quit (Remote host closed the connection)
23:28:32  * esundahljoined
23:30:45  * jerrysvquit (Read error: Connection reset by peer)
23:32:42  * esundahlquit (Ping timeout: 240 seconds)
23:33:26  * fallsemoquit (Ping timeout: 256 seconds)
23:33:44  * Acconutquit (Quit: Acconut)
23:34:42  * tmcwjoined
23:35:57  * rickbergfalkquit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
23:37:04  * kenansulaymanquit (Quit: ≈♡≈)
23:41:19  * fallsemojoined
23:41:24  * julianduquequit (Remote host closed the connection)
23:42:07  * julianduquejoined
23:43:29  * fallsemoquit (Quit: Leaving.)
23:46:42  * ednapiranhaquit (Remote host closed the connection)
23:51:49  * ryan_ramagequit (Quit: ryan_ramage)
23:52:55  * jjmalinaquit (Quit: Leaving.)
23:53:28  * SomeoneWeirdpart ("Leaving")
23:56:40  * jcrugzzjoined
23:56:44  * mikealquit (Quit: Leaving.)
23:59:02  * mikealjoined