00:00:33  <mbalho>https://github.com/rvagg/node-leveldown/pull/57
00:00:56  <mbalho>rvagg: i think the version on npm was out of date with the version on that branch
00:01:07  <rvagg>quite possible
00:01:11  * timoxleyquit (Remote host closed the connection)
00:01:34  * jerrysvquit (Remote host closed the connection)
00:03:02  * esundahlquit (Ping timeout: 240 seconds)
00:05:16  * ralphtheninjaquit (Quit: leaving)
00:10:48  * eugenewarejoined
00:14:19  <rescrv>mbalho: It's leveldb::Status, right? I think you can just copy that from Put. I think rvagg can help with that.
00:19:05  * eugenewarequit (Remote host closed the connection)
00:58:08  * thlorenzjoined
00:58:58  * thlorenzquit (Remote host closed the connection)
00:59:41  * kenansulaymanjoined
01:04:16  * ednapiranhajoined
01:22:18  * dguttmanquit (Quit: dguttman)
01:32:23  * jxsonquit (Remote host closed the connection)
01:38:16  * kenansulaymanquit (Ping timeout: 264 seconds)
01:38:49  * kenansulaymanjoined
01:47:44  <rvagg>rescrv: ring any bells? ../deps/leveldb/leveldb-hyper/db/skiplist.h:422: void leveldb::SkipList<Key, Comparator>::InsertWithHint(leveldb::SkipList<Key, Comparator>::InsertHint*, const Key&) [with Key = const char*; Comparator = leveldb::MemTable::KeyComparator]: Assertion `x == __null || !Equal(key, x->key)' failed.
01:48:43  * jmartinsquit (Remote host closed the connection)
01:54:17  <levelbot>[npm] [email protected] <http://npm.im/leveldown-hyper>: A Node.js LevelDB binding, primary backend for LevelUP (HyperDex fork) (@rvagg)
01:55:47  <levelbot>[npm] [email protected] <http://npm.im/hyperlevel>: A HyperDex-LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN-hyper) (@kenansulayman)
01:55:58  <rvagg>heh, speedy work kenansulayman
01:56:02  <kenansulayman>:D
01:58:51  <kenansulayman>rvagg Building seems to err with 0.8.0
01:59:28  <kenansulayman>rvagg http://data.sly.mn/R7WT
02:02:00  <kenansulayman>rvagg this is plain "npm install hyperlevel" using leveldown-hyper 0.8.0: http://data.sly.mn/R6Z3
02:02:48  <kenansulayman>make: *** No rule to make target `Release/obj.target/leveldb/deps/leveldb/leveldb-hyper/util/bloom.o', needed by `Release/leveldb.a'. Stop.
02:03:12  <rvagg>hm
02:04:18  <rvagg>perhaps the submodule wasn't included in the release
02:04:29  <kenansulayman>Let me have a look
02:05:22  <rvagg>indeed it hasn't
02:05:23  <rvagg>odd
02:05:34  <rvagg>grr, it's in .npmignore
02:05:36  <rvagg>doh!
02:07:07  <kenansulayman>0.8.1 it is then ;)
02:07:13  <rvagg>NEIN
02:07:17  <kenansulayman>haha
02:07:24  <kenansulayman>DOCH
02:07:26  <rvagg>needs to sync properly with leveldown ... sooooo ....
02:07:35  <kenansulayman>yeah right
02:07:41  <kenansulayman>delete 0.8.0 and republish
02:07:48  <levelbot>[npm] [email protected] <http://npm.im/leveldown-hyper>: A Node.js LevelDB binding, primary backend for LevelUP (HyperDex fork) (@rvagg)
02:08:01  <kenansulayman>0.8.0-1 ?
02:08:11  <rvagg>aye, close enough without a --force
02:08:16  <rvagg>compiles on 0.8
02:08:31  <kenansulayman>~0.8.0 => 0.8.0-1?
02:08:44  <rvagg>yeah
02:08:47  <kenansulayman>k mom
02:09:04  <kenansulayman>yeah like a charm
02:09:07  <kenansulayman>nop
02:09:16  <kenansulayman>In file included from ../deps/leveldb/leveldb-hyper/db/repair.cc:28:
02:09:16  <kenansulayman>In file included from ../deps/leveldb/leveldb-hyper/db/db_impl.h:10:
02:09:16  <kenansulayman>In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/c++/4.2.1/tr1/memory:54:
02:09:16  <kenansulayman> { return __ti == typeid(_Deleter) ? &_M_del : 0; }
02:09:16  <kenansulayman> ^
02:09:17  <kenansulayman> { return static_cast<_Del*>(__p._M_get_deleter(typeid(_Del))); }
02:09:17  <kenansulayman> ^
02:09:18  <kenansulayman>2 errors generated.
02:09:18  <kenansulayman>make: *** [Release/obj.target/leveldb/deps/leveldb/leveldb-hyper/db/repair.o] Error 1
02:09:19  <kenansulayman>make: Leaving directory `/private/tmp/node_modules/hyperlevel/node_modules/leveldown-hyper/build'
02:09:28  <rvagg>OSX? what is this? a circus??
02:09:42  <rvagg>that's rescrv's problem
02:09:45  <kenansulayman>http://data.sly.mn/R6li
02:09:52  <kenansulayman>it worked on 0.5.0 :(
02:10:00  <rvagg>mm, this is latest hyperleveldb master
02:10:06  <rvagg>obviously developed on !osx
02:10:08  <rvagg>works on linux
02:10:24  <kenansulayman>I see
02:10:53  <kenansulayman>Earlier today I cloned leveldown and replaced leveldb with hyperleveldb
02:10:57  <kenansulayman>same issue
02:11:06  <kenansulayman>"cannot use typeid with -fno-rtti"
02:11:29  <rvagg>yeah, so we need to turn that off for osx
02:13:12  <rvagg>kenansulayman: try this: in deps/leveldb/leveldb.gyp find the "xcode_settings" block
02:13:21  <kenansulayman>already in it mom
02:13:28  <rvagg>and add this: 'GCC_ENABLE_CPP_RTTI': '-frtti' as a new thing
02:13:42  <kenansulayman>k
02:13:56  <rvagg>if it works, I'll make a -2
02:14:46  <kenansulayman>it builds sec
02:14:49  <kenansulayman>boom!
02:14:57  <kenansulayman>works
02:15:02  <kenansulayman>let me run the tests
02:15:13  <kenansulayman>ok all good
02:15:28  <kenansulayman>http://data.sly.mn/R6eC
02:15:32  <kenansulayman>the leveldb.gyp
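The RTTI fix discussed above goes in the `xcode_settings` block of `deps/leveldb/leveldb.gyp`. A sketch of what that block might look like after the change (only the `GCC_ENABLE_CPP_RTTI` line is the actual fix; the surrounding keys are illustrative):

```python
{
  'xcode_settings': {
    # Illustrative neighbour; the real file's other settings stay as-is.
    'OTHER_CFLAGS': ['-fno-strict-aliasing'],
    # The fix: re-enable RTTI on OS X so hyperleveldb's typeid usage
    # compiles ("cannot use typeid with -fno-rtti").
    'GCC_ENABLE_CPP_RTTI': '-frtti',
  },
}
```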
02:16:17  <levelbot>[npm] [email protected] <http://npm.im/leveldown-hyper>: A Node.js LevelDB binding, primary backend for LevelUP (HyperDex fork) (@rvagg)
02:16:27  <kenansulayman>lets check hyperlevel
02:18:21  <kenansulayman>like a charm. http://data.sly.mn/R6pC
02:18:29  <rvagg>nice, there ya go mbalho
02:20:47  <levelbot>[npm] [email protected] <http://npm.im/level-namequery>: An intelligent search engine on top of LevelDB for Name <-> User-ID relations. (@kenansulayman)
02:21:00  <kenansulayman>woah 4:20 am here already
02:21:02  <kenansulayman>gn8 rvagg
02:23:29  * kenansulaymanchanged nick to kenansulayman|af
02:23:40  <rvagg>night
02:25:21  * kenansulayman|afquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
02:26:26  * thlorenzjoined
02:42:35  * esundahljoined
02:43:28  * ednapiranhaquit (Remote host closed the connection)
02:47:46  <mbalho>rvagg: what's the difference between npm hyperlevel and npm leveldown-hyper
02:52:12  <rvagg>hyperlevel is by kenansulayman, it's just like 'level' in that it bundles levelup with leveldown-hyper
02:52:42  <rvagg>(I'm assuming that's what it does, I haven't actually looked at hyperlevel)
02:53:13  <mbalho>ahhhhh
02:53:16  <mbalho>makes sense
02:53:33  <mbalho>oh right, Dependencies: leveldown-hyper, levelup
03:07:20  <rvagg>mbalho: I'm just publishing a new version that makes it async
03:07:25  <rvagg>and will return the error as the first argument
03:07:37  <rvagg>*but*, the actual filename isn't returned from hyperleveldb, it's all internal
03:07:43  <rvagg> std::string backup_dir = dbname_ + "/backup-" + name.ToString() + "/";
03:07:47  <levelbot>[npm] [email protected] <http://npm.im/leveldown-hyper>: A Node.js LevelDB binding, primary backend for LevelUP (HyperDex fork) (@rvagg)
03:08:19  <rvagg>so, you'll have to go hunting for it yourself, I'm not confident about generating that path name to return when there's a chance that they might change it in hyperdex
03:08:49  <rvagg>so, db.db.liveBackup('foobar', function (err) { if (err) throw err })
03:09:14  <rvagg>and I guess 'foobar' will make /path/to/db/backup-foobar/
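Based on the hyperleveldb line quoted above, the backup directory can be computed client-side. A speculative JavaScript helper (the naming scheme is internal to hyperleveldb and may change upstream, as rvagg notes):

```javascript
// Mirrors hyperleveldb's internal path construction quoted above:
//   std::string backup_dir = dbname_ + "/backup-" + name.ToString() + "/";
// Speculative: the upstream naming scheme may change.
function backupDir(dbPath, name) {
  return dbPath + '/backup-' + name + '/';
}

// Hypothetical usage with leveldown-hyper's async liveBackup:
//   db.db.liveBackup('foobar', function (err) {
//     if (err) throw err;
//     // backup now lives at backupDir('/path/to/db', 'foobar')
//   });
```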
03:14:26  <mbalho>rvagg: ahh sounds good
03:15:43  <mbalho>rescrv: would it be feasible to return a data stream of the db backup instead of writing the entire thing to disk? e.g. does livebackup lock the db while the backup is happening or can it generate a backup while the db is still accepting writes
03:15:51  <rvagg>untested, let me know how it goes mbalho, all I know is that it compiles
03:16:13  <mbalho>rvagg: hah sweet
03:16:52  <mbalho>rvagg: where should i add a test? are there any other hyperleveldb specific tests?
03:17:18  <rvagg>mbalho: no, just put it in ./test/ but give it a 'hyperleveldb-' prefix so we can add more if needed
03:18:56  <rvagg>mbalho: it looks to me like one of the node worker threads will be occupied performing the backup (hyperleveldb doesn't seem to spawn a separate thread for this), but it creates a snapshot so the files are left alone while the backup happens
03:19:08  <rvagg>so you get a backup of a consistent state and other threads should be able to continue reading & writing
03:19:11  <rvagg>which is all very very neat
03:20:21  <mbalho>rvagg: wow nice
03:21:37  <mbalho>so its probably possible to have it stream data into node instead of writing to the fs, just gotta find a bored c++ dev :D
03:24:48  <rvagg>yeah, would be possible, would be kind of messy & complicated tho
03:25:24  <rvagg>probably what would be easier is to have it emit a "copy" event each time it copies a file so you can pick that file up with node and stream + delete it
03:29:13  * thlorenzquit (Remote host closed the connection)
03:30:49  <mbalho>oh good call
04:02:02  * mikealjoined
04:38:03  * mikealquit (Quit: Leaving.)
04:44:15  * mikealjoined
04:56:38  * julianduquequit (Quit: leaving)
05:16:41  * esundahlquit (Remote host closed the connection)
06:00:25  * i_m_cajoined
06:36:15  * i_m_caquit (Ping timeout: 260 seconds)
06:43:50  * timoxleyjoined
06:48:43  * mcollinajoined
07:02:00  * timoxleyquit (Ping timeout: 243 seconds)
07:03:33  * timoxleyjoined
07:08:02  * mcollinaquit (Ping timeout: 264 seconds)
07:15:26  * timoxleyquit (Remote host closed the connection)
07:16:05  * timoxleyjoined
07:49:02  * missinglinkquit (Ping timeout: 240 seconds)
08:24:29  <hij1nx>sublevel monkey patching createReadStream has some adverse side effects
08:24:54  <hij1nx>maybe hiding sublevels is good in some cases
08:26:24  <hij1nx>maybe a method like createSublevelStream() might be nice, something to get all of the sublevels in a sublevel/root
08:40:17  <hij1nx>or maybe an option to Sublevel#createReadStream({ sublevels: <Int depth> })
08:40:35  * kenansulaymanjoined
08:44:52  * kenansulaymanquit (Client Quit)
08:46:05  * kenansulaymanjoined
09:05:46  <hij1nx>mm, even better, i can just monkey patch sublevel.
09:06:01  <hij1nx>sounds like a lot of monkey business.
09:13:23  * dominictarrjoined
09:19:40  * alanhoffquit (Ping timeout: 264 seconds)
09:20:13  * alanhoffjoined
09:59:38  <hij1nx>dominictarr: ping
09:59:50  <dominictarr>hij1nx: hey whats up?
10:01:47  <hij1nx>dominictarr: bouncing ideas around on listing sublevels
10:02:39  <dominictarr>hmm, so, I can think of two basic approaches
10:02:42  <hij1nx>dominictarr: the first thing i tried was sending { start: '\xff~', end: '~' }
10:03:00  <hij1nx>that gives me everything, then i'd have to filter that down
10:03:02  <dominictarr>\xff!
10:03:03  <hij1nx>kind of messy
10:03:23  <hij1nx>ah yes, \xff!
10:03:23  <dominictarr>you could use level-peek
10:03:26  <hij1nx>messy though
10:03:51  <dominictarr>peek.first({gte: '\xff'})
10:03:52  <hij1nx>it would be nice if you could get just the sublevels from the root or a sublevel
10:04:07  <dominictarr>that would tell you the first key
10:04:16  <dominictarr>then you could do
10:05:15  <dominictarr>peek.first({gt: '\xff' + whatFirstCallReturned})
10:05:25  <dominictarr>that would get you the first key of the next sublevel
10:05:37  <hij1nx>it would seem to me that "db.sublevels" should just have the sublevels in it already
10:05:44  <dominictarr>right
10:05:52  <dominictarr>that is other option
10:06:04  <hij1nx>thats the better option ;)
10:06:12  <hij1nx>lets work on that maybe
10:06:25  <dominictarr>but then you need to write out a map of the sublevels somewhere that lev can see it
10:07:17  <dominictarr>like the db directory
10:07:28  <dominictarr>see level-manifest …
10:07:43  <dominictarr>you'd have to add level-manifest to your program code, however
10:09:48  <hij1nx>hmm. ok, maybe there's a new module here.
10:09:56  <hij1nx>i've got some ideas
10:24:09  * timoxleyquit (Remote host closed the connection)
10:27:10  * Acconutjoined
10:43:15  * jcrugzzquit (Ping timeout: 260 seconds)
10:47:02  * Acconutquit (Quit: Acconut)
11:05:13  <dominictarr>Raynos: what are you gonna talk about at nodeconf.eu?
11:16:03  <hij1nx>dominictarr: so far this is my idea https://raw.github.com/hij1nx/level-subtree/master/index.js
11:16:52  <dominictarr>how do you use it?
11:27:22  <rescrv>kenansulayman: can you change your description on hyperlevel to say it uses the HyperLevelDB fork? HyperDex is different, and I don't want to generate confusion between the two
11:27:35  <rescrv>mbalho: why not just stream the files that are copied/linked?
11:27:52  <kenansulayman>rescrv sure thing. mom
11:28:56  <rescrv>the copy/link step is fast because it doesn't have to hold any locks for any extended period of time. Changing it to instead stream the backup will be less efficient than streaming from the copy. It'd be an all-around loss.
11:29:16  <levelbot>[npm] [email protected] <http://npm.im/hyperlevel>: A Hyper-LevelDB wrapper (a convenience package bundling LevelUP & LevelDOWN-hyper) (@kenansulayman)
11:29:17  <rescrv>rvagg: never seen that error, but indeed, nothing I do gets developed on OS X. We've ordered a box for testing though.
11:29:24  <rescrv>kenansulayman: thanks!
11:29:31  <kenansulayman>yup
11:30:08  <kenansulayman>rescrv Actually that's because Apple's LLVM 5 fork is plain crap
11:30:48  <rvagg>rescrv: that particular error I was getting is on Linux, I get it during one of my test cases; the other error from kenansulayman was an osx one we've fixed now
11:31:32  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
11:31:44  * kenansulaymanjoined
11:31:44  * kenansulaymanquit (Client Quit)
11:33:19  * kenansulaymanjoined
11:34:11  * Acconutjoined
11:38:36  <kenansulayman>rescrv How hard would it be to deliver your hyperclient as npm dep?
11:39:45  <kenansulayman>Also I will publish a homebrew formula for it :) (even Kyoto Tycoon has one.. even though it doesn't work)
11:42:43  * Acconutquit (Ping timeout: 246 seconds)
11:42:50  * Acconutjoined
11:47:04  <kenansulayman>Uh I see it already exists
11:52:30  <kenansulayman>I give up on it :)
11:54:28  <hij1nx>dominictarr in the test.js file
11:54:53  <dominictarr>give me an english description
11:56:08  <hij1nx>dominictarr: "build and maintain a tree from the sublevels in a leveldb instance"
11:56:19  <hij1nx>^ https://github.com/hij1nx/level-subtree/blob/master/test/test.js
11:56:30  <hij1nx>two methods. `build`, `update`
11:56:31  <dominictarr>right, okay. how does a lev find it?
11:56:50  * Acconutquit (Quit: Acconut)
11:57:37  <hij1nx>it builds the tree once, `var t = new Tree(db)`, `t.build()`, lev would use that tree
11:57:41  <hij1nx>when a sublevel is added
11:57:51  <hij1nx>youd call `t.update(key)`
11:58:22  <hij1nx>then you can do all kinds of things with the tree, pretty print it like npm ls, etc.
11:59:45  <hij1nx>ah, i think im going to make an "npm ls" type visualization from it, `lev ./db --tree`
12:01:33  <hij1nx>dominictarr: it has to do a full scan of all the keys in the database when you call the ctor, but you can't trust that it's accurate unless it does; if you materialize that in some cache, another program can access the database and add a key, but that program doesn't update the tree, so the tree is out of sync
12:02:19  <dominictarr>full scan is only feasible if the database is small
12:02:57  <hij1nx>well, its async ;) ...how do you keep a tree up to date in that case?
12:03:06  * alanhoffquit (Ping timeout: 264 seconds)
12:03:30  <hij1nx>dominictarr: i think if you want to produce a tree, you need to let it scan at least once
12:03:35  <dominictarr>you could just walk the prefixes
12:03:41  * alanhoffjoined
12:03:46  <dominictarr>like look for the first \xffKEY
12:03:51  <hij1nx>thats what i mean by scan.
12:04:02  <dominictarr>and then jump to the last \xffKEY
12:04:22  <hij1nx>ah, you mean, random gets
12:04:37  <dominictarr>read peek.first(\xff)
12:04:43  <hij1nx>yeah
12:04:48  <dominictarr>and that will get you the first sublevel
12:05:00  <hij1nx>that is just a stream with the start key \xff and a limit of 1 right?
12:05:07  <dominictarr>then read peek.first(\xffKEY\xff\xff)
12:05:15  <dominictarr>that will get you the NEXT sublevel
12:05:45  <dominictarr>you could also repeat this into sublevels recursively
12:05:48  <rvagg>this memory leak is killing me... it's actually pretty serious for anyone using level* > ~0.13 or 0.14
12:06:24  <hij1nx>dominictarr: but peek is just a readstream with the start key \xff and a limit of 1 right?
12:06:30  <dominictarr>yeah
12:07:00  <hij1nx>dominictarr: ok, yep
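The peek-and-jump enumeration described above can be simulated over a sorted key array. A sketch assuming sublevel's \xffNAME\xff prefixing and flat (non-nested) sublevels; `firstKeyAtOrAfter` is a stand-in for level-peek's `peek.first`:

```javascript
const SEP = '\xff';

// Stand-in for level-peek's peek.first({ gte: ... }): the first key
// >= gte in a sorted array, or undefined.
function firstKeyAtOrAfter(sortedKeys, gte) {
  return sortedKeys.find(function (k) { return k >= gte; });
}

// Enumerate sublevel names without a full scan: peek at the first key
// in a prefix range, then jump past the range with \xffNAME\xff\xff.
// Assumes flat (non-nested) sublevels.
function listSublevels(sortedKeys) {
  const names = [];
  let key = firstKeyAtOrAfter(sortedKeys, SEP);
  while (key !== undefined && key.charAt(0) === SEP) {
    const name = key.slice(1, key.indexOf(SEP, 1));
    names.push(name);
    key = firstKeyAtOrAfter(sortedKeys, SEP + name + SEP + SEP);
  }
  return names;
}
```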
12:10:16  <kenansulayman>rvagg Sure thing it's caused by leveldown?
12:10:38  <rvagg>I'm suspecting NAN at this stage actually
12:10:56  <rvagg>not freeing references to objects so they can't be garbage collected
12:11:09  <rvagg>... except I can't see how it *wouldn't* be freeing references
12:11:32  * Acconutjoined
12:12:07  * Acconutquit (Client Quit)
12:15:22  <kenansulayman>rvagg When do these memory accumulations occur specifically?
12:15:55  <rvagg>probably any time you perform an async operation
12:15:58  <rvagg>that's what I'm guessing
12:16:08  <rvagg>so you'll see a very slow build up in memory usage as objects don't get freed
12:16:31  <kenansulayman>Actually we don't see that effect
12:18:28  <kenansulayman>rvagg ANY async operation?
12:18:42  <rvagg>oh? on the latest leveldown?
12:18:43  <rvagg>hm
12:18:56  <kenansulayman>Let me put a test run on that
12:19:09  <rvagg>test/leak-tester.js is seeing it, it does a get() (mostly a miss) then a put(), then starts again
12:23:19  <kenansulayman>rvagg I'm seeing your hyperlevel issue, can't install on CentOS (ubuntu works)
12:23:22  <kenansulayman>../deps/leveldb/leveldb-hyper/util/env_posix.cc: In member function »virtual leveldb::Status leveldb::<unnamed>::PosixEnv::CopyFile(const std::string&, const std::string&)«:
12:23:23  <kenansulayman>../deps/leveldb/leveldb-hyper/util/env_posix.cc:542: warning: »fd2« may be used uninitialized in this function
12:23:34  <kenansulayman>typeid cannot be used with -fno-rtti
12:23:41  <kenansulayman>we should apply the OSX fix to CentOS
12:24:32  <kenansulayman>rvagg
12:24:32  <kenansulayman>http://app.legify.com/heartbeat
12:24:57  <hij1nx>dominictarr: being lazy here, but does sublevel double up seps (ie: `\xffKEY\xff\xff`)?
12:25:17  <kenansulayman>rvagg Not seeing huge fluctuations on high level usage
12:25:22  <kenansulayman>LRU is turned off
12:25:49  <rvagg>well that's annoying then!
12:25:55  <rvagg>so it's limited to certain use-cases
12:26:35  <kenansulayman>rvagg Wait
12:26:41  <kenansulayman>Now it increases
12:26:57  <kenansulayman>Could you take a look at the stats while I am doing stuff on the platform to emulate usage?
12:26:58  <dominictarr>hij1nx: yes
12:27:12  <dominictarr>if sublevels are nested that happens
12:27:25  <dominictarr>~sub1~~sub2~key
12:27:28  <dominictarr>like that
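The doubled-separator layout dominictarr describes (\xffsub1\xff\xffsub2\xffkey) can be captured in a tiny encoder/decoder pair (a sketch of the key layout, not sublevel's actual code):

```javascript
const SEP = '\xff';

// Fully-qualified key for a (possibly nested) sublevel path:
// encodeKey(['sub1', 'sub2'], 'key') builds '\xffsub1\xff\xffsub2\xffkey'
function encodeKey(sublevels, key) {
  return sublevels.map(function (s) { return SEP + s + SEP; }).join('') + key;
}

// Inverse: split a fully-qualified key back into its sublevel path.
function decodeKey(fullKey) {
  const sublevels = [];
  let rest = fullKey;
  while (rest.charAt(0) === SEP) {
    const end = rest.indexOf(SEP, 1);
    sublevels.push(rest.slice(1, end));
    rest = rest.slice(end + 1);
  }
  return { sublevels: sublevels, key: rest };
}
```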
12:29:29  <hij1nx>dominictarr: ah, right.
12:30:24  <kenansulayman>rvagg 70,119,424 to 81,604,608 in 2 minutes ewww
12:30:56  <rvagg>not the end of the world tho, if it stays there after your activity has settled down then it's probably ok
12:31:04  <rvagg>sorry, if it stays there then that's a concern
12:31:08  <rvagg>if it decreases then great
12:31:25  <kenansulayman>But the GC jumps in every now and then and boils it down. I don't see a leak as reported then :)
12:32:14  <kenansulayman>Btw we're on hyperlevel not Google-Leveldown
12:32:24  <kenansulayman>Maybe that's why?
12:33:16  <rvagg>nah
12:33:20  <rvagg>what version of node?
12:33:24  <kenansulayman>mom
12:33:34  <kenansulayman>latest v0.10.17
12:33:46  <rvagg>mm, that's what I'm testing with
12:33:57  <kenansulayman>Maybe platform specific?
12:34:02  <kenansulayman>What OS are you on?
12:34:37  <kenansulayman>I'm on x86_64 GNU/Linux 3.8.6/ Ubuntu 12
12:34:55  <kenansulayman>For the server
12:39:17  <rvagg>oooo! I think I might have found it!
12:40:28  * thlorenzjoined
12:46:53  * Acconutjoined
12:49:22  * Acconutquit (Client Quit)
12:53:58  <kenansulayman>rvagg What's it?
12:56:27  <rvagg>kenansulayman: https://github.com/rvagg/node-levelup/issues/171#issuecomment-23558974
12:58:40  * kenansulaymanquit (Ping timeout: 264 seconds)
13:00:43  * kenansulaymanjoined
13:11:30  <rvagg>bed
13:19:45  <rescrv>rvagg: when you get a chance, can you share that test case that caused the assert-fail you've seen? I'd like to get it fixed.
13:20:32  <thlorenz>dominictarr: just wanted to preempt people complaining about sublevels the same way they do about callbacks without pointing to the obvious solution (organize things better)
13:21:07  <thlorenz>maybe it came across different than I meant it ;)
13:22:16  <dominictarr>thlorenz: if people are complaining, but using that means you are winning :)
13:23:20  <thlorenz>dominictarr: I guess so :), but seriously I had gotten myself lost in my data until I took a little time to step back and jot down how it is organized
13:36:05  <thlorenz>dominictarr: is this what you meant? https://github.com/dominictarr/level-sublevel/issues/29
13:36:30  <thlorenz>we could mark them with a particular label so people find them even if they are closed
13:37:16  <levelbot>[npm] [email protected] <http://npm.im/level-subtree>: build and maintain a tree from the sublevels in a leveldb instance (@hij1nx)
13:38:09  <kenansulayman>hij1nx Do you want "\xff" or ÿ?
13:38:59  <kenansulayman>Because the docs say \xfftest1
13:39:00  <kenansulayman>\xfftest1\xff\xfftest11
13:44:24  <dominictarr>thlorenz: perfect
13:44:37  <dominictarr>now people that have that question will have a starting point
13:45:32  <dominictarr>really, you should consider 'issues' to be 'forums' where anything relevant to that project should be discussed
13:47:16  <levelbot>[npm] [email protected] <http://npm.im/level-json-wrapper>: LevelDB JSON Wrapper (@azer)
13:48:17  <kenansulayman>LOL
13:48:45  <kenansulayman>Is that seriously a wrapper which synthesizes the JSON encoding?
13:50:05  <thlorenz>dominictarr: issues as forums make sense, but if I look at a repo with tons of issues I may think "wow this seems broken"
13:50:11  <hij1nx>kenansulayman: sorry, not sure what you mean
13:50:26  <dominictarr>thlorenz: that is a UX bug
13:50:34  <dominictarr>maybe use discussion tag?
13:50:40  <thlorenz>so maybe close those issues and label them with question and link to them from the readme?
13:50:48  <thlorenz>yep discussion tag makes sense
13:51:05  <kenansulayman>hij1nx Just read the readme of level-subtree and it wasn't clear if you meant to use the ascii string "\xff" or the character it represents :)
13:51:11  <dominictarr>oh, no such thing
13:51:16  <dominictarr>question tag then
13:52:29  <hij1nx>kenansulayman: \xff is the separator
13:52:39  <thlorenz>dominictarr: you can create labels
13:52:42  <kenansulayman>hij1nx I know, that's why I asked :)
13:52:50  <hij1nx>kenansulayman: you can use anything you wan
13:52:52  <thlorenz>I think discussion label would be nice
13:52:52  <dominictarr>thlorenz: lots of issues means engagement
13:53:06  <hij1nx>s/wan/want/
13:53:22  <thlorenz>s/lots of issues/lots of closed issues/
13:53:24  <dominictarr>the bad sign is lots of issues without maintainer response
13:53:33  <kenansulayman>hij1nx Yes in namequery I use \xab
13:54:01  <thlorenz>dominictarr: exactly and if I don't take the time to look through them, but just see a huge number I may think it's broken
13:54:04  <hij1nx>kenansulayman: ah, ok. so what was unclear about the doc? PR? :)
13:54:10  <thlorenz>but you are right - it's UI problem
13:54:12  <dominictarr>on the other hand, some people use issues as a todo list
13:54:26  <kenansulayman>hij1nx haha. sure I'll PR later ;)
13:54:33  <thlorenz>yep, I plan future tasks by assigning myself issues
13:54:35  <hij1nx>kenansulayman: w00t!! you rock :)
13:56:51  <hij1nx>ok, now i'll pull level-subtree into lev so that ls will show sublevels, i'll also make `lev ./db --tree` do pretty much the same thing as `npm ls`
13:57:46  * hij1nxtopic: tiny robots
13:57:56  <hij1nx>oh i can do that :)
13:58:45  <thlorenz>hij1nx: kenansulayman looks like you guys figured out step 2 already :) Awesome!
13:59:19  <kenansulayman>thlorenz step 2? ;)
13:59:46  <thlorenz>kenansulayman: https://twitter.com/hij1nx/status/373435994088800257
14:00:27  <thlorenz>kenansulayman: sorry maybe not related, saw you guys talking about lev and sublevels
14:01:06  * hij1nxtopic: "An irc room focused on discussions about Dominic Tarr's hair, and some leveldb stuff as well"
14:01:15  <kenansulayman>lol
14:03:30  <kenansulayman>hij1nx You should checkout hyperlevel for lev, it's really an explosion in performance
14:04:26  <kenansulayman>After some hacking with rvagg now even runs on osx^
14:10:53  <rescrv>kenansulayman: we'll be making hyperleveldb support OS X once our mac arrives, so you'll hopefully not have to do too much on that front in the future
14:11:23  <kenansulayman>rescrv OSX is now fully working. We have to apply our bugfix for OSX to CentOS as well it seems, though
14:12:28  <kenansulayman>In specific that means we'll just compile hyperleveldb from leveldb.gyp using 'GCC_ENABLE_CPP_RTTI': '-frtti'
14:13:27  * tmcwjoined
14:14:09  * Acconutjoined
14:14:10  * Acconutquit (Client Quit)
14:14:20  <kenansulayman>On CentOS it fails like this:
14:14:21  <kenansulayman>http://data.sly.mn/R6eo
14:15:01  <kenansulayman>Which is pretty much equal to the issue on OSX before force-enabling frtti:
14:15:02  <kenansulayman>http://data.sly.mn/R6Ja
14:16:46  <kenansulayman>rescrv Any reason it's doing -fno-rtti anyway?
14:17:41  * werlejoined
14:20:55  <rescrv>kenansulayman: that's not a flag I added. I may have copied it from upstream. Can you open an issue on github.com/rescrv/hyperleveldb/ and I'll follow up? I'm a little busy atm
14:21:13  <kenansulayman>sure thing
14:26:24  * kenansulaymanquit (Remote host closed the connection)
14:26:53  * kenansulaymanjoined
14:26:53  * kenansulaymanquit (Client Quit)
14:27:07  * kenansulaymanjoined
14:41:26  * jmartinsjoined
14:44:00  * esundahljoined
14:47:52  * fallsemojoined
14:48:45  * timoxleyjoined
15:03:36  <hij1nx>hehe -- https://github.com/isaacs/npm/blob/master/lib/ls.js#L75
15:04:00  <kenansulayman>hij1nx Minimalistic sophistication
15:14:47  <levelbot>[npm] [email protected] <http://npm.im/level-subtree>: build and maintain a tree from the sublevels in a leveldb instance (@hij1nx)
15:16:16  <levelbot>[npm] [email protected] <http://npm.im/level-subtree>: build and maintain a tree from the sublevels in a leveldb instance (@hij1nx)
15:28:10  * dguttmanjoined
15:41:45  <dominictarr>hij1nx: sublevel only makes key \xffNAME\xff so you should check that the separator is on the end of the key too.
15:41:53  <dominictarr>otherwise, it's not really a sublevel
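dominictarr's check above (separator at both ends, no separator inside the name) can be written as a small predicate; a sketch, not sublevel's own validation:

```javascript
const SEP = '\xff';

// True only for exact \xffNAME\xff prefixes: separator at both ends
// and none inside the name, per the note above.
function isSublevelPrefix(s) {
  return s.length >= 3 &&
    s.charAt(0) === SEP &&
    s.charAt(s.length - 1) === SEP &&
    s.slice(1, -1).indexOf(SEP) === -1;
}
```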
16:06:23  <thlorenz>has anyone implemented a tree walker? can't find one: https://twitter.com/thlorenz/status/373476668142522369
16:08:35  <kenansulayman>Can't you just do that recursively?
16:09:02  <kenansulayman>Although I find that extremely inefficient
16:13:29  * jerrysvjoined
16:25:21  * fb55joined
16:32:17  <levelbot>[npm] [email protected] <http://npm.im/polyclay>: a schema-enforcing model class for node with optional key-value store persistence (@ceejbot)
16:35:28  * Acconutjoined
16:38:05  * Acconutquit (Client Quit)
16:49:44  * dominictarrquit (Quit: dominictarr)
16:50:03  * tmcwquit (Remote host closed the connection)
16:50:35  * tmcwjoined
16:52:22  * rickbergfalkquit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
16:55:18  * tmcwquit (Ping timeout: 264 seconds)
16:56:05  * fb55quit (Remote host closed the connection)
17:03:20  * fb55joined
17:04:47  * fb55quit (Remote host closed the connection)
17:09:15  * fb55joined
17:11:11  * i_m_cajoined
17:16:15  * mikealquit (Quit: Leaving.)
17:17:25  * i_m_caquit (Ping timeout: 248 seconds)
17:21:07  * tmcwjoined
17:22:00  * thlorenzquit (Remote host closed the connection)
17:24:50  * tmcwquit (Remote host closed the connection)
17:25:06  * tmcwjoined
17:29:03  * fb55quit (Remote host closed the connection)
17:30:19  * thlorenzjoined
17:32:10  * Acconutjoined
17:32:27  * thlorenzquit (Remote host closed the connection)
17:35:45  * ryan_ramagejoined
17:36:49  * Acconutquit (Client Quit)
17:37:27  * thlorenzjoined
17:38:11  * thlorenzquit (Remote host closed the connection)
17:44:14  * i_m_cajoined
17:46:07  * DTrejojoined
17:46:22  * jcrugzzjoined
17:49:56  * dguttmanpart
17:50:23  * dguttmanjoined
17:50:48  * mikealjoined
17:52:06  * jcrugzzquit (Ping timeout: 264 seconds)
17:53:20  * esundahlquit (Read error: Connection reset by peer)
17:54:20  * esundahljoined
17:58:54  * Acconutjoined
17:58:55  * Acconutquit (Client Quit)
18:00:00  * i_m_caquit (Ping timeout: 268 seconds)
18:09:19  * dominictarrjoined
18:19:36  * dguttmanquit (Ping timeout: 245 seconds)
18:24:22  * dguttmanjoined
18:24:36  * dguttmanpart
18:30:54  * jxsonjoined
18:33:47  * julianduquejoined
18:35:00  * julianduquequit (Client Quit)
18:35:40  * julianduquejoined
18:44:31  * esundahlquit (Remote host closed the connection)
18:44:58  * esundahljoined
18:45:23  * julianduquequit (Quit: leaving)
18:46:08  * julianduquejoined
18:47:19  * soldairjoined
18:49:54  * esundahlquit (Ping timeout: 264 seconds)
18:51:03  * dominictarrquit (Quit: dominictarr)
18:51:50  * fritzyjoined
18:53:12  <fritzy>Howdy people. Sorry for tweeting @ you before looking at the support section rvagg. I can see that levelup supports streaming multiple values, but I was wondering if there was a way to do a stream of a single large value? I failed to see a way to do that with writeStream.
18:53:40  <kenansulayman>fritzy Use level-store for that
18:54:05  <kenansulayman>juliangruber Did a great job with it
18:54:36  <fritzy>kenansulayman: thanks, I'll check it out. If I were to use both, I presume I'd have to disconnect with one before connecting with the other.
18:54:57  <kenansulayman>Well what do you want to do?
18:55:48  <fritzy>oh, key-value stuff, I guess. ;) I guess I could just port everything to level-store.
18:56:03  * esundahljoined
18:56:16  <kenansulayman>fritzy No. I am speaking of the actual application
18:56:34  <kenansulayman>Why can't you write using level?
18:57:07  <fritzy>I've got a directory db implemented on top of levelup. Most values are just json or msgpack values, but sometimes I want to store a larger file, and I don't want to buffer it from the http server before writing it.
18:57:52  <fritzy>the occasional 20MB file would be bad for the service if they started stacking under any significant load.
18:58:23  <kenansulayman>Is it so huge that you can't load it to RAM in order to write it?
18:59:24  <fritzy>I'd rather store it as I read it off of the http socket rather than load it in ram then write it. Yes, it could potentially be big enough to be a problem if multiple clients are doing that same thing.
18:59:37  <kenansulayman>juliangruber How do you pipe a file to Level?
19:00:12  <fritzy>kenansulayman: docs look pretty obv to me at https://github.com/juliangruber/level-store so I'm good now. :)
19:00:13  <fritzy>thanks.
19:00:25  <kenansulayman>fritzy Well yes and no
19:00:36  <fritzy>level-store looks like it does exactly what I want
19:00:36  <kenansulayman>fritzy https://github.com/juliangruber/level-fs
19:01:10  <fritzy>oh, interesting
19:01:39  <fritzy>kenansulayman: looks like level-fs uses level-store. I can bend it to my will.
19:02:04  <kenansulayman>fritzy We're deploying level-store in production. But I just want to make sure it fits you best :)
19:04:44  <fritzy>thanks kenansulayman, I'll give it a go
19:04:54  <kenansulayman>Ok :)
19:08:42  * dominictarrjoined
19:13:52  * Acconutjoined
19:13:54  * Acconutquit (Client Quit)
19:15:02  * timoxleyquit (Remote host closed the connection)
19:17:31  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
19:29:24  * thlorenzjoined
19:36:21  * DTrejoquit (Remote host closed the connection)
19:40:49  * jmartinsquit (Quit: Communi 2.2.0)
19:44:13  * jmartinsjoined
19:51:09  * dominictarrquit (Quit: dominictarr)
19:56:29  * Acconutjoined
19:57:03  * Acconutquit (Client Quit)
20:05:09  * kenansulaymanjoined
20:15:18  * dominictarrjoined
20:17:01  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
20:17:13  * kenansulaymanjoined
20:32:08  <mbalho>dominictarr: i think i need a module that works like this: nsj = require('newline-separated-json'); incoming.pipe(nsj.parse()); objectstream.pipe(nsj.stringify()); does that make sense?
20:32:26  <mbalho>dominictarr: ive found lots of modules that do pieces of this but none that do this one thing well, plan is to just tie them all together
20:32:28  <dominictarr>totally
20:32:50  <mbalho>dominictarr: do you have any shorter names than newline-separated-json
20:33:00  <mbalho>ive seen people refer to it as 'streaming json' but that is misleading
20:33:12  <dominictarr>yeah, so there was a mailing list thread about this the other week
20:33:16  <dominictarr>oh, mbalho actually
20:33:34  <dominictarr>I use stream-serializer
20:33:56  <dominictarr>it wraps the stream so that on the outside it's json lines
20:34:02  <dominictarr>but on the inside it's objects
20:34:10  <mbalho>that needs to use require('os').EOL instead: https://github.com/dominictarr/stream-serializer/blob/master/index.js#L33
20:34:12  <dominictarr>serializer(through(…))
20:34:13  <dominictarr>like that
20:34:37  <dominictarr>will merge
20:35:08  <dominictarr>or make it /\r?\n/
20:35:29  <dominictarr>and then use EOL to join lines
20:35:43  <mbalho>oh yea thats what i meant sorry
20:36:14  <dominictarr>mbalho: also, there was a mailing list thread on what to call that content type
20:36:37  <dominictarr>it did seem to come to a consensus… but I forget what it was
20:36:46  <mbalho>nodejs mailing list?
20:36:50  <dominictarr>yeah
20:37:03  <dominictarr>I think it was ndj or something
20:37:31  <mbalho>ah https://groups.google.com/forum/#!topic/nodejs/0ohwx0vF-SY
20:39:40  <mbalho>LDJSON
20:39:46  <mbalho>(via the wikipedia page)
20:43:04  <mbalho>dominictarr: can you fix stream-serializer so it supports a.pipe(serializer.json()).pipe(b)
20:44:20  * jxsonquit (Remote host closed the connection)
20:45:10  <dominictarr>where a is text and b is objects?
20:45:38  <mbalho>dominictarr: a is objects
20:45:45  <dominictarr>mbalho: what you want is about 10 lines using split
20:45:59  <mbalho>dominictarr: 10 lines is enough for a module, which is why i was gonna write one :P
20:46:13  <dominictarr>mbalho: what about objects.pipe(LDJSON.stringify()).pipe(text)
20:46:39  <mbalho>dominictarr: i thought thats what stream-serializer was for, but if you think thats out of scope i'll write the ldjson module
20:46:40  <dominictarr>mbalho: sure.
20:47:02  <jerrysv>dominictarr: my rtc ticket is secured
20:47:25  <dominictarr>I think the reason it isn't like that already is because all the stream stuff ive written (scuttlebutt, mux-demux etc)
20:47:28  <dominictarr>is duplex
20:48:36  <dominictarr>I used to tell people to do tcp.pipe(es.split()).pipe(es.parse()).pipe(scutt.createStream()).pipe(es.stringify()).pipe(tcp)
20:48:48  <dominictarr>but people complained about "boilerplate"
20:48:52  <dominictarr>jerrysv: nice!
20:49:38  <mbalho>dominictarr: how does removing pipe get rid of boilerplate?
20:50:05  <dominictarr>mbalho: now it's just one line tcp.pipe(stream).pipe(tcp)
20:50:30  <dominictarr>mbalho: but your situation is quite different, because you are writing transforms
20:50:40  <mbalho>well two lines, var stream = require('stream-serializer').json(inputStream)
20:50:41  <dominictarr>and not duplex
20:51:14  <dominictarr>mbalho: most of the modules just use stream-serializer internally
20:51:28  <dominictarr>so you can pipe it down a text stream and it just works.
20:51:55  <dominictarr>jerrysv: I have been reading the realtimeconf cyberpunk thriller http://2013.realtimeconf.com/part-one
20:51:58  <dominictarr>http://2013.realtimeconf.com/part-two
20:52:02  <dominictarr>pretty good
20:52:14  <jerrysv>dominictarr: oooo. will have to read those - we all got our countries set up today
21:04:52  <dominictarr>mbalho: http://wtfviz.net/
21:16:13  * rickbergfalkjoined
21:18:43  * ryan_ramagequit (Quit: ryan_ramage)
21:19:07  <mbalho>ok https://npmjs.org/package/ldjson-stream
21:19:36  <mbalho>dominictarr: saw that, i actually thought they missed the point on the "Disconnected subway map? Sequential, linear relationships?"
21:20:23  <mbalho>dominictarr: its obviously a data science skill map, makes sense to me. if i was the author of it and it got posted to wtfviz i would probably be sad
21:20:28  <jerrysv>mbalho: does that work in the browser as well? (sans browserify?)
21:20:33  <mbalho>jerrysv: yea
21:20:38  <jerrysv>awesome
21:21:39  * ryan_ramagejoined
21:22:08  <dominictarr>jerrysv: there are two types of front end developers: ones who use browserify, and ones who don't use browserify yet.
21:22:57  <dominictarr>mbalho: you could alias serialize to stringify
21:23:10  <mbalho>will do
21:23:13  <dominictarr>then it's the same as JSON and JSONStream
21:25:39  <jerrysv>dominictarr: ha.
21:26:24  <dominictarr>jerrysv: your life gets much easier when you give in and just use browserify
21:26:31  * mikealquit (Quit: Leaving.)
21:27:58  * mikealjoined
21:28:21  <substack>mbalho: http://trephine.org/t/index.php?title=Newline_delimited_JSON
21:29:09  <substack>but https://en.wikipedia.org/wiki/Line_Delimited_JSON#MIME_Type_and_File_Extensions
21:30:51  <mbalho>the obvious answer is newline/line-separated/delimited-json or nlsdjson
21:33:58  * esundahlquit (Remote host closed the connection)
21:34:24  * esundahljoined
21:36:38  * esundahl_joined
21:38:42  * esundahlquit (Ping timeout: 240 seconds)
21:39:57  <mbalho>dominictarr: stream question: if i wanted to do module.exports = function transform(){} so you could stdin.pipe(require('./transform')()).pipe(stdout)
21:40:19  <mbalho>but transform needs to do a.pipe(b).pipe(c)
21:40:36  <dominictarr>mbalho: that would break if you used it more than once a process.
21:40:52  <dominictarr>require('./transform')()
21:41:04  <mbalho>dominictarr: thats what i did
21:41:15  * Acconutjoined
21:41:30  <mbalho>dominictarr: question is: if i "return a.pipe(b).pipe(c)" then stdin goes to c, but if i "return a" then c doesnt go to stdout
21:42:01  <mbalho>have you done a similar thing before?
21:43:14  * Acconutquit (Client Quit)
21:44:41  <dominictarr>mbalho: this is what stream-combiner is for
21:44:50  <dominictarr>combine(a, b, c)
21:45:18  <dominictarr>returns one stream where write -> a.write; c.on('data') -> emit('data',...)
21:46:36  * mikealquit (Quit: Leaving.)
21:47:14  <mbalho>w00t worked
21:50:24  * Acconutjoined
21:51:34  * ednapiranhajoined
21:52:00  * Acconutquit (Client Quit)
21:54:51  <mbalho>https://github.com/maxogden/ldjson-csv
21:54:53  <mbalho>:D
21:58:23  <mbalho>jerrysv: npm install dat ldjson-csv -g; mkdir foo; cd foo; dat init; curl http://apps.npr.org/playgrounds/npr-accessible-playgrounds.csv | ldjson-csv | dat
22:02:22  <jerrysv>mbalho: Error: Cannot find module 'through' :(
22:02:28  <mbalho>dangit! messed up dependencies
22:03:13  <mbalho>jerrysv: install ldjson-csv again :D
22:03:44  * i_m_cajoined
22:04:44  <jerrysv>net.js:612
22:04:44  <jerrysv> throw new TypeError('invalid data');
22:04:44  <jerrysv> ^
22:04:45  <jerrysv>TypeError: invalid data
22:04:48  <mbalho>dangit
22:07:16  <mbalho>jerrysv: what node version/os?
22:07:25  <mbalho>oh wait
22:07:28  <mbalho>just reproduced
22:08:47  <levelbot>[npm] [email protected] <http://npm.im/dat>: data sharing and replication tool (@maxogden)
22:08:52  <mbalho>o/ was the problem lol
22:09:19  <mbalho>jerrysv: if you npm install dat -g it should work
22:15:18  * i_m_caquit (Ping timeout: 268 seconds)
22:16:19  <jerrysv>mbalho: awesome!
22:16:22  <jerrysv>works
22:17:00  <jerrysv>i have terraformer streaming geostore results from leveldb
22:17:18  <jerrysv>i'll update my demo over the weekend
22:20:23  * thlorenzquit (Remote host closed the connection)
22:25:39  <jerrysv>dominictarr: you were using a myvu crystal at the hardware hack, weren't you?
22:26:00  <dominictarr>I could never get it working though :(
22:28:27  <jerrysv>oh. hrm.
22:29:50  * fb55joined
22:42:08  * tmcwquit (Remote host closed the connection)
22:42:40  * tmcwjoined
22:47:01  * tmcwquit (Ping timeout: 248 seconds)
22:47:14  * jxsonjoined
22:48:46  * tmcwjoined
22:48:57  * tmcwquit (Remote host closed the connection)
22:49:30  * tmcwjoined
22:49:58  * fallsemoquit (Quit: Leaving.)
22:51:34  <mbalho>rvagg: if i npm install level at the moment does it install the memory leak fixed version?
22:51:48  <rvagg>mbalho: yes
22:51:48  * ryan_ramagequit (Quit: ryan_ramage)
22:51:56  <mbalho>sweet
22:51:58  <rvagg>`npm ls | grep nan` should be 0.3.2
22:52:33  * fb55quit (Remote host closed the connection)
22:54:06  * tmcwquit (Ping timeout: 264 seconds)
22:55:34  * kenansulaymanquit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
22:56:34  * ryan_ramagejoined
22:59:56  <mbalho>i need to figure out a good way to automatically split write streams into batches that are at or under the write buffer size
23:01:02  <mbalho>maybe millionsOfObjects.pipe(require('buffered-batch-stream')(db))
23:01:50  <mbalho>and it just counts aggregate object size and when it is going to exceed write buffer size it writes a batch and returns false (backpressure) until the batch write finishes
23:01:51  * fritzyquit (Remote host closed the connection)
23:04:17  <levelbot>[npm] [email protected] <http://npm.im/dat>: data sharing and replication tool (@maxogden)
23:05:19  <rvagg>so mbalho is it going to be ok to keep these things in memory, buffered, or do you want back-pressure?
23:05:38  <mbalho>i'd like backpressure
23:05:43  <rvagg>do we just need to make sure that levelup's writestream provides proper backpressure?
23:05:53  <rvagg>yeah, so one of those competing writestream implementations needs to get merged
23:06:07  <rvagg>not sure what the holdup is, I'm thinking I should just make a 3rd one
23:06:17  <mbalho>im not aware of these, where are they?
23:06:40  <rvagg>https://github.com/rvagg/node-levelup/pull/165
23:06:45  <rvagg>I think that's the one that ought to get in
23:07:00  <mbalho>rvagg: try out this to reproduce the problem (huge batches causing 100% cpu pegging and 2gb+ memory usage) https://github.com/maxogden/dat/wiki
23:07:02  <rvagg>substack's simpler one is here: https://github.com/rvagg/node-levelup/pull/177
23:07:17  <mbalho>rvagg: im pretty sure if it did intelligent batching it would fix the problem
23:07:26  * wolfeidauquit (Remote host closed the connection)
23:07:47  <rvagg>yeah, I'm not convinced we're there yet with either of those writestream replacements but we need to get to streams2 first and then tune as we go
23:08:10  * ryan_ramagequit (Quit: ryan_ramage)
23:10:26  <mbalho>ya i'd rather see third party modules like the buffered-batch-stream i described above
23:10:53  <mbalho>so we can try to figure out the best way to get good bulk write throughput without having to comply with levelup's test suite and api requirements etc
23:11:13  * no9joined
23:11:17  * fallsemojoined
23:12:33  * no9quit (Client Quit)
23:13:44  <mbalho>rvagg: after reading both of those PRs, neither of them measures buffer byte size but rather the number of operations
23:13:58  <mbalho>rvagg: number of operations doesnt matter, its the byte size that makes the perf difference
23:14:06  <mbalho>rvagg: (generally speaking)
23:14:23  <rvagg>mbalho: it really ought not matter, the backpressure should come out of leveldb which won't return the batch() callbacks until the batch has completed
23:14:44  <rvagg>what we need is for the writestream to queue up enough writes to make a batch worthwhile, but not waste time doing so
23:14:50  <rvagg>(not waste too much time)
23:15:04  <rvagg>that was the intent of the current writeStream() but 'within tick' is not enough
23:15:16  <rvagg>needs to be 'within 10ms' or something a bit larger than a tick
23:15:30  <rvagg>mbalho: hang on, I have a PR too that you could try on your problem
23:15:58  <rvagg>https://github.com/rvagg/node-levelup/pull/153
23:16:07  <rvagg>that was the original PR in this story
23:16:22  <mbalho>rvagg: i am not sure batches work like that. leveldb returns the batch callbacks after it completes the batch, yes, but in my experience CPU gets pegged if you write huge batches
23:16:56  <rvagg>I think the next batch() should suffer/delay if you're dealing with a compaction
23:17:14  <rvagg>perhaps we should just remove writestream and leave this to userland
23:17:19  <mbalho>like if you write a 1gb batch, you'll get a callback after some number of seconds, but do another one and there will be weird long pauses
23:17:33  * ryan_ramagejoined
23:17:34  * ryan_ramagequit (Client Quit)
23:17:36  <mbalho>and CPU usage will hit 100%, and memory usage will be large
23:17:50  <mbalho>overall write throughput with huge batches is really low compared to many small ones
23:18:19  <mbalho>its almost like... leveldb says 'yes the batch is done' but i am not sure if that means 'give me more data now' necessarily
23:18:29  <mbalho>i think you have to be cautious as to not write too much data
23:18:30  <rvagg>mbalho: is it possible for you to write up a benchmark test case for this large-data situation that we can test these implementations against?
23:19:12  <rvagg>mbalho: this is the simplistic benchmark we're referencing: https://github.com/rvagg/node-levelup/blob/master/test/benchmarks/stream-bench.js
23:20:20  <mbalho>yea i would wager that if you did 4mb batches instead of 10 in that it would perform better
23:20:48  * rickbergfalkquit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
23:21:26  <mbalho>i'll make a pure levelup one, most of the benchmarks ive done lately have been dat specific
23:21:58  <mbalho>my working hypothesis is that you have to respect backpressure in order to achieve maximum write throughput
23:22:18  <mbalho>e.g. if your batches are too large you're gonna have a bad time
23:37:07  * soldairquit (Ping timeout: 250 seconds)
23:40:54  * thlorenzjoined
23:54:27  * esundahl_quit (Remote host closed the connection)
23:54:54  * esundahljoined
23:57:56  * rvaggtopic: Respect the backpressure, man
23:59:07  * esundahlquit (Ping timeout: 240 seconds)