00:00:08  <brycebaril>hmm, not getting the same values out that I put in when using http://npm.im/varint to encode and protobuf.js#readVarInt32 to decode
00:00:14  <brycebaril>At least not for all values
00:01:00  <brycebaril>Oh, one looks signed, one unsigned
00:01:33  <mbalho>i think it uses uint8
00:01:55  <mbalho>oh nvm
00:01:58  <mbalho>i see what you're saying
00:03:53  <brycebaril>They differ past Math.pow(2, 31) - 1
00:04:20  <rvagg>use readVarInt64 perhaps
00:04:53  <rvagg>mine are tested against the internal leveldb implementation https://github.com/rvagg/leveljs-coding/blob/master/test/pbuf.cc
00:05:34  <brycebaril>Yeah, that works.
00:06:01  <rvagg>varint has encode and decode tho, just use those I guess
00:06:09  <rvagg>require('varint/encode')
00:06:53  <brycebaril>Yeah but your protobuf.js is pretty much exactly what a varint-multibuffer implementation would need in terms of both input and output
00:07:48  <mbalho>brycebaril: im tryin to hack in varint now :P
00:08:12  <mbalho>feel free to also do so, im still n00b at binary crap so this is fun for me
00:08:59  <brycebaril>mbalho: cool. I need to go make some soup for the family, but I'm guessing between varint and rvagg's reader we have the tools needed to make a varint-multibuffer
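The mismatch brycebaril hit above (values diverging past Math.pow(2, 31) - 1) is the classic JavaScript trap: bitwise operators coerce their operands to signed 32-bit integers. A minimal varint (LEB128) encode/decode sketch that sidesteps this by using division and multiplication instead of shifts; this is illustrative only, not the actual source of the varint module or protobuf.js:

```javascript
// Minimal protobuf-style varint (LEB128) sketch. Each byte carries the
// low 7 bits of the number; the high bit (0x80) says "more bytes follow".
// Division/multiplication are used instead of >> and << because JS
// bitwise ops truncate to signed 32 bits, which is exactly the mismatch
// discussed above.
function encode(num) {
  var bytes = []
  while (num >= 0x80) {
    bytes.push((num % 0x80) | 0x80) // low 7 bits, continuation bit set
    num = Math.floor(num / 0x80)
  }
  bytes.push(num) // final byte, continuation bit clear
  return Buffer.from(bytes)
}

function decode(buf) {
  var result = 0, factor = 1
  for (var i = 0; i < buf.length; i++) {
    result += (buf[i] & 0x7f) * factor
    if (!(buf[i] & 0x80)) break // continuation bit clear: done
    factor *= 128
  }
  return result
}
```

With this scheme decode(encode(n)) round-trips cleanly well past 2^31, up to Number.MAX_SAFE_INTEGER territory, where a shift-based implementation would go negative.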
00:09:39  <mbalho>why is it that everything i do always ends up already having been done by jeff dean 10 years ago
00:31:17  <mbalho>brycebaril: ok i hooked up varint instead of writeUint32BE, here are the 3 benchmarks before i changed anything
00:31:20  <mbalho>Macintosh:bench max$ node csv.js
00:31:22  <mbalho>4761387 bytes in 50001 chunks in 954 ms
00:31:25  <mbalho>Macintosh:bench max$ node raw.js
00:31:27  <mbalho>4811388 bytes in 74 chunks in 38 ms
00:31:30  <mbalho>Macintosh:bench max$ node pack.js
00:31:32  <mbalho>9161475 bytes in 50001 chunks in 1334 ms
00:31:34  <mbalho>and after:
00:31:37  <mbalho>Macintosh:bench max$ node csv.js
00:31:39  <mbalho>4761387 bytes in 50001 chunks in 963 ms
00:31:42  <mbalho>Macintosh:bench max$ node raw.js
00:31:45  <mbalho>4811388 bytes in 74 chunks in 40 ms
00:31:47  <mbalho>Macintosh:bench max$ node pack.js
00:31:49  <mbalho>4811388 bytes in 50001 chunks in 1534 ms
00:34:54  <mbalho>https://github.com/maxogden/multibuffer/tree/varint, havent fixed the test suite (it has lots of hardcoded 4 byte test buffers) and also didnt update readPartial
00:36:25  <mbalho>brycebaril: if you approve i'll finish it up and send a pull req
00:36:29  <brycebaril>I'll check it out in a bit, readPartial is used by multibuffer-stream
00:36:41  <mbalho>ah gotcha
00:36:54  <mbalho>i have to bike home, will be back online later
00:37:06  <brycebaril>ok, I'm busy making & then eating dinner for a while myself
02:20:45  <brycebaril>mbalho: commented on your commit. But overall I'm happy with moving multibuffer to use varint prefixes instead of fixed-width ones.
02:26:18  <mbalho>chrisdickinson: do you have any feedback on why you chose the varint.write and varint.ondata API over something like https://github.com/rvagg/leveljs-coding/blob/master/protobuf.js ?
02:29:49  <mbalho>brycebaril: before i go and write another varint module with a simpler api i wanna find out if/why the existing varint api exists. i dont personally have any objections to it, are yours based on aesthetics or performance?
02:36:56  <brycebaril>partially aesthetics, but I'm thinking there has to be performance implications of using events for something like that
02:37:47  <mbalho>yea i hear ya, its possible, though i imagine since theyre all sync functions that seem to get inlined that its still pretty fast
02:38:45  <mbalho>brycebaril: there is also require('varint/decode') which doesnt use eventemitter and just uses a callback
02:38:59  <rvagg>hm, why does it use a callback?
02:39:13  <rvagg>there's nothing async about any of this
02:39:45  <mbalho>sorry i guess its not a callback in the proper sense, it just calls a function with the result
02:40:00  <mbalho>you feed it 1 byte at a time rather than giving it the entire buffer
02:40:48  <mbalho>https://github.com/chrisdickinson/varint/blob/master/decode.js
02:41:01  <mbalho>i doubt there are any noticeable perf differences
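The decode.js linked above is byte-at-a-time: you feed it one byte per call and a result function fires synchronously once the continuation bit clears, which is why it feels callback-ish even though nothing is async. A conceptual sketch of that shape (assumed, not the module's actual source):

```javascript
// Conceptual sketch of a byte-at-a-time varint decoder (illustrative
// shape only, not the actual varint/decode.js source). onValue is called
// synchronously when a complete varint has been seen.
function decoder(onValue) {
  var result = 0, factor = 1
  return function write(byte) {
    result += (byte & 0x7f) * factor            // accumulate low 7 bits
    if (byte & 0x80) { factor *= 128; return }  // more bytes coming
    onValue(result)                             // sync "callback"
    result = 0
    factor = 1                                  // reset for the next varint
  }
}

// usage: 300 encodes as [0xac, 0x02]
var got
var write = decoder(function (n) { got = n })
write(0xac)
write(0x02) // got is now 300
```

The upside of this design is that it handles varints split across stream chunk boundaries for free; the downside is the per-byte function call the returned-array style avoids.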
02:54:05  <mbalho>yes so switching to require('varint/encode') and require('varint/decode') is actually a performance increase from the baseline
03:01:54  <brycebaril>mbalho: how are you using 'varint/decode' directly? Did you assign it an ondata function?
03:02:48  <mbalho>yep
03:03:10  <mbalho>https://github.com/maxogden/multibuffer/commit/460672a464478362db99962d37fa5ca62685451f
03:07:09  <rvagg>hm, that callable function is a bit awkward in decode, I wonder if there's any perf impact on not involving a callback
03:08:07  <rvagg>you could try require('leveljs-coding/protobuf') and use readVarInt32 if you want to try using the returned array approach instead
03:08:36  <mbalho>word
03:09:21  <mbalho>i dont think itd be worth it to write another module when the current one is working fine, but if you wanna break protobuf out into its own varint module be my guest
03:10:16  <rvagg>perhaps when I get there with the full leveldb coding stuff, I need to write varints too
03:10:21  <rvagg>and other protobuf stuff
03:10:35  <rvagg>for now, all that coding stuff is going into a single package
03:10:40  <brycebaril>I'm writing a benchmark comparing them atm
03:11:25  <mbalho>i bet we can send a persuasive pull request to varint if we come up with a faster implementation
03:13:17  <brycebaril>[email protected]:/tmp$ time node varintDecode.js
03:13:17  <brycebaril>real 0m1.197s
03:13:17  <brycebaril>user 0m1.132s
03:13:17  <brycebaril>sys 0m0.068s
03:13:17  <brycebaril>[email protected]:/tmp$ time node probuf.js
03:13:17  <brycebaril>real 0m0.860s
03:13:17  <brycebaril>user 0m0.820s
03:13:18  <brycebaril>sys 0m0.040s
03:13:19  <rvagg>brycebaril: where are you located?
03:13:25  <brycebaril>Seattle, WA
03:13:35  <mbalho>brycebaril: nice
03:13:55  <mbalho>brycebaril: how many buffers is that?
03:14:00  <brycebaril>1e6
03:14:14  <rvagg>ah, mbalho et al. are doing a level* meetup in Nov, if you think you might want to head down to SF to help out you should let them know cause I'm sure they could do with other pros
03:14:16  <brycebaril>essentially just doing varint decode on 0..1_000_000
03:14:25  <mbalho>ah gotcha
03:14:42  <mbalho>well its gonna be a nodeschool.io IRL event at github
03:14:56  <brycebaril>that's cool, when is it at?
03:15:08  <rvagg>oh well, I'm sure brycebaril would be helpful in either case
03:15:30  <mbalho> thursday, november 21st from 7 - 9:30PM
03:15:39  <mbalho>dunno if its worth a SF trip but hey ya never know
03:15:58  <brycebaril>rvagg: I got a beaglebone black in the mail last week, going to be playing with making an IoT node backed by timestreamdb. should be fun
03:16:09  <rvagg>nice
03:16:20  <rvagg>those are nice devices, they have onboard storage don't they?
03:16:45  <brycebaril>something like 2gb onboard, but it is class 4 I think
03:16:50  <rvagg>sd card slows the pi down a bit too much
03:17:02  <brycebaril>I'll probably grab a class 10 sd to compare
03:17:18  <brycebaril>I got it over the rpi because it has all the direct gpio
03:17:54  <wolfeidau>brycebaril: I would use a beagle bone because it has eMMC and a ton more GPIOs :)
03:18:19  <wolfeidau>eMMC == 10 x faster IO than SDcards in my experience
03:18:23  <brycebaril>wolfeidau: that's what my thinking was when I got the BBB. Plus my initials are BBB so that helped :)
03:18:36  <brycebaril>wolfeidau: good to know, I'll definitely be trying to write fast to leveldb on it
03:18:57  <wolfeidau>brycebaril: yeah BBB is an awesome device much much faster access times as well
03:19:39  <wolfeidau>Also much easier to hack on if you run the ubuntu based images
03:20:11  <brycebaril>I haven't replaced the stock image yet
03:22:04  <wolfeidau>brycebaril: We are using these ones http://www.armhf.com/
03:22:58  <wolfeidau>brycebaril: if you want to use the GPIOs make sure you read this https://github.com/jadonk/validation-scripts/tree/master/test-capemgr
03:23:12  <wolfeidau>Explains device trees
03:23:18  <wolfeidau>How what where and how :)
03:23:31  <wolfeidau>Who what where and how even
03:23:36  <brycebaril>ahh, excellent :)
03:24:47  <rvagg>then if all else fails you just bother wolfeidau who will figure it out if he doesn't know
03:28:39  <mbalho>lol
04:05:05  <mbalho>brycebaril: can you gist that benchmark you made?
04:16:12  <levelbot>[npm] [email protected] <http://npm.im/continuous-storage>: Store a continuous ndarray in a level.js/levelup database (@hughsk)
05:36:30  <brycebaril>mbalho: https://gist.github.com/brycebaril/6913533
05:37:05  <mbalho>thx
05:39:01  <brycebaril>That one is slightly different than the one I timed above, the other one first created all the buffers, this time I saved them all to a file and they just read the file.
06:16:53  <mbalho>ok updated https://github.com/maxogden/varint/tree/buffer-read, send a PR
13:52:42  <levelbot>[npm] [email protected] <http://npm.im/daily>: daily - A LevelDB based logging system (@andreasmadsen)
15:48:12  <levelbot>[npm] [email protected] <http://npm.im/level-prefix>: Get prefixed databases (@juliangruber)
17:36:56  <thlorenz>dominictarr: so my first attempt at reproing this failed: https://github.com/thlorenz/repro-livestream-sublevel-issue/blob/master/index.js
17:37:02  <thlorenz>this works without a problem
17:37:24  <thlorenz>so either it's related to index hooks or the fact that I used it with multilevel
17:37:40  <dominictarr>thlorenz: yeah, I'm thinking multilevel
17:38:11  <thlorenz>ah, makes sense, since it had to do with json-buffer which is used by muxdemux ^ juliangruber
17:39:06  <thlorenz>ok shelving this for now
17:41:33  <dominictarr>thlorenz: hmm, the manifest for it looks good… must be in the multilevel client
17:41:42  <dominictarr>although, the server should validate
17:42:07  <thlorenz>dominictarr: it happens only once I actually add some data which triggers live-stream to update
17:42:21  <chrisdickinson>mbalho: taking a look at the varint changes -- am i right in assuming that this changes the public api?
17:42:42  <dominictarr>ramitos: houdy!
17:42:54  <thlorenz>I think at that point some of the pieces (sublevel??) get deserialized and they are circular
17:42:59  <ramitos>dominictarr hey :D
17:43:56  <ramitos>dominictarr have you seen https://github.com/kordon/ase#cli-example
17:44:38  <dominictarr>ramitos: yes
17:45:17  <dominictarr>I just got this working: https://github.com/dominictarr/merkle-stream
17:45:31  <dominictarr>which can be used to create secure replication...
17:46:54  <dominictarr>after seeing this talk http://www.youtube.com/watch?v=T4DgxvS9Xho
17:47:29  <dominictarr>I kinda of have an urge to build an immutable blob store alongside leveldb that works for large blobs
17:47:36  <dominictarr>(level is not optimised for large)
17:48:44  <dominictarr>you could use that as a really simple storage plugin for your dynamo clone, becasue you wouldn't need versioning
17:50:54  <ramitos>dominictarr your link to the merkle tree gives a 404. You meant https://github.com/dominictarr/merkle
17:50:57  <ramitos>?
17:51:00  <ryan_ramage>dominictarr: merkle-stream looks interesting
17:52:00  <dominictarr>ramitos: oh, thanks I have to rename that repo
17:52:22  <dominictarr>fixed
17:53:10  <ramitos>dominictarr have you seen this? https://github.com/pgte/level-vectorclock
17:54:02  <dominictarr>ramitos: aha, no I havn't seen that
17:54:17  <ramitos>I'll probably use it
17:57:23  <ramitos>dominictarr so, the merkle trees can be used to propagate the values to the replicas right?
17:58:03  <dominictarr>ramitos: in dynamo, merkle trees are used for anti-entropy
17:58:22  <dominictarr>for resynchronizing replicas after a partition or other failure
17:58:53  <ramitos>for the hinted-handoff stuff, right? I don't remember that part very well (I need to read the paper again)
17:59:29  <dominictarr>ah, it's a process that runs periodically
17:59:43  <dominictarr>I need to refresh on hinted handoff
18:00:02  <dominictarr>but it's like the scuttlebutt layer
18:00:18  <dominictarr>except scuttlebutt replicates the list of nodes to all nodes
18:00:33  <ramitos>iirc, when a node fails, another node starts getting the data that was supposed to go to the node that failed
18:01:02  <dominictarr>oh, right - and then they send it back when a new node is there?
18:01:13  <ramitos>yes
18:01:21  <ramitos>handoff
18:01:22  <dominictarr>but usually there are 3 nodes responsible for each bit of data, right?
18:01:26  <ramitos>yes
18:01:33  <dominictarr>is this for when that fails?
18:01:43  <ramitos>so, each key has 3 nodes
18:02:03  <ramitos>when one of them fails, another node gets its data temporarily
18:02:15  <ramitos>when the node that failed gets online again
18:02:33  <ramitos>the other node will send the data it received to the node that went back online
18:02:45  <ramitos>hinted handoff
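The hinted handoff flow ramitos describes above can be sketched in a few lines. This is a toy illustration of the idea only; the node objects and helper names are invented for the example, not taken from dynamo or any real implementation:

```javascript
// Toy sketch of hinted handoff: writes destined for a down node are
// parked with a "hint" naming the intended recipient, then replayed
// when that node comes back online.
var hints = {} // nodeId -> queued writes for a node that is down

function write(node, key, value) {
  if (node.up) {
    node.store[key] = value // normal replica write
  } else {
    // park the write; another node holds it temporarily
    var queue = hints[node.id] || (hints[node.id] = [])
    queue.push({ key: key, value: value })
  }
}

function onNodeUp(node) {
  node.up = true
  // hand the parked writes off now that the node is back
  var queued = hints[node.id] || []
  queued.forEach(function (h) { node.store[h.key] = h.value })
  delete hints[node.id]
}
```

The hint queue covers transient failures; as the chat gets to below, the Merkle-tree comparison is the separate mechanism for permanent failures, where no hint holder exists.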
18:04:57  <ramitos>in the paper it's chapter 4.6
18:05:06  <ramitos>the merkle tree is in chapter 4.7
18:05:22  <ramitos>the merkle tree is used when the failure is permanent
18:05:32  <ramitos>I'm going to read that chapter again :)
18:06:22  <dominictarr>right - what about when you add a new node - does that use merkle tree also?
18:06:39  <dominictarr>I guess it could just bulk load the data the first time
18:09:32  <ramitos>dominictarr "Each node maintains a separate Merkle tree for each key range (the set of keys covered by a virtual node) it hosts. This allows nodes to compare whether the keys within a key range are up-to-date. In this scheme, two nodes exchange the root of the Merkle tree corresponding to the key ranges that they host in common. Subsequently, using the tree traversal scheme described above the nodes determine if they have any differences and perform the appropriate synchronization action."
18:10:57  <dominictarr>yes
18:11:00  <ramitos>yeah, when the node is new it should just bulk load all the data the first time
18:13:33  <ramitos>brb
19:06:12  <levelbot>[npm] [email protected] <http://npm.im/level-json-edit>: Taking editing json to the next level with multilevel. (@thlorenz)
19:42:32  <chrisdickinson>mbalho: it's amazing how slow buf.readUInt8 is compared to buf[i] :|
20:13:25  <mbalho>chrisdickinson: yea pretty interesting. luckily its the same api for Buffer and Uint8Array at least
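The gap chrisdickinson noticed is easy to see with a crude timing loop: readUInt8 does per-call argument and bounds checking that plain indexing skips (exact numbers vary a lot by Node version, so treat this as a sketch, not a proper benchmark):

```javascript
// Crude micro-benchmark: buf.readUInt8(i) vs buf[i]. Illustrative only;
// real comparisons should use a proper harness with warmup runs.
var buf = Buffer.alloc(1024, 1) // 1 KiB of 0x01 bytes
var sum = 0

function time(label, fn) {
  var start = process.hrtime()
  fn()
  var d = process.hrtime(start)
  console.log(label, ((d[0] * 1e9 + d[1]) / 1e6).toFixed(2), 'ms')
}

time('readUInt8', function () {
  for (var n = 0; n < 1e4; n++)
    for (var i = 0; i < buf.length; i++) sum += buf.readUInt8(i)
})

time('buf[i]', function () {
  for (var n = 0; n < 1e4; n++)
    for (var i = 0; i < buf.length; i++) sum += buf[i]
})
```

Both loops read the same bytes, so any difference in the printed times is pure call overhead; this is why hot decode loops like varint's tend to index directly.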
20:55:43  <mbalho>brycebaril: i think the multibuffer test suite needs to get pretty much rewritten, every single one of the tests has hardcoded 4 byte checks
20:57:02  <mbalho>brycebaril: well actually what do you think about changing all of the writeUInt32BE parts and replacing them with a varint factory kind of thing
21:01:44  <mbalho>brycebaril: actually i'll just start diving into it, it might not be as bad as i am thinking
21:48:41  <mbalho>w00t
22:04:11  <levelbot>[npm] [email protected] <http://npm.im/level-json-edit>: Taking editing json to the next level with multilevel. (@thlorenz)
22:32:09  <rvagg>"caolan starred rvagg/node-levelup 34 minutes ago" we may yet have level* backing for hoodie
22:32:28  <rvagg>I know people have been in their ears about this
22:32:39  <substack>rvagg: caolan was saying that in london
22:32:46  <substack>ghost too
22:33:06  <rvagg>cool, I was talking to @olizilla about this
22:33:23  <rvagg>oh, ghost... yeah but they're using jugglingdb which is a bit tricky to adapt to a pure k/v
22:33:40  <rvagg>I had a rant about ghost using sqlite when they first announced and I got into a bit of trouble so I've shut up about it
22:33:56  <substack>yeah sqlite is pretty silly
22:34:13  <substack>but they're targeting wordpress users who are already heavily invested into mysql
22:34:23  <rvagg>perhaps not, they get to define relational cruft and they can be backed by any db that jugglingdb can support, which is quite a few
22:34:29  <rvagg>perhaps not a bad path for ghost
22:34:40  <rvagg>except that sqlite is crap
22:34:49  <substack>yeah :/
22:35:02  <substack>for the "just npm install it" angle level is really ideal
22:35:14  <substack>not so much for integrating with existing legacy stacks
22:35:34  <rvagg>what we really need is for someone to attempt to adapt jugglingdb to level*, then everyone wins
22:38:43  * thlorenzjoined
22:41:41  <levelbot>[npm] [email protected] <http://npm.im/level-indico>: Simple indexing and querying for leveldb (@mariocasciaro)
23:07:25  <brycebaril>mbalho: multibuffer 2.0.0 published :)
23:29:15  <mbalho>w00t
23:32:44  <kenansulayman>rvagg
23:37:10  <kenansulayman>Hum, I give it up :)
23:48:02  <rvagg>kenansulayman: ?
23:49:04  * kenansulayman quit (Ping timeout: 264 seconds)
23:50:14  * kenansulayman joined
23:50:50  <kenansulayman>rvagg Ah there you are
23:51:22  <rvagg>for the moment
23:51:30  <kenansulayman>Regarding the lmdb implementation; did you already notice bugs? every test I tried went as wrong as possible
23:52:00  <rvagg>huh?
23:52:22  <kenansulayman>Just put it in as a test and it yielded several errors after the first writes
23:52:56  <kenansulayman>already a few days back, mom
23:53:18  <rvagg>you're getting errors using it? what sort of errors? what are you doing to get errors?
23:53:45  <rvagg>there's a test suite that hits it fairly hard, same as the leveldown suite, but there's always a good chance for problems with something that doesn't yet get much use
23:53:53  <rvagg>there's also a lot of tunables for lmdb so it's tricky to get it right
23:55:36  <kenansulayman>rvagg hm. did you recently push a new revision of lmdb?
23:55:44  <rvagg>no
23:56:03  <kenansulayman>rvagg let me get some test workload