00:00:08
| <brycebaril> | hmm, not getting the same values out that I put in when encoding with http://npm.im/varint and decoding with protobuf.js#readVarInt32 |
00:00:14
| <brycebaril> | At least not for all values |
00:01:00
| <brycebaril> | Oh, one looks signed, one unsigned |
00:01:33
| <mbalho> | i think it uses uint8 |
00:01:55
| <mbalho> | oh nvm |
00:01:58
| <mbalho> | i see what you're saying |
00:03:53
| <brycebaril> | They differ past Math.pow(2, 31) - 1 |
00:04:20
| <rvagg> | use readVarInt64 perhaps |
00:04:53
| <rvagg> | mine are tested against the internal leveldb implementation https://github.com/rvagg/leveljs-coding/blob/master/test/pbuf.cc |
00:05:34
| <brycebaril> | Yeah, that works. |
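For context, the mismatch above is a varint property: each byte carries 7 payload bits plus a continuation bit, so a signed 32-bit decode path overflows past Math.pow(2, 31) - 1. A minimal sketch of unsigned encode/decode (illustrative only, not the varint module's or protobuf.js's actual code):

```javascript
// Minimal unsigned varint (LEB128-style) encode/decode sketch.
// Each output byte holds 7 payload bits; the high bit means
// "more bytes follow".
function encode(num) {
  var out = []
  while (num >= 0x80) {
    out.push((num & 0x7f) | 0x80) // low 7 bits + continuation flag
    num = Math.floor(num / 128)   // divide, not >>>, to stay exact past 2^31
  }
  out.push(num)
  return Buffer.from(out)
}

function decode(buf) {
  var result = 0, shift = 1, i = 0, b
  do {
    b = buf[i++]
    result += (b & 0x7f) * shift // multiply, not <<, to avoid 32-bit overflow
    shift *= 128
  } while (b & 0x80)
  return result
}
```

Because decode multiplies instead of bit-shifting, it stays exact above 2^31, which is exactly where a signed 32-bit read goes wrong.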
00:06:01
| <rvagg> | varint has encode and decode tho, just use those I guess |
00:06:09
| <rvagg> | require('varint/encode') |
00:06:53
| <brycebaril> | Yeah but your protobuf.js is pretty much exactly what a varint-multibuffer implementation would need in terms of both input and output |
00:07:48
| <mbalho> | brycebaril: im tryin to hack in varint now :P |
00:08:12
| <mbalho> | feel free to also do so, im still n00b at binary crap so this is fun for me |
00:08:59
| <brycebaril> | mbalho: cool. I need to go make some soup for the family, but I'm guessing between varint and rvagg's reader we have the tools needed to make a varint-multibuffer |
00:09:39
| <mbalho> | why is it that everything i do always ends up already having been done by jeff dean 10 years ago |
00:31:17
| <mbalho> | brycebaril: ok i hooked up varint instead of writeUint32BE, here are the 3 benchmarks before i changed anything |
00:31:20
| <mbalho> | Macintosh:bench max$ node csv.js |
00:31:22
| <mbalho> | 4761387 bytes in 50001 chunks in 954 ms |
00:31:25
| <mbalho> | Macintosh:bench max$ node raw.js |
00:31:27
| <mbalho> | 4811388 bytes in 74 chunks in 38 ms |
00:31:30
| <mbalho> | Macintosh:bench max$ node pack.js |
00:31:32
| <mbalho> | 9161475 bytes in 50001 chunks in 1334 ms |
00:31:34
| <mbalho> | and after: |
00:31:37
| <mbalho> | Macintosh:bench max$ node csv.js |
00:31:39
| <mbalho> | 4761387 bytes in 50001 chunks in 963 ms |
00:31:42
| <mbalho> | Macintosh:bench max$ node raw.js |
00:31:45
| <mbalho> | 4811388 bytes in 74 chunks in 40 ms |
00:31:47
| <mbalho> | Macintosh:bench max$ node pack.js |
00:31:49
| <mbalho> | 4811388 bytes in 50001 chunks in 1534 ms |
00:34:54
| <mbalho> | https://github.com/maxogden/multibuffer/tree/varint, havent fixed the test suite (it has lots of hardcoded 4 byte test buffers) and also didnt update readPartial |
00:36:25
| <mbalho> | brycebaril: if you approve i'll finish it up and send a pull req |
00:36:29
| <brycebaril> | I'll check it out in a bit, readPartial is used by multibuffer-stream |
00:36:41
| <mbalho> | ah gotcha |
00:36:54
| <mbalho> | i have to bike home, will be back online later |
00:37:06
| <brycebaril> | ok, I'm busy making & then eating dinner for a while myself |
00:56:40
| * thlorenz | joined |
01:13:09
| * mikeal | quit (Quit: Leaving.) |
01:14:05
| * eugeneware | quit (Remote host closed the connection) |
01:17:52
| * kenansulayman | quit (Ping timeout: 264 seconds) |
01:19:41
| * kenansulayman | joined |
01:40:48
| * kenansulayman | quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈) |
02:02:01
| * ramitos | quit (Quit: Computer has gone to sleep.) |
02:20:45
| <brycebaril> | mbalho: commented on your commit. But overall I'm happy with moving multibuffer to use varint prefixes instead of fixed-width ones. |
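The change being approved here amounts to framing each buffer as &lt;varint length&gt;&lt;payload&gt; instead of a fixed 4-byte UInt32BE prefix. A self-contained sketch of that framing (pack/unpack and the inline varint helpers are illustrative names, not multibuffer's or the varint module's actual source):

```javascript
// Length-prefix each buffer with a varint instead of writeUInt32BE.
function varintEncode(n) {
  var out = []
  while (n >= 0x80) {
    out.push((n & 0x7f) | 0x80) // 7 payload bits + continuation bit
    n = Math.floor(n / 128)
  }
  out.push(n)
  return Buffer.from(out)
}

function varintDecode(buf, offset) {
  var result = 0, shift = 1, i = offset, b
  do {
    b = buf[i++]
    result += (b & 0x7f) * shift
    shift *= 128
  } while (b & 0x80)
  return { value: result, bytes: i - offset }
}

// pack: every entry becomes <varint length><payload>
function pack(buffers) {
  var parts = []
  for (var i = 0; i < buffers.length; i++) {
    parts.push(varintEncode(buffers[i].length), buffers[i])
  }
  return Buffer.concat(parts)
}

// unpack: read a varint length, slice that many bytes, repeat
function unpack(packed) {
  var out = [], offset = 0
  while (offset < packed.length) {
    var header = varintDecode(packed, offset)
    offset += header.bytes
    out.push(packed.slice(offset, offset + header.value))
    offset += header.value
  }
  return out
}
```

Small buffers get a 1-byte prefix instead of 4, which is where the packed-size savings in the benchmarks above come from.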
02:24:08
| * fallsemo | quit (Ping timeout: 240 seconds) |
02:26:18
| <mbalho> | chrisdickinson: do you have any feedback on why you chose the varint.write and varint.ondata API over something like https://github.com/rvagg/leveljs-coding/blob/master/protobuf.js ? |
02:29:49
| <mbalho> | brycebaril: before i go and write another varint module with a simpler api i wanna find out if/why the existing varint api exists. i dont personally have any objections to it, are yours based on aesthetics or performance? |
02:32:02
| * tmcw | joined |
02:32:13
| * fallsemo | joined |
02:32:46
| * mikeal | joined |
02:36:56
| <brycebaril> | partially aesthetics, but I'm thinking there has to be performance implications of using events for something like that |
02:37:47
| <mbalho> | yea i hear ya, its possible, though i imagine since theyre all sync functions that seem to get inlined that its still pretty fast |
02:38:45
| <mbalho> | brycebaril: there is also require('varint/decode') which doesnt use eventemitter and just uses a callback |
02:38:56
| * fallsemo | quit (Ping timeout: 248 seconds) |
02:38:59
| <rvagg> | hm, why does it use a callback? |
02:39:13
| <rvagg> | there's nothing async about any of this |
02:39:45
| <mbalho> | sorry i guess its not a callback in the proper sense, it just calls a function with the result |
02:40:00
| <mbalho> | you feed it 1 byte at a time rather than giving it the entire buffer |
02:40:48
| <mbalho> | https://github.com/chrisdickinson/varint/blob/master/decode.js |
02:41:01
| <mbalho> | i doubt there are any noticeable perf differences |
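The API shape being discussed looks roughly like this: a stateful decoder you feed one byte at a time, which calls a supplied function whenever a value completes (decoder/write are illustrative names, not the varint module's exact exports):

```javascript
// Byte-at-a-time varint decoder sketch. Feed it single bytes;
// it invokes ondata(value) each time a varint finishes.
function decoder(ondata) {
  var result = 0, shift = 1
  return function write(b) {
    result += (b & 0x7f) * shift
    shift *= 128
    if (!(b & 0x80)) {   // high bit clear: varint finished
      ondata(result)
      result = 0         // reset state for the next value
      shift = 1
    }
  }
}
```

Feeding 0xac then 0x02 yields 300, the classic two-byte varint example; the alternative style debated above instead takes a whole buffer and returns the value plus the number of bytes consumed.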
02:41:36
| * tmcw | quit (Remote host closed the connection) |
02:50:06
| * ramitos | joined |
02:50:44
| * ramitos | quit (Client Quit) |
02:50:58
| * fallsemo | joined |
02:54:05
| <mbalho> | yes so switching to require('varint/encode') and require('varint/decode') is actually a performance increase from the baseline |
02:55:40
| * fallsemo | quit (Ping timeout: 264 seconds) |
02:58:03
| * tmcw | joined |
03:01:54
| <brycebaril> | mbalho: how are you using 'varint/decode' directly? Did you assign it an ondata function? |
03:02:48
| <mbalho> | yep |
03:03:10
| <mbalho> | https://github.com/maxogden/multibuffer/commit/460672a464478362db99962d37fa5ca62685451f |
03:07:09
| <rvagg> | hm, that callable function is a bit awkward in decode, I wonder if there's any perf impact on not involving a callback |
03:08:07
| <rvagg> | you could try require('leveljs-coding/protobuf') and use readVarInt32 if you want to try using the returned array approach instead |
03:08:36
| <mbalho> | word |
03:09:21
| <mbalho> | i dont think itd be worth it to write another module when the current one is working fine, but if you wanna break protobuf out into its own varint module be my guest |
03:10:16
| <rvagg> | perhaps when I get there with the full leveldb coding stuff, I need to write varints too |
03:10:21
| <rvagg> | and other protobuf stuff |
03:10:35
| <rvagg> | for now, all that coding stuff is going into a single package |
03:10:40
| <brycebaril> | I'm writing a benchmark comparing them atm |
03:11:25
| <mbalho> | i bet we can send a persuasive pull request to varint if we come up with a faster implementation |
03:11:36
| * fallsemo | joined |
03:13:17
| <brycebaril> | [email protected]:/tmp$ time node varintDecode.js |
03:13:17
| <brycebaril> | real 0m1.197s |
03:13:17
| <brycebaril> | user 0m1.132s |
03:13:17
| <brycebaril> | sys 0m0.068s |
03:13:17
| <brycebaril> | [email protected]:/tmp$ time node probuf.js |
03:13:17
| <brycebaril> | real 0m0.860s |
03:13:17
| <brycebaril> | user 0m0.820s |
03:13:18
| <brycebaril> | sys 0m0.040s |
03:13:19
| <rvagg> | brycebaril: where are you located? |
03:13:25
| <brycebaril> | Seattle, WA |
03:13:35
| <mbalho> | brycebaril: nice |
03:13:55
| <mbalho> | brycebaril: how many buffers is that? |
03:14:00
| <brycebaril> | 1e6 |
03:14:14
| <rvagg> | ah, mbalho et al. are doing a level* meetup in Nov, if you think you might want to head down to SF to help out you should let them know cause I'm sure they could do with other pros |
03:14:16
| <brycebaril> | essentially just doing varint decode on 0..1_000_000 |
03:14:25
| <mbalho> | ah gotcha |
03:14:42
| <mbalho> | well its gonna be a nodeschool.io IRL event at github |
03:14:56
| <brycebaril> | that's cool, when is it at? |
03:15:08
| <rvagg> | oh well, I'm sure brycebaril would be helpful in either case |
03:15:30
| <mbalho> | thursday, november 21st from 7 - 9:30PM |
03:15:39
| <mbalho> | dunno if its worth a SF trip but hey ya never know |
03:15:58
| <brycebaril> | rvagg: I got a beaglebone black in the mail last week, going to be playing with making an IoT node backed by timestreamdb. should be fun |
03:16:09
| <rvagg> | nice |
03:16:20
| <rvagg> | those are nice devices, they have onboard storage don't they? |
03:16:45
| <brycebaril> | something like 2gb onboard, but it is class 4 I think |
03:16:50
| <rvagg> | sd card slows the pi down a bit too much |
03:17:02
| <brycebaril> | I'll probably grab a class 10 sd to compare |
03:17:18
| <brycebaril> | I got it over the rpi because it has all the direct gpio |
03:17:54
| <wolfeidau> | brycebaril: I would use a beagle bone because it has eMMC and a ton more GPIOs :) |
03:18:19
| <wolfeidau> | eMMC == 10 x faster IO than SDcards in my experience |
03:18:23
| <brycebaril> | wolfeidau: that's what my thinking was when I got the BBB. Plus my initials are BBB so that helped :) |
03:18:36
| <brycebaril> | wolfeidau: good to know, I'll definitely be trying to write fast to leveldb on it |
03:18:57
| <wolfeidau> | brycebaril: yeah BBB is an awesome device much much faster access times as well |
03:19:39
| <wolfeidau> | Also much easier to hack on if you run the ubuntu based images |
03:20:11
| <brycebaril> | I haven't replaced the stock image yet |
03:22:04
| <wolfeidau> | brycebaril: We are using these ones http://www.armhf.com/ |
03:22:58
| <wolfeidau> | brycebaril: if you want to use the GPIOs make sure you read this https://github.com/jadonk/validation-scripts/tree/master/test-capemgr |
03:23:12
| <wolfeidau> | Explains device trees |
03:23:18
| <wolfeidau> | How what where and how :) |
03:23:31
| <wolfeidau> | Who what where and how even |
03:23:36
| <brycebaril> | ahh, excellent :) |
03:24:47
| <rvagg> | then if all else fails you just bother wolfeidau who will figure it out if he doesn't know |
03:25:54
| * fallsemo | quit (Ping timeout: 264 seconds) |
03:28:39
| <mbalho> | lol |
03:34:05
| * mikeal | quit (Quit: Leaving.) |
03:45:27
| * thlorenz | quit (Remote host closed the connection) |
04:04:49
| * mikeal | joined |
04:05:05
| <mbalho> | brycebaril: can you gist that benchmark you made? |
04:09:08
| * mikeal | quit (Ping timeout: 240 seconds) |
04:16:12
| <levelbot> | [npm] [email protected] <http://npm.im/continuous-storage>: Store a continuous ndarray in a level.js/levelup database (@hughsk) |
04:16:34
| * thlorenz | joined |
04:16:36
| * timoxley | quit (Remote host closed the connection) |
04:25:04
| * thlorenz | quit (Ping timeout: 248 seconds) |
04:34:49
| * mikeal | joined |
04:39:38
| * tmcw | quit (Remote host closed the connection) |
04:45:28
| * mikeal | quit (Ping timeout: 264 seconds) |
04:52:17
| * esundahl | joined |
05:12:59
| * mikeal | joined |
05:15:27
| * dominictarr | joined |
05:36:30
| <brycebaril> | mbalho: https://gist.github.com/brycebaril/6913533 |
05:37:05
| <mbalho> | thx |
05:39:01
| <brycebaril> | That one is slightly different from the one I timed above: the other one first created all the buffers; this time I saved them all to a file and the benchmarks just read the file. |
05:52:28
| * nathan7 | joined |
05:56:10
| * eugeneware | joined |
05:58:34
| * dominictarr | quit (Quit: dominictarr) |
06:16:53
| <mbalho> | ok updated https://github.com/maxogden/varint/tree/buffer-read, send a PR |
06:19:43
| * esundahl | quit (Remote host closed the connection) |
06:22:04
| * timoxley | joined |
06:29:01
| * dguttman | quit (Quit: dguttman) |
06:36:02
| * DTrejo | joined |
06:50:46
| * esundahl | joined |
06:59:07
| * esundahl | quit (Ping timeout: 248 seconds) |
07:18:27
| * eugeneware | quit (Remote host closed the connection) |
07:22:41
| * thlorenz | joined |
07:23:41
| * DTrejo | quit (Remote host closed the connection) |
07:25:34
| * esundahl | joined |
07:27:28
| * thlorenz | quit (Ping timeout: 264 seconds) |
07:30:06
| * esundahl | quit (Ping timeout: 264 seconds) |
07:39:24
| * jmartins | quit (Ping timeout: 252 seconds) |
07:39:58
| * jmartins | joined |
07:49:17
| * timoxley | quit (Remote host closed the connection) |
07:49:52
| * timoxley | joined |
07:52:08
| * jcrugzz | quit (Ping timeout: 240 seconds) |
07:53:21
| * timoxley_ | joined |
07:54:03
| * timoxley | quit (Ping timeout: 248 seconds) |
08:50:36
| * dominictarr | joined |
09:22:25
| * kenansulayman | joined |
09:24:01
| * thlorenz | joined |
09:28:41
| * thlorenz | quit (Ping timeout: 265 seconds) |
10:19:02
| * dominictarr | quit (Quit: dominictarr) |
10:26:33
| * eugeneware | joined |
10:28:57
| * esundahl | joined |
10:33:31
| * esundahl | quit (Ping timeout: 245 seconds) |
10:36:10
| * timoxley_ | quit (Remote host closed the connection) |
10:45:44
| * insertcoffee | joined |
10:50:20
| * timoxley | joined |
10:50:59
| * kenansulayman | quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈) |
10:52:20
| * kenansulayman | joined |
10:53:54
| * kenansulayman | quit (Client Quit) |
11:00:24
| * esundahl | joined |
11:05:16
| * esundahl | quit (Ping timeout: 264 seconds) |
12:23:08
| * rud | joined |
12:25:48
| * thlorenz | joined |
12:30:24
| * thlorenz | quit (Ping timeout: 248 seconds) |
12:41:49
| * dominictarr | joined |
12:49:15
| * fallsemo | joined |
12:53:57
| * dominictarr | quit (Quit: dominictarr) |
12:56:18
| * dominictarr | joined |
13:11:28
| * fallsemo | quit (Ping timeout: 240 seconds) |
13:12:35
| * thlorenz_ | joined |
13:14:02
| * thlorenz_ | quit (Remote host closed the connection) |
13:14:38
| * fallsemo | joined |
13:14:54
| * fallsemo | quit (Client Quit) |
13:18:12
| * timoxley | quit (Remote host closed the connection) |
13:18:46
| * timoxley | joined |
13:23:30
| * timoxley | quit (Ping timeout: 264 seconds) |
13:40:03
| * jmartins | quit (Read error: Connection reset by peer) |
13:40:22
| * jmartins | joined |
13:52:42
| <levelbot> | [npm] [email protected] <http://npm.im/daily>: daily - A LevelDB based logging system (@andreasmadsen) |
13:52:52
| * Acconut | joined |
14:00:28
| * thlorenz | joined |
14:01:28
| * timoxley | joined |
14:08:16
| * Acconut | quit (Quit: Acconut) |
14:09:46
| * eugeneware | quit (Ping timeout: 245 seconds) |
14:15:07
| * thlorenz_ | joined |
14:15:16
| * jjmalina | joined |
14:19:54
| * thlorenz_ | quit (Ping timeout: 264 seconds) |
14:30:36
| * timoxley | quit (Ping timeout: 245 seconds) |
14:35:26
| * eugeneware | joined |
14:39:11
| * tmcw | joined |
14:44:46
| * ramitos | joined |
14:52:21
| * jerrysv | joined |
14:53:19
| * mikeal | quit (Quit: Leaving.) |
14:55:07
| * mikeal | joined |
15:15:24
| * mikeal | quit (Quit: Leaving.) |
15:15:47
| * thlorenz_ | joined |
15:19:16
| * mikeal | joined |
15:20:33
| * thlorenz_ | quit (Ping timeout: 265 seconds) |
15:28:44
| * dguttman | joined |
15:30:37
| * kenansulayman | joined |
15:33:08
| * ednapiranha | joined |
15:35:19
| * mikeal | quit (Quit: Leaving.) |
15:44:49
| * mikeal | joined |
15:48:08
| * jcrugzz | joined |
15:48:12
| <levelbot> | [npm] [email protected] <http://npm.im/level-prefix>: Get prefixed databases (@juliangruber) |
15:49:04
| * esundahl | joined |
15:52:44
| * jerrysv | quit (Read error: Connection reset by peer) |
15:55:12
| * mikeal | quit (Quit: Leaving.) |
15:58:51
| * mikeal | joined |
15:59:11
| * timoxley | joined |
16:04:40
| * ednapiranha | quit (Remote host closed the connection) |
16:06:07
| * ednapiranha | joined |
16:11:32
| * ednapiranha | quit (Remote host closed the connection) |
16:12:53
| * ednapiranha | joined |
16:17:18
| * jerrysv | joined |
16:26:35
| * jmartins | quit (Quit: Konversation terminated!) |
16:31:02
| * jerrysv | quit (Read error: Connection reset by peer) |
16:31:21
| * jmartins | joined |
16:31:59
| * jerrysv_ | joined |
16:32:16
| * jerrysv_ | changed nick to jerrysv |
16:33:55
| * nnnnathann | quit (Remote host closed the connection) |
16:39:10
| * julianduque | joined |
16:44:42
| * mikeal | quit (Quit: Leaving.) |
16:51:36
| * timoxley | quit (Ping timeout: 252 seconds) |
16:56:50
| * timoxley | joined |
17:01:36
| * julianduque | quit (Quit: leaving) |
17:03:50
| * mikeal | joined |
17:10:51
| * dguttman | quit (Quit: dguttman) |
17:14:39
| * timoxley_ | joined |
17:15:15
| * timoxley | quit (Ping timeout: 252 seconds) |
17:16:43
| * jxson_ | joined |
17:20:12
| * jxson | quit (Ping timeout: 252 seconds) |
17:21:23
| * jxson_ | quit (Ping timeout: 265 seconds) |
17:22:54
| * ramitos | quit (Ping timeout: 264 seconds) |
17:31:56
| * timoxley_ | quit (Remote host closed the connection) |
17:32:30
| * jerrysv | quit (Ping timeout: 264 seconds) |
17:32:30
| * timoxley | joined |
17:35:53
| * jxson | joined |
17:36:56
| <thlorenz> | dominictarr: so my first attempt at reproing this failed: https://github.com/thlorenz/repro-livestream-sublevel-issue/blob/master/index.js |
17:37:02
| <thlorenz> | this works without a problem |
17:37:04
| * timoxley | quit (Ping timeout: 248 seconds) |
17:37:24
| <thlorenz> | so either it's related to index hooks or the fact that I used it with multilevel |
17:37:28
| * ednapiranha | quit (Remote host closed the connection) |
17:37:40
| <dominictarr> | thlorenz: yeah, I'm thinking multilevel |
17:38:11
| <thlorenz> | ah, makes sense, since it had to do with json-buffer which is used by muxdemux ^ juliangruber |
17:39:01
| * ednapiranha | joined |
17:39:06
| <thlorenz> | ok shelving this for now |
17:39:46
| * ednapiranha | quit (Remote host closed the connection) |
17:40:47
| * ramitos | joined |
17:41:33
| <dominictarr> | thlorenz: hmm, the manifest for it looks good… must be in the multilevel client |
17:41:38
| * ednapiranha | joined |
17:41:42
| <dominictarr> | although, the server should validate |
17:42:07
| <thlorenz> | dominictarr: it happens only once I actually add some data which triggers live-stream to update |
17:42:21
| <chrisdickinson> | mbalho: taking a look at the varint changes -- am i right in assuming that this changes the public api? |
17:42:42
| <dominictarr> | ramitos: houdy! |
17:42:54
| <thlorenz> | I think at that point some of the pieces (sublevel??) get deserialized and they are circular |
17:42:59
| <ramitos> | dominictarr hey :D |
17:43:16
| * dguttman | joined |
17:43:56
| <ramitos> | dominictarr have you seen https://github.com/kordon/ase#cli-example |
17:44:38
| <dominictarr> | ramitos: yes |
17:44:47
| * ryan_ramage | joined |
17:45:17
| <dominictarr> | I just got this working: https://github.com/dominictarr/merkle-stream |
17:45:31
| <dominictarr> | which can be used to create secure replication... |
17:46:54
| <dominictarr> | after seeing this talk http://www.youtube.com/watch?v=T4DgxvS9Xho |
17:47:08
| * thlorenz_ | joined |
17:47:29
| <dominictarr> | I kind of have an urge to build an immutable blob store alongside leveldb that works for large blobs |
17:47:36
| <dominictarr> | (level is not optimised for large) |
17:48:44
| <dominictarr> | you could use that as a really simple storage plugin for your dynamo clone, because you wouldn't need versioning |
17:50:54
| <ramitos> | dominictarr your link to the merle tree gives a 404. You meant https://github.com/dominictarr/merkle |
17:50:57
| <ramitos> | ? |
17:51:00
| <ryan_ramage> | dominictarr: merkle-stream looks interesting |
17:51:28
| * thlorenz_ | quit (Ping timeout: 248 seconds) |
17:52:00
| <dominictarr> | ramitos: oh, thanks I have to rename that repo |
17:52:16
| * insertcoffee | quit (Ping timeout: 245 seconds) |
17:52:22
| <dominictarr> | fixed |
17:53:10
| <ramitos> | dominictarr have you seen this? https://github.com/pgte/level-vectorclock |
17:54:02
| <dominictarr> | ramitos: aha, no I havn't seen that |
17:54:17
| <ramitos> | I'll probably use it |
17:57:23
| <ramitos> | dominictarr so, the merle trees can e used to propagate the values to the replicas right? |
17:57:28
| <ramitos> | can be* |
17:58:02
| * ednapiranha | quit (Remote host closed the connection) |
17:58:03
| <dominictarr> | ramitos: in dynamo, merkle trees are used for the anti-entropy |
17:58:22
| <dominictarr> | for resynchronizing replicas after a partition or other failure |
17:58:53
| <ramitos> | for the hinted-handoff stuff, right? I don't remember very well that part (I need to read the paper again) |
17:59:29
| <dominictarr> | ah, it's a process that runs periodically |
17:59:43
| <dominictarr> | I need to refresh on hinted handoff |
18:00:02
| <dominictarr> | but it's like the scuttlebutt layer |
18:00:18
| <dominictarr> | except scuttlebutt replicates the list of nodes to all nodes |
18:00:29
| * ednapiranha | joined |
18:00:33
| <ramitos> | iirc, when a node fails, another node starts getting the data that was supposed to go to the node that failed |
18:01:02
| <dominictarr> | oh, right - and then they send it back when a new node is there? |
18:01:13
| <ramitos> | yes |
18:01:21
| <ramitos> | handoff |
18:01:22
| <dominictarr> | but usually there are 3 nodes responsible for each bit of data, right? |
18:01:26
| <ramitos> | yes |
18:01:33
| <dominictarr> | is this for when that fails? |
18:01:43
| <ramitos> | so, each key has 3 nodes |
18:02:03
| <ramitos> | when one of them fails, another node gets its data temporarily |
18:02:15
| <ramitos> | when the node that failed gets online again |
18:02:26
| * ednapiranha | quit (Remote host closed the connection) |
18:02:33
| <ramitos> | the other node will send the data it received to the node that went back online |
18:02:45
| <ramitos> | hinted handoff |
18:02:55
| * ednapiranha | joined |
18:04:57
| <ramitos> | in the paper it's chapter 4.6 |
18:05:06
| <ramitos> | the merle tree is in the chapter 4.7 |
18:05:11
| <ramitos> | merkle |
18:05:22
| <ramitos> | the merle tree is used when the failure is permanent |
18:05:32
| <ramitos> | I'm going to read that chapter again :) |
18:06:11
| * ednapiranha | quit (Remote host closed the connection) |
18:06:22
| <dominictarr> | right - what about when you add a new node - does that use merkle tree also? |
18:06:39
| <dominictarr> | I guess it could just bulk load the data the first time |
18:07:28
| * ednapiranha | joined |
18:09:32
| <ramitos> | dominictarr " Each node maintains a separate Merkle tree for each key range (the set of keys covered by a virtual node) it hosts. This allows nodes to compare whether the keys within a key range are up-to-date. In this scheme, two nodes exchange the root of the Merkle tree corresponding to the key ranges that they host in common. Subsequently, using the tree traversal scheme described above the nodes determine if they have any differences |
18:09:32
| <ramitos> | and perform the appropriate synchronization action." |
18:10:57
| <dominictarr> | yes |
18:11:00
| <ramitos> | yeah, when the node is new it should just load all the data for the first time |
18:13:33
| <ramitos> | brb |
18:13:34
| * ramitos | quit (Quit: Computer has gone to sleep.) |
18:16:30
| * tmcw | quit (Remote host closed the connection) |
18:19:27
| * ramitos | joined |
18:25:08
| * Acconut | joined |
18:25:21
| * Acconut | quit (Client Quit) |
18:28:26
| * jxson | quit (Remote host closed the connection) |
18:28:53
| * jxson | joined |
18:30:56
| * dominictarr | quit (Remote host closed the connection) |
18:31:21
| * dominictarr | joined |
18:33:08
| * jxson | quit (Ping timeout: 240 seconds) |
18:35:41
| * jxson | joined |
18:47:03
| * tmcw | joined |
18:47:43
| * thlorenz_ | joined |
18:51:51
| * thlorenz_ | quit (Ping timeout: 245 seconds) |
18:55:50
| * Acconut | joined |
18:59:59
| * Acconut1 | joined |
19:00:28
| * Acconut | quit (Ping timeout: 264 seconds) |
19:06:12
| <levelbot> | [npm] [email protected] <http://npm.im/level-json-edit>: Taking editing json to the next level with multilevel. (@thlorenz) |
19:07:09
| * ednapiranha | quit (Remote host closed the connection) |
19:08:04
| * esundahl | quit (Remote host closed the connection) |
19:18:28
| * ednapiranha | joined |
19:32:21
| * Acconut1 | quit (Ping timeout: 248 seconds) |
19:39:04
| * esundahl | joined |
19:42:32
| <chrisdickinson> | mbalho: it's amazing how slow buf.readUInt8 is compared to buf[i] :| |
19:47:39
| * esundahl | quit (Ping timeout: 248 seconds) |
19:48:25
| * thlorenz_ | joined |
19:52:59
| * thlorenz_ | quit (Ping timeout: 248 seconds) |
19:57:13
| * dominictarr | quit (Quit: dominictarr) |
20:03:15
| * dominictarr | joined |
20:06:33
| * julianduque | joined |
20:10:23
| * ramitos | quit (Quit: Computer has gone to sleep.) |
20:13:20
| * dominictarr | quit (Quit: dominictarr) |
20:13:25
| <mbalho> | chrisdickinson: yea pretty interesting. luckily its the same api for Buffer and Uint8Array at least |
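The gap chrisdickinson mentions is easy to reproduce with a micro-benchmark along these lines (illustrative code; absolute numbers vary by node version, largely because readUInt8 does a bounds check and a method call per byte while buf[i] is plain indexed access):

```javascript
// Sum a 1 MB buffer byte-by-byte via indexed access vs readUInt8.
// The two sums must agree; only the timings differ.
function sumIndex(buf) {
  var s = 0
  for (var i = 0; i < buf.length; i++) s += buf[i]
  return s
}

function sumRead(buf) {
  var s = 0
  for (var i = 0; i < buf.length; i++) s += buf.readUInt8(i)
  return s
}

var big = Buffer.alloc(1024 * 1024, 1) // 1 MB of 0x01 bytes

console.time('buf[i]')
sumIndex(big)
console.timeEnd('buf[i]')

console.time('readUInt8')
sumRead(big)
console.timeEnd('readUInt8')
```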
20:13:53
| * esundahl | joined |
20:18:42
| * esundahl | quit (Ping timeout: 264 seconds) |
20:31:36
| * Acconut | joined |
20:31:49
| * Acconut | quit (Client Quit) |
20:32:29
| * esundahl | joined |
20:32:38
| * gkatsev | quit (Ping timeout: 240 seconds) |
20:36:21
| * jcrugzz | quit (Ping timeout: 248 seconds) |
20:41:26
| * rud | quit (Ping timeout: 245 seconds) |
20:44:01
| * rud | joined |
20:44:21
| * ramitos | joined |
20:49:27
| * thlorenz_ | joined |
20:53:47
| * thlorenz_ | quit (Ping timeout: 248 seconds) |
20:55:43
| <mbalho> | brycebaril: i think the multibuffer test suite needs to get pretty much rewritten, every single one of the tests has hardcoded 4 byte checks |
20:57:02
| <mbalho> | brycebaril: well actually what do you think about changing all of the writeUInt32BE parts and replacing them with a varint factory kind of thing |
21:01:40
| * eugeneware | quit (Remote host closed the connection) |
21:01:44
| <mbalho> | brycebaril: actually i'll just start diving into it, it might not be as bad as i am thinking |
21:01:47
| * eugeneware | joined |
21:07:14
| * eugenewa_ | joined |
21:10:54
| * eugeneware | quit (Ping timeout: 264 seconds) |
21:15:48
| * eugeneware | joined |
21:19:54
| * eugenewa_ | quit (Ping timeout: 264 seconds) |
21:22:48
| * eugeneware | quit (Ping timeout: 240 seconds) |
21:31:29
| * jmartins | quit (Quit: Konversation terminated!) |
21:34:40
| * rud | quit (Ping timeout: 265 seconds) |
21:43:12
| * mikeal | quit (Quit: Leaving.) |
21:48:41
| <mbalho> | w00t |
21:48:56
| * mikeal | joined |
21:50:04
| * thlorenz_ | joined |
21:52:04
| * rud | joined |
21:52:04
| * rud | quit (Changing host) |
21:52:04
| * rud | joined |
21:54:40
| * thlorenz_ | quit (Ping timeout: 248 seconds) |
22:04:11
| <levelbot> | [npm] [email protected] <http://npm.im/level-json-edit>: Taking editing json to the next level with multilevel. (@thlorenz) |
22:08:25
| * thlorenz | quit (Remote host closed the connection) |
22:08:29
| * ramitos | quit (Quit: Computer has gone to sleep.) |
22:21:56
| * abstractj | joined |
22:24:00
| * ramitos | joined |
22:32:09
| <rvagg> | "caolan starred rvagg/node-levelup 34 minutes ago" we may yet have level* backing for hoodie |
22:32:28
| <rvagg> | I know people have been in their ears about this |
22:32:39
| <substack> | rvagg: caolan was saying that in london |
22:32:46
| <substack> | ghost too |
22:33:06
| <rvagg> | cool, I was talking to @olizilla about this |
22:33:23
| <rvagg> | oh, ghost... yeah but they're using jugglingdb which is a bit tricky to adapt to a pure k/v |
22:33:40
| <rvagg> | I had a rant about ghost using sqlite when they first announced and I got into a bit of trouble so I've shut up about it |
22:33:56
| <substack> | yeah sqlite is pretty silly |
22:34:13
| <substack> | but they're targetting wordpress users who are already heavily invested into mysql |
22:34:23
| <rvagg> | perhaps not, they get to define relational cruft and they can be backed by any db that jugglingdb can support, which is quite a few |
22:34:29
| <rvagg> | perhaps not a bad path for ghost |
22:34:40
| <rvagg> | except that sqlite is crap |
22:34:49
| <substack> | yeah :/ |
22:35:02
| <substack> | for the "just npm install it" angle level is really ideal |
22:35:14
| <substack> | not so much for integrating with existing legacy stacks |
22:35:34
| <rvagg> | what we really need is for someone to attempt to adapt jugglingdb to level*, then everyone wins |
22:38:43
| * thlorenz | joined |
22:41:41
| <levelbot> | [npm] [email protected] <http://npm.im/level-indico>: Simple indexing and querying for leveldb (@mariocasciaro) |
22:44:19
| * tmcw | quit (Remote host closed the connection) |
22:46:18
| * thlorenz | quit (Ping timeout: 264 seconds) |
22:50:40
| * thlorenz | joined |
22:55:23
| * thlorenz | quit (Ping timeout: 265 seconds) |
23:01:24
| * ryan_ramage | quit (Quit: ryan_ramage) |
23:06:58
| * jjmalina | quit (Quit: Leaving.) |
23:07:25
| <brycebaril> | mbalho: multibuffer 2.0.0 published :) |
23:10:22
| * ednapiranha | quit (Remote host closed the connection) |
23:12:58
| * thlorenz | joined |
23:17:20
| * thlorenz | quit (Ping timeout: 248 seconds) |
23:19:52
| * tmcw | joined |
23:29:15
| <mbalho> | w00t |
23:32:44
| <kenansulayman> | rvagg |
23:37:10
| <kenansulayman> | Hum, I give it up :) |
23:48:02
| <rvagg> | kenansulayman: ? |
23:49:04
| * kenansulayman | quit (Ping timeout: 264 seconds) |
23:50:14
| * kenansulayman | joined |
23:50:50
| <kenansulayman> | rvagg Ah there you are |
23:51:19
| * thlorenz | joined |
23:51:22
| <rvagg> | for the moment |
23:51:30
| <kenansulayman> | Regarding the lmdb implementation: did you already notice bugs? any test I did went as wrong as possible |
23:52:00
| <rvagg> | huh? |
23:52:22
| <kenansulayman> | Just put it in as a test and it yielded several errors after the first writes |
23:52:56
| <kenansulayman> | already a few days back, one moment |
23:53:18
| <rvagg> | you're getting errors using it? what sort of errors? what are you doing to get errors? |
23:53:45
| <rvagg> | there's a test suite that hits it fairly hard, same as the leveldown suite, but there's always a good chance for problems with something that doesn't yet get much use |
23:53:53
| <rvagg> | there's also a lot of tunables for lmdb so it's tricky to get it right |
23:55:36
| * thlorenz | quit (Ping timeout: 245 seconds) |
23:55:36
| <kenansulayman> | rvagg hm. did you recently push a new revision of lmdb? |
23:55:44
| <rvagg> | no |
23:56:03
| <kenansulayman> | rvagg let me get some test workload |