00:04:22  * timoxley quit (Ping timeout: 256 seconds)
00:04:35  * gangleri quit (Quit: Leaving)
00:16:27  * jjmalina quit (Quit: Leaving.)
00:16:46  * tmcw joined
00:17:58  * i_m_ca joined
00:21:22  * tmcw quit (Ping timeout: 256 seconds)
00:29:28  * eugeneware joined
00:34:44  * eugenewa_ joined
00:36:22  * ramitos joined
00:36:23  * timoxley joined
00:38:54  * eugeneware quit (Ping timeout: 264 seconds)
00:39:46  * jxson quit (Remote host closed the connection)
00:40:06  * eugenewa_ quit (Ping timeout: 256 seconds)
00:59:38  * wolfeidau quit (Remote host closed the connection)
01:01:25  * daurnimator quit (Ping timeout: 248 seconds)
01:01:33  * eugeneware joined
01:01:54  * daurnimator joined
01:07:35  * eugeneware quit (Remote host closed the connection)
01:10:47  * eugeneware joined
01:17:32  * eugeneware quit (Remote host closed the connection)
01:19:37  * eugeneware joined
01:22:41  * eugeneware quit (Remote host closed the connection)
01:26:26  * eugeneware joined
01:30:07  * eugeneware quit (Remote host closed the connection)
01:36:14  * eugeneware joined
01:40:05  * jxson joined
01:41:13  * jxson quit (Read error: Connection reset by peer)
01:44:50  * esundahl joined
01:45:21  * ryan_ramage joined
01:50:51  * eugeneware quit (Remote host closed the connection)
01:51:11  * mcollina joined
01:55:54  * mcollina quit (Ping timeout: 264 seconds)
01:59:18  * timoxley quit (Ping timeout: 264 seconds)
02:25:43  * fallsemo joined
02:26:39  * fallsemo quit (Client Quit)
02:45:24  * timoxley joined
02:48:35  * ednapiranha joined
02:51:38  * jjmalina joined
02:53:30  * ednapiranha quit (Ping timeout: 264 seconds)
02:59:21  * fallsemo joined
03:07:01  * tmcw joined
03:15:26  * mcollina joined
03:17:48  * ryan_ramage quit (Quit: ryan_ramage)
03:19:54  * mcollina quit (Ping timeout: 256 seconds)
03:20:54  * fallsemo quit (Ping timeout: 264 seconds)
03:31:35  * fallsemo joined
03:36:18  * fallsemo quit (Ping timeout: 256 seconds)
03:37:55  * ryan_ramage joined
03:40:34  * eugeneware joined
03:42:01  * i_m_ca quit (Ping timeout: 256 seconds)
03:42:45  * eugeneware quit (Remote host closed the connection)
03:42:59  * eugeneware joined
03:44:40  * eugeneware quit (Remote host closed the connection)
03:45:24  * tmcw quit (Remote host closed the connection)
03:45:33  * fallsemo joined
03:45:50  * tmcw joined
03:46:04  * eugeneware joined
03:49:50  * fallsemo quit (Ping timeout: 240 seconds)
03:50:14  * tmcw quit (Ping timeout: 240 seconds)
04:00:12  * eugeneware quit (Remote host closed the connection)
04:00:38  * jxson joined
04:03:40  * eugeneware joined
04:06:20  * i_m_ca joined
04:19:48  * ehd quit (Ping timeout: 260 seconds)
04:28:04  * venportman joined
04:40:01  * mcollina joined
04:42:58  * jjmalina quit (Quit: Leaving.)
04:43:44  * i_m_ca quit (Ping timeout: 256 seconds)
04:44:49  * mcollina quit (Ping timeout: 246 seconds)
04:56:26  * tmcw joined
05:01:01  * esundahl quit (Remote host closed the connection)
05:01:02  * tmcw quit (Ping timeout: 240 seconds)
05:01:33  * esundahl joined
05:06:24  * esundahl quit (Ping timeout: 256 seconds)
05:07:50  * julianduque quit (Quit: leaving)
05:11:59  * eugeneware quit (Remote host closed the connection)
05:12:58  * eugeneware joined
05:21:11  * jcrugzz quit (Ping timeout: 256 seconds)
05:26:05  * wolfeidau joined
05:26:46  * wolfeidau quit (Remote host closed the connection)
05:26:54  * wolfeidau joined
05:32:10  * esundahl joined
05:35:35  * ryan_ramage quit (Quit: ryan_ramage)
05:40:42  * esundahl quit (Ping timeout: 264 seconds)
05:49:44  * jcrugzz joined
05:56:38  * jcrugzz quit (Ping timeout: 264 seconds)
06:04:33  * mcollina joined
06:07:03  * esundahl joined
06:10:12  * dguttman quit (Quit: dguttman)
06:11:18  * mcollina quit (Ping timeout: 264 seconds)
06:11:31  * esundahl quit (Ping timeout: 260 seconds)
06:17:28  * ehd joined
06:18:11  * dguttman joined
06:20:58  * thlorenz joined
06:24:56  * dguttman quit (Quit: dguttman)
06:41:32  * eugeneware quit (Remote host closed the connection)
06:42:11  * eugeneware joined
06:42:44  * thlorenz quit (Remote host closed the connection)
06:57:18  * eugeneware quit (Remote host closed the connection)
07:07:29  * esundahl joined
07:11:50  * esundahl quit (Ping timeout: 240 seconds)
07:28:01  * jxson quit (Remote host closed the connection)
08:08:03  * esundahl joined
08:12:53  * esundahl quit (Ping timeout: 256 seconds)
08:55:35  * thlorenz joined
09:08:33  * esundahl joined
09:13:11  * esundahl quit (Ping timeout: 260 seconds)
09:42:21  * thlorenz_ joined
09:42:21  * thlorenz quit (Read error: Connection reset by peer)
09:49:29  * gangleri joined
09:56:43  * dominictarr joined
09:57:56  * dominictarr_ joined
10:01:07  * dominictarr quit (Ping timeout: 256 seconds)
10:01:07  * dominictarr_ changed nick to dominictarr
10:10:04  * timoxley quit (Remote host closed the connection)
10:33:47  * thlorenz_ quit (Remote host closed the connection)
10:51:28  * thlorenz joined
11:13:05  * gangleri quit (Read error: Connection reset by peer)
11:38:53  <levelbot>[npm] [email protected] <http://npm.im/levelmeup>: Level Me Up Scotty! An intro to Node.js databases via a set of self-guided workshops. (@rvagg)
11:43:28  * thlorenz_ joined
11:43:28  * thlorenz quit (Read error: Connection reset by peer)
11:53:03  * Acconut joined
11:53:23  * Acconut quit (Client Quit)
12:15:24  * thlorenz_ quit (Remote host closed the connection)
12:16:18  * rud quit (Quit: rud)
12:24:32  * dominictarr quit (Ping timeout: 260 seconds)
12:32:45  * matteo joined
12:33:09  * matteo changed nick to Guest63935
12:33:49  * Guest63935 changed nick to mcollina
12:37:50  * mcollina quit (Quit: leaving)
12:43:03  * matteo_ joined
12:43:18  * nnnnathann joined
12:43:24  * matteo_ quit (Client Quit)
12:44:16  * mcollina joined
12:45:02  * mcollina quit (Client Quit)
12:45:48  * matteo_ joined
12:53:25  * matteo_ quit (Quit: leaving)
12:53:27  * tmcw joined
12:53:43  * mcollina joined
13:00:04  * mcollina quit (Ping timeout: 246 seconds)
13:07:42  * mcollina joined
13:07:59  * eugeneware joined
13:15:40  * eugeneware quit (Remote host closed the connection)
13:19:04  * nnnnathann quit (Quit: Konversation terminated!)
13:24:59  * fb55_ quit (Remote host closed the connection)
13:25:36  * fb55 joined
13:27:24  * Acconut joined
13:30:06  * fb55 quit (Ping timeout: 264 seconds)
13:30:10  * mcollina quit (Ping timeout: 246 seconds)
13:46:02  * eugeneware joined
13:50:51  * eugeneware quit (Ping timeout: 260 seconds)
13:50:55  * kenansulayman joined
13:51:21  * dominictarr joined
13:56:09  * dominictarr quit (Client Quit)
13:58:16  * dominictarr joined
14:04:04  * rud joined
14:04:04  * rud quit (Changing host)
14:04:04  * rud joined
14:09:00  * tmcw quit (Remote host closed the connection)
14:09:26  * tmcw joined
14:14:30  * tmcw quit (Ping timeout: 264 seconds)
14:16:32  * eugeneware joined
14:16:44  * jjmalina joined
14:22:24  * fallsemo joined
14:23:30  <mbalho>quite happy with this: https://gist.github.com/maxogden/6551333
14:23:38  <mbalho>5.2 million inserts in 50 seconds
14:23:46  <mbalho>from a csv on disk
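The throughput above comes from buffering CSV rows and flushing them to the store in large batches rather than one put per row. A minimal sketch of that batching pattern, assuming a levelup-style `db.batch()`; the in-memory `db` below is a stand-in so the sketch runs without leveldb, and the 16mb threshold matches the batch size mentioned later in the log.

```javascript
// Buffer writes and flush them as one batch once the buffer passes a
// byte threshold -- the pattern behind fast bulk CSV loads.
function makeBatcher(db, maxBytes) {
  var ops = []
  var bytes = 0
  return {
    write: function (key, value) {
      ops.push({ type: 'put', key: key, value: value })
      bytes += key.length + value.length
      if (bytes >= maxBytes) this.flush()
    },
    flush: function () {
      if (ops.length) db.batch(ops)
      ops = []
      bytes = 0
    }
  }
}

// In-memory stand-in for a real levelup instance:
var store = {}
var db = { batch: function (ops) {
  ops.forEach(function (op) { store[op.key] = op.value })
} }

var batcher = makeBatcher(db, 16 * 1024 * 1024) // ~16mb batches
batcher.write('row!1', 'a,b,c')
batcher.write('row!2', 'd,e,f')
batcher.flush()
```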
14:26:39  * Acconut quit (Quit: Acconut)
14:32:49  * eugeneware quit (Ping timeout: 246 seconds)
14:34:21  <juliangruber>mbalho: nice!
14:36:08  <mbalho>im gonna try splitting the csv rows into many values, one per cell
14:36:10  <mbalho>and see what it does to throughput
14:36:29  * tmcw joined
14:37:05  * esundahl joined
14:37:37  * jcrugzz joined
14:38:26  <kenansulayman>mbalho What happens if you try that with uberlevel?
14:38:52  <mbalho>kenansulayman: is uberlevel the riak one?
14:38:57  <kenansulayman>(not compatible with leveldb tho)
14:39:09  <kenansulayman>mbalho Nope, lmdb
14:39:24  <mbalho>ah
14:39:32  <mbalho>ill put it on my todo list
14:40:42  <kenansulayman>let me fire that bench
14:42:12  <mbalho>cool
14:51:28  * julianduque joined
15:06:40  * rvagg quit (Ping timeout: 245 seconds)
15:13:22  * rvagg joined
15:16:36  * ednapiranha joined
15:17:00  * ednapiranha quit (Remote host closed the connection)
15:18:23  * ednapiranha joined
15:21:55  <mbalho>substack: is there a browserify shim that does require('os').EOL ?
15:22:19  <substack>os is shimmed
15:22:31  <substack>I submitted a patch for EOL etc but I'm not sure if it got merged
15:23:05  <mbalho>substack: ah with newest browserify i get Error: ENOENT, open 'os' while resolving "os" from file /Users/max/src/js/binary-split/index.js
15:23:17  <mbalho>substack: whats the repo called that has the shim?
15:25:52  <substack>mbalho: it looks like I just need to bump the browser-builtins version
15:26:10  <mbalho>ah cool
15:27:47  * rvagg quit (Ping timeout: 260 seconds)
15:31:47  * rud quit (Quit: rud)
15:34:34  * rvagg joined
15:34:37  <substack>mbalho: the latest browser-builtins is failing some of the browserify tests
15:35:05  * Acconut joined
15:35:52  * Acconut quit (Client Quit)
15:38:51  <kenansulayman>mbalho Are you on OSX?
15:40:23  * julianduque quit (Quit: leaving)
15:44:28  * dguttman joined
16:00:21  * eugeneware joined
16:00:59  * eugenewa_ joined
16:01:00  * eugeneware quit (Read error: Connection reset by peer)
16:05:13  * eugenewa_ quit (Ping timeout: 246 seconds)
16:06:29  <mbalho>kenansulayman: yep 10.7
16:06:45  <kenansulayman>mbalho What are your specs?
16:07:09  <mbalho>https://github.com/maxogden/binary-split#how-fast-is-it
16:07:22  <kenansulayman>My 24gb RAM / 8 x 2.4ghz server yields:
16:07:23  <kenansulayman>[email protected]:/data/Relay/test# time node level.js
16:07:23  <kenansulayman>real 1m40.310s
16:07:23  <kenansulayman>user 2m3.400s
16:07:23  <kenansulayman>sys 0m8.828s
16:07:30  <kenansulayman>And on uberlevel:
16:07:31  <kenansulayman>[email protected]:/data/Relay/test# time node ulevel.js
16:07:31  <kenansulayman>real 1m28.248s
16:07:31  <kenansulayman>user 1m35.012s
16:07:31  <kenansulayman>sys 0m0.980s
16:07:38  <kenansulayman>but my mac sucks hardcore
16:08:11  <kenansulayman>With level I get a value above 3 minutes (OSX / 8GB RAM / 4 x 2.4 ghz)
16:08:13  <mbalho>kenansulayman: ssd? node version? i was using ssd with node 0.10.16
16:08:17  <kenansulayman>ah ssd
16:08:23  <kenansulayman>SAS here
16:08:45  <mbalho>not familiar with SAS, does that mean spinning platters?
16:08:53  <kenansulayman>yup
16:09:04  <mbalho>kenansulayman: should i just be able to do require('uberlevel') and have everything work?
16:09:08  <kenansulayman>yes
16:09:17  <kenansulayman>I could also try doing a tmpfs
16:10:37  * timoxley joined
16:13:37  <mbalho>kenansulayman: i get 0m38.754s with uberlevel
16:13:43  <kenansulayman>woah
16:13:52  <mbalho>(using same batch size, 16mb)
16:15:36  <kenansulayman>level: real 1m6.878s
16:15:52  <mbalho>i dont know how lmdb works so i dont know how much data i should be writing to it
16:16:00  <mbalho>at a time
16:16:42  <kenansulayman>real 0m53.939s
16:17:31  <kenansulayman>mbalho http://symas.com/mdb/doc/
16:19:04  * julianduque joined
16:31:41  <mbalho>kenansulayman: with 128mb buffer it took 42 seconds, with a 4mb buffer it took 39s, so it doesnt seem to matter in this case
16:32:12  <kenansulayman>ok
16:34:43  <mbalho>kenansulayman: with npm install hyperlevel i get 54 seconds. have you benchmarked your binding?
16:35:01  <kenansulayman>Which?
16:35:42  <kenansulayman>I published basholevel (Basho-LevelDB), hyperlevel (Hyper-LevelDB) and uberlevel (lmdb)
16:35:42  <mbalho>kenansulayman: hyperlevel
16:35:55  <kenansulayman>No didn't get to it thoroughly
16:36:07  <mbalho>kenansulayman: im wondering if there is any overhead introduced by your binding vs leveldown
16:36:27  <kenansulayman>hyperlevel is identical to level
16:36:36  <kenansulayman>Besides being backed by Hyper-LevelDB
16:36:57  <kenansulayman>they're compatible, lmdb is not
16:37:17  <mbalho>the levelup api is compatible though, right
16:37:33  * tmcw quit (Remote host closed the connection)
16:37:44  <mbalho>ah i see https://github.com/KenanSulayman/hyperlevel/blob/master/index.js
16:38:09  * tmcw joined
16:39:45  <kenansulayman>mbalho They're all highlevel compatible
16:40:02  <kenansulayman>All the leveldb forks are compatible, too
16:40:16  <kenansulayman>lmdb just uses some alien disk-memory mapping
16:40:16  <mbalho>k, just clarifying what you mean by 'compatible'
16:40:23  <kenansulayman>ah k
16:42:35  * tmcw quit (Ping timeout: 260 seconds)
16:43:54  <mbalho>trevnorris: p.s. https://gist.github.com/maxogden/6551333
16:59:45  * timoxley quit (Remote host closed the connection)
17:05:19  <rescrv>kenansulayman: lmdb uses two COW B-Trees
17:05:37  <kenansulayman>rescrv Yes I saw that in the docs
17:05:55  * tmcw joined
17:06:34  <rescrv>practically, that means that every write overwrites O(log n) pages above it, all the way to the root. Usually 5-7. For small writes, this becomes the major cost.
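Rough arithmetic behind rescrv's point, as a sketch: in a copy-on-write B-tree every write copies the whole root-to-leaf path, so the per-write page cost is the tree height, roughly log base fanout of n. The fanout of 128 below is an assumption for illustration, not LMDB's actual page layout.

```javascript
// Pages copied per write in a COW B-tree ~= tree height (root-to-leaf path).
function cowPagesPerWrite(entries, fanout) {
  return Math.max(1, Math.ceil(Math.log(entries) / Math.log(fanout)))
}

// For the ~5.2 million keys from the benchmark above, at an assumed
// fanout of 128, every small write still rewrites several pages:
console.log(cowPagesPerWrite(5.2e6, 128))
```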
17:08:40  <kenansulayman>hum
17:09:02  <kenansulayman>Makes sense, is best in the bench tho
17:10:53  * Acconut joined
17:11:25  * Acconut quit (Client Quit)
17:13:53  <rescrv>kenansulayman: It does pretty well. I'm not bashing it, just providing insight into its internal function
17:14:34  <kenansulayman>rescrv Yes, thanks and good to know. Although I'm puzzled by the "high costs" on small values considering level & hyper are slower?
17:19:38  <rescrv>kenansulayman: There's costs to them as well. I suspect lmdb may be batching writes to avoid many copied pages.
17:20:04  <rescrv>I don't know why Level is slower, but for many workloads we've evaluated, there's no clear winner.
17:20:16  <kenansulayman>Even compared to lmdb?
17:24:46  * dominictarr quit (Ping timeout: 256 seconds)
17:25:18  <rescrv>kenansulayman: yes. Between lmdb and hyperleveldb, there is a wide variation, and each comes out ahead under different scenarios. There's a backend for each within HyperDex, and we considered using LMDB. We have to merge the work so others can select their backend of choice
17:25:32  * thlorenzjoined
17:26:30  <kenansulayman>rescrv I looked into HyperDex. We considered it as a new database for our infrastructure since we require a scalable database
17:26:40  <kenansulayman>rescrv How can we adapt to it using Node?
17:27:41  <rescrv>kenansulayman: there are bindings that are in desperate need of updating. We currently have a deadline we're working toward, but the next release is scheduled for after the deadline, and Node bindings are a blocker for it.
17:28:33  <kenansulayman>rescrv Sure thing. We'll wait for it. Any estimate?
17:29:39  <rescrv>kenansulayman: I don't have one readily available. Given that I haven't ever played with Node/V8, I don't know how long it would take.
17:29:50  * thlorenz quit (Ping timeout: 240 seconds)
17:32:31  <kenansulayman>rescrv Ok thanks :) Is HyperDex REST'able?
17:44:51  <rescrv>meaning?
17:45:08  * rud joined
17:47:42  * jxson joined
17:57:15  <rescrv>if you mean "does it support HTTP GET/POST?" the answer is it doesn't, but it'd be trivial to build a Python client that did
18:00:22  * dominictarr joined
18:35:18  * Acconut joined
18:35:36  * Acconut quit (Client Quit)
19:16:16  * jxson quit (Remote host closed the connection)
19:20:50  * jxson joined
19:23:03  <kenansulayman>rescrv Sorry, was afk. Thanks
19:23:45  * dominictarr quit (Quit: dominictarr)
19:27:30  * ednapiranha quit (Remote host closed the connection)
19:29:32  * Acconut joined
19:36:34  * dominictarr joined
19:44:24  * Acconut quit (Quit: Acconut)
19:53:00  * Acconut joined
19:57:57  * ednapiranha joined
20:05:07  * Acconut quit (Quit: Acconut)
20:06:32  * ednapiranha quit (Ping timeout: 260 seconds)
20:07:16  <substack>mbalho: ok my patch was merged so we can have the latest browser-builtins in browserify once this gets published https://github.com/alexgorbatchev/node-browser-builtins/commit/33140ac4c4ab40bcb6285a3897ce56577b42e428
20:11:51  * brianc_ joined
20:11:54  <brianc_>yo
20:11:59  <brianc_>anyone taken a look at lmdb?
20:12:46  <brianc_>supposedly it's better than leveldb
20:13:02  <brianc_>also works with multiple processes accessing it at the same time
20:14:52  <rescrv>brianc_: "better" is a very non-descriptive word when used in isolation
20:15:02  <brianc_>right
20:15:07  <brianc_>has better throughput in most situations
20:15:22  <brianc_>lemme find the article
20:15:39  <brianc_>https://symas.com/is-lmdb-a-leveldb-killer/
20:16:46  <brianc_>http://symas.com/mdb/
20:16:53  <rescrv>I've seen those articles
20:16:57  * Acconut joined
20:18:51  <rescrv>so this is the claim that keeps popping up, where people say, "LevelDB's not built for real workloads because X." In this case, he has numbers that show that his very-carefully engineered B-Tree outperforms LevelDB for the workloads he shows.
20:19:41  <brianc_>It would be interesting to see them side by side as different storage engines for levelup
20:20:05  <brycebaril>brianc_: that's been implemented
20:20:16  <brianc_>brycebaril: can you link meh?
20:20:36  <brycebaril>https://npmjs.org/package/uberlevel << lmdb version
20:21:07  <rescrv>It's true, but he has over-extended that to say that LevelDB's design is fundamentally flawed and cannot perform. The HyperDex benchmarks he ran came away with no clear winner, and I know for a fact that LevelDB can be modified to make HyperDex faster.
20:21:08  <brianc_><3
20:22:55  <brianc_>my problem w/ LevelDB is that it's not multi-process access safe
20:23:34  <brianc_>since the way to "scale out" with node is to run multiple processes it always struck me as odd using LevelDB. its major constraint conflicts with the way node is supposed to scale
20:23:56  <Acconut>brianc_: You can expose leveldb via the network.
20:24:06  <rescrv>brianc_: I'm fairly certain that rvagg uses multiple background threads
20:24:18  <rescrv>enabling concurrency that way
20:24:19  <brianc_>rescrv: in leveldown?
20:24:30  <rescrv>I think he said 4 threads
20:24:39  <rescrv>they handle the queue in the background
20:24:44  <brianc_>ah well that makes way moar sense
20:24:53  <brianc_>that's nice
20:25:17  <brianc_>get "real" concurrency without having to muck around w/ threads or even multiple processes
20:25:28  <brianc_>still it would be nice to have a cluster of processes all be able to share a dataset
20:30:49  * brianc_ part
20:30:58  * ednapiranha joined
20:31:18  <Acconut>brianc_: As I said, you can use https://github.com/juliangruber/multilevel
20:31:26  <Acconut>Expose a levelDB over the network, to be used by multiple processes, with levelUp's API.
20:34:36  * Acconut quit (Quit: Acconut)
20:39:05  * rud quit (Quit: rud)
21:02:57  * jxson quit (Remote host closed the connection)
21:05:12  * rud joined
21:05:12  * rud quit (Changing host)
21:05:12  * rud joined
21:09:41  * jxson joined
21:13:22  * mmckegg_ joined
21:14:11  * mmckegg quit (Ping timeout: 246 seconds)
21:14:12  * mmckegg_ changed nick to mmckegg
21:23:32  * jxson quit (Remote host closed the connection)
21:48:38  * Acconut joined
21:49:17  * Acconut quit (Client Quit)
21:51:19  * esundahl quit (Remote host closed the connection)
21:51:46  * esundahl joined
21:53:50  * jxson joined
21:56:30  * esundahl quit (Ping timeout: 264 seconds)
22:02:07  * jxson quit (Ping timeout: 260 seconds)
22:04:13  * esundahl joined
22:13:46  * julianduque quit (Ping timeout: 246 seconds)
22:31:20  * thlorenz joined
22:32:23  <juliangruber>dominictarr: async hooks
22:32:29  <juliangruber>I need them to work with sublevel
22:32:43  <juliangruber>because I do batches that spawn multiple sublevels
22:32:49  <dominictarr>juliangruber: what are you trying to do?
22:32:49  <juliangruber>and they need to trigger my hooks
22:33:04  <juliangruber>which doesn't work if I just overwrite db.put and db.batch
22:33:38  <dominictarr>can you open an issue describing what you are trying to do?
22:33:41  <juliangruber>ok
22:33:47  <dominictarr>I'm still looking for a compelling usecase
22:34:54  <dguttman>async hook would be great for looking up the previous value of a key before writing
22:36:02  <dominictarr>dguttman: give me an example of the data you might do that with
22:37:02  <dominictarr>(not arguing, I just want an example for discussion)
22:37:45  <dguttman>of course, looking for a good example
22:37:48  <juliangruber>dominictarr: https://github.com/dominictarr/level-sublevel/issues/36
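The hook being requested can be sketched like this: a pre-write hook inspects each operation in a batch and may append extra operations before anything is committed. level-sublevel's `db.pre` works along these lines; the store here is an in-memory stand-in, and the changelog hook is only an illustration.

```javascript
var store = {}
var hooks = []

// Register a pre-write hook.
function pre(hook) { hooks.push(hook) }

// Apply every hook to every op, letting hooks append extra ops via add(),
// then commit the expanded batch in one pass.
function batch(ops) {
  var all = ops.slice()
  ops.forEach(function (op) {
    hooks.forEach(function (hook) {
      hook(op, function add(extra) { all.push(extra) })
    })
  })
  all.forEach(function (op) { store[op.key] = op.value })
}

// Example hook: keep a changelog record for every put.
pre(function (op, add) {
  add({ type: 'put', key: 'log!' + op.key, value: op.value })
})

batch([{ type: 'put', key: 'a', value: '1' }])
```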
22:39:24  * tmcw quit (Remote host closed the connection)
22:39:56  * tmcw joined
22:42:04  <mbalho>rescrv: heres a benchmark I made for your (and others) consideration https://gist.github.com/maxogden/6551333
22:42:37  <dguttman>a simple example would just be to keep a record of changes
22:43:58  * jxson joined
22:44:16  <dguttman>like an undo buffer or a history
22:44:38  * tmcw quit (Ping timeout: 264 seconds)
22:47:01  <dguttman>dominictarr: ^
22:47:49  <dominictarr>dguttman: in leveldb, it's better to do that as separate records
22:48:00  <dominictarr>since you have fast range queries
22:48:09  <dguttman>also like the idea of getting an event if the new one is different
22:49:38  * thlorenz quit (Remote host closed the connection)
22:50:12  <dguttman>dominictarr: sure, so every time there's a put to a key, also store a version somewhere else?
22:50:22  <dguttman>and then use the range to get all the versions
22:52:12  <dominictarr>dguttman: like put(key +'!' + timestamp, change)
22:52:49  <dominictarr>and then do createReadStream({start: key+'!', end: key+'!~'})
22:52:56  <dominictarr>and think of that as your document
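dominictarr's suggestion above can be sketched end to end: each change is its own record under `key + '!' + timestamp`, and a range scan from `key + '!'` to `key + '!~'` reads the history back in order. A plain object stands in for leveldb's sorted keyspace; real code would use `db.createReadStream({start, end})`.

```javascript
var records = {}

function put(key, value) { records[key] = value }

// Lexicographic range scan over sorted keys, like leveldb's
// createReadStream({start: ..., end: ...}):
function range(start, end) {
  return Object.keys(records).sort()
    .filter(function (k) { return k >= start && k <= end })
    .map(function (k) { return { key: k, value: records[k] } })
}

put('doc!2013-09-13T01:00:00Z', 'first draft')
put('doc!2013-09-13T02:00:00Z', 'fixed typo')
put('other!2013-09-13T01:30:00Z', 'unrelated record')

// Everything under 'doc!' -- the document's history, oldest first:
var history = range('doc!', 'doc!~')
```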
22:53:13  <brycebaril>https://npmjs.org/package/level-version
22:53:18  <brycebaril>Is that what you're looking for?
22:53:50  <dominictarr>yeah, that is quite similar
22:54:07  <dguttman>I'm actually trying to find when I last brought this up in this channel
22:54:27  <dguttman>but can't find it in my logs
22:54:48  <dguttman>level-version was suggested last time, but didn't quite do it
22:54:56  <dguttman>anyways, this was just a more simple use case
22:55:09  <dguttman>I have a much more complicated one, that I'm trying to avoid bringing up ;)
22:55:22  <brycebaril>dguttman: so you're looking for a shadow table more than versioned data?
22:55:59  <dguttman>brycebaril: not sure what that is, but sounds cool ;)
22:56:20  * ednapiranha quit (Remote host closed the connection)
22:57:15  <brycebaril>dguttman: that is typically a concept of a changelog for data. E.g. in postgres you'd make a trigger that'd write the record to a separate table upon change, with change metadata.
22:57:31  <brycebaril>Common in high regulatory spaces
22:57:52  <dguttman>ah, yes, like that
22:58:22  <dguttman>when I asked last time, I was working on a performance stats reporting app
22:58:32  <dguttman>(impressions, clicks, revenue, etc)
22:59:07  <dguttman>and it would have been nice to track the changes only, not the full record
22:59:08  * wolfeidau quit (Remote host closed the connection)
22:59:40  <dguttman>so I'm doing that in the app, when a change is posted
23:00:03  <dguttman>would have been nice to do it automatically in a trigger
23:02:24  <dguttman>the more complicated use case we have is we build a CEP of sorts, where we constantly "roll-up" lower level events into higher level docs
23:02:44  * thlorenz joined
23:04:36  <dguttman>a "page load" event comes in, which then creates/updates a "page view" which then creates or updates a "session" which then creates or updates a document that keeps track of stats for the day
23:05:00  <dguttman>damn, this is not a good day for writing intelligently
23:05:22  <dguttman>anyways, we accomplish this by checking existence first, then doing the updates
23:05:46  <dguttman>but it would be neat to let level handle it at in a hook
23:08:05  <dominictarr>dguttman: if it was a hook, what would the example code look like?
23:09:53  * fallsemo quit (Quit: Leaving.)
23:14:52  <levelbot>[npm] [email protected] <http://npm.im/level-secondary>: Secondary indexes for leveldb. (@juliangruber)
23:17:40  <juliangruber>dominictarr: level-secondary now has a way nicer api and works with sublevels https://github.com/juliangruber/level-secondary
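The idea behind a secondary index like level-secondary can be sketched as: every document write also writes an index record mapping the indexed field back to the primary key, so a lookup by that field becomes two gets. This is an in-memory stand-in; the field name, key layout, and helpers are illustrative, not level-secondary's actual API.

```javascript
var docs = {}
var index = {}

// Write a doc and maintain a secondary index on its 'email' field.
function put(key, doc) {
  docs[key] = doc
  index['email!' + doc.email] = key
}

// Look a doc up by the indexed field instead of its primary key:
// one get against the index, one against the primary store.
function byEmail(email) {
  var key = index['email!' + email]
  return key === undefined ? undefined : docs[key]
}

put('user!1', { name: 'julian', email: 'julian@example.com' })
var found = byEmail('julian@example.com')
```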
23:19:20  * tmcw joined
23:19:24  <dominictarr>looks good
23:19:54  * tmcw quit (Remote host closed the connection)
23:20:29  * dominictarr quit (Quit: dominictarr)
23:21:03  * thlorenz quit (Remote host closed the connection)
23:43:04  * thlorenz joined
23:45:17  * thlorenz quit (Remote host closed the connection)
23:48:13  * jjmalina quit (Quit: Leaving.)
23:51:10  <dguttman>dominictarr: https://gist.github.com/davidguttman/6557449
23:51:40  * esundahl quit (Remote host closed the connection)
23:52:07  * esundahl joined
23:56:58  * esundahl quit (Ping timeout: 256 seconds)