00:23:51  * ryan_ramage joined
00:26:30  * eugeneware joined
00:45:52  * eugeneware quit (Remote host closed the connection)
00:48:17  * thlorenz_ joined
00:56:29  * jxson quit (Remote host closed the connection)
00:58:03  <thlorenz_> hey - whoever is in Ireland already and from the US - do the Mac Air US plugs need an adapter or do they fit?
00:58:09  * thlorenz_ changed nick to thlorenz
00:58:44  <thlorenz> I know that the voltage will be converted by the power plug, but am wondering if I can physically plug it in w/out an adapter
01:04:49  * jcrugzz joined
01:10:24  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
01:10:40  * kenansulayman joined
01:25:27  * ryan_ramage quit (Quit: ryan_ramage)
01:26:07  * julianduque quit (Read error: Connection reset by peer)
01:26:47  * jxson joined
01:35:00  * jxson quit (Ping timeout: 245 seconds)
01:37:24  * eugeneware joined
02:01:04  * kenansulayman quit (Ping timeout: 264 seconds)
02:01:20  * kenansulayman joined
02:01:59  * fallsemo joined
02:01:59  * jcrugzz quit (Ping timeout: 260 seconds)
02:03:08  * esundahl joined
02:24:00  * eugeneware quit (Remote host closed the connection)
02:24:29  * eugeneware joined
02:27:33  * ryan_ramage joined
02:27:39  * ryan_ramage quit (Client Quit)
02:28:34  * eugeneware quit (Ping timeout: 241 seconds)
02:48:58  * thlorenz quit (Remote host closed the connection)
02:51:03  * thlorenz joined
02:55:00  * eugeneware joined
03:02:16  * timoxley joined
03:05:24  * tmcw joined
03:07:35  * fallsemo quit (Quit: Leaving.)
03:09:50  * tmcw quit (Ping timeout: 264 seconds)
03:25:21  * timoxley quit (Remote host closed the connection)
03:31:56  * mageemooney joined
03:33:14  <levelbot> [npm] [email protected] <http://npm.im/modella-leveldb>: modella plugin for leveldb (@mattmueller)
03:59:33  * fallsemo joined
04:01:03  * fallsemo quit (Client Quit)
04:06:54  * tmcw joined
04:11:15  * tmcw quit (Ping timeout: 245 seconds)
04:34:13  * jxson joined
04:35:52  * jxson quit (Read error: Connection reset by peer)
04:36:05  * jxson joined
04:36:50  * jxson quit (Remote host closed the connection)
04:37:40  * tmcw joined
04:42:30  * tmcw quit (Ping timeout: 264 seconds)
04:43:37  * timoxley joined
04:58:14  * jondelamotte_ joined
04:59:50  * jondelamotte quit (Ping timeout: 240 seconds)
05:08:24  * tmcw joined
05:13:09  * tmcw quit (Ping timeout: 256 seconds)
05:28:29  * thlorenz quit (Remote host closed the connection)
06:09:55  * tmcw joined
06:14:01  * tmcw quit (Ping timeout: 240 seconds)
06:28:37  * esundahl quit (Remote host closed the connection)
06:29:11  * esundahl joined
06:29:51  * mageemooney quit (Remote host closed the connection)
06:33:22  * esundahl quit (Ping timeout: 240 seconds)
06:33:51  * mageemooney joined
06:36:26  * jcrugzz joined
06:59:43  * esundahl joined
07:08:19  * esundahl quit (Ping timeout: 264 seconds)
07:11:25  * tmcw joined
07:15:43  * tmcw quit (Ping timeout: 260 seconds)
07:34:33  * esundahl joined
07:39:03  * esundahl quit (Ping timeout: 260 seconds)
07:44:52  * mageemooney quit (Remote host closed the connection)
07:55:45  <levelbot> [npm] [email protected] <http://npm.im/level-assoc>: relational foreign key associations (hasMany, belongsTo) for leveldb (@substack)
07:58:59  * eugeneware quit (Remote host closed the connection)
07:59:06  * eugeneware joined
07:59:07  * kenansulayman quit (Quit: ≈ and thus my mac took a subtle yet profound nap ≈)
08:00:27  * kenansulayman joined
08:00:47  * kenansulayman quit (Client Quit)
08:12:55  * tmcw joined
08:15:06  <juliangruber> substack: maybe live coding a chat with multilevel and level-live-stream, but I did this before, so it's not too exciting
08:15:28  <juliangruber> substack: or all about dealing with the single process limit, so npm.im/multilevel and npm.im/role
08:15:46  <juliangruber> substack: or an overview of the modules, and what leveldb is effective for right now
08:16:55  <substack> ok! you should probably just do whatever you think will be the most fun and interesting!
08:17:19  * tmcw quit (Ping timeout: 260 seconds)
08:19:09  <substack> my talk is going to be: trumpet -> hyperglue -> hyperspace -> hyperkey (unreleased) -> level-assoc -> hyperkey+assoc
08:20:10  <substack> the broader topic is shared rendering with progressive enhancement
08:20:39  <substack> that touches on level but not deeply because it's primarily about the other modules
08:24:15  <substack> oh and level-track
08:35:05  * esundahl joined
08:39:44  * esundahl quit (Ping timeout: 260 seconds)
08:43:16  <juliangruber> substack: ok, cool!
08:43:19  <juliangruber> substack: sounds good
08:43:40  * tmcw joined
08:48:31  * tmcw quit (Ping timeout: 264 seconds)
09:18:15  * dominictarr joined
10:04:32  * dominictarr quit (Quit: dominictarr)
10:10:26  * dominictarr joined
10:15:54  * tmcw joined
10:20:18  * tmcw quit (Ping timeout: 256 seconds)
10:46:59  * jcrugzz quit (Ping timeout: 260 seconds)
11:08:09  <rvagg> juliangruber, hij1nx, dominictarr: see nodeconfeu issue #6, I've given you access to my WIP presentation and put a quick rundown of the structure in the issue
11:36:32  <juliangruber> rvagg: cloning :)
11:37:11  <rvagg> ok, when you're done, pull the latest, I've just pushed more
11:39:18  <juliangruber> rvagg: will clone when I'm in dublin, the airport wifi is dying on me
11:39:36  <rvagg> righto, have a good flight!
11:44:32  * timoxley quit (Remote host closed the connection)
11:48:08  * tmcw joined
11:52:47  * tmcw quit (Ping timeout: 260 seconds)
12:05:45  <levelbot> [npm] [email protected] <http://npm.im/level-assoc>: relational foreign key associations (hasMany, belongsTo) for leveldb (@substack)
12:07:37  <juliangruber> rvagg: thanks man! are you in dublin yet?
12:18:13  <levelbot> [npm] [email protected] <http://npm.im/bytewise-hex>: Support for leveldb/levelup bytewise encodings in hex format (@eugeneware)
12:34:36  <tarruda> what could cause database corruption that would make a call to 'leveldb::RepairDB' necessary?
12:38:46  <rescrv> tarruda: bit flips on disk, bugs in the fs/leveldb, applications above leveldb that corrupt the heap and corrupt LevelDB's state in a way that makes it to disk but doesn't crash
12:39:07  <rescrv> tarruda: do you have a reproducible test case?
12:41:09  <tarruda> rescrv: no, I just got curious after reading the documentation
12:41:30  * dominictarr quit (Quit: dominictarr)
12:41:54  <rescrv> tarruda: it's basically a safety net. if there was some way you could knowingly corrupt the db, they'd just code a way to prevent that.
12:42:15  <tarruda> rescrv: ok
12:44:24  <tarruda> rescrv: I started reading your paper on how ACID transactions are implemented in HyperDex, unfortunately that was too complicated for me
12:44:51  <tarruda> rescrv: I got another idea on how to implement it though:
12:46:30  <tarruda> rescrv: 1 - each transaction gets a unique id, and updates made in the transaction are saved under a transaction-specific key namespace (keys are prefixed with the transaction id)
12:47:31  <tarruda> rescrv: also keeping track of uncommitted transactions in another key namespace
12:49:11  <tarruda> 2 - when the transaction is committed, iterate through all values updated by that transaction, and insert them in batches outside the transaction-specific key namespace with sync = true
12:49:38  * tmcw joined
12:50:29  <tarruda> but instead of adding the transaction id before the key we add it after in this second step (first step: [txid, key], second step: [key, txid])
12:51:00  <tarruda> finally delete the txid from the uncommitted transactions with sync: true
12:52:19  <tarruda> what do you think?
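tarruda's two-step scheme above can be sketched in plain JavaScript, with a Map standing in for leveldb. The `txn!` and `pending!` key prefixes are invented for illustration, not ArchDB's actual layout:

```javascript
// In-memory sketch of the two-namespace transaction scheme.
const store = new Map();

// 1 - writes go into a transaction-specific namespace, and the
//     txid is tracked in a separate "uncommitted" namespace
function txnPut(txid, key, value) {
  store.set('pending!' + txid, true);
  store.set('txn!' + txid + '!' + key, value);
}

// 2 - on commit, move values out of the transaction namespace into
//     the main keyspace (leveldb would do this as a sync batch),
//     then drop the txid from the uncommitted set
function txnCommit(txid) {
  const prefix = 'txn!' + txid + '!';
  for (const [k, v] of [...store]) {
    if (k.startsWith(prefix)) {
      store.set(k.slice(prefix.length), v);
      store.delete(k);
    }
  }
  store.delete('pending!' + txid);
}

txnPut('t1', 'a', 1);
txnCommit('t1');
```

A crash before `txnCommit` finishes leaves the `pending!` marker behind, which is what would let recovery find and replay or discard the half-committed transaction.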
12:53:45  * tmcw quit (Ping timeout: 245 seconds)
12:54:56  <rescrv> tarruda: I think it's unnecessarily complicated, and writes way too much data
12:55:10  <rescrv> tarruda: figure out what to write in the transaction and do it as one batch with sync=true
12:55:51  <rescrv> if you're doing it on one node, that's easy
12:56:08  <rescrv> tarruda: it's cross-node transactions that are expensive and that we target with Warp
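rescrv's suggestion, sketched: compute every write first, then apply them as one atomic batch. With levelup this would be `db.batch(ops, { sync: true }, callback)`; here a Map and a toy `batch` function stand in so the all-or-nothing shape is runnable:

```javascript
// Toy stand-in for a leveldb write batch: validate everything
// up front so a bad batch changes nothing at all.
const db = new Map();

function batch(ops) {
  for (const op of ops) {
    if (op.type !== 'put' && op.type !== 'del') throw new Error('bad op');
  }
  // only reached if every op is valid: apply all of them
  for (const op of ops) {
    if (op.type === 'put') db.set(op.key, op.value);
    else db.delete(op.key);
  }
}

batch([
  { type: 'put', key: 'k_A', value: 'v_A' },
  { type: 'put', key: 'k_B', value: 'v_B' }
]);
```

The op objects deliberately mirror levelup's batch format (`{ type, key, value }`); the atomicity-by-validation trick is only a sketch of what leveldb's WriteBatch gives you for real.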
12:56:17  <tarruda> rescrv: I thought about that too, but I wanted to implement it in a way that lets one have long-running transactions
12:57:03  <tarruda> rescrv: so one could check out a transaction and have a consistent snapshot of the database
12:57:18  <tarruda> rescrv: like a version control system
12:57:23  <rescrv> tarruda: long-running transactions are almost always aggregates. They don't change much, but compute an aggregate and possibly write it. Use LevelDB's snapshots and provide snapshot isolation instead of serializability
12:57:40  <rescrv> tarruda: so you want historical data
12:58:15  <tarruda> rescrv: I want ArchDB to work a bit like git
12:58:37  <tarruda> rescrv: where one checks out a revision, does some work and then merges with the master branch
12:59:21  <tarruda> rescrv: when merging the database checks for updated keys and throws conflict errors
12:59:28  <rescrv> tarruda: if you're on a single host, use wall clock time. Check out Spanner (Corbett et. al. 2012) and Adya et. al 1995.
12:59:31  <tarruda> rescrv: exactly like git does when it fails to merge
12:59:40  <rescrv> what benefit does this give to the user?
13:00:22  <rescrv> tarruda: it's MVCC with git-like semantics. It seems like mechanism for mechanism's sake. What is it that you want in the end? Find that and then build mechanism that meets the goal. Don't start with the mechanism.
13:00:28  <tarruda> rescrv: that has the benefit of letting one work offline with data and then merge when a connection is available
13:00:51  <rescrv> and what happens when there's a conflict?
13:01:25  <tarruda> rescrv: the database throws an error with all the conflicted keys and their updated values
13:01:36  <tarruda> rescrv: which will be presented to the user
13:01:59  <tarruda> rescrv: when it's updated again the merge will succeed (if it wasn't updated in the meantime)
13:02:03  <rescrv> tarruda: and what does an application do? will the transaction commit and later abort? what can I do when in offline mode?
13:02:28  <tarruda> rescrv: here's an example workflow:
13:02:35  * alanhoff joined
13:03:49  <tarruda> rescrv: user A checks out a revision and starts working with its local cache. user B checks out the same revision, updates key X and commits
13:04:25  * alanhoff part
13:04:52  <mbalho> tarruda: sounds like couch/dropbox datastore. mikeal has a similar project called couchup that's in node + levelup
13:05:14  <tarruda> rescrv: if user A also updates key X the commit will fail, and the central system will send the updated value of X
13:05:25  <mbalho> tarruda: mikeal's version doesn't do branches, so each document has a linear revision history (to keep things simple)
13:05:53  <rescrv> tarruda mbalho it sounds like you both want the same thing: shove a database format into Git and write custom merge functions, so people can share data and share updates to data
13:06:13  <tarruda> yes
13:06:29  <mbalho> tarruda: I'm working on this for the next 6 months at least via a grant: https://github.com/maxogden/dat
13:06:34  <tarruda> I'm focusing more on data consistency
13:06:48  <rescrv> tarruda: I'm wondering when a transaction may commit. If it has to wait for connectivity with others, then you're best to not even allow offline operation.
13:08:14  <tarruda> rescrv: a transaction can only commit when a connection is available, however archdb works in the web browser so it's trivial to do offline work and then commit the transaction when a connection is available
13:09:07  <tarruda> mbalho: dat sounds nice
13:09:15  <tarruda> mbalho: I'm gonna investigate it
13:09:20  <rescrv> tarruda: so it's meant for interactive access, where latency does not matter?
13:09:29  <tarruda> rescrv: yes
13:09:38  <mbalho> tarruda: feel free to check out/comment on the issues on that repo, I'm open to ideas and feedback
13:10:14  * alanhoff joined
13:10:32  <tarruda> I implemented ArchDB to work with pluggable backend storage, so it works seamlessly in the browser or on node.js with the fs module
13:10:52  <rescrv> tarruda: what is the consistency guarantee you want to provide?
13:11:23  <tarruda> rescrv: that one cannot accidentally overwrite data
13:11:42  <tarruda> besides ACID semantics
13:12:37  <tarruda> I'm really new to this database business so I welcome any help/feedback you guys can provide
13:12:41  <rescrv> tarruda: ACID will give you the property that "one cannot accidentally overwrite data" so long as one checks a key before writing it.
13:13:18  <rescrv> here, I'm assuming ACID == one-copy serializability. Which requires that you track values read and written.
13:13:24  <tarruda> rescrv: that's all I want
13:13:37  <rescrv> which? that property, or ACID?
13:13:41  <tarruda> ACID
13:14:36  <rescrv> so you'll need to track the values that were read and written to ensure ACID, which can be potentially many
13:14:38  <tarruda> I want it to work a lot like git, except that when a merge/commit fails, one doesn't need to check out the entire master branch
13:14:45  <tarruda> just the updated keys
13:15:06  <tarruda> (which will be done automatically since the error already contains information about the updated keys)
13:15:42  <rescrv> the updated keys will be values you read that were overwritten, and values that were written by others that you wrote as well, right?
13:15:53  <tarruda> rescrv: yes but that is already done automatically due to a design decision I made on archdb
13:16:15  <tarruda> rescrv: archdb keeps a history index
13:16:24  <tarruda> rescrv: that is used when doing the conflict check
13:16:47  <rescrv> tarruda: then it sounds like you can do it with the way you designed keys. I just don't understand updating from a historical transaction. E.g., checkout last week, update, and merge to master.
13:17:34  <tarruda> rescrv: see these two test cases: https://github.com/tarruda/archdb/blob/master/test/acceptance/api.coffee#L423-L534
13:18:11  <tarruda> it shows how a merge with/without key conflicts works
13:18:50  <tarruda> tx1,tx2,tx3 show a conflict
13:19:11  <rescrv> when you do find().all(), does it return every item in the DB?
13:19:31  <tarruda> yes
13:19:43  <tarruda> every item that matches the query
13:20:01  <tarruda> find().each is used to iterate with cursor-like behavior
13:20:18  <tarruda> find() can receive a query argument
13:20:45  <rescrv> I start a transaction, do find().all(), and maintain a count of the number of items. While I'm doing this, you write a key that didn't exist. Will you generate a conflict?
13:21:09  <tarruda> here's a description of the merge algorithm: https://github.com/tarruda/archdb/blob/master/src/local/database.coffee#L51-L73
13:21:57  <rescrv> If I write the count to a new key, does it generate a merge conflict?
13:22:57  <tarruda> no, conflicts are only generated when updating a key that was modified after the revision was checked out
13:24:02  <rescrv> so you don't want ACID then. You want something looser. The scenario I presented is not strictly serializable if my commit doesn't fail/generate a merge conflict
13:25:12  <tarruda> what does serializable mean exactly?
13:26:25  * dominictarr joined
13:26:59  <rescrv> one-copy serializable means that the execution is equivalent to some serial schedule. Another way to think about it is that the result of the database after executing should be as if there was a giant lock around the DB that each transaction held for its entire duration without releasing until it committed.
13:27:30  <tarruda> there's no lock, however commits are serialized
13:27:55  <tarruda> the index implementation uses copy-on-write to maintain isolation
13:28:05  <rescrv> The execution is serializable if there's some such ordering of the transactions under the locking scheme that produces the same result. It's not saying you need a lock. It's saying you need the resulting state to be identical to a state where you held the locks.
13:28:29  <rescrv> the way you described it above, it's not providing isolation
13:28:49  <tarruda> due to the conflict check?
13:29:28  <rescrv> because it allows a non-serializable execution
13:30:44  <tarruda> by starting a transaction, one is actually saving the root node of the database
13:30:53  <tarruda> since the indexes are copy-on-write
13:31:02  <tarruda> I think it's isolated from other transactions
13:31:23  <tarruda> isn't that what isolation means in a transaction context?
13:31:27  <rescrv> consider this concrete example. The DB has {k_A: v_A, k_B: v_B, k_C: v_C}. We both start a transaction. I count all the keys and get 3. You count all the keys and get 3. I write k_rescrv: 3. You write k_tarruda: 3. You commit. I try to commit. If I'm allowed to commit, it's incorrect because there's no order where we both can add one key and have the other still see 3.
13:32:27  <rescrv> isolation means that the transactions' operations do not affect each other. COW can help with this, but its use does not automatically guarantee it.
13:33:07  <tarruda> hmmm
13:34:29  <tarruda> I still don't get why this previous example is wrong since we both counted our versions of the database
13:34:55  <tarruda> so that's not ACID?
13:36:19  <tarruda> what would happen if you did that in a transaction with 'serializable' isolation level on postgres?
13:37:00  <rescrv> consider if there's a lock around the database. Only one of us can have the lock at a time. There's two possible outcomes: {k_A: v_A, k_B: v_B, k_C: v_C, k_rescrv: 3, k_tarruda: 4} and {k_A: v_A, k_B: v_B, k_C: v_C, k_rescrv: 4, k_tarruda: 3}
13:37:08  <rescrv> {k_A: v_A, k_B: v_B, k_C: v_C, k_rescrv: 3, k_tarruda: 3} is not a valid outcome
13:37:44  <rescrv> I'm not familiar enough with postgres to know. I hope it would abort or retry one or both of the transactions.
13:37:59  <tarruda> I think I get it
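rescrv's counterexample can be replayed in plain JavaScript, with object copies standing in for per-transaction snapshots; it shows why key-level conflict checks alone miss this anomaly (commonly called write skew):

```javascript
// The DB rescrv describes, plus two independent snapshots.
const db = { k_A: 'v_A', k_B: 'v_B', k_C: 'v_C' };

const snapRescrv = { ...db };  // rescrv's transaction snapshot
const snapTarruda = { ...db }; // tarruda's transaction snapshot

// each transaction counts the keys it can see: both see 3
const countRescrv = Object.keys(snapRescrv).length;
const countTarruda = Object.keys(snapTarruda).length;

// both commits pass a key-level conflict check, because
// k_rescrv and k_tarruda never collide with anything
db.k_rescrv = countRescrv;
db.k_tarruda = countTarruda;

// under any serial order one transaction would have seen 4 keys,
// so 3/3 is a state no serial schedule can produce
const serializable = db.k_rescrv !== db.k_tarruda;
```

This is exactly the gap between snapshot isolation (which this scheme provides) and one-copy serializability (which it does not).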
13:38:49  * Acconut joined
13:39:15  <tarruda> so to be considered ACID a database has to simulate serializable transactions
13:40:06  <rescrv> in my mind, ACID implies the strongest guarantee of one-copy serializability. You'll find plenty of database vendors that actually don't do that and market their product that does less as ACID, just to have the right buzzwords.
13:40:22  <tarruda> I see
13:40:58  <rescrv> there's a weaker guarantee called "snapshot isolation" that many people like to provide and call ACID. I think it's OK to do that as long as you state "ACID" and "snapshot isolation" in the same size font within the paragraph where you claim ACID.
13:41:11  <rescrv> I think the scheme you have would indeed be snapshot isolation.
13:41:29  <tarruda> ok
13:41:30  <rescrv> and it would prevent you from overwriting data that others wrote.
13:41:55  <rescrv> if that's good enough for you, you can just call it that, and call it done
13:42:36  <tarruda> for me it's more important to have offline transactions than to have serializable transactions
13:42:43  <tarruda> so it's fine for now
13:42:53  <tarruda> however the example you gave me
13:43:00  <tarruda> raised a few questions in my head
13:43:16  <tarruda> I might have trouble implementing the data-processing framework
13:44:06  * thlorenz joined
13:44:29  <tarruda> databases like mongodb and couchdb provide mapreduce, I want to provide a better tool
13:44:43  <tarruda> which uses the history index
13:45:06  <tarruda> do you see how I might have problems?
13:47:02  <rescrv> I don't know what a history index is, so I can imagine it might be tricky, but cannot enumerate specific problems.
13:47:49  <tarruda> here's an example: https://github.com/tarruda/archdb/blob/master/test/acceptance/api.coffee#L325-L358
13:48:03  <tarruda> basically the database logs every update made
13:48:19  <tarruda> to a special read-only index called '$history'
13:48:23  <rescrv> what guarantees does the history index make?
13:49:11  <tarruda> I'm implementing another special index called '$hooks' which one can use to install procedures to be executed at database-specific events
13:49:16  <tarruda> for example
13:49:27  <tarruda> if one wants to index all customers by name
13:49:56  * Acconut quit (Quit: Acconut)
13:51:09  <tarruda> it can be done by installing the following hook: 'for (let entry of db('$history')) { db('customers_by_name').set(entry.value.name, entry.ref) }'
13:51:10  * tmcw joined
13:51:24  <tarruda> there are a few more details like only processing new history entries
13:51:52  <tarruda> but one can install a before-query 'customers_by_name' hook to build such a lazy index
13:52:35  <tarruda> that can lead to inconsistencies if it's done at the transaction level
13:53:11  <tarruda> one can add an index to calculate aggregate values like sum/count, which would be wrong as in the example you gave
13:55:45  <tarruda> now that I think about it more, I think this problem can be solved by running these hooks in the same queue used to merge commits
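The '$history'-driven hook tarruda describes can be sketched as follows; the entry shape (`{ value, ref }`) follows the snippet above, while the array-backed history, the `processed` cursor, and the names are invented stand-ins for ArchDB internals:

```javascript
// Stand-in for the '$history' index: an append-only log of updates.
const history = [
  { value: { name: 'ada' }, ref: 'customer:1' },
  { value: { name: 'bob' }, ref: 'customer:2' }
];

// Derived secondary index built by the hook.
const customersByName = new Map();

// Only process history entries added since the last run,
// matching "only processing new history entries" above.
let processed = 0;
function runHook() {
  for (const entry of history.slice(processed)) {
    customersByName.set(entry.value.name, entry.ref);
  }
  processed = history.length;
}

runHook();
```

Running `runHook` from the same serialized queue that merges commits (as suggested above) is what would keep an aggregate-style index from seeing the write-skew anomaly in rescrv's counting example.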
13:56:02  * tmcw quit (Ping timeout: 264 seconds)
13:57:17  * thlorenz quit (Remote host closed the connection)
14:07:45  * Acconut joined
14:08:39  * Acconut quit (Client Quit)
14:12:17  <eugeneware> rvagg: I jotted down my ideas about some possible levelmeup exercises at https://github.com/rvagg/levelmeup/issues/1
14:13:43  <levelbot> [npm] [email protected] <http://npm.im/level-microblog>: A simple microblog app built on leveldb/levelup (@eugeneware)
14:16:44  <levelbot> [npm] [email protected] <http://npm.im/level-assoc>: relational foreign key associations (hasMany, belongsTo) for leveldb (@substack)
14:21:37  * thlorenz joined
14:21:54  * tmcw joined
14:26:16  * tmcw quit (Ping timeout: 264 seconds)
14:41:14  * eugeneware quit (Remote host closed the connection)
14:54:43  * thlorenz quit (Remote host closed the connection)
15:00:47  * thlorenz joined
15:01:58  * eugeneware joined
15:09:43  <levelbot> [npm] [email protected] <http://npm.im/hyperkey>: shared server+client rendering with live updates for key/value stores (@substack)
15:11:05  * thlorenz quit (Remote host closed the connection)
15:17:28  * esundahl joined
15:17:43  * eugeneware quit (Remote host closed the connection)
15:18:37  * thlorenz joined
15:20:32  * thlorenz quit (Remote host closed the connection)
15:30:58  * thlorenz joined
15:43:26  * thlorenz quit (Remote host closed the connection)
15:54:12  * thlorenz joined
15:58:13  <levelbot> [npm] [email protected] <http://npm.im/hyperkey>: shared server+client rendering with live updates for key/value stores (@substack)
15:59:32  * timoxley joined
16:03:47  * eugeneware joined
16:16:14  <levelbot> [npm] [email protected] <http://npm.im/mosca>: The multi-transport MQTT broker for node.js. It supports AMQP, Redis, ZeroMQ, MongoDB or just MQTT. (@matteo.collina)
16:29:18  <levelbot> [npm] [email protected] <http://npm.im/consensus>: vote for topics for your next meeting (@ceejbot)
16:35:29  * thlorenz quit (Remote host closed the connection)
16:42:14  <levelbot> [npm] [email protected] <http://npm.im/bytewise>: Binary serialization which sorts bytewise for arbitrarily complex data structures (@deanlandolt)
16:44:14  * jjmalina joined
16:54:25  <rvagg> eugeneware: thanks! great to get more input
16:54:31  * tmcw joined
16:58:55  * tmcw quit (Ping timeout: 260 seconds)
17:08:32  * kenansulayman joined
17:27:31  * esundahl quit (Remote host closed the connection)
17:32:21  * Acconut joined
17:32:46  * Acconut quit (Client Quit)
17:35:49  * esundahl joined
17:48:13  <levelbot> [npm] [email protected] <http://npm.im/level-exists>: Check if a datum exists without reading its value. (@juliangruber)
17:48:33  <mbalho> oooh
17:49:14  <mbalho> juliangruber: would be cool if that worked for the peek last use case too
17:49:37  <mbalho> juliangruber: I'm guessing that createKeyStream is optimized in levelup/down then?
17:51:43  <levelbot> [npm] [email protected] <http://npm.im/level-exists>: Check if a datum exists without reading its value. (@juliangruber)
17:57:34  <rvagg> a keystream is a readstream with values:false, so yes, optimised: while leveldb fetches the value from the store, it's not copied by leveldown so there's no JS-land conversion cost
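The existence-check pattern rvagg describes can be sketched without levelup: a generator stands in for levelup's `db.createKeyStream({ gte, lte })` (a real levelup call), and the `exists` helper is a hypothetical wrapper, not level-exists' actual implementation:

```javascript
// Sorted keys standing in for a leveldb keyspace.
const keys = ['a', 'b', 'c'];

// Stand-in for db.createKeyStream({ gte, lte }): yields keys
// in the range without ever touching values in JS-land.
function* createKeyStream(opts) {
  for (const k of keys) {
    if ((!opts.gte || k >= opts.gte) && (!opts.lte || k <= opts.lte)) {
      yield k;
    }
  }
}

// Existence check: if the single-key range yields anything,
// the key is there; the value is never materialised.
function exists(key) {
  for (const k of createKeyStream({ gte: key, lte: key })) return true;
  return false;
}
```

With real levelup you would additionally pass `limit: 1` so the iterator stops after the first hit.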
17:57:53  <rvagg> dominictarr: it's raining so I don't think I'll be coming over!
18:05:34  * jjmalina quit (Quit: Leaving.)
18:08:28  * jjmalina joined
18:13:15  <levelbot> [npm] [email protected] <http://npm.im/level-track>: receive updates from a set of keys and ranges in leveldb (@substack)
18:14:19  * jjmalina quit (Quit: Leaving.)
18:14:29  * jjmalina joined
18:18:27  * eugeneware quit (Ping timeout: 268 seconds)
18:25:14  * Acconut joined
18:25:31  * Acconut quit (Client Quit)
18:26:49  * tmcw joined
18:31:40  * tmcw quit (Ping timeout: 264 seconds)
18:45:08  * gwenbell joined
18:57:44  * tmcw joined
19:02:16  * tmcw quit (Ping timeout: 264 seconds)
19:05:59  * gwenbell quit (Ping timeout: 260 seconds)
19:06:01  * esundahl quit (Remote host closed the connection)
19:06:34  * esundahl joined
19:10:50  * esundahl quit (Ping timeout: 245 seconds)
19:20:38  * timoxley quit (Remote host closed the connection)
19:30:24  * thlorenz joined
19:37:08  * esundahl joined
19:40:30  * jjmalina quit (Quit: Leaving.)
19:43:17  * jjmalina joined
19:45:31  * esundahl quit (Ping timeout: 260 seconds)
19:51:11  * timoxley joined
19:59:14  * tmcw joined
20:03:00  * dominictarr quit (Quit: dominictarr)
20:03:20  * tmcw quit (Ping timeout: 245 seconds)
20:14:29  * eugeneware joined
20:14:56  * dominictarr joined
20:18:47  * eugeneware quit (Ping timeout: 255 seconds)
20:35:31  * thlorenz quit (Remote host closed the connection)
20:37:39  * esundahl joined
20:42:30  * esundahl quit (Ping timeout: 256 seconds)
20:53:47  * tarruda part
21:01:33  * tarruda joined
21:06:04  <tarruda> I wanna write a node-levelup plugin in the spirit of level-sublevel (a wrapper around a levelup object) - is there any npm module that provides a class with all the methods?
21:06:54  * gwenbell joined
21:09:26  * esundahl joined
21:27:44  * thlorenz_ joined
21:27:48  * esundahl_ joined
21:31:06  * esundahl quit (Ping timeout: 264 seconds)
21:31:29  * tmcw joined
21:35:52  * tmcw quit (Ping timeout: 264 seconds)
21:49:22  <kenansulayman> tarruda Object.keys(levelUp)
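kenansulayman's `Object.keys` suggestion, sketched: enumerate the methods on an existing instance and delegate to them from a wrapper, which is roughly what a level-sublevel-style plugin needs. The `fakeLevel` stub is a stand-in for a real levelup object, not its actual API:

```javascript
// Stub with own-property methods, like a levelup instance whose
// methods are attached per-instance rather than on a prototype.
const fakeLevel = {
  data: new Map(),
  put(key, value) { this.data.set(key, value); },
  get(key) { return this.data.get(key); }
};

// Build a wrapper by copying every function found via Object.keys,
// keeping `this` bound to the underlying db on each call.
function wrap(db) {
  const wrapper = {};
  for (const name of Object.keys(db)) {
    if (typeof db[name] === 'function') {
      wrapper[name] = (...args) => db[name](...args);
    }
  }
  return wrapper;
}

const sub = wrap(fakeLevel);
sub.put('k', 'v');
```

Note `Object.keys` only sees own enumerable properties, so this works for per-instance methods but would miss anything defined on a prototype (which is what rvagg's later reply about a "more public & classic prototypal system" is about).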
21:50:43  * esundahl_ quit (Remote host closed the connection)
21:52:58  * Acconut joined
21:53:38  * Acconut quit (Remote host closed the connection)
21:56:50  * jjmalina quit (Quit: Leaving.)
21:58:27  * jcrugzz joined
21:58:31  * dominictarr quit (Quit: dominictarr)
22:19:53  * esundahl joined
22:26:45  * Acconut joined
22:26:59  * Acconut quit (Client Quit)
22:43:16  * ryan_ramage joined
22:45:00  * thlorenz_ quit (Remote host closed the connection)
22:45:12  * Acconut joined
22:45:56  * Acconut quit (Client Quit)
22:55:53  * thlorenz joined
22:57:06  * thlorenz quit (Remote host closed the connection)
23:03:45  * tmcw joined
23:06:30  * gwenbell quit (Quit: Lost terminal)
23:07:55  * tmcw quit (Ping timeout: 245 seconds)
23:08:19  * ryan_ramage quit (Quit: ryan_ramage)
23:09:37  * ryan_ramage joined
23:11:14  * ryan_ramage quit (Client Quit)
23:20:44  * dominictarr joined
23:23:22  * dominictarr_ joined
23:25:26  * dominictarr quit (Ping timeout: 264 seconds)
23:25:27  * dominictarr_ changed nick to dominictarr
23:35:34  * ryan_ramage joined
23:43:46  <rvagg> tarruda: not yet, but we're planning on moving to a more public & classic prototypal system so you'll eventually be able to just take the LevelUP object and bend it to your will, or even just cherry-pick methods off the prototype
23:44:16  <rvagg> tarruda: we'll get there soon, 'cause the current very-private implementation is annoying some of us a little too much
23:44:59  <rvagg> for now tho... just implement manually
23:49:29  * ryan_ramage quit (Quit: ryan_ramage)
23:56:06  * dominictarr quit (Quit: dominictarr)
23:57:16  <levelbot> [npm] [email protected] <http://npm.im/level-assoc>: relational foreign key associations (hasMany, belongsTo) for leveldb (@substack)