00:04:59  <kenansulayman>juliangruber Thank you for the help btw.
00:05:09  <juliangruber>kenansulayman: sure :)
00:05:18  <kenansulayman>The coffee is on me then
00:14:27  <juliangruber>:)
00:15:27  * thlorenz joined
00:20:31  <juliangruber>substack dominictarr: I get the impression that pull based streams aren't optimal for slow connections, as the roundtrip for "give me data" might take a long time
00:24:32  <juliangruber>shouldn't that kind of feedback be transmitted in the background/off-line?
00:29:38  * ryan_ramage quit (Quit: ryan_ramage)
00:31:52  <juliangruber>and how do you do backpressure with websockets?
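One common answer, sketched here as an aside: the browser WebSocket API has no backpressure events, but it does expose bufferedAmount, so a producer can poll it and pause until the socket's buffer drains. The names sendWithBackpressure, source, and highWaterMark are illustrative, not from the discussion; source is assumed to be a Node-style stream with pause()/resume().

    // Sketch: throttle a source stream by polling WebSocket#bufferedAmount.
    function sendWithBackpressure (ws, source, highWaterMark) {
      source.on('data', function (chunk) {
        ws.send(chunk)
        if (ws.bufferedAmount <= highWaterMark) return
        source.pause()
        var timer = setInterval(function () {
          if (ws.bufferedAmount <= highWaterMark) {
            clearInterval(timer)
            source.resume()
          }
        }, 50)
      })
    }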
00:41:41  * ryan_ramage joined
00:47:42  * kenansulayman quit (Quit: ≈♡≈)
00:58:55  * jxson quit (Remote host closed the connection)
01:00:04  * jxson joined
01:00:37  * jxson quit (Remote host closed the connection)
01:01:28  * dominictarr_ joined
01:04:56  * dominictarr quit (Ping timeout: 268 seconds)
01:04:56  * dominictarr_ changed nick to dominictarr
01:07:04  * kenansulayman joined
01:14:16  * kenansulayman quit (Ping timeout: 264 seconds)
01:21:35  * i_m_ca joined
01:30:56  * jxson joined
01:38:58  * jxson quit (Ping timeout: 246 seconds)
01:44:09  <substack>mbalho: now lexicographic-integer doesn't have the precision problems I mentioned earlier
01:44:29  <substack>at the expense of an extra byte in the worst case
01:45:01  <substack>integers greater than ~4 billion will take 10 bytes to store
01:48:34  <substack>and if you want a compact floating point representation, you can just store the lexicographic integer paired with a single-byte power-of-16 exponent divisor
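As a hedged illustration of the idea (not necessarily lexicographic-integer's actual byte layout): length-prefixed big-endian bytes compare correctly under memcmp, at the cost of the one extra length byte substack mentions.

    // Sketch: shorter encodings are smaller numbers; equal lengths compare
    // byte-by-byte, so byte order equals numeric order for non-negative ints.
    function encode (n) {
      var hex = n.toString(16)
      if (hex.length % 2) hex = '0' + hex
      var body = Buffer.from(hex, 'hex')
      return Buffer.concat([Buffer.from([body.length]), body])
    }
    // e.g. encode(255) < encode(256) < encode(4294967296) under memcmp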
01:51:59  * tmcw joined
01:53:58  * tmcw quit (Remote host closed the connection)
01:57:48  * soldair quit (Quit: Page closed)
02:08:54  * dguttman quit (Quit: dguttman)
02:11:12  * i_m_ca quit (Read error: Connection reset by peer)
02:11:38  * i_m_ca joined
02:21:10  * timoxley joined
02:28:33  * eugeneware joined
02:45:23  * timoxley quit (Remote host closed the connection)
02:53:45  * ryan_ramage quit (Quit: ryan_ramage)
03:01:33  * eugeneware quit (Remote host closed the connection)
03:05:30  * timoxley joined
03:09:55  * timoxley quit (Remote host closed the connection)
03:11:04  * timoxley joined
03:21:07  * ryan_ramage joined
03:21:30  * timoxley quit (Remote host closed the connection)
03:29:59  * thlorenz quit (Remote host closed the connection)
03:34:39  * dominictarr_ joined
03:37:41  * dominictarr quit (Ping timeout: 248 seconds)
03:37:42  * dominictarr_ changed nick to dominictarr
03:40:17  * timoxley joined
03:48:42  * ryan_ramage quit (Quit: ryan_ramage)
03:51:39  * ryan_ramage joined
04:04:37  * ryan_ramage quit (Quit: ryan_ramage)
04:28:59  * i_m_ca quit (Ping timeout: 260 seconds)
04:50:34  * ryan_ramage joined
05:22:29  * ryan_ramage quit (Quit: ryan_ramage)
05:53:34  * jxson joined
05:56:15  * jxson_ joined
05:57:51  * jxson quit (Ping timeout: 240 seconds)
05:59:31  * jxson_ quit (Remote host closed the connection)
06:03:26  * jxson joined
06:08:07  * ryan_ramage joined
06:08:11  * ryan_ramage quit (Client Quit)
06:17:34  * dominictarr_ joined
06:20:38  * dominictarr quit (Ping timeout: 264 seconds)
06:20:39  * dominictarr_ changed nick to dominictarr
06:22:30  * jxson quit (Remote host closed the connection)
06:47:33  * jxson joined
06:47:37  * jxson quit (Remote host closed the connection)
07:30:39  <mbalho>anyone know how to determine the best batch size for maximum write throughput?
07:31:06  <mbalho>for instance if i do 1 million small documents in a batch its about 1/3 as many writes/second as if i do 50,000
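One way to answer this empirically, sketched with levelup's db.batch(); makeDocs(n) is a hypothetical generator returning n { key, value } pairs.

    // Sketch: measure writes/second for one batch size, then compare sizes.
    function bench (db, batchSize, total, cb) {
      var start = Date.now()
      var written = 0
      ;(function next () {
        if (written >= total) {
          return cb(null, total / ((Date.now() - start) / 1000))
        }
        var ops = makeDocs(batchSize).map(function (doc) {
          return { type: 'put', key: doc.key, value: doc.value }
        })
        db.batch(ops, function (err) {
          if (err) return cb(err)
          written += batchSize
          next()
        })
      })()
    }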
07:40:30  * dominictarr quit (Quit: dominictarr)
08:24:21  * kenansulayman joined
08:34:04  * kenansulayman quit (Ping timeout: 264 seconds)
08:34:27  * kenansulayman joined
09:27:06  * esundahl_ joined
09:29:26  * esundahl_ quit (Remote host closed the connection)
09:40:55  * kenansulayman quit (Quit: ∞♡∞)
09:41:18  * kenansulayman joined
09:57:10  * dominictarr joined
10:05:57  * jcrugzz quit (Ping timeout: 248 seconds)
10:28:29  * timoxley quit (Remote host closed the connection)
10:34:13  * jcrugzz joined
10:40:48  * jcrugzz quit (Ping timeout: 276 seconds)
10:49:38  * fb55 joined
10:59:13  * thlorenz_ joined
11:01:39  * thlorenz_ quit (Remote host closed the connection)
11:13:28  * Acconut joined
11:13:43  * Acconut quit (Remote host closed the connection)
11:18:40  * fb55 quit (Remote host closed the connection)
11:19:15  <rescrv>substack: you may find "ordered_encode_double" useful: https://github.com/rescrv/HyperDex/blob/master/common/ordered_encoding.cc
11:20:02  <rescrv>it takes a double and returns a uint64_t whose big-endian representation is memcmp-able where double(a) < double(b) => memcmp(a, b) < 0
11:20:26  <rescrv>mbalho: your batch size should not exceed your write-buffer size
11:23:24  <rescrv>mbalho: If by "small documents", you mean sizeof(key) + sizeof(value) < 32B, then you'll find that inserting in reverse order of how they eventually end up in LevelDB will give you best perf.
11:27:13  <rescrv>and smaller batches will help
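The linked ordered_encoding.cc uses the classic IEEE-754 sign-flip trick; a hedged JavaScript rendering (assuming Node's BigInt buffer accessors):

    // Sketch: non-negative doubles get the sign bit set, negative doubles
    // have every bit inverted, so big-endian bytes memcmp in numeric order.
    function orderedEncodeDouble (x) {
      var buf = Buffer.alloc(8)
      buf.writeDoubleBE(x, 0)
      var bits = buf.readBigUInt64BE(0)
      bits = bits & (1n << 63n)
        ? ~bits & 0xffffffffffffffffn // negative: invert all bits
        : bits | (1n << 63n)          // non-negative: set the sign bit
      buf.writeBigUInt64BE(bits, 0)
      return buf
    }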
11:55:49  * thlorenz joined
12:59:39  * ednapiranha joined
13:27:06  <kenansulayman>dominictarr Can partial matching be implemented efficiently with level-search?
13:31:55  <dominictarr>kenansulayman: what do you mean by partial matching?
13:32:19  <kenansulayman>"Kenan Sulayman" should match "kenan" and "sulayman", and a prefix like "ken" should also match "Kenan Sulayman"
13:33:34  <dominictarr>kenansulayman: yes, although that isn't implemented just yet
13:34:06  <dominictarr>level-search implements partial matching on json properties
13:34:32  <dominictarr>strings could be split into sections by spaces - but not implemented yet
13:34:45  <kenansulayman>That sounds extremely inefficient
13:34:59  <kenansulayman>That could be done with a tree
13:35:02  <dominictarr>you should also check this https://github.com/eugeneware/level-queryengine
13:35:16  <dominictarr>kenansulayman: a trie?
13:35:34  <kenansulayman>Oh yes
13:36:02  <dominictarr>kenansulayman: you end up taking 3 times as much space to save each document - but it's not really a big deal
13:36:41  <dominictarr>a trie probably works best in memory, because it will take more lookups
13:37:33  <dominictarr>but if you are putting it on disk you want to optimize for a single seek, I think.
13:37:50  <kenansulayman>actually we're not saving documents
13:38:00  <kenansulayman>we want to search for people
13:38:32  <kenansulayman>But a person can change names rapidly; also Graph is not an option and quite overkill
13:38:52  <dominictarr>Graph?
13:39:56  <dominictarr>kenansulayman: if you just want to index names or a few fields then implementing something like that is pretty easy
13:40:31  <kenansulayman>Graph = Graph Database
13:40:45  <dominictarr>just save an extra two pointers, firstname -> doc_id, lastname -> doc_id
13:41:02  <kenansulayman>ah
13:41:02  <kenansulayman>yes
13:41:04  <kenansulayman>range search
13:41:06  <dominictarr>probably, put the indexes into the same database
13:41:17  <dominictarr>so you can update them atomically
13:41:22  <dominictarr>with the document
13:41:36  <dominictarr>then you will know that the indexes are consistent.
13:41:47  <kenansulayman>yo bg
13:41:50  <dominictarr>sorry - you aren't saving the whole document again
13:41:52  <kenansulayman>wupps
13:42:14  <kenansulayman>okay
13:42:22  <dominictarr>but, if you use pointers like this, you can index every property and it only takes 3 times the space - this is what level-search does.
13:42:42  <kenansulayman>How do you make multiple keys point to a user?
13:43:00  <kenansulayman>Also how do you magically update the index?
13:43:09  <kenansulayman>Remove the reference on the fly?
13:43:23  <dominictarr>kenansulayman: I use level-sublevel, and prehooks
13:43:56  <dominictarr>then you do indexSubDb.query(name) and it returns a stream of documents that match that query.
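A hedged sketch of that pattern with level-sublevel's pre-hook; the sublevel names, key layout, and document shape are illustrative assumptions.

    var level = require('level')
    var sublevel = require('level-sublevel')

    var db = sublevel(level('./people', { valueEncoding: 'json' }))
    var docs = db.sublevel('docs')
    var byName = db.sublevel('byName')

    // Before each put into docs, add firstname -> doc_id and
    // lastname -> doc_id pointers to the same atomic batch.
    docs.pre(function (change, add) {
      if (change.type !== 'put') return
      add({ type: 'put', key: change.value.firstname + '!' + change.key,
            value: change.key, prefix: byName })
      add({ type: 'put', key: change.value.lastname + '!' + change.key,
            value: change.key, prefix: byName })
    })

    // a query is then a range scan over the pointers, e.g.
    // byName.createReadStream({ start: 'kenan!', end: 'kenan!~' })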
13:48:28  * alanhoff quit
13:51:04  <kenansulayman>that is awesome
13:52:15  <kenansulayman>How can an index be most efficiently removed?
14:03:37  * fallsemo joined
14:04:16  * kenansulayman quit (Remote host closed the connection)
14:10:25  * dguttman joined
14:13:23  * kenansulayman joined
14:21:01  * fallsemo quit (Quit: Leaving.)
14:22:36  * Acconut joined
14:22:40  * Acconut quit (Client Quit)
14:25:22  * esundahl joined
14:32:43  * tmcw joined
14:37:00  * fallsemo joined
14:44:37  * paulfryzel joined
14:48:34  <dominictarr>kenansulayman: so, the simplest way is to delete stale indexes on read
14:48:43  <kenansulayman>ok
14:48:48  <dominictarr>otherwise you can have a batch job that periodically cleans
14:49:42  <dominictarr>oh! substack used this technique the other day with level-assoc
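A hedged sketch of delete-on-read: if a pointer's target document is gone, drop the pointer instead of returning it. Only err.notFound is levelup convention; the rest is illustrative.

    // Sketch: resolve an index entry, deleting it if it is stale.
    function lookup (index, docs, indexKey, cb) {
      index.get(indexKey, function (err, docId) {
        if (err) return cb(err)
        docs.get(docId, function (err, doc) {
          if (err && err.notFound) {
            // the document was deleted; clean up the stale pointer
            return index.del(indexKey, function () { cb(err) })
          }
          cb(err, doc)
        })
      })
    }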
15:01:43  * jerrysv joined
15:22:40  * dguttman quit (Quit: dguttman)
15:26:06  * dguttman joined
16:19:19  * dropdrive joined
16:20:39  <dropdrive>Just for educational purposes, are there implementations of things like leveldb (key-value with good range queries) in high-level languages (or even pseudocode)? What are some good search terms to look for?
16:21:39  * paulfryzel quit (Remote host closed the connection)
16:41:36  <dominictarr>dropdrive: I've started one,
16:42:02  <dominictarr>http://github.com/dominictarr/json-ss + https://github.com/dominictarr/json-logdb
16:42:11  <dominictarr>but they haven't been combined.
16:42:18  <dominictarr>so no compaction,
16:42:36  <dominictarr>and there is this https://github.com/bigeasy/locket
16:42:41  <dominictarr>which uses a b-tree
16:50:27  * dominictarr quit (Quit: dominictarr)
16:51:41  * wilmoore-db joined
17:05:01  * jxson joined
17:07:03  * jcrugzz joined
17:07:45  * tmcw quit (Remote host closed the connection)
17:08:56  * tmcw joined
17:18:50  * julianduque quit (Quit: leaving)
17:33:39  * dominictarr joined
17:41:07  * esundahl quit (Remote host closed the connection)
17:41:33  * esundahl joined
17:43:53  * esundahl_ joined
17:45:51  * esundahl quit (Ping timeout: 245 seconds)
17:47:34  * tmcw quit (Remote host closed the connection)
17:48:02  * wilmoore-db quit (Ping timeout: 256 seconds)
17:48:17  * tmcw joined
17:49:45  * wilmoore-db joined
17:59:33  * dguttman quit (Quit: dguttman)
18:00:58  * alanhoff joined
18:01:34  <alanhoff>Hey juliangruber: I'm receiving [object Object] from gets in multilevel, how do I get my JSON data?
18:04:02  <alanhoff>Well, that's a string :P
18:04:16  <alanhoff>How can I configure multilevel to store and retrieve objects?
18:09:52  * tmcw quit (Remote host closed the connection)
18:11:20  <alanhoff>Nevermind, just found it.. Need to configure valueEncoding on the levelup side
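What that looks like, as a sketch (multilevel serves whatever encoding the underlying levelup was opened with; the path and key here are illustrative):

    var level = require('level')

    // valueEncoding is set where the db is created, i.e. on the server side
    var db = level('./db', { valueEncoding: 'json' })

    db.put('user!alan', { name: 'Alan' }, function (err) {
      db.get('user!alan', function (err, user) {
        console.log(user.name) // 'Alan' - an object, not '[object Object]'
      })
    })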
18:13:48  * tmcw joined
18:31:03  <dropdrive>dominictarr: Thanks! And are you aware of any "worst case" scenarios that cause leveldb to behave poorly? (E.g. random writes, backwards writes, or whatever)
18:31:47  <dominictarr>dropdrive: we are working on some issues involving memory at the moment
18:32:14  <dominictarr>memory is ballooning if you write shittonnes of stuff at once
18:32:23  <dominictarr>but I think we'll figure that stuff out...
18:32:38  <dropdrive>dominictarr: Cool. Who's "we"?
18:32:56  <dominictarr>https://github.com/rvagg/node-levelup/issues/171
18:33:11  <dominictarr>the various levelup contributors!
18:35:51  * scttnlsn joined
18:36:43  <dropdrive>Ah, so it's a leveldb-wrapper, not a leveldb-clone, yes?
18:37:25  <dominictarr>dropdrive: yeah - leveldb node binding
18:37:42  <dominictarr>the other links I gave were clones
18:38:01  <kenansulayman>rvagg san…@google.com: "1.13 automatically triggers compactions based on reads that are hitting such deleted ranges." 2 hours ago
18:46:48  <ednapiranha>dominictarr: mbalho: yay i get to meet you guys in october! :)
18:47:12  <dominictarr>ednapiranha: I'm looking forward to it!
18:47:32  <ednapiranha>woo
18:48:22  <dominictarr>ednapiranha: hey, you work at mozilla now? (since recently)
18:48:30  <ednapiranha>dominictarr: almost 2 years
18:48:31  <ednapiranha>:)
18:49:03  <dominictarr>oh, i must have got the 'recently' impression because you moved house.
18:49:16  <ednapiranha>dominictarr: oh! moving to the other office next week :)
18:49:24  <ednapiranha>toronto office -> pdx office
18:49:24  <dominictarr>do you work on firefox os?
18:49:29  <ednapiranha>i work on apps
18:49:40  <ednapiranha>not firefox os gaia/b2g
18:49:40  * esundahl_ quit (Remote host closed the connection)
18:49:47  <dominictarr>fxos apps?
18:49:49  <ednapiranha>yep
18:50:07  * esundahl joined
18:50:15  <dominictarr>aha, cool.
18:50:38  <dominictarr>I'm pretty much the last person to get a smart phone, but I approve of Firefox os
18:50:45  <ednapiranha>dominictarr: haha
18:50:49  <ednapiranha>i bricked my android
18:50:54  <ednapiranha>oops
18:51:08  <dominictarr>exactly why I disapprove
18:51:13  <ednapiranha>lol
18:51:18  <ednapiranha>updated nightly, couldn't sign into the play store anymore
18:51:21  <ednapiranha>even after clearing cache
18:51:32  <ednapiranha>and it uninstalled some apps for me during the update O_O
18:51:45  <dominictarr>that is terrible
18:51:49  <ednapiranha>yeah
18:51:52  <ednapiranha>then i got really angry
18:52:00  <ednapiranha>and just switched to garmin + back to my iphone for everyday use
18:52:10  <ednapiranha>(used runkeeper for running)
18:52:22  <dominictarr>oh, garmin gps?
18:52:26  <ednapiranha>yeah
18:52:28  <ednapiranha>the watch :)
18:52:59  <dominictarr>right - maps seems the main use case - I just ask strangers for directions the old fashioned way
18:53:30  <dominictarr>… of crowdsourcing
18:54:04  <ednapiranha>dominictarr: yeah also the android runkeeper seems to collect geolocation about .3km off compared to the ios version
18:54:06  <ednapiranha>no idea why O_O
18:54:07  <dropdrive>So is leveldb strictly a LSM tree, or some flavor of it?
18:54:10  <ednapiranha>drives me insane
18:54:17  <dominictarr>I've been thinking there should be a thing to strap a smart phone to your arm - that being another classic scifi trope
18:54:29  * esundahl quit (Ping timeout: 248 seconds)
18:54:42  <dominictarr>need some bio sensors though, so you can have realtime graph of your vital signs
18:54:53  <ednapiranha>dominictarr: yea i wish the watch was a bit more slim..
18:55:10  <ednapiranha>dominictarr: it's also great for long hiking trips :D
18:55:50  <dominictarr>dropdrive: I'm not sure. I've found it hard to find good literature on LSMs
18:56:38  <dominictarr>ednapiranha: TODO http://microship.com/
18:56:46  <mbalho>feckin real time conf YESSSS
18:56:55  <ednapiranha>LOL karolina
18:56:57  <ednapiranha>https://twitter.com/fox/status/370982722455760897
18:57:17  <mbalho>lol
18:57:47  <ednapiranha>mbalho: dominictarr: i have a little leveldb chat exp im about to deploy
18:57:52  <ednapiranha>you guys interested in being beta testers?
18:57:52  <mbalho>p.s. heres the leveldb project ill be presenting https://github.com/maxogden/dat/blob/master/developing.md (it will be way different by then probably)
18:57:55  <mbalho>ednapiranha: a
18:57:58  <mbalho>ednapiranha: ya
18:57:59  <ednapiranha>you basically have to pose in front of your camera
18:57:59  <mbalho>lol
18:58:02  <ednapiranha>and type stuff
18:58:04  <ednapiranha>and hope for the best
18:58:17  <mbalho>booyea i just shampooed my beard, perfect timing
18:58:37  <ednapiranha>lol
18:59:07  <dominictarr>ednapiranha: sure
18:59:43  <ednapiranha>no nudity.. unless you like it that way :)
18:59:58  <ednapiranha>although it's ephemeral so it'll be wiped in 150 seconds or something
19:00:05  <dominictarr>mbalho: you use shampoo? don't you know that is all a scam by the chemical industrial complex?
19:00:33  <mbalho>it makes a noticeable difference to the quality of my subbeard chin skin
19:00:47  <mbalho>whereas conditional ensures a delightful beard bounce/waviness
19:00:54  <mbalho>conditioner*
19:01:07  <ednapiranha>mbalho: i bet toothpaste is a bitch
19:01:09  <dominictarr>I haven't used shampoo in years
19:01:58  <dominictarr>ednapiranha: link?
19:02:01  <hij1nx>dominictarr: i just started using it with this new beard i got, beard can get a lot of food near it and such.
19:02:17  <ednapiranha>dominictarr: not ready yet
19:02:21  <ednapiranha>still deploying
19:02:36  <ednapiranha>dominictarr: mbalho: but pls dont share link when i send to you :) it's my realtimeconf surprise
19:02:43  <ednapiranha>it's just in testing stage
19:02:52  <mbalho>keep it secret keep it safe
19:02:53  <dominictarr>okay
19:03:04  <dominictarr>"what link?"
19:03:14  * ryan_ramage joined
19:03:43  <dominictarr>sorry officer, no, I don't know an "edna piranha"
19:03:51  * paulfryzel joined
19:04:27  <hij1nx>mbalho uses some kind of beard softening product
19:04:45  <ednapiranha>i demand to speak with my lawyer
19:04:51  <hij1nx>anyone going to realtime-conf in portland? https://tito.io/&yet/realtimeconf-2013
19:05:00  <mbalho>ya bro
19:05:03  <mbalho>all us r
19:05:08  <mbalho>errbudy
19:05:14  <ednapiranha>hmm
19:05:14  <mbalho>up in this club
19:05:24  <ednapiranha>im into this 'tweeting random things people say without prepending OH'
19:05:33  <mbalho>plagiarism
19:05:34  <ednapiranha>im going to pull something from here to follow up on another tweet if you dont mind
19:05:47  <ednapiranha>perfect
19:05:50  <ednapiranha>hij1nx: thx
19:05:51  <ednapiranha>:)
19:05:55  * mikeal joined
19:06:24  <dominictarr>I only use OH for things that I have thought up myself.
19:06:29  <mbalho>brb hunter gathering
19:06:44  <ednapiranha>dominictarr: i had some weird ones yesterday
19:06:45  <juliangruber>alanhoff: on the server, set valueEncoding: 'json'
19:06:58  <hij1nx>you can just make up OHs too
19:07:08  <alanhoff>juliangruber already figured it out, thanks :)
19:07:21  <ednapiranha>dominictarr: hij1nx: mbalho: https://twitter.com/ednapiranha/status/370570443452735488 and https://twitter.com/ednapiranha/status/370570963366445056
19:07:23  <ednapiranha>lol
19:07:27  <hij1nx>or fake RTs
19:07:46  <hij1nx>RT: @maxogden my beard is actually an orange cat.
19:07:53  <juliangruber>hij1nx: http://creativewebbusiness.com/wp-content/uploads/2013/05/one-does-not-simply-build-links-516x188.jpg
19:07:58  <juliangruber>:D
19:13:17  <ryan_ramage>question about storing binary values in level… any chance that the value can be read in a readStream with start and end flags? i.e. storing audio/video to support range requests
19:14:43  <dominictarr>Raynos: yes, if you can provide a key with the correct sort properties
19:15:11  <alanhoff>juliangruber can I add start and end to readstreams?
19:15:23  <juliangruber>alanhoff: yes :)
19:15:25  <Raynos>dominictarr: yes what ?
19:15:29  <juliangruber>alanhoff: supports all the options
19:15:44  <alanhoff>juliangruber thx :)
19:16:02  <dominictarr>Raynos: oops
19:16:11  <dominictarr>means ryan_ramage ^
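A hedged sketch of the range-request idea for ryan_ramage: store the blob as fixed-size chunks under zero-padded offset keys, so a byte range becomes a key range. Assumes a binary valueEncoding; the chunk size and key layout are illustrative.

    var CHUNK = 64 * 1024
    function pad (n) { return ('0000000000' + n).slice(-10) }

    // Write a blob as chunks keyed by padded byte offset.
    function writeBlob (db, id, buf, cb) {
      var ops = []
      for (var off = 0; off < buf.length; off += CHUNK) {
        ops.push({ type: 'put', key: id + '!' + pad(off),
                   value: buf.slice(off, off + CHUNK) })
      }
      db.batch(ops, cb)
    }

    // Bytes [from, to) map onto a key-range scan; the consumer still
    // trims the partial first and last chunks.
    function readRange (db, id, from, to) {
      return db.createReadStream({
        start: id + '!' + pad(Math.floor(from / CHUNK) * CHUNK),
        end: id + '!' + pad(to)
      })
    }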
19:16:19  <dominictarr>Raynos: I do have a question for you though
19:16:37  <dominictarr>you remember your old thing was it data channel?
19:16:51  <Raynos>yes ?
19:16:54  <dominictarr>it relayed messages from browser to browser?
19:16:59  <Raynos>thats just a ws -> stream thing
19:17:04  <dominictarr>yeah
19:17:05  <Raynos>I had a signal channel
19:17:11  <Raynos>and I had a fake webrtc thing
19:17:12  <dominictarr>maybe that was it
19:17:25  <dominictarr>the fakewebrtc thing
19:17:30  <Raynos>https://github.com/Raynos/signal-channel#channel-example
19:17:47  * w________ joined
19:17:52  <Raynos>https://github.com/Raynos/peer-connection-shim
19:18:18  <Raynos>oh wait we're in ##leveldb
19:19:39  * jxson quit (Remote host closed the connection)
19:19:50  <ednapiranha>hahhahhaha
19:19:51  <ednapiranha>hehehehe
19:20:22  * scttnlsn quit (Remote host closed the connection)
19:20:38  * esundahl joined
19:21:00  * esundahl quit (Remote host closed the connection)
19:21:08  * esundahl joined
19:21:16  * wilmoore-db quit (Ping timeout: 260 seconds)
19:21:27  <alanhoff>How can I perform a readStream just on the keys beginning with 'user::'?
19:23:26  <juliangruber>alanhoff: db.createReadStream({ start: 'user::!', end: 'user::!~' })
19:23:40  <juliangruber>but you should really use '!' instead of '::' to separate your keys
19:23:41  <juliangruber>so it's
19:23:49  <juliangruber>db.createReadStream({ start: 'user!', end: 'user!~' })
19:24:38  * scttnlsn joined
19:25:22  <alanhoff>juliangruber why? As I could see levelgraph uses :: too
19:25:32  <alanhoff>thats wy I adopted it
19:25:41  <juliangruber>levelup advises '!'
19:26:03  <juliangruber>it's safer as it sorts earlier than most characters you'd use
19:26:14  <juliangruber>see http://www.asciitable.com/
19:27:18  <alanhoff>OK, thanks!
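Spelled out: '!' is 0x21, below digits (0x30) and letters, and '~' is 0x7e, above them, so 'user!' and 'user!~' bracket every 'user!<id>' key whose id uses ordinary printable characters. A usage sketch:

    // Sketch: scan exactly the keys with prefix 'user!'.
    db.createReadStream({ start: 'user!', end: 'user!~' })
      .on('data', function (kv) { console.log(kv.key, kv.value) })
      .on('end', function () { console.log('done') })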
19:55:46  * scttnlsn quit (Remote host closed the connection)
19:58:00  <ednapiranha>dominictarr: i have a blue wig on
19:58:01  <ednapiranha>quick!
20:07:18  * no9 quit (Ping timeout: 264 seconds)
20:08:13  * dominictarr quit (Quit: dominictarr)
20:08:20  * jxson joined
20:10:51  * dominictarr joined
20:11:56  <mbalho>hij1nx: wanna play music at nodeconfeu? me you substack
20:11:58  <mbalho>substack: o/
20:12:07  <mbalho>hij1nx: they will have instruments i'm told
20:12:22  <mbalho>ednapiranha: im on crappy wifi page wont load, will do it later
20:12:31  <ednapiranha>mbalho: nooooo! ok
20:14:22  * redidas quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
20:20:33  * redidas joined
20:25:41  * julianduque joined
20:32:46  * jxson quit (Remote host closed the connection)
20:33:54  * jxson joined
20:42:29  * w________ quit (Remote host closed the connection)
20:53:41  * ednapiranha quit (Remote host closed the connection)
20:54:05  <hij1nx>mbalho: of course!
21:03:30  <mbalho>hij1nx: w00t
21:06:08  <jerrysv>heya hij1nx!
21:06:45  * julianduque quit (Ping timeout: 276 seconds)
21:08:05  * julianduque joined
21:09:35  <hij1nx>hey jerrysv!! :)
21:09:45  <jerrysv>hij1nx: missed you at jsconf!
21:10:05  <hij1nx>jerrysv: i think i was in europe :(
21:10:15  <jerrysv>bummer. coming to RTC?
21:10:23  <hij1nx>jerrysv: speaking at it! ;)
21:10:43  <jerrysv>awesome! looks like amber is too, and i think 12 people from my office are planning on going
21:12:07  <hij1nx>jerrysv: Array(Number.MAX_VALUE).join('awesome!')
21:22:06  * Acconut joined
21:23:33  <mbalho>weird query question... i have keys like this ÿdÿfooÿsÿ01
21:23:44  <mbalho>sublevel('d').sublevel('s')
21:23:59  <mbalho>so data key foo @ sequence 1
21:24:12  <mbalho>i wanna get the highest sequence of each key
21:24:41  <mbalho>the thing i cant figure out is how to get the list of all unique keys in sublevel('d')
21:24:52  * Acconut quit (Client Quit)
21:25:08  <mbalho>cause i need to visit each key range in sublevel('d') and peek the last sublevel('s')
21:25:34  <mbalho>mikeal: any ideas o/
21:32:25  <mbalho>hmm maybe if i peek first d to get the first key, then peek last that keyrange, then do a range starting at d\xff and repeat
21:32:38  <mbalho>or rather d\xfffirstkey\xff
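A hedged sketch of that skip-scan, flattened to a hypothetical key layout of key + '\xff' + seq instead of the real sublevel prefixes, and using levelup's gte/lte range options:

    // Sketch: peek each distinct key's highest-sequence entry, then jump
    // past the whole key range and repeat.
    function eachLatest (db, from, onEntry, done) {
      var key = null
      db.createReadStream({ gte: from, limit: 1 })
        .on('data', function (kv) { key = kv.key.split('\xff')[0] })
        .on('end', function () {
          if (key === null) return done()
          db.createReadStream({ gte: key + '\xff', lte: key + '\xff\xff',
                                reverse: true, limit: 1 })
            .on('data', function (kv) { onEntry(key, kv) })
            .on('end', function () {
              eachLatest(db, key + '\xff\xff', onEntry, done)
            })
        })
    }
    // usage: eachLatest(db, '\x00', console.log, function () { /* done */ })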
21:42:28  <kenansulayman>juliangruber
21:42:47  <kenansulayman>Would it be possible to add keys to the level-trie?
21:45:48  <juliangruber>kenansulayman: hm?
21:46:03  <kenansulayman>Well you have the trie
21:46:06  <juliangruber>kenansulayman: store metadata somewhere else
21:46:28  <kenansulayman>Well that'd be the case anyway
21:46:44  <kenansulayman>mh. ok
21:54:15  <jerrysv>mikeal: ping?
21:54:30  <mikeal>hey
21:54:49  <jerrysv>mikeal: what was the name of that bar we ended up at with all the whiskey in sf?
21:55:05  <hij1nx>mbalho: why wouldnt you create a readstream on a sublevel?
21:55:21  <mikeal>you need to be more specific :)
21:55:32  <mikeal>what event was this around?
21:55:50  <jerrysv>i thought it was on folsom, it was the hardware hack weekend
21:56:00  * DTrejo joined
21:56:14  <mikeal>so, we were at Joyent, and then we went somewhere
21:56:19  <mikeal>probably Rickhouse
21:56:22  <mbalho>hij1nx: cause i'll get multiple entries for each data key, i just wanna get the value of the data key with the highest s key
21:56:35  <mikeal>or
21:56:49  <jerrysv>it wasn't joyent, it was after the pile of naan
21:56:52  <jerrysv>the naanument
21:57:08  <mikeal>oh, hrm.....
21:57:20  <jerrysv>lots of animal heads and that animal shooting game
21:57:30  <mikeal>oh, that place
21:57:42  <mikeal>i forgot the name of that spot
21:57:46  <jerrysv>ha
21:58:19  <jerrysv>ok, i was trying to find places for a soon-to-be new employee to go before she starts here
21:58:44  <hij1nx>mbalho: possibly a stream on each sublevel that has reverse true and limit 1
21:59:04  * no9 joined
21:59:27  <mbalho>hij1nx: the question boils down to: how do i get all unique keys in a sublevel
21:59:41  <dominictarr>mbalho: hey
21:59:43  <hij1nx>mbalho: you cant have dupes
22:00:01  <hij1nx>a dupe key is an update
22:00:33  <dominictarr>you want to get all the keys that live under a sublevel including in sub sublevels?
22:01:12  <hij1nx>mbalho: ah! i see what you are saying
22:01:38  <juliangruber>mbalho: so nested sublevels shouldn't be sorted inside their parents
22:02:15  <juliangruber>if b is nested inside a, a should have prefix a! and b should have prefix ab! or something
22:02:31  <juliangruber>...or something that would actually work
22:02:36  <juliangruber>but I think that's the idea
22:03:19  <juliangruber>but that would mean that you can't do sub('a').createReadStream().pipe(...) to get all the contents of a and the sublevels inside a
22:15:17  * dguttman joined
22:16:54  <dominictarr>mbalho: can you post an issue on sublevel describing your usecase?
22:27:10  * redidas quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
22:38:35  * scttnlsn joined
22:54:27  * scttnlsn_ joined
22:57:16  * DTrejo_ joined
22:58:16  * scttnlsn quit (Ping timeout: 260 seconds)
22:59:16  * DTrejo quit (Ping timeout: 246 seconds)
22:59:21  * tmcw quit (Remote host closed the connection)
23:08:07  * DTrejo_ quit (Remote host closed the connection)
23:09:38  * fallsemo quit (Quit: Leaving.)
23:22:29  * tmcw joined
23:23:37  * scttnlsn_ quit (Remote host closed the connection)
23:26:02  * soldair joined
23:26:29  * chilts quit (Ping timeout: 248 seconds)
23:27:19  * chilts joined
23:31:55  * tmcw quit (Remote host closed the connection)
23:38:07  * jerrysv quit (Remote host closed the connection)
23:58:00  * esundahl quit (Remote host closed the connection)
23:58:19  * no9 quit (Quit: Leaving)
23:58:27  * esundahl joined