00:01:18  <brycebaril>mbalho: another big speed bump for pack(). it is getting way ugly
00:02:11  <mbalho>fast, ugly and a nice API are acceptable but only if all three
00:02:40  * tmcwquit (Remote host closed the connection)
00:03:06  * tmcwjoined
00:06:02  * tmcw_joined
00:06:26  * tmcwquit (Read error: Connection reset by peer)
00:14:00  * tmcw_quit (Remote host closed the connection)
00:14:27  * tmcwjoined
00:18:54  * tmcwquit (Ping timeout: 253 seconds)
00:20:32  * eugenewa_joined
00:27:34  <brycebaril>mbalho: I'm getting segfaults when I try to run the level-csv-bench scripts :/
00:28:04  <brycebaril>*without using anything different yet
00:32:59  * st_lukejoined
00:33:30  * mikealquit (Quit: Leaving.)
00:35:47  * eugenewa_quit (Remote host closed the connection)
00:36:36  * eugenewarejoined
00:38:34  * eugenewarequit (Remote host closed the connection)
00:40:49  * st_lukequit (Remote host closed the connection)
00:41:20  <mbalho>brycebaril: i got segfaults when I had improper values in the write buffer
00:42:15  * nnnnathannjoined
00:42:44  <mbalho>i'll try again with fresh dependencies
00:44:36  * eugenewarejoined
00:44:40  <mbalho>hah! segfaults
00:46:14  <mbalho>i might have been using a local dependency at a different version, checking now
00:50:39  <mbalho>brycebaril: interesting, i was using [email protected] yesterday, which doesnt segfault, but [email protected] segfaults
00:50:46  <mbalho>i'll try to narrow it down
00:51:08  <brycebaril>Maybe related to https://github.com/rvagg/node-levelup/issues/171
00:51:47  <mbalho>ahh
00:54:31  * jxsonquit (Remote host closed the connection)
00:54:57  * jxsonjoined
00:56:32  <mbalho>ok interesting... level 0.17 and 0.16 both segfault, level 0.15 just runs really really slow but didnt crash after a few minutes (also didnt finish, i quit early)
00:59:11  * jxsonquit (Ping timeout: 245 seconds)
00:59:40  <brycebaril>interesting. I made smaller versions to play with, e.g. `head -50001 1994.csv > 50k.csv` I wonder if I can find the approx point of the segfault
01:00:34  <mbalho>nice
01:01:06  <brycebaril>100k seems to work a few times in a row, (and is pretty fast)
01:01:13  <brycebaril>dinner, maybe back later
01:17:40  <mbalho>brycebaril: https://github.com/rvagg/node-levelup/issues/171#issuecomment-25670492
01:22:18  <brycebaril>mbalho: multibuffer 1.3.0 is out, which has my pack() changes, FYI
01:23:16  * st_lukejoined
01:23:31  <mbalho>nice
01:23:51  <mbalho>brycebaril: so i guess i'd recommend using 0.11 or 0.12 for those benchmarks
01:24:36  <brycebaril>https://github.com/brycebaril/multibuffer/tree/master/bench these are very similar but don't use level; they're what I used to test/improve pack()
01:25:02  * tmcwjoined
01:25:29  * jxsonjoined
01:29:31  * tmcwquit (Ping timeout: 246 seconds)
01:30:14  * nnnnathannquit (Ping timeout: 264 seconds)
01:30:19  * fallsemojoined
01:33:19  * st_lukequit (Remote host closed the connection)
01:33:47  * jxsonquit (Ping timeout: 248 seconds)
01:55:39  * fallsemoquit (Ping timeout: 248 seconds)
01:56:48  * fallsemojoined
02:06:48  * fallsemoquit (Quit: Leaving.)
02:11:26  * rudjoined
02:11:27  * rudquit (Changing host)
02:11:27  * rudjoined
02:16:45  * fallsemojoined
02:22:41  * eugenewarequit (Remote host closed the connection)
02:36:18  * fallsemoquit (Ping timeout: 256 seconds)
02:43:17  * enosquit (Read error: Connection reset by peer)
02:43:50  * enosjoined
02:50:33  * thlorenzjoined
02:52:04  * thlorenzquit (Read error: Connection reset by peer)
02:52:35  * thlorenzjoined
02:53:17  * eugenewarejoined
02:54:31  * fallsemojoined
03:00:09  * mikealjoined
03:01:00  * tmcwjoined
03:02:51  * eugenewarequit (Ping timeout: 248 seconds)
03:13:10  * timoxleyjoined
03:31:41  * thlorenzquit (Remote host closed the connection)
03:41:54  * eugenewarejoined
03:50:35  * fallsemoquit (Quit: Leaving.)
04:11:53  * eugenewa_joined
04:12:24  * fallsemojoined
04:12:51  * tmcwquit (Remote host closed the connection)
04:13:19  * tmcwjoined
04:13:35  * eugenewarequit (Read error: Connection reset by peer)
04:14:08  * eugenewarejoined
04:17:10  * eugenewa_quit (Ping timeout: 246 seconds)
04:18:06  * tmcwquit (Ping timeout: 264 seconds)
04:19:45  * jxsonjoined
04:20:03  * julianduquequit (Quit: leaving)
04:22:36  * jxson_joined
04:24:27  * jxsonquit (Ping timeout: 248 seconds)
04:41:29  * ralphthe1injaquit (Quit: leaving)
04:44:14  * tmcwjoined
04:54:14  * tmcwquit (Ping timeout: 240 seconds)
05:18:33  * dguttmanquit (Quit: dguttman)
05:20:40  * tmcwjoined
05:20:54  * timoxleyquit (Remote host closed the connection)
05:22:43  * jxson_quit (Remote host closed the connection)
05:23:09  * jxsonjoined
05:25:15  * tmcwquit (Ping timeout: 248 seconds)
05:27:26  * jxsonquit (Ping timeout: 240 seconds)
05:36:45  * ednapiranhajoined
05:52:22  * fallsemoquit (Quit: Leaving.)
06:07:31  * jxsonjoined
06:07:57  * jxsonquit (Remote host closed the connection)
06:08:24  * jxsonjoined
06:12:30  * jxsonquit (Ping timeout: 240 seconds)
06:28:48  * st_lukejoined
06:32:56  * st_lukequit (Ping timeout: 245 seconds)
06:40:12  * eugenewarequit (Remote host closed the connection)
06:46:12  * st_lukejoined
06:51:30  * tmcwjoined
06:55:55  * tmcwquit (Ping timeout: 248 seconds)
06:57:13  * ednapiranhaquit (Remote host closed the connection)
07:12:25  * st_lukequit (Remote host closed the connection)
07:37:50  * frankblizzardjoined
07:48:11  * frankblizzardquit (Remote host closed the connection)
07:52:02  * tmcwjoined
07:56:37  * tmcwquit (Ping timeout: 246 seconds)
08:09:09  * frankblizzardjoined
08:38:43  * jcrugzzquit (Ping timeout: 260 seconds)
08:40:48  * dropdrivequit (Ping timeout: 240 seconds)
08:42:01  * dropdrivejoined
09:06:37  * jcrugzzjoined
09:13:02  * jcrugzzquit (Ping timeout: 240 seconds)
09:39:37  * rudquit (Quit: rud)
09:47:56  * thlorenzjoined
09:48:22  * rudjoined
09:52:14  * thlorenzquit (Ping timeout: 240 seconds)
09:54:00  * tmcwjoined
09:57:06  * rudquit (Quit: rud)
09:58:19  * tmcwquit (Ping timeout: 248 seconds)
10:08:49  * thlorenzjoined
10:10:29  * Guest27598quit (Ping timeout: 240 seconds)
10:11:14  * dkjoined
10:11:40  * dkchanged nick to Guest82035
10:52:15  * thlorenzquit (Remote host closed the connection)
10:52:47  * thlorenzjoined
10:56:59  * thlorenzquit (Ping timeout: 248 seconds)
11:26:13  * eugenewarejoined
11:26:17  * tmcwjoined
11:31:20  * tmcwquit (Remote host closed the connection)
11:31:32  * tmcwjoined
11:36:48  * tmcwquit (Remote host closed the connection)
11:38:22  * kenansulaymanjoined
12:06:14  * ralphtheninjaquit (Ping timeout: 240 seconds)
12:29:50  * frankblizzardquit (Remote host closed the connection)
12:35:50  * nnnnathannjoined
12:49:18  * nnnnathannquit (Read error: Connection reset by peer)
12:50:35  * nnnnathannjoined
12:53:17  * nnnnathannquit (Remote host closed the connection)
13:09:02  * jmartinsjoined
13:14:52  <kenansulayman>mbalho How did you get to the 8m30s of git @ http://maxogden.github.io/slides/okcon/index.html#17 ?
13:28:47  * Acconutjoined
13:28:49  * Acconutquit (Client Quit)
13:31:58  * jmartinsquit (Quit: Konversation terminated!)
13:32:22  * jjmalinajoined
13:32:36  * jjmalinaquit (Client Quit)
13:35:49  * jmartinsjoined
13:36:23  * jmartinsquit (Client Quit)
13:44:23  <kenansulayman>trevnorris Any estimate on when 0.12 lands?
13:46:01  * fallsemojoined
14:02:16  * ednapiranhajoined
14:05:37  * frankblizzardjoined
14:08:29  * jjmalinajoined
14:10:04  * frankblizzardquit (Ping timeout: 246 seconds)
14:13:08  * ednapiranhaquit (Remote host closed the connection)
14:23:41  * tmcwjoined
14:40:58  * dguttmanjoined
14:42:24  * jmartinsjoined
14:45:50  * frankblizzardjoined
14:47:49  * dguttmanquit (Quit: dguttman)
14:48:41  * dguttmanjoined
14:48:51  * jmartinsquit (Quit: Konversation terminated!)
14:53:07  * dguttmanquit (Client Quit)
14:54:41  * dguttmanjoined
15:10:26  * bradleymeckjoined
15:26:23  * mikealquit (Quit: Leaving.)
16:03:21  * Acconutjoined
16:03:28  * Acconutquit (Client Quit)
16:07:26  * ednapiranhajoined
16:24:23  * ednapiranhaquit (Read error: Connection reset by peer)
16:24:38  * ednapiranhajoined
16:28:08  <mbalho>kenansulayman: 55 million line csv
16:28:23  <kenansulayman>That test file you used before?
16:28:28  <mbalho>yea
16:28:36  <kenansulayman>8m?
16:28:56  <kenansulayman>how can dat do that in 5ms?
16:29:03  <kenansulayman>do you mean 5m?
16:29:41  <mbalho>its a key value store, adding a new key/value is fast
16:29:46  <mbalho>whereas git has to diff the whole file
16:31:18  * bradleymeckquit (Quit: bradleymeck)
16:31:37  <kenansulayman>mbalho yes sure.. but 5 milliseconds?!
16:37:00  * ramitosjoined
16:37:11  * st_lukejoined
16:38:54  * ednapira_joined
16:40:05  * ednapiranhaquit (Ping timeout: 256 seconds)
16:43:27  * jcrugzzjoined
16:47:29  * ednapira_quit (Remote host closed the connection)
16:49:43  * soldairjoined
16:54:09  <mbalho>kenansulayman: yea roughly, from batch inserts
16:54:20  <kenansulayman>hum
16:54:26  <kenansulayman>k :)
16:54:39  <mbalho>kenansulayman: say youre writing 100,000 k/v pairs a second
16:55:06  <mbalho>thats 10 microseconds
16:55:11  <mbalho>per k/v
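The arithmetic there, spelled out; the 100,000/sec figure is just the one quoted above, not a measurement:

    // 100,000 k/v pairs per second -> microseconds per pair
    var pairsPerSecond = 100000
    console.log(1e6 / pairsPerSecond + ' microseconds per k/v')  // 10 microseconds per k/v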
17:01:30  <soldair>mbalho: have you plotted how key size and value size change input throughput? just wondering. it certainly makes a difference i just haven't had time to break it down yet
17:05:02  <trevnorris>kenansulayman: when it's ready.
17:05:07  <trevnorris>hopefully in the next month or so
17:08:52  * kenansulaymanquit (Ping timeout: 264 seconds)
17:09:19  * frankblizzardquit (Remote host closed the connection)
17:13:59  * kenansulaymanjoined
17:18:54  * mikealjoined
17:19:04  * kenansulaymanquit (Ping timeout: 264 seconds)
17:53:27  * jxsonjoined
18:02:46  * julianduquejoined
18:04:10  * asterisksjoined
18:05:26  * st_lukequit (Remote host closed the connection)
18:06:24  * dominictarrjoined
18:07:22  * frankblizzardjoined
18:07:37  * frankblizzardquit (Remote host closed the connection)
18:09:06  * julianduquequit (Ping timeout: 264 seconds)
18:10:24  * eugenewarequit (Remote host closed the connection)
18:10:25  * asterisksquit (Read error: Connection reset by peer)
18:13:02  * eugenewarejoined
18:14:31  * Acconutjoined
18:28:26  * jcrugzzquit (Ping timeout: 264 seconds)
18:30:36  * tmcwquit (Remote host closed the connection)
18:31:08  * tmcwjoined
18:31:23  * dominictarrquit (Quit: dominictarr)
18:31:26  * tmcwquit (Read error: Connection reset by peer)
18:33:11  * tmcwjoined
18:36:32  * eugenewarequit (Remote host closed the connection)
18:43:33  * thlorenzjoined
18:50:59  * thlorenzquit (Ping timeout: 260 seconds)
18:51:34  <mbalho>soldair: havent yet, definitely would be nice to see though
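A minimal sketch of the kind of sweep soldair is asking about: time db.batch() for a few key/value size combinations. It assumes `npm install level`; the sizes, op count, and db paths are arbitrary illustration values, not anything from the dat or levelup benchmarks.

    // sketch: measure write throughput for different key/value sizes
    var level = require('level')

    var keySizes = [16, 64, 256]        // bytes
    var valueSizes = [64, 1024, 4096]   // bytes
    var OPS = 10000                     // puts per run

    function bench (keySize, valueSize, cb) {
      var db = level('./bench-' + keySize + 'x' + valueSize)
      var keyPad = new Array(keySize + 1).join('k')    // keySize bytes of 'k'
      var value = new Array(valueSize + 1).join('v')   // valueSize bytes of 'v'
      var ops = new Array(OPS)
      for (var i = 0; i < OPS; i++) {
        ops[i] = { type: 'put', key: keyPad + '-' + i, value: value }
      }
      var start = Date.now()
      db.batch(ops, function (err) {
        if (err) throw err
        var secs = (Date.now() - start) / 1000 || 0.001
        console.log(keySize + 'B keys, ' + valueSize + 'B values: ' +
          Math.round(OPS / secs) + ' writes/sec')
        db.close(cb)
      })
    }

    // run every combination in series
    ;(function next (i, j) {
      if (i >= keySizes.length) return
      if (j >= valueSizes.length) return next(i + 1, 0)
      bench(keySizes[i], valueSizes[j], function () { next(i, j + 1) })
    })(0, 0)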
18:52:50  * jmartinsjoined
19:00:15  * thlorenz_joined
19:00:30  * Acconutquit (Quit: Acconut)
19:02:02  * Acconutjoined
19:05:20  * dominictarrjoined
19:06:54  * eugenewarejoined
19:08:25  * Acconutquit (Quit: Acconut)
19:11:37  * Acconutjoined
19:16:01  * eugenewarequit (Ping timeout: 245 seconds)
19:16:11  * Acconutquit (Ping timeout: 260 seconds)
19:19:48  * thlorenz_quit (Remote host closed the connection)
19:40:14  * fallsemo1joined
19:40:14  * tmcwquit (Read error: Connection reset by peer)
19:40:19  * fallsemoquit (Read error: Connection reset by peer)
19:40:23  * tmcwjoined
19:40:49  * TheSteve0joined
19:45:37  * Acconutjoined
19:45:47  * Acconutquit (Client Quit)
19:49:00  * TheSteve0part
19:53:46  * dominictarrquit (Ping timeout: 245 seconds)
19:58:54  * jxsonquit (Remote host closed the connection)
19:59:20  * jxsonjoined
20:03:47  * jxsonquit (Ping timeout: 260 seconds)
20:10:38  * Acconutjoined
20:10:43  * Acconutquit (Client Quit)
20:11:09  * mikealquit (Quit: Leaving.)
20:11:21  * st_lukejoined
20:15:03  <mbalho>whoa https://code.google.com/p/leveldb-go/
20:15:51  <mbalho>brycebaril: im writing a benchmark that stores each cell of each row in the csv as a separate key, so the batches are like 40,000 key/values per batch
20:16:13  <mbalho>brycebaril: and i get really bad results, after 2 huge batches it just hangs forever before the third batch goes through
20:16:29  <mbalho>brycebaril: which is weird because the batches are less than 16mb total, theres just lots of k/v pairs
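For context, a rough sketch of the cell-per-key layout mbalho describes, flushing roughly 40,000 puts per batch. The `split` module stands in for binary-csv here (so quoted commas aren't handled), and the key scheme and file name are assumptions for illustration, not the actual benchmark code:

    // sketch: one key per csv cell, flushed ~40,000 puts per db.batch()
    // assumes `npm install level split`
    var fs = require('fs')
    var level = require('level')
    var split = require('split')        // plain line splitter, standing in for binary-csv

    var db = level('./cells.db')
    var BATCH = 40000
    var ops = []
    var row = 0

    fs.createReadStream('1994.csv')
      .pipe(split())
      .on('data', function (line) {
        if (!line) return
        var cells = line.split(',')      // naive split: ignores quoting, unlike binary-csv
        for (var col = 0; col < cells.length; col++) {
          ops.push({ type: 'put', key: row + '!' + col, value: cells[col] })
        }
        row++
        if (ops.length >= BATCH) {
          // note: no backpressure here; a careful version would pause the
          // stream until the batch callback fires
          db.batch(ops.splice(0, ops.length), function (err) {
            if (err) throw err
          })
        }
      })
      .on('end', function () {
        db.batch(ops, function () { db.close() })
      })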
20:18:08  * julianduquejoined
20:19:38  * st_lukequit (Remote host closed the connection)
20:26:38  * julianduquequit (Ping timeout: 264 seconds)
20:28:54  * julianduquejoined
20:29:52  * jxsonjoined
20:30:13  * jcrugzzjoined
20:35:40  <brycebaril>mbalho: strange
20:36:22  <brycebaril>mbalho: Based on my benchmarks the bottleneck I'm seeing when using binary-csv + multibuffer is now mostly binary-csv
20:36:34  <brycebaril>mbalho: it definitely was the other way around until multibuffer 1.3.0
20:38:05  <mbalho>nice
20:38:17  <mbalho>rescrv: have you experimented with approaches like this? https://gist.github.com/erikfrey/3610989
20:38:30  * jxsonquit (Ping timeout: 264 seconds)
20:38:31  <brycebaril>mbalho: could probably get another order of magnitude improvement if binary-csv had a direct .multibufferLine (etc.) but there's no api in multibuffer yet that would work like that. It'd need to be something like .pack(buffer, segments) where segments would be the slice offsets that define the csv fields
20:39:05  <brycebaril>mbalho: leaving binary-csv to do the 'csv-ish' stuff of comma/quote tracking but then offload all the buffer handling to multibuffer
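Since brycebaril says no such API exists yet, this is only a hypothetical sketch of what a pack(buffer, segments) call could look like, using a 4-byte length prefix per field; the prefix width and the function itself are assumptions, not multibuffer's actual spec:

    // hypothetical pack(buffer, segments): frame slices of one buffer without
    // first copying them out into separate buffers. `segments` is a list of
    // [start, end) offsets into `buffer` (e.g. the csv field boundaries).
    function pack (buffer, segments) {
      var total = 0
      segments.forEach(function (seg) { total += 4 + (seg[1] - seg[0]) })
      var out = new Buffer(total)
      var offset = 0
      segments.forEach(function (seg) {
        var len = seg[1] - seg[0]
        out.writeUInt32BE(len, offset)                // length prefix (assumed 4 bytes)
        buffer.copy(out, offset + 4, seg[0], seg[1])  // the field bytes
        offset += 4 + len
      })
      return out
    }

    // e.g. the row "a,bb,ccc" with field offsets [[0,1],[2,4],[5,8]]
    var row = new Buffer('a,bb,ccc')
    console.log(pack(row, [[0, 1], [2, 4], [5, 8]]))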
20:39:31  <mbalho>rescrv: from what i can make of that code its basically piping the full table read iterator into 1000 k/v batches of the larger database. seems kind of odd
20:39:57  <mbalho>rescrv: i found that code via https://groups.google.com/forum/#!searchin/leveldb/batch/leveldb/g3WzmVXrhSE/MIrc8ylOynwJ
20:40:17  <mbalho>brycebaril: ahh nice idea
20:41:40  <mbalho>brycebaril: what if the format for multibuffer was new Buffer(segments, buffer)
20:41:45  <mbalho>that is, segments + buffer
20:42:19  <brycebaril>mbalho: sure
20:42:50  <mbalho>brycebaril: cause isnt it segment + buffer + segment + buffer + segment + buffer right now?
20:43:34  <brycebaril>oh, the multibuffer spec is metadata+buffer+metadata+buffer yeah
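The two layouts being contrasted, as a byte diagram (again with an assumed fixed-width length prefix standing in for the metadata):

    // interleaved layout (what the multibuffer spec describes):
    //   [len0][buf0][len1][buf1][len2][buf2]
    //
    // header-first layout mbalho is floating ("segments + buffer"):
    //   [len0][len1][len2][buf0][buf1][buf2]
    //
    // the header-first form lets a reader locate every field boundary after
    // reading only the header, at the cost of knowing the segment count up front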
20:44:18  * Acconutjoined
20:44:21  * Acconutquit (Remote host closed the connection)
20:51:13  * kenansulaymanjoined
20:51:22  * tmcwquit (Remote host closed the connection)
20:51:44  * st_lukejoined
21:05:02  * st_lukequit (Remote host closed the connection)
21:07:03  * mikealjoined
21:10:51  * jxsonjoined
21:26:06  <kenansulayman>juliangruber Aware of an existing project which allows pushing to level with an iterative index?
21:26:24  <kenansulayman><prefix>1, <prefix>2, <prefix>3, <prefix>4, ...
21:27:10  <brycebaril>kenansulayman: I'm not sure I follow what you mean by that
21:27:26  <kenansulayman>Let's say we want an eventlog
21:27:28  * julianduquequit (Read error: Connection reset by peer)
21:27:54  <kenansulayman>we could sublevel per day
21:28:10  <kenansulayman>but I'd love to have growing indices
21:28:30  <kenansulayman>for instance we could use a unix timestamp for messages
21:28:41  <brycebaril>Hmm, well I wrote this https://npmjs.org/package/level-version which sort-of does that, if I understand you correctly
21:28:52  <kenansulayman>let me check it out
21:29:34  <brycebaril>I don't think it is quite the same as what you're saying though
21:32:26  <kenansulayman>brycebaril hum
21:33:02  <kenansulayman>three duplicate keys get three different suffixes; bfff cfff ffff
21:33:10  <kenansulayman>What is it exactly doing?
21:33:30  <kenansulayman>ow wait
21:33:34  <kenansulayman>that's awesome
21:33:59  <brycebaril>:)
21:34:00  <kenansulayman>we could push into the day and create a readstream
21:34:48  <kenansulayman>and I love how the equal interfaces allow uberlevel to be below level-version
21:34:49  <kenansulayman>ty
21:34:54  <brycebaril>level-version is what https://github.com/brycebaril/timestreamdb uses to make a timeseries db.
21:36:59  <kenansulayman>brycebaril http://data.sly.mn/RnmM/contents
21:37:23  <kenansulayman>Really cool
21:37:43  * fallsemo1quit (Quit: Leaving.)
21:38:17  * fallsemojoined
21:41:41  <brycebaril>kenansulayman: :)
21:41:54  * mikealquit (Quit: Leaving.)
21:41:58  <brycebaril>you can also provide your own version generator function if you don't want timestamps
21:44:36  * mikealjoined
21:49:57  <kenansulayman>brycebaril Timestamps are perfectly ok
21:50:09  <kenansulayman>because we just need an internal IRC logger
21:55:38  <kenansulayman>brycebaril http://data.sly.mn/RnVP/contents
21:57:10  <kenansulayman>There should be a queue
21:57:56  * jxsonquit (Remote host closed the connection)
21:58:22  * jxsonjoined
21:58:39  * jxsonquit (Read error: Connection reset by peer)
21:59:00  * jxsonjoined
21:59:51  <brycebaril>What do you mean?
22:00:13  <brycebaril>I think in this one you're pushing commands too fast for Date.now() maybe
22:02:09  <kenansulayman>brycebaril No there's a race condition
22:02:42  <kenansulayman>"brycebaril:Yo!" is shorter than "kenansulayman:Hey!" and gets written much faster
22:03:47  <kenansulayman>okay delete the "much"; yet it's some cycles faster than the other one ;)
22:04:29  * julianduquejoined
22:06:06  <kenansulayman>brycebaril Could you patch level-version to optionally support process.hrtime?
22:06:20  <kenansulayman>Wait, I will PR
22:07:38  <brycebaril>is there a browser polyfill for process.hrtime?
22:07:47  <kenansulayman>hum
22:08:08  <kenansulayman>brycebaril https://github.com/NHQ/since-when/blob/master/README.md
22:08:17  <kenansulayman>"Now with process.hrtime shim for working in browsers with browserify (which shims process itself)."
22:08:50  <kenansulayman>that is: "Although it returns nanoseconds, it can only really be considered accurate to the millisecond when used in the browser"
22:09:28  <brycebaril>https://gist.github.com/brycebaril/8fd794f99aba05d67398
22:09:45  <brycebaril>your example with http://npm.im/microtime-x
22:10:34  <kenansulayman>yes let me give it a spin
22:10:37  <brycebaril>same situation in the browser, but will be microseconds in node
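A generic sketch of the version-generator idea being discussed: Date.now() alone can collide when two writes land in the same millisecond, so one common trick is to append a per-millisecond counter. This is an illustration only, not level-version's or microtime-x's actual code:

    // monotonic version generator: millisecond timestamp plus a per-millisecond
    // counter, so two writes in the same ms still get distinct, ordered versions
    var lastMs = 0
    var seq = 0

    function nextVersion () {
      var now = Date.now()
      if (now === lastMs) {
        seq++
      } else {
        lastMs = now
        seq = 0
      }
      // the * 1000 leaves room for up to 1000 versions per millisecond
      return now * 1000 + seq
    }

    console.log(nextVersion(), nextVersion(), nextVersion())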
22:14:51  <kenansulayman>hahaha
22:14:56  <kenansulayman>broke LMDB with this
22:14:58  <kenansulayman>rvagg
22:15:17  <kenansulayman>brycebaril http://data.sly.mn/RnF8/contents
22:15:34  <kenansulayman>rvagg ^^^ WriteError: MDB_MAP_FULL: Environment mapsize limit reached ^^^
22:16:02  <kenansulayman>Should try that with hyperlevel again
22:16:31  <brycebaril>interesting, that error says it's going over the max filesize specified for mdb
22:18:23  <kenansulayman>yes, works perfectly well with hyperlevel
22:18:31  <kenansulayman>I love it :D
22:18:55  <kenansulayman>Try it http://data.sly.mn/RnoO/contents
22:18:57  <kenansulayman>really cool
22:19:11  <kenansulayman>let's rebuild logs.nodejs.org for fun with it
22:21:50  * int3rgr4mmjoined
22:21:50  <int3rgr4mm>yolo
22:21:58  <kenansulayman>the bot's online
22:22:32  * int3rgr4mmquit (Remote host closed the connection)
22:22:43  <juliangruber>kenansulayman: no I don't know of one
22:22:56  <kenansulayman>already figured it out with brycebaril
22:23:00  <kenansulayman>thanks tho ;)
22:36:02  <juliangruber>kenansulayman brycebaril: level-store is similar to that but only works for auto-increment versions
22:36:15  <juliangruber>getting a single version would mean a small pull request
22:38:04  <kenansulayman>juliangruber Sounds like an idea
22:38:08  <kenansulayman>how would you handle that?
22:38:18  <kenansulayman>And how are you incrementing?
22:40:34  <juliangruber>kenansulayman: https://github.com/juliangruber/level-store#indexes
22:40:55  <kenansulayman>juliangruber that's cool
22:41:02  <juliangruber>the key is the same as with level-version: <key>!<version>
22:41:04  <kenansulayman>how are you getting the latest chunk id?
22:41:11  <kenansulayman>does that require a full scan?
22:41:16  * jjmalinaquit (Quit: Leaving.)
22:41:40  * jxsonquit (Remote host closed the connection)
22:42:07  * jxsonjoined
22:42:09  <juliangruber>just gets the last chunk id and increments
22:42:18  <juliangruber>oh
22:42:29  <juliangruber>I see a possible race condition there :D :D
22:43:13  <juliangruber>getting something by a specific version would be `store.get('key', { lte: version, limit: 1 }, cb)`
22:43:33  <juliangruber>so it's already implemented, just not so nice, because it returns an array instead of a stream
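Putting that snippet in context, a sketch of fetching the single newest chunk at or below a version. The get() call and its options are the ones juliangruber quotes; the require/constructor line is an assumption about level-store's setup:

    // assumes `npm install level level-store`; constructor usage is an assumption
    var level = require('level')
    var levelStore = require('level-store')

    var store = levelStore(level('./store.db'))
    var version = Date.now()   // any version/index to read at

    // newest chunk of 'key' written at or before `version`
    store.get('key', { lte: version, limit: 1 }, function (err, chunks) {
      if (err) throw err
      console.log(chunks)      // an array (per the discussion above), not a stream
    })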
22:44:35  * jxsonquit (Read error: Connection reset by peer)
22:44:53  * jxsonjoined
22:46:53  * int3rgr4mmjoined
22:47:46  <kenansulayman>juliangruber hah yes race condition ;) but I guess level store is more appropriate for blobs
22:47:46  * int3rgr4mmquit (Remote host closed the connection)
22:48:15  <juliangruber>yeah, it's more low level in general
22:48:17  <juliangruber>it can handle versions
22:48:18  <kenansulayman>brycebaril juliangruber http://data.sly.mn/Rmn8/contents
22:48:23  <juliangruber>but also stream binary
22:48:33  * int3rgr4mmjoined
22:48:54  * int3rgr4mmquit (Remote host closed the connection)
22:49:00  * int3rgr4mmjoined
22:49:04  <juliangruber>kenansulayman: what does that do?
22:49:30  <kenansulayman>An IRC bot on-top of level-version (the code we creatively shot together)
22:49:57  <kenansulayman>which stores messages as version of the day
22:50:15  <juliangruber>you should be able to do the same with level-store, _except_ you can't provide versions manually for now
22:50:53  <kenansulayman>juliangruber I'm sure I can do that with level-store :D but this is just fun
22:51:13  <juliangruber>:D :D
22:51:17  <juliangruber>if it works, it works
22:51:28  <kenansulayman>this is cool
22:51:36  <kenansulayman>everything we write is written to leveldb right now
22:52:08  * mikealquit (Quit: Leaving.)
22:52:44  <juliangruber>fck yeah
22:53:07  <kenansulayman>ow typo
22:53:24  <kenansulayman>it only wrote [object Object] lol
22:53:33  * int3rgr4mmquit (Remote host closed the connection)
22:53:45  * int3rgr4mmjoined
22:53:49  <kenansulayman>this should work
22:53:51  <kenansulayman>http://data.sly.mn/RnVn/contents
22:54:57  <kenansulayman>this is strange
22:55:29  * int3rgr4mmquit (Remote host closed the connection)
22:55:35  * int3rgr4mmjoined
22:56:15  * int3rgr4mmquit (Remote host closed the connection)
22:56:22  * int3rgr4mmjoined
22:56:29  * jmartinsquit (Quit: Konversation terminated!)
23:00:21  * int3rgr4mmquit (Remote host closed the connection)
23:03:38  * jxsonquit (Remote host closed the connection)
23:04:04  * jxsonjoined
23:06:45  * jcrugzzquit (Ping timeout: 252 seconds)
23:08:24  * jxsonquit (Ping timeout: 252 seconds)
23:10:25  * ednapiranhajoined
23:12:37  * jxsonjoined
23:13:09  * ednapira_joined
23:15:07  * ednapiranhaquit (Ping timeout: 260 seconds)
23:17:33  <kenansulayman>hij1nx_inberlin ping
23:22:54  * mikealjoined
23:23:13  * soldairquit (Quit: Page closed)
23:27:05  * rudjoined
23:31:48  * mikealquit (Ping timeout: 240 seconds)
23:37:17  <kenansulayman>brycebaril Could you allow non-key stores?
23:39:19  <brycebaril>kenansulayman: what do you mean?
23:39:40  <kenansulayman>brycebaril http://local.sly.mn:8000
23:39:53  <kenansulayman>currently I must provide a keyname per version
23:40:21  <kenansulayman>For my irc bot I construct a sublevel per day and inside have the messages as versions
23:40:50  <brycebaril>Maybe the key name is the room in that case? e.g. ##leveldb
23:41:06  <brycebaril>Otherwise you could just use the timestamp as the key name and not need level-version I suppose
23:41:25  <kenansulayman>no, the day is already a sublevel of the channelname in order to save lookups
23:41:36  * fallsemoquit (Quit: Leaving.)
23:41:48  <kenansulayman>And using timestamps as keyname would be redundant
23:42:24  <kenansulayman>Because I calculate the day after January 1st, 1970 (~~(Date.now()/86400000)) as sublevel identifier per day
23:42:35  <kenansulayman>And level-version already uses timestamps
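A sketch of the layout kenansulayman describes: a sublevel per channel, a sublevel per day under that (using the day-since-epoch calculation quoted above), with level-version stamping each message on top. The sublevel() calls are the standard level-sublevel pattern; the database path, channel name, and the plain put standing in for level-version are assumptions:

    // assumes `npm install level level-sublevel`
    var level = require('level')
    var sublevel = require('level-sublevel')

    var db = sublevel(level('./irclog.db'))

    // day number since 1970-01-01, as quoted above
    var day = ~~(Date.now() / 86400000)

    var channel = db.sublevel('##leveldb')
    var today = channel.sublevel(String(day))

    // level-version would normally sit on top of `today` and stamp each message
    // with a version; a plain timestamped put stands in for it here
    today.put(String(Date.now()), '<kenansulayman> this is cool', function (err) {
      if (err) throw err
    })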
23:43:07  <brycebaril>Yeah, but it definitely expects a key, otherwise the key would simply be a bitwise-hex timestamp
23:43:17  <kenansulayman>brycebaril http://data.sly.mn/RnEC/contents
23:43:53  <kenansulayman>Can't we make it a "bitwise-hex timestamp"? ;)
23:43:58  <brycebaril>you could do key = type, then use stream-joins .union() if you wanted a stream of all types of messages
23:44:30  <kenansulayman>hum
23:44:40  <kenansulayman>would the stream be still in order?
23:45:11  <brycebaril>Yep, all the stream-joins are designed for ordered streams. Also built for timestreamdb :)
23:45:51  <kenansulayman>the thing is I could do db.createReadStream to access the levelup stream, but would have to de-sugar the level-version api then
23:46:15  <rescrv>mbalho: not really sure what I should be looking at
23:46:32  <kenansulayman>brycebaril I'd definitely appreciate allowing empty keys for indexing versions
23:46:46  <brycebaril>rescrv: he put in some c++ code that directly merges leveldb databases into one larger one, some sort of optimized bulk load system
23:47:09  <brycebaril>kenansulayman: it may work with empty-string keys :)
23:47:21  <kenansulayman>brycebaril It surely does
23:47:23  <kenansulayman>let me check
23:47:37  <kenansulayman>I tested it but was unable to lev into the keys
23:48:39  <kenansulayman>brycebaril nah doesn't work — http://data.sly.mn/Rn8O
23:49:07  * ednapira_quit (Remote host closed the connection)
23:49:20  <brycebaril>ahh, hmm. I haven't used lev with level-version much :)
23:49:25  <brycebaril>(or at all perhaps)
23:49:42  <kenansulayman>brycebaril well I only use it to check consistency stuff ;)
23:49:51  <rescrv>brycebaril: it'd be easy to do that within the implementation without any copying
23:49:59  <rescrv>so long as you know keys are disjoint
23:50:53  <brycebaril>mbalho: ^^
23:50:54  <kenansulayman>WTF
23:51:00  <kenansulayman>Just got a mail from Apple
23:51:01  <kenansulayman>"Users who have already purchased your app are now able to download previous versions, allowing them to use your app with older devices that may no longer be supported by the current version."
23:55:28  * ednapiranhajoined
23:55:34  <kenansulayman>brycebaril I don't see the reason why empty strings fail to get stored
23:55:42  <kenansulayman>[key, encode(version)].join(delimiter) <= your makeKey code
23:56:50  <kenansulayman>brycebaril ow wait
23:57:02  <kenansulayman>you're using the same delimiter as level-sublevel
23:57:08  <kenansulayman>which makes them incompatible
23:57:11  <kenansulayman>\xff
23:57:53  <brycebaril>You can specify a different delmiter
23:58:13  <brycebaril>They are actually compatible so long as you use a key, I guess :)
23:58:43  <kenansulayman>well yes sorry. not "breaking" incompatible
23:58:55  <kenansulayman>I mean context-incompatible
23:59:17  <kenansulayman>for things like lev, which breaks due to the same delimiter