00:03:42  * wolfeidau joined
00:11:06  <mbalho>rvagg: also this is
00:11:09  <mbalho>rvagg: oops
00:11:19  <mbalho>rvagg: also this is not entirely elegant but it works https://github.com/rvagg/node-abstract-leveldown/commit/7396972156e9b2d33bf94ed989db92a94d268b5b
00:22:02  * werle joined
00:23:01  <levelbot>[npm] [email protected] <http://npm.im/level-js>: leveldown/leveldb library for browsers using IndexedDB (@maxogden)
00:24:32  * thl0 joined
00:25:30  <levelbot>[npm] [email protected] <http://npm.im/level-js>: leveldown/leveldb library for browsers using IndexedDB (@maxogden)
00:25:40  * rvagg stabs levelbot
00:25:56  <mbalho>rvagg: i republished cause i messed up the git url
00:26:33  * levelbot quit (Remote host closed the connection)
00:26:39  * werle quit (Client Quit)
00:26:50  * levelbot joined
00:26:51  <levelbot>[npm] [email protected] <http://npm.im/level-js>: leveldown/leveldb library for browsers using IndexedDB (@maxogden)
00:26:56  <rvagg>k, so, JSON.stringify() vs String()
00:27:22  <rvagg>I wanted to leave the actual encoding up to levelup, or whatever was building on leveldown
00:27:52  <mbalho>rvagg: ahh i see
00:28:11  <rvagg>JSON.stringify() seems a bit too implementation-specific; since leveldown only takes strings or buffers, it seems most appropriate to force anything that isn't a string or a buffer into a string; adding an extra JSON.stringify() encodes it into a form that perhaps isn't right
00:28:16  <rvagg>cause when you fetch it..
00:28:23  <rvagg>db.put('foo', 1)
00:28:28  <rvagg>db.get('foo') -> "1"
00:28:51  <rvagg>no, that's not right, stringify would do that properly
00:29:04  <rvagg>meh, perhaps there's a place for an if !buffer && !string then stringify
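A minimal sketch of the check rvagg floats above — illustrative only, not abstract-leveldown's actual code; the function name is made up:

    function toLevelValue (value) {
      // leveldown only takes strings or buffers; leave those alone
      if (typeof value === 'string' || Buffer.isBuffer(value)) return value
      // only stringify what isn't already acceptable, so put('foo', 1) round-trips as "1"
      return JSON.stringify(value)
    }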
00:29:47  <mbalho>levelup handles objects right?
00:29:57  <rvagg>yeah, levelup does it all properly
00:30:08  <rvagg>so in theory you shouldn't have to care about this at leveldown
00:30:09  <mbalho>basically in the browser leveldown shouldnt touch any values
00:30:11  <mbalho>yea
00:30:27  <mbalho>i dunno if it causes problems if leveldown doesnt filter stuff out
00:30:33  <mbalho>in leveldb or whatevs
00:31:48  * wolfeidau quit (Ping timeout: 245 seconds)
00:32:18  * rvagg hmmmmms
00:32:25  <rvagg>it might be possible to just completely remove it
00:32:29  <rvagg>leveldown does it itself anyway
00:32:33  <rvagg>memdown wouldn't care
00:32:38  <rvagg>dunno about localstorage
00:32:43  * wolfeidau joined
00:32:44  <mbalho>localstorage just does strings
00:32:57  <rvagg>it could just be left up to the implementer, this is really low-level details
00:33:07  * werle joined
00:33:14  <mbalho>looks like localStorage does toString
00:33:19  <mbalho>yea i think you're right
00:33:25  <mbalho>i found it annoying that leveldown messed with my stuff
00:33:27  <mbalho>:P
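A two-line illustration of the localStorage behaviour mbalho describes — values are coerced to strings on write:

    localStorage.setItem('count', 1)
    typeof localStorage.getItem('count')  // 'string' — it comes back as "1"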
00:33:42  <rvagg>fair enough
00:34:13  <rvagg>might have been good to open a PR with the change so we could get broader discussion, no9 would probably have an opinion on this too
00:35:35  <mbalho>good call, i was just thinking about that
00:38:02  * werle quit (Quit: Leaving.)
00:45:59  * wolfeidau quit (Remote host closed the connection)
00:59:41  * wolfeidau joined
01:11:04  * wolfeidau quit (Remote host closed the connection)
01:14:33  * wolfeidau joined
01:20:46  * wolfeidau quit (Remote host closed the connection)
01:44:16  * thl0 quit (Remote host closed the connection)
01:50:04  * st_luke quit (Remote host closed the connection)
01:53:20  * wolfeidau joined
02:21:08  * wolfeidau quit (Remote host closed the connection)
02:21:52  * wolfeidau joined
02:24:36  * Pwnna quit (Remote host closed the connection)
02:27:00  * Pwnna joined
02:32:22  * wolfeidau quit (Remote host closed the connection)
02:34:45  * levelbot quit (Remote host closed the connection)
02:35:03  * levelbot joined
03:06:07  * thl0 joined
03:14:21  * Pwnna quit (Ping timeout: 240 seconds)
03:38:31  * levelbot quit (Remote host closed the connection)
03:38:46  * levelbot joined
03:42:27  * thl0 quit (Remote host closed the connection)
04:00:19  * Pwnna joined
04:22:41  * wolfeidau joined
04:53:20  * timoxley joined
06:14:43  * no9 quit (Ping timeout: 245 seconds)
06:27:58  * no9 joined
06:45:12  * timoxley quit (Quit: Computer has gone to sleep.)
07:52:00  * timoxley joined
07:52:40  * Pwnna quit (Ping timeout: 246 seconds)
08:01:36  * brianloveswords quit (Excess Flood)
08:03:29  * brianloveswords joined
08:06:50  * dominictarr joined
08:23:15  <levelbot>[npm] [email protected] <http://npm.im/level-store>: A streaming storage engine based on LevelDB. (@juliangruber)
08:51:59  * dominictarr quit (Quit: dominictarr)
08:54:33  * timoxley quit (Quit: Computer has gone to sleep.)
08:57:36  * timoxley joined
09:01:46  * hackerGBQ joined
09:01:48  * no9 quit (Ping timeout: 245 seconds)
09:04:15  * hackerGBQ part ("Silentium est aureum")
09:14:47  * no9 joined
09:14:48  * no9 quit (Read error: Connection reset by peer)
09:15:32  * no9 joined
09:22:39  <no9>mbalho are you looking for the ability to provide your own type checker?
09:27:09  <no9>I prefer a friction-free implementation with the ability to override it. I think the if-browser solution is wrong
09:28:34  <no9>wrong is a bit strong but it doesn't make some assumptions around the capability of the datastore in a given environment.
09:56:34  * no9 quit (Ping timeout: 276 seconds)
09:57:08  * no9 joined
10:32:12  * dominictarr joined
10:43:03  * no9 quit (Read error: Operation timed out)
10:45:00  * st_luke joined
10:51:07  * wolfeidau quit (Remote host closed the connection)
10:57:35  * no9 joined
11:10:00  <dominictarr>had an idea last night:
11:10:09  <dominictarr>way to implement arbitrary search/indexing
11:10:46  <dominictarr>when data is written trigger index (maybe in prehook, or with trigger)
11:10:58  * st_luke quit (Remote host closed the connection)
11:11:06  <dominictarr>then map value to keys -> back to document
11:11:10  <dominictarr>and save them in batch
11:12:13  <dominictarr>don't bother to make a note of which keys belong to what values
11:12:37  <dominictarr>when a document is deleted, it will still be indexed
11:12:47  <dominictarr>but, when you do a read
11:12:59  <dominictarr>you retrieve the document, and check that it's still valid
11:13:11  <dominictarr>if it's not, _then_ you can do the delete
11:13:36  <dominictarr>this means you only clean up records for terms you actually search for
11:14:22  <dominictarr>in many cases, the database is write only
11:14:41  <dominictarr>so indexes are never removed
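A rough sketch of the lazy-cleanup indexing idea above, assuming levelup-style databases; docDb, indexDb and indexFn are hypothetical names, not an existing module:

    function indexDoc (docDb, indexDb, key, doc, indexFn, cb) {
      docDb.put(key, JSON.stringify(doc), function (err) {
        if (err) return cb(err)
        // map the value to index keys and save them in one batch;
        // no note is kept of which keys belong to which values
        var batch = indexFn(doc).map(function (term) {
          return { type: 'put', key: term + '\xff' + key, value: key }
        })
        indexDb.batch(batch, cb)
      })
    }

    function search (docDb, indexDb, term, indexFn, cb) {
      var entries = []
      indexDb.createReadStream({ start: term + '\xff', end: term + '\xff\xff' })
        .on('data', function (entry) { entries.push(entry) })
        .on('end', function () {
          var results = [], pending = entries.length
          if (!pending) return cb(null, results)
          entries.forEach(function (entry) {
            docDb.get(entry.value, function (err, raw) {
              // document deleted or no longer matching? _then_ do the delete —
              // stale entries are only cleaned up for terms actually searched for
              if (err || indexFn(JSON.parse(raw)).indexOf(term) === -1) indexDb.del(entry.key)
              else results.push(JSON.parse(raw))
              if (--pending === 0) cb(null, results)
            })
          })
        })
    }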
11:36:31  * wolfeidau joined
12:43:08  <dominictarr>rvagg: would it be feasible to notify on compactions?
12:44:35  <rvagg>not without hacking leveldb itself
12:44:58  <rvagg>is this for production use or just for educational purposes?
12:45:14  * no9 quit (Ping timeout: 261 seconds)
12:45:49  <dominictarr>educational
12:45:58  <dominictarr>hmm, maybe you could do it with dtrace?
12:47:54  <rvagg>yeah, or you could just hack in to leveldb and put some print statements
12:51:19  * jluis joined
12:51:43  * jluis changed nick to Guest31167
12:51:49  * joaoluis quit (Read error: Operation timed out)
12:51:56  <rvagg>dominictarr: there is an internal logging thing in leveldb that should show what's going on
12:52:03  <rvagg>just need to figure how we'd activate it
12:58:13  * no9 joined
13:03:13  <rvagg>dominictarr: bingo, got it, will push a branch for you to play with
13:04:21  <mbalho>no9: ah yea a custom type checker would be better, i'll open a pull req next time instead of committing
13:04:23  <rvagg>dominictarr: check out logger-play branch of leveldown, run `node-gyp rebuild` over it then link it to your levelup install and you'll get a log to /tmp/leveldb.log
13:04:41  <rvagg>for fun, you can run the leak-tester.js in the tests/ directory of leveldown, that makes the log go crazy
13:04:58  <dominictarr>okay, cool
13:04:59  <rvagg>.. should perhaps think about enabling logging as a standard feature, it's a bit awkward for windows compat but I think we have all the bits
13:05:01  <dominictarr>thanks!
13:05:21  <rvagg>dominictarr: https://github.com/rvagg/node-leveldown/compare/logger-play
13:05:28  <rvagg>difference is trivial, you can change the location easily enough
13:05:35  <rvagg>and I've set it to append to the log
13:05:52  <rvagg>bed time for me, have fun and let me know if you find out anything interesting!
13:06:03  <dominictarr>spewing out messages that are probably not important breaks the abstraction. and the rule of silence. substack will hate it
13:06:19  <rvagg>dominictarr: yeah, but it could be as a hidden option you can enable if you're an advanced user
13:06:37  <dominictarr>a logging _option_ is completely okay
13:06:41  <rvagg>levelup("/foo/bar.db", { log: "/tmp/leveldb.log" })
13:06:57  <rvagg>oh yeah, not on full time, that'd totally suck, particularly for performance
13:07:03  <rvagg>anyway, ttfn
13:17:41  <mbalho>ok what do i have to do to get a node server to sync a bunch of binary data to level.js
13:18:38  * brianloveswords quit (Excess Flood)
13:18:58  * brianloveswords joined
13:21:01  <mbalho>just one way, master slave
13:22:31  <dominictarr>mbalho: http://ghub.io/level-master
13:22:45  <dominictarr>that is between two leveldbs, is that what you want?
13:23:32  <no9>mbalho NP Are you planning to roll back?
13:24:18  <mbalho>no9: ill replace the if (process.browser) stuff with a function you can pass in
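A purely hypothetical shape for that option (this is not level-js's real API) — the point is just that the environment branch becomes something the caller supplies:

    var leveljs = require('level-js')

    var db = leveljs('mydb', {
      // stands in for the hard-coded if (process.browser) branch; name and signature are invented
      serializeValue: function (value) {
        if (typeof value === 'string' || value instanceof ArrayBuffer) return value
        return String(value)
      }
    })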
13:25:21  <mbalho>dominictarr: i suppose that is the correct stack, ill try to get it working in browser
13:25:32  <mbalho>dominictarr: do you know whats up with binary streaming between levels?
13:28:07  <mbalho>dominictarr: cause https://npmjs.org/package/stream-serializer will break when i try and send arraybuffers
13:43:49  <dominictarr>mbalho: use stream-serializer/msgpack
13:44:19  <dominictarr>or stream-serializer/jsonb
13:44:37  <dominictarr>hmm, hang on...
13:45:42  <dominictarr>that is mux-demux/{msgpack,jsonb}
13:46:00  <dominictarr>mbalho: are you using mux-demux or just streaming directly?
13:46:22  <mbalho>dominictarr: i have no implementation yet
13:46:34  <mbalho>dominictarr: but im storing arraybuffers in indexeddb using level.js
13:46:50  <mbalho>dominictarr: and i wanna push them + pull them to tacodb
13:47:08  <dominictarr>right - so, we fixed this for multilevel
13:47:24  <dominictarr>so if you use multilevel you can do it.
13:48:27  <mbalho>dominictarr: oh so multilevel works in browser now?
13:49:37  <dominictarr>yes, have been using it via shoe
13:50:11  <mbalho>dominictarr: will tacodb solve my problems when it comes out? e.g. should i just wait for that?
13:50:23  <mbalho>dominictarr: i just wanna hook up something to level.js that lets me push and pull databases
13:51:45  <dominictarr>it will use all the level-* modules, so just use them
13:52:16  <dominictarr>multilevel, level-master
13:52:43  <mbalho>dominictarr: so its just gonna be an endpoint you can sync to, not a client side lib
13:52:47  <dominictarr>hmm, maybe just use db.createWriteStream() to push the database
13:53:04  <dominictarr>that is correct
13:53:28  <mbalho>dominictarr: ok so as long as multilevel can replicate binary data correctly then that should work for me
13:53:37  <dominictarr>yes
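A minimal sketch of that wiring, assuming multilevel's server/client stream API plus shoe for the websocket; the '/level' mount point and the httpServer/localDb variables are placeholders:

    // server side
    var shoe = require('shoe')
    var multilevel = require('multilevel')
    var level = require('level')

    var db = level('./data.db')
    var sock = shoe(function (stream) {
      stream.pipe(multilevel.server(db)).pipe(stream)
    })
    sock.install(httpServer, '/level')   // httpServer: your existing http.Server

    // browser side (bundled with browserify, level.js as the local store)
    var remote = multilevel.client()
    var stream = shoe('/level')
    stream.pipe(remote.createRpcStream()).pipe(stream)

    // one-way, master-slave sync
    localDb.createReadStream().pipe(remote.createWriteStream())   // push local -> remote
    remote.createReadStream().pipe(localDb.createWriteStream())   // or pull remote -> local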
13:54:06  <dominictarr>also, I'm gonna start working on master-master replication after I've got this thing up
13:56:40  <mbalho>dominictarr: do a demo with level.js :D
13:57:20  <dominictarr>will do
13:57:24  <juliangruber>mbalho: pipe multilevel over your websocket stream
13:58:36  <juliangruber>multilevel works in the browser and is binary by default
13:58:37  <mbalho>juliangruber: is there an example demonstrating binary mode?
13:58:39  <mbalho>juliangruber: OH
13:58:44  <juliangruber>yeah
13:58:51  <juliangruber>I need to add an option to be not-binary :D
13:58:52  <mbalho>juliangruber: you should add that to the readme
13:59:18  <juliangruber>why wouldn't you expect it to just work?
13:59:56  <mbalho>juliangruber: cause all this stuff is new
14:00:05  <juliangruber>ok
14:00:09  <juliangruber>good point
14:00:23  <mbalho>juliangruber: i expect node stuff to be binary by default but modules are hit or miss in my experience, though i personally know to trust the people in this channel
14:00:28  <mbalho>but for everyone else its not as clear
14:01:04  * thl0 joined
14:04:24  <juliangruber>mbalho: https://github.com/juliangruber/multilevel/commit/eb4788df86904d6fccec2e7dbbc4fb76ed637d24
14:04:37  <mbalho>w00t
14:05:15  <dominictarr>juliangruber: require('multilevel/jsonb') gives you json, and it will base64 buffers, so it still works for binary, also
14:06:04  <mbalho>dominictarr: if you browserify it does it know to handle arraybuffers?
14:06:19  <dominictarr>oh, hmm, maybe I didn't make that pr
14:06:56  <mbalho>dominictarr: it would be easy to swap out any Buffer specific code for https://github.com/chrisdickinson/bops
14:07:29  <dominictarr>oh, require('multilevel') is binary compatible json
14:07:42  <dominictarr>and require('multilevel/msgpack') is msgpack
14:08:21  <juliangruber>yes
14:08:41  <juliangruber>we still need to run the tests using both encodings...
14:08:42  <dominictarr>mbalho: right and that tells you if you have a binary able object?
14:09:11  <mbalho>dominictarr: bops is for writing binary code that works in node and browser
14:09:45  <dominictarr>right - and so don't use Buffer.isBuffer
14:09:50  <dominictarr>use bops.is
14:09:55  <mbalho>or require('isbuffer')
14:10:05  <mbalho>basically dont use Buffer
14:10:10  <mbalho>since it doesnt work in the browser
14:10:13  <juliangruber>bops all the things
14:10:31  <dominictarr>right
14:10:41  <dominictarr>there is probably a bit of porting to do here
14:10:57  <mbalho>if a module only needs to work in node it doesnt matter
14:11:04  <mbalho>but if it needs to work in a browser use bops
14:12:18  <dominictarr>but all the modules ought to work in the browser!
14:12:56  <mbalho>well yea but i'd rather the useful ones work first etc
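A small sketch of the bops pattern being advocated here — the same calls work on node Buffers and, through browserify, on typed arrays:

    var bops = require('bops')

    var buf = bops.from('deadbeef', 'hex')   // build binary from a hex string
    bops.is(buf)                             // true — use this instead of Buffer.isBuffer
    bops.to(buf, 'hex')                      // 'deadbeef'
    bops.readUInt8(buf, 0)                   // 222 (0xde)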
14:54:05  <dominictarr>thl0: looks like there might be a problem with encodings in sublevels with streams?
14:54:36  <thl0>dominictarr: what problems?
14:54:41  <thl0>read streams or write?
14:55:07  <thl0>cuz I didn't get to enforce encodings in write streams yet
14:57:50  <dominictarr>ah, right - I was using a write stream, but it was failing silently
14:59:05  <thl0>dominictarr: I'll have to get to that - it's on my TODO, as well as considering keys on del (I saw I missed that)
14:59:33  <thl0>we also need a bunch of more tests to cover all these scenarios
15:00:19  <thl0>dominictarr: have you considered writing a few tests that just ensure that params get propagated to levelup properly instead of testing the actual db?
15:00:27  <dominictarr>mbalho: okay, this tacodb thing is nearly ready. you can push customizations to it, connect with http or ws, and query logs from history or in realtime
15:00:34  <thl0>i.e. you'd mock out levelup and test what gets passed to it
15:01:02  <thl0>we could use proxyquire to stub out levelup in these cases
15:01:32  <dominictarr>hmm, I'm kinda suspicious of mocking, but maybe this is a sensible case for it.
15:01:32  <thl0>unless you always wanna do full integration tests
15:01:58  <dominictarr>if we refactored the tests a bit, we could reuse them for, say levelup in the browser
15:02:27  <thl0>kinda is, since we are just making sure that it calls levelup correctly, instead of testing levelup (which is already tested itself)
15:02:28  <dominictarr>I think I'd rather test that a sublevel still passes levelup tests
15:02:35  <thl0>ok
15:02:47  <dominictarr>but, you are testing that sublevel behaves the same as levelup
15:03:04  <dominictarr>there are somethings that get adjusted on the way out, like key prefixes
15:03:30  <thl0>yeah, maybe integ. tests is more reliable here
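A sketch of the proxyquire approach thl0 floats above (before they settle on integration tests) — './my-sublevel' and the recorded call shape are made up for illustration, not level-sublevel's real test setup:

    var proxyquire = require('proxyquire')

    // minimal stand-in that only records what it gets called with
    var calls = []
    function fakeLevelup (location) {
      return {
        put: function (key, value, opts, cb) { calls.push(['put', key, value, opts]); cb() }
      }
    }

    // load the module under test with levelup swapped out, so a test can assert
    // on what reaches levelup instead of exercising a real database
    var wrapper = proxyquire('./my-sublevel', { levelup: fakeLevelup })
    // ...exercise wrapper's API here, then assert on the contents of `calls`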
15:03:35  <thl0>when I have some bandwidth this weekend I'll get those tests in along with fixing write stream
15:04:06  <thl0>right now I wanna get valuepack-mine-npm published so people can play with it and then move on to valuepack-mine-github
15:23:29  * dominictarr quit (Quit: dominictarr)
17:22:43  * thl0 quit (Remote host closed the connection)
17:23:20  * thl0 joined
17:27:44  * thl0 quit (Ping timeout: 252 seconds)
17:29:25  * thl0 joined
17:37:31  * thl0 quit (Remote host closed the connection)
18:25:30  * Raynos quit (K-Lined)
18:25:31  * niftylettuce quit (K-Lined)
18:34:00  * Raynos joined
18:34:10  * niftylettuce joined
18:37:37  * Raynos quit (Client Quit)
18:38:41  * Pwnna joined
18:51:00  * thl0 joined
19:16:07  * no9 quit (Ping timeout: 264 seconds)
19:28:54  * no9 joined
19:41:09  * Pwnna- joined
19:46:33  * Pwnna quit (Quit: Leaving)
19:46:33  * Pwnna- changed nick to Pwnna
19:52:35  * Pwnna quit (Quit: ZNC - http://znc.in)
19:53:52  * Pwnna joined
20:00:53  * Pwnna quit (Quit: ZNC - http://znc.in)
20:01:30  * Pwnna joined
20:03:41  <juliangruber>it's getting kinda annoying that you have to compile leveldb so many times
20:04:41  * Pwnna quit (Client Quit)
20:05:21  * Pwnna joined
20:13:47  * Pwnna quit (Quit: mrrow~)
20:13:48  <thl0>juliangruber: you could probably try to alias whatever is called to compile to something that just copies the binaries
20:14:23  * Pwnna joined
20:15:11  <juliangruber>interesting idea!
20:21:10  <levelbot>[npm] [email protected] <http://npm.im/level-dump>: Dumps all values and/or keys of a level db or a sublevel to the console. (@thlorenz)
20:24:35  <thl0>btw not bad ..
20:24:38  <thl0>➝ ./store-npm-packages.js --read --owner --keys | grep juliangruber | wc -l
20:24:38  <thl0> 88
20:26:17  <juliangruber>khehe
20:26:57  <thl0>takes less than a sec to spit that out from level
20:29:01  <juliangruber>sweet
20:29:39  <juliangruber>you could store the db in a ~/.valuepack directory
20:30:07  <juliangruber>and automatically download the db if there is none there
20:32:55  * Pwnna quit (Quit: mrrow~)
20:33:47  * Pwnna joined
20:34:39  <thl0>juliangruber: it has an init script
20:34:44  * Pwnna quit (Client Quit)
20:35:03  <thl0>that stores it there
20:35:31  <thl0>juliangruber: https://github.com/thlorenz/valuepack-mine-npm#fetching-data-initializing-the-data-store
20:35:44  <juliangruber>ok
20:36:49  <thl0>mine-github is next, but they got this API limit, not sure how I can get to all the data that way
20:37:13  * Pwnna joined
20:37:16  <thl0>juliangruber: and they wrote me this crappy automatic email in response to my request to up those limits for an account
20:37:52  <thl0>so not sure how to go about that - may have to find some people that know some people ... in github
20:43:16  * Pwnna quit (Quit: mrrow~)
20:44:12  * Pwnna joined
20:44:20  <mbalho>thl0: my gf works there
20:44:24  <mbalho>thl0: whats the question
20:52:42  <thl0>mbalho: I need to increase the API rate limit
20:53:06  <thl0>I asked support, but got this precanned email, telling me how to cache things and such
20:53:37  <thl0>however I need to reevaluate all npm users' info on a daily basis, i.e. to see if some repos were starred
20:53:50  <thl0>there are 8000+ users, the limit is 5000
20:54:17  <thl0>I'd also need to follow through in some cases, i.e. get the stargazers info
20:55:03  <thl0>mbalho: see here: https://github.com/thlorenz/valuepack/blob/f28018b91c6ff69bc8fdc7f6cbca69af38952d66/data/mine.md#github-users
20:59:00  <mbalho>thl0: did you get the email from [email protected] or a specific person?
20:59:28  <thl0>mbalho: "Wynn Netherland (GitHub Staff)" <[email protected]>
20:59:43  <thl0>looks very automatic to me
21:00:01  <mbalho>thl0: theyre all hand sent, might be templates though
21:00:19  <thl0>yep, I guess these guys are busy
21:00:21  <mbalho>thl0: is your question basically 'is it possible to get a higher rate limit' ?
21:00:34  <thl0>yes for a specific account
21:00:47  <thl0>I was gonna create one just for valuepack
21:00:59  <mbalho>thl0: its 5k/hr right?
21:01:05  <thl0>yes
21:01:27  <thl0>but I wanna get all info in one go instead of spreading it out over the entire day ;)
21:02:07  * Pwnna quit (Quit: mrrow~)
21:02:29  * Pwnna joined
21:03:19  <thl0>mbalho: i.e. I'd wanna hit all npm users/repos on github and see if the stars/watcher count on any repo changed
21:03:31  <thl0>that alone is 8000+ reqs
21:03:50  * Pwnna quit (Client Quit)
21:04:06  <thl0>and if it did, I'd wanna find out who the stargazers are to weigh the stars
21:04:11  * Pwnna joined
21:04:22  <thl0>same goes for issues and pullreqs
21:04:49  <thl0>mbalho: so if I could get like 10K or 20K that would help a lot!
21:06:58  <mbalho>thl0: cool ill see what i can do
21:07:59  <thl0>mbalho: awesome, I appreciate any help. if you get an ok, let me know so I can create that valuepack account.
21:09:41  * Raynos joined
21:18:35  * no9 quit (Quit: Leaving)
21:25:48  <mbalho>thl0: register an oauth application under your account, you dont have any right now
21:36:49  <thl0>mbalho: not sure what you mean, I'd just want an account with higher limit
21:37:10  <thl0>i.e. not working on OAuth yet - that's much further down the line
21:37:31  <mbalho>thl0: they can only increase rate limits on oauth apps
21:37:44  <thl0>ok
21:38:22  <thl0>mbalho: so I'll create a valuepack account and register an OAuth application under it?
21:38:43  <mbalho>thl0: theres no point in having a new account, just make the app under your account
21:38:51  <thl0>ok
21:38:57  <mbalho>thl0: then you can have 12,500
21:39:05  <mbalho>thl0: for requests that come from that app
21:39:31  <thl0>mbalho: cool!, dumb question - how do I create and register oauth application?
21:39:36  <Pwnna>just use mozilla persona
21:40:00  <thl0>mbalho: i.e. is this the right place to look? http://developer.github.com/v3/oauth/
21:40:36  <mbalho>thl0: https://github.com/settings/applications
21:41:30  <thl0>mbalho: thanks, I guess I don't worry about callback url and such right?
21:41:49  <mbalho>thl0: maybe just put in localhost
21:42:18  <thl0>mbalho: ok, done
21:42:23  <thl0>called it valuepack
21:43:23  <mbalho>kewl
21:43:41  <thl0>mbalho: so what do you need me to do now?
21:43:49  <mbalho>thl0: wait :D
21:43:57  <thl0>mbalho: can do :)
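A hedged sketch of verifying the bumped quota once the app exists (at 12,500/hr the 8000+ user sweep fits in a single pass) — it assumes the 2013-era GitHub v3 convention of passing an OAuth app's client_id/client_secret as query parameters so requests count against the app:

    var https = require('https')

    var clientId = 'YOUR_CLIENT_ID'        // from https://github.com/settings/applications
    var clientSecret = 'YOUR_CLIENT_SECRET'

    https.get({
      host: 'api.github.com',
      path: '/rate_limit?client_id=' + clientId + '&client_secret=' + clientSecret,
      headers: { 'user-agent': 'valuepack-mine-github' }   // GitHub rejects requests without a UA
    }, function (res) {
      var body = ''
      res.on('data', function (chunk) { body += chunk })
      res.on('end', function () {
        console.log(JSON.parse(body).rate)   // rate.limit should reflect the app's raised quota
      })
    })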
22:04:41  <levelbot>[npm] [email protected] <http://npm.im/valuepack-mine-npm>: Mines the npm registry for user and package data used by valuepack. (@thlorenz)
22:25:11  * wolfeidau quit (Remote host closed the connection)
23:11:44  * thl0 quit (Remote host closed the connection)
23:14:46  * wolfeidau joined
23:28:28  * jez0990 quit (Ping timeout: 245 seconds)