00:47:56  * eugeneware quit (Remote host closed the connection)
00:48:25  * eugeneware joined
00:48:35  * eugeneware quit (Read error: Connection reset by peer)
00:48:57  * eugeneware joined
00:54:27  * eugeneware quit (Ping timeout: 276 seconds)
01:13:58  <chapel>mikeal: is requestdb so it can work in the browser too?
01:21:30  * thlorenz joined
03:45:02  * mikeal quit (Ping timeout: 240 seconds)
04:13:37  * mikeal joined
04:34:22  * thlorenz quit (Remote host closed the connection)
04:59:45  * mikeal quit (Read error: Connection reset by peer)
04:59:51  * mikeal1 joined
05:02:06  * mikeal1 quit (Client Quit)
05:06:00  * mikeal joined
05:36:21  * mcollina joined
05:56:07  * mikeal quit (Quit: Leaving.)
05:59:52  * mcollina quit (Remote host closed the connection)
06:00:59  * mcollina joined
06:07:35  * mcollina quit (Remote host closed the connection)
06:26:17  * mikeal joined
06:33:38  * mcollina joined
06:45:34  * mcollina quit (Read error: Connection reset by peer)
06:45:39  * mcollina_ joined
07:36:26  * mcollina_ quit (Remote host closed the connection)
07:36:54  * mcollina joined
07:39:41  * eugeneware joined
07:39:41  * eugeneware quit (Remote host closed the connection)
07:39:49  * eugeneware joined
07:41:22  * mcollina quit (Ping timeout: 246 seconds)
07:51:17  * wilmoore-db quit (Remote host closed the connection)
07:56:43  * jcrugzz joined
08:57:30  * alanhoff89 joined
08:58:57  * alanhoff quit (Ping timeout: 264 seconds)
09:03:47  * timoxley_ joined
09:06:26  * timoxley quit (Ping timeout: 256 seconds)
09:35:04  * dominictarr joined
10:08:00  * kenansulayman joined
10:08:09  <kenansulayman>hey ho
10:08:34  <kenansulayman>Anyone active right now? need some advice
10:10:28  <kenansulayman>juliangruber dominictarr - are you multi-level guys here?
10:10:44  <dominictarr>kenansulayman: sure
10:10:59  <kenansulayman>We're currently working on leveldb for our instances
10:11:15  <kenansulayman>but we're seeking some horizontal scaling
10:11:26  <kenansulayman>what'd be the best approach of scaling with leveldb?
10:12:06  <dominictarr>aha, well! it depends on your data
10:12:19  <dominictarr>how much load do you have?
10:13:19  <kenansulayman>we're sporting 8 databases, of which 2 are reference databases to create relations (id->user; mail->user)
10:13:59  <kenansulayman>one database is for storing avatars and we're using juliangruber's level-store for it
10:14:27  <kenansulayman>also, let me bench the average load per user..
10:16:14  <kenansulayman> the current average is 6012 bytes
10:16:24  <kenansulayman>JSON
10:16:28  <dominictarr>what are the other 5 databases?
10:17:33  <kenansulayman>1. avatars_meta: meta data for avatars; like expiration, size and origin
10:17:48  <kenansulayman>2. presence of users (for chat et al.)
10:18:08  <kenansulayman>3. waiting — users which registered but are in a confirmation queue (email confirmation ftw)
10:18:17  <kenansulayman>4. session — session store
10:18:35  <kenansulayman>5. entity — stores conversations between users
10:18:51  <kenansulayman>6. document — stores documents (we're working in a legal space) of the users
10:19:12  <dominictarr>okay, how many users, and active users, do you have?
10:19:36  <dominictarr>sorry, I mean, how many users do you have at any one time?
10:19:47  <dominictarr>(currently using the app)
10:20:18  <kenansulayman>wait; we planned on switching to LevelDB which we ditched as they're explosively expensive. I made a statistic for them, let me get it
10:20:58  <kenansulayman>We have a current 71 transactions per second to the database
10:21:30  <kenansulayman>Whereas an average user kicks about 3326,44 transactions per session
10:21:46  <kenansulayman>With an average of 2.26 transactions per second per user
10:23:01  <dominictarr>that is 332 thousand?
10:23:12  <dominictarr>and this is all currently into a single process?
10:23:18  <kenansulayman>nah sorry, german notation here; 3326
10:23:59  <kenansulayman>yup and it works quite well until now
10:24:25  <kenansulayman>we optimized the system so that it can peak up to 1000 concurrents per process
10:24:58  <dominictarr>ah, okay!
10:25:25  <dominictarr>what about your data - do you have overwrites/update/deletes, or is everything write-once
10:25:53  <kenansulayman>well let's consider the average user flow:
10:27:47  <kenansulayman>if the user logs in over facebook or linkedin, we'll sync the data — and if the foreign data is different (and the user did not change it manually on our platform), we'll overwrite the local data with it
10:28:03  <kenansulayman>Like city, name, current business et al.
10:28:28  <kenansulayman>deletes are quite rare. only if a friendship is cancelled
10:28:35  <kenansulayman>or if we force-delete a user
10:28:58  <kenansulayman>or a document is deleted, which is rare too
10:29:51  <kenansulayman>(for information: we're http://legify.com/)
10:29:58  <dominictarr>what about in the session or presence databases?
10:31:24  <kenansulayman>sessions are kept mainly in RAM (we can handle up to 100k sessions in memory)
10:31:39  <dominictarr>oh, so you just archive into leveldb?
10:31:55  <levelbot>[npm] search-index@… <http://npm.im/search-index>: A text search index module for Node.js. Search-index allows applications to add, delete and retrieve documents from a corpus. Retrieved documents are ordered by tf-idf relevance, filtering on metadata, and field weighting (@fergie)
10:32:10  <kenansulayman>uh?
10:32:29  <kenansulayman>yes archive is pretty much it
10:33:18  <kenansulayman>The presence database consists of a key (of the user, which also references the user data in the user database) and a unix timestamp
10:33:53  <dominictarr>do you just get and set? or do you use createReadStream to get ranges?
10:34:01  <kenansulayman>get and set
10:34:10  <kenansulayman>looks pretty much like this:
10:34:35  <kenansulayman>here's the code
10:34:35  <kenansulayman>http://f.cl.ly/items/1y352k1O1a3A2b192m3f/Text%202013.08.18%2012%3A34%3A21.txt
10:36:09  <kenansulayman>appRT is a client of echtzeit (https://github.com/Legify/echtzeit) our pub/sub library
10:36:32  <dominictarr>okay, cool
10:36:41  <kenansulayman>and if the user hasn't got a field in presence or the last stamp is older than 30 seconds, we'll tell his contacts
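A minimal sketch of the presence check described above, assuming a plain level instance keyed by user id with a unix timestamp (ms) as the value; notifyContacts is a hypothetical helper:

    var level = require('level');
    var presence = level('./db/presence');

    // Record a heartbeat for a user.
    function touch(userId, cb) {
      presence.put(userId, String(Date.now()), cb);
    }

    // A user counts as offline if there is no entry or the last
    // stamp is older than 30 seconds.
    function isOnline(userId, cb) {
      presence.get(userId, function (err, stamp) {
        if (err && (err.notFound || err.name === 'NotFoundError')) return cb(null, false);
        if (err) return cb(err);
        cb(null, Date.now() - Number(stamp) < 30 * 1000);
      });
    }

    isOnline('user:123', function (err, online) {
      if (err) throw err;
      if (!online) notifyContacts('user:123'); // hypothetical: tell the user's contacts
    });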
10:38:17  <dominictarr>is each of these "databases" separate levelup instances, or are you using sublevel?
10:38:26  <dominictarr>(or something similar)
10:39:20  <kenansulayman>http://f.cl.ly/items/0X16072R2x1B0N1N473g/Text%202013.08.18%2012%3A39%3A10.txt
10:39:59  <kenansulayman>separate instances, that is
10:40:46  <dominictarr>right. okay so there are a couple of approaches
10:41:49  <dominictarr>the simplest is to have a central db that accepts writes, and then fan that data into databases that handle the reads.
10:43:01  <dominictarr>though, you'd obviously need to make appRT work between processes too.
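A rough sketch of the write-master fan-out dominictarr describes, assuming the redis backend already used for echtzeit carries the change feed; the channel name and message framing are invented for illustration:

    var level = require('level');
    var redis = require('redis');

    // Write master: persist locally, then broadcast the operation.
    var master = level('./db/master');
    var pub = redis.createClient();

    function write(key, value, cb) {
      master.put(key, value, function (err) {
        if (err) return cb(err);
        pub.publish('level-ops', JSON.stringify({ key: key, value: value }));
        cb();
      });
    }

    // Read replica (separate process): apply every broadcast op to a local
    // copy and serve all reads from it.
    var replica = level('./db/replica');
    var sub = redis.createClient();
    sub.subscribe('level-ops');
    sub.on('message', function (channel, msg) {
      var op = JSON.parse(msg);
      replica.put(op.key, op.value, function (err) {
        if (err) console.error('replica write failed', err);
      });
    });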
10:43:50  <kenansulayman>that isn't too hard since we're on a redis backend for echtzeit
10:44:15  <kenansulayman>We experimented with your level-master though it felt like a hack (or we did something wrong)
10:44:18  <dominictarr>ah, good
10:45:03  <dominictarr>sure, level-master is a proof of concept, it hasn't been put into production.
10:45:18  <kenansulayman>also we experimented with a custom DHT implementation but that wasn't too realtime-ish
10:45:42  <dominictarr>whichever method you use, I think you'll be the first in node/level land
10:46:11  <dominictarr>https://github.com/dominictarr/level-replicate
10:46:22  <dominictarr>^that one is a bit more grown up than level-master
10:46:25  <dominictarr>but same idea.
10:46:27  <kenansulayman>Yes we've been with level when you just kicked-off levelup / down
10:46:53  <kenansulayman>ah yes I tried that one too but somehow it fell over
10:46:58  <kenansulayman>let me look into that for a sec
10:47:23  <kenansulayman>ah yes; somehow both sides died if a node crashed
10:47:40  <dominictarr>oh, right - well, that can be fixed.
10:47:50  <kenansulayman>that is, we tried forever-ing the client
10:47:57  <kenansulayman>but that was too unstable
10:48:48  <dominictarr>no, you just need a error handler on the replication stream
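The fix being suggested is just attaching 'error' handlers (and reconnecting) so a dropped peer cannot crash either side; a generic sketch, not level-replicate's actual API (createReplicationStream and the endpoint are hypothetical):

    var net = require('net');

    function connect() {
      var socket = net.connect(9000, 'replication-host');  // hypothetical master endpoint
      var repl = createReplicationStream();                 // hypothetical, e.g. from level-replicate

      socket.pipe(repl).pipe(socket);

      // Without these handlers an unhandled 'error' event kills the process.
      socket.on('error', retry);
      repl.on('error', retry);
      socket.on('close', retry);

      var retried = false;
      function retry(err) {
        if (retried) return;
        retried = true;
        if (err) console.error('replication error', err);
        setTimeout(connect, 1000); // back off, then reconnect
      }
    }

    connect();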
10:48:58  <kenansulayman>true
10:49:00  <dominictarr>you should have posted an issue on level-replicate
10:49:13  <kenansulayman>Yes sorry it's quite an up and down here currently
10:49:51  <dominictarr>one of the hardest things about writing modules is that you never really know if anyone is using it or not
10:50:04  <dominictarr>it's really helpful to get issues
10:50:24  <dominictarr>and discussions about how something is actually being used.
10:50:32  <kenansulayman>I'm following juliangruber and checking out everything new level-ish
10:50:50  <dominictarr>yes, but the discussion needs to go both ways :)
10:51:22  <kenansulayman>you're right; we're just crazy guys and build stuff immediately ourselves if it's not there yet ;)
10:51:42  <dominictarr>that is good! just publish it too!
10:52:19  <kenansulayman>That's hard since our investor relations don't cheer too much if we publish too much OSS-ish
10:52:45  <dominictarr>you don't need to publish the whole thing
10:52:55  <kenansulayman>we got some stuff online already https://github.com/Legify
10:53:16  <kenansulayman>let's see if I can box out other stuff too
10:53:38  <dominictarr>like, don't publish the whole thing, but when you have separable parts published
10:53:45  <dominictarr>that is very good
10:53:58  <dominictarr>it's like, having extra devs that work for free
10:54:14  <dominictarr>your business value is really your users
10:54:29  <kenansulayman>that's true, if people really participate in your oss-projects
10:54:37  <dominictarr>how it's constructed is the easy part
10:55:19  <dominictarr>there are two approaches to make that happen 1) write modules other people want to use, and 2) use modules that other people have written.
10:55:50  <dominictarr>ideally you want both - to use someone else's modules, and them to use yours
10:56:02  <dominictarr>then you have a mutually beneficial relationship.
10:56:09  <kenansulayman>It's just very risky to "just" use other people's modules
10:56:18  <kenansulayman>like the express thing
10:56:27  <kenansulayman>you've got no idea what's happening
10:56:30  <kenansulayman>or socket.io
10:56:33  <dominictarr>you mean, with out modifying them?
10:56:47  <kenansulayman>no it's too much sugar and overhead
10:57:01  <kenansulayman>That's why we built Absinthe & echtzeit
10:57:10  <dominictarr>right.
10:57:18  <dominictarr>yeah, I try to avoid too much sugar etc
10:57:43  <dominictarr>generally the leveldb and stackvm people do too, I think
10:57:51  <kenansulayman>And now think of an exploit kicking off
10:58:08  <kenansulayman>suddenly every express / socket.io installation can instantly be targeted
10:58:29  <kenansulayman>that'd be a total fuckup
10:58:34  <dominictarr>right - especially if you don't understand what is really happening.
10:59:15  <dominictarr>on the other hand, you'll have a huge bunch of people who are motivated to fix the hole.
10:59:48  <kenansulayman>That "huge bunch of people" is utopian most of the time
11:00:42  <dominictarr>well, it makes it more likely that there will be one person who gives a damn.
11:01:30  <kenansulayman>well yes
11:01:40  <dominictarr>anyway,
11:02:31  <dominictarr>yeah so there are two approaches - sharding and replication
11:03:00  <kenansulayman>How fail-proof is sharding with level?
11:03:00  <dominictarr>which can be used in a variety of combinations
11:03:18  <kenansulayman>I mean can each shard be replicated?
11:03:41  <dominictarr>but I think generally, designs go for replication first
11:04:07  <kenansulayman>that's true, I just fear making the db too huge
11:04:45  <dominictarr>so you could replicate your entire database (probably don't need to do that for the archives though)
11:05:13  <kenansulayman>and there's only one real way of spreading a leveldb partially across different clusters: hdfs
11:05:48  <kenansulayman>I don't like that idea of having the "one" huge database grow and grow
11:06:35  <kenansulayman>Since it has to be either raid-1+0-ed at some point or be put onto a distributed fs
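For the sharding half, the usual building block is deterministic key-to-shard routing; a minimal sketch over several level instances (hash choice and shard count are arbitrary, and range reads would then have to merge results across shards):

    var level = require('level');
    var crypto = require('crypto');

    var N = 4;
    var shards = [];
    for (var i = 0; i < N; i++) shards.push(level('./db/shard-' + i));

    // Hash the key so the same key always lands on the same shard.
    function shardFor(key) {
      var h = crypto.createHash('md5').update(String(key)).digest();
      return shards[h.readUInt32BE(0) % N];
    }

    function put(key, value, cb) { shardFor(key).put(key, value, cb); }
    function get(key, cb) { shardFor(key).get(key, cb); }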
11:06:53  <dominictarr>hmm, so I'm guessing that the entity database would be the heaviest used, is that correct?
11:07:40  <kenansulayman>well it can be optimized, but the whole user is currently a JSON blob in the root database
11:08:19  <kenansulayman>I think that can be subleveled
11:08:56  <kenansulayman>but that's really complicated and hurting my brain right now
11:09:02  <dominictarr>brb
11:09:04  <kenansulayman>k
11:10:44  * ralphtheninja quit (Quit: Lost terminal)
11:12:15  * dominictarr quit (Read error: Connection reset by peer)
11:23:05  * dominictarr joined
11:24:22  <kenansulayman>wb
11:27:51  * dominictarr quit (Ping timeout: 260 seconds)
11:36:53  * alanhoff joined
11:38:36  * alanhoff89 quit (Ping timeout: 276 seconds)
11:55:24  * Acconut joined
11:56:07  * No9 joined
11:57:23  * Acconut quit (Client Quit)
12:21:11  * dominictarr joined
12:22:33  <kenansulayman>dominictarr back?
12:22:48  <dominictarr>kenansulayman: sorry, my network died (am on 3g here)
12:22:56  <kenansulayman>eww
12:23:36  <kenansulayman>I see
12:25:23  <dominictarr>kenansulayman: anyway, can you tell me about your DHT thing?
12:25:47  <kenansulayman>well we initially did it for scaling considering sessions
12:27:17  <kenansulayman>What specifically?
12:29:20  <dominictarr>just in general -
12:29:53  <dominictarr>do you know what parts of your app are the heaviest?
12:30:06  <dominictarr>I thought you said the sessions where in memory?
12:30:19  <kenansulayman>the DHT was an experiment
12:30:50  <kenansulayman>do you mean heavy in terms of memory, space or cloc?
12:30:52  <kenansulayman>loc*
12:31:25  <dominictarr>in terms of being the bottleneck that is forcing you to think about scaling
12:32:18  <dominictarr>kenansulayman: would it be possible to publish the dht experiment - there isn't one of those in the level-* ecosystem yet, really
12:32:20  <kenansulayman>well we have to think
12:32:33  <kenansulayman>I will talk with the team about that
12:32:43  <kenansulayman>but the scaling thing isn't really a technical thing
12:32:55  <kenansulayman>we have to do that since that's what we told the investors lol
12:33:22  <kenansulayman>(joke); no seriously, we have to process a HUGE amount of data
12:34:16  <kenansulayman>Echtzeit is behind everything; we have a realtime-collaboration editor where every character that is sent produces several transactions (4);
12:34:29  <dominictarr>aha, into the database?
12:35:02  <kenansulayman>yes. also every character produces a Diff/Match/Patch output which is sent to every connected client (of the document)
12:35:19  <dominictarr>oh, interesting.
12:35:45  <dominictarr>what scheme are you using for realtime editing? ot?
12:35:50  <kenansulayman>ot?
12:35:57  <dominictarr>operational transform
12:36:19  <kenansulayman>Well DMP implements OR
12:36:22  <kenansulayman>ot
12:36:39  <dominictarr>whats OR?
12:37:31  <kenansulayman>typo for OT ;)
12:37:50  <kenansulayman>but we're migrating to https://tools.ietf.org/html/rfc3284
12:37:58  <kenansulayman>It's more lightweight
12:38:43  <kenansulayman>if you're interested: http://jsperf.com/dmp-vs-vcdiff
12:39:46  <kenansulayman>It's actually really complicated to talk about the scheme we're using, since the editor isn't plaintext
12:40:24  <kenansulayman>Every character requires the document to be plain-texted, parsed and the patch to be created; this is then applied to the server's version and pushed to each client
12:41:06  <dominictarr>do you send the whole document, or just the change?
12:41:06  <kenansulayman>and on each client (different from the origin) this has to be transformed from plain back to html
12:41:11  <kenansulayman>just the patch
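The flow described here, sketched with the standard diff-match-patch API (patch_make / patch_toText / patch_fromText / patch_apply); the npm module name is an assumption, and this is not necessarily how Legify actually wires it up:

    var DiffMatchPatch = require('googlediff'); // assumed npm port of Google's diff-match-patch
    var dmp = new DiffMatchPatch();

    var serverText = 'hello world';

    // Client side: turn a local edit into a patch and send only the patch.
    function makePatch(before, after) {
      return dmp.patch_toText(dmp.patch_make(before, after));
    }

    // Server side: apply the incoming patch to the canonical text,
    // then push the same patch text out to every other client on the document.
    function applyPatch(patchText) {
      var patches = dmp.patch_fromText(patchText);
      var result = dmp.patch_apply(patches, serverText);
      serverText = result[0]; // result is [newText, perPatchSuccessFlags]
      return result[1];
    }

    applyPatch(makePatch('hello world', 'hello brave world'));
    // serverText is now 'hello brave world'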
12:41:39  <dominictarr>right - do you have to do it on each character, or could you do it on each word? or group the recent changes together?
12:41:55  <kenansulayman>the problem is that it has to look like realtime
12:41:56  <kenansulayman>:)
12:42:09  <kenansulayman>what we could do is delay with timeout
12:42:20  <kenansulayman>like 2 seconds
12:42:23  <dominictarr>sure, but like, irc is still realtime, even though I type a whole word
12:42:28  <dominictarr>a whole line I mean.
12:42:32  <kenansulayman>It's packet realtime
12:42:39  <kenansulayman>remember Google Wave?
12:42:46  <kenansulayman>It had per-character-realtime
12:43:10  <kenansulayman>whereas every entity is updated in realtime (sentence et al.)
12:43:11  <dominictarr>I've done collaborative editing with google docs
12:43:27  <kenansulayman>Docs uses OT, yes
12:43:37  <dominictarr>and implemented a collaborative editing thing too.
12:43:43  <kenansulayman>But even then they're transferring field by field
12:43:52  <dominictarr>https://npm.im/r-edit
12:43:59  <kenansulayman>one moment
12:44:39  <dominictarr>that uses a different approach to OT, it's commutative.
12:44:57  <dominictarr>not suggesting that you use that though
12:45:13  <dominictarr>it's an experiment
12:45:23  <kenansulayman>quite a bunch of modules it uses :)
12:45:45  <dominictarr>but, I'd bet you that google wave wasn't always per character
12:45:58  <dominictarr>if the network was fast it probably was
12:46:06  <kenansulayman>well we talked to a google wave guy
12:46:12  <dominictarr>but when things get busy i bet it degraded
12:46:19  <dominictarr>gracefully
12:46:44  <kenansulayman>and he also released a RT collab tool, mom
12:47:02  <dominictarr>this guy worked on google wave: http://sharejs.org/
12:47:17  <kenansulayman>yes this guy I'm talking about
12:47:47  <kenansulayman>Joseph I think
12:47:53  <dominictarr>thats it
12:48:26  <kenansulayman>I published an example once on that
12:48:27  <dominictarr>maybe you could fake the one character at a time thing
12:48:32  <kenansulayman>yes
12:48:40  <dominictarr>but just inserting text so it looks like someone is typing
12:48:43  <kenansulayman>let me show you a demo (one moment)
12:48:50  <dominictarr>sure
12:56:34  * pgte joined
12:56:39  <kenansulayman>http://app.greensto.re/
12:56:54  <kenansulayman>dominictarr sorry had to do some refactor on it ;)
12:57:07  <pgte>dominictarr: here :)
12:57:27  <dominictarr>pgte: that if(data.value) thing is meant to be there
12:57:52  <dominictarr>that is the job itself - so when the job is deleted that means the job is complete
12:58:17  <dominictarr>the other thing, is the source data, that triggers the job.
12:59:02  <kenansulayman>^for me or pgte?
12:59:24  <dominictarr>pgte
12:59:44  <pgte>dominictarr: ok, I agree that if (data.value) guard on the doJob is correct
12:59:49  <dominictarr>kenansulayman: pgte and I are discussing this https://github.com/dominictarr/level-trigger/pull/5
13:00:25  <dominictarr>pgte: why are you deleting the record?
13:00:35  <kenansulayman>I see
13:01:12  <pgte>dominictarr: for house cleaning. I could do that after the work is done...
13:02:53  <pgte>dominictarr: I just don't understand why the doHook has to react to dels...
13:03:17  <dominictarr>pgte: well, I use that in some situations.
13:03:24  <dominictarr>like in a map reduce
13:03:45  <dominictarr>if you delete a partial reduce you have to recalculate the reduction that it is a part of.
13:05:15  <pgte>dominictarr: ok, but the doJob still doesn't process deletes (line 41)
13:05:36  <dominictarr>that is a different value
13:05:50  <pgte>correct
13:06:07  <pgte>dominictarr: ok, understood, thanks!
13:06:07  <dominictarr>when data is inserted into the main db, that triggers a "todo" item to be saved in the jobs database
13:06:33  <dominictarr>when that todo is deleted, that means it's complete
13:06:54  <dominictarr>but when the source is deleted… the user has to tell trigger what that means.
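The shape of what is being described, sketched with two plain level instances rather than level-trigger's actual API, just to make the semantics visible: a write to the main db enqueues a todo in the jobs db, and deleting the todo is what marks the job complete:

    var level = require('level');
    var main = level('./db/main');
    var jobs = level('./db/jobs');

    // Every write to the main db also records a pending job.
    function putAndTrigger(key, value, cb) {
      main.put(key, value, function (err) {
        if (err) return cb(err);
        jobs.put(key, JSON.stringify({ source: key }), cb);
      });
    }

    // Worker: process a pending job, then delete it to mark it complete.
    function doJob(key, work) {
      jobs.get(key, function (err, job) {
        if (err) return; // already done, or never queued
        work(JSON.parse(job), function (err) {
          if (err) return; // leave the todo in place so it gets retried
          jobs.del(key, function () {});
        });
      });
    }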
13:07:30  <pgte>dominictarr: yup, got it, makes sense.
13:07:32  <dominictarr>pgte: are you inserting data just to trigger a job? or is it data that is used for something else
13:07:54  <pgte>dominictarr: just to trigger the job,
13:08:01  <dominictarr>aha.
13:08:05  <pgte>so that it's automatically retried, etc.
13:08:20  <dominictarr>what sort of thing is the job?
13:08:43  <pgte>it's an id, a github repo and a commit sha1
13:08:58  <dominictarr>kenansulayman: the latency seems to stack up when I type really fast.
13:09:06  <kenansulayman>dominictarr https://github.com/KenanSulayman/DMP-Realtime-Collaboration
13:09:28  <kenansulayman>This is because your client spends a lot of time generating the patches per character
13:09:37  <kenansulayman>I could switch to vcdiff, but that's a todo
13:15:25  <rvagg>pgte: I haven't looked at substack's PRs yet but it sounds like he's stepping on your writestream territory, you'd better get involved there since you've had your head in there for a while and have a major change waiting to land
13:15:26  * dominictarr quit (Ping timeout: 240 seconds)
13:15:34  * pgte quit (Remote host closed the connection)
13:29:14  * dominictarr joined
13:31:55  <levelbot>[npm] search-index@… <http://npm.im/search-index>: A text search index module for Node.js. Search-index allows applications to add, delete and retrieve documents from a corpus. Retrieved documents are ordered by tf-idf relevance, filtering on metadata, and field weighting (@fergie)
13:38:17  <kenansulayman>dominictarr back on non-3g?
13:38:59  <kenansulayman>I see threembb.ie is 3g :)
13:42:15  <dominictarr>hmm, yes. 3g. but normally it's more stable than this.
13:51:41  * jcrugzz quit (Ping timeout: 246 seconds)
14:05:35  * pgte joined
14:08:40  <kenansulayman>dominictarr http://jsperf.com/dmp-vs-vcdiff/2
14:08:54  <kenansulayman>worked on it a bit; my vcdiff vs google's dmp ;)
14:14:44  <dominictarr>but that is just a cpu problem - isn't this a network thing?
14:15:07  <dominictarr>oh 100% faster, thats good.
14:17:49  * pgte quit (Remote host closed the connection)
14:18:39  * jcrugzz joined
14:20:33  <juliangruber>kenansulayman: I'm in berlin next month, we should meet up!
14:21:07  <kenansulayman>Open for every coffee out there!
14:21:37  <juliangruber>sweet :)
14:21:52  <juliangruber>legify looks good!
14:22:02  <juliangruber>i wish i could search already
14:22:58  <kenansulayman>well yes, it's quite some hard work to get everything running and working well as a startup :)
14:24:22  <kenansulayman>We're trying to get it up and running asap
14:26:24  * jcrugzz quit (Ping timeout: 268 seconds)
14:28:17  <kenansulayman>afk, lunch / dinner whatever
14:28:45  * kenansulayman changed nick to apexpredator
14:28:53  * dominictarr_ joined
14:29:16  * apexpredator changed nick to kenansulayman
14:29:22  * kenansulayman quit (Quit: Textual IRC Client: www.textualapp.com)
14:29:33  * kenansulayman joined
14:32:05  * dominictarr quit (Ping timeout: 245 seconds)
14:32:06  * dominictarr_ changed nick to dominictarr
14:36:00  * ralphtheninja joined
14:42:02  <kenansulayman>back
14:44:44  <kenansulayman>juliangruber you're from munich, right?
14:45:15  <juliangruber>kenansulayman: I lived in bavaria for a long time but recently started spending more time in berlin
14:46:06  <kenansulayman>http://juliangruber.com/ :(
14:46:22  <kenansulayman>berlin > bavaria anyway ;)
14:47:51  <juliangruber>kenansulayman: what's bad about that page?
14:48:05  <juliangruber>didn't really update it
14:48:08  <kenansulayman>502 Bad Gateway — nginx
14:48:09  <kenansulayman>for me
14:48:28  <juliangruber>omg
14:48:31  <kenansulayman>http://data.sly.mn/QtRU
14:48:40  <juliangruber>http://www.downforeveryoneorjustme.com/http://juliangruber.com
14:48:50  <juliangruber>wtf
14:48:59  <juliangruber>can you do a traceroute?
14:49:30  <kenansulayman>well of course.. it's an error delivered over http... :)
14:50:30  <kenansulayman>one moment
14:50:36  <juliangruber>kenansulayman: i'll replace that site now anyways
14:50:38  <kenansulayman>rackspace.net?
14:50:58  <juliangruber>since i'm not looking for a job anymore this should just have links to github and twitter
14:51:12  <kenansulayman>http://data.sly.mn/QtJP
14:51:15  <juliangruber>kenansulayman: i think it's github pages
14:51:31  <kenansulayman>uhm
14:51:32  <kenansulayman>https://s3.amazonaws.com/f.cl.ly/items/2h3X2y0u1F1L2Y2f1C07/Text%202013.08.18%2016%3A51%3A09.txt
14:52:26  <kenansulayman>s3 for pages ftw
14:53:33  <juliangruber>kenansulayman: that's what i'm getting too
14:53:45  <juliangruber>probably just github being weird
14:53:45  <kenansulayman>because it's a server error :)
14:54:19  <kenansulayman>juliangruber this is over US: http://data.sly.mn/QtTx
14:54:26  <kenansulayman>so I guess it's the Berlin backbone
14:54:38  <juliangruber>yup
14:54:39  <juliangruber>:D
14:54:40  <juliangruber>wtf
14:54:42  <juliangruber>berlin :D
14:54:49  <kenansulayman>haha
14:55:50  <kenansulayman>BoerseGo I see
14:56:07  <juliangruber>kenansulayman: do you know them?
14:56:17  <kenansulayman>No but now I do :D
14:59:31  <juliangruber>kenansulayman: I used to work for them
14:59:37  <juliangruber>awesome guys, but still so much php
14:59:49  <kenansulayman>That's my word
15:00:07  <kenansulayman>I can't code PHP w/o crying anymore
15:02:19  <kenansulayman>Where're you at now?
15:08:45  <kenansulayman>juliangruber
15:09:02  <juliangruber>kenansulayman: can't tell you right now :P
15:09:17  <juliangruber>on a secret mission to providing awesomeness to the whole world
15:09:20  <kenansulayman>ah ok ;) Legify supports NDAs from the start btw :D
15:09:44  <juliangruber>i hope so :)
15:09:49  <kenansulayman>Ah yes, I know that mission
15:10:01  <kenansulayman>Gave me a 100k experience points ;)
15:15:20  <juliangruber>:)
15:23:21  * jcrugzz joined
15:26:28  * mikeal quit (Quit: Leaving.)
15:26:43  <kenansulayman>juliangruber why are you browserify'ing everything? had a look at it but its concept is alien to me
15:26:56  <kenansulayman>and it looks like a deep rape of npm
15:27:10  <juliangruber>kenansulayman: with browserify you can use a lot of node code
15:27:16  <juliangruber>so it's easier to write platform independent code
15:27:21  <kenansulayman>You can without, too
15:27:24  <juliangruber>e.g. multilevel works on node and browser
15:27:30  <kenansulayman>hm
15:27:32  <juliangruber>I hate those require shims
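The point about platform-independent code: an ordinary CommonJS module runs under node as-is and in the browser after a browserify build, with no hand-written require shims. A minimal example (file names are arbitrary):

    // add.js - a plain node-style module
    module.exports = function add(a, b) { return a + b; };

    // main.js - works in node directly, and in the browser once bundled
    var add = require('./add.js');
    console.log(add(2, 3));

Bundling for the browser is then a single command: browserify main.js -o bundle.js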
15:28:00  <kenansulayman>for production I think require is a pain in the ass
15:28:24  <juliangruber>why?
15:28:41  <juliangruber>for * I think require is a blessing
15:28:43  <juliangruber>:)
15:29:04  <kenansulayman>ya well for node
15:29:22  <kenansulayman>but it makes reverse engineering extremely easy
15:29:30  * timoxley_ quit (Remote host closed the connection)
15:29:41  <juliangruber>I don't care about frontend code theft
15:29:51  <juliangruber>if someone wants to do it...
15:29:57  <juliangruber>she can do it anyways
15:30:32  <kenansulayman>security through obscurity is most of the time an acceptable approach imho..
15:30:50  <kenansulayman>(as long as real security is in place though)
15:30:53  <juliangruber>kenansulayman: but if you don't need that level of security? :P
15:30:58  <kenansulayman>well ok then
15:31:00  <juliangruber>and what do you mean by security?
15:31:05  <juliangruber>code theft != security
15:31:22  <kenansulayman>I don't target code theft specially
15:31:33  <kenansulayman>I mean reverse engineering to target data theft
15:31:49  <juliangruber>people should reverse engineer all my code
15:31:51  <kenansulayman>like finding vectors for CSRF
15:32:02  <kenansulayman>etc
15:32:18  <kenansulayman>brb cake with gf
15:32:25  <juliangruber>maybe I have that opinion because I didn't have a problem with that yet
15:32:29  <juliangruber>kenansulayman: njoy!
15:40:35  * thlorenz joined
15:52:01  * mcollina joined
15:55:30  <juliangruber>kenansulayman: I redeployed my site, does it work for you now?
15:59:21  <kenansulayman>wait
15:59:57  <kenansulayman>juliangruber :(
16:00:04  <juliangruber>meh
16:00:15  <kenansulayman>Maybe a whole backbone outage
16:00:33  <kenansulayman>which wouldn't be fixable by the data itself :(
16:00:34  * timoxley joined
16:00:42  <kenansulayman>let me check
16:00:57  <kenansulayman>yup
16:00:57  <kenansulayman>204.232.175.78
16:01:04  <kenansulayman>There isn't a GitHub Page here.
16:01:17  <kenansulayman>That's an nginx misconfig, just ping the support once
16:02:31  * mcollina quit (Remote host closed the connection)
16:02:58  * mcollina joined
16:03:50  <kenansulayman>[17:32:26] <juliangruber> maybe I have that opinion because I didn't have a problem with that yet
16:04:09  <kenansulayman>ya well it's quite a deep shitload if you experience it the first time ;)
16:04:38  * timoxley quit (Ping timeout: 240 seconds)
16:05:09  * timoxley joined
16:05:09  <juliangruber>kenansulayman: do you want to tell me what caused it for you?
16:05:18  <kenansulayman>oh yes
16:05:48  <kenansulayman>an overly funny QA tester who managed to create a mail-able page which would delete user accounts on opening
16:06:17  <kenansulayman>with a simple CSRF trick and an Audio-element
16:06:34  * timoxley quit (Read error: Connection reset by peer)
16:06:40  <kenansulayman>which btw is what we're using for tracking users :D
16:06:47  * timoxley joined
16:07:05  <kenansulayman>new (Audio||Image)("i")
16:07:16  <kenansulayman>makes a request to /i
16:07:28  * mcollina quit (Ping timeout: 256 seconds)
16:07:33  <kenansulayman>but since for a request forgery you don't need to receive data, you can pull that off with anything
16:08:01  <kenansulayman>new (Audio||Image)("fake.domain.com/deleteUser/?lol-I-deleted-you")
16:08:27  <kenansulayman>that wouldn't even appear in the console with XHR logging turned on :)
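What the Audio/Image trick exploits is that any GET the victim's browser can be induced to make carries the victim's cookies; the usual defenses are to never mutate state on GET and to require a token the attacking page cannot read. A bare-bones sketch with node's http module (routing and cookie parsing deliberately simplified):

    var http = require('http');

    var sessions = {}; // sessionId -> { csrfToken: '...' }, populated at login

    http.createServer(function (req, res) {
      // 1) Never mutate state on GET: <img>/<audio> beacons can only issue GETs.
      if (req.url.indexOf('/deleteUser') === 0 && req.method !== 'POST') {
        res.writeHead(405);
        return res.end();
      }

      // 2) Require a per-session CSRF token that the page's own JS sends explicitly;
      //    a forged cross-site request has no way to read it.
      if (req.method === 'POST') {
        var sid = (req.headers.cookie || '').replace(/^sid=/, '');
        var sess = sessions[sid];
        if (!sess || req.headers['x-csrf-token'] !== sess.csrfToken) {
          res.writeHead(403);
          return res.end('bad csrf token');
        }
      }

      res.end('ok'); // real routing would go here
    }).listen(8080);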
16:09:16  * pgte joined
16:11:04  <kenansulayman>juliangruber gotta go. copy?
16:11:26  * timoxley quit (Ping timeout: 256 seconds)
16:11:44  <kenansulayman>dominictarr thanks again btw
16:11:48  <juliangruber>mmh
16:12:03  <dominictarr>kenansulayman: no, thank you!
16:12:46  <kenansulayman>I'll try to make the responsible guy kick that dht implementation out, looking forward to code some stuff with you guys :)
16:13:09  <juliangruber>yes that'll be awesome!
16:14:03  <kenansulayman>cu
16:14:09  * mcollina joined
16:14:13  * kenansulayman quit (Quit: ∞♡∞)
16:18:48  * mcollina quit (Remote host closed the connection)
16:19:16  * mcollina joined
16:23:50  * thlorenz quit (Remote host closed the connection)
16:24:09  * mcollina quit (Ping timeout: 264 seconds)
16:24:38  * mikeal joined
16:34:57  * dominictarr quit (Ping timeout: 264 seconds)
16:36:36  * mcollina joined
16:37:38  * timoxley joined
16:37:46  * mcollina quit (Remote host closed the connection)
16:38:13  * mcollina joined
16:39:04  * timoxley quit (Read error: Connection reset by peer)
16:39:17  * dominictarr joined
16:42:12  * mikeal quit (Quit: Leaving.)
16:42:45  * mcollina quit (Ping timeout: 248 seconds)
16:49:50  * timoxley_ joined
16:51:08  * timoxley_ quit (Read error: Connection reset by peer)
16:55:59  * timoxley_ joined
17:00:21  * timoxley_ quit (Ping timeout: 248 seconds)
17:00:34  * thlorenz joined
17:00:35  * timoxley_ joined
17:01:53  * timoxley_ quit (Read error: Connection reset by peer)
17:02:15  * timoxley_ joined
17:03:38  * timoxley_ quit (Read error: Connection reset by peer)
17:05:15  * timoxley_ joined
17:09:26  * timoxley_ quit (Ping timeout: 240 seconds)
17:11:08  * dominictarr_ joined
17:13:40  * dominictarr quit (Ping timeout: 260 seconds)
17:13:40  * dominictarr_ changed nick to dominictarr
17:20:26  <prettyrobots>?
17:20:33  <prettyrobots>I'm writing an add-on.
17:20:57  <prettyrobots>What add-ons have you written that you're proud of, that aren't too big, that I can look at to see how it's done?
17:22:26  * mikeal joined
17:32:04  * wilmoore-db joined
17:35:20  * thlorenz quit (Remote host closed the connection)
17:37:22  * mikeal quit (Quit: Leaving.)
17:47:36  * mikeal joined
17:56:25  <levelbot>[npm] npmd@… <http://npm.im/npmd>: distributed npm client (@dominictarr)
17:57:57  * mikeal quit (Quit: Leaving.)
18:11:42  * mikeal joined
18:25:25  * dominictarr quit (Ping timeout: 245 seconds)
18:47:37  <mikeal>oh man
18:47:54  <mikeal>i'm already using 4 different databases to get the data for this nodeconf.eu talk
18:47:59  <mikeal>it is absurd
18:49:05  * dominictarr joined
18:57:21  * dominictarr quit (Ping timeout: 276 seconds)
19:05:08  <chapel>mikeal: is requestdb meant to be used clientside as well?
19:05:14  <chapel>wondering why you are using lodash
19:08:12  <mikeal>i always use lodash
19:08:21  <mikeal>deep clone comes in handy
19:08:46  <mikeal>it has a couple extra methods that make it worth it
19:09:00  <mikeal>i'm still waiting on them to publish every method as an npm module tho :)
19:09:02  * dominictarr joined
19:09:40  <chapel>mikeal: was just wondering, you use each and what not as well
19:10:04  <chapel>no matter, was just curious
19:10:24  <chapel>mikeal: any plans to make requestdb do lru/ttl?
19:10:27  <mikeal>each and keys are just syntactically shorter than the ES5 methods :)
19:10:41  <chapel>lets all use coffeescript :P
19:10:54  <mikeal>chapel: i'm going to add a mode that will check etag and send if-modified headers
19:11:11  <chapel>ah thats a good idea
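A sketch of the conditional-request mode mikeal describes, using request's plain API with a cached etag; the in-memory cache shape is made up for illustration:

    var request = require('request');

    var cache = {}; // url -> { etag: ..., body: ... }

    function cachedGet(url, cb) {
      var entry = cache[url];
      var headers = {};
      if (entry && entry.etag) headers['If-None-Match'] = entry.etag;

      request({ url: url, headers: headers }, function (err, res, body) {
        if (err) return cb(err);
        if (res.statusCode === 304) return cb(null, entry.body); // unchanged, serve the cached body
        cache[url] = { etag: res.headers.etag, body: body };
        cb(null, body);
      });
    }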
19:13:52  * pgte quit (Remote host closed the connection)
19:18:12  * Acconut joined
19:18:22  * Acconut quit (Client Quit)
19:23:01  * mikeal quit (Quit: Leaving.)
19:27:54  <levelbot>[npm] search-index@… <http://npm.im/search-index>: A text search index module for Node.js. Search-index allows applications to add, delete and retrieve documents from a corpus. Retrieved documents are ordered by tf-idf relevance, filtering on metadata, and field weighting (@fergie)
19:35:37  * mikeal joined
19:55:34  * dominictarr quit (Quit: dominictarr)
19:55:55  * thlorenz joined
19:56:02  * thlorenz quit (Remote host closed the connection)
19:56:17  * thlorenz joined
20:46:41  * dominictarr joined
21:16:05  * thlorenz quit (Remote host closed the connection)
21:17:18  * thlorenz joined
21:29:50  * paulfryzel joined
21:48:42  * paulfryzel quit (Remote host closed the connection)
21:49:17  * wolfeidau quit (Remote host closed the connection)
21:49:19  * paulfryzel joined
21:49:28  * wolfeidau joined
21:51:26  <mbalho>any consensus on extension for leveldb folders? e.g. .db, .level, .leveldb, .lvl
21:51:30  <mbalho>i think we should use .lvl
21:51:55  <mbalho>imagine if github added automatic file browsing support to any .lvl folder for instance, it would be nice if we had a convention in place
21:53:26  * paulfryzel quit (Ping timeout: 240 seconds)
21:55:16  <dominictarr>mbalho: but does it make good sense to check leveldb into git?
21:55:33  <mbalho>sure it does
21:55:53  <mbalho>it wont be the most infinitely scalable thing in the world but people will do it anyway
21:56:53  <wolfeidau>A compaction phase will change every file though eh?
21:59:56  <dominictarr>yeah, it won't git well
22:00:12  <dominictarr>because people could end up with the same data but different files
22:04:06  <mbalho>really? git will corrupt a leveldb?
22:07:48  <dominictarr>no, it will work
22:08:12  <dominictarr>but it won't be diffable very well
22:08:34  <mbalho>right
22:08:49  <mbalho>people put all sorts of weird stuff on github
22:08:58  <mbalho>so i expect people will check in their leveldbs too
22:08:59  <dominictarr>because git snapshots every instance of every file
22:09:14  <dominictarr>it would be okay for small datasets
22:09:39  <dominictarr>but it would work better to have a git like schema for leveldb
22:10:11  <mbalho>~25,000 sqlite databases on github https://github.com/search?q=extension%3Asqlite&type=Code&ref=advsearch&l=
22:10:45  <mbalho>dominictarr: im not talking about optimal solutions, people dont usually use those in my experience :)
22:11:13  * kenansulayman joined
22:11:32  <dominictarr>neither, I'm just talking about sensible ones
22:11:43  <mbalho>its pretty sensible to put leveldb on github
22:11:45  <mbalho>because its free file hosting
22:11:46  <mbalho>and it syncs
22:11:47  <dominictarr>mbalho: you could detect that it's a leveldb anyway
22:11:49  <dominictarr>from the file names
22:11:54  <mbalho>dominictarr: good point
22:12:09  <dominictarr>mbalho: actually, I put a leveldb in a gist just the other day
22:13:46  * thlorenz quit (Remote host closed the connection)
22:19:01  <mikeal>hey
22:19:08  <mikeal>readStream doesn't appear to be pausing for me
22:23:33  <mikeal>i'm calling pause approximately 20K times
22:23:37  <mikeal>and it's still just going
22:28:53  <juliangruber>hm
22:29:17  <juliangruber>mikeal: can you share a little gist?
22:31:25  <mikeal>the code is pretty buried, but basically you just create a readStream over a large database that is writing more than it is reading on every row
22:31:48  <mikeal>i have some code that checks the number of pending writes and calls pause(), then calls resume() when writes succeed (no matter what)
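The backpressure pattern being described, sketched against the (2013-era) levelup readStream with a pending-write counter and high/low watermarks; thresholds are arbitrary, and tracking a paused flag avoids issuing a resume() for every single write callback:

    var level = require('level');
    var src = level('./db/src');
    var dst = level('./db/dst');

    var pending = 0, paused = false;
    var HIGH = 1000, LOW = 100;

    var rs = src.createReadStream();

    rs.on('data', function (row) {
      pending++;
      if (!paused && pending > HIGH) { paused = true; rs.pause(); }

      dst.put('copy:' + row.key, row.value, function (err) {
        if (err) console.error(err);
        pending--;
        if (paused && pending < LOW) { paused = false; rs.resume(); }
      });
    });

    rs.on('end', function () {
      console.log('read finished, waiting on', pending, 'writes');
    });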
22:34:15  <juliangruber>maybe insert some console.logs in lib/read-stream.js to check the paused state
22:35:13  <juliangruber>mikeal: there's a test for pausing https://github.com/rvagg/node-levelup/blob/master/test/read-stream-test.js#L43
22:36:14  <mikeal>for now, i'm just gonna re-write the code to pull a limit of 1000 and then loop over that
23:05:44  <dominictarr>mikeal: are you getting too many resumes?
23:07:22  <mikeal>maybe, i dunno
23:08:24  <dominictarr>that could be it.
23:11:18  * dominictarr quit (Quit: dominictarr)
23:37:24  * thlorenz joined
23:48:45  * thlorenz quit (Remote host closed the connection)
23:50:00  * thlorenz joined
23:54:10  * thlorenz quit (Ping timeout: 245 seconds)
23:55:58  <mikeal>wow
23:56:05  <mikeal>peekLast is REALLY slow sometimes