00:13:01  * jxsonquit (Remote host closed the connection)
00:19:38  <Raynos>I started working on an ODMy thing to move away from mongodb to leveldb by writing a generic data interfacey thing backed by either mongo or level
00:19:50  <Raynos>Has anyone else worked on anything to make migration from existing nosql databases easier?
00:20:55  * dominictarrquit (Quit: dominictarr)
00:25:44  <mbalho>sort of, im working on https://github.com/maxogden/dat
00:37:07  * Pwnnaquit (Read error: Operation timed out)
00:37:08  * dguttman_joined
00:37:30  * Pwnnajoined
00:37:47  * chrisdickinsonquit (Read error: Operation timed out)
00:37:54  * chrisdickinsonjoined
00:39:36  * dguttmanquit (Ping timeout: 256 seconds)
00:39:36  * dguttman_changed nick to dguttman
01:06:06  <Raynos>mbalho: I've got this thing going ( https://gist.github.com/Raynos/6302093 ), need to make it a real thing + tests, also need to make it simpler
01:08:38  <mbalho>Raynos: yea thats pretty similar to dat
01:08:50  <Raynos>cool
01:08:54  * mikealquit (Quit: Leaving.)
01:09:20  <mbalho>Raynos: my goal is to get people to write adapters that plug databases into a common tabular format
01:11:06  <Raynos>nice.
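The "generic data interface" idea Raynos and mbalho are circling here boils down to a thin adapter surface that either store can implement. A minimal sketch (my illustration, not Raynos's gist or dat; the function and module names are placeholders), assuming a levelup-backed store:

```js
// Hypothetical adapter shape: application code only ever sees get/put/
// createReadStream, so a mongo-backed adapter exposing the same three
// methods could be swapped in without touching callers.
var level = require('level');

function levelAdapter(db) {
  return {
    get: function (key, cb) { db.get(key, cb); },
    put: function (key, value, cb) { db.put(key, value, cb); },
    createReadStream: function (opts) { return db.createReadStream(opts); }
  };
}

var store = levelAdapter(level('./data'));
store.put('a', '1', function (err) {
  if (err) throw err;
  store.get('a', console.log);
});
```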
01:13:19  * jxsonjoined
01:17:26  * jxsonquit (Ping timeout: 240 seconds)
01:24:54  * esundahljoined
01:26:38  * jcrugzzjoined
01:36:20  * timoxleyjoined
01:58:42  * prettyrobotsjoined
01:59:04  * prettyrobotschanged nick to Guest84080
02:00:24  * Guest84080changed nick to prettyrobots
02:03:44  * timoxleyquit (Remote host closed the connection)
02:14:59  * timoxleyjoined
02:22:58  * i_m_cajoined
03:00:14  * i_m_caquit (Ping timeout: 264 seconds)
03:01:24  * i_m_cajoined
03:02:58  * jxsonjoined
03:18:54  <mbalho>rvagg: is it better to do the biggest leveldb batch possible for write throughput or is there a sweetspot
03:24:21  * timoxley_joined
03:26:14  * timoxleyquit (Ping timeout: 240 seconds)
03:43:02  * i_m_caquit (Ping timeout: 240 seconds)
03:43:12  * timoxleyjoined
03:45:50  * timoxley_quit (Ping timeout: 245 seconds)
03:55:58  * wolfeidauquit (Read error: Connection reset by peer)
03:56:26  * wolfeidaujoined
03:57:59  * timoxley_joined
04:00:36  * timoxleyquit (Ping timeout: 256 seconds)
04:05:55  * dguttmanquit (Quit: dguttman)
04:07:40  * timoxleyjoined
04:10:00  * timoxley_quit (Ping timeout: 245 seconds)
04:13:01  * dguttmanjoined
04:20:33  * mikealjoined
04:22:43  * timoxley_joined
04:24:26  * timoxle__joined
04:25:08  * timoxleyquit (Ping timeout: 245 seconds)
04:27:48  * timoxley_quit (Ping timeout: 256 seconds)
04:30:28  * esundahlquit (Remote host closed the connection)
04:35:19  <mbalho>anyone have a module for encoding an arbitrary integer as an efficient, lexicographic-sort-preserving ascii string
04:35:30  <mbalho>efficient meaning not zero padding
04:38:56  <mikeal>rvagg: i want to get a call together on levelup/sublevel best practices soon
05:02:43  * esundahljoined
05:06:09  * timoxleyjoined
05:09:14  * timoxle__quit (Ping timeout: 264 seconds)
05:10:11  <substack>mbalho: interesting problem!
05:12:12  <substack>mbalho: you might be able to do it like how unicode works with the high-bit spill-over
05:18:02  * jxsonquit (Remote host closed the connection)
05:27:40  * esundahlquit (Remote host closed the connection)
05:39:12  * mikealquit (Quit: Leaving.)
05:39:40  * mikealjoined
05:43:29  <mbalho>substack: i guess in the case of leveldb you wanna set an upper bound on supported integer length
05:44:25  <mbalho>substack: actually i am now thinking of lots of scenarios where you might not know the upper bound, nvm
05:46:14  <mbalho>are there any crazy hacks for inserting huge amounts of data
05:47:57  <mbalho>stock leveldb advertises something like 200k+ writes/second... though I wonder what the upper bound is when you have node in front of it
05:48:16  * alanhoffquit (Ping timeout: 246 seconds)
05:49:01  * alanhoffjoined
05:50:20  <mbalho>oh but i guess it kinda sucks with large values
05:53:04  * dguttmanquit (Quit: dguttman)
05:53:28  * timoxleyquit (Ping timeout: 245 seconds)
06:18:21  * jxsonjoined
06:23:16  * jxsonquit (Ping timeout: 264 seconds)
06:32:35  * julianduquequit (Quit: leaving)
06:42:12  <substack>mbalho: I've figured your puzzle partly out
06:43:59  <mbalho>substack: nice! my use case is autoincrementing ids for leveldb FYI
06:46:22  <substack>I'm using a unicode-style approach
06:49:58  <substack>this approach works from 0 through 33151, inclusive
06:50:20  <substack>generalizing it to n-bytes
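For readers following along: substack doesn't paste his encoding here, but one minimal sketch of a length-prefixed, sort-preserving scheme (my illustration, not his actual approach) works by prefixing the hex digits with a character that encodes their count, so bigger numbers always sort after smaller ones without zero padding:

```js
// Illustrative only: packInt/unpackInt are made-up names.
function packInt(n) {
  if (n < 0 || n % 1 !== 0) throw new Error('non-negative integers only');
  var hex = n.toString(16);
  // 'b' means 1 hex digit, 'c' means 2, ... so shorter strings sort first
  return String.fromCharCode(97 + hex.length) + hex;
}

function unpackInt(s) {
  return parseInt(s.slice(1), 16);
}

// string sort order matches numeric order:
// [ 'b1', 'ba', 'c10', 'cff', 'e1000' ]
console.log([1, 10, 16, 255, 4096].map(packInt).sort());
```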
07:49:37  * fb55joined
08:02:58  * dominictarrjoined
08:21:57  * mikealquit (Read error: Connection reset by peer)
08:22:14  * mikealjoined
08:29:30  * jcrugzzquit (Ping timeout: 264 seconds)
08:31:48  * mikealquit (Ping timeout: 245 seconds)
08:33:39  * mikealjoined
08:37:26  * timoxleyjoined
08:38:03  * mikealquit (Ping timeout: 245 seconds)
08:40:12  * mikealjoined
08:44:05  * mikealquit (Read error: Connection reset by peer)
08:44:24  * mikealjoined
08:56:01  * jcrugzzjoined
08:56:54  * fb55quit (Remote host closed the connection)
08:57:22  * fb55joined
09:00:50  * fb55_joined
09:02:14  * fb55quit (Ping timeout: 256 seconds)
09:04:17  * jcrugzzquit (Ping timeout: 246 seconds)
10:12:29  * timoxley_joined
10:15:14  * timoxleyquit (Ping timeout: 264 seconds)
10:22:12  * timoxleyjoined
10:24:43  * timoxley_quit (Ping timeout: 245 seconds)
10:41:58  <wolfeidau>man where did levelbot go :(
10:43:03  <wolfeidau>I released a new version of a leveldb thing and no dice
10:44:24  * mcollinajoined
11:25:00  * alanhoffquit
11:25:12  * alanhoffjoined
11:25:47  <alanhoff>dominictarr: are you there? Could we pm?
11:26:22  <dominictarr>alanhoff: sure - what do you want to discuss?
11:26:43  <alanhoff>nodegraph :)
11:27:12  <alanhoff>sry, levelgraph
11:27:31  <alanhoff>omg, that project is from mcollina
11:27:41  <mcollina>ahahaha
11:27:42  <mcollina>;)
11:27:45  <mcollina>WOOOOOOW
11:27:52  <alanhoff>hehehe so embarrassing
11:28:14  <alanhoff>mcollina would you mind some questions?
11:28:20  <mcollina>Ask ;)
11:28:38  <mcollina>If I disappear, just email me
11:28:50  <mcollina>(or tweet me)
11:29:01  <alanhoff>great
11:29:50  <alanhoff>Is is possible to add nodes with levelgraph and then create the paths (links/relation) later?
11:31:05  <alanhoff>*Is it
11:33:07  <mcollina>nope
11:33:36  <mcollina>but you can do it very quickly with a "dummy" link/predicate/relation
11:34:01  <alanhoff>What do you mean?
11:35:13  <alanhoff>I was playing around with multilevel this morning, that's why I had dominictarr in my mind :P
11:36:03  <dominictarr>alanhoff: actually, juliangruber wrote multilevel, I just helped
11:37:12  <alanhoff>omg, add one more fail to my list..
11:38:20  <mcollina>alanhoff: you can store a triple { subject: "mycoolid", predicate: "selfloop", object: "mycoolid" }
11:38:36  <mcollina>It's a node linking to itself
11:38:44  <mcollina>or
11:38:58  <mcollina>{ subject: "mycoolid", predicate: "exist", object: true }
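A rough sketch of the workaround mcollina is describing, using levelgraph on top of level (put signature as per the levelgraph README; the ids and db path are placeholders):

```js
var level = require('level');
var levelgraph = require('levelgraph');
var db = levelgraph(level('./graph-db'));

// insert the node up front with a placeholder "exist" triple...
db.put({ subject: 'mycoolid', predicate: 'exist', object: true }, function (err) {
  if (err) throw err;
  // ...and attach the real relations later, once they are known
  db.put({ subject: 'mycoolid', predicate: 'follows', object: 'otherid' }, function (err) {
    if (err) throw err;
  });
});
```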
11:40:50  * fb55_quit (Remote host closed the connection)
11:42:30  <alanhoff>mcollina: understood. One more question, about performance: the theory says that the query for 1 node takes the same amount of time as the query for 10000 nodes (a la neo4j), is this happening with levelgraph?
11:43:34  <dominictarr>alanhoff: mcollina I've been thinking that a good way to do it is to extract the graph from regular documents
11:44:05  <dominictarr>maybe instead of inserting the relations directly
11:44:14  <dominictarr>you just insert documents
11:44:33  <dominictarr>but you provide a function that finds the relations in the documents
11:44:50  <dominictarr>then, those get put into an index…
11:44:58  <dominictarr>best of both worlds?
11:45:15  <dominictarr>(I need this for cyphernet anyway)
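A hypothetical sketch of what dominictarr is proposing: documents go in as-is, and a user-supplied extract() function derives the triples for the graph index. All names here are illustrative; `docs` is a levelup instance and `graph` a levelgraph one.

```js
function indexDocument(docs, graph, doc, extract, cb) {
  docs.put(doc.id, JSON.stringify(doc), function (err) {
    if (err) return cb(err);
    // extract() returns an array of { subject, predicate, object } triples,
    // which levelgraph can insert in a single put
    graph.put(extract(doc), cb);
  });
}

// e.g. derive "follows" edges from a user document
function extractFollows(doc) {
  return (doc.follows || []).map(function (other) {
    return { subject: doc.id, predicate: 'follows', object: other };
  });
}
```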
11:45:24  <mcollina>alanhoff: I think you misunderstood something about databases. Loading X objects from disk and loading 1000X cannot take the same time.
11:46:38  <mcollina>alanhoff: they can have the same complexity (it can scale).
11:47:47  <mcollina>dominictarr: you can hack something similar to https://github.com/mcollina/levelgraph-jsonld
11:48:18  <mcollina>dominictarr: the principle is the very same, without the expansion of properties into URIs.
11:50:22  <dominictarr>mcollina: does levelgraph have batch?
11:50:36  <mcollina>yes
11:50:46  <dominictarr>mcollina: doing this is really annoying https://github.com/mcollina/levelgraph/blob/master/index.js#L1
11:50:57  <dominictarr>you made me click, and then click back!
11:51:09  <mcollina>Ah ok :)
11:51:32  <dominictarr>2x unnecessary network round trips
11:51:42  <mcollina>I'll fix it ;)
11:53:41  <alanhoff>mcollina: Well, I think it's a little different with graph databases, because the db already knows all possible paths, so it just returns all documents contained inside the requested path. Take a look at this video, it has a benchmark of neo4j and some relational db: http://youtu.be/UodTzseLh04?t=55m30s
11:54:23  <alanhoff>It shows the same amount of time for 1000 nodes and 1000000 nodes
11:55:56  <mcollina>alanhoff: no earbuds to listen, sorry (
11:57:13  <mcollina>alanhoff: as I understood without listening, the example states that the query time is almost independent from the DB size
11:57:34  <dominictarr>alanhoff: you probably can't fairly compare neo4j to sql for graph data
11:57:46  <dominictarr>because sql is so ill suited to graph data
11:57:54  <dominictarr>and neo4j is in memory
11:58:05  <mcollina>alanhoff: should be kind of the same for levelgraph. If you can do some benches :)
11:58:34  <dominictarr>so even a large traversal will be very fast - whereas sql has too many network roundtrips
11:58:56  <dominictarr>level can have a lot less latency, so it should be fairly fast
11:59:04  <dominictarr>but not as fast as memory
11:59:13  * fb55joined
11:59:27  <dominictarr>(however, maybe you could just load everything into memory?)
12:00:44  <mcollina>dominictarr: for batches https://github.com/mcollina/levelgraph#multiple-puts
12:01:06  * fb55quit (Remote host closed the connection)
12:03:21  <dominictarr>mcollina: I might have to make a pull request for levelgraph that makes it into a sublevel style plugin
12:03:39  <dominictarr>and make it support multilevel
12:03:53  <mcollina>dominictarr: please do so.
12:04:04  <alanhoff>dominictarr I already have it running on multilevel
12:04:18  <mcollina>I can add you as a contributor directly, so I can help
12:04:36  <dominictarr>alanhoff: what about db.join?
12:04:47  <mcollina>alanhoff: you are running it on top of multilevel, dominictarr wants to run it only on the server
12:04:57  <mcollina>and expose it on top of multilevel
12:05:02  <dominictarr>mcollina: yes
12:05:22  <alanhoff>That would be awesome :)
12:05:24  <dominictarr>adding it on top of multilevel will be slow, many roundtrips
12:05:59  <dominictarr>run multilevel on top of levelgraph -> only 1 round trip. fast!
12:06:03  <mcollina>;)
12:06:09  <mcollina>dominictarr: exactly
12:06:42  <mcollina>get rid of the index.js file if you want ;)
12:06:54  <alanhoff>Thanks guys, you helped me a lot :)
12:07:07  <dominictarr>okay, cool - I might do this this weekend
12:08:34  * thlorenzjoined
12:09:41  <mcollina>If I'm offline ping me via twitter or in a pull-request/GH issue
12:23:45  * alanhoffquit (Ping timeout: 245 seconds)
12:23:53  * alanhoffjoined
12:38:28  * fb55joined
12:45:00  * mcollinaquit (Read error: No route to host)
12:45:09  * mcollina_joined
13:08:07  * ednapiranhajoined
13:12:40  * thlorenzquit (Remote host closed the connection)
13:13:31  * fb55quit (Remote host closed the connection)
13:14:47  * kenansulaymanjoined
13:46:32  * chapelquit (Ping timeout: 260 seconds)
13:53:39  * chapeljoined
14:06:57  * fallsemojoined
14:15:27  * tmcwjoined
14:21:33  * jxsonjoined
14:26:16  * jxsonquit (Ping timeout: 264 seconds)
14:29:39  * mikealquit (Quit: Leaving.)
14:29:55  * mcollina_quit (Ping timeout: 264 seconds)
14:32:12  * eugenewarejoined
14:32:19  * eugenewarequit (Remote host closed the connection)
14:34:16  * mikealjoined
14:39:40  * mikealquit (Quit: Leaving.)
14:48:05  * dguttmanjoined
14:57:03  * esundahljoined
15:02:26  * mcollinajoined
15:02:50  * thlorenzjoined
15:11:01  * ednapiranhaquit (Remote host closed the connection)
15:11:28  * ednapiranhajoined
15:16:23  * redidasquit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
15:16:39  * ednapiranhaquit (Ping timeout: 276 seconds)
15:30:30  * ednapiranhajoined
15:31:22  * mikealjoined
15:35:37  * redidasjoined
15:44:10  * chapelquit (Ping timeout: 245 seconds)
15:48:49  * mikealquit (Quit: Leaving.)
15:51:23  * chapeljoined
15:54:35  * paulfryzeljoined
16:00:02  * tmcwquit (Remote host closed the connection)
16:00:17  * tmcwjoined
16:02:29  * ednapiranhaquit (Remote host closed the connection)
16:15:47  * thlorenzquit (Remote host closed the connection)
16:17:59  * timoxley_joined
16:20:55  * timoxleyquit (Ping timeout: 264 seconds)
16:21:05  * dguttmanquit (Quit: dguttman)
16:21:52  <redidas>So yesterday I was in here asking about levelup on windows and larger leveldb sizes
16:22:00  * mcollinaquit (Ping timeout: 268 seconds)
16:22:10  <redidas>and I made a quick little test to showcase what causes it to die on me
16:22:14  <redidas>https://gist.github.com/rickbergfalk/6309444
16:22:27  * jxsonjoined
16:22:35  <redidas>Is my node.js code leaky? Or is something else going on?
16:22:53  * jxsonquit (Read error: Connection reset by peer)
16:23:08  * jxsonjoined
16:23:30  <redidas>by 1.85 million records things slow to a halt, ram is at 2+ gbs at the time
16:24:04  <redidas>kind of strange really - the leveldb that I end up at time of death is only 50 mb
16:26:04  <kenansulayman>redidas What happens if you disable the LRU cache?
16:27:39  * jxsonquit (Ping timeout: 256 seconds)
16:28:28  <redidas>how would I go about doing that? set cachesize option to 0?
16:30:10  * paulfryzelquit (Remote host closed the connection)
16:30:46  * paulfryzeljoined
16:32:13  * mcollinajoined
16:33:12  <redidas>how would I go about doing that? set cachesize option to 0?
16:33:23  <redidas>sorry didn't mean to repeat that
16:33:35  * ednapiranhajoined
16:33:42  <redidas>I set it to 0 and ran again - no improvement at all
16:34:04  <kenansulayman>redidas Just do fillCache: false
16:34:22  <kenansulayman>https://github.com/rvagg/node-levelup#get, last paragraph
16:34:35  <dominictarr>redidas: you do it all within a single tick of the event loop
16:34:52  <dominictarr>you have to be async to give it a chance to empty the queue
16:35:02  * paulfryzelquit (Ping timeout: 240 seconds)
16:35:13  <dominictarr>oh hang on, now I understand your code
16:36:09  <redidas>should I call my putBatch() inside a setImmediate or whatever?
16:36:21  <dominictarr>redidas: no, it should be okay
16:36:22  <dominictarr>hmm
16:36:24  <kenansulayman>dominictarr I don't see an issue
16:36:36  <dominictarr>I wasn't reading the code right
16:37:07  <dominictarr>maybe try sync=true https://github.com/rvagg/node-levelup#options-1
16:37:19  <dominictarr>that means it will write to disk before it cbs
16:37:44  <dominictarr>might work, *fingers crossed*
16:38:11  <kenansulayman>dominictarr How would that cause a leak?
16:39:14  <dominictarr>well, maybe it's not a leak, maybe it's more like resource starvation
16:39:24  <dominictarr>the queue is filling faster than it drains?
16:39:40  <dominictarr>(just a hunch)
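The fix being floated here — pace the batches so the write queue can drain, optionally with sync: true — looks roughly like this (a sketch, not redidas's gist; the key/value shape, batch size and db path are made up):

```js
var level = require('level');
var db = level('./bulk-db');

var total = 2e6, batchSize = 1000, written = 0;

function putBatch() {
  if (written >= total) return console.log('done');
  var ops = [];
  for (var i = 0; i < batchSize; i++, written++) {
    ops.push({ type: 'put', key: 'key' + written, value: 'value' + written });
  }
  // only queue the next batch once this one has called back
  db.batch(ops, { sync: true }, function (err) {
    if (err) throw err;
    setImmediate(putBatch); // yield to the event loop between batches
  });
}

putBatch();
```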
16:40:02  <redidas>trying it now
16:40:22  <redidas>i'm at 1.6 million records, and ram has still ballooned to 2 gb
16:40:31  <redidas>lol and my inserts have come to a halt again
16:41:11  <dominictarr>redidas: in more realistic cases, it often builds memory and then drains
16:41:23  <dominictarr>redidas: can you post an issue?
16:41:30  <redidas>Sure
16:41:41  * ednapiranhaquit (Ping timeout: 248 seconds)
16:42:03  <redidas>which project should I post it under?
16:42:59  <dominictarr>rvagg/leveldown
16:43:11  * dominictarrquit (Quit: dominictarr)
16:46:59  * thlorenzjoined
16:52:41  * jcrugzzjoined
16:55:07  * thlorenzquit (Ping timeout: 240 seconds)
16:57:56  * tmcwquit (Remote host closed the connection)
17:01:11  <mbalho>dangit whered dominic go
17:03:32  <juliangruber>alanhoff: dominictarr added auth and plugins support to multilevel :)
17:06:08  * tmcwjoined
17:07:49  * dguttmanjoined
17:08:46  * ednapiranhajoined
17:11:42  <alanhoff>juliangruber: nice to know, I will check it out later
17:11:42  * mcollinaquit (Read error: Connection reset by peer)
17:11:58  * mcollinajoined
17:13:09  * jez0990_quit (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.)
17:13:59  * jez0990joined
17:14:03  <kenansulayman>juliangruber https://github.com/rvagg/node-leveldown/issues/56 any idea?
17:14:26  <juliangruber>kenansulayman: you have to rebuild leveldown
17:14:37  <juliangruber>cd node_modules/level/node_modules/leveldown && npm i
17:14:38  <juliangruber>i think
17:14:53  <kenansulayman>Let me give it a spin
17:16:08  <kenansulayman>juliangruber http://data.sly.mn/QyH1/Text%202013.08.22%2019%3A15%3A57.txt
17:16:31  <juliangruber>kenansulayman: did it successfully rebuild?
17:16:36  <kenansulayman>yes
17:16:40  <juliangruber>ehm
17:16:47  <juliangruber>rm -Rf node_modules && npm i
17:16:49  <juliangruber>try that
17:16:52  <juliangruber>if that doesn't work
17:17:04  <juliangruber>we have a centos problem
17:18:52  <juliangruber>maybe it is also level-msgpack
17:18:55  <kenansulayman>juliangruber http://f.cl.ly/items/220V2D3F3V0W2i3r360K/Text%202013.08.22%2019%3A18%3A44.txt
17:19:16  <kenansulayman>we're not deploying level-msgpack in the dev stage
17:20:27  * dominictarrjoined
17:21:09  <juliangruber>kenansulayman: maybe glibc is incompatible
17:21:20  <kenansulayman>hm
17:21:29  <kenansulayman>gcc?
17:21:44  <juliangruber>can you try 'git clone {levelup repo} && cd node-levelup && npm i && npm test' on that machine?
17:22:05  <kenansulayman>Well it DOES work when I do not use a database from OSX
17:22:06  <juliangruber>kenansulayman: that's just the compiler
17:22:29  <juliangruber>omg
17:22:32  <juliangruber>so
17:22:36  <juliangruber>you create a database in osx
17:22:39  <juliangruber>upload that
17:22:51  <juliangruber>and let it use that database folder from centos?
17:23:08  <kenansulayman>Pretty much yes, it contains test-users for the platform
17:23:38  <juliangruber>the way I do this is I create a boot script and execute that on the server
17:23:52  <juliangruber>that will fix your problem for now
17:24:04  <juliangruber>but it would be worth investigating what's causing that crash
17:24:16  <juliangruber>rvagg: maybe the level db folder isn't portable?
17:24:42  <kenansulayman>na
17:24:56  <kenansulayman>deleted the db and it doesn't work either
17:25:20  <kenansulayman>last time it worked when I fully recompiled node (latest release-branch, not trunk)
17:25:27  <kenansulayman>for two days or so
17:25:44  <kenansulayman>then it crippled
17:25:57  <juliangruber>eh
17:26:02  <juliangruber>so the problem is not the db?
17:26:07  <juliangruber>that you created in osx?
17:26:18  <kenansulayman>I thought so, but I just deleted it
17:26:23  <juliangruber>ok
17:26:24  <kenansulayman>and it fails nevertheless
17:26:29  <juliangruber>what node are you running?
17:26:46  <kenansulayman>v0.10.16 on deployment instances
17:26:52  <juliangruber>hm
17:27:06  <kenansulayman>let me upgrade to .17, mom
17:27:11  <juliangruber>where does it work and where does it not
17:27:23  <kenansulayman>OSX — our dev machines: perfectly
17:27:35  <kenansulayman>CentOS — :(
17:29:13  <kenansulayman>Ok just recompiled node (git fetch to v.0.10.17; make all; make install)
17:29:23  <kenansulayman>and reinstalled the modules, lets give it a spin
17:29:57  <kenansulayman>ewwww
17:30:12  * jxsonjoined
17:30:22  <kenansulayman>it works now, when it creates a new database
17:30:47  <kenansulayman>when I feed it the osx database, it won't work until node reinstall... lol?
17:32:35  <kenansulayman>good lord that makes no sense
17:32:38  * chapelquit (Ping timeout: 264 seconds)
17:35:17  <juliangruber>n'oh my god
17:35:21  <juliangruber>sounds bad
17:36:05  * mcollinaquit (Read error: Connection reset by peer)
17:36:10  * mcollina_joined
17:39:45  <kenansulayman>juliangruber 10 minutes OK — now: *** glibc detected *** node: free(): invalid next size (normal): 0x00000000015ec230 ***
17:40:55  <kenansulayman>This drives me crazy
17:40:59  <kenansulayman>We have to deliver this
17:41:19  <juliangruber>uh oh
17:41:51  <juliangruber>kenansulayman: how much mem do you have?
17:41:59  <kenansulayman>128 GB
17:42:05  <kenansulayman>32 Cores
17:42:14  <kenansulayman>3.2 Ghz
17:42:17  <juliangruber>128gb memory?
17:42:19  <kenansulayman>yes
17:42:47  <kenansulayman>for the start we're working with 4 dedicated 1&1 servers
17:42:49  <juliangruber>can you update glibc?
17:42:54  * mcollina_quit (Remote host closed the connection)
17:42:57  <kenansulayman>wait
17:43:08  <kenansulayman>yum upgrade ftw wait
17:44:12  <kenansulayman>no it's current
17:44:55  <juliangruber>oh you're using phantomjs
17:45:00  <juliangruber>might that be the problem?
17:45:07  <kenansulayman>let me check
17:45:30  <kenansulayman>nope
17:45:36  <kenansulayman>*** glibc detected *** node: free(): invalid pointer: 0x0000000002d13880 ***
17:46:27  <kenansulayman>glibc 2.12-1.107.el6_4.2
17:46:43  <kenansulayman>latest is 2.18
17:46:59  <kenansulayman>I'll compile it myself; yum is shiieeeet sometimes
17:47:10  <juliangruber>ehm
17:47:47  <juliangruber>be careful
17:47:52  <juliangruber>don't compile libc yourself
17:47:57  <juliangruber>that's such a core part of the system
17:47:59  <juliangruber>what about this
17:48:08  <hij1nx>kenansulayman: `yum install build-essential`?
17:48:14  <juliangruber>you get rid of those crappy 1&1 servers
17:48:15  <kenansulayman>long done
17:48:17  <juliangruber>and deploy to joyent
17:48:24  <kenansulayman>crappy? lol
17:48:28  <juliangruber>1&1 man
17:48:30  <hij1nx>kenansulayman: yeah, joyent, you can do `$gdb --args node <scrashing-script-name.js>`
17:48:38  <hij1nx>kenansulayman: then then type `run`
17:48:58  <kenansulayman>hij1nx And what does it bring?
17:48:59  <hij1nx>kenansulayman: when your script crashes, type `bt
17:49:29  <hij1nx>kenansulayman: you should be able to debug your memory problem with free()
17:49:52  <hij1nx>centos is stupid
17:50:10  <kenansulayman>whatever, I traced it down to leveldown
17:50:23  <juliangruber>did you do the gdb thing?
17:50:24  <kenansulayman>centos is cool
17:50:26  <kenansulayman>:D
17:50:34  * chapeljoined
17:50:35  <juliangruber>that's not an argument :P
17:50:36  <hij1nx>kenansulayman: http://www.youtube.com/watch?v=73XNtI0w7jA
17:50:49  <hij1nx>kenansulayman: centos is not a web server.
17:50:54  <juliangruber>kenansulayman: how much time do you have left?
17:51:01  <hij1nx>kenansulayman: centos is a middle-child of history
17:51:05  <hij1nx>;)
17:51:14  <kenansulayman>hij1nx that's evil
17:51:46  <hij1nx>smartos is the closest thing there is to a web server
17:51:55  <kenansulayman>juliangruber we're not directly bound, I just get shitstormed if something like that messes our sprint
17:52:06  <juliangruber>i understand
17:52:14  <juliangruber>so let's try to get it to run!
17:53:04  <kenansulayman>hij1nx Why should we try to get to C10M? that'd be a crazy horsefck to manage
17:53:22  <juliangruber>what's c10m?
17:53:31  <kenansulayman>10 million concurrent connections
17:53:50  <juliangruber>ok
17:53:51  <juliangruber>focus
17:53:51  <juliangruber>:D
17:53:53  <kenansulayman>… and I'm happy with C10K ;)
17:53:56  <juliangruber>what's the current state?
17:53:58  <hij1nx>kenansulayman: that was just a talk about why the current OSs we use for webservers are crappy.
17:54:19  <rescrv>kenansulayman: is your issue within LevelDB (proper) or upper levels
17:54:39  <juliangruber>rescrv: seems to be leveldown
17:54:52  <hij1nx>kenansulayman: anyway, GNU debugger, has the principal purpose of allowing you to stop your program before it terminates
17:55:01  <hij1nx>kenansulayman: If your program terminates, the debugger helps you determine where it failed.
17:55:10  <kenansulayman>hij1nx Let me gdb it, mom
17:55:27  <hij1nx>kenansulayman: you can set breakpoints with the breakpoint command.
17:55:37  <kenansulayman>hij1nx Program received signal SIGABRT, Aborted.
17:55:41  <kenansulayman>hij1nx 0x00007ffff71cf8a5 in raise () from /lib64/libc.so.6
17:55:42  <hij1nx>and then navigate through the program with the step command or the next command.
17:55:54  <rescrv>juliangruber: I ask because I've received reports of LevelDB proper erroring out with a similar problem
17:55:59  <rescrv>kenansulayman: "thread apply all bt"
17:56:03  <hij1nx>kenansulayman: are you sure the machine has as much memory as you think it does?
17:56:18  <kenansulayman>hij1nx we're paying 600 per machine per month
17:56:22  <hij1nx>kenansulayman: also, just out of curiosity, what are your ulimits?
17:56:25  <juliangruber>kenansulayman: is is a shared/virtual server or a bare metal one?
17:56:32  <kenansulayman>hij1nx I set it 10 108k
17:56:38  <kenansulayman>juliangruber dedicated
17:57:02  <juliangruber>ok
17:57:30  <kenansulayman>rescrv what does it do? doesn't change anything
17:57:36  <kenansulayman>[New Thread 0x7ffff6929700 (LWP 581)]
17:57:36  <kenansulayman>[New Thread 0x7ffff5f28700 (LWP 582)]
17:57:36  <kenansulayman>[New Thread 0x7ffff5527700 (LWP 583)]
17:57:36  <kenansulayman>[New Thread 0x7ffff4b26700 (LWP 584)]
17:57:36  <kenansulayman>Detaching after fork from child process 585.
17:59:10  <juliangruber>hmmmm
17:59:20  <rescrv>kenansulayman: I'm most interested in helping debug the invalid free. I have received a report of something similar and have been watching for a live test case. If you reproduce the glibc crash in gdb, the command I gave will tell you where each thread was when it crashed.
17:59:23  <juliangruber>setting up a joyent machine is like 15m of work
17:59:27  <juliangruber>you should just do that
17:59:31  <juliangruber>and you'd be happy
18:00:03  <kenansulayman>rescrv Could you give me some instructions? I'm used to IDA rather than gdb
18:01:18  <rescrv>kenansulayman: gdb /path/to/program/that/crashes; within gdb type "run <args>" where <args> are the arguments that trigger the crash. Your program will run. Do whatever external things you need to reproduce the crash
18:01:35  <rescrv>when it aborts, "thread apply all bt" will tell the next step
18:01:48  <kenansulayman>run <args> ?
18:02:04  <kenansulayman>It crashes upon opening the datbase
18:02:07  <kenansulayman>*+a
18:02:37  <kenansulayman>ah I see
18:02:42  <kenansulayman>mom I 'll upload it
18:03:14  <kenansulayman>rescrv https://s3.amazonaws.com/f.cl.ly/items/0t1k1l0w1u1A1I1O3Z3c/Text%202013.08.22%2020%3A03%3A05.txt
18:04:43  * ryan_ramagejoined
18:06:17  <rescrv>kenansulayman: It looks like the problem is entirely outside LevelDB proper. I think the other folks here are better equipped to help in this case.
18:06:59  <kenansulayman>"entirely outside LevelDB proper"? Sorry I'm not native enough to decompile that level of English :D
18:07:58  <hij1nx>kenansulayman: i wonder if you might try going back a version or two of build essentials
18:07:59  <juliangruber>kenansulayman: there's no "Level" in the thread stacks
18:08:15  <juliangruber>#4 0x00000000005a0a43 in node::ObjectWrap::WeakCallback(v8::Persistent<v8::Value>, void*) ()
18:08:17  <rescrv>kenansulayman: Your glibc error sounded very much like a problem I've heard of in LevelDB (the C++ lib from Google). It looks like it's in the layers above that wrap LevelDB in Javascript
18:08:19  <juliangruber>seems to be the problem
18:08:48  <kenansulayman>juliangruber let me test something
18:08:54  <juliangruber>kenansulayman: "Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.107.el6_4.2.x86_64 libgcc-4.4.7-3.el6.x86_64 libstdc++-4.4.7-3.el6.x86_64"
18:08:58  <juliangruber>maybe install those
18:09:10  <kenansulayman>juliangruber nah, they're not available, already checked
18:09:56  <juliangruber>kenansulayman: does your process crash immediately?
18:10:01  <rescrv>how much of a pain would it be to run the entire thing under valgrind?
18:10:07  <juliangruber>or when it crashes after 10min, did it idle or did you use the app?
18:10:20  <kenansulayman>well you see our Legify logo
18:10:29  <kenansulayman>after it's shown, the modules are loaded
18:10:44  <kenansulayman>then a statistics of modules and submodules (internally) would be printed
18:10:59  <kenansulayman>but isn't, so it's failing at the init of a module
18:11:08  <hij1nx>kenansulayman: im thinking you need more control factors.
18:11:13  <juliangruber>kenansulayman: also try this: https://github.com/joyent/node/issues/3868#issuecomment-7753547
18:11:14  <ryan_ramage>can someone remind me where a listing page of level-* modules might be…I know I had seen it somewhere….just can't remember :)
18:11:33  <hij1nx>kenansulayman: can you run the exact same code in other environments?
18:11:38  <kenansulayman>hij1nx yup
18:11:45  <hij1nx>ryan_ramage: its on the wiki of the levelup repo
18:11:46  <juliangruber>ryan_ramage: ghub.io/levelup/wiki/Modules
18:11:46  <kenansulayman>46 days no-downtime OSX
18:12:06  <ryan_ramage>hij1nx: juliangruber thx!
18:12:13  <kenansulayman>ryan_ramage search npmjs.org
18:12:40  <hij1nx>ryan_ramage: you might like this utility ;) github.com/hij1nx/lev
18:13:22  <hij1nx>kenansulayman: hmm, osx, you mean your local machine?
18:13:41  <kenansulayman>well we have a local Mac Pro (5,1) as dev server
18:13:52  <kenansulayman>then we have a beta stage
18:13:58  <kenansulayman>which is the CentOS server
18:13:59  <ryan_ramage>hij1nx: lev looks v useful
18:14:19  <hij1nx>kenansulayman: have you tried running it on a vmbox version of centos?
18:14:27  <kenansulayman>uhm no
18:14:39  <hij1nx>kenansulayman: that might provide a more controlled environment
18:14:49  <kenansulayman>hij1nx Linux s16365416 2.6.32-279.1.1.el6.x86_64 #1 SMP Tue Jul 10 13:47:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
18:14:57  <kenansulayman>Works pretty awesome though normally
18:15:17  <kenansulayman>We're running our primary website on the same server
18:15:26  <kenansulayman>And are on a 20:15:20 up 49 days, 22:19, 1 user, load average: 0.03, 0.05, 0.03
18:15:36  <kenansulayman>That is, we'll pretty much want to stick with the server
18:15:56  <dominictarr>hij1nx: juliangruber need some dtrace help with this one https://github.com/rvagg/node-leveldown/issues/55
18:16:41  <juliangruber>dominictarr: want to login to my joyent machine?
18:16:55  <dominictarr>I don't think it's in node
18:17:17  <hij1nx>dominictarr: if you can write a simple test that leaks memory, i can instrument it with dtrace and track down the memory leak
18:17:29  <dominictarr>I don't think this is memory
18:17:36  <dominictarr>I think this is resource starvation
18:18:07  <dominictarr>memory is only ballooning because it's queued for write but not written
18:18:32  <dominictarr>looking at my disk activity widget… I want something higher resolution to check.
18:18:34  <hij1nx>dominictarr: maybe the writes climb quite high because compaction is kicking in?
18:18:44  <dominictarr>yeah, that is my guess too
18:18:59  <dominictarr>well, maybe that is why writes slow
18:19:09  <hij1nx>dominictarr: this might not really be a problem if its a passive activity that doesnt monopolize the systems resources
18:19:14  <dominictarr>because the script writes as fast as it can anyway
18:19:42  <dominictarr>there are plenty of real world usecases where you need to write heaps fast - like copying a database
18:20:01  <juliangruber>dominictarr: maybe try with the new writeStream implementation
18:20:19  * paulfryzeljoined
18:20:29  <dominictarr>is that released?
18:20:49  <dominictarr>Either way, this is a good time for dnode...
18:21:48  <dominictarr>I mean, dtrace
18:21:57  <kenansulayman>juliangruber update: doesn't crash because of opening the database, let me investigate further
18:22:21  <juliangruber>dominictarr: not yet released, but the pull-request works
18:23:07  <juliangruber>kenansulayman: what's your app doing why initializing?
18:23:10  * fb55joined
18:23:16  <juliangruber>s/why/while/
18:23:21  <dominictarr>do you know how I'd run the script and trace disk io with dtrace?
18:25:43  <kenansulayman>juliangruber First loading non-npm modules, then monkey-patching http-server for sessions, then loading our modules (controllers, logics — see Legify/Absinthe for that, it's oss), then loading the npm modules, opening the level modules
18:26:50  <juliangruber>kenansulayman: which level modules?
18:27:11  <juliangruber>oh
18:27:20  <kenansulayman>level, level-store, level-msgpack; whereas store is for avatars, msgpack yet unused
18:27:26  <juliangruber>ok
18:27:26  <juliangruber>so
18:27:28  <juliangruber>the plan:
18:27:32  <juliangruber>instead of node index.js
18:27:35  <juliangruber>do node debug index.js
18:27:42  <juliangruber>and step through it
18:27:52  <kenansulayman>wait I will use node-inspect for it, I love the GUI
18:27:53  <kenansulayman>:D
18:27:58  <kenansulayman>ok
18:27:59  <juliangruber>ok
18:28:24  <kenansulayman>when I process.exit() before loading the server, nothing fails
18:28:28  <kenansulayman>:(
18:30:38  <kenansulayman>juliangruber nah
18:30:44  <kenansulayman>can't even debug-brk
18:35:25  <kenansulayman>juliangruber
18:35:27  * thlorenzjoined
18:35:41  <kenansulayman>it works when I exclude level
18:52:10  <hij1nx>kenansulayman: have you tried using any of the alternative levelup backends?
18:52:23  <kenansulayman>memdown?
18:52:24  * thlorenz_joined
18:52:32  <hij1nx>kenansulayman: there are nothers
18:52:42  <hij1nx>kenansulayman: s/nothers/others/
18:52:55  <kenansulayman>yes mysql and stuff
18:53:08  <kenansulayman>but levelup is non-low-level
18:53:33  <kenansulayman>and the issue isn't here if I exclude level (up+down) fully
18:53:42  <kenansulayman>thus it's leveldown
18:57:06  * thlorenz_quit (Ping timeout: 264 seconds)
18:57:25  <kenansulayman>hij1nx
18:57:30  <kenansulayman>works perfectly with MemDown
18:58:04  * jxsonquit (Remote host closed the connection)
18:58:34  * jxsonjoined
19:02:44  <kenansulayman>rvagg
19:04:18  * ryan_ramagequit (Quit: ryan_ramage)
19:04:44  <hij1nx>kenansulayman: memdown is just one of many, rvagg was experimenting with several other backends.
19:06:13  * fb55quit (Remote host closed the connection)
19:06:51  <hij1nx>kenansulayman: ah, i have a very simple solution for you.
19:08:17  <kenansulayman>hij1nx tell me
19:08:19  <hij1nx>kenansulayman: build your solution on a different flavor of linux.. that is, if you /must/ use linux :) then you can take your solution to market. when you sort out the issue, go back to centos
19:08:38  <hij1nx>honestly, im not trolling.
19:08:46  <kenansulayman>damned
19:09:00  <hij1nx>if you've created your solution already and it works great, just rebuild your vm using ubuntu or something
19:09:07  <hij1nx>deploy to that
19:09:12  <kenansulayman>That's not so easy
19:09:16  <hij1nx>then you can come back to centos.
19:09:30  <hij1nx>what's the issue with it? angry IT guys?
19:09:57  <kenansulayman>I'm the IT guy for it he
19:10:29  <kenansulayman>It's just that we've got some 1.* TB data on it for various purposes and a running webserver
19:10:30  <hij1nx>kenansulayman: perfect! :D
19:10:56  <hij1nx>kenansulayman: it is probably easier to move that data than it is to rewrite your solution :)
19:10:58  <kenansulayman>(beta stage that is)
19:11:05  <kenansulayman>This makes me sad
19:11:23  <kenansulayman>Because leveldown is failing and I have to move the server :(
19:11:48  <hij1nx>kenansulayman: im definately not saying there is no solution for running it on centos.
19:12:03  <hij1nx>kenansulayman: have you tried different versions of build-essentials?
19:12:08  * ryan_ramagejoined
19:12:19  <kenansulayman>what different versions are there?
19:12:38  <hij1nx>kenansulayman: you can find out by using yum
19:14:10  <hij1nx>kenansulayman: also did you try installing centos in a vmbox?
19:14:25  <kenansulayman>+
19:14:30  <kenansulayman>you asked me before
19:14:40  <hij1nx>kenansulayman: just checking if you did it.
19:14:43  <kenansulayman>no
19:14:55  <hij1nx>kenansulayman: that can reveal some interesting results
19:15:08  <kenansulayman>Can I do it from shell?
19:15:41  <hij1nx>kenansulayman: not sure i understand
19:15:51  <kenansulayman>haha
19:16:02  <kenansulayman>server -> vmbox?
19:16:29  <hij1nx>kenansulayman: the idea is that you create a new VM inside vmbox (https://www.virtualbox.org/)
19:16:37  <kenansulayman>uhm I know
19:16:57  <kenansulayman>it's just hard
19:17:04  <kenansulayman>since I have no physical access
19:17:43  <hij1nx>you install vmbox on your local machine
19:17:57  <kenansulayman>and it takes about 4 hours to schedule a wipe and some time for personnel to install the server
19:18:14  * paulfryzelquit (Ping timeout: 240 seconds)
19:18:21  <kenansulayman>bare metal ftw...
19:18:45  <hij1nx>kenansulayman: it looks like your provider (oneandone) will allow you to switch to another flavor of linux
19:18:52  <kenansulayman>yes
19:18:57  <kenansulayman>in Cloud
19:19:00  <hij1nx>kenansulayman: what does your app do?
19:19:09  <kenansulayman>legify.com
19:19:29  <kenansulayman>alas Cloud ≠ dedicated
19:20:31  <hij1nx>kenansulayman: you have a lot of data on your existing instance that you need?
19:20:38  <kenansulayman>yes
19:20:47  <hij1nx>start copying :)
19:20:54  <kenansulayman>and to properly move MySQL is a pita
19:21:09  <hij1nx>kenansulayman: can you replicate it?
19:21:21  <kenansulayman>well I either do a .sql dump
19:21:29  <hij1nx>mysql clister has master-master
19:21:35  <hij1nx>s/clister/cluster
19:21:49  <kenansulayman>or replicate the /var/mysql
19:21:52  <hij1nx>master-slave would work too
19:22:13  <hij1nx>oh!
19:22:20  <hij1nx>even better, turn that into your dbserver
19:22:27  <kenansulayman>ewww
19:22:34  <hij1nx>kenansulayman: thats usually a good thing
19:22:51  <hij1nx>kenansulayman: separation of concerns principle :)
19:22:54  <kenansulayman>mysql < level
19:24:03  <hij1nx>so don't you need to migrate that data to level anyway?
19:24:25  <kenansulayman>uhm
19:24:26  <kenansulayman>no
19:24:28  <kenansulayman>not at all
19:24:43  <kenansulayman>the mysql is of another infrastructure / website
19:24:54  <kenansulayman>internally used :)
19:25:13  <hij1nx>ah, so just put your new stuff on a new box and if your new app needs it, just use it as a service
19:25:24  <kenansulayman>and of course nobody gets to his fckn telephone
19:25:29  <kenansulayman>>_>
19:26:18  <kenansulayman>Hm. I'll start moving the data now
19:26:34  <kenansulayman>btw can I mount a ftp as a dir? like sshfs?
19:26:49  <hij1nx>well im biased; i have a "ship it" mentality. so i'd just leave the data in mysql on the centos box and access it via https://github.com/felixge/node-mysql
19:27:14  <kenansulayman>ya well mysql ≠ our apps' data
19:28:09  <hij1nx>thats fine, node-mysql is quite good. also, inter-data-center network access is much faster than you'd think -- https://medium.com/p/37a93d4e0013
19:30:14  <kenansulayman>hm
19:32:06  * alanhoffquit (Ping timeout: 276 seconds)
19:32:31  * paulfryzeljoined
19:32:54  * alanhoffjoined
19:47:36  * timoxley_quit (Remote host closed the connection)
19:52:33  * thlorenzquit (Ping timeout: 240 seconds)
19:53:27  <kenansulayman>hij1nx CentOS < OpenSuse?
19:53:56  <kenansulayman>hij1nx which one would you choose? http://data.sly.mn/Qy3F
19:54:31  <kenansulayman>smartos looks cool tho
19:56:09  <kenansulayman>juliangruber How easy is it to deploy a joyent thing? is there a free tier to test it? (aws style)
19:56:34  <juliangruber>kenansulayman: i don't think so, but you can get one for 20 bucks
19:56:44  <juliangruber>kenansulayman: opensuse >X
19:56:47  <kenansulayman>I just got $130 free
19:56:49  <juliangruber>i mean
19:56:50  <kenansulayman>for Joyet
19:56:57  <kenansulayman>*+n
19:57:04  <juliangruber>opensuse sucks
19:57:10  <juliangruber>use a server os
19:57:13  <juliangruber>like smartos or debian
19:57:21  * no9joined
19:57:34  <kenansulayman>Can't
19:57:47  <kenansulayman>woah this is all confusing
20:00:18  <kenansulayman>juliangruber Joyent doesn't take my card, what a pity
20:00:26  * paulfryzelquit (Remote host closed the connection)
20:00:26  <juliangruber>kenansulayman: it took mine oO
20:00:32  <kenansulayman>Visa`
20:00:33  <kenansulayman>?*
20:00:37  <juliangruber>yup
20:01:02  * paulfryzeljoined
20:01:28  <kenansulayman>hm ok I'm in.. it took it when I clicked "continue" again
20:04:26  <kenansulayman>juliangruber wtf
20:04:37  <juliangruber>kenansulayman: it's all good
20:05:04  * paulfryzelquit (Ping timeout: 246 seconds)
20:05:13  <kenansulayman>$0.48 - for 15GB ram 4 cpu & 1.4TB HD?
20:05:58  <juliangruber>kenansulayman: just pick one for now
20:06:32  <kenansulayman>Why doesn't it say anything about my $150? :(
20:07:45  <juliangruber>150 for what?
20:07:49  <juliangruber>0.48 per hour?
20:09:31  <kenansulayman>no 150 for new customers
20:09:38  <kenansulayman>free-money
20:09:46  <kenansulayman>to be spent in 2 months
20:09:55  <kenansulayman>75/month that i
20:09:56  <kenansulayman>s
20:11:53  <juliangruber>ah
20:11:54  <juliangruber>wow
20:11:57  <juliangruber>didn't know that
20:12:07  <juliangruber>too bad i already have one :D
20:14:56  <kenansulayman>haha
20:15:04  <kenansulayman>yeah currently chatting with a joyent guy
20:15:14  <kenansulayman>cool they looked up legify prior to chatting with me...
20:15:19  * DTrejojoined
20:15:44  <juliangruber>yeah they're rad
20:19:07  * julianduquejoined
20:24:06  * ryan_ramagequit (Read error: Connection reset by peer)
20:24:19  * ryan_ramagejoined
20:25:33  * ednapiranhaquit (Remote host closed the connection)
20:43:16  * mikealjoined
20:52:07  * mikealquit (Quit: Leaving.)
20:56:41  * ednapiranhajoined
21:01:51  * soldairjoined
21:04:26  * ryan_ramagequit (Quit: ryan_ramage)
21:04:55  * ednapiranhaquit (Ping timeout: 246 seconds)
21:13:54  * fb55joined
21:15:04  * ryan_ramagejoined
21:15:10  * ryan_ramagequit (Changing host)
21:15:10  * ryan_ramagejoined
21:21:10  * fb55quit (Remote host closed the connection)
21:24:30  <substack>mbalho: it looks like http://en.wikipedia.org/wiki/Radix_tree is what you want for that number-packing problem
21:33:02  * jxsonquit (Remote host closed the connection)
21:33:12  <substack>or perhaps not
21:33:13  <substack>mbalho: anyways, I have something working
21:33:36  * ryan_ramagequit (Quit: ryan_ramage)
21:33:43  <substack>it only works on the integers though
21:34:19  <substack>I might be able to write something that works on the rationals but the reals are impossible because there aren't enough numbers for that
21:34:29  <substack>because of the pigeon-hole principle
21:35:48  <mbalho>oh sweet, integers is good for my use case
21:43:30  <kenansulayman>juliangruber Up and running!
21:45:18  * ryan_ramagejoined
21:47:48  * Acconutjoined
21:48:45  * mmckeggquit (Ping timeout: 245 seconds)
21:49:54  * Acconutquit (Client Quit)
21:51:06  * mmckeggjoined
22:02:29  <kenansulayman>juliangruber are you running on smart os?
22:03:20  * jxsonjoined
22:11:56  * jxsonquit (Ping timeout: 260 seconds)
22:13:19  * jxsonjoined
22:16:52  * fallsemoquit (Quit: Leaving.)
22:23:52  <mbalho>dominictarr: YES got my concept working
22:24:11  <dominictarr>mbalho: cool
22:24:40  <mbalho>dominictarr: while a leveldb is opened by levelup i stream the entire leveldb folder as a tar.gz to the client and then unpack it and it can be read successfully by a second node process
22:25:02  <mbalho>dominictarr: havent tested what happens when i'm bulk inserting during the copy
22:25:14  <dominictarr>ah, that will be the interesting part.
22:26:03  <mbalho>worst case scenario is i just have to buffer writes in memory or something
22:30:11  * tmcwquit (Remote host closed the connection)
22:35:18  <mbalho>dominictarr: wanna test it out real quick? https://gist.github.com/maxogden/6313646
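A loose sketch of what mbalho describes (not his actual gist): stream the open leveldb directory to a client as a gzipped tarball. Assumes the tar-fs module; the paths and port are placeholders and error handling is omitted.

```js
var http = require('http');
var zlib = require('zlib');
var tar = require('tar-fs');

http.createServer(function (req, res) {
  tar.pack('./my-leveldb')     // walk the db folder
    .pipe(zlib.createGzip())   // compress on the fly
    .pipe(res);
}).listen(8000);

// receiving side: pipe the response through zlib.createGunzip() into
// tar.extract('./copy'), then open the unpacked folder with levelup
// from a second node process.
```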
22:40:07  <juliangruber>kenansulayman: awesome!
22:40:12  <juliangruber>yep, using smartos
22:40:17  <juliangruber>mainly because of dtrace
22:45:02  <kenansulayman>juliangruber how do you compile?
22:45:09  <kenansulayman>phantomjs for instance doesn't compile
22:45:20  <kenansulayman>I killed the machine again and setup an ubuntu
22:45:25  <kenansulayman>:(
22:45:33  <juliangruber>kenansulayman: never used phantomjs on smartos
22:45:39  <juliangruber>what do you need that for in production?
22:45:56  <kenansulayman>We need pdfs rendered off generated html
22:46:02  <kenansulayman>any other suggestion?
22:47:19  <juliangruber>kenansulayman: i used ironworker for such things
22:47:28  <juliangruber>http://www.iron.io/worker
22:48:06  <kenansulayman>bad in a RT system
22:48:22  <kenansulayman>when you need the pdf up-to-date as the user requests it
22:48:43  <juliangruber>realtime html to pdf?
22:48:57  <kenansulayman>pretty much
22:48:58  <juliangruber>you do know that phantomjs uses a whole lot of memory?
22:49:03  <kenansulayman>yes
22:49:07  <kenansulayman>extremely much
22:49:25  <juliangruber>a worker doesn't imply that it takes long to finish
22:49:31  <kenansulayman>But we didn't come across a better just-in-time solution
22:49:32  <juliangruber>they can be spawned very quickly
22:50:07  <juliangruber>http://www.princexml.com/
22:51:05  <juliangruber>https://saucelabs.com/features
22:51:08  <kenansulayman>"This license adds a small logo to the first page of generated PDF files."
22:51:09  <kenansulayman>gay
22:51:14  <juliangruber>saucelabs has screenshots too
22:51:17  <juliangruber>oh didn't see that
22:51:42  <juliangruber>or!
22:51:43  <juliangruber>http://www.browserstack.com/responsive
22:51:45  <kenansulayman>https://github.com/traviscooper/node-wkhtml
22:51:48  <juliangruber>that seems like a good solution
22:52:06  <juliangruber>but wait
22:52:12  <juliangruber>phantomjs doesn't run on ubuntu? oO
22:52:19  <kenansulayman>it odes!
22:52:22  <kenansulayman>does
22:52:47  <kenansulayman>it just doesn't compile on solaris x64
22:53:09  <kenansulayman>var pdf = wkhtml.spawn('pdf');
22:53:09  <kenansulayman>pdf.stdout.pipe(response);
22:53:09  <kenansulayman>pdf.stdin.end('<h1>Hello World</h1>');
22:53:12  <kenansulayman>pretty cool
22:53:18  <juliangruber>but i thought you were to use ubuntu anyways
22:54:06  <kenansulayman>yes
22:54:12  <kenansulayman>had a long talk with the joyent guy
22:54:20  <kenansulayman>he wanted to get me to use jitsu lol
22:54:31  <kenansulayman>the deployment failed with a 500
22:54:46  <kenansulayman>thus we configured a SmartOS (sounds cool, eh?)
22:54:59  <kenansulayman>Doesn't compile Phantom -> wipe, Ubuntu it is
22:56:04  <kenansulayman>how does solaris differ btw?
23:00:23  <kenansulayman>juliangruber level works, which is cool
23:00:51  <juliangruber>kenansulayman: you get dtrace, that's the biggest thing for me
23:01:00  <juliangruber>wouldn't want to run anything in production without dtrace
23:01:00  <kenansulayman>trace?
23:01:02  <kenansulayman>d*
23:02:56  <kenansulayman>I see
23:03:06  <kenansulayman>Why?
23:03:09  * esundahlquit (Remote host closed the connection)
23:03:35  * esundahljoined
23:06:06  * esundahl_joined
23:08:03  * esundahlquit (Ping timeout: 245 seconds)
23:10:04  <mbalho>fs + leveldb question: in a benchmark i'm writing i'm noticing that if i do a large batch insert into leveldb and then immediately try to copy the leveldb files themselves there is a long pause until the file operations finish. in my case i'm generating a tarball of the entire leveldb
23:10:33  * esundahl_quit (Ping timeout: 245 seconds)
23:10:40  <mbalho>but if i pause for a few seconds then the file operations only take ~200ms
23:11:17  <mbalho>so i'm guessing that leveldb has a file open doing compaction or something? and that causes the node fs read on the same file to wait until leveldb is done?
23:11:29  <mbalho>(im assuming filesystems automatically queue reads like that?)
23:11:39  <mbalho>i have a test case in case anyone wants to play with this
23:12:34  <kenansulayman>juliangruber WTF WTF WTF WTF WTF
23:12:41  <substack>mbalho: ok with my thing the biggest something can ever get is 9 bytes
23:12:46  <substack>publishing in just a moment
23:12:49  <mbalho>nice
23:13:07  <juliangruber>kenansulayman: relax
23:13:09  <substack>and all numbers are just a single byte up to 251
23:13:12  <mbalho>substack: do you know how filesystems work? does the above question make any sense?
23:13:18  <kenansulayman>juliangruber New Joyent Ubuntu: *** glibc detected *** node: double free or corruption (!prev): 0x0000000002058210 ***
23:13:28  <juliangruber>kenansulayman: uhoh
23:13:51  <mbalho>leveldb needs a slogan like couchdbs 'relax'
23:13:55  <substack>mbalho: for really big numbers there is some loss of precision though :/
23:14:03  <mbalho>substack: how big?
23:14:04  <substack>but that shouldn't be a problem for auto-incrementing ids
23:14:15  <mbalho>substack: ahh yea as long as they sort its okay if they are sparse
23:14:16  <substack>>Math.pow(2,32)
23:14:21  <mbalho>oh nice
23:14:21  * i_m_cajoined
23:14:28  <mbalho>thats a pretty big number
23:15:00  <juliangruber>kenansulayman: leveldb runs on ubuntu
23:15:04  <kenansulayman>yes
23:15:09  <juliangruber>kenansulayman: iirc rvagg is using ubuntu
23:15:51  <juliangruber>kenansulayman are you using any experimental native stuff?
23:16:07  <kenansulayman>no
23:16:22  <juliangruber>kenansulayman: how are you deploying? did you touch the base system?
23:16:43  <kenansulayman>nope, just developer-essentials
23:16:47  <kenansulayman>for compiling level
23:16:57  <juliangruber>odd
23:17:16  <kenansulayman>oww
23:17:21  <kenansulayman>somehow
23:17:29  <kenansulayman>if I keep msgpack nothing crashes
23:17:37  <kenansulayman>if I keep phantom nothing crashes
23:17:44  <kenansulayman>if I keep level nothing crashes
23:17:47  <kenansulayman>but
23:17:54  <kenansulayman>msgpack + phantom
23:18:02  <kenansulayman>= 0xDEADBEEF
23:18:44  <juliangruber>so it's not a level problem
23:19:11  <kenansulayman>now it crashed again
23:19:13  <kenansulayman>good lord
23:19:16  <kenansulayman>it's 1 am
23:20:30  <kenansulayman>I think i give up
23:22:03  <mbalho>uhoh rvaggs computer blew up
23:22:12  * i_m_caquit (Ping timeout: 276 seconds)
23:24:42  <kenansulayman>mbalho it did?
23:27:02  <substack>mbalho: https://github.com/substack/lexicographic-integer
23:33:24  <kenansulayman>juliangruber Okay I traced the issue down
23:34:15  <juliangruber>substack: did you see https://github.com/juliangruber/sortable-hash ?
23:34:20  <juliangruber>kenansulayman: what's it?
23:34:27  <kenansulayman>V8 bug
23:34:52  <juliangruber>substack: you're missing the index file
23:35:36  <juliangruber>kenansulayman: can you share your results?
23:37:24  <kenansulayman>well if you had a look at Absinthe you'll see the structure of Legify; the issues started when we made the server itself a "logic". That's why you pointed out the v8 weakmaps before in the log; it seems node / v8 doesn't like fast scope-hot swapping
23:37:59  <juliangruber>i don't understand
23:38:02  <kenansulayman>Seemingly having the http server be a module which references modules causes a pointer issue
23:38:04  <juliangruber>how are you doing code swapping?
23:39:13  <juliangruber>substack: yours looks like the better method though
23:39:17  <kenansulayman>code swapping is pretty much 1. deleting a require-cache 2. reloading the code from the disk 3. pointing references to old hot-code to the newly loaded code
23:39:30  <substack>juliangruber: ok published the index, thanks
23:40:34  <kenansulayman>https://github.com/KenanSulayman/ub3rgr4mm/blob/master/logic/helper.js#L11 <= deleting the cache
23:40:37  * tmcwjoined
23:40:50  <kenansulayman>(example I use in an IRC bot to have zero-downtime)
23:41:13  <kenansulayman>I then call https://github.com/KenanSulayman/ub3rgr4mm/blob/master/ub3r.js#L15 to reload the modules and overload them
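The reload pattern kenansulayman outlines (delete the require cache, re-read from disk, repoint references) in a minimal sketch — illustrative only, not the actual Absinthe / ub3rgr4mm code:

```js
function hotReload(modulePath) {
  var resolved = require.resolve(modulePath);
  delete require.cache[resolved];   // 1. drop the cached copy
  return require(modulePath);       // 2. re-read the module from disk
}

// 3. repoint live references to the freshly loaded code
var handlers = { logic: require('./logic') };
function reloadLogic() {
  handlers.logic = hotReload('./logic');
}
```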
23:41:15  <substack>juliangruber: I was writing a thing for mbalho's problem yesterday of encoding integers without zero-padding them
23:43:31  <mbalho>substack: awesome!
23:44:24  * DTrejoquit (Remote host closed the connection)
23:45:01  * tmcwquit (Ping timeout: 245 seconds)
23:45:35  <juliangruber>substack: sortable-hash does the same thing, but for arrays of floats
23:54:51  * wolfeidauquit (Remote host closed the connection)
23:56:13  * wolfeidaujoined
23:56:55  <juliangruber>where do i found the source code of the linux "pipe" syscall?
23:57:43  <juliangruber>ehm: https://github.com/torvalds/linux/blob/master/fs/pipe.c
23:57:52  <juliangruber>s/found/find
23:58:26  <kenansulayman>is that <cmd> | <cmd> ?
23:58:55  <juliangruber>kenansulayman: more like fd | fd
23:59:09  <kenansulayman>ya well I mean that ;)