00:13:10  * retrofoxquit (Quit: Computer has gone to sleep.)
00:28:28  * retrofoxjoined
00:29:15  * coderarizzzchanged nick to coderarity
00:30:50  * generalissimojoined
00:37:42  * st_lukejoined
00:38:42  * joshsmithquit (Quit: joshsmith)
00:52:26  * intabulasquit (Remote host closed the connection)
00:52:52  * intabulasjoined
01:02:01  * motiooonjoined
01:03:12  * joshsmithjoined
01:08:50  * retrofoxquit (Quit: Computer has gone to sleep.)
01:12:09  * swaagiequit (Quit: Leaving)
01:14:08  * konklonepart
01:14:38  * konklonejoined
01:16:39  * eldiosquit (Quit: bye =))
01:16:46  <konklone>huh - could someone check if my drone is dead or hanging? username: konklone, app: sockets
01:16:50  <konklone>getting an 8s response time again
01:17:02  <konklone>I am gonna look at switching from socket.io to sockjs
01:17:06  <konklone>if this is indeed a memory leak
01:17:14  <coderarity>alright
01:17:26  <konklone>I have 2 drones behind it
01:18:38  <coderarity>yeah, they're slow, that's for sure :P
01:18:55  <konklone>both of them?
01:19:00  * switzpart ("Linkinus - http://linkinus.com")
01:19:04  <coderarity>yeah, let me see if i can figure out why
01:19:07  <konklone>thank you
01:20:54  * clarkfischerquit (Ping timeout: 252 seconds)
01:23:06  * InspiredJWjoined
01:24:20  <mdedetrich>I am back
01:28:15  * clarkfischerjoined
01:31:15  * generalissimoquit (Remote host closed the connection)
01:36:50  <coderarity>konklone: it seems to be using a ton of memory, and then crashing and restarting
01:37:22  <coderarity>you might have a ton of traffic, or maybe you're right and it's a memory leak
01:39:37  <coderarity>konklone: this is happening on both of your drones
01:39:59  <konklone>coderarity: I do in general have a ton of traffic
01:40:04  <konklone>~200 concurrents most days
01:40:09  <konklone>not all of those connect to the streaming server
01:40:16  <konklone>it looks like ~80/200 do
01:40:33  <konklone>80 concurrents really isn't that much though, right? and only a fraction of those 80 are actually sending much traffic right now
01:40:50  <konklone>so not a ton of traffic in regards to these drones
01:41:15  <konklone>coderarity: the restarting is a cycle? can you see how long it takes for them to build up memory and crash and restart?
01:42:20  <coderarity>no, but maybe you can with `jitsu logs`
01:47:26  <konklone>wow
01:47:29  <konklone>looks like every 30 minutes
01:47:52  <konklone>or even as fast as every 18 minutes
01:47:53  <konklone>holy crap
01:49:35  <konklone>or even as fast as every 6 minutes in some cases.
01:49:37  <konklone>well.
01:49:47  <konklone>so this is basically terrible
01:50:22  <coderarity>konklone: added another drone for you to help you keep up, it could be the load
01:50:43  <konklone>thanks
01:50:58  <konklone>still, this is only a very small fraction of the number of concurrent connections I expect to have on launch
01:51:22  <konklone>I'm already planning on switching to the automatic plan and revving up a lot of drones, but
01:51:28  <konklone>something tells me this memory usage is not going to be sustainable
01:51:53  <konklone>I'm gonna try switching to Node 0.6.2 and see if, as someone said on one of the mem leak tickets I was looking at, that helps things
01:52:41  * benv_joined
01:53:59  <konklone>and I'm gonna switch it back down to 2 drones, so I can do an even comparison
01:54:02  <konklone>thank you for that though
01:54:15  <AvianFlu>konklone: 0.6.2 is really old, that seems like weird advice
01:54:26  <AvianFlu>we don't have earlier than about 0.6.19 or 0.6.20 on the drones anymore
01:54:27  <coderarity>konklone: i think that your app was using like 80 MB of memory after it restarted one time, seems like a lot to me
01:55:31  <konklone>seems like a lot to me too, though I'm not doing anything special
01:55:40  <konklone>socket.io + redis-store + websockets
01:55:52  <coderarity>it was at like 345 MB when I checked before it crashed, and i saw about 50 connections - that puts it at (345 - 80)/50, which is about 5 MB per connection, seems like a lot too
01:56:03  <coderarity>yeah
01:56:10  <konklone>this is the server code as deployed: https://github.com/isitchristmas/sockets/blob/master/app.js
01:56:33  <konklone>just auths a redis server, hands it off to socket.io, and handles new connections and broadcasts a bunch of events
01:56:40  <konklone>the only events though, are "I'm here" and "my mouse position"
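For context, the setup being described looks roughly like the sketch below. This is not the actual app.js linked above; it is a minimal socket.io 0.9-style server with a RedisStore, and the redis host, password, and event names ('arrived', 'mouse', 'left') are placeholders.

```js
// Rough sketch only - NOT the real app.js. socket.io 0.9-era RedisStore setup.
var http  = require('http');
var redis = require('redis');
var sio   = require('socket.io');

function redisClient() {
  var c = redis.createClient(6379, 'redis.example.com'); // placeholder host
  c.auth('secret');                                      // placeholder password
  return c;
}

var server = http.createServer().listen(process.env.PORT || 8080);
var io = sio.listen(server);

// Hand three redis clients to socket.io so all drones share one store.
io.set('store', new sio.RedisStore({
  redisPub: redisClient(),
  redisSub: redisClient(),
  redisClient: redisClient()
}));

io.sockets.on('connection', function (socket) {
  // "I'm here" - announce the new client to everyone else
  socket.broadcast.emit('arrived', { id: socket.id });

  // "my mouse position" - relay mouse movements to everyone else
  socket.on('mouse', function (data) {
    socket.broadcast.emit('mouse', data);
  });

  socket.on('disconnect', function () {
    socket.broadcast.emit('left', { id: socket.id });
  });
});
```

The RedisStore is the piece that matters for this discussion: it is what lets clients connected to different drones broadcast to each other.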
01:56:58  <konklone>there are, however, a large number of memory leak threads on the socket.io ticket tracker
01:57:04  <konklone>and a lot of built up frustration about it not being dealt with
01:57:30  <konklone>so I suspect it's not 5MB/connection
01:57:40  <konklone>but that old connections are leaving memory behind after they leave
01:57:46  <coderarity>alright
01:58:29  <AvianFlu>it's quite possible
01:58:51  <coderarity>that must be a lot of memory left behind for 6 minutes :P
01:59:15  <konklone>...yeah
01:59:32  <coderarity>konklone: if you want to spawn a test app I can watch the memory while you add and remove connections, and we can see if there's a leak
01:59:56  <konklone>okay - what's the easiest way to do a test app for it?
02:00:08  <coderarity>just change the subdomain/appname and `jitsu deploy`
02:00:29  <konklone>oh right, server-side test app, sure
02:01:01  * joshonthewebjoined
02:03:06  * stagasquit (Ping timeout: 252 seconds)
02:03:27  <konklone>okay
02:03:35  <konklone>I have an app called sockets-test
02:03:36  <konklone>at http://iic_sockets_test.jit.su/
02:03:38  <konklone>username konklone
02:03:50  <konklone>I have two clients connected, both from my own IP (separate tabs in my browser)
02:04:01  <coderarity>okay, lemme ssh in
02:04:25  * brianloveswordsquit (Ping timeout: 252 seconds)
02:04:58  <coderarity>26 MB right now
02:05:41  <konklone>that's from 2 connections current, 4 connections total (I re-connected them both)
02:05:44  <AvianFlu>how about now
02:06:08  <coderarity>25.8 MB
02:06:17  <konklone>okay, I'm gonna reconnect 10 times
02:06:39  <konklone>okay that was 10 re-connections
02:07:10  <coderarity>25,948KB
02:07:31  * brianloveswordsjoined
02:07:35  <konklone>incidentally, if either of you want to connect, if you visit http://isitchristmas.com/?streaming=iic_sockets_test.jit.su:80&visible=1 it will connect you to this test instance
02:07:46  <konklone>okay now I'm going to move my mouse a lot
02:07:59  <konklone>this generates a lot of throughput
02:08:08  <AvianFlu>likewise
02:08:10  <AvianFlu>check it now, coderarity
02:08:31  <coderarity>i mean, 25372KB
02:08:50  <coderarity>it was at 26224KB for a second
02:08:53  <konklone>and that's even with several more concurrents, since others joined in
02:09:15  <coderarity>hmm, just saw it at 27480KB
02:09:20  <AvianFlu>strange
02:09:37  <konklone>that's remarkably constant
02:09:39  <coderarity>now it's back at 25136 KB
02:10:22  <coderarity>maybe we should get a ton of concurrent connections going, and then disconnect them all at once
02:10:43  <coderarity>just to see how much memory it gets up to with ~50 connections
02:10:51  <konklone>okay - I'll open up, let's say 60 tabs with that URL
02:11:00  <konklone>and then close the whole window at once
02:11:29  <AvianFlu>konklone: watch out for browser pipelining
02:11:34  <AvianFlu>it might not load them all actually
02:11:42  <konklone>browsers will pipeline websockets?
02:12:06  <konklone>the only connection that matters is the one to the websocket server, not to the web server at isitchristmas.com
02:12:26  <konklone>I wouldn't expect a browser to silently multiplex all the websockets throughput for various tabs
02:13:01  <AvianFlu>yeah but as I understand it, the pages don't load if you just open a lot of tabs quickly
02:13:07  <AvianFlu>it doesn't even send the stuff yet
02:13:15  <coderarity>yeah, you have to let each tab load
02:13:18  <AvianFlu>okay
02:13:36  <coderarity>it might be better to use some command line socket.io client thing, if there is one
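The socket.io-client module can be driven from node for exactly this. A rough sketch, with the test URL and connection count as placeholders:

```js
// Open N socket.io connections from the command line to exercise the server.
var io = require('socket.io-client');

var URL = process.argv[2] || 'http://iic_sockets_test.jit.su:80'; // placeholder
var N = parseInt(process.argv[3], 10) || 50;
var sockets = [];

for (var i = 0; i < N; i++) {
  (function (n) {
    var socket = io.connect(URL, { 'force new connection': true });
    socket.on('connect', function () {
      console.log('connection ' + n + ' open');
    });
    sockets.push(socket);
  })(i);
}

// Disconnect everything after a minute so server-side memory can be compared
// before and after the mass disconnect.
setTimeout(function () {
  sockets.forEach(function (s) { s.disconnect(); });
}, 60 * 1000);
```

Run against the test app, this gives the same add-then-drop pattern as the browser tabs, without the tab-loading caveat.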
02:14:20  <konklone>well, I've got all the concurrent connections
02:14:32  <konklone>if you open up the Chrome dev console on http://isitchristmas.com/?streaming=iic_sockets_test.jit.su:80&visible=1
02:14:39  <konklone>it mass-joins all the concurrents, you can see them all
02:15:03  <coderarity>do you have them up right now?
02:15:06  <konklone>yeah
02:15:12  <coderarity>-_-
02:15:16  <konklone>they're not generating a lot of activity besides the occasional heartbeat
02:15:19  <coderarity>28524 KB
02:15:33  <coderarity>we must be missing something :O
02:15:34  * sorensenquit (Ping timeout: 246 seconds)
02:15:42  <konklone>hmm
02:15:57  <konklone>it's using the same Redis URL as the production server was (though the production server is off, so it's got the Redis client to itself)
02:15:58  <coderarity>let me see if the connections are open with netstat -a
02:16:09  <konklone>maybe it's the Redis activity
02:16:15  <coderarity>oh yeah, they are :P
02:17:20  <coderarity>konklone: i am going to add more drones to the main one, i made it so i can go past the limit temporarily
02:17:47  <konklone>okay
02:17:57  <coderarity>we'll see what happens
02:18:03  <konklone>btw - I'm gonna do the mass shut off of the 60 connections now
02:18:27  <konklone>shut the tabs
02:18:28  <coderarity>okay
02:18:41  <coderarity>ah, still at 28832 MV
02:19:01  <coderarity>I mean, KB
02:19:07  <Sly>lol @ mv
02:19:45  <coderarity>dropped to 23680 KB, must have been garbage collector taking time :P
02:19:54  <konklone>huh, wow
02:20:19  <konklone>I wonder if the garbage collector is prevented from running so long as there's at least one connection...?
02:20:44  <konklone>it is pretty heartening to see 60 connections not impact memory very much
02:21:37  <coderarity>like, if every connection held a reference to every other connection, so when only 1 was left there were still those references?
02:21:50  <konklone>yeah
02:22:04  <coderarity>could try it
02:22:05  <konklone>that doesn't sound terribly implausible, when you say it like that
02:22:12  <coderarity>yeah :P
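That hypothesis, as a toy model (purely illustrative; this is not socket.io's actual code):

```js
// Toy model of the suspected pattern: every connection keeps a map of every
// other connection, so one lingering connection can pin all the dead ones.
var connections = {};

function onConnect(id) {
  var conn = { id: id, peers: {} };
  Object.keys(connections).forEach(function (otherId) {
    conn.peers[otherId] = connections[otherId];   // I reference them...
    connections[otherId].peers[id] = conn;        // ...and they reference me
  });
  connections[id] = conn;
}

function onDisconnect(id) {
  delete connections[id];
  // Bug in this toy model: nobody removes connections[*].peers[id], so as
  // long as one connection survives, all the old ones stay reachable and
  // the garbage collector can't free them.
}
```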
02:22:37  * sorensenjoined
02:22:56  <konklone>okay, I have 60 connections open again
02:23:11  <konklone>how's memory?
02:23:11  <coderarity>30.8 MB
02:23:13  <konklone>okay
02:23:19  <konklone>I'm gonna shut off all but 1
02:23:55  <konklone>one sec
02:24:42  <konklone>okay
02:24:43  <konklone>closed all but 1
02:24:55  <konklone>now we wait a bit, I guess
02:24:55  <coderarity>okay, let's give the GC a few seconds
02:25:13  <coderarity>dropped to 25504 KB
02:25:58  <konklone>which is basically what it started at
02:26:08  <konklone>okay, just closed the last one
02:27:39  <konklone>hrm
02:27:47  <coderarity>it was still pretty steady
02:27:54  <coderarity>25.6MB or something
02:28:04  <konklone>yeah, I'm inclined to say we did not find evidence of a memory leak in these tests
02:28:17  <konklone>though neither did we find evidence of single connections taking up a lot of memory
02:28:40  <coderarity>yeah
02:29:06  <konklone>and I really don't think I was taking up more than 120 connections across the 2 drones I had when I first brought this up
02:29:15  <coderarity>indeed
02:29:22  <coderarity>i saw like 90-something
02:29:39  <konklone>shall I start the production one back up and we'll take another look at it?
02:30:01  <coderarity>yeah, put it at 5 drones
02:30:02  * shykeschanged nick to zz_shykes
02:30:29  <konklone>okay, starting it
02:31:39  <konklone>I suppose one thing that's different about production is I do have Internet Explorer 8/9 clients attempt to start a flashsockets connection to the server
02:31:55  <konklone>they don't get through because nodejitsu's node-http-proxy is not configured to pass on those kind of connections
02:32:34  <konklone>though I suppose they could be dragging the proxy server in some way
02:32:41  <konklone>anyway the production server is started
02:32:59  <konklone>looks like ~20 connections
02:33:28  <coderarity>okay, lemme check it out
02:34:01  <konklone>in the dev console, running "i=0; for (other in others) {i++}; console.log(i)" will print out the number of connections that are broadcasting to each other
02:34:27  <konklone>23 now
02:34:28  * joshsmithquit (Quit: joshsmith)
02:34:45  * benv_quit (Quit: Computer has gone to sleep.)
02:35:48  <coderarity>around 52 MB on each drone
02:36:51  <konklone>wow
02:36:59  <konklone>and only 15 concurrent right now
02:37:01  <konklone>the churn is high though
02:37:06  <konklone>lots of people in and out
02:37:27  <konklone>wow on each of 5 drones
02:37:58  <coderarity>seems to be more memory usage than the test, but nothing crazy
02:38:09  <coderarity>no crashes yet, but i'll keep an eye on them
02:39:34  <konklone>thank you
02:39:46  <coderarity>one is up to 98 MB
02:40:52  <konklone>I added a thing to start measuring the churn, client-side
02:41:00  <konklone>as of like the last 3 minutes
02:41:11  <konklone>37 concurrents, 144 connections passed through
02:41:44  <coderarity>wow, one of these drones is already at 114640 MB
02:41:50  <konklone>114MB you mean?
02:42:02  <konklone>190 connections passed through now
02:42:02  <coderarity>yes
02:42:08  <konklone>so that's ~50 connections/min at this rate
02:42:22  <konklone>this is with, according to Google Analytics' real-time analytics viewer, 356 people on the front page right now
02:42:22  <coderarity>another got up to like 110 MB and then went back down to around 60 MB
02:43:15  <coderarity>konklone: ooops, got them mixed up, all the drones have been staying around 60 MB
02:43:29  <konklone>hah
02:43:32  <konklone>okay
02:43:42  <coderarity>the first one did go up to 114 MB the first time, though
02:43:51  <konklone>hmm, ok
02:44:18  <konklone>288 connections come through
02:44:26  <konklone>yeah, ~50/minute
02:44:40  <konklone>so a bit < 1/sec
02:45:07  <konklone>coderarity: thank you for doing all this monitoring
02:45:08  * zz_shykeschanged nick to shykes
02:45:49  <Sly>konklone: Sorry to see you're still having problems with this. Hopefully it gets worked out, though. :)
02:46:12  * joshonthewebquit (Quit: Computer has gone to sleep.)
02:46:20  <konklone>yeah, I'm just glad I have enough time to test it out and figure out how to best do this
02:46:38  <konklone>and very glad to have such a responsive support team/channel
02:48:25  <coderarity>okay, so the second drone has stayed at around 100-120 MB consistently
02:49:46  <coderarity>third one was at 60, then down to about 40, now at about 110 MB
02:49:49  * admcjoined
02:50:13  <konklone>that's wild
02:50:21  * admcquit (Client Quit)
02:50:49  * generalissimojoined
02:50:50  <konklone>stayed up steady for 18 minutes so far
02:51:10  * anoemiquit (Quit: anoemi)
02:51:21  * anoemijoined
02:51:34  <konklone>traffic down to 234 people on the prod site at any one time (obviously not all of them are establishing socket connections)
02:51:48  <konklone>weird, I'd expect more than 27 out of 234
02:51:56  <konklone>(27 is the concurrent connections I see in my tab)
02:52:34  <konklone>I wonder if mobile iOS/Android will open up sockets
02:53:23  <konklone>visiting it on Chrome on my Galaxy Nexus (Android 4.2) shows me the flags moving around, so that's pretty cool
02:54:23  <konklone>(aside: moving my mouse on my laptop and seeing the flag move around on my phone, connected over 3G, is a pretty neat experience)
02:54:33  <konklone>no latency at all
02:55:48  <coderarity>konklone: did you make any changes when you deployed the test drone?
02:56:35  <konklone>no...
02:56:55  <coderarity>alright, just checking :P
02:57:18  <konklone>incidentally, what's the memory usage of the test drone like?
02:57:20  <konklone>http://iic_sockets_test.jit.su/
02:57:23  <konklone>I can't get to it
02:57:32  <coderarity>yeah, that's my bad, lol
02:57:39  <konklone>here's something interesting: the test drone, and the 5 production drones, they share the same redis store
02:58:04  <coderarity>konklone: oh, redeploy it
02:58:04  <konklone>so the test drone will experience a lot of the load that the system at large is experiencing
02:58:07  <coderarity>the test app, that is
02:58:07  <konklone>ok
02:58:23  <konklone>restart, or redeploy?
02:58:29  <coderarity>`jitsu start` works
02:58:38  <konklone>ok, doing so
02:58:54  <coderarity>we'll see if we find something out there
02:59:24  <konklone>got a 500 when starting it
02:59:32  <konklone>trying again
02:59:44  <konklone>ok, worked that time
02:59:48  <konklone>and http://iic_sockets_test.jit.su/ works now
03:00:45  <konklone>huh, the test drone is not streaming through the connections from the production drones as I would expect
03:01:19  <coderarity>yeah, and only at 24 MB
03:02:03  <konklone>well fuck
03:02:11  * niallopart
03:02:12  <konklone>apparently I was *not* having any of them run through the redis store
03:02:24  <konklone>I had turned it off in config.js, they're all doing memory store
03:02:30  <konklone>so there's no redis brokering at all
03:02:31  <coderarity>well, that might isolate the location of the problem :P
03:02:43  <konklone>all right, I'll turn on redis brokering and restart the production drones
03:04:51  <konklone>this would also explain why I was getting a lower-than-expected number of concurrents in-browser compared to how many Google said were on the page
03:05:18  <konklone>though normal traffic on the page is down to 154 now, the spike has subsided
03:06:59  <konklone>this might make a big difference
03:07:05  <coderarity>i hope it does :P
03:07:24  <konklone>trying to get the new one deployed, it failed the first time
03:07:46  * admcjoined
03:07:53  <konklone>it feels like in the past, the more drones I've had, the higher the chance of a deploy/activate failing because of a timeout
03:08:36  <konklone>no, failed again :(
03:08:52  <konklone>error: Error running command snapshots activate
03:08:52  <konklone>error: socket hang up
03:08:52  <konklone>info: jitsu's client request timed out before the server could respond
03:08:58  <konklone>will try again..
03:09:12  <AvianFlu>konklone: `jitsu config set timeout 900000`
03:09:16  <AvianFlu>that's a local timeout
03:09:21  <konklone>:o
03:09:23  <konklone>oh that is helpful
03:09:37  <konklone>thank you
03:09:57  * joshsmithjoined
03:10:33  <konklone>hmm, this one I got just now was a server issue though
03:10:47  <konklone>https://gist.github.com/acac5b06af53033b156b
03:11:15  <konklone>trying again (this time with bigger client timeout)
03:11:28  <AvianFlu>yeah, that's an ongoing occasional problem, the "took too long to listen on a socket" problem
03:11:32  <AvianFlu>we're looking into that one
03:11:34  * joshonthewebjoined
03:11:35  <AvianFlu>just try it again in the meantime
03:12:31  <konklone>same failure - I'll just keep trying and will let you know when it succeeds
03:12:37  <konklone>the chances of it happening are quite high with a higher number of drones
03:13:51  <konklone>okay, it worked
03:14:08  * dylangquit (Quit: dylang)
03:14:24  <konklone>coderarity: production drones are re-deployed with redis brokering
03:15:17  <coderarity>okay
03:15:27  <coderarity>test drone is still at 25 MB
03:15:32  <coderarity>let me go look at production drones
03:15:52  <konklone>I am now redeploying the test drone with redis brokering
03:16:05  <konklone>it will be interesting because the *only* load the test drone will experience is messages sent from redis
03:16:22  <konklone>it won't have any connections of its own (assuming you all closed your connections to it as well)
03:16:22  <AvianFlu>a good isolation test
03:16:32  <AvianFlu>this is what we'll have to do here
03:16:42  <AvianFlu>keep thinking of ways to isolate different pieces of it until one stands out
03:17:51  <konklone>ok, test drone is deployed with redis brokering - will briefly connect just to verify that it's getting the events passing through the production site
03:18:00  <konklone>yep, it is
03:18:04  <konklone>disconnected from it
03:23:49  <coderarity>okay, let me see all the memory stuff
03:24:42  <coderarity>1st drone - 41 MB, second drone - 40 MB 3rd drone - 65 MB test drone - 37 MB
03:24:51  <coderarity>those were not all done at exactly the same time, btw
03:25:43  <coderarity>1 - 39 MB 2 - 39 MB 3 - 68 MB test - 32 MB
03:26:00  <konklone>is the production app only using 3/5 drones?
03:26:13  <coderarity>no, that's all i've got open right now
03:26:26  <coderarity>i've cleared up a few terminal windows so i'll go ahead and look at the last two
03:27:46  * jmar777quit (Remote host closed the connection)
03:28:22  * jmar777joined
03:29:45  <konklone>ok
03:30:35  <coderarity>4 is at 80 MB and 5 is at 58 MB
03:30:51  <coderarity>1st is at 80 MB too
03:32:03  <konklone>this seems about the same as before
03:32:06  <coderarity>test drone is at about 50 MB
03:32:13  <coderarity>which is almost twice as much as it was before
03:32:16  <coderarity>yeah
03:32:23  <konklone>yeah, but the test drone wasn't seeing any broadcasts at all
03:32:34  <konklone>it's actually handling connections from redis now
03:32:37  <konklone>or messages, rather
03:33:06  * jmar777quit (Ping timeout: 264 seconds)
03:33:17  <konklone>socket.io is going to make a new connection object when it gets a new connection message from redis, so that's expected
03:33:27  <coderarity>56, 72, 70, 91, 57 in order, test at 56
03:33:29  <konklone>the raw (actual) connection doesn't take up much memory, it's socket.io's bookkeeping
03:33:30  <coderarity>all in MB
03:33:32  <konklone>hmm
03:33:44  <konklone>yeah, so there's like a baseline load caused by redis
03:33:51  <konklone>though there's no evidence that redis causes a memory leak
03:34:23  <konklone>just that the brokering costs memory, which is expected
03:34:38  <konklone>splitting the load across drones only helps with the front half of the load
03:34:48  <konklone>connections on drone 4 still cause drone 1 to do work
03:34:52  <konklone>just, less work
03:35:02  * sorensenquit (Ping timeout: 246 seconds)
03:35:52  <coderarity>i see
03:36:21  <konklone>I'm contemplating writing my own brokering using sockjs
03:36:30  <konklone>which is a vastly thinner layer over websockets than is socket.io
03:36:44  <coderarity>i mean, the test drone is now at 67 MB, up like 10 MB, and afaik nothing is affecting that other than redis stuff
03:36:49  <konklone>huh...
03:37:05  * davidbanhamjoined
03:37:06  <konklone>yeah, only 36 concurrent showing to my in-browser client
03:37:07  * davidbanhamquit (Client Quit)
03:37:49  <konklone>really shouldn't be generating this much memory usage
03:38:20  <coderarity>108, 93, 70, 106, 92 in order, test at 79
03:38:26  <konklone>wow
03:38:30  <konklone>that is a direct upward trajectory
03:38:44  <coderarity>yeah
03:38:52  <coderarity>and it only happens with redis working between all the drones
03:39:20  <konklone>this makes a ton of sense though - the most obvious memory leak is one where clients show up and then leave, but their leaving doesn't get reflected across all drones
03:39:27  <konklone>I actually observed this behavior on my dev machine
03:39:31  <konklone>with just two processes
03:39:45  <konklone>have 2 people show up to each one, for 4 total
03:39:47  * sorensenjoined
03:39:51  <konklone>both processes see 4 clients
03:39:55  <konklone>then disconnect the 2 from one process
03:39:58  <coderarity>i see
03:40:01  <konklone>that process sees just 2, but the other one sees 4 still
03:40:31  <AvianFlu>not propagating disconnect events properly, perhaps?
03:41:20  <konklone>this looks about right: https://github.com/LearnBoost/socket.io/issues/1040
03:41:47  <konklone>and the OP says he solved it by switching to SockJS, with the same node version
03:42:06  <coderarity>WOAHHH
03:42:17  <coderarity>5th drone has a TON of TIME_WAIT connections
03:42:17  <konklone>o_O
03:42:30  <konklone>what does that mean?
03:42:32  <coderarity>hmmm, hold up
03:42:33  <konklone>(I'm not a very good sysadmin)
03:42:46  <coderarity>neither am I, that's why we have AvianFlu
03:43:05  <AvianFlu>a socket goes through a number of different states
03:43:10  <AvianFlu>there are at least 8 or 10 in total
03:43:34  <AvianFlu>TIME_WAIT is the state they're in after they're closed, and the kernel keeps them dead to cool down and make sure both ends are really disconnected
03:43:43  <AvianFlu>too many of them, and some kind of socket pooling should be introduced
03:43:48  <AvianFlu>apparently not happening here
03:44:21  <coderarity>konklone: do you use aws at all?
03:44:28  <konklone>in my work work, yeah
03:44:35  <coderarity>i mean, in this app?
03:44:39  <konklone>not that I know of
03:44:49  <konklone>the Redis server is on redistogo
03:44:49  <AvianFlu>how about redistogo
03:44:53  <konklone>which actually, yeah, is on AWS
03:44:55  <AvianFlu>which is on AWS
03:44:57  <AvianFlu>:D
03:45:04  <konklone>though there's a finite number of connections to redis, though - 3 per drone
03:45:27  <coderarity>i don't see any AWS connections on any other drone, only the last one
03:45:34  <coderarity>and 47 TIME_WAIT connections to AWS on that last drone
03:46:00  <AvianFlu>it's apparently not reusing sockets
03:46:06  <AvianFlu>but 47 isn't that many TIME_WAIT
03:46:19  <AvianFlu>the connection churn on the redis instance, though, could add up
03:46:37  <konklone>I don't think the node drones are re-connecting to redis, though...
03:46:50  <konklone>or at least, it's not like every user connection causes a redis connection
03:47:07  <AvianFlu>try to add some logging around it maybe
03:51:55  <konklone>hmm
03:52:04  <konklone>yeah I can look at that
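A rough sketch of that kind of logging, assuming the three redis clients handed to socket.io's RedisStore (pub, sub, and the plain command client) are in scope; 'connect', 'ready', 'end' and 'error' are standard node_redis events:

```js
// Attach logging to each of the store's redis clients so that reconnect
// churn shows up in `jitsu logs`.
function logRedisEvents(name, client) {
  client.on('connect', function () {
    console.log('[redis:' + name + '] connected');
  });
  client.on('ready', function () {
    console.log('[redis:' + name + '] ready');
  });
  client.on('end', function () {
    console.log('[redis:' + name + '] connection closed');
  });
  client.on('error', function (err) {
    console.log('[redis:' + name + '] error: ' + err.message);
  });
}

logRedisEvents('pub', pub);       // assumes pub/sub/client from the store setup
logRedisEvents('sub', sub);
logRedisEvents('client', client);
```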
03:55:55  * anoemiquit (Quit: anoemi)
03:57:33  <st_luke>SEAL THE DEAL
03:58:04  <konklone>but wait, can't you identify the TIME_WAIT IPs or something
03:58:19  <konklone>we should be able to identify whether they are redis<->drone or user<->drone, right?
03:58:22  <konklone>especially because
03:58:28  <konklone>the drones connect right to redis directly
03:58:33  <AvianFlu>konklone: they're from AWS
03:58:34  <konklone>but users hit the drones through the proxy
03:58:35  <AvianFlu>so it's redis
03:58:51  <konklone>ok
04:09:14  <coderarity>konklone: 1st drone is now at 217 MB, about to crash
04:09:19  <konklone>wow
04:09:34  <coderarity>second is at 227 MB
04:09:42  <konklone>I've just about got logs tied up to redis events
04:09:45  <konklone>testing it locally
04:10:04  <konklone>actually coderarity: the logs show a whole bunch of crashes already
04:10:07  <coderarity>21 MB for third, only 140 MB for 4th
04:10:14  <coderarity>oh, maybe I missed them :O
04:10:15  <konklone>the third probably already restarted
04:10:32  <konklone>https://gist.github.com/4146570
04:10:43  <coderarity>test drone is similar, 212 MB
04:11:09  * sorensenquit (Ping timeout: 246 seconds)
04:11:30  <AvianFlu>coderarity: there you go!
04:11:44  <AvianFlu>it's definitely the pubsub somehow, and dangling references in the JS
04:11:49  <coderarity>yes
04:12:22  <konklone>ok, testing redis logging locally
04:14:11  <konklone>okay, tested, I am deploying to production now
04:14:40  <konklone>I guess if we see a lot of excess logging, then the TIME_WAITs from AWS are explained
04:14:44  <konklone>I don't know if that explains all the memory usage though
04:14:48  <konklone>we may be seeing two independent phenomena
04:14:55  <konklone>the TIME_WAITs being less impactful than the memory leaks
04:14:58  <konklone>this is my suspicion anyway
04:15:07  <coderarity>i think it's a JS thing
04:15:37  <konklone>I would certainly like to isolate it because even if I do switch to SockJS, I'll be using a Redis store
04:16:07  <AvianFlu>this could just be a leak in socket.io's redis store
04:16:12  <AvianFlu>the TIME_WAIT sockets wouldn't cause OOM
04:16:16  <AvianFlu>they'd cause other slowdowns
04:16:30  <konklone>yeah I suspect it is a leak in socket.io's redis store, or in some effect of using *any* non-memory store
04:16:42  <konklone>where processes are keeping track of clients that did not originally connect to that process
04:18:14  <konklone>okay, I deployed to the production drones with redis logging
04:18:17  <konklone>deploying to test drone now
04:19:41  <konklone>what's memory usage like?
04:19:55  * sorensenjoined
04:20:12  <konklone>I am seeing some redis reconnects though not with a frenzy or anything
04:20:18  <konklone>once every 40 seconds or so
04:20:37  <AvianFlu>that sounds about right
04:20:42  <AvianFlu>that number of sockets he saw wasn't insane
04:20:51  <AvianFlu>and they only stay in that state for 2 or 3 minutes
04:21:14  <konklone>yeah
04:21:25  <konklone>okay now it's a little faster
04:21:28  <konklone>every 6-8s
04:21:42  <AvianFlu>any idea what they're coming from?
04:22:28  * shykeschanged nick to zz_shykes
04:22:30  <konklone>interestingly, they're always from the "pub" and "client" clients
04:22:34  <konklone>not the "sub" client (that receives data)
04:22:36  * standoojoined
04:22:43  <konklone>"pub" sends data to redis, and "client" is there to issue other commands
04:23:27  <konklone>er also...
04:23:29  <konklone>error: http://sockets.isitchristmas.com/
04:23:35  <konklone>not sure what's up there
04:25:26  <AvianFlu>what are you seeing?
04:25:27  <AvianFlu>it looks up to me
04:25:34  <coderarity>database was slow
04:25:36  <coderarity>it just updated
04:26:30  <konklone>yeah, lots of reconnects every 6-15s
04:26:32  <konklone>redis reconnects
04:26:36  <konklone>I think that explains the TIME_WAITs to me
04:26:49  <konklone>and by "lots", I mean "about one reconnect every 6-15s"
04:26:53  <konklone>which is unexpected, sure
04:27:00  <coderarity>over 5 drones?
04:27:03  <konklone>right
04:27:17  <konklone>so yeah for each one, probably once every 30s-75s
04:27:22  <konklone>or it's probably staggered there
04:27:33  <konklone>every 40s or so, like I first thought (while the other drones were still starting, maybe)
04:28:09  <konklone>so, okay
04:28:11  <konklone>I think
04:28:14  <konklone>I am going to switch to SockJS
04:28:18  <konklone>with my own very light Redis brokering
04:28:45  <konklone>and probably holding on to nothing extra
04:29:04  <konklone>each drone only manages the connections connected right to it
04:29:10  <konklone>and ignores all irrelevant messages from redis
04:29:44  <coderarity>cool
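A minimal sketch of the approach being described, using sockjs-node plus plain redis pub/sub. The drone id, channel name, redis host/password, and message shape are placeholders; the point is that each drone only tracks its own connections, keeps no per-client state for other drones, and drops messages that originated with itself:

```js
// Rough sketch only: one shared pub/sub channel, no cross-drone bookkeeping.
var http   = require('http');
var sockjs = require('sockjs');
var redis  = require('redis');

var DRONE_ID = String(process.pid);            // placeholder drone identity
var CHANNEL  = 'broadcast';                    // placeholder channel name

var pub = redis.createClient(6379, 'redis.example.com'); // placeholder host
var sub = redis.createClient(6379, 'redis.example.com');
pub.auth('secret');                            // placeholder password
sub.auth('secret');

// Only connections attached to *this* drone live in here.
var local = {};
var nextId = 0;

function broadcastLocal(data, exceptId) {
  Object.keys(local).forEach(function (id) {
    if (id !== exceptId) local[id].write(data);
  });
}

var sockServer = sockjs.createServer();
sockServer.on('connection', function (conn) {
  var id = String(++nextId);
  local[id] = conn;

  conn.on('data', function (message) {
    broadcastLocal(message, id);               // deliver to local clients now
    pub.publish(CHANNEL, JSON.stringify({ from: DRONE_ID, data: message }));
  });

  conn.on('close', function () {
    delete local[id];                          // nothing else to clean up
  });
});

sub.subscribe(CHANNEL);
sub.on('message', function (channel, raw) {
  var msg = JSON.parse(raw);
  if (msg.from === DRONE_ID) return;           // ignore our own messages
  broadcastLocal(msg.data, null);              // relay everything else locally
});

var server = http.createServer();
sockServer.installHandlers(server, { prefix: '/sockets' });
server.listen(process.env.PORT || 8080);
```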
04:30:10  * bardujoined
04:30:12  * bardupart
04:31:43  <konklone>coderarity, AvianFlu - thank you *a ton* for working through this with me
04:31:44  <konklone>so so helpful
04:31:50  * thepumpkinjoined
04:32:00  <konklone>and I hope it helps others doing websockets stuff on nodejitsu
04:32:02  <konklone>to learn from it
04:33:04  <AvianFlu>yeah, I'll certainly know what to do if *this* happens again XD
04:33:11  <AvianFlu>and we're usually here if anything else weird happens
04:33:22  <AvianFlu>also - this might not necessarily be a leak
04:33:37  <AvianFlu>if all 5 drones are all keeping the total aggregate state of the system in-memory inadvertently
04:33:46  <AvianFlu>the total pile of memory might just add up that high
04:34:01  <AvianFlu>and they'd be out-of-sync a bit based purely on load and coincidence
04:34:17  <AvianFlu>and leaks here and there would compound it
04:34:53  <AvianFlu>and all our efforts to add more drones would just increase the amount of communication happening, lolz
04:37:01  <coderarity>i mean, adding more drones didn't really help
04:37:31  <coderarity>in any case, all this thinking makes me hungry
04:37:49  <konklone>yeah, this is also true
04:38:04  <konklone>but whether it's keeping the state of the system, or an actual leak - both can be fixed by ditching socket.io's redis store
04:38:15  * hugo_dcjoined
04:38:16  <konklone>and making it so processes do not keep the aggregate state of the system
04:38:18  <AvianFlu>yes. I support your decision.
04:38:30  <konklone>:)
04:38:39  <konklone>well hopefully I can do something that is reasonably extract-able for others
04:38:47  <konklone>though I am on a tight enough time frame that I may do that extraction post-christmas
04:38:54  <konklone>but I'll try to keep it in mind as I work
04:39:02  <AvianFlu>that's the spirit :)
04:39:27  <AvianFlu>GET THIS SHIT INTO PRODUCTION - BUT OPEN SOURCE IT SOON THEREAFTER
04:39:31  <AvianFlu>that's like, my rule #5
04:39:41  <AvianFlu>...except for all the other "rule #5"s
04:39:56  <konklone>rule 5(5)
04:40:44  <konklone>you guys mind if I post an abridged version of this chat into a public gist to reference on some Github tickets?
04:41:30  <AvianFlu>no, that's fine
04:41:45  <AvianFlu>maybe show it to us first just to be sure
04:41:59  <konklone>yeah, I will
04:42:00  <AvianFlu>but I don't think anybody said anything too stupid or offensive in the last hour or so XD
04:42:13  * thl0joined
04:42:16  <konklone>I'm removing all the enter/exits, anything not said by us 3, and all the irrelevant stuff
04:43:32  <AvianFlu>sounds like a good plan
04:43:54  <coderarity>konklone: wait, the world can't know i'm hungry
04:43:58  <coderarity>lol
04:51:30  <konklone>coderarity: I deleted that line :)
04:53:52  * YoYquit (Ping timeout: 256 seconds)
04:55:45  * mesoquit (Remote host closed the connection)
04:55:50  <st_luke>my cousin vinny is no longer on netflix instant, bullshit
04:56:23  * YoYjoined
04:57:55  <konklone>coderarity, AvianFlu - https://gist.github.com/4146668
04:58:06  <konklone>I got rid of irrelevant stuff and fixed typos, added a header describing the situation
04:58:11  <konklone>it is public, but let me know if I should remove it
04:58:17  <konklone>and I shall!
05:02:17  <coderarity>seems fine to me
05:04:01  <AvianFlu>yeah it's kinda long, but it's all fine
05:05:10  * jaswopejoined
05:06:17  <jaswope>I couldn't seem to find this info, is there a way to have multiple accounts administer a single app on nodejitsu?
05:06:22  * sorensenquit (Ping timeout: 246 seconds)
05:06:34  <jaswope>Or do I need to create a shared user account for it?
05:06:45  <coderarity>shared account for now
05:06:53  <jaswope>Alright, thanks
05:08:25  <konklone>yeah it's long, 'swhy I summed it up at the top, and anyone who cares about the details can skim it
05:14:26  <konklone>*shrug* it'll come in handy some day for somebody
05:23:27  <mdedetrich>man the thing I hated about going to a dynamic language
05:23:33  <mdedetrich>is the number of errors you make
05:23:43  <mdedetrich>almost all of which would have been picked up in a static language
05:23:50  <mdedetrich>(I'm talking about stupid errors here)
05:29:41  * motiooonquit (Quit: motiooon)
05:30:47  * konklonepart
05:34:11  * YoYquit (Ping timeout: 245 seconds)
05:35:23  * prenaudjoined
05:37:22  * YoYjoined
05:38:45  * _yoy_joined
05:39:38  * prenaudquit (Ping timeout: 245 seconds)
05:42:03  * YoYquit (Ping timeout: 260 seconds)
05:42:03  * _yoy_changed nick to YoY
05:42:42  <coderarity>mdedetrich: i think you could have static typing in a dynamic language, it would just throw a runtime error, which is not ideal
05:42:49  <coderarity>mdedetrich: that's why things like typescript exist
05:42:54  * rodwjoined
05:43:32  <mdedetrich>@coderarity: true, but the problem with typescript is it won't have static typing with the libraries you use unless the library is written in typescript
05:43:46  <mdedetrich>also the runtime errors are not always obvious, as we all know
05:44:34  <rodw>Does "Error: socket hang up" mean anything to anyone? I'm getting a failure on `jitsu snapshot activate`, but it seems to run fine locally.
05:44:37  <mdedetrich>I just so wish that someone made a scala->js compiler
05:44:38  * hugo_dcquit (Quit: Ex-Chat)
05:44:43  <mdedetrich>that would be so awesome
05:44:56  * sorensenjoined
05:45:58  * AvianFluquit
05:47:35  <rodw>error output is at http://pastebin.com/eqgEf3sJ
05:48:58  <coderarity>rodw: hi
05:49:12  <coderarity>rodw: username/appname?
05:49:15  * thl0quit (Remote host closed the connection)
05:49:47  <coderarity>rodw: also, `jitsu config set timeout 10000`
05:49:53  <rodw>@coderarity : username `rodw`, appname `m.shrm`
05:50:13  <coderarity>rodw: after doing that command, give it another shot, and i'll see if there's anything wrong over here
05:50:46  <rodw>ack'd, i'm trying it again now
05:52:47  <rodw>that actually seems to fail faster, with slightly different error (http://pastebin.com/GEM7Gbsr)
05:53:05  <rodw>I've bumped the timeout up to 30000, just to see what will happen
05:53:13  <coderarity>that's a good idea :P
05:53:24  <coderarity>if it's getting ETIMEDOUT you should definitely raise the timeout
05:53:41  <rodw>that also failed, with the "socket hang up" error again
05:53:46  * luismreisquit (Ping timeout: 245 seconds)
05:54:46  <rodw>btw, my app doesn't start slowly locally
05:55:18  <coderarity>okay, it's getting a spawn ENOMEM when unpacking
05:55:52  <coderarity>which is weird, but we can probably fix it - what's the output from `tar -tf $(npm pack)`
05:56:04  <coderarity>run inside the directory of your package.json
05:56:49  <rodw>ok, one moment
05:58:25  <rodw>enomem means out of memory?
05:58:30  <coderarity>yeah
05:58:57  <coderarity>it's a specific type of ENOMEM, which happens when the unpacking process uses a ton of memory
05:59:22  <coderarity>rodw: what snapshot version were you trying to activate?
05:59:24  * thl0joined
05:59:33  * ds1joined
05:59:43  <rodw>I did add ~100 jpg files to the app since I last deployed, but the whole tgz file is only 21M.
05:59:54  <rodw>0.5.0
06:00:00  <coderarity>okay
06:00:24  <ds1>hey all, I'm trying to get set up on Jitsu and have run into an issue. Anyone here able to lend a hand?
06:00:35  <coderarity>ds1: what's up?
06:01:00  <ds1>I'm having errors with soynode, a closure template lib
06:01:24  <ds1>I'm unsure if it's: because it runs in a child process or if it's because it's a .jar
06:01:33  <coderarity>what's the error?
06:01:39  <coderarity>also, i highly doubt we have java installed
06:01:44  <ds1>https://github.com/Obvious/soynode/blob/master/lib/soynode.js
06:01:45  * tmpvarquit (Ping timeout: 276 seconds)
06:01:58  <ds1>error is generic, just that it can't find the template
06:02:08  <ds1>but it's a catch-all, so it's not very helpful
06:02:27  <ds1>I've emulated 'production' on my local machine, and it works as expected
06:02:29  <coderarity>it won't work
06:02:34  <coderarity>you're spawning java, and we don't have that
06:02:36  <ds1>no java?
06:02:38  <ds1>le sigh
06:02:49  <coderarity>you can email [email protected] and we can see about getting it integrated
06:02:52  <coderarity>but no promises
06:02:57  * generalissimoquit (Remote host closed the connection)
06:03:01  <rodw>the tar output is now at http://pastebin.com/WanMFGiC
06:03:35  <coderarity>rodw: can you also show me `ls -l`?
06:04:20  <ds1>@coderarity thanks for the help
06:04:28  <coderarity>np
06:04:33  <ds1>I'll drop them an email. Would be _amazing_ if java were on the instances
06:04:56  <rodw>Yes, one sec. BTW, I just noticed that, when expanded, my package comes out to 173M. Is that too much?
06:05:18  * tmpvarjoined
06:05:38  <ljharb>i've had problems with 40MB packages
06:06:50  <rodw>ls -l is at http://pastebin.com/Zjsiq3Tj
06:07:44  <rodw>@ljharb is that 40MB as a tgz, or 40MB after unzipping and untarring?
06:08:04  <ljharb>as a tgz. i don't remember the full size
06:08:14  * mesojoined
06:08:16  * generalissimojoined
06:08:22  <ljharb>but why is your app that big? mine was that big because my dependencies had huge log files in them (which they shouldn't have)
06:08:44  <rodw>static files, for the most part
06:08:51  <rodw>(images and such)
06:09:18  <ds1>question, how do I see the tar / package that's being sent across to jitsu?
06:09:24  <rodw>I can probably trim it down a bit, but in the grand scheme of things 180mb isn't all that large
06:10:35  * tomshredsjoined
06:10:47  <tomshreds>hey what was that cool website where you could test website in multiple browsers through vms?
06:11:13  <coderarity>rodw: yes
06:11:20  * generalissimoquit (Remote host closed the connection)
06:11:26  <coderarity>sorry, have to get brownies out of the oven
06:11:29  <rodw>Yes, too large?
06:11:41  <coderarity>173 MB might be too large, yeah
06:11:55  <coderarity>i mean, we haven't had an ENOMEM in a while :P
06:12:00  <ljharb>rodw: why not host that content somewhere else?
06:12:36  <coderarity>rodw: yeah, you can host that content on something like amazon S3, it should be faster to put it there anyways :P
06:12:42  <rodw>ljharb: right now, because this is a development instance
06:12:48  <tomshreds>anyone? was it browsify or something?
06:12:53  <tomshreds>browserling
06:13:01  <tomshreds>YEAH found it
06:13:03  <coderarity>yeah
06:13:04  <tomshreds>nevermind
06:13:06  <ljharb>browserstack browserling
06:13:36  <coderarity>rodw: anything over 50 MB shows an error, but it's 180 MB because of a ton of dependencies
06:13:53  <coderarity>rodw: the drones themselves are pretty small, so if you have huge packages it has trouble untarring them
06:13:58  <substack>tomshreds: yep! I run that site with pkrumins
06:14:06  <substack>so if you have problems just bug me over irc
06:14:15  * thl0quit (Remote host closed the connection)
06:15:24  <rodw>Yes my node_modules (including devDependencies) is 110M
06:16:27  * tomshredsquit (Quit: Linkinus - http://linkinus.com)
06:17:27  <rodw>In contrast, my "docroot" is ~22 mb
06:18:01  <rodw>so I need to trim down node_modules?
06:19:39  <rodw>(I've only got 9 entries directly in my dependencies, but several of those have a lot of dependencies in turn)
06:20:56  <coderarity>well, anything you take out helps
06:21:19  <coderarity>but you can bundle dependencies and then remove things from those dependencies, like test folders
06:21:36  <coderarity>check out package.json.jit.su for information on bundleDependencies
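For reference, bundling looks roughly like this in package.json (module names here are just examples; npm accepts both the "bundledDependencies" and "bundleDependencies" spellings):

```json
{
  "name": "my-app",
  "version": "0.5.0",
  "scripts": {
    "start": "server.js"
  },
  "dependencies": {
    "express": "2.x",
    "socket.io": "0.9.x"
  },
  "bundledDependencies": ["express", "socket.io"]
}
```

Bundled modules are packed from the local node_modules as-is, so anything pruned out of them (test folders, docs) stays pruned in the deployed tarball.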
06:21:41  <rodw>is there a way to make `npm pack` leave out my devDependencies?
06:22:06  <rodw>that's where most of the space is
06:22:39  <rodw>(zombie is ~50MB, jscoverage is another ~20MB)
06:22:57  <coderarity>it should already?
06:24:16  <rodw>I've been running `make test` (essentially `npm test`) as part of my deploy script, so maybe it's just picking up whatever is already in node_modules?
06:24:30  <rodw>Or is that happening on the jitsu side of things?
06:24:46  <coderarity>should happen locally, if you did a predeploy script
06:26:03  <rodw>The package tgz doesn't include anything but what's in my bundleDependencies.
06:26:43  <rodw>Is there a way to tell nodejitsu to `npm install` the production dependencies only?
06:27:47  <ds1>@rodw make sure NODE_ENV is production
06:27:58  <ds1>and/or, $ npm install --production
06:28:15  <rodw>My local NODE_ENV shouldn't matter, right?
06:28:26  <ds1>https://npmjs.org/doc/config.html#production
06:28:46  <ds1>dev dependencies are only *not* installed at the top-most level when production is flagged to true
06:28:50  <coderarity>your local node_eenv doesn't matter
06:29:23  <coderarity>rodw: wait, so when you do `jitsu snapshots fetch` of the snapshot you're trying to deploy, does it have devDependencies installed?
06:29:24  * Arrojoined
06:30:30  <rodw>Didn't know I could fetch it back. Maybe this is a red herring and my package is too big without the dev dependencies. You're right, I don't have any evidence that nodejitsu is installing them, I just know that they are large locally.
06:31:06  <Arro>getting socket hangup on database create. just me or everyone?
06:31:20  <coderarity>Arro: i see
06:31:26  <coderarity>gimme a second
06:31:39  * travis-cijoined
06:31:39  <travis-ci>[travis-ci] nodejitsu/node-http-proxy#71 (master - 22639b3 : Maciej Małecki): The build passed.
06:31:39  <travis-ci>[travis-ci] Change view : https://github.com/nodejitsu/node-http-proxy/compare/886a395429f2...22639b378189
06:31:39  <travis-ci>[travis-ci] Build details : http://travis-ci.org/nodejitsu/node-http-proxy/builds/3359435
06:31:39  * travis-cipart
06:31:49  * Slyquit (Remote host closed the connection)
06:31:54  * sreeixjoined
06:32:35  <rodw>Odd, `jitsu snapshots fetch` is giving me a truncated tgz file back.
06:32:44  <coderarity>rodw: you have a package directory with a copy of everything in it, delete that
06:32:51  * zz_shykeschanged nick to shykes
06:33:20  <rodw>I noticed that too, but the 0.5.0-1 version seemed to fail also.
06:33:42  <coderarity>Arro: seems fine to me, maybe try `jitsu config set timeout 30000` and give it another shit\
06:33:45  <coderarity>shot*
06:33:48  <coderarity>lol
06:34:05  <rodw>Although I don't see that in the snapshot list. Let me try again.
06:34:40  <Arro>@coderarity ok tried that, still "socket hang up"
06:34:51  <Arro>and occasionally "connect ECONNREFUSED"
06:35:03  <coderarity>Arro: can you paste the complete output to gist.github.com?
06:35:41  <Arro>https://gist.github.com/4146858
06:37:03  <Arro>just a thought: your servers may hate my IP. I had a node knockout entry where I had a restart script, and got an email about it yesterday telling me to shut it off.
06:37:22  * Slyjoined
06:37:32  <Arro>that would be on a different account, but same ip
06:38:01  <coderarity>restart script?
06:38:37  <Arro>yeah now i can't even log in.
06:39:06  <Arro>i just had a python script that restarted my app every 10 minutes. i wasn't allowed to touch my node.js code.
06:39:10  * gorillatronquit (Ping timeout: 250 seconds)
06:40:05  * gorillatronjoined
06:41:54  * metafedorajoined
06:42:14  <ds1>thanks for the help @coderarity, hopefully Java appears. Not wanting to us heroku :/
06:42:19  <ds1>use*
06:42:33  <rodw>repackaged everything, but now I'm gettng ECONNREFUSED for anything I try to do against the jitsu server
06:42:49  <rodw>(e.g.. jitsu deploy, jitsu logs)
06:42:50  <coderarity>rodw: okay
06:43:47  <coderarity>Arro: rodw: try again
06:43:56  * ds1part
06:44:30  <coderarity>Arro: wait, you had that python script running locally?
06:44:48  <Arro>@coderarity yes
06:44:48  <coderarity>i'm not sure what happened there, tbh, but it sounds like a bad idea
06:44:55  <Arro>yeah, for sure
06:45:16  <rodw>ack'd, I can hit `jitsu snapshots list` now, trying the deploy
06:45:38  <Arro>after the hackathon i had like no brain power left. so tired. and it kept crashing.
06:46:58  <Arro>@coderarity: i am able to log in again.
06:47:22  * mesoquit (Remote host closed the connection)
06:47:33  <Arro>@coderarity: however, on database create, new problems: https://gist.github.com/4146901
06:47:56  <Sly>Arro: what version of jitsu are you using?
06:48:14  <Sly>The latest version is 0.11.3, so update if it's lower than that.
06:48:22  <Arro>ok will do
06:48:50  <Arro>(it was lower)
06:48:54  <coderarity>yep, that's the problem
06:49:05  * mappumjoined
06:49:16  <Arro>ok that fixed it
06:49:24  <Sly>Cool.
06:49:28  <Arro>thanks for the help @coderarity, and @Sly
06:49:36  <Sly>np
06:50:07  * standoopart
06:51:05  * NaNjoined
06:51:30  <rodw>@coderarity: sorry for the delay, I'm getting timeouts when uploading the app but I think that's network issues on my end.
06:52:12  <rodw>I also just upgraded jitsu based on Sly's advice to Arro.
06:52:36  <coderarity>rodw: ETIMEDOUT? just set the timeout even higher :P
06:52:44  <coderarity>`jitsu config set timeout 100000`
06:52:53  <coderarity>`jitsu config set timeout 500000`
06:52:59  <coderarity>`jitsu config set timeout 1000000`
06:53:03  <coderarity>lol
06:53:12  <rodw>No, it's failing during the upload.
06:53:22  * mesojoined
06:53:38  <rodw>It is ETIMEDOUT though, lemme try it.
06:53:38  <Sly>rodw, are you sure it's failing?
06:53:46  <coderarity>with the old version? it's probably not
06:53:47  <Sly>Have you checked your snapshots list to see if it actually uploaded?
06:54:21  * nmanousosjoined
06:54:31  <nmanousos>having issues deploying, I keep getting "Script took too long to listen on a socket"
06:54:39  <nmanousos>any advice?
06:54:44  <rodw>just checked, def. not uploading
06:54:59  <Sly>rodw: alright. Just wondering.
06:55:08  <Sly>nmanousos: what is the username/app name?
06:55:36  <nmanousos>nmanousos / coolence
06:55:45  <nmanousos>ah, worked now
06:55:48  <nmanousos>just took a few tries
06:56:16  <nmanousos>small complaint - this is becoming very frequent, maybe 1/2 of the time I can't deploy, takes a few tries to get it done
06:56:33  <nmanousos>is there anything that can be done to make deploys happen more reliably?
06:56:38  <Sly>You may have a dependency that's causing it to time out.
06:56:45  <Sly>Like, if you have a *huge* dependency that has to install.
06:56:50  <coderarity>some of those are bad servers
06:56:55  <Sly>^ that too
06:57:01  <nmanousos>ah
06:57:13  <coderarity>they just go away as we find and fix them
06:57:23  <nmanousos>is there a way to deploy without re-downloading all the deps?
06:57:28  <nmanousos>if its just a code change on my side?
06:57:44  <coderarity>no
06:58:30  <Sly>The short answer, no.. as coderarity said.
06:58:40  <Sly>The long answer, yes.. but it wouldn't help.
06:58:59  <Sly>You can bundle them, but that can still cause timeouts because then you have more to upload to the server.
06:59:10  <nmanousos>hmmm
06:59:11  <nmanousos>ok
07:00:14  <Sly>There's a number of things that could have caused it to time out. It could just simply be a bad drone that maybe hasn't been affected by patches that we've run.
07:00:27  <Sly>We don't run patches on drones that have active applications, to prevent crashing a production app.
07:00:35  <nmanousos>k
07:00:41  <Sly>That's why you guys may run into bad drones that are out of date, compared to others.
07:00:56  <nmanousos>no worries, i can survive with this
07:01:03  * DTrejojoined
07:01:17  <Sly>Cool. Just letting you all know. :)
07:01:17  <nmanousos>is there a way to see what my capacity is like now?
07:01:22  <nmanousos>i appreciate it :)
07:01:25  <Sly>What do you mean by "capacity"?
07:01:36  <nmanousos>to see if I am good on the 1 drone plan i have, or if i need to upgrade
07:01:46  * joshsmithquit (Quit: joshsmith)
07:02:02  <nmanousos>i have an ecommerce app, and i am expecting a huge traffic increase tomorrow w/ cyber monday shopping
07:02:14  <nmanousos>a bit concerned to keep everything up and running
07:02:39  <Sly>If you're concerned about traffic spikes, it would probably be a good idea to have at least 2 drones.
07:02:55  <Sly>That's jmo, though.
07:03:06  <nmanousos>is there a way to actually see though, see where i'm at w/ traffic
07:03:12  <nmanousos>i have no idea
07:03:27  <Sly>Not that I'm aware of.
07:03:56  <nmanousos>maybe a feature idea for you guys, i would upgrade if i could see some dashboard telling me i needed to because of my traffic
07:04:53  <Sly>Well, the thing is.. your app may be able to handle the amount of traffic it gets on one drone. It depends on how fast your app handles a request, what it's serving, etc.
07:05:10  <Sly>We don't limit traffic, so we don't really keep up with what drones are doing network wise.
07:05:23  <Sly>I'm sure there are logs for network traffic somewhere on Joyent's servers, but *we* don't log traffic.
07:05:56  <Sly>You can submit it as a request, though. We're always open to feature requests.
07:06:04  <nmanousos>i understand, but i just have nothing to base this on
07:06:05  <rodw>@coderarity: fyi, got the 0.5.1 version uploaded; still hitting "socket hang up" errors on activate. trying again with timeout 1000000.
07:06:12  <nmanousos>no idea if i need 1 drone, 100 drones, 1000 drones
07:06:24  <rodw>same issue
07:06:32  <nmanousos>so i just got the cheapest and hope that my app doesn't crash if i get too much traffic
07:08:07  <Sly>nmanousos: to give you an idea, our blog.jit.su site uses 2 drones.
07:08:27  <Sly>The drones can handle traffic pretty well. Like I said, it just depends on how fast your app responds to requests and what it's serving.
07:08:32  <coderarity>nmanousos: if your app crashes, it'll restart - and you'll see an error in `jitsu logs`
07:08:38  <Sly>^
07:08:41  <coderarity>if you see a memory error in `jitsu logs`, it may be time to add drones
07:08:45  <nmanousos>that's true.. some of the times
07:09:09  <nmanousos>twice in the past couple days, it has crashed due to out of memory errors, and not restarted
07:09:24  <coderarity>rodw: okay, second
07:09:29  <Sly>If that happens again, let us know and we'll look into it.
07:10:01  <nmanousos>ok
07:10:31  <nmanousos>totally unrelated - anyone know how to "set the default write concern" for mongodb?
07:11:00  <nmanousos>i set safe:false when i init the db, but it keeps asking me
07:11:03  <coderarity>rodw: same thing
07:11:13  <coderarity>rodw: still ENOMEMing, need to make it smaller
07:11:58  <rodw>To clarify, I need to make my package file (tgz file) smaller?
07:12:19  <coderarity>yeah, `npm pack`
07:15:02  * mdedetrichquit (Quit: Computer has gone to sleep.)
07:17:08  <coderarity>rodw: we might have to talk to someone and follow up, but we'll see - I'd just try to make it smaller if possible and deploy a few more times
07:17:31  <rodw>coderarity: ack'd. I'm stripping out everything optional *and* the ~15MB of images added since the last good deploy.
07:18:24  <rodw>That won't really work (broken image links) but I just want to see if the deploy works without those images.
07:18:33  <coderarity>yeah
07:19:29  * nmanousosquit (Quit: Page closed)
07:21:43  * metafedoraquit (Ping timeout: 245 seconds)
07:24:59  <rodw>coderarity: Ok. The initial deploy gave me the timeout during the activation phase, but `jitsu snapshots activate` worked when I called it directly.
07:25:15  <coderarity>i'd say, put those on a CDN like S3
07:25:18  <rodw>I guess I'll just need to find another host for the image files.
07:25:36  <rodw>Right. Thanks so much for your help. You guys give excellent support.
07:25:41  <coderarity>np
07:26:01  <booyaa>morning
07:26:05  <coderarity>hi
07:30:01  * benv_joined
07:30:03  * rodwquit (Ping timeout: 245 seconds)
07:30:53  <booyaa>ahhh fresh new os install, really need to find a way to automate user settings. no matter.. most of the stuff i've nicked from others
07:31:04  * defunctzombiequit (Remote host closed the connection)
07:31:08  <booyaa>vim config is mmalecki 's
07:31:23  <booyaa>i wonder if i should switch to zsh i heard all the hipsters are using it now
07:31:27  <coderarity>i just steal the nnoremap ; : thingy
07:32:01  <booyaa>oh yes that's a nice one, bleeding obvious once you see it ;)
07:33:49  * joshonthewebquit (Quit: Computer has gone to sleep.)
07:35:16  <booyaa>working my way through this nicely annotated config too http://amix.dk/vim/vimrc.html
07:35:56  * joshonthewebjoined
07:37:51  <booyaa>man my eyes are getting crap, i'm now up to 15pt terminal :(
07:38:13  <booyaa>could be it's early, but i think old age is starting to kick in
07:38:15  <coderarity>i'm at 16pt
07:38:34  <booyaa>are you short sighted? (i am)
07:38:41  * travis-cijoined
07:38:41  <travis-ci>[travis-ci] flatiron/plates#91 (master - 6bc531b : Paolo Fragomeni): The build passed.
07:38:41  <travis-ci>[travis-ci] Change view : https://github.com/flatiron/plates/compare/1d8d99c08d0b...6bc531b9b378
07:38:41  <travis-ci>[travis-ci] Build details : http://travis-ci.org/flatiron/plates/builds/3359947
07:38:41  * travis-cipart
07:38:47  <coderarity>not technically
07:38:58  <coderarity>but i think visiting the eye doctor could change that
07:39:13  <coderarity>or wait, is that far sighted? idk
07:44:15  <booyaa>man i wish i was far sighted, i'd happily read my papers from 5ft away vs not be able to see 5" in front of me
07:47:39  * joshonthewebquit (Quit: Computer has gone to sleep.)
07:52:49  * alucardXjoined
07:52:57  <alucardX>morning
07:54:20  * DTrejoquit (Remote host closed the connection)
07:54:40  <coderarity>hi
08:07:18  <booyaa>lo
08:08:06  * joshonthewebjoined
08:08:40  * NaNpart
08:18:24  * st_lukequit (Remote host closed the connection)
08:24:46  * Swaagiejoined
08:26:33  * FrenkyNetjoined
08:29:36  * benv_quit (Quit: Computer has gone to sleep.)
08:31:13  * vbarachjoined
08:32:30  * defunctzombiejoined
08:36:52  * defunctzombiequit (Ping timeout: 252 seconds)
08:43:00  * st_lukejoined
08:43:23  * tmpvarquit (Quit: Leaving)
08:47:10  * shykeschanged nick to zz_shykes
08:54:10  * st_lukequit (Remote host closed the connection)
08:57:27  * MannyCaljoined
08:57:33  * vbarachquit (Quit: vbarach)
09:01:35  * toonketelsjoined
09:04:45  * toonketelsquit (Remote host closed the connection)
09:05:04  * toonketelsjoined
09:14:53  * chakritjoined
09:14:53  * chakritquit (Changing host)
09:14:53  * chakritjoined
09:18:14  * admcquit (Quit: Leaving.)
09:18:55  * lwicksjoined
09:20:10  <nathan7>coderarity: 16pt?!
09:32:55  * `3rdEdenjoined
09:33:41  <booyaa>tried 16pt but got depressed
09:33:54  <booyaa>just switch from 14pt to 15pt in iterm2
09:38:30  <nathan7>I live on 8pt
09:38:41  <nathan7>The one thing that annoys me is that bold is invisible
09:45:26  * niftylettucequit
09:45:56  * eldiosjoined
09:46:34  * YoYquit (Ping timeout: 240 seconds)
09:47:11  * niftylettucejoined
09:47:12  * YoYjoined
09:49:00  * standoojoined
09:59:50  * InspiredJWquit (Remote host closed the connection)
10:01:46  * indexzerojoined
10:08:40  * joshonthewebquit (Quit: Computer has gone to sleep.)
10:09:50  * lwicksquit (Quit: lwicks)
10:17:52  * mdedetrichjoined
10:22:27  * mdedetrichquit (Ping timeout: 252 seconds)
10:23:09  * lwicksjoined
10:25:15  * standoopart
10:27:09  * mdedetrichjoined
10:27:22  * lwicksquit (Client Quit)
10:33:27  * defunctzombiejoined
10:33:27  * defunctzombiequit (Changing host)
10:33:27  * defunctzombiejoined
10:38:07  * defunctzombiequit (Ping timeout: 252 seconds)
10:38:23  <booyaa>nathan7: crikey man you got eagle eyes
10:39:21  <nathan7>booyaa: Perhaps.
10:44:35  * stagasjoined
10:53:04  * mappumquit (Ping timeout: 246 seconds)
11:02:20  * lwicksjoined
11:02:34  * retrofoxjoined
11:04:02  * lwicksquit (Client Quit)
11:08:44  * `3rdEdenchanged nick to `3E|LUNCH
11:19:13  * crystalnethjoined
11:19:21  * mdedetrichquit (Ping timeout: 240 seconds)
11:19:47  <crystalneth>Getting: package.json error: can't find starting script: lib/app.js
11:19:49  <crystalneth>but that path exists
11:20:43  * mdedetrichjoined
11:22:08  * retrofoxquit (Quit: Computer has gone to sleep.)
11:33:47  <crystalneth>changed to "node lib/app.js" - still not finding it
11:34:29  * defunctz_joined
11:35:17  * Swaagiequit (Quit: Ik ga weg)
11:35:23  <coderarity>crystalneth: is it in your .gitignore or .npmignore? if it's in your .gitignore add a blank .npmignore, if it's in your .npmignore remove it
11:35:51  * Swaagiejoined
11:36:57  <crystalneth>coderarity: does jitsu deploy from the local git repo or the file system? given that changes to package.json seem to take effect without committing, I figured the file system.
11:37:07  <coderarity>file system
11:37:15  <coderarity>but it still uses those files for ignoring stuff
11:37:17  <crystalneth>yes lib is in gitignore because it's generated from coffeescript
11:37:29  <coderarity>it uses `npm pack`
11:37:35  <coderarity>crystalneth: ah, just add a blank .npmignore
11:38:38  <coderarity>see https://npmjs.org/doc/developers.html#Keeping-files-out-of-your-package
11:38:54  <crystalneth>coderarity: ding! thank you. that's working. first deploy :)
11:39:03  <coderarity>sweet :D
11:39:45  * defunctz_quit (Ping timeout: 276 seconds)
11:55:14  * toonketelsquit (Remote host closed the connection)
11:56:01  * papachanjoined
11:56:13  * mmaleckichanged nick to mmalecki[out]
11:58:09  * `3E|LUNCHchanged nick to `3rdEden
12:00:11  * chakritquit (Read error: Connection reset by peer)
12:02:17  * jgomezjoined
12:06:18  <crystalneth>do i need to app.listen(80) or another port? example uses 8080.
12:06:40  <coderarity>any port is fine
12:07:15  <crystalneth>hmm, got a timeout "spawning drone"
12:07:24  <crystalneth>"Script took too long to listen on a socket"
12:07:42  * charvjoined
12:07:45  * dylangjoined
12:07:50  <coderarity>you want to do the app.listen right after you create the app, and you should create it as early as possible
12:09:15  <charv>hi there guys, is there a way to move my nodejitsu app to europe? I read joyent has some metal in Amsterdam
12:09:16  <crystalneth>i call listen right after calls to configure and requiring the routes file
12:09:44  <coderarity>crystalneth: the earlier, the better, but you have a few seconds to listen
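Roughly what "listen as early as possible" means here - a minimal sketch, not crystalneth's actual app:

```js
var http = require('http');

// Open the port immediately so the drone sees a listening socket
// within its startup window...
var server = http.createServer(function (req, res) {
  res.end('ok');
});
server.listen(process.env.PORT || 8080); // any port works on nodejitsu

// ...and do the slower startup work (requiring route files, configure()
// calls, database auth, etc.) after listen() has been called.
```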
12:10:00  <coderarity>crystalneth: oh, that might be a bad drone too, so maybe you should just deploy again
12:10:10  <coderarity>charv: no, not yet
12:10:23  <coderarity>charv: i do think it's in the pipeline though
12:10:25  * retrofoxjoined
12:11:24  <charv>coderarity: thanks, do you know how deep in the pipeline it is? :)
12:11:47  <coderarity>charv: within the next couple of months
12:12:01  <coderarity>maybe earlier, not exactly sure
12:12:10  <coderarity>but probably not later
12:12:34  <charv>ok, I will stick with nodejitsu, couple of months is fine :)
12:12:35  <crystalneth>coderarity: ok, it worked on my third deploy with no changes other than the port number…. are bad drones common?
12:12:41  <charv>thanks
12:12:47  <coderarity>np
12:13:14  <coderarity>crystalneth: depends, we try to get rid of them as much as we can, but sometimes there's more and sometimes there's less
12:13:38  <coderarity>crystalneth: we can't touch people's drones they're running production stuff on, so sometimes those become free and get to the top of the pool
12:16:30  * toonketelsjoined
12:18:01  * coderaritychanged nick to coderarizzz
12:18:27  <coderarizzz>crystalneth: i gtg to sleep, but someone should be on soon to help if you have any more problems
12:18:56  <crystalneth>coderarizzz: thanks! great having you around to help.
12:19:00  * mdedetrichquit (Ping timeout: 248 seconds)
12:21:10  * mdedetrichjoined
12:25:00  * charvquit (Quit: This computer has gone to sleep)
12:33:29  * thl0joined
12:33:37  * thl0quit (Remote host closed the connection)
12:33:54  * chakritjoined
12:54:18  * crystalnethquit (Quit: crystalneth)
12:54:36  * TheJHjoined
13:02:15  * joshonthewebjoined
13:04:06  * joshonthewebquit (Client Quit)
13:04:07  * chakrit_joined
13:04:40  * chakritquit (Ping timeout: 246 seconds)