00:11:01  * ysaberi quit (Quit: ysaberi)
00:26:20  * bnoordhuis joined
00:28:40  * BobGneu joined
00:29:49  * Bob_Gneu quit (Ping timeout: 245 seconds)
00:31:01  * bnoordhuis quit (Ping timeout: 250 seconds)
01:00:08  * plutoniix joined
01:29:45  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
01:45:23  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
01:55:21  * bradleymeck joined
02:12:07  * j0hnsm1th joined
02:29:44  * ncthom91 joined
02:44:24  * bradleymeck quit (Quit: bradleymeck)
02:54:37  * ofrobots joined
03:22:54  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
03:24:34  * bradleymeck joined
03:37:38  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
03:41:42  * ncthom91 joined
03:44:06  * bradleymeck quit (Quit: bradleymeck)
03:54:15  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
04:04:47  <trungl-bot>Tree closed by [email protected]: Tree is closed (Automatic: "Check" on http://build.chromium.org/p/client.v8/builders/V8%20Win64%20-%20debug/builds/3243 "V8 Win64 - debug" from 5f047ff651a51a95f9625aab26c2a4a5f4f37587: [email protected])
04:07:48  <trungl-bot>Tree opened by [email protected]: Tree is open (same procedure as every day)
04:18:43  * ncthom91 joined
04:29:35  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
04:32:00  * caitp- quit (Ping timeout: 244 seconds)
04:57:46  * ncthom91 joined
05:28:23  * caitp- joined
05:32:58  * caitp- quit (Ping timeout: 244 seconds)
05:40:14  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
05:52:22  * dobson quit (Ping timeout: 264 seconds)
06:04:24  * muelli joined
06:07:46  * dobson joined
07:28:02  <trungl-bot>Tree closed by [email protected]: Tree is closed (Automatic: "Check" on http://build.chromium.org/p/client.v8/builders/V8%20Linux%20-%20noi18n%20-%20debug/builds/3543 "V8 Linux - noi18n - debug" from 4a2b4cdbf8a2cd18d5bb8bb85aa6fee59e4ea3ca: [email protected])
07:30:44  * caitp- joined
07:33:05  <trungl-bot>Tree opened by [email protected]: Tree is open
07:35:25  * caitp- quit (Ping timeout: 244 seconds)
08:04:12  * decoder quit (Quit: No Ping reply in 180 seconds.)
08:04:29  * decoder joined
08:44:22  * stalled quit (Ping timeout: 255 seconds)
08:52:29  * bnoordhuis joined
08:55:30  * plutoniix quit (Read error: Connection reset by peer)
09:10:03  * stalled joined
09:11:04  * plutoniix joined
09:32:09  * bnoordhuis quit (Ping timeout: 240 seconds)
09:39:10  * bnoordhuis joined
09:47:40  * caitp- joined
09:52:51  * caitp- quit (Ping timeout: 244 seconds)
10:02:34  * muelli quit (Ping timeout: 264 seconds)
10:19:22  * bnoordhu1s joined
10:22:32  * bnoordhuis quit (Ping timeout: 272 seconds)
10:28:41  * rendar joined
10:49:09  * plutoniix quit (Quit: จรลี จรลา)
11:17:28  <bnoordhu1s>when a CL lands, how do i make it get back-ported to older branches?
11:18:00  <bnoordhu1s>i usually ask but as often as not, that ends up by the wayside
11:22:24  * bnoordhu1s changed nick to bnoordhuis
11:46:36  <trungl-bot>Tree closed by [email protected]: Tree is closed (Automatic: "Check" on http://build.chromium.org/p/client.v8/builders/V8%20Win32%20-%20debug%20-%203/builds/3005 "V8 Win32 - debug - 3" from 8ceb90356b0098c7df01678359535e4ed4bf05e8: [email protected],[email protected],[email protected])
11:54:03  * StephenLynx joined
11:55:41  <trungl-bot>Tree opened by [email protected]: Tree is open
12:32:30  * esas quit
12:34:12  * bobmcw joined
12:48:58  * ofrobots joined
12:50:15  * ofrobots quit (Client Quit)
12:52:17  * muelli joined
12:54:04  * ofrobots joined
12:54:07  * enaqx joined
12:56:30  * enaqx_ quit (Read error: Connection reset by peer)
13:03:49  * caitp- joined
13:08:57  * Net147 joined
13:18:09  * muelli quit (Ping timeout: 245 seconds)
13:25:44  <arv>bnoordhuis: Talk/email to [email protected]
13:27:32  <arv>bnoordhuis: There is `tools/release/merge_to_branch.py` but I'm not sure if there are any instructions (I found this on the internal wiki)
13:33:44  * bradleymeck joined
13:35:18  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
13:35:19  <trungl-bot>Tree closed by [email protected]: Tree is closed (maintenance)
13:36:14  * plutoniix joined
13:43:41  <bnoordhuis>arv: thanks
13:45:23  * wingo races the cq versus tree maintenance
13:54:00  * C-Man joined
13:54:16  <caitp->what exactly does an IC accomplish in Function.apply() anyway?
13:54:36  * Net147 quit (Quit: HydraIRC -> http://www.hydrairc.com <- It'll be on slashdot one day...)
13:55:09  <wingo>feedback on the receiver and target function? /me guesses
13:56:11  <caitp->hum
14:07:25  * esas joined
14:09:07  * caitp- quit (Ping timeout: 244 seconds)
14:10:37  <trungl-bot>Tree opened by [email protected]: Tree is open
14:11:20  <bnoordhuis>"Please wait for an LGTM, then type "LGTM<Return>" to commit your change. (If you need to iterate on the patch or double check that it's sane, do so in another shell, but remember to not change the headline of the uploaded CL."
14:11:48  <bnoordhuis>this has got to be the weirdest way of cherry-picking patches i've encountered so far
14:12:05  <bnoordhuis>that's from tools/release/merge_to_branch.py
14:12:17  <bnoordhuis>i don't dare ^C it
14:13:07  <wingo>wow :)
14:23:58  * ofrobots joined
14:47:49  <bnoordhuis>"CLs for remote refs other than refs/pending/heads/master must contain NOTRY=true
14:47:50  <bnoordhuis>and NOPRESUBMIT=true in order for the CQ to process them"
14:48:02  <bnoordhuis>does that mean i need to edit the CL or should i resubmit it?
14:48:33  <wingo>sounds like edit to me but i am not an owner
14:48:42  <wingo>you might try to find your reviewer over hangouts
14:49:14  <bnoordhuis>please god, no. i like my hermetic lifestyle. i'll try editing it
14:49:45  <wingo>i meant hangouts chat :)
14:49:59  <wingo>anyway :)
14:51:51  <bnoordhuis>curious though; i've never seen the commit bot complain about that with other cherry-pick CLs
14:53:47  <bnoordhuis>but edit+commit seems to have worked so hurray
15:00:36  <caitp>i really don't get the fixation on hangouts, irc works so much better in terms of not endlessly beeping at me :(
15:05:22  * caitp- joined
15:12:09  * caitp- quit (Ping timeout: 244 seconds)
15:13:09  <bnoordhuis>probably something to do with dog food
15:17:26  <caitp>yeah it just seems like the wrong tool for the job
15:18:01  <StephenLynx>millenials
15:20:26  <arv>caitp: you can turn off the sound ;-)
15:22:46  <caitp>arv do you have any last bits on 548833002 before I cq it?
15:22:53  <caitp>it's been a long time waiting
15:44:35  * RT|Chatzilla quit (Quit: ChatZilla [Firefox])
15:51:07  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
15:54:18  * caitp- joined
15:56:33  * caitp- quit (Client Quit)
15:57:59  * caitp- joined
16:03:59  * ncthom91 joined
16:06:51  * ysaberi joined
16:06:59  * ofrobots joined
16:18:01  * davi joined
16:18:01  * davi quit (Changing host)
16:18:01  * davi joined
16:21:40  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
16:22:45  * ofrobots joined
16:25:23  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
16:29:05  * ncthom91 joined
16:32:45  <ncthom91>hi all. Is it possible to create a new Isolate from an existing isolate?
16:32:57  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
16:34:11  <bradleymeck>ncthom91: look at snapshots? you can't exactly do a memcpy
16:34:29  * bradleymeck quit (Quit: bradleymeck)
16:35:00  <ncthom91>i'd like to try writing a nodejs addon wherein, from JS-land I can fill a shared work queue, and in C++ land I have a pool of worker threads that just pop off the queue and do their processing. The hard part with that is that the processing step that each thread will run involves executing javascript
16:36:37  <ncthom91>I suppose http://stackoverflow.com/a/7921673/2320243 that might be what I need
16:37:31  <ncthom91>although I'm not sure how much concurrency I'll get by using that since the main task involves using v8
16:37:45  <ncthom91>hence my question about duplicating isolates :)
16:47:27  * ysaberi_ joined
16:47:44  * ysaberi quit (Ping timeout: 252 seconds)
16:47:45  * ysaberi_ changed nick to ysaberi
17:05:58  <ncthom91>for anyone following that ^, I suspect Node's VM source code will point me to the answers I need :) https://nodejs.org/api/vm.html#vm_vm_createcontext_sandbox
17:19:01  * rendar quit (Ping timeout: 264 seconds)
17:20:32  * j0hnsm1th quit (Remote host closed the connection)
17:25:44  * rendar joined
17:42:47  * ofrobots joined
17:58:46  <ncthom91>ok, so after doing a bit more research, I understand that v8 is not thread-safe by default. However, I intend to create a new isolate for each thread and duplicate the relevant context into each of those isolates. I can use v8::Locker for those isolates as well, though I'm not sure it will be necessary. For those of you with more experience embedding v8... is this a bad idea?
18:14:40  * bradleymeck joined
18:23:35  <bradleymeck>ncthom91: ended up leaving before i saw if you got an answer, did you figure out snapshots ?
18:24:20  <ncthom91>bradleymeck no :P, but I came up with a different approach that I think will work. Can I run it by you?
18:24:30  <bradleymeck>ok
18:25:39  <ncthom91>ok so the plan is to write a node addon wherein, from JS-land I can push things into a work queue, then from C++ land spin a bunch of threads off to consume the queue concurrently, passing the results into a different queue that the main JS thread will read from
18:26:11  <ncthom91>the challenging part is that the task that each thread will execute requires running JS, so now I'm in a threaded-v8 mindset, but v8 isn't threadsafe
18:26:58  <ncthom91>so I'm thinking: in each thread, create a new isolate, attach a new Context (which I can create with the same Global object as the default context), and eval my script to get the result
18:27:37  <StephenLynx>what if
18:27:39  <ncthom91>because, my understanding is that using multiple isolates will circumvent v8's non-threadsafe issues
18:27:46  <StephenLynx>ok, hear me out.
18:28:00  <StephenLynx>what if you use the cluster module on node/io?
18:28:20  <StephenLynx>and use workers as threads that send information back to the main thread?
18:28:38  <ncthom91>StephenLynx I have, this is sort of an experiment to see if I can accomplish the same goal faster
18:28:46  <StephenLynx>oh, nvm then.
18:29:04  <bradleymeck>ncthom91: sounds like threx
18:29:05  * davi quit (Ping timeout: 256 seconds)
18:30:16  <ncthom91>bradleymeck https://github.com/trevnorris/threx ?
18:30:19  <bradleymeck>ya
18:30:43  <ncthom91>interesting... hadn't seen this
18:31:00  <trevnorris>what's up?
18:32:39  <ncthom91>trevnorris heh, how convenient. We were just chatting about threaded node addons
18:32:50  <ncthom91>and bradleymeck suggested your project
18:33:47  <ncthom91>trevnorris I'm looking over your main.cc file, and don't see exactly where you guarantee MT safety?
18:35:04  <trevnorris>threx does absolutely minimum. it's up to the user to make sure the thread is cleaned up before it's brought down.
18:35:49  <ncthom91>it also looks like threx encourages only 1 thread?
18:36:56  <trevnorris>no. running `new Thread()` should spawn a new thread every time.
18:37:44  * bradleymeck quit (Quit: bradleymeck)
18:38:16  <ncthom91>oh, oops, i see
18:38:26  <trevnorris>it's not the most elegant API, but wanted to leave it as flexible as possible for better interfaces down the road.
18:38:45  * bobmcw quit (Remote host closed the connection)
18:42:15  <ncthom91>trevnorris should each thread have its own isolate?
18:42:39  <trevnorris>ncthom91: yes
18:42:50  <trevnorris>you can't share an Isolate across threads.
18:42:52  <ncthom91>is that something threx does for you?
18:43:12  <ncthom91>I don't see it in main.cc (but i suck at reading C++ still, so that's probably why)
18:43:58  <trevnorris>threx only handles thread management. it's up to the user to implement whatever they want on top of that.
18:44:26  <trevnorris>basically, it's a tiny library that someone could use to more easily create a multi-threaded JS env.
18:45:49  <ncthom91>i see
18:47:32  * BobGneu quit
18:47:57  * StephenLynx quit (*.net *.split)
18:47:57  * stalled quit (*.net *.split)
18:47:59  * ysaberi quit (*.net *.split)
18:47:59  * esas quit (*.net *.split)
18:48:00  * wingo quit (*.net *.split)
18:48:00  * jwilm quit (*.net *.split)
18:48:01  * dobson quit (*.net *.split)
18:49:13  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
18:49:27  * ysaberi joined
18:49:27  * esas joined
18:49:27  * dobson joined
18:49:27  * wingo joined
18:49:27  * jwilm joined
18:49:34  * StephenLynx joined
18:49:34  * stalled joined
18:49:58  * stalled quit (Max SendQ exceeded)
18:51:39  <ncthom91>trevnorris did you ever have success making new isolates and attaching new node contexts? Any chance you have code I could look at for that?
18:53:33  <trevnorris>ncthom91: i did some experimentation simply evaluating a script and returning some simple output, but nothing serious.
18:53:44  * ofrobots joined
18:53:58  <ncthom91>that sounds roughly like what I intend to do ;)
18:54:39  <trevnorris>sure. you can look at the code in node.cc to see how to spin up a new Isolate, Context and execute a script.
18:54:51  <trevnorris>after evaluating you can pass the message back using threx.
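[editor's note: the "spin up a new Isolate, Context and execute a script" flow trevnorris points at in node.cc looks roughly like the sketch below. This is a non-runnable sketch against the io.js-era (V8 3.x/4.x) embedder API; signatures have since changed (`Script::Compile` and `Run` now take a context), and `RunInFreshIsolate` is an invented helper name.]

```cpp
// Sketch only: requires v8.h and linking against V8.
#include <v8.h>

double RunInFreshIsolate(const char* source) {
  v8::Isolate* isolate = v8::Isolate::New();
  double result;
  {
    v8::Locker locker(isolate);  // optional if the isolate never leaves this thread
    v8::Isolate::Scope isolate_scope(isolate);
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    v8::Context::Scope context_scope(context);
    v8::Local<v8::Script> script =
        v8::Script::Compile(v8::String::NewFromUtf8(isolate, source));
    result = script->Run()->NumberValue();
  }
  isolate->Dispose();
  return result;
}
```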
19:00:29  <ncthom91>trevnorris have you seen https://github.com/iojs/nan/blob/master/examples/async_pi_estimate/async.cc ?
19:01:09  * stalled joined
19:01:16  <ncthom91>it seems similar to what you built; using NAN with the NanAsyncWorker abstraction, threading is kind of taken care of for you, so long as you're safe about v8 access in the Execute method
19:01:44  <ncthom91>which, as far as I can tell, would be accomplished by creating a new isolate
19:02:31  <ncthom91>(also, I don't see a node.cc file?)
19:06:02  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
19:10:01  * ofrobots joined
19:10:20  <trevnorris>interesting. I haven't used that before.
19:12:09  <ncthom91>trevnorris can you point me to that node.cc file you mentioned? I'm not sure where you were referring, but I'd love to see it
19:13:14  <trevnorris>oop. meant from io.js. here's where we start up the JS application, as an example: https://github.com/iojs/io.js/blob/master/src/node.cc#L3870-L3881
19:15:39  <ncthom91>ah interesting
19:17:23  * C-Man quit (Quit: Connection reset by beer)
19:18:49  * ysaberi quit (*.net *.split)
19:18:49  * esas quit (*.net *.split)
19:18:49  * wingo quit (*.net *.split)
19:18:50  * jwilm quit (*.net *.split)
19:19:14  * ysaberi joined
19:19:14  * esas joined
19:19:14  * wingo joined
19:19:14  * jwilm joined
19:30:54  <ncthom91>hey trevnorris can I bug you a little more? With this idea of creating a new isolate for each thread, I don't quite see how Context comes into play, particularly because of examples like this: https://github.com/iojs/node-addon-examples/blob/master/2_function_arguments/node_0.12/addon.cc#L6-L8
19:32:05  <ncthom91>do you suppose it would be sufficient to just create a new isolate?
19:34:50  * bradleymeck joined
19:35:54  * bobmcw joined
19:36:08  <ncthom91>hm nvm i think i got it
19:41:05  * {aaron} joined
19:49:36  * {aaron} quit (Quit: Leaving)
19:56:16  <trungl-bot>Tree closed by [email protected]: Tree is closed (Automatic: "Check" on http://build.chromium.org/p/client.v8/builders/V8%20Linux%20-%20nosnap/builds/2872 "V8 Linux - nosnap" from 65c56d49b2d671ac9e379de726bff3eb03a508c1: [email protected])
20:00:04  * {aaron} joined
20:06:10  <{aaron}>guys, i don't know if this is the right channel, but I have encountered very bizarre js behavior and i'm wondering if it may have to do with array iteration or function call optimizations
20:07:10  <{aaron}>essentially, before a jump into a function (a run of the mill lodash predicate), a parameter has a certain value, but once stepped through it has a completely different type and value
20:07:31  <{aaron}>i have a couple of console screenshots
20:09:32  <bnoordhuis>{aaron}: not a v8 dev but i can channel what they'd ask: a) does it happen in other browsers, and b) do you have a reduced test case?
20:10:13  <{aaron}>it's hard to reproduce. i have not yet reproduced it in firefox
20:10:39  <{aaron}>i don't have a reduced test case because again it appears sporadically, i have no idea what the underlying cause is or how i'd reproduce it
20:10:51  <{aaron}>i'll try to paste the console output somewhere hold on
20:11:51  <{aaron}>http://ctrlv.in/570598
20:13:02  <{aaron}>as you can see here the first invocation of _.any produces what i consider the correct output: we have an array with a single item which is a PhraseModel, which is an object defined with coffeescript inheritance (nothing really remarkable)
20:13:21  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
20:13:31  <{aaron}>i key up and invoke the same exact command, and now magically the output has changed. i have made 0 changes between the two invocations
20:13:55  <{aaron}>from then on out, the value is "stuck" to the number 5002, which doesn't appear to have any relation to anything else in the object
20:14:23  <bnoordhuis>{aaron}: that's a lot of moving layers (lodash, coffeescript)
20:14:45  <{aaron}>hold on i'll paste the debug screenshot
20:15:28  <{aaron}>http://ctrlv.in/570605
20:16:11  <{aaron}>yes i was wondering if lodash had anything to do with it so i debugged, it's really just a typical loop over an array. the output in the second screenshot shows the values before and after this invocation:
20:16:25  <{aaron}>if (predicate(array[index], index, array)) {
20:16:27  <trungl-bot>Tree opened by [email protected]: Tree is open
20:16:36  <bnoordhuis>maybe try starting chrome with --nocrankshaft to disable the optimizing compiler and see if the problem goes away
20:16:43  <{aaron}>ah ok
20:18:15  <{aaron}>there is one reason i suspect it may have to do with arrays. the object returned by getPhrases() is not a vanilla array. i used this technique to add methods to a vanilla array: http://www.bennadel.com/blog/2292-extending-javascript-arrays-while-keeping-native-bracket-notation-functionality.htm
20:18:55  <{aaron}>my next step is to just dispense with that and use a vanilla array (the problem disappears when the item is placed in a new array)
20:19:18  <{aaron}>i will try --nocrankshaft and see if that changes anything
20:29:46  * ofrobots joined
20:32:29  <ncthom91>Hey all. I've another quick question about threading and v8, particularly in respect to the NAN AsyncQueueWorker abstraction:
20:32:40  <ncthom91>the Execute method explicitly suggests not touching v8: https://github.com/iojs/nan/blob/master/examples/async_pi_estimate/async.cc#L25-L31
20:32:58  <ncthom91>however, I see a lot of suggestions that v8 can be made threadsafe by using separate isolates and Locker instances
20:33:12  <ncthom91>"threadsafe" might be the wrong word, but, hopefully that gets my point across
20:33:21  <ncthom91>my question then is, is it unsafe to instantiate new Isolates in the execute method?
20:33:29  <ncthom91>and compile/execute scripts against that isolate?
20:34:33  <ncthom91>maybe it's smarter to send new isolates in as arguments to the constructor?
20:37:47  <bnoordhuis>ncthom91: you can create a new isolate in Execute()
20:38:14  <bnoordhuis>ncthom91: if you're using nan with io.js, keep in mind that Execute() runs in the (limited size) thread pool
20:38:25  <ncthom91>bnoordhuis and that will be "safe" in this case?
20:38:34  <ncthom91>I'm ok with a limited thread pool. Do you know perchance how big it is?
20:39:02  <bnoordhuis>sadly i do. i wrote it :-/ it defaults to 4, configurable with the UV_THREADPOOL_SIZE env var
20:39:12  <ncthom91>bnoordhuis my other question, is it costly to create new isolates & contexts all the time like that?
20:39:15  <bnoordhuis>i never got around to making it auto-scale without regressing some use cases
20:39:26  <bnoordhuis>yes, it's costly
20:39:43  <bnoordhuis>it's on the order of milliseconds
20:40:24  <ncthom91>hah, awesome. I'm talking to the right person then :). Is there a way to make it less costly?
20:40:41  <ncthom91>the v8 embedders guide suggests adding the build option `snapshot=yes`?
20:40:59  <bradleymeck>it is still costly
20:41:04  <bnoordhuis>snapshots help but yeah, still costly
20:41:47  * {aaron} quit (Ping timeout: 246 seconds)
20:42:39  <ncthom91>how do I supply that option in a binding.gyp file?
20:44:15  <bnoordhuis>you don't, you set it at run-time: env UV_THREADPOOL_SIZE=64 iojs app.js
20:44:28  <bnoordhuis>oh wait, you mean snapshots?
20:44:49  <bnoordhuis>set v8_use_snapshot to true
20:44:58  <bradleymeck>does snapshotting still keep the fns deopted?
20:46:03  <bnoordhuis>i don't think so. i know hydrogen optimizes some intrinsics so i assume the builtins are eligible for optimization now
20:47:10  <ncthom91>bnoordhuis sorry, i'm super new to all of this. How do I set v8_use_snapshot to true? Is that a binding.gyp file option?
20:47:57  <bnoordhuis>ncthom91: apologies, i took a shortcut there. you can't influence it from an addon's binding.gyp
20:48:31  <bnoordhuis>you can when you build v8 from source (with v8_use_snapshot) or when you build io.js from source (./configure --with-snapshot)
20:49:01  <caitp->code has to get pretty hot before it can be inlined
20:49:01  <ncthom91>oh, i see. hm... that's a bummer. Well, a few ms per instantiation hopefully won't be bad
20:49:11  <caitp->and not everything can be inlined
20:52:34  <bnoordhuis>caitp-: nice work on the optional params. exciting times
20:52:58  <caitp->well, it will be nicer work when they work correctly :p
20:53:01  <caitp->but yeah, it's cool
20:53:38  <bnoordhuis>did you figure out the lazy parsing thing?
20:54:35  <caitp->no, but that's deferred until scoping is done right i guess
20:54:42  <caitp->andreas wanted scoping in a followup
20:55:09  * StephenLynx quit (Remote host closed the connection)
20:55:20  <ncthom91>cool, my test project to evaluate the perf of NanAsyncQueueWorker vs. Node's default IPC vs. a shared memory IPC message queue has NanAsyncQueueWorker winning by almost 4x, even with context & isolate creation in the Execute function
20:55:31  <caitp->i'm hoping marja can help with that when she gets back, since i'm guessing she knows the lazy parsing stuff best
20:55:34  * {aaron} joined
20:56:55  * bobmcw quit (Remote host closed the connection)
21:01:52  <ncthom91>bnoordhuis I don't suppose there's any way to create one individual, thread-local isolate for each of the threads in the pool, is there? To limit the create cost?
21:03:05  <ncthom91>and somehow have Execute called with a reference to that isolate
21:04:41  <bnoordhuis>ncthom91: you could do that. libuv has apis for thread-local storage
21:04:49  <bnoordhuis>not in node.js v0.10 though
21:05:09  <ncthom91>interesting... that's ok, I don't need node v0.10
21:05:51  <bnoordhuis>ncthom91: maybe you should move over to #io.js :) we've moved mostly outside the realm of v8 proper
21:08:40  <ncthom91>bnoordhuis heh, sure :)
21:33:59  * bradleymeck quit (Quit: bradleymeck)
21:34:07  * {aaron} quit (Quit: Leaving)
21:35:42  * {aaron} joined
21:37:14  <{aaron}>bnoordhuis: this is as close as i can get to a reduced test case: http://jsfiddle.net/aaronh/g7m9z95m/
21:38:24  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
21:38:25  <{aaron}>of course this doesn't demonstrate the bug. in reality i have hundreds or thousands of collection items and only see the bug on 1-4 of them, which don't look any different from any of the other items on visual inspection (the __proto__s are all identical as well)
21:42:57  <caitp->what exactly is the bug?
21:43:34  <{aaron}>http://ctrlv.in/570598
21:43:38  <{aaron}>http://ctrlv.in/570605
21:43:42  <{aaron}>essentially, before a jump into a function (a run of the mill lodash predicate), a parameter has a certain value, but once stepped through it has a completely different type and value
21:44:49  <{aaron}>the item (object) in the array magically transforms to an integer on the other side of the function invocation
21:45:21  <{aaron}>despite the fact that the collection argument itself is intact, and you can see arr[2][0] is clearly not an integer but the object
21:48:12  <{aaron}>have not been able to trigger it since using --js-flags="--nocrankshaft"... but i can't reliably trigger it in the first place so don't know if that means anything
21:49:59  <caitp->well, it's possible it's a crankshaft bug
21:50:13  <caitp->try to shrink the reproduction down to something really simple and file a bug
21:50:19  <{aaron}>ok i just relaunched chrome without nocrankshaft and triggered it
21:50:36  <caitp->probably a type feedback issue
21:51:03  <{aaron}>i'm decorating array instances probably in an inadvisable manner
21:51:07  <{aaron}>http://www.bennadel.com/blog/2292-extending-javascript-arrays-while-keeping-native-bracket-notation-functionality.htm
21:51:26  <{aaron}>i strongly suspect this, and my next step will be to dispense with that and just wrap a vanilla array
21:52:13  <caitp->i don't think it's even worth doing it like that
21:52:32  <caitp->look at how babel emulates the class [[Construct]] protocol
21:52:50  <caitp->you should be able to do what you want a lot easier using something like that
21:52:53  <{aaron}>that is all coffeescript boilerplate
21:53:02  <{aaron}>original source is coffeescript
21:53:10  * ncthom91 joined
21:53:14  <{aaron}>the approach i'm talking about is right in the Collection constructor
21:53:23  <{aaron}>where it's assigning methods directly to array instance
21:53:50  <caitp->i can see what you're talking about
21:53:54  <caitp->and there is a better way =)
21:54:10  <{aaron}>well, my better way will be to stop playing with fire and just wrap the damn array :)
21:54:15  <caitp->all you need to do is make sure you return an Array exotic object, and set its prototype to the class prototype you want to use in the constructor
21:54:43  <caitp->you won't have newTarget without extra work, but if you don't need it, it's good enough
21:55:04  <{aaron}>do you have a link, this is hard to google
21:55:16  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
21:55:34  <caitp->i'll link you a version of it
21:56:34  <{aaron}>hmm i do see here where babel says it has "Subclassable Built-ins"
21:56:41  <caitp->https://jsfiddle.net/0L11e3xf/ the whole ExtendedArray stuff
21:57:15  <{aaron}>thanks man
22:05:03  * ofrobots joined
22:07:24  * ncthom91 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
22:12:43  * caitp- quit (Ping timeout: 244 seconds)
22:13:58  * {aaron} quit (Quit: Leaving)
22:25:17  * StephenLynx joined
22:27:03  * tav joined
22:27:34  * RT|Chatzilla joined
22:29:20  * caitp- joined
22:52:45  * ncthom91 joined
22:54:02  * rendar quit
23:01:03  * ofrobots quit (Quit: My Mac has gone to sleep. ZZZzzz…)
23:05:07  * ofrobots joined
23:12:39  * tav quit (Ping timeout: 276 seconds)
23:14:14  * tav joined
23:42:02  * bnoordhuis quit (Ping timeout: 246 seconds)