00:03:28  * marsell quit (Quit: marsell)
00:07:53  * trentm quit (Quit: Leaving.)
01:06:18  * fredk quit (Quit: Leaving.)
01:15:04  * bahamat quit (Quit: Leaving.)
01:20:00  * ed209 quit (Remote host closed the connection)
01:20:07  * ed209 joined
01:41:29  * pmooney quit (Ping timeout: 246 seconds)
01:53:46  * pmooney joined
02:23:10  * pmooney quit (Ping timeout: 265 seconds)
02:49:34  * pmooney joined
02:54:19  * pmooney quit (Ping timeout: 250 seconds)
03:00:02  * pmooney joined
03:15:43  * pmooney quit (Ping timeout: 256 seconds)
03:51:59  * trentm joined
04:25:04  * marsell joined
05:39:37  * pmooney joined
05:41:55  * bahamat joined
05:53:25  * pmooney quit (Ping timeout: 265 seconds)
06:18:59  * pmooney joined
06:23:49  * pmooney quit (Ping timeout: 248 seconds)
06:25:41  * marsell quit (Quit: marsell)
06:54:53  * trentm quit (Quit: Leaving.)
08:13:53  * bahamat quit (Quit: Leaving.)
09:08:06  * bixu_ joined
09:22:15  * marsell joined
10:20:01  * ed209 quit (Remote host closed the connection)
10:20:08  * ed209 joined
12:45:22  * chorrell joined
13:32:35  * pmooney joined
13:55:54  * pmooney quit (Ping timeout: 256 seconds)
13:57:37  * pmooney joined
15:45:54  * trentm joined
16:25:57  * fredk joined
16:51:21  * therealkoopa joined
18:08:19  * bahamat joined
18:38:22  * ryancnelson joined
18:48:11  * chorrell quit (Quit: My Mac has gone to sleep. ZZZzzz…)
18:57:48  * chorrell joined
19:17:17  * ryancnelson part
20:16:16  <bixu_>Has anyone considered an 'mmv' utility (I know, there is already a binary with that name...)
20:16:58  <trentm>mantash in python-manta has it, because I wanted it. :)
20:17:46  <bahamas10>bixu_: mln foo bar && mrm foo ?
20:18:10  <bixu_>Ah - I had not thought of that.
20:18:22  <bixu_>The snaplink becomes the new file?
20:18:31  <bixu_>Er, object?
20:19:15  <bahamas10>yes, https://apidocs.joyent.com/manta/#snaplinks the example starting with "you can mimic a move with SnapLinks"
20:20:00  * ed209 quit (Remote host closed the connection)
20:20:07  * ed209 joined
20:28:04  <bixu_>Yes that does work.
20:29:33  <trentm>that's what mantash mv does
20:30:31  <bahamas10>mmv() { mln "$1" "$2" && mrm "$1"; }
20:30:33  <bahamas10>enjoy :D
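For example (the paths here are hypothetical), and because the snaplink is created before the source is removed, the object is never left without a name:

    mmv ~~/stor/report.json ~~/stor/archive/report.json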
20:31:23  <trentm>-R ? ;)
20:32:49  <bahamas10>aha, i'll leave that to someone else.
20:51:37  <bixu_>lol yeah
21:01:48  * gkyildirim joined
21:07:08  <nahamu>the hard part is if you want to rename a directory.
21:07:25  <nahamu>you have to create the new one, snaplink in all the children, then remove the old ones
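A rough shell sketch of that procedure, assuming the standard node-manta CLI tools (mfind, mmkdir, mln, mrm) and omitting error handling; the mmvr name and its exact behavior are illustrative, not an existing tool:

    mmvr() {
        local src="$1" dst="$2"
        # recreate the directory tree under the new name
        mmkdir -p "$dst"
        mfind -t d "$src" | while read -r d; do
            mmkdir -p "${d/#$src/$dst}"
        done
        # snaplink every object into its new home
        mfind -t o "$src" | while read -r o; do
            mln "$o" "${o/#$src/$dst}"
        done
        # only then drop the old tree
        mrm -r "$src"
    }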
21:07:54  <nahamu>trentm: does mantash have mv -R?
21:09:18  <nahamu>https://github.com/joyent/python-manta/blob/master/bin/mantash#L1417 I guess so...
21:09:35  <trentm>https://gist.github.com/trentm/1d69ad9b98dfe422c406
21:09:39  <trentm>example run and help output: ^^^
21:09:47  <trentm>sort of. It can move dirs and do the right thing to a point
21:09:58  <trentm>it isn't as complete as your shell's `mv`
21:10:03  <nahamu>right.
21:10:24  <nahamu>will it work if the dir has one or more subdirs?
21:11:10  <gkyildirim>Hello. Sorry if this was asked before. I am trying to figure out Manta storage capacity requirements. If I need 100TB of capacity as well as node redundancy, should I buy two servers with 100TB each? Or can I build 3 servers with 50TB each (like RAID5 across servers)?
21:11:38  <nahamu>100TB of logical capacity?
21:11:54  <gkyildirim>net capacity
21:11:58  <nahamu>by default Manta uses copies=2
21:12:38  <nahamu>so if you want to store 100TB of objects all with copies=2 you need 200TB of disk usable by Manta.
21:13:16  <goekesmi>gkyildirim: to your point, Manta doesn't do RAID5-like redundancy across servers.
21:13:21  <nahamu>(also, #manta exists and might be a good place to ask...)
21:13:52  <gkyildirim>ok thx for answers
21:13:59  <nahamu>oh, this is #manta
21:14:06  <gkyildirim>:)
21:14:10  * goekesmi chuckles.
21:14:13  * nahamu thought this was #smartos for some reason...
21:14:29  <goekesmi>That's probably to your left a tab or two.
21:14:31  <trentm>nahamu: yes, works with subdirs: https://gist.github.com/trentm/1d69ad9b98dfe422c406#file-example-run-subdirs-txt
21:14:46  <nahamu>trentm: nice!
21:15:08  <bixu_>Witchcraft!
21:15:13  <nahamu>recursion and all that.
21:15:37  <nahamu>gkyildirim: also, if you're using Manta, are you only storing objects or will you also be doing compute?
21:15:54  <nahamu>because number of nodes might want to change depending on how much computation you anticipate.
21:18:10  <gkyildirim>nahamu: I am looking for an object store solution on illumos. I need 1PB of net space and it will grow further. Having double copies burns my pocket :(
21:18:18  <nahamu>... goekesmi understood the question much better than I did. sorry for being silly...
21:18:31  <nahamu>gkyildirim: well compression will buy you some of that back.
21:18:58  <gkyildirim>Yes, but data is mostly jpegs and mpegs
21:19:06  <nahamu>but yes, if you want objects resilient against an entire node failing you need copies > 1 and thus redundant data.
21:19:31  <nahamu>right, if the objects are already pretty well compressed, zfs compression won't do a whole lot.
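As an aside, that is plain ZFS dataset compression on the storage nodes, so one way to see what it is actually buying you is something like the following (the dataset name is just a placeholder):

    # which algorithm is enabled, and the ratio actually achieved
    zfs get compression,compressratio tank/manta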
21:20:06  <goekesmi>Note, if you can recreate data after a node loss (derived data) you can set copies independently on each object.
21:20:13  <goekesmi>That can be useful in some cases.
21:20:49  <nahamu>right, if half of your data are the results of a compute job, you could set copies=1 on those and recreate them after a node failure.
21:21:34  <goekesmi>With data volumes of the size you are describing, gkyildirim, it's worth getting to understand your dataset well.
21:22:10  <goekesmi>Another order of magnitude, I promote that to 'very well'.
21:22:34  <gkyildirim>sadly the data cannot be recreated, i thought about that.
21:23:04  <nahamu>can you afford to lose data?
21:23:09  <goekesmi>Well then, I guess you are in the budget vs risk fight.
21:24:52  <gkyildirim>goekesmi you are right. I was thinking of erasure coding (at least that is what they call it). RAID5- or RAID6-like redundancy across servers would ease the budget problem. However, it is complicated, so you take on risk
21:25:26  <nahamu>right, Manta does the erasure coding at the ZFS level to protect you from individual spindles dying.
21:25:29  <gkyildirim>nahamu I need to have HA
21:25:57  <nahamu>another thing to consider: does the project make money?
21:26:20  <nahamu>because if you can plan out the rollout of additional storage nodes over time...
21:26:35  <nahamu>ideally after some of the first ones have "paid for themselves"...
21:27:13  <gkyildirim>nahamu it is not
21:27:57  <nahamu>hmmm
21:28:01  <gkyildirim>nahamu ZFS protects at spindle level but I need server redundancy as well
21:28:10  <goekesmi>gkyildirim: as nahamu points out, your individual nodes have RAID5/6-equivalent erasure coding for protection against spindle loss (in rational default SDC server deploys)
21:28:13  <nahamu>gkyildirim: sure, that's fine.
21:29:02  <goekesmi>If you need server redundancy, you are going to need more than one copy, because you also need site redundancy at that point, and unless you have 3 or more sites, erasure coding across sites doesn't make any sense.
21:29:21  <goekesmi>Also, network between those sites to handle the erasure coding is going to be significant.
21:29:25  <nahamu>So one option is, instead of having ZFS do the erasure coding, to use some other object store that uses single spindles and does either duplication or erasure coding across nodes, etc.
21:29:40  <goekesmi>(I'm now speaking of hypothetical systems, and not manta)
21:30:12  <nahamu>which depending on your budget and your risk tolerance (and lack of need for compute?) might be preferable.
21:30:36  <nahamu>Manta, like everything else that comes out of Joyent, is pretty opinionated.
21:30:46  <nahamu>its opinions might not match your workload.
21:30:55  <gkyildirim>Sadly
21:31:03  <gkyildirim>Manta is great
21:32:30  <gkyildirim>Data is well organized by years.
21:32:51  * bixu__ joined
21:33:04  <gkyildirim>Old data is mostly for archive purposes
21:33:25  <gkyildirim>Let's say I do not want HA for old data but I want HA for recent data
21:33:34  <gkyildirim>Is this possible?
21:34:34  <goekesmi>Yes.
21:34:42  <goekesmi>Set copies=2 for the new data.
21:34:51  <goekesmi>Space expands as you expect it would.
21:35:04  <goekesmi>Set copies=1 for the archive data.
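A minimal sketch of that split with the node-manta CLI, assuming mput's -c/--copies option controls the per-object durability level (file and path names hypothetical):

    # recent data: two copies, survives the loss of a storage node
    mput -c 2 -f video.mpg ~~/stor/recent/video.mpg
    # archived data: one copy, half the disk footprint, no node redundancy
    mput -c 1 -f 2009-batch.tar ~~/stor/archive/2009-batch.tar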
21:35:21  * bixu_ quit (Ping timeout: 250 seconds)
21:35:40  <gkyildirim>that seems possible.
21:37:21  <gkyildirim>Is it possible to expose all of the data in a single namespace?
21:38:36  <goekesmi>That's basically what manta does.
21:38:51  <goekesmi>Heck, there is the nfs adapter, if that's really your thing.
21:39:18  <gkyildirim>I was about to ask about nfs :). Yep that's my thing
21:39:55  <goekesmi>It has exactly the sort of limitations that a posix adapter on to an object store implies.
21:40:07  <goekesmi>If you can live with those, well NFS away.
21:40:35  <goekesmi>https://github.com/joyent/manta-nfs
21:41:25  * fredk quit (Quit: Leaving.)
21:41:50  <gkyildirim>Thanks for helping. I need to investigate further
21:42:56  * gkyildirim part
22:09:35  <bahamas10>anyone who uses bash may be interested in this function i use: murl() { for u in "$@"; do echo "$MANTA_URL${u/#~~//$MANTA_USER}"; done; }
22:09:53  <bahamas10>it converts manta paths to URLs, ie.
22:09:55  <bahamas10>$ murl ~~/public/foo /Joyent_Dev/public/bar
22:09:57  <bahamas10>https://us-east.manta.joyent.com/bahamas10/public/foo
22:09:59  <bahamas10>https://us-east.manta.joyent.com/Joyent_Dev/public/bar
23:13:01  * fredk joined
23:48:35  * bahamat quit (Quit: Leaving.)
23:54:09  * bahamat joined