computergeek125

@[email protected]


computergeek125 ,

I missed the Photoshop lol....

I've been through enough airports with that doggo profile and a similar message that I hadn't considered the possibility it wasn't some new way TSA was printing their "don't pet the service dogs" poster.

computergeek125 ,

As a general principle, full-extension rails are probably best sourced from the original server vendor rather than trying to make universal rails work.

If you have a wall-mounted rack and your walls are drywall, physics is working against you. It's already a pretty intense, heavy cantilever, and putting a server in there that can extend past the front edge is only going to make that worse.

If you want to use full-extension rails, you should get a rack that sits squarely on the floor, on either feet or appropriately rated casters. You should also make sure your heaviest items are at the bottom, ESPECIALLY if you have full-extension rails - it makes the rack much less likely to overbalance and tip over when a server is extended.

computergeek125 ,

Fair - there are ways to handle it. I didn't want to include specifics since I'm not a professional contractor for this sort of thing, but I should have indicated that there are exceptions.

I had to migrate from Samba AD to Windows Server AD and I'm sad (RIP Samba)

Samba is amazing; Windows Server is a lot less so. The problem with Windows Server is that it takes tons of steps to do basic things. On Samba I had samba-tool, and it was very nice and friendly. On Windows Server you have a ton of different management panels....
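For anyone who hasn't touched it, this is the sort of thing I mean - a rough sketch from memory, with made-up account names, so double-check flags against samba-tool --help:

```sh
# Day-to-day AD tasks on a Samba DC, each one a single command
samba-tool user create jdoe                       # prompts for a password
samba-tool user setexpiry jdoe --noexpiry
samba-tool group addmembers "Domain Admins" jdoe
samba-tool drs showrepl                           # quick replication health check
```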

computergeek125 ,

You use the console to turn on the embedded shell, then Ctrl+Alt+Fn over to it (I forget whether it's on F1 or F2), and from there you can use esxcli and all the rest of that to fix it up.

Once you get enough networking/storage pieces sorted out, you can get back into the management HTML UI and SSH.

Then when you're done fixing, turn shell and SSH back off.
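Roughly the kind of commands I mean once you're in the shell - this is from memory, so treat it as a sketch and double-check the exact flags on your build:

```sh
# Quick sanity checks from the ESXi embedded shell once it's enabled
esxcli network ip interface ipv4 get   # confirm the management vmkernel IP
esxcli network vswitch standard list   # check vSwitch / uplink state
esxcli storage filesystem list         # make sure the datastores are mounted
vim-cmd hostsvc/enable_ssh             # re-enable SSH once networking is back
```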

computergeek125 ,

It is far less improbable than you think, especially if all of your drives have similar age/wear - as would be the case if you bought all 40 around the same time.
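Rough back-of-envelope for why, using numbers that are assumptions rather than anything from your setup (a 5x12TB RAID-5 like my NAS, and the common consumer-drive spec of one unrecoverable read error per 10^14 bits):

```
P(at least one URE while rebuilding) = 1 - (1 - p)^b ≈ 1 - e^(-p*b)
  p = 1e-14 errors/bit                 (datasheet URE spec; assumption)
  b = 4 surviving drives x 12TB x 8    ≈ 3.8e14 bits read for the rebuild
  => 1 - e^(-3.8) ≈ 0.98
```

So on paper that's roughly a 98% chance of hitting at least one read error during the rebuild; with enterprise drives specced at 1e-15 it's still around 30%. Real drives often beat their spec sheet, but it's the right order of magnitude to worry about.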

computergeek125 ,

Others have already mentioned various tech to help you out with this - Ceph, ZFS, RAID 50/60, "RAID is not a backup", etc.
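To make one of those concrete, here's roughly what a ZFS layout for a box that size can look like - purely a sketch, with made-up device names (use /dev/disk/by-id paths on a real system):

```sh
# One pool built from 8-wide raidz2 vdevs, so each vdev can lose two drives.
# Only the first 24 of the 40 bays are shown; extend the pattern or keep spares.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf sdg sdh \
  raidz2 sdi sdj sdk sdl sdm sdn sdo sdp \
  raidz2 sdq sdr sds sdt sdu sdv sdw sdx
zpool status tank
```

The vdev-width vs. parity-level tradeoff is exactly the kind of thing the other comments here cover better than I can.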

40 drives is a massive amount. On my system I have ~58TB (before filesystem overhead): a 48TB NAS (5x12TB @ RAID-5), 42TB of USB backup disks for said NAS (RAID is not a backup), a 3-node vSAN array with 12TB of all-flash storage (3x500GB cache, 6x2TB capacity) at RF2 (so ~6TB usable, since each VM is its own independent RAID-1), and a standalone host with ~4TB @ RAID-5 (16 disks spread across 2 RAID-5 arrays; I don't have the exact numbers on hand).

That's 5+9+16=30 drives, and the whole rack draws 950W including the switches, which IIRC account for ~250-300W (I need to upgrade those to non-PoE versions to save some juice). Each server on its own draws 112-185W, as measured at the iDRAC. The rack used to pull 1100W until I upgraded some of my older servers to newer ones with better power efficiency - that's become one of my build-out design principles.

While you can just throw 40-60 drives into a 4U chassis (both Dell and 45Drives/Storinator offer that form factor as either a DAS or a server), that thing will be GIGA heavy fully loaded. Make sure you have enough power (my rack has a dedicated circuit) and that you place the rack on a stable floor surface capable of withstanding hundreds of pounds on four wheels (I think I estimated my rack to be in the 300-500lbs class).

You mentioned wanting to watch videos for knowledge - if you want somewhere to start, I'd suggest the series Linus Tech Tips did on the many iterations of their Petabyte Project, as a case study in what can go wrong when you have that many drives. Then look into the tech you can use to avoid the same mistakes Linus made. Many very good options are discussed in the other comments here, and I've already rambled on far too long.

Other than that, I wish you the best of luck on your NAS journey, friend. Running this stuff can be fun and challenging, but in the end you have a sweet system if you pull it off. :) There are just a few hurdles to cross, since at 140TB you're basically in enterprise storage land, with all the fun problems that come with scaling up/out. May your storage be plentiful and your drives stay spinning.

computergeek125 , (edited )

So, convince me. Why should I get a homelab instead of a regular NAS?

Eventually it just might get out of hand and you end up with both. TLDR at bottom.

Serving out of the home and I’ve been racking uptime
Gotta gotta add more because I want it all
It started out with a switch how did it end up like this?
It was only a switch, it was only a switch

I started out in college with an old digital sign controller (2 cores, 4GB RAM): I added a hard drive to a slot that existed in its OEM design and reflashed the BIOS back to OEM (something its value-added manufacturer never intended) to get the HDD to work. It was all the right price of free. That became my first NAS. Then I got the switch, then reflashed my router to OpenWRT for features, discovered it couldn't route a gigabit through NAT anymore, so I did what any logical nerd would do: tried pfSense but ended up building my own Linux-based firewall-and-everything server, also from the free electronics recycling bin. Then my old laptop got converted into a server.

Once I got a job and a real budget, I started running into hardware limits. You can only run so much on hardware that is old enough to attend middle school. Out went the homebuilt router-everything server and the digital-sign NAS; in came a Synology DS1517+, a Supermicro 8-core mini server, and a small rack. I settled on free ESXi as the hypervisor. Later, additional nodes were purchased and I upgraded from free to VMUG, adding vCenter and vSAN (aggregate size: 1.5TB cache with a 3TB capacity tier, which worked out to ~2TB total after RAID) to help manage it all. Windows Active Directory was built (I had a leftover permanent key from college) to coordinate my ever-increasing VM count with the same password everywhere.

Timeskip forward, and I have a pair of R720XDs I got for cheap because I knew how to BIOS-hack them off their original Google Search Appliance firmware back to Dell stock, total vSAN capacity at ~8TB after RAID, the Synology still alive but now rocking ~48TB after RAID, plus one "loose" R720 with ~2TB of storage and a then-new, now-aging Intel NUC with 32GB RAM. All three R720s have 128GB RAM each, I have more switch ports than I'd like to count, and 15 minutes of battery backup added last year. The NAS backs up all devices with an RPO of at most 1 week, and at most 24h for critical stuff. RTO is 2-4h from an event, and by golly that has saved my bacon at least 6 times in 3 years. At this point I have more infrastructure redundancy and capacity than some medium-sized businesses.

Could I live without it and its monstrous power bill? Yeah, absolutely, I could downsize and save some cash. But where's the fun in that? Every component I added to my system was added with a very specific common goal: every piece of this monster was built to help me learn about something IT-related. With this experience under my belt, I was able to confidently jump into stuff at work with the mantra of "yeah, I already know how to do this".

Plus, as a side benefit, it's a fun learning hobby. I have an absolute blast learning about all this technology and figuring out how to get the most bang for my buck when it comes to selecting software (paid vs FLOSS) and procuring hardware - all things you need to do in the real IT world. Sure, I don't get to play with the fancy spaceship servers that have multiple terabytes of RAM each like I do at work, but I don't (yet) need a multi-terabyte-RAM chassis at home.

Summary and TLDR: Build something that solves a problem in your life. Photo/video storage? NAS! Kids want better internet? OPNsense/OpenWRT firewall with a switch! Those are just two examples. But my chief, most important, prime-directive rule: build something that makes you and your family happy.

computergeek125 ,

It likely won't change the CG-NAT situation - the new modem would still have to get its IP address from the ISP it's attached to.

computergeek125 ,

Is this for a workstation or server platform? I didn’t think the servers had an option to mount two drives per caddy.

computergeek125 ,

EDIT: I take back what I said, I missed a detail when I did my first check. My thought process had a bad assumption, sorry about that.

Ah, I forgot a detail in that question, but I think you answered it.

Since you said Dell, I was curious whether you meant a rack-mount server chassis or a tower workstation - the 210 RAID card operates in both kinds of chassis.

By that port count I’m guessing this is a Precision tower of some variety.

computergeek125 ,

Can confirm that also works on OPN

Static outbound is a feature I wish more firewalls had because it requires the targeted device to send outbound once before it accepts incoming (or at least that’s my understanding)
