litchralee

@[email protected]


litchralee ,

It never ceases to amaze me how prolific PowerPC/PowerISA was (still is?) in the embedded space

litchralee , (edited )

Agreed. When I was fresh out of university, my first job had me debugging embedded firmware for a device with both a PowerPC processor and an ARM coprocessor. I remember many evenings staring at disassembled instructions in objdump, as well as getting good at endian conversions. The PPC processor was big-endian and the ARM was little-endian, which is typical for those processor families. We did briefly consider synthesizing one of them to match the other's endianness, but this was deemed to be even more confusing haha
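For anyone who hasn't had the pleasure, here's a trivial sketch (Python, purely illustrative) of the kind of byte-order swap that comes up when a big-endian and a little-endian processor exchange multi-byte values:

```python
# Illustrative only: swapping the byte order of a 32-bit value, the kind of
# conversion needed when a big-endian PPC and a little-endian ARM share data.
import struct

value = 0xDEADBEEF

# Pack as big-endian, then reinterpret the same bytes as little-endian.
be_bytes = struct.pack(">I", value)        # b'\xde\xad\xbe\xef'
as_le = struct.unpack("<I", be_bytes)[0]   # 0xEFBEADDE

print(hex(value), "->", hex(as_le))
```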

litchralee ,

Your primary issue is going to be the power draw. If your electricity supplier has cheap rates, or if you have an abundance of solar power, then it could maybe find life as some sort of traffic analyzer or honeypot.

But I think even finding a PCI NIC nowadays will be rather difficult. And that CPU probably doesn't have any sort of virtualization extensions to make it competitive against, say, a Raspberry Pi 5.

litchralee ,

To lay some foundation, a VLAN is akin to a separate network with separate Ethernet cables. That provides isolation between machines on different VLANs, but it also means each VLAN must be provisioned with routing, so as to reach destinations outside the VLAN.

Routers like OpenWRT often treat VLANs as if they were distinct NICs, so you can specify routing rules such that traffic to/from a VLAN can only be routed to WAN and nowhere else.

At a minimum, for an isolated VLAN that requires internet access, you would have to

  • define an IP subnet for your VLAN (e.g. a /24 for IPv4 and a /64 for IPv6)
  • advertise that subnet (DHCP for IPv4 and SLAAC for IPv6)
  • route the subnets to your WAN (NAT for IPv4; ideally no NAT66 for IPv6)
  • and finally enable firewalling (a rough sketch of the first step follows below)

As a reminder, NAT and NAT66 are not firewalls.
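For the first step, here's a rough sketch (Python, illustrative only; the prefixes are arbitrary examples, not a recommendation) of carving out the VLAN's subnets and the addresses the router would use. The remaining steps are router configuration, e.g. in OpenWRT's UI, rather than code.

```python
# Rough sketch of defining an isolated VLAN's subnets. Prefixes are examples.
import ipaddress

vlan_v4 = ipaddress.ip_network("192.168.50.0/24")     # IPv4 subnet for the VLAN
vlan_v6 = ipaddress.ip_network("2001:db8:0:50::/64")  # IPv6 prefix (documentation range)

gateway_v4 = next(vlan_v4.hosts())         # 192.168.50.1, the router's address on this VLAN
dhcp_pool = list(vlan_v4.hosts())[99:199]  # e.g. hand out .100-.199 via DHCP

print(f"Router address: {gateway_v4}/{vlan_v4.prefixlen}")
print(f"DHCP range: {dhcp_pool[0]} - {dhcp_pool[-1]}")
print(f"IPv6 prefix advertised via SLAAC: {vlan_v6}")
```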

litchralee , (edited )

Getting down to brass tacks: the way I'm reading the background info, your ISP ran fibre to your property, and while they were there, you asked them to run an additional, customer-owned fibre segment from your router (where the ISP's fibre lands) to your server further inside the property. Both the ISP segment and this interior segment are identical single-mode fibre. The interior fibre segment is 30 meters.

Do I have that right? If so, my advice would be to identify the wavelength of that fibre, which can be found printed on the outer jacket. Do not rely on just the color of the jacket, and do not rely on whatever connector is terminating the fibre. The printed label is the final authority.

With the fibre's wavelength, you can then search online for transceivers (xcvrs) that match that wavelength and the connector type. Common connectors in a data center include LC duplex (very common), SC duplex (older), and MPO (newer). 1310 and 1550 nm are common single mode wavelengths, and 850 and 1300 nm are common multimode wavelengths. But other numbers are used; again, do not rely solely on jacket color. Any connector can terminate any mode of fibre, so you can't draw any conclusions there.

For the xcvr to operate reliably and within its design specs, you must match the mode, wavelength, and connector (and its polish). However, in a homelab, you can sometimes still establish link with mismatching fibres, but YMMV. And that practice would be totally unacceptable in a commercial or professional environment.

Ultimately, it boils down to link losses, which are high if there's a mismatch. But for really short distances, the xcvrs may still have enough power budget to make it work. Still, this is not using the device as intended, so you can't blame them if it one day stops working. As an aside, some xcvrs prescribe a minimum fibre distance, to prevent blowing up the receiver on the other end. But this really only shows up on extended distance, single mode xcvrs, on the order of 40 km or more.
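To put rough numbers on "link losses" and "power budget": the check is just transmit power minus receiver sensitivity, compared against the sum of losses along the path. A toy calculation follows (Python; every dB/dBm figure is an assumed, illustrative value, so consult your xcvr's datasheet for the real ones):

```python
# Toy optical link-budget check. All dB/dBm figures are assumed, illustrative
# values; real numbers come from the transceiver datasheet.
tx_power_dbm = -5.0         # transmitter launch power
rx_sensitivity_dbm = -14.0  # weakest signal the receiver can decode

power_budget_db = tx_power_dbm - rx_sensitivity_dbm   # 9 dB to "spend"

fibre_km = 0.03             # a 30 m interior run
fibre_loss_db_per_km = 0.4  # rough single-mode attenuation
connector_loss_db = 0.5     # per mated connector pair (one at each end)
mismatch_penalty_db = 4.0   # guess for a mode/polish mismatch

total_loss_db = (fibre_km * fibre_loss_db_per_km
                 + 2 * connector_loss_db
                 + mismatch_penalty_db)

print(f"Budget {power_budget_db:.1f} dB, losses {total_loss_db:.1f} dB, "
      f"margin {power_budget_db - total_loss_db:.1f} dB")
```

With numbers like these there's still a few dB of margin left over, which is why a short, mismatched run can sometimes link up anyway.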

Finally, multimode is not dead. Sure, many people believe it should be deprecated for greenfield applications. I agree. But I have also purchased multimode fibre for my homelab, precisely because I have an obscene number of SFP+ multimode LC transceivers. The equivalent single mode xcvrs would cost more than $free, so I just don't bother. Even better, these older xcvrs that I have are all genuine name-brand, pulled from actual service. Trying to debug fibre issues is a pain, so having a known quantity is a relief, even if it means my fibre is "outdated" but serviceable.

litchralee ,

Regarding future proofing, I would say that anyone laying single pairs of fibres is already going to constrain themselves when looking to the future. Take 100 Gbps xcvrs as an example: some use just the single pair (2 fibres total) to do 100 Gbps, but others use four pairs (8 fibres total) driving each at just 25 Gbps.

The latter are invariably cheaper to build, because 25 Gbps has been around for a while now; they're just shoving four optical paths into one xcvr module. But 100 Gbps on a single fibre pair? That's going to need something like DWDM, which is expensive and runs into fibre bandwidth limitations, since a single mode fibre is only single-mode for a given wavelength range.

So unless the single pair of fibre is the highest class that money can buy, cost and technical considerations may still make multiple multimode fibre cables a justifiable future-looking option. Multiplying fibres in a cable is likely to remain cheaper than advancing the state of laser optics in severely constrained form factors.

Naturally, a multi-fibre single-mode cable would be even more future proofed, but at that point, just install conduit and be forever-proofed.

litchralee ,

In my first draft of an answer, I thought about mentioning GPON but then forgot. But now that you mention it, can you describe if the fibres they installed are terminated individually, or are paired up?

GPON uses just a single fibre for an entire neighborhood, whereas connectivity between servers uses two fibres, which are paired together as a single cable. The exception is for "bidirectional" xcvrs, which like GPON use just one fibre, but these are more of a stopgap than something voluntarily chosen.

Fortunately, two separate fibres can be paired together to operate as if they were part of the same cable; this is exactly why the LC and SC connectors come in a duplex (aka side-by-side) format.

But if the ISP does GPON, they may have terminated your internal fibre run using SC, which is very common in that industry. There's a wrinkle with GPON specifically: the industry has moved to polishing the fibre connector ends at an angle, known as Angled Physical Contact (APC) and marked with green connectors, versus the older Ultra Physical Contact (UPC) that has no angle. The benefit of APC is to reduce reflections back into the ISP's fibre plant, which helps improve service.

Whereas in data center and enterprise networking, I have never seen anything but UPC, and that's what xcvrs will expect, with few exceptions (such as GPON xcvrs).

So I need to correct my previous statement: to be fully functional as designed, the fiber and xcvr must match all of: wavelength, mode, connector, and the connector's polish.

The good news is that this should mostly be moot for your 30 meter run, since even with the extra losses from the mismatched polish, the link should still come up.

As for that xcvr, please note that it's an LRM, or Long Reach Multimode, xcvr. Would it work at 30 meters? Probably. But an LR xcvr that is single mode 1310 nm would be ideal.

litchralee ,

I've only looked briefly into APC/UPC adapters, although my intention was to do the opposite of your scenario. In my case, I already had LC/UPC terminated duplex fibre through the house, and I want to use it to move my ISP's ONT closer to my networking closet. That requires me to convert the ISP's SC/APC to LC/UPC at the current terminus, then convert it back in my wiring closet. I hadn't gotten past the planning stage for that move, though.

Although your ISP was kind enough to run this fibre for you, the price of 30 meters of LC/UPC-terminated fibre isn't terribly excessive (at least here in the USA), so would it be possible to use their fibre as a pull-string to run new fibre instead? That would avoid all the adapters, although you'd have to be handy and careful with the pull forces allowed on a fibre.

But I digress. On the xcvr choice, I don't have any recommendations, as I'm on mobile. But one avenue is to look at a reputable switch manufacturer and find their xcvr list. The big manufacturers (Cisco, HPE/Aruba, etc) will have detailed spec sheets, so you can find the branded one that works for you. And then you can cross-reference that to cheaper, generic, compatible xcvrs.

litchralee ,

Re: 2.5 Gbps PCIe card

In some ways, I kinda despise the 802.3bz specification for 2.5 and 5 Gbps on twisted pair. It came into existence after 10 Gbps twisted-pair was standardized, and IMO exists only as a reaction to the stubbornly high price of 10 Gbps ports and the lack of adoption -- 1000 Mbps has been a mainstay and is often more than sufficient.

802.3bz is only defined for twisted pair and not fibre. So there aren't too many xcvrs that support it, and even fewer SFP+ ports will accept such xcvrs. As a result, the cheap route of buying an SFP+ card and a compatible xcvr is essentially off-the-table.

The only 802.3bz compatible PCIe card I've ever personally used is an Aquantia AQN-107 that I bought on sale in 2017. It has excellent support in Linux, and did do 10 Gbps line rate by my testing.

That said, I can't imagine that cards that do only 2.5 Gbps would somehow be less performant. 2.5 Gbps hardware is finding its way into gaming motherboards, so I would think the chips are mature enough that you can just buy any NIC and expect it to work, just like buying a 1000 Mbps NIC.

BTW, some of these 802.3bz NICs will eschew 10/100 Mbps support, because of the complexity of retaining that backwards compatibility. This is almost inconsequential in 2024, but I thought I'd mention it.

litchralee ,

I quickly looked up the HPE/Aruba transceiver document, and starting on page 61 is the table of SFP+ transceivers, specifically listing the wavelength and mode. At least from their transceivers, J9151A, J9151E, JL749A, and JL783A would work for your single-mode, 1310 nm needs.

You will have to do additional research to find generic parts which are equivalent to those transceivers. Good luck in your endeavors!

litchralee ,

just reuse old equipment you have around

Fully agree. Sometimes the best equipment is that which is in-hand and thus free.

you can just send vlan tagged traffic across a dumb switch no problem

A small word of caution: some cheap unmanaged switches rigidly enforce 1500-byte payload sizes, and if the switch has no clue that 802.1Q VLAN tags even exist, it will count the extra 4 bytes as part of the payload. So your workable MTU for tagged traffic could now be 1496 bytes.

Most traffic will likely traverse that switch just fine, but frames carrying a full 1500-byte payload plus a VLAN tag may be dropped or cause checksum errors. Large file transfers tend to use the full MTU, so be aware of this if you see strange issues specific to tagged traffic.
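To make the arithmetic concrete, here's the frame-size math (a trivial Python sketch; the 1518-byte frame limit is the assumption about the cheap switch):

```python
# Frame-size arithmetic for a maximum-sized payload with an 802.1Q tag.
payload = 1500        # standard Ethernet MTU
eth_header = 14       # dst MAC + src MAC + EtherType
vlan_tag = 4          # 802.1Q tag inserted after the source MAC
fcs = 4               # frame check sequence

untagged_frame = eth_header + payload + fcs            # 1518 bytes
tagged_frame = eth_header + vlan_tag + payload + fcs   # 1522 bytes

# A switch that only budgets for 1518-byte frames (1500-byte payloads) sees
# the tagged frame as 4 bytes oversized, hence the effective 1496-byte MTU
# for tagged traffic.
print(untagged_frame, tagged_frame)
```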

litchralee ,

Since you mentioned that your ONT is 2.5 Gbps, I am assuming that you need a twisted-pair NIC. I don't have a recommendation for a NIC exactly for 2.5 Gbps, but since you're specifically looking for low operating temperature, you may want to avoid 10 Gbps twisted-pair NICs.

10GBASE-T -- sometimes called 10G copper, but 10 Gbps DACs also use copper -- operates very hot, whether in an SFP+ module or as a NIC. The latter is observable just by looking at the relatively large heat sinks needed for some cards. This is an inevitable result of trying to push 800 Msymbols/sec over pairs of copper wires, and it's lucky to exceed 55 meters on CAT6. It's impressive how far copper wire has come, but the end is nigh.

Now, it could be that a 10 Gbps NIC linked at only 2.5 Gbps drops into a lower power state. But my experience with the 10/100/1000BASE-T specs suggests that the PHY on a 10 Gbps NIC will just repeat the signals four times over, producing the quarter-rate 2.5 Gbps transmission. So possibly no heat savings there.

A dedicated 2.5 Gbps card would likely operate cooler and is more likely to be available as a single port, which would fit in your available PCIe slots, whereas 802.3bz 2.5/5/10 Gbps NICs tend to be dual-port.

A final note: you might find "2.5 Gbps RJ45 SFP+" modules online. But I'm not aware of a formal 802.3 spec that defines the 2.5/5 Gbps speeds for modular connectors, so these modules probably won't work with SFP+ NICs.

litchralee ,

Rack-mounted beer holder.

Jk. But really, anything which helps organize stuff is a worthwhile job for a 3d printer. Even something to loop fibre optic cables around, so that they don’t get bent tighter than their minimum bend radius, is useful.

I think you’ll also find the 3d printer aids in other endeavors. I’ve used mine to print replacement car trims, ham radio accessories, a photo film spooler, a bushing to convert vacuum hose diameters, and other odds and ends.

Looking to buy some Mellanox ConnectX-3 cards

I found a listing on eBay for a “Mellanox CX354A ConnectX-3 FDR Infiniband 40GbE QSFP+” card for quite cheap. By the sound of the listing title it supports both InfiniBand and 40GbE, is that right? I would like to try out InfiniBand, but I would be buying it for the 40GbE. And are there good drivers for modern Linux distros...

litchralee , (edited )

I only have experience with Mellanox CX-5 100Gb cards at work, but my understanding is that mainline Linux has good support for the entire CX lineup. That said, newer kernel versions – starting at maybe 5.4? – will have all sorts of bug fixes, so hopefully your preferred distro has built those driver modules in, or makes them loadable.

As for Infiniband (IB), I think you’d need transceivers with specific support for IB. That Ethernet and IB share the (Q)SFP(+) modular connector does not guarantee compatibility, although a quick web search shows a number of transceivers and DACs that explicitly list support for both.

That said, are you interested in IB fabrics themselves, or in what they can enable? One use-case native to IB is RDMA, which has since been brought to so-called “Converged” Ethernet in the form of RoCE, in support of high-performance storage technologies like SPDK that enable things like NVMe storage over the network.

If all you’re looking for are the semantics of IB, and you’re only ever going to have two nodes that are direct-attached, then the Linux fabric abstractions can be used the same way you’d use IB. The debate of Converged Ethernet (CE) vs IB is more about whether/how CE switches can uphold the same guarantees that an IB fabric would. Direct attachment avoids these concerns outright.

So I think perhaps you can get normal 40 Gb Ethernet DACs to go with these, and still have the ability to play with fabric abstractions atop Ethernet (or IP if you use RoCE v2, but that’s not available on the CX-3).

Just bear in mind that IB and fabrics in general will get complicated very quickly, because they’re meant to support cluster or converged computing, which try to make compute and storage resources uniformly accessible. So while you can use fabrics to transport a whole NVMe namespace from a NAS to a client machine with near line-rate performance, or set up some incredible RPC bindings between two machines, there may be a large learning curve to achieve these.

Installing some weird rails and a server in a rack ! A blog post by me! ( blog.krafting.net )

I got a server case and some rails for free; they were annoying to build (yes, build), and I could not find anything regarding those rails online, so I decided to blog about it, in the hope of helping someone with all the same questions as me!...

litchralee ,

Nice job making it work!

This reminds me of when I installed my Dell m1000e blade server into my rack. As it turns out, the clearance behind the face of a 19" rack isn’t standardized, so a protrusion on the ears would have interfered. The solution ended up being an angle grinder to remove the protrusion, and then re-leveling my rack, since the holes on the server wouldn’t align unless the rails are absolutely plumb.

litchralee ,

It works, and that’s what counts lol

Btw, I noticed your blog post was titled “random rail story #1”. Should I infer that more rack rail-related blog posts will follow?

Platform for First Proxmox Server

Looking to build my first server out, trying to figure out if there is a “better” platform for my needs. Right now I’m just planning a mix of machines and containers in Proxmox for running a NAS and Plex server, router of some sort (also, any preferences on wireless access points?), a pihole if that’s not just as easily...

litchralee ,

For wireless APs, Ubiquiti equipment is fairly well-priced and capable for prosumer gear, although I’m beginning to be less enthralled with the controller model for APs. They also can operate on 48vdc passive power, or 802.3af/at PoE, which might work nicely if you have a compatible switch.

I’ve heard from colleagues running Plex on Proxmox that core count is nice, except when doing transcoding, where you either want high single-core performance or a GPU to offload to. So an AMD Epyc CPU might serve you well, if you can find one of the cheap ones being sold off from all the Chinese data centers on eBay.

Now with that said, have you considered deploying against existing equipment, and then identifying deficiencies that new hardware would fix? That would certainly be the fastest way to get set up, and it lets you experiment for cheap while waiting for any deals that might pop up.

litchralee ,

The multi port NIC can work, although I would recommend jumping straight to a managed or enterprise switch that can do VLANs. It saves on physical wiring and a managed switch often overlaps with other desired homelab features anyway, like PoE, IGMP/MLD snooping, and STP or loop-protect.

litchralee , (edited )

Similar to your modem case, the fibre ONT on the side of my house is now PoE powered, because it would otherwise need two pairs from the CAT6 cable to provide 12 V to itself from a backup battery supply inside the house. Replacing that supply with PoE allowed me to centralize my network stack’s power source, so that a single UPS in my networking closet can power the ONT. It also reflects the reality that if my PoE switch goes down, my network is hosed anyway. There was also the issue that with only two remaining pairs, it would be impossible to realize 1 Gbps on the CAT6.

I also have PoE to the RPi1 units which attach to my TVs. These serve as set-top boxes with CEC interactivity via the TV’s HDMI port, and are PoE powered because I insist on all my devices being wired rather than on WiFi, so I might as well provide power as well. These use a microUSB PoE splitter, because 1) the RPi PoE hats mean I can’t fit them into standard RPi cases, and 2) the PoE hat runs very hot and makes a high-frequency squeal, which was unacceptable in this application.

Power cycling via SNMP on the switch is another nice benefit to having stuff PoE powered. In fact, I have one more application which depends on this behavior. I have a blade server which sits in my garage, that would otherwise consume a lot of standby power when I don’t need it. To fix that, a 240vac relay with 12vdc control coil sits ahead of it, so activating the relay turns on the blade server. That relay is powered by PoE, commanded by the switch, so whenever I want the blade server, it’s only an SNMP command away. iDRAC then communicates over the network using that same CAT6 that’s powering the relay, again recognizing the dependency that if PoE fails, the blade server is down anyway.
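For anyone curious what “an SNMP command away” can look like: many PoE switches expose the standard POWER-ETHERNET-MIB object pethPsePortAdminEnable for per-port power control, though whether yours does (or uses a vendor-specific MIB instead) is something to verify against its documentation. A rough pysnmp sketch, with the switch address, community string, and port index all placeholders:

```python
# Rough sketch (pysnmp): turn a PoE port off and back on via SNMP.
# Assumptions to verify against your switch: it implements the standard
# POWER-ETHERNET-MIB object pethPsePortAdminEnable (1.3.6.1.2.1.105.1.1.1.3),
# the group/port indices below are right, and 'private' is a write community.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, Integer, setCmd)

SWITCH = "192.0.2.10"                  # placeholder management address
OID = "1.3.6.1.2.1.105.1.1.1.3.1.4"    # pethPsePortAdminEnable, group 1, port 4

def set_poe(enabled: bool) -> None:
    value = Integer(1) if enabled else Integer(2)   # TruthValue: true(1)/false(2)
    error_indication, error_status, _, _ = next(
        setCmd(SnmpEngine(), CommunityData("private"),
               UdpTransportTarget((SWITCH, 161)), ContextData(),
               ObjectType(ObjectIdentity(OID), value)))
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())

set_poe(False)   # drop power to the relay / device
set_poe(True)    # and bring it back
```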

I’m only using 802.3at power levels right now, as that’s all my switch can do. If I ever acquire an 802.3bt switch, I might consider PoE lighting or PoE phone chargers, or silly things like that. There’s a lot that can be done with 60-ish watts. Note that the efficiency of PoE switches tends to be abysmal when lightly loaded.

litchralee ,

I second this idea, if it’s feasible. As noted elsewhere in this thread, the lead-acid batteries in UPS units have a limited lifespan, even if not regularly drained. Solar and off-grid enthusiasts have determined that parity between overall lifetime cost of lead-acid versus lithium batteries was reached years ago, and now it’s firmly in lithium’s favor, mostly due to the greater number of recharge cycles.

Contraindications for lithium batteries would include:

  • high local costs for lithium battery packs
  • lack of space for the hybrid inverter, as they’re usually not rack-mountable
  • the homelab drops below 0 C (32 F), in the specific case of LiFePO4 cells

That said, breathing life into old equipment is usually more environmentally friendly than acquiring new equipment.

litchralee ,

Did y’all mean to say milliseconds, and not microseconds? Sub-millisecond power loss would be less time than one AC cycle, whether 50 or 60 Hz.

Anyway, I do recall seeing some enterprise gear specifying operation through a drop in AC power lasting two cycles, precisely to cover the switch to UPS power, at least for 60 Hz power. So up to 33 milliseconds. A cursory search for hybrid inverters online shows a GroWatt with “<20ms” switchover, so this may be fine for servers and switches, when the inverter is operated without any solar panels.

For consumer grade equipment, all bets are off; some cheaper switch-mode power supplies do very weird things under transient conditions.

litchralee ,

Looks like a reasonable deal. The mobo has IPMI, which, if you’ve never used it, is a dream for server management. It’s no iDRAC or iLO, but it should work well enough for hands-off management.

litchralee ,

This answer would be incomplete without mentioning that Dell iDRAC and HPE iLO have a lot of proprietary functionality beyond what the IPMI standard requires. For example, iDRAC and iLO support rich KVM-like screen sharing, plus the ability to mount ISOs and other media onto the server. Indeed, so much more functionality exists in these implementations that a license key must be purchased to enable the most fancy features.

I will note that SuperMicro does simply call their offering “SuperMicro IPMI” despite it having a few of these proprietary features. But by and large, basic IPMI is an interoperability specification, with each implementation having its own unique strengths.

litchralee ,

From your description, this new box would not necessarily have to be a full homelab-in-a-box, but it needs to be able to run on its own, with possibly an umbilical cord to your normal homelab for regular syncing. The new box needs to be fairly user-friendly, in the sense that someone else can connect it to their monitor/keyboard/mouse, enter a password, and be able to browse all the documents.

The first thing that comes to mind for me is a NUC or other small form-factor PC, with capacity for your desired SSDs. On a daily basis, this would sit somewhere convenient, like at home or maybe off-site from your homelab, with only power and a network connection. But it would run an OS with a GUI – GNOME? – even though it mostly runs headless. All your syncing could be done with rsync or whatever, and neither your homelab nor this machine should require the other in order to function properly, retaining independence. This machine could then be easily disconnected and tested semi-annually to make sure that it will work properly when the time comes.

Is this the sort of answer you’re looking for?

Also, TIL paperless-ngx

HP P822 contoller

Hi everybody, I own a HP DL380 G7 which works fine. Today I added a P822 controller for my D2700 StorageWorks. But when I attach the D2700, the controller doesn’t initialize anymore. If I start the server and attach the D2700 SAS cables after it’s started, it finds the D2700 and I can see the disks. Obviously I can’t use it...

litchralee ,

I don’t have specific experience with the Gen7 series, but firmware updates ostensibly come as an ISO or USB image which you can boot in lieu of your normal OS to apply them. At least, that’s one of the ways I think HP would still support, in case customers are running neither Windows nor a Linux-based OS.

To rule out a cable-specific electrical issue at boot, what happens if you boot the server with the cables attached to the controller, but not attached to the d2700?

litchralee ,

To be abundantly clear, the firmware update resolved the issue you were having with the disk shelf?

litchralee ,

Your switch and AP configuration seem to be fine, so I would guess that the issue is on the routing/firewall side in OPNsense. Do I understand correctly that you assigned 192.168.10.0 as the IP address for OPNsense on the VLAN10 interface?

That might pose an issue, since in a /24 sized subnet in IPv4, the .0 address is the network identifier. Some software historically would wrongly disallow using this as an IP address, either as a source or as a destination. You might try changing your address to 192.168.10.1/24 to see if that works for your devices.
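A quick way to see the distinction (illustrative Python):

```python
# .0 in a /24 is the network address, not a usable host address.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
print(net.network_address)    # 192.168.10.0  <- what was assigned to OPNsense
print(next(net.hosts()))      # 192.168.10.1  <- first assignable host address
print(net.broadcast_address)  # 192.168.10.255
```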

BTW: do you plan to enable IPv6 as well?

litchralee ,

Are you familiar with using Wireshark for traffic analysis? I think the next step is to figure out what is getting through and what isn’t, to the Windows machine to start with.

Focusing on IPv4 for now, I would hope the network trace shows the DHCP request being sent out, the DHCP response with an IP for the Windows machine, and then some outbound web TCP traffic (e.g. to google.com), followed by some sort of TCP response. But since it’s not working, I imagine the latter would be replaced by – ideally – ICMP error messages that will describe the problem.
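If running the Wireshark GUI on that box is awkward, the same first look at DHCP can be had with a small capture script; a sketch using scapy (assuming it’s installed, the interface name is right, and you have capture privileges):

```python
# Minimal capture of DHCP traffic on an interface, as a sanity check that the
# request goes out and a response comes back. The interface name is a placeholder.
from scapy.all import sniff

packets = sniff(iface="eth0",
                filter="udp and (port 67 or port 68)",   # BPF filter: DHCP
                prn=lambda pkt: pkt.summary(),
                timeout=60)
print(f"Captured {len(packets)} DHCP packets")
```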

litchralee ,

Looking at the firewall config, nothing stands out to me as unusual. On the gaming rules page, can you include the 16 autogenerated rules? I don’t imagine that’s where the issue is, but it might be worth a look.

When your Windows machine is attached on the VLAN network, you said it is successfully assigned an IPv4 address using DHCP, right? Is it able to ping the router? Can it ping anything successfully?

litchralee ,

Np, it helps me keep my networking skills fresh and relevant.

I can ping things like google.com or just the DNS of 8.8.8.8 no problem

When you ping google.com, does this resolve to Google’s v4 or v6 address? In either case, this at least proves that the VLAN routing is enough to: 1) reach the system’s configured DNS server, 2) receive the DNS record, 3) send an ICMP(v6) Echo through the default gateway, and 4) receive the ICMP Reply in response. If this works on v6, that makes sense since you have a rule explicitly for v6 ICMP to pass through. If this works on v4, I’m slightly confused why this works but nothing else does.

I can’t ping the static router address of 192.168.10.1, but I think that’s because of the rule I have in place that includes all private networks

Which rule was this? But more importantly, in the Wireshark trace, does any traffic at all from 192.168.10.1 show up as a source IP? The pings from earlier only need the MAC address of the gateway, but the DHCP responses should be coming from 192.168.10.1. Does anything else come from that IP? On a related note, do you see any ARP broadcasts originating from your laptop asking for any addresses on the network, such as 192.168.10.1? I’m trying to rule out certain odd situations.

I’ve got 1 collision error on the LAN, and 2 in/out errors on the vlan on the out side

While collisions are unexpected in today’s point-to-point switching topologies, if it’s just in the single digits and the vast majority of total frames are passing through without issue, then this is not a cause for great concern about your L2 network. To be clear, are you running 1 Gbps on the OPNsense interface and on all the switch ports?

litchralee ,

It does appear that you have addressing working but not connectivity. As I said, I’m no expert on OPNsense, but I did find this thread which has some thoughts: forum.opnsense.org/index.php?topic=29459.msg14233…

In -> Firewall -> Settings -> Advanced. Make sure the checkbox “Allow IPv6” is enabled for obvious reasons.

As well as:

You just have to choose for hybrid Firewall: NAT: Outbound and add a rule to it:

Interface: WAN Protocol: IPv6 pass from any to any

This latter rule is… odd to me since there shouldn’t really be NAT for IPv6 to a delegated prefix. But maybe that rule is meant to effectively disable the NAT and allow traffic to pass straight through without translation, obviously after applying your firewall rules.

litchralee ,

Good luck! Also, when you do have everything working, back up your config. And also check to make sure your firewall is blocking inbound traffic as expected, for both v4 and v6.

litchralee ,

This may be a sizable leap in debugging, but for strange networking issues, I’ll usually start Wireshark and monitor whatever traffic is coming from the ISP’s equipment, looking for clues. A really nice clue would be something like VLAN tagged traffic, which would indicate the ISP requires a certain VLAN ID. Or perhaps you could see if your DHCP requests are being answered or not.

I do recognize that this sort of network sleuthing is as much art as it is science, so your mileage will vary.

litchralee ,

If the ISP router has a VLAN ID configured, there’s a possibility that they strip it before passing through to your equipment, so you wouldn’t need to configure it on your end. So while there’s no guarantee copying the VLAN ID will work, it could still be worth a try.

litchralee ,

The ONT can still have an IP address independent of pass-through mode; this is often done so the ONT can be remotely troubleshot by the ISP, although if they’re burning a public IPv4 address to do this… that’s just wasteful.

As for CGNAT, I think what matters is whether other hosts on the Internet can see the address your router has configured. I like to check wtfismyip.com

Traceroute has some known deficiencies, or rather it is often used for things it wasn’t meant for, so I wouldn’t necessarily put too much concern behind what it reports for the intermediate routers. If you’ve got a public IP address and it behaves like one to your applications, then you should be good to go.

For a discussion about traceroute: gekk.info/articles/traceroute.htm

litchralee ,

IMO, custom firmware is a means to an end, rather than an activity undertaken just for the sake of it. That is, I run custom firmware when it gives me features I otherwise wouldn’t have had, or because the original firmware has issues.

For a great majority of home routers, OpenWRT and the like open up enormous possibilities, so I have no objection there. For a managed switch, however, the returns are diminishing: most of the time, the complexity of a network falls upon its gateway or firewall, rather than the switch. Yes, there could exist complex VLANs with priority flow control and GRE tunnels, but if a switch doesn’t support that, it’s usually because it can’t, due to lack of ASIC support or necessary performance, rather than firmware not implementing it.

Of course, things get wild in the enterprise switch space, where switches rise to the forefront of network design, with things like per-user VLANs and “lite L3” routing. But I’m ignoring those, since they’re hideously expensive and beyond the entirety of Ubiquiti’s product line.

So I posit to you: what sort of feature would you want to see in your switch that’s not there today? Would that feature have to be on the switch, or could it still operate if it was on your router?

litchralee ,

Do you have no sense of adventure!! I installed openWRT for the fun of it!

OK, I concede haha. You’re absolutely right that doing things Just Because™ is as valid a reason as anything else, and as an engineer I shouldn’t be dissuading other folks from exploring. One thing I will say is that because my work develops network switches, it’s an occupational hazard that I’ve become less interested in going home and doing more recreational networking. I still do, but not on my “production” home network. I have a separate equipment stack for playing around with.

maybe I should learn more networking and learn to use this first router well

I would doubly recommend this: networking is a great big world that underpins so many things, but is often unsung and misunderstood, or even just not understood at all. Looking under the hood is seldom unenlightening.

my Unifi switch needs a separate controller software running on a Pi or similar to configure it

You’ve pretty much arrived at exactly the reason why I don’t use Ubiquiti’s switch products, inexpensive and capable as they are. I’m a proponent of “fewer moving parts”, so it’s either self-contained network appliances (ie router, switch, modem) or tightly-integrated equipment with configurability and performance that overcomes the complexity burden. These controller-managed or cloud-managed devices are just adding points-of-failure, IMO.

Regarding the feature you mention, I think the industry uses the term “mirroring”, as in Port Mirroring or VLAN Mirroring. That said, the volume of traffic is basically a firehose and could potentially overwhelm whatever port or entity is to receive the mirrored traffic. High-end switches will instead forward traffic on a more granular basis, based on filters issued from the IDS for what constitutes suspicious traffic. You might consider reading about OpenFlow and Software Defined Networking (SDN) for how some of these scenarios are implemented, but this is getting rather deep into networking.

The refresher I was given a while ago to read for networking was The All-New Switch Book, second edition. It’s a bit old at this point, but it’s a solid foundation on Ethernet and standard network features.

litchralee ,

Dare I ask what happens if the gateway doesn’t have this auxiliary cooling? Does it drop packets? Something worse?

litchralee ,

To protect the world from devastation

To unite all peoples within our nation

litchralee ,

I bought a Naples (Gen 1) Epyc CPU and mobo from eBay back in 2021. My understanding is that it was from Chinese data centers clearing out to make room for Rome (Gen 2), since they would have been running Naples for a while and it probably made sense to upgrade.

Overall, the experience was fine, although I will note the CPU was rather lightly packaged and the description didn’t make it clear if it came with the plastic alignment piece to install the CPU – it did.

litchralee ,

The product manual for the OR700 indicates that it comes with a set of rack mount brackets. A bit of searching with Google Images shows that these brackets are only supported by the front rails.

Generally speaking, a product’s official rack-mount hardware is sufficient to support the product on its own, without anything pressing on it from above, and assuming all four screws (probably not included) are secured into cage nuts of the matching size and thread.

From my experience, 8 kg (18 lbs) for a device which is only 23 cm deep (9.2 in) is no cause for alarm, when installed with all 4 screws. Heavier appliances exist which also don’t require a 4-post rack.

litchralee ,

Whichever rack you do get, try to get one with square holes. That said, pre-threaded holes aren’t common for server racks, so they should be easy to identify and avoid. I say to prefer square holes because it is far easier to replace a removable cage nut (e.g. M6-1.0) than to repair and re-tap a stripped hole in the rack.

litchralee ,

The other comments have already touched upon specific security recommendations and useful learning material. But since you did request an ELI5, I figured I’d throw in some simple advice.

I don’t know much about Tailscale, but it looks to be an encrypted VPN into your server. This pipe to your server is secure from actors spying on your public WiFi connection, but would not help if – for example – your laptop is compromised and uses the VPN to further attack your server. To that end, the principle of “defense in depth” says that the server itself should have its own firewall, as a secondary or tertiary layer to keep the bad things away.

Your server firewall should default to reject*/block all inbound connections, with explicit exceptions only for services you intend to expose, such as a web server, SSH server, Jellyfin, etc. Once an inbound connection is approved through the firewall, the outgoing reply to the client would also be allowed through, as would any follow-up traffic that is part of that same connection. This is connection tracking, which all stateful firewalls can perform. On Debian/Ubuntu, ufw is the usual simple firewall frontend, and it is IMO very easy to configure for common services or port numbers.

The next thing you can do to secure the server itself is to limit your attack space. Don’t use password authentication if you can avoid it, and use good, complex passwords where you must. Your SSH server can be configured to silently reject passwords and only accept public-key authentication, and your JellyFin authentication can be generated and stored by a good password manager.

At this point, we could go on about per-application recommendations, but just having a firewall on your server staves off a lot of script-kiddie level of attacks, from outside or even within your LAN.

  • The difference between reject and block in the firewall context is that reject causes a reply to be sent back to the client, positively informing them that access is denied and to not try again. The drawback is that this reveals that a firewall is in place, but is also valuable information when debugging a network connection. Whereas block silently discards network traffic, the same result as if the network lost the packet. IMO, block should only ever be used for WAN firewalls – to not reveal too much info to a potential attacker – but internally, firewalls within a LAN should use reject, as the benefits outweigh the risk of a network intruder who is already on the LAN.
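One way to see the difference from a client’s perspective (a small Python sketch; the host and port are placeholders): a rejected connection fails immediately with “connection refused”, while a blocked one just times out.

```python
# Observing reject vs. block from a client: reject returns an immediate
# refusal, block (silent drop) looks like a timeout. Host/port are placeholders.
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "rejected (firewall sent a refusal, or nothing is listening)"
    except socket.timeout:
        return "filtered (silently dropped, or the host is unreachable)"

print(probe("192.0.2.50", 22))
```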

As for the bonus question, with that much hardware, you could do interesting things such as experimenting with a Kubernetes cluster, or a ZFS filesystem. Or maybe you can do Chia mining with all that disk space, or Folding@Home (and CureCoin). If you’re more into just VMs and how they network together, it would be a good test bench to learn about Layer 2 forwarding and Layer 3 routing, if you wanted to understand how IPv6* traffic traverses multiple (virtual) Ethernet links.

  • Note: I am an IPv6 fanboy and promote it wherever I can over legacy IP (aka IPv4)

Finally, from the hardware specs you’ve given, might this be some sort of Dell or HPE server? If so, I would strongly urge you to enable the Lights-out Management (LOM) functionality (Dell calls this iDRAC; HPE calls it iLO), if you haven’t already. It may be the single most important tool for any system administrator, which is your role now, since you are in charge of this server. In short, LOM is like having a KVM, plus power control, and the ability to push physical buttons on the server and attach USB drives, all via a slick HTML5 interface.

Good luck and have fun!

litchralee ,

As other posters have remarked, it’s difficult to offer a generalized statement on PoE efficiency. One thing I will point out that hasn’t been mentioned yet is that PoE switches tend to have poor “wall to device” efficiency when lightly loaded. Certifications like 80 Plus only assess efficiency at specific loading levels.

Hypothetically, a 400 W PoE switch serving only a 5 W load may cause an additional 10 W to be drawn from the wall, which is pretty horrific. But if serving loads totalling 350 W, it could draw 390 W from the wall, which might be acceptable efficiency.

Your best bet is to test your specific configuration and see how the wall efficiency looks, with something like a Kill A Watt meter. Note that even a change from 120 V to 240 V input can affect your efficiency results.
