So many sci-fi authors exploring interstellar civilizations toss in some sort of faster-than-light travel or wormholes to facilitate the narrative. What I liked about Vinge was he considered how things might play out if you actually stuck to the laws of physics as we know them today.
He imagined a nomadic society which moves from star system to star system mostly trading knowledge. While they travel at sub-light speeds, they broadcast a galactic Internet’s worth of data at the speed of light. The catch is that much of it is encrypted and only they have the keys, so they have tremendous power wherever they go.
This is not to say he never considered FTL, but when he did, he went deep into its implications. It was not just a means of hopping quickly around the galaxy. He realized that it would enable outrageously powerful AI, as the speed of thought would be increased by orders of magnitude.
I really liked how he envisioned space travel and the culture that came with it. A small but rich detail was how all the time measurements were given in kiloseconds and megaseconds to describe months and years, since a nomadic space tribe would have little use for calendars tied to orbits. It’s creative and well thought out.
His books and short stories set in a nearer future, where society and our education system are vastly different because of AR, are a lot of fun as well.
Several people on Lemmy and other places recommended ‘A Fire Upon the Deep’ and ‘A Deepness in the Sky’ to me, and I plowed through them. Really enjoyable reads with genuinely unique takes that I haven’t seen in other media, even though they’re 30 years old. The aliens feel actually alien but follow a logic, which I appreciate. The ‘zones of thought’ concept is now just forever in my head.
Can anyone weigh in on whether any of these can be used for a cluster?
I use VMware in my homelab via VMUG, and I’m sure that’s going to get destroyed next, so I’m looking for an alternative that allows running VMs across hosts using shared storage, with migrations between hosts. I’d prefer FOSS, but the only hypervisor I know of that supports all of this right now is Hyper-V. I really REALLY don’t want to use Hyper-V… Most of my workloads are Linux, with a handful of Windows servers that I use for an internal domain and testing.
From the brief Google searching I’ve done it appears to be possible, though I’m not sure Proxmox skills will help me professionally. I used VMware before because I needed to learn VMware ESXi and vCenter. I know them fairly well at this point.
I want to target a hypervisor solution used in large companies, and I’m not sure that’s Proxmox. Currently I’m leaning towards OpenStack, since I know some cloud providers use it for VPS offerings. I know enough about Hyper-V to know I don’t want to use it, ever, at least outside the context of Azure VMs. I can’t really do Azure cloud at home (there is a way, I’ve looked into it, but it’s very expensive), though my current workplace uses Azure extensively.
I’m just not aware of any company using Proxmox as a VM platform, whether single-host or clustered.
Well, I can’t speak for enterprise, but for me it works pretty well in a 3-node cluster. I can live-migrate VMs that are hosting services with very little interruption. Proxmox also supports HA and Ceph, but I haven’t used those features.
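For reference, a live migration in a Proxmox cluster like that is a one-liner with the `qm` tool, and HA is managed with `ha-manager`. The VM ID and node name below are made-up examples:

```
# Live-migrate VM 100 to node pve2 while it keeps running.
# With shared storage, only the RAM state has to move over the network.
qm migrate 100 pve2 --online

# Optionally put the VM under HA management, so the cluster
# restarts it on another node if its host fails.
ha-manager add vm:100 --state started
```

This is a sketch of the stock Proxmox CLI, not a full HA setup; quorum and fencing still need to be configured cluster-wide.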
Good to know. I’ll examine everything carefully. I’ve been debating replacing my existing monolithic iSCSI storage configuration with Ceph, so maybe that will weigh in… Having something that can access Ceph natively is a big plus. Otherwise I need something in between that can basically translate Ceph to iSCSI LUNs, which is more complexity than I’d like.
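On Proxmox, attaching an existing Ceph pool natively is a single storage definition rather than an iSCSI translation layer. A hedged sketch using `pvesm` (the storage ID, pool name, and monitor addresses are placeholders):

```
# Register a Ceph RBD pool as VM disk storage (names/addresses are examples)
pvesm add rbd ceph-vms \
    --pool vms \
    --monhost "10.0.0.51 10.0.0.52 10.0.0.53" \
    --content images,rootdir
```

Once defined, VM disks on that storage are plain RBD images, so snapshots and live migration work without any iSCSI gateway in between.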
A lot of things to consider. Thank you for the comments.
I have clients that use an internal domain, but they do it as a subdomain, e.g. internal.contoso.com.
Any internal-only domains that I set up are probably going to go the same way. I’ve used domain.local previously, and the DNS headache I got from that is immeasurable (.local is reserved for mDNS, so many clients won’t resolve it via ordinary unicast DNS).
With so many things going “to the cloud” or whatever, the internal.domain.tld convention tends to make more sense to me.
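As an illustration of the internal.domain.tld convention, an internal-only BIND zone for such a subdomain might look like this; contoso.com, the hostnames, and the addresses are all placeholders:

```
; Zone file for internal.contoso.com, served only to the LAN
$ORIGIN internal.contoso.com.
$TTL 3600
@           IN SOA  ns1.internal.contoso.com. admin.contoso.com. (
                    2024010101 ; serial
                    3600       ; refresh
                    900        ; retry
                    604800     ; expire
                    300 )      ; negative-cache TTL
            IN NS   ns1.internal.contoso.com.
ns1         IN A    10.0.0.53
fileserver  IN A    10.0.0.20
```

Because the parent zone is a real registered domain, certificates via ACME DNS challenges still work for internal hosts, which is a common reason people prefer this over .local or an invented TLD.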
Maybe some MBA did the math and is smarter than me, or maybe they have goals for ESXi that extend beyond getting people and companies to use it, but they have to realize the free tier of ESXi is what the nerds and IT pros use to hone their skills. And those are the people who talk their companies into buying products.
Moves like this always seem so short-sighted. Five years from now you’re going to see an uptick in Proxmox setups, or managed solutions using Proxmox and other competitors.
The reality is that nobody's learning much useful from Free ESXi, as you need vCenter for any of the good stuff. They want you using the eval license for that, which gives you the full experience but only for 60 days.
Still, there are a lot of folks running free ESXi in labs (home and otherwise) and other small environments that may need to expand at some point. They're killing a lot of goodwill and entry-level market saturation for what appears (to me at least) to be literally zero benefit. The paid software is the same, so they're not developing any less. And they weren't offering support with the free license anyway, so they're not saving anything there.
That’s a great point. But vSphere not being available in the free tier kind of proves my point: why hamstring your free tier by eliminating the most useful features? I understand not giving away your product for free, but there was a way to do it where you turn it into a marketing tool.
You drive people away and then you end up in a situation where “esxi free tier is pointless” and then you kill that and all your goodwill completely. I guess we’ll see how it plays out.
Broadcom isn’t known for being great with acquisitions. It’s probably going to strip it for parts and sell it off.
I’m amazed it held out for so long. Small stacks and getting people used to using your tool sounds like a good long-game strategy, and Broadcom doesn’t do those.
I use VirtualBox right now. My daily-driver Windows 10 guest is so slow that pushing the Start button comes with a 20-second wait. Looking at the performance monitor while this is happening, nothing pops out as the culprit. Plenty of resources left.
I’ve always sworn by VirtualBox, but I’m going to ask my boss for a Workstation Pro license next time I see him.
I can really recommend Proxmox. Some years ago we switched from a €60,000 Dell VMware storage/server setup to a three-host Proxmox setup for about half the price (to be fair, add €5-10k for setup by our local Linux team, because we did not know much about Proxmox). The main reason was that we were able to place one of the hosts in our warehouse (connected with 10G fiber), so in theory our production takes no harm from water/fire/whatever in the server room, because that system can instantly take over (after some learning it works like a charm). I had some concerns regarding Ceph, but for us it has proven rock solid; even while we had some really weird switch issues, it always recovered fast and without problems as soon as the connection was back. A big issue was the licensing terms for Microsoft products, because with three AMD systems you have a lot of cores to buy licenses for, so we had a good excuse to substitute and cut out some products that only supported Windows environments.
Honestly the whole fabric of the internet, how email/SMTP and DNS and things work, is just a relic of an earlier time. I honestly think the money-men have their hands deep enough into the workings at this point that you wouldn't be able to create something like those things today and have them go anywhere. I'm surprised that it all still works as well as it does.
No, I was talking about the shared infrastructure. SMTP, DNS, ICANN, things like that require a level of cooperation and shared investment in the whole thing working well, not really because anyone's going to "win" the business game by running it to their particular advantage. That's a very alien way of thinking on the modern internet. The equivalent today would be something like massive publicly available caching web proxies that anyone could use as a big reverse-CDN to speed up their web access that were just kind of provided to everyone, government-funded, just sitting out there as a public resource. You know, like communism.
I've heard network engineers say they had a lot of trouble talking to their bosses about "peering" (setting up routes between two ISPs that happen to have operations close to each other, so they can hand traffic off to each other if it'd be more efficient to use the other guy's routes and both networks get more efficient to operate). They said they had a lot of trouble explaining the concept to the business people. They pay us for service? Fine. We pay them for service? Fine. We provide service to each other and both of us benefit without any money being involved? Plt... bzzt... I give up, I don't get it. Who gets paid? Why do we do this?
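Mechanically, settlement-free peering is just two edge routers exchanging routes over BGP with no billing involved. A minimal FRR-style sketch from ISP A's side; the AS numbers, addresses, and route-map name are invented for illustration:

```
! Edge router config on ISP A, peering with ISP B at an exchange point
router bgp 64512
 neighbor 198.51.100.2 remote-as 64513
 neighbor 198.51.100.2 description settlement-free peer (ISP B)
 address-family ipv4 unicast
  ! Export only our own and our customers' routes to the peer,
  ! never full transit routes -- that's the "peering, not transit" rule.
  neighbor 198.51.100.2 route-map EXPORT-CUSTOMERS out
 exit-address-family
```

Each side hands the other only the routes it terminates itself, so both save on transit costs without money changing hands, which is exactly the part the business people struggled with.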
They've lost sight of the idea that it's a good thing to set up the world in a nice, well-working way (for everyone, including yourself), and just focused on how they can make their check bigger even if there's no point, or even if everything gets worse as a result.