Maybe some MBA did the math and is smarter than me, or maybe they have goals for ESXi that extend beyond getting people and companies to use it, but they have to realize free-tier ESXi is what the nerds and IT pros are going to use to hone their skills. And those are the people who talk their companies into buying products.
Moves like this always seem so short-sighted. Five years from now you are going to see an uptick in Proxmox setups, or managed solutions built on Proxmox and other competitors.
The reality is that nobody's learning much of use from free ESXi, as you need vCenter for any of the good stuff. They want you using the eval license for that, which gives you the full experience, but only for 60 days.
Still, there are a lot of folks running free ESXi in labs (home and otherwise) and other small environments that may need to expand at some point. They're killing a lot of goodwill and entry-level market saturation for what appears (to me at least) to be literally zero benefit. The paid software is the same, so they're not developing any less. And they weren't offering support with the free license anyway, so they're not saving anything there.
That’s a great point. But vSphere not being available in the free tier kind of proves my point. Why hamstring your free tier by eliminating the most useful features? I understand not giving away your product for free, but there was a way to do this that turned the free tier into a marketing tool.
You drive people away, and then you end up in a situation where “the ESXi free tier is pointless,” and then you kill that and all your goodwill completely. I guess we’ll see how it plays out.
Broadcom isn’t known for being great with acquisitions. It’s probably going to strip VMware for parts and sell it off.
The weird thing to me about the majority of VMware environments I see is that they exist to prop up and extend Microsoft environments.
Microsoft is hostile towards this use case because having your own cloud competes with their cloud products.
VMware is a commodity product that exists because its makers know how desperately IT professionals need to keep these Windows systems running with some level of reliability, with advanced backup and replication strategies. And it was good.
After trying out proxmox I can say that:
Windows VM performance is much faster on VMware. I think this boils down to the storage drivers. I could go into more detail, but not here.
Containers and Linux VMs in Proxmox are offering me more than I ever really hoped for.
But now I’m starting to think about what the alternatives really are. VMware was a Windows-first virtualization platform; other virtualization platforms in the open-source ecosystem put things like Linux first. Having to race to the point of hosting Windows systems, with constantly increasing licensing prices, has really diminished the value of virtualization for Windows overall, at least to me.
I think we as a community need to move away from Windows on the server, embrace technologies like containers (Docker, Podman, Kubernetes), and phase out reliance on Windows.
For starters, does anybody have a rock solid setup guide for a Kubernetes Active Directory System?
Yeeahh... I'm thinking (hoping) he means an alternative LDAP/IdP, like Keycloak or Authentik..? Wanting to reduce reliance on Windows means kicking AD to the curb, too.
The problem with Samba AD in a container (or Samba in a container generally) is that Samba isn’t designed to run in an ephemeral environment. You could run it in an LXC container, but anything beyond that will break things in the short or long term.
I figured you could get around some of the storage limitations with something like persistent volume claims. I’m testing it out at the moment. I am a big fan of LXC.
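For anyone unfamiliar with the idea, here is a minimal sketch of what a persistent volume claim for that kind of test might look like, assuming a Kubernetes cluster with a default StorageClass. The image name and paths are hypothetical, not a tested Samba AD setup:

```yaml
# PersistentVolumeClaim: the storage survives pod restarts, so the
# service's state directories aren't lost when the container is replaced.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: samba-ad-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Pod mounting the claim where Samba keeps its databases and config.
apiVersion: v1
kind: Pod
metadata:
  name: samba-ad
spec:
  containers:
    - name: samba
      image: example/samba-ad:latest   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /var/lib/samba
        - name: data
          mountPath: /etc/samba
          subPath: etc
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: samba-ad-data
```

Note this only keeps the data around between restarts; whether it addresses Samba's deeper objections to ephemeral environments is exactly what the experiment would have to show.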
I see a few people have created Samba Docker containers, and I’m giving them a whirl. I can’t say much for stability, but I think it’s an interesting experiment.
I know that in the past the SMB server didn’t work in LXC containers because certain kernel modules caused conflicts.
If you manage to create persistent containers, how are you going to update them down the road? As I said previously, Samba isn’t designed in a way that allows for effectively hot-swapping system components.
It seems like it would be better to create a VM template and then set up a failover cluster. Just make sure you have a time server somewhere on the network.
If you are dead set on containers, you could try LDAP in a container. I just don’t think Active Directory was built for Linux containerization.
There are a few applications out there that I don’t fully understand the deployment of but seem to work in containers.
Typically the storage is mounted outside of the container and passed through in the Compose file for Docker. This allows your data to be persistent. Ideally you would also want it to reside on a file system that can easily be snapshotted, like ZFS. When you pull down a new container image, it should just remount the same location and begin to run.
Or at least that’s how I’d imagine it would run. I feel like one would run into the same challenges people have running databases persistently in containers.
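As a sketch of the bind-mount pattern described above (the image name and host paths are illustrative, not a tested setup):

```yaml
# docker-compose.yml: container state lives on the host (ideally a
# ZFS dataset), so replacing the container keeps the data intact.
services:
  samba:
    image: example/samba:latest      # hypothetical image
    volumes:
      # host path (left) sits on a snapshot-friendly dataset;
      # container path (right) is where the service keeps its state
      - /tank/appdata/samba:/var/lib/samba
      - /tank/appdata/samba-config:/etc/samba
    restart: unless-stopped
```

Pulling a newer image and recreating the container re-mounts the same host directories, which is what lets the data outlive any individual container.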
I used to work for a company that outsourced most of its developers to Infosys. I managed a few of them. They were lovely people, treated like shit by my company and by Infosys. I did my best with the little power available to me to give them reasonable projects and maintain reasonable expectations; they said I was the best manager they ever had. After I got laid off, they all quit too, and most, I think, ended up working for Oracle somehow. I don’t know how they’re doing these days, but Oracle sounds like it’s probably a fate worse than death, unfortunately.
Several people on Lemmy and other places recommended ‘A Fire Upon the Deep’ and ‘A Deepness in the Sky’ to me, and I plowed through them. Really enjoyable reads with genuinely unique takes that I haven’t seen in other media, even though they’re 30 years old. The aliens feel actually alien but follow a logic, which I appreciate. The ‘zones of thought’ idea is now just forever in my head.
Yeah, that's a fair point, although it's still a bit… well, funny (not "funny ha ha") that they even temporarily blocked those extensions. I'm not sure what Roskomnadzor could have done if Mozilla had refused even a temporary block, at least assuming the foundation doesn't have any legal entities in Russia, which it may well have.
So many sci-fi authors exploring interstellar civilizations toss in some sort of faster-than-light travel or wormholes to facilitate the narrative. What I liked about Vinge was he considered how things might play out if you actually stuck to the laws of physics as we know them today.
He imagined a nomadic society which moves from star system to star system mostly trading knowledge. While they travel at sub-light speeds, they broadcast a galactic Internet’s worth of data at the speed of light. The catch is that much of it is encrypted and only they have the keys, so they have tremendous power wherever they go.
This is not to say he never considered FTL, but when he did, he went deep into its implications. It was not just a means of hopping quickly around the galaxy. He realized that it would enable outrageously powerful AI, as the speed of thought would be increased by orders of magnitude.
I really liked how he envisioned space travel and the culture that came with it. A small but rich detail was how all the time measurements were given in kiloseconds and megaseconds to describe months and years, since a nomadic space tribe would have little use for calendars tied to orbits. It’s creative and thought out.
His books and short stories set in a nearer future, where society and our education system are vastly different because of AR, are a lot of fun as well.
For those who don’t know, EUC stands for end user computing.
Why is it so hard to set up VMs for employees? Maybe I’m missing something, but it seems like a matter of just creating a virtual machine with a GPU attached.
In our case we have over 1,500 employees using it, but only about 500 at a time. It’s an extreme waste of resources to provision 3x the hardware rather than use ephemeral systems. Also, it’s much easier to patch a “gold” image and recompose entire pools than to manage all of the systems as if they were full-on laptops. Just to name a couple of things off the top of my head.
Yup. That’s another reason we don’t have individual systems. And most thin clients aren’t designed to connect 1:1 to a VM. They usually need a broker of some sort.
Very significantly different performance requirements. The client communication needs tuning for fast UI response. Unified comms (Zoom, Teams, etc.) need to be redirected to avoid bottlenecking through the server. Usage patterns aren’t very well distributed (everyone logs in at 8), which means you can’t oversubscribe as much.
The latest Firefox 127 appeared on June 11 with a modest list of changes – automatically reloading the browser when the OS reboots, closing duplicate tabs, and requiring more authentication to access stored passwords.
A change Mozilla didn't mention in the release notes has users complaining online, though.
Users complained on Mozilla's forums and on Reddit at the time, but it was at least possible to recombine the icons with an option in about:config – no longer.
As you might imagine, people are not happy, although according to the official response in this complaint, it looks like the change will be reverted in Firefox 128:
According to this thread, users of Firefox on Apple iOS are finding that if you have both main and private Firefox instances open, when the main one is closed, all the tabs in the private instance are closed too.
Slip-ups like this suggest to us that, as has long been the case, the Firefox developers lack a good understanding of how its remaining followers use it, and why they stick with it.
theregister.com