Hmm, interesting. I only started to have issues after the Firefox 110 update. I guess I will need to use another browser just for vSphere until this is somehow resolved.
I don’t mean to be rude, but why on earth would someone think data in the cloud cannot be deleted? It’s not even something I’ve ever seen advertised anywhere, so I don’t get where it’s coming from. Especially when the person works in IT…
docker inspect $container should return most of the info for the container. You can also get a shell inside the container via docker exec -it $container sh. If you have a Dockerfile for the container, you can see how it has been set up.
Additionally, the shell history can yield useful information on what has been done. Docker saves the logs of running containers in /var/lib/docker/containers.
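To make those concrete, here are the commands in one place; the container name "web" is just a placeholder for whatever your container is called:

```shell
# Dump the container's full config (image, mounts, env vars, networking) as JSON
docker inspect web

# Print only the image the container was created from
docker inspect --format '{{.Config.Image}}' web

# Open an interactive shell inside the running container
docker exec -it web sh

# Tail what the container has written to stdout/stderr
docker logs --tail 50 web
```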
thanks, super-useful. I think I will bring up a couple of docker containers at home and check where and what they log, then try and extrapolate from that. I’ve managed to get into a couple of them with the -it command.
You usually want to prioritize changing a container’s build config instead of getting into containers and modifying them by hand. Much better to get into the mindset that they’re cattle, not pets you have to nurture.
Separating the different services into containers is overall a good practice, but having the DB in one can be a pain, as it’s easier to work with stateless applications. The isolation is very valuable because it makes problems easier to debug. I would look into container orchestration if there isn’t any already, and make sure logs are centralized first.
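As a sketch of the "centralize logs first" idea: Docker’s logging driver can ship container output to a remote collector instead of the default local JSON files. The syslog address below is a made-up placeholder:

```shell
# Run a container whose stdout/stderr is shipped to a remote syslog collector
# (udp://loghost.example:514 is a placeholder address, not a real endpoint)
docker run -d \
  --name web \
  --log-driver syslog \
  --log-opt syslog-address=udp://loghost.example:514 \
  nginx
```

Note that with a non-default log driver, plain docker logs may no longer work for that container, which is part of the trade-off.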
okay, that makes a lot of sense. I can’t see any immediate orchestration, but maybe I’m looking in the wrong place. would the logs go to /var/log on the main system? just realized I haven’t looked for those (d’oh!)
Terny has the correct answer here OP. While I have never used Docker in an enterprise environment (manufacturing applications aren’t known for supporting any technology from the last decade at least), I have used Docker extensively in my home lab. You don’t want to modify the container itself, but the image it was created from. The data doesn’t reside in the container itself anyway, but typically a volume attached to the container (assuming it stores anything in the first place). Your best bet will be to figure out what image the container was created from, and modify the image. From there, you can update the existing containers to use the new image, or move them elsewhere if you like.
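A rough sketch of that workflow, with all image, container, and volume names as placeholders:

```shell
# 1. Find the image the existing container was created from
docker inspect --format '{{.Config.Image}}' old-app

# 2. Check whether its data lives in a volume or bind mount
docker inspect --format '{{json .Mounts}}' old-app

# 3. Rebuild the image from its (modified) Dockerfile, under a new tag
docker build -t myorg/app:v2 .

# 4. Recreate the container from the new image, reattaching the same volume
docker stop old-app && docker rm old-app
docker run -d --name old-app -v appdata:/var/lib/app myorg/app:v2
```

Because the data sits in the volume rather than the container, step 4 gives you a fresh container on the new image without losing state.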
You mentioned these VMs are in the cloud. Depending on the hyperscaler, it is likely that you could migrate these to a native container service to save on cost, since you wouldn’t have to pay for the overhead of a VM.
If you don’t understand the system, how do you know switching to containers will be an improvement, or even work at all? Are there already published container images for this, or are you also going to learn how to build a container for a custom app?
so the containers already exist. I want to understand how they were built (I suspect Ansible is involved, but I don’t know how that would work) so I can understand how they interact with one another and then modify them.
If you want to know how they’re built, look for a Dockerfile in the code base; that’s usually the file that creates a container image when the docker build … command is used. Perhaps you’d also see something about CI and find a build server somewhere, too.
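For context, a Dockerfile is just a short recipe that docker build turns into an image. A minimal, entirely made-up example (the app, its files, and requirements.txt are placeholders):

```dockerfile
# Base image the container builds on
FROM python:3.12-slim
# Copy the app in and install its dependencies
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
# Command the container runs at start
CMD ["python", "app.py"]
```

Building it would look like docker build -t myapp:latest . — and in a CI setup you’d usually find that command in the pipeline config rather than run by hand, which is a good place to start reading.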
If you have Proxmox at home, play with Docker in a VM; there are a great many Docker images you can throw up and play with to help you understand. Once you get that down, play with building Docker images to wrap your head around that. When you need to make changes, it’s best to copy the image that’s being used in your work infra and throw it up on another test VM, so you can make sure you don’t break anything before pulling the change into the live environment.
As for how the Docker infra is set up, your explanation is pretty vague as far as what the images are doing, so nobody will really be able to tell you without that information, but my bet would be resources and/or segmentation.
You can use the ‘move’ function in Hyper-V to perform a live migration which incurs no downtime. Select the VM, click Move, select ‘Move the virtual machine’ (not ‘Move the virtual machine’s storage’, that only moves the storage, not the whole VM), and then finish out the wizard. IIRC that is sensitive to host architecture being sufficiently similar (changing processor generations can make it sufficiently different), so it may not work for you.
If you can’t do that and since it is a single-digit number of VMs, you could turn them off, copy the .vhd files to the new location, set the VMs up on the new server, and turn them on. That is, of course, going to incur some downtime, so it isn’t optimal.
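In PowerShell terms, the two approaches look roughly like the following; the VM name, host name, and paths are all placeholders:

```powershell
# Live migration: move the VM and its storage to the new host (no downtime)
Move-VM -Name "AppVM" -DestinationHost "NewHost" `
    -IncludeStorage -DestinationStoragePath "D:\Hyper-V\AppVM"

# Offline fallback: stop, export, and re-import (incurs downtime)
Stop-VM -Name "AppVM"
Export-VM -Name "AppVM" -Path "\\NewHost\VMStaging"
# ...then on the new host, point Import-VM at the exported .vmcx file:
Import-VM -Path "\\NewHost\VMStaging\AppVM\Virtual Machines\<guid>.vmcx"
```

Export-VM/Import-VM has the advantage over raw .vhd copies that it carries the VM configuration along, so you don’t have to recreate the VM settings by hand on the new server.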
Done that before. First, make sure you aren’t running on any snapshots; if your VMs have any, delete them so they merge back into the base disk before you move anything.
If the move function works, then use that, but I suspect it won’t. You can simply use the “move the VM’s storage” option to ensure the VHD is properly consolidated, then put it on the new host and start the VM there (shut it off before moving, of course). If your new host is running a newer Hyper-V version, you might need to bump the VM configuration version for each VM in PowerShell; it sounds tedious, but it’s usually fast.
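Assuming the Hyper-V PowerShell module is available, the configuration-version bump looks roughly like this (VM name is a placeholder, and the VM must be powered off):

```powershell
# Check the current configuration version of each VM
Get-VM | Select-Object Name, Version

# Upgrade one imported VM to the new host's configuration version
Update-VMVersion -Name "AppVM"

# Or upgrade everything on the host at once
Get-VM | Update-VMVersion
```

Be aware the upgrade is one-way: once bumped, the VM can’t be moved back to the older Hyper-V host.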
Thanks for the advice! I am pretty sure the move function won’t work since the OS and hardware are 10 years apart but I will try it anyway. If not I will try the storage move.
Edit: I also found, at least in my case, Win11 had two Firefox processes running in efficiency mode which would kill my browsing at times. Verify that too if you can.
There are some known issues with timezone handling resulting in posts appearing to be from the future. I think it usually happens when you post from kbin into lemmy.
Hell yeah! Glad to see this rebuilt in the fediverse. I’m also a Reddit refugee, and while I’ll miss the cranky sysadmin rants… I’m not going back to Reddit.