Sysadmin

possiblylinux127 , in Anydesk 8 removes TCP Tunneling from the free service.
@possiblylinux127@lemmy.zip avatar

Rustdesk still supports it. Spin up your own server and you’ll be golden.

possiblylinux127 , in ICANN proposes creating .INTERNAL domain
@possiblylinux127@lemmy.zip avatar

Please no

It would be nice to figure out a way to get local SSL certs for .lan and .local domains though.

Supermariofan67 ,

What’s wrong with it?

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

.internal is eight letters, while .lan is three

duplexsystem ,

You can do this. I already use .internal, and you can make your own root CA and issue your own certificates with it
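For anyone who wants to try it, here's a minimal sketch using Python's cryptography package (the CA name, hostname, and lifetimes are placeholders, not anything from this thread): generate a root CA, sign a leaf certificate for a .internal host, and import ca.pem into your clients' trust stores.

    from datetime import datetime, timedelta, timezone

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    now = datetime.now(timezone.utc)

    # Root CA: self-signed, long-lived, marked CA=true
    ca_key = ec.generate_private_key(ec.SECP256R1())
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home Lab Root CA")])
    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name)
        .issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(ca_key, hashes.SHA256())
    )

    # Leaf certificate for an internal host (hostname is a placeholder)
    host = "nas.internal"
    leaf_key = ec.generate_private_key(ec.SECP256R1())
    leaf_cert = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, host)]))
        .issuer_name(ca_name)
        .public_key(leaf_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=825))
        .add_extension(x509.SubjectAlternativeName([x509.DNSName(host)]), critical=False)
        .sign(ca_key, hashes.SHA256())
    )

    # ca.pem is what gets imported into client trust stores
    with open("ca.pem", "wb") as f:
        f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
    with open(f"{host}.pem", "wb") as f:
        f.write(leaf_cert.public_bytes(serialization.Encoding.PEM))
    with open(f"{host}.key", "wb") as f:
        f.write(leaf_key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        ))

Point your web server at the leaf cert and key, then add ca.pem to the trust store of every device that needs to reach those .internal names.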

MigratingtoLemmy ,

Time for your own CA

5714 , in ICANN proposes creating .INTERNAL domain

Abolish ICANN.

Supermariofan67 ,

Explain what’s wrong with this. I’m out of the loop, seems like a good idea to me at first glance.

5714 ,

It’s the SPOF for most of the internet, it’s function should be democratic and distributed.

Registering TLDs costs absurd amounts of money last I checked.

owen ,

SPOF = single point of failure

theit8514 , in ICANN proposes creating .INTERNAL domain

If only they had done this with .local ages ago. Still, it's a nice change, but I doubt my company will adopt it.

breadsmasher ,
@breadsmasher@lemmy.world avatar

Just out of curiosity, does your company use a different TLD or something more arbitrary/just an IP?

mozz ,
@mozz@mbin.grits.dev avatar

We broke .local, pls give another chance, we promise we'll be responsible with .internal tho

MSgtRedFox ,
@MSgtRedFox@infosec.pub avatar

For real. Once Google and others started killing .local DNS lookups on mobile devices, think about how many legacy networks had to get rebuilt.

Maybe we could all just make up our minds.

mozz ,
@mozz@mbin.grits.dev avatar

Honestly, the whole fabric of the internet (how email/SMTP, DNS, and other things work) is just a relic of an earlier time. I think the money-men have their hands deep enough into the workings at this point that you wouldn't be able to create something like those things today and have them go anywhere. I'm surprised that it all still works as well as it does.

c0mbatbag3l ,
@c0mbatbag3l@lemmy.world avatar

You mean the OSI and TCP/IP models? Or just specifically TCP/UDP ports?

mozz ,
@mozz@mbin.grits.dev avatar

No, I was talking about the shared infrastructure. SMTP, DNS, ICANN, things like that require a level of cooperation and shared investment in the whole thing working well, not really because anyone's going to "win" the business game by running it to their particular advantage. That's a very alien way of thinking on the modern internet. The equivalent today would be something like massive publicly available caching web proxies that anyone could use as a big reverse-CDN to speed up their web access that were just kind of provided to everyone, government-funded, just sitting out there as a public resource. You know, like communism.

I've heard network engineers say they had a lot of trouble talking to their bosses about "peering" (setting up routes between two ISPs that happen to have operations close to each other, so they can hand traffic off to each other if it'd be more efficient to use the other guy's routes and both networks get more efficient to operate). They said they had a lot of trouble explaining the concept to the business people. They pay us for service? Fine. We pay them for service? Fine. We provide service to each other and both of us benefit without any money being involved? Plt... bzzt... I give up, I don't get it. Who gets paid? Why do we do this?

They've lost sight of the idea that it's a good thing to set up the world in a nice, well-working way (for everyone, including yourself), and have just focused on how they can make their check bigger, even if there's no point, or even if everything gets worse as a result.

Shadywack , in ICANN proposes creating .INTERNAL domain
@Shadywack@lemmy.world avatar

Porn sites would like this.

UID_Zero , in What do you use to track BMCs/KVMs/IPMI?
@UID_Zero@infosec.pub avatar

We use a separate subdomain. For example, all our hosts are joined to the ad.example.com domain, so remote management would be the same hostname on ilo.example.com.

We also have all HP hardware (at least for servers), so we have everything in OneView. Other devices (NetScaler SDX appliances, other stuff with management interfaces) just have their interface in that subdomain and it works out great.
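If it helps to picture the scheme, here's a rough sketch in Python (the zones match the example.com placeholders above; the host name is made up) that derives the management FQDN from a host's short name and checks that both records resolve:

    import socket

    AD_DOMAIN = "ad.example.com"    # zone the hosts themselves are joined to
    ILO_DOMAIN = "ilo.example.com"  # parallel zone for the management interfaces

    def management_fqdn(hostname: str) -> str:
        """Map a host's short name to its record in the management zone."""
        return f"{hostname}.{ILO_DOMAIN}"

    def check_host(hostname: str) -> None:
        """Confirm that both the host record and the matching iLO record resolve."""
        for fqdn in (f"{hostname}.{AD_DOMAIN}", management_fqdn(hostname)):
            try:
                print(f"{fqdn} -> {socket.gethostbyname(fqdn)}")
            except socket.gaierror:
                print(f"{fqdn} -> no DNS record")

    check_host("dbserver01")  # hypothetical host name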

MSgtRedFox ,
@MSgtRedFox@infosec.pub avatar

Did you ever use HP SIM? I guess it's not one-to-one on features, but newer. Curious if it's worth the time.

UID_Zero ,
@UID_Zero@infosec.pub avatar

I have not. I don’t handle our hardware much, so I’m not entirely sure what we’re using.

MSgtRedFox , in What crazy or unusual things are you guys working on?
@MSgtRedFox@infosec.pub avatar

Running a personal Active Directory hybrid sync with Azure, hybrid Exchange, a separate red forest for managing the vSphere infrastructure, and SaltStack for Linux config management. ~50 VMs and containers.

MNByChoice , in What do you use to track BMCs/KVMs/IPMI?

There are many inventory programs that will handle keeping system information updated.

I am posting to say that I tend to set the IPs to match with a known offset.
For example:
192.168.5.12 is the host.
192.168.105.12 is the BMC.

A wiki is great for documentation of use, links, and files.
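A tiny sketch of that offset convention in Python (the +100 on the third octet matches the example above; the function name is made up):

    import ipaddress

    def bmc_address(host_ip: str, offset: int = 100) -> str:
        """Derive the BMC/IPMI address by shifting the third octet of the host IP."""
        octets = host_ip.split(".")
        octets[2] = str(int(octets[2]) + offset)
        return str(ipaddress.IPv4Address(".".join(octets)))  # validates the result

    print(bmc_address("192.168.5.12"))  # -> 192.168.105.12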

d00phy , in What do you use to track BMCs/KVMs/IPMI?

This seems like a good use case for a cluster manager. I’ve used xCAT in the past and recently Lenovo has an interesting project called Confluent that includes a web interface. A paid option would be Bright. These are made to manage hundreds to thousands of nodes.

nightrunner , in What do you use to track BMCs/KVMs/IPMI?
@nightrunner@lemmy.world avatar

Look into getting a CMDB and keep track of all of your hardware. It can store the hostnames/IPs of your KVM/OS or virtualization layer, vkernels, storage adapter IPs, your vCenters, and so on. If your data is getting so big that spreadsheets are getting tough to manage, then you probably need a more enterprise method of storing it.

nightrunner ,
@nightrunner@lemmy.world avatar

Examples: ServiceNow, Connectwise, Jira Service Desk

haywire7 , in What do you use to track BMCs/KVMs/IPMI?
@haywire7@lemmy.world avatar

Thinking out loud, but wouldn't Chrome bookmarks for the URLs, backed up to a file/account, work better than a sheet if it's just for access?

As the machines we look after are mostly Windows-based, everything is in Pulseway or TeamViewer. Routers and misc devices tend to be on specific ports on their connection's IP, and we have a shared Keeper repository for passwords and notes.

The company I work for has been buying other companies and customers like it's silly season over the last year, so we are digesting all the extra crud that came with it and trying to streamline half a dozen CRM, RM, and monitoring systems at the moment.

Chefdano3 , in What do you use to track BMCs/KVMs/IPMI?
@Chefdano3@lemm.ee avatar

At my company, we have a standardized remote management suffix that we throw on the end of the hostname, so we don't actually track the URLs. For example, if the server is named frosty, the URL would be frostysuffix.

Then we track our servers with either an outdated Access database that nobody updates, my locally saved personal Excel sheet, or by logging into one of the 4 different health-checking applications that each monitor a piece of the infrastructure. (This part actually really sucks and I hate it.)

Arcayne , in What crazy or unusual things are you guys working on?

Idk if it counts as crazy or unusual, per se... but, another OpenStack deployment.

phanto , in What crazy or unusual things are you guys working on?

I was trying to get yacy working in a tiny container, but the dang thing kept crashing after indexing about 500,000 sites. Yacy is like a peer-to-peer web crawler. Too busy to dig into it and figure out why.

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

You did this for work?

phanto ,

Nope, just for fun.

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

That makes more sense. I think self-hosted search engines are an interesting idea, but they're usually hard to make work.

e_t_ Admin ,

I've tried getting yacy to work on two separate occasions. I've thrown generous resources at it but never had a satisfying experience.

phanto ,

So it’s not just me. I watched its memory use rock slowly up until it ran out, then it died.

randomaside , in Leaving VMware? Consider these 5 FOSS hypervisors • The Register
@randomaside@lemmy.dbzer0.com avatar

The weird thing to me about the majority of VMware environments I see is that they exist to prop up and extend Microsoft environments.

Microsoft is hostile towards this use case because having your own cloud competes with their cloud products.

VMware was a commodity product that existed because they knew how desperately IT professionals needed to keep these Windows systems running with some level of reliability, backed by advanced backup and replication strategies. And it was good.

After trying out proxmox I can say that:

  1. Windows VM performance is much faster on VMware. I think this boils down to the storage drivers. I could go into more detail, but not here.
  2. Containers and Linux VMs in Proxmox offer me more than I ever really hoped for.

But now I’m starting to think what the alternatives are really. VMware was a windows first virtualization platform. Other virtualization platforms in the open source ecosystem really put things like Linux first. Having to race to get to the point of hosting windows systems with constantly increasing licensing prices has really diminished the value to me of virtualization over all for windows.

I think we as a community need to move away from Windows on the server, embrace technologies like containers, Docker, Podman, and Kubernetes, and phase out reliance on Windows.

For starters, does anybody have a rock-solid setup guide for a Kubernetes Active Directory system?

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

Active directory doesn’t normally go with Kubernetes. What are you asking?

Arcayne ,

Yeeahh... I'm thinking (hoping) he means an alternative LDAP/IDP, like Keycloak or Authentik..? Wanting to reduce reliance on Windows = kicking AD to the curb, too.

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

There is Samba AD, but that will very much not run in Kubernetes.

randomaside ,
@randomaside@lemmy.dbzer0.com avatar

I’m fooling around with a few samba AD docker containers. I ask because I’ve phased almost everything else out of my lab environment.

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

The problem with Samba AD in a container (or Samba in a container at all) is that Samba isn't designed to be run in an ephemeral environment. You could run it in an LXC container, but anything beyond that will break things in the short or long term.

randomaside ,
@randomaside@lemmy.dbzer0.com avatar

I figured you could get around some of the storage limitations with something like persistent volume claims. I’m testing it out at the moment. I am a big fan of LXC.

I see a few people have created Docker Samba containers and I'm giving them a whirl. Can't say much for stability, but I think it's an interesting experiment.

I know that in the past the SMB server didn't work in LXC containers because certain kernel modules caused conflicts.

A man can dream.

possiblylinux127 OP ,
@possiblylinux127@lemmy.zip avatar

If you manage to create persistent containers, how are you going to update them down the road? Like I said previously, Samba isn't designed in a way that allows for effectively hot-swapping system components.

It seems like it would be better to create a VM template and then set up a failover cluster. Just make sure you have a time server somewhere on the network.

If you are dead set on containers, you could try LDAP in a container. I just don't think Active Directory was built for Linux containerization.

randomaside ,
@randomaside@lemmy.dbzer0.com avatar

There are a few applications out there that I don’t fully understand the deployment of but seem to work in containers.

Typically the storage is mounted outside of the container and passed through in the compose file for Docker. This allows your data to be persistent. Ideally you would also want it to reside in a filesystem that can easily be snapshotted, like ZFS. When you pull down a new Docker container, it should just remount the same location and begin to run.

Or at least that’s how I’d imagine it would run. I feel like one would run into the same challenges people have running databases persistently in containers.

randomaside ,
@randomaside@lemmy.dbzer0.com avatar

I’m also interested in these alternatives!
