I’m an equal opportunity OS user - I use Windows, MacOS and Linux daily. Ultimately the entire conversation usually comes down to “use what you know and are most comfortable with”.
That said… I’ve always enjoyed working with Linux… I’ve been using it since Slackware 3.1 and since then I’ve used Redhat, Mandriva, CentOS, Fedora, Debian, and Ubuntu at one point or another. I’ve used Windows, Ubuntu and MacOS as desktop OS’s. Today, my primary OS of choice for Desktop is MacOS and for Servers is a combination of CoreOS + Ubuntu or Alpine containers, and/or Ubuntu LTS, with some CentOS interspersed.
Desktop reasoning - using Ubuntu as a desktop was … fine… There are apps you won’t be able to get for Linux, and some things are harder to configure than on other OS’s. Windows and MacOS are more polished when it comes to system setup, from detecting your video card to GUI customization. Windows has always had broader app support, but now that Mac is on Intel they’re catching up, plus the widespread deployment of web-based applications means you can use the same stuff from all OS’s… The one thing I use that I couldn’t get a decent Mac version of was the full version of Quicken. But I like MacOS mainly because I can have my cake and eat it too - it’s a good, stable OS with polished features, built-in applications, support for Office apps and primary apps that work cross-platform, plus I can get to the terminal, which is based on BASH. That gets me a “linux-like” experience on the desktop, where I can leverage XQuartz to run remote Linux apps, and run command-line development tools and programs just like I could on Linux.
Windows has gotten better… Now that WSL is a thing, you can install a full Ubuntu environment on your Windows machine and do very similar things. This drives my point home: the differences between OS’s aren’t as important today as they once were.
So let’s talk longevity of skills - while you can run Windows Core/Nano containers, for the most part containers are all based on Linux. It’s only a small subset of Linux knowledge you need to work with containers… but if you believe containers are the future (I believe they’re here to stay), knowing some amount of Linux is a good thing. You can run those containers on Windows, MacOS, or Linux - under the hood, it’s either running natively (on Linux), running in a lightweight VM, or using some emulation technology to run linux-on-something-else.
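To show what I mean, here’s a minimal sketch, assuming Docker is already installed (Docker Desktop on Mac/Windows, the docker engine package on Linux) - the same commands work on every host OS:

```
# Pull and start an interactive Ubuntu container. On a Linux host this
# runs natively; on Mac/Windows it runs inside Docker's lightweight VM.
docker run --rm -it ubuntu:18.04 bash

# Inside the container you're in a Linux userland either way:
#   cat /etc/os-release
#   apt-get update && apt-get install -y curl
```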
For servers, I went with Redhat for many years, then Mandriva (because there were Intel optimizations), then Debian (for security and stability), and then Ubuntu (for widespread PPA support, plus Debian’s stability in some situations). If you were going to pick up a Linux distribution today, I’d use Ubuntu LTS. (Skip the 6mo releases, so you want 18.04 Bionic)
In my opinion, deploying a linux server today is going to end up on one of two OS’s - CentOS (or RHEL if you can afford it and want Ent. support), or Ubuntu LTS. They both offer long term support. CentOS has older software for longer - it’s a more Enterprise approach of “let’s run what’s working for as long as possible and not worry about new versions of things”. It’s very stable. And because it’s a frontrunner in the Linux arena, if you’re looking for vendor support for just about anything (i.e. drivers, dell tools, hardware support, etc) the vendor will likely have something that supports CentOS.
Ubuntu is backed by Canonical and has paid options as well, but most server ops would say it’s more of a Desktop OS… largely because Canonical has done work over the years to pitch Linux as a viable desktop option. But you don’t have to install a GUI at all, and under the covers Ubuntu starts from Debian… so it can be run properly, and minimally, and have as much or as little surface area as you want. You also have the ecosystem of Ubuntu wrapped around it, and things like PPA’s mean that if there’s something Canonical doesn’t supply, the community can, and you can add it to your distro with minimal effort. Most vendors will also support the Debian ecosystem (which includes Ubuntu).
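Adding a PPA looks something like this - just a sketch, where “ppa:some-team/some-app” and “some-app” are placeholders, not real names; substitute the PPA for whatever software you actually want:

```
sudo apt-get install -y software-properties-common    # provides add-apt-repository
sudo add-apt-repository ppa:some-team/some-app        # placeholder PPA
sudo apt-get update
sudo apt-get install -y some-app                      # placeholder package
```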
When you get containers in the mix it blurs the lines - many of the docker containers out there use CI and are built from vendor source directly. When you use a distribution like RHEL/CentOS/Ubuntu, they have a philosophy that brings order to file structure and the types of software included. (Debian goes out of their way to only include FREE software - not just 0 cost, but software unencumbered by patents and onerous licensing.) So every package gets some amount of massaging to fit into the distro’s method of configuration and tooling, and frequently the distribution will provide backported bugfixes.
This has happened to me in the past with things like Tomcat and ProFTPd - I don’t have specific versions, but it goes like this: the distro packages version A. Life moves on, the vendor releases versions B, C, D… which have new features but also bugfixes. A major security issue happens and the vendor releases version E, which may have new features but also has a major bugfix. The distro decides that instead of introducing more features (and risk), they’ll just take the security fix, backport it to version A, and release a version A.1 for their distro. CentOS does this a lot, as does the Debian/Ubuntu ecosystem.
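You can actually see this in the distro changelogs if you’re curious - the package names below (tomcat8, httpd) are just examples, use whatever you have installed:

```
# Ubuntu/Debian: the distro changelog shows CVE fixes backported into the
# same upstream version (note the extra distro revision tacked onto it)
apt-get changelog tomcat8 | less

# CentOS/RHEL: the RPM changelog tells the same story
rpm -q --changelog httpd | grep -i cve | head
```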
The “docker way” - using the community Tomcat container, for instance - is that you get the vendor’s latest and greatest all the time and skip the per-distro maintenance.
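In practice that looks like pulling the image straight from Docker Hub - the tag here is just an example, use whatever the project currently publishes:

```
docker pull tomcat:9-jre8           # example tag - check Docker Hub for current ones
docker run --rm -p 8080:8080 tomcat:9-jre8
# Upgrading is "pull a newer tag and redeploy" rather than waiting on a
# distro package update.
```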
As with most things, one isn’t better than the other, they’re just different. So as a container maintainer you get to make the choice - build on Ubuntu or CentOS and use the distro’s packaging, with the distro’s team doing the backports… or build on Alpine because you care most about drive space and small containers, or build on whatever base OS you want and automate downloading the source from the vendor and building from scratch inside the container.
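The drive-space trade-off is easy to see for yourself - a rough illustration, exact sizes vary by tag and over time:

```
docker pull alpine:3.8
docker pull ubuntu:18.04
docker pull centos:7
docker images | grep -E '^(alpine|ubuntu|centos)'
# Alpine is typically a few MB; Ubuntu and CentOS are tens to a couple
# hundred MB.
```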
Basically, it’s a bunch of design decisions you have to make as to what direction you go in, and none of them are “right”, they’re just different. And the world is converging - even Microsoft will admit containers are here to stay; that’s primarily why SQL Server can now run on Linux.
I’ve probably rambled long enough. So your ACTUAL question…
TL;DR:
When it comes to resource utilization, Linux does more with less. That has always been my experience.
When it comes to Lucee, your key performance tuning is going to come from Java and the JVM, not the OS.
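To make that concrete: on a typical Tomcat-based Lucee install, the JVM options live in bin/setenv.sh - the path and the values below are assumptions/examples rather than recommendations, but the flags themselves are standard JVM options:

```
# /opt/tomcat/bin/setenv.sh  (path varies by install)
CATALINA_OPTS="$CATALINA_OPTS -Xms1g -Xmx2g"              # heap sizing
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC"               # GC choice
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxMetaspaceSize=256m"  # metaspace cap
export CATALINA_OPTS
```

The same flags work whether that Tomcat is on Windows or Linux (on Windows they’d go in setenv.bat or the service config), which is the point - the tuning lives in the JVM, not the OS.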
When it comes to the webserver, Apache can run on both OS’s. (You don’t HAVE to use IIS on Windows.) You can also use Tomcat on both OS’s and forgo a frontend webserver. Things like nginx and lighttpd are, practically speaking, Linux-only. (And they’re all about doing more with less - built around Linux’s model specifically.)
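If you do want Apache in front on Ubuntu, a minimal sketch looks like this - the hostname, port, and paths are assumptions, adjust to your setup:

```
sudo apt-get install -y apache2
sudo a2enmod proxy proxy_http

# Minimal vhost that proxies everything to Tomcat/Lucee on port 8080
sudo tee /etc/apache2/sites-available/lucee.conf >/dev/null <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
EOF

sudo a2ensite lucee
sudo systemctl reload apache2
```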
All OS’s give you the opportunity to do things right, or do things wrong. You could run Tomcat as the SYSTEM account on Windows. That’s less secure than a service account. You could run Tomcat as “root” on Linux. That’s also less secure than a service account. I will say that distros out of the box will default to Tomcat running as a service account. Frequently Windows defaults are in the opposite direction.
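If you ever install Tomcat from a tarball yourself (rather than letting the distro package set this up for you), the service-account approach is just a few commands - a sketch, with the names and paths assumed:

```
# Create a locked-down system user with no login shell
sudo useradd -r -s /usr/sbin/nologin tomcat

# Give it ownership of the install and run Tomcat as that user
sudo chown -R tomcat:tomcat /opt/tomcat

# Confirm the JVM isn't running as root
ps -o user,pid,cmd -C java
```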
If you’re looking to pick up Linux to learn something, I’d recommend Ubuntu LTS for the reasons I listed above… Ubuntu provides almost anything you’d need, it’s easier to pick up, and there are PLENTY of resources online for how to do just about anything when you run into problems - resources targeted at people like you (people learning, people who are primarily Windows people, people trying Linux for the first time). And if you want to go advanced eventually, you can do that too.
You then also have the option of - do you deploy Ubuntu on bare metal, do you use a VM, or what? If you’re planning to run everything on your desktop, consider WSL.
You can install Ubuntu side by side with Windows, so that’s the easiest way to get your feet wet.
And you can also run Docker containers on Windows (or any OS), so that’s a good thing to learn as well.