Development Environment

Hi All,

I’m hoping to get some honest feedback on the development environment I’ve set up for my Lucee app.

I’ll start by saying I was a tech and sales engineer for a small but successful datacenter in St. Louis for 10+ years, and they were a CentOS/VMware-heavy shop. I have quite a bit of experience in this realm, so even though there might be better solutions out there (Amazon?), this is what I’m comfortable maintaining and will most easily be able to support. I’m also working with the datacenter to support this app, so I have additional motivation to use CentOS and VMware.

That said, I’ve been a stay at home dad for the last 2 years and I kind of feel like I’m working in a vacuum these days, so I’d like to gather my thoughts and get some checks/balances on my approach.

Thanks! I’ll get right to it.

Hosting – I currently have a single server for dev running VMware at the datacenter (devHost01). I’m currently working on a plan to upgrade this server to more current architecture, but for now it runs OK with a dozen or so bare-bones VMs. The datacenter has a handful of spare servers I can grab at any time if this one fails, and I don’t consider it mission critical atm.

I also have 2 additional VMware servers set up for an initial launch. I call them prodhost01 and prodhost02. They are nicer IBM xSeries servers, each with a 10-disk RAID 10 array (15K SAS) and 4 hot spares. The current VMware build doesn’t support transparent failover between the two servers yet, but I have a plan with the datacenter crew to get that upgraded soon, likely before I’d actually put anything into production. For now I at least have 2 hot servers I can use for live, production stuff. I’d likely put everything on one and use the other strictly as a hot backup.

They’re all tied together with a mirrored pair of 10 Gbps HP switches, and the switches are tightly configured with VLANs (storage, VMware, management, public, private, etc.) and access rights.

I have both a production and a development block of public IPs. Production IPs are routed to the production servers’ public ports, and dev IPs are routed to the dev block. Any access to either network has to go through a mirrored pair of Juniper firewalls via VPN.

For storage, we’re running fiber to the switches so I can get a direct connection to the datacenter’s NetApps. They probably have 30 full-size NetApp racks loaded with SSDs that I can use for large-scale, elastic storage. All super redundant, etc.

So, regardless of what actual application stack is used, I at least have an environment that is secure, has elastic storage, has sandboxed dev and production servers, and offers the ability to create/destroy VMs as needed. If someone wanted to help, I could quickly turn over a VPN login and any additional logins needed to get started (including a Windows 10 box if needed).

I figure this should be enough to get something rolling and get an MVP rolled out that would be fairly reliable, knowing that once things actually get busy on the production side I’ll have the flexibility to scale out pretty quickly (add new servers, storage, CDN, etc.).

App Stack – For the actual app I want to build in this environment, I plan to use CentOS 7 primarily. I’ve built VMs for various services, such as SMTP, backups, database, image hosting, search catalogs, etc.

For instance, the SMTP server is a bare-bones ‘minimal’ CentOS 7 build with a secure install of Postfix, intended for outgoing application emails (e.g. password resets, initial registration, etc.).
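For what it’s worth, locking Postfix down to outbound-only application mail mostly comes down to a handful of main.cf settings. A rough sketch, where the interface and network values are placeholders rather than my actual config:

```
# /etc/postfix/main.cf (excerpt) -- hypothetical outbound-only relay
inet_interfaces = loopback-only          # or the app VLAN IP the web server uses
mynetworks = 127.0.0.0/8, 10.0.30.0/24   # only the app network may relay
mydestination =                          # accept no inbound/local delivery
smtpd_relay_restrictions = permit_mynetworks, reject
```
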

For image hosting, it’s a bare-bones ‘minimal’ CentOS 7 build with a secure install of Apache and an elastic NetApp SAN volume for images mounted as ‘/images’. In this case, the VM gets an additional virtual NIC for storage, which is linked to a dedicated physical NIC for storage traffic, attaches to the storage VLAN, and then takes a dedicated link out to the SAN.
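The mount itself is just a standard NFS entry on the image host. Roughly, with the SAN IP and export path as placeholders:

```
# /etc/fstab (excerpt) -- hypothetical NFS mount for the image volume
# <SAN export>              <mount point>  <type>  <options>            <dump> <pass>
10.0.40.10:/vol/images01    /images        nfs     defaults,_netdev,rw  0      0
```
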

For the database, it’s a bare-bones ‘minimal’ CentOS 7 build with a secure install of MariaDB. I’m most familiar with MariaDB, so that’s my DB of choice. It also seems to have all the features I need for this specific app, so I’m fairly locked in on this.

For the web server, it’s a bare-bones ‘minimal’ CentOS 7 build with a secure install of Apache Tomcat set up to run Lucee. All of the servers are set up so that the web server can reach them over a dedicated network (with a VLAN), e.g. talk to the mail server or upload images to the image server. Once we get a little further into development I’ll look at adding an additional web host layer (i.e. NGINX), but I don’t want to overcomplicate things just yet and create any more moving targets.

I personally use a Windows 10 VM running locally on the development server so I can access it from anywhere. I’ll connect via VPN, RDP into the Windows 10 box, and manage everything from there. I do, however, have everything set up so I can use a local physical machine (i.e. at my house) to access the entire network. This way, if the development server goes down and takes my Windows 10 dev box with it, I can still get to everything.

I don’t want to post any more detail (for the sake of not being too verbose). I’m really just hoping for people to say ‘yeah, that sounds just fine’, ask any questions if things are unclear, or offer any general advice.

Thanks!

Jason

Just rereading my post, I found one thing I should probably address: I don’t have a centralized login system yet (i.e. AD or LDAP). The environment is small enough that I didn’t want to overcomplicate things by adding another layer of authentication, but I could likely get that set up in a weekend if it comes to it.

Yep, that sounds just fine.

Lucee is very flexible and you can set up your environment to your liking. That actually sounds like overkill, but if it works for you then great!

Remote/shared dev servers are a thing of the past IMO. Just have your devs spin up local copies of the site with CommandBox and be done with it. Everything else sounded fine for staging or production.

I’ve thought about that, but I haven’t been able to think through how I’d reconcile some of the environmental dependencies, access to the DB, etc.

It sounds like you’ve put more effort into your current dev env than that would take :slight_smile:

Can you elaborate on your environmental dependencies? Accessing the DB would work the same as on any other server. You’d configure the datasources (ideally in your .cfconfig.json file) and put in the host/username/password etc. to connect. And if you wanted to externalize things like the passwords or hosts to make the config more generic or secure, just throw the commandbox-dotenv module in the mix, have each dev put the config in their own gitignored .env file, and then use env var replacements in your config to read them.
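As a rough sketch (the datasource, database name, and variable names here are made up, not from your setup): each dev’s gitignored .env would hold lines like DB_HOST=10.0.20.5, DB_USER=appuser, and DB_PASSWORD=secret, and the committed .cfconfig.json would reference them with ${} replacements:

```json
{
  "datasources": {
    "myDS": {
      "dbdriver": "MySQL",
      "host": "${DB_HOST}",
      "port": "3306",
      "database": "myapp",
      "username": "${DB_USER}",
      "password": "${DB_PASSWORD}"
    }
  }
}
```

With commandbox-dotenv installed, the replacements are resolved automatically when the server starts, so the committed config stays generic.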

I’m porting this from a previous OpenBD build, so I’m still kind of hacking at this.

Currently, datasources are defined in the Application.cfc file located in the webroot.

Does Lucee prefer the .cfconfig.json file? I don’t think I’ve even seen that before.

I guess for DB access, I’d still provide devs a VPN back into our network so they can use the DB?

I’ll try and post some more once I get back to my desk.

Currently, datasources are defined in the Application.cfc file located in the webroot.

That’s totally fine.

Does Lucee prefer the .cfconfig.json file? I don’t think I’ve even seen that before.

CFConfig is a CommandBox module that manages configuration for Lucee and Adobe servers for you. It can export, import, transfer and auto apply configuration to any CommandBox server when starting. When using it with CommandBox, you can place a .cfconfig.json file in your web root and it will automatically get sucked in when starting a server which makes for nice, portable dev setups that take zero additional configuration for your devs to start up.

I guess for DB access, I’d still provide devs a VPN back into our network so they can use the DB?

Yep, that’s how I do it.

Back to a generic CommandBox setup-- You can basically script out an entire server installation with 3 JSON files.

  • server.json - This takes the place of your JVM settings, ports, hosts, custom jars, web servers, SSL, CF engine/version configuration
  • box.json - This specifies all of the 3rd party dependencies you may need installed like ForgeBox packages, Git repos, jars, etc
  • .cfconfig.json - All your CF config like mappings, request timeouts, datasources, mail servers, etc
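For example, a minimal server.json might look like this (the engine version, port, and heap size are just placeholder values):

```json
{
  "app": {
    "cfengine": "lucee@5"
  },
  "web": {
    "http": {
      "port": 8080
    }
  },
  "jvm": {
    "heapSize": 1024
  }
}
```
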

So the steps to get your site up and running for a new dev are

git clone <repo-url>
box install
box server start

That’s it. That will take a brand new dev from an empty folder to a running server with the browser opened up to the home page in about a minute or two.

Read more at this slide deck:

All my projects work like this. Since I deploy to stage and prod on Docker Swarm powered by the Ortus CommandBox images, the same JSON files are used all the way from dev to production for a consistent configuration across the board.

How would mounted filesystems work in this scenario?

For instance, the /webroot/images folder is actually an NFS share that is configured on the webserver itself. Without the NFS link, it’d just write images to the local folder. So while a dev using CommandBox would update the database, it would be writing to a different images folder.

Is there a way to configure NFS shares via CommandBox?

Since the CommandBox instance would still be going across the VPN to access the DB, it’d have network access to the NFS shares. I’m just not sure how that would be configured.

That’s something I’d usually handle at an application level, i.e. a CF mapping called /shared-data/ or something that points to the NFS share on production but on dev just points to a local folder. Depending on what your app does with this folder, that may be fine. If your app depends on specific files pre-existing in this folder in order to work, then you can seed your local folder or try to map it back over the VPN. Generally speaking, life will be easier and more performant the more ties you can break with resources that need to be accessed across the VPN and can be re-created locally.
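As a sketch of that mapping approach (the env var, mapping name, and paths are all hypothetical, adjust to taste):

```cfml
// Application.cfc (excerpt) -- tier-specific mapping sketch
component {
    this.name = "myApp";

    // Decide the tier however you like; an env var is one simple option
    isProduction = ( server.system.environment.APP_ENV ?: "dev" ) == "production";

    // On prod this points at the NFS share; on dev it's just a local folder
    this.mappings[ "/shared-data" ] = isProduction
        ? "//images01/images"
        : expandPath( "./local-images" );
}
```

Then code like cfdirectory can build its paths off expandPath( "/shared-data/profiles/..." ) and stay identical across tiers.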

CommandBox doesn’t really know or care where your folders are coming from. All the same solutions that would apply to a local installation of Lucee still apply here and largely depend on your OS and/or filesystem. In my case, the remote shared file system is generally S3, and I simply connect my dev instance to a staging version of the bucket (via env-specific configuration and access keys) or just an empty local folder.

Right now, we create an NFS share on the image server (basically a CentOS NFS server), mount it on the webserver as /images, and use that filepath within the application. Then when I want to work with the file system, aka create a folder or something, it looks like this:

<cfdirectory
    action="create"
    directory="//images/profiles/#newUserID.id#"
>

It seems more efficient to make the NFS link at the OS level so that the application doesn’t have to authenticate on each request. However, is there a way to authenticate and link the NFS share from within the application?

This way, if using CommandBox, any reads/writes to the NFS volumes would still work.

I don’t know, but that’s not really a CommandBox question at this point; it’s just a general Lucee question. Using CommandBox certainly doesn’t preclude manually creating some shares on the file system first.

My concern there would be that if the dev is using a Windows box, it wouldn’t read the filesystem the same way and the application code would break.

Actually, if someone runs CommandBox on a Windows machine, does it inherit Windows filesystem conventions? I assume it does. I’m going to watch those CommandBox vids you posted when I get a chance.

That said, speaking in Lucee generalities, can cffile/cfdirectory type references to NFS shares be configured from within Lucee?

EDIT: I don’t use a Mac. Is it pretty easy to map an NFS share across a VPN with one? I know they’re using a *nix-ish filesystem. I think as long as it can map an NFS share to /images on the dev’s machine, it’d work.

My concern there would be that if the dev is using a Windows box, it wouldn’t read the filesystem the same way and the application code would break.

That usually means your app isn’t generic enough :slight_smile: I develop on Windows for deployments on Docker/*nix, but all file access is based on CF mappings and tier-specific settings if needed. Generally speaking, the main difference is that Linux is case sensitive and Windows/Mac are not.

Actually, if someone runs CommandBox on a Windows machine, does it inherit Windows filesystem conventions? I assume it does. I’m going to watch those CommandBox vids you posted when I get a chance.

CommandBox works the same way a traditional Lucee installation would in that regard. It’s just a Lucee server, but fully automated.

That said, speaking in Lucee generalities, can cffile/cfdirectory type references to NFS shares be configured from within Lucee?

I know there are some built-in resource providers, like S3, that you can access via extensions. I’m not aware of anything built at the Java level for NFS. Generally speaking, Lucee just works with whatever the local file system is.

I don’t use a Mac. Is it pretty easy to map an NFS share across a VPN with one?

Neither do I :slight_smile:

I was able to get my Windows 10 box to mount the NFS share across the VPN. I had to install the built-in NFS client and run the following command:

mount -o anon \\192.168.1.20\images01 X:

So now I can read/write to the NFS volume, which I assume means a CommandBox app run locally on my box would be able to read/write to the NFS volume like so:

directory="x:/images/profiles/#userID#"

Whereas we normally use

directory="//images01/images/profiles/#userID#"

Is there a way I can create a CommandBox-specific mapping that would map ‘//images01’ to ‘X:’?

That way we could leave the native Unix directory syntax in place and just write a mapping to convert it to Windows syntax.

Basically, I need CommandBox to see //images01 and map it to X:.

Neither CommandBox nor Lucee nor Java in general is capable of changing how the JVM sees the underlying file system. Hard-coding file paths that are specific to a given OS or server setup is more of an application code issue. Like I mentioned in one of my messages above, I recommend using a CF mapping that’s specific to the environment and maps /images to whatever you need it to point to on the file system. Another approach would be to have a setting in your app that contains the base path. If this were a ColdBox app, for example, I’d have a ColdBox setting called imagepath or something and override it on dev and stage as necessary to match the actual file system path. You can use the setting as much as you like in your code, but your code would be generic at that point.

Can I just make a CF mapping in Lucee, and then include an override in the CommandBox build to modify the mapping to something appropriate to the host OS?

Sure, that’s possible. Mappings can be defined in your Application.cfc, or your .cfconfig.json file can define the mapping you want created (it will get imported into the server context on start).

Thanks.

IIRC, .cfconfig.json is for CommandBox, right? So I could have default settings in Application.cfc, but anything declared in .cfconfig.json would override?

Jason

Almost.

.cfconfig.json is for the CFConfig module in CommandBox, which writes config directly into the server home for Lucee or Adobe servers (CommandBox or otherwise). So it’s basically a way of automating the steps you normally take when you log into the CF admin in your browser, set a setting, hit save, etc. Except it’s:

cfconfig set requestTimeout=0,0,0,50

Or

cfconfig import fileOfSettings.json

Or

# CFConfig JSON automatically picked up by conventions with no manual steps at all
server start 

So, given that those settings go into the normal server admin, your Application.cfc would actually override them. In Lucee, the resolution order is application context, then web context, then server context.

For further CommandBox or CFConfig questions that aren’t strictly about Lucee, I would invite you to the CFML Slack team’s #box-products channel or the Box Slack team’s #commandbox channel.