Large cfc causing instability when uploading/updating

#1

Hi Guys,

We have a CFC file which over time has grown to almost 1MB in size. Whenever we upload this file to our servers it causes server instability. It seems that while it compiles, and we are receiving traffic, we trigger errors, and this leads to instability and having to restart servers. If we take 2 of our 3 servers out of our pool and then upload the file, we can, with a bit of a wrestle, get 2 servers back online before the 3rd eventually crashes and requires a restart. Even doing it this way, as soon as we start up the 2 servers and receive traffic, for a window of time (I guess while it is compiling) we get errors/instability.

I attempted to reduce this file by splitting it into 2 and extending one from the other, but it did not improve things. My assumption is that when the first file is instantiated the second is instantiated by extension and we have the same issue.

Just wanted to know if anybody else has issues with larger files causing these kinds of problems, and how we might approach reducing this file size, or whether there is a better way of deploying the file?

Thanks,

Ian.

#2

Our primary DBAccess module is 5.5MB.

We split it into 44 individual CFMs quite a while ago, and deal with it like a “mixin”

In our CFC we have this:


<cffunction name="_includelcl" output="false" access="private">
    <cfargument name="template" type="string" required="true" />
    <cfinclude template="#arguments.template#" />
</cffunction>

Then at the end of the CFC we have a bunch of these:


<cfset _includelcl("progress_monitoring.cfm") />
<cfset _includelcl("custom_user.cfm") />
<cfset _includelcl("getDistrictPeople.cfm") />

The trick is to fix the scoping of your functions after inclusion. In the init() method we call _copyPublicFunctions(), which copies any functions with access="public" from the Variables scope (which is where cfinclude puts them) into the THIS scope (where public callers can find them):


<cffunction name="_copyPublicFunctions" access="public" output="false" returntype="void">
    <cfset var idx = '' />
    <cfset var tst = '' />
    <cfset var thisMeta = '' />
    <cfloop item="idx" collection="#Variables#">
        <cfif CompareNoCase(idx, "THIS") NEQ 0 and not StructKeyExists(this, idx)>
            <cfset thisMeta = getMetaData(variables[idx]) />
            <cfif IsStruct(thisMeta) and (not StructKeyExists(thisMeta, "ACCESS") or CompareNoCase(thisMeta.ACCESS, "private") NEQ 0)>
                <cftrace abort="no" text="Adding #GetCurrentTemplatePath()# function #idx# to this" />
                <cfset this[idx] = Variables[idx] />
            </cfif>
        </cfif>
    </cfloop>
</cffunction>

That said, Lucee doesn’t like it. At all. We run on ACF, and this has worked for years, on CF8 up through CF2016.

It will largely depend on what your "large file" issue is. If it's a Java compilation error (i.e. "byte too short for jump" or similar), then there's just too much code in a single function, and that function needs refactoring. If it's the cost of compiling a single huge file, this technique could help, because I believe cfincluded templates are compiled individually before inclusion (I could be wrong). We largely did it for organizational reasons: to keep our sanity with a huge file, ease of editing, etc.

Other techniques we use:

  1. The entire CFC is cached in Application scope once, and used many times. The file on disk is ignored unless we pass a URL variable that our onRequestStart handler picks up, triggering a cflocked block that re-creates the component. (That way only ONE thread tries to compile the CFC.) For that matter, if you do it right you could do something like:

<cfif StructKeyExists(url, "reloadMe")>
    <cfset var newCfc = new some.big.component() />
    <cflock type="exclusive" name="big-component-refresh">
        <cfset Application.MyComponentName = newCfc />
    </cflock>
</cfif>

(Obviously you also need to create the component on request 1 - so maybe in onApplicationStart.) In actuality we let Coldbox/Wirebox do this.

In a concurrent situation, all request threads read the large object from memory, from Application scope, even as you change the file(s) on disk. Then you initiate a refresh with a URL hit like ?reloadMe=1. Your ONE thread now creates the NEW component in your request thread, and then atomically sets it in Application scope. (Realistically you could probably ditch the lock too, since it's just a variable assignment, but safe is good.) Or you could leverage Request scope, i.e.:


<cflock type="readonly" name="big-component-refresh" timeout="10">
    <cfif not StructKeyExists(Application, "MyComponentName")>
        <cfset url.reloadMe = 1 />
    <cfelse>
        <cfset Request.MyComponentName = Application.MyComponentName />
    </cfif>
</cflock>

<cfif StructKeyExists(url, "reloadMe")>
    <cfset var newCfc = new some.big.component() />
    <cfset Request.MyComponentName = newCfc />
    <cflock type="exclusive" name="big-component-refresh" timeout="20">
        <cfset Application.MyComponentName = newCfc />
    </cflock>
</cfif>

And now you just reference Request.MyComponentName and you're good to go. This is the type of technique I wish Wirebox did: right now when you fwreinit, it clears singletons and persistent scopes, then creates the new objects where they belong. I wish it instead created a unique Wirebox key in the persistent scopes, created all the new objects under that key, and atomically switched to the new key before dropping the old one. (Especially since structs are passed by reference, switching from one struct to another would be trivial.)
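A minimal sketch of that key-swap idea (the Application.singletons name is hypothetical, not actual Wirebox API; some.big.component stands in for your own component): because struct and component assignments are by reference, you can build the entire new object graph off to the side and then repoint a single variable:

```cfml
<!--- Hypothetical sketch of the key-swap reload, NOT real Wirebox behavior.
      Build the new object graph in a fresh struct, then repoint one
      reference. The final assignment is a single reference swap, so
      concurrent readers see either the complete old graph or the complete
      new one, never a half-built mixture. --->
<cfset newSingletons = structNew() />
<cfset newSingletons["MyComponentName"] = new some.big.component() />
<!--- one assignment switches all future readers to the new graph --->
<cfset Application.singletons = newSingletons />
<!--- the old graph becomes garbage once in-flight requests release it --->
```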

The other thing is that since we use rsync, all file transfers are also atomic. That is:

.Component.cfc.sjkasdj <-- in-progress filename used while the new file is uploaded

Component.cfc

Then when the upload is done, it atomically does mv -f .Component.cfc.sjkasdj Component.cfc

That's not something we had to do; that's just how rsync works.
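The same write-to-temp-then-rename pattern is easy to reproduce by hand if you're not using rsync. A minimal sketch (the .tmp123 suffix here is a stand-in for rsync's random suffix): a rename within one filesystem is atomic, so readers only ever see the complete old file or the complete new one, never a partial upload.

```shell
# Existing file already in place on the server
printf 'old contents\n' > Component.cfc
# Upload/write the replacement under a hidden temporary name
printf 'new contents\n' > .Component.cfc.tmp123
# Atomic swap: rename over the original in one step
mv -f .Component.cfc.tmp123 Component.cfc
cat Component.cfc
```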

Hope that helps!

#3

Have you tried deploying as an archive, aka a lex file? That way it's pre-compiled.