We have the same for cfdump (<cfdump> :: Lucee Documentation): "enabled: dumps are enabled by default, pass false to short circuit a dump execution and effectively disable it".
The handy thing with this is that you can then do the following in Application.cfc:
this.tag.cfdump.enabled=false;
to disable all dumps (unless the attribute has been set explicitly on the tag).
Having "enabled" would be consistent with the existing implementation (dump).
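For example (a minimal sketch; the variable name is a placeholder), you could switch dumps off application-wide and still re-enable a specific one where needed:

// Application.cfc: disable all dumps by default
this.tag.cfdump.enabled = false;

<!--- in a template: this dump is suppressed by the application-wide default --->
<cfdump var="#someStruct#">

<!--- this one still runs, because "enabled" is set explicitly on the tag --->
<cfdump var="#someStruct#" enabled="true">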
You can also simply change cfcatch's "type" attribute. It accepts arbitrary strings, and cfcatch will only kick in if that "type" matches the exception's actual type.
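For example (a rough sketch; the service call and the type name are just illustrations):

<cftry>
    <cfset result = remoteService.call()>
    <!--- change the type below to a string that will never match (e.g. "disabled")
          and this catch is effectively switched off: the exception bubbles up instead --->
    <cfcatch type="ServiceUnavailable">
        <cfset result = fallbackValue>
    </cfcatch>
</cftry>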
you can also simply change cfcatch’s “type” attribute.
Yeah. That’s a better approach.
Or a <cfdump var="#cfcatch#" abort="true"> at the top of the catch block. Job done.
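i.e. something along these lines (doTheThing() is just a placeholder):

<cftry>
    <cfset doTheThing()>
    <cfcatch type="any">
        <!--- temporary, while debugging: dump the exception and stop the request --->
        <cfdump var="#cfcatch#" abort="true">
        <!--- the normal error handling would follow here --->
    </cfcatch>
</cftry>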
I think this feature is dangerous as it will encourage code to run differently in dev than in production (especially with the suggested this.tag.cftry.enabled=#runtimeValue#). No one ought to want that. It's fine with <cfdump> as it's just output; this is flow control.
I also suspect it's an enabler for people who don't write tests; tested code far less often requires commenting out flow-control constructs to check behaviour.
Seems like busywork to me, solving a problem that is a) already solved and b) can leave code unstable. Let's not enable that sort of thing in our dev community.
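To make the concern concrete, here's a hedged sketch, assuming the proposal means a disabled <cftry> skips the catch block and lets the exception bubble up (the exact semantics aren't pinned down in this thread); the payment-handling names are invented:

// Application.cfc, as suggested: e.g. true in production, false in dev
this.tag.cftry.enabled = runtimeValue;

<cftry>
    <cfset orderService.capturePayment(order)>
    <cfcatch type="any">
        <!--- compensating logic that would only ever run in production,
              so dev never exercises the path production depends on --->
        <cfset order.flagForManualReview()>
    </cfcatch>
</cftry>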
The use case for me: in dev/test you may have servers that go up and down, so services may or may not be available in that environment. In production you want an error to be pushed, but in dev it might not matter whether that service is alive. That way you can run code with a graceful fail.
So this could make that scenario better. (I've only thought about this for five minutes.)
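Roughly what I do today for that graceful fail, which this feature would simplify (the environment flag and service names are placeholders):

<cftry>
    <cfset socialService.post(message)>
    <cfcatch type="any">
        <cfif application.environment EQ "production">
            <!--- in production the error should be pushed --->
            <cfrethrow>
        <cfelse>
            <!--- in dev/test it may not matter that the service is down --->
            <cflog text="social service unavailable: #cfcatch.message#">
        </cfif>
    </cfcatch>
</cftry>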
Ironically, I could see try/catch as a perfect place to put the cftimer idea that Gert was talking about on the cfcamp cfalive…
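Something along these lines, just as a bare sketch (doRiskyWork() is a placeholder), with the existing <cftimer> wrapping a try/catch:

<cftimer label="risky section" type="debug">
    <cftry>
        <cfset doRiskyWork()>
        <cfcatch type="any">
            <cflog text="risky section failed: #cfcatch.message#">
        </cfcatch>
    </cftry>
</cftimer>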
@dawesi: The use case for me: in dev/test you may have servers that go up and down, so services may or may not be available in that environment. In production you want an error to be pushed, but in dev it might not matter whether that service is alive. That way you can run code with a graceful fail.
As you are undoubtedly aware, a common best practice in software development is the phased DTAP cycle (Development, Testing, Acceptance, Production). Following this best practice, whatever is in the production environment has already been tested, tried out and accepted in the preceding DTA phases.
The idea is to avoid surprises in production. In other words, to avoid the sort of scenarios you describe.
But accidents will happen, people say. If an unforeseen problem occurs in production, it will then become the new task for the next DTAP cycle.
Not sure what best practice has to do with my comment. You can do DTAP perfectly well with systems not operational in other stages. Complex environments demand this, especially when you aren’t in control of systems or servers, or they are 3rd party. DTAP is one of several ‘best practice’ methodologies for testing.
Do I not test my Facebook posting feature because the LinkedIn server is down during testing? Both use the same part of my app (the social posting feature). This is just one example.
There are many dev/test environments (e.g. government) where you have no control over whether other dev/test servers are running when you're testing. That doesn't invalidate testing other parts of the app that don't directly rely on that service for that page, but would normally load that service into the application.
That said, my comment doesn't change DTAP; it's purely a config option for monolithic systems that rely in part on secondary systems. There are many reasons to run different code for different environments (aka config files).
Either way, adding a perfectly valid test case IMHO.