Just wondering the best place to find info about Lucee 7.x
Saw snapshots on the update channel…
- what is the eta on 7 being released?
- what are the new features
- is there a blog post / webcast about it?
Gert Franz from the Lucee Foundation is going to give us a peek into Lucee’s future, at the next Mid-Michigan CFUG meeting on Tuesday, January 14th at 7:00 pm.
Gert will show us what is coming in Lucee versions 6.1, 6.2 and 7.0. These upcoming versions will have several enhancements aimed at improving functionality and performance, and at integrating AI capabilities. In this talk he will scratch the surface of what will come in the next 12 months and what you can expect from the Lucee team in the future.
After the talk, Gert promises there will be time for lots of Q&A and he looks forward to tackling your toughest questions — so, bring them on!
In response to the coronavirus, we are going virtual. We will have a meeting URL next week.
I hope to see you next week.
Rick Mason
www.mmcfug.org
Geez, we’re not even on 6.1 everywhere yet and 7.x is already on the horizon?!
At least it seems to be “integrating AI capabilities”, so it can safely be ignored until the bubble bursts?
I missed that this was Eastern time. Did anyone get the chance to see this? Any highlights to share?
Also, any chance that this was recorded and will be released here or on YouTube?
Great!
Thanks @carehart
By the way, I’m no expert when it comes to forum etiquette but is it frowned on, for example, for me to say “Thanks Charlie!” or should you just always address someone by their handle?
I’m no expert on that either, but I would say that for a simple thanks like that, just using their name should be sufficient: it communicates who your response is meant for, to anyone reading the thread (or getting notifications about it).
As for use of the handle, I’d say that’s better suited to when you’re naming someone NOT in the thread already, or perhaps when it’s been a while since their last reply in the thread (and you want to be sure to get their attention), or certainly if there’s more than one person in the thread with the same first name.
I’m sure I’m forgetting another great use case, or may myself be committing some faux pas. These forums can indeed be slippery slopes sometimes, but most folks are forgiving and tolerant, understanding that there are few real “rules” of the road–if I can mix my metaphors.
Some thoughts:
Overall, some great stuff.
I like the interface stuff. There are also ways to inspect code outside of compile time with VS Code for hints; a few extensions are doing this now to inspect your custom interfaces.
I wish config variables were objectised (pet peeve), e.g. the this.monitoring.debugging vars could be their own object rather than a ‘debugging’ prefix on individual settings. It seems more logical; not a big deal, just more normalised, and you could then set one part of the config as a block (rough sketch below).
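To illustrate the shape I mean (purely hypothetical; these setting names aren’t real Lucee config, only the “flat prefix vs nested block” contrast matters):

```cfml
// Application.cfc -- illustrative only, setting names are made up

// roughly how prefixed settings read today:
this.monitoring.debuggingEnabled  = true;
this.monitoring.debuggingDatabase = true;
this.monitoring.debuggingTemplate = true;

// what I mean by "objectised": one struct you can set (or reuse) as a block
this.monitoring.debugging = {
	enabled  : true,
	database : true,
	template : true
};
```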
AI will be an interesting future, especially if the right LLM models are used. The question about bad code being used in the model is a good one too, as there is a tonne of bad CFML answers on Stack Overflow, and many relate to ACF, so we’d need to somehow get a clean learning model and make it easy to extend that model with local code repos so you don’t get hallucinations and bad examples.
It might be cool to look at scribejava/scribejava for the OAuth2 implementation. It’s a very mature project (I’ve been using it for years), has built-in connectors for all the main services out of the box, and is on Maven ready to use (quick sketch below).
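A rough sketch of why it’s so little code, calling ScribeJava from CFML. This assumes the scribejava-core and scribejava-apis jars are on Lucee’s Java classpath, and the class/method names are as I recall them from ScribeJava’s docs, so treat it as an approximation:

```cfml
// Build an OAuth2 service for GitHub via ScribeJava (sketch only)
service = createObject( "java", "com.github.scribejava.core.builder.ServiceBuilder" )
	.init( "myClientId" )                  // OAuth2 client id
	.apiSecret( "myClientSecret" )         // OAuth2 client secret
	.callback( "https://example.com/oauth/callback" )
	.build( createObject( "java", "com.github.scribejava.apis.GitHubApi" ).instance() );

// step 1: send the user here to authorise
authUrl = service.getAuthorizationUrl();

// step 2: swap the returned code for an access token
token = service.getAccessToken( url.code );
writeOutput( token.getAccessToken() );
```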
It would be great to get a remote admin going, so you could have one admin for multiple instances/servers, and running up a new container would auto-connect to a config endpoint to hook into monitoring/admin for config on the local machine. Perhaps a REST API for the remote admin that you can limit by IP, the same as the error template IP filters.
catch/throw: wondering if we could just have a message on the catch itself, e.g. catch(e, type, message), instead of throwing a second exception? (Sketch below.)
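To make the ask concrete (doImport() is just a placeholder, and the second block is hypothetical syntax, not anything Lucee has announced):

```cfml
// today: to attach a friendlier type/message you end up throwing a second exception
try {
	doImport();
} catch ( any e ) {
	throw( type = "MyApp.ImportError", message = "Import failed", detail = e.message );
}

// what I'm wondering about (hypothetical): declare it on the catch itself
// try {
// 	doImport();
// } catch ( e, "MyApp.ImportError", "Import failed" ) { ... }
```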
The AI support seems to be a non-core extension that’s already bundled in?
With single mode only: hmm, we need to rethink how we’re doing multi-tenant on bare metal (for data cleaning/augmentation, millions vs millions of rows in the db plus Lucee processing)… we can’t just run up another bare metal server. I guess most are using containers, but we need raw horsepower to grind out jobs.
I like the removal of timeouts; for big jobs this will be great.