For ease of deployment and to minimise confusion, could you create an Ollama class or rename the OpenAI class to something more generic?
I can see a lot of people using Ollama rather than OpenAI, as data processing agreements and other relationship agreements now prohibit the use of AI near govt or large corporate datasets, or leased datasets.
It would also be great to have some docs about language models to show newbies the best ones to use for CFML or JavaScript code, like qwen2.5-coder, codellama, starcoder2 and codegemma, plus sqlcoder for SQL queries.
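For what it’s worth, here’s a rough, untested sketch of why a generic name makes sense: Ollama exposes an OpenAI-compatible chat completions endpoint on its default port, so the only provider-specific bits should really be the base URL and the API key (the model name and prompt below are just examples):

```
// Rough, untested sketch — assumes Ollama is running locally on its default
// port (11434) and uses its OpenAI-compatible /v1/chat/completions endpoint.
cfhttp( url="http://localhost:11434/v1/chat/completions", method="post", result="res" ) {
	cfhttpparam( type="header", name="Content-Type", value="application/json" );
	cfhttpparam( type="body", value=serializeJSON( {
		"model"    : "qwen2.5-coder",
		"messages" : [ { "role" : "user", "content" : "Write a CFML function that reverses a string" } ]
	} ) );
}
// Response shape follows the OpenAI chat completions format (CFML arrays are 1-based)
writeOutput( deserializeJSON( res.fileContent ).choices[ 1 ].message.content );
```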
Boo… This is gonna take a while for us to dig through and understand the implications. If different clients/applications are less well separated, we need to figure out how that works for us.
Like, you can still have per-Application.cfc settings, right? And override server-wide defaults that way?
I think we already have cgi.host_name in all our application names, so my understanding is it should be largely seamless?
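Roughly what I mean, as a cut-down sketch (the “myapp_” prefix is made up, but baking cgi.host_name into the application name is what we do today):

```
// Application.cfc — simplified sketch
component {
	// host name baked into the application name keeps each
	// client's application scope separate per vhost
	this.name = "myapp_" & cgi.host_name;
}
```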
As far as splitting the logs goes, if you output all the logs to the console (one of the new options in 6.2), your Docker/K8s container should automatically slap the hostname or container name on it for you. It should be a lot less work to ship them to CloudWatch or any other logging destination.
Yeah, it’s easier in a way, but we’ve kinda got used to having a file per host, so we can aggregate into CloudWatch by just taking the path to the file and using it as part of the log group name.
So ../.../applicationname/clientname/application.log, with log streams per instance (we have a cluster of servers).
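For context, the agent config maps each file to its own log group, roughly like this (unified CloudWatch agent JSON; the paths and group names here are simplified placeholders, not our real ones):

```
{
	"logs": {
		"logs_collected": {
			"files": {
				"collect_list": [
					{
						"file_path": "/path/to/logs/applicationname/clientname/application.log",
						"log_group_name": "applicationname/clientname/application.log",
						"log_stream_name": "{instance_id}"
					}
				]
			}
		}
	}
}
```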
Now we’ll just have ../.../applicationname/application.log, streams per instance, with all the “clientname”s (aka application scope name, aka cgi.host_name) mixed in together.
There’s probably a way to configure the CloudWatch Logs Agent to restore the old path-based grouping. We’ll cross that bridge when we come to it!