On the Lucee lang spreadsheet there was a question along the lines of "remove the query data type?", and everyone voted no. Instead of asking whether to remove it, the question should have been:
Can we retrieve other, more efficient data types after executing a query, without always being forced to return a massive "query" data type?
Can we get an OO database API in pure CFML that works like this, please?
d = new getDatabaseConnection({datasource: "myDatasource", fetchLimit: 1, closeOnError: true, closeOnRequestEnd: true});
// allow two ways to build a query
// method 1
d.setSQL("select * from user limit ?, ?");
d.setParams([0, 1000000]);
resultObject = d.executeQuery();
while(resultObject.hasMoreRows()){
	struct = resultObject.getNextRow(); // defaults to the struct data type
	echo(struct.id);
}
// change params and reuse the same connection and PreparedStatement Java objects for the next query
d.setParams([1000000, 1000000]);
resultObject = d.executeQuery();
// continue to support for..in looping to retrieve structs one row at a time
for(struct in resultObject){
	echo(struct.id);
}
// method 2
resultObject = d.executeQuery("select * from user limit ?, ?", [0, 1000000]);
// allow retrieving each row as a simple array instead of a struct, for an even more efficient data type
while(resultObject.hasMoreRows()){
	arr1 = resultObject.getNextRow('array');
	echo(arr1[1]);
}
// allow us to close the connection manually
d.close();
// allow us to check the connection status
if(d.isClosed()){
	echo('db was closed');
}
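The pattern the proposal describes (stream rows one at a time from a cursor, reuse the same connection for the next query) is what most database APIs outside CFML already provide. A minimal sketch using Python's sqlite3 module, chosen only because it is self-contained; none of these names are part of the Lucee proposal:

```python
import sqlite3

# set up a throwaway in-memory database standing in for the datasource
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO user (name) VALUES (?)", [("a",), ("b",), ("c",)])

# a cursor streams rows one at a time instead of materializing one big result object
cur = conn.execute("SELECT id, name FROM user LIMIT ?, ?", (0, 1000000))
first_ids = [row[0] for row in cur]   # rows arrive as lightweight tuples

# the same connection is reused for the next query with new params
cur = conn.execute("SELECT id, name FROM user LIMIT ?, ?", (1, 1000000))
second_ids = [row[0] for row in cur]  # offset 1, so the first row is skipped

conn.close()
```

The key point is that at no time does the driver build a single monolithic result structure; each row is a small, short-lived object that the garbage collector reclaims as the loop advances.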
Notice that I allow auto-close of the connection as an option on the connection itself, to simplify the code; otherwise you'd need try/catch boilerplate on every query written this way, or any bug or error could exhaust the pool of connections. Auto-close management should be on by default, and only disabled when the developer wants to manage the connection in a more complex way.
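The boilerplate argument can be illustrated outside CFML. A sketch in Python's sqlite3 (an analogy, not the proposed API): without auto-close, every call site needs try/finally; a scoped auto-close wrapper removes it, and a closed connection can be detected afterwards:

```python
import sqlite3
from contextlib import closing

# without auto-close: every call site needs try/finally to guarantee
# the connection is released even when the query throws
conn = sqlite3.connect(":memory:")
try:
    conn.execute("SELECT 1")
finally:
    conn.close()

# with an auto-close wrapper the boilerplate disappears; the connection
# is closed when the block exits, on success or error
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("SELECT 1")

# Python's DB-API has no isClosed(); using a closed connection raises,
# which is the closest equivalent to the d.isClosed() check above
try:
    conn.execute("SELECT 1")
    is_closed = False
except sqlite3.ProgrammingError:
    is_closed = True
```

This is exactly the failure mode the auto-close option is meant to prevent: one forgotten finally block under load, and the pool is gone.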
cfquery and the huge query data type it creates waste a lot of memory and CPU time, and cause more garbage collection after each query than there would be with a smaller fetchSize. To work around this, we have to write extra code whenever we need to split large results into many separate queries using an offset/per-page limit. For example, I often pull 30 records at a time instead of all of them, and that seems like an acceptable balance, but I'd rather just have fine-grained control over the returned data types, as you do in pretty much every other language.
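The offset/per-page workaround looks roughly like this; again a hedged sketch in Python's sqlite3 rather than CFML, with PER_PAGE set to the 30-record page size mentioned above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO user (id) VALUES (?)", [(i,) for i in range(1, 101)])

PER_PAGE = 30   # pull 30 records at a time instead of the whole table
pages = []
offset = 0
while True:
    rows = conn.execute(
        "SELECT id FROM user ORDER BY id LIMIT ? OFFSET ?", (PER_PAGE, offset)
    ).fetchall()
    if not rows:
        break
    pages.append([r[0] for r in rows])  # each page is small and short-lived
    offset += PER_PAGE
conn.close()
```

Note this is extra application code, a new query round-trip per page, and (on most databases) increasing cost as the offset grows; driver-level control of the fetch size gives the same bounded memory without any of it.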
lazy="true" wasn't a good enough solution, since the heap will still run out of memory if too much data is accessed, and no reuse of a connection is allowed.
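What a fetchSize-style hint buys instead is a hard, driver-level cap on how many rows are in memory at once, regardless of the total result size. A sketch of that batching behavior using sqlite3's fetchmany/arraysize (an analogy for JDBC's setFetchSize, not the Lucee feature itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO user (id) VALUES (?)", [(i,) for i in range(1, 1001)])

cur = conn.execute("SELECT id FROM user")
cur.arraysize = 50           # analogous to a fetchSize hint
total = 0
max_batch = 0
while True:
    batch = cur.fetchmany()  # at most arraysize rows held at a time
    if not batch:
        break
    total += len(batch)
    max_batch = max(max_batch, len(batch))
conn.close()
```

The heap usage stays flat no matter how many rows the query matches, which is the guarantee lazy="true" could not make.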