To be clear, ACF probably hasn’t had the JSON serialization issues you’re thinking of in many years. Most of those were fixed in 2016, 2018, and 2021. Only people waaay back on ACF10 would need to worry about that
This is really your “quick fix”
This is the “real” answer IMO. And honestly, I think you’re overstating the work. It’s not too hard to change
var myStr = {
foo : 'bar',
baz : 'bum'
}
to
var myStr = {
'foo' : 'bar',
'baz' : 'bum'
}
or
myStr.brad = 'wood';
to
myStr[ 'brad' ] = 'wood';
Sure, it will be a little annoying, but you and your co-workers can probably knock it all out in an afternoon.
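For reference, here is a minimal sketch of why the quoting matters (this assumes Lucee's default, CFML-compatible handling of unquoted / dot-notation keys, which upper-cases them; a "preserve case" language setting changes this behavior):

```cfml
// Unquoted struct-literal keys are typically stored upper-cased
// under Lucee's default settings, so they serialize as "FOO":
unquoted = { foo: 'bar' };
writeOutput( serializeJSON( unquoted ) );   // key comes out as "FOO"

// Quoted keys (and bracket notation) preserve the exact case as written:
quoted = { 'foo': 'bar' };
quoted[ 'baz' ] = 'bum';
writeOutput( serializeJSON( quoted ) );     // keys come out as "foo" and "baz"
```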
Don’t ever believe those statements. As a CF contractor, I can’t count how many companies I’ve worked with that were “6 months” from retiring their CF code, and here it is, years and years later, grinding on in production.
What @bdw429s said is of course the best choice. I don’t know your code or how many files you have or would need to change.
Also, I’d not change the Lucee core. As soon as an urgent update comes out, you’d need to at least rebuild it and redeploy it with your changed core code.
My alternative would be (as you already said) to create a helper component with a function that converts the key names recursively. I didn’t test this extensively, but here is something that worked quickly, and I’d try it as a practical approach. However, it would create more overhead, so you should really consider what @bdw429s said. Here is the other option:
MyJsonConverter.cfc
component displayname="MyJsonConverter.cfc" output="false" {

	public struct function init(){
		return this;
	}

	private struct function lowerStructKeyNames( required any dataobject ){
		local.result = [:];  // ordered struct, preserves key order
		arguments.dataobject.each( function( key, value ){
			if ( isStruct( arguments.value ) ){
				result.append( { "#lcase( arguments.key )#": lowerStructKeyNames( arguments.value ) } );
			} else {
				result.append( { "#lcase( arguments.key )#": arguments.value } );
			}
		});
		return local.result;
	}

	public string function serializeJsonWithLowerKeyNames( required any dataobject ){
		return serializeJSON( lowerStructKeyNames( arguments.dataobject ) );
	}
}
Then in your code you would instantiate that component in a global manner, e.g. myJsonConverter = new MyJsonConverter();, and, as @LionelHolt already mentioned, replace your serializeJSON() calls with myJsonConverter.serializeJsonWithLowerKeyNames() instead.
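Concretely, the swap at each call site would look something like this (myJsonConverter and myData here are placeholder names for your own instance and payload):

```cfml
// Before:
json = serializeJSON( myData );

// After (with myJsonConverter created once in a global scope,
// e.g. as application.myJsonConverter in onApplicationStart()):
json = myJsonConverter.serializeJsonWithLowerKeyNames( myData );
```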
In the CF10 legacy code, I am using Taffy and the serialization is done in just one place, so having this there simplifies the process, as I can catch all requests for JSON in that one place. That was preferable to manually changing all the structs at origin because there were many, and there was a good chance of missing some given the way the data objects were being constructed (and it is just me, no team).
In terms of overhead, unless the object being serialized is very large, the speed is acceptable.
Average test results comparing the native Lucee serializeJSON() and the convert-to-lowercase function:

17 small records, 985 bytes of JSON:
native: 85 ms
lowercased: 102 ms

13 large, nested records, 34 KB of JSON:
native: 5268 ms
lowercased: 5340 ms

3,450 large nested records, 1.4 MB of JSON (yes, big!):
native: 6630 ms
lowercased: 13320 ms
/**
 * CFC that calls the Lucee serializeJSON() after converting all the keys
 * in the passed-in data object to lowercase.
 * eg:
 *   myJsonConverter = new MyJsonConverter();
 *   j1 = myJsonConverter.serializeJsonWithLowerKeyNames( myNestedStruct );
 *   j2 = myJsonConverter.serializeJsonWithLowerKeyNames( myArrayOfNestedStructs );
 * See: https://lucee.daemonite.io/t/serializejson-with-lowercase-keys-and-override-jsonconverter-java/10619
 * for the rationale and credit to @andreas for the initial code in that post. :-)
 */
component displayname="MyJsonConverter.cfc" output="false" {

	public struct function init(){
		return this;
	}

	/**
	 * PUBLIC
	 * Uses the Lucee native serializeJSON() to serialize the object to JSON,
	 * AFTER converting all keys to lowercase.
	 */
	public string function serializeJsonWithLowerKeyNames( required any dataobject ){
		if ( isArray( arguments.dataobject ) ) {
			// We might start with an array at the top level:
			// lowercase the keys of every element, not just the first
			var arr = [];
			arguments.dataobject.each( function( item ){
				arr.append( lowerStructKeyNames( item ) );
			});
			return serializeJSON( arr );
		} else {
			// Or a struct, or simple value
			return serializeJSON( lowerStructKeyNames( arguments.dataobject ) );
		}
	}

	/**
	 * PRIVATE
	 * If the dataobject is a struct or array, convert all keys in that
	 * object (recursively) to lowercase and return a new object with
	 * those lowercase keys.
	 * If the dataobject is a simple value, just return it unchanged.
	 */
	private any function lowerStructKeyNames( required any dataobject ){
		if ( isSimpleValue( arguments.dataobject ) ) {
			// Just return the simple value unchanged
			local.result = arguments.dataobject;
		} else if ( isArray( arguments.dataobject ) ) {
			// A bare array: recurse on each item
			local.result = [];
			arguments.dataobject.each( function( item ){
				result.append( lowerStructKeyNames( item ) );
			});
		} else {
			local.result = [:];  // ordered struct, preserves key order
			arguments.dataobject.each( function( key, value ){
				if ( isStruct( arguments.value ) ){
					// Structs recurse ...
					result.append( { "#lcase( arguments.key )#": lowerStructKeyNames( arguments.value ) } );
				} else if ( isArray( arguments.value ) ){
					// Arrays: iterate and recurse on each item ...
					var arr = [];
					arguments.value.each( function( item ){
						arr.append( lowerStructKeyNames( item ) );
					});
					result.append( { "#lcase( arguments.key )#": arr } );
				} else {
					// Everything else, just copy across under the lowercased key
					result.append( { "#lcase( arguments.key )#": arguments.value } );
				}
			});
		}
		return local.result;
	}
}
@Gavin_Baumanis Yes, v3.6. All works well. I created a custom serializer so the whole toLowerCase process is taken care of there, and I could leave the rest of the CFCs that generate the arrays of structs independent of any need to worry about key case. I also abstracted those “resource” CFCs so I only have 4 endpoint taffy_uri patterns that route to the “real” resources that do the work. That way those “real resource” CFCs also know nothing about Taffy; they just return structs or arrays of structs back to Taffy, which serializes them for the client app. Works well.
Update: I am now not so clueless. Docker noob. I am on a Win 10 box using VSCode. To avoid complete overload I began my journey running Docker Desktop without the VSCode remote WSL approach. Realising this was what was slowing the whole app down, including the serialization, I learnt how to use the VSCode Windows remote WSL Ubuntu approach to make the containers inside Ubuntu and voila! instant dramatic speed improvement. Phew!
Would be super interesting to see your journey to make that work. In case you make a blog post about that specific setup with Lucee/Docker/WSLUbuntu/VScode, please let me know. I’d love to read it.
@andreas Yes, I thought I might. As a newbie to Lucee + MySQL (MariaDB) + nginx AND Docker AND WSL AND Ubuntu, I found the whole exercise had a very steep learning curve! I am still in the weeds making everything work. Mostly done, just stuck on a db corruption issue now.
@Terry_Whitney. Thanks for that. I am confused by your post. What is the purpose of that tweak? Is it some kind of speed improvement that impacts serialising?
This directly affects the network stack.
If you’re making a JSON call (or any communication request) to another host, this will help speed up communication between the two hosts.