We have a site that generates AWS S3 signed URLs and then returns them via <cfcontent file="#u#" type="application/pdf" />
If I use any other random HTTPS URL, such as <cfcontent file="https://falkensweb.com/" />
then the HTML is shown in the browser as expected. So connecting to the internet and fetching content works fine.
But our code has started returning the error: file or directory [https:XXXXXXX...] does not exist
I've added the following by way of debugging:
<cfhttp url="#u#"/>
<cfdump var="#u#"/>
<cfdump var="#cfhttp#"/>
This reports "200 OK", responseHeader['Content-Length'] is 7629903, and fileContent is a native byte array, as expected.
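For now I can work around it by fetching the bytes with CFHTTP myself and serving them with cfcontent's variable attribute instead of its file attribute. A minimal sketch, assuming u holds the signed URL and the filename "document.pdf" is illustrative (getAsBinary="yes" forces fileContent to be binary, which cfcontent's variable attribute requires):

<cfhttp url="#u#" method="get" getAsBinary="yes" result="r" />
<cfheader name="Content-Disposition" value='inline; filename="document.pdf"' />
<cfcontent type="application/pdf" variable="#r.fileContent#" />

This also avoids exposing the signed S3 URL to the client, at the cost of streaming the file through the Lucee server.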
Likewise, copying and pasting the URL into the browser works fine. So does <CFLOCATION url="#u#"/>, though this reveals the S3 URL.
The URLs that don't work follow a pattern like this (confidential parts removed): https://something.s3.eu-west-1.amazonaws.com/files/D/DD/DDDDDD.pdf?X-Amz-Security-Token=XXXXXXXX&X-Amz-Date=20250307T141930Z&X-Amz-SignedHeaders=host&X-Amz-Credential=XXXXXXXXXXXX&X-Amz-Expires=604800&X-Amz-Signature=XXXXXXXXXXX
Has anyone got any ideas what could have changed? The release notes (Breaking Changes Between Lucee 6.0 and 6.1 :: Lucee Documentation) only mention sites that return 403, which doesn't apply here, because CFHTTP proves it's a 200 OK response for the exact same string that CFCONTENT claims doesn't exist.