I'm using a database in Azure with ColdFusion 2016. I'm using Sqljdbc41.jar (helpful links on how to configure it: link1, link2). This is my query. [uuid] is a varchar(36) column with an index on it, and pkIdentity is an int primary key.
select pkIdentity
from tbl1
where [uuid] = <cfqueryparam cfsqltype="cf_sql_varchar" value="#attributes.uuid#" maxlength="36">
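For reference, the table shape I'm describing is roughly this (the index name and identity spec here are my assumptions, not the actual DDL):
create table tbl1 (
    pkIdentity int identity(1,1) primary key,
    [uuid] varchar(36) not null
);
create index IX_tbl1_uuid on tbl1 ([uuid]);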
This query is consuming the most DTUs in Azure. ColdFusion is sending it to Azure SQL as:
(@P0 nvarchar(4000))select pkIdentity
from tbl1
where [uuid]= @P0
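Because the parameter arrives as nvarchar(4000) and the column is varchar(36), nvarchar wins data-type precedence, so the varchar column side gets implicitly converted; depending on the column's collation that can prevent an efficient seek on the [uuid] index. The rewrite below is only an illustration of the effective predicate, not something I actually ran:
-- illustration only: the comparison behaves roughly as if the column were converted
select pkIdentity
from tbl1
where convert(nvarchar(4000), [uuid]) = @P0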
I've read that a setting on the JDBC driver may be causing the varchar datatype to be sent as nvarchar when passing it to Azure SQL. However, I don't have an option in the CF datasource setup screen to disable the conversion to nvarchar.
I think these are my options. Which do you think is better?
- try to reverse-engineer what ColdFusion is doing when you use cfqueryparam, but specify the correct datatype myself (use sp_prepexec?)
- remove the use of cfqueryparam entirely and just validate that the string is a valid uuid before hard-coding it into the query (e.g. where [uuid] = '#attributes.uuid#'; see the sketch after this list), but then I'm afraid I'd lose visibility into all executions of this query being grouped together within the Azure SQL Performance Insight tool
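A rough sketch of what the second option could look like; isValid("guid", ...) is my assumption for the validation step (the column holds 36-character hyphenated values), and the datasource name is a placeholder:
<!--- sketch of option 2: validate first, then inline the value without cfqueryparam --->
<cfif isValid("guid", attributes.uuid)>
    <cfquery name="getIdentity" datasource="azureDSN">
        select pkIdentity
        from tbl1
        where [uuid] = '#attributes.uuid#'
    </cfquery>
<cfelse>
    <cfthrow message="Invalid uuid value">
</cfif>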
Many thanks to @BernhardDöbler for commenting with the parameter I needed to correct the issue of the datasource always sending string parameters as Unicode. The parameter that needs to be added to the JDBC URL is sendStringParametersAsUnicode=false. (Ironically, I first copied it from his comment and added it to the setup, but it didn't work because there were some strange characters embedded between a couple of letters.) DTU consumption is significantly lower now! See screenshots below.
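For anyone else setting this up, the JDBC URL ends up in roughly this shape (server and database names below are placeholders, and other properties are omitted):
jdbc:sqlserver://yourserver.database.windows.net:1433;databaseName=yourdb;sendStringParametersAsUnicode=false;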