This really doesn't have anything to do with C#. This is your classic nvarchar vs varchar issue (or Unicode vs ASCII). The same thing happens if you mix collations.<p>I'm not sure why anyone would choose varchar for a column in 2026 unless you have some sort of ancient backwards-compatibility situation.
<i>> I'm not sure why anyone would choose varchar for a column in 2026</i><p>The same string takes roughly half the storage space, meaning more rows per page, a smaller working set needed in memory for the same queries, and less IO. Any indexes on those columns will be similarly smaller. So if you are storing things that you know won't break out of the standard ASCII set⁰, stick with [VAR]CHARs¹, otherwise use N[VAR]CHARs.<p>Of course, if you can guarantee that your stuff will only run on SQL Server versions recent enough to be configured with UTF8 collations, then default to that instead, unless you expect data in a character set where that might increase the data size over UTF16. You'll get the same size benefit for pure ASCII without losing wider character set support.<p>Furthermore, if you are using row or page compression it doesn't really matter: your wide-character strings will effectively be UTF8 encoded anyway. But be aware that there is a CPU hit for processing compressed rows and pages on every access, because they remain compressed in memory as well as on disk.<p>--------<p>[0] Codes with fixed ranges, etc.<p>[1] Some would put that the other way around (“use NVARCHAR if you think there might be any non-ASCII characters”), but defaulting to NVARCHAR and moving to VARCHAR only when you are confident is the safer approach IMO.
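For illustration, re the UTF8 collation point above, here's roughly what opting into it looks like on SQL Server 2019+ (a minimal sketch; the table, column, and connection string are made up):

    using Dapper;
    using Microsoft.Data.SqlClient;

    var connectionString = "...your connection string...";

    // A plain varchar column with a UTF-8 collation: full Unicode support,
    // but pure-ASCII data still costs one byte per character.
    using var conn = new SqlConnection(connectionString);
    conn.Execute(@"
        CREATE TABLE dbo.Customers (
            Id    int IDENTITY PRIMARY KEY,
            Email varchar(320) COLLATE Latin1_General_100_CI_AS_SC_UTF8 NOT NULL
        );");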
I agree with your first point. I've seen this same issue crop up in several other ORMs.<p>As to your second point: VARCHAR uses N + 2 bytes whereas NVARCHAR uses N*2 + 2 bytes for storage (at least on SQL Server). The vast majority of character fields in databases I've worked with do not need to store Unicode values.
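A quick way to see that difference on a live server (a sketch, assuming Dapper and an already-open SqlConnection called conn):

    // varchar: 1 byte per ASCII character; nvarchar: 2 bytes per character.
    var (ansi, wide) = conn.QuerySingle<(int ansi, int wide)>(
        "SELECT DATALENGTH('hello'), DATALENGTH(N'hello')");
    // ansi == 5, wide == 10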
> The vast majority of character fields in databases I've worked with do not need to store unicode values.<p>This has not been my experience at all. Exactly the opposite, in fact. ASCII is dead.
Generally if it stores user input it needs to support Unicode. That said UTF-8 is probably a way better choice than UTF-16/UCS-2
The one place UTF-16 massively wins is text that would be two bytes as UTF-16, but three bytes as UTF-8. That's mainly Chinese, Japanese, Korean, etc...
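The difference is easy to see in .NET (sketch):

    using System.Text;

    string cjk = "日本語のテキスト";      // code points in U+0800..U+FFFF
    string ascii = "plain ascii text";

    Console.WriteLine(Encoding.UTF8.GetByteCount(cjk));      // 24: 3 bytes per character
    Console.WriteLine(Encoding.Unicode.GetByteCount(cjk));   // 16: 2 bytes per character (UTF-16)
    Console.WriteLine(Encoding.UTF8.GetByteCount(ascii));    // 16: 1 byte per character
    Console.WriteLine(Encoding.Unicode.GetByteCount(ascii)); // 32: 2 bytes per character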
UTF-8 is a relatively new thing in MSSQL and had lots of issues initially. I agree it's better and should have been implemented in the product long ago.<p>I have avoided it and haven't followed whether the issues are fully resolved; I would hope they are.
> UTF-8 is a relatively new thing in MSSQL and had lots of issues initially, I agree it's better and should have been implemented in the product long ago.<p>Their insistence on making the rest of the world go along with their obsolete pet scheme would be annoying if I ever had to use their stuff for anything, ever. UTF-8 was conceived in 1992, and here we are in 2026 with a reasonably popular database still considering it the new thing.
I would be more critical of Microsoft choosing to support UCS-2/UTF-16 if Microsoft hadn't completed their implementation of Unicode support in the 90s and then been pretty consistent with it.<p>Meanwhile Linux had a years long blowout in the early 2000s over switching to UTF-8 from Latin-1. And you can still encounter Linux programs that choke on UTF-8 text files or multi-byte characters 30 years later (`tr` being the one I can think of offhand). AFAIK, a shebang is still incompatible with a UTF-8 byte order mark. Yes, the UTF-8 BOM is both optional and unnecessary, but it's also <i>explicitly allowed by the spec</i>.
In 92 it was a conference talk. In 98 it was adopted by the IETF. Point probably stands though.
The data types were introduced with SQL Server 7 (1998), so I'm not sure it's accurate to say it's considered the new thing.
To complicate matters, SQL Server can do nvarchar compression, but they should have just done UTF-8 long ago:<p><a href="https://learn.microsoft.com/en-us/sql/relational-databases/data-compression/unicode-compression-implementation?view=sql-server-ver17" rel="nofollow">https://learn.microsoft.com/en-us/sql/relational-databases/d...</a><p>Also, UTF-8 is actually just a varchar collation, so you don't even use nvarchar with that, lol?
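For reference, that nvarchar (Unicode/SCSU) compression rides on the normal row or page compression switch; a sketch, with a hypothetical table name and an open Dapper connection conn:

    // Row compression also enables Unicode compression of nchar/nvarchar values,
    // so mostly-ASCII nvarchar data shrinks to roughly one byte per character.
    conn.Execute("ALTER TABLE dbo.Customers REBUILD WITH (DATA_COMPRESSION = ROW);");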
Since SQL Server 2019, varchar supports Unicode (via UTF-8 collations), so now it's the opposite: you use nvarchar instead of varchar only for backwards-compatibility reasons.
Yes I have run into this regardless of client language and I consider it a defect in the optimizer.
I think this is a rather pertinent showcase of the danger of outsourcing your thinking to LLMs. This article strongly indicates to me that it is LLM-written, and it's likely the LLM diagnosed the issue as being a C# issue. When you don't understand the systems you're building with, all you can do is take the plausible-sounding generated text about what went wrong for granted, and then I suppose regurgitate it on your LLM-generated portfolio website in an ostensible show of your profound architectural knowledge.
This is not at all just an LLM thing. I've been working with C# and MS SQL Server for many years and never even considered this could be happening when I use Dapper. There's likely code I have deployed running suboptimally because of this.<p>And it's not like I don't care about performance. If I see a small query taking more than a fraction of a second when testing in SSMS, or a larger query taking more than a few seconds, I will dig into the query plan and try to make changes to improve it. But for code that I took from testing in SSMS and moved into a Dapper query, I wouldn't have noticed performance issues from that move if the slowdown was never particularly large.
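For anyone else auditing existing code, the pattern to look for is roughly this (a sketch; the Users table, its indexed varchar Email column, the open connection conn, and the email string variable are all hypothetical):

    using Dapper;

    // A C# string parameter defaults to DbType.String, which is sent as nvarchar.
    // Against an indexed varchar column, SQL Server converts the column side of
    // the predicate, so the index can't be seeked and the query scans instead.
    var slow = conn.Query<string>(
        "SELECT Email FROM Users WHERE Email = @email", new { email });

    // Declaring the parameter as ANSI keeps the types matched and the seek intact.
    var fast = conn.Query<string>(
        "SELECT Email FROM Users WHERE Email = @email",
        new { email = new DbString { Value = email, IsAnsi = true, Length = 320 } });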
This is a common issue, and most developers I worked with are not aware of it until they see the performance issues.<p>Most people are not aware of how Dapper maps types under the hood; once you know, you start being careful about it.<p>Nothing to do with LLMs, just plain old learning through mistakes.
Actually, LLMs do way better here: with Dapper, the LLM generates code that specifies the types for strings.
UTF-8 solved this completely. It handles the full range of Unicode and on average takes up almost as little storage as ASCII.<p>UTF-16 is brain-dead and an embarrassment.
Blame the Unicode consortium for not coming up with UTF-8 first (or, really, at all). And for assuming that 65,536 code points would be enough for everyone.<p>So many problems could be solved with a time machine.
The first draft of Unicode was in 1988. Thompson and Pike came up with UTF-8 in 1992, made an RFC in 1998. UTF-16 came along in 1996, made an RFC in 2000.<p>The time machine would've involved Microsoft saying "it's clear now that UCS-2 was a bad idea, so let's start migrating to something genuinely better".
I don't think it was clear at the time that UTF-8 would take off. UCS-2 and then UTF-16 were <i>well</i> established by 2000 in both Microsoft technologies and elsewhere (like Java). Linux, despite the existence of UTF-8, would still take years to get acceptable internationalization support. Developing good and secure internationalization is a hard problem -- it took a long time for everyone.<p>It's now 2026; everything always looks different in hindsight.
I don’t remember it quite that way. Localization was a giant question, sure. Are we using C or UTF-8 for the default locale? That had lots of screaming matches. But in the network service world, I don’t remember ever hearing more than a token resistance against choosing UTF-8 as the successor to ASCII. It was a huge win, especially since ASCII text is already valid UTF-8 text. Make your browser default to parsing docs with that encoding and you can still parse all existing ASCII docs with zero changes! That was a huge, enormous selling point.<p>Windows is far from a niche player, to be sure. Yet it seems like literally every other OS but them was going with one encoding for everything, while they went in a totally different direction that got complaints even then. I truly believe they thought they’d win that battle and eventually everyone else would move to UTF-16 to join them. Meanwhile, every other OS vendor was like, nah, no way we’re rewriting everything from scratch to work with a not-backward compatible encoding.
MS could easily have added proper UTF-8 support in the early 2000s instead of the late 2010s.
It gets worse for UTF-16, Windows will let you name files using unpaired surrogates, now you have a filename that exists on your disk that cannot be represented in UTF-8 (nor compliant UTF-16 for that matter). Because of that, there's yet another encoding called WTF-8 that can represent the arbitrary invalid 16-bit values.
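A small .NET illustration of why such a name can't survive a round trip through UTF-8 (sketch):

    using System.Text;

    string name = "file_\uD800.txt";            // lone high surrogate: legal in a .NET (UTF-16) string
    byte[] utf8 = Encoding.UTF8.GetBytes(name); // the unpaired surrogate is replaced with U+FFFD
    string back = Encoding.UTF8.GetString(utf8);

    Console.WriteLine(name == back);            // False: the original name is unrecoverable from UTF-8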
It's weird that the article does not show any benchmarks but crappy descriptions like "milliseconds to microseconds" and "tens of thousands to single digits". This is the kind of vague performance description LLMs like to give when you ask them about performance differences between solutions and don't explicitly ask for a benchmark suite.
I disagree. I think it's a nice discovery that many might be unaware of and would otherwise spend a lot of time tracking down independently. I also disagree that a rigorous benchmark is needed for every single performance-related blog post, because good benchmarks are difficult to write and you have to account for multiple variables. Here, the author just said "trust me, it's much faster", and I trust them because they explained the reasoning behind the degradation.
> No schema changes. No new indexes. No query rewrites. Just telling Dapper the correct parameter type.
Are we automatically discarding everything that might or might not have been written or assisted by an LLM? I get it when the articles are the type of meaningless self improvement or similar kind of word soup. However, if hypothetically an author uses LLM assistance to improve their styling to their liking, I see nothing wrong with that as long as the core message stands out.
I've found and fixed this bug before. There are two other ways to handle it.<p>Dapper has a static configuration for things like type mappers, and you can change the default mapping for string to use varchar with Dapper.SqlMapper.AddTypeMap(typeof(string), System.Data.DbType.AnsiString). I typically set that in the app startup, because I avoid NVARCHAR almost entirely (to save the extra byte per character, since I rarely need anything outside of ANSI).<p>Or, one could use stored procedures. Assuming you take in a parameter that is the correct type for your indexed predicate, the conversion happens once when the sproc is called, not in the query by the optimizer.<p>I still have mixed feelings about overuse of SQL stored procedures, but this is a classic example of where one of their benefits is revealed: they are a defined interface for the database, where DB-specific types can be handled instead of polluting your code with specifics about your DB.<p>(This is also a problem for other type mismatches like DateTime/Date, numeric types, etc.)
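For reference, a minimal sketch of the first option (run once at startup, before any queries):

    using System.Data;
    using Dapper;

    // Send every plain C# string parameter as varchar (AnsiString) instead of
    // Dapper's default nvarchar (String).
    SqlMapper.AddTypeMap(typeof(string), DbType.AnsiString);

    // Individual parameters can still opt back into nvarchar where needed,
    // e.g. new DbString { Value = s, IsAnsi = false } for genuinely Unicode data.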
Interesting problem, but the AI prose makes me not want to read to the end.
I never had this issue with Dapper; as others point out, it's a holding-it-wrong problem.
I thought, having just read the title, that maybe it's time to upgrade if you're still on Ubuntu 6.06.
Been bit by that before: it's not just an issue with Dapper, it can also hit you with Entity Framework.
This feels like a bug in the SQL query optimizer rather than Dapper.<p>It ought to be smart enough to convert a constant parameter to the target column type in a predicate constraint <i>and then</i> check for the availability of a covering index.
There's a data type precedence that determines which value gets cast[0]. Nvarchar has higher precedence, therefore the varchar value is "lifted" to an nvarchar value first. This wouldn't be an issue if the types were reversed.<p>0: <a href="https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-type-precedence-transact-sql?view=sql-server-ver17" rel="nofollow">https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-...</a>
It's the optimizer caching the query plan as a parameterized query. It's not re-planning the index lookup on every execution.
How do you safely convert a 2 byte character to a 1 byte character?
Even better is Entity Framework and how it handles null strings: it creates some strange predicates in SQL that end up unable to seek on string indexes.
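A sketch of the kind of thing meant here (hypothetical EF6 model and context; the exact SQL varies by EF version and configuration):

    using System.Linq;

    string email = null; // imagine a nullable filter value coming from user input

    using (var db = new AppDbContext())
    {
        // With C#-style null semantics (the EF6 default), a simple equality...
        var users = db.Users.Where(u => u.Email == email).ToList();

        // ...has historically been translated to a predicate roughly shaped like
        //   WHERE ([Email] = @p) OR (([Email] IS NULL) AND (@p IS NULL))
        // which mirrors C#'s null == null behaviour but defeats a plain index
        // seek on Email. EF6's UseDatabaseNullSemantics option (and EF Core's
        // UseRelationalNulls) trade that for simpler, seek-friendly SQL.
    }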
This is due to UTF-16, an unforgivable abomination.
This is a really interesting blog post - the kind of old school stuff the web used to be riddled with. I must say - would it have been that hard to just write this by hand? The AI adds nothing here but the same annoying old AI-isms that distract from the piece.
Life is too short to use SQL Server. I know people that use it will swear it's "not bad anymore" but yes it is.