AWS DMS limitation when using SQL Server as a source - replicating rows with more than 8000 bytes of information


I am thinking of creating a read-only DB in Amazon RDS, using AWS DMS to replicate the data from an on-premises database. On the AWS DMS list of general limitations for SQL Server sources, one of them is:

Changes to rows with more than 8000 bytes of information, including header and mapping information, are not processed correctly due to limitations in the SQL Server TLOG buffer size.

In what circumstances would this apply? I was under the impression that a table can contain a maximum of 8,060 bytes per row. I understand that oversized varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type values are pushed off-row, with a pointer left in their place. Is that what this refers to? I am trying to understand any situation where this would occur.

Also, how can I determine which, if any, tables would be affected?

1 Answer

Answered by TheTor (best answer):

OK, it seems I can answer my own question now, at least in one use case.

It's not about the possible (declared) size of the combined columns but, as stated, the amount of data actually stored in them. Create a table with an id and nine varchar(1000) columns. Fill the first eight with 1000 bytes each, and everything is replicated as it should be. Fill the remaining column with data, and the update is not replicated (presumably because the column data is pushed off-page). Strangely, varchar(max) does not seem to be affected in this way.
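The scenario above can be sketched in T-SQL as follows (the table and column names are made up for illustration):

```sql
-- Hypothetical repro table: an id plus nine varchar(1000) columns.
CREATE TABLE dbo.WideRow (
    id INT IDENTITY PRIMARY KEY,
    c1 VARCHAR(1000), c2 VARCHAR(1000), c3 VARCHAR(1000),
    c4 VARCHAR(1000), c5 VARCHAR(1000), c6 VARCHAR(1000),
    c7 VARCHAR(1000), c8 VARCHAR(1000), c9 VARCHAR(1000)
);

-- Fill the first eight columns (about 8000 bytes of data):
-- changes to this row replicate as expected.
INSERT INTO dbo.WideRow (c1, c2, c3, c4, c5, c6, c7, c8)
VALUES (REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000), REPLICATE('x', 1000),
        REPLICATE('x', 1000), REPLICATE('x', 1000));

-- Fill the ninth column, pushing the stored row past 8000 bytes:
-- this is the update that was not replicated.
UPDATE dbo.WideRow SET c9 = REPLICATE('x', 1000) WHERE id = 1;
```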

To locate tables with rows affected by this, run the following, paying attention to the MaximumRecordSize column:

DBCC SHOWCONTIG WITH TABLERESULTS
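Note that DBCC SHOWCONTIG is deprecated in recent SQL Server versions. A sketch of the same check using sys.dm_db_index_physical_stats, whose DETAILED mode exposes a max_record_size_in_bytes column:

```sql
-- Find objects in the current database whose largest leaf-level
-- record exceeds 8000 bytes. DETAILED mode reads every page, so
-- this can be slow on large databases.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       ps.index_id,
       ps.max_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(
         DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ps
WHERE ps.index_level = 0                -- leaf level only
  AND ps.max_record_size_in_bytes > 8000
ORDER BY ps.max_record_size_in_bytes DESC;
```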

A way around the problem is to split your offending tables into smaller related tables (i.e. normalizing them) and join them to retrieve the data. This may or may not be feasible depending on the size of your database.
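One possible shape of such a split, again with made-up names, moves some of the wide columns into a child table keyed by the parent id, so neither row can exceed the limit on its own:

```sql
-- Hypothetical split of a wide table into two related tables.
CREATE TABLE dbo.MainPart (
    id INT PRIMARY KEY,
    c1 VARCHAR(1000), c2 VARCHAR(1000),
    c3 VARCHAR(1000), c4 VARCHAR(1000)
);

CREATE TABLE dbo.ExtraPart (
    id INT PRIMARY KEY REFERENCES dbo.MainPart(id),
    c5 VARCHAR(1000), c6 VARCHAR(1000), c7 VARCHAR(1000),
    c8 VARCHAR(1000), c9 VARCHAR(1000)
);

-- Recover the original row shape with a join:
SELECT m.id, m.c1, m.c2, m.c3, m.c4,
       e.c5, e.c6, e.c7, e.c8, e.c9
FROM dbo.MainPart AS m
JOIN dbo.ExtraPart AS e ON e.id = m.id;
```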