Merge FAQs

I try to resolve an update/delete conflict and receive the message "The insert failed. It conflicted with an identity range check constraint" - why?
My experience is that this happens when there is an update/delete conflict and automatic identity range management is enabled - not a particularly unusual setup. In this case the winner is the delete, so the record has been removed on both sites. If you then open the conflict viewer and decide to submit the loser (the update), the above error occurs. Merge doesn't distinguish between inserts and updates, so rather than attempt an update to a non-existent record it tries to reinsert it. You'd expect this to succeed because the initial insert went through OK - I would as well. However, the initial insert was accepted because the check constraint which verifies the identity values is defined as "Not For Replication", and that attribute takes effect when the merge agent synchronizes. The conflict viewer doesn't trip the same attribute, so the insert fails. I'd call this a bug, but the manual workaround is to disable the check constraint manually, submit the conflict loser and then re-enable the check constraint.
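The workaround can be sketched as follows. The constraint and table names here are placeholders - find the real constraint name on your system first (replication typically generates a name like repl_identity_range_<guid>):

```sql
-- Find the replication identity-range check constraint on the table
-- (sys.check_constraints is SQL 2005+; on SQL 2000 query sysobjects instead)
select name from sys.check_constraints
where parent_object_id = object_id('dbo.MyTable');

-- Disable the constraint (use the name returned above)
alter table dbo.MyTable nocheck constraint [repl_identity_range_xxxxxxxx];

-- ... now submit the conflict loser in the conflict viewer ...

-- Re-enable the constraint afterwards
alter table dbo.MyTable check constraint [repl_identity_range_xxxxxxxx];
```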
Adding a foreign key to a merge article causes the merge agent to fail - why?

Check to see if there is a schema change which is failing. Perhaps the PK table doesn’t exist on the subscriber? There are many such DDL changes which can fail on the subscriber when the publisher and subscriber database schemas are not identical. Check the error message and then check the entries in sysmergeschemachange. There is a handy stored procedure just for this:

Exec sp_enumeratependingschemachanges @publication = 'yourpublicationname'

Once you find the schema change which is failing you can choose to skip it - much like running sp_setsubscriptionxactseqno in transactional replication or using the -SkipErrors parameter.

Exec sp_markpendingschemachange @publication = 'TestMergeSchemaChanges'
     , @schemaversion = 16
     , @status = 'skipped'
"Error # 5 was generated by .Net SqlClient Data Provider The insert failed. It conflicted with an identity range check constraint in database 'DB', replicated table 'dbo.Table', column 'ID'. If the identity column is automatically managed by replication, update the range as follows: for the Publisher, execute sp_adjustpublisheridentityrange; for the Subscriber, run the Distribution Agent or the Merge Agent. Unable to insert [dbo].[Table] The statement has been terminated." This error was produced on the publisher sporadically - we would get some errors and then it would run OK, without an obvious pattern. What is the solution?
OK - a quick recap on identity ranges. These relate to rows added by the application or users - they have nothing to do with rows inserted by replication itself. You'll have two ranges allocated on the publisher and two on the subscriber. The identity values are used up as rows are added, and once the first range is full, allocation automatically tips over into the second range. As these two ranges are adjacent, they can really be seen as one logical range.

On the publisher, an insert attempted after the second range has filled up will also be OK - two new ranges are created by sp_MSrefresh_publisher_idrange and we can start populating them. On the subscriber it is a different story. An insert after the second range is full will not automatically result in a new range. This is not possible because the publisher itself allocates the subscriber's ranges, so once the second range is filled up on the subscriber you need to run the merge agent to get the next two ranges allocated.

As the above error is on the publisher, we take a look there and notice two things: the agent is running on a continuous profile, and the insert is being done by an application user with db_datareader and db_datawriter role memberships. At the time of looking at the issue we see that inserts work correctly and the range is not filled up!

The merge insert trigger will allocate two new ranges only for users in the db_owner role - this is true on the publisher only. So the range fills up and the check constraint rejects the insert with the above error - because our user is not in the db_owner role. Even though a non-db_owner user caused the problem, the merge agent itself will allocate two more ranges on its next run if the second range is full. So the problem goes away for inserts after the merge agent has run, and reappears later once the current range fills up again. Hence the sporadic errors with this setup.

The easiest solution to this on the publisher is to ensure the user doing the inserts is in the db_owner role. On the subscriber also note that the range might need to be large because there is no possibility of immediately getting 2 new ranges allocated by the merge trigger once the second range is filled up - this is only done when the merge agent runs.
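A hedged sketch of the publisher-side fix. The user name is a placeholder, and sp_addrolemember is the pre-2012 syntax; sp_adjustpublisheridentityrange is the procedure the error message itself suggests for manually reallocating the publisher's ranges:

```sql
-- Put the inserting application user into db_owner on the publisher database,
-- so the merge insert trigger can allocate fresh identity ranges when needed
exec sp_addrolemember @rolename = 'db_owner', @membername = 'appuser';

-- Alternatively, an administrator can reallocate the publisher's ranges manually
exec sp_adjustpublisheridentityrange @table_name = 'MyTable';
```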
When initializing I get this error - why?

The process could not bulk copy into table '"dbo"."xyz"'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20037)
Get help: http://help/MSSQL_REPL20037

Field size too large (Source: MSSQLServer, Error number: 0)


Try BCPing the data in manually as a test - I'd expect this to fail as well, without useful error output. In this case take a look at the collations on your subscriber. They are no doubt different from the publisher/distributor, and this is the cause of the error - you'll need to standardise the collations to get this working.
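To compare collations between the servers, something like this can be run on each instance (standard metadata functions, available from SQL 2000 onwards):

```sql
-- Compare these values between publisher/distributor and subscriber
select serverproperty('Collation')                  as server_collation,
       databasepropertyex(db_name(), 'Collation')   as database_collation;
```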
Why do I get this error when editing a trigger?

Msg 21531, Level 16, State 1, Procedure sp_MSmerge_altertrigger, Line 67
The data definition language (DDL) command cannot be executed at the Subscriber. DDL commands can only be executed at the Publisher. In a republishing hierarchy, DDL commands can only be executed at the root Publisher, not at any of the republishing Subscribers.
Msg 21530, Level 16, State 1, Procedure sp_MSmerge_ddldispatcher, Line 191
The schema change failed during execution of an internal replication procedure. For corrective action, see the other error messages that accompany this error message.
Msg 3609, Level 16, State 2, Procedure mytrig1, Line 10
The transaction ended in the trigger. The batch has been aborted.

This error is caused by an attempted edit of a user trigger on a replicated table. Specifically, it is caused by a rollback from a database trigger called "MSmerge_tr_altertrigger". This database trigger is created when you set up the subscription. So, what are your options if you really want to edit your user trigger?
  1. "DISABLE TRIGGER [MSmerge_tr_altertrigger] ON DATABASE", make your user trigger change, followed by "ENABLE TRIGGER [MSmerge_tr_altertrigger] ON DATABASE".
  2. Drop the user trigger then recreate it.
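Option 1 can be sketched as follows, assuming the database trigger name shown in the error above (the user trigger change itself is a placeholder):

```sql
-- Temporarily disable the replication DDL guard trigger
disable trigger [MSmerge_tr_altertrigger] on database;

-- Make the change to your user trigger here, e.g.
-- alter trigger dbo.mytrig1 on dbo.MyTable after update as ...

-- Re-enable the guard trigger immediately afterwards
enable trigger [MSmerge_tr_altertrigger] on database;
```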
How can I preserve the rowguid column when dropping a subscription?
This applies to both SQL Server 2005 and SQL 2008. When a subscription is dropped, the rowguid column is dropped as well - unless it already existed beforehand (with the ROWGUIDCOL attribute), in which case it remains. This is quite a change from SQL 2000, where the column would always remain. So, what's the problem? Well, your code might be based in some way on this column - perhaps you rely on the number of columns being constant after dropping the subscription? If so, you could take a look at running this on the subscriber before dropping the subscription. Of course this is completely unsupported and must be tested in a UAT environment first!

update sysmergearticles set preserve_rowguidcol = 1 where name = 'yourarticlename'

Why is the MSmerge_history table not getting cleared out (applies to SQL 2005 and SQL 2008 but not SQL 2000!)?
Have a check on the merge agent's job step. If the -Continuous flag is set, there will be a big problem with your merge history retention. The history retention cleanup uses sp_MShistory_cleanup, which checks the MSmerge_sessions table to find the end_time of the synchronisation session. This value is used to determine which records can be deleted from MSmerge_history. The relevant code is shown below.

select @cutoff_time = dateadd(hour, -@history_retention, getdate())

DELETE dbo.MSmerge_history
FROM dbo.MSmerge_history msmh
  JOIN dbo.MSmerge_sessions msms
  ON msmh.session_id = msms.session_id
WHERE msms.end_time <= @cutoff_time

The problem is that with the -Continuous setting there is a single session, and the end_time column keeps getting updated. Consequently records older than @history_retention are never deleted. The solution is to restart continuously running agents periodically.
Why does tempdb sometimes run out of space when creating a snapshot which uses dynamic filters?
Did you specify @use_partition_groups=true for your publication? If it is set to false, SQL merge replication may heavily use tempdb and a huge transaction log may be generated. See this article for more info.
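You can check and, if necessary, change the setting along these lines. The publication name is a placeholder, and changing this property invalidates the snapshot, so verify the exact behaviour for your version in the sp_changemergepublication documentation first:

```sql
-- Check the current setting (run in the publication database)
select name, use_partition_groups from dbo.sysmergepublications;

-- Change it via sp_changemergepublication; this forces snapshot regeneration
-- and subscription reinitialisation
exec sp_changemergepublication
     @publication = 'yourpublicationname'
   , @property = 'use_partition_groups'
   , @value = 'true'
   , @force_invalidate_snapshot = 1
   , @force_reinit_subscription = 1;
```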
My merge (2005) publication is hanging on initialization - what should I do?
There have been loads of mentions of this recently. It's caused by an infinite loop problem and can be bypassed by running:
Update sysmergepublications set generation_leveling_threshold = 0

This problem can be avoided by applying a hotfix, cumulative update #8, or SP3. Full details are here.
Is it possible to write conflict info to a separate (persistent) table?
MSmerge_articlehistory has a count of conflicts per article, while MSmerge_conflicts_info has more detailed info including the conflict type and the 2 nodes which are in conflict. Polling these tables and writing the values to your own history tables is the main option for reporting.
Is there a way in SQL Server 2005 to set "not for replication" flag on an existing identity column?
Have a look at the new procedure sp_identitycolumnforreplication.
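A minimal sketch of its use, assuming a hypothetical table dbo.MyTable with an identity column (parameters as documented: an object id and a bit flag):

```sql
-- Mark the identity column on dbo.MyTable as NOT FOR REPLICATION
declare @objid int;
set @objid = object_id('dbo.MyTable');
exec sys.sp_identitycolumnforreplication @object_id = @objid, @value = 1;
```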
How can I get round the (conflict) problems encountered in SQL Server 2005 when using GetDate() in filters (Nate Cook)?
Nate found that he was getting bogus conflicts intermittently. The problem did not occur in SQL Server 2000 and was specific to using getdate() in the subset_filter for a table - especially if you have other child tables being filtered by join filters to the main table which has that subset_filter. He worked with Microsoft for a month sending them metadata log files, etc. and now there is an MSDN entry describing the workaround. The only thing not described in the link is the fact that a potential symptom of using getdate() in filters is the existence of intermittent bogus conflicts: e.g. "The record was updated at the subscriber and at the publisher. Which version do you want to keep?" when in actuality the record was only updated at the subscriber and not at the publisher. Or it might say the record was deleted at the publisher when it wasn't at all.
How do I control the flow of data using merge replication in SQL Server 2005?
Have a look at the @subscriber_upload_options in sp_addmergearticle. There are older alternative options which are now no longer recommended:
(1) SQL Server Upload Only Conflict Resolver. BOL: "Changes uploaded to the Publisher are rejected; changes are downloaded to the Subscriber". (set at design time for the publication)
(2) the -EXCHANGETYPE parameter (can be set at run time for the subscription)
What are the issues when combining filters and joins?
There are many issues to be aware of when combining filters and joins - this could be made into a huge article in itself. As an example for this section: if you have join filters, and a different static row filter on each table in the join, the results are not filtered correctly, or at least not at all intuitively. For example, suppose I merge replicate two tables, Customers and Orders, from the Northwind database. These are joined by a PK-FK relationship, so a customer can place several orders. If I create a join filter to link the tables, all 'ordering' Customers and their associated Orders are replicated. I then add a filter clause on the Customers table (the parent) to restrict the output to a particular customer: customerid = 'VINET'. The resulting (automatically created) merge view for Orders restricts the results to 5 orders - those belonging to customer 'VINET'. So, on initialization one customer and 5 associated orders are replicated. However, what if I also put a different filter on the Orders table: customerid = 'ALFKI'? Logically this should further restrict the replicated Orders, yet on initialization 11 orders are now replicated! This is because a view is created which unions the two clauses rather than ANDing them, so rows arriving as part of the join which don't have customerid = 'ALFKI' are additionally returned. This also occurs in SQL Server 2005. See Lizet's blog for other issues.
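The Northwind example above can be set up with something like the following. The publication and filter names are hypothetical; parameters are as per sp_addmergearticle and sp_addmergefilter:

```sql
-- Row filter on the parent article (Customers)
exec sp_addmergearticle
     @publication = 'NorthwindMerge'            -- hypothetical publication name
   , @article = 'Customers'
   , @source_object = 'Customers'
   , @subset_filterclause = 'customerid = ''VINET''';

-- Join filter extending the parent's partition to the child article (Orders)
exec sp_addmergefilter
     @publication = 'NorthwindMerge'
   , @article = 'Orders'
   , @filtername = 'Customers_Orders'
   , @join_articlename = 'Customers'
   , @join_filterclause = 'Customers.customerid = Orders.customerid'
   , @join_unique_key = 1;                      -- join is on a unique key
```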
How can I dynamically change a filter clause?
Using static filters, it is possible to reallocate records at the highest level, and the effect will propagate downwards by virtue of the join filters. This will allow you to simulate changing the filtering dynamically. Practically speaking, there needs to be a separate table which relates filter details to subscribers, and table joins would then relate this assignment table to the other articles.
How can I find the list of pending merge changes?
Have a look at this script.
How can I manually start the merge agent outside of SQL Server agent?
The simplest way is to use the replmerg executable:
C:\Program Files\Microsoft SQL Server\80\COM>replmerg -Publisher DH1791628 -PublisherDB xxxPublisher -Publication xxxPublishertClients -Subscriber DH1791628 -SubscriberDB sub1 -Distributor DH1791628 -DistributorLogin sa -DistributorPassword sa (no, don't really use sa/sa in practice!)
Why are inserts on the publisher being removed?
This is very likely related to the "compensate_for_errors" setting. If a change from the publisher fails to be applied at the subscriber (for some reason - a PK, FK, CHECK, etc. constraint), then synchronization will undo the change at the publisher. So an insert from the publisher which fails at the subscriber will later get deleted at the publisher. Similarly, a delete from the publisher which fails at the subscriber will get re-inserted at the publisher. The inverse also applies if changes initiated at the subscriber can't be applied at the publisher. This behaviour is set at the article level and can be disabled if required (there is an associated KB article for more info).

To determine for sure if this is the cause of the issue you are seeing is difficult. There is no extra column in MSmerge_articlehistory to show that the change being implemented is related to this setting, and I have requested that such a column be added. You might be able to infer a compensating change from MSmerge_sessions if there is a download conflict and then an upload delete in two adjacent sessions - but obviously if you have hundreds of changes being applied at the same time this will be impossible, as it will get lost in the avalanche of recorded changes.

To see if this setting is enabled, take a look at this type of query:

select name, compensate_for_errors from dbo.sysmergearticles

This query will tell you if the setting has been altered recently:

select * from dbo.MSmerge_settingshistory
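To disable the behaviour for an article, something like the following can be used (publication and article names are placeholders; property values as documented for sp_changemergearticle):

```sql
-- Turn off compensation for a single article
exec sp_changemergearticle
     @publication = 'yourpublicationname'
   , @article = 'yourarticlename'
   , @property = 'compensate_for_errors'
   , @value = 'false'
   , @force_invalidate_snapshot = 0
   , @force_reinit_subscription = 0;
```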
How can I troubleshoot network (connectivity) issues?
Run a ping -t for a day and then look at the statistics to see if a hiccup occurred.
How can I report on the historical amount of updates, inserts and deletes done by publisher and subscriber over a period of time?
The key table is called MSmerge_sessions. This records much the same data as the replication monitor shows for merge replication - the number of inserts, updates, deletes and conflicts for uploads and downloads. It also has the number of metadata cleanup records deleted. MSmerge_articlehistory has the same data, but on a per-article basis.
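A hedged reporting sketch over MSmerge_sessions (run in the publication database; the column names are as documented for SQL 2005 - verify them against your build):

```sql
-- Upload/download activity per merge session over the last 7 days
select start_time, end_time,
       upload_inserts, upload_updates, upload_deletes, upload_conflicts,
       download_inserts, download_updates, download_deletes, download_conflicts
from dbo.MSmerge_sessions
where start_time > dateadd(day, -7, getdate())
order by start_time;
```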
How can I fix rows not being transferred on synchronisation?
This can happen in 2 circumstances:
(1) If you bulk insert the rows and accept the defaults, then "FIRE_TRIGGERS" is false and consequently the rows are not added to MSmerge_contents.
(2) If you do a fast-load using the Transform Data task in DTS/SSIS.
In both cases you need to run sp_addtabletocontents to include the rows and then resynchronise. Alternatively you can use sp_mergedummyupdate for a single row. For the fast-load case, if in future you deselect the fast-load check box in the SSIS/DTS data flow, the merge insert trigger will fire.
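The fix can be sketched as follows (table name is a placeholder), after which you resynchronise:

```sql
-- Re-include rows that bypassed the merge triggers in the merge metadata
exec sp_addtabletocontents @table_name = 'MyTable', @owner_name = 'dbo';
```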
How can I avoid subscriber deletes i.e. when I use filtered articles and add a record on a subscriber it is later deleted?
Filtering works by creating a view on the publisher. Initialisation doesn't send this view to the subscriber, and it is only checked on the publisher. So, you can add a record on the subscriber and on synchronisation this record will be sent as an insert to the publisher. Here, the filter is checked, and a delete is downloaded to the subscriber. If this is not the way you want it to work you have 2 choices: (a) modify the insert trigger on the subscriber to ensure it doesn't write the changed record to MSmerge_contents. (b) use partitioned views to partition your data based on the original merge replication filter and only replicate one table.
When the merge agent runs I notice there are blocking issues and then there is a timeout.
To find out more information about the cause of your blocking issues you might like to use profiler and the "blocked process threshold" report. If it is simply a matter of having a lot of data modifications being synchronized then you need to optimize the merge agent. You could:
(a) increase -DownloadGenerationsPerBatch
(b) make sure the metadata is being cleaned up
(c) run the merge agent more frequently, so as to not accumulate changes
(d) decrease the -PollingInterval if you are running continuously
(e) increase the -QueryTimeOut
Alternatively, if you accept a high latency, you might run the merge agent out of hours.
Error Messages: "The process could not enumerate changes at the 'Subscriber'."
This is a generic, high-level error message simply indicating that the merge agent ran into a problem moving changes from the subscriber to the publisher. It can be caused by several different problems and is not necessarily a bug. To get more info, you could enable logging on the merge agent (-Output c:\somefile.log -OutputVerboseLevel 3). Alternatively, many of these merge errors are solved by simply restarting the merge agent.
Error Messages: "Failed to enumerate changes in the filtered articles"
There is now a KB article related to this issue for SQL Server 2000.
When I insert a row on Site1 (the Publisher) the row gets replicated to Site2. Site2 is a republisher to Site3 but the row never arrives at Site3.
Check to see if you are using local subscriptions. All subscriptions need to be global for this republishing setup to work.
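A sketch of creating a subscription as global with an explicit priority (server, database and publication names are placeholders; check the sp_addmergesubscription documentation for your version):

```sql
-- Republishing subscribers need global subscriptions with an assigned priority
exec sp_addmergesubscription
     @publication = 'yourpublicationname'
   , @subscriber = 'Site2'
   , @subscriber_db = 'SubDB'
   , @subscription_type = 'push'
   , @subscriber_type = 'global'
   , @subscription_priority = 75;
```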