Transactional FAQs

 
Can I use cascading FK changes with transactional replication?
The answer is "it depends". The table below shows the results of my testing. The outcome depends on whether the FK on the subscriber is defined as NOT FOR REPLICATION (NFR) and whether the cascading option is selected for updates or deletes.

Change   NFR     Cascading   Result
Update   True    True        ok
Update   True    False       ok
Update   False   True        ok
Update   False   False       error
Delete   True    True        ok
Delete   True    False       ok
Delete   False   True        error
Delete   False   False       error
 
How can I delete rows on the publisher but not the subscriber?
The easiest thing to do is put your delete logic into a stored procedure, add the stored procedure as an article and mark it so that you replicate the execution of the stored procedure, but remove the delete logic from the version at the subscriber. Some customers either manually modify the procedure on the subscriber side or add it to a post_snapshot script for sp_addpublication.

This accomplishes a couple of things:
1. you don't fill your distribution db with millions of unneeded records
2. you don't increase latency by blocking other commands with the millions of changes

Just don't ever reinit!

(Greg Y)
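As a rough sketch, replicating the execution of a hypothetical purge procedure might look like this (the publication, procedure and schema names below are placeholders):

```sql
-- On the publisher: add the stored procedure as an article and replicate
-- its EXECUTION rather than the individual row changes it makes.
-- @type = 'serializable proc exec' only forwards executions that happen
-- inside a serializable transaction; 'proc exec' forwards all of them.
EXEC sp_addarticle
    @publication = N'MainPub',            -- placeholder publication name
    @article = N'usp_PurgeOldRows',       -- placeholder procedure name
    @source_object = N'usp_PurgeOldRows',
    @type = N'serializable proc exec';

-- On the subscriber: replace the procedure body so the delete never runs,
-- e.g. ALTER PROCEDURE dbo.usp_PurgeOldRows AS RETURN;
```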

 
How can I find numerical info about the transactions remaining to be sent?
You could use the view "msdistribution_status" in the distribution database or the system stored proc "sp_replmonitorsubscriptionpendingcmds". The latter is used by the Replication Monitor itself.
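For example, the stored procedure can be run against the distribution database like this (the server, database and publication names are placeholders):

```sql
-- Returns the pending command/transaction counts and an estimate of the
-- time needed to apply them, for one subscription.
-- Run in the distribution database.
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher = N'PUBSERVER',        -- placeholder server name
    @publisher_db = N'pubdb',
    @publication = N'MainPub',
    @subscriber = N'SUBSERVER',
    @subscriber_db = N'subdb',
    @subscription_type = 0;           -- 0 = push, 1 = pull
```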
 
How can I ensure identity columns are treated correctly for DR purposes?
If you include the 0x04 value in @schema_option for sp_addarticle, identity columns are replicated as identity columns - this was introduced in SQL Server 2000 SP4. It isn't exposed in SSMS (one of my "favourite" bugbears is that not all the article properties are properly available in the GUI!) but can be set in sp_addarticle or sp_changearticle. Having the identity column on the subscriber is just what you want for a DR setup, so it can be used after failover. What we also want is for the identity values to be incremented normally on the subscriber during synchronization, so that in a DR scenario we don't have to laboriously use DBCC CHECKIDENT afterwards to reseed all the identity columns. So, how to set this up?

Well, after changing the value of @schema_option as above, set @identityrangemanagementoption to 'manual' in sp_addarticle.

However, note that in SQL Server 2005 the creation of the transactional publication with these values will automatically mark the identity column as NOT FOR REPLICATION (NFR) on the publishing database. If it is left with this setting, it will also have this setting in the snapshot and on the subscriber. This setting isn't there to allow identity inserts during synchronization - it is to ensure that replication's inserts don't increment the identity value, which is the opposite of what we are aiming at. When it comes to a failover, the replicated inserts won't have changed the identity value and you'll then get PK violation errors when adding rows at the subscriber. What we need to do is ensure that the NFR setting is not transferred into the snapshot. There's an easy way to do this - use the system stored procedure sys.sp_identitycolumnforreplication to reset the column on the publisher so it no longer has the NFR setting. This must be done on the publisher BEFORE creating the snapshot. After that you can initialize as normal.

One caveat - after setting things up like this, having SSMS create the script annoyingly misses out the @identityrangemanagementoption being set to "manual" so the script needs manually editing before being run again.
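Put together, the setup might be sketched like this (publication and table names are placeholders, and the @schema_option shown is just the key bit, not a complete set of options):

```sql
-- Sketch only. @schema_option must include the 0x04 bit (replicate the
-- identity property); OR it into whatever other schema options you use.
EXEC sp_addarticle
    @publication = N'MainPub',                 -- placeholder
    @article = N'Orders',                      -- placeholder
    @source_object = N'Orders',
    @schema_option = 0x04,                     -- plus your other option bits
    @identityrangemanagementoption = N'manual';

-- Before generating the snapshot, clear the NFR flag that publication
-- creation placed on the publisher's identity column (0 = NFR off).
DECLARE @objid int;
SET @objid = OBJECT_ID(N'dbo.Orders');
EXEC sys.sp_identitycolumnforreplication @objid, 0;
```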
 
How can I write a script to show which commands are waiting to be downloaded?
To find numerical summaries, there is the view MSdistribution_status and the system stored procedure sp_replmonitorsubscriptionpendingcmds. To read the actual commands there is the procedure sp_browsereplcmds. This returns all commands but takes an optional parameter @xact_seqno_start. The script below shows how this parameter can be used to get just the pending commands, once the subscriber database and publication names are replaced with your own.

declare @xact_seqno varbinary(16)
select @xact_seqno = max(xact_seqno)
from MSsubscriptions
inner join MSpublications
on MSpublications.publication_id = MSsubscriptions.publication_id
inner join MSdistribution_history
on MSdistribution_history.agent_id = MSsubscriptions.agent_id
where subscriber_db = 'testsub'
and publication = 'testtrans'

declare @str varchar(255)
set @str = master.dbo.fn_varbintohexstr (@xact_seqno)
set @str = left(@str, len(@str) - 8)

if object_id('tempdb..#trancommands') is not null drop table #trancommands

create table #trancommands
(xact_seqno varbinary(16) null,
originator_srvname sysname null,
originator_db sysname null,
article_id int null,
type int null,
partial_command bit null,
hashkey int null,
originator_publication_id int null,
originator_db_version int null,
originator_lsn varbinary(16) null,
command nvarchar(1024) null,
command_id int)

insert into #trancommands
exec sp_browsereplcmds @xact_seqno_start = @str
select * from #trancommands where xact_seqno > @xact_seqno

 
Why is a complete snapshot being generated when a new article is added to my publication on SQL 2005?
This is expected behaviour if you have a merge or snapshot publication. For a transactional publication, however, a snapshot of all articles will only be generated if the immediate_sync publication property is set to true. Typically, immediate_sync is set to true if you allowed anonymous subscriptions when creating the publication through the Create Publication wizard. To prevent the complete snapshot, run the script below after replacing the publication name with your own (allow_anonymous must be turned off before immediate_sync can be):

EXEC sp_changepublication
@publication = 'MainPub',
@property = N'allow_anonymous',
@value = 'false'
GO
EXEC sp_changepublication
@publication = 'MainPub',
@property = N'immediate_sync',
@value = 'false'
GO
 
What are the Pros and cons of restarting the log reader agent?
Sometimes under extremely high load you will get deadlocking between the log reader agent and the distribution clean up agent. In this case, stopping the log reader agent to let the distribution clean up agent do its job will alleviate the problem (using a remote distributor is recommended in such cases). You can also bounce the log reader agent when you want to switch agent profiles. If you do stop the log reader agent, latency will obviously increase, and if you stop it for a significant time, the commands' age might exceed the retention period. Also, the transaction log can't be truncated past transactions the log reader agent hasn't yet processed, which means the log might balloon in size.
 
Why has my SQL 2000 log reader agent failed with the message: "No such interface"?
You need to re-register your log reader agent by using "regsvr32 logread.exe". You might also have to register the entire contents of "C:\Program Files\Microsoft SQL Server\80\Com".
 
Not all my logreaders start up in SQL Server 2005 - what can I do?
Increase the max_worker_threads setting in the syssubsystems table of the msdb database.
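A sketch of the change (the value 200 is purely illustrative; SQL Server Agent needs a restart to pick it up):

```sql
-- Raise the worker thread cap for the LogReader subsystem so that more
-- log reader agents can run concurrently.
-- Restart SQL Server Agent after making the change.
UPDATE msdb.dbo.syssubsystems
SET max_worker_threads = 200          -- illustrative value
WHERE subsystem = N'LogReader';
```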
 
What does this mean: "error 14100: Specify all articles when subscribing to a publication using concurrent snapshot processing"?
If you add a new table to an existing publication using sp_addarticle and then try to subscribe to that newly added article from an existing subscription by using sp_addsubscription, you might get the error above. This applies when the existing publication was set up with the concurrent snapshot option, and means that you can't synchronize subscriptions for such publications without a complete resynchronization. There are 2 unofficial workarounds: (a) you can circumvent the check by specifying @reserve = 'internal' when you add the subscription for the new article, and the snapshot agent should generate a snapshot for the new article after that, or (b) you could change the immediate_sync property in syspublications to 0 (see sp_changepublication). More official workarounds include changing the sync_method from 'concurrent' to either 'database snapshot' (Enterprise Edition only in SQL Server 2005) or 'native' (which locks the tables during snapshot generation). Changing the sync_method will force a reinitialization of all your subscriptions at that point. Alternatively you could create another publication and use that instead.
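Workaround (a) might be sketched as follows (server, database and publication names are placeholders; @reserve = 'internal' is undocumented, so use it with care):

```sql
-- Subscribe only the newly added article from the existing subscription,
-- bypassing the "specify all articles" check.
EXEC sp_addsubscription
    @publication = N'MainPub',        -- placeholder
    @article = N'NewTable',           -- the newly added article
    @subscriber = N'SUBSERVER',       -- placeholder
    @destination_db = N'subdb',       -- placeholder
    @reserve = N'internal';           -- undocumented: circumvents the check
```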
 
I have a replication setup which works fine in SQL Server 2000 but why is it that when the same setup is used in SQL Server 2005 I get "Incorrect syntax near ')'"?
The problem that you encountered probably has to do with a change of default behaviour from SQL 2000 to SQL 2005: a timestamp column is now replicated as a timestamp instead of binary(8), and the custom procedure generation logic simply cannot handle the case where a timestamp column that is part of the primary key is replicated as a timestamp. Taking a few steps back to look at the situation at a higher level, transactional replication simply cannot provide any reasonable behaviour if a timestamp column in a primary key is replicated as a timestamp column, since the timestamp values will be different between the publisher and the subscriber, and transactional replication is based on the assumption that primary key values are the same at the publisher and the subscriber. Luckily, you can work around the problem by disabling the 0x08 schema option (or setting the 'Convert timestamp to binary' option to true in the article property sheet); this basically gives you the same behaviour as you had in SQL 2000.
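As a sketch, the 0x08 bit could be cleared for an existing article like this (the publication and article names are placeholders, and the hex literal is purely illustrative - substitute your current schema_option with the 0x08 bit removed):

```sql
-- Set schema_option to your existing value with the 0x08 bit cleared, so
-- timestamp columns replicate as binary(8) (the SQL 2000 behaviour).
-- Changing a schema option invalidates the existing snapshot, hence
-- @force_invalidate_snapshot = 1.
EXEC sp_changearticle
    @publication = N'MainPub',            -- placeholder
    @article = N'MyArticle',              -- placeholder
    @property = N'schema_option',
    @value = N'0x00000000080350D7',       -- illustrative: current bits minus 0x08
    @force_invalidate_snapshot = 1;
```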
 
Why can't I truncate the transaction log? After restoring a database to another server, when I subsequently try to shrink the log I get the following error: "The log was not truncated because records at the beginning of the log are pending replication"?
Before truncating the log, you can execute sp_repldone. In cases where this is not enough, you might have to set up this database as a transactional publisher with a dummy publication before executing sp_repldone, then remove the publication afterwards.
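The usual form of the sp_repldone call to mark all pending transactions as distributed is shown below - be certain no subscriber still needs these commands before running it:

```sql
-- Marks everything in the log as replicated so the log can be truncated.
-- Only safe when no subscriber still needs the pending transactions.
EXEC sp_repldone
    @xactid = NULL,
    @xact_seqno = NULL,
    @numtrans = 0,
    @time = 0,
    @reset = 1;
```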
 
How can I get useful replication information from DBCC OPENTRAN?
This is my take on the DBCC OPENTRAN:

If your log reader is keeping up with what's in the transaction log, the 'oldest non-distributed LSN' is (0:0:0):

Oldest distributed LSN : (10:384:4)
Oldest non-distributed LSN : (0:0:0)

If it's not able to keep up or is disabled, the output will be in the following format:

Oldest distributed LSN : (10:388:4)
Oldest non-distributed LSN : (10:390:1)

If you have an open transaction, the format of the output will include a section at the top:

Oldest active transaction:
SPID (server process ID) : 55
UID (user ID) : 1
Name : mytran
LSN : (10:391:1)
Start time : May 16 2005 10:10:28:920AM

Replicated Transaction Information:
Oldest distributed LSN : (10:388:4)
Oldest non-distributed LSN : (10:390:1)

Using this, you can use DBCC INPUTBUFFER(55) to find the open transaction's TSQL, and then decide to kill it if necessary.

The numbers in brackets - (10:388:4) - are the log sequence numbers. You can run "select * from ::fn_dblog(null,null)" to have a look at the list of these, or for more details you can view them in LogExplorer.
 
How can I avoid this error in SQL Server 2000: "Violation of Primary Key constraint 'PK__xxx'. Cannot insert duplicate key in object '#{UniqueNumber}'"?
You need to apply a new patch obtainable here. There are reports that this step alone is not sufficient: because the subscription_seqno parameter is actually of type varbinary(16) and not varchar(16), you may also need to change the column data type to varbinary(16), after which it works.
 
How can I dynamically partition data in SQL Server 2000?
To do this you can use a transformable subscription. Transformable Subscriptions use DTS packages to transform the data, and a separate package may be created for each subscriber. When creating the publication, be sure to select 'Show Advanced Options' and 'Yes, Transform the Data'. Once the publication is created, create a DTS package to do the transformation. Due to a peculiar quirk of Enterprise Manager, this option is available under the publications folder of the published database, but not the Replication Monitor, Publishers folder. So, right-click on the publication and select Define Transformation of Published Data. In this wizard, a Data Driven Query is created. Make sure this query uses VBScript and modify the script to filter the rows:

if DTSSource("RegionID") = 1 then
    Main = DTSTransformStat_OK
else
    Main = DTSTransformStat_SkipRow
end if
 
What happens when a transaction fails at the publisher - does it still run at the subscriber?
Have a look at this article.
 
How can I read the transactions for TR in non-binary format?
These transactions exist in the transactions table MSrepl_commands: use sp_browsereplcmds to view them. In the case of a queue, use sp_replqueuemonitor to read the MSreplication_queue table.
 
How can I have an update replicated as an update - because when I update a record, the transactional replication implements a delete followed by an insert on the subscriber?
This applies to updates of columns which are part of a unique index and is known as a deferred update. You'll know when it's happening as the procedure sp_MSupdtablename never executes on the subscriber, only the sp_MSdeltablename and sp_MSinstablename procedures (although this could also occur if the snapshot commands tab shows 'NONE' for the update command!). Implementing trace flag 8207 will replicate the update as an update on the subscriber, but only if it is a singleton update. See this KB Article for more info.
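The trace flag can be enabled globally at runtime, or permanently via a startup parameter:

```sql
-- Enable trace flag 8207 for all connections: singleton updates to columns
-- in a unique index then replicate as UPDATEs rather than DELETE + INSERT.
DBCC TRACEON (8207, -1);
-- To make it permanent, add -T8207 to the SQL Server startup parameters.
```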
 
How can I replicate users?
Users, and other such administrative objects can't be replicated because we can't replicate system tables. You'll need to use a pre-snapshot script if you want the users to exist at the subscriber and have them as object owners.
 
How can I prevent deletes from going to the subscriber?
If you're setting up the publisher then it's fairly straightforward - just change the command in the table article properties to 'NONE'. If your publisher is already set up, changing this property will cause a reinitialization of all subscribers. If this is just a temporary measure and you don't want to reinitialize, you could alter the delete stored procedure on the subscribers to prevent the delete.
 
When I create an immediate updating subscriber, it fails with the error message "Login failed for 'sa'" - why?
Subscribers use remote procedure calls to connect to the publisher. By default, the login is sa and a blank password, which rarely works! Use sp_link_publication with @security_mode = 2 to set this up correctly. The other arguments for this procedure are listed in BOL. This is now in a MS article.
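A sketch of the call, run at the subscriber (server, database and publication names are placeholders):

```sql
-- Configures how the immediate-updating triggers at the subscriber connect
-- back to the publisher, instead of the default sa with a blank password.
EXEC sp_link_publication
    @publisher = N'PUBSERVER',      -- placeholder
    @publisher_db = N'pubdb',       -- placeholder
    @publication = N'MainPub',      -- placeholder
    @security_mode = 2;             -- use an existing linked server login
```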
 
What does this error message mean: "Could not find stored procedure 'sp_MSupd_Region'"?
If you used @sync_type = 'None' in sp_addsubscription, or in the subscription wizard's Initialize Subscription page you selected 'No, the subscriber already has the schema and data', you may get this error. The system tables for replication will be created on the subscriber in either case, but you'll need to manually create the custom stored procedures on the subscriber, otherwise you'll get the above error when you change a row on the publisher. Use sp_scriptpublicationcustomprocs (assuming you have >= SP1) to generate a script which creates the procedures, and run this script on the subscriber.
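For example, run in the publication database (the publication name is a placeholder), then save the output and execute it at the subscriber:

```sql
-- Generates CREATE PROCEDURE statements for the custom article procedures
-- (sp_MSins_/sp_MSupd_/sp_MSdel_ ...) to be run on the subscriber.
EXEC sp_scriptpublicationcustomprocs @publication = N'MainPub';
```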
 
Which schema changes are allowed to a publication in SQL Server 2000?
(1) Changing the width of a column: not allowed - needs reinitialization, or the combination of sp_repladdcolumn and sp_repldropcolumn mentioned here.
(2) Dropping a constraint on the published table: allowed for a check constraint, as this can be dropped at the publisher, but the drop is not replicated to the subscriber.
(3) Creating a new constraint on the published table: allowed but not replicated.
(4) Dropping an index on the published table: allowed but not replicated.
(5) Creating a new index on a published table: allowed but not replicated.
For those actions allowed on the publisher but not replicated, you could use sp_addscriptexec to apply the same change at the subscribers.