Thanks Rob.
This is a quiet test box, so I'm not seeing any excessive latency.
I'm manually generating inserts into my 5 sample tables. However, when I collect the statistics and run an insert across all 5 tables, I only see
1 operation (relating to my most recently refreshed table).
In the subscription events, I do get the message below, which could be a factor somehow(?). From what I've read, I'm not really sure why it's not using the
single scrape (there is no other CDC instance active on this source server, nor indeed any other active subscriptions). I've also tried stopping and starting
the source CDC instance.
Here is what I see in the trace log for the CDC instance - I don't know if its not using the single scrape is an issue.
This is what happens when I have 1 table marked for refresh and the rest are subsequently not replicated...
156 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=225 MSG=Table TIMTEST.TEST_DATA_NEW5 refresh to TIMTEST has been confirmed by the target system. 7 rows were received, 8 rows were successfully applied, -1 rows failed.
157 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=1437 MSG=Table TIMTEST.TEST_DATA_NEW5 refresh to TIMTEST is complete. 7 rows were sent.
158 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=44 MSG=Mirroring has been initiated for table TIMTEST.TEST_DATA.
159 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=44 MSG=Mirroring has been initiated for table TIMTEST.TEST_DATA_NEW.
160 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=44 MSG=Mirroring has been initiated for table TIMTEST.TEST_DATA_NEW2.
161 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=44 MSG=Mirroring has been initiated for table TIMTEST.TEST_DATA_NEW3.
162 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=44 MSG=Mirroring has been initiated for table TIMTEST.TEST_DATA_NEW4.
163 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=44 MSG=Mirroring has been initiated for table TIMTEST.TEST_DATA_NEW5.
164 2022-02-05 10:41:16.659 SHAREDSCRAPE{75} com.datamirror.ts.scrapers.singlescrape.SingleScrapeThread processMessage() Considering sub command [TIMTEST] bookmark [Journal name JOURNAL Journal bookmark 000204;0282e817:00001a21:0003;0282e817:00001a21:0003;0282e817:00001a21:0003|]
165 2022-02-05 10:41:16.659 SHAREDSCRAPE{75} com.datamirror.ts.engine.MemoryManager computeMaxInMemoryGlobalBytes() runtimeMaxMemory=1073741824 reservedMemoryInBytes=732368896 maxNumInMemoryGlobalBytes=341372928
166 2022-02-05 10:41:16.659 SHAREDSCRAPE{75} com.datamirror.ts.eventlog.EventLogger logActualEvent() Event logged: ID=2925 MSG=Subscription TIMTEST can not use the single scrape staging store because it is too far ahead. It will run with a private log reader and log parser. Subscription bookmark: Journal name JOURNAL Journal bookmark 000204;0282e817:00001a21:0003;0282e817:00001a21:0003;0282e817:00001a21:0003| Staging store oldest bookmark: Journal name JOURNAL Journal bookmark 000204;0282e816:00006854:0003;0282e816:00006854:0003;0282e816:00006854:0003| Staging store newest bookmark: Journal name JOURNAL Journal bookmark 000204;0282e816:000068c8:0003;0282e816:000068c8:0003;0282e816:000068c8:0003|
167 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() Starting component SUBSCRIPTION STAGE; starting to allocate nodes and jobs.
168 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() powering on com.datamirror.ts.scrapers.mssqlscraper.MssqlScraperImpl; there are 1 jobs in this node.
169 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() Synchronizing with job com.datamirror.ts.scrapers.cdc.CDCLogScraperImpl$LogScraperPipelineJob
170 2022-02-05 10:41:16.659 TIMTEST SCRAPER MANAGER{170} com.datamirror.ts.util.TsThread run() Thread start
171 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() All jobs have completed start-up initialization. Starting to execute.
172 2022-02-05 10:41:16.659 TIMTEST SCRAPER MANAGER{170} com.datamirror.ts.engine.MemoryManager computeMaxInMemoryGlobalBytes() runtimeMaxMemory=1073741824 reservedMemoryInBytes=732375896 maxNumInMemoryGlobalBytes=341365928
173 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() Starting component PARSER STAGE; starting to allocate nodes and jobs.
174 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() powering on com.datamirror.ts.scrapers.mssqlscraper.MssqlLogParser; there are 1 jobs in this node.
175 2022-02-05 10:41:16.659 TIMTEST SCRAPER MANAGER{170} com.datamirror.ts.scrapers.cdc.ScraperManagerCDC execute() TIMTEST start bookmark: Journal name JOURNAL Journal bookmark 000204;0282e817:00001a21:0003;0282e817:00001a21:0003;0282e817:00001a21:0003|
176 2022-02-05 10:41:16.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() Synchronizing with job com.datamirror.ts.scrapers.cdc.LogParser$LogParserPipelineJob
177 2022-02-05 10:41:16.659 TIMTEST LOG PARSER{171} com.datamirror.ts.util.TsThread run() Thread start
178 2022-02-05 10:41:16.659 TIMTEST LOG PARSER{171} com.datamirror.ts.scrapers.cdc.TransactionQueues continueFromPosition() target commit position: 0282e817:00001a21:0003 target current position: 0282e817:00001a21:0003 journalId: 0
179 2022-02-05 10:41:16.659 TIMTEST LOG PARSER{171} com.datamirror.ts.scrapers.cdc.TxnQueuesPersistence continueFromPosition() Will not reuse the transaction queues for journal 0 because target restart position 0282e817:00001a21:0003 is >= the position of the last log record read at the time of shutdown: 0282e817:00001a21:0003. Persisted txn queue commit position: 0282e817:00001a21:0003 Persisted txn queue current position: 0282e817:00001a21:0003
180 2022-02-05 10:41:17.659 TIMTEST LOG PARSER{171} com.datamirror.ts.scrapers.cdc.LogParser continueFromPosition() Log reader start position 0282e817:00001a21:0003
181 2022-02-05 10:41:17.659 TIMTEST Source Data Channel{163} com.datamirror.ts.engine.component.AbstractPipelineComponent startComponent() All jobs have completed start-up initialization. Starting to execute.
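For what it's worth, the ID=2925 message reads like a simple bookmark-range check: the subscription's restart bookmark (0282e817:...) is newer than the newest bookmark the staging store holds (0282e816:...), so the shared scrape can't serve it and it falls back to a private log reader. A rough sketch of that decision as I understand it (my own illustration with made-up names, not CDC's actual code):

```python
# Illustrative only: a toy model of the ID=2925 decision, NOT the
# actual InfoSphere CDC implementation. Bookmarks are treated as
# opaque, lexicographically ordered strings for simplicity.

def can_use_single_scrape(sub_bookmark: str,
                          store_oldest: str,
                          store_newest: str) -> bool:
    """A subscription can only be served from the staging store if its
    restart position falls within the range the store still holds."""
    return store_oldest <= sub_bookmark <= store_newest

# Positions abbreviated from the trace above: the subscription is at
# 0282e817:..., but the store only holds up to 0282e816:..., so the
# subscription is "too far ahead" and gets a private log reader/parser.
sub = "0282e817:00001a21:0003"
oldest = "0282e816:00006854:0003"
newest = "0282e816:000068c8:0003"

print(can_use_single_scrape(sub, oldest, newest))  # False
```

If that model is right, it would explain why only the freshly refreshed table shows activity: the refresh advances that subscription's bookmark past what the staging store has scraped, and the private reader then starts from the refresh point.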
I do see from the source CDC instance trace file that, prior to this, the single scrape process starts up OK:
<<
Event logged: ID=2920 MSG=The single scrape component has started. The staging store is 0% full.
>>
#SupportMigration#GlobalDataOps#DataReplication#Support#DataReplicationCloudBeta#DataIntegration