Hi Team,
We are using the Streams Export and Import operators extensively, which, as mentioned in the documentation, are implemented in C++.
Do the Export/Import operators use TCP/IP internally?
If yes:
Q1. Suppose a single job exports data to more than 3 downstream jobs, and the first job exports 10 records to them.
Each downstream job will receive its own copy of the data. Does the TCP layer also create a separate copy of the data for each downstream job and keep it in the TCP buffer?
Also, the congestion policy only takes effect once a connection is established. If a downstream job is restarted, will any data be lost? How does the exporting job make sure that there is no data loss after a restart of a downstream job?
Q2. Job 1 exports data with 30 columns and connects to 3 downstream jobs.
Ex: Downstream job one needs columns 1 to 20.
Downstream job two needs columns 21 to 28.
Downstream job three needs columns 29 to 30.
Which of the approaches mentioned below is advisable?
Approach 1:
Job 1 has a single Export operator and connects to all 3 downstream jobs. Each downstream job filters out the columns it needs after import.
Approach 2:
Job 1 creates three output streams containing only the expected columns:
O/p stream 1: 20 columns
O/p stream 2: 8 columns
O/p stream 3: 2 columns
and has 3 Export operators, each connecting to one individual downstream job.
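For illustration, Approach 2 might be wired up roughly as in the SPL sketch below. The stream names, schema names, and the export property values are made up for this example; the Export `properties` parameter and the matching Import `subscription` expression follow the standard Streams Export/Import pattern.

```
// Sketch of Approach 2 (hypothetical names throughout).
composite Job1 {
  graph
    // Upstream source producing all 30 attributes (placeholder)
    stream<FullSchema> All = SourceOp() {}

    // Narrow each output stream to just the attributes the
    // corresponding downstream job needs before exporting.
    stream<Cols1to20>  Out1 = Functor(All) {}
    stream<Cols21to28> Out2 = Functor(All) {}
    stream<Cols29to30> Out3 = Functor(All) {}

    // One Export operator per narrowed stream
    () as Exp1 = Export(Out1) { param properties : { feed = "cols_1_20" }; }
    () as Exp2 = Export(Out2) { param properties : { feed = "cols_21_28" }; }
    () as Exp3 = Export(Out3) { param properties : { feed = "cols_29_30" }; }
}

// A downstream job would then subscribe to just its feed, e.g.:
// stream<Cols1to20> In1 = Import() {
//   param subscription : feed == "cols_1_20";
// }
```

This sketch only shows the wiring; whether the reduced per-connection tuple size outweighs the cost of the extra Export operators is the question above.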
Thanks.
#OpenSourceOfferings #Streams #Support #SupportMigration