I wrote a Perl script to split the large file into 240,800-byte pieces; you would have to test your own system to find the optimum size for you. Then I wrote a Flow that splits the file, does a DIR command to get a list of all the files ending in .seq, and then invokes my main Flow with each of the 20 or more smaller files. I also used memory cleanup in my Flows each time I dropped a large StringList or RecordList.
Note that a Perl interpreter is included on most Unix systems. There are free Perl interpreters for Windows available at CNET.com.
Here is the script:
#!/usr/bin/perl -w
use strict;
# where's the input data?
my $infile = 'INBOUND.FILE';

# how many bases in each split?
my $splitsize = 240800;

open( IN, "$infile" )
    or die "Can't open input file '$infile': $!\n";

for( my( $pos, $data, $got ) = (1); !eof( IN ); $pos += $got ){
    # note: no gulping - instead ingestion via a small teaspoon
    if( defined( $got = read IN, $data, $splitsize ) ){
        # name each piece after the position of its last byte
        my $file = "INBOUND_" . ( $pos + $got - 1 ) . ".seq";
        open( OUT, ">$file" )
            or die "Can't open Inbound file '$file': $!\n";
        print( OUT ">$file\n$data\n" )
            or die "Can't write to Inbound file '$file': $!\n";
        close( OUT );
    } else {
        die "read on '$infile' failed: $!\n";
    }
}
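The DIR-and-invoke part of the workflow is done in a webMethods Flow, so it is not shown above. Purely for illustration, a rough command-line sketch of that loop in Perl might look like the following; process_file is just a hypothetical placeholder for calling the main Flow on one small file:

#!/usr/bin/perl -w
use strict;

# Hypothetical driver loop: list the INBOUND_*.seq pieces produced by
# the split script and hand each one to a processing step. In my setup
# this step is a Flow, not this script.
foreach my $file ( glob( "INBOUND_*.seq" ) ){
    print "Processing $file\n";
    process_file( $file );    # placeholder for invoking the main Flow
}

sub process_file {
    my( $file ) = @_;
    # ... processing of one small file would go here ...
}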