Hi,
Our megabank project uses AWS S3 storage as a shared disk across 2 Datacap servers and 6 Rulerunner servers.
The client feeds about 1,000 multipage PDFs to the Datacap applications on a daily basis.
The applications create batch folders on AWS S3, split TIFFs from each PDF, and then run recognition and normalization actions on each TIFF.
We are getting lots of complaints from the client that the applications are either throwing exceptions
or hanging frequently.
These issues rarely happen when we test the applications using the shared disk built on local hard disks.
Have you ever built applications with a shared disk on AWS S3? Did you experience a lot of IO slowness and errors associated with it?
We recently performed a long-running copy test on both AWS S3 and the local drive. The result was that AWS S3 was about 80 times slower
than the local drive. Is it possible that the PDF split, recognize, normalize, and other actions cannot handle slow disk performance
and therefore throw errors or hang? Is there any way of knowing whether the AWS S3 slowness is the bottleneck?
------------------------------
dsakai
------------------------------