Hi Ahmed,
Would you care to mention which order of magnitude you consider a 'large volume of documents'? 500 million documents are pretty common in FileNet installations and definitely not a problem, be it on DB2, Oracle, or SQL Server (I have no experience with PostgreSQL).
As usual, it depends: in a simple data model, DocVersion will be the predominant table in size and importance, but as soon as you enable auditing (the EVENT table) or document lifecycles, other factors come into play.
In general, CPE itself offers very few tuning options (indexes come to mind), but apart from what the compatibility matrix forbids, you are free to apply database-level tuning as you deem necessary.
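As a minimal sketch of what I mean by database-level tuning: if searches filter on a custom property, an index on the corresponding DocVersion column can help. The database name OBJSTORE and the column u_invoice_number are assumptions for illustration; check the actual column name CPE generated for your property before running anything like this.

```shell
# Hypothetical example: index a frequently searched custom property
# column on the object store's DocVersion table, then refresh the
# optimizer statistics so DB2 can actually use the new index.
# OBJSTORE and u_invoice_number are assumed names, not real ones.
db2 connect to OBJSTORE
db2 "CREATE INDEX ix_u_invoice_number ON DocVersion (u_invoice_number)"
db2 "RUNSTATS ON TABLE DocVersion WITH DISTRIBUTION AND DETAILED INDEXES ALL"
db2 connect reset
```

Note that indexes on DocVersion can also be defined from the CPE side (property definitions can be marked as indexed), which keeps the schema change visible to the platform; the direct DDL above is the low-level equivalent.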
Hope this helps,
/Gerold
------------------------------
Gerold Krommer
Managing Director
The Document Content Profesionals, G.m.bH
Wien
+436602408515
------------------------------
Original Message:
Sent: Sun October 20, 2024 07:56 AM
From: Ahmed ElHussein
Subject: How does IBM FileNet handle the growth in DB2 databases?
Hello,
We are using IBM FileNet to manage a large volume of documents, and over time, the associated metadata stored in our DB2 operational database has grown significantly. As we continue to archive more data, the database size is increasing, which could affect performance since DB2 doesn't handle massive amounts of rows efficiently in operational environments.
I would like to understand how IBM FileNet handles this situation. Are there any best practices or recommended strategies to optimize or manage metadata growth in the DB2 database? Does FileNet offer any built-in mechanisms to address potential performance issues caused by the large volume of metadata?
Any advice on managing database size or performance tuning would be greatly appreciated.
Thank you in advance!
------------------------------
Ahmed ElHussein
------------------------------