Hi Mike and Art,
Thanks a lot to both.
Rgds,
G
Art and Mike explained your situation very well. This is total nonsense, unless the goal is to consume more disk space for indexes and/or to make each query's plan harder for the query optimizer to decide.
Many times, heavy/complex indexes cost more than simple, efficient ones. The only mandatory rule here is the existence of primary keys and foreign keys, which are non-negotiable. For composite indexes, Mike's statement is the right one (put the most selective column first), but an index self-join somewhat compensates for a poorly selective leading column.
Once again, sqltrace will be the final judge and will provide indisputable facts: set sqltrace on (the low level is enough), and look at how many buffer pages, disk pages, and index pages are read to return the actual number of rows. The general idea is to compare the number of pages read vs. the actual number of rows returned. If tons of pages are read to return just a few rows, your indexes are either missing or inefficient. The goal is to avoid unnecessary reads, because unnecessary reads cost extra CPU time.
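As a rough sketch of the workflow above, tracing can be switched on through the sysadmin task() function and the results read back from the syssqltrace pseudo-table in sysmaster. The parameter values (number of trace buffers, buffer size) and the exact column names may vary by server version, so treat this as an illustration to adapt, not a recipe:

```shell
# Enable low-level SQL tracing for all sessions (run as user informix).
# Arguments: number of trace buffers, size of each buffer in KB,
# trace level (low/med/high), scope (global/user).
dbaccess sysadmin - <<'EOF'
EXECUTE FUNCTION task('set sql tracing on', 1000, 1, 'low', 'global');
EOF

# Compare pages read vs. rows returned per statement: statements with
# high page reads and few actual rows point at missing or weak indexes.
dbaccess sysmaster - <<'EOF'
SELECT sql_statement, sql_bfreads, sql_pgreads, sql_actualrows
  FROM syssqltrace
 ORDER BY sql_pgreads DESC;
EOF
```

Remember to turn tracing off ('set sql tracing off') once you have the figures, since tracing itself has a small overhead.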
A selective, efficient index will show few index pages read; a bad one will show many. Then you'll have to play with those figures to understand exactly what the engine does, and adjust your indexes until the numbers improve.
You can also look with onstat -g ppf, or its equivalent in sysmaster, to check whether the incriminated indexes are ever used.
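A minimal sketch of that check, assuming the sysptprof view in sysmaster (which maps partition profile counters to table and index names; column names may differ slightly by version). Index partitions whose read counters stay at zero between onstat -z resets are candidates for dropping:

```shell
# Per-partition profile counters, including index partitions.
onstat -g ppf

# Same data by name via sysmaster: partitions with bufreads = 0
# since the last counter reset (onstat -z) are never being read.
dbaccess sysmaster - <<'EOF'
SELECT dbsname, tabname, bufreads, bufwrites
  FROM sysptprof
 ORDER BY bufreads;
EOF
```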
Eric Vercelletto
Data Management Architect and Owner / Begooden IT Consulting
Board of Directors, International Informix Users Group
IBM Champion 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020
Tel: +33(0) 298 51 3210
Mob: +33(0)626 52 50 68
Skype: begooden-it
Google Hangout: email@example.com
Email: firstname.lastname@example.org
Web: http://www.vercelletto.com
Web: https://kandooerp.org