I am trying to identify a global performance issue on a system whose users suddenly complain about a drastic loss of performance, "with apparently NO change made to the system".
This concerns a series of heavy-load tasks that, according to the customer's 'feeling', used to take around 4 hours to complete and now take 17 hours.
I don't have any solid figures for the earlier response times and have to rely on 'impressions'.
Obviously, no one changed anything in the system's configuration; that I can believe, or not...
Analyzing a full day of activity with batches of onstat captures, I can see an idle session that starts at 9:20 AM and finishes by 12:30 PM,
and that (according to sqltrace, which was running during the whole period) does NOTHING, i.e. not a single query was caught by sqltrace.
Nevertheless, onstat -g cpu reports 258 seconds of CPU time for that session's sqlexec thread.
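For context, sqltrace here was enabled globally through the SQLTRACE onconfig parameter; the exact values below (ntraces, size) are illustrative, not the ones used on this system:

```
# onconfig fragment: trace all SQL statements, keeping up to 1000
# entries of 2 KB each (values are examples, tune to your workload)
SQLTRACE level=LOW,ntraces=1000,size=2,mode=global
```

The captured statements can then be inspected with onstat -g his.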
Is this expected?
Tracking onstat -g ath for that thread every 5 minutes shows the status 'cond wait norm', which is expected, but is it normal for the thread to consume that much CPU time?
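To turn the 5-minute manual checks into numbers, one could sample the thread's cumulative CPU seconds and print per-interval deltas, which timestamps exactly when the CPU is being burned. This is only a sketch: the awk field positions ($1 for the thread id, $5 for CPU time) are assumptions and must be adapted to the actual onstat -g cpu column layout on 14.10.FC2.

```shell
#!/bin/sh
# sample_cpu: print timestamped cumulative CPU seconds and the
# per-interval delta for one sqlexec thread, using onstat -g cpu.
# Field numbers below are ASSUMPTIONS; verify against your onstat output.
sample_cpu() {
    tid=$1
    interval=${2:-300}   # default: every 5 minutes
    prev=""
    while :; do
        # grab the cumulative CPU time for this thread id
        now=$(onstat -g cpu | awk -v t="$tid" '$1 == t { print $5 }')
        ts=$(date '+%H:%M:%S')
        if [ -n "$prev" ] && [ -n "$now" ]; then
            # CPU seconds consumed during this interval
            delta=$(awk -v a="$now" -v b="$prev" 'BEGIN { printf "%.2f", a - b }')
            echo "$ts tid=$tid cpu=$now delta=$delta"
        fi
        prev=$now
        sleep "$interval"
    done
}

# run only when a thread id is passed on the command line
if [ -n "${1:-}" ]; then
    sample_cpu "$@"
fi
```

If the deltas cluster in short bursts rather than accruing evenly, that points at periodic internal work (e.g. polling or network activity) rather than a steadily spinning thread.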
This is Informix 14.10.FC2
on Red Hat 6,
running under VMware ESX in a government cloud (I am not sure at all how dedicated the system resources are; as far as I can tell, everything is shared...).
Any clue ?
Data Management Architect and Owner / Begooden IT Consulting
KandooERP Founder and CTO
IBM Champion 2013,2014,2015,2016,2017,2018,2019,2020
Tel: +33(0) 298 51 3210
Mob : +33(0)626 52 50 68
Google Hangout: firstname.lastname@example.org
www : http://www.vercelletto.com