
Challenges are designed to help sharpen your TBM and Apptio skills. See title for rating: [+] Easy [++] Moderate [+++] Challenging
It's the FINAL TBM Pursuit of 2018. Challenges one, two, three, four, and five are in the books.
Here's your last chance to earn a piece of the pie, so put on your thinking caps, because Chris is ending with a bang.
A correct answer is worth 30 pts and a 2018 TBM Pursuit game piece.
Submit your answer by Dec 31.

@Chris Davidson says...

Recently I reviewed the portion of my ATUM-compliant Cost model which estimates fully burdened physical server costs.
Here's an excerpt of the relevant cost model objects:

At the IT Resource Towers (ITRT) object level, I'm using TBM Taxonomy v2.1 (details here).
My IT Resource Towers object is backed by a data table containing 41 rows which correspond to each tower and sub-tower combination listed in the taxonomy.
As expected, my Data Center tower cost allocates from ITRT to Data Centers object.
Then it allocates from Data Centers to Physical Server object, weighted by # CPU cores per server.
(So for instance, a server with 8 cores receives twice as much Data Centers cost as a server with 4 cores.)
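To make that weighting concrete, here's a quick back-of-the-envelope sketch of the math; all server names and dollar figures are made up for illustration:

```python
# Core-weighted allocation in miniature (hypothetical names and figures).
servers = {"svr-a": 4, "svr-b": 8, "svr-c": 8}  # server -> # CPU cores
data_centers_cost = 10_000.00  # hypothetical monthly Data Centers cost to spread

total_cores = sum(servers.values())
for name, cores in servers.items():
    allocated = data_centers_cost * cores / total_cores
    print(f"{name}: {cores} cores -> ${allocated:,.2f}")

# svr-b and svr-c (8 cores each) receive $4,000.00 apiece, exactly twice
# svr-a's $2,000.00 share -- the 2x relationship described above.
```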
Also as expected, my Compute tower cost allocates from ITRT to Physical Server object, weighted by # CPU cores per server.
In my screenshot above, I have separate allocation lines for Unix and Windows, but I could combine these if I wanted to by setting up an Operating System direct data reference between the two objects, which would keep cost from getting mixed between OSes.
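For the curious, here's a rough sketch of what that combined-but-partitioned allocation works out to. The Python below only simulates the weighting arithmetic with hypothetical servers and costs; it is not how Apptio implements data references:

```python
# Sketch of a combined Unix/Windows allocation that still keeps cost within
# each OS, analogous to an Operating System data reference.
# All names and figures are hypothetical.
from collections import defaultdict

# (server, OS, # CPU cores), plus Compute tower cost broken out by OS
servers = [("svr-u1", "Unix", 8), ("svr-u2", "Unix", 4), ("svr-w1", "Windows", 8)]
compute_cost_by_os = {"Unix": 6_000.00, "Windows": 3_000.00}

# Total cores per OS, so each OS's cost is weighted only within that OS
cores_by_os = defaultdict(int)
for _, os_name, cores in servers:
    cores_by_os[os_name] += cores

for name, os_name, cores in servers:
    allocated = compute_cost_by_os[os_name] * cores / cores_by_os[os_name]
    print(f"{name} ({os_name}): ${allocated:,.2f}")

# Unix cost spreads only across Unix servers, and Windows cost only across
# Windows servers -- no cross-OS mixing, just as the data reference intends.
```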
But three issues weigh heavily (pun intended) on my mind:
1. Server depreciation cost allocates from Fixed Asset Ledger to ITRT to Physical Server, but since I weight solely by # CPU cores (with no allocation filters), some of this depreciation cost is probably being allocated to servers which are already fully depreciated, unfairly driving up their estimated cost.
2. Server depreciation cost (again, originating from Fixed Asset Ledger object) winds up allocating across multiple servers as weighted by # CPU cores, but the number of cores seems somewhat unrelated to the amount of depreciation each server should receive. I have many 8-core servers whose initial purchase price was lower than some of my 4-core servers, for example.
3. My data center power bill correctly rolls up through the model, from ITRT to Data Centers to Physical Server object, and I understand that the majority of a server's power is used by its CPU(s). But different CPUs draw different amounts of power, and besides, my server CPUs aren't 100% active all month long. Weighting data center power cost by # CPU cores per server therefore doesn't seem fully defensible. (The numeric sketch after this list makes these weighting concerns concrete.)
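Here's a small back-of-the-envelope illustration of all three concerns, with hypothetical servers, book values, and utilization figures, comparing what pure core-count weighting hands each server against the attributes that arguably matter more:

```python
# How pure core-count weighting behaves across three hypothetical servers.
# All names and figures are made up for illustration.
servers = [
    # name,    cores, net_book_value, avg_cpu_utilization
    ("svr-old",    8,           0.00,                0.10),  # fully depreciated
    ("svr-mid",    4,       5_000.00,                0.60),
    ("svr-new",    6,      12_000.00,                0.40),
]
depreciation_cost = 9_000.00  # hypothetical monthly server depreciation
power_cost = 3_000.00         # hypothetical monthly data center power bill

total_cores = sum(cores for _, cores, _, _ in servers)
for name, cores, nbv, util in servers:
    depr = depreciation_cost * cores / total_cores
    power = power_cost * cores / total_cores
    print(f"{name}: depreciation ${depr:,.2f} (net book value ${nbv:,.2f}), "
          f"power ${power:,.2f} (avg CPU utilization {util:.0%})")

# Issues 1 & 2: svr-old has zero remaining book value yet receives the largest
# depreciation share ($4,000.00), purely because it has the most cores.
# Issue 3: svr-old sits mostly idle (10% avg utilization) yet also receives
# the largest power share, because core count ignores per-CPU power draw and
# actual utilization.
```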
What improvements can I make to my Cost model to address all three of the issues above?