Typically we take in the server data, categorize it (e.g., physical, hypervisor, or virtual), route it to the corresponding object in the model based on server type, and then roll it up to the supported application using a separate feed that shows the app/server relationship.
Virtual machines (VMs) spin up and down more than once a month, and an application may not sit on one particular VM for the entire month. Yet, when we pull the data prior to doing the monthly loads, if an application shows as being associated with a particular VM at that point in time, it gets the full month's dollars for that VM.
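To make the current behavior concrete, here's a rough sketch in Python of the "snapshot at pull time gets the whole month" logic (all names and dollar amounts are made up for illustration):

```python
# Hypothetical monthly feeds: the app/server relationship as it looked
# at pull time, and each VM's full monthly cost.
app_server = {"app-A": ["vm-101", "vm-102"]}   # snapshot taken once per month
vm_cost = {"vm-101": 300.0, "vm-102": 150.0}   # monthly cost per VM

# Whichever app is associated at pull time receives the entire month's
# dollars for the VM, regardless of how long it actually sat there.
app_cost = {
    app: sum(vm_cost.get(vm, 0.0) for vm in vms)
    for app, vms in app_server.items()
}
print(app_cost)  # {'app-A': 450.0}
```

So even if app-A only landed on vm-101 on the last day of the month, it still absorbs the full $300.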
Does anyone get more granular? What if a customer or application owner complains that they were only on a given VM for two days, yet the application receives the cost of that VM for the entire month (given the monthly feeds and monthly reporting)?
Anyone run into that? What do you do? Do you then act like a cloud provider and maybe charge based on some sort of metering? Like ingest the app/server relationship data daily, and then arrive at a weight based on the number of days an application appeared to be associated with a particular VM? Seems like that would be a TON of rows...
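For what it's worth, here's a minimal sketch of what that daily weighting could look like: ingest the app/server relationship daily, count the days each app appeared on each VM, and allocate the VM's monthly cost proportionally. All names, costs, and the snapshot format are hypothetical:

```python
from collections import defaultdict

# Hypothetical daily snapshots: (day, vm, app) means the app appeared
# associated with that VM on that day of the month.
daily_snapshots = [
    (1, "vm-101", "app-A"),
    (2, "vm-101", "app-A"),
    (3, "vm-101", "app-B"),  # app-B takes over vm-101 on day 3
]

monthly_vm_cost = {"vm-101": 300.0}  # assumed monthly cost feed

def allocate_by_days(snapshots, vm_costs):
    """Weight each app's share of a VM's monthly cost by days of association."""
    days_per_pair = defaultdict(set)  # (vm, app) -> set of days seen together
    days_per_vm = defaultdict(set)    # vm -> all days any app was seen on it
    for day, vm, app in snapshots:
        days_per_pair[(vm, app)].add(day)
        days_per_vm[vm].add(day)

    allocation = defaultdict(float)
    for (vm, app), days in days_per_pair.items():
        weight = len(days) / len(days_per_vm[vm])
        allocation[app] += vm_costs.get(vm, 0.0) * weight
    return dict(allocation)

print(allocate_by_days(daily_snapshots, monthly_vm_cost))
# {'app-A': 200.0, 'app-B': 100.0} — app-A gets 2/3 of $300, app-B gets 1/3
```

On the row-count worry: daily app/server rows for, say, 10,000 VMs over a 31-day month is roughly 310,000 rows, and the weights can be pre-aggregated down to one row per VM/app pair before the monthly load, so the model itself never sees the daily grain.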
Would love to hear your thoughts, experiences, and/or suggestions - thank you!!