When it comes to Infrastructure discovery, I feel like there are two elephants in the room. The first would be ServiceNow, which I've already done some research on (above). ServiceNow is running about $1B of revenue a year, most of which is probably from their CMDB and Ticketing tools. My guess (and it is only my guess!) is that the Discovery tool portion is a relatively small piece of their business.
BMC Software makes a number of products, and posts about $2B a year in revenue. Most germane to this discussion is their Atrium Discovery and Dependency Mapping (ADDM) software. One of my customers uses this ADDM tool to run discovery on their network and server assets. I've seen the data culled from this tool, and it is a rich dataset, showing dependencies between Hosts and VMs, information on the number of Physical CPUs, Physical Cores, Threads/Core, and Total Logical Threads, as well as installed memory in GB. Their data does not map servers to applications, however; their feedback is that building those relationships takes considerable effort. They could do it, but it would take time and resources they don't have right now.
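To make those CPU fields concrete, here is a minimal sketch of what one of those host records might look like and how the counts relate arithmetically. The field names and record structure are my own invention for illustration, not BMC's actual schema.

```python
# Hypothetical ADDM-style host record -- field names are illustrative,
# not BMC's actual schema.
from dataclasses import dataclass

@dataclass
class HostRecord:
    hostname: str
    physical_cpus: int       # sockets
    cores_per_cpu: int       # physical cores per socket
    threads_per_core: int    # e.g. 2 with hyper-threading enabled
    memory_gb: int
    vm_guests: list[str]     # hosted VMs, per the dependency mapping

    @property
    def total_logical_threads(self) -> int:
        # Total Logical Threads = Physical CPUs x Cores/CPU x Threads/Core
        return self.physical_cpus * self.cores_per_cpu * self.threads_per_core

host = HostRecord("esx-prod-01", physical_cpus=2, cores_per_cpu=10,
                  threads_per_core=2, memory_gb=256,
                  vm_guests=["vm-app-01", "vm-db-01"])
print(host.total_logical_threads)  # 2 sockets x 10 cores x 2 threads = 40
```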
Here is a whitepaper from BMC about this product. http://documents.bmc.com/products/documents/18/60/451860/451860.pdf
I also found this great video. https://youtu.be/***_mI7t0cQ It is slow-paced, but it is helpful because it discusses the architectural approach for an ADDM deployment. There are plenty of videos out there that show people clicking around the UI, but for this thread I am more curious about the underlying requirements, how the tool works, and what decisions typically come up during implementation.
Also very interesting is the fact that BMC has an ADDM module specifically for Storage discovery. I've had multiple clients express that getting attribute information on their storage devices and logical pools/LUNs can be difficult and time consuming. Administrators may need to log into each device and run reports manually. It looks like BMC heard this, and responded with a product.
This white paper gives some high level information: http://documents.bmc.com/products/documents/92/57/459257/459257.pdf
I found this video very informative: https://youtu.be/4gBjymxPQx4 It does a great job of exploring the data returned by their Storage Discovery tool. They claim it can work across a heterogeneous storage environment and report back storage pools, volumes, and the associations between these and the compute resources (ESX hosts, in the example in the video). I suppose this could be used to drill down all the way from a business application, to server, to storage volume, to storage pool, to physical device. It reminds me of a time when I was working as a BI developer (not at Apptio) and it took IT several days to determine that our SQL Server was down due to a faulty FC HBA. I wonder how much faster that troubleshooting could have gone with a tool like this?
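To picture that drill-down, here is a toy dependency map and a walk from a business application down to the physical device. The node names and the flat dict structure are made up for illustration; this is not how ADDM stores its data.

```python
# Toy dependency map (app -> server -> volume -> pool -> array).
# All names are invented for illustration; not ADDM's actual data model.
dependencies = {
    "app:billing":        ["server:sql-prod-01"],
    "server:sql-prod-01": ["volume:lun-042"],
    "volume:lun-042":     ["pool:gold-pool-1"],
    "pool:gold-pool-1":   ["array:vnx-5400-dc2"],
}

def drill_down(node: str, depth: int = 0) -> None:
    """Print the dependency chain beneath a node, one level per line."""
    print("  " * depth + node)
    for child in dependencies.get(node, []):
        drill_down(child, depth + 1)

drill_down("app:billing")
```

In the HBA story above, a chain like this would have pointed straight from the affected SQL Server to the suspect storage path, rather than leaving IT to reconstruct it by hand over several days.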
One of my clients trialed this Storage Discovery tool from ADDM last year at one of their Data Centers. They said it met their needs, except that it did not report the number of disk drives and the total raw capacity per storage array broken out by drive interface type (Fibre Channel, SAS, SATA, and/or SSD).
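For what it's worth, the report they were after is a simple roll-up once a per-drive inventory exists. Here is a sketch with invented inventory rows; the discovery tool would be the thing supplying them.

```python
# Sketch of the report my client wanted: drive counts and raw capacity per
# array, broken out by drive interface type. The inventory rows are made up.
from collections import Counter, defaultdict

drives = [
    # (array, interface, raw capacity in TB)
    ("vnx-5400-dc2",  "SAS",           1.2),
    ("vnx-5400-dc2",  "SAS",           1.2),
    ("vnx-5400-dc2",  "SSD",           0.4),
    ("vmax-100k-dc2", "Fibre Channel", 0.6),
]

counts = Counter()
capacity_tb = defaultdict(float)
for array, interface, tb in drives:
    counts[(array, interface)] += 1
    capacity_tb[(array, interface)] += tb

for array, interface in sorted(counts):
    print(f"{array:14} {interface:14} drives={counts[(array, interface)]:3} "
          f"raw={capacity_tb[(array, interface)]:.1f} TB")
```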
In the context of Apptio: a strong discovery program like this can help us drive wonderfully detailed Application TCO! We could potentially link a particular storage asset and its depreciation, administration, and maintenance costs across storage pools, which are then allocated to servers, which themselves are ultimately allocated across one or more applications. This enables a defensible, dynamic allocation methodology. With a regularly running discovery tool, these allocations stay current as the Infrastructure underlying the application layer is rationalized and optimized.
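To make that chain concrete, here is a minimal sketch of the allocation flow with invented numbers: an array's monthly cost flows to pools by capacity share, pools flow to servers by consumed GB, and servers flow to applications with an assumed even split. Apptio's actual allocation engine is of course far richer than this.

```python
# Minimal allocation sketch: array cost -> pools -> servers -> applications.
# All figures and names are invented; the even per-server app split is an
# assumption, not how Apptio actually weights allocations.
array_monthly_cost = 10_000.0  # depreciation + admin + maintenance, USD

pool_capacity_gb = {"gold-pool-1": 6_000, "silver-pool-1": 4_000}
total_gb = sum(pool_capacity_gb.values())
pool_cost = {p: array_monthly_cost * gb / total_gb
             for p, gb in pool_capacity_gb.items()}

server_consumption_gb = {  # (pool, server) -> consumed GB
    ("gold-pool-1", "sql-prod-01"):   2_000,
    ("gold-pool-1", "sql-prod-02"):   4_000,
    ("silver-pool-1", "file-srv-01"): 4_000,
}
server_cost: dict[str, float] = {}
for (pool, server), gb in server_consumption_gb.items():
    pool_gb = sum(g for (p, _), g in server_consumption_gb.items() if p == pool)
    server_cost[server] = server_cost.get(server, 0.0) + pool_cost[pool] * gb / pool_gb

apps_on_server = {"sql-prod-01": ["billing"],
                  "sql-prod-02": ["billing", "crm"],
                  "file-srv-01": ["crm"]}
app_cost: dict[str, float] = {}
for server, apps in apps_on_server.items():
    for app in apps:
        app_cost[app] = app_cost.get(app, 0.0) + server_cost[server] / len(apps)

print(app_cost)  # {'billing': 4000.0, 'crm': 6000.0}
```

The point of the sketch is the shape, not the numbers: when the discovery tool refreshes the consumption and dependency data, the same allocation logic re-runs and the Application TCO shifts with the Infrastructure underneath it.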