This post is part 3 of 4 of a series of articles addressing many of the common misconceptions associated with agent-based asset management solutions. The four parts in this series are:
Part 3: Data Collection
Another misconception we occasionally hear is that agent-less solutions can scan and collect information from more machines, more quickly. I’m not entirely certain how this assumption came to be, but this capability is in no way related to the presence or absence of an agent; rather, it depends on the network protocols and communication methods (e.g., SNMP, Active Directory lookup) used to scan the network. Beyond that, the factors influencing data collection speed are identical for both methodologies: how much data is being collected, the nature of that data, network speed, and the number of machines.
When it comes to the nature of data being collected, people who are deeply familiar with IT asset management technologies will generally agree that the presence or absence of an agent should not, in and of itself, influence the nature or quality of computer inventory data collected. (That said, the way an IT asset management solution recognizes installed applications may rule out certain agent-less technologies. For example, if recognition depends on a complete listing of executable files, remote access via WMI is probably not a practical solution.)
However, one situation in which I believe an agent is nothing short of essential is when comprehensive and detailed software usage data is needed. Admittedly, users of our own products cannot collect usage statistics without deploying our agent; but this is a trade-off most of our customers are happy to make because it’s the only method (to my knowledge) capable of accurately “intercepting” and logging application launch and termination activity. Although it’s possible to collect some basic “frequency of use” data from the registry, this source of information is, at best, a generalization and, in practice, not accurate enough to rely on for critical licensing decisions. In theory, one conceivable approach for an agent-less solution would be to “poll” the running process list remotely.
The drawback to this approach is that the data would only be as accurate as the polling frequency. If the polling interval were very short (e.g., ten seconds), accuracy would be high; but since most users run their applications for longer than ten seconds, you’d be sending the same list of running processes back to the database six times per minute. If the interval were too long (e.g., ten minutes), accuracy would deteriorate because launch and termination times would effectively be rounded to the nearest ten minutes; worse, you’d entirely overlook sessions that were launched and terminated within the same polling interval.
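To make the trade-off concrete, here is a small simulation of the polling idea. It is purely illustrative (the function name and session data are mine, not any vendor’s API): a poller that wakes up every `interval` seconds can only observe a session at those ticks, so observed launch and termination times snap to the polling grid, and sessions shorter than one interval disappear entirely.

```python
import math

# Hypothetical sketch of remote process-list polling: a session is
# "seen" only at poll ticks (multiples of `interval`) that fall
# between its true launch and termination times.
def polled_view(sessions, interval):
    """Reconstruct (launch, terminate) sessions as a poller would see them.

    Observed times snap to the polling grid, and any session shorter
    than one polling interval can be missed entirely.
    """
    observed = []
    for launch, terminate in sessions:
        first_tick = math.ceil(launch / interval) * interval
        last_tick = math.floor(terminate / interval) * interval
        if first_tick > terminate:
            continue  # launched and terminated between polls: never seen
        observed.append((first_tick, last_tick))
    return observed

# True sessions, in seconds: two short ones and one long one.
sessions = [(12, 95), (130, 155), (200, 1400)]
print(polled_view(sessions, 10))   # [(20, 90), (130, 150), (200, 1400)]
print(polled_view(sessions, 600))  # [(600, 1200)] -- two sessions vanish
```

With a ten-second interval the reconstruction is close to the truth (at the cost of constant polling traffic); with a ten-minute interval the two short sessions are never observed at all, and the long one appears to run for half its real duration.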
The same limitations would make it impossible to “meter” or control the launch of applications for security, compliance, or other purposes. In contrast, an agent can provide very precise usage statistics because its continuous presence allows activity to be logged at the exact moment an application is launched and terminated. In addition, usage information for any given application is transmitted only twice: once upon launch, and once upon termination. This is especially important if network traffic is a concern.
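The agent-side approach described above can be sketched as follows. The class and method names here are illustrative inventions, not our product’s actual interface: the point is simply that an agent hooked into process start/stop notifications records exact timestamps and queues exactly two records per session, regardless of how long the application runs.

```python
import time

class UsageAgent:
    """Illustrative event-driven usage logger (hypothetical names)."""

    def __init__(self):
        self.outbox = []  # records queued for transmission to the server

    def on_launch(self, app, timestamp=None):
        # Called by an OS-level process-creation hook at the exact
        # moment the application starts.
        self.outbox.append(("launch", app, timestamp or time.time()))

    def on_terminate(self, app, timestamp=None):
        # Called by the matching process-exit hook.
        self.outbox.append(("terminate", app, timestamp or time.time()))

agent = UsageAgent()
agent.on_launch("excel.exe", 1000.0)
agent.on_terminate("excel.exe", 4600.0)

# A full hour of usage costs exactly two records of network traffic,
# with exact timestamps rather than times rounded to a polling grid.
assert len(agent.outbox) == 2
```

Compare this with the polling approach: a ten-second poller would have reported that same hour-long session 360 times over.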
All that said, I’d be interested in hearing from anyone who has been able to obtain meaningful and accurate usage data without an agent. If you have, we would by all means explore integrating this capability into our agent-less offering!
In my final post, I’ll discuss some of the myths related to the impact of agents on network bandwidth and end-users.