[Nagiosplug-help] check_ifoperstatus process utilization

Dominik Romaneschen d.romaneschen at douglas-informatik.de
Wed Aug 18 02:24:13 CEST 2004

Hi there,

maybe somebody can help me with some strange behavior of the
check_ifoperstatus plug-in for Nagios. We run Nagios 1.1 on Red Hat 9.0 in
a VMware virtual machine with the first version of the plug-in addon. I
have already tried the newer version of the plug-in addon, but I ran into
problems and had to restore the older one.

We have to check around 1000 routers in our VPN with automatic ISDN backup,
and check_ifoperstatus is supposed to check the interface status via SNMP.
Previously we checked the routers (without ISDN backup) with a ping, at a
system utilization of 20-50% and a check period of under 10 minutes. Now a
single check_ifoperstatus process needs 100% CPU for an average of 6
seconds!? As a comparison, I ran the same router query with a simple script
and snmp_query, and it took around 3 minutes without high system
utilization. Is this behavior normal? What I see as problematic is the
constant full utilization of the system, and because of our SLA we can't
get a status feedback every 10 minutes with the current configuration:
Nagios with check_ifoperstatus now needs about 25 minutes for a check period.
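For reference, the per-host work here boils down to a single ifOperStatus
poll, which could also be done with the compiled C plugin check_snmp instead
of the Perl script, avoiding the Perl interpreter start-up cost on every
check. A minimal sketch of such a command definition follows; the command
name, the community string "public", and passing the ifIndex as $ARG1$ are
my assumptions, not taken from the setup described above:

```
# Sketch of a Nagios 1.x command definition using the C-based check_snmp
# plugin instead of the Perl check_ifoperstatus script.
# "public" and the ifIndex in $ARG1$ are placeholder values.
define command{
        command_name    check_if_status_snmp
        command_line    $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o IF-MIB::ifOperStatus.$ARG1$
        }
```

Whether this is faster in practice would need to be measured, but it removes
the per-check Perl start-up overhead entirely.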

I have already compiled the Perl script check_ifoperstatus with the
ActiveState Perl Dev Kit, which sped Nagios up by a factor of 3 to the
values given above. I use the parallelization feature with 15 concurrent
checks (more is not possible), no host checks (I use check_dummy instead),
and I disabled logging for better performance. Does somebody know a better
solution for this?

Thanks for every reply,
