We continually strive to keep the IOpipe library as lightweight as possible. The default method of sending telemetry to IOpipe is synchronous, directly to our APIs, which run locally in every AWS region. This allows us to provide near-real-time data reporting (typically within 3 seconds).

If you're looking for a near-zero-overhead solution, we are building an alternative asynchronous method using AWS CloudWatch, which will incur some delay (up to 1-2 minutes). Give us a shout at support@iopipe.com or on Slack if you're interested in this option.

Below are the overhead findings from our internal tests as of July 1, 2017:

Node.js

  • Library size for core agent: 48KB including dependencies
  • Average duration added by IOpipe on cold-start invocations: 85ms
  • Average duration added by IOpipe on warm invocations: 27ms

Updated November 27, 2018. 

Python

  • Library size: 20KB + requests module dependency
  • Average duration added by IOpipe on cold-start invocations: 88ms
  • Average duration added by IOpipe on warm invocations: 24ms

Java
On a Lambda function with a memory limit of 512MB (the default) behind an API Gateway implementation, the cold-start overhead is about 2.5 seconds and the warm (non-cold-start) overhead is about 10ms to 20ms. This time will vary depending on your configuration and code.

  • Library size: 17KB + 11MB of dependencies
  • Average duration added by IOpipe on cold-start invocations: 2.5s
  • Average duration added by IOpipe on warm invocations: 10ms to 20ms

The added duration overhead occurs at the end of every invocation when the telemetry is sent to the IOpipe APIs.
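To illustrate where that overhead lands, here is a minimal sketch of the general pattern a monitoring wrapper follows. This is not the IOpipe agent's actual code; `wrapHandler` and `sendTelemetry` are hypothetical names, and the real agent collects far more than duration. The point is that the handler runs first, and the report goes out before the invocation returns.

```javascript
// Hypothetical sketch of a handler-wrapping monitor (not the real IOpipe agent).
// The telemetry send happens after the wrapped handler finishes, which is
// why the added duration appears at the end of every invocation.
function wrapHandler(handler, sendTelemetry) {
  return async function (event, context) {
    const start = Date.now();
    const result = await handler(event, context);
    // Overhead lands here: the report is sent once the handler has returned.
    await sendTelemetry({ durationMs: Date.now() - start });
    return result;
  };
}

// Example usage with a stand-in telemetry sender that just records reports.
const reports = [];
const wrapped = wrapHandler(
  async (event) => ({ statusCode: 200, body: event.name }),
  async (report) => { reports.push(report); }
);

wrapped({ name: 'test' }, {}).then((res) => {
  console.log(res.statusCode, reports.length); // 200 1
});
```

An asynchronous approach (like the CloudWatch option mentioned above) moves that final send off the invocation's critical path, trading reporting latency for lower per-invocation overhead.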

Updated November 27, 2018.

If you find any out-of-date info, errors, or just have any other questions, you can hit up our engineers and our community of users directly on Slack.
