New Upstream Snapshot - ruby-benchmark-ips

Ready changes

Summary

Merged new upstream version: 2.10.0+git20221021.1.ea645f6 (was: 2.7.2).
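
As context for reviewing the diff below, the most visible API addition between 2.7.2 and 2.10.0 is the order: :baseline option to compare!, which compares every report against the first one benchmarked rather than against the fastest. A minimal sketch of its use, adapted from the upstream RDoc example added in lib/benchmark/compare.rb:

require 'benchmark/ips'

Benchmark.ips do |x|
  # With order: :baseline the first report becomes the baseline that
  # every other report is compared against, instead of the fastest one.
  x.report('Reduce using block')   { [*1..10].reduce { |sum, n| sum + n } }
  x.report('Reduce using tag')     { [*1..10].reduce(:+) }
  x.report('Reduce using to_proc') { [*1..10].reduce(&:+) }
  x.compare!(order: :baseline)
end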

Resulting package

Built on 2022-12-17T12:42 (took 6m38s)

The resulting binary packages can be installed (if you have the apt repository enabled) by running one of:

apt install -t fresh-snapshots ruby-benchmark-ips-doc
apt install -t fresh-snapshots ruby-benchmark-ips

Lintian Result

Diff

diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 03d7177..0000000
--- a/.gitignore
+++ /dev/null
@@ -1,5 +0,0 @@
-log
-tmp
-pkg
-doc
-Gemfile.lock
diff --git a/.hoeignore b/.hoeignore
deleted file mode 100644
index 2bfebec..0000000
--- a/.hoeignore
+++ /dev/null
@@ -1,7 +0,0 @@
-.git/
-.travis.yml
-.hoeignore
-.gitignore
-Gemfile
-*.gemspec
-examples/
diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index 0512952..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,10 +0,0 @@
-language: ruby
-sudo: false
-rvm:
-  - 1.9.2
-  - 1.9.3
-  - 2.0.0
-  - 2.1.0
-  - 2.1.2
-  - 2.2.2
-  - ruby-head
diff --git a/Gemfile b/Gemfile
index 69329ce..208003e 100644
--- a/Gemfile
+++ b/Gemfile
@@ -1,6 +1,5 @@
 source 'https://rubygems.org'
 gem 'rake', '~> 10.5'
-gem 'hoe', '~> 3.14'
 
 gem 'minitest', :group => :test
 
diff --git a/History.txt b/History.md
similarity index 73%
rename from History.txt
rename to History.md
index 045a7d8..f9920e4 100644
--- a/History.txt
+++ b/History.md
@@ -1,13 +1,69 @@
-=== 2.7.2 / 2016-08-18
+### 2.10.0 / 2022-02-17
+
+* Feature
+  * Adds :order option to compare, with new `:baseline` order which compares all
+    variations against the first option benchmarked.
+
+### 2.9.3 / 2022-01-25
+
+* Bug fix
+  * All warmups and benchmarks must run at least once
+
+### 2.9.2 / 2021-10-10
+
+* Bug fix
+  * Fix a problem with certain configs of quiet mode
+
+### 2.9.1 / 2021-05-24
+
+* Bug fix
+  * Include all files in gem
+
+### 2.9.0 / 2021-05-21
+
+* Features
+  * Suite can now be set via an accessor
+  * Default SHARE_URL is now `ips.fastruby.io`, operated by Ombu Labs.
+
+### 2.8.4 / 2020-12-03
+
+* Bug fix
+  * Fixed hold! when results file does not exist.
+
+### 2.8.3 / 2020-08-28
+
+* Bug fix
+  * Fixed inaccuracy caused by integer overflows.
+
+### 2.8.2 / 2020-05-04
+
+* Bug fix
+  * Fixed problems with Manifest.txt.
+  * Empty interim results files are ignored.
+
+### 2.8.0 / 2020-05-01
+
+* Feature
+  * Allow running with empty ips block.
+  * Added save! method for saving interim results.
+  * Run more than just 1 cycle during warmup to reduce overhead.
+  * Optimized Job::Entry hot-path for fairer results on JRuby/TruffleRuby.
+
+* Bug fix
+  * Removed the warmup section if set to 0.
+  * Added some RDoc docs.
+  * Added some examples in examples/
+
+### 2.7.2 / 2016-08-18
 
 * 1 bug fix:
   * Restore old accessors. Fixes #76
 
-=== 2.7.1 / 2016-08-08
+### 2.7.1 / 2016-08-08
 
 Add missing files
 
-=== 2.7.0 / 2016-08-05
+### 2.7.0 / 2016-08-05
 
 * 1 minor features:
   * Add support for confidence intervals
@@ -24,9 +80,9 @@ Add missing files
   * Merge pull request #67 from benoittgt/master
   * Merge pull request #69 from chrisseaton/kalibera-confidence-intervals
 
-=== MISSING 2.6.0 and 2.6.1
+### MISSING 2.6.0 and 2.6.1
 
-=== 2.5.0 / 2016-02-14
+### 2.5.0 / 2016-02-14
 
 * 1 minor feature:
   * Add iterations option.
@@ -38,12 +94,12 @@ Add missing files
   * Merge pull request #58 from chrisseaton/iterations
   * Merge pull request #60 from chrisseaton/significance
 
-=== 2.4.1 / 2016-02-12
+### 2.4.1 / 2016-02-12
 
 * 1 bug fix:
   * Add missing files to gem
 
-=== 2.4.0 / 2016-02-12
+### 2.4.0 / 2016-02-12
 
 * 1 minor features
   * Add support for hold! and independent invocations.
@@ -78,7 +134,7 @@ Add missing files
   * Merge pull request #56 from chrisseaton/independence
   * Merge pull request #57 from chrisseaton/tighten-loop
 
-=== 2.3.0 / 2015-07-20
+### 2.3.0 / 2015-07-20
 
 * 2 minor features:
   * Support keyword arguments
@@ -92,7 +148,7 @@ Add missing files
   * Merge pull request #42 from kbrock/newer_travis
   * Merge pull request #43 from kbrock/non_to_s_labels
 
-=== 2.2.0 / 2015-05-09
+### 2.2.0 / 2015-05-09
 
 * 1 minor features:
   * Fix quiet mode
@@ -116,7 +172,7 @@ Add missing files
   * Merge pull request #29 from JuanitoFatas/feature/json-export
   * Merge pull request #26 from JuanitoFatas/feature/takes-symbol-as-report-parameter
 
-=== 2.1.1 / 2015-01-12
+### 2.1.1 / 2015-01-12
 
 * 1 minor fix:
   * Don't send label through printf so that % work directly
@@ -130,7 +186,7 @@ Add missing files
 * 1 PR merged:
   * Merge pull request #24 from zzak/simple-format-result-description
 
-=== 2.1.0 / 2014-11-10
+### 2.1.0 / 2014-11-10
 
 * Documentation changes:
   * Many documentation fixes by Juanito Fatas!
@@ -141,7 +197,7 @@ Add missing files
   * Formatting of large values improved (human vs raw mode)
     * Contributed by Charles Oliver Nutter
 
-=== 2.0.0 / 2014-06-18
+### 2.0.0 / 2014-06-18
 
 * The 'Davy Stevenson' release!
   * Codename: Springtime Hummingbird Dance
@@ -159,7 +215,7 @@ Add missing files
   *  Zachary Scott
   *  schneems (Richard Schneeman)
 
-=== 1.0.0 / 2012-03-23
+### 1.0.0 / 2012-03-23
 
 * 1 major enhancement
 
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..43c24c3
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,20 @@
+Copyright (c) 2015 Evan Phoenix
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+'Software'), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/Manifest.txt b/Manifest.txt
deleted file mode 100644
index 460bb72..0000000
--- a/Manifest.txt
+++ /dev/null
@@ -1,17 +0,0 @@
-.autotest
-Gemfile.lock
-History.txt
-Manifest.txt
-README.md
-Rakefile
-lib/benchmark/compare.rb
-lib/benchmark/ips.rb
-lib/benchmark/ips/job.rb
-lib/benchmark/ips/job/entry.rb
-lib/benchmark/ips/job/stdout_report.rb
-lib/benchmark/ips/report.rb
-lib/benchmark/ips/share.rb
-lib/benchmark/ips/stats/bootstrap.rb
-lib/benchmark/ips/stats/sd.rb
-lib/benchmark/timing.rb
-test/test_benchmark_ips.rb
diff --git a/README.md b/README.md
index 773ff53..3e5bc0a 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,14 @@
+# benchmark-ips
+
+* rdoc :: http://rubydoc.info/gems/benchmark-ips
+* home :: https://github.com/evanphx/benchmark-ips
+
 [![Gem Version](https://badge.fury.io/rb/benchmark-ips.svg)](http://badge.fury.io/rb/benchmark-ips)
 [![Build Status](https://secure.travis-ci.org/evanphx/benchmark-ips.svg)](http://travis-ci.org/evanphx/benchmark-ips)
 [![Inline docs](http://inch-ci.org/github/evanphx/benchmark-ips.svg)](http://inch-ci.org/github/evanphx/benchmark-ips)
 
-# benchmark-ips
-
 * https://github.com/evanphx/benchmark-ips
 
-* [documentation](http://rubydoc.info/gems/benchmark-ips)
-
 ## DESCRIPTION:
 
 An iterations per second enhancement to Benchmark.
@@ -155,6 +156,11 @@ This will run only one benchmarks each time you run the command, storing
 results in the specified file. The file is deleted when all results have been
 gathered and the report is shown.
 
+Alternatively, if you prefer a different approach, the `save!` command is
+available. Examples for [hold!](examples/hold.rb) and [save!](examples/save.rb) are available in
+the `examples/` directory.
+
+
 ### Multiple iterations
 
 In some cases you may want to run multiple iterations of the warmup and
@@ -180,11 +186,15 @@ end
 
 ### Online sharing
 
-If you want to share quickly your benchmark result with others. Run you benchmark
-with `SHARE=1` argument. I.e.: `SHARE=1 ruby my_benchmark.rb`.
-Result will be sent to [benchmark.fyi](https://benchmark.fyi/) and benchmark-ips
+If you want to quickly share your benchmark result with others, run you benchmark
+with `SHARE=1` argument. For example: `SHARE=1 ruby my_benchmark.rb`.
+
+Result will be sent to [benchmark.fyi](https://ips.fastruby.io/) and benchmark-ips
 will display the link to share the benchmark's result.
 
+If you want to run your own instance of [benchmark.fyi](https://github.com/evanphx/benchmark.fyi)
+and share it to that instance, you can do this: `SHARE_URL=https://ips.example.com ruby my_benchmark.rb`
+
 ### Advanced Statistics
 
 By default, the margin of error shown is plus-minus one standard deviation. If
@@ -221,7 +231,7 @@ Benchmark.ips do |x|
 
   x.stats = :bootstrap
   x.confidence = 95
-  
+
   # confidence is 95% by default, so it can be omitted
 
 end
@@ -244,27 +254,3 @@ After checking out the source, run:
 This task will install any missing dependencies, run the tests/specs,
 and generate the RDoc.
 
-## LICENSE:
-
-(The MIT License)
-
-Copyright (c) 2015 Evan Phoenix
-
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-'Software'), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/Rakefile b/Rakefile
index 0d8b2a4..6489244 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,27 +1,15 @@
 # -*- ruby -*-
 
-require 'rubygems'
-require 'hoe'
+require "bundler/setup"
+require "rake/testtask"
+require "rubygems/package_task"
+require "bundler/gem_tasks"
 
-Hoe.plugin :minitest
-Hoe.plugin :git
-Hoe.plugin :ignore
+gemspec = Gem::Specification.load("benchmark-ips.gemspec")
+Gem::PackageTask.new(gemspec).define
 
-hoe = Hoe.spec 'benchmark-ips' do
-  developer('Evan Phoenix', 'evan@phx.io')
-
-  self.readme_file = 'README.md'
-
-  license "MIT"
-end
-
-file "#{hoe.spec.name}.gemspec" => ['Rakefile', "lib/benchmark/ips.rb"] do |t|
-  puts "Generating #{t.name}"
-  File.open(t.name, 'wb') { |f| f.write hoe.spec.to_ruby }
-end
-
-desc "Generate or update the standalone gemspec file for the project"
-task :gemspec => ["#{hoe.spec.name}.gemspec"]
+Rake::TestTask.new(:test)
 
+task default: :test
 
 # vim: syntax=ruby
diff --git a/benchmark-ips.gemspec b/benchmark-ips.gemspec
index bb62c41..113bb99 100644
--- a/benchmark-ips.gemspec
+++ b/benchmark-ips.gemspec
@@ -18,14 +18,14 @@ Gem::Specification.new do |s|
   s.date = "2015-01-12"
   s.description = "A iterations per second enhancement to Benchmark."
   s.email = ["evan@phx.io"]
-  s.extra_rdoc_files = ["History.txt", "Manifest.txt", "README.md"]
-  s.files = [".autotest", ".gemtest", "History.txt", "Manifest.txt", "README.md", "Rakefile", "lib/benchmark/compare.rb", "lib/benchmark/ips.rb", "lib/benchmark/ips/job.rb", "lib/benchmark/ips/report.rb", "lib/benchmark/timing.rb", "test/test_benchmark_ips.rb"]
+  s.extra_rdoc_files = ["History.md", "LICENSE", "README.md"]
+  s.files = `git ls-files -- examples lib`.split("\n") +
+            %w[History.md LICENSE README.md]
   s.homepage = "https://github.com/evanphx/benchmark-ips"
   s.licenses = ["MIT"]
   s.rdoc_options = ["--main", "README.md"]
   s.rubygems_version = "2.2.2"
   s.summary = "A iterations per second enhancement to Benchmark."
-  s.test_files = ["test/test_benchmark_ips.rb"]
 
   if s.respond_to? :specification_version then
     s.specification_version = 4
@@ -33,15 +33,12 @@ Gem::Specification.new do |s|
     if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
       s.add_development_dependency(%q<minitest>, ["~> 5.4"])
       s.add_development_dependency(%q<rdoc>, ["~> 4.0"])
-      s.add_development_dependency(%q<hoe>, ["~> 3.13"])
     else
       s.add_dependency(%q<minitest>, ["~> 5.4"])
       s.add_dependency(%q<rdoc>, ["~> 4.0"])
-      s.add_dependency(%q<hoe>, ["~> 3.13"])
     end
   else
     s.add_dependency(%q<minitest>, ["~> 5.4"])
     s.add_dependency(%q<rdoc>, ["~> 4.0"])
-    s.add_dependency(%q<hoe>, ["~> 3.13"])
   end
 end
diff --git a/debian/changelog b/debian/changelog
index c435a36..3060091 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,12 +1,13 @@
-ruby-benchmark-ips (2.7.2-3) UNRELEASED; urgency=medium
+ruby-benchmark-ips (2.10.0+git20221021.1.ea645f6-1) UNRELEASED; urgency=medium
 
   * Trim trailing whitespace.
   * Update watch file format version to 4.
   * Update standards version to 4.5.1, no changes needed.
   * Bump debhelper from old 12 to 13.
   * Update standards version to 4.6.1, no changes needed.
+  * New upstream snapshot.
 
- -- Debian Janitor <janitor@jelmer.uk>  Wed, 23 Jun 2021 02:30:28 -0000
+ -- Debian Janitor <janitor@jelmer.uk>  Sat, 17 Dec 2022 12:37:57 -0000
 
 ruby-benchmark-ips (2.7.2-2) unstable; urgency=medium
 
diff --git a/examples/hold.rb b/examples/hold.rb
new file mode 100644
index 0000000..de799cc
--- /dev/null
+++ b/examples/hold.rb
@@ -0,0 +1,41 @@
+#!/usr/bin/env ruby
+# example to explain hold! usage https://github.com/evanphx/benchmark-ips/issues/85
+# The hold! feature expects to be run twice, generally with different Rubys.
+# hold! can also be used to compare modules changes which impact the run time
+# RUN_1: ruby examples/hold.rb
+# Warming up --------------------------------------
+#             without   172.168k i/100ms
+# Calculating -------------------------------------
+#             without      2.656M (± 3.3%) i/s -     13.429M in   5.062098s
+#
+# RUN_2: WITH_MODULE=true ruby examples/hold.rb
+# Warming up --------------------------------------
+#                 with    92.087k i/100ms
+# Calculating -------------------------------------
+#                 with      1.158M (± 1.4%) i/s -      5.801M in   5.010084s
+#
+# Comparison:
+#              without:  2464721.3 i/s
+#                 with:  1158179.6 i/s - 2.13x  slower
+require 'benchmark/ips'
+
+Benchmark.ips do |x|
+  x.report('without') do
+    'Bruce'.inspect
+  end
+
+  if ENV['WITH_MODULE'] == 'true'
+    class String
+      def inspect
+        result = %w[Bruce Wayne is Batman]
+        result.join(' ')
+      end
+    end
+  end
+
+  x.report('with') do
+    'Bruce'.inspect
+  end
+  x.hold! 'temp_results'
+  x.compare!
+end
diff --git a/examples/save.rb b/examples/save.rb
new file mode 100644
index 0000000..de0905a
--- /dev/null
+++ b/examples/save.rb
@@ -0,0 +1,50 @@
+#!/usr/bin/env ruby
+
+# example to explain save!
+# The save! feature expects to be run twice, generally with different Rubys.
+# save! can also be used to compare modules changes which impact the run time
+#
+# If you're comparing ruby versions, Just use the version in the label
+#
+#     x.report("ruby #{RUBY_VERSION}") { 'Bruce'.inspect }
+#
+# Or use a hash
+# 
+#     x.report("version" => RUBY_VERSION, "method" => 'bruce') { 'Bruce'.inspect }
+#
+# RUN_1: SAVE_FILE='run1.out' ruby examples/hold.rb
+# Warming up --------------------------------------
+#             without   172.168k i/100ms
+# Calculating -------------------------------------
+#             without      2.656M (± 3.3%) i/s -     13.429M in   5.062098s
+#
+# RUN_2: SAVE_FILE='run1.out' WITH_MODULE=true ruby examples/hold.rb
+# Warming up --------------------------------------
+#                 with    92.087k i/100ms
+# Calculating -------------------------------------
+#                 with      1.158M (± 1.4%) i/s -      5.801M in   5.010084s
+#
+# Comparison:
+#              without:  2464721.3 i/s
+#                 with:  1158179.6 i/s - 2.13x  slower
+# CLEANUP: rm run1.out
+
+require 'benchmark/ips'
+
+Benchmark.ips do |x|
+  x.report(ENV['WITH_MODULE'] == 'true' ? 'with' : 'without') do
+    'Bruce'.inspect
+  end
+
+  if ENV['WITH_MODULE'] == 'true'
+    class String
+      def inspect
+        result = %w[Bruce Wayne is Batman]
+        result.join(' ')
+      end
+    end
+  end
+
+  x.save! ENV['SAVE_FILE'] if ENV['SAVE_FILE']
+  x.compare!
+end
diff --git a/lib/benchmark/compare.rb b/lib/benchmark/compare.rb
index 6599749..51f9952 100644
--- a/lib/benchmark/compare.rb
+++ b/lib/benchmark/compare.rb
@@ -26,46 +26,78 @@ module Benchmark
   #   Reduce using to_proc:   247295.4 i/s - 1.13x slower
   #
   # Besides regular Calculating report, this will also indicates which one is slower.
+  #
+  # +x.compare!+ also takes an +order: :baseline+ option.
+  #
+  # Example:
+  #  > Benchmark.ips do |x|
+  #   x.report('Reduce using block')   { [*1..10].reduce { |sum, n| sum + n } }
+  #   x.report('Reduce using tag')     { [*1..10].reduce(:+) }
+  #   x.report('Reduce using to_proc') { [*1..10].reduce(&:+) }
+  #   x.compare!(order: :baseline)
+  # end
+  #
+  # Calculating -------------------------------------
+  #   Reduce using block    886.202k (± 2.2%) i/s -      4.521M in   5.103774s
+  #     Reduce using tag      1.821M (± 1.6%) i/s -      9.111M in   5.004183s
+  # Reduce using to_proc    895.948k (± 1.6%) i/s -      4.528M in   5.055368s
+  #
+  # Comparison:
+  #   Reduce using block:   886202.5 i/s
+  #     Reduce using tag:  1821055.0 i/s - 2.05x  (± 0.00) faster
+  # Reduce using to_proc:   895948.1 i/s - same-ish: difference falls within error
+  #
+  # The first report is considered the baseline against which other reports are compared.
   module Compare
 
     # Compare between reports, prints out facts of each report:
     # runtime, comparative speed difference.
     # @param entries [Array<Report::Entry>] Reports to compare.
-    def compare(*entries)
+    def compare(*entries, order: :fastest)
       return if entries.size < 2
 
-      sorted = entries.sort_by{ |e| e.stats.central_tendency }.reverse
-
-      best = sorted.shift
+      case order
+      when :baseline
+        baseline = entries.shift
+        sorted = entries.sort_by{ |e| e.stats.central_tendency }.reverse
+      when :fastest
+        sorted = entries.sort_by{ |e| e.stats.central_tendency }.reverse
+        baseline = sorted.shift
+      else
+        raise ArgumentError, "Unknwon order: #{order.inspect}"
+      end
 
       $stdout.puts "\nComparison:"
 
-      $stdout.printf "%20s: %10.1f i/s\n", best.label, best.stats.central_tendency
+      $stdout.printf "%20s: %10.1f i/s\n", baseline.label.to_s, baseline.stats.central_tendency
 
       sorted.each do |report|
         name = report.label.to_s
-        
+
         $stdout.printf "%20s: %10.1f i/s - ", name, report.stats.central_tendency
-        
-        best_low = best.stats.central_tendency - best.stats.error
-        report_high = report.stats.central_tendency + report.stats.error
-        overlaps = report_high > best_low 
-        
-        if overlaps
+
+        if report.stats.overlaps?(baseline.stats)
           $stdout.print "same-ish: difference falls within error"
+        elsif report.stats.central_tendency > baseline.stats.central_tendency
+          speedup, error = report.stats.speedup(baseline.stats)
+          $stdout.printf "%.2fx ", speedup
+          if error
+            $stdout.printf " (± %.2f)", error
+          end
+          $stdout.print " faster"
         else
-          slowdown, error = report.stats.slowdown(best.stats)
+          slowdown, error = report.stats.slowdown(baseline.stats)
           $stdout.printf "%.2fx ", slowdown
           if error
             $stdout.printf " (± %.2f)", error
           end
           $stdout.print " slower"
         end
-        
+
         $stdout.puts
       end
 
-      footer = best.stats.footer
+      footer = baseline.stats.footer
       $stdout.puts footer.rjust(40) if footer
 
       $stdout.puts
diff --git a/lib/benchmark/ips.rb b/lib/benchmark/ips.rb
index 27f8440..4bc6119 100644
--- a/lib/benchmark/ips.rb
+++ b/lib/benchmark/ips.rb
@@ -1,11 +1,14 @@
 # encoding: utf-8
 require 'benchmark/timing'
 require 'benchmark/compare'
+require 'benchmark/ips/stats/stats_metric'
 require 'benchmark/ips/stats/sd'
 require 'benchmark/ips/stats/bootstrap'
 require 'benchmark/ips/report'
+require 'benchmark/ips/noop_suite'
 require 'benchmark/ips/job/entry'
 require 'benchmark/ips/job/stdout_report'
+require 'benchmark/ips/job/noop_report'
 require 'benchmark/ips/job'
 
 # Performance benchmarking library
@@ -15,10 +18,10 @@ module Benchmark
   module IPS
 
     # Benchmark-ips Gem version.
-    VERSION = "2.7.2"
+    VERSION = "2.10.0"
 
     # CODENAME of current version.
-    CODENAME = "Cultivating Confidence"
+    CODENAME = "Watashi Wa Genki"
 
     # Measure code in block, each code's benchmarked result will display in
     # iteration per second with standard deviation in given time.
@@ -32,32 +35,30 @@ module Benchmark
         time, warmup, quiet = args
       end
 
-      suite = nil
-
       sync, $stdout.sync = $stdout.sync, true
 
-      if defined? Benchmark::Suite and Suite.current
-        suite = Benchmark::Suite.current
-      end
-
-      quiet ||= (suite && suite.quiet?)
-
-      job = Job.new({:suite => suite,
-                     :quiet => quiet
-      })
+      job = Job.new
 
       job_opts = {}
       job_opts[:time] = time unless time.nil?
       job_opts[:warmup] = warmup unless warmup.nil?
+      job_opts[:quiet] = quiet unless quiet.nil?
 
       job.config job_opts
 
       yield job
 
-      job.load_held_results if job.hold? && job.held_results?
+      job.load_held_results
 
       job.run
 
+      if job.run_single? && job.all_results_have_been_run?
+        job.clear_held_results
+      else
+        job.save_held_results
+        puts '', 'Pausing here -- run Ruby again to measure the next benchmark...' if job.run_single?
+      end
+
       $stdout.sync = sync
       job.run_comparison
       job.generate_json
@@ -102,4 +103,68 @@ module Benchmark
   end
 
   extend Benchmark::IPS # make ips available as module-level method
+
+  ##
+  # :singleton-method: ips
+  #
+  #     require 'benchmark/ips'
+  #
+  #     Benchmark.ips do |x|
+  #       # Configure the number of seconds used during
+  #       # the warmup phase (default 2) and calculation phase (default 5)
+  #       x.config(:time => 5, :warmup => 2)
+  #
+  #       # These parameters can also be configured this way
+  #       x.time = 5
+  #       x.warmup = 2
+  #
+  #       # Typical mode, runs the block as many times as it can
+  #       x.report("addition") { 1 + 2 }
+  #
+  #       # To reduce overhead, the number of iterations is passed in
+  #       # and the block must run the code the specific number of times.
+  #       # Used for when the workload is very small and any overhead
+  #       # introduces incorrectable errors.
+  #       x.report("addition2") do |times|
+  #         i = 0
+  #         while i < times
+  #           1 + 2
+  #           i += 1
+  #         end
+  #       end
+  #
+  #       # To reduce overhead even more, grafts the code given into
+  #       # the loop that performs the iterations internally to reduce
+  #       # overhead. Typically not needed, use the |times| form instead.
+  #       x.report("addition3", "1 + 2")
+  #
+  #       # Really long labels should be formatted correctly
+  #       x.report("addition-test-long-label") { 1 + 2 }
+  #
+  #       # Compare the iterations per second of the various reports!
+  #       x.compare!
+  #     end
+  #
+  # This will generate the following report:
+  #
+  #     Calculating -------------------------------------
+  #                 addition    71.254k i/100ms
+  #                addition2    68.658k i/100ms
+  #                addition3    83.079k i/100ms
+  #     addition-test-long-label
+  #                             70.129k i/100ms
+  #     -------------------------------------------------
+  #                 addition     4.955M (± 8.7%) i/s -     24.155M
+  #                addition2    24.011M (± 9.5%) i/s -    114.246M
+  #                addition3    23.958M (±10.1%) i/s -    115.064M
+  #     addition-test-long-label
+  #                              5.014M (± 9.1%) i/s -     24.545M
+  #
+  #     Comparison:
+  #                addition2: 24011974.8 i/s
+  #                addition3: 23958619.8 i/s - 1.00x slower
+  #     addition-test-long-label:  5014756.0 i/s - 4.79x slower
+  #                 addition:  4955278.9 i/s - 4.85x slower
+  #
+  # See also Benchmark::IPS
 end
diff --git a/lib/benchmark/ips/job.rb b/lib/benchmark/ips/job.rb
index 697fc0d..d815838 100644
--- a/lib/benchmark/ips/job.rb
+++ b/lib/benchmark/ips/job.rb
@@ -9,6 +9,7 @@ module Benchmark
       # The percentage of the expected runtime to allow
       # before reporting a weird runtime
       MAX_TIME_SKEW = 0.05
+      POW_2_30 = 1 << 30
 
       # Two-element arrays, consisting of label and block pairs.
       # @return [Array<Entry>] list of entries
@@ -50,19 +51,25 @@ module Benchmark
       # @return [Integer]
       attr_accessor :confidence
 
+      # Silence output
+      # @return [Boolean]
+      attr_reader :quiet
+
+      # Suite
+      # @return [Benchmark::IPS::NoopSuite]
+      attr_reader :suite
+
       # Instantiate the Benchmark::IPS::Job.
-      # @option opts [Benchmark::Suite] (nil) :suite Specify Benchmark::Suite.
-      # @option opts [Boolean] (false) :quiet Suppress the printing of information.
       def initialize opts={}
-        @suite = opts[:suite] || nil
-        @stdout = opts[:quiet] ? nil : StdoutReport.new
         @list = []
-        @compare = false
+        @run_single = false
         @json_path = false
+        @compare = false
+        @compare_order = :fastest
         @held_path = nil
         @held_results = nil
 
-        @timing = {}
+        @timing = Hash.new 1 # default to 1 in case warmup isn't run
         @full_report = Report.new
 
         # Default warmup and calculation time in seconds.
@@ -73,6 +80,8 @@ module Benchmark
         # Default statistical model
         @stats = :sd
         @confidence = 95
+
+        self.quiet = false
       end
 
       # Job configuration options, set +@warmup+ and +@time+.
@@ -86,6 +95,20 @@ module Benchmark
         @iterations = opts[:iterations] if opts[:iterations]
         @stats = opts[:stats] if opts[:stats]
         @confidence = opts[:confidence] if opts[:confidence]
+        self.quiet = opts[:quiet] if opts.key?(:quiet)
+        self.suite = opts[:suite]
+      end
+
+      def quiet=(val)
+        @stdout = reporter(quiet: val)
+      end
+
+      def suite=(suite)
+        @suite = suite || Benchmark::IPS::NoopSuite.new
+      end
+
+      def reporter(quiet:)
+        quiet ? NoopReport.new : StdoutReport.new
       end
 
       # Return true if job needs to be compared.
@@ -94,9 +117,10 @@ module Benchmark
         @compare
       end
 
-      # Set @compare to true.
-      def compare!
+      # Run comparison utility.
+      def compare!(order: :fastest)
         @compare = true
+        @compare_order = order
       end
 
       # Return true if results are held while multiple Ruby invocations
@@ -105,9 +129,27 @@ module Benchmark
         !!@held_path
       end
 
-      # Set @hold to true.
+      # Hold after each iteration.
+      # @param held_path [String] File name to store hold file.
       def hold!(held_path)
         @held_path = held_path
+        @run_single = true
+      end
+
+      # Save interim results. Similar to hold, but all reports are run
+      # The report label must change for each invocation.
+      # One way to achieve this is to include the version in the label.
+      # @param held_path [String] File name to store hold file.
+      def save!(held_path)
+        @held_path = held_path
+        @run_single = false
+      end
+
+      # Return true if items are to be run one at a time.
+      # For the traditional hold, this is true
+      # @return [Boolean] Run just a single item?
+      def run_single?
+        @run_single
       end
 
       # Return true if job needs to generate json.
@@ -116,7 +158,7 @@ module Benchmark
         !!@json_path
       end
 
-      # Set @json_path to given path, defaults to "data.json".
+      # Generate json to given path, defaults to "data.json".
       def json!(path="data.json")
         @json_path = path
       end
@@ -166,86 +208,112 @@ module Benchmark
       def iterations_per_sec cycles, time_us
         MICROSECONDS_PER_SECOND * (cycles.to_f / time_us.to_f)
       end
-      
-      def held_results?
-        File.exist?(@held_path)
-      end
-      
+
       def load_held_results
+        return unless @held_path && File.exist?(@held_path) && !File.zero?(@held_path)
+        require "json"
+        @held_results = {}
+        JSON.load(IO.read(@held_path)).each do |result|
+          @held_results[result['item']] = result
+          create_report(result['item'], result['measured_us'], result['iter'],
+                        create_stats(result['samples']), result['cycles'])
+        end
+      end
+
+      def save_held_results
+        return unless @held_path
         require "json"
-        @held_results = Hash[File.open(@held_path).map { |line|
-          result = JSON.parse(line)
-          [result['item'], result]
-        }]
+        data = full_report.entries.map { |e|
+          {
+            'item' => e.label,
+            'measured_us' => e.microseconds,
+            'iter' => e.iterations,
+            'samples' => e.samples,
+            'cycles' => e.measurement_cycle
+          }
+        }
+        IO.write(@held_path, JSON.generate(data) << "\n")
       end
-      
+
+      def all_results_have_been_run?
+        @full_report.entries.size == @list.size
+      end
+
+      def clear_held_results
+        File.delete @held_path if File.exist?(@held_path)
+      end
+
       def run
-        @stdout.start_warming if @stdout
-        @iterations.times do
-          run_warmup
+        if @warmup && @warmup != 0 then
+          @stdout.start_warming
+          @iterations.times do
+            run_warmup
+          end
         end
-        
-        @stdout.start_running if @stdout
-        
-        held = nil
-        
+
+        @stdout.start_running
+
         @iterations.times do |n|
-          held = run_benchmark
+          run_benchmark
         end
 
-        @stdout.footer if @stdout
-        
-        if held
-          puts
-          puts 'Pausing here -- run Ruby again to measure the next benchmark...'
-        end
+        @stdout.footer
       end
 
       # Run warmup.
       def run_warmup
         @list.each do |item|
-          next if hold? && @held_results && @held_results.key?(item.label)
-          
-          @suite.warming item.label, @warmup if @suite
-          @stdout.warming item.label, @warmup if @stdout
+          next if run_single? && @held_results && @held_results.key?(item.label)
+
+          @suite.warming item.label, @warmup
+          @stdout.warming item.label, @warmup
 
           Timing.clean_env
 
+          # Run for up to half of the configured warmup time with an increasing
+          # number of cycles to reduce overhead and improve accuracy.
+          # This also avoids running with a constant number of cycles, which a
+          # JIT might speculate on and then have to recompile in #run_benchmark.
           before = Timing.now
-          target = Timing.add_second before, @warmup
+          target = Timing.add_second before, @warmup / 2.0
 
-          warmup_iter = 0
+          cycles = 1
+          begin
+            t0 = Timing.now
+            item.call_times cycles
+            t1 = Timing.now
+            warmup_iter = cycles
+            warmup_time_us = Timing.time_us(t0, t1)
 
-          while Timing.now < target
-            item.call_times(1)
-            warmup_iter += 1
-          end
+            # If the number of cycles would go outside the 32-bit signed integers range
+            # then exit the loop to avoid overflows and start the 100ms warmup runs
+            break if cycles >= POW_2_30
+            cycles *= 2
+          end while Timing.now + warmup_time_us * 2 < target
 
-          after = Timing.now
+          cycles = cycles_per_100ms warmup_time_us, warmup_iter
+          @timing[item] = cycles
 
-          warmup_time_us = Timing.time_us(before, after)
+          # Run for the remaining of warmup in a similar way as #run_benchmark.
+          target = Timing.add_second before, @warmup
+          while Timing.now + MICROSECONDS_PER_100MS < target
+            item.call_times cycles
+          end
 
-          @timing[item] = cycles_per_100ms warmup_time_us, warmup_iter
+          @stdout.warmup_stats warmup_time_us, @timing[item]
+          @suite.warmup_stats warmup_time_us, @timing[item]
 
-          @stdout.warmup_stats warmup_time_us, @timing[item] if @stdout
-          @suite.warmup_stats warmup_time_us, @timing[item] if @suite
-          
-          break if hold?
+          break if run_single?
         end
       end
 
       # Run calculation.
       def run_benchmark
         @list.each do |item|
-          if hold? && @held_results && @held_results.key?(item.label)
-           result = @held_results[item.label]
-            create_report(item.label, result['measured_us'], result['iter'],
-                          create_stats(result['samples']), result['cycles'])
-            next
-          end
-          
-          @suite.running item.label, @time if @suite
-          @stdout.running item.label, @time if @stdout
+          next if run_single? && @held_results && @held_results.key?(item.label)
+
+          @suite.running item.label, @time
+          @stdout.running item.label, @time
 
           Timing.clean_env
 
@@ -257,8 +325,9 @@ module Benchmark
           cycles = @timing[item]
 
           target = Timing.add_second Timing.now, @time
-          
-          while (before = Timing.now) < target
+
+          begin
+            before = Timing.now
             item.call_times cycles
             after = Timing.now
 
@@ -270,7 +339,7 @@ module Benchmark
             iter += cycles
 
             measurements_us << iter_us
-          end
+          end while Timing.now < target
 
           final_time = before
 
@@ -286,31 +355,11 @@ module Benchmark
             rep.show_total_time!
           end
 
-          @stdout.add_report rep, caller(1).first if @stdout
-          @suite.add_report rep, caller(1).first if @suite
-          
-          if hold? && item != @list.last
-            File.open @held_path, "a" do |f|
-              require "json"
-              f.write JSON.generate({
-                :item => item.label,
-                :measured_us => measured_us,
-                :iter => iter,
-                :samples => samples,
-                :cycles => cycles
-              })
-              f.write "\n"
-            end
-            
-            return true
-          end
-        end
-        
-        if hold? && @full_report.entries.size == @list.size
-          File.delete @held_path if File.exist?(@held_path)
+          @stdout.add_report rep, caller(1).first
+          @suite.add_report rep, caller(1).first
+
+          break if run_single?
         end
-        
-        false
       end
 
       def create_stats(samples)
@@ -326,7 +375,7 @@ module Benchmark
 
       # Run comparison of entries in +@full_report+.
       def run_comparison
-        @full_report.run_comparison if compare?
+        @full_report.run_comparison(@compare_order) if compare?
       end
 
       # Generate json from +@full_report+.
diff --git a/lib/benchmark/ips/job/entry.rb b/lib/benchmark/ips/job/entry.rb
index 70b8350..e7d0c8c 100644
--- a/lib/benchmark/ips/job/entry.rb
+++ b/lib/benchmark/ips/job/entry.rb
@@ -11,10 +11,12 @@ module Benchmark
         def initialize(label, action)
           @label = label
 
+          # We define #call_times on the singleton class of each Entry instance.
+          # That way, there is no polymorphism for `@action.call` inside #call_times.
+
           if action.kind_of? String
-            compile action
+            compile_string action
             @action = self
-            @as_action = true
           else
             unless action.respond_to? :call
               raise ArgumentError, "invalid action, must respond to #call"
@@ -23,12 +25,10 @@ module Benchmark
             @action = action
 
             if action.respond_to? :arity and action.arity > 0
-              @call_loop = true
+              compile_block_with_manual_loop
             else
-              @call_loop = false
+              compile_block
             end
-
-            @as_action = false
           end
         end
 
@@ -40,25 +40,43 @@ module Benchmark
         # @return [String, Proc] Code to be called, could be String / Proc.
         attr_reader :action
 
-        # Call action by given times, return if +@call_loop+ is present.
+        # Call action by given times.
         # @param times [Integer] Times to call +@action+.
         # @return [Integer] Number of times the +@action+ has been called.
         def call_times(times)
-          return @action.call(times) if @call_loop
+          raise '#call_times should be redefined per Benchmark::IPS::Job::Entry instance'
+        end
 
-          act = @action
+        def compile_block
+          m = (class << self; self; end)
+          code = <<-CODE
+            def call_times(times)
+              act = @action
 
-          i = 0
-          while i < times
-            act.call
-            i += 1
-          end
+              i = 0
+              while i < times
+                act.call
+                i += 1
+              end
+            end
+          CODE
+          m.class_eval code
+        end
+
+        def compile_block_with_manual_loop
+          m = (class << self; self; end)
+          code = <<-CODE
+            def call_times(times)
+              @action.call(times)
+            end
+          CODE
+          m.class_eval code
         end
 
         # Compile code into +call_times+ method.
         # @param str [String] Code to be compiled.
         # @return [Symbol] :call_times.
-        def compile(str)
+        def compile_string(str)
           m = (class << self; self; end)
           code = <<-CODE
             def call_times(__total);
diff --git a/lib/benchmark/ips/job/noop_report.rb b/lib/benchmark/ips/job/noop_report.rb
new file mode 100644
index 0000000..144daac
--- /dev/null
+++ b/lib/benchmark/ips/job/noop_report.rb
@@ -0,0 +1,27 @@
+module Benchmark
+  module IPS
+    class Job
+      class NoopReport
+        def start_warming
+        end
+
+        def start_running
+        end
+
+        def footer
+        end
+
+        def warming(a, b)
+        end
+
+        def warmup_stats(a, b)
+        end
+
+        def add_report(a, b)
+        end
+
+        alias_method :running, :warming
+      end
+    end
+  end
+end
diff --git a/lib/benchmark/ips/job/stdout_report.rb b/lib/benchmark/ips/job/stdout_report.rb
index f105b30..7da9614 100644
--- a/lib/benchmark/ips/job/stdout_report.rb
+++ b/lib/benchmark/ips/job/stdout_report.rb
@@ -2,10 +2,14 @@ module Benchmark
   module IPS
     class Job
       class StdoutReport
+        def initialize
+          @last_item = nil
+        end
+
         def start_warming
           $stdout.puts "Warming up --------------------------------------"
         end
-        
+
         def start_running
           $stdout.puts "Calculating -------------------------------------"
         end
@@ -31,6 +35,7 @@ module Benchmark
         end
 
         def footer
+          return unless @last_item
           footer = @last_item.stats.footer
           $stdout.puts footer.rjust(40) if footer
         end
diff --git a/lib/benchmark/ips/noop_suite.rb b/lib/benchmark/ips/noop_suite.rb
new file mode 100644
index 0000000..d4c669e
--- /dev/null
+++ b/lib/benchmark/ips/noop_suite.rb
@@ -0,0 +1,25 @@
+module Benchmark
+  module IPS
+    class NoopSuite
+      def start_warming
+      end
+
+      def start_running
+      end
+
+      def footer
+      end
+
+      def warming(a, b)
+      end
+
+      def warmup_stats(a, b)
+      end
+
+      def add_report(a, b)
+      end
+
+      alias_method :running, :warming
+    end
+  end
+end
diff --git a/lib/benchmark/ips/report.rb b/lib/benchmark/ips/report.rb
index 7687816..5404ef2 100644
--- a/lib/benchmark/ips/report.rb
+++ b/lib/benchmark/ips/report.rb
@@ -52,6 +52,10 @@ module Benchmark
           @stats.error
         end
 
+        def samples
+          @stats.samples
+        end
+
         # Number of Cycles.
         # @return [Integer] number of cycles.
         attr_reader :measurement_cycle
@@ -72,7 +76,7 @@ module Benchmark
         # Return entry's standard deviation of iteration per second in percentage.
         # @return [Float] +@ips_sd+ in percentage.
         def error_percentage
-          100.0 * (@stats.error.to_f / @stats.central_tendency)
+          @stats.error_percentage
         end
 
         alias_method :runtime, :seconds
@@ -84,7 +88,7 @@ module Benchmark
         def body
           case Benchmark::IPS.options[:format]
           when :human
-            left = "%s (±%4.1f%%) i/s" % [Helpers.scale(@stats.central_tendency), error_percentage]
+            left = "%s (±%4.1f%%) i/s" % [Helpers.scale(@stats.central_tendency), @stats.error_percentage]
             iters = Helpers.scale(@iterations)
 
             if @show_total_time
@@ -93,7 +97,7 @@ module Benchmark
               left.ljust(20) + (" - %s" % iters)
             end
           else
-            left = "%10.1f (±%.1f%%) i/s" % [@stats.central_tendency, error_percentage]
+            left = "%10.1f (±%.1f%%) i/s" % [@stats.central_tendency, @stats.error_percentage]
 
             if @show_total_time
               left.ljust(20) + (" - %10d in %10.6fs" % [@iterations, runtime])
@@ -172,8 +176,8 @@ module Benchmark
       end
 
       # Run comparison of entries.
-      def run_comparison
-        Benchmark.compare(*@entries)
+      def run_comparison(order)
+        Benchmark.compare(*@entries, order: order)
       end
 
       # Generate json from Report#data to given path.
diff --git a/lib/benchmark/ips/share.rb b/lib/benchmark/ips/share.rb
index e296ab9..5109aec 100644
--- a/lib/benchmark/ips/share.rb
+++ b/lib/benchmark/ips/share.rb
@@ -1,3 +1,5 @@
+# frozen_string_literal: true
+
 require 'net/http'
 require 'net/https'
 require 'json'
@@ -5,7 +7,7 @@ require 'json'
 module Benchmark
   module IPS
     class Share
-      DEFAULT_URL = "https://benchmark.fyi"
+      DEFAULT_URL = "https://ips.fastruby.io"
       def initialize(report, job)
         @report = report
         @job = job
diff --git a/lib/benchmark/ips/stats/bootstrap.rb b/lib/benchmark/ips/stats/bootstrap.rb
index 3761329..79ed30a 100644
--- a/lib/benchmark/ips/stats/bootstrap.rb
+++ b/lib/benchmark/ips/stats/bootstrap.rb
@@ -3,33 +3,40 @@ module Benchmark
     module Stats
 
       class Bootstrap
-
-        attr_reader :data
+        include StatsMetric
+        attr_reader :data, :error, :samples
 
         def initialize(samples, confidence)
           dependencies
           @iterations = 10_000
           @confidence = (confidence / 100.0).to_s
+          @samples = samples
           @data = Kalibera::Data.new({[0] => samples}, [1, samples.size])
           interval = @data.bootstrap_confidence_interval(@iterations, @confidence)
           @median = interval.median
           @error = interval.error
         end
 
+        # Average stat value
+        # @return [Float] central_tendency
         def central_tendency
           @median
         end
 
-        def error
-          @error
-        end
-
+        # Determines how much slower this stat is than the baseline stat
+        # if this average is lower than the faster baseline, higher average is better (e.g. ips) (calculate accordingly)
+        # @param baseline [SD|Bootstrap] faster baseline
+        # @returns [Array<Float, nil>] the slowdown and the error (not calculated for standard deviation)
         def slowdown(baseline)
           low, slowdown, high = baseline.data.bootstrap_quotient(@data, @iterations, @confidence)
           error = Timing.mean([slowdown - low, high - slowdown])
           [slowdown, error]
         end
 
+        def speedup(baseline)
+          baseline.slowdown(self)
+        end
+
         def footer
           "with #{(@confidence.to_f * 100).round(1)}% confidence"
         end
diff --git a/lib/benchmark/ips/stats/sd.rb b/lib/benchmark/ips/stats/sd.rb
index ceb16d4..e6e7c59 100644
--- a/lib/benchmark/ips/stats/sd.rb
+++ b/lib/benchmark/ips/stats/sd.rb
@@ -1,33 +1,45 @@
 module Benchmark
   module IPS
     module Stats
-      
+
       class SD
-        
+        include StatsMetric
+        attr_reader :error, :samples
+
         def initialize(samples)
+          @samples = samples
           @mean = Timing.mean(samples)
           @error = Timing.stddev(samples, @mean).round
         end
-        
+
+        # Average stat value
+        # @return [Float] central_tendency
         def central_tendency
           @mean
         end
-        
-        def error
-          @error
-        end
 
+        # Determines how much slower this stat is than the baseline stat
+        # if this average is lower than the faster baseline, higher average is better (e.g. ips) (calculate accordingly)
+        # @param baseline [SD|Bootstrap] faster baseline
+        # @returns [Array<Float, nil>] the slowdown and the error (not calculated for standard deviation)
         def slowdown(baseline)
-          slowdown = baseline.central_tendency.to_f / central_tendency
-          [slowdown, nil]
+          if baseline.central_tendency > central_tendency
+            [baseline.central_tendency.to_f / central_tendency, nil]
+          else
+            [central_tendency.to_f / baseline.central_tendency, nil]
+          end
+        end
+
+        def speedup(baseline)
+          baseline.slowdown(self)
         end
 
         def footer
           nil
         end
-        
+
       end
-    
+
     end
   end
 end
diff --git a/lib/benchmark/ips/stats/stats_metric.rb b/lib/benchmark/ips/stats/stats_metric.rb
new file mode 100644
index 0000000..63358e0
--- /dev/null
+++ b/lib/benchmark/ips/stats/stats_metric.rb
@@ -0,0 +1,21 @@
+module Benchmark
+  module IPS
+    module Stats
+      module StatsMetric
+        # Return entry's standard deviation of iteration per second in percentage.
+        # @return [Float] +@ips_sd+ in percentage.
+        def error_percentage
+          100.0 * (error.to_f / central_tendency)
+        end
+
+        def overlaps?(baseline)
+          baseline_low = baseline.central_tendency - baseline.error
+          baseline_high = baseline.central_tendency + baseline.error
+          my_high = central_tendency + error
+          my_low  = central_tendency - error
+          my_high > baseline_low && my_low < baseline_high
+        end
+      end
+    end
+  end
+end
diff --git a/test/test_benchmark_ips.rb b/test/test_benchmark_ips.rb
index d9f00cf..f93be46 100644
--- a/test/test_benchmark_ips.rb
+++ b/test/test_benchmark_ips.rb
@@ -1,6 +1,7 @@
 require "minitest/autorun"
 require "benchmark/ips"
 require "stringio"
+require "tmpdir"
 
 class TestBenchmarkIPS < Minitest::Test
   def setup
@@ -13,13 +14,26 @@ class TestBenchmarkIPS < Minitest::Test
   end
 
   def test_kwargs
-    Benchmark.ips(:time => 1, :warmup => 1, :quiet => false) do |x|
+    Benchmark.ips(:time => 0.001, :warmup => 0.001, :quiet => false) do |x|
       x.report("sleep 0.25") { sleep(0.25) }
     end
 
     assert $stdout.string.size > 0
   end
 
+  def test_warmup0
+    $stdout = @old_stdout
+
+    out, err = capture_io do
+      Benchmark.ips(:time => 1, :warmup => 0, :quiet => false) do |x|
+        x.report("sleep 0.25") { sleep(0.25) }
+      end
+    end
+
+    refute_match(/Warming up -+/, out)
+    assert_empty err
+  end
+
   def test_output
     Benchmark.ips(1) do |x|
       x.report("operation") { 100 * 100 }
@@ -29,13 +43,50 @@ class TestBenchmarkIPS < Minitest::Test
   end
 
   def test_quiet
-    Benchmark.ips(1, nil, true) do |x|
+    Benchmark.ips(nil, nil, true) do |x|
+      x.config(:warmup => 0.001, :time => 0.001)
       x.report("operation") { 100 * 100 }
     end
 
     assert $stdout.string.size.zero?
 
     Benchmark.ips(:quiet => true) do |x|
+      x.config(:warmup => 0.001, :time => 0.001)
+      x.report("operation") { 100 * 100 }
+    end
+
+    assert $stdout.string.size.zero?
+
+    Benchmark.ips do |x|
+      x.config(:warmup => 0.001, :time => 0.001)
+      x.quiet = true
+      x.report("operation") { 100 * 100 }
+    end
+
+    assert $stdout.string.size.zero?
+  end
+
+  def test_quiet_option_override
+    Benchmark.ips(quiet: true) do |x|
+      x.config(:warmup => 0.001, :time => 0.001)
+      x.quiet = false
+      x.report("operation") { 100 * 100 }
+    end
+
+    assert $stdout.string.size > 0
+    $stdout.truncate(0)
+
+    Benchmark.ips(quiet: true) do |x|
+      x.config(quiet: false, warmup: 0.001, time: 0.001)
+      x.report("operation") { 100 * 100 }
+    end
+
+    assert $stdout.string.size > 0
+    $stdout.truncate(0)
+
+    Benchmark.ips(quiet: true, warmup: 0.001, time: 0.001) do |x|
+      # Calling config should not make quiet option overridden when no specified
+      x.config({})
       x.report("operation") { 100 * 100 }
     end
 
@@ -77,7 +128,7 @@ class TestBenchmarkIPS < Minitest::Test
   end
 
   def test_ips_old_config
-    report = Benchmark.ips(1,1) do |x|
+    report = Benchmark.ips(1, 1) do |x|
       x.report("sleep 0.25") { sleep(0.25) }
     end
 
@@ -103,6 +154,21 @@ class TestBenchmarkIPS < Minitest::Test
     assert_equal [:warming, :warmup_stats, :running, :add_report], suite.calls
   end
 
+  def test_ips_config_suite_by_accsr
+    suite = Struct.new(:calls) do
+      def method_missing(method, *args)
+        calls << method
+      end
+    end.new([])
+
+    Benchmark.ips(0.1, 0.1) do |x|
+      x.suite = suite
+      x.report("job") {}
+    end
+
+    assert_equal [:warming, :warmup_stats, :running, :add_report], suite.calls
+  end
+
   def test_ips_defaults
     report = Benchmark.ips do |x|
       x.report("sleep 0.25") { sleep(0.25) }
@@ -129,6 +195,7 @@ class TestBenchmarkIPS < Minitest::Test
 
   def test_ips_default_data
     report = Benchmark.ips do |x|
+      x.config(:warmup => 0.001, :time => 0.001)
       x.report("sleep 0.25") { sleep(0.25) }
     end
 
@@ -140,6 +207,17 @@ class TestBenchmarkIPS < Minitest::Test
     assert all_data[0][:stddev]
   end
 
+  def test_ips_empty
+    report = Benchmark.ips do |_x|
+
+    end
+
+    all_data = report.data
+
+    assert all_data
+    assert_equal [], all_data
+  end
+
   def test_json_output
     json_file = Tempfile.new("data.json")
 
@@ -158,4 +236,49 @@ class TestBenchmarkIPS < Minitest::Test
     assert data[0]["ips"]
     assert data[0]["stddev"]
   end
+
+  def test_hold!
+    temp_file_name = Dir::Tmpname.create(["benchmark-ips", ".tmp"]) { }
+
+    Benchmark.ips(:time => 0.001, :warmup => 0.001) do |x|
+      x.report("operation") { 100 * 100 }
+      x.report("operation2") { 100 * 100 }
+      x.hold! temp_file_name
+    end
+
+    assert File.exist?(temp_file_name)
+    File.unlink(temp_file_name)
+  end
+
+  def test_small_warmup_and_time
+    report = Benchmark.ips do |x|
+      x.config(:warmup => 0.0000000001, :time => 0.001)
+      x.report("addition") { 1 + 2 }
+    end
+    assert_operator report.entries[0].iterations, :>=, 1
+
+    report = Benchmark.ips do |x|
+      x.config(:warmup => 0, :time => 0.0000000001)
+      x.report("addition") { 1 + 2 }
+    end
+    assert_equal 1, report.entries[0].iterations
+
+    report = Benchmark.ips do |x|
+      x.config(:warmup => 0.001, :time => 0.0000000001)
+      x.report("addition") { 1 + 2 }
+    end
+    assert_operator report.entries[0].iterations, :>=, 1
+
+    report = Benchmark.ips do |x|
+      x.config(:warmup => 0.0000000001, :time => 0.0000000001)
+      x.report("addition") { 1 + 2 }
+    end
+    assert_operator report.entries[0].iterations, :>=, 1
+
+    report = Benchmark.ips do |x|
+      x.config(:warmup => 0, :time => 0)
+      x.report("addition") { 1 + 2 }
+    end
+    assert_equal 1, report.entries[0].iterations
+  end
 end

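The other user-visible addition in this snapshot is save!, a variant of hold! that keeps interim results across separate invocations while still running every report each time. A short sketch, adapted from the new examples/save.rb; the results file name and label used here are illustrative:

require 'benchmark/ips'

# Run this script once per Ruby version: results saved by earlier runs
# are loaded from the results file and compared alongside the current run.
Benchmark.ips do |x|
  # The label must differ between invocations, e.g. by embedding the Ruby version.
  x.report("ruby #{RUBY_VERSION}") { 'Bruce'.inspect }

  # Unlike hold!, save! runs every report on each invocation.
  x.save! ENV.fetch('SAVE_FILE', 'ips-results.json')
  x.compare!
end
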
Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/lib/ruby/vendor_ruby/benchmark/ips/job/noop_report.rb
-rw-r--r--  root/root   /usr/lib/ruby/vendor_ruby/benchmark/ips/noop_suite.rb
-rw-r--r--  root/root   /usr/lib/ruby/vendor_ruby/benchmark/ips/stats/stats_metric.rb
-rw-r--r--  root/root   /usr/share/doc/ruby-benchmark-ips-doc/html/Benchmark/IPS/Job/NoopReport.html
-rw-r--r--  root/root   /usr/share/doc/ruby-benchmark-ips-doc/html/Benchmark/IPS/NoopSuite.html
-rw-r--r--  root/root   /usr/share/doc/ruby-benchmark-ips-doc/html/Benchmark/IPS/Stats/StatsMetric.html
-rw-r--r--  root/root   /usr/share/rubygems-integration/all/specifications/benchmark-ips-2.10.0.gemspec

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/share/rubygems-integration/all/specifications/benchmark-ips-2.7.2.gemspec

Control files of package ruby-benchmark-ips: lines which differ (wdiff format)

  • Ruby-Versions: all

No differences were encountered between the control files of package ruby-benchmark-ips-doc
