New Upstream Release - ruby-sequel-pg

Ready changes

Summary

Merged new upstream version: 1.17.1 (was: 1.17.0).

Resulting package

Built on 2023-05-19T14:36 (took 4m33s)

The resulting binary packages can be installed (if you have the apt repository enabled) by running one of:

apt install -t fresh-releases ruby-sequel-pg-dbgsym
apt install -t fresh-releases ruby-sequel-pg

Lintian Result

Diff

diff --git a/.ci.gemfile b/.ci.gemfile
new file mode 100644
index 0000000..6b160b1
--- /dev/null
+++ b/.ci.gemfile
@@ -0,0 +1,51 @@
+# This file is only used for CI.
+
+source 'http://rubygems.org'
+
+gem 'minitest-hooks'
+gem 'minitest-global_expectations'
+
+# Plugin/Extension Dependencies
+gem 'tzinfo'
+
+if RUBY_VERSION < '2.1.0'
+  gem 'nokogiri', '<1.7.0'
+elsif RUBY_VERSION < '2.3.0'
+  gem 'nokogiri', '<1.10.0'
+else
+  gem 'nokogiri'
+end
+
+if RUBY_VERSION < '2.2.0'
+  gem 'activemodel', '<5.0.0'
+elsif RUBY_VERSION < '2.4.0'
+  gem 'activemodel', '<6.0.0'
+else
+  gem 'activemodel'
+end
+
+if RUBY_VERSION < '3.1.0' && RUBY_VERSION >= '3.0.0'
+  gem 'json', '2.5.1'
+  gem 'rake'
+elsif RUBY_VERSION < '2.0.0'
+  gem 'json', '<1.8.5'
+  gem 'rake', '<10.0.0'
+else
+  gem 'json'
+  gem 'rake'
+end
+
+if RUBY_VERSION < '2.4.0'
+  # Until mintest 5.12.0 is fixed
+  gem 'minitest', '5.11.3'
+else
+  gem 'minitest', '>= 5.7.0'
+end
+
+if RUBY_VERSION < '2.0.0'
+  gem "pg", '<0.19.0'
+  gem 'rake-compiler', '<1'
+else
+  gem "pg", RUBY_VERSION < '2.2.0' ? '<1.2.0' : '>0'
+  gem 'rake-compiler'
+end
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
new file mode 100644
index 0000000..7af7f45
--- /dev/null
+++ b/.github/workflows/ci.yml
@@ -0,0 +1,40 @@
+name: CI
+
+on:
+  push:
+    branches: [ master ]
+  pull_request:
+    branches: [ master ]
+
+permissions:
+  contents: read
+
+jobs:
+  tests:
+    runs-on: ubuntu-latest
+    services:
+      postgres:
+        image: postgres:latest
+        ports: ["5432:5432"]
+        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
+        env:
+          POSTGRES_PASSWORD: postgres
+    strategy:
+      fail-fast: false
+      matrix:
+        ruby: [ "1.9.3", "2.0.0", 2.1, 2.2, 2.4, 2.5, 2.6, 2.7, "3.0", 3.1, 3.2 ]
+    name: ${{ matrix.ruby }}
+    env:
+      BUNDLE_GEMFILE: .ci.gemfile
+    steps:
+    - uses: actions/checkout@v3
+    - uses: actions/checkout@v3
+      with:
+        repository: jeremyevans/sequel
+        path: sequel
+    - run: sudo apt-get -yqq install libpq-dev
+    - uses: ruby/setup-ruby@v1
+      with:
+        ruby-version: ${{ matrix.ruby }}
+        bundler-cache: true
+    - run: bundle exec rake spec_ci
diff --git a/.gitignore b/.gitignore
index b057498..3e349e6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -6,4 +6,4 @@
 /tmp
 /lib/*.so
 *.gem
-*.rbc
+/coverage
diff --git a/CHANGELOG b/CHANGELOG
index ac9576a..7a7d8fc 100644
--- a/CHANGELOG
+++ b/CHANGELOG
@@ -1,3 +1,31 @@
+=== 1.17.1 (2023-01-05)
+
+* Modify LDFLAGS when building on MacOS to allow undefined functions (delphaber) (#53)
+
+=== 1.17.0 (2022-10-05)
+
+* Do not use pgresult_stream_any when using pg <1.4.4, to avoid double free in certain cases (larskanis) (#50)
+
+* Support new pgresult_stream_any API in pg 1.4.4 (larskanis) (#50)
+
+=== 1.16.0 (2022-08-16)
+
+* Fix memory leak when using streaming with pg 1.3.4+ (jeremyevans) (#48)
+
+* Modify LDFLAGS when building on MacOS arm64 to allow undefined functions (maxsz) (#46)
+
+* Adjust many internal C types to fix compilation warnings (jeremyevans)
+
+=== 1.15.0 (2022-03-16)
+
+* Avoid deprecation warning in the pg_streaming extension on pg 1.3+ when streaming a query with bound parameters (jeremyevans)
+
+* Use pgresult_stream_any when using pg 1.3.4+ for faster streaming (jeremyevans)
+
+* Do not use streaming by default for Dataset#paged_each in the pg_streaming extension (jeremyevans)
+
+* Avoid verbose warning if loading sequel_pg after Sequel pg_array extension (jeremyevans)
+
 === 1.14.0 (2020-09-22)
 
 * Reduce stack memory usage for result sets with 64 or fewer columns (jeremyevans)
diff --git a/README.rdoc b/README.rdoc
index f3ac4b5..66fc8b3 100644
--- a/README.rdoc
+++ b/README.rdoc
@@ -77,9 +77,16 @@ variables to specify the shared library and header directories.
 
 == Running the specs
 
-sequel_pg doesn't ship with it's own specs.  It's designed to
-replace a part of Sequel, so it just uses Sequel's specs.
-Specifically, the spec_postgres rake task from Sequel.
+sequel_pg is designed to replace a part of Sequel, so it shold be tested
+using Sequel's specs (the spec_postgres rake task).  There is a spec_cov
+task that assumes you have Sequel checked out at ../sequel, and uses a
+small spec suite for parts of sequel_pg not covered by Sequel's specs.
+It sets the SEQUEL_PG_STREAM environment variable when running Sequel's
+specs, make sure that spec/spec_config.rb in Sequel is set to connect
+to PostgreSQL and use the following additional settings:
+
+  DB.extension(:pg_streaming)
+  DB.stream_all_queries = true
 
 == Reporting issues/bugs
 
@@ -112,20 +119,6 @@ requirements:
 
   rake build
 
-== Platforms Supported
-
-sequel_pg has been tested on the following:
-
-* ruby 1.9.3
-* ruby 2.0
-* ruby 2.1
-* ruby 2.2
-* ruby 2.3
-* ruby 2.4
-* ruby 2.5
-* ruby 2.6
-* ruby 2.7
-
 == Known Issues
 
 * You must be using the ISO PostgreSQL date format (which is the
diff --git a/Rakefile b/Rakefile
index d61d176..33b6cbf 100644
--- a/Rakefile
+++ b/Rakefile
@@ -1,7 +1,6 @@
-require "rake"
 require "rake/clean"
 
-CLEAN.include %w'**.rbc rdoc'
+CLEAN.include %w'**.rbc rdoc coverage'
 
 desc "Do a full cleaning"
 task :distclean do
@@ -19,3 +18,25 @@ begin
   Rake::ExtensionTask.new('sequel_pg')
 rescue LoadError
 end
+
+# This assumes you have sequel checked out in ../sequel, and that
+# spec_postgres is setup to run Sequel's PostgreSQL specs.
+desc "Run tests with coverage"
+task :spec_cov=>:compile do
+  ENV['RUBYLIB'] = "#{__dir__}/lib:#{ENV['RUBYLIB']}"
+  ENV['RUBYOPT'] = "-r #{__dir__}/spec/coverage_helper.rb #{ENV['RUBYOPT']}"
+  ENV['SIMPLECOV_COMMAND_NAME'] = "sequel_pg"
+  sh %'#{FileUtils::RUBY} -I ../sequel/lib spec/sequel_pg_spec.rb'
+
+  ENV['RUBYOPT'] = "-I lib -r sequel -r sequel/extensions/pg_array #{ENV['RUBYOPT']}"
+  ENV['SEQUEL_PG_STREAM'] = "1"
+  ENV['SIMPLECOV_COMMAND_NAME'] = "sequel"
+  sh %'cd ../sequel && #{FileUtils::RUBY} spec/adapter_spec.rb postgres'
+end
+
+desc "Run CI tests"
+task :spec_ci=>:compile do
+  ENV['SEQUEL_PG_SPEC_URL'] = ENV['SEQUEL_POSTGRES_URL'] = "postgres://localhost/?user=postgres&password=postgres"
+  sh %'#{FileUtils::RUBY} -I lib -I sequel/lib spec/sequel_pg_spec.rb'
+  sh %'cd sequel && #{FileUtils::RUBY} -I lib -I ../lib spec/adapter_spec.rb postgres'
+end
diff --git a/debian/changelog b/debian/changelog
index 721ef3f..ce7ca04 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,10 @@
+ruby-sequel-pg (1.17.1-1) UNRELEASED; urgency=low
+
+  * New upstream release.
+  * New upstream release.
+
+ -- Debian Janitor <janitor@jelmer.uk>  Fri, 19 May 2023 14:32:42 -0000
+
 ruby-sequel-pg (1.14.0-1) unstable; urgency=low
 
   [ Debian Janitor ]
diff --git a/ext/sequel_pg/extconf.rb b/ext/sequel_pg/extconf.rb
index e43c5a5..62b61b0 100644
--- a/ext/sequel_pg/extconf.rb
+++ b/ext/sequel_pg/extconf.rb
@@ -2,6 +2,7 @@ require 'mkmf'
 $CFLAGS << " -O0 -g" if ENV['DEBUG']
 $CFLAGS << " -Drb_tainted_str_new=rb_str_new -DNO_TAINT" if RUBY_VERSION >= '2.7'
 $CFLAGS << " -Wall " unless RUBY_PLATFORM =~ /solaris/
+$LDFLAGS += " -Wl,-U,_pg_get_pgconn -Wl,-U,_pg_get_result_enc_idx -Wl,-U,_pgresult_get -Wl,-U,_pgresult_stream_any " if RUBY_PLATFORM =~ /darwin/
 dir_config('pg', ENV["POSTGRES_INCLUDE"] || (IO.popen("pg_config --includedir").readline.chomp rescue nil),
                  ENV["POSTGRES_LIB"]     || (IO.popen("pg_config --libdir").readline.chomp rescue nil))
 
diff --git a/ext/sequel_pg/sequel_pg.c b/ext/sequel_pg/sequel_pg.c
index f788aff..363f26e 100644
--- a/ext/sequel_pg/sequel_pg.c
+++ b/ext/sequel_pg/sequel_pg.c
@@ -1,4 +1,4 @@
-#define SEQUEL_PG_VERSION_INTEGER 11400
+#define SEQUEL_PG_VERSION_INTEGER 11701
 
 #include <string.h>
 #include <stdio.h>
@@ -70,9 +70,11 @@
 PGconn* pg_get_pgconn(VALUE);
 PGresult* pgresult_get(VALUE);
 int pg_get_result_enc_idx(VALUE);
+VALUE pgresult_stream_any(VALUE self, int (*yielder)(VALUE, int, int, void*), void* data);
 
 static int spg_use_ipaddr_alloc;
 static int spg_use_pg_get_result_enc_idx;
+static int spg_use_pg_stream_any;
 
 static VALUE spg_Sequel;
 static VALUE spg_PGArray;
@@ -197,10 +199,10 @@ static int enc_get_index(VALUE val) {
   } while(0)
 
 static VALUE
-pg_text_dec_integer(char *val, int len)
+pg_text_dec_integer(char *val, size_t len)
 {
   long i;
-  int max_len;
+  size_t max_len;
 
   if( sizeof(i) >= 8 && FIXNUM_MAX >= 1000000000000000000LL ){
     /* 64 bit system can safely handle all numbers up to 18 digits as Fixnum */
@@ -255,7 +257,7 @@ pg_text_dec_integer(char *val, int len)
 
 static VALUE spg__array_col_value(char *v, size_t length, VALUE converter, int enc_index, int oid, VALUE db);
 
-static VALUE read_array(int *index, char *c_pg_array_string, int array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) {
+static VALUE read_array(int *index, char *c_pg_array_string, long array_string_length, VALUE buf, VALUE converter, int enc_index, int oid, VALUE db) {
   int word_index = 0;
   char *word = RSTRING_PTR(buf);
 
@@ -351,7 +353,7 @@ static VALUE read_array(int *index, char *c_pg_array_string, int array_string_le
   return array;
 }
 
-static VALUE check_pg_array(int* index, char *c_pg_array_string, int array_string_length) {
+static VALUE check_pg_array(int* index, char *c_pg_array_string, long array_string_length) {
   if (array_string_length == 0) {
     rb_raise(rb_eArgError, "unexpected PostgreSQL array format, empty");
   } else if (array_string_length == 2 && c_pg_array_string[0] == '{' && c_pg_array_string[0] == '}') {
@@ -382,7 +384,7 @@ static VALUE parse_pg_array(VALUE self, VALUE pg_array_string, VALUE converter)
   /* convert to c-string, create additional ruby string buffer of
    * the same length, as that will be the worst case. */
   char *c_pg_array_string = StringValueCStr(pg_array_string);
-  int array_string_length = RSTRING_LEN(pg_array_string);
+  long array_string_length = RSTRING_LEN(pg_array_string);
   int index = 1;
   VALUE ary;
 
@@ -532,7 +534,7 @@ static VALUE spg_timestamp(const char *s, VALUE self, size_t length, int tz) {
   }
 
   if (remaining < 19) {
-    return spg_timestamp_error(s, self, "unexpected timetamp format, too short");
+    return spg_timestamp_error(s, self, "unexpected timestamp format, too short");
   }
 
   year = parse_year(&p, &remaining);
@@ -1010,12 +1012,12 @@ static int spg_timestamp_info_bitmask(VALUE self) {
   return tz;
 }
 
-static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* colconvert, int enc_index) {
+static VALUE spg__col_value(VALUE self, PGresult *res, int i, int j, VALUE* colconvert, int enc_index) {
   char *v;
   VALUE rv;
   int ftype = PQftype(res, j);
   VALUE array_type;
-  VALUE scalar_oid;
+  int scalar_oid;
   struct spg_blob_initialization bi;
 
   if(PQgetisnull(res, i, j)) {
@@ -1249,20 +1251,20 @@ static VALUE spg__col_value(VALUE self, PGresult *res, long i, long j, VALUE* co
   return rv;
 }
 
-static VALUE spg__col_values(VALUE self, VALUE v, VALUE *colsyms, long nfields, PGresult *res, long i, VALUE *colconvert, int enc_index) {
+static VALUE spg__col_values(VALUE self, VALUE v, VALUE *colsyms, long nfields, PGresult *res, int i, VALUE *colconvert, int enc_index) {
   long j;
   VALUE cur;
   long len = RARRAY_LEN(v);
   VALUE a = rb_ary_new2(len);
   for (j=0; j<len; j++) {
     cur = rb_ary_entry(v, j);
-    rb_ary_store(a, j, cur == Qnil ? Qnil : spg__col_value(self, res, i, NUM2LONG(cur), colconvert, enc_index));
+    rb_ary_store(a, j, cur == Qnil ? Qnil : spg__col_value(self, res, i, NUM2INT(cur), colconvert, enc_index));
   }
   return a;
 }
 
-static long spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
-  long j;
+static int spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
+  int j;
   for (j=0; j<nfields; j++) {
     if (colsyms[j] == v) {
       return j;
@@ -1273,7 +1275,7 @@ static long spg__field_id(VALUE v, VALUE *colsyms, long nfields) {
 
 static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
   long i;
-  long j;
+  int j;
   VALUE cur;
   long len = RARRAY_LEN(v);
   VALUE pg_columns = rb_ary_new2(len);
@@ -1286,9 +1288,9 @@ static VALUE spg__field_ids(VALUE v, VALUE *colsyms, long nfields) {
 }
 
 static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE *colconvert, int enc_index) {
-  long i;
-  long j;
-  long nfields;
+  int i;
+  int j;
+  int nfields;
   int timestamp_info = 0;
   int time_info = 0;
   VALUE conv_procs = 0;
@@ -1378,10 +1380,10 @@ static void spg_set_column_info(VALUE self, PGresult *res, VALUE *colsyms, VALUE
 }
 
 static VALUE spg_yield_hash_rows_internal(VALUE self, PGresult *res, int enc_index, VALUE* colsyms, VALUE* colconvert) {
-  long ntuples;
-  long nfields;
-  long i;
-  long j;
+  int ntuples;
+  int nfields;
+  int i;
+  int j;
   VALUE h;
   VALUE opts;
   VALUE pg_type;
@@ -1481,7 +1483,7 @@ static VALUE spg_yield_hash_rows_internal(VALUE self, PGresult *res, int enc_ind
     case SPG_YIELD_KV_HASH_GROUPS:
       /* Hash with single key and single value */
       {
-        VALUE k, v;
+        int k, v;
         h = rb_hash_new();
         k = spg__field_id(rb_ary_entry(pg_value, 0), colsyms, nfields);
         v = spg__field_id(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1509,7 +1511,8 @@ static VALUE spg_yield_hash_rows_internal(VALUE self, PGresult *res, int enc_ind
     case SPG_YIELD_MKV_HASH_GROUPS:
       /* Hash with array of keys and single value */
       {
-        VALUE k, v;
+        VALUE k;
+        int v;
         h = rb_hash_new();
         k = spg__field_ids(rb_ary_entry(pg_value, 0), colsyms, nfields);
         v = spg__field_id(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1537,7 +1540,8 @@ static VALUE spg_yield_hash_rows_internal(VALUE self, PGresult *res, int enc_ind
     case SPG_YIELD_KMV_HASH_GROUPS:
       /* Hash with single keys and array of values */
       {
-        VALUE k, v;
+        VALUE v;
+        int k;
         h = rb_hash_new();
         k = spg__field_id(rb_ary_entry(pg_value, 0), colsyms, nfields);
         v = spg__field_ids(rb_ary_entry(pg_value, 1), colsyms, nfields);
@@ -1619,7 +1623,7 @@ def_spg_yield_hash_rows(1664)
 
 static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
   PGresult *res;
-  long nfields;
+  int nfields;
   int enc_index;
 
   if (!RTEST(rres)) {
@@ -1634,7 +1638,7 @@ static VALUE spg_yield_hash_rows(VALUE self, VALUE rres, VALUE ignore) {
   else if (nfields <= 64) return spg_yield_hash_rows_64(self, res, enc_index);
   else if (nfields <= 256) return spg_yield_hash_rows_256(self, res, enc_index);
   else if (nfields <= 1664) return spg_yield_hash_rows_1664(self, res, enc_index);
-  else rb_raise(rb_eRangeError, "more than 1664 columns in query (%ld columns detected)", nfields);
+  else rb_raise(rb_eRangeError, "more than 1664 columns in query (%d columns detected)", nfields);
 
   /* UNREACHABLE */
   return self;
@@ -1659,14 +1663,48 @@ static VALUE spg_set_single_row_mode(VALUE self) {
   return Qnil;
 }
 
+struct spg__yield_each_row_stream_data {
+  VALUE self;
+  VALUE *colsyms;
+  VALUE *colconvert;
+  VALUE pg_value;
+  int enc_index;
+  char type;
+};
+
+static int spg__yield_each_row_stream(VALUE rres, int ntuples, int nfields, void *rdata) {
+  struct spg__yield_each_row_stream_data* data = (struct spg__yield_each_row_stream_data *)rdata;
+  VALUE h = rb_hash_new();
+  VALUE self = data->self;
+  VALUE *colsyms = data->colsyms;
+  VALUE *colconvert= data->colconvert;
+  PGresult *res = pgresult_get(rres);
+  int enc_index = data->enc_index;
+  int j;
+
+  for(j=0; j<nfields; j++) {
+    rb_hash_aset(h, colsyms[j], spg__col_value(self, res, 0, j, colconvert , enc_index));
+  }
+
+  if(data->type == SPG_YIELD_MODEL) {
+    VALUE model = rb_obj_alloc(data->pg_value);
+    rb_ivar_set(model, spg_id_values, h);
+    rb_yield(model);
+  } else {
+    rb_yield(h);
+  }
+  return 1; /* clear the result */
+}
+
 static VALUE spg__yield_each_row_internal(VALUE self, VALUE rconn, VALUE rres, PGresult *res, int enc_index, VALUE *colsyms, VALUE *colconvert) {
-  long nfields;
-  long j;
+  int nfields;
+  int j;
   VALUE h;
   VALUE opts;
   VALUE pg_type;
   VALUE pg_value = Qnil;
   char type = SPG_YIELD_NORMAL;
+  struct spg__yield_each_row_stream_data data;
 
   nfields = PQnfields(res);
 
@@ -1684,6 +1722,18 @@ static VALUE spg__yield_each_row_internal(VALUE self, VALUE rconn, VALUE rres, P
 
   spg_set_column_info(self, res, colsyms, colconvert, enc_index);
 
+  if (spg_use_pg_stream_any) {
+    data.self = self;
+    data.colsyms = colsyms;
+    data.colconvert = colconvert;
+    data.pg_value = pg_value;
+    data.enc_index = enc_index;
+    data.type = type;
+    
+    pgresult_stream_any(rres, spg__yield_each_row_stream, &data);
+    return self;
+  }
+
   while (PQntuples(res) != 0) {
     h = rb_hash_new();
     for(j=0; j<nfields; j++) {
@@ -1729,7 +1779,7 @@ static VALUE spg__yield_each_row(VALUE self) {
   VALUE rres;
   VALUE rconn;
   int enc_index;
-  long nfields;
+  int nfields;
 
   rconn = rb_ary_entry(self, 1);
   self = rb_ary_entry(self, 0);
@@ -1750,7 +1800,7 @@ static VALUE spg__yield_each_row(VALUE self) {
   else if (nfields <= 1664) return spg__yield_each_row_1664(self, rconn, rres, res, enc_index);
   else {
     rb_funcall(rres, spg_id_clear, 0);
-    rb_raise(rb_eRangeError, "more than 1664 columns in query (%ld columns detected)", nfields);
+    rb_raise(rb_eRangeError, "more than 1664 columns in query (%d columns detected)", nfields);
   }
 
   /* UNREACHABLE */
@@ -1809,10 +1859,21 @@ void Init_sequel_pg(void) {
     }
   }
 
-  if (RTEST(rb_eval_string("defined?(PG::VERSION) && PG::VERSION.to_f >= 1.2"))) {
-    spg_use_pg_get_result_enc_idx = 1;
+  c = rb_eval_string("defined?(PG::VERSION) && PG::VERSION.split('.').map(&:to_i)");
+  if (RB_TYPE_P(c, T_ARRAY) && RARRAY_LEN(c) >= 3) {
+    if (FIX2INT(RARRAY_AREF(c, 0)) > 1) {
+      spg_use_pg_get_result_enc_idx = 1;
+      spg_use_pg_stream_any = 1;
+    } else if (FIX2INT(RARRAY_AREF(c, 0)) == 1) {
+      if (FIX2INT(RARRAY_AREF(c, 1)) >= 2) {
+        spg_use_pg_get_result_enc_idx = 1;
+      }
+      if (FIX2INT(RARRAY_AREF(c, 1)) > 4 || (FIX2INT(RARRAY_AREF(c, 1)) == 4 && FIX2INT(RARRAY_AREF(c, 2)) >= 4)) {
+        spg_use_pg_stream_any = 1;
+      }
+    }
   }
-  
+
   rb_const_set(spg_Postgres, rb_intern("SEQUEL_PG_VERSION_INTEGER"), INT2FIX(SEQUEL_PG_VERSION_INTEGER));
 
   spg_id_BigDecimal = rb_intern("BigDecimal");
diff --git a/lib/sequel/extensions/pg_streaming.rb b/lib/sequel/extensions/pg_streaming.rb
index 59c126e..7734738 100644
--- a/lib/sequel/extensions/pg_streaming.rb
+++ b/lib/sequel/extensions/pg_streaming.rb
@@ -1,9 +1,11 @@
+# :nocov:
 unless Sequel::Postgres.respond_to?(:supports_streaming?)
   raise LoadError, "either sequel_pg not loaded, or an old version of sequel_pg loaded"
 end
 unless Sequel::Postgres.supports_streaming?
   raise LoadError, "streaming is not supported by the version of libpq in use"
 end
+# :nocov:
 
 # Database methods necessary to support streaming.  You should load this extension
 # into your database object:
@@ -73,12 +75,20 @@ module Sequel::Postgres::Streaming
 
     private
 
+    # :nocov:
+    unless Sequel::Postgres::Adapter.method_defined?(:send_query_params)
+      def send_query_params(*args)
+        send_query(*args)
+      end
+    end
+    # :nocov:
+
     if Sequel::Database.instance_methods.map(&:to_s).include?('log_connection_yield')
       # If using single row mode, send the query instead of executing it.
       def execute_query(sql, args)
         if @single_row_mode
           @single_row_mode = false
-          @db.log_connection_yield(sql, self, args){args ? send_query(sql, args) : send_query(sql)}
+          @db.log_connection_yield(sql, self, args){args ? send_query_params(sql, args) : send_query(sql)}
           set_single_row_mode
           block
           self
@@ -87,6 +97,7 @@ module Sequel::Postgres::Streaming
         end
       end
     else
+      # :nocov:
       def execute_query(sql, args)
         if @single_row_mode
           @single_row_mode = false
@@ -98,6 +109,7 @@ module Sequel::Postgres::Streaming
           super
         end
       end
+      # :nocov:
     end
   end
 
@@ -122,7 +134,12 @@ module Sequel::Postgres::Streaming
       unless block_given?
         return enum_for(:paged_each, opts)
       end
-      stream.each(&block)
+
+      if stream_results?
+        each(&block)
+      else
+        super
+      end
     end
 
     # Return a clone of the dataset that will use streaming to load
diff --git a/lib/sequel_pg/sequel_pg.rb b/lib/sequel_pg/sequel_pg.rb
index 3a87fe7..5381331 100644
--- a/lib/sequel_pg/sequel_pg.rb
+++ b/lib/sequel_pg/sequel_pg.rb
@@ -53,11 +53,13 @@ class Sequel::Postgres::Dataset
     end
   end
 
+  # :nocov:
   unless Sequel::Dataset.method_defined?(:as_hash)
     # Handle previous versions of Sequel that use to_hash instead of as_hash
     alias to_hash as_hash
     remove_method :as_hash
   end
+  # :nocov:
 
   # In the case where both arguments given, use an optimized version.
   def to_hash_groups(key_column, value_column = nil, opts = Sequel::OPTS)
@@ -120,6 +122,11 @@ if defined?(Sequel::Postgres::PGArray)
   # pg_array extension previously loaded
 
   class Sequel::Postgres::PGArray::Creator
+    # :nocov:
+    # Avoid method redefined verbose warning
+    alias call call if method_defined?(:call)
+    # :nocov:
+
     # Override Creator to use sequel_pg's C-based parser instead of the pure ruby parser.
     def call(string)
       Sequel::Postgres::PGArray.new(Sequel::Postgres.parse_pg_array(string, @converter), @type)
diff --git a/sequel_pg.gemspec b/sequel_pg.gemspec
index dd3fe46..27e8cf7 100644
--- a/sequel_pg.gemspec
+++ b/sequel_pg.gemspec
@@ -17,13 +17,15 @@ SEQUEL_PG_GEMSPEC = Gem::Specification.new do |s|
   s.extensions << 'ext/sequel_pg/extconf.rb'
   s.add_dependency("pg", [">= 0.18.0", "!= 1.2.0"])
   s.add_dependency("sequel", [">= 4.38.0"])
-  s.metadata = {
-    'bug_tracker_uri'   => 'https://github.com/jeremyevans/sequel_pg/issues',
-    'changelog_uri'     => 'https://github.com/jeremyevans/sequel_pg/blob/master/CHANGELOG',
-    'documentation_uri' => 'https://github.com/jeremyevans/sequel_pg/blob/master/README.rdoc',
-    'mailing_list_uri'  => 'https://groups.google.com/forum/#!forum/sequel-talk',
-    'source_code_uri'   => 'https://github.com/jeremyevans/sequel_pg',
-  }
+  if s.respond_to?(:metadata=)
+    s.metadata = {
+      'bug_tracker_uri'   => 'https://github.com/jeremyevans/sequel_pg/issues',
+      'changelog_uri'     => 'https://github.com/jeremyevans/sequel_pg/blob/master/CHANGELOG',
+      'documentation_uri' => 'https://github.com/jeremyevans/sequel_pg/blob/master/README.rdoc',
+      'mailing_list_uri'  => 'https://github.com/jeremyevans/sequel_pg/discussions',
+      'source_code_uri'   => 'https://github.com/jeremyevans/sequel_pg',
+    }
+  end
   s.description = <<END
 sequel_pg overwrites the inner loop of the Sequel postgres
 adapter row fetching code with a C version.  The C version
diff --git a/spec/coverage_helper.rb b/spec/coverage_helper.rb
new file mode 100644
index 0000000..99c5998
--- /dev/null
+++ b/spec/coverage_helper.rb
@@ -0,0 +1,10 @@
+require 'simplecov'
+
+SimpleCov.start do
+  enable_coverage :branch
+  command_name ENV['SIMPLECOV_COMMAND_NAME']
+  root File.dirname(__dir__)
+  add_filter "/spec/"
+  add_group('Missing'){|src| src.covered_percent < 100}
+  add_group('Covered'){|src| src.covered_percent == 100}
+end
diff --git a/spec/sequel_pg_spec.rb b/spec/sequel_pg_spec.rb
new file mode 100644
index 0000000..99b761c
--- /dev/null
+++ b/spec/sequel_pg_spec.rb
@@ -0,0 +1,41 @@
+gem 'minitest'
+ENV['MT_NO_PLUGINS'] = '1' # Work around stupid autoloading of plugins
+require 'minitest/global_expectations/autorun'
+
+require 'sequel/core'
+
+Sequel.extension :pg_array
+db = Sequel.connect(ENV['SEQUEL_PG_SPEC_URL'] || 'postgres:///?user=sequel_test')
+db.extension :pg_streaming
+Sequel::Deprecation.output = false
+
+describe 'sequel_pg' do
+  it "should support deprecated optimized_model_load methods" do
+    db.optimize_model_load.must_equal true
+    db.optimize_model_load = false
+    db.optimize_model_load.must_equal true
+    
+    ds = db.dataset
+    ds.optimize_model_load.must_equal true
+    proc{ds.optimize_model_load = false}.must_raise RuntimeError
+    ds.with_optimize_model_load(false).optimize_model_load.must_equal false
+  end
+
+  it "should have working Sequel::Postgres::PGArray::Creator#call" do
+    Sequel::Postgres::PGArray::Creator.new('text').call('{1}').must_equal ["1"]
+  end
+
+  it "should raise for map with symbol and block" do
+    proc{db.dataset.map(:x){}}.must_raise Sequel::Error
+  end
+
+  it "should support paged_each with and without streaming" do
+    a = []
+    db.select(Sequel.as(1, :v)).paged_each{|row| a << row}
+    a.must_equal [{:v=>1}]
+
+    a = []
+    db.select(Sequel.as(1, :v)).stream.paged_each{|row| a << row}
+    a.must_equal [{:v=>1}]
+  end
+end

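For reference, the pg_streaming change above makes Dataset#paged_each stream only when the dataset has opted in; otherwise it falls back to the default implementation. A minimal usage sketch, assuming the sequel, pg and sequel_pg gems are installed, with a hypothetical connection URL and table name:

  require 'sequel'

  # Hypothetical connection URL; sequel_pg must be built and installed
  # for the pg_streaming extension to load without raising LoadError.
  DB = Sequel.connect('postgres:///mydb?user=postgres')
  DB.extension :pg_streaming

  DB[:items].paged_each { |row| p row }         # not streamed; uses the default paged_each
  DB[:items].stream.paged_each { |row| p row }  # streamed via PostgreSQL single-row mode

  # Or opt in for all queries, as the README suggests when running Sequel's specs:
  DB.stream_all_queries = true
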
Debdiff

[The following lists of changes regard files as different if they have different names, permissions or owners.]

Files in second set of .debs but not in first

-rw-r--r--  root/root   /usr/lib/debug/.build-id/73/96d00fae7ec637fc471e8c4544d668d7256035.debug
-rw-r--r--  root/root   /usr/share/rubygems-integration/3.1.0/specifications/sequel_pg-1.17.1.gemspec

Files in first set of .debs but not in second

-rw-r--r--  root/root   /usr/lib/debug/.build-id/ea/ccf184290ae08c206a42a790dcd53eb6c55cbd.debug
-rw-r--r--  root/root   /usr/share/rubygems-integration/3.1.0/specifications/sequel_pg-1.14.0.gemspec

No differences were encountered between the control files of package ruby-sequel-pg

Control files of package ruby-sequel-pg-dbgsym: lines which differ (wdiff format)

  • Build-Ids: eaccf184290ae08c206a42a790dcd53eb6c55cbd 7396d00fae7ec637fc471e8c4544d668d7256035
