Cool Tips When Using Knex with Postgres
I’ve been using Knex for a long time, closing in on two years. Just to clarify, Knex is NOT an ORM (Object-Relational Mapper). ORMs try to model your database into neat and logical classes, but I agree with Laurie Voss: an ORM is an anti-pattern. I’ve used Sequelize, Doctrine ORM, and others. Every. Single. Time. ORMs will let you down and keep you from leveraging your database to its fullest potential. Developers like them because they can encapsulate all the logic in the code and treat the database as just a dumb data store. This is a mistake.
Using an ORM is an anti-pattern and flies in the face of separation of concerns, abstraction, and more.
— Shane A. Stillwell
Knex makes creating SQL queries composable, programmable, and reusable. Let’s face it: SQL syntax does not lend itself to programmatically building queries. It reads much more like English sentences than programming constructs.
the output of SQL is not an object but an answer to a question
— Laurie Voss
Armed with this knowledge, let’s see how we can leverage Knex in our Postgres projects.
The Ground Rules
Since it’s 2018 now, we need to make some assumptions.
- You’re familiar with building an app using the 12 Factor App methodology, particularly the section on Config, which teaches that all configuration options should be set by environment variables. My personal preference is to set sane defaults in my config.js file, all of which can be overridden by an environment file (not committed to the repo) or by environment variables.
- You ❤️ love ❤️ PG ranges. What are ranges? Think of something like an event, which has a start and an end. You could store it as two fields, but in PG you can store it as one field: [2018-05-29, 2018-06-02). Now you can run queries that ask “does the range include this date?”

SELECT '[2018-05-29, 2018-06-02)'::daterange @> '2018-06-01'::date;

- You’re using Knex for migrations as well. This isn’t a requirement for using Knex, but it comes with some handy tools for creating solid migrations.
A Better knexfile.js
This is the heart of your configuration for Knex / PG. The knexfile.js recommended by the docs is pretty lame. We can do better: transform it from a static file into one driven by environment variables. See the inline comments.
const pg = require('pg')
// We'll cover this later
require('pg-range').install(pg)
const path = require('path')

// YMMV: my config module holds the options
const config = require('./config')

// Pull options out of our config.
// Use a connection string; don't mess around with individual host, user, pass, db options.
const URL = config.get('DATABASE_URL')
const POOL_MIN = config.get('DATABASE_POOL_MIN')
const POOL_MAX = config.get('DATABASE_POOL_MAX')
const TIMEOUT = config.get('DATABASE_TIMEOUT')
const DEBUG = config.get('DEBUG')
const NODE_ENV = config.get('NODE_ENV')

// Set migration options. Here we have a single migration directory to rule them all.
const migrations = {
  tableName: 'knex_migrations',
  directory: path.normalize(path.join(__dirname, '/migrations')),
}

// Set seed options. Here we have a seed directory per environment.
const seeds = {
  directory: path.normalize(path.join(__dirname, `/seeds/${NODE_ENV}`)),
}

// Sometimes needed when connecting to Heroku locally.
// To trigger this, add the query string to the connection URL,
// e.g. postgres://postgres@localhost/postgres?sslmode=require
if (/sslmode=require/.test(URL)) {
  pg.defaults.ssl = true
}

// Now just export a custom object keyed by our environment
exports[NODE_ENV] = {
  client: 'pg',
  connection: URL,
  pool: {
    min: POOL_MIN,
    max: POOL_MAX,
  },
  acquireConnectionTimeout: TIMEOUT,
  debug: DEBUG,
  migrations,
  seeds,
}
Using Ranges
As mentioned, you love PG ranges. One library I’ve found useful on a new site I’m building is pg-range. If you look back at our knexfile.js, you can see we included support for it with these lines:
// We'll cover this later
require('pg-range').install(pg)
With this small change, all range columns are now converted into objects. This value

{
  duration: '[2018-06-01, 2018-06-30)'
}

is converted to this object:

{
  duration: {
    begin: '2018-06-01',
    end: '2018-06-30',
    bounds: '[)'
  }
}
*Caveat: the example above is contrived. I believe it converts PG date values into JavaScript Date objects, but I need to double-check.
Under the hood it uses the stRange.js library. The API is pretty useful:
stRange.js API
For extended documentation on all functions, please see the stRange.js API Documentation.
Range
- begin
- bounds
- end
- .prototype.compareBegin(begin)
- .prototype.compareEnd(end)
- .prototype.contains(value)
- .prototype.intersects(other)
- .prototype.isBounded()
- .prototype.isEmpty()
- .prototype.isFinite()
- .prototype.isInfinite()
- .prototype.isUnbounded()
- .prototype.toJSON()
- .prototype.toString()
- .prototype.valueOf()
- .compareBeginToBegin(a, b)
- .compareBeginToEnd(a, b)
- .compareEndToEnd(a, b)
- .parse(range, [parseEndpoint])
- .union(union, a, b)
Additionally, you can use the Range class to create a range for inserting into PG.
const { Range } = require('pg-range')
const duration = Range('2018-06-01', '2018-06-30', '[)')
*Careful: the default bounds is [], which may not be what you’re expecting.
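To make the bounds distinction concrete, here is a plain-JavaScript sketch of how inclusive/exclusive bounds affect a containment check. This only mimics the semantics; it is not pg-range’s or stRange.js’s actual implementation:

```javascript
// Sketch of range-containment semantics (not the library's real code):
// '[' / ']' are inclusive endpoints, '(' / ')' are exclusive.
// ISO date strings compare correctly as plain strings, so no Date parsing is needed here.
function contains (range, value) {
  const lowerOk = range.bounds[0] === '[' ? value >= range.begin : value > range.begin
  const upperOk = range.bounds[1] === ']' ? value <= range.end : value < range.end
  return lowerOk && upperOk
}

const halfOpen = { begin: '2018-06-01', end: '2018-06-30', bounds: '[)' }
contains(halfOpen, '2018-06-01') // true  — '[' includes the start
contains(halfOpen, '2018-06-30') // false — ')' excludes the end
```

This is exactly why the default `[]` bounds can surprise you: with `[]`, the end date would be included.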
Knex Super Model
https://gitlab.com/shanestillwell/supermodel
This is a recent find that I wish I’d known about sooner. It’s just a thin base class that makes it easy to Create, Read, Update, and Delete (CRUD) records to/from the database. Some of its conventions for deriving the table name are genius. I’ve just started using it in a new project of mine, but I can see it’s going to be a keeper. The module hasn’t been updated in two years, but I don’t think that’s a big deal, since it’s pretty basic and is really more JavaScript-specific than Knex-specific.
class User extends Base {}
const user = await User.create({ foo: 'bar' })
expect(user instanceof User).toBe(true)
Knex Migrate for a better migration
Sure, Knex comes with some migration functions, but they’re limited: you can only roll back one migration or go to the latest. Instead, let’s call on knex-migrate to save our bacon. Take my contract tests (you know, the ones that start up a server and hit your API like a client would). Before I run them, I want to migrate the database all the way down to nothing and then back up. With plain Knex this would be a pain, but with knex-migrate we just need to specify that we want to go down to 0.
knex-migrate down --to 0 && knex-migrate up
It has a quite impressive list of options that I’ve had to call on a few times to fix my migrations.
➜ api (dev) ✗ knex-migrate --help
Usage
$ knex-migrate <command> [options]
Commands
generate Generate migration
pending Lists all pending migrations
list Lists all executed migrations
up Performs all pending migrations
down Rollbacks last migration
rollback Rollbacks last batch of migrations
redo Rollbacks last batch and performs all migrations
Options for "up" and "down":
--to, -t Migrate up to (down to) specific version
--from, -f Start migration from specific version
--only, -o Migrate only specific version
--step, -s Limit the number of migrations to apply
Global options:
--cwd Specify the working directory
--knexfile Specify the knexfile path ($cwd/knexfile.js)
--migrations Specify migrations path ($cwd/migrations)
--env Specify environment ($KNEX_ENV || $NODE_ENV || 'development')
--raw Disable transactions
--verbose Be more verbose
As a convenience, you can skip --to flag, and just provide migration name.
Examples
$ knex-migrate up # migrate to the latest version
$ knex-migrate up 20160905 # migrate to a specific version
$ knex-migrate up --to 20160905 # the same as above
$ knex-migrate up --only 20160905 # apply a single migration
$ knex-migrate up --step # apply only the next migration
$ knex-migrate up --step 2 # apply only the next two migrations
$ knex-migrate down --to 0 # rollback all migrations
$ knex-migrate down # rollback single migration
$ knex-migrate down --step 2 # rollback the previous two migrations
$ knex-migrate rollback # rollback previous "up"
$ knex-migrate redo --verbose # rollback and migrate everything
$ knex-migrate generate create_users # generate migration creating users table
ProTip with Migrations
Good migrations are well thought out; they have both an up and a down section. At the beginning of a project, it’s easy to write migrations that just work. But as the project moves along, you need to make sure your migrations are bulletproof. I suggest your migrations be tested (by your wonderful CI/CD, of course), but when you test them, don’t just test up, and don’t just test down.
Test your migrations by migrating up, then migrating down, then back up again.
This strategy will catch so many broken down migrations, for instance when you forget to drop a function or an enum.
Some Migration Helpers
Timestamps
I’m a big fan of adding created_at / updated_at columns to my tables where it makes sense. Since we’re not idiot programmers who think this belongs in the app logic, we need a table trigger to update the updated_at timestamp whenever a record is touched. We can accomplish this with a helper function made just for adding those two fields in Knex migrations.
function addTimestamps (knex, name) {
  return knex.schema.alterTable(name, (t) => {
    t.timestamp('created_at').defaultTo(knex.fn.now())
    t.timestamp('updated_at').defaultTo(knex.fn.now())
  })
    .then(() => {
      // We need to ensure the trigger function exists, then add the table trigger
      return knex.raw(`
        CREATE OR REPLACE FUNCTION update_modified_column()
        RETURNS TRIGGER AS $$
        BEGIN
          NEW.updated_at = now();
          RETURN NEW;
        END;
        $$ language 'plpgsql';

        CREATE TRIGGER update_${name}_updated_at
        BEFORE UPDATE ON ${name}
        FOR EACH ROW
        EXECUTE PROCEDURE update_modified_column();
      `)
    })
}

exports.up = function (knex, Promise) {
  return addTimestamps(knex, 'configuration')
}
Seeds, sharing, each env
As mentioned earlier, I keep seed folders separated by environment. However, I often need to share some seed data across all environments. Below, each environment has its own 00-init.js, which can import shared logic (such as base-init.js or shared.js) from the seeds root. For example, if development and test need the same data, both can import it from a shared file.
./seeds
├── base-init.js
├── development
│ └── 00-init.js
├── shared.js
└── test
└── 00-init.js
Idempotent Seeds
Once in a while, you may have seeds that run over and over. For example, let’s say you have a configuration table that needs some prepopulated data*. Knex provides a great way to use PG’s ON CONFLICT feature to ignore existing values and only add new ones.
// A map of id → value pairs (Object.entries below expects an object, not an array)
const configurationValues = { ... }

exports.seed = function (knex, Promise) {
  return Promise.resolve()
    .then(() => {
      const inserts = Object.entries(configurationValues).map(([id, value]) => ({ id, value }))
      const query = knex('configuration').insert(inserts)
      return knex.raw(`? ON CONFLICT DO NOTHING`, [query]).then(() => true)
    })
}
*NOTE: The debate rages on: if you need to prepopulate a table with some values, do you use a migration or a seed? My personal rule of thumb is, if it’s the same values across all environments, then a migration might make more sense. YMMV.
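The Object.entries call in the seed assumes configurationValues is a plain object keyed by id (an array would hand you numeric indexes instead). A standalone illustration, with made-up values, of the mapping step on its own:

```javascript
// Turning an { id: value } map into rows suitable for insert (no DB needed).
// The keys and values here are invented for illustration.
const configurationValues = {
  site_name: 'Acme',
  max_users: '50',
}

const inserts = Object.entries(configurationValues)
  .map(([id, value]) => ({ id, value }))
// inserts → [ { id: 'site_name', value: 'Acme' }, { id: 'max_users', value: '50' } ]
```

Each row then lands in the id/value columns of the configuration table, and ON CONFLICT DO NOTHING skips ids that already exist.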
Conclusion
This wraps up a ton of lessons learned over the years while using Knex and PG. I hope some of these tips will prove helpful in your project. Feel free to shoot me any comments or questions that may have come up while reading this article.
Like This Article?
Have a look at some of my other Postgres articles