Models and Errors

And so, we have come to the last part of this journey through the major parts of app architecture. At this point, we have a fully functional app, which can take requests through a REST API, do the necessary database calls, and then return what needs to be returned to the user.

There are still two major flaws in our application, though. While we have neatly separated the initialization, configuration, routing, application logic, and database logic, we still do not have a place to put the bread and butter of the project: the business logic.

Also, as things stand now, we are always passing raw data between layers. What if, instead, we simply wrapped everything in Models, to get some guarantees, some common functionality, and a place to put the business logic that belongs to the data?

Models

What is a model? It is a blueprint describing what a given group of data should look like and how it should behave.

Before we dive into creating our sample Todo model and everything that goes with it, however, I would like to make a short excursion into error-handling territory, because

  1. Errors are models as well, and

  2. Model-creation can be an error-prone process, so it is good to have pre-prepared Error types, to make things more streamlined.

Error Models

For starters, create the folder ./models and the files ./models/error.js and ./controllers/rest/router/error.js.

Now open the last one.

// ./controllers/rest/router/error.js

module.exports = (err, req, res, next) => {
  // Our custom errors carry their HTTP status in err.data.code; anything else falls back to 500
  res.status(err.data ? err.data.code || 500 : 500);
  // res.json serializes the error; our custom errors define toJSON, so only their data is sent
  res.json(err);
};

You should recognize this as Express error-handling middleware; the four-argument signature is what tells Express to treat it as such. It reads the HTTP status code from our custom errors (falling back to 500) and sends the error back as JSON.

Now that we are using the error-handling middleware, we can also remove the .catch handler from ./controllers/rest/router.
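How exactly the router hands errors over depends on how it was written in the earlier chapters. Express only passes an error to this middleware if it is thrown synchronously inside a handler or forwarded explicitly via next, so a hypothetical handler could look like this (the route and the service call are assumptions, not the actual router code):

// Hypothetical route handler inside ./controllers/rest/router
router.get('/todos/:id', (req, res, next) =>
  globals.services.todo.get(req.params)(req.query)()
  .then(data => res.json(data))
  // forward asynchronous failures to the error-handling middleware instead of handling them here
  .catch(next));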

Next, open ./models/error.js, and type in this:

const _ = require('lodash');

const errors = {
  MODEL_MISSING_PROPERTY: {
    message: 'You have tried to create a model without all required properties',
    code: 500,
  },
};

function _Error (type, info) {
  // Merge onto a fresh object, so the shared definitions in `errors` are never mutated
  this.data = _.merge(
    {},
    errors[type]
      ? errors[type]
      : {
        message: `Error ${type} does not exist`,
        code: 500,
        info,
      },
    info ? { info } : null);
}

_Error.prototype = new Error();

_Error.prototype.toJSON = function () {
  return this.data;
};


function CustomError (type, info) {
  throw new _Error(type, info);
}

module.exports = CustomError;

After importing lodash, we create an object that specifies all the custom errors we would like to be able to throw.

Next, we create the constructor function _Error, which builds the error's data object from the matching entry in errors (or from a fallback, if the type is unknown) and merges in any extra info passed along.

We set _Error's prototype to an instance of the vanilla JavaScript Error, so that myError instanceof Error returns true, which can come in quite handy.

All our models will keep their data in the data property, so we create the toJSON method, which returns just that data object; JSON.stringify (and therefore res.json) picks it up automatically.

Finally, CustomError is a thin wrapper that simply throws a new _Error.

So, this is the file where you specify all the various errors you expect to throw. I only defined the single CustomError here, which always throws; that is not necessarily the best approach for your real-life use case and was done for the sake of expediency. The point of the error model file is to define one or more error models (CustomError in our case), which can then be used in any way you see fit: thrown, returned through rejected (or even resolved) promises, or otherwise, depending on your needs and wants.
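To make the instanceof point and the different ways of surfacing errors concrete, here is a small hypothetical snippet (not part of any file in the project):

// Hypothetical usage of CustomError
const CustomError = require('./models/error');

try {
  CustomError('MODEL_MISSING_PROPERTY', 'content');
} catch (err) {
  err instanceof Error; // true, thanks to the prototype assignment above
  JSON.stringify(err);  // serializes err.data via toJSON: message, code, and info
}

// Inside a .then callback (somePromise is a placeholder), the same throw becomes a rejected promise
somePromise.then(() => CustomError('MODEL_MISSING_PROPERTY', 'content'));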

Now we will continue by creating the Base, a base model from which all the other models will inherit common functionality.

Business Logic Models

// ./models/base.js

const _ = require('lodash');

const Base = function () {};
const CustomError = require('./error');

Base.prototype.get = function (prop) {
  return this.data[prop];
};

// Accepts either set(prop, value) or set(objectOfProps)
Base.prototype.set = function (prop, val) {
  return Object.assign(this.data,
    arguments.length === 1
    ? prop
    : { [prop]: val });
};

Base.prototype.toJSON = function () {
  return this.data;
};

// Checks the required properties, then merges the data (plus a created_at
// timestamp, if one is not already present) onto the model instance
Base.create = (Model, data) =>
  Base.checkRequiredProperties(Model.requiredProperties, data)
  && _.merge(Model, { data },
    data.created_at
    ? {}
    : { data: { created_at: new Date().toUTCString() } });

Base.checkRequiredProperties = (requiredProperties, data) => {
  if (!Array.isArray(requiredProperties)) {
    return new CustomError('UNEXPECTED_TYPE', 'requiredProperties');
  }

  const missingProperties = requiredProperties
    .filter(p => data[p] === undefined);
  return missingProperties.length
  ? new CustomError('MODEL_MISSING_PROPERTY', missingProperties.join(', '))
  : true;
};

module.exports = Base;

The Base model has one job: it provides the common functionality that the other models inherit.

Going from top to bottom, you can see that I put three methods on Base's prototype: get, set, and toJSON. These will be inherited by, and thus available on, every instance created from the Base model or its descendants, and they are meant to be used after a model has already been created. A quick illustration follows below.
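Here is a hypothetical usage snippet (the Todo model it relies on is only defined a little further down):

// Hypothetical usage of the inherited prototype methods
const todo = new Todo({ type: 'shopping', content: 'Buy milk' });

todo.get('content');                 // 'Buy milk'
todo.set('content', 'Buy oat milk'); // merges { content: 'Buy oat milk' } into todo.data
todo.set({ type: 'errand' });        // the single-argument form merges a whole object
todo.toJSON();                       // the plain data object, including the generated created_at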

There are, however, also methods attached directly to Base. These will not be inherited by its descendants; they are meant to be used before or during model creation.

Base.create takes a Model and the data to be put into that model. Every model we create will carry an array of required properties, so we can check right away whether we got all the information we need. If we did, great; if not, one of our CustomErrors is thrown, depending on what went wrong.

Now that we have our Base covered, we can create the actual model we will be working with directly:

// ./models/todo.js

const Base = require('./base');

function Todo (data) {
  return Base.create(this, data);
}
Todo.prototype = Object.assign(Todo.prototype, Base.prototype);

Todo.prototype.requiredProperties = [
  'type',
  'content',
];

Todo.create = data => new Todo(data);


module.exports = Todo;

Easy, right? Since this one does not require any special initialization, we can just call Base.create with the required data and all is well.

Now, here, instead of using prototypal inheritance, I opted for concatenative inheritance: I simply told JavaScript to take all the methods I put onto Base.prototype and copy them onto our Todo.prototype. Just be careful, because properties of the same name in Todo.prototype would get overwritten by those in Base.prototype. There are several ways to prevent this, however, e.g. by using lodash's _.omit method combined with Object.keys for Todo.prototype, as sketched below.
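For example, if Todo.prototype already defined its own toJSON before the Object.assign call and you wanted to keep it, one hypothetical way would be to filter the clashing keys out of Base.prototype first:

// Hypothetical: copy only those Base.prototype methods that Todo.prototype does not already define
const _ = require('lodash');

Todo.prototype = Object.assign(
  Todo.prototype,
  _.omit(Base.prototype, Object.keys(Todo.prototype)));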

The rest should be clear; Todo.create is just a small convenience wrapper around the constructor.

Todo is also the place to put business logic. For example, if you were building an app whose sole purpose is to tell you how many days passed between your birth and an arbitrary date, you would create a model that takes the birth date and the arbitrary date when being created. It would have a method named e.g. calculateTimeDiff, which would calculate the number of days and save it somewhere on the model. That method would constitute business logic.
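As a sketch, such a model might look like the following (this DateDiff model is purely hypothetical and not part of the repository):

// Hypothetical example of a model carrying business logic
const Base = require('./base');

function DateDiff (data) {
  return Base.create(this, data);
}
DateDiff.prototype = Object.assign(DateDiff.prototype, Base.prototype);

DateDiff.prototype.requiredProperties = [
  'birthDate',
  'targetDate',
];

// The business logic: the number of whole days between the two dates
DateDiff.prototype.calculateTimeDiff = function () {
  const msPerDay = 24 * 60 * 60 * 1000;
  const diff = Math.floor(
    (new Date(this.get('targetDate')) - new Date(this.get('birthDate'))) / msPerDay);
  this.set('daysSinceBirth', diff);
  return diff;
};

module.exports = DateDiff;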

Wiring Everything Together

All that is left is to incorporate the new and shiny models into our app. First, create:

// ./models/index.js

const Todo = require('./todo');
const CustomError = require('./error');

module.exports = {
  Todo,
  CustomError,
};

Next, wire it into globals by adding to ./index.js:

const errorHandler = require('./controllers/rest/router/error');

and

globals.models = require('./models');

and

app.use(errorHandler);

Make sure that you assign models before requiring repositories and services, and that you register the errorHandler on app after the ./controllers/rest/router routes have been mounted.
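Since ./index.js was built up over the previous chapters, the snippet below is only a rough, hypothetical sketch of that ordering; the exact require and initialization calls for repositories, services, and the router may well differ in your file:

// ./index.js (hypothetical sketch of the ordering, not the literal file)
const errorHandler = require('./controllers/rest/router/error');

globals.models = require('./models');                    // before repositories and services
globals.repositories = require('./repositories')(globals);
globals.services = require('./services')(globals);

app.use(require('./controllers/rest/router')(globals));  // mount the REST router first
app.use(errorHandler);                                    // register the error handler last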

Now we just have to tell our services to create models and send them down to the repositories, and vice versa.

Repositories

First, the repositories. Change ./repositories/base.js to read:

// ./repositories/base.js

const _ = require('lodash');

module.exports = globals => modelName => ({
  get: meta => () => globals.db(`${modelName}s`)
    .where(meta)
    .then(models =>
      models.map(model =>
        new globals.models[`${modelName[0].toUpperCase()}${modelName.slice(1)}`](model))),

  delete: meta => () =>
    globals.db(`${modelName}s`)
    .where(meta)
    .delete(),

  // If meta contains an id, update the existing row; otherwise insert a new one
  save: meta => model =>
    (meta.id
      ? globals.db(`${modelName}s`)
        .where(meta)
        .update(_.merge(model.toJSON(), { updated_at: new Date().toUTCString() }))
      : globals.db(`${modelName}s`)
        .insert(model.toJSON())),
});

As you can see, we changed the method signatures a bit, to more strongly emphasize the difference between the data in params, query, and body. Repositories now accept two kinds of data: meta, which tells the database e.g. the limit, offset, sort order, and which rows to touch, and a model, which contains the data to be saved into the database.

When data comes back from the database, any returned row has to be wrapped in a model first, as you can see in the get method.
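Put differently, a call into the todo repository instance now looks roughly like this (the instance name and data are hypothetical):

// Hypothetical calls against globals.repositories.todo
const repo = globals.repositories.todo;

// meta only: which rows to fetch; the rows come back wrapped in Todo models
repo.get({ id: 1 })()
  .then(([todo]) => todo.get('content'));

// meta plus model: where to save, and what to save
repo.save({ id: 1 })(new globals.models.Todo({ type: 'shopping', content: 'Buy milk' }));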

Services

Finally, we have to update our services. They now look as follows:

// ./services/base.js

const _ = require('lodash');

module.exports = globals => serviceName => ({
  service: serviceName,
  get: params => query => () =>
    globals.repositories[serviceName].get(Object.assign({}, params, query))(),

  create: params => query => body =>
    globals.repositories[serviceName].save(Object.assign({}, params, query))(
      new globals.models[`${serviceName[0].toUpperCase()}${serviceName.slice(1)}`](body)),

  update: params => query => body =>
    globals.repositories[serviceName].get(Object.assign({}, params, query))()
    .then(([r]) => (!r
      ? []
      : globals.repositories[serviceName]
        .save(Object.assign({}, params, query))(
          new globals.models[`${serviceName[0].toUpperCase()}${serviceName.slice(1)}`](_.merge({}, r.data, body))))),

  delete: params => query => () =>
    globals.repositories[serviceName].delete(Object.assign({}, params, query))(),

  // Re-fetches the affected row (by the given id, or the id returned by the insert),
  // so callers always get the full, up-to-date model back
  _returnActual: result => id =>
    result
    .then(createdId =>
      globals.repositories[serviceName]
      .get({ id: id || createdId[0] })()),
});

It may look confusing, but the only thing that changed is that we now work with models instead of raw data. This is much more clearly visible in ./services/todo.js:

// ./services/todo.js

const baseServices = require('./base');

module.exports = (globals) => {
  const base = baseServices(globals)('todo');

  const save = params => query => body =>
    globals.repositories.todo
    .save(Object.assign({}, params, query))(new globals.models.Todo(body));

  return Object.assign({}, base, {
    create: params => query => body =>
      base._returnActual(save(params)(query)(body))(),

    update: params => query => body =>
      base._returnActual(base.update(params)(query)(body))(params.id),
  });
};

The save function now passes meta and a model to globals.repositories.todo.save. The update method, meanwhile, has switched from using the local save function to using base.update, because we first have to check that the model we are trying to change actually exists in the database: we now build a "final form" of the model, containing all of its data rather than just the new pieces, and save that to the database.
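Traced step by step, an update call now takes roughly this path (the id and body are made up):

// services.todo.update({ id: 1 })({})({ content: 'Buy oat milk' }), roughly what happens:
// 1. base.update fetches the existing row:      repositories.todo.get({ id: 1 })()
// 2. the row's data is merged with the body:    _.merge({}, existing.data, { content: 'Buy oat milk' })
// 3. a full Todo model is built from the merge: new globals.models.Todo(merged)
// 4. the model is saved back:                   repositories.todo.save({ id: 1 })(todoModel)
// 5. _returnActual re-fetches the row by id, so the caller receives the fresh, complete model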

Models and Errors — GitHub Repository

Conclusion

This really is all there is to models. They exist to make our lives simpler by abstracting the last piece of logic out of the services. They also make testing much easier, since they can be mocked trivially or used together with other mocks.

This also concludes the main part of this guide, which I hope successfully explained how logic is commonly separated for better maintainability, readability, and general awesomeness.

There are, however, three more chapters planned.

In the next chapter, we will cover sanitizing data after we receive a request and before we respond. This is the first line of defence against the most basic injection techniques.

This will be followed by a chapter on authorization, where we will restrict access to certain API endpoints to specific user groups.

Finally, the guide will conclude for good with a chapter on authentication, where we will check if the user is really who they say they are.

results matching ""

    No results matching ""