We will outline some additional considerations you must address when crafting APIs.
Automated Testing
You shouldn't deliver software without automated testing, period. Testing is too large a topic to cover here and a good candidate for a post of its own, but it is non-negotiable: don't try to ship an API without it built in. Two good frameworks in this space are mocha and supertest.
Application Performance Monitoring (APM)
This cannot be overemphasized: Understanding how your API is performing, especially in production, is paramount to its success. This should not be treated as a nice-to-have; this should be a requirement for any API before being launched. The two leaders in this category are AppDynamics and New Relic.
Load Testing
You must understand how your application performs under load. Are there deadlocks? Memory leaks? Runaway CPU or memory usage? These are all things you need to uncover before releasing an API to production, and the results will also help you plan server capacity and scaling. This, too, is a broad topic with many tools available; some examples are Apache ab, loadtest, weighttp and LoadUI.
Logging
In Part 2, we briefly mentioned that logging was set up for this app but didn't dive deeply into the topic. Logging is a critical part of any system, and especially of ours, since we route all 500 Internal Server Errors to the logging system and hand the caller a support identifier pointing at the corresponding entry. The logging solution here uses the excellent Winston library. At the moment, logging is only configured to write to the console. You should consider an online tool such as Graylog to easily manage all of this information. Winston is easily configured to work with Graylog, and a number of other destinations, via its Transports mechanism.
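Winston's real API is richer than this, but to make the transport idea concrete — one log call fanned out to several independent destinations — here is a toy logger (this is an illustration of the pattern, not Winston's actual interface):

```javascript
// Toy logger illustrating Winston-style "transports": each transport decides
// how and where a log entry is persisted (console, file, Graylog, ...).
function createLogger(transports) {
  return {
    log(level, message) {
      const entry = { level, message, timestamp: new Date().toISOString() };
      transports.forEach((t) => t.write(entry));
    },
  };
}

// Two example transports: stdout, and an in-memory buffer standing in for a
// remote aggregator such as Graylog.
const consoleTransport = {
  write: (e) => console.log(`${e.level}: ${e.message}`),
};
const memory = [];
const memoryTransport = { write: (e) => memory.push(e) };

const logger = createLogger([consoleTransport, memoryTransport]);
logger.log('error', 'internal server error, support id attached');
```

The payoff of this design is that switching from console-only logging to Graylog (or adding it alongside) is a configuration change, not a rewrite of every call site.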
Documentation
How are consumers going to know what your API is capable of? What can it do? How does it work? What information do I need to supply to interact with it? What does it return? How do I deal with errors or rejections? These are questions your documentation should answer easily. One of our favorite libraries for this is Swagger.
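To give a sense of what that documentation looks like in practice, here is a minimal Swagger 2.0 definition describing a single hypothetical endpoint — note how it answers the questions above (inputs, outputs, error cases) in one machine-readable place:

```json
{
  "swagger": "2.0",
  "info": { "title": "Example API", "version": "1.0.0" },
  "paths": {
    "/users/{id}": {
      "get": {
        "summary": "Fetch a single user",
        "parameters": [
          { "name": "id", "in": "path", "required": true, "type": "string" }
        ],
        "responses": {
          "200": { "description": "The requested user" },
          "404": { "description": "No user with that id exists" }
        }
      }
    }
  }
}
```

Because the spec is data, the Swagger tooling can render it as interactive docs and even generate client code from it.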
Versioning
"Everything changes and nothing stands still." - Heraclitus
It's inevitable that your API will evolve and change. How do you plan on managing that change?
Formalize your strategy for how you will roll out these changes. Are you going with URL versioning, such as /v1, /v2, etc.? Or semantic versioning via an Accept-Version header? There are pros and cons to each approach, so decide what works best for you and your consumers. Out of the box, Restify supports the semantic versioning approach.
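Restify implements this by letting you register the same route under multiple versions and matching the caller's Accept-Version header against them. As a self-contained sketch of the underlying idea (this is not Restify's actual code, and the handlers are hypothetical), here is a helper that picks the highest registered version satisfying a requested major version:

```javascript
// Routes registered under semantic versions (hypothetical handlers).
const routes = {
  '1.0.0': () => 'v1 response',
  '2.1.0': () => 'v2 response',
};

// Pick the highest registered version whose major version matches the
// Accept-Version request (a simplified take on "~1" / "^2"-style pins).
function pickVersion(accepted, available) {
  const major = parseInt(accepted.replace(/^[~^]/, ''), 10);
  const matches = available
    .filter((v) => parseInt(v, 10) === major)
    .sort((a, b) => (a < b ? 1 : -1)); // descending; fine for these examples
  return matches[0] || null;
}

const chosen = pickVersion('~2', Object.keys(routes));
console.log(chosen, '->', routes[chosen]()); // 2.1.0 -> v2 response
```

In Restify itself you simply pass a version alongside the path when registering the route, and a request with no Accept-Version header matches the newest one.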
Database Migrations
Being able to apply and roll back database changes is a fundamental problem to tackle, regardless of the nature of the application. Sequelize, the ORM framework we chose, has built-in support for managing migrations. If its implementation doesn't suit your needs, there are other tools on the market that will assist you, but you need to have a strategy for managing database changes.
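With Sequelize's migration tooling, each migration is a module exposing an up and a down function; down reverses up, which is what makes rollback possible. The table and column below are hypothetical — a sketch of the shape, not code from the series' app:

```javascript
// A Sequelize-style migration: `up` applies the change, `down` reverses it.
const migration = {
  up: (queryInterface, Sequelize) =>
    // Hypothetical change: track each user's last login time.
    queryInterface.addColumn('Users', 'lastLoginAt', {
      type: Sequelize.DATE,
      allowNull: true,
    }),

  down: (queryInterface) =>
    queryInterface.removeColumn('Users', 'lastLoginAt'),
};

module.exports = migration;
```

The migration runner applies pending `up` functions in order and records which have run, so every environment's schema can be reproduced — and unwound — deterministically.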
Throttling and Preventing DoS Attacks
You should have a plan for how you will prevent a malicious attack or a runaway consumer application from taking down your API. Are you going to throttle requests by IP? By user? Is it a global requests-per-second setting? A per-URL throttle? Formulate a plan for addressing these concerns. Restify has the ability to throttle based on IP or username, with requests-per-second and burst settings.
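The classic mechanism behind rate/burst settings like Restify's is a token bucket: each client key (IP or username) can burst up to a ceiling, then is limited to a steady refill rate. Here is a minimal sketch of that idea — an illustration of the algorithm, not Restify's implementation:

```javascript
// Token bucket: each key may burst up to `burst` requests, then is limited
// to `rate` requests per second as its tokens refill over time.
function createThrottle({ rate, burst }) {
  const buckets = new Map();
  return function allow(key, now = Date.now()) {
    let b = buckets.get(key);
    if (!b) {
      b = { tokens: burst, last: now };
      buckets.set(key, b);
    }
    // Refill tokens for the elapsed time, capped at the burst size.
    b.tokens = Math.min(burst, b.tokens + ((now - b.last) / 1000) * rate);
    b.last = now;
    if (b.tokens >= 1) {
      b.tokens -= 1;
      return true; // serve the request
    }
    return false; // reject, e.g. with 429 Too Many Requests
  };
}

const allow = createThrottle({ rate: 1, burst: 2 });
console.log(allow('10.0.0.1', 0));    // true  (burst)
console.log(allow('10.0.0.1', 0));    // true  (burst)
console.log(allow('10.0.0.1', 0));    // false (bucket empty)
console.log(allow('10.0.0.1', 1000)); // true  (one token refilled)
```

Keying the bucket map by IP versus username is exactly the choice Restify's rate/burst knobs sit on top of; a global limit is just a single shared key.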
We have covered a lot of ground in this series and shown several best practices for developing APIs in Node. In Part 1, we listed a number of things to consider when beginning to craft an API, but, as demonstrated above, there is still much more to think about.
projekt202 has a vast amount of experience crafting applications and APIs. We would love the opportunity to speak with you and your team about any needs you may have, or to hear that you've found this series useful.