The NDCODE project founder is Dr. Nick Downing. Nick has always been a technical guy, and he is usually found building something, unless he is spending quality time with his wife Laura or parenting his large brood of children. Nick enjoys working with his hands, and equally with his mind.
Nick’s background is in embedded software and hardware. This shows in his focus on writing efficient code that uses minimal resources. Nick was creating networked point-of-sale systems long before the Internet became a thing. Since then he has become an Internet of Things (IoT) expert.
Alongside his successful industry career, Nick holds a PhD in Computer Science from the University of Melbourne, with specializations in Pure Mathematics and Business. Nick is an effective teacher and has taught all aspects of Computer Science to students at the University.
This site serves as a repository for open-source projects. Smaller projects appear as blog entries; longer-term projects expand into a project page, tutorials, online documentation and so forth. We also welcome any feedback or contributions you may have.
Public git source code repositories are at https://git.ndcode.org, and eventually there will be an apt repository there too, for the binary versions of packages. Project pages contain the explanations of what things are, with links to the appropriate repositories. We host our own repositories because we are opposed to the idea of for-profit companies leveraging our work to bring valuable traffic to their sites.
For a casual contribution, please send patches to nick ‘at’ ndcode ‘dot’ org. We can give git write access to trusted contributors. Note that we do not use Pull Requests (PRs). Besides being excessively bureaucratic, PRs tend to shift responsibility onto the contributor to push their contribution through the system. We see this as a shared responsibility, not just the contributor’s.
The projects on this site are open source and they are mainly MIT licensed or GPLv2’d. Generally, projects you would use in production will be MIT licensed, to give you complete freedom. Projects you would use in development, such as code generators, are more likely to be GPLv2’d, but with appropriate exceptions to allow unrestricted use of the resulting output. We prefer GPLv2 over GPLv3 as we don’t support the anti-Tivoization clauses.
We do not charge individual licensing fees for use of the software. We prefer that you use the software unrestricted, and promote its use within and outside your organization. This policy allows us to benefit from consulting fees if you require further development or customization, usually on a non-exclusive basis so that all users can make use of any improvements. Our fees are very reasonable and we encourage you to make contact if you have any requirements.
We emphasize precision, and we generally don’t leave unhandled cases in our logic unless they are clearly flagged by assertions and the like. (In a long career as a professional programmer, one encounters Murphy’s Law on a regular basis.) We also prefer consistency in matters such as identifier naming, indenting and so on.
Having said that, our emphasis is on productivity rather than dotting i’s and crossing t’s, so the code can be rough until we decide to polish it up for release. As an example, we would implement a language compiler with nearly all language features as stubs that crash the compiler with an assertion message. This allows us to get up and running, handling trivial test cases almost immediately. A feedback process then tells us which further language features are needed. Esoteric features never get added, because their assertion messages are never triggered; we rely on your input in such cases.
Commenting in the code is sparse at best. This is because our emphasis is on refactoring the code until it solves a problem appropriately, according to our evolving understanding of the problem. To spend inordinate effort writing up the interfaces and algorithms used in the code is a waste of time when the code is experimental. And once it has stabilized there is rarely a reason to revisit it, unless we’re adding documentation comments for the API reference.
Because we use few comments (only in cases where the logic relies on subtleties that won’t be obvious later), we tend to write code that is self-documenting. This means writing out longer code for operations that can be accomplished with language tricks (p == NULL rather than !p), and avoiding abbreviations in most places. Of course if we find we cannot understand our code later, we will comment it once we have figured it out again!
We refer the reader to an excellent article, ‘The No. 1 unit testing best practice: Stop doing it’. We do in fact use tests, and they are often to be found in a /test or /tests directory in our repositories, but the tests are only what we needed during development to exercise the code, and they might not be maintained once we are satisfied that the code works.
The problem with unit tests that are required to be maintained (usually as a matter of company policy) is similar to the issue with extensive code commenting: You end up investing too much into the current way the code works, making it difficult to change it when you see a better way. Thus tests should be kept more general (End-to-End or E2E testing is best) and used as a means to an end rather than an end in themselves.
The project members are established professional programmers who have spent their careers in salaried roles or working for clients. In that environment, we have to fit into established practices. This usually means unneeded complexity (for example, the use of many middleware layers just to achieve a simple task) and wasted time dealing with internal processes (code review, unit testing and the like). In our free time, we are free to seek a better way.
The NDCODE project aims to challenge established practices in the areas we touch. The goal is to find new ways of doing things that improve efficiency, as measured by how much we can achieve or build in a limited time. By making a frank assessment of what works and what doesn’t, both in the existing methods and in our experimental ones, we can find a synergy; but paradigms are not upended overnight, and the process takes tedious experimentation.
We tend to use established tools such as Linux, Python, node.js, and the extensive infrastructure that comes along with them, as a platform for our experiments. Whilst these tools are far from perfect, they provide significant power for rapidly prototyping solutions to problems. However, we try not to add a lot of middleware layers like Docker, MySQL, nginx, and the like. Not only do they add complexity, but they work in very prescribed ways and the rest of a system tends to become designed around them. Thus, complexity gets ossified and rusted on over time.
Some of the specific projects on this site are:
There are also some associated Computer Aided Software Engineering (CASE) tools that we built as examples, such as the c_to_python tool which (roughly) translates C code into Python using the πtree framework. Having custom-built CASE tools can really turbocharge a refactoring project and make the impossible possible. CASE tools can also make it possible to develop larger and more ambitious projects efficiently.