This document outlines, at a high level, the various technologies used in the technology stack and architecture at Nori.
Node.js (10+)
Our UI is mostly out-of-the-box Material UI, with a few tweaks here and there. We do some custom component work where needed, but for the most part we try to focus our design efforts on layouts and UX rather than components.
The main repository for Nori is a monorepo, meaning that it holds a number of different packages in a single repository. We use Lerna and Yarn to link them all together and update dependencies in unison.
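A Lerna + Yarn workspaces setup of this kind typically hinges on a small `lerna.json` at the repo root; the sketch below is illustrative (the `packages/*` path and independent versioning are assumptions, not Nori's actual layout):

```json
{
  "npmClient": "yarn",
  "useWorkspaces": true,
  "packages": ["packages/*"],
  "version": "independent"
}
```

With `useWorkspaces` enabled, Yarn hoists shared dependencies to the root and symlinks sibling packages into each other's `node_modules`, so cross-package changes are picked up without publishing.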
Custom `nori` CLI commands
We frequently use the gcloud CLI to interact with various parts of Google Cloud. It is also used to change the way our local development server interacts with the cloud.
Our architecture planning was largely motivated by the departure of our CTO: an attempt to wrap our heads around what he had largely built, and to figure out how we were going to revamp and/or extend what was in place. Since then, we revisit the architecture diagrams on occasion, but it has been a while since we built out any significant new portion of our architecture.
The architecture planning repository can be found here.
Currently all engineers happen to use VS Code, but any editor that supports our ESLint + Prettier configurations would be fine. That said, there is some power in every engineer using the same editor (shared configuration, plugins/extensions, code review, etc.).
Express (Web application framework)
Google Cloud Datastore (NoSQL database)
Google Cloud BigQuery (SQL data warehouse)
Google Cloud KMS (Key management)
Google Cloud Error Reporting (Manages and reports errors)
Google Cloud Pub/Sub (publishing and subscribing to messages)
Turf JS (GeoJSON helpers)
Stripe (Payment processing)
The nori-graphql service is our primary backend service that handles server requests from our other services.
Example: A supplier opens their project at nori.com/app/projects/project?projectId=abc123
This request is passed as a query through various middlewares before being handed off to our GraphQL schema, which authorizes that the viewer is allowed to see the requested data. The schema then resolves the request by fetching data from Google Cloud Datastore, or from any other external APIs or data sources the request requires. If necessary, the server then renders the page; otherwise it delivers the data back to the client.
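The authorize-then-resolve pattern described above can be sketched as a plain resolver function. The viewer/project shapes and the injected datastore client are illustrative stand-ins, not Nori's actual schema:

```javascript
// Sketch of a GraphQL resolver that authorizes the viewer before
// resolving project data. `datastore` is an injected client whose
// interface here is a simplified stand-in for Google Cloud Datastore.
async function resolveProject({ viewer, projectId, datastore }) {
  // Middleware has already attached `viewer` to the request context.
  const project = await datastore.get('Project', projectId);
  if (!project) {
    throw new Error(`Project ${projectId} not found`);
  }
  // Authorization: only the owning supplier (or an admin) may view it.
  if (project.supplierId !== viewer.id && !viewer.isAdmin) {
    throw new Error('Viewer is not authorized to see this project');
  }
  return project;
}
```

Injecting the datastore client keeps the resolver easy to unit test with a fake store in place of the real Datastore API.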
Example: A buyer wants to purchase removals at nori.com/remove-carbon/checkout
This request is passed through a mutation, which is handled similarly to the above. The mutation interacts with Stripe, the blockchain, and Google Cloud Datastore to charge the provided payment method and create the certificate, before delivering the new certificate to the client.
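The charge-then-create ordering of that mutation can be sketched as follows. The client interfaces, price constant, and entity shapes are illustrative stand-ins for Stripe, the blockchain, and Datastore, not Nori's real code:

```javascript
// Illustrative price; not Nori's actual pricing.
const PRICE_PER_TONNE_CENTS = 1500;

// Sketch of the checkout mutation flow: charge the payment method first,
// then create the certificate. If the charge throws, no certificate is
// ever written.
async function purchaseRemovals({ tonnes, paymentMethodId, viewer, stripe, datastore }) {
  const charge = await stripe.charge({
    amount: tonnes * PRICE_PER_TONNE_CENTS,
    currency: 'usd',
    paymentMethod: paymentMethodId,
  });
  // Record the certificate and hand it back to the client.
  const certificate = {
    buyerId: viewer.id,
    tonnes,
    chargeId: charge.id,
    createdAt: Date.now(),
  };
  await datastore.save('Certificate', certificate);
  return certificate;
}
```

Charging before persisting means a failed payment simply aborts the mutation, at the cost of needing reconciliation if the certificate write itself fails after a successful charge.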
Firebase (User authentication)
Nori-website is our application for suppliers, buyers, and verifiers, hosted under nori.com/app. This service allows users to sign in or sign up with their account and perform a variety of tasks, such as suppliers registering projects, verifiers completing verification jobs, and buyers participating in NRT markets.
Nori-marketing is responsible for much of the content you can find on nori.com. Any of our needs from a sales or marketing perspective are usually handled through this service, such as promotional pages, hosting our podcasts and blogs, providing updates, and so on.
In addition, nori-marketing also handles the purchasing of carbon removal tonnes, through a checkout process powered by Stripe.
lzma-native (for decompressing the file they send as an attachment to an email)
This is a SendGrid inbound email parsing webhook micro-service used to parse the email sent in response by the COMET API. Since the "API response" to a POST request sent to COMET arrives as an attachment to an email, this service intercepts all such emails, then parses and stores the attached files so they can be consumed elsewhere in the UI/back-end for soil carbon reporting and quantification. More here.
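The first step of such a webhook is pulling attachment metadata out of the posted form. The sketch below assumes SendGrid's inbound parse field names (`attachments` count plus an `attachment-info` JSON map); the COMET-specific decompression and storage are omitted:

```javascript
// Sketch: extract attachment metadata from a SendGrid inbound-parse
// webhook payload. `payload` is the parsed multipart form body.
function cometAttachments(payload) {
  const count = parseInt(payload.attachments || '0', 10);
  if (count === 0) return [];
  // `attachment-info` maps form field names (attachment1, attachment2, ...)
  // to per-file metadata such as filename and content type.
  const info = JSON.parse(payload['attachment-info']);
  return Object.entries(info).map(([field, meta]) => ({
    field,
    filename: meta.filename,
    type: meta.type,
  }));
}
```

A real handler would then read each named file field from the request, decompress it with lzma-native, and store the result for the quantification pipeline.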
Responsible for running scheduled mutations against entities. This service uses Kue to manage and create jobs that are scheduled to be executed at some point in the future (such as a forward contract auction). Because those jobs are scheduled to execute in the future, and because of the complexity of resuming/restoring job queues in a CI/CD environment, this service takes advantage of Redis and Cloud Memorystore to prevent duplicate jobs and job loss.
This is a simple proxy server that routes requests to the other services depending on the URL/domain that is used. We use this to emulate the way GKE works in production. It is a legacy and imperfect solution, but it gets us where we need to go for now.
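The core of such a proxy is a prefix-to-target route table, standing in locally for the routing GKE ingress rules perform in production. The paths and ports below are illustrative assumptions:

```javascript
// Sketch: path-prefix routing table for a local dev proxy.
// First matching prefix wins; '/' is the catch-all fallback.
const ROUTES = [
  { prefix: '/app', target: 'http://localhost:3001' },     // nori-website
  { prefix: '/graphql', target: 'http://localhost:3002' }, // nori-graphql
  { prefix: '/', target: 'http://localhost:3000' },        // nori-marketing
];

function targetFor(pathname) {
  const route = ROUTES.find((r) => pathname.startsWith(r.prefix));
  return route.target;
}
```

An `http.createServer` handler would call `targetFor(req.url)` and forward the request to the chosen service, which is why ordering matters: the `'/'` catch-all must come last.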
Nori-admin is our admin web-application, used to provide Nori employees with convenient methods of viewing our application data, and performing admin-only actions, such as user management, order fulfillment, blockchain interactions such as wallet management and contract invocation, and registering NRTs for projects.
This is not a microservice that runs on its own; instead, it is the tool we use to manage our interactions with our local emulation of the server environment. We use a custom `nori` CLI command to perform actions such as building cloud containers, running manual deployments (mandatory for production), starting local development servers, interacting with Kubernetes locally, and more.
This package houses our smart contracts and is the effective foundation for Nori’s blockchain functionality. You can find that repo here.
This package is a nested monorepo that can help manage and organize public packages or other code we intend to share. This package is a WIP. You can find that repo here.
We use Google Cloud as our cloud hosting provider. We use their tool suites for networking, container hosting, storage, and pretty much everything else you might think of. You can find a slightly outdated network diagram here.
Our testing is performed through Jest, which provides us with utilities for unit and integration tests, and Cypress, which allows us to automate a browser and test simulated user experiences of our services.
It is definitely our priority to operate under a Test-Driven Development protocol, although we have struggled to balance time spent on testing against developing new features and meeting other needs of the company.
We are a fully CI/CD engineering team. Our pipeline runs through a number of steps that test, build, and deploy changes on every pull request made. When a PR is created, the changes are always peer reviewed before merging. In addition, we use a number of GitHub apps to qualify PRs with checks such as code coverage, performance, and browser snapshot reviews.
Stores application data for users, projects, marketplace, etc.
Stores geographic data for projects; also used to query spreadsheets.
Stores spreadsheets for supplier projects, files uploaded by users.
Stores the raw input and output XML files from COMET farm API calls, among other things.
Stores land management records for supplier projects.
Stripe handles the payment processing patterns we use at Nori.
Our FIFO market charges users with Stripe when they check out to buy carbon removals. After verifying that the transaction is valid and that there are tonnes available to buy, we charge the card and then create their certificate.
Otherwise, if no carbon removals are available, we use Stripe to set up a future payment, which we charge once carbon removals become available.
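The charge-now versus charge-later branch above can be sketched as follows. The `stripe` client here is an injected stand-in; real code would use Stripe's PaymentIntents (immediate charge) and SetupIntents (save a payment method for later), and the method names below are hypothetical:

```javascript
// Sketch of the FIFO checkout branch: charge immediately when tonnes are
// available, otherwise save the payment method for a future charge.
async function checkout({ tonnesRequested, tonnesAvailable, paymentMethodId, stripe }) {
  if (tonnesAvailable >= tonnesRequested) {
    // Inventory exists: charge now, then the certificate is created.
    const payment = await stripe.chargeNow({ paymentMethodId, tonnes: tonnesRequested });
    return { status: 'charged', paymentId: payment.id };
  }
  // No inventory: set up a future payment to charge once removals arrive.
  const setup = await stripe.saveForLater({ paymentMethodId });
  return { status: 'deferred', setupId: setup.id };
}
```

Deferring via a saved payment method lets the later charge happen server-side, without asking the buyer to re-enter card details when removals become available.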
COMET Farm (one of our partners) hosts an API that we use to generate Soil Carbon Stock (current and future projections). This API is the foundation to all of our work around quantification.
Granular is one of our primary data platform partners. We co-developed a universal data specification with them and use this as our foundation to scale with additional data platforms. The data provided following this spec can be used to import a project in nori-website.
LAMPS is a service that provides us with data about a supplier's fields based on their geographic data: namely, what was most likely planted where, and when. This allows us to help generate data for the supplier when filling out their land management records.
This repository contains many diagrams and markdown documents that the engineering team often uses and creates in collaborative planning sessions (for planning new features or articulating ideas to others).