Solid Foundations for a Rust Backend
Jun 4, 2025

No pun intended. Lessons learned creating a flexible Rust backend that is fun to work with.
Switching to Rust
A year or so ago I made the decision to rewrite my backend service for Kalico in Rust. I was excited by all the flexibility this would afford me. Beforehand, my backend was coupled to my web dashboard written in Next.js. While I quite like the direction Next.js has gone in the last couple of major versions, at the time every update was breaking dependencies and the brittleness of Node didn't seem viable in the long term. I found myself essentially creating a new framework on the backend to get the structure and safety I wanted; for several reasons, I decided I should just tear out the backend (GraphQL API + domain logic) and treat it as a separate entity.
Rust sold me because I felt that if I could get an MVP to compile, I could trust it to run mostly unmonitored. A big part of this was the ability to use algebraic data types in my domain layer, something I got put on to by this great talk by Scott Wlaschin and had tried hacking together in TypeScript. I also felt like a greenfield project in a lower level language would give me the freedom to build the backend exactly how I wanted it, and more importantly would allow me to adapt it to any future needs.
However, the benefit of an established framework is that it’s very clear where things should go. You don’t have to spend much mental overhead on tracking the right folder for things. It turns out creating this structure without any guide, especially in a strict language like Rust, has a lot of pitfalls. Circular dependencies are surprisingly easy to stumble upon, you might end up compiling one large binary or working with lots of small libraries, and abstraction works very differently from familiar languages like C++.
Requirements of the Foundation
My project is a backend service for a point-of-sale system, so my needs are reliability, speed, and a unified API for several different types of frontend clients. For my backend I use GraphQL to create one endpoint for all clients, and I employ ADTs/newtypes in the domain layer to ensure correctness in the critical business logic. I also need to interact with several external services: payment gateways, auth providers, multiple databases, streams, etc. Boiled down, I needed my backend as a monolith, a strictly isolated domain layer, and a durable way to interact with external services.
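To make the ADT/newtype idea concrete, here is a minimal sketch of the style I mean. The money newtype and order state are hypothetical examples, not Kalico's actual domain types:
// domain/value_objects: a newtype that can only be constructed in a valid state
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct Cents(i64);

impl Cents {
    pub fn new(value: i64) -> Result<Self, String> {
        if value < 0 {
            return Err("monetary amounts cannot be negative".to_string());
        }
        Ok(Self(value))
    }
}

// domain/entities: a sum type that makes illegal states unrepresentable.
// Each state carries only the data that exists in that state.
pub enum OrderState {
    Open,
    Paid { amount: Cents },
    Refunded { amount: Cents, reason: String },
}
Once the critical business logic only accepts these types, a whole class of bugs can't get past the compiler.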
Enter the Hexagon
You could just go full bore with a flat file structure, one massive project, and avoid most of my hang-ups. But, realistically, that quickly becomes spaghetti no one wants to work on. I think the goals for an architecture should be: it works, it doesn't get in the way, and it is enjoyable to work in. With that in mind, you need some structure, and hexagonal architecture was my choice for this project. Despite its GoF-style name, there isn't really a textbook description of how this architecture should work, and it's quite flexible to your application's use case. The core points are:
- Your system is a loose collection of components that communicate through ports.
- The domain layer is isolated at the core and kept clean of dependencies.
- The application layer interacts with the outside world through adapters.
My point isn't to write about the architecture specifically, but like any architecture it does play a major role in the structure of the project. I recommend this article if you want to learn more about hexagonal architecture, especially in Rust, since most examples out there are Java/C#. Regardless of your chosen architecture, I think the following tangent could be helpful, or at least save you the time I wasted.
Oodles of Modules
When I was first switching to Rust, I read a lot of complaints about compile times on large projects. Naturally I thought I would have one of these great big projects and needed to immediately optimize for this. The solution: make every domain its own library in a modules subdirectory of my workspace. The idea behind this was that if you are working on one domain, the compiler only needs to recompile that library and the modules that depend on it, which in a perfect world would be the top-level binary only. Additionally, the Rust compiler has limited parallelism within a single crate, but can compile several libraries in parallel if they don't depend on each other. In each module I had a matching domain, infrastructure, and application folder structure. At the root I had a shared library for common dependencies and a server binary that bootstrapped everything.
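For illustration, the workspace had roughly this shape. The module names are taken from the schema example below; everything else is approximate:
workspace/
├── Cargo.toml          # [workspace] members = ["server", "shared", "modules/*"]
├── server/             # top-level binary that stitches the modules together
├── shared/             # common dependencies and types
└── modules/
    ├── business/
    │   └── src/
    │       ├── domain/
    │       ├── application/
    │       └── infrastructure/
    └── menu/
        └── src/
            ├── domain/
            ├── application/
            └── infrastructure/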
In tune with separating each domain, I had put specific query, mutation, and subscription objects in each module. In server it was easy enough to stitch these together with async-graphql's MergedObject:
use async_graphql::{EmptyMutation, EmptySubscription, MergedObject, Schema};
use modules::business::application::schema::BusinessQuery;
use modules::menu::application::schema::MenuQuery;

// Each module exports its own query object; MergedObject stitches them into one root query.
#[derive(MergedObject, Default)]
struct Query(BusinessQuery, MenuQuery);

fn build_schema() -> Schema<Query, EmptyMutation, EmptySubscription> {
    Schema::new(Query::default(), EmptyMutation, EmptySubscription)
}
Problems Bubble Up
At first I made some quick progress with this, as I was working only in one domain, but when I started to work across domains things went south. The issues first surfaced when I began implementing events. Events are a great way to enable cross-domain communication, but with this setup, if two domains try to listen to events from each other they create a circular dependency. So events got moved to their own top-level module; a similar situation occurred with errors.
The problem is, it is very hard to create perfectly isolated modules when each one has all of these different responsibilities packed into it. In theory, you could take this multi-library approach for strictly your domain layer, but the infrastructure and application layers wrap around the entire domain and can't be split up like this. Eventually, everything just leaks up to the shared library and you wonder what the point is. Additionally, the main motivation was compile times, and they ballooned given the amount of linking that was happening. Another issue was having to add common workspace dependencies to each crate, which quickly becomes tedious.
What I (Eventually) Settled On
example_project/
├── Cargo.toml
├── src/
│   ├── main.rs
│   ├── lib.rs
│   │
│   ├── domain/                 # Core business entities and rules
│   │   ├── mod.rs
│   │   ├── entities/
│   │   │   ├── mod.rs
│   │   │   ├── account.rs
│   │   │   ├── withdrawal.rs
│   │   │   ├── deposit.rs
│   │   │   └── loan.rs
│   │   ├── value_objects/
│   │   │   ├── mod.rs
│   │   │   ├── ids.rs
│   │   │   ├── money.rs
│   │   │   └── address.rs
│   │   └── events/
│   │       ├── mod.rs
│   │       ├── account_events.rs
│   │       └── loan_events.rs
│   │
│   ├── application/            # Use cases and service layer
│   │   ├── mod.rs
│   │   ├── services/
│   │   │   ├── mod.rs
│   │   │   ├── loan_service.rs
│   │   │   └── account_service.rs
│   │   ├── dto/                # Request/Response types
│   │   │   ├── mod.rs
│   │   │   ├── withdrawal_dto.rs
│   │   │   ├── deposit_dto.rs
│   │   │   ├── balance_dto.rs
│   │   │   └── loan_dto.rs
│   │   └── ports/              # Abstract interfaces
│   │       ├── mod.rs
│   │       ├── repositories.rs
│   │       └── external_services.rs
│   │
│   ├── infrastructure/         # External concerns
│   │   ├── mod.rs
│   │   ├── database/
│   │   │   ├── mod.rs
│   │   │   └── repositories/
│   │   │       ├── mod.rs
│   │   │       ├── account_repository.rs
│   │   │       └── loan_repository.rs
│   │   ├── external_apis/
│   │   │   ├── mod.rs
│   │   │   └── payment_gateway.rs
│   │   └── messaging/
│   │       ├── mod.rs
│   │       └── event_bus.rs
│   │
│   ├── presentation/           # API layer
│   │   ├── mod.rs
│   │   ├── graphql/
│   │   │   ├── mod.rs
│   │   │   ├── schema.rs
│   │   │   ├── context.rs
│   │   │   ├── resolvers/
│   │   │   │   ├── mod.rs
│   │   │   │   ├── account_resolvers.rs
│   │   │   │   └── loan_resolvers.rs
│   │   │   └── types/
│   │   │       ├── mod.rs
│   │   │       ├── account_types.rs
│   │   │       └── loan_types.rs
│   │   ├── rest/               # If you add REST later
│   │   │   └── mod.rs
│   │   └── middleware/
│   │       ├── mod.rs
│   │       ├── auth.rs
│   │       └── logging.rs
│   │
│   ├── shared/                 # Cross-cutting concerns
│   │   ├── mod.rs
│   │   ├── errors.rs
│   │   ├── config.rs
│   │   ├── validation.rs
│   │   └── utils.rs
│   │
│   └── container.rs            # Dependency injection container
│
├── tests/                      # Integration tests
│   ├── graphql_tests.rs
│   └── service_tests.rs
│
└── migrations/                 # Database migrations
Compilation
This structure is one large binary (so far, compile times have been quicker than with the multi-crate workspace).
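Concretely, the crate root just declares each layer as a top-level module, matching the tree above; a minimal sketch:
// src/lib.rs -- one crate; each layer is simply a module
pub mod application;
pub mod container;
pub mod domain;
pub mod infrastructure;
pub mod presentation;
pub mod shared;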
File Structure
Your main.rs bootstraps the service container in container.rs, which ties everything together and starts your server. presentation contains your user-facing interfaces; in this example that is an API, but in my project I also have a CLI here for development. domain is isolated and doesn't depend on any other part of the program. application defines port traits for external services and repositories that are implemented in infrastructure.
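As a sketch of that last relationship: the application layer owns the trait (the port), and infrastructure supplies the implementation (the adapter). The repository, entity, and error names below are illustrative, not my actual ones, and in a real service these methods would likely be async:
// application/ports/repositories.rs -- the port: what the application layer needs
use crate::domain::entities::account::Account;
use crate::domain::value_objects::ids::AccountId;

// A simple error type just for this sketch.
#[derive(Debug)]
pub struct RepositoryError(pub String);

pub trait AccountRepository: Send + Sync {
    fn find_by_id(&self, id: &AccountId) -> Result<Option<Account>, RepositoryError>;
    fn save(&self, account: &Account) -> Result<(), RepositoryError>;
}

// infrastructure/database/repositories/account_repository.rs -- the adapter
pub struct PostgresAccountRepository {
    // connection pool / client handle would live here
}

impl AccountRepository for PostgresAccountRepository {
    fn find_by_id(&self, _id: &AccountId) -> Result<Option<Account>, RepositoryError> {
        // run the query and map the row back into a domain entity
        todo!()
    }

    fn save(&self, _account: &Account) -> Result<(), RepositoryError> {
        todo!()
    }
}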
Services
Application services reside in application and are concrete structs that implement the use cases the outside world interacts with.
Why not a FooService trait? It's tempting to add another layer of abstraction for application services, but I don't think there is really a point. For mocking, you just mock the dependencies of the service, and it's unlikely you would ever need to swap between implementations of a service.
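For what that mocking looks like in practice, here is a sketch using the illustrative AccountRepository port from above and a hypothetical AccountService that takes it as a dependency:
// A hand-rolled fake of the AccountRepository port; no service trait needed.
use std::sync::Arc;

struct FakeAccountRepository;

impl AccountRepository for FakeAccountRepository {
    fn find_by_id(&self, _id: &AccountId) -> Result<Option<Account>, RepositoryError> {
        Ok(None) // return whatever the test case needs
    }

    fn save(&self, _account: &Account) -> Result<(), RepositoryError> {
        Ok(())
    }
}

#[test]
fn withdrawal_fails_for_unknown_account() {
    // The service stays a plain struct; the port is the thing we swap out.
    let service = AccountService::new(Arc::new(FakeAccountRepository));
    // ...call the use case and assert on the DTO/error it returns
}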
Service inputs and outputs are data transfer objects defined in dto (there is a sketch of one after the example below). The response DTOs can be injected with metadata and computed attributes, all while censoring domain attributes you don't want users to see. Internally, these services operate on domain objects. Application services interact with external services/repositories through port traits, with concrete implementations injected at creation. For example:
// application/services/tenant_service.rs
use std::sync::Arc;

use crate::application::ports::DatabaseManager; // port trait (exact path depends on your layout)

pub struct TenantService {
    pub db_manager: Arc<dyn DatabaseManager>,
}

impl TenantService {
    pub fn new(db_manager: Arc<dyn DatabaseManager>) -> Self {
        Self { db_manager }
    }
}

// container.rs
use std::sync::Arc;

use crate::application::services::tenant_service::TenantService;
use crate::infrastructure::database::NeonDatabaseManager;

impl ServiceContainer {
    pub fn new() -> Self {
        Self {
            // ...
            tenant_service: TenantService::new(Arc::new(NeonDatabaseManager::new())),
            // ...
        }
    }
}
With the actual implementation determined when bootstrapping, it is easy to swap out your dependencies with mocks, alternate implementations, etc. Unlike the previous architecture, none of this requires juggling crate boundaries or worrying about which module is allowed to depend on which.
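Circling back to the DTOs mentioned above: a response DTO is just a plain struct shaped for clients, with computed values and metadata added and internal attributes left out. The fields here are illustrative:
// application/dto/balance_dto.rs -- the shape clients get back, not the domain entity itself
pub struct BalanceDto {
    pub account_id: String,      // flattened from the AccountId newtype
    pub balance_display: String, // computed by the service, e.g. "$12.50"
    pub requested_at: String,    // metadata injected by the service
    // internal domain attributes (ledger ids, audit flags, ...) simply never appear here
}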
Cross-Cutting Concerns & Errors
shared here remains quite small. For example, errors.rs just defines AppError, a general error type that application services and the main function can return. This broad error type absorbs all the more specific error types:
#[derive(thiserror::Error, Debug)]
pub enum AppError {
    #[error("Configuration error: {0}")]
    Config(#[from] ConfigError),

    #[error("Domain error: {0}")]
    Domain(#[from] DomainError),

    #[error("Database error: {0}")]
    Database(#[from] DbErr),
}
This is easier to visualize as a funnel: each specific error type converts into AppError through the generated From impl, so everything narrows to one type at the application boundary.
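In practice that means application code can just use ? and let the conversions happen. A sketch with hypothetical method, DTO, and port names:
impl TenantService {
    pub async fn create_tenant(&self, input: CreateTenantDto) -> Result<TenantDto, AppError> {
        // Each fallible step returns its own error type; `?` converts it into
        // AppError through the From impls generated by #[from] above.
        let tenant = Tenant::try_new(&input.name)?;      // DomainError -> AppError::Domain
        self.db_manager.insert_tenant(&tenant).await?;   // DbErr       -> AppError::Database
        Ok(TenantDto::from(tenant))
    }
}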
The largest portion of my shared module is actually config.rs, since many services (internal and external) need specific config structs that are defined here. These are usually passed in on creation by the service container.
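A sketch of what lives there, with illustrative fields; the container builds these once at startup (e.g. from environment variables) and hands each service or adapter its piece on creation:
// shared/config.rs -- per-service configuration structs
#[derive(Debug, Clone)]
pub struct DatabaseConfig {
    pub url: String,
    pub max_connections: u32,
}

#[derive(Debug, Clone)]
pub struct PaymentGatewayConfig {
    pub api_key: String,
    pub webhook_secret: String,
}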
Dependency Inversion
The important thing to grasp is how all the core modules depend on each other. Once you have the inversion of dependencies grokked, it's not too hard to figure out where to place things.
- Domain (green) has no outward dependencies, purely business logic
- Application (purple) depends only on domain and ports
- Infrastructure (blue) implements application interfaces
- Shared (purple) provides cross-cutting concerns
Fun and Flexible
So far I have enjoyed working with this architecture and generally my productivity has been way up. It is rigid enough that I know exactly where everything should go, like I am using a framework, but at the same time I have enough flexibility that I won't run up against a wall. I am a little concerned about compile times, especially stemming from the domain layer, since I've made heavy use of macros like nutype to define objects there. I have it marked as a candidate for splitting off, but for now I'm focused on an MVP and don't want to get bogged down in premature optimizations. The architecture has already paid for itself in flexibility; for example, adding a CLI frontend took maybe half an hour, and all I had to refactor was a small portion of my main function. Most importantly, once I landed on this architecture, after far too long feeling uncomfortable about foundations, I was excited to code again.
Don’t Get Bogged Down
This was one of many things recently that I spent too much time on because I felt like I could get some "perfect" outcome. More accurately, I spent months on different rewrites, scared that not having the right base would kill the project years later. This was a silly waste of time, and while I am happy with where the project is now, I could have run with worse legs and been much farther along. I think it's important to recognize when perfectionism is in pursuit of feasible improvement and when it's just a fear of failing.

Cameron Cash
Building Kalico