Framework Evaluation
Web Framework Options
1. Axum
Overview: Async web framework from the Tokio team, built on the Tower ecosystem
Pros:
- Native async/await support
- Excellent performance
- Type-safe extractors and responses
- Built on hyper (battle-tested HTTP)
- Great middleware system via Tower
- Good WebSocket support
Cons:
- Relatively newer, smaller ecosystem than Actix
- Steeper learning curve for Tower concepts
Use Case Fit: ⭐⭐⭐⭐⭐ Excellent for our API-heavy service
2. Actix-web
Overview: High-performance web framework with actor model
Pros:
- Extremely fast (consistently near the top of public benchmarks such as TechEmpower)
- Large ecosystem and community
- Mature and stable
- Excellent WebSocket support
Cons:
- Actor model can be overkill for simple cases
- Some unsafe code in core (though extensively audited)
- Heavier runtime than Axum
Use Case Fit: ⭐⭐⭐⭐ Very good, but Axum’s integration with the Tokio ecosystem is nicer
3. Rocket
Overview: Opinionated framework with derive macros
Pros:
- Very ergonomic API
- Excellent documentation
- Built-in form validation
- Type-safe routing
Cons:
- Long required nightly Rust (stable support only arrived with Rocket 0.5)
- Slower than Axum/Actix
- Less flexible for our specific needs
Use Case Fit: ⭐⭐⭐ Good, but the nightly Rust history and limited flexibility are concerns
Recommendation: Axum
Primary reasons:
- Perfect integration with Tokio ecosystem (we’ll use Tokio everywhere)
- Excellent middleware story for auth, rate limiting
- Clean, composable design matches our architecture
- Strong WebSocket support for streaming
LLM Client Options
1. async-openai
Overview: Unofficial async OpenAI client
Pros:
- Type-safe API
- Full OpenAI API coverage
- Streaming support
- Well-maintained
Cons:
- OpenAI-only (need adapters for other providers)
2. llm
Overview: Rust-native LLM inference (llama.cpp bindings)
Pros:
- Run models locally without external API
- No network dependency
- Privacy-preserving
Cons:
- Limited to supported models
- Requires significant compute resources
- Complex deployment
3. Custom Abstraction
Build a trait-based abstraction allowing multiple backends:
use std::pin::Pin;

use async_trait::async_trait;
use futures::Stream;

// Boxed token stream returned by streaming completions.
type TokenStream = Pin<Box<dyn Stream<Item = Result<Token>> + Send>>;

#[async_trait]
trait LlmProvider: Send + Sync {
    async fn complete(&self, request: CompletionRequest) -> Result<CompletionResponse>;
    async fn stream(&self, request: CompletionRequest) -> Result<TokenStream>;
}

struct OpenAiProvider { /* ... */ }
struct AnthropicProvider { /* ... */ }
struct LocalProvider { /* ... */ }
Recommendation: Custom abstraction using async-openai as reference
Build our own trait system but learn from async-openai’s patterns. This gives us:
- Multi-provider support
- Consistent interface
- Easy testing with mocks
Serialization / API Schema
1. Serde + JSON
Standard choice, no question here.
2. JSON Schema Validation
Options:
- schemars: Generate JSON Schema from Rust types
- jsonschema: Validate data against schema
- validator: Input validation with derive macros
Recommendation: schemars + validator
Database / Persistence
1. SQLx
Overview: Compile-time checked SQL
Pros:
- Type-safe queries
- No ORM overhead
- Async native
- Multiple database support
Cons:
- SQL knowledge required
- Compile times can be slow with query checking
2. Diesel
Overview: ORM with query builder
Pros:
- Mature and stable
- Type-safe query builder
- Migrations support
Cons:
- Core API is synchronous; async use requires a wrapper (e.g. diesel-async)
3. SeaORM
Overview: Async ORM
Pros:
- Fully async
- ActiveRecord pattern
- GraphQL integration
Cons:
- Heavier than SQLx
- Less control over queries
Recommendation: SQLx
Reasons:
- We want full control over queries
- Compile-time checking catches errors early
- No ORM magic, explicit is better
Sandboxing Options
1. Firecracker MicroVMs
Overview: AWS’s microVM technology
Pros:
- VM-level isolation (strongest)
- Fast startup (~125ms)
- Minimal overhead
- Production-proven (AWS Lambda)
Cons:
- Requires KVM
- Complex setup
- Linux-only
2. gVisor
Overview: User-space kernel for containers
Pros:
- Strong syscall-level isolation
- Compatible with OCI containers
- Good performance-security balance
- Used by Google Cloud Run
Cons:
- Some syscall overhead
- Requires privileged container runtime
3. Wasmtime (WebAssembly)
Overview: WASM runtime with WASI
Pros:
- Near-native performance
- Capability-based security
- Fast startup
- Language agnostic (compile to WASM)
Cons:
- Limited to WASM target
- Ecosystem still maturing
- Some system APIs unavailable
4. Bubblewrap
Overview: Unprivileged sandboxing tool
Pros:
- Simple, minimal
- No special privileges needed
- Good for simple cases
Cons:
- Weaker isolation than gVisor
- Limited features
Recommendation: Hybrid (Wasmtime for user tools, gVisor for complex code)
Configuration Management
1. config crate
Standard layered config:
- Default values
- Config file (TOML/YAML/JSON)
- Environment variables
- Command-line arguments
2. Figment
More flexible, used by Rocket
Recommendation: config crate - simpler, sufficient
Observability
1. Tracing
Structured logging:
- tracing for instrumentation
- tracing-subscriber for output
- tracing-opentelemetry for distributed tracing
2. Metrics
Prometheus-compatible:
- metrics crate for recording
- metrics-exporter-prometheus for exposition
3. Error Handling
- thiserror for library errors
- anyhow for application errors
- eyre as an alternative to anyhow
Recommendation: tracing + metrics + thiserror/anyhow
Authentication / Security
1. JSON Web Tokens (JWT)
- jsonwebtoken crate
- RS256 for production (asymmetric)
2. API Keys
- Custom implementation with rate limiting
- Argon2 for key hashing (if storing)
3. Argon2
argon2 crate for password/key hashing
Testing
1. Unit Testing
Built-in cargo test
2. Integration Testing
- tokio::test for async tests
- wiremock for HTTP mocking
- testcontainers for database testing
3. Property-Based Testing
proptest for fuzzing inputs
Recommendation: All of the above for comprehensive coverage