
A real-time operational system that gives trucking and logistics teams a single place to see fleet activity and device telemetry as it happens. It is backed by a clear service and data architecture (NATS JetStream ingestion, core domain services, PostgreSQL and Redis with read replicas) and by tooling that prepares reports for FMCSA compliance and submissions.
Confidential
2024

Logify is an internal operations product for a trucking and logistics company. It aggregates activity and telemetry from vehicles and connected devices into a real-time dashboard so dispatchers, fleet managers, and leadership can see what the fleet is doing without switching tools or waiting for batch reports.
It is built for the people who run day-to-day transport operations and need to know where assets are, whether they are moving or idle, and whether device data looks healthy - trustworthy operational signal, not just a map.
The problem it solves is fragmentation and latency. Fleet reality is continuous and distributed, but many teams still depend on delayed exports, phone calls, or disconnected systems. Logify centralizes that stream of truth so decisions are based on current state, not yesterday’s spreadsheet.
It also reduces the compliance scramble: the same operational record set can be shaped into reports suitable for FMCSA-related requirements, so safety and regulatory roles spend less time re-keying data across portals and spreadsheets.
Trucks and IoT endpoints sit on variable networks. Payloads can arrive out of order, reconnect often, or burst after offline periods - the system has to tolerate gaps without corrupting operational views.
Ingestion runs through NATS JetStream so bursts and reconnects land in a durable queue before core services consume them - parsing, versioning, validation, and back-pressure have to stay strict so one bad frame or spike does not destabilize the pipeline.
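As a minimal sketch of the consumer-side discipline described above (illustrative names and shapes, not the production code), a frame can be validated strictly before it touches core services, and a per-device sequence ledger can ignore late or duplicate frames after a reconnect instead of letting them corrupt operational views:

```typescript
// Illustrative sketch: strict frame validation plus per-device sequence
// tracking. A malformed frame is rejected; a late or duplicate frame is
// ignored; gaps after offline periods are tolerated.

type TelemetryFrame = {
  deviceId: string;
  seq: number;                 // assumed monotonically increasing per device
  ts: number;                  // epoch millis from the device
  schemaVersion: number;
  payload: Record<string, unknown>;
};

const SUPPORTED_VERSIONS = new Set([1, 2]); // assumed version set

function parseFrame(raw: string): TelemetryFrame | null {
  // Reject anything malformed instead of letting it into core services.
  try {
    const f = JSON.parse(raw);
    if (
      typeof f.deviceId !== "string" ||
      !Number.isInteger(f.seq) ||
      !Number.isFinite(f.ts) ||
      !SUPPORTED_VERSIONS.has(f.schemaVersion)
    ) {
      return null;
    }
    return f as TelemetryFrame;
  } catch {
    return null; // one bad frame must not throw past the consumer
  }
}

class DeviceLedger {
  private lastSeq = new Map<string, number>();

  // Returns true if the frame advanced device state; false if stale/duplicate.
  apply(frame: TelemetryFrame): boolean {
    const prev = this.lastSeq.get(frame.deviceId) ?? -1;
    if (frame.seq <= prev) return false; // late or duplicate after reconnect
    this.lastSeq.set(frame.deviceId, frame.seq); // gaps are tolerated
    return true;
  }
}
```

In production this logic would sit inside a durable JetStream consumer; the sketch isolates only the validation and ordering decisions.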
Operators need hierarchy and clear states - not a dump of fields. The UI must stay readable under high update rates without feeling jittery or overwhelming.
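One common way to keep a dashboard readable under high update rates - sketched here under assumed names, not as the product's actual implementation - is to coalesce incoming updates and render on a fixed cadence, keeping only the latest value per asset between flushes:

```typescript
// Illustrative sketch: coalesce high-rate telemetry updates so the UI
// re-renders on a timer (e.g. every 500 ms) with the newest value per
// asset, instead of re-rendering on every incoming frame.

type AssetUpdate = { assetId: string; ts: number; data: unknown };

class RenderBatcher {
  private pending = new Map<string, AssetUpdate>();

  push(u: AssetUpdate): void {
    const prev = this.pending.get(u.assetId);
    if (!prev || u.ts >= prev.ts) this.pending.set(u.assetId, u); // keep newest only
  }

  // Called by the UI layer's render timer; drains pending updates.
  flush(): AssetUpdate[] {
    const batch = [...this.pending.values()];
    this.pending.clear();
    return batch;
  }
}
```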
Different roles see different slices of the fleet and sensitive data. Keycloak integration must be reliable in the browser and on APIs, with consistent enforcement so permission logic is not duplicated and does not drift between frontend and backend.
A Next.js (TypeScript) frontend provides the dashboard; a Node.js backend owns normalization, business rules, and APIs consumed by the app. IoT traffic lands in NATS JetStream first, then flows into modular core services (users and access, fleet control, device registry, compliance, live tracking, session state) so spikes and retries do not starve the UI path.
Persistence uses PostgreSQL (with read replicas for scale-out reads) and Redis for hot state and caching; core services stay the single place domain rules live, with storage details isolated behind clear boundaries.
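The hot-state pattern above can be sketched as cache-aside reads - a Redis-like store is consulted first, with PostgreSQL as the fallback. The store interfaces here are stand-ins for the real clients, and the TTL is an assumed value:

```typescript
// Illustrative cache-aside sketch: hot reads hit the cache, cold reads
// fall back to the database (in production, a read replica) and write
// the result back with a TTL. Interfaces are stand-ins, not real clients.

interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

interface AssetRepository {
  findById(id: string): Promise<{ id: string; name: string } | null>;
}

async function getAsset(
  cache: KeyValueStore,
  db: AssetRepository,
  id: string,
): Promise<{ id: string; name: string } | null> {
  const hit = await cache.get(`asset:${id}`);
  if (hit) return JSON.parse(hit);       // hot path: no database round-trip
  const row = await db.findById(id);     // cold path: database query
  if (row) await cache.set(`asset:${id}`, JSON.stringify(row), 60);
  return row;
}
```

Keeping the cache behind an interface preserves the boundary the section describes: core services own the domain rules, and storage details stay swappable.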
Authentication uses standard OIDC/OAuth flows with Keycloak; tokens gate API access, and role claims drive visibility. Frontend checks improve UX; backend enforcement is the source of truth.
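A minimal sketch of backend-side enforcement from decoded token claims (the claim path follows Keycloak's usual realm-role shape; the guard shape and role names are assumptions):

```typescript
// Illustrative sketch: role checks against a decoded access token.
// Keycloak realm roles conventionally arrive under realm_access.roles;
// the backend check, not the frontend, is the source of truth.

type DecodedToken = { sub: string; realm_access?: { roles?: string[] } };

function hasRole(token: DecodedToken, role: string): boolean {
  return token.realm_access?.roles?.includes(role) ?? false;
}

// Hypothetical route guard: reject before any handler runs.
function requireRole(role: string) {
  return (token: DecodedToken): { status: number } =>
    hasRole(token, role) ? { status: 200 } : { status: 403 };
}
```

The frontend can read the same claims to hide controls a user cannot use, but only as a UX nicety on top of this enforcement.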
The dashboard is organized around fleet overview, per-asset detail, and state-first panels (moving, stopped, offline, degraded device) rather than raw sensor dumps.
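Deriving one clear state per asset and grouping for the panels can be sketched as below; the thresholds (speed cutoff, offline timeout) are assumed values, not the product's tuned ones:

```typescript
// Illustrative sketch: collapse raw telemetry into a single operational
// state per asset, then group assets for state-first panels.

type AssetSnapshot = {
  assetId: string;
  speedKph: number;
  msSinceLastFrame: number;
  deviceHealthy: boolean;
};

type AssetState = "moving" | "stopped" | "offline" | "degraded";

function classify(a: AssetSnapshot, offlineAfterMs = 5 * 60_000): AssetState {
  if (a.msSinceLastFrame > offlineAfterMs) return "offline"; // staleness wins
  if (!a.deviceHealthy) return "degraded";
  return a.speedKph > 2 ? "moving" : "stopped";              // assumed cutoff
}

function groupByState(assets: AssetSnapshot[]): Map<AssetState, string[]> {
  const panels = new Map<AssetState, string[]>();
  for (const a of assets) {
    const s = classify(a);
    const list = panels.get(s) ?? [];
    list.push(a.assetId);
    panels.set(s, list);
  }
  return panels;
}
```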
Reporting layers turn normalized fleet and operational data into exportable packages aligned with what FMCSA workflows expect - separate from the live UI path so batch generation does not compete with real-time ingestion.
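The shaping step can be sketched as a flat export over normalized records. The columns below are placeholders for illustration only - actual FMCSA submissions follow the agency's required schemas, which this sketch does not claim to reproduce:

```typescript
// Illustrative sketch: flatten normalized operational records into a CSV
// export, generated off the live UI path. Record shape and columns are
// assumptions, not a real FMCSA schema.

type DutyRecord = {
  driverId: string;
  vehicleId: string;
  date: string;        // ISO date
  milesDriven: number;
};

function toCsv(records: DutyRecord[]): string {
  const header = "driver_id,vehicle_id,date,miles_driven";
  const rows = records.map(
    (r) => `${r.driverId},${r.vehicleId},${r.date},${r.milesDriven}`,
  );
  return [header, ...rows].join("\n");
}
```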
Platform tooling and observability cover API documentation (Swagger), centralized logging, deployment pipelines, real-time metrics, and alerting, so operations can see ingestion health, service errors, and capacity signals alongside the product.