Overview
Summary
Event Hub is the central component that manages cross-service event delivery in the Keymate platform. It accepts events from upstream services through a single gRPC endpoint, persists them in a relational database outbox table using the Transactional Outbox pattern, and forwards each event type to its corresponding event bus topic via scheduler-based mechanisms. This architecture eliminates the direct event bus dependency from source services and provides at-least-once delivery guarantees.
Why It Exists
Cross-service event delivery in a distributed platform presents two fundamental challenges:
- Atomicity problem — When a service both performs its own database operation and sends a message to the event bus, an inconsistency risk arises between the two operations. If the database operation succeeds but the event bus send fails, the event is lost; if the event bus send succeeds but the database operation rolls back, the system produces a phantom event.
- Dependency management — Requiring each service to manage its own event bus client configuration, Schema Registry integration, and retry mechanism creates unnecessary complexity.
Event Hub solves these problems with the Transactional Outbox pattern: the upstream service sends its event to Event Hub via gRPC, Event Hub writes the event to the relational database first (atomic guarantee), then independently forwards it to the event bus via scheduler jobs.
Where It Fits in Keymate
Event Hub is the event-driven communication backbone of the Keymate platform. Upstream services (external system integrations, identity management services, third-party connectors) produce events, and Event Hub distributes them to downstream consumers via the event bus.
Event Hub also sends an audit log to the Audit & Observability component for every incoming event. This connection is fault-tolerant — the event processing flow continues uninterrupted even if the audit service is unreachable.
Boundaries
In scope:
- Accepting and validating events via gRPC
- Writing to the relational database outbox table
- Scheduler-based forwarding to the event bus
- Event status management (NEW → PROCESSED/FAILED)
- Sending audit logs to the audit service
Out of scope:
- Event bus consumer logic — downstream services manage their own consumers
- Event content transformation — Event Hub forwards events as-is (Protobuf binary) without transforming their content
- Topic creation and management — the infrastructure team pre-defines event bus topics
How It Works
Event Hub's event processing flow consists of three stages:
Stage 1: Event Acceptance and Validation
The upstream service sends an EventHubEvent Protobuf message to Event Hub's gRPC endpoint. Event Hub validates every incoming event to ensure it contains the required fields and follows the expected schema structure. Event Hub accepts only events that pass validation and rejects invalid events with an appropriate gRPC error response.
Event Hub forwards validated events to the audit service (fault-tolerant), then writes them to the outbox table.
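The validation step can be sketched as below. The real schema is a Protobuf EventHubEvent message; the required field names used here are assumptions for illustration, not the actual contract.

```python
# Sketch of Stage 1 validation. Field names are hypothetical stand-ins for
# the fields the EventHubEvent Protobuf message is described as requiring.
REQUIRED_FIELDS = ("event_id", "event_type", "source_service", "payload")

def validate_event(event: dict) -> list:
    """Return validation errors; an empty list means the event is accepted."""
    return [f"missing required field: {name}"
            for name in REQUIRED_FIELDS if not event.get(name)]

# A complete event passes; an incomplete one would be rejected with a
# gRPC error response listing what is missing.
ok = validate_event({"event_id": "e-1", "event_type": "EMPLOYEE_LIFECYCLE",
                     "source_service": "integration-svc", "payload": b"\x08\x01"})
bad = validate_event({"event_id": "e-1"})
```

In the real service this check runs before anything is written, so invalid events never reach the audit service or the outbox table.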
Stage 2: Writing to the Outbox
Event Hub persists the validated event as an outbox record in the relational database. Each record stores the event's unique identifier, source service, event type, the original event payload, and a status field that tracks the event through its lifecycle. Event Hub sets the initial status to NEW.
The gRPC call completes with a success response once Event Hub persists the record.
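The outbox write in Stage 2 can be sketched as follows, with SQLite standing in for the relational database. Column names are assumptions based on the fields described above; the payload column stores the original Protobuf bytes as-is.

```python
import sqlite3

# Illustrative outbox table; the real schema and column names may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE outbox (
        event_id       TEXT PRIMARY KEY,
        source_service TEXT NOT NULL,
        event_type     TEXT NOT NULL,
        payload        BLOB NOT NULL,       -- original Protobuf bytes, unmodified
        status         TEXT NOT NULL DEFAULT 'NEW'
    )
""")

def persist_event(event_id, source_service, event_type, payload):
    # The gRPC call only reports success after this commit, so any
    # acknowledged event is guaranteed to be present in the outbox.
    conn.execute(
        "INSERT INTO outbox (event_id, source_service, event_type, payload) "
        "VALUES (?, ?, ?, ?)",
        (event_id, source_service, event_type, payload))
    conn.commit()

persist_event("e-1", "integration-svc", "EMPLOYEE_LIFECYCLE", b"\x08\x01")
status = conn.execute(
    "SELECT status FROM outbox WHERE event_id = 'e-1'").fetchone()[0]
```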
Stage 3: Forwarding to the Event Bus
A separate scheduler job runs for each event type at configurable intervals. Each job queries the outbox table for unprocessed records and ensures that when multiple Event Hub instances run simultaneously, only one instance processes each event — preventing duplicate delivery.
Event Hub deserializes queried events from Protobuf, sends them to the corresponding event bus topic with Schema Registry integration, and updates each record to the resulting status (PROCESSED or FAILED).
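One scheduler run from Stage 3 can be sketched as below. SQLite again stands in for the relational database, `publish` is a hypothetical placeholder for the Schema Registry aware event bus producer, and the intermediate `IN_PROGRESS` status is an assumption added for illustration; the document itself only names NEW, PROCESSED, and FAILED.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE outbox (event_id TEXT PRIMARY KEY, event_type TEXT, "
    "payload BLOB, status TEXT)")
conn.executemany(
    "INSERT INTO outbox VALUES (?, ?, ?, 'NEW')",
    [("e-1", "EMPLOYEE_LIFECYCLE", b"\x01"),
     ("e-2", "EMPLOYEE_LIFECYCLE", b"\x02")])
conn.commit()

def publish(event_type, payload):
    """Placeholder for the event bus producer; always succeeds here."""
    return True

def run_scheduler_job(event_type, batch_size=100):
    rows = conn.execute(
        "SELECT event_id, payload FROM outbox "
        "WHERE event_type = ? AND status IN ('NEW', 'FAILED') LIMIT ?",
        (event_type, batch_size)).fetchall()
    claimed = []
    for event_id, payload in rows:
        # The guarded UPDATE acts as the claim: it succeeds only for the
        # instance that flips the status first, so two concurrent instances
        # cannot both take the same record. (Databases that support it would
        # typically use SELECT ... FOR UPDATE SKIP LOCKED instead.)
        cur = conn.execute(
            "UPDATE outbox SET status = 'IN_PROGRESS' "
            "WHERE event_id = ? AND status IN ('NEW', 'FAILED')",
            (event_id,))
        if cur.rowcount == 1:
            claimed.append((event_id, payload))
    conn.commit()
    for event_id, payload in claimed:
        new_status = "PROCESSED" if publish(event_type, payload) else "FAILED"
        conn.execute("UPDATE outbox SET status = ? WHERE event_id = ?",
                     (new_status, event_id))
    conn.commit()
    return len(claimed)

delivered = run_scheduler_job("EMPLOYEE_LIFECYCLE")
```

Because FAILED records match the claim query again on the next run, retries fall out of the polling loop itself rather than a dedicated retry mechanism.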
A configurable cleanup job removes records in PROCESSED status from the outbox table to prevent indefinite growth.
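The cleanup job can be sketched as a retention-window delete over PROCESSED records. The `processed_at` column and the retention parameter are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE outbox (event_id TEXT PRIMARY KEY, status TEXT, "
    "processed_at INTEGER)")
conn.executemany(
    "INSERT INTO outbox VALUES (?, ?, ?)",
    [("e-1", "PROCESSED", 100), ("e-2", "PROCESSED", 900), ("e-3", "NEW", None)])
conn.commit()

def cleanup(now, retention_seconds):
    # Only records that finished processing and aged past the retention
    # window are removed; NEW and FAILED records are never touched.
    cur = conn.execute(
        "DELETE FROM outbox WHERE status = 'PROCESSED' AND processed_at < ?",
        (now - retention_seconds,))
    conn.commit()
    return cur.rowcount

removed = cleanup(now=1000, retention_seconds=500)
remaining = conn.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]
```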
Diagram
Example Scenario
Scenario
An external integration service creates a new employee record and wants to propagate this information to other platform components.
Input
- Actor: External integration service
- Resource: Employee lifecycle event
- Action: gRPC call to Event Hub
- Context:
  - Event ID: UUID format
  - Event type: employee lifecycle
  - Source service: integration service
Expected Outcome
- The validation layer validates all required fields — passes
- Event Hub sends an audit log to the audit service
- Event Hub inserts a record with NEW status into the outbox table
- gRPC response indicates success
- The scheduler job picks up the record and sends it to the corresponding event bus topic
- Event Hub updates the record status to PROCESSED
Common Misunderstandings
- "Event Hub transforms events" — Event Hub is not a transformation layer. It forwards events as-is in Protobuf binary format. Content transformation is the responsibility of upstream or downstream services.
- "Events are lost if the event bus is unreachable" — Thanks to the outbox pattern, Event Hub writes events to the relational database first. Even if the event bus is temporarily unreachable, events remain in the outbox table with NEW or FAILED status, and scheduler jobs deliver them once the event bus becomes available again.
- "Each event type is a separate service" — A single Event Hub instance processes all event types. Separation exists only at the event bus topic level; Event Hub routes each event type to its own topic.
Note that Event Hub does not retry FAILED events with exponential backoff. Because the scheduler queries for all non-processed records on every run, FAILED records are naturally re-attempted on each run. The caveat is that permanently failing records (for example, events with invalid Protobuf data) are retried indefinitely.
Design Notes / Best Practices
- Horizontal scaling — Event Hub supports running multiple instances simultaneously. The system ensures only one instance processes each event, preventing duplicate delivery and enabling linear throughput scaling.
- Tune scheduler and batch size to your workload — The scheduler interval and batch size are configurable. Adjust these values based on your event volume to balance between latency and throughput.
- Enable the cleanup job — Event Hub disables the cleanup job by default. Enable it in production to prevent indefinite growth of the outbox table.
- Audit logging is fault-tolerant — event processing continues uninterrupted when the audit service is unreachable. Teams can enable additional audit logging to improve event traceability in production environments.
Related Use Cases
- Propagating employee lifecycle events from external systems to other platform components
- Distributing change notifications from external data sources across the platform
- Broadcasting external transaction events to all relevant downstream services
- Centrally delivering user contact information changes to all consumers
Next Step
Continue with Delivery & Subscription Model to learn the event bus topic structure, scheduler-based delivery mechanism, and Protobuf serialization model.
Related Docs
Delivery & Subscription Model
Event bus topic structure, scheduler mechanism, and Protobuf serialization details.
Outbox Replay & DLQ
Event lifecycle in the outbox table, retry strategy, and cleanup mechanism.
Consumer Contracts
Event schemas and compatibility rules for event bus consumers.
Audit & Observability
The central observability component that receives audit logs from Event Hub.
What protocol does Event Hub use to accept events?
Event Hub accepts events via gRPC. A single endpoint receives EventHubEvent messages in Protobuf format and responds with a success or failure result.
How many event types does Event Hub support?
Event Hub supports multiple event types, each defined in the EventHubType enum. Event Hub routes every event type to its own dedicated event bus topic. Teams can add new event types by registering additional channels and scheduler jobs.
Are events lost if the event bus becomes unreachable?
No. Event Hub uses the Transactional Outbox pattern — it first writes events to a relational database table with NEW status. Even if the event bus is temporarily unreachable, the database stores the events and scheduler jobs deliver them once the event bus becomes available again.
Can multiple Event Hub pods run simultaneously?
Yes. Event Hub supports horizontal scaling. When multiple instances run concurrently, the system ensures only one instance processes each event, preventing duplicate delivery.
Does Event Hub transform or enrich events?
No. Event Hub is a pass-through delivery component. It forwards events as-is in Protobuf binary format to the event bus. Content transformation or enrichment is the responsibility of the upstream or downstream services.