Data Security Authorization Control (DSAC)
Summary
Data Security Authorization Control (DSAC) is an authorization model that makes access decisions based on data sensitivity levels and classification metadata — not only on who the user is, but on what they are trying to access. While ABAC evaluates access based on user, resource, and environment attributes, DSAC extends this by evaluating data sensitivity via metadata tags, enabling compliance-ready access control tied directly to data classification. DSAC enforces column-level, field-level, and dataset-level authorization, ensuring that sensitive data such as personal identifiers, financial records, and confidential documents are protected at the granularity where the classification applies.
Why It Exists
Traditional access control models answer the question "Is this user allowed to perform this action on this resource?" DSAC adds a critical dimension: "Is this user allowed to see this specific data, given its classification?"
Consider these scenarios:
- "Allow HR staff to view employee records, but mask national ID fields classified as PII"
- "Allow analysts to query financial datasets, but block access to columns tagged as Confidential"
- "Allow API consumers to retrieve customer profiles, but redact fields classified as Restricted based on their clearance level"
These decisions cannot be expressed through roles or user attributes alone — they depend on what the data is, not only on who is requesting it. DSAC solves this by incorporating data classification metadata into the authorization decision.
This approach provides:
- Data-aware decisions — Authorization policies reference sensitivity labels and classification tags attached to the data itself
- Compliance alignment — Access control is driven by data governance classifications, supporting regulatory requirements for PII protection, data residency, and confidentiality
- Granular enforcement — Control access at the column level, field level, or dataset level — not only at the resource level
- Dynamic classification sync — Classification metadata flows from metadata catalog systems into the authorization layer, keeping policies aligned with the current data landscape
Where It Fits in Keymate
DSAC operates alongside other authorization models in Keymate. It extends the platform's attribute-based evaluation by introducing data classification metadata as a first-class input to policy decisions.
Classification sources: Data sensitivity metadata enters Keymate through two paths — automated sync from a metadata catalog via event-based integration, or manual definition through the Admin Console.
Policy evaluation: When a permission request arrives, the enforcer (an SDK, API gateway plugin, or service mesh plugin) sends a permission check through the Access Gateway, which proxies it to the Authorization Decision Provider. The Policy Engine evaluates DSAC policies by checking the sensitivity tags associated with the requested data fields against the user's attributes and clearance level.
Enforcement: The Access Gateway returns the decision result to the enforcer. The enforcer applies the decision — granting full access, denying access entirely, or applying field-level masking before the response reaches the client.
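As a rough illustration of this check-and-enforce loop, here is a minimal Python sketch. The gateway stub, method names, and masking behavior are assumptions for illustration, not the Keymate SDK.

```python
class StubGateway:
    """Stand-in for the Access Gateway proxying to the Authorization
    Decision Provider; the real interface differs."""

    def check(self, user: str, resource_fields: dict) -> tuple:
        # Pretend the Policy Engine flags every PII-tagged field for masking.
        masked = [
            field for field, tags in resource_fields.items()
            if any(tag.startswith("PII") for tag in tags)
        ]
        return ("MASK", masked) if masked else ("GRANT", [])


def enforce(decision: tuple, response: dict) -> dict:
    """Apply the decision enforcer-side: deny, mask, or pass through."""
    verdict, masked_fields = decision
    if verdict == "DENY":
        return {}
    return {k: ("***" if k in masked_fields else v) for k, v in response.items()}


gateway = StubGateway()
decision = gateway.check(
    "analyst@example.com",
    {"email": [], "national_id": ["PII.NationalID"]},
)
print(enforce(decision, {"email": "a@b.com", "national_id": "12345678"}))
# {'email': 'a@b.com', 'national_id': '***'}
```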
Boundaries
What DSAC covers:
- Access decisions based on data sensitivity labels (e.g., PII, Confidential, Restricted, Public)
- Column-level and field-level authorization for database queries and API responses
- Integration with metadata catalog systems for automated classification sync
- DSL-based policy expressions that reference metadata tags (e.g., resource.tags includes "PII")
- Security grade enforcement across classification tiers (Public, Confidential, Restricted)
What DSAC does not cover:
- General attribute-based access checks not related to data classification — use ABAC
- Static role-based permission checks — use RBAC
- Relationship-based access (e.g., "user owns this record") — use ReBAC
- Risk scoring and adaptive decisions based on behavioral signals — use RADAC
- Data classification itself (defining what is PII, what is Confidential) — DSAC consumes classifications, it does not create them
How It Works
Classification Metadata
DSAC policies depend on classification metadata attached to data fields and datasets. This metadata describes the sensitivity level and category of the data:
| Classification | Description | Example Fields |
|---|---|---|
| Public | Data that can be shared without restriction | Company name, public product catalog |
| Confidential | Data requiring access control based on role or clearance | Internal reports, employee salary |
| Restricted | Highly sensitive data with strict access limits | National ID numbers, financial account details |
| PII | Personally identifiable information subject to regulatory protection | Email address, phone number, date of birth |
Metadata Sources
Classification metadata enters the policy evaluation context through two paths:
- Metadata catalog integration — Keymate syncs classification data from external metadata catalog systems through event-based integration. When a data steward classifies a column or field in the catalog, the classification propagates to the Policy Engine automatically.
- Manual definition — Administrators define classification tags directly in the Admin Console, associating sensitivity labels with specific resources and fields.
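As an illustration of the catalog path, a sync event might carry a classification payload shaped roughly like the sketch below. The field names and event shape are hypothetical, since the actual integration schema is not specified here.

```python
# Hypothetical classification event from a metadata catalog; the real
# Keymate event schema may differ.
event = {
    "resource": "customer-profile",
    "field": "national_id",
    "tags": ["PII.NationalID"],
}

def apply_classification(index: dict, event: dict) -> None:
    """Merge an event's tags into an in-memory classification index
    keyed by (resource, field)."""
    key = (event["resource"], event["field"])
    index.setdefault(key, set()).update(event["tags"])

index = {}
apply_classification(index, event)
print(index[("customer-profile", "national_id")])  # {'PII.NationalID'}
```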
Policy Expressions
DSAC policies use the Keymate policy DSL to reference metadata tags in their conditions. A policy expression can check whether a resource field carries a specific classification:
resource.tags includes "PII.NationalID" AND user.clearance < "RESTRICTED"
→ DENY
This expression denies access when a user without Restricted clearance attempts to access a field tagged as PII containing a national ID.
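A minimal Python sketch of how such a condition might be evaluated follows; the ordered clearance tiers and the semantics of the `<` comparison are assumptions for illustration.

```python
# Assumed ordering of clearance tiers, lowest to highest.
CLEARANCE_ORDER = ["PUBLIC", "CONFIDENTIAL", "RESTRICTED"]

def clearance_below(user_clearance: str, required: str) -> bool:
    """Interpret user.clearance < "RESTRICTED" as a position comparison."""
    return CLEARANCE_ORDER.index(user_clearance) < CLEARANCE_ORDER.index(required)

def evaluate(resource_tags: set, user_clearance: str) -> str:
    # resource.tags includes "PII.NationalID" AND user.clearance < "RESTRICTED" -> DENY
    if "PII.NationalID" in resource_tags and clearance_below(user_clearance, "RESTRICTED"):
        return "DENY"
    return "GRANT"

print(evaluate({"PII.NationalID"}, "CONFIDENTIAL"))  # DENY
print(evaluate({"PII.NationalID"}, "RESTRICTED"))    # GRANT
```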
Evaluation Flow
When the Authorization Decision Provider processes a DSAC policy:
- Metadata injection — The platform retrieves classification metadata for the requested resource fields and injects it into the evaluation context
- Tag matching — The policy engine evaluates the DSL expression, comparing the resource's sensitivity tags against the policy conditions
- User context check — The engine also evaluates user attributes (clearance level, department, role) referenced in the policy
- Decision — Based on the combined evaluation, the policy produces one of three outcomes: GRANT (full access), DENY (no access), or MASK (return data with sensitive fields obfuscated)
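The four steps above can be sketched as follows; the field-to-tag lookup, clearance ranking, and the rule that lower clearances get MASK rather than DENY are illustrative assumptions, not the actual engine behavior.

```python
CLEARANCE = {"PUBLIC": 0, "CONFIDENTIAL": 1, "RESTRICTED": 2}

# Step 1: metadata injection — an illustrative field-to-tags lookup.
FIELD_TAGS = {
    "national_id": {"PII.NationalID"},
    "email": set(),
}

def decide(fields: list, user: dict) -> tuple:
    """Steps 2-4: tag matching, user context check, decision."""
    sensitive = [f for f in fields if FIELD_TAGS.get(f, set())]
    if not sensitive:
        return ("GRANT", [])
    if CLEARANCE[user["clearance"]] >= CLEARANCE["RESTRICTED"]:
        return ("GRANT", [])
    # Assumed policy effect: mask sensitive fields for lower clearances.
    return ("MASK", sensitive)

print(decide(["email", "national_id"], {"clearance": "CONFIDENTIAL"}))
# ('MASK', ['national_id'])
```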
Enforcement Points
DSAC decisions are enforced at multiple levels:
| Enforcement Point | Behavior |
|---|---|
| API responses | Sensitive fields are removed or masked before the response reaches the client |
| Database queries | Column-level restrictions prevent sensitive columns from appearing in query results |
| UI components | Frontend applications hide or obfuscate fields based on DSAC decisions |
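As one concrete example of the API-response enforcement point, a MASK decision could be applied like this; the masking format and helper names are assumptions, not a documented Keymate interface.

```python
def mask_value(value: str, visible: int = 4) -> str:
    """Obfuscate all but the last `visible` characters of a value."""
    return "*" * max(len(value) - visible, 0) + value[-visible:]

def apply_mask(response: dict, masked_fields: list) -> dict:
    """Mask the listed fields before the response reaches the client."""
    return {
        field: mask_value(value) if field in masked_fields else value
        for field, value in response.items()
    }

print(apply_mask({"name": "Ada", "national_id": "123456781234"}, ["national_id"]))
# {'name': 'Ada', 'national_id': '********1234'}
```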
Example Scenario
Scenario
A financial services company classifies certain database columns as PII — including national ID numbers and bank account details. The company requires that only users with Restricted clearance can view these fields in API responses. Other authorized users can access the same records but with PII fields masked.
Input
- Actor: Authenticated user (`analyst@example.com`) with department `Finance` and clearance level `CONFIDENTIAL`
- Resource: `customer-profile` API endpoint, fields include `name`, `email`, `national_id` (tagged `PII.NationalID`), `account_number` (tagged `PII.Financial`)
- Action: GET request to retrieve customer profile
- Context: DSAC policy requires `RESTRICTED` clearance for fields tagged `PII.NationalID` or `PII.Financial`
Expected Outcome
- Decision: MASK
- Why: The user has valid access to the `customer-profile` resource (via their Finance role), but their clearance level (`CONFIDENTIAL`) is below the `RESTRICTED` threshold required by the DSAC policy for PII-tagged fields. The platform returns the full record with `national_id` and `account_number` values masked (e.g., `***-****-1234`), while non-sensitive fields like `name` and `email` are returned in full.
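An end-to-end sketch of this scenario follows; the tag assignments, clearance ordering, and masking format are illustrative assumptions, not the platform's actual behavior.

```python
# Illustrative end-to-end sketch of the scenario; tag assignments,
# clearance ordering, and masking format are assumptions for demonstration.
CLEARANCE = {"PUBLIC": 0, "CONFIDENTIAL": 1, "RESTRICTED": 2}
PII_TAGS = {"national_id": "PII.NationalID", "account_number": "PII.Financial"}

def handle_request(record: dict, user: dict) -> dict:
    out = {}
    for field, value in record.items():
        # The DSAC policy requires RESTRICTED clearance for PII-tagged fields.
        if field in PII_TAGS and CLEARANCE[user["clearance"]] < CLEARANCE["RESTRICTED"]:
            out[field] = "*" * (len(value) - 4) + value[-4:]  # MASK outcome
        else:
            out[field] = value
    return out

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "national_id": "987654321234",
    "account_number": "000011112222",
}
print(handle_request(record, {"clearance": "CONFIDENTIAL"}))
```

A `RESTRICTED` user passing through the same path would receive the unmasked record, matching the scenario's clearance threshold.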
Common Misunderstandings
- "DSAC is the same as ABAC with extra attributes" — While DSAC builds on attribute-based evaluation, it introduces a distinct concept: data classification metadata as a first-class policy input. ABAC evaluates user and environment attributes; DSAC evaluates the sensitivity of the data itself. The two models complement each other and can be composed in PBAC policies.
- "DSAC requires a metadata catalog integration" — No. While automated metadata catalog sync is the recommended approach for large-scale deployments, administrators can define classification tags manually through the Admin Console. Both paths produce the same policy evaluation behavior.
- "DSAC only applies to databases" — DSAC enforcement extends to API responses, UI components, and any resource where field-level sensitivity metadata is available. It is not limited to database column-level access.
DSAC policies are only as effective as the underlying classification metadata. Ensure that data stewards maintain up-to-date sensitivity labels in the metadata catalog or Admin Console. Unclassified fields are not protected by DSAC policies — they pass through without sensitivity checks.
Design Notes / Best Practices
- Establish a classification taxonomy first — Before writing DSAC policies, define a consistent set of sensitivity labels (Public, Confidential, Restricted, PII) and sub-categories (PII.NationalID, PII.Financial, PII.Health). A well-defined taxonomy makes policies readable and maintainable.
- Use automated classification sync for production — Manual tagging works for small environments, but large-scale deployments benefit from event-based metadata catalog integration that keeps classifications current as data schemas evolve.
- Combine DSAC with ABAC and RBAC in PBAC policies — Use RBAC for coarse-grained role checks, ABAC for contextual conditions, and DSAC for data-sensitivity enforcement. Compose them in a PBAC policy with a UNANIMOUS decision strategy for defense in depth.
- Audit DSAC decisions — Enable decision logging for DSAC policies to maintain a compliance trail showing which users accessed which classified data fields and what masking was applied.
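The UNANIMOUS composition recommended above can be sketched as a simple combinator; the decision values and combining rule here are illustrative, not the Keymate PBAC API.

```python
def unanimous(*decisions: str) -> str:
    """GRANT only if every composed model grants; any non-GRANT outcome
    denies overall — defense in depth."""
    return "GRANT" if all(d == "GRANT" for d in decisions) else "DENY"

rbac = "GRANT"   # coarse-grained role check passed
abac = "GRANT"   # contextual conditions satisfied
dsac = "DENY"    # data-sensitivity check failed
print(unanimous(rbac, abac, dsac))  # DENY
```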
Start by classifying your most sensitive data fields (national IDs, financial accounts, health records) and writing DSAC policies for those. Expand coverage incrementally as your classification taxonomy matures.
Related Use Cases
- Protecting PII fields in customer-facing APIs based on data sensitivity classification
- Enforcing column-level database access control for regulatory compliance
- Masking financial data in reporting dashboards based on user clearance
- Restricting access to health records fields classified as sensitive under data protection regulations
- Implementing data-level access tiers across multi-tenant environments
Related Docs
- ABAC — Attribute-based authorization using user, resource, and environment attributes.
- Data Classification & Masking Model — Classification-based masking and blocking model.
- PBAC — Compose DSAC with other models into unified access decisions.
- Policy Evaluation Model — How permission requests are evaluated and decisions produced.