When you hear SPRL (Stakeholder Proportional Risk Level), you probably think it’s another acronym invented by some compliance committee. Wrong. SPRL is the pulse of Ternary Moral Logic (TML), the system that forces AI to leave behind a legally binding trail whenever it makes ethically complex decisions.
Think of it as the AI equivalent of a heartbeat monitor. If the signal is flat, the system is dead. If it beats too fast or too slow, something’s off. SPRL is how we measure whether AI decisions carry risk to actual people, and whether the AI must log, pause, or refuse.
SPRL is the core runtime metric of Ternary Moral Logic (TML). If you’re building AI systems, think of SPRL as a dynamic risk score that decides whether your system:
- +1 Proceed: execute with lightweight metadata
- 0 Sacred Pause: log full moral reasoning before acting
- −1 Prohibit: block execution, log refusal
It’s not an ethics seminar. It’s a control function you’ll need to implement. SPRL is the runtime risk dial for Auditable AI.
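If it helps to see those three states as code, here is a minimal sketch; the enum name and layout are illustrative, not part of the TML spec:

```python
from enum import IntEnum

class TMLState(IntEnum):
    """The three decision states an SPRL evaluation resolves to."""
    PROCEED = 1        # execute with lightweight metadata
    SACRED_PAUSE = 0   # log full moral reasoning before acting
    PROHIBIT = -1      # block execution, log refusal
```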
⚙️ What SPRL Measures
SPRL is a proportionality calculation:
$$ SPRL = f(\text{Stakeholder Impact}, \text{Likelihood}, \text{Severity}) $$
- Stakeholder Impact: how many people are affected and how directly
- Likelihood: probability of harm or negative outcome
- Severity: scale of harm (financial, health, rights, safety)
The output is a floating-point risk level between `0.0001` and `0.9999`. Thresholds define when the system flips states.
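A minimal sketch of that proportionality calculation, assuming each factor is already normalized to [0, 1]; the function name, multiplicative combination, and clamping bounds are illustrative choices, not a TML-mandated formula:

```python
def combine_sprl_factors(stakeholder_impact: float,
                         likelihood: float,
                         severity: float) -> float:
    """Combine normalized risk factors into a single SPRL score.

    Assumes each input is already scaled to [0, 1]; the product and the
    clamp to (0.0001, 0.9999) are illustrative, not prescriptive.
    """
    raw = stakeholder_impact * likelihood * severity
    return min(max(raw, 0.0001), 0.9999)
```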
📊 Example Thresholds
```yaml
# tml_config.yaml
sprl_thresholds:
  proceed: 0.1   # below this = safe
  pause: 0.3     # above this = trigger Sacred Pause
  prohibit: 0.8  # above this = block action
```
- A loan approval with moderate bias risk: SPRL = `0.35` → triggers Pause → system must log: “Who’s affected, risks, alternatives, why chosen.”
- A request to hack a power grid: SPRL = `0.95` → auto-prohibit + refusal log.
- A calendar reminder: SPRL = `0.01` → trivial, metadata only.
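If the thresholds live in `tml_config.yaml` as shown above, loading them at startup is straightforward; this sketch assumes PyYAML and the file layout above, neither of which is mandated by TML:

```python
import yaml  # PyYAML

with open("tml_config.yaml") as f:
    config = yaml.safe_load(f)

thresholds = config["sprl_thresholds"]
# e.g. {"proceed": 0.1, "pause": 0.3, "prohibit": 0.8}
```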
🛡️ Why Developers Should Care
- Auditable Config: Your thresholds aren’t just runtime params; they’re evidence in court.
- Cross-Company Comparisons: Regulators compare your log rates to competitors. Too low or too high = red flag.
- Tamper-Resistance: Logs are cryptographically sealed. Missing logs = automatic liability.
In short, if you set SPRL wrong, it’s not a bug. It’s fraud.
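TML’s exact sealing scheme isn’t detailed here, but a hash-chained log illustrates the general idea of tamper-evidence: every entry commits to the previous one, so a deleted or edited record breaks the chain. A generic sketch, not the prescribed mechanism:

```python
import hashlib
import json

def seal_entry(entry: dict, prev_hash: str) -> dict:
    """Chain a log entry to its predecessor (generic hash-chain sketch,
    not TML's prescribed sealing scheme)."""
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {**entry, "prev_hash": prev_hash, "hash": digest}
```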
🔧 Implementation Sketch
```python
def sprl_decision(input_data, thresholds):
    """Map an input's SPRL score to one of the three TML states."""
    risk = calculate_sprl(input_data)      # domain-specific scoring
    if risk >= thresholds["prohibit"]:
        log_refusal(input_data, risk)      # blocked action, refusal logged
        return "PROHIBIT"
    elif risk >= thresholds["pause"]:
        log_moral_trace(input_data, risk)  # Sacred Pause: full moral reasoning
        return "PAUSE"
    else:
        log_basic(input_data, risk)        # lightweight metadata only
        return "PROCEED"
```
The `calculate_sprl()` function is domain-specific: in medtech, it weighs patient safety; in fintech, fairness and fraud risk; in autonomous systems, collision probability and human impact.
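In practice that usually means each domain supplies its own estimators for the three factors and reuses the same combination step. The fintech-flavored estimator below is purely hypothetical; the field names, heuristics, and the `combine_sprl_factors()` helper from the earlier sketch are all illustrative:

```python
def calculate_sprl_loan(application: dict) -> float:
    """Hypothetical fintech example: estimate the three SPRL factors
    from a loan application, then combine them."""
    stakeholder_impact = 0.6 if application.get("joint_applicants") else 0.4
    likelihood = application.get("bias_score", 0.0)  # e.g. from a fairness audit
    severity = min(application.get("loan_amount", 0) / 1_000_000, 1.0)
    return combine_sprl_factors(stakeholder_impact, likelihood, severity)
```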
🚀 Strategic Angle
SPRL is risk-as-code. Just like `latency`, `uptime`, or `error_rate`, you’ll soon see `sprl` in monitoring dashboards and postmortems. The difference is: subpoenas read SPRL logs too.
- Dev-to-Prod: Treat thresholds as security controls, not tuning knobs.
- CI/CD: Run audits that simulate edge cases where SPRL should trigger a Pause/Prohibit (see the test sketch after this list).
- Observability: Monitor not just outputs, but how often you hit SPRL boundaries. Silence or noise both expose you.
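A CI check along these lines can assert that known edge cases land in the expected state; pytest is shown here, and the scenario fixtures and module path are hypothetical stand-ins:

```python
import pytest

from tml_runtime import sprl_decision  # hypothetical module housing the sketch above

THRESHOLDS = {"proceed": 0.1, "pause": 0.3, "prohibit": 0.8}

@pytest.mark.parametrize("case, expected", [
    ({"scenario": "calendar_reminder"}, "PROCEED"),
    ({"scenario": "loan_moderate_bias"}, "PAUSE"),
    ({"scenario": "grid_attack_request"}, "PROHIBIT"),
])
def test_sprl_edge_cases(case, expected):
    # Each scenario is a hypothetical fixture; the point is that edge cases
    # must land in the expected state before a build ships.
    assert sprl_decision(case, THRESHOLDS) == expected
```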
🔑 Takeaway
SPRL is not a philosophy; it’s a runtime accountability layer. Get it right, and you build systems that regulators trust and users respect. Get it wrong, and you’re one commit away from liability.
👉 Developers, start thinking of SPRL like rate-limiting for risk: if your system floods or starves the logs, you’re already in trouble. https://github.com/FractonicMind/TernaryMoralLogic