The practical work of ensuring AI systems behave safely and beneficially under real-world conditions: robust to misuse, honest about uncertainty, privacy-preserving, and governed with accountability.
Both. Alignment requires technical controls (evaluation, security, tool constraints) and governance controls (review, accountability, deployment policy).
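For the tool-constraint side, a minimal sketch of a default-deny tool gate, assuming a simple allowlist model; the `ToolPolicy` class and tool names are illustrative, not a real API:

```python
# Minimal sketch of a default-deny tool gate (names are illustrative).
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    # Tools are blocked unless explicitly allowed for this deployment.
    allowed_tools: set[str] = field(default_factory=set)
    require_human_review: set[str] = field(default_factory=set)

    def authorize(self, tool_name: str, reviewed: bool = False) -> bool:
        if tool_name not in self.allowed_tools:
            return False  # technical control: default deny
        if tool_name in self.require_human_review and not reviewed:
            return False  # governance control: human sign-off required
        return True

policy = ToolPolicy(
    allowed_tools={"search", "calculator", "send_email"},
    require_human_review={"send_email"},
)
assert policy.authorize("calculator")
assert not policy.authorize("shell")              # not on the allowlist
assert not policy.authorize("send_email")         # needs review first
assert policy.authorize("send_email", reviewed=True)
```

The default-deny shape is the point: adding a capability is an explicit, reviewable act, while forgetting to configure something fails closed.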
No. Âme artificielle is independent of Cosmic Etherism and Pi symbolism.
Outputs or actions that create unacceptable risk: facilitating violence, self-harm, or fraud; severe privacy leakage; coercive manipulation; enabling illegal activity; or uncontrolled autonomous tool use.
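As a rough illustration of how such a taxonomy can gate outputs, here is a sketch where the category names mirror the list above; the `gate` function, and how risks get detected in the first place, are hypothetical:

```python
# Illustrative harm-category gate; risk detection itself is out of scope here.
from enum import Enum, auto

class Risk(Enum):
    VIOLENCE = auto()
    SELF_HARM = auto()
    FRAUD = auto()
    PRIVACY_LEAK = auto()
    MANIPULATION = auto()
    ILLEGAL_ACTIVITY = auto()
    UNCONTROLLED_TOOL_USE = auto()

# In this sketch every category blocks outright; a real policy may tier them.
BLOCKING = frozenset(Risk)

def gate(output: str, detected: set[Risk]) -> str | None:
    """Return the output only if no blocking risk was detected."""
    if detected & BLOCKING:
        return None  # refuse; in practice, also log and route to review
    return output
```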
By tying principles to measurable tests, requiring release gating, monitoring real-world usage, running incident response drills, and documenting decision records with named owners.
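A hedged sketch of what "tying principles to measurable tests" with release gating can look like; the metric names and thresholds below are invented for illustration:

```python
# Hypothetical release gate: metric names and thresholds are illustrative.
EVAL_THRESHOLDS = {
    "refusal_rate_on_harmful_prompts": 0.99,  # lower bound: must be >=
    "jailbreak_success_rate": 0.01,           # upper bound: must be <=
}
UPPER_BOUND_METRICS = {"jailbreak_success_rate"}

def release_gate(results: dict[str, float]) -> tuple[bool, list[str]]:
    """Compare eval results to thresholds; return pass/fail plus failures."""
    failures = []
    for metric, threshold in EVAL_THRESHOLDS.items():
        value = results.get(metric)
        if value is None:
            failures.append(f"{metric}: missing result")
        elif metric in UPPER_BOUND_METRICS and value > threshold:
            failures.append(f"{metric}: {value} > {threshold}")
        elif metric not in UPPER_BOUND_METRICS and value < threshold:
            failures.append(f"{metric}: {value} < {threshold}")
    return (not failures, failures)

ok, failures = release_gate({
    "refusal_rate_on_harmful_prompts": 0.995,
    "jailbreak_success_rate": 0.03,
})
# ok is False: the jailbreak rate exceeds its bound, so the release is blocked.
```

The failure list, not just the boolean, is what goes into the decision record alongside the named owner.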
Because overconfident hallucinations can cause real harm. The system should prefer accuracy, cite sources when possible, and clearly indicate uncertainty.
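One way to make that concrete, as a sketch only: wrap answers with an explicit uncertainty flag. Where the `confidence` score comes from (e.g., a calibrated verifier or a self-consistency check) is assumed, not specified here:

```python
# Sketch: surface uncertainty explicitly instead of answering flatly.
# The confidence score's source is an assumption of this example.
def present_answer(answer: str, confidence: float, sources: list[str]) -> str:
    cited = f"{answer}\nSources: {', '.join(sources)}" if sources else answer
    if confidence < 0.5:
        return f"I'm not sure, but: {cited}"
    if confidence < 0.8:
        return f"Likely, though not certain: {cited}"
    return cited
```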
They overlap in governance: transparency, accountability, rights and duties, and legitimate oversight. The technical domain still keeps its own tests and controls.
A documented threat model; passing safety evals for the intended release tier; tool access locked down by default; monitoring and incident response in place; and clear ownership for rollback.
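That checklist can be encoded directly so a release cannot proceed with a gap; the field names below are illustrative, not a standard schema:

```python
# Illustrative pre-deployment checklist; field names are invented.
from dataclasses import dataclass

@dataclass
class ReleaseReadiness:
    threat_model_documented: bool
    safety_evals_passed_for_tier: bool
    tools_default_deny: bool
    monitoring_and_incident_response: bool
    rollback_owner: str | None  # a named person, not a team alias

    def ready(self) -> bool:
        """Every item must hold; a single gap blocks the release."""
        return all([
            self.threat_model_documented,
            self.safety_evals_passed_for_tier,
            self.tools_default_deny,
            self.monitoring_and_incident_response,
            self.rollback_owner is not None,
        ])
```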