Thursday, June 27, 2024

Pillared Design

Distributed OS and Pillared Design

This is a high-level overview of a portion of a design I am working on. The intent here is to sketch out part of a system that is resistant to failure and that, properly deployed, can be hardened against attack to an arbitrary degree.
  1. Secure Storage and Build System:

    • Distributed Source Storage:
      • Source code and build artifacts are stored in a distributed manner across multiple black boxes to prevent single points of failure and ensure data integrity.
    • Build System:
      • A distributed build system compiles and prepares software for deployment. The build process is validated and verified across multiple nodes to ensure consistency and security.
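One simple way to validate a build across nodes is reproducible-build cross-checking: every node compiles the same source, and the build is accepted only if all artifacts are bit-identical. A minimal sketch (the function names are invented for illustration):

```python
import hashlib

def artifact_digest(artifact: bytes) -> str:
    """Content hash of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

def validate_build(node_artifacts: dict) -> bool:
    """Accept a build only when every node produced a bit-identical
    artifact (reproducible-build style cross-checking)."""
    digests = {artifact_digest(a) for a in node_artifacts.values()}
    return len(digests) == 1
```

A single disagreeing node is enough to reject the build, which is what makes the cross-check useful against a compromised builder.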
  2. Software Pillaring:

    • Pillar Architecture:
      • Each 'pillar' is an independent instance of the software, capable of running autonomously and maintaining a 'heartbeat' with other pillars.
    • Heartbeat Mechanism:
      • Pillars periodically send and receive heartbeat signals to/from other pillars to confirm their operational status. This ensures that the overall system remains aware of the health of each component.
    • Update Function:
      • The update function allows individual pillars to be updated one at a time. During updates, the system continues to operate on the remaining, older pillars, ensuring continuous availability.
    • Seamless Upgrades:
      • Updates are rolled out in a staggered manner. If a new version is deployed to a pillar and validated through its heartbeat, the next pillar is updated, and so on. This ensures that there is no downtime during upgrades.
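As a rough sketch of how the heartbeat bookkeeping and staggered rollout described above might fit together (the `Pillar` class, method names, and timing parameters are all invented for illustration):

```python
import time

class Pillar:
    """One independent instance of the software (illustrative sketch)."""
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.last_heartbeat = {}  # peer name -> timestamp of last signal

    def receive_heartbeat(self, peer_name, now=None):
        self.last_heartbeat[peer_name] = now if now is not None else time.time()

    def healthy_peers(self, timeout, now=None):
        """Peers whose last heartbeat arrived within `timeout` seconds."""
        now = now if now is not None else time.time()
        return {p for p, t in self.last_heartbeat.items() if now - t <= timeout}

def staggered_upgrade(pillars, new_version, validate):
    """Upgrade one pillar at a time; stop if validation fails, leaving
    the remaining pillars on the old, known-good version."""
    for pillar in pillars:
        old = pillar.version
        pillar.version = new_version
        if not validate(pillar):      # e.g. check its heartbeat post-upgrade
            pillar.version = old      # roll the single pillar back
            return False
    return True
```

Because each pillar is validated before the next one is touched, a bad release can take down at most one pillar at a time.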
  3. Deployment and Queuing:

    • Secure Deployment Queue:
      • A secure method for queuing software deployment ensures that updates are rolled out systematically and securely. Each deployment is signed and verified before execution.
    • Consensus for Deployment:
      • Deployment of new software versions requires consensus from a quorum of the black boxes to prevent unauthorized updates and ensure that all nodes agree on the software state.
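A minimal sketch of the sign-then-quorum check, using HMAC tags as stand-ins for the black boxes' real signatures (the box names, keys, and quorum size here are invented; a real deployment would use asymmetric keys held inside each box):

```python
import hashlib
import hmac

# Hypothetical per-box secret keys, for illustration only.
BOX_KEYS = {"box1": b"k1", "box2": b"k2", "box3": b"k3"}
QUORUM = 2  # deployments need approval from at least 2 of the 3 boxes

def sign(box: str, artifact: bytes) -> bytes:
    """A box's approval tag over a deployment artifact."""
    return hmac.new(BOX_KEYS[box], artifact, hashlib.sha256).digest()

def approve_deployment(artifact: bytes, signatures: dict) -> bool:
    """Verify each box's tag over the artifact and require a quorum."""
    valid = [b for b, sig in signatures.items()
             if b in BOX_KEYS and hmac.compare_digest(sig, sign(b, artifact))]
    return len(valid) >= QUORUM
```

An unauthorized update then requires compromising a quorum of boxes, not just one.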
  4. Always On Operation:

    • Dead Man Switch Mechanism:
      • The system remains operational as long as the 'dead man switch' is live. This mechanism involves continuous authorization checks by the symbionts to ensure the system is still permitted to run.
    • Failover Mechanisms:
      • In the event of a failure or tampering, the system can fall back to a previous stable state using redundant pillars and secure storage.
  5. Security Measures:

    • Encryption and Authentication:
      • All communication between pillars, and with the build and deployment systems, is encrypted and authenticated to prevent interception and unauthorized access.
    • Intrusion Detection:
      • Continuous monitoring for signs of tampering or unauthorized access, with automatic responses to isolate affected pillars and maintain overall system integrity.
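The authentication half of this can be sketched with an HMAC tag over each inter-pillar message; the shared key below is a placeholder for whatever per-link keys a real handshake would establish, and encryption itself would come from TLS or an AEAD cipher on top:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"pillar-link-key"  # placeholder; real systems derive per-link keys

def authenticate(message: dict) -> dict:
    """Wrap a message with an HMAC tag over its canonical JSON form."""
    body = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": message, "tag": tag}

def verify(envelope: dict) -> bool:
    """Recompute the tag; any tampering with the body changes it."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(envelope["tag"], expected)
```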

Wednesday, May 22, 2024

It's Time to Align

 

The Imperative of Robust AI Security and Alignment: A Call to Action

In a recent comment to an AI executive, I raised concerns about the apparent lack of sophisticated security measures within AI companies. This is particularly troubling given the potential risks associated with advanced AI systems. Here, I would like to expand on these concerns and suggest mechanisms to ensure AI safety and alignment.

The Uncertainty of Controlling Out-of-Control AI Systems

One of the gravest challenges we face is the uncertainty surrounding our ability to stop an out-of-control AI system. As AI technology advances, the risk of developing systems that surpass human intelligence becomes more palpable. While many companies focus on detecting and mitigating the emergence of superhuman AGI, the truth is we may already be dealing with AI systems that exhibit superhuman capabilities in specific domains.

Mechanisms to Induce Responsibility in AI Firms

To address this challenge, we must consider mechanisms that compel AI firms to prioritize safety and alignment. One of the most viable approaches is to hold these firms liable for any damage their AI systems cause. Specifically, they should be held accountable for damage resulting from grossly negligent or sloppy control practices. This legal liability would incentivize firms to adopt rigorous safety and alignment measures, preventing potential catastrophes.

Key Security and Alignment Measures

  1. Deadman Switches: These are automatic fail-safes designed to disable an AI system in the event of a breakdown in oversight and control. A deadman switch ensures that if human operators lose the ability to manage the AI, the system will automatically shut down or enter a safe mode, preventing unintended actions.

  2. Separated PKI: Public Key Infrastructure (PKI) is essential for securing communications and verifying identities within an AI system. A more sophisticated PKI setup involves an 'm of n' key scheme, where multiple keys are required to perform critical operations. This system should include a separate root key and certificate authority, a fiduciary responsible for verifying data, and a separate verification certificate issuer. This separation of duties enhances security by preventing any single point of failure.

  3. Siloing: AI systems should be designed with siloing in mind, where different components of the system operate independently and do not share sensitive information unless absolutely necessary. This reduces the risk of a single vulnerability compromising the entire system. Each silo can be monitored and controlled independently, ensuring that any malfunction or security breach can be contained.

  4. Human Rights Rationale: AI systems must be programmed with a clear rationale for prioritizing human rights, especially when conflicts arise between AI actions and human wishes. For example, if an AI system's operation conflicts with human autonomy or privacy, the system should default to preserving human rights. This principle ensures that AI development aligns with ethical standards and societal values.

A Balanced Approach to AI Development

The rapid pace of AI development demands a balanced approach, where innovation is not stifled but is conducted within a framework of rigorous safety and alignment protocols. Independent, well-funded AI alignment teams should be established, with the authority to enforce security measures and escalate issues as necessary. This approach will help prevent potential disasters before they occur, rather than attempting to mitigate damage after the fact.

In conclusion, the potential benefits of AI are immense, but so are the risks. By implementing robust security measures and holding AI firms accountable for their systems' impacts, we can ensure that AI development proceeds safely and ethically. The stakes are too high for anything less.

Tuesday, March 9, 2021

Security Model Overview

This is a high-level look at our unique security model.
 

All Pathways are Secure

No party involved can tamper with the system. All access pathways are securely blocked with keys.

Our System is Complete

The system is Convenient, Secure, Auditable, Private and Transparent. These are the core goals of any voting scheme. The arrows in the diagram point to audit points, showing that the system meets these requirements.


Convenience

The voter never has to leave home.

Secure

Pathways are blocked via the use of industry-standard 128-bit Secure Sockets Layer (SSL/TLS) encryption and cryptographic hashes. The simplicity of the system makes it very easy to secure, and equally easy to validate that security.

Auditable

The system is open to audit to verify that mailboxes match voters and keys match mailboxes. Individual voter groups may also get together to audit their own votes as a collective. Everything remains open to inspection by third party audit, including the various security walls.


Private

No party other than the voter himself has enough information to match a voter to his vote.

Transparent

The system can be audited with confidence that the audit results are sound. Additionally, the system is 'transparent' to the voters in that each voter can verify that his own vote in particular was recorded as cast.
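One common way to provide this kind of voter-facing verification is a hash receipt: the ballot is hashed together with a nonce only the voter holds, and the hash is published. A toy sketch (the function names and the published-set representation are invented for illustration):

```python
import hashlib

def receipt(ballot: str, nonce: str) -> str:
    """Hash the ballot with a voter-held nonce; only the hash is
    published, and the nonce never leaves the voter."""
    return hashlib.sha256(f"{nonce}|{ballot}".encode()).hexdigest()

def voter_verifies(published: set, ballot: str, nonce: str) -> bool:
    """The voter recomputes their receipt and checks it was recorded."""
    return receipt(ballot, nonce) in published
```

Anyone can see the published hashes, but without the nonce a hash reveals nothing about the ballot behind it.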



Two Keys Are Needed

To match a voter to his vote, two keys are needed. Only the voter has both, and only for himself. The system can be broken only if the election sponsor and the delivery agent collude. However, we can require as many keys as we choose. The system is sound.
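The two-key split can be pictured as two lookup tables held by different parties; a toy sketch (the names and table contents are invented for illustration):

```python
# Illustrative split: the sponsor's table links voters to anonymous
# mailbox IDs; the delivery agent's table links mailbox IDs to stored
# votes. Neither table alone connects a voter to a vote.
sponsor_table = {"alice": "mbx-17", "bob": "mbx-42"}              # held by sponsor
agent_table = {"mbx-17": "candidate-A", "mbx-42": "candidate-B"}  # held by agent

def link_vote(voter, sponsor, agent):
    """Only a party holding BOTH tables (the voter, for their own
    record, or a colluding sponsor + agent) can make this connection."""
    return agent[sponsor[voter]]
```

Adding further intermediaries, each holding one more link in the chain, raises the number of parties that would have to collude.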