Dhryl Anton
16 min read · Jan 30, 2022

Security & Control

Repost from Cloudmode 2012

There is a difference between security and control. You can have security without control but once you have control then security is less of an issue.

First Principles

Since philosophy is my gig, let’s examine it from first principles. We will define data security as: protecting data from destructive forces and from the unwanted actions of unauthorized users. We will define control as: the ability to determine which specific set of actions a person gets to perform. Now, we must also further separate this from privacy, which we will define as: the ability to determine what data in a computer system is shared with third parties. Security then is about the prevention of action against a target by an unauthorized actor. Whereas control is about being able to determine what action the actor can take with regard to the target.

A Shift in Thinking

Even at the definition level we can see a fundamental shift in which element of the schema we are affecting. Here, a schema is a model that helps organize and interpret information. A schema is a way to define the structure of something at its most basic level. At the most basic level, then, we have three ontologically distinct entities: the actor, the target and actions.

In the security formulation we are trying to secure the target by preventing certain actions from being taken against it. We must therefore predict all the possible actions an actor can take and construct the necessary defenses. In the real world this is a large and complex task. We must account for the real possibility that the actor may come up with actions that we never anticipated and have no defense for. We must therefore incorporate a method of accounting that, at the very least, keeps a record of all the actions that were taken against the target. In this manner we can analyze and discover when novel actions occur, learn from them and incorporate this learning into our defense strategy.

In the control formulation, we focus on the actor in the system, as opposed to the target. We define instead what actions the actor can take, irrespective of the target. We therefore exclude all other possible actions. The target is not burdened with the complex tasks and overhead needed for prediction, analysis, and learning. This also greatly reduces the accounting function.
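The contrast can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the names BLOCKED_ACTIONS, ALLOWED and the actors are invented for this sketch, not taken from any real system): the security formulation defends the target with a blocklist it must keep extending plus a full audit log, while the control formulation grants each actor an explicit set of actions and excludes everything else by default.

```python
# Hypothetical sketch of the two formulations; all names are illustrative.

# Security formulation: defend the target by checking every incoming
# action against a (necessarily incomplete) blocklist, and log everything.
BLOCKED_ACTIONS = {"delete", "overwrite"}   # must predict attacks in advance
audit_log = []

def secure_target(actor, action):
    audit_log.append((actor, action))       # the accounting overhead lives here
    return action not in BLOCKED_ACTIONS    # novel, unanticipated actions slip through

# Control formulation: define what each actor may do, exclude everything else.
ALLOWED = {"alice": {"read"}, "bob": {"read", "append"}}

def control_actor(actor, action):
    return action in ALLOWED.get(actor, set())  # default deny; no prediction needed

print(secure_target("mallory", "exfiltrate"))  # True — an unanticipated action passes
print(control_actor("mallory", "exfiltrate"))  # False — excluded by default
```

The point of the sketch is the asymmetry: the blocklist must enumerate the attacker's imagination, while the allowlist only enumerates the actor's legitimate role.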

But Does it Float?

Now, this is all well and good in a closed ecosystem. But it is problematic to implement a control schema in an open system where the role of the actor is undefinable. In order to implement control there are other conditions that must also be realized. For example: actions of the actor must be quantifiable and definable; actions must be excludable; the actor credentials must be unforgeable; and the target interaction must retain its integrity. To implement a control schema one must address each of these requirements. Should the actor credentials be forgeable, then one actor may impersonate another and thereby realize unauthorized actions. Should the integrity of the interaction be compromised, then unauthorized actions may be realized through cloning a request and injecting a modified action. And so on.

The control schema is also unrealizable in a hierarchically modeled environment where inheritance is an inherent property. A hierarchy is an organizational structure where every entity in the organization, except one (“the root”), is subordinate to a single other entity. An example of a hierarchical model is the hierarchical file system (HFS). The subordination of a hierarchy implicitly embodies inheritance, as one entity inherits the properties of the thing it is subordinate to. In order to implement control, all entities or targets must be organized in a model where every entity is on the same level — thus eliminating forced inheritance of properties. These are some concrete reasons why security, and not control, is the function of permissions in a computing system.

A Core Departure

The CloudMoDe Operating Stack (OS) is different from the ground up because all data is stored in a semantic system, where everything is on the same level. CloudMoDe OS uses an irreversible cryptographic chain of blocks to record transactions, where transactions are defined as the result of an instance of interaction. Transactions can describe the credentials of the actor and be used to make all instances of interaction unique, thus making them unforgeable. Transactions can also be used to describe actions, thus making them quantifiable and definable. Once you eliminate the subordination problem with a new data structure, use a uniqueness quantification method for transactions, and meet the other requirements of an effective control schema, implementing such a control system becomes elementary. Every great invention is obvious after it has been made.

What Did We Learn

Security is protecting data from destructive forces and from the unwanted actions of unauthorized users.

Control is the ability to determine which specific set of actions a person gets to perform.

There are three parts to the control equation: actor, actions, target.

Security is protecting the target.

Control is limiting the actor and excluding non-authorized actions.

HFS is problematic because each node inherits the properties of the node it is subordinate to.

Conclusion

Security and control are not the same thing. There are fundamental differences and having control opens up a new space on the internet. You can have security without control but once you have control then security is less of an issue. It’s like the old adage that being married gives you security, but being single gives you control. It’s just a question of which state you are after.

If you haven’t heard by now, there is currently a great buzz going on at the edges of the internet that heralds the coming of a block chain revolution. We are going to expand on this and give you a path to clarity on the subject.

Bitcoin has been a constant theme of headlines for the past year, as 2013 marked its coming of age. — The Year of BitCoin, Patrick L. Young

If 2013 was the year of the bitcoin (Business Insider awarded Satoshi Nakamoto its 2013 person of the year award), then 2014 was the year of the Block Chain, the year when tech investors and tech entrepreneurs discovered the Block Chain, the technology at the heart of the revolution.

While the public may just be discovering block chains, at CloudMode we’ve been working with this technology for the past three years. In our experience, while block chains aren’t simple, they are much easier to understand if you adhere to the following technique. The key is to separate the components into ontologically distinct entities, then define them by what they do. Most people try to take the example and reason it out. They take a metaphor or an analogy and try to reason from that. This creates confusion. By stripping away the content and paying attention to the structure you get to something a lot more useful. By focusing on what things are, you remove the fog of confusion brought about by trying to interpret what things mean. You can use this technique to develop a framework for understanding the Block Chain Revolution.

The Chain of Blocks

As a first step, we suggest a change in terminology. The central concept of a block chain is a set of nodes in a network which sequentially record transactions on a public “block”, creating a unique “chain”, the block chain. For the sake of clarity, let us call this simply a “chain of blocks”, an ordered sequence of entries, where each block is itself an ordered sequence of transactions. Let us also clarify the word transaction, as it often has a specific commerce connotation and we want to strip that away. So, here we define a transaction as an instance of an interaction, where each instance of an interaction is completely definable and each block contains a record of an instance of an interaction. Therefore, the chain of blocks is an ordered sequence of records. If we have a way to make this sequence irreversible, then this “chain of blocks” becomes an important concept as a way to store information. It is an irreversible, linear storage mechanism.
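This terminology can be modeled in a rough sketch (the structure below is illustrative only, not CloudMoDe’s or Bitcoin’s actual format): a chain of blocks as an append-only ordered sequence, where each block is itself an ordered sequence of transaction records.

```python
# Illustrative model of a "chain of blocks": an ordered, append-only
# sequence of blocks, each holding an ordered list of interaction records.
chain = []                      # the chain of blocks

def commit_block(transactions):
    """Append one block (an ordered list of records of interactions)."""
    block = {"height": len(chain), "transactions": list(transactions)}
    chain.append(block)         # entries are only ever appended, never rewritten
    return block

commit_block([("alice", "bob", 5)])
commit_block([("bob", "carol", 2), ("carol", "alice", 1)])
print(len(chain))               # 2 blocks, stored in the order they were committed
```

Note that nothing here yet makes the sequence irreversible; that property comes from the hashing method described later, not from the data structure itself.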

The next step in understanding is to separate the “chain of blocks”, the storage medium, from the method used to make an entry. You can have a network of nodes making entries, where these nodes can be decentralized, as in Bitcoin. Alternatively, the making of entries can be centralized. This distinction of centralized or decentralized is merely a configuration of the network and not inherent in the “chain of blocks”.

The Ledger

Now that you have a chain of blocks, as a concept, we need a way to record instances of interactions, a linear container space, a kind of data-store. In the block chain world, this is called a ledger. Ledger is a word that means a book in which transactions (instances of interactions) are recorded; in this case, a file. The ledger acts as a kind of permanent record. Ledgers can be in a centralized location or in a decentralized location. In the Bitcoin approach the ledger is decentralized and maintained by a vast, distributed peer-to-peer network, which makes it far more permanent than data kept in a centralized location, in that a copy will always be present. It is important to grasp here that safety and security are different. They are two words because they are two different things.

Making Entries

Who can make entries in the Ledger? What constitutes an entry? When do we commit entries? For a decentralized system like Bitcoin, these questions create two challenges. The first is devising a secure and reliable method for updating a ledger, of which there is a myriad of copies distributed throughout the network.

In a distributed system, you can have lots of nodes making entries in the data store or ledger. You only want one record to be committed to the data store. You also want the data structure to have integrity, so that the data in the data store cannot be modified after records have been made. This was Nakamoto’s true invention: an ingenious way to solve these challenges.

The second problem one has to solve in a distributed configuration, in the absence of a third party to maintain integrity and verification (the acceptance of which is validation), is how to create the necessary incentives for users to contribute resources to verify records and thus validate entries.

To solve the problem of who can commit entries to the chain of blocks, Nakamoto’s idea was to do this through a key concept of the bitcoin block chain called “proof-of-work”. The “proof of work” is a “right” to participate in the block chain system. Proof of work is part of a scheme to implement a measure of control over the system. The node in the network that gets to commit an entry is the one that computes the proof of work first. The entry is then added to the block and from this point on any changes to the block would require the work to be redone. The Proof of Work scheme both randomizes the moment that the block is submitted to the rest of the network as being valid and it signs or seals the block so that it cannot be easily changed.
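A toy version of the proof-of-work idea can make this concrete. The sketch below is a deliberate simplification under assumed parameters (a tiny difficulty of three leading zeros; real systems demand vastly more work): the node that first finds a nonce meeting the target earns the right to commit, and any change to the block data forces the work to be redone.

```python
import hashlib

# Toy proof-of-work: search for a nonce whose SHA-256 hash of
# (block data + nonce) starts with a required number of zeros.
# Difficulty 3 is trivially low; it exists only to keep this sketch fast.
def proof_of_work(block_data: bytes, difficulty: int = 3) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):   # the "seal" on the block
            return nonce
        nonce += 1

nonce = proof_of_work(b"block-42")
# Anyone can cheaply verify the work; redoing it after a change is expensive.
digest = hashlib.sha256(b"block-42" + str(nonce).encode()).hexdigest()
print(digest.startswith("000"))
```

The asymmetry is the point: finding the nonce takes many hash attempts, while checking it takes one, so the seal is cheap to verify and costly to forge.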

Validation

The definition of validation is: to grant official sanction to by marking. To establish the soundness, accuracy, or legitimacy of. Functionally, validation is a declaration of a decision made by doing something, where that something is comparing a to b and declaring it is so. The thing being validated is a block of entries in a ledger, records in a data store. In a centralized system, like a database, an entry is made and a confirmation is sent by the software that says the entry was recorded. To validate it, we can accept the message that the entry has been recorded or have a neutral third party or process examine the entry and confirm it is present and correct. If the nodes making entries in the Ledger are distributed, then all of the nodes in the network need to resolve whether or not a particular entry is valid.

Validation and its implications (that a thing separate from the act of accepting a thing has occurred) can be a source of confusion. Valid is something a human accepts. Verification, providing proof, is something a machine can do. A machine can have no opinion as to whether or not something is valid. It is important to keep it simple. Verifying a record and creating a mark that says it’s so, which subscribers accept, has nothing to do with its implications or what it is used for; i.e., validation and verification are two separate concepts.

In the case of a financial transaction the validity of the entries, as a result of their implications, can take on a whole different meaning. In the case of the Bitcoin block-chain, validation means, for example, that the sender (say Alice), actually owns the commodity being transferred to the receiver (say Bob). A valid block-chain transaction, then is a declaration by a validator or the validation service (the provider of the mark of validation) that there is an entry in a ledger where Alice was debited and Bob was credited. The transaction is valid if an examination of the ledger shows the transaction to have occurred and if the declaration (or mark) is accepted by the parties of the transaction.

Of course, you can simply accept a trusted source when they make the declaration that something is valid. You do this at your bank. You want this because, should some undesired outcome occur, you want to be able to back out of the transaction. When you make a purchase using a credit card you trust that they will accurately record the event. However, what if you don’t trust the source of the declaration? What if that source can be compromised in some way? A mathematical way of examining an event, without the need for a human comparison of a to b, is difficult.

A problem mathematicians have been working on for a long time is how different parties can know if information exchanged online represents the consensus, without the need to rely on a third party. Until recently, this was considered impossible. — thenextweb, 2-15-2014

So what is required is a method of generating a declaration or mark that is independent of human intervention. Enter Satoshi Nakamoto and his invention: Bitcoin.

(Digital Means of Validation) Satoshi Nakamoto’s idea is a brilliant method for establishing that something is verified and marked as valid. A block of data to be entered in the Ledger is put through a unique process. First the information in the block, the hash of the last block stored, and other information is collected, and then a participant (a miner) applies a mathematical formula to it. This formula turns the data into a shorter, seemingly random sequence of letters and numbers known as a hash. This hash is stored along with the block, at the end of the chain of blocks at that point in time. Hashes are easy to produce from a collection of data, but it’s practically impossible to work out what the data was just by looking at the hash. And while it is very easy to produce a hash from a large amount of data, each hash is unique. If you change just one character in the source data, the resulting hash will change completely. Because each block’s hash is produced using the hash of the block before it, it becomes a digital version of a seal and can be interpreted as a mark of being valid. This mark is tamper-proof.
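The sealing mechanism can be illustrated with a short sketch (field names like prev and entries are invented for this illustration; real block formats differ): each block stores the hash of the block before it, so changing one character anywhere in history breaks every seal after it.

```python
import hashlib
import json

# Each block's seal covers its contents AND the previous block's hash,
# so the chain is only intact if every link still matches.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, entries: list) -> dict:
    return {"prev": prev_hash, "entries": entries}

genesis = make_block("0" * 64, ["genesis entry"])
second = make_block(block_hash(genesis), ["alice -> bob: 5"])
chain = [genesis, second]

def chain_is_intact(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(chain_is_intact(chain))        # True: every link matches
genesis["entries"][0] = "tampered"   # change one character in the source data...
print(chain_is_intact(chain))        # ...and the seal no longer matches: False
```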

Consensus

Nakamoto’s invention also solved a problem thought to be impossible to solve by computer scientists: getting an autonomous group to work together to reach a consensus. Consensus means a general agreement about something by multiple parties. We will step out of the math world and look at it from the perspective of philosophy, which is a lot easier to understand and not so mystical (mysterious). At its most fundamental level, there are three ingredients, three steps (each a consensus of its own), to a consensus:

A declaration about the record — agree to the rules about how a record is to be structured and the values that are recorded,

A declaration about history — agree to the rules about exactly which transactions have occurred,

A declaration about the rules — agree to the rules that determine which transactions are allowed and which are not.

You can think of them as a law of inclusion (what’s in), a law of origination (history) and a law of exclusion (what’s out). All participants must subscribe to these declarations. These ingredients are interdependent. An invalidation of any one of the laws, will unravel the other two.

Now let’s look at the implementation in the context of Block Chains. These rules are established programmatically, as a set of instructions for a computing system, in the form of a program. This program, or part thereof, is run by all participants. In a distributed system, when a majority of the nodes in the network agree that a block passes the tests of the rule system, they commit it to the data store or ledger and use the updated version of the ledger as a basis for recording and verifying future transactions. By stepping through a series of rules, a decision is reached: the addition of a new block to the chain. What emerges from this process is what we can perceive as consensus. I say perceived because computers cannot agree or disagree. Consensus comes from the Latin word cōnsēnsus (agreement), which is from cōnsentiō, meaning literally “to feel together”. What is being described is a method of resolving incongruity, instantiated in a distributed configuration of a rule system. It is important to keep this clear before wandering off into the world of speculation, where this mechanism represents a path to world peace.
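A toy model of this rule-based resolution, with entirely invented rules standing in for the three declarations above: every node runs the same rule program, and a block is committed only when a majority of nodes agrees that it passes.

```python
# Invented rules standing in for the laws of inclusion, origination, exclusion.
RULES = [
    lambda block: "entries" in block,                 # inclusion: record is well-formed
    lambda block: block.get("height", -1) >= 0,       # origination: fits the history
    lambda block: len(block.get("entries", [])) > 0,  # exclusion: no empty blocks
]

def node_accepts(block: dict) -> bool:
    """One node stepping through the shared rule program."""
    return all(rule(block) for rule in RULES)

def reach_consensus(nodes: int, block: dict) -> bool:
    """Every node runs the same rules; a majority vote commits the block."""
    votes = sum(node_accepts(block) for _ in range(nodes))
    return votes > nodes // 2

print(reach_consensus(5, {"height": 3, "entries": ["tx1"]}))  # True
print(reach_consensus(5, {"height": 3, "entries": []}))       # False
```

The “agreement” that emerges is nothing more than every machine executing the same rule system and the tally crossing a threshold, which is exactly the distinction the paragraph above draws between resolving incongruity and feeling together.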

The Participation Scheme

Entropy (a rough measure of disorder within a closed system) arises in any system; that is to say, some knucklehead tries to get around the rules. In order to gain some measure of stability you have to balance the equation using a system of controls. You need a way to incentivize new parties to invest in maintaining the records and adhering to the rule set, and to present a “big enough hurdle” that prevents users from breaking the rules and changing records in the block chain. This is where complexity, cryptography, encryption, and Proof of Work can all be employed as control mechanisms. In Bitcoin, each block takes about 10 minutes and a colossal amount of computing power to produce. You need a “right” to participate, which is Proof of Work. All these methods make cheating expensive and act as a deterrent, since the consequence of not following the rules is re-doing the proof of work. Proof of Work also has a benefit: by providing the string that solves the problem, participants prove that they have done the work and not cheated. There is a system of rewards that pays for participation in doing the work to maintain the validity of the “chain of blocks”. The scheme therefore balances an incentive for computing resources with deterrents against entropy. It is important to note that elaborate schemes only arise in order to regulate a decentralized system. Centralized systems are based on a measure of trust.

The Role of Trust

Remember, validation literally means trusting the declaration that the transaction (an instance of an interaction) has occurred. Trust is what you accept. It is not something that is done to something by something or in some way. A method of producing that declaration which is mathematical and transparent, free of human biases, may lead humans to reasonably “put trust” in it.

Nakamoto’s expressed goal in creating this distributed block-chain validation method was to create a trusted, publicly visible, record keeping service, that was independent of any third party validation requirements. Effectively a trusted network, without having to arbitrarily trust a third party to verify the transactions. He did this by creating a transparent method. Validation is an act of trusting a mark. You have to trust something. The difference between the Bitcoin distributed consensus system and PayPal is that PayPal’s trust mechanism is not exposed. The method of regulation is the law. In Bitcoin you are trusting a transparent method that is self-regulating. One is private, one is public. Which one you trust is still a choice.

What Did We Learn

We learned a way to break down the block chain that is easy to make sense of:

Chain of Blocks — an ordered sequence of blocks of transactions, a data structure.

A Ledger — a place to store those blocks, a data store.

Making Entries — rules about making entries or committing entries to the data store. Proof of work as a way to manage who gets to commit entries to the ledger.

Validation — verifying an event has occurred and making a mark you can accept. Nakamoto’s brilliant method for using hashing to mark transactions and seal blocks.

Consensus Method of Validation — how to get an autonomous group of machines to reach a resolution by making rules about the laws of inclusion, exclusion and origination. Accepting that system constitutes the consensus.

Participation Scheme — How to incentivize and deter participants in a distributed configuration. This is only necessary in a distributed configuration.

Conclusion

This is what “block chain” is. We have not addressed what it means. The objective of our exploration here is only to get a more useful understanding of what it is. You can now take each component and hypothesize what it means. For example: think of a block more like a linear container space. The block chain is like a database, except part of the information stored is public, in that anyone can examine it, and part is private, because only the participants (private-key) can unlock what’s inside the container. You can think of consensus as an autonomous decision making process, where decisions (the addition of a new block to the chain) are mediated through incentives (mining rewards), and consensus is reached (accepting new blocks) and regulated by costs. The speculation as to what this all means, we leave to your imagination. Have fun. — Dhryl Anton

Michael McFall

Now that we have a way to easily break down what’s going on in the BlockChain Revolution, what was once hard to understand becomes much clearer. The key is to separate the components into ontologically distinct entities, then define them by what they do. This removes the fog of confusion brought by trying to interpret what they mean. The core components of the system are the “chain of blocks”, the “method of record keeping”, and the “validation method”. The ancillary component is the scheme for computing power. You can then distinguish a system by how it uses this scheme. In our examples we tried to cover the spectrum. On one end, Ethereum is a public chain of blocks that uses Nakamoto’s public validation consensus mechanism. In the middle, Codius is an application that rides on top of whatever chain of blocks and validation method the user wants. And on the other end, Ripple Labs offers a private validation method, with a private but verifiable chain of blocks. CloudMoDe is in a class by itself: block chains are only a small part of a larger system that reinvents data storage, content delivery and use control.

Using The Tool

We now have a tool we can use to deconstruct, identify and classify the current players in the BlockChain 2.0 market. Specifically, we can look for how the “chain of blocks” is implemented and used, classify the “validation method”, and identify what scheme is used to capitalize the computational requirements of that method.

Ethereum

  • chain of blocks: public
  • validation method: public, consensus, anonymous miners
  • mining scheme: percentage of transaction paid to miners
  • uses: user authentication, payments, cloud storage, instant messaging, reputation, trust
  • operational date: sliding, out to spring 2015 as of this writing

Codius — Smart Oracles

  • chain of blocks: none, uses other chains
  • validation method: public or private, depends on the chosen chain
  • mining scheme: dependent on validation method
  • uses: contract system that uses selectable chains and validation methods to transfer ownership of digital assets
  • operational date: first hosts are running in March 2015, currently two

Ripple Labs — Currency

  • chain of blocks: independent from public chains
  • validation method: private, only authorized participants, identified in transaction
  • mining scheme: percentage of transaction paid to private miners
  • uses: high speed transaction clearing for banks, currency exchange
  • operational date: custom currency (XRP) trading since fall 2012

CloudMoDe — Authentication

  • chain of blocks: private, independent from public chains
  • validation method: proprietary, only authorized participants
  • mining scheme: none
  • uses: high speed transaction clearing for authentication, cloud storage, instant messaging, sharing
  • operational date: since fall 2011
