Synchronized Systems Technology: A New Approach to Distributed Ledger Technology

21-Nov-2021

Why Is A New Approach Needed?

Distributed Ledger Technology ("DLT") grew out of blockchain and the smart contract concept popularized by Ethereum, and it has suffered from excessive attachment to those blockchain roots. In short, the features and capabilities of basic blockchains are different from those required by what have evolved into distributed ledger systems. As the space has matured over the past ten years, we have seen consistent movement away from classic blockchain capabilities toward a permissioned, entitled platform for multi-node consistent and confirmed data plus complex logic execution -- a synchronized system. The term "blockchain" has a halo effect, and it is no surprise that DLT vendors manage to attach that word to their marketing materials to suggest something "new and exciting," but the truth is that most use cases for multi-participant processing flows have almost nothing to do with traditional blockchain technology. The term "distributed ledger" itself is an example of the evolving technology trying to capture a bit of that excitement with the word "ledger."

The Foundation Of Synchronized Systems Technology ("SST")

SST is based on these principles:

  1. Reliable and consistent replicas of data
    The emphasis is on ensuring consistent replicas of data (including smart contracts) on multiple nodes, not on mining blocks or transferring value. Many of the techniques used to ensure consistency and provenance are very similar, however, and can collectively be called cryptowrapping. Cryptowrapping means applying hashes such as SHA-2 (to prove consistency/immutability), hashes of a series of hashes (to prove version-by-version integrity), and digital signatures (to prove ownership/acknowledgement of hashes).
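
    A minimal sketch of cryptowrapping, assuming nothing about SST internals: each new version of a state is hashed with SHA-256, chained to the hash of the previous version, and signed. The Ed25519 signing uses the third-party cryptography package purely for illustration; the field names are hypothetical.

        import hashlib, json
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        def wrap(state: dict, prev_chain_hash: str, signer: Ed25519PrivateKey) -> dict:
            # Hash the canonicalized state to prove consistency/immutability.
            state_hash = hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
            # Chain this hash to the previous version's chain hash to prove
            # version-by-version integrity.
            chain_hash = hashlib.sha256((prev_chain_hash + state_hash).encode()).hexdigest()
            # Sign the chain hash to prove ownership/acknowledgement.
            signature = signer.sign(chain_hash.encode()).hex()
            return {"stateHash": state_hash, "chainHash": chain_hash, "signature": signature}

        signer = Ed25519PrivateKey.generate()
        v1 = wrap({"cusip": "912828YK0", "qty": 100}, prev_chain_hash="", signer=signer)
        v2 = wrap({"cusip": "912828YK0", "qty": 250}, prev_chain_hash=v1["chainHash"], signer=signer)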

  2. State data is arbitrarily complex and sized
    Current blockchain environments are not designed to handle even modest-sized "records" of data, and many incur substantial costs for carrying the data on the network: at the time of writing, 1K of storage in one smart contract costs roughly $138 on Ethereum. Richer shape construction is also difficult. This is a legacy of the original design principle of blockchains, which was to "move" essentially a single piece of data called "amount."

  3. State data is directly and performantly queryable
    All state data is not only cryptowrapped; it is also directly queryable without helper tables or other technology that is not itself cryptowrapped and therefore presents synchronization and data quality/tampering issues. Traditional blockchains and even many newer smart contract environments have no capability to do this, so performant querying, reporting, and analytics must always be performed "off-chain" on some form of mirror. The ability to do this on the platform is a key feature of SST.
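
    Purely for illustration, assuming a document-oriented persistence backplane such as MongoDB holding cryptowrapped states, a report can run directly against the live, cryptowrapped state rather than an off-chain mirror. The collection and field names here are hypothetical.

        from pymongo import MongoClient

        # Hypothetical collection of cryptowrapped states; names are illustrative only.
        states = MongoClient("mongodb://localhost:27017")["sst"]["states"]

        # Query rich state shapes directly -- no helper tables, no off-chain mirror.
        big_positions = states.find(
            {"state.positions": {"$elemMatch": {"cusip": "912828YK0", "qty": {"$gt": 1000}}}},
            {"state.counterparty": 1, "wrap.chainHash": 1},
        )
        for doc in big_positions:
            print(doc)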

  4. No special domain-specific languages (DSLs)
    SST needs to use Groovy, Python, Java, and other widely known languages. The problem space demands simplicity and wide availability of talent to manage the artifacts, especially since multiple organizations are typically involved in a business process. For example, one organization standardizing on an exotic "MyLang" DSL does not help another organization. Lowest-common-denominator rules apply.

  5. Smart contracts contain logic and are separate from state
    Two or more instances of a process flow can exist as separate states while being managed by the same contract. Think of state as the private data in an object and the contract logic as the methods on that object.
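
    A minimal sketch of the separation, with hypothetical names: one contract (logic) instance manages many independent state instances, much like one class definition serving many objects.

        class LoanContract:
            """Versioned logic only; it holds no process state of its own."""
            version = 2

            def accrue_interest(self, state: dict, rate: float) -> dict:
                # Return a new state version rather than mutating in place so the
                # previous version remains available for cryptowrapping/history.
                new_state = dict(state, balance=round(state["balance"] * (1 + rate), 2))
                new_state["version"] = state["version"] + 1
                return new_state

        contract = LoanContract()
        loan_a = {"id": "LOAN-A", "balance": 1000.0, "version": 1}
        loan_b = {"id": "LOAN-B", "balance": 5000.0, "version": 1}
        loan_a = contract.accrue_interest(loan_a, 0.01)   # two flows, one contract
        loan_b = contract.accrue_interest(loan_b, 0.02)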

  6. Smart contracts are versioned just like state data
    This is a significant departure from Ethereum, where complicated means are required to "move" data and state from one contract to an updated version. Smart contracts are not required to change, but if they do, the logic to supersede previous versions is very straightforward in SST.
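
    A sketch of the idea with hypothetical structures: contract versions are recorded much like state versions, and the newest published version simply handles subsequent events while earlier versions remain on record.

        class ContractRegistry:
            """Illustrative only: published versions of a contract, keyed by name."""
            def __init__(self):
                self.versions = {}          # name -> list of (version, contract)

            def publish(self, name: str, version: int, contract) -> None:
                self.versions.setdefault(name, []).append((version, contract))

            def current(self, name: str):
                # The highest published version supersedes earlier ones for new
                # events; earlier versions remain recorded for provenance.
                return max(self.versions[name], key=lambda pair: pair[0])[1]

        registry = ContractRegistry()
        registry.publish("RFQContract", 1, object())
        registry.publish("RFQContract", 2, object())
        active = registry.current("RFQContract")   # version 2 handles new events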

  7. State change is actuated through events
    Anything that needs to change state does so through an event management system, not through simple function calls on the smart contract. A well-designed event model enables state changes to be captured, ordered, and audited independently of any single caller.
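
    A sketch of the idea, with hypothetical event fields: state is never changed by calling a contract method directly; instead an event is submitted, recorded, and routed to the contract, which decides how state changes.

        import datetime

        class ShipmentContract:
            """Illustrative contract: applies business events to shipment state."""
            def apply(self, state: dict, event: dict) -> dict:
                if event["type"] == "DELIVERY_CONFIRMED":
                    return dict(state, status="DELIVERED", deliveredAt=event["ts"])
                return state

        def submit_event(event_log: list, contract, state: dict, event: dict) -> dict:
            # The event is recorded first (and in SST would be cryptowrapped), then
            # routed to the contract, which decides how the state changes.
            event = dict(event, ts=datetime.datetime.now(datetime.timezone.utc).isoformat())
            event_log.append(event)
            return contract.apply(state, event)

        log = []
        state = {"shipmentId": "SHP-9", "status": "IN_TRANSIT"}
        state = submit_event(log, ShipmentContract(), state, {"type": "DELIVERY_CONFIRMED"})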

  8. All actors are known
    For practical business purposes, it is much easier to have a permissioned, authenticated model for actor interaction.

  9. Event notification is a core platform capability
    Kafka, Solace, RabbitMQ, and other providers are pluggable into the platform. Notification is based on and granular to contract/business events, not mining or other platform activities. This is once again a marked difference from Ethereum, where event notifications occur when a block is mined: you must then dig through the block to find a particular transaction and then additionally determine what business-level event precipitated it.
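
    A sketch of what granular, business-level notification can look like, assuming a Kafka plug-in and using the kafka-python client; the topic name and event shape are hypothetical.

        import json
        from kafka import KafkaConsumer

        # Subscribe to contract/business events directly -- there is no block to
        # mine and then dig through for the transaction of interest.
        consumer = KafkaConsumer(
            "sst.contract.events",                      # hypothetical topic
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        )
        for msg in consumer:
            evt = msg.value
            if evt.get("type") == "QUOTE_SUBMITTED" and evt.get("contractId") == "RFQ-17":
                print("new quote:", evt["price"])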

  10. Zero compile time dependencies on data shapes
    It is vitally important to keep the system fully compile-time independent and generic so that both the core and most subsystems remain small and infrequently changed. The addition of a field to state cannot require systematic recompilation of all states, contracts, and/or -- in the worst-case scenario -- framework code. Only at the "edges" of the architecture should data shapes be addressed using bespoke code, and then at the edge maintainer's peril.
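
    One way to picture this, as an illustrative Python sketch: the core treats state as a generic map, so adding a field is invisible to the framework and only edge code that explicitly names the field needs to care.

        import hashlib, json

        def core_hash(state: dict) -> str:
            # The core never names individual fields; any shape hashes the same way,
            # so adding "couponDate" below requires no change here.
            return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

        old_shape = {"cusip": "912828YK0", "qty": 100}
        new_shape = {"cusip": "912828YK0", "qty": 100, "couponDate": "2022-05-15"}
        print(core_hash(old_shape), core_hash(new_shape))

        # Only an "edge" consumer that explicitly wants the new field references it:
        print(new_shape.get("couponDate", "n/a"))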

  11. Very low runtime dependency graph
    An unfortunate consequence of the adoption of open source software is that platforms often require literally hundreds of libraries. This is manageable solely within the context of the out-of-the-box platform itself, but as integration with user apps and services progresses, an increasing number of version clashes will occur and require non-revenue time to resolve. Particularly in the Java space, libraries like com.fasterxml.jackson, netty, org.apache.commons, and rxjava used in DLT frameworks are often not at the same revision as those demanded by applications linking with the platform SDK. In addition, in the enterprise space, all libraries must be subject to security scans, end-of-service monitoring, and other governance/provenance policies. It is essential that SST presents the smallest possible complexity profile so that all efforts can be focused on the multi-participant system design itself.

    Along these lines, it is important that smart contracts also drive to the smallest possible dependency footprint, because the greater the number of dependencies, the greater the likelihood that different organizations sharing the same synchronized code will have version, security, or other conflicts.

  12. No gas
    A synchronized system used by a set of participants involves costs they are willing to incur because the system adds value to the participants. The whole notion of creating an incentive / compensation model for anonymous parties to mine blocks and charge for new data storage is simply unnecessary for SST.

  13. Highly integratable with off-SST resources
    SST presents a core data and data access layer (DAL) platform that easily integrates with other data and technologies. The node synchronization, cryptowrapping, and other mechanics sit on top of this -- and thus are not part of it. In fact, the SST core data platform is entirely usable without the synchronization, permissions, and other pieces. This opens the opportunity for "co-location" of off-SST data on the same persistence backplane, completely secured with entitlements, creating an efficient hybrid system.

  14. Alternate synchronization protocols possible
    Related to the above, SST has a default synchronization model for multiple participants that involves the last state event seen, the last event seen from the initiating node, and a remote-node comparison of the incoming new event against remote state. It is possible to design other protocols that are simpler and faster, or slower and less race-condition-sensitive. In fact, multiple synchronization protocols can be running at the same time for a particular contract/state design.
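
    A highly simplified sketch of the kind of acceptance check the default model implies; the field names and rules here are assumptions for illustration, not the actual SST protocol.

        def accept_incoming_event(incoming: dict, local: dict) -> bool:
            """Decide whether a remote node should apply an incoming event.

            Both dicts carry illustrative fields:
              lastStateEventId     -- last event seen for this state
              lastInitiatorEventId -- last event seen from the initiating node
            """
            # Reject if the sender was working from an older view of the state.
            if incoming["lastStateEventId"] != local["lastStateEventId"]:
                return False
            # Reject if we have already seen a newer event from the initiating node.
            if incoming["lastInitiatorEventId"] < local["lastInitiatorEventId"]:
                return False
            return True

        ok = accept_incoming_event(
            {"lastStateEventId": "E41", "lastInitiatorEventId": 7},
            {"lastStateEventId": "E41", "lastInitiatorEventId": 7},
        )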

Interesting Use Cases For Synchronized Systems Technology

  1. Third party / regulatory read-only observation of complex data
    In this use case, a participant like a regulatory body enjoys complete, consistent transparency as well as independent analytics on complex data including positions, risk, and other information. The effect is like a shared database, but by virtue of node data synchronization there is no central point of ownership / failure / compromise, and no participant can technically deny another participant access to the data. Regulators no longer have to "request" reports and verification of data; they are watching the exact same business flow and critical data that is driving the actions of the participants they regulate.

  2. Active management and analysis of long-lived processes
    Original blockchain design principles are simply not aligned with tracking the state change of a thing over weeks, months, or years. Instead, blockchains are focused on single value-transfer events, not process flows. As a result, most smart contract environments still have a legacy root in the "immutability" of a contract because of the simplicity of a single value-transfer event. The reality of business processes is that they may be amended over time, with a need to track those changes.

  3. Performant vending of data to many, many readers
    Most blockchains did not have performance as a priority. SST is designed around data and being able to quickly move large amounts of it off the platform, scalable to thousands or more connections on a single node.

  4. Simplified, performant business event monitoring
    Most technology footprints will have a mix of SST and non-SST implementations. Because SST contracts are changed only through the submission of events, the semantics for state change are directly and easily coupled to the physical artifacts being published. State change through method calls at first appears easier (y=f(x)), but too many factors come into play that are not easily addressed in such a synchronous-biased invocation, including ordering, retries, and visibility of the change to the other participants.

  5. Capturing "time-series" of data in contract state
    SST smart contracts (and the underlying state they manage) can use arrays just as easily as scalar values. For example, a request-for-quote solution might append a tuple containing submitter ID, price, and timestamp to an array. A single state can thus represent the history of the RFQ process, eliminating the need to synthesize the history by pulling all the prior states.
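
    A sketch of the RFQ example, with hypothetical field names: each quote is appended to an array inside the single state, so the state itself carries the history.

        import datetime

        def record_quote(state: dict, submitter: str, price: float) -> dict:
            quote = {
                "submitter": submitter,
                "price": price,
                "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            }
            # Append to the array in the state; no need to stitch together prior
            # state versions to reconstruct the quote history.
            return dict(state, quotes=state["quotes"] + [quote])

        state = {"rfqId": "RFQ-17", "quotes": []}
        state = record_quote(state, "DEALER-A", 99.50)
        state = record_quote(state, "DEALER-B", 99.45)
        print(len(state["quotes"]))   # 2 -- the full history lives in one state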

  6. Simplified document management for modest sized documents
    Current DLTs and essentially all blockchains simply cannot handle storing actual documents (docx, xlsx, PDF, etc.) or other digital content such as JPEGs or PNGs; the content is simply too large. The current solution is to take a hash of the material and store that in the smart contract along with a URI to the actual material, which is typically kept on BLOB storage. While the process is functional and scales to even huge pieces of content, it means that the security scope / controls have to be extended outside the platform. SST can handle states up to a few megabytes if a one-platform design is desired.
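
    A sketch of the two patterns described above, with an illustrative size threshold and hypothetical field names: modest documents are carried in the cryptowrapped state itself, while larger content falls back to the hash-plus-URI approach.

        import hashlib
        from pathlib import Path

        MAX_IN_STATE = 4 * 1024 * 1024   # illustrative "few megabytes" threshold

        def attach_document(state: dict, path: str, blob_uri=None) -> dict:
            content = Path(path).read_bytes()
            digest = hashlib.sha256(content).hexdigest()
            if len(content) <= MAX_IN_STATE:
                # Modest documents can live in the cryptowrapped state itself.
                doc = {"name": Path(path).name, "sha256": digest, "bytes": content.hex()}
            else:
                # Larger content: keep only the hash plus a pointer to external BLOB
                # storage, accepting that security controls now extend off-platform.
                doc = {"name": Path(path).name, "sha256": digest, "uri": blob_uri}
            return dict(state, documents=state.get("documents", []) + [doc])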

SST Is Achievable Today

In assessing the capabilities desired in a synchronized system, it should be clear that a fresh implementation is a better path than taking an existing first-generation blockchain and twisting it and/or layering on top of it. Ten years have taught us that these are different kinds of systems for different purposes.


