Upgrading

Introduction

This document serves as a guide for implementing an upgrade process in the Hedera Guardian application. It provides detailed step-by-step instructions for upgrading an open-source Hedera Guardian application from the current version to the target version, including expanded information and additional guidance for each section of the upgrade process. Please follow the instructions outlined below:

Actors and Participants

The actors involved in the Guardian upgrading process are:

  • Guardian Development Team

    • Solution development.

    • Documentation provisioning.

  • Guardian Administrator (customer side)

    • Backup execution.

    • Script execution.

    • Configuration customization.

Theory

Requirements

Depending on how large the upgrades are, keeping versions correct can require significant work. Proper tools, documentation, and methodologies should be created to respond to upgrade needs (How will our customers upgrade their solution? What solutions need to be put in place? Etc.)

Related requirements:

  1. Find a qualified source to create an enterprise-grade version of Guardian;

  2. Consolidate, package, and normalize the solution architecture to match development best practices, supporting existing Hedera environments (currently defined as a local node, testnet, previewnet, or mainnet) deployed on-premises and on clouds;

  3. Cloud Infrastructure: All Guardian source code and secrets should be deployed via Infrastructure as Code in the cloud. In particular, the repo should contain all the artifacts and documentation for deploying the Guardian on Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

Data Upgrading Process

Upgrading Guardian functionality may require applying changes to the database schemas. In this case, the upgrade process is split between the Developer and the Customer.

In the data upgrading process, the developer team provides the upgrade solution while the Customer executes it. The main problem when upgrading a runtime operational database is the migration of all data from the previous version of the schema to the new one.

The migration process guides the team to produce artifacts that help to correctly define the migration itself, and helps the customer decide on upgrading and executing the data migration.

The migration we account for here is a homogeneous migration: a migration from source databases to target databases where the source and target databases are of the same database management system. When upgrading the system, the schemas for the source and target databases are almost identical, except for changes in some of the fields, collections, and documents. For the data that changes, the source databases must be transformed during migration.
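As an illustration, a field-level transformation in such a homogeneous MongoDB migration is typically captured in a versioned migration script. The sketch below assumes the migrate-mongo npm tool (referenced in the tools comparison later in this document); the collection and field names are hypothetical.

```ts
// Hypothetical migrate-mongo migration: renames a field and casts another
// from string to number in a single versioned, reversible step.
import { Db } from 'mongodb';

export async function up(db: Db): Promise<void> {
  // Data structure change: rename `tokenId` to `tokenIdentifier`.
  await db.collection('vcDocuments')
    .updateMany({}, { $rename: { tokenId: 'tokenIdentifier' } });

  // Data type change: cast `amount` from string to number.
  const cursor = db.collection('vcDocuments').find({ amount: { $type: 'string' } });
  for await (const doc of cursor) {
    await db.collection('vcDocuments')
      .updateOne({ _id: doc._id }, { $set: { amount: Number(doc.amount) } });
  }
}

export async function down(db: Db): Promise<void> {
  // Rollback restores the field name; the type cast is not reversible without
  // the original values, which is why a backup must precede the migration.
  await db.collection('vcDocuments')
    .updateMany({}, { $rename: { tokenIdentifier: 'tokenId' } });
}
```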

1) Data Migration Profiling:

Without a good understanding of the data model, the organization could run into a critical flaw that halts the system and brings Guardian to a stop through data corruption and inconsistency. This phase has the "Data Migration Model" as its output. This document outlines all the data that needs to be migrated, the complete mapping between the Data Source and Data Destination, and every transformation in terms of:

  • Data type: to cast the source value into the target value based on type transformation rules.

  • Data structure: to describe modification of the structure of a collection in the database model.

  • Data value: to change the format of data without changing the data type.

  • Data enrichment and correlation (adding and merging to one collection).

  • Data reduction and filtering (splitting to several collections).

  • Data views: to allow the maintenance of DAO contracts during Data reduction.

Furthermore, the document should:

  • Map each piece of data to the user functionality (REST API) that involves that data.

  • Map each piece of data to the message data flows that realize the functionality.

  • Specify data replication in the Guardian data sources (only DB Data, Blockchain Data, Multi Service).

  • Break the data into subsets to determine all the data changes that have to be applied together.

The document has to specify the following data parameters:

  • the expected size of your data,

  • the number of data sources,

  • the number of target systems,

  • a migration-time evaluation: reading, writing, and network latency per data size, and the expected time for the expected data size.

2) Design phase: this phase has the "Design Document" as output.

The type of data migration could be either big bang or trickle:

  • In a big bang data migration, the full transfer is completed within a limited window of time. Live systems experience downtime while data goes through ETL (Extract, transform, load) processing and transitions to the new database.

  • Trickle migrations, in contrast, complete the migration process in phases. During implementation, the old system and the new are run in parallel, which eliminates downtime or operational interruptions. Processes running in real-time can keep data migrating continuously.

The document should contain:

  • The requirements and the timeline for the project, allocating time for every testing and validation phase.

  • The migration type, as described above.

  • A detailed description of the migration process, taking care of:

    • Target database addressing using the environment description.

    • Persistence of in-transit data: to resume at the point where special events happen (errors, lost connections, large processing windows of the data), the system needs to keep an internal state of the migration progress; this provides process repeatability. See the checkpointing sketch after this list.

    • How to track the items that are filtered out from the transformation/migration phases, so that the source and target databases can be compared along with the filtered items.

    • For every batch of data, the exact plan and rollback strategy.

    • The customer test to verify consistency: this check ensures that each data item is migrated only once, that the datasets in the source and target databases are identical, and that the migration is complete.

  • The roles and responsibilities of the data migration.

  • A validation phase, defining:

    • Who has the authority to determine whether the migration was successful?

    • After database migration, who will validate the data?

    • Which tool will help in data validation: this tool will be the main instrument for verifying data consistency, i.e. that each data item is migrated only once, that the datasets in the source and target databases are identical, and that the migration is complete.

  • Backup and disaster recovery strategies. Create a DB backup of Mongo: a replica set is a very good solution for availability, but to provide a real backup solution, define a dedicated backup Mongo copy.
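As a sketch of the "persistence of in-transit data" point above: the migration can keep a checkpoint document in the database so that an interrupted run resumes where it stopped. The `migrationState` collection and all other names below are hypothetical.

```ts
// Resumable, batched migration with a persisted checkpoint (hypothetical names).
import { Db, ObjectId } from 'mongodb';

interface MigrationCheckpoint {
  _id: string;                 // migration identifier, e.g. 'vcDocuments-v2'
  lastProcessedId?: ObjectId;  // resume point within the source collection
  migrated: number;            // items written to the target so far
  filteredOut: number;         // items excluded by transformation filters
}

export async function runBatch(db: Db, migrationId: string, batchSize = 500): Promise<boolean> {
  const checkpoints = db.collection<MigrationCheckpoint>('migrationState');
  const state = (await checkpoints.findOne({ _id: migrationId }))
    ?? { _id: migrationId, migrated: 0, filteredOut: 0 };

  const query = state.lastProcessedId ? { _id: { $gt: state.lastProcessedId } } : {};
  const batch = await db.collection('sourceCollection')
    .find(query).sort({ _id: 1 }).limit(batchSize).toArray();
  if (batch.length === 0) return false; // migration complete

  for (const doc of batch) {
    // ... transform `doc` and write it to the target collection here ...
    state.migrated += 1;
    state.lastProcessedId = doc._id;
  }
  // Persist progress so the process can resume after errors or lost connections.
  await checkpoints.replaceOne({ _id: migrationId }, state, { upsert: true });
  return true; // more batches may remain
}
```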

3) Build the Migration Solution

Break the data into subsets and build out the migration of one category at a time, followed by a test (TOOL; executed by the Developer).

4) Build the consistency validation Test

Build the customer check to compare the source and target databases along with the filtered items.
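A minimal sketch of such a customer check, assuming hypothetical collection names; it verifies at the dataset level that every source item was either migrated exactly once or explicitly filtered out:

```ts
// Dataset-level consistency check: source count must equal target count
// plus the number of items deliberately filtered out during migration.
import { Db } from 'mongodb';

export async function verifyConsistency(source: Db, target: Db, filteredOut: number): Promise<void> {
  const sourceCount = await source.collection('vcDocuments').countDocuments();
  const targetCount = await target.collection('vcDocuments').countDocuments();

  if (sourceCount !== targetCount + filteredOut) {
    throw new Error(
      `Inconsistent migration: ${sourceCount} source documents, ` +
      `${targetCount} migrated + ${filteredOut} filtered out`
    );
  }
}
```

A field-by-field comparison over a sample of documents can complement the counts to detect value-level transformation errors.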

5) Back up the data

Back up the data before executing. In case something goes wrong during the implementation, you can't afford to lose data. Make sure there are backup resources and that they've been tested before you proceed (MongoDB: replica set).

6) Conduct a Live Test

The testing process isn't over after testing the code during the build phase. It's important to test the data migration design with real data to ensure the accuracy of the implementation and the completeness of the application: the consistency test. (TOOL)

7) Execute the plan

Implement what is described in step 2. (TOOL)

Migrate data in batches. Migration can take a long time, so batching up the data will prevent any interruption in service. Once the first batch is successfully migrated and tested, you can move on to the next set and revalidate accordingly.

8) Test your migration process

During the first batch of data being migrated, try to analyze all the steps and see if the process is completed successfully or if it needs to be modified before moving on to the next batch.

9) Validation Test

You need to verify that your database migration is complete and consistent. Test the new data with real-life scenarios before moving it to production, in order to validate that all the work done aligns with the overall plan.

10) Audit

Once the implementation has gone live, set up a system to audit the data in order to ensure the accuracy of the migration. (Performance and monitoring)

Migration Consistency

The expectation is that a database migration is consistent. In the context of migration, consistent means the following:

  • Complete. All data that is specified to be migrated is actually migrated. The specified data could be all data in a source database or a subset of the data.

  • Duplicate free. Each piece of data is migrated once, and only once. No duplicate data is introduced into the target database.

  • Ordered. The data changes in the source database are applied to the target database in the same order as the changes occurred in the source database. This aspect is essential to ensure data consistency.

An alternative way to describe migration consistency is that after a migration completes, the data state between the source and the target databases is equivalent. For example, in a homogeneous migration that involves the direct mapping of a relational database, the same tables and rows must exist in the source and the target databases.

Tools Comparison

Self-scripted tools

These solutions are ideal for small-scale projects and quick fixes. They can also be used when a specific destination or source is unsupported by other tools. Self-scripted data migration tools can be developed fairly quickly but require extensive coding knowledge. Self-scripting solutions offer support for almost any destination or source, but are not scalable; they are suitable only for small projects. Most of the cloud-based and on-premise tools handle numerous data destinations and sources.

  • Scalability: small, single location.

  • Flexibility: any data.

  • Maintenance, error management, issues during execution.

Some reasons for building database migration functionality instead of using a database migration system include the following:

  • You need full control over every detail.

  • You want to reuse functionality.

  • You want to reduce costs or simplify your technological footprint.

On-Premise tools

On-Premise solutions come in handy for static data requirements with no plans to scale. They are data center level solutions that offer low latency and complete control over the stack from the application to the physical layers.

  • Data center migration level.

  • Limited scalability.

  • Secure: gives full process control.

Cloud-based tools

Cloud-based data migration tools are used when you need to scale up and down to meet dynamic data requirements (mainly in ETL solutions). These tools follow pay-as-you-go pricing that eliminates unnecessary spending on unused resources.

  • Based on the cloud.

  • High scalability.

  • Has security concerns.

Data migration software parameters

Setup: ease of setup in your environment.

Monitoring & Management: provides features to monitor the ETL process effectively and enables users to produce reports on various crucial data sets.

Ease of Use: the learning curve.

Robust Data Transformation: data transformation features after the data is loaded into the database. You can just use SQL.

| Tool | Setup | Monitoring & Management | Ease of Use | Robust Data Transformation | Pricing / Open Source |
| --- | --- | --- | --- | --- | --- |
| Custom functionality | Npm/Coding | no | Yes, integrated in the solution | Tested Npm tool: migrate-mongo | free |
| AWS Data Pipeline | yes | yes | yes | yes | $0.60 to $2.5 per activity |
| Hevo Data | yes | yes | yes (Autoschema mapping) | yes | FREE (1 million events) |
| Talend Open Studio | yes | no | By GUI | yes | Open Source / Free |
| MongoSyphon | JSON format configuration files | No | no GUI, SQL, scheduling via cron | early stage tool, SQL | Open Source / Free |
| Meltano | yes | Airflow | yes | yes | Open Source / Free |
| Singer | Python | No | No | taps and targets (Meltano provided) | Open Source / Free |
| AirByte | yes | No | yes | SQL, dbt | Free |

Several other tools, both open source and commercial, are available at various price points.

Services Upgradability: Service Profiling and data migration mapping

To describe services we introduce the "Service Canvas". A microservice canvas is a concise description of a service, similar to a CRC (Class-Responsibility-Collaboration) card that is sometimes used in object-oriented design. It is a template which allows a concise description of the service itself, for the clarity of both developers and stakeholders. It will be compiled by developers and architects, and will be used as input during the delivery of the data migration process.

It has the following sections: Service Name, Managed Data, Dependencies, and Service API.

The canvas will be used to describe the development realized in each specific release, so that it can be introduced incrementally. The Upgrade Canvas is not built as a complete Service Canvas: it must describe only the upgrade of the service/functionalities. In this way it will directly contain exactly the items actually implemented in the release. A complete description of the service could also be provided in a full Service Canvas, but that is out of the scope of the upgrade, much more difficult to produce, and more design-oriented than this document.

Main Parameters

  • Name: Name of Service

  • Description:

  • Type of Development: < Creation, Update, Deletion >

  • Version: < Major, Minor, Patch >

  • Capabilities:

    • Main Service Functionality

Managed Data

  • Collection Names:

  • Type of Development: < Creation, Update, Deletion >

  • Data Model Reference:

    • If Creation: JSON Document Reference Link

    • If Update: Data Mapping Document Reference Link

Dependencies

  • Invokes:

    • <Service1Name>: Service1FunctionName()

    • <Service2Name>: Service2FunctionName(), ...

  • Invoked by:

    • <Service2Name>: Service3FunctionName()

    • <Service3Name>: Service2FunctionName(), ...

  • Subscribes to:

    • <Service3Name>: <eventName1> event, <eventName2> event

    • Saga reply channels: <SagaName1> Saga, <SagaName2> Saga, ...

  • Subscribed by:

    • <Service3Name>: <eventName1> event, <eventName2> event

Service API

  • Commands:

    • Created:

      • Synchronous: FunctionName1(), FunctionName2(), ...

      • Asynchronous: FunctionName3(), ...

    • Updated:

      • Synchronous: ...

      • Asynchronous: ...

    • Deleted:

      • Synchronous: ...

      • Asynchronous: ...

  • Queries:

    • getFunctions()

  • Events:

    • Created

    • Authorized

    • Revised

    • Canceled

    • ...
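To keep the canvas machine-readable alongside the documentation, its sections could also be captured as a type. The shape below is an assumption derived from the template above, not an existing Guardian artifact.

```ts
// Possible TypeScript shape for the Upgrade Canvas template above.
type DevelopmentType = 'Creation' | 'Update' | 'Deletion';

interface UpgradeCanvas {
  name: string;
  description: string;
  typeOfDevelopment: DevelopmentType;
  version: 'Major' | 'Minor' | 'Patch';
  capabilities: string[];
  managedData: Array<{
    collectionName: string;
    typeOfDevelopment: DevelopmentType;
    dataModelReference: string; // JSON document link (Creation) or mapping document link (Update)
  }>;
  dependencies: {
    invokes: Record<string, string[]>;      // service name -> functions it calls
    invokedBy: Record<string, string[]>;    // service name -> functions called on this service
    subscribesTo: Record<string, string[]>; // service name -> event names
    subscribedBy: Record<string, string[]>;
  };
  serviceApi: {
    commands: { synchronous: string[]; asynchronous: string[] };
    queries: string[];
    events: string[];
  };
}
```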

Service versioning and compatibility

To describe the compatibility between services in more detail it is possible to provide a square compatibility matrix.

To build this matrix it is possible to start from a dependency matrix detailing all the services that depend on one another, in terms of service producers and service consumers. This won't be a complete correlation matrix: its rows contain just the upgraded and new services, while its columns show all services in the application.

|  | service1 | service2 | service3 | service4 | service5 | service6 |
| --- | --- | --- | --- | --- | --- | --- |
| Service 1 |  | x | x |  |  | x |
| Service 3 | x |  |  | x |  |  |
| Service 4 |  |  | x |  |  |  |
| Service 6 | x |  |  | x | x |  |

Starting from this table it will be easier to infer the dependency between different versions of one service with the dependent ones versions.

For example:

  • Service1 2.1.3 is compatible with Service2 starting from version 1 up to version 2.

  • Service1 2.1.3 is compatible only with version 3.2.x of service3, and just bug fixes of that version.

  • Service1 2.1.3 is backward compatible with all versions of service6 up to 4.x.x.

  • Service3 3.2.3 ...

|  | service1 | service2 | service3 | service4 | service5 | service6 |
| --- | --- | --- | --- | --- | --- | --- |
| Service 1 2.1.3 |  | 1.x.x - 2.x.x | 3.2.x |  |  | 4.x.x |
| Service 3 3.2.3 | ... |  |  | ... | ... | ... |
| Service 4 |  |  | ... |  |  |  |
| Service 5 |  |  |  |  | ... |  |
| Service 6 | ... |  |  | ... | ... |  |

This solution makes it possible to provide the upgrade delta for each release as an online reference.
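As an illustration of how these constraints could be checked programmatically, the matrix rows can be encoded as semver ranges and evaluated with the semver npm package. The ranges below mirror the example rows above and are otherwise hypothetical.

```ts
// Compatibility matrix as data: consumer version -> producer -> allowed range.
import semver from 'semver';

const compatibility: Record<string, Record<string, string>> = {
  'service1@2.1.3': {
    service2: '>=1.0.0 <3.0.0', // from version 1 up to version 2
    service3: '3.2.x',          // only 3.2.x and its bug fixes
    service6: '<5.0.0',         // all versions up to 4.x.x
  },
};

export function isCompatible(consumer: string, producer: string, producerVersion: string): boolean {
  const range = compatibility[consumer]?.[producer];
  return range !== undefined && semver.satisfies(producerVersion, range);
}

// isCompatible('service1@2.1.3', 'service3', '3.2.7') -> true
// isCompatible('service1@2.1.3', 'service3', '3.3.0') -> false
```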

Here are two tools to implement the complete matrix analysis for microservices:

Data Model Reference

In case of newly introduced data, the data model section of the canvas will be the JSON document file that describes the collection itself.

In case of a data update, the reference Data Model will be the link to the Data mapping document.

The Data mapping document describes the model for the data migration. The document should outline all the data that needs to be migrated, the complete mapping between the Data Source and Data Destination and every transformation in terms of:

  • Data type: to cast the source value into the target value based on type transformation rules.

  • Data structure: to describe the structure modification of a collection in the database model.

  • Data value: to change the format of data without changing the data type.

  • Data enrichment and correlation (adding and merging to one collection).

  • Data reduction and filtering (splitting to several collections).

  • Data views: to allow the maintenance of DAO contracts during Data reduction.

The canvas itself provides the framework in which the data belongs. Furthermore, the document should:

  • Map each piece of data to the user functionality (REST API) that involves that data.

  • Map each piece of data to the message data flows that realize the functionality.

  • Specify data replication in the Guardian data sources (only DB Data, Blockchain Data, Multi Service).

  • Break the data into subsets to determine all the data changes that have to be applied together.

Here is what the mapping will look like:

| Mapping Indicator | Change Description | Key Indicator | Source Collection | Source Field Name | Source Field Length | Source Data Type | Business Rule | Target Collection | Target Field Name | Target Data Type | Target Field Length | Description & Comments |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | Split | na | Collection 1 | Field1 | 50 | string | Direct Mapping | Collection2 | Field1 | string | 50 |  |
| A | Split | na | Collection 1 | Field2 | 50 | string | Direct Mapping | Collection3 | Field1 | string | 50 |  |
| C | Split | na | Collection 1 | Field3 | 50 | string | if "Sales" then "S"; if "Transport" then "T" | Collection3 | Field2 | string | 1 |  |

The following information is contained in the table (a machine-readable sketch of a mapping row follows this list):

1) Mapping Indicator (values A: Add, D: Delete, C: Change)

2) Change Description (indicates the mapping changes introduced)

3) Key Indicator (indicates whether the field is a primary key or not)

4) Source Table/Collection Name

5) Source Field Name

6) Source Field Length

7) Source Field Data Type

8) Source Field Description (the description will be used as metadata for the end user)

9) Business Rule to transform data if needed

10) Target Table/Collection Name

11) Target Field Name

12) Target Data Type

13) Target Field Length

14) Description and comments
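One row of the mapping document could be represented in code roughly as follows; this is a sketch mirroring the columns above, and the field names are assumptions.

```ts
// One row of the data mapping document, mirroring the columns above.
type MappingIndicator = 'A' | 'D' | 'C'; // Add, Delete, Change

interface FieldMappingRow {
  mappingIndicator: MappingIndicator;
  changeDescription: string;       // e.g. 'Split'
  keyIndicator: boolean;           // is the field a primary key?
  sourceCollection: string;
  sourceFieldName: string;
  sourceFieldLength: number;
  sourceDataType: string;
  sourceFieldDescription?: string; // used as metadata for the end user
  businessRule: string;            // e.g. 'Direct Mapping' or a conditional rule
  targetCollection: string;
  targetFieldName: string;
  targetDataType: string;
  targetFieldLength: number;
  comments?: string;
}
```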

Methodologies and best practices for microservices upgrading

1) Services should be organized around business domain boundaries:

Architects recommend the use of "separation of concerns": strong internal cohesion within each microservice and loose coupling between microservices, which should be grouped according to their problem domain.

Architects need a strong understanding of the relation between impacted use cases and backend data flows, so they can always map use-case modifications to backend microservice upgrades and know how data modifications impact inter-service messages between consumer and producer services and their APIs.

A service here has the sole authority over its data and exposes operations to other services.

2) Keep admin scripts together with the application codebase

Guardian migration consists of a small script that runs as the first step of every first-time installation, performing a one-time load. It is possible to write a small function to read and save data in batches into the database, running these scripts offline.
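A minimal sketch of such a one-time batch load, with hypothetical file, database, and collection names:

```ts
// One-time load: read seed data from a file and insert it in batches.
import { readFile } from 'fs/promises';
import { MongoClient } from 'mongodb';

export async function seedDatabase(uri: string): Promise<void> {
  const client = new MongoClient(uri);
  await client.connect();
  try {
    const raw = await readFile('artifacts/seed-data.json', 'utf8');
    const records: Record<string, unknown>[] = JSON.parse(raw);
    const collection = client.db('guardian_db').collection('settings');

    // Insert in batches of 1000 to avoid oversized bulk operations.
    for (let i = 0; i < records.length; i += 1000) {
      await collection.insertMany(records.slice(i, i + 1000));
    }
  } finally {
    await client.close();
  }
}
```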

Guardian deals with schema breaking changes:

  • Removing or renaming an element;

  • Changing any of its non-descriptive properties, e.g. type or readOnly status.

Deprecation Notice:

  • Issued via the deprecated meta-data annotation;

  • Release Notes;

  • VC revocation notice is issued into the corresponding Hedera Topic.

Guardian deals with policy breaking changes:

  • Removing or renaming a block, changing any of its non-descriptive properties.

  • Changing used schema version to a new one with breaking changes. (Changes Impact)

  • Changing workflow sequence, dependencies or bind block.

  • Introducing new, or changing existing external data sources.

Guardian deals with breaking changes in general:

  • Removing an API endpoint, HTTP method or enum value;

  • Renaming an API endpoint, HTTP method or enum value;

  • Changing the type of the field;

  • Changing behavior of an API request.

3) Every microservice should always explicitly declare all of its dependencies.

We should do this using a dependency declaration manifest. For NodeJS we have NPM.

A different possibility could be the use of dependency management tools:

ORTELIUS: Ortelius is an open source, supply chain evidence catalog for publishing, versioning and sharing microservices and other Components such as DB objects and file objects. Ortelius centralizes everything you need to know about a component-driven architecture including component level ownership, SBOMs, vulnerabilities, dependency relationships, key values, deployment metadata, consuming applications and versions.

ISTIO: a completely different approach identified during the preparation of the present methodology. It suggests the use of the Service Mesh pattern for microservices. This choice also represents a viable path, but requires rethinking the platform architecture. The documentation path proposed here will naturally facilitate the adoption of a similar pattern.

4) A microservices app should be tracked in a single code repository and must not share that repository with any other apps.

Versioning:

All microservices should make it clear what version of a different microservice they require and what version they are.

A good way of versioning is through semantic versioning, that is, keeping versions as a set of numbers that make it clear when a breaking change happens (for instance, one number can mean that the API has been modified).

Versioning Techniques

  • Header versioning: This microservice versioning approach passes version information through the HTTP protocol header "content-version" to specify a particular service.

  • URI versioning: In this approach, developers add version information directly to a service's URI, which provides a quick way to identify a specific version of the service by simply glancing at either the URL or URN. Here's an example of how that looks:

http://productservice/v1.1.2/v1/GetAllProducts
http://productservice/v2.0.0/GetProducts
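
As a minimal sketch of header versioning, a client would select a service version through the header; the host and endpoint below are hypothetical:

curl -H "content-version: 2.0.0" http://productservice/GetProducts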

5) Microservice apps are supposed to be disposable, starting and shutting down gracefully.

Application processes can be shut down on purpose or through an unexpected event. An application process should be completely disposable without any unwanted side-effects. Moreover, processes should start quickly.
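
A minimal sketch of such a disposable process, assuming a bash entrypoint wrapping a NodeJS service (the file and command names are hypothetical):

#!/bin/bash
# entrypoint.sh: let the service finish in-flight work before the container dies
cleanup() {
  kill -TERM "$SERVICE_PID"   # forward the termination signal to the service
  wait "$SERVICE_PID"         # wait for a clean exit
}
trap cleanup TERM INT
node index.js &               # start the service in the background
SERVICE_PID=$!
wait "$SERVICE_PID"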

An important part of managing dependencies has to do with what happens when a service is updated to fit new requirements or solve a design issue. Other microservices may depend on the semantics of the old version or, worse, depend on the way data is modeled in the database. As microservices are developed in isolation, a team usually cannot wait for another team to make the necessary changes to a dependent service before going live. The way to solve this is through versioning, as described above.

6) Microservice apps are expected to run in an execution environment as stateless processes.

In other words, they cannot store persistent state locally between requests.

Upgrading Guardian

Guardian is a microservices application organized with an API Gateway and the NATS message system. This architecture is natively cloud-oriented, so it can be improved by deploying it on the cloud.

There are several benefits in deploying microservices architectures on cloud thanks to the Application Managers:

  • The microservices are deployed independently and communicate by APIs. (We got it)

  • The overall infrastructure gains resiliency to node failures. (Application Manager)

  • The containerization can give the application greater portability. (We got it)

  • CI/CD strategies and automation are applicable to the microservices, making development cycles fast. (Could be implemented)

  • It allows automatic resource allocation following the user demand and scaling the infrastructure horizontally.

  • It allows the application to be upgraded while maintaining the availability of the overall system.

Our main target cloud infrastructures are: Azure, AWS, Google.

Among the target cloud infrastructures, Azure and AWS notably offer their own containerized application manager infrastructure. Google developed the Kubernetes platform, which became the de facto standard in the area. Moreover, it is an open-source platform, so it is possible to use it on-premise as well; Kubernetes has thus become one of the most important cloud-agnostic solutions. Both Azure and AWS provide their own container manager solutions:

  • Azure App Services, optimized for web services, enables deployment:

    • From source code (gaining a cloud dependency);

    • From docker image;

    • From the docker-compose.yml file (the docker containers are inside a single AppService, a single POD, rather than in multiple AppServices as one might expect);

  • Azure Container Apps (based on the Kubernetes platform and technologies like Dapr, KEDA, and Envoy); and

  • Amazon Elastic Container Registry (ECR).

At the same time, they offer services that grant direct access to Kubernetes: Azure has its Azure Kubernetes Service (AKS) while AWS has Amazon EKS (and obviously on EC2).

When it comes to physical upgrades, what we want is for customers to be able to upgrade Guardian in the cloud that they chose for their enterprise solution. New versions will need to be deployed without downtime to maintain overall application stability. Every service will rely on others being up and running, so you also need to maximize the availability of every service.

Three common deployment patterns are available for zero-downtime deployments:

• Rolling deploy — You progressively take old instances (version N) out of service while you bring up new instances (version N+1), ensuring that you maintain a minimum percentage of capacity during deployment (see the sketch after this list).

• Canaries — You add a single new instance into service to test the reliability of version N+1 before continuing with a full rollout (A/B testing). This pattern provides an added measure of safety beyond a normal rolling deployment.

• Blue-green deploys — You create a parallel group of services (the green set), running the new version of the code; you progressively shift requests away from the old version (the blue set). This can work better than canaries in scenarios where service consumers are highly sensitive to error rates and can't accept the risk of an unhealthy canary.
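
A minimal sketch of the rolling deploy mentioned above, assuming the services run on Kubernetes; the deployment name and image tag are hypothetical:

kubectl set image deployment/guardian-service guardian-service=myregistry/guardian-service:v2.14.0
kubectl rollout status deployment/guardian-service   # blocks until the new pods are ready
kubectl rollout undo deployment/guardian-service     # reverts to version N if N+1 misbehaves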

Implementation : Upgrade Guide for Hedera Application

The methodology that we follow to upgrade the system is Blue-Green deployment. This allows us to upgrade while minimizing the downtime and the risks involved in the upgrade itself. We create a new instance of Guardian running the new version, the green instance, and in this instance we run all the tests; after that, we switch all the traffic to it. The current environment, the blue one, runs at the same time and continues normal operation.

The upgrade process requires that the team or person running it has a minimum of 3 to 5 years of experience in the following technologies:

a. Backend development experience in NodeJS and npm packages.

b. MongoDB installation, using, and troubleshooting.

c. AWS or Azure experience of CLI and infrastructure.

d. Shell scripting, YAML, and Docker & Kubernetes.

The upgrade process should take between 40 and 80 hours, depending on the individual steps and any issues that arise during the process.

Tasks Checklist prior to the upgrade

Test on a copy of the production

In general, before initiating the upgrade process, it is highly recommended to create a copy of the production environment and perform testing on the replicated instance. By testing on a copy, you can identify and address any potential issues without impacting the live production environment.

  • If any issues are encountered during the testing phase, take the appropriate steps to address them and resolve them before proceeding to the next steps.

For the Guardian upgrade process, the Green Instance will be the copy on which all the tests are executed.

Review the release notes and documentation

Thoroughly review the release notes and documentation provided for the target version. These resources will help you understand the changes, new features, and any potential breaking changes in the upgraded version.

You can find the installation guide and release notes for the target version in the Hedera Guardian documentation and in the Guardian official repository.

Perform a Database and Environment backup operation

It is essential to create a complete backup of the existing Hedera Guardian application and its associated databases before proceeding with the upgrade. This ensures that the application data is safeguarded and can be restored if needed.

Refer to the Backup tools document for more details.

While backing up, consider that until release 2.13.0 the environment was described by .env.docker files in each of the following folders: ./guardian and ./<service-name>/, for the following services: api-gateway, auth-service, guardian-service, logger-service, policy-service and worker-service.

Starting with release 2.13.0, the environment is held by two different kinds of files, depending on the kind of installation:

  1. Complete Ecosystem: .env.<GUARDIAN_ENV>.guardian.system

At folder: ./guardian/configs

  2. Single Service: .env.<GUARDIAN_ENV>.<service-name>

At folder: ./guardian/<service-name>/configs/

Make sure to back up all these files. As an example, starting from the implementation provided in the Backup tools document:

  1. Configure /usr/local/bin to contain the whole guardian tree folders.

  2. Change line 6 of the script configs-backup.sh from:

zip -r -D /tmp/configs.zip /usr/local/bin/configs

To

zip -i "*env.*" -r /tmp/configs.zip /usr/local/bin/guardian

This will ensure that the complete ecosystem environment is backed up.

Perform Guardian Vault backup operation

Starting with release 2.12.1, Guardian can store secret data in a dedicated KMS. It can be a self-maintained Hashicorp Vault server or a third-party KMS provided by a cloud infrastructure. This storage is mainly used to store user wallets for all the users, as well as some important operational server-side data (Operator: system wallet, IPFS API key, Access Token account).

KMS-stored secret data needs to be backed up too.

As an example, a script to back up Hashicorp Vault secrets is provided here. Executing the script produces a snapshot of the consul server that contains the Vault storage, and copies the cryptographic material needed to access the vault after it is restored from the snapshot. The file can be added to the Guardian application to create the backup, which is stored in the file guardian/vault/hashicorp/backup/secret-backup.snap.

Create the file guardian/vault/hashicorp/scripts/consul/consul_backup.sh with the following content.

#!/bin/bash
# consul_backup.sh

BASE_DIR=$PWD/vault/hashicorp
BACKUP_DIR=$BASE_DIR/backup
VAULT_ROOT_TOKEN_PATH=$BASE_DIR/vault/.root
# Assumption: CERT_REPOSITORY_DIR points at the TLS material of your vault
# installation; export it before running, or adjust the default below.
CERT_REPOSITORY_DIR=${CERT_REPOSITORY_DIR:-$BASE_DIR/certs}
CONSUL_ADDR=http://localhost:8500

# Executes a consul read command using curl
# $1: URI path to be requested on the consul server
# $2: name of the output file in the backup dir
read_snapshot() {
  URL=$CONSUL_ADDR/$1
  OUTPUT=$BACKUP_DIR/$2
  curl $URL --output $OUTPUT
}

# Execute the complete snapshot for the consul server
execute_backup() {
  # create the backup dir vault/hashicorp/backup
  mkdir -p $BACKUP_DIR

  # back up the root access token file
  cp $VAULT_ROOT_TOKEN_PATH $BACKUP_DIR/.root

  # copy the TLS material
  cp -r $CERT_REPOSITORY_DIR $BACKUP_DIR

  # read the snapshot from the server and store it as secret-backup.snap
  read_snapshot v1/snapshot secret-backup.snap
}

echo "execute backup"
execute_backup

Refer to the Hedera Guardian GitHub repository for more details.

Ensure prerequisite accounts (Optional: Only for the first time installation)

Make sure you have a Hedera Testnet Account and a Web3.Storage Account readily available for the upgrade process. These accounts will be required during the upgrade process to facilitate compatibility and connectivity with the Hedera network.

Identify and document version-specific customizations

If the prior version of the Hedera Guardian application has been customized by your company to cater to specific requirements, thoroughly document all the customizations made. It is important to have a clear understanding of the changes to ensure a smooth transition to the upgraded version. Follow data upgrading process best practice for your custom data.

Identify performance behavior

Collect metrics from the currently running Guardian instance and analyze performance, logs, and metrics to identify the current instance behavior, using the monitoring tools available for Guardian since release 2.12.1.

Tasks Checklist during the upgrade

Clone the Guardian repository

Begin by cloning the Guardian repository using Git. Run the following command to clone the repository to your local environment:

git clone https://github.com/hashgraph/guardian.git

Follow the installation guide

Consult the installation guide provided in the Hedera Guardian documentation for the target version. This guide will provide detailed instructions on setting up and configuring the upgraded Guardian application in your environment.

Detailed installation steps can be found in the Guardian installation guide.

Update configuration files

Depending on the kind of installation that you are following (running as docker containers via an orchestrator such as docker compose, or running manually after building the executables), modify the relevant configuration files to include the necessary information for your account. This information is essential for establishing the connection with the Hedera network and IPFS while enabling seamless interaction with the blockchain.

When upgrading to release 2.13.0 or later, note that the configuration files differ from previous versions:

  • For execution by the orchestrator, first configure the .env file in the Guardian application folder: copy and paste .env.template and configure the variables there, mainly GUARDIAN_ENV. Then configure the right .env.<GUARDIAN_ENV>.guardian.system file in the folder ./guardian/configs, copying and pasting .env.template.guardian.system and following the examples provided in the folder itself (see the sketch after this list).

  • For manual execution on the same node, or in the free deployment style, you need to configure each of the services separately. First configure the ./<service-name>/.env file for each of the services; secondly, configure .env.<GUARDIAN_ENV>.<service-name> in the folder ./guardian/<service-name>/configs/, copying and pasting .env.template.<service-name> and following the examples provided in the folder itself.
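
A sketch of the orchestrator case, assuming a hypothetical GUARDIAN_ENV value of develop:

cd guardian
cp .env.template .env
# edit .env and set GUARDIAN_ENV=develop
cp configs/.env.template.guardian.system configs/.env.develop.guardian.system
# edit configs/.env.develop.guardian.system following the examples in that folder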

Execute the upgrade process

Follow the specific instructions provided in the upgrade guide or release notes to perform the upgrade process for the Hedera Guardian application. Make sure to carefully follow each step to ensure a successful upgrade.

While performing the upgrade, keep in mind that Guardian has the following four main data stores:

  • The blockchain Hedera Net;

  • The MongoDB Database;

  • The KMS;

  • The Configuration files.

These stores form the boundary conditions for Guardian application execution.

The methodology that we follow to upgrade the system is Blue-Green deployment: we create a new instance of Guardian running the new version, the green instance, and in this instance we run the previously defined tests. To be sure that the behavior of the Guardian platform is not affected by the boundary conditions, we need to run it using the current starting state of all the stores.

Green Instance boundary conditions:

  • Use the same blockchain Hedera Net used by the blue instance already running: configure HEDERA_NET appropriately;

  • Clone the MongoDB Database;

  • Use the same KMS;

  • Configure the Environment as described in Update configuration files.

If you are running Guardian as a docker container, you can clone the mongo database using the following instructions:

  1. Create a backup directory in the blue instance: Create a directory on your local system to store the backup files.

  2. Use the docker run command with the --volumes-from option to access the mongo volume and perform the backup. Run the following command:

docker run --rm --volumes-from guardian-mongo-1 -v /path/to/backup:/backup mongo bash -c "cd /data/db && tar cvf /backup/mongo-backup.tar ."

This command creates a .tar archive of the mongo db data directory (/data/db) and saves it as mongo-backup.tar in the specified backup directory.

  3. Copy the mongo-backup.tar into a folder /path/to/backup in the Green Instance.

  4. In the Green Instance, modify the volumes section of the mongo service definition in the docker-compose.yml file:

services:
 mongo:
   image: mongo:6.0.3
   command: "--setParameter allowDiskUseByDefault=true"
   restart: always
   volumes:
     - /path/to/backup:/data/db
   expose:
     - 27017

By specifying the backup directory as a volume, Docker Compose will mount the contents of the backup directory to the /data/db directory within the mongo container. This allows the container to access and use the previously backed up data.
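
Note that the mounted directory must contain the extracted database files, not the archive itself; a sketch of the extraction step on the Green Instance, using the paths from the steps above:

cd /path/to/backup
tar xvf mongo-backup.tar   # unpack so that /path/to/backup holds the raw contents of /data/db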

If you are running Guardian manually after building the executables, you can restore into the Green Instance mongo db the backed-up data obtained at Perform a Database and Environment backup operation.

About the KMS, which is strongly recommended for your production environment, take care to copy all the cryptographic material. This is held for every service according to the KMS configuration that you are using, as specified in the Guardian Vault documentation. In particular, for Hashicorp Vault, copy the ./<service>/tls folder of every service in your Blue Instance to the homonymous services in the Green Instance.

Now, the last element to worry about is the update of the Environment, using the new configuration files obtained at Update configuration files. You can then bootstrap the Green Instance of the Guardian application and follow the next steps.

Configure Load Balancer

Set up a load balancer to distribute traffic between the blue and green environments. Initially, configure the load balancer to direct all traffic to the blue environment.
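
As a sketch, assuming nginx fronts the two environments and a hypothetical upstream file /etc/nginx/conf.d/guardian.conf initially points at blue.guardian.internal, the switch consists of rewriting the upstream host and reloading:

# switch traffic from the blue to the green environment (hostnames are hypothetical)
sed -i 's/blue.guardian.internal/green.guardian.internal/' /etc/nginx/conf.d/guardian.conf
nginx -t && nginx -s reload   # validate the configuration, then reload without dropping connections

Rolling back (see Monitor and Rollback if Needed below) is the same command with the hostnames reversed.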

Tasks checklist after the upgrade

Test the upgraded application

After the upgrade, thoroughly test the functionality and performance of the Hedera Guardian application in the Green Instance. Conduct comprehensive testing of all major features and use cases to ensure they are functioning as expected in the upgraded version.

Security and integrity testing

Perform security and integrity testing on the upgraded application to identify any vulnerabilities or potential issues. Implement necessary security measures and address any identified vulnerabilities to ensure the application's robustness.

Validate customizations

If the implementer company has made any customizations to the prior version, reapply those customizations to the upgraded version. Verify that the customizations work correctly and are compatible with the new version.

Update documentation and user guides

Review and update the application documentation, user guides, and any related internal resources to reflect the changes and new features introduced in the upgraded version. This will help users understand and leverage the enhancements brought by the upgrade.

End of Blue-Green Upgrade

Switch Traffic to the Green Environment

Once testing is successfully completed:

  1. Repeat the cloning steps to update the Green Instance with the latest transactions, to avoid losing any data about transactions that may have happened during the testing phase.

  2. Update the load balancer configuration to start directing the incoming traffic to the green environment.

Monitor and Rollback if Needed

Continuously monitor the green environment's performance, logs, and metrics to identify any issues or anomalies. Compare the previous metrics with the newly collected ones, as per the monitoring tools available for Guardian since release 2.12.1.

If any critical issues arise, you can quickly rollback by switching the load balancer to route all traffic back to the blue environment.

Complete Transition

Decommission the blue environment or keep it as a backup, depending on your requirements.


