Add policy support for more than one external data block
Allow more than one external data block per policy. Each external data block should be able to handle a different schema, enabling multiple types of data to be sent from external sources as needed.
Cross-context (API+UI) refresh token invalidation (regression from v2.18.0)
Looking into the code, it seems that the refresh token should last for a year; this is fine as it is configurable, but losing login context (or a user potentially feeling they lost all their data) isn't great UX.
In terms of code behaviour, I would presume that this change would fix the issue:
const user = await new DataBaseHelper(User).findOne({refreshToken: decryptedToken.id, username: decryptedToken.name});
to
const user = await new DataBaseHelper(User).findOne({username: decryptedToken.name});
The reason this might be okay is that expiry is checked when the token is decoded on the line above, so a refresh token would still only last for the configured period by default.
As this is authentication related, it requires review from more people.
Business Use Case for Emissions Reduction/Removals (ERRs) Calculation Pre-Calculator in Guardian
We are in the process of creating a few approaches to this ticket from the business use case perspective. One is essentially an "estimator" with a simplified workflow that can be used to estimate emission reductions, token issuance, etc. upfront to help the user better anticipate issuances and the impacts of various project activities and methodological choices. The other is more of a "summary preview" of the actual calculation results that can be implemented just before validation (or anytime thereafter) to see summary KPIs based on the actual inputs and methodological choices made by the user; they can then interact with the data, like the Nerd Wallet retirement calculator, to see how changes to the project activities could impact issuances. To be discussed further with the team.
On-demand state proof generation for critical Guardian operations
Introduce the ability to trigger Hedera State Proof generation from Guardian interface for particular actions/operations or the general status of the Policy.
State proofs are to be generated and presented to users for download. Guardian will not keep any registry or store them on the system.
Introduce UI and backend functionality to verify Hedera State Proofs generated by Guardian, and ensure the long-term compatibility of this functionality.
Change the password to a strong, random value, or create additional setup steps where the deployer is required to set these secrets. Additionally, set the most secure configuration as the default in the repository. This ensures that any user deploying the repository will benefit from enhanced security by default. In addition, as highlighted in the finding "Lack of Security Hardening Guides", a security guideline is recommended so that users can securely configure their environment before deploying it.
It is recommended to separate the system functionality from the registry functionality specifically for log management. Additionally, it would be appropriate to restrict access to system logs to a different admin role, who would only review the system logs. For registry logs, it is recommended to ensure that one registry can only view its logs, without seeing the logs of other registry users.
Accessing a Guardian policy from a Guardian instance other than the publishing instance
A Guardian user should be able to access a policy published by another Guardian instance from their own Guardian instance. This access should be based on a request-grant model.
Server-Side Request Forgery (SSRF) in Request Data module
If the functionality is important enough to keep despite the risk, then all URLs should be requested through a secure proxy server. This is a significant effort, and to be secure the proxy must ensure that:
1. The URL does not resolve to a private or local IP address
2. Redirects are not followed
3. Only HTTP(S) protocol schemes are supported
Additionally, the application server should define and enforce rate limits to discourage abuse of the functionality as a web scanner.
If the application is hosted on AWS servers, enforce usage of AWS "Instance Metadata Service Version 2" with token usage required. This is a new AWS metadata API which severely curtails the ability of attackers to abuse SSRF to access the AWS metadata API. However, this will not prevent attacks against other internal services.
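As a rough sketch of the proxy-side checks listed above, written in plain Node.js/TypeScript (names and ranges are illustrative, not the Guardian implementation, and this alone does not defend against DNS rebinding):

import { lookup } from 'dns/promises';

// Private/reserved ranges; illustrative, not exhaustive.
const PRIVATE_RANGES = [
    /^10\./, /^127\./, /^169\.254\./, /^192\.168\./,
    /^172\.(1[6-9]|2\d|3[01])\./, /^0\./, /^::1$/, /^f[cd]/i, /^fe80:/i,
];

async function assertSafeUrl(rawUrl: string): Promise<URL> {
    const url = new URL(rawUrl);
    // Only HTTP(S) protocol schemes are supported
    if (url.protocol !== 'http:' && url.protocol !== 'https:') {
        throw new Error(`Unsupported protocol: ${url.protocol}`);
    }
    // The URL must not resolve to a private or local IP address
    const { address } = await lookup(url.hostname);
    if (PRIVATE_RANGES.some((range) => range.test(address))) {
        throw new Error(`Blocked address: ${address}`);
    }
    return url;
}

// Redirects are not followed: pass redirect: 'manual' when fetching, e.g.
// const response = await fetch((await assertSafeUrl(input)).href, { redirect: 'manual' });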
We should be able to export the complete project data of any policy in CSV format through Guardian.
We should also be able to apply filters to the project data, which should be included in the export file.
When we export the data in CSV format, it should follow a proper naming convention, such as saving the exported file by its respective policy name and version, e.g. <policy name>_<version>.csv.
It is recommended to implement mutual authentication for all internal microservice communications to ensure that each service can verify the identity of the other. It is recommended to ensure that each service is properly authenticated, using authorization roles and permissions to ensure that each service can only publish or consume messages in the queues relevant to its designated function. Moreover, messages could be digitally signed, ensuring they originate from the correct service. At each step in the process, the signatures can be verified to ensure that the message has not been tampered with. Where applicable, integrate these recommendations into the security hardening guide to ensure organizations deploying the application can implement these best practices effectively.
The application should use an alternative mechanism for transmitting session tokens, for example the Authorization header, as is done by the rest of the web application.
Manual trigger of re-indexing for specific policy, SR, token
Introduce a new capability into the indexer to trigger manual re-indexing for a specific 'vertical', starting at a specific topic and navigating (only) down the hierarchy for immediate availability of data.
Develop a UI for users to provide a Topic ID for one of the specific items below as an entry point into the 'vertical':
Policy
Standard registry
Token
When manual re-indexing is scheduled it must take priority; other indexing work should be postponed until the manually triggered update is finished.
The user who triggered the update must be notified when the update is finished.
Introduce a 'Test' button in all places where source or math code can be entered in a Guardian policy, which would trigger the 'in place' execution of the code based on the execution context and the defined inputs/outputs of the block. This tool could prompt the user for input data when required.
Add ability to 'print' (i.e. log) data and variable values somewhere when test-running policy (e.g. in Dry-run mode). This way policy authors would be able to examine the data structures passed into the functions and identify unexpected behaviour in this area.
Add recommendations to the documentation with regard to running this code in a separate developer environment, i.e. all the needed execution context such as imported libraries etc., so that developers who prefer to use their code editors can replicate the execution of the code there.
It is recommended to create a password policy that can be configured by the organizations using the application. It should also be noted that recent guidance from NCSC promotes password policies which are designed to decrease the burden on the user. This can include relaxing controls requiring users to change their passwords at regular intervals in favor of the use of suitably complex passwords. The NCSC password guidance should be reviewed to determine if this new guidance can be applied to the environment reviewed.
Ensure the Guardian code is covered by an effective patching policy that allows the latest server software upgrades, updates, or patches to be tested and applied within a short time frame following their release by the vendor.
the data can be defined as mandatory or optional (by the policy author)
data imported into Policy artifacts is stored and displayed in its native format, preserving 'mime type' and/or any other indication of the nature of the data as well as the identity/credentials of the source, time/date and other identifying information as appropriate
Authorization Headers Potentially Leaked through IPFS in Request Data Module
Implement a secure method to handle secrets in the Request Data module that ensures sensitive information, such as authorization headers, is not published with the policy. One possibility may be to include headers encrypted with a public key, such that only the private key of the policy owner can decrypt them. Another possibility may be to store the secret headers in the vault and fetch them at runtime using appropriate access controls. Update the documentation to explicitly warn policy creators about the risks of including sensitive information in policies and recommend using the module only for public HTTP methods. Provide guidelines on securely configuring policies to avoid the leakage of sensitive data.
We need to enhance the Indexer UI for consumers of the "Tree API," the project/tonnage API, and other consumer projects, for the purpose of supporting eCommerce transactions.
Improving the Guardian UI by adding more UI elements and more colorful, customizable headers.
Creating a static landing page which will have the capability of performing project comparison within the same instance using different parameters such as scale size, sectoral scopes, etc.
Implementing AI search to allow project developers to search policies based on the information entered.
Implementing Guided Search to allow project developers to search policies using different parameters within the same instance.
Implementing a property field at schema creation time, which will be used for standardization as per the IWA specification.
Implement the policy deprecation workflow which includes:
Guardian UI allowing issuing SR to discontinue a policy (version) or the entire policy from a certain date (in the future or 'now').
The policy grid should display a suitable marker against non-active policies, and a different one for those soon expiring.
An appropriate message posted in the corresponding Hedera topic recording the 'discontinuing' decision
For in-progress projects that have been registered and are operating under the policy it should be possible to 'switch' to the appropriate version of the policy which is still valid.
Gold Standard's Carbon Sequestration through Accelerated Carbonation of Concrete Aggregate Webinar
Design schemas for the Carbon Sequestration through Accelerated Carbonation of Concrete Aggregate methodology, create a PowerPoint presentation, and conduct a webinar.
Development of the policy using the schemas and workflow designed
Business User Policy Development Feature - schemas MVP
Create an Excel 'schema representation' standard suitable for non-technical users. Note: use existing Excel schemas from Tools and UNFCCC initiatives as guidance.
Create an explicit template for the above, downloadable from Guardian UI, which users can take and update/change to develop new schemas.
Create an Export/Import UI and tooling which would allow seamless transformation of schemas written in Excel into valid Guardian JSON schemas and vice versa
Ensure manual interventions are possible for corrections/adjustments of complex formulas and other issues.
Introduce support for geoTIFF and other raster types of data such that:
Guardian documents (i.e. in schemas) can reference raster data (in geoTIFF and other common formats) which are located on external (3rd party) systems.
Guardian UI can display raster images and their georeferencing data when they are encountered in documents.
Guardian policy can access and manipulate (use in calculations, etc) data from raster sources.
Support externally controlled DIDs with keys in Guardian
Introduce a workflow into the Guardian where a DID Controller would introduce a dedicated verification method into the main DID for which the private key would be stored and managed by a Guardian instance. This way Guardian would only be able to control the specific verification method's key, but not the rest of the DID.
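For illustration only, a DID document along these lines might contain two verification methods: one controlled externally and one whose key is managed by Guardian. Identifiers and key material below are placeholders following W3C DID Core conventions, not actual Guardian output:

const didDocumentFragment = {
    id: 'did:hedera:mainnet:EXAMPLE_DID',
    verificationMethod: [
        {
            id: 'did:hedera:mainnet:EXAMPLE_DID#controller-key',
            type: 'Ed25519VerificationKey2018',
            controller: 'did:hedera:mainnet:EXAMPLE_DID',
            // private key held externally by the DID Controller
            publicKeyBase58: '...',
        },
        {
            id: 'did:hedera:mainnet:EXAMPLE_DID#guardian-key',
            type: 'Ed25519VerificationKey2018',
            controller: 'did:hedera:mainnet:EXAMPLE_DID',
            // private key for this method only is stored and managed by the Guardian instance
            publicKeyBase58: '...',
        },
    ],
};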
Introduce the ability to mint Mitigation Asset Type tokens as the result of the calculation of the diff between planned (and reported on the Environmental) and actual results of the calculations based on the MRV data for a reporting period. This would likely require:
New type of blocks in the policy definition language specifying 'target' numbers.
Policy Engine ability to mint different types of tokens depending on the conditions
Development of ACM0002: Grid-Connected Electricity Generation from Renewable Sources
Development of the policy with all details mentioned in the design schema.
The tools involved in this policy also need to be developed. The tools are listed below:
Tool 01- Tool for the demonstration and assessment of additionality
Tool 02- Combined tool to identify the baseline scenario and demonstrate additionality
Tool 03- Tool to calculate project or leakage CO2 emissions from fossil fuel combustion
Tool 05- Baseline, project and/or leakage emissions from electricity consumption and monitoring of electricity generation
Tool 07- Tool to calculate the emission factor for an electricity system
Tool 10- Tool to determine the remaining lifetime of equipment
Live project (data) migration across Policies, across Guardian instances
Implement User Interface (UI) and tooling allowing users to execute multiple cycles of 'export a live project' from a policy and 'import a live project' into another policy. This migration process should work irrespective of the policy versions, standard registries, and Guardian instances, automatically mapping data/documents to the corresponding policy steps in an intelligent way, referring to the Project Developer in situations needing human input via a convenient UI/UX ('User Experience'):
Project Developer can preview and assess the compatibility of policies and data, and the result of the migration using something analogous to the 'dry-run' mode.
For cases where the 'new' schemas and policy steps match perfectly the 'old' valid data/documents from the 'source', the 'old' ones should be automatically accepted into the 'target' policy flow with no human intervention.
Project Developer can review and select/guide the matching and the destination of the 'source' data/documents into the new policy flow with full visibility with regard to:
'source' and 'target' policy structure (side by side), with details of block parameters etc where required.
content of the original and destination documents with field-level granularity
Where data needs to be augmented and thus new signatures are required the corresponding Guardian users (e.g. Standard Registry) get requests to sign the data.
The migration process should be automated, and should result in the 'stopped' project/policy execution on the 'source platform' and 'resumed' from the same point in the policy flow on the 'destination' (other) platform, with full data and tokens visibility and provenance provability in the trust chain. The 'old' data and artifacts produced on the 'source' should be fully useable on the 'target', e.g.
used in reports
viewable in the UI
data referencable and useable in calculations and other policy actions (such as minting)
operations on 'old' tokens are supported in the new policy smart contracts (retirement, exchanges, etc)
We need to integrate Fireblocks, a key management tool, to manage keys and secure Guardian. For complete information on Fireblocks, please look at https://www.fireblocks.com/
Full project data comparison as produced/captured by policies
Introduce a comparison functionality where it'd be possible to 'diff' arbitrary sections or the entire trust-chains for different tokens, potentially issued by different policies such that the system would:
graphically display the differences where a user would then be able to 'scroll' through and review them in the UI
get a numerical 'similarity score' indicating how similar the two 'chains' are
Global environmental/Guardian data search (indexer) component for Hedera and IPFS
Improve the data storage and indexing capabilities of Guardian for the data belonging to the local instance such that complex analytical queries could be run efficiently, such as 'search for data similar to this' and 'what is the possibility of this being a double entry for something submitted elsewhere'.
Introduce a global search and indexing capability for data produced by other (all) instances such that the queries above could be run on the entire body of Guardian data produced from the beginning of time (in the blockchain sense).
Extend 'Block and policy discoverability/search' (#2281) so that users are able to preview the usage of a block without having to import "other SR's" policy into their Guardian instance.
Fundamentally separate the concept of users, roles and permissions in Guardian
Introduce a granular concept of permissions which could be assigned to users; a user could then perform a specific function if their assigned role 'contains' the corresponding permission. These should include (but not be limited to):
Policy edit/submit for review
Policy view
Policy approval & publish
Introduce a "user admin" role, which allows:
defining new roles from permissions
assigning of roles to users
Create a permissioning system which verifies the actor's role before any action is taken throughout Guardian
Package a suitable set of the most common roles into Guardian so it can be operated immediately 'out of the box' without the need for additional configuration
Create a concept of 'delegation' where a user with a particular role/permission can explicitly 'delegate' this role/permission to another user
Introduce the functionality to produce a report (page, download) which lists all users and their roles/permissions mapping in the system
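As a rough sketch of the permission check described above (all names here are assumptions, not the actual Guardian API), the verification performed before an action could look like:

enum Permission {
    POLICY_EDIT = 'POLICY_EDIT',
    POLICY_VIEW = 'POLICY_VIEW',
    POLICY_APPROVE_PUBLISH = 'POLICY_APPROVE_PUBLISH',
}

interface Role { name: string; permissions: Permission[]; }
interface User { username: string; roles: Role[]; }

// Verify that the actor's assigned roles contain the required permission before the action runs.
function requirePermission(user: User, required: Permission): void {
    const granted = user.roles.some((role) => role.permissions.includes(required));
    if (!granted) {
        throw new Error(`User ${user.username} lacks permission ${required}`);
    }
}

// Usage: guard an action handler
// requirePermission(currentUser, Permission.POLICY_APPROVE_PUBLISH);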
Create a Guardian 'transaction execution' service which would assure orderly transaction execution and their status tracking, and provide intelligent retry and failure recovery functionality such that required transactions would be guaranteed to be asynchronously executed once, and only once, and in the right order.
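A minimal sketch of such a service, assuming in-memory bookkeeping for brevity (a real implementation would persist status and idempotency keys, and halt or alert on exhausted retries to preserve ordering guarantees):

type TxStatus = 'PENDING' | 'SUCCEEDED' | 'FAILED';

interface QueuedTransaction {
    id: string;                     // idempotency key
    execute: () => Promise<void>;   // the Hedera call to perform
    status: TxStatus;
    attempts: number;
}

class TransactionExecutor {
    private queue: QueuedTransaction[] = [];
    private completed = new Set<string>();  // de-duplicates already-executed IDs

    enqueue(tx: QueuedTransaction): void {
        if (!this.completed.has(tx.id)) this.queue.push(tx);
    }

    async run(maxRetries = 3): Promise<void> {
        for (const tx of this.queue) {       // strictly in submission order
            while (tx.status !== 'SUCCEEDED' && tx.attempts < maxRetries) {
                try {
                    tx.attempts += 1;
                    await tx.execute();
                    tx.status = 'SUCCEEDED';
                    this.completed.add(tx.id);
                } catch {
                    tx.status = 'FAILED';    // retried until maxRetries is reached
                }
            }
        }
    }
}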
Further evolution of policy comparison (a.k.a 'mass diff')
Relying on the work done in the course of #1793 (i.e. creating data structures (hashes) to enable more efficient comparison), allow for mass-comparison of policies such that a user is able to search for local policies 'similar to'/'different from' some other policy based on a similarity threshold. This is related (but different) to #2281, as it focuses on 'easy diff' vs 'easy search'.
Add suitable API facilities which would allow programmatic access to the indexed data and analytics, which include policy structure data (such as formulas used in the various elements - e.g. Tools) as well as project data.
Filtering data for blocks is a stateful API; introduce stateless data filters for API usage
I don't necessarily think there is a hard requirement to remove the stateful nature of Guardian filtering, as we cannot predict which downstream API consumers are using this functionality, or how they would be affected, without some kind of deprecation notice.
So, the recommendation would be:
Add the ability to filter using a GET request, so data can be fetched and filtered in one action
(As an alternative - preferred) It would be preferable to enable filtering at the block level when retrieving data, so an API consumer does not need to add explicit filter blocks and can use the Guardian API in a more RESTful way by default.
Post a six month deprecation notice for stateful usage of the filter (revert if hard requirement for others)
An example code enhancement could be implemented like this (tags are easier to reason about):
From old version:
public function filterByTag(string $policyId, string $tag, string $uuid): object
{
    return (object) $this->httpClient->post("policies/{$policyId}/tag/{$tag}/blocks", [
        'filterValue' => $uuid
    ], true);
}
to:
public function filterByTag(string $policyId, string $tag, string $uuid): object
{
    return (object) $this->httpClient->get("policies/{$policyId}/tag/{$tag}/blocks?filterValue={$uuid}");
}
Or provide/document clearly a mechanism to filter on an interface document block itself, which would be preferred.
Different token IDs for different projects by the same policy
Introduce the facility to dynamically create new TokenIDs and 'assign' them to (newly registered) specific projects such that all data associated with these specific projects would be linked to the corresponding TokenIDs upon minting instances of the token.
Ensure clear association with the same methodology for all TokenIDs and their respective trustchains. I.e. it should be clear that 'these tokens' have been issued by the same Policy, but for different projects.
Extend trust chain to show multiple tokens and multiple projects 'managed' by the same policy
Enhance MongoDB Integration by incorporating seamless support for popular third-party services, such as MongoDB Atlas.
The task at hand involves modifying the codebase to seamlessly integrate the new MongoDB Atlas connection string without the redundant mongodb:// prefix. The correct format for the DB_HOST environment variable should be mongodb+srv://:@staging.wj9lvfj.mongodb.net/?retryWrites=true&w=majority. This adjustment will ensure a successful and accurate connection to our MongoDB Atlas instance.
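One hedged way to support both forms without breaking existing deployments, sketched in TypeScript (variable names are assumptions, not the actual Guardian code): use DB_HOST as-is when it already carries a scheme such as mongodb+srv://, instead of unconditionally prepending mongodb://.

const dbHost = process.env.DB_HOST ?? 'localhost';
const connectionString = /^mongodb(\+srv)?:\/\//.test(dbHost)
    ? dbHost                  // full Atlas connection string supplied
    : `mongodb://${dbHost}`;  // legacy host-only value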
Global Carbon Council (GCC) GCCM001: Methodology for Renewable Energy Generation Projects Supplying Electricity to Grid or Captive Consumers - Version 4.0
Creating Schema design for this methodology.
Development of the schema and policy.
Testing the policy development through Guardian UI and configurator.
Introduce facilities into the Guardian schema language which would allow the Guardian policy engine (and humans when they read these schemas in JSON) to recognize what values should be considered default for the documents based on these schemas.
Make the Guardian policy engine UI insert the default values into the fields of forms based on such schemas. The fact that these are default values automatically inserted into the field should be clearly identifiable, i.e. they need to look different from the values users explicitly put into the (other) fields.
All standard tools/libraries (e.g. for verification) should work with such schemas out of the box
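As a minimal sketch, assuming standard JSON Schema semantics for defaults (the actual Guardian keyword may differ), a schema field with a default could look like:

const reportingPeriodField = {
    title: 'Reporting period (months)',
    type: 'integer',
    default: 12,   // pre-filled by the UI and visually marked as an auto-inserted default
};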
Calculation Logic for values in 'automatic fields' in schemas
Introduce facilities into schema definition language which would allow for the referencing of other fields/values and specification of mathematics needed to calculate the field value based on those.
It should be possible to reference fields from 'other' schemas in the chain - parents/children.
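Purely as an illustration (the reference syntax below is an assumption, not Guardian's schema language), an 'automatic field' might declare its formula and the fields it depends on, including fields from parent schemas in the chain:

const netEmissionReductionsField = {
    title: 'Net emission reductions (tCO2e)',
    type: 'number',
    calculation: {
        formula: 'baselineEmissions - projectEmissions - leakageEmissions',
        variables: {
            baselineEmissions: '#/properties/baselineEmissions',      // same schema
            projectEmissions: '#/properties/projectEmissions',
            leakageEmissions: 'parent#/properties/leakageEmissions',  // parent schema in the chain
        },
    },
};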
Design and implement specialized analytics engine which would enable Guardian to identify, trace and display mathematical relations between data in different artifacts (VCs/VPs/tokens) including events (transactions/messages) on Hedera hashgraph, with unlimited traceability depth.
Intelligent 'understanding' of the nature of the transformations (e.g. in formulas in calculation blocks) is out of scope of this ticket, the analytics engine can view transformations as black boxes. If the 'original' data are used as 'input' into such a black box, for the purposes of this analytics reporting it can be assumed that the 'output' data depends on that 'original' data.
The system should correctly identify and display references to the 'original' data such as when VC document fields reference document fields in other VCs.
Users should be able to perform complex data searches with the scope limited to the dependencies graph.
Guardian UI should enable Project Developers to define (and name) new datapoints and publish these definitions in a standardized manner linking them both to the originator's author DID, project ID and policy ID.
Let's say I submit incorrect data to a block (as I would like to rely on the schema validation in Guardian).
Looking at the logs, I received an error about JSON schema validation:
I am receiving a 500 status code back from a block submission at a point where I would normally expect to receive something akin to a 422 (Unprocessable Entity).
Once the approach has been approved, we can update the GHGP policy and run the example data, publish the PCFs to the Hedera Network, and demonstrate how another guardian policy (of a supply chain partner) can reference a dynamic PCF to support scope 3 calculations. I believe Wes was interested in having this be a methodology breakdown.
API facilities to retrieve unique references (IDs) of results for API-triggered operations
Design a generic approach to the 'traceability' of API calls such that for each API call a chain of events and actions within Guardian policy and especially to outside systems can be established via the unique IDs culminating in:
Hedera transactions
Hedera topics messages
Hedera contract calls
Artifacts published on IPFS
Introduce a corresponding UI where users can visually observe the same information
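For illustration, the references returned for an API-triggered operation could take a shape like the following (all names are assumptions, not a defined Guardian API):

interface OperationTraceReferences {
    operationId: string;               // unique ID returned to the caller when the operation is triggered
    hederaTransactionIds: string[];    // e.g. '0.0.12345@1700000000.000000001'
    hederaTopicMessages: { topicId: string; sequenceNumber: number }[];
    hederaContractCalls: string[];     // transaction IDs of contract executions
    ipfsCids: string[];                // CIDs of artifacts published to IPFS
}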
Guardian analytics: labels and top-down data waypoints
Introduce 2 new workflows into Guardian, which include the corresponding roles and access permissions:
labels author, for users to be able to create the 'rulesets' for evaluating data for their compliance with the chosen 'label',
auditor workflow, for users which would use these 'rulesets' to apply to data.
Introduce the concept of labels, which can be specified to combine multiple statistics (introduced in Guardian analytics: bottom-up data traceability #3336) to create 'higher-order' statistics which themselves can be combined further essentially enabling the creation of 'data transformation' trees which, when applied to data, would ultimately get resolved into binary compliant/non-compliant answers. The top-level 'nodes' in these trees are 'Labels'.
Enhance the current capability of qualitative evaluations in Statistics to support the ability for users to attach external evidence and add textual comments/explanations whenever human input is enabled. The evidence would then become part of the 'evaluation trust-chain', i.e. it should be hashed and stored verifiably. Evidence in image formats should be viewable in the browser; archives (zip files), PDFs, and CSV files should be supported for attachment and subsequent download.
Enable Auditors to apply 'label rulesets' to tokens, Guardian would then automatically traverse the token trust-chain to find and evaluate the required data to produce the label conclusion, i.e. the compliant/non-compliant results. These results can optionally be published to IPFS/topics by Auditors that generated them.
Enable ordinary users to search for statistics, label ruleset, and label conclusions that have been published.
Trustchain support for contract-based issuance and retirement implementation
Extend/modify trustchain implementation to support new contract-based issuance and retirement functionality such that users have visibility to the entire lifecycle of the token and have access to all significant artifacts produced as a result.
American Carbon Registry (ACR) ACR Methodology for Quantifying, Monitoring, Reporting, and Verifying Greenhouse Gas Emissions Reductions and Removals from Landfill Gas Destruction and Beneficial Use Projects
Creating Schema design for this methodology.
Development of the schema and policy.
Testing the policy development through Guardian UI and configurator.
Some items that could help take this policy to the next level would be to build out scope 3 and PCF referencing capabilities, build out SEC compliance aspects, and pursue a "Built on GHGP Mark" of approval. I believe this will help drive the policy to be attractive to real world users and ready for adoption.
Formula Linked Definitions & Schema Tree Enhancement
Introduce a UI component, or 2 separate but compatible components, into Guardian which can display mathematical formulas in a format familiar to the user (like formulas in LaTeX documents or PDFs). These formulas should be interactive, i.e.:
at the viewing time individual elements of the formulas should be clickable so users can drill into the variables and see corresponding schemas/documents.
users should be able to input formulas (in a formula editor) of sufficient complexity to cover all VCM cases
users should be able to copy/paste entire formulas or parts thereof
Enable policy authors to map schema tree structures to formulas, linking the fields and variables so Guardian UI can display them as per point above
Enhance Guardian schema, policy and VC/VPs views to display the formulas whenever they are available.
Introduce the ability to attach a PDF file to the schemas/formulas at the policy/schema creation time, and specify the (external) 'origin' link so the original source of the math can be traced to the original paper.
Enhance schema tree view to display the formulas alongside schemas.
Dry-run policy execution 'savepoints' - restart policy dry-run from the list of 'saved' places
Introduce a new functionality for users to 'save' dry-run execution status at arbitrary points by clicking 'save state' button.
The system should support the creation of multiple save points for the same execution workflow
Next time the (draft) policy is executed in the dry-run mode users should be given a choice whether to restart from the beginning or continue execution from any of the 'save points'.
Starting execution from a 'save point' invalidates and removes all the other save points that logically followed it
It should be possible to delete some or all save points manually