Architecture

v1.1 (last updated: 02-12-2022)

LVL Protocol Architecture Overview

This protocol is currently being designed and developed, so keep an eye on future amendments to this document. Any substantial updates will be communicated accordingly.

Abstract

LVL Protocol is an on-chain reputation and skills protocol designed for EVM-compatible blockchains, as explained in depth in the LVL Protocol Whitepapers. Data is collected from known and custom external systems acting as data sources, aggregated into a processable state, and written to a JSON file. That file is then used as input to customizable functionality that we will refer to as "Roll-up". Each rollup function uses any of the data points from the external data sources to produce a skill level, calculated per season for a unique combination of user address and DAO.

It should be pointed out that each skill belongs to a skillset, which is a uint256 divided into 32 8-bit chunks, each chunk holding one skill value between 0 and 255. Finally, once the rollup logic has run for all desired skills, LVL Protocol grants each member an ERC-721 $LVL NFT that represents their profile, then associates skill values with that NFT within the context of each community.
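As an illustration of this packing scheme, here is a minimal sketch in TypeScript, assuming the lowest-order byte holds skill index 0 (the actual byte ordering is defined by the Skills Smart Contract):

    // Pack up to 32 skill values (each 0-255) into a single uint256 (BigInt).
    // Assumption: skill index 0 occupies the least significant byte.
    function packSkillset(skills: number[]): bigint {
      return skills.reduce(
        (acc, value, index) => acc | (BigInt(value & 0xff) << BigInt(index * 8)),
        0n,
      );
    }

    // Read one 8-bit skill value back out of a packed skillset.
    function getSkill(skillset: bigint, index: number): number {
      return Number((skillset >> BigInt(index * 8)) & 0xffn);
    }

    // Example: skill 0 = 180, skill 1 = 42.
    const skillset = packSkillset([180, 42]);
    console.log(getSkill(skillset, 1)); // 42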

Actors

Community

Any environment in which members convene and interact, e.g. a DAO.

Integration Partner

This person is in charge of connecting the different rollup functions based on DAO-specific configuration, and of building their own rollup functions to meet DAO needs.

Community Owner

This person is in charge of configuring LVL Protocol based on DAO needs, with the ability to set their owned skillsets, set the rollup function to use for each desired skill, and query the level profile of each member.

Member

Any entity within a community that can be interacted with. Members are most often individuals, but can also represent a group of people presenting themselves as a single unit within a community.

Workflows

There are a number of workflows that should be described in order to give a good sense of how the different components interact via predefined interfaces (see the API Contract section). We will focus on three main workflows for now:

  • Data Storage Flow: Handles the interaction between third-party or in-house integrations and the LVL Protocol systems via webhooks, which are in charge of retrieving all the data required to produce a meaningful input that will later become a skill value.

  • Data Approval Flow: Once the data coming from the different data sources (SourceCred, Coordinape, GitHub, LinkedIn, or even your own data source) is stored on IPFS, it can easily be inspected by the CORE team and relevant parties for approval. There are a couple of proposed flows to do that, involving Snapshot and email or Discord communications.

  • Data Rollup Flow: Once the data is approved, we need a way to transform it into the appropriate skill level values. This is where the aggregated data from the data sources is used as input to a relevant rollup function, whose output is a value between 0 and 255 returned as a uint256 (a rough interface sketch follows this list).
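As a rough sketch of the kind of interface a rollup function could expose, assuming an illustrative shape for the aggregated data (this is not the protocol's defined schema):

    // Illustrative shape of the aggregated per-member data (assumed, not normative).
    interface AggregatedSourceData {
      github?: Record<string, number>;
      coordinape?: Record<string, number>;
      sourcecred?: Record<string, number>;
      linkedin?: Record<string, number>;
    }

    // A rollup function maps the aggregated data to a skill level in [0, 255],
    // returned as a bigint so it can be packed into a uint256 skillset.
    type RollupFunction = (data: AggregatedSourceData) => bigint;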

Data Storage Flow

  1. A REST endpoint is called to initiate the data storage process. See the API Contract section for more details on the expected request and response.

  2. We start by retrieving off-chain data from FaunaDB. More specifically, we retrieve the community data (described in the Data Contract section).

  3. For each member of the retrieved community, we will then connect to the integrated third-party data sources.

  4. Then, an aggregated JSON file is generated (an illustrative example follows this list).

  5. We rely on IPFS to save this generated file on-chain. The main idea of storing the file there is to be able to track its content by CID, so that we can use it to set up an approval process where everyone can see the content, which provides transparency to the whole flow.

  6. The CID is persisted back into Fauna (off-chain), so that we have an additional association with the Community/DAO.
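For illustration, the aggregated metadata file for a community season could look roughly like the object below; the field names and structure are assumptions for this sketch, not the schema defined in the Data Contract section:

    // Illustrative aggregated metadata for one community season (assumed structure).
    const seasonMetadata = {
      community: "ExampleDAO", // hypothetical community name
      season: 3,
      members: [
        {
          address: "0x0000000000000000000000000000000000000001",
          sources: {
            github: { mergedPullRequests: 12, reviews: 30 },
            coordinape: { giveReceived: 250 },
            sourcecred: { cred: 97.5 },
          },
        },
      ],
    };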

Data Approval Flow

  1. A POST request to the /communities/rollup endpoint is performed to start the rollup process (an illustrative request is sketched after this list).

  2. The skillset configuration is retrieved either via Fauna (off-chain) or IPFS (on-chain).

  3. Prior to proceeding with the rollup, we need to make sure the JSON metadata file holding the aggregated data from all sources is good to go. To do that, a communication is sent via email or Discord with an attached Snapshot document, which each relevant CORE member should approve by signing it with their wallet.

  4. The approval is recorded in the Community Season Data, and the rollup can now start.
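As a minimal sketch of step 1, the rollup could be kicked off with a request along these lines; the base URL and body fields are placeholders, and the actual contract is defined in the API Contract section:

    // Illustrative call to start the rollup process (placeholder URL and fields).
    const response = await fetch("https://api.example.com/communities/rollup", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        communityId: "example-dao-id", // hypothetical identifier
        season: 3,
      }),
    });
    console.log(response.status);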

Data Rollup Flow

  1. The owned skillsets are retrieved via the Skills Smart Contract.

  2. Then, for each skill, the rollup function to use is identified by querying its configuration from an IPFS file that holds the desired mapping.

  3. Loop through the members of the community: for each member, we retrieve the JSON metadata from IPFS and prepare the required input according to the rollup function's specification.

  4. Now the rollup function is executed; this can include any JavaScript logic you would like, based on your needs. For instance, for a NodeJS skill you might use some data points from GitHub, plus some Coordinape ones as weights/multipliers (a sketch of such a function follows this list). As long as the function output is a value between 0 and 255 returned as a uint256, the system will use it to rebuild the skillset with the calculated skills and write it back to the Ethereum network via the Smart Contract.

  5. Here, we rely on the ERC-721 standard, creating an NFT that represents the member's profile. The data is grouped into skillsets of 8 or fewer skills.
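A minimal sketch of such a rollup function for a hypothetical NodeJS skill, assuming illustrative input fields (github.mergedPullRequests, coordinape.giveReceived) and an arbitrary weighting that a community would tune to its own needs:

    // Illustrative rollup function for a hypothetical "NodeJS" skill. It combines
    // GitHub activity with a Coordinape-based multiplier and clamps the result to
    // the 0-255 range required by the protocol.
    function nodejsRollup(data: {
      github?: { mergedPullRequests?: number };
      coordinape?: { giveReceived?: number };
    }): bigint {
      const prs = data.github?.mergedPullRequests ?? 0;
      const give = data.coordinape?.giveReceived ?? 0;

      // Assumed weighting: 5 points per merged PR, scaled up to 2x by GIVE received.
      const multiplier = 1 + Math.min(give / 500, 1);
      const level = Math.min(Math.round(prs * 5 * multiplier), 255);

      return BigInt(level); // uint256-compatible output
    }

    // Example: 20 merged PRs and 250 GIVE received -> level 150.
    console.log(nodejsRollup({
      github: { mergedPullRequests: 20 },
      coordinape: { giveReceived: 250 },
    })); // 150n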

API Contract

Data Contract
