Data Review and Staking

Data Review operates within a community-driven framework and forms a key pillar of a scalable, autonomous DeCorp model. Unlike traditional corporate hierarchies, in which managerial and operational costs rise exponentially with workforce expansion, this system scales without such inefficiencies. It follows the principles of true decentralization: verification and validation are maintained through a peer-to-peer structure, eliminating reliance on centralized oversight.

What is deemed a "correct" Data Review attempt?

Each manually labeled data submission undergoes a first-round review by three assigned reviewers. A piece of manually labeled data will be:

Passed within the first round if:

  • All three first-round reviewers unanimously accept the data labeling attempt.

Denied within the first round if:

  • At least two first-round reviewers deny the data labeling attempt.

Passed on to a single second-round reviewer if:

  • Exactly two first-round reviewers accept the data labeling attempt and one first-round reviewer denies the attempt.

When a piece of data is passed on to the single second-round reviewer, that reviewer's decision is the final outcome of the data review.

To maintain objectivity, reviewers are never informed whether they are acting in the first or second round for any given piece of data.

A data review attempt is therefore deemed correct when the reviewer's decision aligns with the final outcome of the reviewed data. For example, if a piece of data is ultimately denied, any reviewer who selected "yes" is marked as having made an incorrect attempt.
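The two rounds and the correctness rule can be summarized in a short sketch. This is an illustrative Python model only, assuming boolean accept/deny votes; the function and variable names are ours, not part of the platform:

```python
def review_outcome(first_round: list[bool], second_round: bool | None = None) -> bool:
    """Resolve the final outcome of one manually labeled submission.

    first_round  -- accept (True) / deny (False) votes from the three
                    assigned first-round reviewers.
    second_round -- the single second-round reviewer's vote, needed only
                    in the 2-accept / 1-deny case.
    """
    assert len(first_round) == 3
    accepts = sum(first_round)
    if accepts == 3:            # unanimous accept: passed in round one
        return True
    if accepts <= 1:            # at least two denials: denied in round one
        return False
    if second_round is None:    # exactly 2 accepts, 1 denial
        raise ValueError("second-round review required")
    return second_round         # the second-round decision is final

def correct_attempts(votes: list[bool], final_outcome: bool) -> list[bool]:
    """A review attempt is 'correct' iff it matches the final outcome."""
    return [vote == final_outcome for vote in votes]

# Example: 2 accepts + 1 denial goes to the second round, which denies.
votes = [True, True, False]
final = review_outcome(votes, second_round=False)  # -> False (denied)
print(correct_attempts(votes, final))              # [False, False, True]
```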

Staking and Eligibility

To qualify as a data reviewer, one must rank among the top n stakers in the system, where n is the number of data reviewers assigned per review window. Review windows open and close based on the volume of data requiring review, with prior notice given.

The minimum staking requirement to become a data reviewer is displayed on the Staking Management page in the dashboard. The staking duration is a minimum of 7 days, and unstaking requires a 7-day waiting period after a request is made. Once an unstaking request is made, the requester forfeits data reviewer status until the end of the review window.

Importantly, staking serves only as a prerequisite for data review eligibility and does not generate DeFi-like returns. This ensures that only highly committed reviewers participate, acting as the final safeguard against inaccurate data labeling.
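For illustration, reviewer selection for a window can be modeled as a top-n filter over stakes. This is a sketch under assumptions only (the Staker fields, min_stake, and n are hypothetical; actual eligibility is enforced by the platform):

```python
from dataclasses import dataclass

@dataclass
class Staker:
    address: str
    staked_tag: float         # amount of $TAG currently staked
    unstake_requested: bool   # an unstake request forfeits reviewer status

def select_reviewers(stakers: list[Staker], n: int, min_stake: float) -> list[Staker]:
    """Pick the top-n eligible stakers for the next review window.

    n is set per window from the volume of data awaiting review;
    min_stake is the figure shown on the Staking Management page.
    """
    eligible = [s for s in stakers
                if s.staked_tag >= min_stake and not s.unstake_requested]
    return sorted(eligible, key=lambda s: s.staked_tag, reverse=True)[:n]
```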

Reward Calculation Formula

$TAG = single task reward × halving coefficient × account level coefficient × daily review accuracy coefficient

Single Task Reward

Rewards vary across data catalogs, with each reward set according to the catalog's difficulty and complexity. These rewards are displayed in the Task Plaza before users select their tasks.

Account Level Coefficient

Refer to the "How to Earn $TAG" section.

Halving Coefficient

Refer to the "Token Distribution" section.

Daily Review Accuracy Coefficient

Daily Review Accuracy reflects the percentage of data review tasks deemed "correct attempts" on the previous working day.

| Daily Review Accuracy % (x) | Daily Review Accuracy Coefficient |
| --- | --- |
| x < 30% | 0 |
| 30% ≤ x < 50% | 0.3 |
| 50% ≤ x < 70% | 0.5 |
| 70% ≤ x < 85% | 0.8 |
| 85% ≤ x < 90% | 1.0 |
| 90% ≤ x < 95% | 1.2 |
| x ≥ 95% | 1.5 |
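Read together with the formula above, the table can be expressed as a lookup, followed by a worked reward calculation. The single task reward, halving, and account level values below are placeholder assumptions; the real figures come from the Task Plaza, the "Token Distribution" section, and the "How to Earn $TAG" section:

```python
def daily_review_accuracy_coefficient(x: float) -> float:
    """Map the previous working day's review accuracy x (in %) to its coefficient."""
    if x < 30: return 0.0
    if x < 50: return 0.3
    if x < 70: return 0.5
    if x < 85: return 0.8
    if x < 90: return 1.0
    if x < 95: return 1.2
    return 1.5

# Worked example of:
# $TAG = single task reward × halving coefficient
#        × account level coefficient × daily review accuracy coefficient
single_task_reward = 10.0  # per-catalog reward shown in the Task Plaza (assumed)
halving = 1.0              # see "Token Distribution" (assumed)
account_level = 1.1        # see "How to Earn $TAG" (assumed)
accuracy = 92.0            # yesterday's accuracy: 90% ≤ x < 95% -> coefficient 1.2

reward = (single_task_reward * halving * account_level
          * daily_review_accuracy_coefficient(accuracy))
print(round(reward, 2))    # 10.0 × 1.0 × 1.1 × 1.2 = 13.2
```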
