Token Distribution
$TAG is the native token and governance token of the Tagger platform, with a total supply of 405,380,800,000 tokens.
$TAG is the native currency that powers Tagger, generated fairly through participants' labor under a proof-of-work model. $TAG can be used for staking, purchasing datasets, utilizing datasets, subscribing to software services, AI model customization services, and more.
| Allocation | Percentage | Tokens |
| --- | --- | --- |
| Proof-of-Human-Work | 74.00449158% | 300,000,000,000 |
| Tag-to-Pump | 21.06187565% | 85,380,800,000 |
| Liquidity | 4.93363277% | 20,000,000,000 |
Proof-of-Human-Work: Tokens are generated through the direct contribution of participants' data processing efforts. Anyone is free to engage with TAGGER's tasks in a permissionless manner, receiving real-time $TAG rewards for every completed assignment. The platform divides these tasks into data annotation, verification, and Human-In-The-Loop reviews. Further details on earning $TAG can be found in the "How to Earn $TAG" guide.
The system requires no specialized expertise to participate. Through an AI Copilot feature, individuals can identify and segment data targets with minimal friction. Even without being domain experts, contributors can leverage the AI Copilot to produce annotations of high quality. TAGGER employs a combination of AI-standardized protocols and human-led review measures to audit all submitted work, ensuring a fair distribution of $TAG to diligent participants. In this way, the integrity of the data output is reliably upheld, creating a peer-driven process that is both transparent and trustless. $TAG Proof-of-Human-Work distribution follows the rule below:
This mechanism governs how the Single Task Reward (T_{n+1}) is determined for each subsequent three-hour window, based on the actual mining outcomes of the previous three-hour period. The logic parallels Bitcoin's method for recalibrating mining difficulty, though Bitcoin adjusts its difficulty every 2016 blocks (roughly two weeks), whereas our system adjusts the reward once every three hours.

We incorporate factors such as account level, accuracy coefficient, and daily bonus into a single value, Pi_n. Although they do not appear explicitly in the formula, their effects still bear upon the reward calculation through Pi_n.
To illustrate, suppose our target for the last three-hour window was to release 100 TAG tokens, so a_n = 100. If the tokens actually released in that period amounted to 120, so Pi_n = 120, then the Single Task Reward for the next window, T_{n+1}, is the previous reward, T_n, multiplied by the ratio 100 / 120, that is, a_{n+1} / Pi_n.
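The adjustment rule above can be sketched in a few lines of Python. This is an illustrative sketch under stated assumptions: the function name and the fallback for a window with zero issuance are ours, not the platform's published implementation.

```python
def next_task_reward(t_n: float, a_next: float, actual_issued: float) -> float:
    """Return T_{n+1} = T_n * (a_{n+1} / Pi_n).

    t_n           -- current Single Task Reward (T_n)
    a_next        -- target issuance for the next three-hour window (a_{n+1})
    actual_issued -- tokens actually issued in the current window (Pi_n)
    """
    if actual_issued <= 0:
        # No tasks completed this window: keep the reward unchanged
        # (an assumption; the real system may handle this differently).
        return t_n
    return t_n * a_next / actual_issued

# Worked example from the text: target 100 TAG, but 120 TAG actually issued.
# On a non-bridging day a_{n+1} = a_n = 100, so the next reward is T_n * 100/120:
t_next = next_task_reward(1.2, a_next=100, actual_issued=120)
# -> 1.0: the per-task reward shrinks because more tokens were issued than targeted.
```

Over-issuance thus lowers the next window's per-task reward, and under-issuance raises it, steering actual output back toward the target curve.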
You might wonder why we use "next period's expected issuance (a_{n+1}) divided by the current actual issuance (Pi_n)" rather than "current target issuance (a_n) divided by the current actual issuance (Pi_n)". The answer lies in how we handle "bridging days." Around the transition from day 90 to day 91 in the first quarter, for instance, a_{n+1} may become smaller than a_n to reflect the new issuance curve. Our formulation adjusts smoothly across these boundary moments because a_n cancels out, leaving only a_{n+1} in the ratio. On a non-bridging day, a_{n+1} = a_n, but on such "bridging" days, a_{n+1} < a_n.
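A small sketch of the bridging-day behavior, with purely illustrative numbers (the real schedule's targets are not given in this section): when the 90-day curve recalibrates and a_{n+1} drops below a_n, using a_{n+1} in the numerator steps the reward down in a single smooth adjustment.

```python
def adjust(t_n: float, a_next: float, actual: float) -> float:
    # Same update rule as above: T_{n+1} = T_n * (a_{n+1} / Pi_n).
    return t_n * a_next / actual

# Assume miners hit the target exactly in the last window of day 90
# (Pi_n = a_n = 100), and the day-91 target drops to 80 (illustrative).
a_n, a_next = 100.0, 80.0
t_n = 1.0
t_next = adjust(t_n, a_next, a_n)
# -> 0.8: the reward scales straight to the new curve, with no
#    intermediate window paying out against the stale target.
```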
We now turn to the matter of selecting the value a_n, the three-hour target issuance figure, across time. Its progression is defined by a sequence that converges to 1, providing a controlled and predictable ramp-down of token issuance, as illustrated below.

In this design, the target issuance amount for each three-hour period of $TAG is recalibrated every 90 days, resulting in a progressively reduced token output. The variable C_n represents the fraction, drawn from the 74.00449158% of TAG tokens reserved for Proof-of-Human-Work, that we target to issue over each 90-day window.
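One way to read "a sequence that converges to 1" is that the cumulative fractions C_1 + C_2 + ... approach 1, so total issuance converges to the Proof-of-Human-Work reserve. The sketch below uses a geometric sequence with ratio 0.5 purely for illustration; the actual C_n values are not published in this section.

```python
POHW_RESERVE = 300_000_000_000  # 74.00449158% of total supply

def quarterly_fraction(n: int, r: float = 0.5) -> float:
    """C_n for 90-day window n (1-indexed): geometric term (1 - r) * r**(n - 1).

    The ratio r = 0.5 is an assumption for illustration only.
    """
    return (1 - r) * r ** (n - 1)

# Partial sums of C_n approach 1, so cumulative issuance approaches the reserve.
cumulative = sum(quarterly_fraction(n) for n in range(1, 11))
# -> 0.9990234375 after 10 quarters (i.e. 1 - 0.5**10)
issued_so_far = POHW_RESERVE * cumulative
```

Each recalibration then sets the per-window target a_n by spreading C_n * POHW_RESERVE across the three-hour windows of that 90-day period.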
Tag-to-Pump: 21.06187565% of $TAG tokens were launched via four.meme as an experiment in the DeCorp model, testing the viability of a Web3 approach to crowdsourcing data-labeling tasks. A preorder of 106,726 data-labeling tasks worth $50,000 was used for this experiment, and $TAG tokens were distributed to the data labelers who participated.
Liquidity: Initially, 4.93363277% of $TAG tokens, paired with $50,000 worth of $BNB, were deployed to PancakeSwap as liquidity.