In the standard signal detection theory (SDT) framework, assuming that observers’ decisions (Type 1 decisions) and their confidence in those decisions (Type 2 decisions) are based on the same information, sensitivity (d’) to a sensory stimulus can be assessed both from decision accuracy and from confidence reports. However, ample empirical evidence indicates that the d’ values derived from these two sources are not equal, suggesting that Type 1 and Type 2 decisions rely on at least partially distinct information. Based on this insight, several studies have explored using a d’ derived independently from Type 2 decisions (meta-d’) to characterize metacognitive performance. The single core algorithm underlying the most popular of these methods is built to follow the logic of standard SDT without explicitly defining a normative framework. By developing a normative generative model of metacognition, and through theoretical analyses and simulations, we found that the core algorithm does not fit the natural extension of the classical SDT-based generative model. It provides correct measures, according to that natural extension, only when no noise is assumed to be added or removed at the confidence stage relative to the decision stage, i.e. when d’ = meta-d’. For example, at a typical value of d’ = 1.16, if meta-d’ deviates from d’ by 10%, the core algorithm will report a deviation of as much as 30%. As a result, using the core algorithm severs the rigorous link between the descriptions of Type 1 and Type 2 decisions, which in turn calls into question the fundamental logic of the M-ratio metric based on meta-d’/d’. In contrast, our analysis also provides a computational method for meta-d’ that restores this link while adhering to the normative generative framework. In conclusion, we identified a significant flaw in the popular method of treating Type 2 decisions and provide a normatively justified algorithm for assessing metacognitive performance.
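To make the Type 1 side of the framework concrete, the following is a minimal sketch of how d’ is classically estimated from decision accuracy under equal-variance Gaussian SDT (signal ~ N(d’, 1), noise ~ N(0, 1), "signal" response when evidence exceeds a criterion). It illustrates only the standard d’ = z(hit rate) − z(false-alarm rate) estimator, not the paper's meta-d’ algorithm; the function name, sample size, and criterion value are illustrative assumptions.

```python
import random
from statistics import NormalDist

random.seed(0)
z = NormalDist().inv_cdf  # inverse standard-normal CDF (the "z-transform")

def estimate_dprime(true_d, n_trials=100_000, criterion=0.0):
    """Simulate Type 1 decisions under equal-variance Gaussian SDT and
    recover d' from hit and false-alarm rates: d' = z(H) - z(FA)."""
    # Signal trials: evidence ~ N(true_d, 1); a hit is "signal" response.
    hits = sum(random.gauss(true_d, 1) > criterion for _ in range(n_trials)) / n_trials
    # Noise trials: evidence ~ N(0, 1); a false alarm is "signal" response.
    fas = sum(random.gauss(0.0, 1) > criterion for _ in range(n_trials)) / n_trials
    return z(hits) - z(fas)

d_hat = estimate_dprime(1.16)  # recovers roughly the generating d'
```

A meta-d’ analysis applies this same z-transform logic to confidence-conditional response rates instead; the abstract's point is that the popular implementation of that extension departs from the normative generative model whenever meta-d’ ≠ d’.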