This paper examines whether data or identity should take priority in the era of automated decision-making, focusing on the role of algorithms in shaping human identity. Big data-driven algorithms now function not merely as classification tools but as environmental conditions that interpret human lives and construct identities. Extant discourse on explainable AI (XAI), however, has centered predominantly on functional and technical criteria such as transparency and predictive accuracy, overlooking the role of technology as a co-author of narrative identity. Drawing upon Paul Ricoeur's concept of narrative identity and Coeckelbergh and Reijers's theory of techno-narrative co-authorship, this study reinterprets identity protection as a narrative and relational issue. The findings indicate that the provision of algorithmic explanations engenders a form of narrative power, and thus risks confining human identity within technical interpretive patterns. Consequently, this paper argues that XAI must be redefined as a mechanism that guarantees narrative authority, allowing individuals to contest and revise algorithmic interpretations. This is particularly salient in an age of increasing technological intimacy, exemplified by recommendation systems and conversational AI, in which explainability should function as an ethical institution that empowers users to co-author their identity alongside technology. By redefining privacy not as a matter of ownership or control but as a right to self-interpretation, this research establishes a novel philosophical foundation for future information ethics and technological design.