I asked annotators to give binary ratings to multiple items; the objective is to calculate inter-rater agreement on a single item, for each rating category (0 and 1). Can Krippendorff's alpha be used for this? I could not find any resource on applying Krippendorff's alpha to a single item. A similar question (https://stats.stackexchange.com/questions/164965/raters-agreement-for-each-item) has an answer, but it uses simple percentage agreement, which I cannot use for an academic publication.
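
For concreteness, here is a sketch of my setup and of how I compute the overall alpha, using the Python krippendorff package; the example data and variable names are only illustrative, and the single-item part at the end is exactly what I am unsure about:

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows = annotators, columns = items; binary ratings, np.nan for a missing rating.
# (Toy data for illustration only.)
ratings = np.array([
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 1, 1, 0, np.nan],
], dtype=float)

# Overall agreement across all items is straightforward:
overall_alpha = krippendorff.alpha(reliability_data=ratings,
                                   level_of_measurement="nominal")
print(overall_alpha)

# What I actually want is a per-item number, e.g. for item 2 alone.
# Restricting the reliability data to one column is what I mean by
# "alpha on a single item", but I am not sure the statistic is even
# well defined in that case:
single_item = ratings[:, [2]]  # shape (n_annotators, 1)
# krippendorff.alpha(reliability_data=single_item,
#                    level_of_measurement="nominal")  # is this meaningful?
```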