UTF-8 encoded file is picked by chardetect as ASCII


I am writing a single file that combines all the files present inside a folder. I want the resulting text file to be UTF-8 encoded. My code is as follows:

import os
import codecs
import re

def file_concatenation(path):
    # Write the combined corpus as UTF-8
    with codecs.open('C:/Users/JAYASHREE/Documents/NLP/text-corpus.txt', 'w', encoding='utf8') as outfile:
        for root, dirs, files in os.walk(path):
            for dir_name in dirs:
                dir_path = os.path.join(root, dir_name)
                for fname in os.listdir(dir_path):
                    # Read each source file as UTF-8 as well
                    with open(os.path.join(dir_path, fname), encoding='utf8') as infile:
                        for line in infile:
                            # Keep only ASCII letters, then collapse runs of whitespace
                            new_line = re.sub('[^a-zA-Z]', ' ', line)
                            outfile.write(re.sub(r'\s\s+', ' ', new_line.lstrip()))

file_concatenation('C:/Users/JAYASHREE/Documents/NLP/bbc-fulltext/bbc')

When I use chardetect to find the encoding, it reports ASCII with confidence 1.0:

C:\Users\JAYASHREE>chardetect "C:/Users/JAYASHREE/Documents/NLP/text-corpus.txt"
C:/Users/JAYASHREE/Documents/NLP/text-corpus.txt: ascii with confidence 1.0
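Note that this result is consistent with what the code writes: the substitution [^a-zA-Z] removes every non-ASCII character, so the output file contains only ASCII bytes, and pure-ASCII text is byte-for-byte identical whether encoded as ASCII or UTF-8. A quick check:

```python
# Pure-ASCII text has identical bytes under the ASCII and UTF-8 codecs,
# so a byte-level detector like chardetect cannot tell the two apart.
text = 'only ASCII letters and spaces survive the regex'
assert text.encode('ascii') == text.encode('utf-8')
```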

Kindly resolve the issue. Thanks


There is 1 answer

user2722968

Use encoding='utf-8-sig' to force a BOM at the start of the file. It should get picked up by chardetect.
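As a minimal sketch (using a hypothetical temp file, not the asker's corpus path): writing with utf-8-sig prepends the UTF-8 byte order mark EF BB BF, which gives the detector a concrete UTF-8 signature to match:

```python
import codecs
import os
import tempfile

# Hypothetical demo file; 'utf-8-sig' writes the UTF-8 BOM before the text.
path = os.path.join(tempfile.gettempdir(), 'bom-demo.txt')
with open(path, 'w', encoding='utf-8-sig') as f:
    f.write('plain ascii text')

# Read the raw bytes back: the file now starts with EF BB BF.
with open(path, 'rb') as f:
    raw = f.read()

print(raw[:3] == codecs.BOM_UTF8)  # True
```

Reading the file back with encoding='utf-8-sig' strips the BOM again, so downstream consumers won't see a stray \ufeff at the start of the text.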