I am using the Office365-REST-Python-Client library to upload some relatively large CSV files to a SharePoint document library from an `io.BytesIO` instance. I do this by passing the byte array to the following method:
```python
import io

from office365.sharepoint.files.file import File
from office365.sharepoint.folders.folder import Folder


def write_file_bytes(self, relative_url: str, file_name: str, file_bytes: bytes) -> None:
    folder: Folder = self.client_context.web.get_folder_by_server_relative_url(relative_url)
    chunk_size: int = 1024 * 1024 * 15  # 15 MB chunks
    # Wrap the raw bytes in an in-memory stream
    stream: io.BytesIO = io.BytesIO(file_bytes)
    folder.files.create_upload_session(stream, chunk_size=chunk_size, file_name=file_name).execute_query()
```
Based on this StackOverflow question, writing the file from an `io.BytesIO` instance is indeed possible, but the `file_name` and `file_size` should be passed as keyword arguments alongside the `chunk_uploaded` callback. However, even when specifying a callback that takes the file size as an argument, I still get an `io.UnsupportedOperation: fileno` exception.
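For context, the exception itself is not specific to the SharePoint library: an `io.BytesIO` object lives purely in memory and has no underlying OS file descriptor, so any code path that calls `fileno()` on it will fail. A minimal check, independent of Office365-REST-Python-Client:

```python
import io

buf = io.BytesIO(b"example data")
try:
    # BytesIO is an in-memory buffer; there is no OS-level file
    # descriptor behind it, so fileno() cannot succeed.
    buf.fileno()
except io.UnsupportedOperation as exc:
    print(exc)  # → fileno
```

This suggests the upload-session code path expects a real file object (one backed by a descriptor), which is consistent with the temp-file workaround below.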
Uploading the file from either a byte array or an `io.BytesIO` instance is necessary due to the nature of what I am doing, so I unfortunately cannot specify a local path to the file.
When performing a simple upload using the following:

```python
folder.upload_file(file_name, file_bytes).execute_query()
```

everything works as expected, but this approach is limited to a file size of 4.0 MB, which is unfortunately too small for my needs.
My solution was to make use of a temporary file, open it as a `BufferedReader` via a context manager, and write it to the relevant document library on SharePoint. This was implemented as follows: