Optimize inserting data into a Cassandra database through the Python driver


I'm trying to insert 150,000 generated rows into Cassandra using BATCH statements with the Python driver, and it takes approximately 30 seconds. What should I do to optimize this and insert the data faster? Here is my code:

from cassandra.cluster import Cluster
from faker import Faker
import time
fake = Faker()

cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect()
session.default_timeout = 150
num = 0
def create_data():
    global num
    BATCH_SIZE = 1500
    BATCH_STMT = 'BEGIN BATCH'

    for i in range(BATCH_SIZE):
        BATCH_STMT += f" INSERT INTO tt(id, title) VALUES ('{num}', '{fake.name()}')"
        num += 1

    BATCH_STMT += ' APPLY BATCH;'
    prep_batch = session.prepare(BATCH_STMT)
    return prep_batch

tt = []
session.execute('USE ttest_2')

prep_batch = []
print("Start create data function!")
start = time.time()
for i in range(100):
    prep_batch.append(create_data())

end = time.time()
print("Time for create fake data: ", end - start)

start = time.time()

for i in range(100):
    session.execute(prep_batch[i])
    time.sleep(0.00000001)

end = time.time()
print("Time for execution insert into table: ", end - start)

1 Answer

Alex Ott

The main problem is that you're using batches to insert the data - in Cassandra, that's a bad practice (see the documentation for an explanation). Instead, you need to prepare a query and insert the rows one by one - this allows the driver to route each request to a node that owns the data, reducing coordinator overhead and making inserts faster. A runnable version of that loop (reusing the table and Faker setup from the question) looks like this:

prep_statement = session.prepare("INSERT INTO tt(id, title) VALUES (?, ?)")
for i in range(150_000):
    # id is text in the question's schema, so it is passed as a string
    session.execute(prep_statement, [str(i), fake.name()])

Another problem is that you're using the synchronous API - the driver waits for each insert to complete before firing the next one. To speed things up, use the asynchronous API instead (see the same doc for details). See the Developing applications with DataStax drivers guide for a list of best practices, etc.
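As a minimal sketch of that, the driver's cassandra.concurrent module can manage the in-flight asynchronous requests for you; the concurrency value below is an illustrative starting point, not a tuned number:

from cassandra.concurrent import execute_concurrent_with_args

prep_statement = session.prepare("INSERT INTO tt(id, title) VALUES (?, ?)")

# One parameter tuple per row; id is inserted as text, matching the
# question's schema.
params = [(str(i), fake.name()) for i in range(150_000)]

# Keeps up to `concurrency` requests in flight at once instead of
# waiting for each insert to finish before sending the next.
execute_concurrent_with_args(session, prep_statement, params, concurrency=100)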

But really, if you just want to load the database with data, I recommend not re-inventing the wheel, but instead either:

  • generate the data into a CSV file & load it into Cassandra using DSBulk, which is heavily optimized for loading data (a sketch of the CSV step follows this list)
  • use NoSQLBench to generate the data & populate Cassandra - it's also heavily optimized for data generation & loading (not only into Cassandra).
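For the DSBulk route, a minimal sketch of the CSV-generation step might look like the following; the file name fake_data.csv is just an example:

import csv
from faker import Faker

fake = Faker()

# Write 150,000 rows to a CSV file for DSBulk to load.
with open("fake_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "title"])  # header row matching the table's columns
    for i in range(150_000):
        writer.writerow([i, fake.name()])

The file can then be loaded with something like dsbulk load -url fake_data.csv -k ttest_2 -t tt -header true (keyspace and table names taken from the question; check the DSBulk docs for the exact options).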