I'm trying to find the best parameters for the UMAP (dimensionality reduction) model together with HistGradientBoostingClassifier.
The loop I have created is:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.ensemble import HistGradientBoostingClassifier
import umap

vectorizer = TfidfVectorizer(use_idf=True, max_features=6000)
corpus = list(df['comment'])
X = vectorizer.fit_transform(corpus)
y = df['CONTACT']

n_componentes = [2, 10, 20, 40, 60, 80, 100, 150, 200]
for component in n_componentes:
    # reduce the TF-IDF matrix to `component` dimensions
    reducer = umap.UMAP(metric='cosine', n_components=component)
    embedding = reducer.fit_transform(X)
    print(f"Components: {embedding.shape}")

    X_train, X_test, y_train, y_test = train_test_split(
        embedding, y, test_size=0.2, random_state=123, stratify=y)

    clf = HistGradientBoostingClassifier()
    n_iter_search = 20
    random_search = RandomizedSearchCV(clf,
                                       param_distributions=parameters,  # `parameters` is defined earlier (not shown)
                                       n_iter=n_iter_search,
                                       scoring='accuracy',
                                       random_state=123)
    random_search.fit(X_train, y_train)
    print(f"Best parameters: {random_search.best_params_}")
    print(f"Best CV accuracy: {random_search.best_score_}")
The run time is 4 hours and it only completes one iteration of the loop. Can you suggest a more optimized way to perform this task? Thank you!
If I were you, I would wrap UMAP and the classifier in a single Pipeline so that "n_components" can go into the search parameters, and set n_jobs=-1 to leverage multiprocessing; see the sketch below.
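A minimal sketch of that idea, assuming X and y are the TF-IDF matrix and labels from your code. The clf__* ranges below are illustrative placeholders, not your actual `parameters` dict; prefix your own entries with the pipeline step names. RandomizedSearchCV then tunes umap__n_components together with the classifier's parameters in one search, and n_jobs=-1 runs the CV fits in parallel:

from sklearn.pipeline import Pipeline
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.ensemble import HistGradientBoostingClassifier
import umap

# split the raw TF-IDF features once; UMAP is now fitted inside the CV loop
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123, stratify=y)

pipe = Pipeline([
    ('umap', umap.UMAP(metric='cosine')),
    ('clf', HistGradientBoostingClassifier()),
])

# illustrative search space -- replace the clf__* entries with your own
param_distributions = {
    'umap__n_components': [2, 10, 20, 40, 60, 80, 100, 150, 200],
    'clf__learning_rate': [0.01, 0.05, 0.1, 0.2],
    'clf__max_leaf_nodes': [15, 31, 63],
}

random_search = RandomizedSearchCV(
    pipe,
    param_distributions=param_distributions,
    n_iter=20,
    scoring='accuracy',
    n_jobs=-1,        # parallelize the CV fits across all cores
    random_state=123,
)
random_search.fit(X_train, y_train)
print(f"Best parameters: {random_search.best_params_}")
print(f"Best CV accuracy: {random_search.best_score_}")

As a side benefit, this removes the leakage in the original loop, where UMAP was fitted on the full dataset before the train/test split. Refitting UMAP per CV fold is more work per candidate, but Pipeline's memory argument can cache fitted transformers if that becomes a bottleneck.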