I am trying to extract data from MySQL to GCS in Parquet format using Cloud Data Fusion; however, when running or validating the pipeline I get the following error:
Error encountered while configuring the stage: 'Unable to create config for validatingOutputFormat parquet Required property 'schema' is missing.'
The error asks for a schema; however, this does not happen with the other formats, which extract the data smoothly. Since it is a multi-file extract, I am unsure which schema I need to define.

Below is my answer, based on the comments and the details discussed above:
The 'Allow flexible schemas in Output' parameter works only with the Avro, JSON, and CSV file formats.
An Output Schema is required for the Avro, Parquet, and ORC formats, so it would have to be either a union schema of all tables or a separate schema referenced by each table name.
As you mentioned, neither case is manageable; it would be hell for the developer.
Additionally, this is already a known issue on the plugin's issue tracker: https://cdap.atlassian.net/browse/PLUGIN-1139
I would suggest one of the following approaches to achieve what you want:
If Data Fusion is a must, you can use the PySpark or Spark plugins in Data Fusion and run one of the ready-made templates to extract data from MySQL to GCS; a minimal sketch of the core extraction logic is shown below.
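For illustration, this is roughly what that PySpark job boils down to. The host, database, table, user, password, and bucket values are placeholders, and the MySQL JDBC driver jar is assumed to be available on the cluster; Spark infers the schema from the database, so no manual Output Schema is needed:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-to-gcs-parquet").getOrCreate()

# Read the source table over JDBC; Spark derives the schema from MySQL,
# so nothing has to be declared by hand.
df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://<HOST>:3306/<DATABASE>")
    .option("driver", "com.mysql.cj.jdbc.Driver")
    .option("dbtable", "<TABLE>")
    .option("user", "<USER>")
    .option("password", "<PASSWORD>")
    .load()
)

# Write the result to GCS as Parquet.
df.write.mode("overwrite").parquet("gs://<BUCKET>/<PATH>/<TABLE>/")

spark.stop()
```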
If Data Fusion is optional and you are already using Cloud Composer, you can still use the above-mentioned template with Dataproc Serverless, or with a Dataproc cluster that you create for this job and shut down afterwards; a DAG sketch follows below. This would save you a lot in the long run, as you would not pay for Data Fusion; you would only pay for Cloud Composer (which you already have) and Dataproc (which you were already going to pay for even when using Data Fusion), with the ability to shut it down.
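As a rough sketch of that approach (assuming the PySpark script above has been uploaded to GCS, and with project, region, and bucket names as placeholders), a Composer DAG could submit the job as a Dataproc Serverless batch:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocCreateBatchOperator

with DAG(
    dag_id="mysql_to_gcs_parquet",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_mysql_to_gcs = DataprocCreateBatchOperator(
        task_id="extract_mysql_to_gcs",
        project_id="<PROJECT_ID>",
        region="<REGION>",
        batch_id="mysql-to-gcs-{{ ds_nodash }}",
        batch={
            "pyspark_batch": {
                # The extraction script shown earlier, staged in GCS.
                "main_python_file_uri": "gs://<BUCKET>/scripts/mysql_to_gcs_parquet.py",
                # MySQL JDBC driver jar made available to the serverless batch.
                "jar_file_uris": ["gs://<BUCKET>/jars/mysql-connector-j-8.0.33.jar"],
            },
        },
    )
```

Since the batch is serverless, there is no cluster to keep running or tear down between loads.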