I've recently implemented a system where I execute end-to-end Playwright tests through an API call to my server. The call triggers a Python script that performs browser-based tests using Playwright. Everything works well in my local environment.
The setup is fairly simple: I have a Flask web server that spawns a subprocess to run another Python file containing the e2e test.
app.py
```python
from flask import Flask, request, jsonify
from flask_cors import CORS
import subprocess

app = Flask(__name__)
CORS(app)

@app.route('/execute-hook', methods=['POST'])
def run_test():
    url = request.json.get('url')  # the JSON payload contains the target URL
    # Each CLI argument must be its own list element; passing '--browser chrome'
    # as a single string would be treated as one (invalid) argument.
    result = subprocess.run(
        ['python3', 'auto_confirm_appointment.py', url, '--browser', 'chrome'],
        capture_output=True, text=True,
    )
    # The test script signals the outcome via its exit code:
    # 0 -> the slot was gone, 1 -> the appointment was scheduled.
    message = "The time slot is no longer available" if result.returncode == 0 else "The appointment is scheduled"
    return jsonify({"message": message, "success": result.returncode == 1})

if __name__ == '__main__':
    app.run(debug=True)
```
auto_confirm_appointment.py
```python
# Omitting the full code as it is unrelated; for the sake of the question,
# the script opens an external link and performs various browser operations.
```
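Since the full test script is omitted, here is a minimal hypothetical sketch of what `auto_confirm_appointment.py` might look like; the argument names and flow are assumptions for illustration, not the real code:

```python
# Hypothetical sketch of auto_confirm_appointment.py (the real script is not
# shown in the question; names and structure here are assumptions).
import argparse
import sys


def parse_args(argv):
    """Parse the URL and --browser flag passed by app.py's subprocess call."""
    parser = argparse.ArgumentParser()
    parser.add_argument("url")                        # target page to test
    parser.add_argument("--browser", default="chromium")
    return parser.parse_args(argv)


def run(argv):
    args = parse_args(argv)
    # The real script would drive the browser here, e.g. with Playwright's
    # sync API:
    #   from playwright.sync_api import sync_playwright
    #   with sync_playwright() as p:
    #       browser = p.chromium.launch()
    #       page = browser.new_page()
    #       page.goto(args.url)
    #       ... attempt to confirm the appointment ...
    # The exit code reports the outcome back to app.py
    # (0 = slot no longer available, 1 = appointment scheduled).
    return 0


# Entry point when invoked by app.py's subprocess.run call:
# if __name__ == "__main__":
#     sys.exit(run(sys.argv[1:]))
```

Note that this is also why the browser flag has to be split into two list elements in the `subprocess.run` call: `argparse` sees `--browser chrome` as a single unknown argument, but handles `--browser` and `chrome` as a flag and its value.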
I'm considering deploying this solution in a server or serverless environment, but I'm unsure whether this approach is the right one. Are there any potential issues or challenges I should be aware of when running end-to-end tests in a server/serverless environment? Also, as traffic increases, the number of API calls and tests will rise as well. How can I effectively manage this scalability challenge?
Your insights and experiences would be greatly appreciated!