How should Trains be used with hyper-param optimization tools like RayTune?

What could be a reasonable setup for this? Can I call Task.init() multiple times in the same execution?
Asked by Michael Litvin · 164 views · 1 answer
Disclaimer: I'm part of the allegro.ai Trains team
One solution is to inherit from trains.automation.optimization.SearchStrategy and extend the functionality. This is similar to the Optuna integration, where Optuna performs the Bayesian optimization while Trains handles setting the hyper-parameters, launching the experiments, and retrieving their performance metrics.
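As a rough illustration of that first approach: the `SearchStrategy` base class is the one named above, but the method name and the flow below are a hypothetical sketch of how such an adapter could be organized, not the actual interface — check the installed version of the `trains.automation` module before building on it.

```python
# Hypothetical skeleton: only the base-class import comes from the answer above;
# the method name and steps are illustrative, not the real SearchStrategy API.
try:
    from trains.automation.optimization import SearchStrategy
except ImportError:
    SearchStrategy = object  # trains not installed; stand-in so the sketch loads

class RayTuneSearchStrategy(SearchStrategy):
    """Adapter idea: RayTune proposes configurations, Trains runs them."""

    def run_step(self):
        # 1. ask RayTune's search algorithm for the next hyper-parameter set
        # 2. clone a template Trains task, override its parameters, enqueue it
        # 3. poll the running task for its reported objective metric
        # 4. feed that metric back to RayTune so it can propose the next trial
        pass
```

The point of the subclass is that Trains already knows how to clone, enqueue, and monitor experiments, so the strategy only has to translate between RayTune's suggestions and Trains task parameters.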
Another option (not scalable, but probably easier to start with) is to have RayTune run your code (setting up the environment / git repo / docker etc. is up to the user), and have your training code look something like:
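A minimal sketch of such a training script — the project/task names are illustrative, and `reuse_last_task_id=False` is assumed here as the way to force a fresh task per invocation:

```python
# Sketch only: project_name / task_name values are illustrative.
try:
    from trains import Task          # newer releases: from clearml import Task
except ImportError:
    Task = None                      # trains not installed; definition-only sketch

def train_fn(hparam):
    """Trainable run by RayTune; `hparam` is the trial's parameter dict."""
    # reuse_last_task_id=False asks for a fresh experiment on every invocation,
    # so each RayTune trial shows up as its own task in the Trains UI.
    task = Task.init(
        project_name="raytune-demo",
        task_name="trial",
        reuse_last_task_id=False,
    )
    # Registering the dict records every entry as an experiment hyper-parameter.
    hparam = task.connect(hparam)
    # ... actual training loop using hparam goes here ...
```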
This means that every time RayTune executes the script, a new experiment will be created with a new set of hyper-parameters (assuming `hparam` is a dictionary, it will be registered on the experiment as hyper-parameters).