How should Trains be used with hyper-param optimization tools like RayTune?

What could be a reasonable setup for this? Can I call Task.init() multiple times in the same execution?
Disclaimer: I'm part of the allegro.ai Trains team
One solution is to inherit from trains.automation.optimization.SearchStrategy and extend the functionality. This is similar to the Optuna integration, where Optuna is used for the Bayesian optimization and Trains does the hyper-parameter setting, launching experiments, and retrieving performance metrics.
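For illustration, here is a rough skeleton of such a strategy. It assumes the create_job() / helper_create_job() hooks that recent trains versions expose on SearchStrategy (the built-in grid/random search strategies override the same hook); the RayTuneStrategy class and the _next_suggestion() helper are hypothetical placeholders, so treat this as a sketch rather than a drop-in implementation:

```python
from trains.automation.optimization import SearchStrategy


class RayTuneStrategy(SearchStrategy):
    """Hypothetical strategy that lets an external optimizer (e.g. Ray Tune)
    suggest the next hyper-parameter configuration."""

    def create_job(self):
        # Ask the external optimizer for the next configuration to try.
        suggested_params = self._next_suggestion()  # hypothetical helper
        if suggested_params is None:
            return None  # no more configurations -> optimization is done
        # Clone the template experiment with the suggested hyper-parameters.
        return self.helper_create_job(
            base_task_id=self._base_task_id,
            parameter_override=suggested_params,
        )

    def _next_suggestion(self):
        # Wire this to Ray Tune's search algorithm of choice.
        raise NotImplementedError
```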
Another option (not scalable, but probably easier to start with) is to have Ray Tune run your code (setting up the environment / git repo / docker etc. is obviously on the user), and have your training code look something like this:
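A minimal sketch, assuming hparam is the hyper-parameter dictionary Ray Tune passes into the trainable function (the project and task names are placeholders):

```python
from trains import Task


def train_func(hparam):
    # Each invocation starts a fresh Trains experiment; reuse_last_task_id=False
    # makes sure the previous trial's task is not overwritten.
    task = Task.init(project_name='hp-optimization',  # placeholder project name
                     task_name='raytune-trial',       # placeholder task name
                     reuse_last_task_id=False)

    # Register the dictionary as the experiment's hyper-parameters.
    task.connect(hparam)

    # ... your actual training loop goes here, reporting metrics as usual ...
```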
This means that every time Ray Tune executes the script, a new experiment is created with a new set of hyper-parameters (assuming hparam is a dictionary, it will be registered on the experiment as hyper-parameters).