Let's assume I have N git projects which, combined together, define a release repository R.
When R passes a sanity test T we call it a good R, and if it fails we call it a bad R.
I want to come up with a script, and in the future push it to google repo, which generalizes the git bisect mechanism for a repository R defined by N git projects.
The aim is to find the latest good R, named the best R. VonC suggested a solution with submodules, which is great, but I am looking for a solution/algorithm for a google-repo-based repository containing N git projects (e.g. Android).
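For concreteness, here is a minimal sketch of the kind of script I have in mind, assuming each release R was captured as a pinned manifest snapshot (repo manifest -r -o snapshot-<i>.xml) committed to the manifest repository, and that run_test.sh is a placeholder for the sanity test T (exit 0 = good R):

```python
#!/usr/bin/env python3
# Minimal sketch, under the assumptions above: snapshot manifests live in
# the manifest repository and ./run_test.sh implements T (exit 0 = good R).
import subprocess
import sys

def sync_to(manifest_name):
    # Point the workspace at the pinned manifest, then sync all N projects.
    subprocess.run(["repo", "init", "-m", manifest_name], check=True)
    subprocess.run(["repo", "sync", "-d"], check=True)

def is_good(test_cmd="./run_test.sh"):
    # The sanity test T: exit code 0 means this R is good.
    return subprocess.run([test_cmd]).returncode == 0

def latest_good(manifests):
    # manifests is time-ordered; the first is assumed good, the last bad.
    lo, hi = 0, len(manifests) - 1          # invariant: lo is good, hi is bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        sync_to(manifests[mid])
        if is_good():
            lo = mid                        # good: the best R is at mid or later
        else:
            hi = mid                        # bad: the best R is before mid
    return manifests[lo]                    # the latest good R ("best R")

if __name__ == "__main__":
    # Usage: python3 bisect_releases.py snapshot-001.xml snapshot-002.xml ...
    print("best R:", latest_good(sys.argv[1:]))
```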
I don't think a special script is necessary at first:
If those N git repos are referenced together in a parent repo as submodules, you can go back in the history of that parent repo and get back all N repos as they were versioned at the time.
Apply your git bisect on that parent repo, and make sure your test T takes advantage of the sources of the N repos in the N sub-directories of that main repo. That won't be as precise as a bisect done directly in the faulty submodule, but it can certainly help narrow the search.
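As an illustration, a small helper usable with git bisect run on that parent repo could look like the sketch below (bisect_step.py and run_test.sh are placeholder names, not something from your setup):

```python
#!/usr/bin/env python3
# Sketch of a helper for `git bisect run` on the parent repo, e.g.:
#   git bisect start <bad-parent-sha> <good-parent-sha>
#   git bisect run ./bisect_step.py
# ./run_test.sh is a placeholder for the sanity test T.
import subprocess
import sys

def main():
    # For each parent commit git bisect checks out, bring the N submodules
    # to the exact SHAs the parent repo recorded at that point in history.
    subprocess.run(["git", "submodule", "update", "--init", "--recursive"],
                   check=True)
    # The exit code goes straight back to `git bisect run`:
    # 0 = good, 125 = skip, any other value from 1 to 127 = bad.
    return subprocess.run(["./run_test.sh"]).returncode

if __name__ == "__main__":
    sys.exit(main())
```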
As in this blog post, your test T might have to run a sub-git bisect in each submodule.
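A hedged sketch of that nested bisect, assuming the parent-level bisect has already given you the last good and first bad parent commits (the submodule path, commit names and test path below are illustrative placeholders):

```python
#!/usr/bin/env python3
# Sketch of the nested "sub-git bisect": once the parent-level bisect gives
# the last good and first bad parent commits, bisect inside one submodule
# between the submodule SHAs those two parent commits recorded.
import subprocess

def submodule_sha(parent_commit, submodule_path):
    # `git rev-parse <commit>:<path>` yields the submodule SHA (gitlink)
    # recorded by that parent commit.
    out = subprocess.run(["git", "rev-parse", f"{parent_commit}:{submodule_path}"],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()

def sub_bisect(submodule_path, good_parent, bad_parent, test_cmd):
    good_sha = submodule_sha(good_parent, submodule_path)
    bad_sha = submodule_sha(bad_parent, submodule_path)
    # Run a regular git bisect inside the submodule, reusing test T.
    for args in (["bisect", "start", bad_sha, good_sha],
                 ["bisect", "run", test_cmd],
                 ["bisect", "reset"]):
        subprocess.run(["git", "-C", submodule_path] + args, check=True)

if __name__ == "__main__":
    # Placeholder submodule path, parent commits, and path to T as seen
    # from inside the submodule.
    sub_bisect("projects/libfoo", "HEAD~1", "HEAD", "../../run_test.sh")
```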