Let's assume I have N git projects which, combined, define a release repository R. When R passes a sanity test T, we call it a good R; if it fails, we call it a bad R.
I want to come up with a script (and in the future push it to Google's repo project) which generalizes the git bisect mechanism to a repository R defined by N git projects. The aim is to find the latest good R, named best R. VonC suggested a solution with submodules, which is great, but I am looking for a solution/algorithm for a google-repo based repository containing N git projects (e.g. Android).
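For a google-repo (manifest) based checkout there is no parent repo to run `git bisect` in, but the same idea can be simulated: order the release snapshots chronologically and binary-search for the latest good R. A minimal sketch of that algorithm, with hypothetical names and a simulated test callback standing in for T:

```python
from typing import Callable, Sequence, Tuple

# A release R is a snapshot: one commit id per project (N projects).
Snapshot = Tuple[str, ...]

def find_best_r(snapshots: Sequence[Snapshot],
                is_good: Callable[[Snapshot], bool]) -> int:
    """Binary-search an ordered history of snapshots and return the index
    of the latest good R ("best R"). Assumes the first snapshot is good,
    the last is bad, and there is a single good->bad transition (the same
    monotonicity git bisect assumes)."""
    lo, hi = 0, len(snapshots) - 1   # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # A real script would `repo sync` each project to snapshots[mid]
        # here and run the sanity test T; is_good stands in for that.
        if is_good(snapshots[mid]):
            lo = mid
        else:
            hi = mid
    return lo

# Simulated history of 8 releases over N=2 projects; the regression
# lands at release index 5 (purely illustrative data).
history = [(f"a{i}", f"b{i}") for i in range(8)]
best = find_best_r(history, lambda snap: history.index(snap) < 5)
print(best)  # latest good R is index 4
```

The callback is the only release-system-specific piece: it checks out one snapshot and runs T, so the same loop works whether the N projects are pinned by a repo manifest or by submodule pointers.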
I don't think a special script is necessary at first:

If those N git repos are referenced together in a parent repo as submodules, you can go back in the history of that parent repo and get back all N repos as they were versioned at the time.

Apply your git-bisect to that parent repo, and make sure your test T takes advantage of the sources of the N repos in the N sub-directories of that main repo.

That won't be as precise as a bisect done directly in the faulty submodule, but it can certainly help narrow the search.
As in this blog post, your test T might have to run a sub-git bisect in each submodule.
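That nested bisect can reuse the commit ids the parent history already narrowed down; a sketch, with the shas and `run-test.sh` as hypothetical placeholders:

```shell
# Inside the suspect submodule, bisect between the commits pinned by
# the last good and first bad parent commits.
cd path/to/suspect-submodule
git bisect start <bad-sha> <good-sha>
git bisect run ./run-test.sh   # run-test.sh stands in for T
git bisect reset
```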