How much memory does a single, cloned git repo occupy?


After using git for version control purely on my remote server, I am now looking to use git for version control across both my remote and local file systems.

My approach to doing this so far is to:

  1. Create a remote bare repo as a 'save' directory:
  • Create the directory: `mkdir /save`
  • Create a save repo for this project: `mkdir /save/projectName`
  • Enter the project repo (`cd /save/projectName`) and initialise it as bare: `git init --bare`
  2. Clone the remote save repo locally; add, edit and commit; then push back to the remote save:
  • Create a local development directory (`mkdir /webDev`) and enter it (`cd /webDev`)
  • Clone the remote save of this project: `git clone user@host:/save/projectName`
  • Add files and run `git add *`, edit them, then commit: `git commit * -m "Update."`
  • Push changes to the remote save repo with `git push origin master`
  3. Clone the updated save repo into a development repo on the remote machine:
  • Enter the server directory `/srv`, and the development sub-directory `/srv/dev`
  • Clone the remote saved repo with `git clone /save/projectName`
  4. Check the development site works as expected, then repeat (3) for the production directory:
  • Run `git clone /save/projectName` in the production directory `/srv`
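The four steps above can be sketched end-to-end. This is a minimal local simulation: a temporary directory stands in for the filesystem root, so plain paths replace the `user@host:` URLs, and the file name `index.html` is illustrative.

```shell
BASE=$(mktemp -d)   # stands in for the filesystem root used in the steps

# 1. Bare "save" repo (the question's /save/projectName).
git -c init.defaultBranch=master init -q --bare "$BASE/save/projectName"

# 2. Local clone (the question's /webDev); for a real remote the clone
#    URL would be user@host:/save/projectName.
git clone -q "$BASE/save/projectName" "$BASE/webDev/projectName"
cd "$BASE/webDev/projectName"
echo "hello" > index.html
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -qm "Update."
git push -q origin HEAD:master   # explicit refspec, works whatever the local branch is called

# 3. Development clone on the server (the question's /srv/dev).
git clone -q "$BASE/save/projectName" "$BASE/srv/dev/projectName"

# 4. Production clone (the question's /srv).
git clone -q "$BASE/save/projectName" "$BASE/srv/projectName"
```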

This all works fine; however, my concern is the memory taken up by having 3 directories with the same contents, which, on top of that, will grow as 3×N for N projects.

I've read many online tutorials and sites about using git, however I haven't been able to follow any clearly. There is often talk about working with branches but I don't want to think about branches yet - just cloning, pushing and pulling.

Ideally, I would like to have the bare /save repo on my local machine (which has a dynamic IP), and then somehow copy the contents to the remote machine's development and production directories. This would reduce 3 directories per project to 2, which would be better, but I haven't found a way to conveniently git clone from a dynamic IP address.

In summary, there are a few questions I can think of that would address the issue I have:

  1. Do cloned git repositories occupy the same memory as their raw file equivalents? Or does git somehow store the contents more compactly?

  2. Is there an industry-standard way of setting up the local, remote dev and remote prod locations that gets around the memory issue?

  3. Is there a means of hosting the bare git repo on my local machine, and then somehow moving its contents to the remote dev and prod locations?

Any direction with the above questions, or possible misconceptions I may have, along with an explanation, would be appreciated.


1 Answer

VonC

There is no "memory" issue that I am aware of, only a disk-space issue.
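One way to see what a clone actually costs on disk: git keeps the checked-out files plus a compressed, de-duplicated object store under `.git`. A throwaway demo (repo name and file are illustrative):

```shell
tmp=$(mktemp -d)
git -c init.defaultBranch=master init -q "$tmp/demo"
cd "$tmp/demo"
echo "hello" > index.html
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -qm "init"

git count-objects -vH   # sizes of loose and packed objects
du -sh .git             # total repository data on disk
```

For text-heavy projects the object store compresses well, so a clone is typically nowhere near a full extra copy of the project's history.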

The usual workflow involves pushing to a remote repository. By default, the working tree is not modified by a push.

The best practice is a remote bare repository, with a post-receive hook which can be configured to execute any command you need.

For example, going to your actual repository (the dev one or the prod one, depending on the remote) and doing a `git pull`, to update that working tree with the commits you pushed to the bare repository from your local development environment.
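A minimal sketch of that hook, again simulated locally in a temporary directory standing in for the question's `/save`, `/webDev` and `/srv/dev` paths:

```shell
BASE=$(mktemp -d)   # stands in for the filesystem root used in the question

# Bare "save" repo plus an initial commit pushed from the local clone.
git -c init.defaultBranch=master init -q --bare "$BASE/save/projectName"
git clone -q "$BASE/save/projectName" "$BASE/webDev/projectName"
cd "$BASE/webDev/projectName"
echo "v1" > index.html
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -qm "v1"
git push -q origin HEAD:master

# Dev working tree (the question's /srv/dev/projectName).
git clone -q "$BASE/save/projectName" "$BASE/srv/dev/projectName"

# Install the post-receive hook in the bare repo: after every push it
# updates the dev working tree. GIT_DIR must be unset so the pull runs
# against the dev clone rather than the bare repo.
cat > "$BASE/save/projectName/hooks/post-receive" <<EOF
#!/bin/sh
unset GIT_DIR
cd "$BASE/srv/dev/projectName" && git pull -q
EOF
chmod +x "$BASE/save/projectName/hooks/post-receive"

# A second push now updates the dev tree automatically via the hook.
echo "v2" > index.html
git -c user.name=demo -c user.email=demo@example.com commit -qam "v2"
git push -q origin HEAD:master
```

On a real server you would install the same hook at `/save/projectName/hooks/post-receive` and add a second `cd ... && git pull` line for the prod clone once you want it to auto-update too.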

If your dev/prod repositories are on the same remote server (where your bare repository is, and where you are pushing to), then you can use git worktree (that I present here) to keep one actual repository and two checked-out working trees.
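A minimal sketch of the worktree approach, with illustrative paths and branch names. Because a branch can only be checked out in one working tree at a time, the second tree gets its own branch; both trees share a single object store, so the extra tree costs only its checked-out files, not a full repository copy.

```shell
tmp=$(mktemp -d)
git -c init.defaultBranch=master init -q "$tmp/dev"
cd "$tmp/dev"
echo "hello" > index.html
git add index.html
git -c user.name=demo -c user.email=demo@example.com commit -qm "init"

# Add a production working tree on its own branch; it reuses dev's .git.
git worktree add -b prod "$tmp/prod"
git worktree list   # shows both trees backed by the one repository
```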