git
Even though I have to admit that git has a steep learning curve, it’s worth the time.
I started with the Pro Git book and use git today for a wide range of things:
- source code
- the images from my camera
- /etc, in conjunction with etckeeper
Here is an example of a .gitconfig file for your home directory:
```ini
[user]
    name = Your Name
    email = your@emailaddress
[core]
    excludesfile = /home/uid/.gitignore_global
[diff]
    tool = meld
[merge]
    tool = meld
[color]
    ui = true
[diff "exif"]
    textconv = exiftool
```
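The `diff "exif"` driver above only kicks in for files that are mapped to it via gitattributes. A minimal way to wire it up, assuming your photos are JPEGs, run inside the photo repository:

```sh
# map JPEG files to the "exif" diff driver from the .gitconfig above,
# so git diff shows exiftool output instead of "binary files differ"
echo '*.jpg diff=exif' >> .gitattributes
```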
And a recommendation for your .gitignore_global:
```
# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so
*.aux

# Packages #
############
# it's better to unpack these files and commit the raw source
# git has its own built in compression methods
*.7z
*.dmg
*.gz
*.iso
*.jar
*.rar
*.tar
*.zip

# Logs and databases #
######################
*.log
*.sql
*.sqlite

# OS generated files #
######################
.DS_Store*
ehthumbs.db
Icon?
Thumbs.db
.ViperDB

# editor generated backupstuff
*~
*.bak
*.old
```
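If you ever wonder why git ignores a certain file, it can tell you which rule matched. A quick sketch; the path is just an example:

```sh
# print the ignore file, line number and pattern matching this path
git check-ignore -v build/app.exe
```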
If you really care about your data, you should add these lines to the config file of your repositories:
```ini
[receive]
    fsckObjects = true
```
Even though it costs a couple of CPU cycles, you can be sure that no data got mangled during the transfer.
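You can set this from the command line as well. If I read the docs right, transfer.fsckObjects enables the check for both directions (fetch and receive) in one go:

```sh
# verify incoming objects when this repository receives a push
git config receive.fsckObjects true

# or enable the check for fetch and receive at once, in all repositories
git config --global transfer.fsckObjects true
```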
git can be used like RCS in every directory: just run git init and you are ready. Later you can decide to push it to a central repository. Your central repository is not reachable from your server? No problem! Use it the other way around and configure a cron job on the central server that pulls this directory from your server.
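A sketch of that workflow; all host names, paths and the branch name are placeholders:

```sh
# turn any directory into a repository, RCS style
cd /some/directory
git init
git add .
git commit -m "initial import"

# later: attach a central repository and push, if it is reachable
git remote add origin central.example.com:/srv/git/directory.git
git push -u origin master

# if the central server cannot be reached from this machine, reverse
# the direction: a cron job on the central server pulls periodically
#   */30 * * * * cd /srv/git/directory && git pull server.example.com:/some/directory
```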
git easily handles even huge repositories. One issue I haven’t figured out yet is common to all other version control systems as well: your ISP disconnects you after some hours of transfer and you have to start the clone all over. As git clone is only a wrapper around some other git commands, I will probably have to take a closer look to find a workaround.
Probably something like git pack-objects in conjunction with rsync.
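A hedged sketch of how that could look, using git bundle (which builds on the same pack machinery as git pack-objects): the whole history goes into a single file that rsync can transfer resumably. Host and path names are invented:

```sh
# on the server: pack all refs and their history into one file
git bundle create /tmp/repo.bundle --all

# on the client: transfer the file; --partial keeps interrupted
# transfers around, so re-running the command resumes instead of
# starting over
rsync --partial --progress user@server.example.com:/tmp/repo.bundle .

# once the file is complete, clone from it like from any other repository
git clone repo.bundle repo
```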