Backup for important files/pictures?
from frosch@sh.itjust.works to selfhosted@lemmy.world on 06 Jun 06:56
https://sh.itjust.works/post/39568776

How do you back up important things you store in self-hosted clouds?

I’m currently thinking about hosting Ente myself to sync all my pictures, and maybe also spinning up Nextcloud for various other shared files. However, for me one main benefit of using services like iCloud is the reduced risk of losing everything if the hardware fails (or fire, theft, water damage, …).

Do you take regular backups of your hosted services? Do you keep really important stuff with other providers? Do you have other failsafes?

#selfhosted


sznowicki@lemmy.world on 06 Jun 07:04 next collapse

Borg + Hetzner backup storage (which supports both Borg and rsync, but I use Borg so my backups are encrypted)
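
Roughly what that looks like in practice, as a minimal sketch driving the borg CLI from Python; the storage-box user, host and repository path are placeholders for your own:

```python
#!/usr/bin/env python3
"""Sketch: encrypted Borg backup to a Hetzner storage box (user/host/repo are placeholders)."""
import os
import subprocess

# Hetzner storage boxes accept SSH on port 23; the repo must have been
# created once beforehand with `borg init --encryption=repokey <REPO>`.
REPO = "ssh://u123456@u123456.your-storagebox.de:23/./backups/borg"
SOURCES = ["/home", "/etc", "/srv/photos"]

env = {**os.environ, "BORG_PASSPHRASE": "change-me"}  # better: read from a key file

# New encrypted, compressed archive named after the host and timestamp.
subprocess.run(
    ["borg", "create", "--stats", "--compression", "lz4",
     f"{REPO}::{{hostname}}-{{now}}", *SOURCES],
    env=env, check=True,
)

# Thin out old archives so the repo does not grow forever.
subprocess.run(
    ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
     "--keep-monthly", "6", REPO],
    env=env, check=True,
)
```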

undefined@lemmy.hogru.ch on 06 Jun 07:06 next collapse

Storj Tardigrade with client-side encryption. I use rclone so you could even encrypt it before hitting the Storj library if you’re extra paranoid (among other things like caching, chunking, etc).
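
A rough sketch of that kind of sync, assuming an rclone crypt remote (here called storj-crypt, layered over a Storj remote) has already been created with rclone config:

```python
#!/usr/bin/env python3
"""Sketch: sync a local folder to Storj through an rclone crypt remote (remote names are assumptions)."""
import subprocess

LOCAL = "/srv/photos"
# "storj-crypt" is a crypt remote wrapping a Storj remote, so files are
# encrypted client-side before rclone hands them to the Storj backend.
DEST = "storj-crypt:backups/photos"

subprocess.run(
    ["rclone", "sync", LOCAL, DEST,
     "--transfers", "8",   # parallel uploads
     "--progress"],        # live transfer stats
    check=True,
)
```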

Im_old@lemmy.world on 06 Jun 07:15 next collapse

Borg from the server to the NAS, AWS Glacier from the NAS to off-site storage.

Shimitar@downonthestreet.eu on 06 Jun 07:33 next collapse

Restic + Backrest and the 3-2-1 rule.

One local backup on an external drive attached to my server. A second backup on another disk connected to a WiFi AP in the house.

A third, off-site backup copy on my VPS.

All done by restic.
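
Reading "rest" as restic, the three-copy layout boils down to something like this sketch; the repository paths are placeholders, and Backrest schedules roughly this loop for you:

```python
#!/usr/bin/env python3
"""Sketch: the same data pushed to three restic repositories (paths are placeholders)."""
import os
import subprocess

SOURCES = ["/home", "/srv/data"]
REPOS = [
    "/mnt/external/restic",             # copy 1: external drive on the server
    "sftp:backup@wifi-ap.lan:/restic",  # copy 2: disk behind the WiFi AP
    "sftp:me@vps.example.com:/restic",  # copy 3: off-site, on the VPS
]

env = {**os.environ, "RESTIC_PASSWORD": "change-me"}  # or RESTIC_PASSWORD_FILE

for repo in REPOS:
    # Each repository must have been created once with `restic -r <repo> init`.
    subprocess.run(["restic", "-r", repo, "backup", *SOURCES], env=env, check=True)
    subprocess.run(["restic", "-r", repo, "forget", "--prune",
                    "--keep-daily", "7", "--keep-weekly", "4", "--keep-monthly", "6"],
                   env=env, check=True)
```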

iii@mander.xyz on 06 Jun 07:38 next collapse

I have 5 copies of all my files on 5 devices, synced using syncthing with staggered file versioning. 2 of those are with friends and family who let me put a thin client at their place.

To protect against me misconfiguring syncthing, or some bug deleting all copies, every 3 months I manually make a copy and put it on a hard drive in a fire-resistant safe.
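
The quarterly copy itself does not need anything fancy; a plain copy-and-verify script along these lines (paths are placeholders) is enough, since the drive then sits disconnected in the safe:

```python
#!/usr/bin/env python3
"""Sketch: copy the synced data to an offline drive and spot-check the result (paths are placeholders)."""
import filecmp
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path("/data/synced")
DEST = Path("/mnt/offline-drive") / f"snapshot-{date.today():%Y-%m-%d}"


def differences(cmp: filecmp.dircmp) -> list[str]:
    """Recursively collect entries that are missing or differ (shallow size/mtime check)."""
    issues = cmp.left_only + cmp.right_only + cmp.diff_files + cmp.funny_files
    for sub in cmp.subdirs.values():
        issues += differences(sub)
    return issues


# Full copy into a dated folder on the offline drive.
shutil.copytree(SOURCE, DEST)

problems = differences(filecmp.dircmp(SOURCE, DEST))
print("copy looks clean" if not problems else f"check these entries: {problems}")
```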

Ek-Hou-Van-Braai@piefed.social on 06 Jun 08:16 next collapse

Now this is solid!!

doem_scroller@feddit.nl on 06 Jun 09:04 next collapse

This

tfowinder@lemmy.ml on 06 Jun 11:29 next collapse

Need to give Syncthing a try

CybranM@feddit.nu on 06 Jun 21:31 collapse

While impressive, this seems like such a hassle to keep up.

iii@mander.xyz on 06 Jun 21:39 collapse

The manual copy is a bit annoying, but in the end it’s maybe 10 minutes of work. Start the transfer in the evening, it’s finished in the morning.

0x0@lemmy.zip on 06 Jun 08:15 next collapse

Put important stuff in a cryptomator folder, sync it elsewhere.

Ek-Hou-Van-Braai@piefed.social on 06 Jun 08:16 next collapse

I have TrueNAS with 2x 6TB HDDs in a ZFS mirror.

I plan on getting another 6TB drive, leaving it at my parents' and having it power on once a week to sync, so that if my house burns down I don't lose everything.

doeknius_gloek@discuss.tchncs.de on 06 Jun 11:25 collapse

Obligatory: RAID is not a backup.

Ek-Hou-Van-Braai@piefed.social on 06 Jun 18:46 collapse

I'll make sure to follow the 3-2-1 principle as I move off of the cloud

zerakith@lemmy.ml on 06 Jun 08:40 next collapse

Personally, I’m planning additional off-site physical storage of photos. It's not yet configured, but I'm planning for a subset of photos deemed too important to lose to be automatically printed and stored on physical media (DVDs).

In general I’m hoping it will promote a more careful approach to what media is really important to keep.

sj_zero@lotide.fbxl.net on 06 Jun 09:10 next collapse

I use nextcloud and I love it.

You want to follow the 3-2-1 strategy: 3 copies of your data, on at least 2 different forms of media, with 1 copy kept off-site.

melroy@kbin.melroy.org on 06 Jun 10:31 next collapse

Nextcloud

cotlovan@lemm.ee on 06 Jun 10:32 next collapse

One copy on the file system (which gets pulled into self-hosted picture apps) and another in my Nextcloud instance. The original folder also gets a versioned remote backup.

robber@lemmy.ml on 06 Jun 11:19 next collapse

It really depends on how much you enjoy setting things up yourself and how much it hurts you to give up control over your data to managed solutions.

If you want to do it yourself, I recommend taking a look at ZFS and its RAIDZ configurations, snapshots and replication capabilities. It’s probably the most solid setup you will achieve, but possibly also a bit complicated to wrap your head around at first.

But there are a ton of options as beautifully represented by all the comments.
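
To make the snapshot-and-replicate idea concrete, here is a minimal sketch of the underlying zfs commands (pool, dataset and target host are placeholders; tools like sanoid/syncoid automate exactly this kind of loop):

```python
#!/usr/bin/env python3
"""Sketch: snapshot a ZFS dataset and replicate it incrementally to another machine (names are placeholders)."""
import subprocess
from datetime import datetime, timezone

DATASET = "tank/photos"
REMOTE = "backup@othersite.example.com"
REMOTE_DATASET = "backup/photos"

now = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")
new_snap = f"{DATASET}@auto-{now}"

# 1. Take a snapshot (instant; space is only consumed as data diverges).
subprocess.run(["zfs", "snapshot", new_snap], check=True)

# 2. Find the previous snapshot so only the delta needs to be sent.
out = subprocess.run(["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
                      "-s", "creation", DATASET],
                     capture_output=True, text=True, check=True)
snaps = out.stdout.split()
prev_snap = snaps[-2] if len(snaps) >= 2 else None

# 3. Pipe the (incremental) send stream into `zfs recv` on the remote host.
send = ["zfs", "send"] + (["-i", prev_snap] if prev_snap else []) + [new_snap]
recv = ["ssh", REMOTE, "zfs", "recv", "-F", REMOTE_DATASET]
sender = subprocess.Popen(send, stdout=subprocess.PIPE)
subprocess.run(recv, stdin=sender.stdout, check=True)
sender.stdout.close()
sender.wait()
```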

zorflieg@lemmy.world on 06 Jun 11:31 next collapse
  1. Entropy is a law of our universe. All data wants to be lost given a long enough timeline and without attention.

  2. Divide your data into what you can’t do without and what you may not care about losing.

  3. Take a backup out of your hands, make it as automatic as possible.

  4. I sync to encrypted folders on Google Drive, then use MSP360 to automatically copy everything in that drive to another cheap cloud storage service that is client-side encrypted.

For the protection it gives me, it’s cheap.

lankydryness@lemmy.world on 06 Jun 11:47 next collapse

I don't follow the full 3-2-1 rule, but I did want some sort of off-site backup for my Nextcloud, so I use Duplicity to back up my user data from Nextcloud, plus all the Docker Compose files that run my server, to an S3 bucket. Costs me like $2/mo. Way cheaper than Google Drive.
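
For anyone wanting to replicate that, a sketch of the duplicity calls; the bucket name, paths and retention window are placeholders, the exact S3 URL form depends on your duplicity version, and credentials plus the encryption passphrase come from the environment:

```python
#!/usr/bin/env python3
"""Sketch: Duplicity backups of Nextcloud data and compose files to an S3 bucket (names are placeholders)."""
import os
import subprocess

env = {
    **os.environ,
    "AWS_ACCESS_KEY_ID": "your-access-key",
    "AWS_SECRET_ACCESS_KEY": "your-secret-key",
    "PASSPHRASE": "change-me",  # used by duplicity/GPG to encrypt the backup volumes
}

TARGET = "boto3+s3://my-backup-bucket/server"

for name, path in {"nextcloud-data": "/srv/nextcloud/data",
                   "compose": "/opt/compose"}.items():
    # New full chain monthly, incrementals in between.
    subprocess.run(["duplicity", "--full-if-older-than", "1M",
                    path, f"{TARGET}/{name}"],
                   env=env, check=True)
    # Drop chains older than six months to keep the bucket (and the bill) small.
    subprocess.run(["duplicity", "remove-older-than", "6M", "--force",
                    f"{TARGET}/{name}"],
                   env=env, check=True)
```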

CoyoteFacts@lemmy.ca on 06 Jun 11:50 next collapse

I use a 48TB ZFS RAIDZ2 pool to maintain data integrity locally and keep rolling ZFS snapshots with sanoid so that data recovery to any point within the last month is possible. Then I use borgmatic (borg) to sync the important data (~1TB) to a Servarica VPS (Polar Bear plan, which works out to be cheaper than Backblaze B2 costs for my purposes). The Servarica server really sucks in terms of CPU, and it’s quite sluggish, but it’s enough for daily backups.

I also self-host healthchecks.io on a free Fly.io VPS thing (not sure if they offer this anymore) to make sure the backups are actually happening successfully, and hosting that on a third-party VPS means that it’s not going to fail at the same time my server does. Then I use Uptime Kuma to make sure everything is consistently alive (especially the healthchecks.io server, which in turn verifies that Uptime Kuma stays alive). I also run the same borg configuration to back up to a plain non-redundant disk locally.

The downside of this setup is that I’m only truly backing up a fraction of my pool, but most of my pool is stuff that I can redownload and set up again in the event of e.g. a house fire. I also run a daily script to dump a lot of metadata about my systems and pool, like directory listings of my media folders and installed programs/etc, which means that even if the data might be lost, I have a map of what I need to grab again.
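
That metadata dump is easy to underrate. A minimal sketch of one is below (paths are placeholders); keep the output inside the backed-up subset so the map survives even when the bulk data does not:

```python
#!/usr/bin/env python3
"""Sketch: dump listings of large media folders so their contents stay known even if the data is lost (paths are placeholders)."""
import json
import os
from datetime import date
from pathlib import Path

MEDIA_ROOTS = ["/tank/movies", "/tank/music", "/tank/shows"]
OUT = Path("/tank/important/metadata") / f"listing-{date.today():%Y-%m-%d}.json"

listing = {}
for root in MEDIA_ROOTS:
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            p = Path(dirpath) / name
            try:
                entries.append({"path": str(p.relative_to(root)), "bytes": p.stat().st_size})
            except OSError:
                continue  # file vanished mid-walk, skip it
    listing[root] = entries

OUT.parent.mkdir(parents=True, exist_ok=True)
OUT.write_text(json.dumps(listing, indent=2))
print(f"wrote {sum(len(v) for v in listing.values())} entries to {OUT}")
```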

CmdrShepard49@sh.itjust.works on 06 Jun 17:45 collapse

How big are these ZFS snapshots compared to the stored data size? 1:1?

CoyoteFacts@lemmy.ca on 06 Jun 18:06 collapse

Snapshots basically put a pin in all the data at that moment and say “these blocks are not going to be physically deleted as long as I exist”, so the “additional” data use of the snapshots is equal to the data contained within the snapshot that doesn’t exist at the current moment. I.e., if I have two 50GB files, take a snapshot, and delete one, I will still have 100GB physical disk usage. I can also take 400 more snapshots and disk usage will remain at 100GB, as the snapshots are just virtual. Then I can either bring that deleted file back from the snapshot, or I can delete the snapshot itself and my disk usage will adjust to the “true” 50GB as the snapshot releases its hold on the blocks.

What sanoid and other snapshot managers do is they repeatedly take snapshots at specified intervals and delete old snapshots past a certain date, which in practice results in a “rolling” system where you can access snapshots from e.g. every hour in the past month. Then once a snapshot becomes older than a month, sanoid will auto-delete it and free up the disk space that it’s holding onto. The exact settings are all configurable to what you’re comfortable with in terms of trading additional physical disk usage of potential “dead” data for the convenience of being able to resurrect that data for a certain amount of time.
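
Not sanoid itself, but the rolling behaviour it automates boils down to roughly this sketch (dataset name and retention window are placeholders):

```python
#!/usr/bin/env python3
"""Sketch of a rolling snapshot policy: take one snapshot, destroy the ones older than the window (names are placeholders)."""
import subprocess
from datetime import datetime, timedelta, timezone

DATASET = "tank/data"
KEEP = timedelta(days=30)
now = datetime.now(timezone.utc)

# Take the new snapshot, named so its timestamp can be parsed back out later.
subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{now:%Y%m%d-%H%M}"], check=True)

# List existing snapshots and destroy any "auto-" ones older than the window.
out = subprocess.run(["zfs", "list", "-H", "-t", "snapshot", "-o", "name", DATASET],
                     capture_output=True, text=True, check=True)
for snap in out.stdout.split():
    _, _, tag = snap.partition("@")
    if not tag.startswith("auto-"):
        continue  # leave manually created snapshots alone
    taken = datetime.strptime(tag, "auto-%Y%m%d-%H%M").replace(tzinfo=timezone.utc)
    if now - taken > KEEP:
        subprocess.run(["zfs", "destroy", snap], check=True)
```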

I really like the “data comet” visual from this Ars Technica article.

Vinstaal0@feddit.nl on 06 Jun 12:13 next collapse

I have Resilio Sync set up, which syncs between different family members. Both my uncle and I make offline backups of the dataset.

My phone pictures are backed up by iCloud, OneDrive, Resilio Sync and Immich … The exports all end up in the Resilio Sync dataset and in Immich.

My most important files are on Proton Drive. I used to back them up to my NAS with Perfectbackup, but since I ditched Windows I need something else.

wise_pancake@lemmy.ca on 06 Jun 12:23 next collapse

I’m new to self-hosting and planning to get more into it.

I have a crappy NAS in the basement I archive to and copy my borg repos to.

Then I pay for a Dropbox style cloud service and I copy my borg archives there. It’s kind of janky but it’s cheap and works.

dieTasse@feddit.org on 06 Jun 12:34 next collapse

I use Storj. It's well integrated with TrueNAS, and the encrypted files are fragmented, duplicated and scattered around the world. (Many people host a Storj node on their TrueNAS as well.)

bacon_pdp@lemmy.world on 06 Jun 12:42 next collapse

!datahoarder@lemmy.ml is the best place to ask.

Tywele@lemmy.dbzer0.com on 06 Jun 12:42 next collapse

I rsync to a storage box from Hetzner.
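
Presumably something along these lines; the user, host and paths are placeholders (storage boxes take rsync over SSH):

```python
#!/usr/bin/env python3
"""Sketch: rsync a data directory to a Hetzner storage box over SSH (user/host/paths are placeholders)."""
import subprocess

SOURCE = "/srv/data/"  # trailing slash: copy the contents, not the folder itself
DEST = "u123456@u123456.your-storagebox.de:backups/data/"

subprocess.run(
    ["rsync", "-az", "--delete",  # mirror; drop --delete if deleted files should stay on the box
     "-e", "ssh -p 23",           # storage boxes listen for SSH on port 23
     SOURCE, DEST],
    check=True,
)
```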

non_burglar@lemmy.world on 06 Jun 13:43 next collapse

I rsync nightly to an old Synology box. It’s in an outbuilding, so if there’s a fire, it comes with me.

irmadlad@lemmy.world on 06 Jun 17:31 next collapse

I use Backblaze Personal (unlimited), and have for quite a while. A lot of the other storage options price per GB, which is fine, but I have a ton of irreplaceable stuff, such as my music collection of around 80k songs converted to FLAC, pictures, business docs, etc. I realize it’s not really in the self-hosted arena, but Backblaze works out for me. If you are backing up a lot of data, re-initializing multi-TB backups can be a chore. Backblaze has a program where you buy a 10 TB drive from them, they ship your data to you on it, and once you’ve transferred it off you can send the drive back for a full refund.

DrunkAnRoot@sh.itjust.works on 06 Jun 19:32 next collapse

500 gigs in HDDs I ripped out of cable boxes, all wired together and hooked up to my old ThinkPad T470 running Thunar.

Appoxo@lemmy.dbzer0.com on 06 Jun 21:05 collapse

PC: Veeam
Phone/general pics: Immich (both automatic and manual)
Some general phone files: Syncthing
The remaining stuff is on my NAS at home.
Off-site copies of the most important VMs and some infrastructure: Veeam Backup Copy to an external HDD (encrypted) that I keep at my workplace