Jennifer Aniston and Friends Cost Us 377GB and Broke Ext4 Hardlinks
Comments
UltraSane
The real problem is they aren't deduplicating at the filesystem level like sane people do.
otterley
From the article:
> [W]e shipped an optimization. Detect duplicate files by their content hash, use hardlinks instead of downloading each copy.
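A toy sketch of that kind of optimization (Python; illustrative only, not Discourse's actual code, and the EMLINK fallback is my assumption about one way to handle ext4's 65,000-links-per-inode cap):

```python
import errno
import hashlib
import os
import shutil

seen: dict[str, str] = {}  # content hash -> path of the first stored copy

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def store(tmp_download: str, dest: str) -> None:
    digest = file_sha256(tmp_download)
    original = seen.get(digest)
    if original is None:
        shutil.move(tmp_download, dest)      # first copy: keep the bytes
        seen[digest] = dest
        return
    try:
        os.link(original, dest)              # duplicate: hardlink, no extra bytes
    except OSError as e:
        if e.errno != errno.EMLINK:          # ext4 caps links per inode at 65,000
            raise
        shutil.copyfile(original, dest)      # fallback: accept a real duplicate
    os.remove(tmp_download)
```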
UltraSane
I meant TRANSPARENT filesystem-level dedupe. They are doing it at the application level. Filesystem-level dedupe makes it impossible to store the same file more than once and doesn't consume hardlinks for the references. It is really awesome.
mmh0000
Filesystem/file level dedupe is for suckers. =D
If the greatest filesystem in the world were a living being, it would be our God. That filesystem, of course, is ZFS.
Handles this correctly:
UltraSane
I was talking about block level dedupe.
mmh0000
I thought you might be.
I just wanted to mention ZFS.
Have I mentioned how great ZFS is yet?
otterley
ZFS is great! However, it's too complicated for most Linux server use cases (especially with just one block device attached); it's not the default (root filesystem); and it's not supported for at least one major enterprise Linux distro family.
burnt-resistor
File system dedupe is expensive because it requires another hash calculation that cannot be shared with application-level hashing, is a relatively rare OS-fs feature, doesn't play nice with backups (because files will be duplicated), and doesn't scale across boxes.
A simpler solution is application-level dedupe that doesn't require fs-specific features. Simple scales and wins. And plays nice with backups.
Hash = SHA-256 of the file, and the absolute filename = {{aa}}/{{bb}}/{{cc}}/{{d}}, where:
aa = the 2 most significant hex digits of the hash
bb = the next 2 hex digits
cc = the next 2 hex digits after that
d = the remaining hex digits
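A minimal sketch of that layout (Python; the function names are mine):

```python
import hashlib
import os
import shutil

def content_path(root: str, hexdigest: str) -> str:
    """{aa}/{bb}/{cc}/{d}: three 2-digit fan-out directories, rest as filename."""
    return os.path.join(root, hexdigest[:2], hexdigest[2:4],
                        hexdigest[4:6], hexdigest[6:])

def store_once(root: str, src: str) -> str:
    """Copy src into the content store unless identical bytes are already there."""
    h = hashlib.sha256()
    with open(src, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    dst = content_path(root, h.hexdigest())
    if not os.path.exists(dst):  # identical content hashes to the same path
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copyfile(src, dst)
    return dst
```

The fan-out keeps any single directory from accumulating millions of entries.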
otterley
For ZFS, at least, `zfs send` is the backup solution, and it does incremental backups with the `-i` argument (e.g., `zfs send -i pool/fs@monday pool/fs@tuesday`).
UltraSane
zfs send is really awesome when combined with dedupe and incrementals.
UltraSane
All good backup software should be able to do deduped incremental backups at the block level. I'm used to Veeam and Commvault.
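The core idea, as a toy sketch (Python, fixed-size blocks; real products use variable-size chunking and a lot more machinery):

```python
import hashlib

BLOCK = 4 * 1024 * 1024  # 4 MiB blocks; a made-up size for illustration

def backup(path: str, store: dict[str, bytes]) -> list[str]:
    """Return a manifest of block hashes; only unseen blocks enter the store."""
    manifest = []
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(BLOCK), b""):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:   # dedupe: a shared block is stored once
                store[digest] = block
            manifest.append(digest)
    return manifest
```

An incremental run then adds only blocks whose hashes aren't in the store yet, no matter which file they came from.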
burnt-resistor
That costs even more unreusable time and effort. It's simpler to dedupe at the application level rather than shift the burden onto N things. I guess you don't understand or appreciate simplicity.
dj_rock
We were on a break...of your filesystem!
uticus
And I thought this was a reference to a Win95 problem https://www.slashgear.com/1414245/jennifer-aniston-matthew-p...
mingus88
Yeah, block-level dedupe has been an industry standard for decades. Tracking file hashes? Why?
And I see above that this is a self-hosted platform, and I still don't get it. I was running terabytes of ZFS with dedup=on on cheap Supermicro gear in 2012.
zulux
File hashes are great for getting two systems to cooperate on dedupe. I have a Windows backup that sends hashes to a backup server, so we don't back up crud we already have.
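A sketch of that exchange (Python; splitting it into client/server functions is my framing): the client offers hashes, the server answers with the ones it's missing, and only those files get uploaded.

```python
import hashlib

def client_inventory(paths: list[str]) -> dict[str, str]:
    """Client side: map content hash -> local path for each candidate file."""
    inv = {}
    for p in paths:
        h = hashlib.sha256()
        with open(p, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        inv[h.hexdigest()] = p
    return inv

def server_missing(offered: set[str], stored: set[str]) -> set[str]:
    """Server side: which offered hashes are not on the backup server yet."""
    return offered - stored

# The client then uploads only what the server asked for:
#   to_upload = [inv[h] for h in server_missing(set(inv), stored_hashes)]
```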
niobe
Completely Claude-written, FWIW. I recognise the style.
trixn86
The Problem. The fix. The Limit.
Is it just me, or is everyone else just as fed up with the same AI tropes every time?
I've reached the point where I just close the tab the moment I read the headline "The problem". At least use tropes.fyi please.
colejohnson66
Doesn’t read like AI to me
snickerbockers
Let that sink in.
otterley
Another reason to use XFS -- it doesn't have ext4's low per-inode hard link limit.
(Some say ZFS as well, but it's not nearly as easy to use, and its license is still not GPL-friendly.)
burnt-resistor
xfs on mdraid is what I use on my homelab NAS across several giant RAID arrays. While it lacks some integrity and CoW features, it's really, really stable. I had ZoL ZFS troubles that the maintainers shrugged off, which required transferring everything to another volume... so I won't ever use or recommend ZFS unless it's Sun-Oracle's.
bravetraveler
As is always the case, short vs. long term... but I think I'd put effort into migrating to a filesystem that is aware of duplication instead of trying to recreate one with links [while retaining duplicates, just fewer].
Effectiveness is debatable; this approach still has duplication. An insignificant amount, I'll admit. The filesystem handling this at the block level is probably less problematic/prone to rework, and more efficient.
edit: Eh, ignore me. I see this is preparing for [whatever filesystem hosts chose], thanks to 'ameliaquining' below. I originally thought this was all Discourse-proper, processing data they had.
ameliaquining
Discourse is self-hostable; they can't require their users to use a filesystem that supports deduplication. (Or, well, they could, but it would greatly complicate installation and maintenance and whatnot, and also there would need to be some kind of story for existing installations.)
bravetraveler
Fair, I was confused by the hosting model and presentation. This is a nice bit of preparation/consideration for users, I guess. I still maintain that a backup filesystem unaware of duplication at the block level is a mistake.
I completely overlooked the shipping of tarballs. Links make sense here; I had 'unpacked' and relatively local data in mind. I absolutely would not go so far as to suggest their scheme pick up 'zfs {send,receive}'/equivalent, lol.
ameliaquining
They do also offer it as multi-tenant hosted SaaS, and the post is about their experience running backups on that. But whatever solution they use has to also work with the self-hosted version, which imposes some constraints.
bravetraveler
Sweet
UltraSane
This makes them look rather incompetent. Storing the exact same file 246,173 times is just stupid. Dedupe at the filesystem level and make your life easier.
In short: deduplication efforts frustrated by per-inode hardlink limits, and a solution compatible with different filesystems.