Epstein Files Jan 30, 2026

Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.

Please seed all torrent files to help distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete): contains only 49 GB of 180 GB. Multiple reports of the DOJ server cutting off downloads at offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 6ae129b76fddbba0776d4a5430e71494245b04c4

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
A de-duplicated merge of the 45.63 GB and 86.74 GB versions.

An unverified version is incomplete at ~101 GB.
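Since the later comments suggest the DOJ server honors HTTP Range requests, an interrupted Data Set 9 download can in principle be resumed from the reported cutoff offset rather than restarted. A minimal sketch using only the Python standard library (the URL is a placeholder, not the real DOJ link):

```python
# Sketch: resume an interrupted download from the cutoff offset reported above,
# assuming the server honors HTTP Range requests. The URL is a placeholder.
import urllib.request

CUTOFF_OFFSET = 48995762176  # offset where downloads reportedly cut off

def make_resume_request(url: str, start: int) -> urllib.request.Request:
    """Build a GET request asking for bytes from `start` through end of file."""
    req = urllib.request.Request(url)
    req.add_header("Range", f"bytes={start}-")
    return req

req = make_resume_request("https://example.com/dataset9.zip", CUTOFF_OFFSET)
print(req.get_header("Range"))  # header that tells the server where to resume
```

A `206 Partial Content` response would confirm the server accepted the range; a `200` with the full body would mean it ignored it.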


Epstein Files Data Set 10 (78.64 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA256: 7D6935B1C63FF2F6BCABDD024EBC2A770F90C43B0D57B646FA7CBD4C0ABCF846
MD5: B8A72424AE812FD21D225195812B2502


Epstein Files Data Set 11 (25.55 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
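For anyone verifying downloads against the hashes above, a one-pass checksum sketch (Python stdlib only; the filename is a placeholder, and the commented assertion uses the Data Set 12 SHA256 from this post):

```python
# Sketch: verify a downloaded file against the posted hashes.
# Reads in chunks so multi-GB archives don't need to fit in memory.
import hashlib

def file_digests(path: str, chunk_size: int = 1 << 20) -> dict:
    """Compute MD5, SHA1, and SHA256 of a file in a single pass."""
    hashes = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            for h in hashes.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashes.items()}

# Example (placeholder filename), checked against the Data Set 12 SHA256 above:
# digests = file_digests("dataset12.zip")
# assert digests["sha256"] == "b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2"
```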


This list will be edited as more data becomes available, particularly with regard to Data Set 9.

  • xodoh74984@lemmy.world (OP) · 3 hours ago

    Absolutely! By the way, I hadn’t thanked you yet for your massive effort here. Thank you very much for putting this all together. Also, love your username.

    Do you think we could modify the script to use HTTP Range headers and download from the end of the file to the beginning? Or, perhaps we could work together and target different byte ranges?

    You seem much better versed in this than I am to know what’s possible.
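The end-to-start idea asked about above could look something like this: walk the file back-to-front in fixed-size chunks, turning each into a Range header. This is only a sketch; the chunk and file sizes below are made-up examples, and the server still has to honor Range requests:

```python
# Sketch: download a file back-to-front by yielding inclusive (start, end)
# byte ranges from the end of the file toward the beginning.
# Sizes here are illustrative, not values from the DOJ server.

def reverse_ranges(total_bytes: int, chunk_bytes: int):
    """Yield inclusive (start, end) ranges from the end of the file to the start."""
    end = total_bytes - 1
    while end >= 0:
        start = max(0, end - chunk_bytes + 1)
        yield (start, end)
        end = start - 1

# Each range maps to a header like: Range: bytes=<start>-<end>
for start, end in reverse_ranges(total_bytes=100, chunk_bytes=40):
    print(f"bytes={start}-{end}")
```

Splitting by byte range also answers the second question: two people can run the same loop over disjoint halves of the file.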

    • WhatCD@lemmy.world · edited · 1 hour ago

      OK, updated the script. Added --startByte, --endByte, and --totalFileBytes.

      https://pastebin.com/sjMBCnzm

      Using --totalFileBytes 192613274080 avoids an HTTP HEAD request at the beginning of the script, making it slightly less brittle.

      To grab the last 5 GB of the file you would add the following to your command:

      --startByte 187244564960 --endByte 192613274079 --totalFileBytes 192613274080
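For anyone checking the numbers: they line up exactly if "5 GB" is read as 5 GiB (5 * 2**30 bytes), with endByte being the file's last byte:

```python
# Quick check of the command above: "5 GB" here means 5 GiB (5 * 2**30 bytes),
# endByte is the last byte of the file (total - 1), and startByte sits
# exactly 5 GiB before the end.
TOTAL_FILE_BYTES = 192613274080
start_byte = TOTAL_FILE_BYTES - 5 * 2**30
end_byte = TOTAL_FILE_BYTES - 1
print(start_byte, end_byte)  # 187244564960 192613274079
```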
      
        • WhatCD@lemmy.world · edited · 49 minutes ago

          These are the three largest gaps in what I have:

          • --startByte 49981423616 --endByte 60299411455 (9.61 GB)
          • --startByte 110131937280 --endByte 120424759295 (9.59 GB)
          • --startByte 134211436544 --endByte 144472801279 (9.56 GB)
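The listed gap sizes can be double-checked in a few lines (the ranges are inclusive, and the sizes quoted above are GiB):

```python
# Verify the gap sizes quoted above: each range is inclusive, so its size is
# end - start + 1 bytes, converted here to GiB (2**30 bytes).
gaps = [
    (49981423616, 60299411455),    # 9.61 GiB
    (110131937280, 120424759295),  # 9.59 GiB
    (134211436544, 144472801279),  # 9.56 GiB
]
for start, end in gaps:
    size_gib = (end - start + 1) / 2**30
    print(f"--startByte {start} --endByte {end}  ({size_gib:.2f} GiB)")
```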
          • WorldlyBasis9838@lemmy.world · edited · 25 minutes ago

            I’ll work on the second: --startByte 110131937280 --endByte 120424759295 (9.59 GB)

            EDIT: I’m probably at 20-30 passes by now. Got squat.

            Do you think this is a bug, or is it possible the chunk is not there?

          • xodoh74984@lemmy.world (OP) · edited · 15 minutes ago

            I will grab the first segment: --startByte 49981423616 --endByte 60299411455 (9.61 GB)

            EDIT: I, too, remain chunkless after 8 passes. Haven’t been able to grab anything yet, but trying.

            EDIT2: Been IP hopping and refreshing cookies to try to work around the issue. On my 3rd IP address, but still at 0%.

          • kongstrong@lemmy.world · edited · 4 minutes ago

            I can also take up some of these. Do you happen to have more of those gaps?

            Also, are you guys using some chat channel for this? It might be a little more accessible.

            E: other users who run into this thread, DM me and I can add you to an Element group to coordinate all this.

    • WorldlyBasis9838@lemmy.world · 3 hours ago

      If we could target different byte ranges, having 10-20 different people spaced through the expected range could cover a lot of ground!
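That division of labor can be sketched as a few lines that split an expected file size into N contiguous byte ranges, one per volunteer. The worker count below is an example, and the total is the reported Data Set 9 size from earlier in the thread:

```python
# Sketch: split an expected file size into N contiguous inclusive byte ranges
# so each volunteer can target a different slice with --startByte/--endByte.

def split_ranges(total_bytes: int, workers: int):
    """Return `workers` inclusive (start, end) ranges covering [0, total_bytes)."""
    base, extra = divmod(total_bytes, workers)
    ranges, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        ranges.append((start, start + size - 1))
        start += size
    return ranges

# Ten volunteers over the reported Data Set 9 size:
for start, end in split_ranges(192613274080, 10):
    print(f"--startByte {start} --endByte {end}")
```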