Epstein Files Jan 30, 2026

Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete). Contains only 49 GB of the expected 180 GB; multiple reports of the DOJ server cutting off downloads at byte offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 6ae129b76fddbba0776d4a5430e71494245b04c4
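
If the DOJ server cuts the connection mid-transfer, an HTTP Range request can resume from the last byte received instead of restarting from zero. Below is a minimal Python sketch of that approach, assuming the server honors Range headers; the URL and filename are placeholders, not the actual DOJ link.

```python
import os
import requests

# Placeholders - substitute the actual DOJ download URL and your local filename.
URL = "https://example.gov/data-set-9.zip"
DEST = "data_set_9.zip"

def resume_download(url: str, dest: str, chunk_size: int = 1 << 20) -> None:
    """Resume a partial download from wherever the local file left off,
    e.g. the reported cutoff at offset 48995762176."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}
    with requests.get(url, headers=headers, stream=True, timeout=60) as r:
        r.raise_for_status()
        # 206 Partial Content means the Range was honored; a plain 200 with
        # a nonzero offset means the server is resending the whole file.
        if r.status_code == 200 and offset:
            raise RuntimeError("Server ignored the Range header; cannot resume.")
        with open(dest, "ab" if r.status_code == 206 else "wb") as f:
            for chunk in r.iter_content(chunk_size):
                f.write(chunk)

resume_download(URL, DEST)
```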

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
A de-duplicated merger of the 45.63 GB and 86.74 GB versions.

An unverified version, still incomplete at ~101 GB, is also circulating.
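
A merger like /u/susadmin’s can in principle be reproduced by hashing every file across the partial copies and keeping one copy per unique digest. A minimal sketch under that assumption (not susadmin’s actual tooling; the directory names are placeholders):

```python
import hashlib
import shutil
from pathlib import Path

def file_sha1(path: Path, bufsize: int = 1 << 20) -> str:
    """SHA-1 of a file, hashed in chunks to handle large archives."""
    h = hashlib.sha1()
    with path.open("rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def merge_dedup(src_dirs: list[Path], out_dir: Path) -> None:
    """Copy files from all sources into out_dir, skipping content duplicates."""
    out_dir.mkdir(parents=True, exist_ok=True)
    seen: set[str] = set()
    for src in src_dirs:
        for path in (p for p in src.rglob("*") if p.is_file()):
            digest = file_sha1(path)
            if digest in seen:
                continue  # identical content already copied from another source
            seen.add(digest)
            dest = out_dir / path.name
            if dest.exists():  # distinct file, colliding name
                dest = out_dir / f"{digest[:8]}_{path.name}"
            shutil.copy2(path, dest)

# Placeholder directory names for the two partial versions.
merge_dedup([Path("set9_45.63gb"), Path("set9_86.74gb")], Path("set9_merged"))
```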


Epstein Files Data Set 10 (78.64 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA256: 7D6935B1C63FF2F6BCABDD024EBC2A770F90C43B0D57B646FA7CBD4C0ABCF846
MD5: B8A72424AE812FD21D225195812B2502


Epstein Files Data Set 11 (25.55 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
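
To check a download against the digests listed on this page, all three hash types can be computed in a single pass over the file. A minimal Python sketch using the Data Set 12 values above; the local filename is a placeholder:

```python
import hashlib

def checksums(path: str, bufsize: int = 1 << 20) -> dict[str, str]:
    """Compute MD5, SHA-1, and SHA-256 in one pass over the file."""
    hashes = {"md5": hashlib.md5(), "sha1": hashlib.sha1(), "sha256": hashlib.sha256()}
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            for h in hashes.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashes.items()}

# Expected digests for Data Set 12, as listed above.
EXPECTED = {
    "sha1": "20f804ab55687c957fd249cd0d417d5fe7438281",
    "md5": "b1206186332bb1af021e86d68468f9fe",
    "sha256": "b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2",
}

actual = checksums("data_set_12.zip")  # placeholder filename
for name, want in EXPECTED.items():
    print(f"{name}: {'OK' if actual[name] == want else 'MISMATCH'}")
```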


This list will be edited as more data becomes available, particularly with regard to Data Set 9.

  • redbarinternet@lemmy.world · 1 hour ago

    ADD MY DISCORD FOR MORE DISCUSSION: redbarinternet

    Dataset 9 is cooked.

    Anyone got Discord? I have been scraping the website, collecting all the links, and there aren’t even 1 million links available. I have a full dashboard for it.

    There’s nowhere close to 3.5 million files even in the full link collection, and I have scraped all possible links.

    The current streak is how many pages in a row on Dataset 9 were scraped before finding a new page. My threshold for stopping is set at 4000 duplicate pages in a row (see the sketch after this thread).

    Why?

    Yes, this means what you think. Each streak represents at least 5 separate instances where there were more than 400 duplicate pages before new data was found. These are unique instances, so at one point you would have gone through 436 pages before finding a new one, then 816 pages before another, and so on.

    Total counts, based on the links available at the time my database tracked them: we can potentially download ~900k files out of a document range that should run from 1 to 2,731,783.

    • xodoh74984@lemmy.world (OP) · 1 hour ago

      You might try merging with the set below to see if you’ve scraped files that aren’t in it?

      /u/susadmin’s More Complete Data Set 9 (96.25 GB)
      De-duplicated merger of (45.63 GB + 86.74 GB) versions

      I bet you’ve grabbed a bunch of missing pieces from the puzzle.

      • redbarinternet@lemmy.world · 53 minutes ago

        I’ll check it out. Since there was no direct download, I was mostly working to scrape all the links first and then download them, so my priority was collecting data on what was there.
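
For reference, a minimal sketch of the duplicate-streak stopping rule described in the thread above: keep paging until some number of consecutive pages yields no new links. The listing URL, pagination scheme, and link extraction are placeholder assumptions, not the actual DOJ site structure.

```python
import re
import requests

# Placeholder listing URL and pagination scheme.
BASE = "https://example.gov/dataset9/files?page={}"
STOP_THRESHOLD = 4000  # consecutive duplicate pages before giving up

def extract_links(html: str) -> set[str]:
    # Crude href extraction; a real scraper would use an HTML parser.
    return set(re.findall(r'href="([^"]+)"', html))

def collect_links() -> set[str]:
    """Page through the listing, tracking how many pages in a row add nothing new."""
    seen: set[str] = set()
    streak, page = 0, 1
    while streak < STOP_THRESHOLD:
        links = extract_links(requests.get(BASE.format(page), timeout=30).text)
        new = links - seen
        if new:
            seen |= new
            streak = 0  # a new page resets the duplicate streak
        else:
            streak += 1
        page += 1
    return seen
```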