Epstein Files Jan 30, 2026

Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete). Only contains 49 GB of 180 GB. Multiple reports of the DOJ server cutting off downloads at byte offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 6ae129b76fddbba0776d4a5430e71494245b04c4
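
If you are holding a partial copy that stops at that offset, an HTTP Range request should in principle let you resume from the cutoff rather than start over. A minimal sketch (Python with requests; the output path is a placeholder, and it assumes the DOJ server honors Range headers):

import os
import requests

URL = "https://www.justice.gov/epstein/files/DataSet%209.zip"
OUT = "DataSet 9.zip"  # placeholder output path

# Resume from however many bytes are already on disk,
# e.g. 48995762176 at the commonly reported cutoff.
offset = os.path.getsize(OUT)

with requests.get(URL, headers={"Range": f"bytes={offset}-"},
                  stream=True, timeout=90) as r:
    r.raise_for_status()  # expect 206 Partial Content if ranges are honored
    with open(OUT, "ab") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
            f.write(chunk)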

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
A de-duplicated merger of the 45.63 GB and 86.74 GB versions.

An unverified version, incomplete at ~101 GB, also exists.


Epstein Files Data Set 10 (78.64 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA256: 7D6935B1C63FF2F6BCABDD024EBC2A770F90C43B0D57B646FA7CBD4C0ABCF846
MD5: B8A72424AE812FD21D225195812B2502


Epstein Files Data Set 11 (25.55 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
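
To check any finished download against the hashes listed here, a single pass with Python’s hashlib covers all three digests (the filename is a placeholder):

import hashlib

PATH = "DataSet 12.zip"  # placeholder filename

digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
with open(PATH, "rb") as f:
    while chunk := f.read(1 << 20):  # 1 MiB at a time keeps memory flat
        for d in digests.values():
            d.update(chunk)

for name, d in digests.items():
    print(name.upper(), d.hexdigest())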


This list will be edited as more data becomes available, particularly with regard to Data Set 9.

  • WhatCD@lemmy.world · 2 hours ago

    Updated the script to display information better: https://pastebin.com/S4gvw9q1

    It has one library dependency, so you’ll have to do:

    pip install rich
    

    I haven’t been getting blocked with this:

    python script.py 'https://www.justice.gov/epstein/files/DataSet%209.zip' -o 'DataSet 9.zip' --cookies cookie.txt --retries 2 --referer 'https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip' --ua '<set-this>' --timeout 90 -t 16 -c auto
    

    The new script can auto-set threads and chunks; I updated the main comment with more info about those.

    I’m setting the --ua option, which lets you override the User-Agent header, and making sure it matches the browser I used to request the cookie.

    • WorldlyBasis9838@lemmy.world · 53 minutes ago (edited)

      I had the script crash at line 324: BadStatusLine: HTTP/1.1 0 Init

      EDIT: It’s worth noting that almost every time I restart it after seemingly being blocked for a bit, I get about 1 GB more before it slows WAY down (no server response).

      EDIT: It looks to me that if I’m only getting FAILED: No server response, stopping the script for a minute or two and then restarting immediately garners a lot more results. A longer pause after many failures might be worth looking at. I’ll play around a bit.
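
      Something like this is the shape of what I mean (rough sketch only; fetch_chunk stands in for whatever the script actually does per chunk, and the thresholds are guesses):

      import random
      import time

      def fetch_with_backoff(fetch_chunk, max_failures=10):
          """Retry a chunk download, pausing longer as failures pile up."""
          failures = 0
          while failures < max_failures:
              try:
                  return fetch_chunk()
              except OSError:  # stand-in for "FAILED: No server response"
                  failures += 1
                  # ~10 s after the first failure, capped at two minutes,
                  # plus jitter so restarts don't line up exactly.
                  time.sleep(min(120, 5 * 2 ** failures) + random.uniform(0, 5))
          raise RuntimeError("still no server response, giving up")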

    • WorldlyBasis9838@lemmy.world · 2 hours ago (edited)

      Gonna grab some tea, then get back at it. Will update when I have something.

      Thanks for this!

      EDIT: This works quite well. Getting chunks right off the bat. About 1 per second, just guessing.

      • xodoh74984@lemmy.world (OP) · 1 hour ago (edited)

        I’ve been trying to achieve the same thing using aria2 with 8 concurrent download threads, cookies exported from my browser, my browser’s user agent, and a random retry interval between 5 and 30 seconds after each download failure.
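
        For reference, the invocation is roughly the following (cookies.txt is a Netscape-format export from the browser; the --user-agent value is a placeholder for my browser’s string; aria2’s --retry-wait only takes a fixed delay, so the random 5-30 second wait comes from a wrapper around it):

        aria2c 'https://www.justice.gov/epstein/files/DataSet%209.zip' \
          -o 'DataSet 9.zip' -c -x 8 -s 8 \
          --load-cookies=cookies.txt \
          --user-agent='<same-ua-as-browser>' \
          --max-tries=0 --retry-wait=15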

        But I think I’ve been blocked by the server.

        My download attempts started to fail before I began using my browser’s user agent, so it’s difficult for me to know what exactly caused me to get blocked. The download was incredibly fast before things started breaking and could’ve finished within 30 minutes.

        Does anyone know how long the apparent IP ban lasts?

        • WhatCD@lemmy.world · 1 hour ago

          I don’t know exactly, but it seems to be about an hour or two if you get a 401 Unauthorized.

          Would you be interested in joining our effort here? I’m hoping to crowdsource these chunks and then combine our efforts.

          • xodoh74984@lemmy.world (OP) · 1 hour ago

            Absolutely! By the way, I hadn’t thanked you yet for your massive effort here. Thank you very much for putting this all together. Also, love your username.

            Do you think we could modify the script to use HTTP Range headers and download from the end of the file to the beginning? Or, perhaps we could work together and target different byte ranges?

            You seem much better versed in this than I am to know what’s possible.

            • WhatCD@lemmy.world · 3 minutes ago

              OK, updated the script. Added --startByte, --endByte, and --totalFileBytes:

              https://pastebin.com/9Dj2Nhyb

              Using --totalFileBytes 192613274080 avoids an HTTP HEAD request at the beginning of the script, making it slightly less brittle.

              To grab the last 5 GB of the file, you would add the following to your command:

              --startByte 187244564960 --endByte 192613274079 --totalFileBytes 192613274080
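
              In case anyone wants to compute their own slice: the start byte is the total minus the tail size, and --endByte is inclusive, so it is the total minus 1. A quick check (the “5 GB” here is 5 GiB):

              TOTAL = 192613274080
              TAIL = 5 * 2**30  # 5 GiB = 5368709120 bytes
              print(f"--startByte {TOTAL - TAIL} --endByte {TOTAL - 1} --totalFileBytes {TOTAL}")
              # -> --startByte 187244564960 --endByte 192613274079 --totalFileBytes 192613274080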
              
            • WorldlyBasis9838@lemmy.world · 57 minutes ago

              If we could target different byte ranges, having 10-20 different people spaced through the expected range could cover a lot of ground!
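
              A tiny sketch of how the assignments could be handed out, assuming the --startByte/--endByte flags from the updated script and the 192613274080-byte total:

              # Split the file into one contiguous byte range per volunteer
              # and print ready-to-paste flags for the download script.
              TOTAL = 192613274080  # from --totalFileBytes
              VOLUNTEERS = 15

              per = TOTAL // VOLUNTEERS
              for i in range(VOLUNTEERS):
                  start = i * per
                  end = TOTAL - 1 if i == VOLUNTEERS - 1 else (i + 1) * per - 1
                  print(f"volunteer {i + 1}: --startByte {start} --endByte {end} --totalFileBytes {TOTAL}")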

          • WorldlyBasis9838@lemmy.world · 16 minutes ago (edited)

            My IP appears to have been completely blocked by the domain. Multiple browsers and devices confirm it.

            If anyone has any suggestions for other options, I’m listening.