Epstein Files Jan 30, 2026
Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.
Please seed all torrent files to distribute and preserve this data.
Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK
Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK
Epstein Files Data Set 9 (Incomplete). Only contains 49 GB of 180 GB. Multiple reports of downloads being cut off by the DOJ server at offset 48995762176.
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 6ae129b76fddbba0776d4a5430e71494245b04c4
/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merger of the 45.63 GB and 86.74 GB versions
An unverified version is incomplete at ~101 GB.
Epstein Files Data Set 10 (78.64 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA256: 7D6935B1C63FF2F6BCABDD024EBC2A770F90C43B0D57B646FA7CBD4C0ABCF846
MD5: B8A72424AE812FD21D225195812B2502
Epstein Files Data Set 11 (25.55 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 574950c0f86765e897268834ac6ef38b370cad2a
Epstein Files Data Set 12 (114.1 MB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
This list will be edited as more data becomes available, particularly with regard to Data Set 9.
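If you mirror or re-download any of these archives, the hashes listed above can be used to check that a file arrived intact. A minimal sketch in Python (the file name and chosen algorithm are just examples, swap in whichever data set you downloaded):

```python
import hashlib

def file_digest(path, algo="sha256", chunk_size=1024 * 1024):
    """Stream a file through hashlib so large archives never need to fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Example: verify Data Set 12 against the published SHA256 (file name is hypothetical).
expected = "b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2"
print(file_digest("DataSet 12.zip") == expected.lower())
```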

I’m working on a different method of obtaining a complete zip for Data Set 9. For those who are unaware, for a time yesterday there was an official zip available from the DOJ. To my knowledge no one was able to grab it in full, but I believe the 49 GB zip is a partial copy of it from before downloads got cut. My thought is that this original zip likely contained incriminating information and that is why it was halted.
What I’ve observed is that Akamai still serves that zip sporadically in small chunks. It’s really strange and I’m not sure why, but I have verified with strings that there are PDF file names in the zip data. I’ve been able to use a script to pull small chunks from the CDN across the entire span of the file’s byte range. Using the 49 GB file as a starting point I’m working on piecing the file together, however progress is extremely slow. If there is anyone willing to team up on this and combine the chunks, please let me know.
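For anyone wondering what "pulling chunks" looks like in practice, the core of it is an HTTP Range request against the CDN. Here is a minimal stand-alone sketch (stdlib only; the real script on Pastebin does much more, and this request will likely fail without the age-gate cookies and referer described in the steps below):

```python
import urllib.request

URL = "https://www.justice.gov/epstein/files/DataSet%209.zip"

def fetch_range(url, start, end, cookie_header=None):
    """Request bytes [start, end] of the remote file and return (status, data)."""
    req = urllib.request.Request(url)
    req.add_header("Range", f"bytes={start}-{end}")
    if cookie_header:
        # Cookie string exported from the browser after passing the age gate.
        req.add_header("Cookie", cookie_header)
    with urllib.request.urlopen(req, timeout=90) as resp:
        # 206 Partial Content means the range was honoured; 200 means it was ignored.
        return resp.status, resp.read()

# Example: ask for 1 MiB starting right where the 49 GB partial cuts off.
status, data = fetch_range(URL, 48995762176, 48995762176 + 1024 * 1024 - 1)
print(status, len(data))
```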
How to grab the chunked data:
Script link: https://pastebin.com/9Dj2Nhyb
To use the script you will probably have to do the following:
Grab DATASET 9, INCOMPLETE AT ~48GB:
magnet:?xt=urn:btih:0a3d4b84a77bd982c9c2761f40944402b94f9c64&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce
Then name the downloaded file 0-(the last byte the file spans).bin
So for example for the 48 GB file it would be:
0-48995762175.bin
Next to the Python script make a directory called:
DataSet 9.zip.chunks
Move the renamed first-byte-range 48 GB file into that directory.
Make a new file next to the script called:
cookies.txt
Install the Cookie-Editor browser extension (https://cookie-editor.com/)
With the browser extension open go to: https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip
The download should start in your browser, cancel it.
Export the cookies in Netscape Format. They will copy to your clipboard.
Paste those in your cookies.txt, save and close it.
You can run the script like so:
python3 script.py \
  'https://www.justice.gov/epstein/files/DataSet%209.zip' \
  -o 'DataSet 9.zip' \
  --cookies cookies.txt \
  --retries 3 \
  --backoff 5.0 \
  --referer 'https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip' \
  -t auto -c auto
Script Options:
-t - The number of concurrent threads to use, which results in trying that many byte ranges at the same time. Setting this to auto will auto-calculate based on your CPU but caps at 8 to be safe and avoid getting banned by Akamai.
-c - The chunk size to request from the server, in MB. This is not always respected by the server and you may get a smaller or larger chunk, but the script should handle that. Setting this to auto scales with the file size, though feel free to try different sizes.
--backoff - The backoff factor between failures; helps prevent Akamai from throttling your requests.
--retries - The number of times to retry a byte range in that iteration before moving on to the next byte range. If it moves on, it will come back to it again on the next loop.
--cookies - The path to the file containing your Netscape-formatted cookies.
-o - The final file name. The chunks directory name is derived from this, so make sure it matches the name of the chunk directory that you primed with the torrent download.
--referer - Sets the Referer HTTP header; just leave this as-is for Akamai.
There are more options if you run the script with the --help option.
If you start to receive HTML and/or HTTP/200 responses then you need to refresh your cookie.
If you start to receive HTTP/400 responses then you need to refresh your cookie in a different browser; Akamai is very fussy.
A VPN and multiple browsers might be useful to change your cookie and location combo.
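Once you have a pile of byte-range chunks on disk, the assembly step is conceptually just seeking to each chunk's start offset and writing it into the output file. The script handles this itself; the following is only a rough sketch of the idea, assuming the <start>-<end>.bin naming convention described above (the output name here is hypothetical):

```python
import re
import shutil
from pathlib import Path

CHUNK_DIR = Path("DataSet 9.zip.chunks")   # directory primed with the torrent download
OUTPUT = Path("DataSet 9.rebuilt.zip")     # hypothetical output name

def assemble(chunk_dir: Path, output: Path) -> None:
    """Write every <start>-<end>.bin chunk into the output at its starting byte offset."""
    named = []
    for p in chunk_dir.glob("*.bin"):
        m = re.fullmatch(r"(\d+)-(\d+)\.bin", p.name)
        if m:
            named.append((int(m.group(1)), p))
    with open(output, "wb") as out:
        for start, path in sorted(named):
            out.seek(start)                    # jump to where this range belongs
            with open(path, "rb") as src:
                shutil.copyfileobj(src, out)   # stream, so 48 GB chunks never sit in RAM

assemble(CHUNK_DIR, OUTPUT)
```

Ranges that have not been recovered yet simply remain as gaps of zero bytes, so the result only becomes a valid zip once the whole byte range is covered.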
Edit
I tested the script on Dataset 8 and it was able to stitch a valid zip together, so assuming we’re getting valid data with Dataset 9 it should work.
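If anyone wants to run the same sanity check on their own stitched archive, Python's zipfile module can test it without extracting anything; a quick sketch (the file name is just an example):

```python
import zipfile

path = "DataSet 8.zip"  # example file name
if not zipfile.is_zipfile(path):
    print("no end-of-central-directory record found; the archive is incomplete or truncated")
else:
    with zipfile.ZipFile(path) as zf:
        bad = zf.testzip()  # reads every member and verifies its CRC
        print("archive OK" if bad is None else f"first corrupt member: {bad}")
```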
Awesome, I don’t really understand what’s happening but I’m also running it (also doing it for the presumably exact same 48 GB torrent, but I’m supposed to do that, right?)
this method is not working for me anymore
Yeah :/ I haven’t been able to pull anything in a while now.
I was just able to pull 6 chunks, the data is still out there!
I messaged you on the other site; I’m currently getting a Could not determine Content-Length (got None) error.
What happens when you go to https://www.justice.gov/epstein/files/DataSet%209.zip in your browser?
Age gate > page not found.
Yeah when I run into this I’ve switched browsers and it’s helped. I’ve also switched IP addresses and it’s helped.
alrighty, I’m currently in the middle of the archive.org upload but I can transfer the chunks I already have over to a different machine and do it there with a new IP
I was also getting the same error. Going to the link in the browser successfully starts a download.
Updating the cookies fixed the issue.
Can also confirm, receiving more chunks again.
EDIT: Someone should play around with the retry and backoff settings to see if a certain configuration can avoid being blocked for a longer period of time. IP rotating is too much trouble.
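For anyone who does experiment with those settings, the knob being tuned is essentially an exponential delay between attempts. A tiny illustrative sketch of how a backoff factor turns into wait times (the numbers are examples, not taken from the script):

```python
import random

def backoff_delays(backoff=5.0, retries=5):
    """Delay before attempt n: backoff * 2**n seconds, plus jitter so requests don't line up."""
    return [backoff * (2 ** n) + random.uniform(0, 1) for n in range(retries)]

print([round(d, 1) for d in backoff_delays()])
# roughly [5.x, 10.x, 20.x, 40.x, 80.x] -> longer cool-downs the more often the CDN refuses us
```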
Updated the script to display information better: https://pastebin.com/S4gvw9q1
It has one library dependency so you’ll have to do:
I haven’t been getting blocked with this:
python script.py 'https://www.justice.gov/epstein/files/DataSet%209.zip' -o 'DataSet 9.zip' --cookies cookie.txt --retries 2 --referer 'https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip' --ua '<set-this>' --timeout 90 -t 16 -c auto
The new script can auto-set threads and chunks; I updated the main comment with more info about those.
I’m setting the --ua option, which lets you override the User-Agent header. I’m making sure it matches the browser that I use to request the cookie.
I had the script crash at line 324:
BadStatusLine: HTTP/1.1 0 Init
EDIT: It’s worth noting that about every time I (re)start it after seemingly being blocked for a bit, I get about 1 GB more before it slows WAY down (no server response).
EDIT: It looks to me that if I’m getting only FAILED: No server response, stopping the script for a minute or two and then restarting immediately garners a lot more results. I think having a longer pause after many failures might be worth looking at. – I’ll play around a bit.
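One way to test that idea would be a small wrapper that reruns the script and cools off for a couple of minutes between runs; a hypothetical sketch (the command line mirrors the one posted above, and the two-minute pause is just a guess):

```python
import subprocess
import time

CMD = [
    "python3", "script.py",
    "https://www.justice.gov/epstein/files/DataSet%209.zip",
    "-o", "DataSet 9.zip",
    "--cookies", "cookies.txt",
    "--retries", "2",
    "--referer", "https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip",
    "-t", "auto", "-c", "auto",
]

for run in range(20):          # bounded so it doesn't loop forever
    subprocess.run(CMD)        # let the script run until it exits or stalls out
    print(f"run {run + 1} ended, cooling off for two minutes before restarting")
    time.sleep(120)
```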
Gonna grab some tea, then get back at it. Will update when I have something.
Thanks for this!
EDIT: This works quite well. Getting chunks right off the bat. About 1 per second, just guessing.
I would be interested in obtaining the chunks that you gathered and stitching them together with what I gathered.
Nor I. I got a single chunk back and then never got anything again.
I’m using a partial download I already had, not the 48 GB version, but I will be gathering as many chunks as I can as well. Thanks for making this.
how big is the partial that you managed to get?
about 25gb