If you don't need it to be easily parseable, you can go to your saved items and just keep scrolling until there are no pages left (this may be an RES feature), then save the page as HTML and/or print it to PDF.
So this is just about getting saved comments and posts?
Edit: It's nice that it can parse the links out of the CSV file, rather than having to copy each row and delete the ID at the start of it.
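If you'd rather do that step yourself, here's a minimal Python sketch that pulls the permalinks out of the export CSV. It assumes the file is named `saved_posts.csv` and has a `permalink` column; the column name and filename may differ in your export.

```python
import csv

# A minimal sketch, assuming the export file is "saved_posts.csv"
# with a "permalink" column (names may vary by export).
with open("saved_posts.csv", newline="", encoding="utf-8") as f:
    links = [row["permalink"] for row in csv.DictReader(f) if row.get("permalink")]

# One URL per line, e.g. for pasting into JDownloader's link grabber.
with open("saved_links.txt", "w", encoding="utf-8") as out:
    out.write("\n".join(links))
```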
Also, make sure to click "Decline" during the JDownloader install, as that prompt is for marketing offers, not regular Ts&Cs.
Also check out https://kbin.social/m/RedditMigration/t/47320/PSA-If-you-have-more-than-1000-posts-more-than which explains how to get your own posts and comments; it's useful for folks waiting in frustration while their GDPR requests go unanswered. See https://kbin.social/m/RedditMigration/t/50981/Reddit-Data-Retrieval-Request-timeline-thread for those of us who are still waiting.
I didn't use saved posts, so I didn't need to export those, but it's enough to change one line in the textposts.py script from "reddit.user.me().submissions.new(" to "reddit.user.me().saved(" to pull those down.
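For anyone who doesn't want to edit the script, that one-line change boils down to something like the PRAW sketch below. The credentials are placeholders (use whatever script-type app setup textposts.py already relies on), and note that `.saved()` returns both submissions and comments, so you may want to filter by type.

```python
import praw

# A minimal sketch, assuming a script-type Reddit app; credentials are placeholders.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="saved-items-export by u/YOUR_USERNAME",
)

# .saved() yields both Submission and Comment objects, newest first,
# and the API only goes back roughly 1000 items.
for item in reddit.user.me().saved(limit=None):
    if isinstance(item, praw.models.Submission):
        print("post:", item.title, item.permalink)
    else:
        print("comment:", item.permalink)
```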
Alas, if you have more than 1000 saved posts, there's no good way aside from the GDPR archive to find them - saved posts are private to a user, so Pushshift wouldn't have a copy of them.