Neither Git nor Mercurial handles large binary files very well. Both assume that the files being tracked are relatively small and easily diffable, and PDF files are neither. If you've already run git gc, your repository isn't going to get much smaller than it already is.
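As a quick sanity check, you can see how much space the object database is actually using and whether a more aggressive repack buys you anything; both flags below are standard Git:

```
# How much space is the object database using?
git count-objects -vH

# A more aggressive repack, in case the default gc settings left slack
git gc --aggressive --prune=now
```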
If you don't want a third-party solution, you can mitigate this in Git with submodules. Where it makes sense, you can split the large files in your repository out into submodules and clone them separately: clone the master project to get all the submodule references, then clone each submodule only as you need it.
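A sketch of that workflow might look like the following; the repository URLs and the pdfs path are placeholders for whatever split makes sense in your project:

```
# In the parent repository: track the PDFs in their own repo as a submodule
# (the URL and the "pdfs" path are placeholders)
git submodule add https://example.com/repos/pdfs.git pdfs
git commit -m "Move PDFs into a submodule"

# Consumers clone the parent project without the PDF content...
git clone https://example.com/repos/project.git
cd project

# ...and pull in the submodule only when they actually need it
git submodule update --init pdfs
```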
However, as you suspect, git annex is probably the best solution. It's an artifact repository, somewhat like bfiles for Mercurial. Artifact repositories are designed for large, binary, non-diffable files: they manage retrieval of the actual content, while Git or Mercurial only tracks references to it. That way, when you clone, you get just the references, and fetching the artifacts is a separate step you perform as needed.
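A minimal git-annex workflow looks roughly like this (thesis.pdf is just an example file name):

```
# One-time setup in an existing Git repository
git annex init "work laptop"

# Adding a large file stores the content in the annex;
# Git itself commits only a small symlink to it
git annex add thesis.pdf
git commit -m "Add thesis.pdf to the annex"

# In a fresh clone the symlink is broken until you
# explicitly fetch the content
git annex get thesis.pdf
```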
If you go down one of these routes, you might also want to consider rewriting history to remove all the previously committed PDFs and move them into submodules or git annex. If you don't, your repository will always be at least as large as it is now, because the old objects remain in its history.
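One common recipe for that uses git filter-branch with an index filter (the "*.pdf" pattern is an example; adjust it to match where your binaries actually live):

```
# Drop every PDF from every commit on every branch
git filter-branch --index-filter \
    'git rm --cached --ignore-unmatch "*.pdf"' \
    --prune-empty -- --all

# The old objects linger in the reflog until it expires,
# so expire it now and repack
git reflog expire --expire=now --all
git gc --prune=now
```

Keep in mind that this rewrites every commit ID, so anyone with an existing clone of the repository will need to re-clone afterwards.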
As a side note, the reason git gc did not reduce the repository size is that Git's garbage collection only removes unreferenced objects and compacts loose objects into pack files. Since your PDFs are all referenced and don't compress well inside pack files, the repository couldn't have gotten much smaller.
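If you want to see this for yourself, you can list the largest objects in the pack files and trace them back to file names (replace <sha1> with a hash from the first command's output):

```
# List the three largest objects in the pack, by object size
git verify-pack -v .git/objects/pack/pack-*.idx \
    | sort -k 3 -n \
    | tail -3

# Map one of those SHA-1s back to a path
git rev-list --objects --all | grep <sha1>
```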