You definitely should do this in two steps:
1) Create the empty records. Write a small Python script to do that using the calibre APIs, and run it with calibre-debug script.py
2) Write a script to transfer the book files that avoids the calibre APIs, using file renames (as opposed to file copies/moves) plus direct access to the data table in metadata.db to populate it with the entries.
This should be about a day's work, and the import itself should finish all 300K books in a few hours. Do it first with a few thousand books to get a sense of the performance and feasibility.
Sample code for (1):
Code:
from calibre.library import db
from calibre.ebooks.metadata.book.base import Metadata

# Metadata takes the title and a list of authors
books = [Metadata('title1', ['author1']), Metadata('title2', ['author2']), ...]

db = db('path to library folder').new_api
for mi in books:
    # Creates an empty book record with no files attached
    db.create_book_entry(mi, apply_import_tags=False)
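To connect the two steps you will also want to remember which book id each source file was assigned. A rough variant of the loop above, assuming a hypothetical source_files dict that maps archive file paths to Metadata objects (create_book_entry returns the id of the newly created record):
Code:
import json

# source_files is hypothetical: maps each archive file path to its Metadata
id_map = {}
for path, mi in source_files.items():
    # Remember which book id was created for this file, for use in step (2)
    id_map[path] = db.create_book_entry(mi, apply_import_tags=False)

# Save the mapping for the step (2) script to consume
with open('id_map.json', 'w') as f:
    json.dump(id_map, f)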
For (2) you just need to create entries in the data table in metadata.db, which should be trivial, and rename the files into the calibre library using a naming scheme similar to the one calibre uses for its own files; see the sketch below.
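A minimal sketch of (2), to be run with calibre closed (back up metadata.db first). It assumes the id_map.json produced above, that the archive and the library are on the same filesystem (so renames are cheap), and the current layout of the data table (book, format, uncompressed_size, name). The naming scheme here is deliberately simplified, not calibre's exact one:
Code:
# Sketch only: run with calibre closed, and back up metadata.db first.
import json, os, sqlite3

LIBRARY = '/path/to/library/folder'  # hypothetical path
conn = sqlite3.connect(os.path.join(LIBRARY, 'metadata.db'))

with open('id_map.json') as f:
    id_map = json.load(f)  # source file path -> book id, from step (1)

for src, book_id in id_map.items():
    # calibre stores each book's folder (relative to the library root,
    # with / separators) in the path column of the books table
    path, = conn.execute('SELECT path FROM books WHERE id=?', (book_id,)).fetchone()
    fmt = os.path.splitext(src)[1][1:].upper()
    name = 'book'  # base filename without extension; calibre normally uses "Title - Author"
    dest_dir = os.path.join(LIBRARY, *path.split('/'))
    os.makedirs(dest_dir, exist_ok=True)
    size = os.path.getsize(src)
    # Rename instead of copy: instant on the same filesystem
    os.rename(src, os.path.join(dest_dir, name + '.' + fmt.lower()))
    conn.execute(
        'INSERT INTO data (book, format, uncompressed_size, name) VALUES (?, ?, ?, ?)',
        (book_id, fmt, size, name))

conn.commit()
conn.close()
Since sqlite3 batches all the inserts into a single transaction until commit(), the database work should be negligible next to the renames.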
However, running calibre with 300K entries is not going to be very performant. I suggest splitting up your archive into 5-10 libraries.