It would be awesome, and probably not a huge leap, if we could pass a sitemap for an entire website rather than a single page and have monolith create multiple individually downloaded files, updating the links between the captured pages so they refer to each other. That would effectively download a whole site rather than one page at a time.
I absolutely love the idea!
It may be better to keep that as a separate tool that uses Monolith under the hood, but I don't like scripts too much myself; they tend to be badly tested and glitch a lot. Let me think more about this. It's something that could be done in conjunction with filename templates (auto-generating file names based on page titles, currently in development).
I'm glad you like the idea. I'm looking forward to seeing what you come up with.
Just for some more background: I could write a simple little app to automate monolith, but my concern was that the links between pages (i.e. between monolith runs) might get out of sync. Although if I carefully juggled the directory structure of the output files, I think I could pull it off.
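To make the idea concrete, here is a rough sketch of what such a wrapper could look like. This is purely hypothetical, not part of monolith: the filename scheme (`local_name`) and the naive string-based link rewriting are illustrative assumptions, and `monolith <url> -o <file>` is assumed to be the single-page invocation.

```python
# Hypothetical wrapper sketch: capture every page listed in a sitemap with
# monolith, then rewrite inter-page links so the saved files reference each
# other locally. Not production code; the naming and rewriting schemes are
# placeholder assumptions.
import re
import subprocess
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def local_name(url):
    """Map a page URL to a flat local filename (hypothetical scheme)."""
    path = urlparse(url).path.strip("/") or "index"
    return re.sub(r"[^A-Za-z0-9._-]", "_", path) + ".html"

def sitemap_urls(xml_text):
    """Extract <loc> entries from a standard sitemap.xml document."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", ns)]

def rewrite_links(html, mapping):
    """Point hrefs at the local copies of other captured pages.

    A real tool would parse the HTML instead of doing plain string
    replacement; this is just the simplest possible illustration.
    """
    for url, fname in mapping.items():
        html = html.replace(f'href="{url}"', f'href="{fname}"')
    return html

def archive(urls):
    """Run monolith once per URL, then fix up the links between outputs."""
    mapping = {url: local_name(url) for url in urls}
    for url, fname in mapping.items():
        # Assumes the usual single-page CLI invocation with -o for output.
        subprocess.run(["monolith", url, "-o", fname], check=True)
    for fname in mapping.values():
        with open(fname, encoding="utf-8") as f:
            html = f.read()
        with open(fname, "w", encoding="utf-8") as f:
            f.write(rewrite_links(html, mapping))
```

The tricky part the comment above alludes to is exactly the `mapping` step: every page's local filename has to be decided up front, before any rewriting, so that links between monolith runs stay consistent.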
That said, I think it would be a cool aspect of your project, which is already close to that feature. :)
Nice CLI tool, thanks!