The best Hacker News comments are written with a complete disregard for the linked article.
hncynic
is an attempt at capturing this phenomenon by training a model to predict
Hacker News comments just from the submission title. More specifically, I trained a
Transformer encoder-decoder model on
Hacker News data.
In my second attempt, I also included data from Wikipedia.
The generated comments are fun to read, but they often turn out to be meaningless or contradictory -- see here for some examples generated from recent HN titles.
There is a demo live at https://hncynic.leod.org/.
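
As an illustration of what the model does, here is a minimal sketch of querying a trained checkpoint for comments via OpenNMT-tf's Python API. The paths, vocabulary file, and base-Transformer choice are assumptions for the example, not the repo's actual layout:

```python
import opennmt

# Hypothetical paths; the repo's real checkpoint/vocab layout may differ.
config = {
    "model_dir": "checkpoints/hn",
    "data": {
        "source_vocabulary": "vocab.txt",
        "target_vocabulary": "vocab.txt",
    },
}

model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)

# One (tokenized) submission title per line; the model decodes a comment for each.
with open("titles.txt", "w") as f:
    f.write("Show HN: I rewrote my startup in Rust\n")
runner.infer("titles.txt", predictions_file="comments.txt")
```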
Train a model on Hacker News data only:
- data: Prepare the data and extract title-comment pairs from the HN data dump (first sketch after this list).
- train: Train a Transformer translation model on the title-comment pairs using TensorFlow and OpenNMT-tf (second sketch below).
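
To make the data step concrete, here is a sketch of extracting title-comment pairs from a JSON-lines HN dump. The field names follow the public HN item schema (`type`, `id`, `title`, `parent`, `text`); the exact dump format and the repo's real preprocessing (HTML stripping, tokenization) are assumptions here:

```python
import json

def extract_pairs(dump_path):
    """Yield (title, comment) pairs for top-level comments.

    Assumes one JSON item per line in the public HN item schema.
    Replies (comments whose parent is another comment) are skipped,
    matching the note below that only top-level comments are used.
    """
    titles = {}  # story id -> title
    with open(dump_path) as f:
        for line in f:
            item = json.loads(line)
            if item.get("type") == "story" and item.get("title"):
                titles[item["id"]] = item["title"]

    with open(dump_path) as f:  # second pass: pair comments with their stories
        for line in f:
            item = json.loads(line)
            if item.get("type") == "comment" and item.get("parent") in titles:
                yield titles[item["parent"]], item.get("text", "")
```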
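And for the train step, a minimal sketch using OpenNMT-tf's Python API (the repo may drive this through the CLI instead; file names are placeholders, and the vocabularies are assumed to have been built beforehand, e.g. with `onmt-build-vocab`):

```python
import opennmt

config = {
    "model_dir": "checkpoints/hn",
    "data": {
        "source_vocabulary": "vocab.txt",
        "target_vocabulary": "vocab.txt",
        "train_features_file": "hn.titles.txt",   # one title per line
        "train_labels_file": "hn.comments.txt",   # matching comment per line
    },
}

model = opennmt.models.TransformerBase()  # base Transformer; model size is an assumption
runner = opennmt.Runner(model, config, auto_config=True)
runner.train()
```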
Train a model on Wikipedia data, then switch to Hacker News data:
- data-wiki: Prepare data from Wikipedia articles.
- train-wiki: Train a model to predict Wikipedia section texts from titles.
- train-wiki-hn: Continue training on HN data (see the sketch after this list).
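
The switch-over in train-wiki-hn amounts to pointing the Wikipedia-trained checkpoint at the HN pairs. A sketch, assuming OpenNMT-tf's behavior of resuming from the latest checkpoint in `model_dir` and a vocabulary shared between the two runs (paths are hypothetical):

```python
import opennmt

config = {
    "model_dir": "checkpoints/wiki",  # holds the Wikipedia-trained checkpoint
    "data": {
        "source_vocabulary": "vocab.txt",  # same vocab as the wiki run (assumed shared)
        "target_vocabulary": "vocab.txt",
        # Swap the training files to HN title/comment pairs.
        "train_features_file": "hn.titles.txt",
        "train_labels_file": "hn.comments.txt",
    },
}

model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True)
runner.train()  # resumes from the wiki checkpoint, now on HN data
```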
Ideas:
- Acquire GCP credits, train for more steps.
- It's probably not ideal to use encoder-decoder models. In retrospect, I should have trained a language model instead, on data like `title <SEP> comment` (see the sketch below).
- I've completely excluded HN comments that are replies from the training data. It might be interesting to train on these as well.
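
For the language-model idea, the data formatting itself would be a one-liner; a hypothetical sketch of what the training lines could look like:

```python
def lm_line(title: str, comment: str, sep: str = "<SEP>") -> str:
    """Format one language-model training example: the model conditions on
    the title plus separator and learns to continue with the comment.
    The <SEP> token comes from the idea above; the rest is illustrative."""
    return f"{title} {sep} {comment}"

print(lm_line("Show HN: hncynic", "This will never scale."))
# Show HN: hncynic <SEP> This will never scale.
```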