The Word2Vec model used is the Skip-Gram variant, trained on a small chunk of Wikipedia articles (the text8 dataset). Word2Vec is a popular word-embedding technique that represents words as dense, low-dimensional vectors whose geometry reflects semantic similarity: words that appear in similar contexts end up with nearby vectors.
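At the heart of Skip-Gram is a simple data-preparation step: each word in the corpus is treated as a center word, and the model is trained to predict the words within a fixed-size window around it. As a minimal sketch (using a hypothetical toy corpus rather than text8, and only the pair-generation step, not the actual embedding training):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) training pairs as used by the Skip-Gram objective."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)                      # left edge of the context window
        hi = min(len(tokens), i + window + 1)        # right edge (exclusive)
        for j in range(lo, hi):
            if j != i:                               # skip the center word itself
                pairs.append((center, tokens[j]))
    return pairs

corpus = "the quick brown fox".split()
print(skipgram_pairs(corpus, window=1))
# → [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#    ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```

In a full implementation (e.g. gensim's `Word2Vec` with `sg=1`), these pairs drive a shallow network that learns one vector per word by maximizing the probability of observed context words.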