We just released a new internal linking relevance model built on updated transformer-based embeddings (the T in GPT). We evaluated a range of transformer models and kept the ones that performed best.
Our model assesses link relevance using two different approaches:
  • AI embeddings
  • SERP overlap
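To make the two signals concrete, here is a minimal sketch of how they might be computed. All names, vectors, and URL sets below are hypothetical illustrations, not our actual implementation: embedding relevance is shown as cosine similarity between page embeddings, and SERP overlap as Jaccard similarity between the sets of URLs ranking for each page's target query.

```python
import math

def cosine_similarity(a, b):
    """Embedding relevance: cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def serp_overlap(serp_a, serp_b):
    """SERP overlap: Jaccard similarity of the URL sets ranking for each query."""
    a, b = set(serp_a), set(serp_b)
    return len(a & b) / len(a | b)

# Hypothetical data for a source page and a candidate link target
source_embedding = [0.12, 0.85, 0.33]
target_embedding = [0.10, 0.80, 0.40]

source_serp = ["example.com/a", "example.com/b", "example.com/c"]
target_serp = ["example.com/b", "example.com/c", "example.com/d"]

embedding_score = cosine_similarity(source_embedding, target_embedding)
overlap_score = serp_overlap(source_serp, target_serp)
```

In a sketch like this, the two scores could then be combined (for example, by a weighted average) into a single link-relevance value.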
This new approach delivers a significant improvement over our traditional embedding-based relevance model. To activate it, navigate to:
Settings > System > EMBEDDING_VIA_TRANSFORMER