Clean up canonical URL & robots crawler policy #439

Open
1 of 4 tasks
SteelWagstaff opened this issue Jan 14, 2022 · 0 comments
SteelWagstaff commented Jan 14, 2022

At present, Google is confused about what our Directory is and where it should lead people. Examples below are from Travis' SEO presentation (https://docs.google.com/presentation/d/1EkKPiWVT8A_CjRGassHXKSpyG_cmBDSMHS54EyMe_Bk/edit#slide=id.gf8eed1d0f8_0_45).

(Screenshots from 2022-01-14 showing the Directory's Google search results)

Our goal should be to remove confusing/misleading links from Google search results. To do this, we should:

  • Ensure that https://pressbooks.directory is the site's only canonical URL, to help us with faceted navigation and avoid wasting our crawl budget
  • Create a new robots.txt that tells search engines to 'noindex' all other router URLs for the Directory (filters, etc.)
  • Make sure staging and dev sites are completely 'noindexed'
  • Have the marketing team reach out to people linking to the staging Directory and ask them to link to the production Directory instead
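A minimal sketch of what this could look like. The paths and hostnames below are illustrative assumptions, not confirmed Directory routes; note also that Google no longer honors a `noindex` directive inside robots.txt itself, so robots.txt can only control crawling, while de-indexing needs a per-page meta tag or response header:

```
# robots.txt served at https://pressbooks.directory/robots.txt
# Block crawling of faceted-navigation URLs (query strings) to save crawl budget
User-agent: *
Disallow: /*?*

Sitemap: https://pressbooks.directory/sitemap.xml
```

```html
<!-- In the <head> of every Directory page: point all variants at the one canonical URL -->
<link rel="canonical" href="https://pressbooks.directory/">
<!-- On filter/router pages and on all staging/dev pages: -->
<meta name="robots" content="noindex">
```

For staging and dev sites, a blanket `X-Robots-Tag: noindex` HTTP response header at the web-server level is another option, since it covers non-HTML responses as well.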