* Add a `robots.txt` file to prevent crawlers from scraping the site
* Add an `ASSET_RIGHTS` entry to config.yaml to control whether `/robots.txt` is served
* Always import robots.py and determine the config in the route function (see the sketch below)
* Finish writing a comment
* Remove a redundant import and config
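The bullets above describe a small web-host change: whether `/robots.txt` is answered at all is gated on an `ASSET_RIGHTS` entry in config.yaml, and that setting is looked up inside the route function rather than at import time. Below is a minimal, hypothetical Flask sketch of that pattern; the standalone `app` object, the hard-coded config value, the gate's polarity, and the use of the static folder are illustrative assumptions, not the repository's actual `robots.py`.

```python
# Minimal sketch (not the actual webhost code): a Flask route that gates
# /robots.txt on an ASSET_RIGHTS config entry. Names and polarity are
# assumptions for illustration.
from flask import Flask, abort, send_from_directory

app = Flask(__name__)
# In the real webhost this value would come from config.yaml; hard-coded here.
app.config["ASSET_RIGHTS"] = False


@app.route("/robots.txt")
def robots():
    # Read the config inside the route so the setting is resolved at request
    # time, not when the module is imported.
    if app.config["ASSET_RIGHTS"]:
        # Assumed polarity: if the site has rights to its assets, no
        # robots.txt is needed, so crawlers get a 404. The real project may
        # gate this the other way around.
        abort(404)
    # Otherwise serve a static robots.txt that disallows crawling.
    return send_from_directory(app.static_folder, "robots.txt")
```

Resolving the flag inside the route matches the "always import robots.py, determine config in route function" bullet: the module can be registered unconditionally, and the config value only matters when a crawler actually requests the file.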
| Name |
|---|
| img |
| network diagram |
| CODEOWNERS |
| adding games.md |
| apworld specification.md |
| code_of_conduct.md |
| contributing.md |
| network protocol.md |
| options api.md |
| running from source.md |
| settings api.md |
| style.md |
| tests.md |
| triage role expectations.md |
| webhost configuration sample.yaml |
| world api.md |
| world maintainer.md |