User:Clouserw/AMO/Validator:v2
Warning: Please don't add anything else here! This has been organized into a more formal spec (link below). You can add info there if you have input.
Version 1 of our add-on validator has been a great success and has really proven the value of an automated validation system. This year we need to take it to the next level with some new features, smoothing out some rough edges, and making the code rock solid.
Version 1 stuff:
Version 2
Idea List
- Written in Python
- Written so that it's portable, e.g. ./validate.py addon.xpi (see the CLI sketch after this list)
- It would be nice to be able to adjust some checks (checksums, etc.) via the web, but those won't be included if we provide a portable distribution.
- Supports offline processing via gearman so we don't hold up server threads waiting on processing and we don't hit memory/time limits (see the gearman sketch after this list).
- Progress should be checked via ajax
- We should be able to see how many add-ons are in the queue at any time. If it's ever a large number we'll get more processing power.
- Jetpack verification support
- Better L10n support (there are a couple old bugs about this, and we need to update the external)
- Add-ons for unit tests can be built on the fly. Jbalogh has a short script that this can be based on (see the XPI-building sketch after this list).
- Levels of flagging? Like, a minimum level and a super picky level. Or would this just be too confusing and everyone would always only use one?
- Real time virus scanning
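A minimal sketch of what the portable CLI entry point could look like. The file name validate.py matches the invocation above, but the checks shown (and the Result tuple shape) are only illustrative assumptions, not the actual tool:

#!/usr/bin/env python
# Hypothetical sketch of the portable entry point: ./validate.py addon.xpi
import sys
import zipfile

def validate(xpi_path):
    """Return a list of (severity, message) findings for one XPI."""
    results = []
    if not zipfile.is_zipfile(xpi_path):
        return [("error", "%s is not a valid ZIP/XPI archive" % xpi_path)]
    xpi = zipfile.ZipFile(xpi_path)
    if "install.rdf" not in xpi.namelist():
        results.append(("error", "install.rdf is missing"))
    # Further checks (JS patterns, locales, checksums, ...) would plug in here.
    xpi.close()
    return results

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: ./validate.py addon.xpi")
    for severity, message in validate(sys.argv[1]):
        print("%s: %s" % (severity, message))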
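A rough sketch of the gearman hand-off, assuming the python-gearman library; the task name "validate_addon" and the reuse of validate() from the CLI sketch are assumptions:

# Worker process: pulls validation jobs off the gearman queue.
import json
import gearman

def handle_validate(worker, job):
    # Assumption: job.data carries the path of the uploaded XPI.
    return json.dumps(validate(job.data))  # reuse validate() from the CLI sketch

worker = gearman.GearmanWorker(["localhost:4730"])
worker.register_task("validate_addon", handle_validate)
worker.work()  # block and process jobs forever

On the web side, a GearmanClient would submit_job("validate_addon", path, background=True) so the request thread isn't held up, and the page would then poll for the stored result via Ajax.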
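For the on-the-fly test add-ons, here is a guess at the shape such a helper could take (this is not Jbalogh's script, just a stdlib-only sketch):

# Hypothetical sketch of building a throwaway XPI in memory for unit tests.
import io
import zipfile

INSTALL_RDF = """<?xml version="1.0"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>test@example.com</em:id>
    <em:version>1.0</em:version>
  </Description>
</RDF>
"""

def build_test_xpi(extra_files=None):
    """Return a file-like object holding an XPI with install.rdf plus extras."""
    buf = io.BytesIO()
    xpi = zipfile.ZipFile(buf, "w")
    xpi.writestr("install.rdf", INSTALL_RDF)
    for name, contents in (extra_files or {}).items():
        xpi.writestr(name, contents)
    xpi.close()
    buf.seek(0)
    return buf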
Jorge suggests:
- Be smart about caching: only clear validation results cache when the file is modified or the validator is modified. Maybe clearing the validation cache could be added as a step for production pushes.
- Give priority to add-on uploads (by authors) in the validation queue.
- Give severity weight to warnings and sort them by severity.
- Show help text next to the warnings, instead of on a separate page.
- Recognize common validation patterns and make it easier to add them. For example (see the rules sketch after this list):
- [ file mask (*.js), what to look for (/eval/), severity (high), grouping ].
- (the grouping would be for grouping similar results together and showing a link to the validation page)
- Easy to download and distribute (to developers).
- Note that we already offer the validation tool to add-ons that aren't hosted on AMO.
- Smarter JS library recognition. Use common internal patterns instead of just file names.
- Nils' validator, written in Python. We should definitely coordinate with him and see how we can integrate his ideas.
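A guess at what the rule format above could boil down to; the field layout, the sample rules, and the apply_rules() helper are illustrative only:

# Each rule is just [file mask, what to look for, severity, grouping].
import fnmatch
import re

RULES = [
    ("*.js",  re.compile(r"\beval\s*\("),          "high", "dangerous-js"),
    ("*.xul", re.compile(r'src="(https?|ftp)://'), "high", "remote-scripts"),
]

def apply_rules(filename, contents):
    """Yield (severity, group, snippet) for every rule that matches this file."""
    for mask, pattern, severity, group in RULES:
        if fnmatch.fnmatch(filename, mask):
            for match in pattern.finditer(contents):
                yield severity, group, match.group(0)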
Ian suggests:
- It seems like Redis is going to come into the mix in Zamboni, and there's already Cucumber, so one of those would be appropriate for the queuing.
- Presumably in stand-alone mode everything is just done immediately, no queuing (i.e., the queuing is really a separate system).
- Jetpack has a bunch of Python tools for building stuff; I don't know if they have validation too.
- I would flag everything and only filter in the UI.
- As a result, it might be good to use some structured output from the validate script (also maybe more easily extendable by unit tests?). Maybe just use the XUnit output format.
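A sketch of what XUnit-style output could look like using only the standard library; the element layout follows the common JUnit XML convention and isn't a confirmed format:

# Emit validator findings as XUnit-ish XML so the UI (and tests) can filter them.
from xml.etree import ElementTree as ET

def to_xunit(results):
    """results is a list of (severity, message) tuples; everything is flagged
    here and filtering by severity happens in the UI."""
    suite = ET.Element("testsuite", name="addon-validator",
                       tests=str(len(results)), failures=str(len(results)))
    for severity, message in results:
        case = ET.SubElement(suite, "testcase", name=severity)
        failure = ET.SubElement(case, "failure", type=severity)
        failure.text = message
    return ET.tostring(suite)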
Nils suggests:
- Make it a priority to support a web validator. A downloadable standalone tool is nice to have but not really required. A web validator only has to be set up once, by people who know their stuff, while a standalone tool would require regular folks to set things up themselves.
- Instead of a standalone Python tool, consider a toolkit XPI extension. This would make it easier for folks to use, and jshydra could additionally be shipped with it as a binary component.
- Add "error" severity. AMO shouldn't accept uploads having errors.
- All JavaScript verification tests should be migrated from regexps to jshydra, e.g. /eval/ to JSOP_CALL/TOK_NAME(eval).
- Make it easier to write jshydra tests by implementing some sort of query language (e.g. ASTQuery, similar to XQuery for the DOM)
- [error] Verify all locales are "complete". See Compare-locales. Incomplete locales should be regarded as an "error", as all users of that locale would see a yellow-screen-of-XML-error, or missing information/errors in the case of string bundles. Locale errors often go unnoticed during development and reviews, as usually only particular locales are affected. Often DTD/properties files have the wrong encoding and hence fail to load (see the locale sketch at the end of this list).
- [error] Check XUL/XML for parsing errors. With all those chrome:-URLs (DTDs) it might be tricky to do so.
- [error] Check for any remote (https?|ftp) script sources in XUL
- [warn] Check global symbols in overlays only. See the checkpollutions prototype (Python + jshydra + jshydra patches + jshydra script patches)
- [error] Blocklist certain global symbols, see checkpollutions prototype for a sample list.
- [warn] base64 encoding should use atob()/btoa() instead of a custom implementation (check for b64/base64 and variations in symbols and/or file names)
- [warn] MD5/SHA-1/SHA-2* hashing should use nsICryptoHash.
- [warn] Dynamic creation of any xul:script/html:script tags. These should draw special attention from reviewers.
- [warn] Use xul:prefwindow. This can be checked by looking at the resource behind optionsURL.
- [error] Instead of just white-listing particular libraries (and versions) support blacklisting those with known defects or which heavily pollute the global namespace.
- [warn] Check for uneval(), as this is likely used as an insecure replacement for JSON.stringify.
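A very rough sketch of the locale-completeness idea for DTD files; compare-locales does this properly (and also handles .properties files), this only shows the shape of the check, with en-US assumed as the reference locale:

# Compare entity names in a translated DTD against the reference locale (en-US).
import re

ENTITY_RE = re.compile(r"<!ENTITY\s+([\w.-]+)")

def dtd_keys(dtd_text):
    return set(ENTITY_RE.findall(dtd_text))

def missing_entities(reference_dtd, locale_dtd):
    """Entity names present in en-US but absent from the translation;
    any hit should be reported as an [error]."""
    return dtd_keys(reference_dtd) - dtd_keys(locale_dtd)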