If you are looking for information about running the regression tests on your machine, see [instructions].
- Performance: recent additions of regression runners pushed us past the 2-hour mark! [Misha]
- Produce release report and normal report on a single run?
- Some other tests do not appear in the reports:
Short-term TODO list
If you'd like to work on any of these, or vote for them so that they get priority over others, please [write us]! To request features not present in this list, please post your query to the [Boost.Testing mailing list] -- this way we will have a chance to follow up/clarify it if needed.
- Split explicit-failures-markup.xml by library. Among other things, that would reduce the impact of erroneous checkins: only the invalid parts of the markup would be ignored, as opposed to the whole thing (http://article.gmane.org/gmane.comp.lib.boost.testing/2608).
- Update scheme for "libs/expected_results.xml".
- Detect broken status/explicit-failures-markup.xml and show a prominent message in reports. [Lesha]
- Same with CVS update errors
- Same with outdated tarball
- "=turkanis" (old) results appear everywhere in XML but not in the final reports.
- Clean up [Misha]
- "Unusables" generation
- Sudden updates for unchanged stuff
- Dirxion test results still reference libs for which there are no test results (link goes to "Page not found!"). [Misha] - check if this is still there.
- Turn Issues page into a "view". [Lesha]
- Allow the same runner ID for different platforms (include platform name into zip file name) [Misha]
- Submit Python bug with platform.system() on Windows 2003/no win32api [Lesha].
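Until that bug is filed and fixed, the call can be guarded. A minimal sketch (the `system_name` helper is hypothetical, not part of regression.py), falling back to `sys.platform` when the `platform` module misbehaves:

```python
import platform
import sys

def system_name():
    """Return a coarse OS name, tolerating a broken platform module
    (as reported for platform.system() on Windows 2003 / no win32api)."""
    try:
        name = platform.system()
        if name:
            return name
    except Exception:
        pass
    # Fallback based on sys.platform prefixes.
    if sys.platform.startswith("win"):
        return "Windows"
    if sys.platform.startswith("linux"):
        return "Linux"
    if sys.platform == "darwin":
        return "Darwin"
    return sys.platform
```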
- gzip/compress individual report pages if the browser supports it (http://groups-beta.google.com/group/google.public.support.general/browse_thread/thread/2218b3948e5ee27e).
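The idea can be sketched with the stdlib `gzip` module; `maybe_gzip` is a hypothetical helper (not part of the report server), which compresses a page only when the client's Accept-Encoding header advertises gzip support:

```python
import gzip
import io

def maybe_gzip(body, accept_encoding):
    """Compress a report page when the browser supports it.
    Returns (payload_bytes, content_encoding_or_None)."""
    if "gzip" in (accept_encoding or "").lower():
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="wb") as f:
            f.write(body)
        return buf.getvalue(), "gzip"
    # Client did not advertise gzip: send the page uncompressed.
    return body, None
```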
- Compiler errors are reported as linker output (see )
- I believe this is the result of using the "-d+2" option to run tests. This also shows up as failures without any error output. Because of the way the jam log processing works, it takes the last log entry as the failure, which without -d+2 would be correct (most of the time). AFAIK -grafik.
- Thanks for the research. We will postpone this then, hopefully switching to BBv2 will help somewhat. - Misha
- Sometimes (?) anonymous CVS access from regression.py doesn't work -- the CVS client complains about the need to login before doing checkout:
- Distinguish between "real" unexpected failures and failures that were present in the LKG release (http://article.gmane.org/gmane.comp.lib.boost.testing/1882).
- Teach process_jam_log to truncate large output (e.g. > 64Kb) (see http://article.gmane.org/gmane.comp.lib.boost.testing/2445)
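One possible truncation strategy keeps the head and the tail of the output, since the first errors and the final summary usually matter most. This is a Python sketch of the idea only (`truncate_output` is a hypothetical helper; process_jam_log itself is C++):

```python
def truncate_output(text, limit=64 * 1024):
    """Truncate oversized tool output, keeping the head and tail and
    noting how much was dropped from the middle."""
    if len(text) <= limit:
        return text
    half = limit // 2
    omitted = len(text) - 2 * half
    return (text[:half]
            + "\n...[%d characters truncated]...\n" % omitted
            + text[-half:])
```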
- Distinguish between failures in the library itself and failures because of dependencies: [Misha,Lesha]
- Automatic regression notifications through email.
- RSS feeds per runner/toolset
- Report warnings (need to hack process_jam_log)
- Introduce "Today View", showing only fresh results timestamped by the current date.
- build_monitor issues:
- Per toolset (failures, expected failures, n/a markup percentage, etc.)
- Libraries by portability
- Write up "Volunteering" page [Lesha]
- Regressions against the previous run
- Keep reports history [Misha]
- May be employ the standard backups scheme (e.g. keep hourly results for a day, daily results for a week, etc.)?
- Probably not going to work: zip = 55MB. 55MB*24 runs/day = 1320MB per day.
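The standard backup scheme mentioned above can be sketched as a retention function. `archives_to_keep` is hypothetical, assuming archive timestamps are available as datetime objects: keep every run from the last 24 hours, and only the newest run per day for the last week.

```python
import datetime

def archives_to_keep(timestamps, now):
    """Select which result archives to retain: all runs from the last
    24 hours, one run per day for the last 7 days, nothing older."""
    keep = set()
    daily = {}  # date -> newest timestamp on that day
    for ts in timestamps:
        age = now - ts
        if age <= datetime.timedelta(hours=24):
            keep.add(ts)
        elif age <= datetime.timedelta(days=7):
            d = ts.date()
            if d not in daily or ts > daily[d]:
                daily[d] = ts
    keep.update(daily.values())
    return keep
```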
- Collect/report tests run time to determine abusive tests/possibly report compilation time statistics across the compilers
- (low priority?) Make it easy for the library authors or library regression runners (like Martin for Spirit) to generate the reports.
- Suggestion: It used to be that one could see if a toolset passed all the tests in the summary because there was a smaller set of libraries. But now there's some scrolling up/down to figure out if a toolset is good enough for release. It would be nice to have an overall status at the top indicating a clean board. This could be related to the "Statistics" item above. -grafik
Building / Testing
- Incremental testing is not reliable:
- Tests marked as expected-to-fail are rerun. There is no point in rerunning tests if the library is marked as unusable or the test is marked as expected to fail on a particular toolset. BBv1 running in testing mode should accept the list of tests which are disabled.
- Obsolete tests (tests which do not exist any more) are still included in the test results. Tests which have been removed still have their results in the component directories.
- Jamfiles/rule files are not included as dependencies.
- bjam doesn't track dependencies if they were included as #include MACRO
- Tests are run for compilers for which they are known to fail.
- process_jam_log is a major source of complexity/fragility in the regression tool chain. Bjam / Boost.Build really needs to be able to dump its output directly as XML.
Recently resolved issues
- FIXED: Slow reports updating/long waiting time if a runner happened to upload results just after the new cycle started:
- FIXED: Check in new regression processor code into CVS
- FIXED: The links from the tester names give a "Page not found!"
- FIXED: The debug info about notes gets displayed in the details report 
- FIXED: Reports use ascii encoding, but need to use utf-8. UNDONE: Explicit markup has been modified locally to remove non-ascii characters; as a consequence, any changes to the explicit markup are not being picked up at the moment.
- DONE: Correctly upload the results to web site [Lesha]
- Transaction should end with "move/rename"
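The move/rename idea can be sketched as follows (`publish_results` is a hypothetical helper, not the actual handle_http.py code): write the uploaded data to a temporary name, then rename it into place in one step, so report readers never observe a half-written archive.

```python
import os

def publish_results(data, final_path):
    """Finish an upload transaction with a move/rename."""
    tmp_path = final_path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
    os.replace(tmp_path, final_path)  # atomic rename on POSIX filesystems
```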
- DONE: fix handle_http.py to redirect to index.html
- DONE: Changes in explicit failures markup don't trigger re-merging with "old" regression results (http://article.gmane.org/gmane.comp.lib.boost.devel/122753). [Misha]
- track "status/explicit-failures-markup.xml"
- track "libs/???.xml"
- DONE: Deleting results archive from ftp should delete from reports [Misha]
- DONE: Support for regular expressions in "something*rather" form (http://news.gmane.org/gmane.comp.lib.boost.testing) [Lesha]
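Patterns in "something*rather" form map directly onto the stdlib `fnmatch` module, which translates a glob into a regular expression. A hypothetical sketch (`name_matches` is illustrative, not the actual report-tool code):

```python
import fnmatch
import re

def name_matches(pattern, name):
    """Match a library/test name against a 'something*rather'-style
    glob pattern by translating it into a regular expression."""
    return re.match(fnmatch.translate(pattern), name) is not None
```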
- DONE: Highlight old results (like it was done in /regression-logs/ index) [Lesha]
- DONE: Generalize "corner-case tests" to test categories to allow arbitrary grouping of test cases (use case - wchar tests in serialization library) [Lesha]
- Specify markup and use case (serialization library)
- Performance. Recent additions of regression runners put us over the 30-minute mark. [Misha]
- DONE (now average is <15min): Improve performance on the last stages of the pipeline
- NOT NEEDED: Restructure extended_test_results into hierarchical lib/toolset/tests format.
- DONE: Generate developer/output files on the first stage
- FIXED: Boost.Python issue: http://article.gmane.org/gmane.comp.lib.boost.devel/107272/
- DONE: Restore user reports ("User View") [Misha, Lesha]
- Links from developer pages to the corresponding user pages and back
Disclaimer: This site is not officially maintained by the Boost developers.