This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Running Tests
This page provides instructions for running the test suite locally and understanding the test organization within the DeepWiki-to-mdBook converter project. It covers local execution methods, test structure, and integration with the development workflow. For information about the automated CI/CD test workflow, see Test Workflow.
Test Organization
The test suite is located in the python/tests/ directory and consists of multiple test modules that validate different components of the system.
Test Structure
```
python/
├── tests/
│   ├── conftest.py                    # pytest fixtures and configuration
│   ├── test_template_processor.py    # Template system tests
│   ├── test_mermaid_normalization.py # Mermaid diagram normalization tests
│   └── test_numbering.py             # Page numbering and path resolution tests
```
Sources: scripts/run-tests.sh:1-43 python/tests/conftest.py:1-16
Test Categories
| Test Module | Purpose | Test Framework |
|---|---|---|
| `test_template_processor.py` | Validates template variable substitution, conditional rendering, and header/footer injection | Standalone Python (no pytest required) |
| `test_mermaid_normalization.py` | Tests the seven-step Mermaid diagram normalization pipeline | pytest |
| `test_numbering.py` | Validates page numbering logic and path resolution algorithms | pytest |
Sources: scripts/run-tests.sh:7-30
Running Tests Locally
There are three ways to run tests locally: using the convenience shell script, invoking pytest directly, or running individual test modules as standalone Python scripts.
Method 1: Using the Shell Script
The run-tests.sh script provides a unified interface for running all tests with appropriate error handling and formatted output:
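```bash
# From the repository root (script path per the Sources below)
./scripts/run-tests.sh
```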
This script:
- Runs template processor tests directly with Python
- Detects if pytest is installed
- Runs pytest-based tests if available
- Provides summary output with pass/fail status
Sources: scripts/run-tests.sh:1-43
Method 2: Using pytest Directly
For pytest-based tests, you can invoke pytest directly for more control:
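```bash
# Run the full pytest suite, showing captured output (as CI does)
pytest python/tests/ -s

# Or list each test verbosely
pytest python/tests/ -v
```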
Sources: .github/workflows/tests.yml:24-25
Method 3: Individual Test Execution
The template processor tests can run independently without pytest:
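```bash
# Runs as a plain Python script; no pytest needed
python python/tests/test_template_processor.py
```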
Sources: scripts/run-tests.sh:11
Test Execution Flow
Test Execution Flow Diagram
This diagram shows how tests are executed locally. The run-tests.sh script checks for pytest availability and runs tests accordingly, while developers can also invoke pytest or Python directly.
Sources: scripts/run-tests.sh:1-43 python/tests/conftest.py:1-16
Prerequisites
Required Dependencies
Install Python dependencies before running tests:
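```bash
# From the repository root
pip install -r python/requirements.txt
```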
The requirements.txt file contains all runtime dependencies needed by the scraper and test utilities.
Sources: .github/workflows/tests.yml:19-23
Python Version
Tests are designed for Python 3.12, which is the version used in both the Docker container and CI workflow:
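```bash
python --version   # expect Python 3.12.x to match CI and the container
```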
Sources: .github/workflows/tests.yml:17-18
Test Module Details
Test Module Dependencies Diagram
This diagram shows the organization of test modules and their relationship to the conftest.py fixture system. The scraper_module fixture dynamically loads deepwiki-scraper.py for use in pytest-based tests.
Sources: python/tests/conftest.py:1-16 scripts/run-tests.sh:7-30
Template Processor Tests
The test_template_processor.py module tests the template variable substitution system used for header and footer injection. It validates:
- Variable substitution with `{{VARIABLE_NAME}}` syntax
- Conditional blocks with `{{#if CONDITION}}...{{/if}}`
- Edge cases like missing variables and nested conditions
This module can run independently without pytest and directly imports the template processing functions.
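To illustrate the syntax under test, here is a minimal sketch of such a renderer. This is not the project's implementation; the function name `render`, the regex-based approach, and the empty-string fallback are assumptions for demonstration only.

```python
import re

def render(template: str, variables: dict) -> str:
    """Illustrative renderer for {{VAR}} and {{#if}} template syntax."""
    # Keep an {{#if NAME}}...{{/if}} body only when NAME is set and truthy.
    def if_block(match):
        name, body = match.group(1), match.group(2)
        return body if variables.get(name) else ""

    out = re.sub(r"\{\{#if (\w+)\}\}(.*?)\{\{/if\}\}", if_block, template, flags=re.S)
    # Substitute {{NAME}}; unknown variables become empty strings.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables.get(m.group(1), "")), out)

print(render("{{#if SHOW}}Hello, {{NAME}}!{{/if}}", {"SHOW": True, "NAME": "world"}))
# Hello, world!
print(render("{{#if SHOW}}hidden{{/if}}ok", {}))
# ok
```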
Sources: scripts/run-tests.sh:7-11
Mermaid Normalization Tests
The test_mermaid_normalization.py module validates the seven-step normalization pipeline that ensures Mermaid 11 compatibility. Each normalization step has dedicated tests:
| Normalization Step | Test Coverage |
|---|---|
| Unescape sequences | `\n`, `\t`, `\u003c` character handling |
| Multiline edge labels | Flattening logic for edge descriptions |
| State descriptions | `State : Description` syntax fixes |
| Flowchart nodes | Pipe character removal |
| Statement separators | Semicolon insertion |
| Empty labels | Fallback label generation |
| Gantt task IDs | Synthetic ID generation for unnamed tasks |
This module uses the scraper_module fixture from conftest.py to access normalization functions.
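A hypothetical example of what such a test can look like; `normalize_mermaid` is an assumed function name, not necessarily the scraper's real API:

```python
# Hypothetical test shape; normalize_mermaid is an assumed name.
def test_unescape_sequences(scraper_module):
    raw = "graph TD;\\n  A --> B"
    normalized = scraper_module.normalize_mermaid(raw)
    assert "\\n" not in normalized  # literal escape sequences are removed
```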
Sources: scripts/run-tests.sh:16-22 python/tests/conftest.py:7-16
Numbering Tests
The test_numbering.py module validates page numbering logic and path generation algorithms. It tests:
- Hierarchical numbering schemes (e.g., `1.2.3`)
- Numeric sorting that correctly handles multi-digit sections
- Path generation from page numbers
- Link rewriting for internal references
This module also uses the scraper_module fixture to access numbering functions.
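Multi-digit sections matter because naive string sorting puts `1.10` before `1.2`. A minimal sketch of tuple-based numeric sorting (an illustration, not the scraper's actual code):

```python
# Illustrative only: sort hierarchical section numbers numerically,
# so that "1.10" follows "1.9" instead of "1.1".
def section_key(number: str) -> tuple:
    return tuple(int(part) for part in number.split("."))

sections = ["1.10", "1.2", "1.9", "1.1"]
print(sorted(sections))                   # ['1.1', '1.10', '1.2', '1.9'] (wrong)
print(sorted(sections, key=section_key))  # ['1.1', '1.2', '1.9', '1.10']
```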
Sources: scripts/run-tests.sh:24-30 python/tests/conftest.py:7-16
The conftest.py Fixture System
The conftest.py file provides a session-scoped fixture that loads the deepwiki-scraper.py module dynamically:
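The sketch below shows what such a fixture can look like; the real conftest.py may differ in details, and the script's path here is an assumption about the repository layout.

```python
import importlib.util
from pathlib import Path

import pytest

@pytest.fixture(scope="session")
def scraper_module():
    # The hyphen in deepwiki-scraper.py prevents a normal import,
    # so the module is loaded by file path instead.
    # NOTE: the path below is an assumed location, for illustration.
    path = Path(__file__).resolve().parents[1] / "deepwiki-scraper.py"
    spec = importlib.util.spec_from_file_location("deepwiki_scraper", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```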
This approach allows tests to import functions from the scraper without requiring it to be installed as a package. The fixture is shared across all test sessions for efficiency.
Sources: python/tests/conftest.py:7-16
CI Integration
The test suite integrates with GitHub Actions through the tests.yml workflow, which:
- Triggers on push to `main` and on pull requests
- Sets up Python 3.12
- Installs dependencies from `python/requirements.txt`
- Installs pytest
- Runs all pytest tests with the `-s` flag (show output), as sketched below
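A sketch of what such a workflow can look like; the actual tests.yml may differ in step names and action versions:

```yaml
name: Tests

on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r python/requirements.txt
      - run: pip install pytest
      - run: pytest python/tests/ -s
```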
For detailed information about the CI test workflow, configuration, and failure handling, see Test Workflow.
Sources: .github/workflows/tests.yml:1-26
Understanding Test Output
Successful Test Run
When all tests pass using run-tests.sh, you will see:
```
==========================================
Running Template Processor Tests
==========================================
[Template test output...]
==========================================
Running Mermaid Normalization Tests
==========================================
[pytest output with test results...]
==========================================
Running Numbering Tests
==========================================
[pytest output with test results...]
==========================================
✓ All tests passed!
==========================================
```
Sources: scripts/run-tests.sh:34-42
Pytest Not Available
If pytest is not installed, the script will skip pytest-based tests:
```
==========================================
Running Template Processor Tests
==========================================
[Template test output...]
==========================================
⚠ Template tests passed (mermaid/numbering tests skipped)
Note: pytest not found, install with: pip install pytest
==========================================
```
Sources: scripts/run-tests.sh:34-42
Pytest Verbose Output
Using pytest with the -v flag provides detailed test information:
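```bash
pytest python/tests/ -v   # verbose: list each test as it runs
```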
The -s flag shows print statements and output, useful for debugging:
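```bash
pytest python/tests/ -s   # -s: do not capture stdout, so print() output is visible
```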
Sources: .github/workflows/tests.yml:25
Local vs. CI Test Execution
Local vs. CI Test Execution Comparison
This diagram illustrates the difference between local and CI test execution. Local execution allows for flexible Python versions and gracefully handles missing pytest, while CI enforces Python 3.12 and guarantees pytest availability.
Sources: scripts/run-tests.sh:13-31 .github/workflows/tests.yml:1-26
Key Differences
| Aspect | Local Execution | CI Execution |
|---|---|---|
| Python Version | Any 3.x version | Fixed at 3.12 |
| pytest Requirement | Optional (graceful fallback) | Always installed |
| Execution Method | `run-tests.sh` or manual | `pytest python/tests/ -s` |
| Output Control | User-configurable verbosity | Fixed `-s` flag for output |
| Trigger | Manual by developer | Automatic on push/PR |
Sources: scripts/run-tests.sh:13-31 .github/workflows/tests.yml:17-25
Best Practices
Running Tests Before Commits
Always run the test suite before committing changes:
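```bash
./scripts/run-tests.sh
```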
This ensures your changes don’t break existing functionality.
Iterative Testing
When developing new features, run specific test modules for faster feedback:
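```bash
# Example: iterate on numbering logic only
pytest python/tests/test_numbering.py -v
```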
Adding New Tests
When adding new functionality:
- Create test functions in the appropriate test module
- Use the `scraper_module` fixture for accessing scraper functions
- Test locally with both methods (script and pytest)
- Verify CI passes on your pull request
Sources: python/tests/conftest.py:7-16 .github/workflows/tests.yml:1-26