
Running Tests

This page provides instructions for running the test suite locally and understanding the test organization within the DeepWiki-to-mdBook converter project. It covers local execution methods, test structure, and integration with the development workflow. For information about the automated CI/CD test workflow, see Test Workflow.

Test Organization

The test suite is located in the python/tests/ directory and consists of multiple test modules that validate different components of the system.

Test Structure

```text
python/
├── tests/
│   ├── conftest.py                     # pytest fixtures and configuration
│   ├── test_template_processor.py      # Template system tests
│   ├── test_mermaid_normalization.py   # Mermaid diagram normalization tests
│   └── test_numbering.py               # Page numbering and path resolution tests
```

Sources: scripts/run-tests.sh:1-43, python/tests/conftest.py:1-16

Test Categories

| Test Module | Purpose | Test Framework |
| --- | --- | --- |
| `test_template_processor.py` | Validates template variable substitution, conditional rendering, and header/footer injection | Standalone Python (no pytest required) |
| `test_mermaid_normalization.py` | Tests the seven-step Mermaid diagram normalization pipeline | pytest |
| `test_numbering.py` | Validates page numbering logic and path resolution algorithms | pytest |

Sources: scripts/run-tests.sh:7-30

Running Tests Locally

There are three ways to run tests locally: using the convenience shell script, invoking pytest directly, or running individual test modules with Python.

Method 1: Using the Shell Script

The run-tests.sh script provides a unified interface for running all tests with appropriate error handling and formatted output:
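
A typical invocation from the repository root (the path is assumed from the script's location in scripts/):

```bash
# Run the full test suite with formatted output
./scripts/run-tests.sh
```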

This script:

  1. Runs template processor tests directly with Python
  2. Detects if pytest is installed
  3. Runs pytest-based tests if available
  4. Provides summary output with pass/fail status

Sources: scripts/run-tests.sh:1-43

Method 2: Using pytest Directly

For pytest-based tests, you can invoke pytest directly for more control:
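
For example (the python/tests/ path matches the CI invocation; the individual module name is shown for illustration):

```bash
# Run all pytest-based tests, showing print output as CI does
pytest python/tests/ -s

# Run a single module with verbose reporting
pytest python/tests/test_numbering.py -v
```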

Sources: .github/workflows/tests.yml:24-25

Method 3: Individual Test Execution

The template processor tests can run independently without pytest:
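
A direct invocation might look like this (assuming the module's path under python/tests/):

```bash
# No pytest needed; the module runs as a plain Python script
python python/tests/test_template_processor.py
```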

Sources: scripts/run-tests.sh:11

Test Execution Flow

Test Execution Flow Diagram

This diagram shows how tests are executed locally. The run-tests.sh script checks for pytest availability and runs tests accordingly, while developers can also invoke pytest or Python directly.

Sources: scripts/run-tests.sh:1-43, python/tests/conftest.py:1-16

Prerequisites

Required Dependencies

Install Python dependencies before running tests:
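
For example:

```bash
# Install runtime dependencies used by the scraper and test utilities
pip install -r python/requirements.txt

# pytest is required for the mermaid and numbering test modules
pip install pytest
```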

The requirements.txt file contains all runtime dependencies needed by the scraper and test utilities.

Sources: .github/workflows/tests.yml:19-23

Python Version

Tests are designed for Python 3.12, which is the version used in both the Docker container and CI workflow:
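
A quick local check (any 3.x version may work locally, but 3.12 matches CI and the Docker image):

```bash
python --version   # Python 3.12.x expected
```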

Sources: .github/workflows/tests.yml:17-18

Test Module Details

Test Module Dependencies Diagram

This diagram shows the organization of test modules and their relationship to the conftest.py fixture system. The scraper_module fixture dynamically loads deepwiki-scraper.py for use in pytest-based tests.

Sources: python/tests/conftest.py:1-16, scripts/run-tests.sh:7-30

Template Processor Tests

The test_template_processor.py module tests the template variable substitution system used for header and footer injection. It validates:

  • Variable substitution with {{VARIABLE_NAME}} syntax
  • Conditional blocks with {{#if CONDITION}}...{{/if}}
  • Edge cases like missing variables and nested conditions
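
For example, a template exercising both features (the variable and condition names here are invented for illustration):

```text
{{#if SHOW_FOOTER}}
Generated by {{PROJECT_NAME}}
{{/if}}
```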

This module can run independently without pytest and directly imports the template processing functions.

Sources: scripts/run-tests.sh:7-11

Mermaid Normalization Tests

The test_mermaid_normalization.py module validates the seven-step normalization pipeline that ensures Mermaid 11 compatibility. Each normalization step has dedicated tests:

| Normalization Step | Test Coverage |
| --- | --- |
| Unescape sequences | `\n`, `\t`, `\u003c` character handling |
| Multiline edge labels | Flattening logic for edge descriptions |
| State descriptions | `State : Description` syntax fixes |
| Flowchart nodes | Pipe character removal |
| Statement separators | Semicolon insertion |
| Empty labels | Fallback label generation |
| Gantt task IDs | Synthetic ID generation for unnamed tasks |

This module uses the scraper_module fixture from conftest.py to access normalization functions.

Sources: scripts/run-tests.sh:16-22, python/tests/conftest.py:7-16

Numbering Tests

The test_numbering.py module validates page numbering logic and path generation algorithms. It tests:

  • Hierarchical numbering schemes (e.g., 1.2.3)
  • Numeric sorting that correctly handles multi-digit sections
  • Path generation from page numbers
  • Link rewriting for internal references

This module also uses the scraper_module fixture to access numbering functions.

Sources: scripts/run-tests.sh:24-30, python/tests/conftest.py:7-16

The conftest.py Fixture System

The conftest.py file provides a session-scoped fixture that loads the deepwiki-scraper.py module dynamically:
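
A minimal sketch of this pattern using importlib (the exact file layout and fixture body in conftest.py may differ):

```python
# conftest.py (sketch)
import importlib.util
from pathlib import Path

import pytest


@pytest.fixture(scope="session")
def scraper_module():
    """Load deepwiki-scraper.py once per session so tests can call its
    functions without installing the scraper as a package."""
    # Assumed location: python/deepwiki-scraper.py, one level above tests/
    script = Path(__file__).resolve().parent.parent / "deepwiki-scraper.py"
    spec = importlib.util.spec_from_file_location("deepwiki_scraper", script)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```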

This approach allows tests to import functions from the scraper without requiring it to be installed as a package. Because the fixture is session-scoped, the module is loaded once and shared across all tests in the session for efficiency.

Sources: python/tests/conftest.py:7-16

CI Integration

The test suite integrates with GitHub Actions through the tests.yml workflow, which:

  1. Triggers on push to main and on pull requests
  2. Sets up Python 3.12
  3. Installs dependencies from python/requirements.txt
  4. Installs pytest
  5. Runs all pytest tests with the -s flag (show output)

For detailed information about the CI test workflow, configuration, and failure handling, see Test Workflow.

Sources: .github/workflows/tests.yml:1-26

Understanding Test Output

Successful Test Run

When all tests pass using run-tests.sh, you will see:

```text
==========================================
Running Template Processor Tests
==========================================

[Template test output...]

==========================================
Running Mermaid Normalization Tests
==========================================

[pytest output with test results...]

==========================================
Running Numbering Tests
==========================================

[pytest output with test results...]

==========================================
✓ All tests passed!
==========================================
```

Sources: scripts/run-tests.sh:34-42

Pytest Not Available

If pytest is not installed, the script will skip pytest-based tests:

```text
==========================================
Running Template Processor Tests
==========================================

[Template test output...]

==========================================
⚠ Template tests passed (mermaid/numbering tests skipped)

Note: pytest not found, install with: pip install pytest
==========================================
```

Sources: scripts/run-tests.sh:34-42

Pytest Verbose Output

Using pytest with the -v flag provides detailed test information:
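
For example:

```bash
# Verbose mode lists each test function and its result
pytest python/tests/ -v
```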

The -s flag shows print statements and output, useful for debugging:
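
```bash
# -s disables output capture so print() output appears in the terminal
pytest python/tests/ -s
```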

Sources: .github/workflows/tests.yml:25

Local vs. CI Test Execution

Local vs. CI Test Execution Comparison

This diagram illustrates the difference between local and CI test execution. Local execution allows for flexible Python versions and gracefully handles missing pytest, while CI enforces Python 3.12 and guarantees pytest availability.

Sources: scripts/run-tests.sh:13-31, .github/workflows/tests.yml:1-26

Key Differences

| Aspect | Local Execution | CI Execution |
| --- | --- | --- |
| Python Version | Any 3.x version | Fixed at 3.12 |
| pytest Requirement | Optional (graceful fallback) | Always installed |
| Execution Method | `run-tests.sh` or manual | `pytest python/tests/ -s` |
| Output Control | User-configurable verbosity | Fixed `-s` flag for output |
| Trigger | Manual by developer | Automatic on push/PR |

Sources: scripts/run-tests.sh:13-31, .github/workflows/tests.yml:17-25

Best Practices

Running Tests Before Commits

Always run the test suite before committing changes:
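
```bash
# Full suite via the convenience script
./scripts/run-tests.sh
```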

This ensures your changes don’t break existing functionality.

Iterative Testing

When developing new features, run specific test modules for faster feedback:
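
For example (module names from the test structure above):

```bash
# Faster feedback: run only the module you are changing
pytest python/tests/test_mermaid_normalization.py -v
```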

Adding New Tests

When adding new functionality:

  1. Create test functions in the appropriate test module
  2. Use the scraper_module fixture for accessing scraper functions
  3. Test locally with both methods (script and pytest)
  4. Verify CI passes on your pull request

Sources: python/tests/conftest.py:7-16, .github/workflows/tests.yml:1-26
