===============
Getting Started
===============

Installation
============

From Source (Recommended)
-------------------------

.. code-block:: bash

   git clone https://github.com/lhzn-io/biologger-pseudotrack.git
   cd biologger-pseudotrack
   pip install -e .

From PyPI (When Released)
-------------------------

.. code-block:: bash

   pip install biologger-pseudotrack

From TestPyPI (Development)
---------------------------

.. code-block:: bash

   pip install --index-url https://test.pypi.org/simple/ \
       --extra-index-url https://pypi.org/simple \
       biologger-pseudotrack

Basic Usage
===========

Command-Line Interface
----------------------

.. code-block:: bash

   # Process swordfish deployment (adaptive sensor fusion mode)
   python -m biologger_pseudotrack --config examples/swordfish_config.yml

   # Process whale shark deployment (post-facto mode)
   python -m biologger_pseudotrack --config examples/whaleshark_postfacto.yml

Python API
----------

.. code-block:: python

   from biologger_pseudotrack.streaming.processor import StreamingProcessor

   # Create adaptive sensor fusion processor
   processor = StreamingProcessor(filt_len=48, freq=16)

   # Process data in real-time
   for record in sensor_data:
       result = processor.process(record)
       print(f"Pitch: {result['pitch_deg']:.2f}°")
       print(f"Roll: {result['roll_deg']:.2f}°")

Configuration
=============

The pipeline uses YAML configuration files. Example configurations are
provided in the ``examples/`` directory:

- ``swordfish_config.yml`` - Swordfish adaptive processing
- ``whale_shark_config.yml`` - Whale shark processing
- ``seal_config.yml`` - Seal processing

Calibration Modes
-----------------

Both pipelines share a unified ``calibration:`` config block with three modes:

**Progressive** (adaptive default)
   Accumulates calibration data online using exponential moving averages.
   Memory-efficient and suitable for real-time processing; converges within
   the first 2-3 minutes of a deployment.

**Fixed** (pre-computed values)
   Uses locked calibration parameters from prior runs. Fastest processing
   (single-pass, no calibration overhead). Requires prior calibration from
   ``batch_compute`` or R analysis.

**Batch Compute** (post-facto only)
   Two-pass processing: collect the full dataset, compute calibrations, then
   reprocess. Matches R gRumble's ``colMeans()`` and ``MagOffset()`` exactly.
   Validation target: <0.1° error vs. the R reference implementation.

The example below uses ``batch_compute``; a sketch of selecting the
progressive and fixed modes appears at the end of this page.

Example Configuration
---------------------

.. code-block:: yaml

   # config_swordfish_postfacto.yml
   input:
     file: "data/Swordfish-RED001_20220812_19A0564/19A0564.csv"

   postfacto:
     parameters:
       filt_len: 48
       freq: 16
     calibration:
       attachment_angle_mode: 'batch_compute'
       magnetometer_mode: 'batch_compute'
     enable_depth_interpolation: true

   output:
     file: "output/swordfish_processed.csv"

Next Steps
==========

- See :doc:`pipelines` for detailed pipeline architecture
- Check example configurations in the ``examples/`` directory
- Review species-specific configs in ``data/`` subdirectories
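
Calibration Mode Sketch
=======================

The example configuration above locks both calibration settings to
``batch_compute``. The snippet below is a minimal, hedged sketch of how the
other two modes described under *Calibration Modes* might be selected. The
mode strings ``'progressive'`` and ``'fixed'`` are assumptions inferred from
the mode names in this guide rather than values confirmed by the project;
check the shipped ``examples/`` configs for the exact keys.

.. code-block:: yaml

   # Sketch only -- 'progressive' and 'fixed' are assumed mode strings;
   # verify against the configs in examples/ before use.
   postfacto:
     calibration:
       # Progressive: accumulate calibration online (the adaptive default)
       attachment_angle_mode: 'progressive'
       magnetometer_mode: 'progressive'

   # To reuse parameters locked in a prior batch_compute or R run, the same
   # keys would presumably take 'fixed' (the key names for supplying the
   # pre-computed values themselves are not shown in this guide):
   #
   #   calibration:
   #     attachment_angle_mode: 'fixed'
   #     magnetometer_mode: 'fixed'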