Dev deeplearning Integration (nnLM and new models) #106
Conversation
Code excerpts from the diff:

```python
from numpy.typing import NDArray
```

```python
if "snakemake" in globals() and hasattr(snakemake, "threads"):
    print(f"DEBUG: Snakemake threads = {snakemake.threads}")
```
Force-pushed from 1217b99 to deb1213
Updated config model placeholders and added all new trained fid models
@thrower19 Can you create a new branch from this and revise all the Continuous Integration (CI) testing we do here?
Also

@Dhananjhay the following command using the `detect_without_prior` flag worked for me. Which parameters were you using?

```shell
./autoafids/run.py test_data/bids_T1w test_out participant --participant-label 002 --detect_without_prior --inference-overlap 0.25 --inference-batch-size 8 -np
```
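For context on the `--inference-overlap` flag being discussed: sliding-window inference typically converts a fractional overlap into a window step size. Below is a minimal one-dimensional sketch of that common convention; it is illustrative only, and the function name and formula are assumptions, not AutoAFIDs' actual implementation.

```python
def sliding_window_starts(dim: int, patch: int, overlap: float) -> list[int]:
    """1-D start indices for sliding-window tiling with fractional overlap.

    Assumes the common convention step = patch * (1 - overlap); this is an
    illustrative sketch, not the pipeline's actual code.
    """
    step = max(1, int(patch * (1 - overlap)))
    starts = list(range(0, max(dim - patch, 0) + 1, step))
    # Make sure the final window still covers the end of the volume axis.
    if dim > patch and starts[-1] != dim - patch:
        starts.append(dim - patch)
    return starts
```

With `--inference-overlap 0.25` and a 4-voxel patch along a 10-voxel axis this yields windows starting at 0, 3, and 6; a higher overlap produces denser windows and therefore more patches per batch.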
@thrower19 I didn't specify
@Dhananjhay that command was copy-pasted from the testing script in the pipeline, but when run as a wet run on an interactive job with the test dataset ds003653 it worked.
What happens when you try with the default values for
@Dhananjhay it still runs with the defaults.
That's odd because it doesn't work with the data I have. I'll share it with you so you can try running the pipeline again.
Notes
This pull request introduces major enhancements to the AFID detection workflow, adding support for a new nnLandmark (nnLM) detection mode, improving configuration flexibility, and optimizing inference for both GPU and CPU environments. The changes enable users to select between three detection strategies (prior-based, sliding-window without prior, and nnLM single-pass), provide more granular control over inference parameters, and restructure the Snakemake rules to support both parallel and sequential inference workflows.
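The three detection strategies are mutually exclusive, so the workflow has to resolve exactly one mode (plus the sliding-window parameters) from the config. A plain-Python sketch of that resolution step follows; the key names match the CLI flags above, but the helper itself and the default values are hypothetical, not the pipeline's actual code.

```python
def resolve_detection_mode(config: dict):
    """Pick exactly one detection mode from the three config flags.

    Hypothetical helper; flag names follow the CLI options, while the
    fallback mode and default values are assumptions for illustration.
    """
    flags = ["detect_with_prior", "detect_without_prior", "detect_with_nnlm"]
    chosen = [f for f in flags if config.get(f)]
    if len(chosen) > 1:
        raise ValueError(f"Detection modes are mutually exclusive, got: {chosen}")
    mode = chosen[0] if chosen else "detect_with_prior"  # assumed default
    overlap = config.get("inference_overlap", 0.25)      # sliding-window overlap
    batch_size = config.get("inference_batch_size", 8)   # sliding-window batch size
    return mode, overlap, batch_size
```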
Key changes include:
Detection Mode and Configuration Enhancements:
- `snakebids.yml`: options to select the detection mode (`--detect_with_prior`, `--detect_without_prior`, `--detect_with_nnlm`) and to configure nnLM-specific parameters (fold, plans, checkpoint, device), as well as sliding-window inference overlap and batch size.
- `afids_inference` section: per-AFID checkpoint configuration, patch size, device, and overlap.
- `enable_sequential_inference` config option: controls whether inference is run sequentially (recommended for GPU) or in parallel (recommended for CPU).

Workflow and Rule Structure:
- `Snakefile`: dynamically includes the appropriate rules (`cnn.smk` or `nnlm.smk`) based on the selected detection mode, and automatically enables sequential inference for GPU.
- `rule all`: input and descriptor logic selects the correct FCSV output for the detection mode, ensuring output files are labeled appropriately.

CNN Inference Rule Improvements:
- `cnn.smk`: rules support both sequential (single job for all AFIDs) and parallel (one job per AFID) inference for both prior-based and no-prior detection, including new gather rules to combine per-AFID outputs. [1] [2]
- New conda environment (`pytorch.yaml`) and improved parameter passing for model checkpoints and inference settings.

Environment and Dependency Updates:
- New conda environment files for CNN (`pytorch.yaml`) and nnLM (`nnlm.yaml`), specifying compatible Python and library versions and including the nnLandmark package for nnLM inference. [1] [2]

Quality-of-Life Improvements:
These changes make the workflow more flexible, scalable, and ready for integration with the new nnLandmark model while maintaining backward compatibility with existing CNN-based modes.
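Putting the structural ideas above together, the mode-based rule dispatch and the sequential/parallel job split can be sketched in plain Python. Function names, file paths, and the GPU heuristic here are illustrative assumptions, not the pipeline's actual code.

```python
def choose_rule_file(mode: str, device: str) -> tuple[str, bool]:
    """Pick the rule file for a detection mode and decide whether sequential
    inference should be auto-enabled (recommended on GPU). Hypothetical sketch
    mirroring the Snakefile behavior described above."""
    rule_file = "rules/nnlm.smk" if mode == "detect_with_nnlm" else "rules/cnn.smk"
    return rule_file, device == "gpu"


def plan_inference_jobs(afid_labels: list[str], sequential: bool) -> list[list[str]]:
    """Group AFIDs into inference jobs: one job over all AFIDs when sequential
    (GPU), or one job per AFID (CPU), whose outputs a gather rule combines."""
    if sequential:
        return [list(afid_labels)]
    return [[label] for label in afid_labels]
```

On GPU this yields a single job covering every AFID; on CPU it yields one lightweight job per AFID that Snakemake can schedule in parallel, with the gather rules merging the per-AFID outputs into a single FCSV afterwards.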