
Dev deeplearning Integration (nnLM and new models)#106

Open
mahmoud1yaser wants to merge 8 commits into main from dev-deeplearning

Conversation

@mahmoud1yaser

This pull request introduces major enhancements to the AFID detection workflow, adding support for a new nnLandmark (nnLM) detection mode, improving configuration flexibility, and optimizing inference for both GPU and CPU environments. The changes enable users to select between three detection strategies (prior-based, sliding-window without prior, and nnLM single-pass), provide more granular control over inference parameters, and restructure the Snakemake rules to support both parallel and sequential inference workflows.

Key changes include:

Detection Mode and Configuration Enhancements:

  • Added new command-line arguments and configuration options in snakebids.yml to select detection mode (--detect_with_prior, --detect_without_prior, --detect_with_nnlm), and to configure nnLM-specific parameters (fold, plans, checkpoint, device), as well as sliding-window inference overlap and batch size.
  • Introduced a new nnLM model resource URL and a detailed afids_inference section for per-AFID checkpoint configuration, patch size, device, and overlap.
  • Added enable_sequential_inference config option to control whether inference is run sequentially (recommended for GPU) or in parallel (recommended for CPU).
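A minimal sketch of how the mode flags and inference parameters described above could be exposed on a command line. This is illustrative only: the real options live in snakebids.yml and the app's CLI layer, and the default values shown here (overlap 0.5, batch size 4) are assumptions, not the pipeline's actual defaults.

```python
import argparse

# Hypothetical parser mirroring the PR's flags; defaults are placeholders.
parser = argparse.ArgumentParser()
mode = parser.add_mutually_exclusive_group()
mode.add_argument("--detect_with_prior", action="store_true")
mode.add_argument("--detect_without_prior", action="store_true")
mode.add_argument("--detect_with_nnlm", action="store_true")
parser.add_argument("--inference-overlap", type=float, default=0.5)
parser.add_argument("--inference-batch-size", type=int, default=4)

args = parser.parse_args(["--detect_with_nnlm", "--inference-overlap", "0.25"])
```

Making the three detection flags mutually exclusive keeps a run unambiguous: exactly one strategy drives which rule file is included downstream.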

Workflow and Rule Structure:

  • Refactored the Snakemake workflow in Snakefile to dynamically include the appropriate rules (cnn.smk or nnlm.smk) based on the selected detection mode, and to automatically enable sequential inference for GPU.
  • Updated the rule all input and descriptor logic to select the correct FCSV output based on the detection mode, ensuring output files are labeled appropriately.
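The selection logic above can be sketched in plain Python. This is a hypothetical helper, not the actual Snakefile code, which uses Snakemake's `include:` directive together with the snakebids config; the rule-file paths and descriptor strings here are assumptions.

```python
# Hypothetical sketch of mode-based rule selection and the GPU -> sequential
# default described in the PR.
def select_rules(config):
    # Sequential inference is enabled automatically for GPU runs.
    if config.get("device", "cpu") != "cpu":
        config["enable_sequential_inference"] = True
    if config.get("detect_with_nnlm"):
        return "rules/nnlm.smk", "nnlm"
    descriptor = "prior" if config.get("detect_with_prior") else "noprior"
    return "rules/cnn.smk", descriptor

cfg = {"detect_with_nnlm": True, "device": "cuda"}
rule_file, descriptor = select_rules(cfg)
```

The descriptor returned alongside the rule file is what lets the `rule all` inputs point at the correctly labeled FCSV output for the chosen mode.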

CNN Inference Rule Improvements:

  • Overhauled cnn.smk rules to support both sequential (single job for all AFIDs) and parallel (one job per AFID) inference for both prior-based and no-prior detection, including new gather rules to combine per-AFID outputs.
  • Added logic to use the correct environment (pytorch.yaml) and improved parameter passing for model checkpoints and inference settings.
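The sequential-versus-parallel split can be pictured as two job layouts over the fiducials. A sketch under stated assumptions: the function name is hypothetical, and the count of 32 comes from the AFIDs protocol rather than from this PR.

```python
# Hypothetical sketch of the two inference layouts described above.
AFIDS = list(range(1, 33))  # 32 anatomical fiducials (AFIDs protocol)

def inference_jobs(sequential: bool):
    if sequential:
        # One job covers all AFIDs (recommended for GPU; no gather step).
        return [AFIDS]
    # One job per AFID (recommended for CPU); a gather rule combines
    # the per-AFID outputs afterwards.
    return [[afid] for afid in AFIDS]
```

The gather rules mentioned in the bullet above exist only for the parallel layout, since the sequential path already emits a single combined output.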

Environment and Dependency Updates:

  • Added new Conda environment files for PyTorch (pytorch.yaml) and nnLM (nnlm.yaml), specifying compatible Python and library versions and including the nnLandmark package for nnLM inference.

Quality-of-Life Improvements:

  • Suppressed extraneous warnings for unrecognized BIDS entities and explicitly set the BIDS spec version in the workflow.

These changes make the workflow more flexible, scalable, and ready for integration with the new nnLandmark model while maintaining backward compatibility with existing CNN-based modes.

@mahmoud1yaser requested a review from Dhananjhay May 5, 2026 12:46
@github-actions bot requested a review from ataha24 May 5, 2026 12:46
@mahmoud1yaser changed the title from Dev deeplearning to Dev deeplearning Integration (nnLM and new models) May 5, 2026
from numpy.typing import NDArray

if "snakemake" in globals() and hasattr(snakemake, "threads"):
    print(f"DEBUG: Snakemake threads = {snakemake.threads}")
@ataha24 (Member) May 7, 2026


we can comment this out?

@Dhananjhay
Contributor

@thrower19 Can you create a new branch from this and revise all the Continuous Integration (CI) testing we do here?

@Dhananjhay

The --detect_with_prior flag is extremely slow even on low-resolution T1w MRI images -- is this something we want to offer to users? If yes, when would they choose this flag over --detect_with_nnlm and --detect_without_prior? @mahmoud1yaser

@Dhananjhay

Also detect_without_prior doesn't seem to work -- the pipeline gets stuck in the applyfidmodel_noprior_all rule and doesn't progress.

@thrower19

Also detect_without_prior doesn't seem to work -- the pipeline gets stuck in the applyfidmodel_noprior_all rule and doesn't progress.

@Dhananjhay the following command using the --detect_without_prior flag worked for me. Which parameters were you using?

./autoafids/run.py test_data/bids_T1w test_out participant --participant-label 002 --detect_without_prior --inference-overlap 0.25 --inference-batch-size 8 -np

@Dhananjhay

@thrower19 I didn't specify --inference-overlap 0.25 --inference-batch-size 8, and you also seem to be running a dry run (-np); can you try a wet run and see if it runs to the end?

@thrower19

thrower19 commented May 11, 2026

@Dhananjhay that command was copy-pasted from the testing script in the pipeline. When run as a wet run in an interactive job using the test data ds003653, it worked.

@Dhananjhay

What happens when you try with the default values for inference-batch-size and inference-overlap?

@thrower19

@Dhananjhay it still runs with the defaults.

./autoafids/run.py /nfs/khan/trainees/jthrower/ds003653 test_without_prior_default participant --cores all --detect_without_prior

@Dhananjhay

That's odd, because it doesn't work with the data I have. I'll share it with you so you can try running the pipeline again.

@thrower19

The --detect_without_prior flag is able to run when using a high amount of resources in a Slurm interactive job: salloc --time=1:30:00 --cpus-per-task=32 --gres=gpu:1 --mem=32G --partition=hx
