Run preprocessing/pickle/fits_to_pkl.py to convert the FITS file into a pickle file.
Run preprocessing/pickle/add_splits.py to add training splits.
Run preprocessing/pickle/mark_pkl.py to compute corruption metrics and filter the data by star properties.
Run preprocessing/pickle/clean_pkl.py to filter data based on corruption.
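The four label-preprocessing steps above can be sketched as a short shell session. The input/output paths and the convention that each script takes the pickle path as a positional argument are assumptions, not the scripts' documented interface; check each script's --help before running.

```shell
# Hypothetical paths -- adjust to your checkout and data locations.
FITS=data/tess_labels.fits   # FITS input (assumed name)
PKL=data/labels.pkl          # pickle output (assumed name)

# Write the planned commands to a reviewable file, then syntax-check it.
cat > pickle_plan.sh <<EOF
python preprocessing/pickle/fits_to_pkl.py $FITS $PKL
python preprocessing/pickle/add_splits.py $PKL
python preprocessing/pickle/mark_pkl.py $PKL
python preprocessing/pickle/clean_pkl.py $PKL
EOF
sh -n pickle_plan.sh   # parse only; execute with 'sh pickle_plan.sh' when ready
```

Writing the commands to a plan file first makes the run order easy to review and rerun if a step fails partway.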
Training
Download the TESS FFI scripts for the desired sectors from TESS Bulk Downloads.
Create a directory for the FFIs. Inside, create folders for each sector and place the respective download script in each folder.
Run preprocessing/cubes/download_sectors.py with the directory and sector names to start downloading.
Run preprocessing/cubes/check_download_completed.py to verify downloads.
Run preprocessing/cubes/build_datacubes.py to convert FITS frames into datacubes.
Run preprocessing/hdf5/zarr_to_hdf5.py to store the datapoints efficiently (requires the cleaned labels pickle file).
Run preprocessing/cubes/save_fits_header_timestamps.py to save timestamps for each sector.
Update training/configs/dataset with the correct data paths and adjust hyperparameters if needed.
Run training/pipeline.py to train the model.
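The training steps above can be sketched as follows. The per-sector folder layout mirrors the instructions; the sector names, data paths, and CLI arguments are assumptions, so check each script's --help before running for real.

```shell
# Hypothetical locations and sector names.
FFI_DIR=data/ffi
SECTORS="sector01 sector02"

# One folder per sector; the bulk-download script fetched from
# TESS Bulk Downloads goes inside its sector's folder.
for s in $SECTORS; do
  mkdir -p "$FFI_DIR/$s"
done

# Planned pipeline commands, written to a reviewable file.
cat > train_plan.sh <<EOF
python preprocessing/cubes/download_sectors.py $FFI_DIR $SECTORS
python preprocessing/cubes/check_download_completed.py $FFI_DIR
python preprocessing/cubes/build_datacubes.py $FFI_DIR
python preprocessing/hdf5/zarr_to_hdf5.py $FFI_DIR data/labels.pkl
python preprocessing/cubes/save_fits_header_timestamps.py $FFI_DIR
python training/pipeline.py
EOF
sh -n train_plan.sh   # parse only; execute with 'sh train_plan.sh' when ready
```

Remember to edit training/configs/dataset so its paths match the directories chosen here before executing the final training step.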
Evaluation
Run preprocessing/catalog/download_files.py to download files from the TESS Catalog. The header.csv and md5sum.txt files included in the repository come from this URL.
Run preprocessing/catalog/split_pkl.py to clean and split input pickle files into manageable chunks.
Run training/eval_catalog.py to process input chunks and generate outputs.
(Optional) Run preprocessing/catalog/join_preds.py to merge prediction chunks into a single file.
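The evaluation steps above can be sketched the same way. The chunk and prediction directories and the positional arguments are assumptions; check each script's --help for the actual interface.

```shell
# Hypothetical paths.
CLEAN_PKL=data/labels.pkl    # cleaned labels pickle (assumed name)
CHUNK_DIR=eval/chunks        # split_pkl.py output chunks
PRED_DIR=eval/preds          # eval_catalog.py prediction chunks
mkdir -p "$CHUNK_DIR" "$PRED_DIR"

# Planned commands; the final join step is optional.
cat > eval_plan.sh <<EOF
python preprocessing/catalog/download_files.py
python preprocessing/catalog/split_pkl.py $CLEAN_PKL $CHUNK_DIR
python training/eval_catalog.py $CHUNK_DIR $PRED_DIR
python preprocessing/catalog/join_preds.py $PRED_DIR eval/predictions.pkl
EOF
sh -n eval_plan.sh   # parse only; execute with 'sh eval_plan.sh' when ready
```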