Large version (n_fft=4096)

LASAFT-Net-v2 large version

Preliminaries

!pip install git+https://github.com/ws-choi/LASAFT-Net-v2.git/ --quiet
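If the install completes without errors, the package should be importable. A quick sanity check, assuming the repository installs a package named lasaft (the same name used in the import later in this notebook):

import lasaft
print(lasaft.__file__)  # location of the installed package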

Download a sample mixture track

Track info: Feel this breeze - (Prod. JoeSwan) - HyungWoo & Sunmin (Spotify, Apple Music, YouTube Music)

!wget https://github.com/ws-choi/LASAFT-Net-v2/raw/main/data/test/sample/mixture.wav --quiet
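If wget is not available in your environment, the same file can be fetched from Python instead; a minimal sketch using the standard library with the URL from the wget command above:

import urllib.request

url = 'https://github.com/ws-choi/LASAFT-Net-v2/raw/main/data/test/sample/mixture.wav'
urllib.request.urlretrieve(url, 'mixture.wav')  # saves mixture.wav next to the notebook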

Load a sample mixture track

import soundfile as sf
from IPython.display import display, Audio
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning) 

print('Track info: Feel this breeze - (Prod. JoeSwan) - HyungWoo & Sunmin')
mixture, _ = sf.read('mixture.wav')

display(Audio(mixture.T, rate=44100))
Track info: Feel this breeze - (Prod. JoeSwan) - HyungWoo & Sunmin
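The Audio widget above assumes a 44.1 kHz stereo file. If in doubt, the actual sample rate and array shape returned by soundfile can be inspected directly:

mixture, sr = sf.read('mixture.wav')
print(sr, mixture.shape)  # expected: 44100 and (num_samples, 2) for a stereo track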

Load the pretrained model

from lasaft.pretrained import get_v2_large_709
model = get_v2_large_709()
checkpoint is loaded 
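If a CUDA GPU is available, separation is usually much faster on it. A minimal sketch, assuming the returned pretrained model behaves like a standard PyTorch nn.Module (the LASAFT codebase is built on PyTorch):

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)  # assumption: the model exposes the usual nn.Module .to()/.eval() API
model.eval()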

Separate sources with LASAFT-Net-v2

result = model.separate_tracks(mixture, ['vocals', 'drums', 'bass', 'other'], overlap_ratio=0.5, batch_size=4)
print('separated vocals:')
display(Audio(result['vocals'].T, rate=44100))
separated vocals:
print('separated drums:')
display(Audio(result['drums'].T, rate=44100))
separated drums:
print('separated bass:')
display(Audio(result['bass'].T, rate=44100))
separated bass:
print('separated other:')
display(Audio(result['other'].T, rate=44100))
separated other:
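The separated stems can also be written to disk for later use. A small sketch, assuming each entry in result is an array shaped (num_samples, num_channels), which matches how the arrays are transposed for playback above:

import numpy as np

for name, stem in result.items():
    # assumption: stems are NumPy arrays; if they are torch tensors,
    # convert with stem.cpu().numpy() before writing
    sf.write(f'{name}.wav', np.asarray(stem), 44100)  # one WAV per source: vocals, drums, bass, other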