[Notes] Whisper Study Notes

Preface

Study notes on installing and using Whisper.

Install the GPU Driver (Optional)

Install CUDA (Optional)

Install PyTorch (Optional)

Download Dependencies

  • Download a PyTorch build that targets CUDA 11.8
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Check Whether PyTorch Can Use the GPU

  • If this prints True, PyTorch can use the GPU
import torch
print(torch.cuda.is_available())

Download Dependencies

setuptools-rust

pip3 install setuptools-rust

Whisper

Install from the pip Repository

pip3 install -U openai-whisper

Install from the GitHub Repository

pip3 install git+https://github.com/openai/whisper.git
Upgrade the Dependency
pip3 install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git

Using Whisper from the Command Line

python3 -m whisper -h

Speech to Text

--model <model>: selects the model. Download links for each model are listed in https://github.com/openai/whisper/blob/main/whisper/__init__.py; manually downloaded models go in the ~/.cache/whisper directory

tiny.en: https://openaipublic.azureedge.net/main/whisper/models/d3dd57d32accea0b295c96e26691aa14d8822fac7d9d27d5dc00b4ca2826dd03/tiny.en.pt
tiny: https://openaipublic.azureedge.net/main/whisper/models/65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt
base.en: https://openaipublic.azureedge.net/main/whisper/models/25a8566e1d0c1e2231d1c762132cd20e0f96a85d16145c3a00adf5d1ac670ead/base.en.pt
base: https://openaipublic.azureedge.net/main/whisper/models/ed3a0b6b1c0edf879ad9b11b1af5a0e6ab5db9205f891f668f8b0e6c6326e34e/base.pt
small.en: https://openaipublic.azureedge.net/main/whisper/models/f953ad0fd29cacd07d5a9eda5624af0f6bcf2258be67c92b79389873d91e0872/small.en.pt
small: https://openaipublic.azureedge.net/main/whisper/models/9ecf779972d90ba49c06d968637d720dd632c55bbf19d441fb42bf17a411e794/small.pt
medium.en: https://openaipublic.azureedge.net/main/whisper/models/d7440d1dc186f76616474e0ff0b3b6b879abc9d1a4926b7adfa41db2d497ab4f/medium.en.pt
medium: https://openaipublic.azureedge.net/main/whisper/models/345ae4da62f9b3d59415adc60127b97c714f32e89e936602e85993674d08dcb1/medium.pt
large-v1: https://openaipublic.azureedge.net/main/whisper/models/e4b87e7e0bf463eb8e6956e646f1e277e901512310def2c24bf0e11bd3c28e9a/large-v1.pt
large-v2: https://openaipublic.azureedge.net/main/whisper/models/81f7c96c852ee8fc832187b0132e569d6c3065a3252ed18e56effd0b6a73e524/large-v2.pt
large-v3: https://openaipublic.azureedge.net/main/whisper/models/e5b1a55b89c1367dacf97e3e19bfd829a01529dbfdeefa8caeb59b3f1b81dadb/large-v3.pt
large: https://openaipublic.azureedge.net/main/whisper/models/e5b1a55b89c1367dacf97e3e19bfd829a01529dbfdeefa8caeb59b3f1b81dadb/large-v3.pt
large-v3-turbo: https://openaipublic.azureedge.net/main/whisper/models/aff26ae408abcba5fbf8813c21e62b0941638c5f6eebfb145be0c9839262a19a/large-v3-turbo.pt
turbo: https://openaipublic.azureedge.net/main/whisper/models/aff26ae408abcba5fbf8813c21e62b0941638c5f6eebfb145be0c9839262a19a/large-v3-turbo.pt
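Each URL above embeds the model file's SHA-256 checksum as the path segment just before the filename, which Whisper itself uses to validate downloads. A minimal sketch for checking a manually downloaded model against that hash; the helper names expected_sha256 and verify_model are my own, not part of Whisper's API:

```python
import hashlib
from pathlib import Path

def expected_sha256(url: str) -> str:
    """The checksum is the URL path segment just before the filename."""
    return url.split("/")[-2]

def verify_model(path: str, url: str) -> bool:
    """Return True if the local file's SHA-256 matches the one in the URL."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256(url)

tiny_url = ("https://openaipublic.azureedge.net/main/whisper/models/"
            "65147644a518d12f04e32d6f3b26facc3f8dd46e5390956a9424a650c0ce22b9/tiny.pt")
print(expected_sha256(tiny_url))

# Quick self-check with a temporary file and a made-up URL carrying its hash:
import os, tempfile
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
fake_url = "https://example.com/models/" + hashlib.sha256(b"hello").hexdigest() + "/demo.pt"
print(verify_model(f.name, fake_url))  # True
os.unlink(f.name)
```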

--output_format srt: the output format (here, SRT subtitles)
--language Chinese: the language to transcribe
--device cuda: enables GPU acceleration on CUDA-capable GPUs
<file>.wav: the input file; both audio and video files work, since Whisper decodes the input with ffmpeg (make sure ffmpeg is installed and on the PATH)

python3 -m whisper --model medium --output_format srt --language Chinese --device cuda <file>.wav

Using Whisper from Python

Import the Package

import whisper

Load a Model

model = whisper.load_model("medium")

Use GPU Acceleration

model = whisper.load_model("medium", device="cuda")

Transcribe Audio

result = model.transcribe("<file>.wav")

Specify the Language

result = model.transcribe("<file>.wav", language="Chinese")
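The returned result dict holds the full transcript under result["text"] and per-segment timings under result["segments"], where each segment has start, end, and text fields. Whisper also ships writers in whisper.utils for this, but as a self-contained sketch, the segments can be rendered as SRT by hand; the helpers below (srt_timestamp, segments_to_srt) are my own, not Whisper API:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render Whisper-style segments (dicts with start/end/text) as SRT text."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

# Example with hand-made segments; normally you would pass result["segments"]
demo = [{"start": 0.0, "end": 2.5, "text": " Hello"},
        {"start": 2.5, "end": 5.0, "text": " World"}]
print(segments_to_srt(demo))
```

Writing the returned string to a .srt file gives subtitles equivalent to the CLI's --output_format srt output.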

Done
