While adding an intelligent assistant to our mobile office platform, we needed speech-to-text (dictation) and text-to-speech (synthesis). I had used iFLYTEK (科大讯飞) dictation before, we had already wrapped it behind an interface, and its dictation UI is quite friendly, so iFLYTEK was the natural choice. In practice, though, the free tier of iFLYTEK only allows 20,000 dictation calls per day. With more than three thousand people in our company, even a few requests per person would exhaust that quota, so we had to look at other vendors' dictation offerings.
I had previously written a demo using Apple's built-in Speech.framework and AVFoundation that transcribed speech to text, translated it into another language, and synthesized the translation back into speech. That work came in handy here. The logic is laid out below.
Dictation (speech-to-text)
The code below follows the approach in the tutorial Building a Speech-to-Text App Using Speech Framework in iOS 10. The tutorial is written in Swift; I converted it to Objective-C.
- Declare the properties
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>
@interface NTListenUserSiri() <SFSpeechRecognizerDelegate, SFSpeechRecognitionTaskDelegate>
@property (nonatomic, strong) SFSpeechRecognizer *speechRecognizer;                       // recognizer bound to a locale
@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *recognitionRequest;  // streams audio buffers to the recognizer
@property (nonatomic, strong) SFSpeechRecognitionTask *recognitionTask;                   // the in-flight recognition task
@property (nonatomic, strong) AVAudioEngine *audioEngine;                                 // captures microphone input
@property (nonatomic, strong) NSTimer *timer;                                             // auto-stop timer (see below)
@end
- Request speech recognition (Siri) authorization when the object is initialized
- (instancetype)init {
    self = [super init];
    if (self) {
        // Create the recognizer for the Chinese (zh-CN) locale
        self.speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"zh-CN"]];
        self.speechRecognizer.delegate = self;
        [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
            switch (status) {
                case SFSpeechRecognizerAuthorizationStatusAuthorized:
                    NSLog(@"speech recognition authorized");
                    break;
                case SFSpeechRecognizerAuthorizationStatusDenied:
                    NSLog(@"speech recognition permission denied by the user");
                    break;
                case SFSpeechRecognizerAuthorizationStatusRestricted:
                    NSLog(@"speech recognition restricted on this device");
                    break;
                default:
                    break;
            }
        }];
        self.audioEngine = [[AVAudioEngine alloc] init];
    }
    return self;
}
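Note that on iOS 10 and later the app's Info.plist must contain NSSpeechRecognitionUsageDescription and NSMicrophoneUsageDescription entries; without them the authorization prompt never appears and the app crashes as soon as it touches the Speech framework or the microphone.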
- Start dictation
- (void)startUseSiriListen {
    if (self.audioEngine.isRunning) {
        ...
        // already recording: stop dictation
    } else {
        self.isListening = YES;
        [self startRecording];
        // start recording
    }
}
#pragma mark -- Private method
- (void)startRecording {
    // If a recognitionTask is already running, cancel it first
    if (self.recognitionTask != nil) {
        [self.recognitionTask cancel];
        self.recognitionTask = nil;
    }
    /*
     Prepare the shared AVAudioSession for recording.
     The reference tutorial uses the Record category and Measurement mode,
     but that combination mutes speech synthesis when dictation and synthesis
     are used together (see the notes below), so PlayAndRecord with the
     default mode is used here instead.
     */
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *sessionError = nil;
    [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionMixWithOthers error:&sessionError];
    [audioSession setMode:AVAudioSessionModeDefault error:&sessionError];
    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&sessionError];
    [audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&sessionError];
    if (sessionError) {
        NSLog(@"audioSession properties weren't set because of an error: %@", sessionError);
    }
    // Create a new SFSpeechAudioBufferRecognitionRequest; it will stream the audio data to Apple's servers
    self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    // The audio engine's input node is the microphone input
    AVAudioInputNode *inputNode = self.audioEngine.inputNode;
    // Report partial results so text arrives while the user is still speaking
    self.recognitionRequest.shouldReportPartialResults = YES;
    NTWeakself;
    self.recognitionTask = [self.speechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result != nil) {
            // Forward the best transcription so far to the delegate
            if (weakself.delegate && weakself.isListening) {
                [weakself.delegate siriListeningWithText:result.bestTranscription.formattedString];
            }
            isFinal = result.isFinal;
            // Every partial result restarts the short auto-stop timer (see below)
            [weakself createTimer:1.5 isWillRecorder:NO];
        }
        // // If an error occurred or the final result has arrived, stop the audioEngine,
        // // remove the tap, and tear down the recognitionRequest and recognitionTask.
        // if (error != nil || isFinal) {
        //     [weakself.audioEngine stop];
        //     [inputNode removeTapOnBus:0];
        //     [weakself.recognitionRequest endAudio];
        //     weakself.recognitionRequest = nil;
        //     weakself.recognitionTask = nil;
        // }
    }];
    // Install a tap on the input node to feed microphone buffers into the recognitionRequest.
    // It is fine to add the audio input after starting the recognitionTask; the Speech
    // framework begins recognizing as soon as audio is appended.
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [weakself.recognitionRequest appendAudioPCMBuffer:buffer];
    }];
    [self.audioEngine prepare];
    NSError *engineError = nil;
    if (![self.audioEngine startAndReturnError:&engineError]) {
        NSLog(@"audioEngine couldn't start because of an error: %@", engineError);
    }
    // If nothing is said within this window, dictation ends automatically (see below)
    [self createTimer:4.0f isWillRecorder:YES];
}
- SFSpeechRecognizerDelegate
#pragma mark SFSpeechRecognizerDelegate
- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    if (available) {
        // the recognizer is available again
    } else {
        // the recognizer is currently unavailable; tell the user
        [HUDNotificationCenter showMessage:@"Speech recognition is currently unavailable" hideAfter:1.0];
    }
}
- Timer
- (void)createTimer:(NSTimeInterval)interval isWillRecorder:(BOOL)isWillRecord {
    // Only one auto-stop timer is alive at a time; restart it on every call
    [self.timer invalidate];
    self.timer = nil;
    NTWeakself;
    self.timer = [NSTimer scheduledTimerWithTimeInterval:interval repeats:NO block:^(NSTimer * _Nonnull timer) {
        // If we are still recording when the timer fires, end dictation automatically
        if (weakself.audioEngine.isRunning) {
            [weakself endUseSiriListenWithIsContailText:!isWillRecord];
        }
    }];
}
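The timer calls endUseSiriListenWithIsContailText:, which the original post does not show. A minimal sketch of what it presumably does, assuming it simply mirrors the setup in startRecording and that the flag indicates whether any text was captured:
- (void)endUseSiriListenWithIsContailText:(BOOL)isContailText {
    // Hypothetical teardown, the inverse of startRecording
    self.isListening = NO;
    [self.audioEngine stop];
    [self.audioEngine.inputNode removeTapOnBus:0];
    [self.recognitionRequest endAudio];   // no more audio will be appended
    [self.recognitionTask cancel];
    self.recognitionTask = nil;
    self.recognitionRequest = nil;
    [self.timer invalidate];
    self.timer = nil;
    // isContailText == NO would mean the initial window elapsed with no speech (assumption)
}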
Things to note
- As the reference article describes, the audioSession properties in startRecording were originally set like this:
try audioSession.setCategory(AVAudioSessionCategoryRecord)
try audioSession.setMode(AVAudioSessionModeMeasurement)
try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
With the reference article's settings, dictation alone works fine, but when dictation and speech synthesis are used together, the synthesized speech is inaudible. A search on Stack Overflow showed the problem lies in these session parameters; switching to the settings in my version above fixed it.
- Dictation works by sending the audio to Apple's servers, which transcribe it and return text. As anyone who has used Siri knows, the result is not returned in one piece; partial results stream back continuously, refined against context. Also, with the Speech framework you must stop recognition manually; unlike Siri itself, it does not stop automatically when a sentence ends.
We wanted the Siri behavior, where dictation stops on its own once a sentence is finished, so I added the timer method above. The idea: when dictation starts, arm a 4-second timer; if nothing is said within 4 seconds, end dictation. Once the user speaks, each partial result arms a 1.5-second timer instead; if Apple's servers return no further text within 1.5 seconds, we treat the utterance as finished and stop automatically. In testing, this basically met our needs.
Speech synthesis
Speech synthesis, put plainly, turns text into speech and plays it aloud. This part is relatively simple, so the code is pasted directly below.
#import <AVFoundation/AVFoundation.h>
@interface NTSpeakUserSiri ()
@property (nonatomic, strong) AVSpeechSynthesizer *av;
@end
@implementation NTSpeakUserSiri
static NTSpeakUserSiri* sharedSpeaker;
+ (NTSpeakUserSiri*)sharedSpeaker {
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
sharedSpeaker = [[NTSpeakUserSiri alloc] init];
});
return sharedSpeaker;
}
- (void)speakWithText:(NSString *)text {
    if (text && text.length > 0) {
        AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:text];
        utterance.rate = 0.5;             // 0.5 is the default speech rate
        utterance.pitchMultiplier = 1.0f;
        utterance.volume = 1.0;
        // The BCP-47 identifier for Mandarin Chinese is "zh-CN"
        AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-CN"];
        utterance.voice = voice;
        [self.av speakUtterance:utterance];
    }
}
- (void)stopSpeaking {
[self.av stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}
- (AVSpeechSynthesizer *)av {
    if (!_av) {
        _av = [[AVSpeechSynthesizer alloc] init];
        // Playback category; MixWithOthers lets other audio keep playing
        AVAudioSession *audioSession = [AVAudioSession sharedInstance];
        [audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionMixWithOthers error:nil];
        [audioSession setMode:AVAudioSessionModeDefault error:nil];
        [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];
        [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
    }
    return _av;
}
@end
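A quick usage sketch (the message text here is just an example):
//speak an incoming message through the shared synthesizer
[[NTSpeakUserSiri sharedSpeaker] speakWithText:@"你好,世界"];
//...and cut it off immediately if the user interrupts
[[NTSpeakUserSiri sharedSpeaker] stopSpeaking];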
Microphone and Siri permission checks
Checking the microphone permission
//Check whether microphone permission has been granted
- (BOOL)checkMicophonStatus {
    // The microphone permission is queried with AVMediaTypeAudio
    AVAuthorizationStatus audioAuthStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
    if (audioAuthStatus == AVAuthorizationStatusRestricted || audioAuthStatus == AVAuthorizationStatusDenied) {
        // not authorized
        return NO;
    } else {
        // authorized, or not yet determined (the system will prompt on first use)
        return YES;
    }
}
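If the permission has not been requested yet, the prompt can be triggered explicitly. A small sketch using AVAudioSession's requestRecordPermission::
//ask the system to show the microphone permission prompt if it hasn't yet
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    NSLog(@"microphone permission %@", granted ? @"granted" : @"denied");
}];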
Checking the Siri (speech recognition) permission
//Check whether speech recognition permission has been granted
- (BOOL)checkSiriStatus {
    SFSpeechRecognizerAuthorizationStatus status = [SFSpeechRecognizer authorizationStatus];
    if (status == SFSpeechRecognizerAuthorizationStatusDenied || status == SFSpeechRecognizerAuthorizationStatusRestricted) {
        return NO;
    } else {
        // authorized, or not yet determined (requestAuthorization: will prompt)
        return YES;
    }
}
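Putting the two checks together before starting dictation might look like this (using the methods above):
//only start dictation when both permissions are in place
if ([self checkMicophonStatus] && [self checkSiriStatus]) {
    [self startUseSiriListen];
} else {
    NSLog(@"missing microphone or speech recognition permission");
}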