Mac OS X has long had an NSSpeechSynthesizer class that makes it easy to add text-to-speech functionality to a Cocoa application. Developers can add similar functionality to an iOS application using AV Foundation's AVSpeechSynthesizer class. This class speaks one or more pieces of speech content, each of which is an instance of a class named AVSpeechUtterance. If you want to speak the phrase 我爱你 ("I love you"), the implementation looks like this:
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:@"我爱你"];
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-CN"];
[synthesizer speakUtterance:utterance];
If you run this code, you will hear the phrase 我爱你 spoken in Mandarin Chinese. Now let's put this feature to work by building a simple application in which the app carries on a back-and-forth conversation about AV Foundation.
Code listing:
RGSpeechController.h
#import <AVFoundation/AVFoundation.h>
@interface RGSpeechController : NSObject
@property (strong, nonatomic, readonly) AVSpeechSynthesizer *synthesizer;
+ (instancetype)speechController;
- (void)beginConversation;
@end
RGSpeechController.m
#import "RGSpeechController.h"
@interface RGSpeechController ()
@property (strong, nonatomic) AVSpeechSynthesizer *synthesizer; // 1
@property (strong, nonatomic) NSArray *voices;
@property (strong, nonatomic) NSArray *speechStrings;
@end
@implementation RGSpeechController
+ (instancetype)speechController {
    return [[self alloc] init];
}
- (instancetype)init {
    self = [super init];
    if (self) {
        _synthesizer = [[AVSpeechSynthesizer alloc] init]; // 2
        _voices = @[[AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"], // 3
                    [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"]];
        _speechStrings = [self buildSpeechStrings];
        NSArray *speechVoices = [AVSpeechSynthesisVoice speechVoices];
        for (AVSpeechSynthesisVoice *voice in speechVoices) {
            NSLog(@"%@", voice.name);
        }
    }
    return self;
}
- (NSArray *)buildSpeechStrings { // 4
    return @[@"Hello AV Foundation. How are you?",
             @"I'm well! Thanks for asking",
             @"Are you excited about the book?",
             @"Very! I have always felt so misunderstood",
             @"What's your favorite feature",
             @"Oh, they're all my babies. I couldn't possibly choose.",
             @"It was great to speak with you!",
             @"The pleasure was all mine! Have fun!"];
}
- (void)beginConversation {
    for (NSUInteger i = 0; i < self.speechStrings.count; i++) {
        AVSpeechUtterance *utterance =
            [[AVSpeechUtterance alloc] initWithString:self.speechStrings[i]];
        utterance.voice = self.voices[i % 2];
        utterance.rate = 0.4f;               // speech rate, between 0.0 and 1.0
        utterance.pitchMultiplier = 0.8f;    // pitch of this utterance, between 0.5 (low) and 2.0 (high)
        utterance.postUtteranceDelay = 0.1f; // brief pause before the next utterance is spoken
        [self.synthesizer speakUtterance:utterance];
    }
}
@end
1. In the class extension, define the properties the class needs. The synthesizer property declared in the header is redeclared here so that it is readwrite inside the implementation. Properties are also defined for the voices and the speech strings used in the conversation.
2. Create a new instance of AVSpeechSynthesizer. This object performs the actual text-to-speech rendering. It acts as a queue for one or more AVSpeechUtterance instances and provides an interface for controlling and monitoring speech that is in progress (a small control/monitoring sketch follows these notes).
3. Create an NSArray containing two AVSpeechSynthesisVoice instances. Voice support is currently quite limited: you cannot ask for a voice by name as you can on the Mac. Instead, there is a single preconfigured voice for each language/locale. In this case, speaker 1 uses a US English voice and speaker 2 uses a British English voice. You can inspect the complete list of supported voices by calling the speechVoices class method on AVSpeechSynthesisVoice.
4. Create an array of strings defining the back-and-forth of the scripted conversation.
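As mentioned in note 2, the synthesizer also lets you control and monitor playback. The following is a minimal sketch, not part of the listing above: it assumes the class extension additionally adopts AVSpeechSynthesizerDelegate and that _synthesizer.delegate = self; is assigned in -init, and the pauseConversation/resumeConversation method names are invented for illustration. Only the AVSpeechSynthesizer and AVSpeechSynthesizerDelegate calls themselves are real framework API.

// Sketch of possible additions to RGSpeechController.m (illustrative only).
// Assumes the class extension adopts <AVSpeechSynthesizerDelegate> and that
// _synthesizer.delegate = self; is set in -init.

// Pause at the next word boundary if speech is currently in progress.
- (void)pauseConversation {
    if (self.synthesizer.isSpeaking && !self.synthesizer.isPaused) {
        [self.synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryWord];
    }
}

// Resume a previously paused conversation.
- (void)resumeConversation {
    if (self.synthesizer.isPaused) {
        [self.synthesizer continueSpeaking];
    }
}

// Delegate callbacks report the progress of each queued utterance.
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
  didStartSpeechUtterance:(AVSpeechUtterance *)utterance {
    NSLog(@"Started: %@", utterance.speechString);
}

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
 didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
    NSLog(@"Finished: %@", utterance.speechString);
}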
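To hear the conversation, the sample app just needs a view controller that holds on to a speech controller and starts playback. A minimal sketch follows; the class name RGViewController and its speechController property are illustrative assumptions, not part of the original listing.

// RGViewController.m (illustrative sketch)
#import <UIKit/UIKit.h>
#import "RGSpeechController.h"

@interface RGViewController : UIViewController
// Keep a strong reference so the synthesizer stays alive while the
// queued utterances are being spoken.
@property (strong, nonatomic) RGSpeechController *speechController;
@end

@implementation RGViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    self.speechController = [RGSpeechController speechController];
    [self.speechController beginConversation];
}

@end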