Appendix: Apple's official documentation
Speech Recognition
iOS 10 introduces a new API that supports continuous speech recognition and helps you build apps that can recognize speech and transcribe it into text. Using the APIs in the Speech framework (Speech.framework), you can perform speech transcription of both real-time and recorded audio. For example, you can get a speech recognizer and start simple speech recognition using code like this:
let recognizer = SFSpeechRecognizer()
let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
recognizer?.recognitionTask(with: request, resultHandler: { (result, error) in
print(result?.bestTranscription.formattedString)
})
As with accessing other types of protected data, such as Calendar and Photos data, performing speech recognition requires the user’s permission (for more information about accessing protected data classes, see Security and Privacy Enhancements). In the case of speech recognition, permission is required because data is transmitted and temporarily stored on Apple’s servers to increase the accuracy of recognition. To request the user’s permission, you must add the NSSpeechRecognitionUsageDescription key to your app’s Info.plist file and provide content that describes your app’s usage.
When you adopt speech recognition in your app, be sure to indicate to users when their speech is being recognized so that they can avoid making sensitive utterances at that time.
While the iPhone 7/7 Plus is selling like hotcakes, iOS 10 has added a number of new features, such as the speech recognition API. So how do I use it? Apple's official explanation is quoted above (reading the official documentation is a good habit, even if it's hard to follow).
When trying out this API with Xcode 8, you must first configure the project's Info.plist file, because Apple has tightened its privacy requirements considerably in this update. We need to declare the following two keys:
NSSpeechRecognitionUsageDescription
NSMicrophoneUsageDescription
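For example, the two keys can be added to Info.plist like this (the description strings below are placeholders; replace them with text that explains your app's actual usage, since the system shows them to the user in the permission prompt):

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>Your speech is sent to Apple's servers to be transcribed into text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used to record your voice for transcription.</string>
```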
Speech supports continuous speech recognition, recognition of both audio files and live audio streams, and dictation in multiple languages (at this point I'm sweating on iFlytek's behalf).
To experience this powerful native speech recognition, we need to import the framework into the project: #import <Speech/Speech.h>
Core code:
#import <Speech/Speech.h>

// 1. Create a locale identifier for Mandarin Chinese
NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:@"zh_CN"];
// 2. Create a speech recognizer for that locale
SFSpeechRecognizer *recognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
// 3. Load the audio resource from the bundle and get its URL
NSURL *url = [[NSBundle mainBundle] URLForResource:@"斑马.mp3" withExtension:nil];
// 4. Pass the URL of the recording to a recognition request
SFSpeechURLRecognitionRequest *request = [[SFSpeechURLRecognitionRequest alloc] initWithURL:url];
// 5. Start the recognition task
[recognizer recognitionTaskWithRequest:request resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    if (error != nil) {
        NSLog(@"Speech recognition failed: %@", error);
    } else {
        // Recognition succeeded
        NSLog(@"---%@", result.bestTranscription.formattedString);
    }
}];
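Besides the Info.plist keys, the user's runtime authorization is also required before recognition can run. A minimal sketch of requesting it with SFSpeechRecognizer's requestAuthorization: (the callback may arrive on a background queue, so dispatch to the main queue before updating any UI):

```objc
#import <Speech/Speech.h>

// Ask the user for speech recognition permission before starting a task.
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
        // Safe to create a recognizer and start a recognition task here
        NSLog(@"Speech recognition authorized");
    } else {
        // Denied, restricted, or not yet determined
        NSLog(@"Speech recognition not authorized: %ld", (long)status);
    }
}];
```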
This post will be updated; a demo is coming later.