iOS Development --- Classifying Sounds in an Audio File (Objective-C), Part 2

Author: 我是卖报的小行家 | Published 2022-03-11 11:44

Continuing from the previous article:

1. Import the model we created earlier
#import "SoundClassifier.h"
#import <CoreML/CoreML.h>
#import <SoundAnalysis/SoundAnalysis.h>
2. Conform to the <SNResultsObserving> protocol
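A minimal sketch of what conforming looks like (the class name `SMAudioClassifyManager` and the property are illustrative assumptions, not from the original):

```objectivec
#import <Foundation/Foundation.h>
#import <SoundAnalysis/SoundAnalysis.h>

// Hypothetical observer/controller class that adopts SNResultsObserving.
@interface SMAudioClassifyManager : NSObject <SNResultsObserving>
// Hold the analyzer strongly so it stays alive for the duration of the analysis.
@property (nonatomic, strong) SNAudioFileAnalyzer *audioFileAnalyzer;
@end
```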
3. Create the sound classification request

1. Create one by passing a version identifier to the initializer.
// path is the audio file's path on disk; build a file URL
// (URLWithString: is for URL strings, not file paths).
NSError *error = nil;
NSURL *url = [NSURL fileURLWithPath:path];

SNAudioFileAnalyzer *audioFileAnalyzer = [[SNAudioFileAnalyzer alloc] initWithURL:url error:&error];

MLModelConfiguration *defaultConfig = [[MLModelConfiguration alloc] init];
defaultConfig.allowLowPrecisionAccumulationOnGPU = YES;

SoundClassifier *soundClassifier = [[SoundClassifier alloc] initWithConfiguration:defaultConfig error:&error];

SNClassifySoundRequest *request = nil;
if (@available(iOS 15.0, *)) {
    // On iOS 15 and later, create a request that uses the framework's
    // built-in sound classification model.
    request = [[SNClassifySoundRequest alloc] initWithClassifierIdentifier:SNClassifierIdentifierVersion1 error:&error];
    request.windowDuration = CMTimeMakeWithSeconds(2, 16000);
} else {
    // Below iOS 15, create a request that uses our custom sound classification model.
    request = [[SNClassifySoundRequest alloc] initWithMLModel:soundClassifier.model error:&error];
}

[audioFileAnalyzer addRequest:request withObserver:self error:&error];

[audioFileAnalyzer analyze];
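Note that `analyze` blocks the calling thread until the whole file has been processed. SNAudioFileAnalyzer also provides an asynchronous variant; a minimal sketch:

```objectivec
// Asynchronous alternative: returns immediately and calls the handler
// when analysis stops, indicating whether the end of the file was reached.
[audioFileAnalyzer analyzeWithCompletionHandler:^(BOOL didReachEndOfFile) {
    NSLog(@"Analysis finished; reached end of file: %@", didReachEndOfFile ? @"YES" : @"NO");
}];
```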
2. Implement the results observer

Implement a type that receives results from the audio analyzer by adopting the SNResultsObserving protocol. The protocol defines the methods the analyzer calls when it produces a result, encounters an error, or finishes the task.

- (void)request:(id<SNRequest>)request didProduceResult:(id<SNResult>)result API_AVAILABLE(ios(13.0)) {
    if ([result isKindOfClass:SNClassificationResult.class]) {
        // Downcast the result to a classification result.
        SNClassificationResult *ret = (SNClassificationResult *)result;
        // Use CMTimeGetSeconds to avoid integer division on the CMTime fields.
        NSTimeInterval timeInSeconds = CMTimeGetSeconds(ret.timeRange.start);
        NSString *formattedTime = [NSString stringWithFormat:@"%.2f", timeInSeconds];
        if (ret.classifications.count > 0) {
            // classifications is sorted by confidence, so the first object is the top prediction.
            SNClassification *classification = ret.classifications.firstObject;
            if (classification) {
                // Convert the confidence to a percentage string.
                CGFloat percent = classification.confidence * 100;
                NSString *percentString = [NSString stringWithFormat:@"%.2f%%", percent];
                NSLog(@"Analysis result for audio at time: %@", formattedTime);
                NSLog(@"%@: %@ confidence.\n", classification.identifier, percentString);
            }
        }
    }
}

The observer in this example prints each prediction (timestamp, classification name, and the classifier's confidence) to the console. Implement your own observer to take whatever action suits your app based on the results.
You must keep a strong reference to the observer: the sound analyzer does not hold a strong reference to it.
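Besides request:didProduceResult:, SNResultsObserving also declares optional callbacks for failure and completion; a minimal sketch of implementing them:

```objectivec
// Called when analysis of the file fails partway through.
- (void)request:(id<SNRequest>)request didFailWithError:(NSError *)error {
    NSLog(@"Sound analysis failed: %@", error.localizedDescription);
}

// Called when the analyzer has processed all audio for this request.
- (void)requestDidComplete:(id<SNRequest>)request {
    NSLog(@"Sound analysis completed successfully.");
}
```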

Output
Analysis result for audio at time: 1.45
Acoustic Guitar: 92.39% confidence.

...

Analysis result for audio at time: 8.74
Acoustic animal: 94.45% confidence.

...

Analysis result for audio at time: 14.15
Tambourine: 85.39% confidence.

...

Analysis result for audio at time: 20.92
Snare Drum: 96.87% confidence.


      Original link: https://www.haomeiwen.com/subject/lhmcdrtx.html