The main pieces used:
1. Alibaba Cloud (Aliyun) OSS object storage to manage the voice files
2. AVAudioRecorder for recording
3. AVAudioPlayer for playback
The walkthrough below follows the order record -> upload -> download -> play:
1: Variable and constant declarations
//Aliyun OSS related
NSString * const endPoint = @"your endpoint address";
NSString * const multipartUploadKeyId = @"your keyId";
//I keep the secret on the device here because the voice files need no secrecy;
//Aliyun also lets you keep the secret on your own server and fetch it via a
//request — see the Aliyun developer docs for details
NSString * const multipartUploadKeySecret = @"your keySecret";
static AVAudioRecorder *audioRecorder;
static AVAudioPlayer *audioPlayer;
static OSSClient *client;
2: Recording
#import <AVFoundation/AVFoundation.h>
/**
 @param fileKey the Aliyun OSS fileKey
 @return "flag": 0 on failure, anything else on success
 */
+ (NSString*)startAudioRecordWithFileKey:(NSString*)fileKey{
    BOOL flag = [AppController checkMicrophoneAvailability];
    if (!flag) {
        DLog(@"microphone is not available");
        return [NSString stringWithFormat:@"%d", flag];
    }
    NSURL *url = [AppController fileUrlWithfileKey:fileKey];
    NSMutableDictionary *settings = [NSMutableDictionary dictionary];
    //Sample rate: 8000 Hz is telephone quality, enough for ordinary voice recording
    [settings setObject:[NSNumber numberWithFloat:8000] forKey:AVSampleRateKey];
    //File format: AAC is also playable on Android
    [settings setObject:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
    //Channel count: some articles say the iPhone has only one microphone, so a single
    //channel is enough, but many samples use 2 and both record fine — I have tried them
    [settings setObject:@2 forKey:AVNumberOfChannelsKey];
    //Bits per sample
    [settings setObject:@16 forKey:AVLinearPCMBitDepthKey];
    audioRecorder = [[AVAudioRecorder alloc] initWithURL:url settings:[settings copy] error:nil];
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
    //Activate the session before recording starts
    [[AVAudioSession sharedInstance] setActive:YES error:nil];
    audioRecorder.delegate = APP_DELEGATE;
    audioRecorder.meteringEnabled = YES;
    flag = [audioRecorder record];
    return [NSString stringWithFormat:@"%d", flag];
}
//Check whether the microphone is available.
//Note: requestRecordPermission: invokes its handler asynchronously, so writing the
//result into a __block variable and returning it right away (as the original version
//did) can return before the user has answered the prompt. Checking recordPermission
//(iOS 8+) is synchronous; we still trigger the request when the state is
//undetermined so that the system prompt gets shown.
+ (BOOL)checkMicrophoneAvailability{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    if ([session respondsToSelector:@selector(recordPermission)]) {
        if (session.recordPermission == AVAudioSessionRecordPermissionUndetermined) {
            [session requestRecordPermission:^(BOOL granted) {}];
        }
        return session.recordPermission == AVAudioSessionRecordPermissionGranted;
    }
    return YES;
}
//Path of the recording file. I use the temp directory here, because the file can be
//deleted once it has been uploaded or played back
+ (NSURL*)fileUrlWithfileKey:(NSString*)fileKey{
    return [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:fileKey]];
}
//Implement the AVAudioRecorderDelegate methods
#pragma mark - AVAudioRecorderDelegate
//Set audioRecorder to nil after recording finishes, since a fresh instance is created
//for every recording; under ARC it should also be fine not to nil it out
- (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)recorder successfully:(BOOL)flag{
    NSLog(@"%s", __func__);
    audioRecorder = nil;
}
- (void)audioRecorderEncodeErrorDidOccur:(AVAudioRecorder *)recorder error:(NSError *)error{
    NSLog(@"%@", error);
}
3: Stopping the recording:
[audioRecorder stop];
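One thing worth adding when you stop: deactivating the audio session, so that audio from other apps can resume. A small sketch of that (the deactivation option used below is a standard AVAudioSession constant):

```objectivec
//Stop recording, then give the audio session back to the system so that
//other apps' audio (e.g. background music) can resume
[audioRecorder stop];
[[AVAudioSession sharedInstance] setActive:NO
                               withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                     error:nil];
```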
4: Initializing the OSSClient
//Initialize the OSSClient, which handles file and bucket management on Aliyun OSS
+ (OSSClient*)client{
    if (!client) {
        [AppController initOSSClient];
    }
    return client;
}
+ (void)initOSSClient {
    // Configuring the secret in plain text is recommended for testing only; for other
    // authentication modes see the "Access Control" chapter of the full official docs
    id<OSSCredentialProvider> credential = [[OSSPlainTextAKSKPairCredentialProvider alloc] initWithPlainTextAccessKey:multipartUploadKeyId
                                                                                                            secretKey:multipartUploadKeySecret];
    client = [[OSSClient alloc] initWithEndpoint:endPoint credentialProvider:credential];
}
5: Uploading the recording
Uploading a file takes two parameters:
One is bucketName, which works like a folder name. Define it yourself; buckets can also be managed in code, which I won't cover here.
The other is fileKey, which works like a file name. Mine is passed in from outside, in the "userId_timestamp" format.
My first idea was to open a new bucket each day, to make it easy to manage the files in the cloud and delete old ones.
The problem is a user who happens to send a voice message right at midnight. The fix would be to report this message's bucketName plus fileKey to the server after upload and download, but for lack of time I didn't do it.
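For reference, generating a fileKey in that format could look like the sketch below; the helper name is my own, not part of the project code:

```objectivec
//Hypothetical helper: build a fileKey in the "userId_timestamp" format
//described above. The .aac suffix matches the recording format used earlier.
+ (NSString*)fileKeyForUserId:(NSString*)userId{
    NSTimeInterval ts = [[NSDate date] timeIntervalSince1970];
    return [NSString stringWithFormat:@"%@_%.0f.aac", userId, ts];
}
```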
/**
 Result reported on completion: "flag,fileKey"
 flag: 1 on success, 0 on failure
 fileKey: the Aliyun OSS fileKey
 */
+ (void)uploadAudioRecordWithBucketName:(NSString*)bucketName fileKey:(NSString*)fileKey{
    NSURL *fileUrl = [AppController fileUrlWithfileKey:fileKey];
    OSSPutObjectRequest *put = [OSSPutObjectRequest new];
    // required fields
    put.bucketName = bucketName;
    put.objectKey = fileKey;
    put.uploadingFileURL = fileUrl;
    OSSTask *putTask = [[AppController client] putObject:put];
    [putTask continueWithBlock:^id(OSSTask *task) {
        BOOL flag = task.error == nil;
        if (flag) {
            //Delete the local recording only after the upload succeeded,
            //then tell our own server that the user uploaded a voice message
            [AppController removeAudioFileWithUrl:fileUrl];
            NSLog(@"upload object success!");
        } else {
            NSLog(@"upload object failed, error: %@", task.error);
        }
        return nil;
    }];
}
6: Telling our server that the user uploaded a voice message
Parameters: userId, fileKey, bucketName
7: Our server tells the receiver that there is a message to download
Parameters: bucketName, fileKey
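Steps 6 and 7 depend entirely on your own backend, so here is only a minimal sketch of the client side of step 6, assuming a plain JSON POST; the URL and field names are placeholders, not a real API:

```objectivec
//Hypothetical endpoint — replace the URL and payload with your server's real API.
+ (void)notifyServerWithUserId:(NSString *)userId
                    bucketName:(NSString *)bucketName
                       fileKey:(NSString *)fileKey {
    NSURL *url = [NSURL URLWithString:@"https://your.server/api/voiceMessage"];
    NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:url];
    req.HTTPMethod = @"POST";
    [req setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    NSDictionary *body = @{@"userId": userId,
                           @"bucketName": bucketName,
                           @"fileKey": fileKey};
    req.HTTPBody = [NSJSONSerialization dataWithJSONObject:body options:0 error:nil];
    [[[NSURLSession sharedSession] dataTaskWithRequest:req
                                     completionHandler:^(NSData *data, NSURLResponse *resp, NSError *error) {
        if (error) NSLog(@"notify server failed: %@", error);
    }] resume];
}
```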
8: Downloading the recording
+ (void)downloadAudioRecordWithBucketName:(NSString*)bucketName fileKey:(NSString*)fileKey{
    OSSGetObjectRequest * request = [OSSGetObjectRequest new];
    // required
    request.bucketName = bucketName;
    request.objectKey = fileKey;
    NSURL *downloadUrl = [AppController fileUrlWithfileKey:fileKey];
    request.downloadToFileURL = downloadUrl;
    OSSTask * getTask = [[AppController client] getObject:request];
    [getTask continueWithBlock:^id(OSSTask *task) {
        if (!task.error) {
            //Go play the voice message
            [AppController playAudioRecordWithUrl:downloadUrl];
        } else {
            NSLog(@"download object failed, error: %@", task.error);
        }
        return nil;
    }];
}
9: Playback
+ (void)playAudioRecordWithUrl:(NSURL*)url{
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
    NSError *error;
    //Careful here: with the .wav format, initializing from audio sent by the Android
    //side always left audioPlayer nil — the fix is described in the pitfalls at the bottom
    audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    audioPlayer.delegate = APP_DELEGATE;
    BOOL success = [audioPlayer play];
    if (success) {
        NSLog(@"playback started");
    } else {
        NSLog(@"playback failed");
    }
}
//Implement AVAudioPlayerDelegate
#pragma mark - AVAudioPlayerDelegate
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag{
    audioPlayer = nil;
    //Delete the recording file once playback finishes
    [AppController removeAudioFileWithUrl:player.url];
}
- (void)audioPlayerDecodeErrorDidOccur:(AVAudioPlayer *)player error:(NSError *)error{
    DLog(@"%@", error);
    //Also delete the recording file when playback fails
    [AppController removeAudioFileWithUrl:player.url];
}
10: Deleting the recording file after use
+ (void)removeAudioFileWithUrl:(NSURL*)url{
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSError *error;
    [fileManager removeItemAtURL:url error:&error];
    if (error) {
        DLog(@"removeAudioFileWithUrl failed: %@", error);
    }
}
11: Pitfalls
1. The iOS flow itself worked, but recordings uploaded from iOS played fine on Android while Android recordings would not play on iOS. After downloading an Android recording it played normally in other players. At first both sides used the .wav format, and the error was:
OSStatus error 2003334207
You can inspect this error code with the macOS Calculator: open Calculator, switch the view to Programmer mode, and paste in 2003334207. Interpreted as ASCII it reads "wht?", which, according to posts online, means it carries no useful information.
I finally found an article suggesting the AAC format for Android compatibility — one try and it worked!
2. The Aliyun endPoint has to match your own service's region — don't pick the Hangzhou node when your bucket lives in Beijing, and so on~