Adding Text or Voice Annotations to a View

Author: codingZero | Published 2016-10-12 17:13

Preface

Somehow four months have gone by without an update here. Recently my company's project once again copied a feature from another app, and poor me spent two days fiddling with it before finally getting it working. Good things are meant to be shared, so here is the result first.

[Screenshot: the finished effect]

I've gotten lazy, so I won't walk through the implementation in detail or polish the wording; here is a quick rundown. The source link is at the end of the article.

Adding the Text Feature

First, create a custom view named RemarkView to display the text. Its structure looks like this:

[Diagram: RemarkView structure]

In the diagram, the red area is the custom view itself, the blue one is a label that shows the text, and the black one serves as the label's background. The two buttons on the right are added to the red view: one deletes the remark, the other edits it.

Add a pan gesture to the black view so the whole thing can be dragged around, and a tap gesture to show or hide the two buttons on the right, as sketched below.
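A rough sketch of that wiring inside RemarkView (the bgView, deleteButton and editButton property names are my own placeholders, and the delegate calls use the protocol defined just below):

// Inside RemarkView.m (a sketch; property names are placeholders)
- (void)setupGestures {
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                          action:@selector(handlePan:)];
    [self.bgView addGestureRecognizer:pan];

    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                          action:@selector(handleTap:)];
    [self.bgView addGestureRecognizer:tap];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // Let the controller (the delegate) actually move the view
    if ([self.delegate respondsToSelector:@selector(remarkView:moveWithGesture:)]) {
        [self.delegate remarkView:self moveWithGesture:pan];
    }
}

- (void)handleTap:(UITapGestureRecognizer *)tap {
    // Toggle the delete / edit buttons on the right
    self.deleteButton.hidden = !self.deleteButton.hidden;
    self.editButton.hidden = !self.editButton.hidden;
}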

Define a delegate for RemarkView and make the view controller its delegate so it can handle certain events:

/**
 *  Callback for when the edit button is tapped
 */
- (void)remarkViewEditContent:(RemarkView *)remarkView;

/**
 *  Callback for when the view is being dragged
 */
- (void)remarkView:(RemarkView *)remarkView moveWithGesture:(UIPanGestureRecognizer *)gesture;
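For context, these two methods presumably live in a delegate protocol declared alongside RemarkView, roughly like this (the protocol name and the @optional marker are my assumptions):

@class RemarkView;

@protocol RemarkViewDelegate <NSObject>
@optional
- (void)remarkViewEditContent:(RemarkView *)remarkView;
- (void)remarkView:(RemarkView *)remarkView moveWithGesture:(UIPanGestureRecognizer *)gesture;
@end

@interface RemarkView : UIView
@property (nonatomic, weak) id<RemarkViewDelegate> delegate;
@end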

The drag handling looks like this; _remarkAndVoiceView is the parent view that holds the text and voice views:

- (void)remarkView:(RemarkView *)remarkView moveWithGesture:(UIPanGestureRecognizer *)gesture {
    // Move the view by the gesture's translation, then reset the translation
    // so the next callback only delivers the incremental movement
    CGPoint translation = [gesture translationInView:gesture.view];
    remarkView.x += translation.x;
    remarkView.y += translation.y;
    [self layoutView:remarkView];
    [gesture setTranslation:CGPointZero inView:gesture.view];
}

// Keep the view from being dragged outside its parent
- (void)layoutView:(UIView *)view {
    CGFloat maxX = CGRectGetMaxX(view.frame);
    CGFloat maxY = CGRectGetMaxY(view.frame);
    if (maxX > _remarkAndVoiceView.width) {
        view.x = _remarkAndVoiceView.width - view.width;
    }
    if (maxY > _remarkAndVoiceView.height) {
        view.y = _remarkAndVoiceView.height - view.height;
    }
    if (view.x < 0) view.x = 0;
    if (view.y < 0) view.y = 0;
}
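Note that .x, .y, .width and .height are not standard UIView properties; the snippets above presumably rely on a frame-accessor category along these lines (a sketch, the category name is assumed; centerX/centerY used later work the same way):

// UIView+Frame.h: convenience accessors assumed by the snippets above
@interface UIView (Frame)
@property (nonatomic, assign) CGFloat x;
@property (nonatomic, assign) CGFloat y;
@property (nonatomic, assign) CGFloat width;
@property (nonatomic, assign) CGFloat height;
@end

// UIView+Frame.m
@implementation UIView (Frame)
- (CGFloat)x      { return self.frame.origin.x; }
- (CGFloat)y      { return self.frame.origin.y; }
- (CGFloat)width  { return self.frame.size.width; }
- (CGFloat)height { return self.frame.size.height; }

- (void)setX:(CGFloat)x           { CGRect f = self.frame; f.origin.x = x;        self.frame = f; }
- (void)setY:(CGFloat)y           { CGRect f = self.frame; f.origin.y = y;        self.frame = f; }
- (void)setWidth:(CGFloat)width   { CGRect f = self.frame; f.size.width = width;   self.frame = f; }
- (void)setHeight:(CGFloat)height { CGRect f = self.frame; f.size.height = height; self.frame = f; }
@end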

After that, you need a view controller for adding or editing the text. That part is straightforward, so I won't go into it.
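The edit-button callback from the protocol is where that controller gets presented. A minimal sketch, assuming a hypothetical EditTextViewController and a text property on RemarkView (neither is from the repo):

// In the controller that owns _remarkAndVoiceView (a sketch; the names below are hypothetical)
- (void)remarkViewEditContent:(RemarkView *)remarkView {
    EditTextViewController *editVC = [[EditTextViewController alloc] init];
    editVC.text = remarkView.text;
    editVC.completion = ^(NSString *newText) {
        remarkView.text = newText; // update the label inside RemarkView
    };
    [self presentViewController:editVC animated:YES completion:nil];
}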

Adding the Voice Feature

Press and hold the button to record, release to finish, and slide your finger off the button before releasing to cancel. While recording, a different image is shown depending on how loud the input is.

The handler for the button's various control events looks like this:

- (void)didTouchRecordButon:(RecordButton *)sender event:(UIControlEvents)event {
    self.bgView.hidden = NO;
    _isDragExit = (event == UIControlEventTouchDragExit);
    if (event == UIControlEventTouchDown || event == UIControlEventTouchDragEnter) {
        // Finger pressed down (or dragged back onto the button): start recording and show the prompt
        if (event == UIControlEventTouchDown) [self startRecord];
        _promptView.image = [UIImage imageNamed:@"recording0"];
        _promptLabel.text = @"上滑取消录音"; // "Slide up to cancel"
    } else if (event == UIControlEventTouchDragExit) {
        // Finger dragged off the button: releasing now will cancel
        _promptView.image = [UIImage imageNamed:@"cancel"];
        _promptLabel.text = @"松开取消录音"; // "Release to cancel"
    } else if (event == UIControlEventTouchUpInside) {
        // Finger lifted inside the button: finish the recording
        NSTimeInterval duration = _audioRecorder.currentTime;
        [self.audioRecorder stop];
        if (duration < 0.2) {
            // Too short: discard it and fade the prompt out
            _promptView.image = [UIImage imageNamed:@"warning"];
            _promptLabel.text = @"说话时间太短"; // "Speech too short"
            [_audioRecorder deleteRecording];
            [UIView animateWithDuration:0.3 delay:0.3 options:kNilOptions animations:^{
                self.bgView.alpha = 0;
            } completion:^(BOOL finished){
                self.bgView.hidden = YES;
                self.bgView.alpha = 1;
            }];
        } else {
            // Recording succeeded: add a VoiceView for the new audio file
            self.bgView.hidden = YES;
            VoiceView *voiceView = [VoiceView voiceViewWithURL:_recordURL];
            voiceView.delegate = self;
            if (CGPointEqualToPoint(_addViewPoint, CGPointZero)) {
                voiceView.centerX = _remarkAndVoiceView.width * 0.5;
                voiceView.centerY = _remarkAndVoiceView.height * 0.5;
            } else {
                voiceView.x = _addViewPoint.x;
                voiceView.y = _addViewPoint.y;
            }
            [self.remarkAndVoiceView addSubview:voiceView];
            [self layoutView:voiceView];
        }
        [_timer invalidate];
    } else {
        // Released outside the button (or cancelled): discard the recording
        [self.audioRecorder stop];
        self.bgView.hidden = YES;
        [_audioRecorder deleteRecording];
        [_timer invalidate];
    }
}
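A standard target-action callback doesn't receive a UIControlEvents parameter, so RecordButton presumably forwards its own control events to the handler above. One way to do that (the eventHandler block is my assumption, not necessarily how the repo does it):

// RecordButton.h: a sketch that forwards each control event to a single block
@interface RecordButton : UIButton
@property (nonatomic, copy) void (^eventHandler)(RecordButton *sender, UIControlEvents event);
@end

// RecordButton.m
@implementation RecordButton
- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // Register one action per event we care about
        [self addTarget:self action:@selector(touchDown)     forControlEvents:UIControlEventTouchDown];
        [self addTarget:self action:@selector(dragEnter)     forControlEvents:UIControlEventTouchDragEnter];
        [self addTarget:self action:@selector(dragExit)      forControlEvents:UIControlEventTouchDragExit];
        [self addTarget:self action:@selector(touchUpInside) forControlEvents:UIControlEventTouchUpInside];
        [self addTarget:self action:@selector(touchCancel)   forControlEvents:UIControlEventTouchUpOutside | UIControlEventTouchCancel];
    }
    return self;
}
- (void)forwardEvent:(UIControlEvents)event {
    if (self.eventHandler) self.eventHandler(self, event);
}
- (void)touchDown      { [self forwardEvent:UIControlEventTouchDown]; }
- (void)dragEnter      { [self forwardEvent:UIControlEventTouchDragEnter]; }
- (void)dragExit       { [self forwardEvent:UIControlEventTouchDragExit]; }
- (void)touchUpInside  { [self forwardEvent:UIControlEventTouchUpInside]; }
- (void)touchCancel    { [self forwardEvent:UIControlEventTouchUpOutside]; }
@end

// Wiring it up in the controller:
__weak typeof(self) weakSelf = self;
recordButton.eventHandler = ^(RecordButton *sender, UIControlEvents event) {
    [weakSelf didTouchRecordButon:sender event:event];
};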

The recording setup looks like this:

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *sessionError;
// AVAudioSessionCategoryPlayAndRecord allows both recording and playback
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&sessionError];
if (sessionError) {
    NSLog(@"Error creating session: %@", [sessionError description]);
} else {
    [session setActive:YES error:nil];
}


// Linear PCM (.wav), 8 kHz, mono, 16-bit: small files, good enough for voice notes
NSDictionary *settings = @{AVFormatIDKey: @(kAudioFormatLinearPCM),
                           AVSampleRateKey: @8000.00f,
                           AVNumberOfChannelsKey: @1,
                           AVLinearPCMBitDepthKey: @16,
                           AVLinearPCMIsNonInterleaved: @NO,
                           AVLinearPCMIsFloatKey: @NO,
                           AVLinearPCMIsBigEndianKey: @NO};

// Write to a uniquely named .wav file in the Caches directory
NSString *fileName = [NSString stringWithFormat:@"%f.wav", [NSDate date].timeIntervalSince1970];
NSString *filePath = [[NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:fileName];
_recordURL = [NSURL fileURLWithPath:filePath];
// Create the recorder with the target URL and the settings above
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:_recordURL settings:settings error:nil];
_audioRecorder.meteringEnabled = YES; // needed for the volume metering below
[_audioRecorder prepareToRecord];
[_audioRecorder record];
// Poll the meters roughly 30 times a second to update the volume image
_timer = [NSTimer scheduledTimerWithTimeInterval:0.03 target:self selector:@selector(detectionVoice) userInfo:nil repeats:YES];
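One thing the snippet doesn't cover is microphone permission: since iOS 7 the system can deny microphone access, so it's worth requesting it explicitly before the first recording, for example:

// Ask for microphone permission up front (iOS 7+); recording silently captures
// nothing if the user has denied access
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (!granted) {
        NSLog(@"Microphone access denied; voice notes will not work.");
    }
}];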

The timer's detectionVoice callback below checks the volume and swaps the prompt image accordingly:

[_audioRecorder updateMeters]; // refresh the metering data
// Convert average power (in dB, -160 to 0) to a rough 0-to-1 linear level
double result = pow(10, (0.05 * [_audioRecorder averagePowerForChannel:0]));
if (!_isDragExit) {
    if (result < 0.01) {
        _promptView.image = [UIImage imageNamed:@"recording0"];
    } else if (result < 0.1) {
        _promptView.image = [UIImage imageNamed:@"recording1"];
    } else if (result < 0.3){
        _promptView.image = [UIImage imageNamed:@"recording2"];
    } else {
        _promptView.image = [UIImage imageNamed:@"recording3"];
    }
}

While a voice note is playing, a circular progress indicator is shown; it can be drawn with a CAShapeLayer. The snippet below presumably lives in the playProgress setter, since it uses both the parameter and the ivar:

self.shapeLayer.hidden = playProgress == 0;
CGFloat radius = self.bounds.size.width * 0.5;
CGPoint center = CGPointMake(radius, radius);
// Start at 12 o'clock and sweep clockwise in proportion to the progress
CGFloat endAngle = -M_PI_2 + _playProgress * 2 * M_PI;

self.shapeLayer.path = [UIBezierPath bezierPathWithArcCenter:center radius:radius - 3.5 startAngle:-M_PI_2 endAngle:endAngle clockwise:YES].CGPath;
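The post doesn't show how _playProgress is driven; one way is to update it from the audio player while it plays, e.g. with a CADisplayLink (a sketch: self.player, an AVAudioPlayer, and the property names are assumptions):

// Refresh the progress ring while the audio plays (a sketch)
- (void)updateProgress:(CADisplayLink *)link {
    if (self.player.duration > 0) {
        self.playProgress = self.player.currentTime / self.player.duration;
    }
    if (!self.player.isPlaying) {
        [link invalidate];      // stop refreshing when playback ends
        self.playProgress = 0;
    }
}

// Kick off the refresh when playback starts:
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(updateProgress:)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];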

Alright, I'm too lazy to write any more; just look at it running.

[GIF: the feature in action]

The source code is here; if you found this helpful, please take a second to give it a star.
