
[Excerpt] Audio/Video Learning Series, Part 2: Audio Capture and Playback — AudioRecord


Differences between AudioRecord and MediaRecorder

The former captures raw audio data; the latter encodes and compresses the audio and stores it as a file.

Using AudioRecord

Constructor parameters
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes)
  1. audioSource

The audio input source. Valid values are defined as constants in MediaRecorder.AudioSource, e.g.

public static final int MIC = 1;   // the phone's microphone
  2. sampleRateInHz

The sample rate: how many times per second the recording device samples the sound signal, in Hz. Currently 44100 Hz is the only sample rate guaranteed to be supported on all Android phones.

Background

Hz measures how many times something changes periodically per second. Human hearing covers roughly 20 Hz to 20 kHz, so to reconstruct sound without distortion the sample rate should be above 40 kHz (the Nyquist criterion).

  3. channelConfig

The channel configuration. Valid values are defined as constants in AudioFormat; the common ones are

public static final int CHANNEL_IN_LEFT = 0x4;
public static final int CHANNEL_IN_RIGHT = 0x8;
public static final int CHANNEL_IN_FRONT = 0x10;
// mono
public static final int CHANNEL_IN_MONO = CHANNEL_IN_FRONT;
// stereo
public static final int CHANNEL_IN_STEREO = (CHANNEL_IN_LEFT | CHANNEL_IN_RIGHT);
  4. audioFormat

Configures the sample bit width. Valid values are defined as constants in AudioFormat; the common ones are

public static final int ENCODING_PCM_16BIT = 2;
public static final int ENCODING_PCM_8BIT = 3;

Background

PCM (pulse-code modulation) converts a continuously varying analog signal into digital form through three steps: sampling, quantization, and encoding.
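To make the encoding step concrete, the plain-Java sketch below (class and method names are my own, not from the original) decodes 16-bit little-endian PCM bytes — the layout AudioRecord delivers for ENCODING_PCM_16BIT — back into signed samples:

```java
public class PcmDecode {

    // Each 16-bit sample occupies two bytes, low byte first (little-endian).
    static short[] toSamples(byte[] pcm) {
        short[] samples = new short[pcm.length / 2];
        for (int i = 0; i < samples.length; i++) {
            int lo = pcm[2 * i] & 0xFF;      // low byte, treated as unsigned
            int hi = pcm[2 * i + 1];         // high byte carries the sign
            samples[i] = (short) ((hi << 8) | lo);
        }
        return samples;
    }

    public static void main(String[] args) {
        byte[] pcm = {0x01, 0x00, (byte) 0xFF, (byte) 0xFF};   // 0x0001, 0xFFFF
        short[] s = toSamples(pcm);
        System.out.println(s[0] + " " + s[1]);   // prints "1 -1"
    }
}
```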

  5. bufferSizeInBytes

Configures the size of AudioRecord's internal audio buffer. It must be no smaller than one audio frame, whose size is computed as follows

size (bytes) = sample rate (Hz) × frame duration (s) × bit width (bytes) × channel count

The frame duration is typically between 2.5 ms and 120 ms; the exact value is chosen by the device vendor or the application.

The shorter each frame, the lower the capture latency, but the more fragmented the data becomes.
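The formula above is easy to check in plain Java (the helper name is my own; note the bit width must be converted from bits to bytes):

```java
public class FrameSize {

    // bytes per frame = sampleRate (Hz) * duration (s) * bytes per sample * channels
    static int frameSizeBytes(int sampleRate, double frameSeconds,
                              int bitsPerSample, int channels) {
        return (int) (sampleRate * frameSeconds) * (bitsPerSample / 8) * channels;
    }

    public static void main(String[] args) {
        // 20 ms of 44.1 kHz / 16-bit / stereo audio:
        System.out.println(frameSizeBytes(44100, 0.02, 16, 2));   // prints 3528
    }
}
```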

In Android development, you should use the AudioRecord method

static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat)

to compute the audio buffer size.

Capture methods

audioRecord.startRecording();   // start capturing
audioRecord.stop();             // stop capturing
audioRecord.read(bytes, 0, bytes.length);   // read captured data

Sample code

Recording requires the RECORD_AUDIO permission, declared in AndroidManifest.xml:

<uses-permission android:name="android.permission.RECORD_AUDIO"/>

This is a dangerous permission, so it must also be requested dynamically at runtime.
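The runtime request itself is not shown in the original excerpt; a minimal sketch, assuming an Activity and the AndroidX ActivityCompat/ContextCompat helpers (the request code is arbitrary), might look like:

```java
// Sketch only: check and request RECORD_AUDIO at runtime.
private static final int REQ_RECORD_AUDIO = 1;   // arbitrary request code

void ensureAudioPermission(Activity activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(activity,
                new String[]{Manifest.permission.RECORD_AUDIO}, REQ_RECORD_AUDIO);
    }
    // The result arrives in the Activity's onRequestPermissionsResult callback.
}
```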

public class AudioCapture {

    private static final String TAG = "AudioCapture";

    private final int DEFAULT_SOURCE = MediaRecorder.AudioSource.MIC;    // microphone
    private final int DEFAULT_RATE = 44100;                              // sample rate
    private final int DEFAULT_CHANNEL = AudioFormat.CHANNEL_IN_STEREO;   // stereo (left + right)
    private final int DEFAULT_FORMAT = AudioFormat.ENCODING_PCM_16BIT;   // 16-bit samples

    private AudioRecord mAudioRecord;
    private int mMinBufferSize;
    private onAudioFrameCaptureListener mOnAudioFrameCaptureListener;

    // volatile so the capture thread sees the stop flag promptly
    private volatile boolean isRecording = false;

    public void startRecord() {
        startRecord(DEFAULT_SOURCE, DEFAULT_RATE, DEFAULT_CHANNEL, DEFAULT_FORMAT);
    }

    public void startRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat) {
        mMinBufferSize = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
        if (mMinBufferSize == AudioRecord.ERROR_BAD_VALUE) {
            Log.d(TAG, "Invalid parameter");
            return;
        }

        mAudioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig,
                audioFormat, mMinBufferSize);
        if (mAudioRecord.getState() == AudioRecord.STATE_UNINITIALIZED) {
            Log.d(TAG, "AudioRecord initialize fail");
            return;
        }

        mAudioRecord.startRecording();
        isRecording = true;
        new CaptureThread().start();
        Log.d(TAG, "AudioRecord Start");
    }

    public void stopRecord() {
        isRecording = false;
        if (mAudioRecord != null) {
            if (mAudioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                mAudioRecord.stop();
            }
            mAudioRecord.release();
            mAudioRecord = null;
        }
        mOnAudioFrameCaptureListener = null;
        Log.d(TAG, "AudioRecord Stop");
    }

    private class CaptureThread extends Thread {

        @Override
        public void run() {
            while (isRecording) {
                byte[] buffer = new byte[mMinBufferSize];
                int result = mAudioRecord.read(buffer, 0, buffer.length);
                Log.d(TAG, "Captured " + result + " bytes");
                if (result > 0 && mOnAudioFrameCaptureListener != null) {
                    mOnAudioFrameCaptureListener.onAudioFrameCapture(buffer);
                }
            }
        }
    }

    public interface onAudioFrameCaptureListener {
        void onAudioFrameCapture(byte[] audioData);
    }

    public void setOnAudioFrameCaptureListener(onAudioFrameCaptureListener listener) {
        mOnAudioFrameCaptureListener = listener;
    }
}

Usage

AudioCapture audioCapture = new AudioCapture();
audioCapture.startRecord();
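One way to consume the captured frames (a sketch building on the AudioCapture class above; the output path and file handling are illustrative, not part of the original) is to dump each frame to a raw PCM file:

```java
// Sketch: write captured PCM frames to a raw file. With the defaults above,
// the file can be imported elsewhere as 44.1 kHz / 16-bit / stereo PCM.
final FileOutputStream out = new FileOutputStream(
        new File(getExternalFilesDir(null), "capture.pcm"));   // illustrative path

AudioCapture audioCapture = new AudioCapture();
audioCapture.setOnAudioFrameCaptureListener(new AudioCapture.onAudioFrameCaptureListener() {
    @Override
    public void onAudioFrameCapture(byte[] audioData) {
        try {
            out.write(audioData);   // runs on the capture thread
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
audioCapture.startRecord();

// ... later, when done:
audioCapture.stopRecord();
out.close();
```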

Source

1. 《音视频学习系列第(二)篇---音频采集和播放》
