Visualizer lets an application capture a portion of the currently playing audio for visualization. It is not a recording interface and only returns partial, low-quality audio content. Even so, to protect the privacy of certain audio data, using Visualizer requires the android.permission.RECORD_AUDIO permission. The audio session ID passed to the constructor indicates which audio content is visualized:
- If the session is 0, the output mix of all audio is visualized
- If the session is not 0, the audio from the specific android.media.MediaPlayer or android.media.AudioTrack using that audio session is visualized
Two kinds of representations of the audio content can be captured:
- Waveform data: consecutive 8-bit (unsigned) mono samples, via the getWaveForm(byte[]) method
- Frequency data: an 8-bit magnitude FFT, via the getFft(byte[]) method
The capture length can be retrieved or set by calling getCaptureSize() and setCaptureSize(int) respectively. The capture size must be a power of two within the range returned by getCaptureSizeRange().
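To make the power-of-two constraint concrete, here is a small plain-Kotlin sketch. The function name and the sample range are my own; on a device the min/max would come from Visualizer.getCaptureSizeRange():

```kotlin
// Pick the largest capture size the constraint allows: a power of two
// inside [min, max]. On a device min/max would come from
// Visualizer.getCaptureSizeRange(); this helper is a plain-Kotlin sketch.
fun largestValidCaptureSize(min: Int, max: Int): Int {
    val size = Integer.highestOneBit(max) // largest power of two <= max
    require(size >= min) { "no power of two in [$min, $max]" }
    return size
}
```

With a typical range like [128, 1024] this picks 1024, which is what the code below does by taking Visualizer.getCaptureSizeRange()[1] directly (the maximum is normally itself a power of two).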
1. Requesting the permission
Add the following to the manifest:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
On Android 6.0 (API 23) and later, RECORD_AUDIO must also be requested at runtime:

val permissions = arrayOf(Manifest.permission.RECORD_AUDIO)
val audioPermission = ActivityCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
if (audioPermission != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(
        this,
        permissions,
        PERMISSION_REQUEST_CODE // your request code
    )
}
2. Initializing the Visualizer
First initialize the MediaPlayer; this is done first in order to obtain its audioSessionId:

MusicPlayHelper.init(application, object : IMusicPlayListener {
    override fun loadMusicFinish(boolean: Boolean, position: Int) {
        MusicPlayHelper.play()
    }
})
Then initialize the Visualizer. musicId below is the audioSessionId obtained above; passing 0 visualizes the global output mix, but that may fail with an error.

if (mVisualizer == null) {
    mVisualizer = Visualizer(musicId)
}
mVisualizer?.enabled = false
mVisualizer?.captureSize = Visualizer.getCaptureSizeRange()[1]
mVisualizer?.setDataCaptureListener(
    captureListener, // defined in the full VisualizerView code below
    Visualizer.getMaxCaptureRate() / 2,
    true,
    true
)
// Enable the Visualizer; disable it when done with the stream
mVisualizer?.enabled = true
setDataCaptureListener registers the callback that receives the sampled data. Its parameters are:
listener: the callback object
rate: the capture rate in milli-hertz, in the range 0 to Visualizer.getMaxCaptureRate(); here it is set to half the maximum
waveform: whether to capture waveform data
fft: whether to capture the fast-Fourier-transformed (frequency) data
The two callback methods in OnDataCaptureListener are:
onWaveFormDataCapture: the waveform data callback
onFftDataCapture: the FFT data callback, i.e. the frequency-data callback
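Note that the byte array delivered to onFftDataCapture is not a flat list of magnitudes: per the getFft documentation, index 0 holds the real part of the DC component, index 1 the real part at the Nyquist frequency, and the remaining bytes are (real, imaginary) pairs for the intermediate bins. A pure-Kotlin sketch of turning it into per-bin magnitudes (the helper name is my own; the EnergyBlockRenderer below does essentially the same thing):

```kotlin
import kotlin.math.abs
import kotlin.math.hypot

// Convert the byte layout delivered by getFft / onFftDataCapture into one
// magnitude per frequency bin: index 0 is the DC real part, index 1 the
// Nyquist real part, and the rest are (real, imaginary) pairs.
fun fftMagnitudes(fft: ByteArray): FloatArray {
    val n = fft.size
    val magnitudes = FloatArray(n / 2 + 1)
    magnitudes[0] = abs(fft[0].toInt()).toFloat()     // DC component
    magnitudes[n / 2] = abs(fft[1].toInt()).toFloat() // Nyquist component
    for (k in 1 until n / 2) {
        val re = fft[2 * k].toFloat()
        val im = fft[2 * k + 1].toFloat()
        magnitudes[k] = hypot(re, im)
    }
    return magnitudes
}
```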
3. Enabling and disabling data capture
When enabled is true the Visualizer delivers data normally; setting it to false stops capture.

fun setVisualizerEnable(flag: Boolean) {
    mVisualizer?.enabled = flag
}
4. Releasing the Visualizer
When you are done with it, call release() to free the underlying resources:

fun release() {
    mVisualizer?.enabled = false
    mVisualizer?.release()
    mVisualizer = null
}
Complete VisualizerView code
package com.example.knowledgemanagement.visualizer

import android.content.Context
import android.graphics.Canvas
import android.graphics.Rect
import android.media.audiofx.Visualizer
import android.media.audiofx.Visualizer.OnDataCaptureListener
import android.util.AttributeSet
import android.view.View
import com.xing.commonlibrary.log.LogUtils

class VisualizerView @JvmOverloads constructor(
    context: Context?,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    private val TAG = "VisualizerView"
    private var mBytes: ByteArray? = null
    private var mFFTBytes: ByteArray? = null
    private val mRect = Rect()
    private var mVisualizer: Visualizer? = null
    private var mRenderers: MutableSet<Renderer> = HashSet()
    var left1 = 0
    var top1 = 0
    var right1 = 0
    var bottom1 = 0
    private var mCanvas: Canvas? = null
    private var mAudioSamplingTime: Long = 0
    private var mFftSamplingTime: Long = 0
    private val mSamplingTime = 100 // data sampling interval in ms
    private var isLink = false

    init {
        setLayerType(LAYER_TYPE_SOFTWARE, null) // disable hardware acceleration
        init()
    }

    private fun init() {
        mBytes = null
        mFFTBytes = null
    }

    override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
        super.onLayout(changed, left, top, right, bottom)
        left1 = left
        top1 = top
        right1 = right
        bottom1 = bottom
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        mCanvas = canvas
        mRect.set(0, 0, width, height)
        mBytes?.let {
            // Render all audio renderers
            for (r in mRenderers) {
                r.audioRender(canvas, it, mRect)
            }
        }
        mFFTBytes?.let {
            // Render all FFT renderers
            for (r in mRenderers) {
                r.fftRender(canvas, it, mRect)
            }
        }
    }

    fun link(musicId: Int) {
        try {
            // Release any previous instance before reusing the view
            if (mVisualizer != null) {
                release()
            }
            if (mVisualizer == null && !isLink) {
                mVisualizer = Visualizer(musicId)
                isLink = true
            }
            LogUtils.e(TAG, "nsc =" + mVisualizer?.enabled)
            mVisualizer?.enabled = false
            mVisualizer?.captureSize = Visualizer.getCaptureSizeRange()[1]
            // Pass through Visualizer data to VisualizerView
            val captureListener: OnDataCaptureListener = object : OnDataCaptureListener {
                override fun onWaveFormDataCapture(
                    visualizer: Visualizer, bytes: ByteArray, samplingRate: Int
                ) {
                    val currentTimeMillis = System.currentTimeMillis()
                    LogUtils.i(TAG, "onWaveFormDataCapture")
                    if (currentTimeMillis - mAudioSamplingTime >= mSamplingTime) {
                        mBytes = bytes
                        invalidate()
                        mAudioSamplingTime = currentTimeMillis
                    }
                }

                override fun onFftDataCapture(
                    visualizer: Visualizer, bytes: ByteArray, samplingRate: Int
                ) {
                    LogUtils.i(TAG, "onFftDataCapture")
                    val currentTimeMillis = System.currentTimeMillis()
                    if (currentTimeMillis - mFftSamplingTime >= mSamplingTime) {
                        mFFTBytes = bytes
                        invalidate()
                        mFftSamplingTime = currentTimeMillis
                    }
                }
            }
            mVisualizer?.setDataCaptureListener(
                captureListener,
                Visualizer.getMaxCaptureRate() / 2,
                true,
                true
            )
            // Enable the Visualizer; disable it when done with the stream
            mVisualizer?.enabled = true
        } catch (e: RuntimeException) {
            // Visualizer construction can fail (e.g. initCheck -3); swallowed here
        }
    }

    fun setVisualizerEnable(flag: Boolean) {
        mVisualizer?.enabled = flag
    }

    fun release() {
        mVisualizer?.enabled = false
        mVisualizer?.release()
        mVisualizer = null
        isLink = false
    }

    fun addRenderer(renderer: Renderer?) {
        if (renderer != null) {
            mRenderers.add(renderer)
        }
    }

    fun clearRenderers() {
        mRenderers.clear()
    }
}
5. A simple bar display
The implementation is straightforward: draw the captured data with canvas.drawLines.
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.Rect

class ColumnarRenderer(private val mPaint: Paint) : Renderer() {
    private val mSpectrumNum = 96

    override fun onAudioRender(canvas: Canvas, data: ByteArray, rect: Rect) {}

    override fun onFftRender(canvas: Canvas, data: ByteArray, rect: Rect) {
        val baseX = rect.width() / mSpectrumNum
        val height = rect.height()
        for (i in 0 until mSpectrumNum) {
            val magnitude = (baseX * i + baseX / 2).toFloat()
            mFFTPoints?.let {
                it[i * 4] = magnitude
                it[i * 4 + 1] = (height / 2).toFloat()
                it[i * 4 + 2] = magnitude
                it[i * 4 + 3] = (height / 2 - data[i] * 4).toFloat()
            }
        }
        mFFTPoints?.let { canvas.drawLines(it, mPaint) }
    }
}
Then add the renderer to the visualizerView:
visualizerView.addRenderer(columnarRenderer)
6. Bouncing energy blocks
The code is commented in detail.
package com.example.knowledgemanagement.visualizer

import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Rect
import java.util.Random
import kotlin.math.abs
import kotlin.math.hypot

class EnergyBlockRenderer(private val mPaint: Paint) : Renderer() {

    companion object {
        private const val TAG = "EnergyBlockRenderer"
        private const val MAX_LEVEL = 30    // max number of blocks per column
        private const val CYLINDER_NUM = 26 // max number of columns
        private const val DN_W = 470 // reference view width (normally 480, tweaked here)
        private const val DN_H = 300 // reference view height
        private const val DN_SL = 10 // width of a single block
        private const val DN_SW = 2  // height of a single block
    }

    private var mData = ByteArray(CYLINDER_NUM) // current level of each column
    private var hGap = 0
    private var vGap = 0
    private var levelStep = 0
    private var strokeWidth = 0f
    private var strokeLength = 0f
    var mDataEn = true

    init {
        levelStep = 230 / MAX_LEVEL
    }

    fun onLayout(left: Int, top: Int, right: Int, bottom: Int) {
        val w: Float = (right - left).toFloat()
        val h: Float = (bottom - top).toFloat()
        val xr: Float = w / DN_W.toFloat()
        val yr: Float = h / DN_H.toFloat()
        strokeWidth = DN_SW * yr
        strokeLength = DN_SL * xr
        hGap = ((w - strokeLength * CYLINDER_NUM) / (CYLINDER_NUM + 1)).toInt()
        vGap = (h / (MAX_LEVEL + 2)).toInt() // block height
        mPaint.strokeWidth = strokeWidth     // block width
    }

    // Draw one column of blocks plus its reflection
    private fun drawCylinder(canvas: Canvas, x: Float, value: Byte, rect: Rect) {
        var value = value
        if (value.toInt() == 0) {
            value = 1 // draw at least one block
        }
        for (i in 0 until value) { // draw `value` blocks per column
            val y = (rect.height() / 2 - i * vGap / 2 - vGap).toFloat()  // y of the block
            val y1 = (rect.height() / 2 + i * vGap / 2 + vGap).toFloat() // y of its reflection
            // Draw the block
            mPaint.color = color
            canvas.drawLine(x, y, x + strokeLength, y, mPaint)
            // Draw the reflection for the bottom few blocks
            if (i <= 6 && value > 0) {
                mPaint.color = Color.WHITE
                mPaint.alpha = 100 - 100 / 6 * i // fade the reflection out
                canvas.drawLine(x, y1, x + strokeLength, y1, mPaint)
            }
        }
    }

    private val color: Int
        get() {
            val ranColor = intArrayOf(
                Color.RED, Color.YELLOW, Color.MAGENTA, Color.BLUE, Color.GREEN,
                Color.GRAY, Color.CYAN, Color.LTGRAY, Color.TRANSPARENT
            )
            val random = Random()
            val value = random.nextInt(ranColor.size - 1)
            return ranColor[value]
        }

    override fun onAudioRender(canvas: Canvas, data: ByteArray, rect: Rect) {}

    override fun onFftRender(canvas: Canvas, data: ByteArray, rect: Rect) {
        val model = ByteArray(data.size / 2 + 1)
        if (mDataEn) {
            model[0] = abs(data[1].toInt()).toByte()
            var j = 1
            var i = 2
            while (i < data.size) {
                model[j] = hypot(data[i].toDouble(), data[i + 1].toDouble()).toInt().toByte()
                i += 2
                j++
            }
        } else {
            for (i in 0 until CYLINDER_NUM) {
                model[i] = 0
            }
        }
        for (i in 0 until CYLINDER_NUM) {
            val a = (abs(model[CYLINDER_NUM - i].toInt()) / levelStep).toByte()
            val b = mData[i]
            if (a > b) {
                mData[i] = a
            } else {
                if (b > 0) {
                    mData[i]--
                }
            }
        }
        var j = -4
        for (i in 0 until CYLINDER_NUM / 2 - 4) {
            drawCylinder(canvas, strokeWidth / 2 + hGap + i * (hGap + strokeLength), mData[i], rect)
        }
        for (i in CYLINDER_NUM downTo CYLINDER_NUM / 2 - 4) {
            j++
            drawCylinder(
                canvas,
                strokeWidth / 2 + hGap + (CYLINDER_NUM / 2 + j - 1) * (hGap + strokeLength),
                mData[i - 1],
                rect
            )
        }
    }
}
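The bouncy motion of the columns comes from the loop that updates mData: a new level that is higher replaces the stored one immediately, while a lower one only decrements the stored level by one per frame. Isolated as a pure function (the function name is my own, not part of the class above):

```kotlin
// Peak-hold with slow decay: a louder incoming level replaces the stored
// one immediately, while a quieter one only lowers it by one per frame.
// Plain-Kotlin sketch mirroring the mData update loop in EnergyBlockRenderer.
fun decayStep(current: Byte, incoming: Byte): Byte = when {
    incoming > current -> incoming
    current > 0 -> (current - 1).toByte()
    else -> current
}
```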
Renderer code
package com.example.knowledgemanagement.visualizer

import android.graphics.Canvas
import android.graphics.Rect

abstract class Renderer {
    // Kept as members so we don't re-create them on every frame
    var mPoints: FloatArray? = null
    var mFFTPoints: FloatArray? = null
    var isPlaying = true

    // As the display of raw/FFT audio will usually look different, subclasses
    // will typically only implement one of the methods below.

    /**
     * Implement this method to render the audio data onto the canvas.
     *
     * @param canvas Canvas to draw on
     * @param data   Data to render
     * @param rect   Rect to render into
     */
    abstract fun onAudioRender(canvas: Canvas, data: ByteArray, rect: Rect)

    /**
     * Implement this method to render the FFT audio data onto the canvas.
     *
     * @param canvas Canvas to draw on
     * @param data   Data to render
     * @param rect   Rect to render into
     */
    abstract fun onFftRender(canvas: Canvas, data: ByteArray, rect: Rect)

    // These are the methods callers should actually invoke for rendering.

    /**
     * Render the audio data onto the canvas.
     */
    fun audioRender(canvas: Canvas, data: ByteArray, rect: Rect) {
        if (mPoints == null || mPoints!!.size < data.size * 4) {
            mPoints = FloatArray(data.size * 4)
        }
        onAudioRender(canvas, data, rect)
    }

    /**
     * Render the FFT data onto the canvas.
     */
    fun fftRender(canvas: Canvas, data: ByteArray, rect: Rect) {
        if (mFFTPoints == null || mFFTPoints!!.size < data.size * 4) {
            mFFTPoints = FloatArray(data.size * 4)
        }
        onFftRender(canvas, data, rect)
    }
}
7. Error causes and fixes
The error below is caused by passing 0 instead of the music stream's actual session ID:
The Visualizer initCheck failed -3 error typically occurs due to missing
permissions, invalid audio session IDs, hardware limitations, or timing issues.
By addressing these potential causes, you should be able to resolve the issue and
successfully initialize the Visualizer in your Android application.
For a more detailed description, see the Google documentation: Visualizer | Android Developers
Code download: https://download.csdn.net/download/u011324501/90038203