Building an Audio Visualizer With JavaScript.

 

Creating visuals to go along with music is one of the oldest pastimes. You will find many videos on YouTube with some pretty neat designs playing in parallel with the music. Your operating system usually has a built-in audio visualizer too, although it's relatively limited. But the web as a whole seems to be lacking a nice selection of visualizers, likely because the Canvas API and Web Audio API are still relatively new.

Audio Visualizer Libraries

If you're looking for a pre-built library to visualize audio, I would suggest Wave.js for dynamic visuals that respond to an audio HTML element or media stream. It works in browser environments and has an npm package for React-style setups. For static visuals you can use Wavesurfer.js, which has been around for a while but lacks an npm package.

Building Your Own Visualizer

To build your own visualizer with JavaScript, there are only a few basic components you need to get the flow working.

  • The Canvas API
  • The Web Audio API
  • requestAnimationFrame

With these tools you can build just about any 2D or 3D visual.

Step 1:

We need the Canvas API so we have a place to display our visuals. You could just make some div elements and change their heights, or something of that nature, but doing that is both very slow and very limited. So the first thing to do is add a canvas element to your page.

<canvas id="audio_visual"></canvas>

Then in your JavaScript you need to grab the canvas element.

let canvas = document.getElementById("audio_visual");

After you have a reference to the element, you need to make a context variable based on a 2d or WebGL (3D) context. It's much easier to work with 2d, so that's what this tutorial will do.

let ctx = canvas.getContext("2d");

The context is the really important piece here. It's what we will use to draw shapes onto the canvas.

As a side note, the size of a canvas element behaves a bit differently than other elements. The default height is 150px and the width is 300px, which is also the size of the internal grid the canvas creates. If you change these values in CSS, you won't change the size of the canvas grid; you will only stretch or shrink the canvas element's view. To change the grid size, you have to set the height and width attributes on the canvas element itself.

<canvas id="audio_visual" height="500" width="500"></canvas>

Doing this lets the canvas know to resize its internal grid. The grid's top-left corner is [0,0] and the bottom-right corner is [width,height]; it's not like a standard grid you see in math class. So if you supply a negative value, it will be plotted outside the view of the grid, and if you give a larger positive value for y, your point will go farther down, not up. This is something you should be aware of.
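
As a quick illustration, here is a small sketch reusing the canvas and ctx variables from above; notice that to sit something on the bottom edge you count down from canvas.height rather than up from 0.

ctx.fillRect(0, 0, 20, 20); //square in the top-left corner, since [0,0] is the top left
ctx.fillRect(0, canvas.height - 20, 20, 20); //square sitting on the bottom edge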

Step 2:

Next we need an audio element so we can get some music to analyze.

<audio id="source" src="../my/audio/file.mp3"></audio>

An audio element can take a src that is either a local file or a remote URL to an audio file. It can also play back a live audio stream, such as from your microphone. For that, you set the srcObject property on the audio element in your JavaScript code to a stream object, which you usually get from navigator.mediaDevices.getUserMedia.
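
For example, a minimal sketch of feeding a microphone stream into the same element might look like this (this path skips the src attribute entirely):

//ask the browser for microphone input, then play it through the audio element
navigator.mediaDevices.getUserMedia({ audio: true })
    .then((stream) => {
        document.getElementById("source").srcObject = stream; //the live stream takes the place of the src file
    })
    .catch((err) => console.error("Could not get microphone:", err));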

After you have an audio element in your HTML, you need to grab it in your JavaScript.

let audioElement = document.getElementById("source");

Side-note: getElementById has a slightly faster lookup time than querySelector because it doesn't have to parse the argument.

Now that we have all the pieces, we can start connecting them.

We need to create a new AudioContext, which will help us make the other useful audio nodes.

let audioCtx = new AudioContext();

If you're working with Safari, you need to use webkitAudioContext instead of AudioContext, because Safari is the only major browser that hasn't dropped the prefix yet.
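
One common way to handle both is to fall back to the prefixed constructor when the standard one is missing, for example:

//use the standard constructor where available, otherwise Safari's prefixed one
let AudioContextClass = window.AudioContext || window.webkitAudioContext;
let audioCtx = new AudioContextClass();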

Next we need to make an analyser node, which is very important because it's the piece that will give us the frequency data we will use to make our visuals.

let analyser = audioCtx.createAnalyser();

It's useful to set the analyser's fftSize after making it. This controls how large the array of data it gives back to us will be: the analyser takes the number you give it and divides it by 2. Also note that you have to give it a value that is a power of 2, such as 2 ** 11 = 2048, so the array size it gives back is 1024.

I have found that this is the max size it will give back to you, although the documentation may say otherwise.

analyser.fftSize = 2048;
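
The analyser exposes that half-size value directly as frequencyBinCount, so you can sanity check the relationship:

console.log(analyser.frequencyBinCount); //1024, always half of fftSize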

The last node we need is our source node. The analyser node can't work on a DOM element directly, so we convert the audio element into a node with createMediaElementSource.

let source = audioCtx.createMediaElementSource(audioElement);

Side-note: You can only create a media element source from a given audio element like this once per page load.

Now we need to connect the nodes together so they can read each other's data. We also connect the analyser to the context's destination so the audio keeps playing through the speakers.

source.connect(analyser);
analyser.connect(audioCtx.destination); //without this, audio routed through the graph would be silent

Step 3:

Once all the nodes are created and connected, we need to create an array to store our data. The Web Audio API is very particular about this.

The array needs to be an unsigned byte array, meaning it holds no negative numbers, with a length equal to the fftSize we set earlier divided by 2.

let data = new Uint8Array(analyser.frequencyBinCount);

This is how that is normally done.

Step 4:

Everything is now ready for us to start our render loop. Many times per second, we want to update our canvas with the new data from our audio element so we can draw a different visual. You could do this in a setInterval, but there is a much better method.

requestAnimationFrame is a global function that takes a callback function as an argument. It calls that function once before the next repaint, which usually works out to about 60 times a second. This is useful so there aren't any weird lags or artifacts drawn to the screen.

requestAnimationFrame(loopingFunction);

Our looping function should do a few things.

First we need to call requestAnimationFrame again inside our function. requestAnimationFrame only calls your callback function once, so in order to loop, we have to schedule the next frame from inside the callback itself.

requestAnimationFrame(loopingFunction);

Then we need to populate our data array with the sounds from our audio. This is done by calling getByteFrequencyData on our analyser node and passing it an array to put the data in.

analyser.getByteFrequencyData(data); //passing our Uint data array

Side-note: The data array is passed by reference and its values are changed in place; a copy is not being made.

Now our array has 1024 values in it, each representing a slice of the frequency spectrum: index i covers the band of frequencies around i * sampleRate / fftSize Hz, and the value stored there is that band's volume. The volume will always be a value from 0–255. This will be important when we make our visuals in a little bit.
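
If you want to know roughly which frequency a given index corresponds to, you can work it out from the context's sample rate; a small sketch, reusing the audioCtx and analyser from earlier:

//each bin covers sampleRate / fftSize Hz, so bin i sits around i times that width
let binWidth = audioCtx.sampleRate / analyser.fftSize;
console.log(binWidth * 100); //approximate frequency (in Hz) represented by data[100]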

Since we have our data, we can finally draw something to our canvas with it.

function loopingFunction(){
    requestAnimationFrame(loopingFunction); //queue up the next frame
    analyser.getByteFrequencyData(data); //fill the array with the current frequency data
    draw(data); //paint this frame's visual
}

Step 5:

Our draw function is where the magic happens. Here you can really put any 2d design on the screen that you want. In this tutorial we will go over a simple bar style design, but the sky is the limit.

When we get our data as a parameter, it may be a good idea to convert it to a proper array, since it is passed in as an unsigned int array. That way you can use all the array methods you're familiar with on it.

data = [...data];

In our draw function we need to make sure we clear our canvas, because the last thing we drew will still be painted on the canvas unless we tell it to clear.

ctx.clearRect(0,0,canvas.width,canvas.height); //x,y,width,height

Then we start drawing our design.

First let's figure out the space between each bar. We take the length of our data and the width of our canvas and do some math.

let space = canvas.width / data.length;

Now let's run through our data and draw something for each data point. For each point we will draw a line from the bottom of the grid up to the height of that data point. Remember the grid is sort of upside down, so we don't add to the y value, we subtract from it.

data.forEach((value,i)=>{
    ctx.beginPath();
    ctx.moveTo(space*i, canvas.height); //start at the bottom of the canvas
    ctx.lineTo(space*i, canvas.height - value); //draw up by the volume of this frequency
    ctx.stroke();
});

That's pretty much it for our draw function. You can make many more complex shapes with the other Canvas API functions. Here it is all together:

function draw(data){
    data = [...data]; //convert the typed array to a regular array
    ctx.clearRect(0, 0, canvas.width, canvas.height); //wipe the previous frame
    let space = canvas.width / data.length;
    data.forEach((value,i)=>{
        ctx.beginPath();
        ctx.moveTo(space*i, canvas.height);
        ctx.lineTo(space*i, canvas.height - value);
        ctx.stroke();
    });
}

Final step:

In all modern browsers, the audio context starts out suspended by default so a page can't spam a user with sound. The audio context can only be resumed after the user interacts with the page, such as with a button click or touch event. So what I do is resume the audio context when the user clicks the play button.

audioElement.onplay = ()=>{
    audioCtx.resume(); //resume the context once the user presses play
};

That should work for most browsers, but if you're using Safari it won't. Safari wants you to create the AudioContext inside of a user event handler and only use it after that event has fired. The process is similar: you just need to wrap everything we did above in a function and call it when the user triggers a click, touch, or play event.
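
A minimal sketch of that restructuring might look like this; setupAudio is a hypothetical name, and it assumes you move the node-creation code from above into this function instead of running it at the top level.

let audioCtx, analyser, source, data;

function setupAudio(){
    //create everything inside the user gesture so Safari will allow it
    let AudioContextClass = window.AudioContext || window.webkitAudioContext;
    audioCtx = new AudioContextClass();
    analyser = audioCtx.createAnalyser();
    analyser.fftSize = 2048;
    source = audioCtx.createMediaElementSource(audioElement);
    source.connect(analyser);
    analyser.connect(audioCtx.destination);
    data = new Uint8Array(analyser.frequencyBinCount);
    requestAnimationFrame(loopingFunction); //start the render loop
}

//run the setup the first time the user presses play
audioElement.addEventListener("play", setupAudio, { once: true });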

Conclusion:

Thanks for reading. If you're interested in audio visualizers, or don't want to code this all yourself, please check out Wave.js on GitHub at foobar404/Wave.js or find it on npm.



