Audio
Special rules for using audio within a module
Playing Audio
Playing audio using the Web Audio API or the <audio> HTML tag poses a number of issues when running inside our application. Notably, audio will not play when silent mode is enabled, and game audio will not be captured in a full-screen recording when the user is wearing headphones or the device volume is turned down.
As such, there's a custom AudioManager you can use to load and unload AudioClips, which can be played and managed much the same as an HTMLAudioElement.
When the o3h.js file is loaded, it will polyfill the Audio class / <audio> tag / HTMLAudioElement object to play audio through the native Oooh application. We suggest that you use these DOM interfaces instead of the custom AudioManager, as they should feel more natural. Additionally, you can use this polyfill with your favorite JavaScript audio library with only minor changes.
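For example, once o3h.js has loaded, standard DOM audio code plays through the native application unchanged (the file name here is just a placeholder):
const sfx = new Audio('sfx.mp3'); // routed through the native app by the polyfill
sfx.play();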
Best practices
Please follow these important guidelines when using our <audio> polyfill.
Clear src to unload
You can save resources by clearing the src attribute when you are done with an <audio> node.
audio.addEventListener('ended', () => {
    // an empty src unloads the clip and frees its resources
    audio.src = "";
});
The same applies to engines that use HTML5 audio. They will usually give you a way to access the underlying <audio> node.
Sound sprites
You can use sound sprites, but please follow these rules of thumb:
- DO use sound sprites to group many short sounds (see the sketch after this list).
- DO NOT use looping audio in a sound sprite. You will get better looping by making the looping sound its own file.
- DO NOT include background music or other long clips in a sound sprite. Put music in individual files without sound sprites.
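As an illustration, here's what a sprite of short effects might look like in Howler (covered below); the file name, clip names, and offsets are hypothetical:
const sfx = new Howl({
    src: ['sfx.mp3'],
    html5: true,
    sprite: {
        // name: [offset in ms, duration in ms]
        jump: [0, 300],
        coin: [400, 250]
    }
});
sfx.play('coin');
Note there's no looping clip and no music in the sprite; those belong in their own files.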
Audio formats
Oooh will play a number of file types. However, there's no advantage to providing audio files in multiple formats as fallbacks. Please provide each file in a single format that we accept.
Oooh will accept MP3, WAV, and OGG. We recommend you simply use MP3 throughout.
Using <audio>
Most of the functionality of the HTMLAudioElement prototype, which underlies the <audio> tag, will work automatically for you. Please note these ways in which our <audio> polyfill differs from the browser's <audio>:
controls
Audio does not route correctly when controlled directly through the browser's auto-generated UI, so you'll need to control playback via JavaScript yourself. Additionally, the o3h library will remove the controls attribute from any audio elements it finds when loading.
crossorigin
This shouldn't be needed with the native implementation, and you should never load audio files from another origin.
preload
This is partially supported: if the preload attribute is metadata, auto, or not specified (it defaults to auto), the audio will start loading immediately. Set preload to none to prevent preloading.
Either all data is loaded or no data is loaded. Preloading metadata alone is not supported, but you will be able to access the duration property regardless.
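A minimal sketch of opting out of preloading (the file name is a placeholder):
const music = new Audio();
music.preload = 'none';  // set before src, otherwise loading starts immediately
music.src = 'music.mp3';

// later, when the clip is actually needed:
music.load();            // all of the data loads at once
music.play();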
events
Not all events are fired, and the ones that are fired are not trusted events. If you require an event that's missing, let us know the use case for it and we'll look into adding it.
There are other, less common properties and functions of HTMLAudioElement that aren't supported either. Again, if you require that functionality, please present a use case and we'll look into adding it.
Using Howler
You'll need to enable HTML5 audio in order for Howler to behave properly.
const sound = new Howl({
    src: ['sound.mp3'],
    html5: true
});
sound.play();
We recommend using a larger pool size; the default of 5 may introduce issues if you fire a large number of simultaneous sounds. Try starting at 25.
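For example, using the pool option on the Howl constructor (the file name is a placeholder):
const sound = new Howl({
    src: ['sound.mp3'],
    html5: true,
    pool: 25 // a larger inactive-sound pool for many simultaneous plays
});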
Using Phaser 3
You'll need to both enable HTML5 audio using an AudioConfig object in your game's configuration:
const config = {
    audio: {
        disableWebAudio: true
    },
    // ... other config ...
};

new Phaser.Game(config);
and, since Phaser does not play HTML5 audio until a user interaction, fake a tap:
if (window.hasOwnProperty('TouchEvent')) {
    document.body.dispatchEvent(new TouchEvent('touchend'));
}
If you need to play audio before the user taps (for example, intro music), you will need to place this code immediately after you have loaded your audio files; for example, when the preloader is done, or during create() of the initial scene.
We have found that the timing of this tap is important. First, only fire it if you need to. If the user will be tapping the screen to advance, it is not necessary and will introduce problems. Second, fire it after audio has preloaded. If you load more sounds after the initial load, you will need another tap to unlock these, whether it's from the user or this fake tap.
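Putting that together, a sketch of an initial scene that unlocks audio during create() and then plays intro music (the scene, key, and file names are hypothetical):
class BootScene extends Phaser.Scene {
    preload() {
        this.load.audio('intro', 'intro.mp3');
    }

    create() {
        // the audio has preloaded by now, so the fake tap will unlock it
        if (window.hasOwnProperty('TouchEvent')) {
            document.body.dispatchEvent(new TouchEvent('touchend'));
        }
        this.sound.play('intro');
    }
}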
Using Phaser 2 or Phaser CE
Disable Web Audio with a PhaserGlobal variable. Be sure to define this before you create your Game object.
window.PhaserGlobal = {
    disableWebAudio: true
};
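That is, the ordering matters (the constructor arguments here are placeholders):
window.PhaserGlobal = { disableWebAudio: true }; // must come first
const game = new Phaser.Game(800, 600, Phaser.AUTO, 'game');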
Using pixi-sound
Pixi.js doesn't bundle a sound library, so you can choose your own way to play sounds. If you choose to use pixi-sound, opt out of Web Audio with useLegacy:
import { sound } from "@pixi/sound";
sound.useLegacy = true;
You must do this before any sounds are loaded.
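A minimal sketch of that ordering (the alias and file name are placeholders):
import { sound } from "@pixi/sound";

sound.useLegacy = true; // must be set before any sounds are loaded

sound.add('laser', 'laser.mp3');
sound.play('laser');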
Using PlayCanvas
In order to get PlayCanvas to use <audio> and play audio through the Oooh application, you'll have to load o3h.js to get the polyfill, and hide the Web Audio API from PlayCanvas, before loading PlayCanvas.
Your index.html should look something like:
<script defer type="module">
    import * as o3h from '/api/o3h.js';
    window.o3h = o3h;
    delete window.AudioContext;
    delete window.webkitAudioContext;
</script>
<script defer src="playcanvas-stable.min.js"></script>
<script defer src="__settings__.js"></script>
<script defer src="__modules__.js"></script>
<script defer src="__start__.js"></script>
<script defer src="__loading__.js"></script>
- By loading o3h.js with import instead of the dynamic import(), inside a script tag flagged module, you prevent o3h.js from being loaded asynchronously; the script does not finish until o3h.js is loaded.
- By marking all the other scripts defer, you ensure that they load in the order that they appear in the HTML. (Module scripts are always deferred.)
- Combining both of these, you can ensure that the polyfill in o3h.js is loaded before PlayCanvas begins loading.
- Deleting both references to AudioContext is needed to coerce PlayCanvas to run in legacy <audio> mode.
  - Obviously, don't use the forceWebAudioApi option when creating a SoundManager.
If you prefer, you could alternately:
- Save the original AudioContext under another name, if you need to use it elsewhere (sketched below).
- Modify playcanvas.js or playcanvas.min.js so that hasAudioContext() returns false.
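A sketch of the first alternative; the saved name is hypothetical:
// keep a private reference for your own use before hiding the API from PlayCanvas
window.o3hAudioContext = window.AudioContext || window.webkitAudioContext;
delete window.AudioContext;
delete window.webkitAudioContext;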
One final note: this disables positional audio. To prevent unexpected results, you should uncheck Positional on all Sound components.
PlayCanvas will create a lot of <audio> nodes, one for every SoundInstance. It is good practice to unload these. You can do so by listening to their end event and clearing the src property on the underlying <audio> tag.
const clip = gameobject.sound.slot('MySoundSlot').play();
// unload the underlying <audio> node once playback finishes
clip.on('end', () => clip.source.src = "");
Web Audio API
At this time the Web Audio API is not supported for native audio playback, but you may use it—or libraries that rely on it—to analyse audio data, for example to render a waveform or perform beat detection.
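For instance, a sketch of decoding a clip for waveform analysis (the URL is a placeholder, and older WebKit builds may only support the callback form of decodeAudioData):
const ctx = new (window.AudioContext || window.webkitAudioContext)();

async function loadSamples(url) {
    const response = await fetch(url);
    const buffer = await ctx.decodeAudioData(await response.arrayBuffer());
    return buffer.getChannelData(0); // Float32Array of PCM samples, channel 0
}

loadSamples('sound.mp3').then((samples) => {
    // e.g. draw samples as a waveform, or run beat detection over them
});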
Listening to the Microphone
When web content is embedded in an app on iOS, you cannot access the microphone as you normally would via navigator.mediaDevices.getUserMedia({audio: true}). Instead, you must use the o3h bridge to stream the data provided by the native microphone via InputManager.getMicrophoneAudioNode.
The returned AudioNode can be used to analyse the properties of the audio signal (pitch, tone, etc.) captured by the microphone. Samples will be delayed by 1 or 2 frames, and under system load you may notice some data gaps. Do not attempt to record this audio for playback: you'll run into many iOS restrictions, and even if you manage it, the quality will be less than ideal. Instead, use a MicrophoneRecorder to record an audio asset.
import * as o3h from "/api/o3h";

// standard Web Audio functionality: a context to hook everything together,
// and an analyser to inspect the audio data
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const analyser = audioCtx.createAnalyser();

// get an AudioNode that outputs the microphone audio
const microphoneAudioNode = await o3h.Instance.getInputManager().getMicrophoneAudioNode(audioCtx);

o3h.Instance.ready(() => {
    // send the microphone audio into the analyser
    microphoneAudioNode.connect(analyser);
});

// stop the microphone audio stream when you are finished with it
function stopMicrophone() {
    microphoneAudioNode.disconnect();
}
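From there, standard analyser techniques apply; for example, a rough sketch of polling the time-domain data each frame:
const data = new Uint8Array(analyser.fftSize);

function update() {
    analyser.getByteTimeDomainData(data);
    // e.g. scan data for peaks to estimate the current microphone level
    requestAnimationFrame(update);
}
requestAnimationFrame(update);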