Defines a sound that can be played in the application. The sound can either be an ambient track or a simple sound played in reaction to a user action.
Create a sound and attach it to a scene
Name of your sound
URL of the sound to load asynchronously, or an ArrayBuffer; a MediaStream also works
Callback function invoked once the sound is ready to be played
Object providing the currently available options: autoplay, loop, volume, spatialSound, maxDistance, rolloffFactor, refDistance, distanceModel, panningModel, streaming
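As an illustration, a minimal constructor call might look like this (the asset URL and `scene` variable are hypothetical, and the snippet assumes a browser page with Babylon.js loaded):

```typescript
// Hypothetical URL and scene; BABYLON.Sound is the class documented here.
const gunshot = new BABYLON.Sound(
  "gunshot",             // name
  "sounds/gunshot.wav",  // URL (an ArrayBuffer or MediaStream also works)
  scene,
  () => gunshot.play(),  // ready-to-play callback
  { autoplay: false, loop: false, volume: 0.8 }
);
```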
Does the sound autoplay once loaded.
Gets or sets the inner angle for the directional cone.
Gets or sets the outer angle for the directional cone.
Define the distance attenuation model the sound will follow.
Is this sound currently paused.
Is this sound currently playing.
Does the sound loop after it finishes playing once.
Define the max distance the sound should be heard at (the volume drops to 0 at this distance).
The name of the sound in the scene.
Observable raised when the currently playing sound finishes.
Define the reference distance at which the sound is heard at full volume.
Define the roll off factor of spatial sounds.
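For intuition, the values `distanceModel` can take ("linear", "inverse", "exponential") correspond to the standard Web Audio attenuation formulas, sketched below as plain functions using the same parameter names as the properties above (this is not Babylon.js source code, just the spec math):

```typescript
// Gain applied to a spatial sound as a function of listener distance,
// per the Web Audio distance models selected by `distanceModel`.
type DistanceModel = "linear" | "inverse" | "exponential";

function distanceGain(
  model: DistanceModel,
  distance: number,
  refDistance: number,
  maxDistance: number,
  rolloffFactor: number
): number {
  // Below refDistance the sound plays at full volume in every model.
  const d = Math.max(distance, refDistance);
  switch (model) {
    case "linear": {
      // Falls to 0 exactly at maxDistance (with rolloffFactor = 1).
      const clamped = Math.min(d, maxDistance);
      return 1 - rolloffFactor * (clamped - refDistance) / (maxDistance - refDistance);
    }
    case "inverse":
      return refDistance / (refDistance + rolloffFactor * (d - refDistance));
    case "exponential":
      return Math.pow(d / refDistance, -rolloffFactor);
  }
}

distanceGain("inverse", 1, 1, 100, 1);  // 1 (full volume at refDistance)
distanceGain("linear", 100, 1, 100, 1); // 0 (silent at maxDistance)
```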
The sound track id this sound belongs to.
Does this sound enable spatial sound.
Does the sound use a custom attenuation curve to simulate the falloff as the source moves away from the camera.
Attach the sound to a dedicated mesh
The transform node to connect the sound with
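A brief sketch of attaching a spatial sound to a mesh (the mesh name and sound variable are hypothetical, and the sound is assumed to have been created with `spatialSound: true`):

```typescript
// Hypothetical names; the sound then follows the mesh as it moves.
const speaker = BABYLON.MeshBuilder.CreateBox("speaker", { size: 1 }, scene);
spatialMusic.attachToMesh(speaker);
// ...later, to stop following it:
spatialMusic.detachFromMesh();
```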
Connect this sound to a sound track audio node (a gain node, for instance)
the sound track audio node to connect to
Detach the sound from the previously attached mesh
Release the sound and its associated resources
Gets the current underlying audio buffer containing the data
the audio buffer
Gets the volume of the sound.
the volume of the sound
Gets whether the sound is ready to be played.
true if ready, otherwise false
Pause the sound
Play the sound
(optional) Start the sound after X seconds. Starts immediately (0) by default.
(optional) Start the sound at a specific offset (in seconds) within the track
Serializes the Sound in a JSON representation
the JSON representation of the sound
Sets a new custom attenuation function for the sound.
Defines the function used for the attenuation
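The attenuation callback receives the current volume and distance figures and returns the gain to apply. A simple linear fade might look like this (the parameter list follows the documented custom-attenuation signature; `mySound` is hypothetical and must have been created with `useCustomAttenuation: true`):

```typescript
// Linear fade from full volume at the emitter to silence at maxDistance.
const linearFade = (
  currentVolume: number,
  currentDistance: number,
  maxDistance: number
): number => currentVolume * Math.max(0, 1 - currentDistance / maxDistance);

// mySound.setAttenuationFunction(linearFade); // hypothetical sound instance
```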
Sets the data of the sound from an AudioBuffer
The audioBuffer containing the data
Transform this sound into a directional source
Size of the inner cone in degrees
Size of the outer cone in degrees
Volume of the sound outside the outer cone (between 0.0 and 1.0)
Sets the local direction of the emitter if spatial sound is enabled
Defines the new local direction
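Combining the two directional calls above, a directional emitter could be set up like this (hypothetical sound instance; spatial sound must be enabled):

```typescript
// Inner cone 90°, outer cone 180°, 10% volume outside the outer cone.
music.setDirectionalCone(90, 180, 0.1);
// Aim the cone along the local +Z axis of the attached mesh.
music.setLocalDirectionToMesh(new BABYLON.Vector3(0, 0, 1));
```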
Set the sound playback rate
Define the playback rate the sound should be played at
Sets the position of the emitter if spatial sound is enabled
Defines the new position
Sets a dedicated volume for this sound
Define the new volume of the sound
Define over how long (in seconds) the sound should fade to this value
Stop the sound
(optional) Stop the sound after X seconds. Stop immediately (0) by default.
Switch the panning model to Equal Power: Represents the equal-power panning algorithm, generally regarded as simple and efficient. equalpower is the default value.
Switch the panning model to HRTF: Renders a stereo output of higher quality than equalpower — it uses a convolution with measured impulse responses from human subjects.
Updates the current sound's options, such as maxDistance, loop...
A JSON object containing values named as the object properties
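For example (hypothetical sound instance; presumably only the keys present in the object are applied):

```typescript
music.updateOptions({ loop: true, volume: 0.5, maxDistance: 50 });
```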
Parse a JSON representation of a sound to instantiate it in a given scene
Define the JSON representation of the sound (usually coming from the serialize method)
Define the scene the new parsed sound should be created in
Define the root URL of the load in case we need to fetch relative dependencies
Define a sound placeholder if you do not need to instantiate a new one
the newly parsed sound
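A round-trip sketch pairing Parse with the serialize method above (all variable names hypothetical):

```typescript
const json = originalSound.serialize();                   // JSON representation
const copy = BABYLON.Sound.Parse(json, scene, "sounds/"); // new sound in scene
```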
Generated using TypeDoc