Posted on 2023-08-30, 20:12 · Authored by Mario Bagnoli
Artificial autonomous agents are systems capable of traversing a path with no human intervention. The agent uses sensory cues or navigation aids to gather information from the environment and update its position until a target is reached. Technological advances have made it possible to build fast, intelligent autonomous agents; however, these tend to rely on visual cues or rotating ultrasonic systems. If the primary camera or the rotating mechanism of the sonar malfunctions, autonomous navigation can be compromised. Navigation informed by acoustic cues in the hearing range can compensate for such shortcomings and has applications particularly for robotic agents operating in low-light environments.
This work is inspired by biological echolocation and describes how a simulated autonomous agent can employ auditory cues captured with a microphone array to scan, map and navigate confined rooms in a virtual environment. A novel virtual environment equipped with three-dimensional acoustics has been built in Unity3D, using the Image Source Method (ISM) to generate Room Impulse Responses (RIRs) for versatile physical configurations of reverberant environments. This approach produces results in real time in three dimensions and is limited neither to cuboid rooms nor to a fixed number of successive reflections.
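For illustration, the sketch below implements the classic ISM for a cuboid (shoebox) room in Python with NumPy: each wall reflection is replaced by a mirrored copy of the source, and the RIR is the sum of the delayed, attenuated impulses arriving from all such images. This is only a minimal sketch of the textbook case; the thesis generalises beyond cuboid rooms and fixed reflection orders, and all parameter values here (reflection coefficient, geometry) are illustrative assumptions.

```python
import numpy as np

def shoebox_rir(src, mic, room, fs=44100, max_order=3,
                beta=0.8, c=343.0, rir_len=8192):
    # Sum attenuated, delayed impulses arriving from mirrored copies of
    # the source; beta is an assumed uniform wall reflection coefficient.
    rir = np.zeros(rir_len)
    src, mic, room = map(np.asarray, (src, mic, room))
    rng = range(-max_order, max_order + 1)
    for nx in rng:
        for ny in rng:
            for nz in rng:
                n = (nx, ny, nz)
                # Image coordinate along each axis: even index gives a
                # translated copy of the source, odd index a mirrored copy.
                img = np.array([k * L + s if k % 2 == 0 else (k + 1) * L - s
                                for s, L, k in zip(src, room, n)])
                refl = sum(abs(k) for k in n)        # total wall bounces
                dist = np.linalg.norm(img - mic)
                t = int(round(dist / c * fs))        # arrival time (samples)
                if t < rir_len:
                    rir[t] += beta ** refl / (4 * np.pi * dist)
    return rir

# Example: a 5 x 4 x 3 m room with source and microphone inside it.
h = shoebox_rir(src=(1.0, 2.0, 1.5), mic=(3.5, 1.0, 1.5), room=(5.0, 4.0, 3.0))
```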
A virtual agent in this reverberant environment scans the room by gathering simulated room impulse responses with a self-attached microphone array and sound source. A method has been developed to calculate a flat, "walkable" safe area in the form of a convex polygon around the agent. This is accomplished by reversing the Image Source Method: the RIRs associated with each microphone are used to estimate the positions of the image sources iteratively, via multidimensional scaling of a Euclidean Distance Matrix. The image sources are then used to estimate the positions of room walls and obstacles.
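As a sketch of the multidimensional-scaling step named above, the snippet below applies classical MDS to a Euclidean Distance Matrix of squared distances, recovering point positions up to a rotation and translation. It does not reproduce the thesis's iterative image-source estimation, and the example coordinates are invented; in the actual pipeline the distances would come from RIR arrival times.

```python
import numpy as np

def classical_mds(edm, dim=3):
    # Double-centre the matrix of squared distances to obtain the Gram
    # matrix of centred points, then factor it by eigendecomposition.
    n = edm.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    G = -0.5 * J @ edm @ J                    # Gram matrix
    vals, vecs = np.linalg.eigh(G)            # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]        # keep the 'dim' largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Example: an EDM built from invented microphone/image-source positions.
pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [0.0, 0.2, 0.0],
                [0.0, 0.0, 0.2], [2.0, 1.0, 0.5]])
edm = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
recovered = classical_mds(edm)   # same geometry, up to a rigid transform
```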
Autonomous acoustic-guided navigation of the virtual agent is developed for two different scenarios: (1) navigation towards an external sound source, and (2) identifying and navigating towards a room aperture (exit). Novel adaptive algorithms are proposed for each scenario, based on estimates of the interaural time and level differences across the microphone array. The proposed algorithms were validated in a variety of virtual room configurations, including internal obstacles and occlusions. The tests showed that robust acoustic-guided autonomous wayfinding can be achieved in a virtual environment.
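To make the interaural cues concrete, the following sketch estimates an interaural time difference from the cross-correlation peak of two channels and an interaural level difference from their RMS ratio. This is a textbook formulation, not the thesis's adaptive algorithm, and the test signal is synthetic.

```python
import numpy as np

def itd_ild(left, right, fs=44100):
    # ITD: lag (in seconds) of the cross-correlation maximum; a negative
    # value means the right channel leads.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    itd = lag / fs
    # ILD: RMS level ratio between the channels, in decibels.
    eps = 1e-12
    ild = 20 * np.log10((np.sqrt(np.mean(left ** 2)) + eps)
                        / (np.sqrt(np.mean(right ** 2)) + eps))
    return itd, ild

# Example: the same synthetic burst, arriving later and quieter on the
# left channel, as if the source lay to the agent's right.
sig = np.random.randn(1024)
right = sig
left = 0.7 * np.concatenate([np.zeros(5), sig])[:1024]
t, l = itd_ild(left, right)   # t < 0 and l < 0: source to the right
```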
Beyond the robotics field, this research can be applied to the development of echolocation training tools and accessible computer games for the visually impaired community.
History
Institution: Anglia Ruskin University
File version: Accepted version
Language: eng
Thesis name: PhD
Thesis type: Doctoral
Legacy posted date: 2022-08-23
Legacy creation date: 2022-08-23
Legacy Faculty/School/Department: Theses from Anglia Ruskin University/Faculty of Science and Engineering