These are medium-sized Mumble projects that we've thought about, and which would be really beneficial to have, but which we so far haven't even scratched the surface of. We'd really like help with these, but be aware that they are non-trivial to implement.
Skills: Low-level Windows graphics, Windows kernel.
Mumble currently renders the overlay on the CPU, in the Mumble process, into a shared memory segment, then signals the injected in-game overlay to update its full-screen texture. This is resource-intensive. On OSX you can share surfaces, and on Linux you can share contexts, which allows you to render the overlay to a texture and have the in-game overlay use that rendered texture without any memory transfer. On Windows (Vista/Win7), the equivalent mechanism is DXGI Shared Surfaces.
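The current CPU path can be sketched roughly as follows. Python stands in for Mumble's actual C++ here, and every name in this sketch is illustrative rather than taken from the real code; the point is the full-buffer copy at the end, which is exactly the memory transfer that shared surfaces would eliminate.

```python
# Sketch of the current CPU overlay path: Mumble renders pixels into shared
# memory, then signals the injected overlay to re-upload its texture.
from multiprocessing import shared_memory

WIDTH, HEIGHT, BPP = 64, 32, 4  # tiny example framebuffer, BGRA

def render_overlay(buf):
    """Stand-in for Mumble's CPU renderer: fill the buffer with a solid color."""
    for i in range(0, len(buf), BPP):
        buf[i:i + BPP] = b"\x80\x00\x00\xff"  # half-blue, fully opaque

# Mumble-process side: create the segment and draw into it.
shm = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * BPP)
render_overlay(shm.buf)
# ...here Mumble would signal the injected overlay that the frame is dirty...

# In-game side: attach to the same segment and copy it into a texture
# (glTexSubImage2D / UpdateSubresource in a real overlay; a bytes() copy here).
game_view = shared_memory.SharedMemory(name=shm.name)
texture = bytes(game_view.buf)  # the per-frame full-screen transfer

game_view.close()
shm.close()
shm.unlink()
```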
The focus of this project is twofold. First, get DXGI Shared Surfaces working. Ideally, we'd like to render with OpenGL in the Mumble process and have the resulting surface usable from both OpenGL and Direct3D applications.
Second, investigate moving the overlay into the driver layer, either as part of the userspace graphics driver or in the kernel. Right now the overlay is injected by patching running processes, which works most of the time, but conflicts with some anti-cheat utilities and does not cope well with faulty applications.
Skills: Signal Processing, Qt
A lot of work has been done on HRTF research, and we'd like to use it in Mumble. Positional audio already works well, but nowhere near as well as it could with a proper HRTF applied. The focus here will be on headphone-based HRTF, as there is no way to guarantee speaker placement for users. This project consists of three separate tasks.
First, find a way to apply HRTFs to the current audio engine with minimal latency increase.
Second, find a way to smoothly interpolate the HRTFs as audio sources and listeners move around.
Third, create a simple wizard to guide the user in choosing the correct HRTF for their ears.
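The first two tasks can be sketched as follows, assuming the HRTF is supplied as a pair of short HRIR (head-related impulse response) filters per direction. The pure-Python FIR convolution and the function names are illustrative only, not Mumble APIs, and a real engine would use a fast convolution instead.

```python
# Task one: spatialize a mono source by convolving it with left/right HRIRs.
# Task two (simplest form): linearly crossfade between the HRIRs of two
# measured directions as the source moves. All names here are hypothetical.

def fir(signal, taps):
    """Convolve a mono signal with an FIR filter (direct form)."""
    out = []
    for n in range(len(signal) + len(taps) - 1):
        acc = 0.0
        for k, t in enumerate(taps):
            if 0 <= n - k < len(signal):
                acc += t * signal[n - k]
        out.append(acc)
    return out

def apply_hrir(signal, hrir_left, hrir_right):
    """Spatialize a mono source into a stereo (left, right) pair."""
    return fir(signal, hrir_left), fir(signal, hrir_right)

def crossfade_hrir(hrir_a, hrir_b, alpha):
    """Interpolate between two measured directions, alpha in [0, 1]."""
    return [(1 - alpha) * a + alpha * b for a, b in zip(hrir_a, hrir_b)]

# Toy example: an impulse through two single-tap "HRIRs", one with an extra
# sample of delay, interpolated halfway between the two directions.
hrir_near = [1.0, 0.0]
hrir_far = [0.0, 1.0]
halfway = crossfade_hrir(hrir_near, hrir_far, 0.5)
left, right = apply_hrir([1.0, 0.0, 0.0], halfway, halfway)
```

Note the toy example also shows the main pitfall of naive interpolation: blending two filters that differ only in delay smears the impulse across both taps, a comb-filter-like artifact. Practical HRTF interpolation therefore often separates the interaural time delay from minimum-phase filters and interpolates each part on its own.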