Why does Potato Virtual Input need more buffering than Virtual Audio Cable?
Posted: Tue Jun 16, 2020 7:46 pm
The question is in the last paragraph, but may need some background, so here we go!
I'm running Virtual Audio Cable with 3x256 buffers, and it's doing well. (I haven't tried 3x128 -- I don't need to go to that extreme for my desktop audio.)
In Virtual Inputs in Potato, however, if I have a buffer size less than 2048, the virtual audio inputs glitch out.
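To put those two buffer settings side by side, here's a quick latency calculation. The 48 kHz sample rate is my assumption (it's the usual default for these drivers, but your device settings may differ):

```python
# Convert a buffer size in samples to the time span it represents.
# Sample rate of 48000 Hz is assumed, not taken from the post.

def buffer_latency_ms(samples: int, rate_hz: int = 48000) -> float:
    """Duration of one buffer, in milliseconds."""
    return samples / rate_hz * 1000

# VAC config from above: 3 buffers of 256 samples each.
per_buffer = buffer_latency_ms(256)
total_vac = 3 * per_buffer

# Smallest glitch-free buffer I found for Potato's virtual inputs.
potato = buffer_latency_ms(2048)

print(f"VAC:    {per_buffer:.2f} ms per buffer, {total_vac:.2f} ms queued")
print(f"Potato: {potato:.2f} ms per buffer")
```

So VAC is running happily with roughly 5 ms buffers (about 16 ms queued across the 3 buffers), while Potato's virtual inputs need a single buffer of over 40 ms before they stop glitching -- nearly an order of magnitude more per buffer.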
My computer (running Windows 10) is fairly high spec, and generally I can run audio programs with small buffers.
My assumption is that VAC installs a kernel-mode driver that uses the sound miniport infrastructure to deliver data written from the "input" (or silence, on a timer) to the output.
My further assumption is that the "virtual inputs" in Potato instead are application-level DirectShow endpoints.
My current best theory for this behavior is that the DirectShow graph simply can't deal with smaller buffers in a timely manner, whereas VAC, using the kernel-level infrastructure, can.
If that theory is correct, then what is the fundamental limitation here? Is it scheduling of the sound processing in Potato? Is it scheduling in the applications that would play "into" these virtual channels? Is it in the DirectShow graph infrastructure itself? Is it something else?