Can we build a 2-way, stereo active speaker system with Arylic boards?

Hi,

Here is my project description:

I spent a long time sorting out an issue where the SPDIF connection seemed not to work, when in fact it did: the volume on the receiving board's SPDIF IN was so low compared to the main board's speakers (Amp v4) that I simply couldn't hear them. My bad.

So I pushed the SPDIF input level to the max with ACP Workbench, adjusted the volume balance with the phone app, and now both boards play at the same volume.
With ACP Workbench I set the output of the left board to the LEFT audio signal, and the RIGHT signal for the board driving the right speakers.

BUT: I realized that the SPDIF OUT of the Amp v4 is affected by the filters I configured to drive the left speakers. Since the goal is to build active speakers, I set a high-pass filter on the tweeter channel (left channel of the Amp v4) and a low-pass on the woofer.
So, in fact, the Amp 2.1 board receives only the filtered sound of the left music channel :frowning:
I'm stuck, wondering whether there is a way to configure the SPDIF OUT of the Amp v4 to send the original signal, unaffected by the effects chain.
Worst case: is there a pin on the board where I could tap the signal before the effects, to send it to the second board? Maybe on the WiFi/Bluetooth module?
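For reference, the 2-way split described above can be sketched in software. This is only an illustration of the high-pass/low-pass crossover idea, not the Amp v4's actual DSP; the 2500 Hz crossover point, 4th-order Butterworth slopes, and 44.1 kHz sample rate are all assumed figures:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100           # sample rate in Hz (assumed)
CROSSOVER_HZ = 2500  # hypothetical crossover frequency

# 4th-order Butterworth high-pass for the tweeter, low-pass for the woofer
sos_hp = butter(4, CROSSOVER_HZ, btype='highpass', fs=FS, output='sos')
sos_lp = butter(4, CROSSOVER_HZ, btype='lowpass', fs=FS, output='sos')

# Test signal: a low tone (440 Hz) plus a high tone (8 kHz) on the left channel
t = np.arange(FS) / FS
left = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 8000 * t)

tweeter = sosfilt(sos_hp, left)  # high-pass branch -> tweeter amp channel
woofer = sosfilt(sos_lp, left)   # low-pass branch -> woofer amp channel
```

A real active crossover would typically use matched Linkwitz-Riley slopes, but the point here is the same one raised in the post: whatever drives the tweeter/woofer outputs no longer contains the full-range signal.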

Let me complete this with my latest experiments: independently of the SPDIF OUT signal being affected by the master board's filters, I found another limitation to chaining/cascading several boards:
the signal processing along the chain introduces a small delay that translates into a strange micro-reverb/phasing effect, polluting the sound. I like it when it's a flanger pedal on my guitar, but not for music!
So it seems pointless to try cascading several boards with each of them playing sound.

Hi. Just for reference, the human ear is extremely sensitive to the time delay between sounds; that's how we can detect where sounds come from, or maintain our balance, for instance, but not only. You can fool your brain just by giving a minimal time difference between two identical sounds in each ear, making it believe the source is more to the left or right with the same sound level, or even behind, in front, above or below… That's exactly what is commonly used in "false" surround sound systems with only two speakers.

So your conclusion is totally logical: the delay introduced by the multiple computations on each board is way above the ear's phase tolerance (hence the echo/reverb feeling). But more importantly, try using your two boards synchronized in multi-room mode (one master, the other slave), each playing the same channel (left, for instance) so that both start from the exact same signal, with two identical setups (volume, speakers…) separated by a couple of meters and away from the walls. Place yourself in the exact middle of the sound scene and listen to a piece where you can easily follow the sound composition, for instance a voice. You will notice that the sound scene floats; it doesn't constantly stay in the middle (the image in front of you). This is inherent to the impossibility of exactly synchronizing the two signals output by each board.

The boards are designed to use a WiFi/network connection, and the existing UPnP / TCP/IP / IEEE xxxx protocols target error-free data transfer, not exact time sync. In our case the boards, with their relatively slow CPUs, try to minimize the lags in the timing chain by handshaking (bidirectional communication: "you have to play this chunk of music next, starting at time XX:XX:XX and ending at time XX:XX:XX, once the current one is done", "have you received it?", "what are your time sync parameters?", etc.) and have to absorb the time spent on error-correction computation, which by definition varies with the transmission. Such "basic" computing systems hardly manage real-time sync, and their poor quartz clocks introduce time shifts fatal to the ear's detection.

By the way, that's the dream communication companies try to sell us with 5G, i.e. reducing the time lag enough to allow real-time synchronization between devices.

And finally, in your SPDIF chain you totally lose the time reference, since it relies on the clocks of each board, plus the transmission delay. There is no absolute start-time information.
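To put rough numbers on the argument above: the figures here (0.215 m inter-ear distance, ~10 µs detection threshold, ±50 ppm quartz tolerance) are my assumptions of typical values, not measurements of these boards.

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C
EAR_SPACING = 0.215      # m, rough inter-ear distance (assumption)

# Largest interaural time difference head geometry can produce:
max_itd_s = EAR_SPACING / SPEED_OF_SOUND   # about 0.63 ms

# Listeners can reportedly detect interaural differences of roughly 10 µs.
ITD_THRESHOLD_S = 10e-6

# Two free-running boards with typical +/-50 ppm quartz clocks can drift
# apart by up to 100 µs every second in the worst case:
drift_per_s = 2 * 50e-6

# Time until free-running clock drift alone exceeds the audible threshold:
seconds_to_audible = ITD_THRESHOLD_S / drift_per_s
```

Under these assumptions the drift crosses the audible threshold in a fraction of a second, which is why periodic resynchronization (and its varying latency) is unavoidable.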
Have fun!


Very professional comment.

Back to the original topic, regarding the delay issue: if the audio signal is sent over SPDIF, the delay should be fixed and therefore possible to calibrate. Even though it's not exactly in sync, it could at least be imperceptible to a normal human like me :slight_smile:. In the new firmware (not yet finally released for the DIY boards), we have added a delay setting for the DACX channel, and this could also be applied to the DAC0 channel to calibrate the delay in this project. This delay needs memory to buffer audio frames, which is an issue for this system as memory is very limited. So we need a way to configure it.
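To illustrate why a delay setting costs memory, here is a minimal sketch. The sample rate, bit depth, and 20 ms delay are example figures, not the firmware's actual parameters, and the delay line is a generic ring-buffer idea rather than Arylic's implementation:

```python
from collections import deque

# Memory cost of buffering a fixed delay (assumed figures):
SAMPLE_RATE = 44100   # Hz
BYTES_PER_SAMPLE = 2  # 16-bit PCM
CHANNELS = 2
delay_ms = 20

frames = SAMPLE_RATE * delay_ms // 1000          # 882 frames
buffer_bytes = frames * BYTES_PER_SAMPLE * CHANNELS  # ~3.5 KB per 20 ms

def make_delay(n_frames):
    """Fixed delay line: returns a function that delays samples by n_frames."""
    buf = deque([0] * n_frames, maxlen=n_frames + 1)
    def step(sample):
        buf.append(sample)       # newest sample in
        return buf.popleft()     # sample from n_frames ago out
    return step
```

Every extra millisecond of configurable delay grows the buffer linearly, which is why a memory-constrained board needs the maximum delay to be a configurable trade-off.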

Another issue is that the SPDIF OUT signal is affected by the two active filters applied on the master device. This could be fixed by moving the filters to the DAC output only, but that might affect other applications, so we also need to consider a way to configure it.


Thanks a lot, Frank, for your updates on the firmware; so there might finally be solutions in the future :slight_smile: