here is my project description:
I spent a long time sorting out an issue with the S/PDIF connection apparently not working, when in fact it was; the volume on the receiver with S/PDIF IN was just so low compared to the main board's speakers (Amp v4) that I was not hearing them. My bad.
So I pushed the S/PDIF input level to the max with ACP Workbench, plus changed the volume balance with the phone app, and now both cards play at the same volume.
With ACP Workbench I have set the output of the left board to the LEFT audio signal, and the RIGHT signal for the board driving the right speakers.
BUT: I realized that the SPDIF OUT of the Amp v4 is affected by the filters that I have configured to drive the left speakers. The goal being to build active speakers, I have set a high-pass filter for the tweeter channel (left channel of the Amp v4) and a low-pass for the woofer.
So, in fact, the Amp 2.1 board receives only the filtered sound of the music's left channel.
So I'm stuck, wondering if there is a way to configure the SPDIF OUT of the Amp v4 to send the original signal, not affected by the effects chain.
Worst case: is there a pin on the board to capture the signal before the effects, to send it to the second board? Maybe on the WiFi/Bluetooth board?
Let me complement this with my latest experiments: independently from the issue of the SPDIF OUT signal being affected by the master board's filters, I found another limitation to chaining/cascading several boards:
the signal processing along the chain introduces a small delay that translates into a strange micro-reverb/phase effect polluting the sound. I like it when it's a flanger pedal on my guitar, but not for music!
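For what it's worth, that micro-reverb is consistent with comb filtering: mixing a signal with a slightly delayed copy of itself puts notches at odd multiples of 1/(2×delay). A minimal sketch (my own back-of-envelope; the 1 ms delay is an assumed example, not a measured board value):

```python
# Comb-filter notch frequencies produced when a signal is mixed with
# a delayed copy of itself (the "flanger"-like effect).
def comb_notches(delay_ms, max_hz=20000):
    """Return notch frequencies (Hz) for a delay given in milliseconds."""
    delay_s = delay_ms / 1000.0
    notches = []
    k = 0
    while (f := (2 * k + 1) / (2 * delay_s)) <= max_hz:
        notches.append(f)
        k += 1
    return notches

# An assumed 1 ms processing delay puts the first notch at 500 Hz,
# right in the middle of the audible band.
print(comb_notches(1.0)[:3])   # [500.0, 1500.0, 2500.0]
```

So even a sub-millisecond lag between the two boards audibly colours the sound long before it is heard as a distinct echo.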
So it seems impractical to try cascading several boards with each of them playing sound.
Hi. Just for reference, the human ear is extremely sensitive to the time delay between sounds; that's how we can detect where sounds come from, or maintain our balance, for instance, but not only. You can fool your brain just by introducing a minimal time difference between 2 identical sounds in each ear, making it believe the source is more on the left or right side at the same sound level, or even to the rear, front, above & below… That's exactly what is commonly used in the 'false' surround sound systems using only 2 speakers.
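To put rough numbers on that sensitivity (a standalone sketch; the head width and the ~10 µs detection threshold are textbook ballpark figures, not taken from this thread):

```python
# Interaural time difference (ITD): the extra time a sound takes to
# reach the far ear.  Localization works on sub-millisecond differences.
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C
HEAD_WIDTH = 0.21        # m, typical ear-to-ear distance (assumption)

# Largest possible ITD: a source directly to one side of the head.
max_itd_us = HEAD_WIDTH / SPEED_OF_SOUND * 1e6
print(f"max ITD = {max_itd_us:.0f} microseconds")   # ~612

# Detection thresholds around 10 microseconds are commonly quoted,
# so inter-board delays far below one audio buffer are already
# audible as a shift of the stereo image.
```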
So your conclusion is totally logical: the delay introduced by the multiple computations on each board is way above the ear's phase tolerance (echo, reverb feeling…).

But more importantly, try using your 2 boards synchronized in multiroom mode (one master, the other slave), each with the same restitution channel (left, for instance) so that you have the exact same signal at the origin, with 2 identical setups (volume, speakers…) separated by a couple of meters and away from the walls. Place yourself in the exact middle of the sound scene. Listen to a piece of sound where you can easily hear the sound composition, for instance a voice. You will notice that the sound scene is floating, doesn't constantly remain in the middle (image in front of you). This is inherent to the impossibility of exactly synchronizing the 2 signals output by each board.

The boards are designed to use a WiFi/network connection. The existing UPnP / TCP/IP / IEEE xxxx protocols target error-free data communication, not exact time sync. In our case, the boards, with relatively slow CPUs, try to minimize the lags in the time chain by handshaking (bidirectional communication: 'you have to play this next chunk of music, starting at time XXX:XX:XX, ending at time XX:XX:XX, once the current one is done', 'have you received it?', 'what are your time sync parameters?' etc.), and try to deal with the time used for the error-correction computation, which by definition varies with the transmission. These 'basic' computation systems hardly manage real-time sync, with poor quartz clocks introducing time shifts fatal to the ear's detection.
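On the quartz-clock point, a quick order-of-magnitude estimate (my own numbers; the ±50 ppm tolerance is a typical consumer-crystal figure, assumed rather than measured on these boards):

```python
# Drift between two free-running sample clocks, in audio samples.
SAMPLE_RATE = 44100      # Hz
TOLERANCE_PPM = 50       # assumed cheap-crystal tolerance, +/- 50 ppm

# Worst case: one clock fast by 50 ppm, the other slow by 50 ppm.
drift_samples_per_s = SAMPLE_RATE * 2 * TOLERANCE_PPM * 1e-6
drift_after_1min_ms = drift_samples_per_s * 60 / SAMPLE_RATE * 1000
print(f"{drift_samples_per_s:.1f} samples/s, "
      f"{drift_after_1min_ms:.0f} ms apart after 1 minute")
```

After a single minute the two boards would already be several milliseconds apart, well past the ear's phase tolerance, which is why the periodic handshaking/resync described above is unavoidable.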
Btw, that's the dream that communication companies try to sell us with 5G, i.e. reducing the time lag to allow real-time synchronization between devices.
And finally, in your SPDIF chain you totally lose the time reference, since it relies on each board's clock, plus the time delay for transmission. There is no information on the absolute start time.
Very professional comment.
Back to the original topic, regarding the delay issue. When sending the audio signal over SPDIF, the delay should be fixed, and so it should be possible to calibrate it out. Although it's not exactly in sync, it could at least be unnoticeable for a normal human, like me. In the new firmware (not finally released yet for DIY boards), we have added a delay setting for the DACX channel, and this could be applied to the DAC0 channel also to calibrate the delay in this project. This delay needs memory to buffer audio frames, and that's an issue for this system as the memory is very limited. So we need a way to configure it.
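The memory cost Frank mentions is easy to quantify (a generic sketch; 16-bit stereo PCM at 44.1 kHz is my assumption, not the firmware's actual frame format):

```python
# RAM needed to buffer audio frames for a configurable output delay.
def delay_buffer_bytes(delay_ms, sample_rate=44100, channels=2,
                       bytes_per_sample=2):
    """Bytes of buffer required to delay the output by delay_ms."""
    return int(sample_rate * delay_ms / 1000 * channels * bytes_per_sample)

# Even a modest 100 ms calibration delay costs ~17 KB of RAM,
# which is significant on a small embedded board.
print(delay_buffer_bytes(100))   # 17640
```

That's why exposing the delay as a configurable setting, rather than reserving a large fixed buffer, makes sense on memory-limited hardware.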
Another issue is the SPDIF OUT signal being affected by the 2 active filters applied on the master device. This could be fixed by moving the filters to the DAC output only, but that might affect other applications. So we also need to consider a way to configure it.
Thanks a lot Frank for your updates related to the firmware; so, finally there might be solutions in the future
Following this thread…
Have a similar idea, but want to use 2x2.1 boards (sub integrated in both L and R speaker).
Some initial thoughts, AND questions;
1:: would love the possibility of connecting an analogue input to one board, and having sound from both speakers… BUT I suspect that the timing issues will make it difficult, and good synchronisation might add too much delay, so for instance connecting it to a TV could be impossible… I probably have to live with a split audio cable feeding a mono signal to each speaker?
2:: when the speakers are grouped in the app, using Bluetooth or Spotify Connect, I guess volume and tone control will work fine from the app… but IS IT POSSIBLE to sync the cards so that the potentiometers on one card would be read by both? Or is this a feature that could be added in future firmware upgrades?
3:: when the speakers are grouped in the app, will source selection and internet radio from the app be synchronized? Will a push on the volume control (source selection) on one card change the input on both? Or is this a feature that could be added in future firmware upgrades?
4:: considering a second-stage enhancement of the system with surround (center and rear) speakers… can the last cards be limited to line input only in ACP Workbench (source selection disabled, not visible in the app)?
On question #1: my understanding is that you want to use only the line input for both boards. In that case you don't actually use the WiFi & Bluetooth inputs, but still pay for them… Correct?
That is why I ask questions 2 and 3 ;o)
I want to use line-in L+R with no (or minimal) delay, to stay in sync with live pictures.
If I can synchronize source selection and the other controls, the Bluetooth and WiFi will be used, but this is not delay-critical, it only needs common timing.
If I cannot make this work, I will need to use a single pre amp board, with wifi/bluetooth, and two pure power amps with DSP, and one cabled audio signal in between.
Question 2 answer: no way currently.
Question 3 answer: yes, with the 4STREAM app, when 2 cards are grouped, you can change the volume of both with a single slider.
But: if your 2 boards are in the same room, there will be phase effects between the left and right speakers that will make the solution (2 boards grouped over WiFi or Bluetooth in multiroom mode) really not usable (I tried).
Frank/Arylic - is this something you would look into in new firmware?
Otherwise I guess a combination of one “Pro v3” and two separate DSPs/amps will be the solution…
I went down a similar road when putting my system together.
I initially used the DSP in the one Arylic board, then used external power amps. I was still stuck with all the faults of the original passive crossovers, so I ditched this and moved to:
I use an Up2Stream HD board, then feed the optical out to a MiniDSP (https://www.minidsp.com/products/minidsp-in-a-box/minidsp-2x4-hd) where I now do the crossover, time alignment and all DSP fixes. This ensures that both channels are in sync which would be a real issue with an Arylic for left and a separate one for right channel. I then feed the analogue audio from the MiniDSP to two separate stereo power amps.
To get the sub to work I feed the analogue audio from the Up2Stream to a stand alone sub which has all the filter, level and phase adjustments onboard already.
Not quite as integrated as your original proposal but the overall sound is fantastic as the Up2StreamHD is so good.
Maybe this would be an OK solution for you.
Thanks Martin! I also have another system very similar to yours, done several years ago: I use a MiniDSP 2x4 HD to feed 2 separate stereo power amps + an active sub. But I use a DAC which is capricious with USB [dis]connections, which made me think about buying an Up2Stream HD DAC. You seem happy about it. Right? I wonder why there is no version in a box…
I am currently sourcing an extruded metal case which I will machine to match the ArylicUp2Stream HD. I also plan to build in the power supply etc so that it is mains powered.
I use ICEpower amps, but be careful as there are many copies around. I also modded the sub to remove the mains transformer… I hate hum! I replaced the transformer with a switching power supply. Much better and no mechanical hum. I also swapped out the main smoothing caps as they were getting a bit old.
I get no audio drop outs with the miniDSP and the HD version supports the HD output of the Arylic.
The only issue I see is that the Arylic often fails to switch to Line in which I use for TV sound. If you turn the volume down abruptly it forces it to switch to line in. Must be a bug.