I have just started using Audacious as the audio player with my QLC+ set-up, and this post documents my experiences so far.
WARNING and caveat: this method uses external third-party software that is not part of QLC+, so you cannot expect any support from the usual QLC+ gurus/devs in using it. I will monitor this thread infrequently to answer any queries. Also note that my set-up includes building QLC+ and Audacious from source and running QLC+ on the Pi in a way that is not currently supported.
Use case: I use QLC+ almost exclusively in a theatre setting, for a community theatre group, mostly for a musical or a music concert. We have limited resources in terms of funds, but also equipment and space. A typical show would have about 20 PARs, 2 moving heads (Chauvet Qspot), some home-made LED pixel strings and strips, a DMX smoke machine and a 4-channel DMX mains dimmer. I set up the scenes and then put them in a chaser, setting the fade and hold times so that it becomes close to a one-button operation. I also do some other funky things like using the moving heads as follow spots and controlling video on another Raspberry Pi via DMX, but that is for another post. On top of all this there is always audio that needs to be played: backing tracks, dance tracks and sound effects. We also provide front-of-house music before the show, during the interval and afterwards.
When QLC+ started using the Qt media library I was quite excited, because I could use that to play my audio. Playing the FOH playlist was not really an option, as that begs for a UI dedicated to adding, sorting and reordering a playlist. For the show audio it mostly worked, except that fading in or out on demand, or starting a piece somewhere in the middle, meant massaging the audio files directly. Then I had problems playing audio on my Raspberry Pi, which I was fairly sure were related to the Qt libraries.
On the other hand, the media player Audacious ticked a lot of boxes for me. It has two killer playlist features: remove silence (which automatically truncates the near-silent bits at the start and end of a song) and crossfade. Together they make it sound as if someone is carefully selecting and mixing the tracks.
Combining Audacious and QLC+ as separate players proved to be a disaster when they were both using the same audio output device on my Linux laptop. The first half of the show had to run without any sound on opening night.
While solving that disaster I was reading up on Audacious and realised that it has a command-line tool, audtool, that allows control of the player. Couple that with the QLC+ scripting feature, where I can send a command-line instruction using systemcommand, and I now have what I think is the perfect solution for my use case. In addition I can write more complicated scripts as external executables and call those as well. Furthermore I can combine different scripts inside QLC+ chasers and collections and do all sorts of complicated things, automagically.
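To give an idea of what those systemcommand calls look like, here are a few audtool one-liners of the sort my buttons fire. The exact command names can differ between Audacious versions (older releases wanted a leading --), so check audtool's own help listing on your system:

audtool playback-playpause   # toggle play/pause on the current track
audtool playlist-advance     # skip to the next track in the playlist
audtool playback-seek 95     # jump to 95 seconds into the current track
audtool set-volume 80        # set the output volume to 80%
audtool shutdown             # quit the player cleanly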
I am attaching a few screenshots that show buttons with pre-programmed scripts/controls, as well as views of some of the scripts and chasers.
Functionality that I implemented with this - all executable from buttons and cue lists:
- Starting and killing the Audacious player.
- Playing the FOH playlist. This will fade in and play the current active song in the playlist. From there onward Audacious plays/mixes the playlist tracks until I stop it.
- Various volume controls, including fading from the current volume to 80% or 90% so there is no sudden jump in volume (the fade script sketched after this list does the work).
- Fade out the FOH playlist, stop the playlist, cue the next song, set the volume back to 90% and turn off crossfade, ready for the first show cue.
- Play the show cues on demand from buttons and/or a cue list.
- Other transport controls like prev/next, seek and play/pause.
- I can also cue a track to start somewhere in the middle, fade it in and out and then stop, and later use the same track again but a different part of it. This approach also gives me the flexibility of starting an audio track and then fading it out on demand, for example when an actor reaches a specific spot.
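For the fades, the external script I call via systemcommand is roughly the sketch below. The name and the step timing are just what I happen to use, and it assumes that audtool get-volume prints a single integer percentage on your version of Audacious:

#!/bin/bash
# fade-volume.sh - step the Audacious volume from wherever it currently is
# to a target level, one percent at a time.
# Usage: fade-volume.sh <target 0-100> [delay-between-steps-in-seconds]

target=${1:?usage: fade-volume.sh <target 0-100> [step-delay]}
delay=${2:-0.1}

current=$(audtool get-volume)   # assumes a single integer is printed

while [ "$current" -ne "$target" ]; do
    if [ "$current" -lt "$target" ]; then
        current=$((current + 1))
    else
        current=$((current - 1))
    fi
    audtool set-volume "$current"
    sleep "$delay"
done

Calling it as "fade-volume.sh 0 0.05" gives a quick fade to silence, while "fade-volume.sh 90" brings the FOH music back up gently.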
I'm running this on a Raspberry Pi 2 with the latest Raspbian (Jessie) and have found it to be rock solid in my tests so far, with the CPU hovering well below 10% and occasionally spiking up to about 25%. I will be testing the full thing, while also using QLC+ to drive lights via the UART, very soon.
I can post my test workspace and some of the external scripts if anyone is interested.
A final note: I have only tested this on Linux, but with a bit of batch-file wizardry I'm pretty sure it could work on Windows as well. At least I know that I can control Audacious from the command-line tool there. You may have to set your PATH variable.